# Reduction and integrability: a geometric perspective

José F. Cariñena¹

¹E-mail: <EMAIL_ADDRESS>

Departamento de Física Teórica, Universidad de Zaragoza, 50009 Zaragoza, Spain

Real Academia de Ciencias Exactas, Físicas, Químicas y Naturales de Zaragoza

###### Abstract

A geometric approach to integrability and reduction of dynamical systems is developed from a modern perspective. The main ingredients in such an analysis are the infinitesimal symmetries and the tensor fields that are invariant under the given dynamics. Particular emphasis is given to the existence of invariant volume forms and the associated Jacobi multiplier theory, and then the Hojman symmetry theory is developed as a complement to the Noether theorem and non-Noether constants of motion. The geometric approach to the Hamilton-Jacobi equation is shown to be a particular example of the search for a related field in a lower-dimensional manifold.

Mathematics Subject Classifications (2010): 34A34, 37N05, 53C15

PACS numbers: 02.30.Hq, 02.40.Yy, 02.40.Hw, 02.40.Ky, 45.10.Na

### Keywords: Reduction; Integrability; Quadratures; Symmetry; First-integrals; Hojman symmetry.

## 1 Introduction

Systems of (possibly partial) differential equations play a relevant rôle in the development of science and technology, and they quite often appear in many different branches of science, ranging from pure mathematics to classical and quantum physics, control theory, economy, biology, etc. For instance, the dynamical evolution of deterministic systems is described by systems of (possibly time-dependent) first-order differential equations. However, the solution of such systems is not an easy task. The study of their integrability is a subject of significant interest that has been an active field of research in recent years, and it appears very often in physics and mathematics with very different meanings. Here by integrability of an autonomous system we mean that we are able to determine the general solution of the system. Such solutions can be sought within a specific family, for instance polynomial or rational functions, or, more generally, via the so-called integrability by quadratures, which means that it is possible to determine the solutions by means of a finite number of algebraic operations, use of the Implicit Function Theorem, and quadratures of some appropriate fundamental functions, including rational, trigonometric and exponential functions, as well as their inverses. To solve specific problems, reduction techniques are used. Students in physics relate the reduction process to infinitesimal symmetries and the existence of constants of motion, mainly in the framework of Lagrangian or Hamiltonian mechanics. However, reduction processes can be developed in a more general framework, allowing therefore other interesting cases also relevant in physics; they need not be related to symmetry properties, but may only aim at finding simpler solvable systems providing, at least partially, information on the solution of the original problem. In other words, given a problem that is difficult to solve, we can look for related simpler problems whose solutions provide at least partial information on the original one. Sometimes we find geometric structures difficult to handle, for instance because they have some kind of degeneracy. We then look for a related structure of the same kind but easier to manage, for instance a non-degenerate one.
The existence of additional compatible geometric structures, like symplectic or Poisson structures, may be useful in the search for solutions, and therefore when the original problem admits some geometric structures the reduction procedure tries to preserve them. There is no systematic way of selecting the related reduced systems, and sometimes there are different possibilities. We generally assume that a system with fewer unknown variables, or of lower-order differential equations, is simpler than another with more unknown variables or of higher-order differential equations, but reduction processes starting from linear systems may produce nonlinear interacting ones. The simplest case is that of an autonomous system of first-order differential equations (the higher-order case can be related to some first-order one) $\dot{x}^{i}=X^{i}(x^{1},\ldots,x^{n})\ ,\qquad i=1,\ldots,n,$ (1.1) while non-autonomous systems correspond to the case in which the functions $X^{i}$ also depend on the independent variable $t$. In order to incorporate holonomic constraints, such a system (1.1) is geometrically interpreted in terms of a vector field $X$ on an $n$-dimensional manifold $M$ with the coordinate expression in a local chart $X=\sum_{i=1}^{n}X^{i}(x^{1},\ldots,x^{n})\frac{\partial}{\partial x^{i}}\ ,$ (1.2) in such a way that its integral curves are the solutions of the given system. In this sense, integrability by quadratures of a vector field means that one can determine its integral curves (i.e. the flow of $X$) by quadratures, i.e. by means of a finite number of algebraic operations and quadratures of some functions. In the general case the flows of vector fields cannot be found in an explicit way using fundamental functions, and we are content if, at least, we can express the solutions in terms of quadratures. The purpose of integrability theory is therefore the characterisation of systems admitting such a type of solutions, and it has been receiving a lot of attention. The answer is always based on the use of appropriate Lie algebras of vector fields containing the given vector field. From the geometric viewpoint, by reduction of the dynamics given by the vector field $X\in\mathfrak{X}(M)$ we mean to find a manifold $N$, a differentiable map $F:M\to N$ and a simpler $F$-related vector field $Y\in\mathfrak{X}(N)$, i.e. such that $TF\circ X=Y\circ F$, and hence the integral curves of $Y$ are images under $F$ of the integral curves of $X$. Sometimes the problem of reconstructing the integral curves of $X\in\mathfrak{X}(M)$ from the integral curves of $Y\in\mathfrak{X}(N)$ remains. The first of the two most important cases is when $i:N\to M$ is an immersed submanifold of $M$ (later on we will study the case in which $F:M\to N$ is a surjective submersion). Note that $X\in\mathfrak{X}(M)$ must be such that $X_{|i(N)}$ is tangent to $i(N)$, and therefore there exists a vector field $\bar{X}$ in the lower-dimensional manifold $N$ that is $i$-related to $X$ and whose integral curves provide us with some integral curves of $X$. We can suppose that the $(n-r)$-dimensional submanifold $N$ is (at least locally) defined by $r$ functions $F_{1},\ldots,F_{r}$, which should be constants of motion for $X$ because of the assumed tangency condition. Then a relevant technique in the reduction process consists in determining $r$ functionally independent constants of motion, because this reduces the original problem to an $r$-parameter family of problems involving only $n-r$ variables.
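As a concrete illustration, the following minimal sketch (using SymPy; the planar system and the projection are ad hoc choices made for this purpose, not taken from the text) checks the $F$-relatedness condition $TF\circ X=Y\circ F$ for the map $F(x,y)=x^{2}+y^{2}$:

```python
import sympy as sp

x, y, u = sp.symbols('x y u')

# X = (x(1 - x^2 - y^2) - y) d/dx + (y(1 - x^2 - y^2) + x) d/dy
X = sp.Matrix([x*(1 - x**2 - y**2) - y,
               y*(1 - x**2 - y**2) + x])

F = x**2 + y**2                  # projection F: R^2 -> N = R
Y = 2*u*(1 - u)                  # candidate reduced field on N, with u = F(x, y)

# F-relatedness: TF(X) = dF(X) must equal Y evaluated at u = F(x, y)
TF_X = (sp.Matrix([F]).jacobian([x, y]) * X)[0]
assert sp.simplify(TF_X - Y.subs(u, F)) == 0
print(sp.simplify(TF_X))         # -> 2*(x**2 + y**2)*(1 - x**2 - y**2)
```

The reduced equation $\dot{u}=2u(1-u)$ on $N=\mathbb{R}$ is solvable by one quadrature, and its solutions describe the evolution of $x^{2}+y^{2}$ along any integral curve of $X$.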
This provides us with foliations such that the vector field $X$ is tangent to the leaves, and the problem is reduced to that of a family of lower-dimensional problems, one on each leaf. More specifically, if $r$ functionally independent constants of motion $F_{1},\ldots,F_{r}$ (i.e. such that $dF_{1}\wedge\cdots\wedge dF_{r}\neq 0$) are known, they allow us to reduce the problem to that of a family of vector fields $\widetilde{X}_{c}$, with $c\in\mathbb{R}^{r}$, defined on the $(n-r)$-dimensional submanifolds $M_{c}$ given by the level sets of the vector-valued function of rank $r$, $(F_{1},\ldots,F_{r}):M\to\mathbb{R}^{r}$. Of course the best situation is when $r=n-1$: the leaves are 1-dimensional, giving us the solutions of the problem, up to a reparametrisation. The second important case is when there exists a manifold $N$ and a surjective submersion $F:M\to N$ such that $X\in\mathfrak{X}(M)$ is a projectable vector field. The map $F$ defines an equivalence relation in $M$, and then $N$ is the space of such equivalence classes. Sometimes the starting point is the equivalence relation, and $F$ is the projection map. For instance, one usually considers a transformation Lie group $G$ of $M$ that is a symmetry group for $X$ and the equivalence relation associated to the action: $N$ is the space of orbits, assumed to be a differentiable manifold, the map $F$ being the natural projection of each element of $M$ onto its orbit. The symmetry condition implies that $X$ is a projectable vector field. A particular example of symmetry is given by the local flow of a vector field $Y$ that is an infinitesimal symmetry of $X$, i.e. a vector field $Y\in\mathfrak{X}(M)$ such that $[Y,X]=0$. Then, in a neighbourhood of a point where $Y$ is different from zero, we can choose adapted coordinates $(y^{1},\ldots,y^{n})$ for which $Y$ is written (Straightening out Theorem) $Y=\partial/\partial{y^{n}}$. Then $[Y,X]=0$ implies that $X$ has the form $X=\bar{X}^{1}(y^{1},\ldots,y^{n-1})\,\frac{\partial}{\partial y^{1}}+\cdots+\bar{X}^{n-1}(y^{1},\ldots,y^{n-1})\,\frac{\partial}{\partial y^{n-1}}+\bar{X}^{n}(y^{1},\ldots,y^{n-1})\frac{\partial}{\partial y^{n}}\ ,$ and its integral curves are obtained by solving the system $\left\{\begin{array}{rcl}{\displaystyle\frac{dy^{i}}{dt}}&=&\bar{X}^{i}(y^{1},\ldots,y^{n-1}),\qquad i=1,\ldots,n-1,\\[3mm]{\displaystyle\frac{dy^{n}}{dt}}&=&\bar{X}^{n}(y^{1},\ldots,y^{n-1}).\end{array}\right.$ We have reduced the problem to a subsystem involving only the first $n-1$ equations, and once this has been solved, the last equation is used to obtain the function $y^{n}(t)$ by means of one quadrature. Note however that the new coordinates $y^{1},\ldots,y^{n-1}$ are local constants of motion for $X$, and therefore we cannot find such coordinates in an easy way in the general case. Note also that the information provided by two different symmetry vector fields, $Y_{1}$ and $Y_{2}$, cannot be used simultaneously in the general case, because it is not possible to find local coordinates $(y^{1},\ldots,y^{n})$ such that $Y_{1}=\partial/\partial{y^{n-1}}$ and $Y_{2}=\partial/\partial{y^{n}}$, unless $[Y_{1},Y_{2}]=0$. In other situations the equivalence relation is defined by a foliation (involutive distribution) and the equivalence classes are the leaves. The set of leaves is assumed to have a manifold structure. If the original system has an invariance group we can consider invariant foliations in such a way that the leaves are preserved.
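For the same planar system used in the sketch above, a hedged SymPy check (the system and the symmetry are illustrative choices, not taken from the text) shows how a commuting symmetry produces adapted coordinates in which one equation decouples:

```python
import sympy as sp

x, y = sp.symbols('x y')
r, th = sp.symbols('r theta', positive=True)

X = sp.Matrix([x*(1 - x**2 - y**2) - y, y*(1 - x**2 - y**2) + x])
Y = sp.Matrix([-y, x])          # rotation field: Y = d/d(theta) in polar coordinates

def bracket(A, B, v):
    """Lie bracket [A, B] of vector fields given by component columns."""
    return B.jacobian(v)*A - A.jacobian(v)*B

assert sp.simplify(bracket(Y, X, [x, y])) == sp.zeros(2, 1)   # [Y, X] = 0

# In the adapted polar coordinates (r, theta):
sub = {x: r*sp.cos(th), y: r*sp.sin(th)}
dr  = sp.simplify((sp.cos(th)*X[0] + sp.sin(th)*X[1]).subs(sub))
dth = sp.simplify(((sp.cos(th)*X[1] - sp.sin(th)*X[0])/r).subs(sub))
print(dr, dth)   # -> r*(1 - r**2), 1: solve for r(t), then one quadrature for theta(t)
```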
Other reduction procedures for systems can be developed when we know some particular solutions of them or of related systems. For instance, one particular example is the Riccati equation, of fundamental importance both in physics (for instance in the factorisation of second-order differential operators, Darboux transformations and, in general, supersymmetric quantum mechanics) and in mathematics. It is well known that if one particular solution of the Riccati equation is known, we can find the general solution by means of two quadratures, while if two particular solutions are known, we can find the general solution by means of just one quadrature, and finally, if three particular solutions are known, we can explicitly write the general solution without any quadrature, by means of a superposition function. Actually there is a class of systems admitting such a superposition function, the so-called Lie systems, which appear very often in many problems in science and engineering (see [1, 2] for a review). In the solution of such non-autonomous systems of first-order differential equations we can use techniques imported from group theory, for instance the Wei–Norman method [3, 4, 5], and reduction techniques coming from the theory of connections.
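The classical reduction just mentioned can be made explicit. The sketch below (SymPy; the coefficients and the particular solution are illustrative choices) uses one known particular solution $x_{1}$ of $\dot{x}=a(t)+b(t)x+c(t)x^{2}$ and the shift $x=x_{1}+1/u$, which turns the Riccati equation into the linear equation $\dot{u}=-(b+2cx_{1})u-c$, solvable by two quadratures:

```python
import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')

# Riccati equation x' = x**2 - 1 (a = -1, b = 0, c = 1), particular solution x1 = 1
x1, b, c = sp.Integer(1), sp.Integer(0), sp.Integer(1)

lin = sp.Eq(u(t).diff(t), -(b + 2*c*x1)*u(t) - c)   # u' = -2u - 1
usol = sp.dsolve(lin, u(t)).rhs                     # = C1*exp(-2t) - 1/2
x = x1 + 1/usol                                     # one-parameter family of solutions
print(sp.simplify(x.diff(t) - (x**2 - 1)))          # -> 0: x solves the Riccati equation
```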
Throughout the paper we shall take advantage of results in previous publications, where more details can be found. Section 2 is devoted to Lie's fundamental result on integrability by quadratures of a system, without recourse to any related invariant tensor field but only to properties of a Lie algebra of vector fields containing the given vector field. In Section 3 the interest of invariant tensor fields is pointed out, and the particular case of invariant (1,1)-tensor fields giving rise to recursion operators, also called symmetry generators, and to Lax equations is analysed. In Section 4, for the sake of completeness, we review some basic aspects of symplectic geometry, and the classical first Noether theorem in both the Hamiltonian and the Lagrangian approaches is summarised. The non-Noether constants of motion are derived in Section 5 from the existence of alternative invariant geometric structures for the dynamical vector field. As we want to add some comments on a different method of finding constants of motion, as a prelude to dealing in Section 7 with the theory of Hojman symmetry, we develop in Section 6 some remarkable points on invariant volume forms and the associated theory of Jacobi multipliers, a relevant ingredient in the integrability of classical mechanical systems. Section 8 is a short summary of how to use the reduction theory to introduce the Hamilton-Jacobi equation, which appears here as a method of finding lower-dimensional systems related to a Hamiltonian one.

## 2 Lie integrability by quadratures

We first summarise the pioneering work of Lie on integrability of some particular types of vector fields, without any recourse to the existence of additional compatible structures, but only using modern tools of algebra and geometry, in particular Lie algebras of symmetry vector fields and, more specifically, solvable Lie algebras [6].

###### Lemma 2.1

If $n$ vector fields $X_{1}$,…,$X_{n}$, which are linearly independent at each point of an open set $U\subset\mathbb{R}^{n}$, generate a solvable Lie algebra and are such that $[X_{1},X_{i}]=\lambda_{i}\,X_{1}$ with $\lambda_{i}\in\mathbb{R}$, then the differential equation $\dot{x}=X_{1}(x)$ is solvable by quadratures in $U$.

Proof.- We only prove here the simplest case $n=2$. Then the derived algebra is 1-dimensional, and therefore the Lie algebra is solvable. The differential equation can be integrated if we are able to find a first-integral $F$, i.e. $X_{1}F=0$, such that $dF\neq 0$ in an open set $U$. In this case, for each real number $c\in\mathbb{R}$, we can implicitly define one variable, for instance $x_{2}$, in terms of the other one by $F(x_{1},\phi(x_{1}))=c$, and the differential equation determining the integral curves of $X_{1}$ becomes one with separated variables, i.e. integrable by quadratures. Note that the condition $[X_{1},X_{2}]=\lambda_{2}\,X_{1}$, with $\lambda_{2}\in\mathbb{R}$, shows that if $F$ is a first-integral for $X_{1}$, then $X_{2}F$ is a first-integral for $X_{1}$ too, because $X_{1}(X_{2}F)=X_{2}(X_{1}F)+\lambda_{2}\,X_{1}F=0$. Note that as $n=2$ there exists a 1-form $\alpha_{0}$, defined up to multiplication by a function, such that $i(X_{1})\alpha_{0}=0$. Obviously, as $X_{2}$ is linearly independent of $X_{1}$ at each point, $i(X_{2})\alpha_{0}\neq 0$. We can see that the 1-form $\alpha=(i(X_{2})\alpha_{0})^{-1}\alpha_{0}$ satisfies the condition $i(X_{2})\alpha=1$, by definition of $\alpha$, and that it is closed, because $X_{1}$ and $X_{2}$ generate $\mathfrak{X}(\mathbb{R}^{2})$ and $d\alpha(X_{1},X_{2})=X_{1}\alpha(X_{2})-X_{2}\alpha(X_{1})-\alpha([X_{1},X_{2}])=-\alpha([X_{1},X_{2}])=-\lambda_{2}\,\alpha(X_{1})=0.$ Therefore there exists, at least locally, a function $F$ such that $\alpha=dF$, and the condition $i(X_{1})\alpha=0$ means that the function $F$ is a first-integral for $X_{1}$. $\Box$

A generalisation of these results was given in [7], where the vector fields $X_{1},\ldots,X_{n}$ are assumed to close a real Lie algebra, i.e. $[X_{i},X_{j}]={\displaystyle\sum_{k=1}^{n}}c_{ij}\,^{k}X_{k}$, with $c_{ij}\,^{k}\in\mathbb{R}$, and it was proved that if such a Lie algebra is solvable and $A$ is an Abelian ideal, then each vector field $X$ in the ideal $A$ is integrable by quadratures; in particular, if the Lie algebra is nilpotent, any vector field of the Lie algebra is integrable by quadratures. This result was extended in [8], where it was proved that solvability of a Lie algebra of vector fields implies their integrability by quadratures.

## 3 Invariant tensor fields and integrability of a vector field

We have indicated how the knowledge of a first-integral, i.e. an invariant function, can be used to reduce the integrability of the vector field to a family of lower-dimensional problems. This is a particular example of a more general situation, the rôle in integrability of tensor fields invariant under a vector field $X$ on an $n$-dimensional manifold $M$ (see e.g. [9]). For instance, invariant vector fields $Y$ give rise to infinitesimal symmetries of $X$, whose flows transform solutions into solutions. Moreover, if $F$ is a first-integral for $X$, then $Y(F)$ is also a first-integral, because $\mathcal{L}_{X}(Y(F))=Y\left(\mathcal{L}_{X}(F)\right)=0$. The same property holds when the vector field $Y$ is not a symmetry of the vector field $X$ but only of the 1-dimensional distribution spanned by $X$, i.e. there exists a function $h\in C^{\infty}(M)$ such that $[Y,X]=h\,X$, because then $\mathcal{L}_{X}(Y(F))=Y\left(\mathcal{L}_{X}(F)\right)-h\,X(F)=0.$ Later on we will see how to find new constants of motion when additional tensor fields enter the game. Of course some invariant multivector fields are also interesting, but particularly the case of Poisson bivector fields is very relevant.
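A small symbolic sketch (SymPy; the 2-dimensional isotropic oscillator is chosen here only for illustration) shows how a symmetry $Y$ of $X$ turns a known first-integral into a new one:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
z = [q1, q2, p1, p2]
X = sp.Matrix([p1, p2, -q1, -q2])     # 2-D isotropic oscillator
Y = sp.Matrix([p1, 0, -q1, 0])        # flow of the first partial energy

def act(V, f):                        # directional derivative V(f)
    return sum(V[i]*sp.diff(f, z[i]) for i in range(4))

def bracket(A, B):
    return B.jacobian(z)*A - A.jacobian(z)*B

assert sp.simplify(bracket(X, Y)) == sp.zeros(4, 1)   # Y is a symmetry of X

F = q1*p2 - q2*p1                     # angular momentum, a first-integral of X
assert sp.simplify(act(X, F)) == 0

G = sp.expand(act(Y, F))              # Y(F) = p1*p2 + q1*q2
assert sp.simplify(act(X, G)) == 0    # ... is a new, independent first-integral
print(G)
```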
Differential forms that are invariant under $X$ give rise to absolute integral invariants, a theory developed by Poincaré in [10]. In particular, invariant 2-forms, as for instance the symplectic forms so relevant in mechanics, play an important rôle in the study of the integrability of $X$, and Arnold-Liouville integrability is based on the existence of an appropriate invariant symplectic form. If we have a $(1,1)$-tensor field $\mathcal{R}$ invariant under $X$, it may be used as a generator of symmetries, in the sense that if $Y$ is a symmetry of $X$, then $\mathcal{L}_{X}\mathcal{R}=0$ implies that $\mathcal{L}_{X}(\mathcal{R}(Y))=0$. Furthermore, if we choose a local basis of vector fields $\{X_{1},\ldots,X_{n}\}$, and if the (1,1)-tensor field $\mathcal{R}$ is invariant under $X\in\mathfrak{X}(M)$ and the matrices $A$ and $B$ are the representatives of $\mathcal{R}$ and $\mathcal{L}_{X}$ in such a basis, respectively, then the invariance condition $\mathcal{L}_{X}\mathcal{R}=0$ is written as $\dot{A}=[B,A],$ (3.1) where $\dot{A}_{i}\,^{j}$ denotes $X(A_{i}\,^{j})$. Indeed, if $\mathcal{R}(X_{i})={\displaystyle\sum_{j=1}^{n}}A_{i}\,^{j}X_{j}$ and $\mathcal{L}_{X}X_{i}={\displaystyle\sum_{j=1}^{n}}B_{i}\,^{j}X_{j}$, we have $\mathcal{L}_{X}[\mathcal{R}(X_{i})]={\displaystyle\sum_{j=1}^{n}}\mathcal{L}_{X}[A_{i}\,^{j}X_{j}]={\displaystyle\sum_{j=1}^{n}}\mathcal{L}_{X}(A_{i}\,^{j})X_{j}+{\displaystyle\sum_{j,k=1}^{n}}A_{i}\,^{j}B_{j}\,^{k}X_{k},$ and $\mathcal{R}(\mathcal{L}_{X}X_{i})={\displaystyle\sum_{j,k=1}^{n}}B_{i}\,^{j}A_{j}\,^{k}X_{k};$ then, taking into account that the $X$-invariance of $\mathcal{R}$ is equivalent to $\mathcal{L}_{X}[\mathcal{R}(X_{i})]=\mathcal{R}[\mathcal{L}_{X}(X_{i})]$, we find that $\mathcal{L}_{X}\mathcal{R}=0$ is equivalent to $\mathcal{L}_{X}(A_{i}\,^{j})+{\displaystyle\sum_{k=1}^{n}}\ A_{i}\,^{k}B_{k}\,^{j}={\displaystyle\sum_{k=1}^{n}}B_{i}\,^{k}A_{k}\,^{j}$, and therefore to the matrix equation (3.1), which is called a Lax equation [11, 12, 13]. Note also that, as the matrix representative of $\mathcal{R}^{2}$ is $A^{2}$ and $\mathcal{L}_{X}\mathcal{R}=0$ implies that $\mathcal{L}_{X}\mathcal{R}^{k}=0$, with $k\in\mathbb{N}$, we also have $\dot{A^{k}}=[B,A^{k}],\quad k=1,\ldots,n.$ (3.2) The importance of these equations is that, as for any pair of matrices the trace of the commutator is zero, we have that ${\displaystyle\frac{d}{dt}\textrm{Tr}\,A^{k}=0}$, and consequently a $(1,1)$-tensor field $\mathcal{R}$ invariant under $X$ provides us with $n$ constants of motion, in principle not all of them functionally independent, as indicated by the Cayley-Hamilton theorem. Another remark is that, as the coefficients of the characteristic equation $\det(A-\lambda I)=0$ can be reconstructed from the traces of powers of $A$ (Le Verrier's method of determining the characteristic equation of a matrix, see e.g. [14]), the roots of such a characteristic equation, the eigenvalues of $A$, are constants of motion. The search for these invariant $(1,1)$-tensor fields $\mathcal{R}$ is not an easy task; they usually come from the existence of alternative structures [15, 16, 17], the associated constants of motion being called non-Noether constants of motion (see e.g. [15, 18, 19]), but they may have a different origin [12, 20]. We will come back to this point in Section 5.
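A minimal numerical sketch (NumPy/SciPy; the matrices are arbitrary illustrative choices) of the isospectral character of the Lax equation $\dot{A}=[B,A]$: the traces of the powers of $A$, and hence its eigenvalues, are preserved along the flow.

```python
import numpy as np
from scipy.integrate import solve_ivp

B  = np.array([[0.0, 1.0], [-1.0, 0.0]])     # fixed generator
A0 = np.array([[2.0, 1.0], [1.0, -1.0]])     # initial matrix of the tensor R

def lax(t, a):
    A = a.reshape(2, 2)
    return (B @ A - A @ B).ravel()           # dA/dt = [B, A]

sol = solve_ivp(lax, (0.0, 5.0), A0.ravel(), rtol=1e-10, atol=1e-12)
A_T = sol.y[:, -1].reshape(2, 2)

for k in (1, 2):                             # Tr A^k are constants of motion
    print(k, np.trace(np.linalg.matrix_power(A0, k)),
             np.trace(np.linalg.matrix_power(A_T, k)))
print(np.sort(np.linalg.eigvals(A0)), np.sort(np.linalg.eigvals(A_T)))
```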
Another interesting case is when there exists a volume form invariant under $X$. This case, and an alternative method to find first-integrals associated to symmetries of the 1-dimensional distribution spanned by the vector field $X$, will be analysed in Sections 6 and 7.

## 4 Invariant symplectic structures

We recall that $(M,\omega)$ is a symplectic manifold if $M$ is a finite-dimensional differentiable manifold and $\omega$ is a nondegenerate 2-form which satisfies $d\omega=0$ (that is, $\omega\in Z^{2}(M)$). The dimension of $M$ is necessarily even, $\dim M=2n$. For general results, reference textbooks are, for instance, [21] and [22]. By nondegeneracy of $\omega$ we mean that for every point $u\in M$ the map $\widehat{\omega}_{u}:T_{u}M\to T_{u}^{*}M$, given by $\langle\widehat{\omega}_{u}(v),v^{\prime}\rangle=\omega_{u}(v,v^{\prime})$ with $v,v^{\prime}\in T_{u}M$, has maximal rank, i.e. $(\omega)^{\wedge n}\neq 0$. Such a map $\widehat{\omega}$ is a base-preserving fibered map, and hence it induces a mapping between the linear spaces of sections which, with a slight abuse of notation, we will also write $\widehat{\omega}:\mathfrak{X}(M)\to\bigwedge^{1}(M)$. The following well-known result completely characterises these symplectic manifolds from the local point of view:

###### Theorem 4.1 (Darboux)

Around each point $u\in M$ with $\dim M=2n$ there is a local chart $(U,\phi)$ such that if $\phi=(q^{1},\dots,q^{n};p_{1},\dots,p_{n})$, then $\omega|_{U}={\displaystyle\sum_{i=1}^{n}}dq^{i}\wedge dp_{i}$. $\Box$

Such coordinates are called Darboux coordinates. Since closed, and in particular exact, 1-forms are distinguished elements in $\bigwedge^{1}(M)$, the corresponding vector fields are called locally Hamiltonian vector fields and Hamiltonian vector fields, respectively. If $H\in C^{\infty}(M)$, the Hamiltonian vector field $X_{H}$ associated with the Hamiltonian $H$ is the unique vector field satisfying $\widehat{\omega}(X_{H})=i(X_{H})\omega=dH$. The set of Hamiltonian vector fields will be denoted $\mathfrak{X}_{{\rm H}}(M,\omega)$ and that of locally Hamiltonian vector fields $\mathfrak{X}_{{\rm LH}}(M,\omega)$, i.e. $\mathfrak{X}_{{\rm LH}}(M,\omega)=\widehat{\omega}^{-1}(Z^{1}(M))$ and $\mathfrak{X}_{{\rm H}}(M,\omega)=\widehat{\omega}^{-1}(B^{1}(M))$. Observe that $\widehat{\omega}^{-1}$ is an isomorphism of real vector spaces. The Cartan homotopy identity, i.e. $\mathcal{L}_{X}\alpha=i(X)d\alpha+d(i(X)\alpha)$, for any $\alpha\in\bigwedge^{p}(M)$, shows that, given $X\in\mathfrak{X}(M)$, $\mathcal{L}_{X}\omega=0$ if and only if $i(X)\omega$ is a closed 1-form, i.e. $X\in\mathfrak{X}_{{\rm LH}}(M,\omega)$. In particular, $\mathcal{L}_{X_{H}}\omega=0$. In Darboux coordinates the Hamiltonian vector field $X_{H}$ corresponding to the function $H$ is given by $X_{H}=\sum_{i=1}^{n}\left(\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\frac{\partial H}{\partial q^{i}}\frac{\partial}{\partial p_{i}}\right)\,,$ and therefore the equations determining its integral curves are just the Hamilton equations.
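As a quick sketch (SymPy; the Hamiltonian is an illustrative choice), the components of $X_{H}$ in Darboux coordinates, and the invariance $\mathcal{L}_{X_{H}}\omega=0$, which in one degree of freedom reduces to a divergence condition:

```python
import sympy as sp

q, p = sp.symbols('q p')
H = (p**2 + q**2)/2                       # harmonic oscillator

# X_H = (dH/dp) d/dq - (dH/dq) d/dp, from i(X_H) omega = dH
XH = sp.Matrix([sp.diff(H, p), -sp.diff(H, q)])
print(XH.T)                               # -> [p, -q]: qdot = p, pdot = -q

# L_{X_H} omega = 0: in these coordinates, the divergence of X_H vanishes
assert sp.simplify(sp.diff(XH[0], q) + sp.diff(XH[1], p)) == 0
```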
Remark that if $X,Y\in\mathfrak{X}_{{\rm LH}}(M,\omega)$, the commutator $[X,Y]$ is a Hamiltonian vector field, with Hamiltonian $\omega(Y,X)$, because from the relation $i(X)\mathcal{L}_{Y}\alpha-\mathcal{L}_{Y}i(X)\alpha=i([X,Y])\alpha$, valid for any form $\alpha$, we obtain $i([X,Y])\omega=i(X)\mathcal{L}_{Y}\omega-\mathcal{L}_{Y}i(X)\omega=-\mathcal{L}_{Y}i(X)\omega=-i(Y)d(i(X)\omega)-d(i(Y)i(X)\omega)\,,$ and then $i([X,Y])\omega=-d(\omega(X,Y))\,.$ (4.1) Consequently the set $\mathfrak{X}_{{\rm LH}}(M,\omega)$ is a Lie algebra and $\mathfrak{X}_{{\rm H}}(M,\omega)$ is an ideal in $\mathfrak{X}_{{\rm LH}}(M,\omega)$. As an important property, if $(M,\omega)$ is a symplectic manifold of dimension $2n$, we define the Poisson bracket of two functions $F,G\in C^{\infty}(M)$ as the function $\{F,G\}$ given by $\{F,G\}=\omega(X_{F},X_{G})=dF(X_{G})=-dG(X_{F})\ .$ In Darboux coordinates for $\omega$ the expression of $\{F,G\}$ is the usual one: $\{F,G\}=\sum_{i=1}^{n}\left(\frac{\partial F}{\partial q^{i}}\frac{\partial G}{\partial p_{i}}-\frac{\partial F}{\partial p_{i}}\frac{\partial G}{\partial q^{i}}\right).$ The aforementioned property (4.1) means that if $F,G\in C^{\infty}(M)$, then we have that $d\{F,G\}=-i([X_{F},X_{G}])\omega$, i.e. $[X_{F},X_{G}]=X_{\{G,F\}}$. This Poisson bracket $\{\cdot,\cdot\}$ is a skewsymmetric $\mathbb{R}$-bilinear map on $C^{\infty}(M)$ satisfying the Jacobi identity, as a consequence of $\omega$ being closed. In fact, if $F,G,H\in C^{\infty}(M)$, $(d\omega)(X_{F},X_{G},X_{H})=X_{F}\omega(X_{G},X_{H})-X_{G}\omega(X_{F},X_{H})+X_{H}\omega(X_{F},X_{G})-\omega([X_{F},X_{G}],X_{H})+\omega([X_{F},X_{H}],X_{G})-\omega([X_{G},X_{H}],X_{F})\ ,$ and taking into account that $X_{F}\omega(X_{G},X_{H})=X_{F}\{G,H\}=\{\{G,H\},F\}$ and that $\omega([X_{F},X_{G}],X_{H})$ can also be rewritten as $\omega([X_{F},X_{G}],X_{H})=\omega(X_{\{G,F\}},X_{H})=\{\{G,F\},H\}$, as well as the corresponding expressions for each cyclic reordering, we find that $(d\omega)(X_{F},X_{G},X_{H})=2[\{\{G,H\},F\}+\{\{H,F\},G\}+\{\{F,G\},H\}]\ .$ Finally, we recall that at each point of a neighbourhood of every point of $M$ the values of the set of Hamiltonian vector fields span the tangent space, and therefore $d\omega=0$ if and only if the Jacobi identity holds. Hence, the Poisson bracket endows $C^{\infty}(M)$ with a real Lie algebra structure, the Jacobi identity being a consequence of the closedness of $\omega$, and the above-mentioned property shows that $-\widehat{\omega}^{-1}\circ d:(C^{\infty}(M),\{\cdot,\cdot\})\to\mathfrak{X}_{{\rm H}}(M,\omega)$ is a Lie algebra homomorphism. Given a Hamiltonian system $(M,\omega,H)$, one usually looks for vector fields whose flows are symplectomorphisms and that are also symmetries of $H$ and, therefore, symmetries of $X_{H}$. Then for each $F\in C^{\infty}(M)$ the relation $X_{H}F=\{F,H\}=-X_{F}H$ shows that $X_{F}$ is a symmetry of $H$ if and only if $F$ is a constant of motion (a result usually known as Noether's theorem). It is to be remarked that in the case of a Hamiltonian dynamical system the constants of motion $F$ play a double rôle in the reduction process, either as constants of motion or as generating infinitesimal symmetries $X_{F}$, because if $F$ is a constant of the motion, $\{F,H\}=0$, then the Hamiltonian vector field $X_{F}$ is a symmetry of $H$.
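A small symbolic check of the bracket structure just described (SymPy; the Hamiltonian and the sample functions are illustrative choices): the canonical Poisson bracket satisfies the Jacobi identity, and the angular momentum of the 2-dimensional isotropic oscillator is a constant of motion.

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
pairs = [(q1, p1), (q2, p2)]

def pb(F, G):
    """Canonical Poisson bracket in Darboux coordinates."""
    return sum(sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)
               for q, p in pairs)

H = (p1**2 + p2**2 + q1**2 + q2**2)/2
F = q1*p2 - q2*p1
assert sp.simplify(pb(F, H)) == 0     # F is a constant of motion: X_F is a symmetry of H

# Jacobi identity for sample functions, a consequence of d(omega) = 0
A, B, C = q1**2*p2, sp.sin(q2) + p1, q1*p1*p2
assert sp.simplify(pb(pb(A, B), C) + pb(pb(B, C), A) + pb(pb(C, A), B)) == 0
```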
Moreover, both $X_{H}$ and $X_{F}$ are tangent to the level sets of $F$, because $X_{H}F=\{F,H\}=0$ and $X_{F}F=\{F,F\}=0$. The restriction of $X_{F}$ to each leaf of the foliation $\mathcal{F}_{F}$ whose leaves are the level sets of $F$ can be used to determine adapted coordinates in such a way that the problem of determining the integral curves of $X_{H}$ is reduced not just by one but by two degrees of freedom. In order to be able to use simultaneously the information given by different constants of motion, $F_{1},\ldots,F_{r}$, it is sufficient that $\{F_{i},F_{j}\}=0,\ \forall i,j=1,\ldots,r$, because $[X_{F_{i}},X_{F_{j}}]=X_{\{F_{j},F_{i}\}}$. If the condition is satisfied, then $[X_{F_{i}},X_{F_{j}}]=0$ for any pair of indices $i$ and $j$, and we can find adapted coordinates such that $X_{F_{i}}=\partial/\partial y_{i}$, for $i=1,\ldots,r$. The condition is not necessary because if, for instance, $\{F_{i},F_{j}\}$ is constant for each pair of indices, then it is also true that $[X_{F_{i}},X_{F_{j}}]=0$. The simplest case is that of Hamiltonian systems in a space of dimension $N=2\,n$ for which such $n$ constants of motion, $F_{1},\ldots,F_{n}$, in involution are known: they are called completely integrable systems. We are now ready to recall the notion of Liouville-Arnold integrability:

###### Definition 4.1

The Hamiltonian dynamical system $(M,\omega,H)$, with $\,\dim M=2n$, is said to be completely integrable if there exists a set of $n$ functions $\{F_{j}\mid j=1,\ldots,n\}$, where $F_{j}\in C^{\infty}(M)$, with $F_{1}=H$, which are constants of the motion, i.e. $\frac{dF_{k}}{dt}=\mathcal{L}_{X_{H}}F_{k}=\{F_{k},H\}=0,\qquad\forall k=2,\ldots,n,$ that are functionally independent, i.e. $dF_{1}\land\cdots\land dF_{n}\neq 0$, and pairwise in involution, i.e. $\{F_{k},F_{j}\}=0,\qquad\forall j,k=1,\ldots,n.$

A completely integrable system that admits more than $n$ functionally independent constants of motion is said to be superintegrable, and when the number is the maximum one, $2n-1$, the system is called maximally superintegrable [23]. Many such systems can be found in the physics literature (see e.g. the recent papers [24, 25] and references therein). The two main examples of Hamiltonian dynamical systems are those of Hamiltonian systems on the cotangent bundle $T^{*}Q$ of a manifold $Q$ and those defined by regular Lagrangians on the tangent bundle $TQ$. In fact, the cotangent bundle $\pi:T^{*}Q\to Q$ is endowed with a canonical 1-form $\theta\in\bigwedge^{1}(T^{*}Q)$ defined by $\theta_{\alpha}=\pi^{*}_{\alpha}\alpha$, for every covector $\alpha$ on $Q$. Then $\omega=-d\theta$ is a symplectic form on $T^{*}Q$. It is to be remarked that if $\alpha$ is a 1-form on $Q$, then, as $\alpha^{*}\theta=\alpha$, we have that $\alpha^{*}\omega=\alpha^{*}(-d\theta)=-d(\alpha^{*}\theta)=-d\alpha$. The geometric framework for the study of Lagrangian mechanics is that of tangent bundles [26, 27, 28]. The tangent bundle $\tau:TQ\to Q$ is characterised by two geometric tensors: the vertical endomorphism $S$, a (1,1)-tensor field on $TQ$, also called tangent structure, which satisfies $\text{Im}\,S=\ker S$ and an integrability condition, the vanishing of the Nijenhuis tensor, $N_{S}=0$; and the Liouville vector field $\Delta$, generating dilations along the fibres of $TQ$ [29].
If $(U,\varphi)$ is a local chart on $Q$, ${\rm pr}^{i}:\mathbb{R}^{n}\to\mathbb{R}$ are the natural projections onto the $i$-th factor, and $q^{i}={\rm pr}^{i}\circ\varphi$, we define the coordinate system $(U,q^{1},\ldots,q^{n})$ on $Q$; the corresponding chart $(\mathcal{U},\varphi,\varphi_{*})$ on $\mathcal{U}=\tau^{-1}(U)$ defines a coordinate system $(q^{1},\ldots,q^{n},v^{1},\ldots,v^{n})$ on the open set $\mathcal{U}$ of $TQ$. Correspondingly, we consider the coordinate basis of $\mathfrak{X}(U)$, usually denoted $\{\partial/\partial q^{j}\mid j=1,\ldots,n\}$, and its dual basis for $\bigwedge^{1}(U)$, $\{dq^{j}\mid j=1,\ldots,n\}$. Then a vector $v$ at a point $q\in U$ is $v={\displaystyle\sum_{j=1}^{n}}v^{j}\,(\partial/\partial q^{j})_{|q}$, i.e. $v^{i}=dq^{i}(v)$, while a covector $\alpha$ at a point $q\in U$ is $\alpha={\displaystyle\sum_{j=1}^{n}}p_{j}\,dq^{j}\,_{|q}$, with $p_{i}=\alpha((\partial/\partial q^{i})_{|q})$. With this notation, the coordinate expressions of the vertical endomorphism $S$ and the Liouville vector field $\Delta$ are [26, 27]: $S(q,v)=\sum_{i=1}^{n}\frac{\partial}{\partial v^{i}}\otimes dq^{i},\qquad\Delta(q,v)=\sum_{i=1}^{n}v^{i}\frac{\partial}{\partial v^{i}}.$ (4.2) Similarly, we can introduce a coordinate system $(q^{1},\ldots,q^{n},p_{1},\ldots,p_{n})$ on the open set $\bar{\mathcal{U}}=\pi^{-1}(U)$ of the cotangent bundle $\pi:T^{*}Q\to Q$, and the coordinate expression of the 1-form $\theta$ is $\theta(q,p)=\sum_{i=1}^{n}p_{i}\,dq^{i},$ (4.3) which shows that $\omega=-d\theta$ is a canonical symplectic structure on $T^{*}Q$. The vector field $\Delta$ and the tensor field $S$ can be used to select a special kind of vector fields, those whose integral curves are given by lifting solutions of a system of second-order differential equations. These vector fields, called second-order differential equation vector fields $D\in\mathfrak{X}(TQ)$ (hereafter shortened as SODE vector fields), are characterised by $S(D)=\Delta$. Recall also that given a Lagrangian $L\in C^{\infty}(TQ)$ we can define a 1-form $\theta_{L}=dL\circ S$ and the exact 2-form $\omega_{L}=-d\theta_{L}$. When $\omega_{L}$ is regular the Lagrangian $L$ is said to be regular, and then the dynamics is given by the uniquely defined SODE vector field $\Gamma_{L}$ such that $i(\Gamma_{L})\omega_{L}=dE_{L}\Longleftrightarrow\mathcal{L}_{\Gamma_{L}}\theta_{L}-dL=0,$ (4.4) where the energy function $E_{L}\in C^{\infty}(TQ)$ is defined by $E_{L}=\Delta(L)-L$. Moreover, this implies that $\mathcal{L}_{\Gamma_{L}}\omega_{L}=d(i(\Gamma_{L})\omega_{L})=0$. In the usual local tangent bundle coordinates the expressions of $\theta_{L}$ and $E_{L}$ are $\theta_{L}=\sum_{i=1}^{n}\frac{\partial L}{\partial v^{i}}\,dq^{i},\qquad E_{L}=\sum_{i=1}^{n}v^{i}\frac{\partial L}{\partial v^{i}}-L,$ (4.5) while that of $\omega_{L}$ is $\omega_{L}=\sum_{i,j=1}^{n}\frac{\partial^{2}L}{\partial{q}^{j}\partial v^{i}}d{q}^{i}\wedge dq^{j}+\sum_{j,k=1}^{n}\frac{\partial^{2}L}{\partial v^{k}\partial v^{j}}dq^{j}\wedge dv^{k}.$ (4.6) It may be of interest to know under what conditions two regular Lagrange functions $L,L^{\prime}\in C^{\infty}(TQ)$ produce the same symplectic structure and the same energy function, i.e. $L_{0}=L-L^{\prime}$ is such that $\omega_{L_{0}}\equiv 0$ and $E_{L_{0}}=0$.
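Before answering this, a minimal sketch (SymPy; the Lagrangian, with one degree of freedom, is an illustrative choice) computes the objects just introduced and checks the conservation of the energy along the dynamics:

```python
import sympy as sp

q, v = sp.symbols('q v')
L = (v**2 - q**2)/2                        # harmonic oscillator Lagrangian

theta_L = sp.diff(L, v)                    # theta_L = (dL/dv) dq
E_L = sp.simplify(v*sp.diff(L, v) - L)     # E_L = Delta(L) - L
W = sp.diff(L, v, 2)                       # Hessian d^2L/dv^2, regular iff W != 0

# SODE form Gamma_L = v d/dq + F d/dv, with F from the Euler-Lagrange equation
F = sp.simplify((sp.diff(L, q) - v*sp.diff(L, q, v))/W)
print(theta_L, E_L, F)                     # -> v, q**2/2 + v**2/2, -q

# Gamma_L(E_L) = 0: the energy is a constant of motion
assert sp.simplify(v*sp.diff(E_L, q) + F*sp.diff(E_L, v)) == 0
```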
Now, if $\alpha$ is a 1-form on $Q$, $\alpha\in\bigwedge^{1}(Q)$, then $\widehat{\alpha}$ denotes the function $\widehat{\alpha}\in C^{\infty}(TQ)$ defined by $\widehat{\alpha}(u)=\alpha_{\tau(u)}(u)$. If the local coordinate expression of a 1-form $\alpha$ is $\alpha={\displaystyle\sum_{i=1}^{n}}\alpha_{i}(q)\,dq^{i}$, then $\widehat{\alpha}(q,v)={\displaystyle\sum_{i=1}^{n}}\alpha_{i}(q)\,v^{i}$. Consequently $\Delta\widehat{\alpha}=\widehat{\alpha}$, because, by definition, $\widehat{\alpha}$ is a homogeneous function of degree one. On the other hand, as $(d\widehat{\alpha}\circ S)\left(\frac{\partial}{\partial q^{i}}\right)=\alpha_{i}(q),\qquad(d\widehat{\alpha}\circ S)\left(\frac{\partial}{\partial v^{i}}\right)=0,$ we see that $d\widehat{\alpha}\circ S=\tau^{*}\alpha$. Then, a function $L_{0}\in C^{\infty}(TQ)$ is such that $\omega_{L_{0}}\equiv 0$ if and only if there exist a closed 1-form $\alpha\in Z^{1}(Q)$ and a function $h\in C^{\infty}(Q)$ such that $L_{0}=\widehat{\alpha}+h\circ\tau$ (see e.g. [30]). But as for such a function $L_{0}$ we have that $E_{L_{0}}=-h\circ\tau$, we see that $L,L^{\prime}\in C^{\infty}(TQ)$ produce the same Hamiltonian dynamical system, i.e. $\omega_{L}=\omega_{L^{\prime}}$ and $E_{L}=E_{L^{\prime}}$, if and only if there exists a closed 1-form $\alpha\in Z^{1}(Q)$ such that $L^{\prime}=L+\widehat{\alpha}$; both Lagrangians are then said to be gauge equivalent. It is also possible to characterise a complete lift vector field $X^{c}\in\mathfrak{X}(TQ)$ that is a symmetry of the Hamiltonian dynamical system $(TQ,\omega_{L},E_{L})$ in terms of symmetries of $L$ itself. Recall that if $X\in\mathfrak{X}(Q)$, its complete lift, denoted $X^{c}$, is the vector field on $TQ$ whose flow is given by $\phi_{t*}$, where $\phi_{t}$ is the local flow of the vector field $X$. If the local coordinate expression of the vector field $X$ in a chart of $Q$ is $X(q)={\displaystyle\sum_{i=1}^{n}}X^{i}(q)\,\partial/\partial q^{i}$, then the corresponding expression for $X^{c}$ in the associated chart of $TQ$ is $X^{c}(q,v)={\displaystyle\sum_{i=1}^{n}}X^{i}(q)\,\partial/\partial q^{i}+{\displaystyle\sum_{i,j=1}^{n}}(\partial X^{i}/\partial q^{j})\,v^{j}\,\partial/\partial v^{i}$. It is also to be remarked that if $\Phi\in{\rm Diff}(TQ)$ is the lift of a diffeomorphism $\varphi$ of the manifold $Q$, then $\Phi^{*}\theta_{L}=\theta_{\Phi^{*}L}$ as well as $\Phi^{*}E_{L}=E_{\Phi^{*}L}$, as a consequence of $\Phi_{*}\Delta=\Delta$ and $[\Phi_{*},S]=0$, because $\Phi^{*}\theta_{L}=\Phi^{*}(dL\circ S)=dL\circ S\circ\Phi_{*}=dL\circ\Phi_{*}\circ S=d(L\circ\Phi)\circ S=\theta_{\Phi^{*}L},$ while, as $\Delta$ is $\Phi$-related with itself, $(\Delta L)\circ\Phi=\Delta(L\circ\Phi)$, we have $\Phi^{*}E_{L}=\Phi^{*}(\Delta L)-\Phi^{*}L=\Delta(\Phi^{*}L)-\Phi^{*}L=E_{\Phi^{*}L}.$ At the infinitesimal level this means that if $X^{c}\in\mathfrak{X}(TQ)$ is the complete lift of $X\in\mathfrak{X}(Q)$, we have that $\mathcal{L}_{X^{c}}\theta_{L}=\theta_{X^{c}L},\qquad X^{c}E_{L}=E_{X^{c}L}.$ All these properties can be used to derive Noether's theorem in the Lagrangian formalism:

###### Theorem 4.2 (Noether)

If for the vector field $X\in\mathfrak{X}(Q)$ there exists a function $h\in C^{\infty}(Q)$ such that $X^{c}L=\widehat{dh}$, then the function $f=i(X^{c})\theta_{L}-\tau^{*}h$ is a constant of motion.
Proof.- First, the vector field $X^{c}$ is Hamiltonian, because $\mathcal{L}_{X^{c}}\theta_{L}=\theta_{X^{c}L}=\theta_{\widehat{dh}}=\tau^{*}(dh)=d(\tau^{*}h),$ and then $i(X^{c})d\theta_{L}+d(i(X^{c})\theta_{L})=d(\tau^{*}h)\Longleftrightarrow i(X^{c})\omega_{L}=d(i(X^{c})\theta_{L}-\tau^{*}h)=df.$ Moreover, $X^{c}E_{L}=E_{X^{c}L}=E_{\widehat{dh}}=0$, and then, if $\Gamma_{L}$ is the dynamical vector field determined by $L$, i.e. $i(\Gamma_{L})\omega_{L}=dE_{L}$, we have $\Gamma_{L}f=\{f,E_{L}\}=-\{E_{L},f\}=-X^{c}E_{L}=0.$ $\Box$

Particularly interesting cases are the geodesic Lagrangians, defined through a Riemannian structure $g$ by $L=\frac{1}{2}g(v,v)$ (see e.g. [31]), and the so-called natural Lagrangians, defined by means of a Riemannian structure $g$ and a function $V$ on $Q$ as follows: $L=\frac{1}{2}g(v,v)-V(q)$.

## 5 Non-Noether constants of motion

Given a locally Hamiltonian vector field $X$ on a $2n$-dimensional symplectic manifold $(M,\omega)$, if $\omega^{\prime}$ is another closed 2-form invariant under $X$, then we can consider the pencil of $X$-invariant closed 2-forms $\{\omega^{\prime}-\lambda\,\omega\mid\lambda\in\mathbb{R}\}$, which defines a function $f:\mathbb{R}\times M\to\mathbb{R}$ such that $(\omega^{\prime}-\lambda\,\omega)^{\wedge n}=f\,(\omega)^{\wedge n}$, called the characteristic function of the pencil, and which, by construction, is a constant of the motion for each value of $\lambda$, because from $\mathcal{L}_{X}(\omega^{\prime}-\lambda\,\omega)^{\wedge n}=0$ we find $\mathcal{L}_{X}f=0$. Moreover, the function $f$ is a polynomial function in $\lambda$ whose coefficients are constants of the motion. Note also that $\widehat{\omega}^{-1}\circ\widehat{\omega}^{\prime}:\mathfrak{X}(M)\to\mathfrak{X}(M)$ is $C^{\infty}(M)$-linear and then defines a $(1,1)$-tensor field that is $X$-invariant; the characteristic polynomial of this composition coincides with the characteristic function of the pencil, and therefore the mentioned constants of the motion for $X$ coincide with those associated to the invariant $(1,1)$-tensor field $\mathcal{R}=\widehat{\omega}^{-1}\circ\widehat{\omega}^{\prime}$, and hence they are related to the traces of the different powers of $\mathcal{R}$. The point is how to find such an $X$-invariant 2-form $\omega^{\prime}$. We mention now two particular cases. First, if a vector field $Y$ is such that $[Y,X]=0$ but it is not a locally Hamiltonian vector field, then $\omega^{\prime}=\mathcal{L}_{Y}(\omega)$ is an $X$-invariant closed 2-form, because $\mathcal{L}_{X}(\mathcal{L}_{Y}(\omega))=\mathcal{L}_{Y}(\mathcal{L}_{X}(\omega))=0$. On the other hand, if $\phi\in{\rm Diff}(M)$ is a noncanonical symmetry of the Hamiltonian vector field defined by the Hamiltonian dynamical system $(M,\omega,H)$, from the relation $i(\phi_{*}(X_{H}))((\phi^{-1})^{*}(\omega))=d((\phi^{-1})^{*}H)$ we see that if $\phi_{*}(X_{H})=X_{H}$, then $i(X_{H})((\phi^{-1})^{*}(\omega))=d((\phi^{-1})^{*}H)$, and therefore $\mathcal{L}_{X_{H}}((\phi^{-1})^{*}(\omega))=0$, and equivalently $\mathcal{L}_{X_{H}}(\phi^{*}(\omega))=0$. As $\phi$ was assumed to be noncanonical, $\omega^{\prime}=\phi^{*}(\omega)\neq\omega$, and we can choose such an $X$-invariant closed 2-form $\omega^{\prime}$ to define, together with $\omega$, the pencil.
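As a hedged illustration of the whole construction (SymPy; the 2-dimensional isotropic oscillator and the alternative 2-form are choices made for this sketch), take $H=H_{1}+H_{2}$ with $H_{i}=(p_{i}^{2}+q_{i}^{2})/2$ and $\omega^{\prime}=H_{1}\,dq^{1}\wedge dp_{1}+H_{2}\,dq^{2}\wedge dp_{2}$, which is closed and $X_{H}$-invariant; the traces of the powers of $\mathcal{R}=\widehat{\omega}^{-1}\circ\widehat{\omega}^{\prime}$ then recover the two partial energies as non-Noether constants of motion:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
z = (q1, q2, p1, p2)
H1, H2 = (p1**2 + q1**2)/2, (p2**2 + q2**2)/2

J  = sp.Matrix([[0,0,1,0],[0,0,0,1],[-1,0,0,0],[0,-1,0,0]])        # matrix of omega
Jp = sp.Matrix([[0,0,H1,0],[0,0,0,H2],[-H1,0,0,0],[0,-H2,0,0]])    # matrix of omega'
R = J.inv()*Jp                      # recursion operator: diag(H1, H2, H1, H2)

XH = sp.Matrix([p1, p2, -q1, -q2])  # Hamiltonian vector field of H = H1 + H2

for k in (1, 2):                    # X_H(Tr R^k) = 0: traces are constants of motion
    Tk = (R**k).trace()
    assert sp.simplify(sum(XH[i]*sp.diff(Tk, z[i]) for i in range(4))) == 0
print((R**1).trace(), sp.expand((R**2).trace()))   # -> 2H and 2*(H1**2 + H2**2)
```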
We also recall that $\phi\in{\rm Diff}(M)$ is said to be a canonoid transformation of the Hamiltonian dynamical system $(M,\omega,H)$ when $\phi_{*}(X_{H})$ is also a Hamiltonian vector field, or equivalently when $X_{H}$ is Hamiltonian with respect to the transformed 2-form $\phi^{*}(\omega)$; in this case $\mathcal{L}_{X_{H}}(\phi^{*}(\omega))=0$ [16, 17], i.e. we can also choose $\omega^{\prime}=\phi^{*}(\omega)$ as an $X_{H}$-invariant 2-form.

## 6 Invariant volume forms and Jacobi multipliers

A particularly interesting case, with many applications not only in the theory of differential equations but also in classical mechanics, is that of volume forms invariant under a given vector field $X$ on an oriented manifold $(M,\Omega)$. Each volume form is of the form $R\,\Omega$ with a nonvanishing $R\in C^{\infty}(M)$, and the invariance condition is $\mathcal{L}_{X}(R\,\Omega)=0$; if we take into account that $\mathcal{L}_{X}(R\,\Omega)=d(i(X)(R\,\Omega))=d(i(R\,X)(\Omega))=\mathcal{L}_{R\,X}(\Omega),$ we see that the invariance of $R\,\Omega$ under $X$ is equivalent to the invariance of $\Omega$ under $R\,X$. Remark also that for each vector field $X\in\mathfrak{X}(M)$ the $n$-form $\mathcal{L}_{X}\Omega$ is proportional to $\Omega$, the proportionality function being called the divergence of $X$, i.e. $\mathcal{L}_{X}(\Omega)={\rm div}(X)\,\Omega\,,$ (6.1) and that if local coordinates are chosen such that $\Omega=dx^{1}\wedge\cdots\wedge dx^{n}$ and the local expression of $X$ is $X={\displaystyle\sum_{i=1}^{n}X^{i}\frac{\partial}{\partial x^{i}}}$, then the local expression of ${\rm div}(X)$ is ${\rm div}(X)=\sum_{i=1}^{n}\frac{\partial X^{i}}{\partial x^{i}}.$ Vector fields $X$ such that ${\rm div}(X)=0$ are called divergence-free vector fields and enjoy interesting properties. Recalling the properties of Lie derivatives, we find that, as for each $f\in C^{\infty}(M)$, $\mathcal{L}_{f\,X}(\Omega)=\mathcal{L}_{X}(f\,\Omega)=f\,\mathcal{L}_{X}(\Omega)+(\mathcal{L}_{X}f)\,\Omega$, using the definition (6.1) of ${\rm div}(X)$, we have that ${\rm div}(fX)=f\,{\rm div}(X)+X(f)\,.$ (6.2) The nonvanishing functions $R$ such that $\mathcal{L}_{X}(R\,\Omega)=\mathcal{L}_{R\,X}(\Omega)=0$, i.e. ${\rm div}(R\,X)=0$, are called Jacobi multipliers, and (6.2) shows that the nonvanishing function $R$ is a Jacobi multiplier if and only if (see [32, 33] and references therein) $R\,{\rm div}(X)+X(R)=0\Longleftrightarrow{\rm div}(X)+X(\log R)=0\,.$ (6.3) Locally defined Jacobi multipliers, obtained as particular solutions of (6.3), always exist, but in some particular cases global solutions, giving rise to invariant volume forms, do not exist [34, 35]. It is also to be remarked that it follows from the relation $\mathcal{L}_{X}(R\,\Omega)=d(i(X)(R\,\Omega))=d(i(R\,X)\Omega)$ that the function $R$ is a Jacobi multiplier for $X$ in the oriented manifold $(M,\Omega)$ if and only if $f\,R$ is a Jacobi multiplier for $X$ in the oriented manifold $(M,f^{-1}\,\Omega)$, for each positive function $f$.
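A classical example, sketched here symbolically (SymPy; the Lotka-Volterra system is a standard illustration, not taken from the text): $R=1/(xy)$ is a Jacobi multiplier for the Lotka-Volterra vector field, so $R\,\Omega=dx\wedge dy/(xy)$ is an invariant volume form.

```python
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d', positive=True)
P, Q = x*(a - b*y), y*(c*x - d)      # Lotka-Volterra: xdot = P, ydot = Q
R = 1/(x*y)

# Jacobi multiplier condition: div(R X) = 0
assert sp.simplify(sp.diff(R*P, x) + sp.diff(R*Q, y)) == 0

# equivalently, (6.3): R div(X) + X(R) = 0
divX = sp.diff(P, x) + sp.diff(Q, y)
assert sp.simplify(R*divX + P*sp.diff(R, x) + Q*sp.diff(R, y)) == 0
```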
As an interesting case we can consider a SODE vector field $\Gamma\in\mathfrak{X}(TQ)$ which admits a Lagrangian formulation, i.e. there exists a regular Lagrange function $L\in C^{\infty}(TQ)$ such that $i(\Gamma)\omega_{L}=dE_{L}$, or equivalently $\mathcal{L}_{\Gamma}\theta_{L}=dL$. Remark first that $TQ$ is orientable, and a local chart of $TQ$ induced from one in $Q$ induces an orientation. As $\mathcal{L}_{\Gamma}\omega_{L}=0$, we see that the volume form $(\omega_{L})^{\wedge n}$ is $\Gamma$-invariant, and therefore, if a volume form was previously fixed, there will exist a Jacobi multiplier $R$ such that $(\omega_{L})^{\wedge n}=R\,\Omega$. If, for instance, $\Omega$ is the volume form determined by a tangent bundle local chart, $\Omega=dq^{1}\wedge\cdots\wedge dq^{n}\wedge dv^{1}\wedge\cdots\wedge dv^{n}$, we obtain, according to (4.6), that the determinant of the Hessian matrix $W$ with elements $W_{ij}=\partial^{2}L/\partial v^{i}\partial v^{j}$ is a Jacobi multiplier, because $(\omega_{L})^{\wedge n}$ is a real multiple of $\det W\,dq^{1}\wedge\cdots\wedge dq^{n}\wedge dv^{1}\wedge\cdots\wedge dv^{n}$.

## 7 Hojman symmetry and constants of motion

In the preceding Sections 4 and 5 we have first summarised the usual way of searching for first-integrals by means of infinitesimal symmetries via the first Noether theorem, and we have then presented a second procedure for finding non-Noether constants of motion, which is not based on symmetries but on the existence of alternative geometric structures for the description of the vector field, which leads to the existence of a recursion operator. We mention next a third approach, started by Hojman [36] and González-Gascón [37], which has been becoming more and more important during the last years for its applications in $f(R)$-gravity and FRW cosmology [38, 39, 40, 41, 42, 43, 44]. The main result is very general and a direct consequence of a simple geometric relation which, when particular cases are considered, contains many results scattered in the physics literature. Thus, different relations among vector fields and their divergences can be used to establish first-integrals and integral invariants for vector fields on a manifold $M$. The following geometric relation plays a fundamental rôle: if $X,Y$ is an arbitrary pair of vector fields on an oriented manifold $(M,\Omega)$, then $\mathcal{L}_{X}({\rm div\,}(Y))-\mathcal{L}_{Y}({\rm div\,}(X))={\rm div\,}([X,Y]),$ (7.1) because the Lie derivatives acting on a $p$-form $\alpha$ satisfy the relation $\mathcal{L}_{X}(\mathcal{L}_{Y}\alpha)-\mathcal{L}_{Y}(\mathcal{L}_{X}\alpha)=\mathcal{L}_{[X,Y]}\alpha,$ (7.2) so in the particular case $\alpha=\Omega$ we have $\mathcal{L}_{X}(\mathcal{L}_{Y}\Omega)-\mathcal{L}_{Y}(\mathcal{L}_{X}\Omega)=\mathcal{L}_{[X,Y]}\Omega,$ (7.3) and from the difference of $\mathcal{L}_{X}(\mathcal{L}_{Y}\Omega)=\mathcal{L}_{X}({\rm div}(Y)\,\Omega)=\mathcal{L}_{X}({\rm div}(Y))\,\Omega+{\rm div}(Y)\,{\rm div}(X)\,\Omega$ and the corresponding relation $\mathcal{L}_{Y}(\mathcal{L}_{X}\Omega)=\mathcal{L}_{Y}({\rm div}(X)\,\Omega)=\mathcal{L}_{Y}({\rm div}(X))\,\Omega+{\rm div}(X)\,{\rm div}(Y)\,\Omega$ we obtain (7.1). As a consequence of this relation (7.1), we see that if the vector field $X\in\mathfrak{X}(M)$ on an oriented manifold $(M,\Omega)$ is divergence-free and the vector field $Y$ is an infinitesimal symmetry of $X$, i.e. $[X,Y]=0$, then $\textrm{div\,}(Y)$ is a constant of the motion for $X$. Something similar happens when the vector field $Y$ is an infinitesimal symmetry of the 1-dimensional distribution generated by $X$, i.e. there exists a function $h$ such that $[Y,X]=h\,X$, because in this case the function $\textrm{div\,}(Y)+h$ is a constant of the motion for $X$.
Actually, if ${\rm div\,}(X)=0$, then $\mathcal{L}_{X}\Omega=0$, and hence, as $[X,Y]=-h\,X$, $\mathcal{L}_{X}(\mathcal{L}_{Y}\Omega)=\mathcal{L}_{Y}(\mathcal{L}_{X}\Omega)+\mathcal{L}_{-h\,X}\Omega=-\mathcal{L}_{X}(h\,\Omega)=-X(h)\,\Omega,$ and then, from $\mathcal{L}_{X}(\mathcal{L}_{Y}\Omega)=\mathcal{L}_{X}({\rm div\,}(Y)\,\Omega)=\mathcal{L}_{X}({\rm div\,}(Y))\Omega,$ we obtain that $\mathcal{L}_{X}({\rm div\,}(Y)+h)=0$, and therefore the following function $I$ is a constant of the motion for $X$: $I={\rm div\,}(Y)+h.$ (7.4) If the vector field is not divergence-free, remark that for any nonvanishing function $R$ the constants of motion of $X$ coincide with those of $R\,X$; in particular, if $R$ is a Jacobi multiplier for $X$, the vector field $\bar{X}=R\,X$ is such that when $[Y,X]=h\,X$ we have $[Y,\bar{X}]=\bar{h}\,\bar{X}$ with $\bar{h}=(Y(R)/R)+h$, and hence we have the constant of motion for $\bar{X}$, and therefore for $X$, given by $I={\rm div\,}(Y)+\bar{h}={\rm div\,}(Y)+Y(\log R)+h.$ (7.5) These general results can be applied to specific examples, and we recover as particular cases many previously found constants of motion. For instance, one can apply the general theory to both Hamiltonian and Lagrangian formulations of autonomous systems, or to generic second-order differential equations. In the particular case of a system on a Euclidean configuration space admitting a Lagrangian formulation, we mentioned at the end of the preceding section that a Jacobi multiplier is given by the determinant of the Hessian matrix $W$, and this explains the result obtained by Lutzky [45] as a particular case of the constant of motion given by (7.5). In the more general case of $Q$ being an $n$-dimensional manifold, a local chart of coordinates $(x^{1},\ldots,x^{n})$ for $Q$ induces a local chart of coordinates on its tangent bundle, denoted $(x^{1},\ldots,x^{n},v^{1},\ldots,v^{n})$, as indicated in Section 4, and an associated volume element in such a chart given by $\Omega=dx^{1}\wedge\cdots\wedge dx^{n}\wedge dv^{1}\wedge\cdots\wedge dv^{n}$. If the dynamical vector field $\Gamma$, given in such coordinates by $\Gamma=\sum_{i=1}^{n}\left(v^{i}\frac{\partial}{\partial x^{i}}+F^{i}(x,v)\frac{\partial}{\partial v^{i}}\right),$ (7.6) is determined by the Lagrangian $L$, i.e. $i(\Gamma)\omega_{L}=dE_{L}$, which implies $\mathcal{L}_{\Gamma}\omega_{L}=0$ (see e.g. [28]), it has been shown (see e.g. [32] and references therein) that a particular Jacobi multiplier for $\Gamma$ with respect to the volume form $\Omega$ is given by the determinant of the Hessian matrix $W$ in the velocities, with elements $W_{ij}=\partial^{2}L/{\partial v^{i}\partial v^{j}}$. Actually, from $\mathcal{L}_{\Gamma}\omega_{L}=0$, we see that $(\omega_{L})^{\wedge n}$ is a volume form invariant under $\Gamma$, and the proportionality factor between $(\omega_{L})^{\wedge n}$ and $\Omega$ is a real multiple of $\det W$. Therefore, the constant of motion obtained in [45] is just a particular case of the expression (7.5) for $R$ equal to the determinant of the Hessian matrix $W$. As far as nonautonomous systems are concerned, the dynamical vector fields must be replaced by 1-dimensional distributions, and therefore the more general condition $[Y,X]=h\,X$, which means that the vector field $Y$ preserves the 1-dimensional distribution generated by $X$, is the relevant one in the context of symmetry for such systems.
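A minimal symbolic sketch of the autonomous construction (SymPy; the rotation field and its symmetry are illustrative choices): $X$ is divergence-free, $Y$ commutes with $X$, and ${\rm div}(Y)$ is then automatically a first-integral of $X$, as (7.4) predicts with $h=0$.

```python
import sympy as sp

x, y = sp.symbols('x y')
X = sp.Matrix([-y, x])                    # rotation field, div(X) = 0
r2 = x**2 + y**2
Y = sp.Matrix([r2*x, r2*y])               # rotationally invariant radial field

brk = Y.jacobian([x, y])*X - X.jacobian([x, y])*Y    # [X, Y]
assert sp.simplify(brk) == sp.zeros(2, 1)            # Y is a symmetry of X

I = sp.diff(Y[0], x) + sp.diff(Y[1], y)   # Hojman constant I = div(Y)
assert sp.simplify(X[0]*sp.diff(I, x) + X[1]*sp.diff(I, y)) == 0   # X(I) = 0
print(sp.factor(I))                       # -> 4*(x**2 + y**2)
```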
As in the autonomous case, we can study particular examples of non-autonomous systems of first-order differential equations [36, 37], but also of Hamiltonian systems as in [46], or even of non-autonomous systems of second-order differential equations [36, 37, 47, 48], and in particular systems admitting a Lagrangian formulation [45, 48]. For a recent geometric presentation of all these results see e.g. [49].

## 8 Hamilton-Jacobi equation as a reduction procedure

As a last example of the reduction theory we analyse the well-known Hamilton-Jacobi equation from a geometric perspective. There is a very recent nice review paper on the subject [50], based on [51], to which I refer for more general results; only the reduction technique for the particular case of an autonomous Hamiltonian system in the phase space $T^{*}Q$ is presented here. Recall that 1-forms on $Q$ are sections of the cotangent bundle $\pi_{Q}:T^{*}Q\to Q$, while vector fields on $Q$ are sections of the projection map $\tau_{Q}:TQ\to Q$ of the tangent bundle. Then, given a Hamiltonian dynamical system $(T^{*}Q,\omega,H)$, which defines a Hamiltonian vector field $X_{H}\in\mathfrak{X}(T^{*}Q)$ by the relation $i(X_{H})\omega=dH$, where $\omega$ is the canonical symplectic structure on $T^{*}Q$, the aim is to find a vector field $Z\in\mathfrak{X}(Q)$ and a 1-form $\alpha\in\bigwedge^{1}(Q)$ such that $X_{H}\circ\alpha=T\alpha\circ Z$. Such a pair $(Z,\alpha)$ is said to be a generalised solution of the Hamilton-Jacobi problem, and when $\alpha$ is closed we say that $(Z,\alpha)$ is a standard solution of the Hamilton-Jacobi problem. Note that under these conditions, if a curve $\gamma$ in $Q$ is an integral curve of $Z$, then the curve $\alpha\circ\gamma$ in $T^{*}Q$ is an integral curve of $X_{H}$. This shows that the vector field $X_{H}$ is tangent to the image of $\alpha$. Moreover, as a consequence, since $X_{H}$ is a vector field on $T^{*}Q$ and $T\pi_{Q}\circ T\alpha={\rm id}_{TQ}$, we have that $T\pi_{Q}\circ X_{H}\circ\alpha=T\pi_{Q}\circ T\alpha\circ Z=Z$, i.e. the restriction of the vector field $X_{H}$ to the image of $\alpha$ is projectable onto $Z$. In this case the vector field $Z$ of the pair $(Z,\alpha)$ can be derived from $\alpha$. Observe that for each vector field $Y\in\mathfrak{X}(Q)$, if we recall that $\alpha^{*}\omega=-d\alpha$, the 1-form $i(Z)d\alpha$ is such that $(i(Z)d\alpha)(Y)=d\alpha(Z,Y)=-\alpha^{*}\omega(Z,Y)=-\omega(T\alpha\circ Z,T\alpha\circ Y)\circ\alpha,$ while $(\alpha^{*}(dH))(Y)=\alpha^{*}(i(X_{H})\omega)(Y)=(i(T\alpha\circ Y)i(X_{H})\omega)\circ\alpha=\omega(X_{H}\circ\alpha,T\alpha\circ Y)\circ\alpha,$ and consequently the relatedness condition $X_{H}\circ\alpha=T\alpha\circ Z$ is equivalent to $i(Z)d\alpha+d(\alpha^{*}H)=0.$ (8.1) Remark that for a standard solution, as the 1-form $\alpha$ is closed, $d\alpha=0$, there exists a locally defined function $S$ on $Q$ such that $\alpha=dS$, and the relatedness condition (8.1) can be replaced by $d(\alpha^{*}H)=0$, i.e. $\alpha^{*}H$ is locally constant, or more explicitly $(dS)^{*}H=E\Longleftrightarrow H\left(q^{1},\ldots,q^{n},\frac{\partial S}{\partial q^{1}},\ldots,\frac{\partial S}{\partial q^{n}}\right)=E,$ which is what is usually known as the Hamilton-Jacobi equation.
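A worked sketch (SymPy; the one-degree-of-freedom oscillator is an illustrative choice): from $(dS)^{*}H=E$ one gets $S^{\prime}(q)=\sqrt{2E-q^{2}}$ for $H=(p^{2}+q^{2})/2$, and $S$ itself is obtained by one quadrature; the reduced vector field on $Q$ (denoted $X^{S}$ below) is then $S^{\prime}(q)\,\partial/\partial q$.

```python
import sympy as sp

q, p, E = sp.symbols('q p E', positive=True)
H = p**2/2 + q**2/2

Sprime = sp.sqrt(2*E - q**2)                 # solve H(q, S'(q)) = E for S'
assert sp.simplify(H.subs(p, Sprime) - E) == 0

S = sp.integrate(Sprime, q)                  # one quadrature gives S itself
print(sp.simplify(S))

# reduced dynamics on Q: X^S = (dH/dp)|_{p = S'(q)} d/dq = S'(q) d/dq
XS = sp.diff(H, p).subs(p, Sprime)
print(XS)                                    # -> sqrt(2*E - q**2)
```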
Once a solution of this equation has been obtained, we can define the vector field $X^{S}\in\mathfrak{X}(Q)$ by $X^{S}=T\pi\circ X_{H}\circ dS$, and then $dS(Q)$ is a submanifold invariant under $X_{H}$ such that for each integral curve $\gamma$ of $X^{S}$ the curve $dS\circ\gamma$ is an integral curve of $X_{H}$; in this sense the vector field $X^{S}$ on $Q$ is a reduction of $X_{H}$ which describes, at least partially, the dynamics defined by $X_{H}$ on $T^{*}Q$. Indeed, the integral curves of $X_{H}$ starting on such an invariant submanifold remain entirely in it. In order to fully solve the problem we need an appropriate $n$-parameter family of functions $S_{\lambda}$, with $\lambda=(\lambda_{1},\ldots,\lambda_{n})$, $\lambda_{i}\in\mathbb{R}$, which is usually known as a complete solution of the Hamilton-Jacobi equation. More details and applications can be found in [50].

## 9 Conclusions

The theory of integrability of systems of differential equations has been analysed from a geometric perspective through the properties of their associated vector fields. The reduction process consists in finding a more easily integrable related system whose solutions give us at least partial information on the solutions of the original system. The search for such simpler related systems is based on the use of first-integrals and infinitesimal symmetries of the vector field or, more generally, invariant tensor fields. Thus, in Section 2 we have revisited the well-known classical Lie theorem and a more recent generalised result, without recourse to additional compatible structures, while in Section 3 the rôle of invariant tensor fields in integrability has been discussed, with a special emphasis on invariant (1,1)-tensor fields leading to symmetry generators and Lax equations with their corresponding first-integrals. In Section 4, after a brief summary of geometric Hamiltonian and Lagrangian mechanics as well as the explicit formulation of the Noether theorem in both frameworks, Arnold-Liouville integrability has been described. The meaning of non-Noether symmetries and how to find such symmetries have been discussed in Section 5. The usefulness of invariant volume forms and their relation to Jacobi multipliers on oriented manifolds has been displayed in Section 6, as a prelude to the geometric approach to Hojman symmetry briefly presented in Section 7. Finally, the geometric approach to the Hamilton-Jacobi equation has also been analysed from the perspective of the search for related systems, according to the approach to reduction and integrability developed in this paper.

## Acknowledgements

Financial support from the projects of the Spanish Ministerio de Ciencia, Innovación y Universidades PGC2018-098265-B-C31 and Gobierno de Aragón 48_20R Análisis y Física Matemática is acknowledged.

## References

* [1] J.F. Cariñena and J. de Lucas, Lie systems: theory, generalisations, and applications, Dissertationes Mathematicae 479, 2011.
* [2] J. de Lucas Araujo and C. Sardón, A Guide to Lie Systems with Compatible Geometric Structures, World Sci. Pub., 2020.
* [3] J. Wei and E. Norman, Lie algebraic solution of linear differential equations, J. Math. Phys. 4, 575–581 (1963).
* [4] J. Wei and E. Norman, On global representations of the solutions of linear differential equations as a product of exponentials, Proc. Amer. Math. Soc. 15, 327–334 (1964).
* [5] J.F. Cariñena, G. Marmo and J. Nasarre, The non-linear superposition principle and the Wei–Norman method, Int. J. Mod. Phys. A 13, 3601–3627 (1998).
* [6] S.
* [6] S. Lie, Vorlesungen über Differentialgleichungen mit bekannten infinitesimalen Transformationen, Teubner, Leipzig, 1891. Reprinted in AMS Chelsea Publishing vol. CHEL/206.H, Amer. Math. Soc., 1967.
* [7] J.F. Cariñena, F. Falceto, J. Grabowski and M.F. Rañada, Geometry of Lie integrability by quadratures, J. Phys. A: Math. Theor. 48, 215206 (2015).
* [8] J.F. Cariñena, F. Falceto and J. Grabowski, Solvability of a Lie algebra of vector fields implies their integrability by quadratures, J. Phys. A: Math. Theor. 49, 425202 (2016).
* [9] V.V. Kozlov, Tensor invariants and integration of differential equations, Russian Math. Surveys 74, 111–140 (2019).
* [10] H. Poincaré, Les méthodes nouvelles de la mécanique céleste, Vol. 3, Gauthier-Villars, Paris, 1892.
* [11] P. Lax, Integrals of nonlinear equations of evolution and solitary waves, Communications on Pure and Applied Mathematics 21, 467–490 (1968).
* [12] S. de Filippo, G. Marmo and G. Vilasi, A geometrical setting for the Lax representation, Phys. Lett. 107 B, 418–422 (1982).
* [13] J.F. Cariñena and L.A. Ibort, A Geometrical Setting for Lax equations associated to Dynamical Systems, Phys. Lett. 107 A, 356–358 (1985).
* [14] J.H. Wilkinson, The algebraic eigenvalue problem, Oxford University Press, Oxford, 1965.
* [15] J.F. Cariñena and L.A. Ibort, Non-Noether constants of motion, J. Phys. A: Math. Gen. 16, 1–7 (1983).
* [16] J.F. Cariñena and M.F. Rañada, Canonoid transformations from a geometric perspective, J. Math. Phys. 29, 2181–2186 (1988).
* [17] J.F. Cariñena, F. Falceto and M.F. Rañada, Canonoid transformations and master symmetries, J. Geom. Mech. 5, 151–166 (2013).
* [18] M. Crampin, A note on Non-Noether Constants of Motion, Phys. Lett. 95 A, 209–212 (1983).
* [19] G. Marmo and C. Rubano, Equivalent Lagrangians and Lax representations, Nuovo Cim. 78 B, 70–84 (1983).
* [20] J.F. Cariñena and L.A. Ibort, On Lax equations arising from Lagrangian foliations, Lett. Math. Phys. 8, 21–26 (1984).
* [21] R. Abraham and J.E. Marsden, Foundations of Mechanics, 2${}^{\text{nd}}$ edition, Benjamin, 1978.
* [22] P. Libermann and C.-M. Marle, Symplectic Geometry and Analytical Mechanics, Reidel, 1987.
* [23] N.W. Evans, Superintegrability in classical mechanics, Phys. Rev. A 41, 5666–5676 (1990).
* [24] J.F. Cariñena, M. Santander and M.F. Rañada, Superintegrability of 3-dimensional Hamiltonian systems with conformally Euclidean metrics. Oscillator-related and Kepler-related systems, J. Phys. A: Math. Theor. 54, 105201 (2021).
* [25] J.F. Cariñena, M. Santander and M.F. Rañada, Superintegrability on the 3-dimensional spaces with curvature. Oscillator-related and Kepler-related systems on the Sphere $S^{3}$ and on the Hyperbolic space $H^{3}$, J. Phys. A: Math. Theor. 54, 365201 (2021).
* [26] M. Crampin, On the differential geometry of the Euler-Lagrange equations and the inverse problem in Lagrangian dynamics, J. Phys. A: Math. Gen. 14, 2567–2575 (1981).
* [27] M. Crampin, Tangent bundle geometry for Lagrangian dynamics, J. Phys. A: Math. Gen. 16, 3755–3772 (1983).
* [28] J.F. Cariñena, A. Ibort, G. Marmo and G. Morandi, Geometry from Dynamics: Classical and Quantum, Springer, Dordrecht, 2015.
* [29] M. Crampin and G. Thompson, Affine bundles and integrable almost tangent structures, Math. Proc. Camb. Phil. Soc. 98, 61–71 (1985).
* [30] J.F. Cariñena and L.A. Ibort, Geometric Theory of the Equivalence of Lagrangians for Constrained Systems, J. Phys. A: Math. Gen. 18, 3335–3341 (1985).
* [31] M. Tsamparlis and A. Paliathanasis, Lie and Noether symmetries of geodesic equations and collineations, Gen. Rel. Grav. 42, 2957–2980 (2010).
* [32] J.F. Cariñena and P. Santos, Jacobi Multipliers and Hamel’s formalism, J. Phys. A: Math. Theor. 54, 225203 (2021).
* [33] J.F. Cariñena and J. Fernández-Núñez, Jacobi multipliers in integrability and the inverse problem of mechanics, Symmetry 13, 1413 (2021).
* [34] F. González-Gascón, Divergence-free vector fields and integration via quadratures, Phys. Lett. A 225, 269–273 (1996).
* [35] Y.N. Fedorov, L.C. García-Naranjo and J.C. Marrero, Unimodularity and Preservation of Volumes in Nonholonomic Mechanics, J. Nonlinear Sci. 25, 203–246 (2015).
* [36] S. Hojman, A new conservation law constructed without using either Lagrangians or Hamiltonians, J. Phys. A: Math. Gen. 25, L291–L295 (1992).
* [37] F. González-Gascón, Geometric foundations of a new conservation law discovered by Hojman, J. Phys. A: Math. Gen. 27, L59–L60 (1994).
* [38] F. Darabi, M. Golmohammadi and A. Rezaei-Aghdam, FRW string cosmological solutions via Hojman symmetry, Int. J. Geom. Methods Mod. Phys. 17, 2050175 (2020).
* [39] F. Darabi, M. Golmohammadi and A. Rezaei-Aghdam, Generalized (2 + 1)-dimensional BTZ black holes via Hojman symmetry, arXiv:2010.08424v2 (2021).
* [40] H. Wei, Y-N. Zhou, H-Y. Li and X-B. Zou, Hojman symmetry in $f(T)$ theory, Astrophys. Space Sci. 360, 6 (2015).
* [41] H. Wei, Y-N. Zhou, H-Y. Li and X-B. Zou, Exact cosmological solutions of $f(R)$ theories via Hojman symmetry, Nucl. Phys. B 903, 132–149 (2016).
* [42] S. Capozziello and M. Roshan, Exact cosmological solutions from Hojman conservation quantities, Phys. Lett. B 726, 471–480 (2013).
* [43] M.C. Paolella and S. Capozziello, Hojman symmetry approach for scalar-tensor cosmology, Phys. Lett. A 379, 1304–1308 (2015).
* [44] A. Paliathanasis, P.G.L. Leach and S. Capozziello, On the Hojman conservation quantities in Cosmology, Phys. Lett. B 755, 8–12 (2016).
* [45] M. Lutzky, Remarks on a recent theorem about conserved quantities, J. Phys. A: Math. Gen. 28, L637–L638 (1995).
* [46] S-L. Gu and K-X. Wei, Study on The Symmetry and Conserved Quantities for Hamilton Systems, International Conference on Logistics Engineering, Management and Computer Science (LEMCS 2014), Advances in Intelligent Systems Research Series, pp. 798–801 (2014).
* [47] F. González-Gascón, Notes on the Connection between the Symmetries and the First Integrals of Dynamical Systems, Lett. Nuovo Cim. 19, 366–368 (1977).
* [48] H-B. Zhang and L-Q. Chen, The Unified Form of Hojman’s Conservation Law and Lutzky’s Conservation Law, J. Phys. Soc. Japan 74, 905–909 (2005).
* [49] J.F. Cariñena and M.F. Rañada, Jacobi multipliers and Hojman symmetry, Int. J. Geom. Methods Mod. Phys. 18, 2150166 (2021).
* [50] N. Román-Roy, An Overview of the Hamilton-Jacobi Theory: the Classical and Geometrical Approaches and Some Extensions and Applications, Mathematics 9, 85 (2021).
* [51] J.F. Cariñena, X. Gràcia, G. Marmo, E. Martínez, M.C. Muñoz-Lecanda and N. Román-Roy, Geometric Hamilton–Jacobi theory, Int. J. Geom. Methods Mod. Phys. 3, 1417–1458 (2006).
# Chaotic dynamics of the sugarcane borer-two parasitoid agroecosystem with seasonality

Marat Rafikov1, Alexandre Molter2, João Inácio Moreira Bezerra2, Elvira Rafikova1, Maria Cristina Varriale3

1 Federal University of ABC, Brazil. 2 Federal University of Pelotas, Brazil. 3 Federal University of Rio Grande do Sul, Brazil.

###### Abstract

Sugarcane production is a significant and profitable agribusiness sector in many countries. Nevertheless, this industry suffers significant losses from sugarcane pests, the most important of which is the sugarcane borer (Diatraea saccharalis). This pest population is hard to control due to its different life stages, so biological control (with more than one parasitoid species) can be applied. Therefore, in this work, we present and analyze a mathematical model that describes the dynamics of the sugarcane borer and its parasitoids acting at two different life stages: egg (Trichogramma galloi) and larval (Cotesia flavipes). First, a host-parasitoid model is used to obtain the population dynamics, which also considers the influence of seasonal variations. Then, system simulations and bifurcation diagrams show that the introduction of seasonal perturbations causes complex dynamics, resulting in limit cycles and strange attractors.

###### keywords: Sugarcane borer, Parasitoids, Dynamics, Seasonality, Chaos

Journal: Applied Mathematics and Computation

## 1 Introduction

Sugarcane is a global commodity produced throughout the tropics and used for sweeteners, biofuels, and a growing range of bioproducts (including bioplastics) [WinNT]. Sugarcane production suffers from insect pests such as the sugarcane borer Diatraea saccharalis, which is reportedly the most significant sugarcane pest in Brazil and other countries [parraetal2002, postali2019applied]. This peculiar insect lays eggs on the surface of the sugarcane leaves. The larvae, however, live inside the sugarcane stalk, carving internal galleries and causing significant damage that can lead to the death of the plant. Because the larvae live hidden from the surface, pesticide control becomes inefficient, and biological control is the alternative in this case. Biological control is the reduction of pest populations by their natural enemies (predators, parasitoids, and pathogens). The sugarcane borer larva population has been controlled since the 1970s by the parasitoid Cotesia flavipes. More recently, the parasitoid Trichogramma galloi has become a new alternative for biological control of the egg population [parraetal2002, postali2019applied]. The applications of prey-predator and host-parasitoid models for biological control were reviewed in [gamez2009observation, gamez2010open, venturino2007diseases, venturino2008biological, chattopadhyay2015chaos]. In biological systems with more than two species, prey-predator interactions become complex, and consequently prey invasions become harder to model. Mathematical modeling has an important role in decision-making for pest population control, providing information about the stability of natural systems [goh2012management]; together with computational simulations, it reveals the behavior of these complex systems and helps us understand how the prey interacts with other species in the environment.
Mathematical models for biological control with only one parasitoid population are considered in [rafikov2012mathematical] and [rafikov2014dynamical], for the egg and the larval parasitoid, respectively. The mathematical model of interactions between the sugarcane borer and its egg and larval parasitoids was proposed by Rafikov and Silveira [rafikov2013dynamics]. In Molnár et al. [molnar2016two], this model was used for the formulation of some scenarios of biological pest control. In both publications, the populations are described by four compartments: the sugarcane borer's egg population, the sugarcane borer's larva population, the Trichogramma galloi (egg parasitoid) population, and the Cotesia flavipes (larval parasitoid) population. As pointed out in [rinaldi1993multiple], periodic external forces are of great importance in ecological systems, since the environments of population communities vary periodically. There are many systems that have a very simple dynamic behavior in the constant-parameter case but become very complex (multiplicity of attractors, catastrophes, and chaos) when they are periodically perturbed. Seasonality in biological systems has already been addressed in several works, such as [rinaldi1993multiple, gakkhar2003chaos, gakkhar2003seasonally, altizer2006seasonality, zhang2012complexity, white2020seasonality, bezerra2021biological, stollenwerk2017hopf]. Meanwhile, the inclusion of two parasitoid populations (egg and larval), as well as of seasonal variations, in the sugarcane borer agroecosystem dynamics is a novelty. Bezerra et al. [bezerra2021biological] model the interaction between the sugarcane borer and its larval parasitoid (Cotesia flavipes), considering the influence of seasonal variations on the dynamics of the system. Their results show that this variation generates chaotic dynamics in the system. The system model in the present paper is an extension of the mathematical model in [rafikov2013dynamics], with the addition of the parasitized egg and parasitized larva populations of the sugarcane borer. This addition improves the estimation and observation of the system parameters, as it is much easier to monitor the sugarcane borer's parasitized eggs and parasitized larvae than the adult parasitoid populations in real conditions. Moreover, seasonal variations are introduced into the system dynamics, resulting in chaotic behavior. The paper is organized as follows. In Section 2, our six-dimensional, continuous-time dynamical system is proposed. Section 3 is dedicated to the study of the system's local and global dynamics. The seasonal dynamics of the sugarcane borer-parasitoid agroecosystem is considered in Section 4. Section 5 discusses the results of the previous sections and concludes the paper.

## 2 Mathematical Model

The proposed continuous-time mathematical model describes the interactions between the sugarcane borer and its egg and larval parasitoids, considering six population densities: the unparasitized egg population density of the sugarcane borer, $x_{1}$; the parasitized egg population density of the sugarcane borer, $x_{2}$; the density of the adult egg parasitoid Trichogramma galloi, $x_{3}$; the unparasitized larva density of the sugarcane borer, $x_{4}$; the parasitized larva density of the sugarcane borer, $x_{5}$; the density of the adult larval parasitoid Cotesia flavipes, $x_{6}$.
So, the mathematical model has the following form:

$\displaystyle\frac{dx_{1}}{dt}$ $\displaystyle=rx_{1}\left(1-\frac{x_{1}}{K}\right)-m_{1}x_{1}-n_{1}x_{1}-\alpha x_{1}x_{3},$ $\displaystyle\frac{dx_{2}}{dt}$ $\displaystyle=\alpha x_{1}x_{3}-m_{2}x_{2}-n_{2}x_{2},$ $\displaystyle\frac{dx_{3}}{dt}$ $\displaystyle=\gamma_{1}n_{2}x_{2}-m_{3}x_{3},$ $\displaystyle\frac{dx_{4}}{dt}$ $\displaystyle=n_{1}x_{1}-m_{4}x_{4}-n_{3}x_{4}-\beta x_{4}x_{6},$ $\displaystyle\frac{dx_{5}}{dt}$ $\displaystyle=\beta x_{4}x_{6}-m_{5}x_{5}-n_{4}x_{5},$ $\displaystyle\frac{dx_{6}}{dt}$ $\displaystyle=\gamma_{2}n_{4}x_{5}-m_{6}x_{6}.$ (1)

In the system of differential equations (1), the 16 parameters are defined as follows. $r$ is the intrinsic oviposition rate of the female sugarcane borer; $K$ is the potential maximum of the oviposition rate of the female sugarcane borer; $m_{1},m_{2},m_{3},m_{4},m_{5}$ and $m_{6}$ are the mortality rates of the unparasitized egg, parasitized egg, egg parasitoid, unparasitized larva, parasitized larva and larval parasitoid populations, respectively; $n_{1}$ is the fraction of the sugarcane borer larva population which emerges from the eggs per unit of time; $n_{2}$ is the fraction of the parasitized egg population from which egg parasitoids emerge in a time unit; $n_{3}$ is the fraction of the unparasitized sugarcane borer larvae from which pupae emerge in a time unit; $n_{4}$ is the fraction of the parasitized sugarcane borer larvae from which larval parasitoids emerge in a time unit; $\alpha$ and $\beta$ are the intrinsic parasitism rates of the egg and larval parasitoids, respectively; $\gamma_{1}$ and $\gamma_{2}$ are the numbers of adult parasitoids which emerge from a unit of parasitized eggs and larvae, respectively.

## 3 System Equilibrium states and their stability

### 3.1 Equilibrium states

The equilibrium points can be obtained by setting the right-hand sides of system (1) equal to zero. We obtain the following five equilibria, written as sequences $(x_{i})$, $i=1,2,\dots,6$:

* 1. Extinction of all populations: $E_{1}=(0,0,0,0,0,0)$.
* 2. Extinction of all parasitoid and parasitized populations: $E_{2}=\left(\frac{K}{r}(r-m_{1}-n_{1}),0,0,\frac{Kn_{1}(r-m_{1}-n_{1})}{r(m_{4}+n_{3})},0,0\right)$.
* 3. Extinction of the egg parasitoid and parasitized egg populations: $E_{3}=\left(\frac{K}{r}(r-m_{1}-n_{1}),0,0,p_{4}^{\ast},p_{5}^{\ast},p_{6}^{\ast}\right)$.
* 4. Extinction of the larval parasitoid and parasitized larva populations: $E_{4}=(x_{1}^{\ast},x_{2}^{\ast},x_{3}^{\ast},q^{\ast},0,0)$.
* 5. Coexistence of all populations: $E_{5}=(x_{1}^{\ast},x_{2}^{\ast},x_{3}^{\ast},x_{4}^{\ast},x_{5}^{\ast},x_{6}^{\ast}).$

The values cited above are given as follows.
$\displaystyle x_{1}^{\ast}$ $\displaystyle=\frac{m_{3}(m_{2}+n_{2})}{\alpha\gamma_{1}n_{2}},$ $\displaystyle x_{3}^{\ast}$ $\displaystyle=\frac{1}{\alpha}\left[r\left(1-\frac{m_{3}(m_{2}+n_{2})}{\alpha\gamma_{1}n_{2}K}\right)-m_{1}-n_{1}\right],$ $\displaystyle x_{2}^{\ast}$ $\displaystyle=\frac{m_{3}}{\gamma_{1}n_{2}}x_{3}^{\ast},$ $\displaystyle x_{4}^{\ast}$ $\displaystyle=\frac{m_{6}(m_{5}+n_{4})}{\beta\gamma_{2}n_{4}},$ $\displaystyle x_{5}^{\ast}$ $\displaystyle=\frac{m_{3}n_{1}(m_{2}+n_{2})}{\alpha\gamma_{1}n_{2}(m_{5}+n_{4})}-\frac{m_{6}(m_{4}+n_{3})}{\beta\gamma_{2}n_{4}},$ $\displaystyle x_{6}^{\ast}$ $\displaystyle=\frac{\gamma_{2}n_{4}}{m_{6}}x_{5}^{\ast},$ $\displaystyle p_{4}^{\ast}$ $\displaystyle=\frac{m_{6}(m_{5}+n_{4})}{\beta\gamma_{2}n_{4}},$ $\displaystyle p_{5}^{\ast}$ $\displaystyle=\frac{n_{1}}{m_{5}+n_{4}}\left[\frac{K}{r}(r-m_{1}-n_{1})\right]-\frac{m_{6}(m_{4}+n_{3})}{\beta\gamma_{2}n_{4}},$ $\displaystyle p_{6}^{\ast}$ $\displaystyle=\frac{\gamma_{2}n_{4}}{m_{6}}p_{5}^{\ast},$ $\displaystyle q^{\ast}$ $\displaystyle=\frac{n_{1}}{m_{4}+n_{3}}x_{1}^{\ast}.$

Since we are modelling a biological system, where the dependent variables are populations, the phase space includes only positive or zero values of all coordinates, which defines their biological viability. When analyzing each one of these equilibrium points in the next subsection, conditions will be set for their biological viability.

### 3.2 Local stability analysis of the equilibrium points

In order to study the local stability of these equilibrium states, system (1) is linearized in a small neighborhood of each equilibrium state, with the Jacobian matrix computed as:

$J=\begin{bmatrix}a_{11}&0&a_{13}&0&0&0\\\ a_{21}&a_{22}&a_{23}&0&0&0\\\ 0&a_{32}&a_{33}&0&0&0\\\ a_{41}&0&0&a_{44}&0&a_{46}\\\ 0&0&0&a_{54}&a_{55}&a_{56}\\\ 0&0&0&0&a_{65}&a_{66}\\\ \end{bmatrix},$ (2)

in which:

$\displaystyle a_{11}$ $\displaystyle=r-\frac{2rx_{1}}{K}-m_{1}-n_{1}-\alpha x_{3},a_{13}=-\alpha x_{1},a_{21}=\alpha x_{3},a_{22}=-m_{2}-n_{2},$ $\displaystyle a_{23}$ $\displaystyle=\alpha x_{1},a_{32}=\gamma_{1}n_{2},a_{33}=-m_{3},a_{41}=n_{1},a_{44}=-m_{4}-n_{3}-\beta x_{6},$ $\displaystyle a_{46}$ $\displaystyle=-\beta x_{4},a_{54}=\beta x_{6},a_{55}=-m_{5}-n_{4},a_{56}=\beta x_{4},a_{65}=\gamma_{2}n_{4},a_{66}=-m_{6}.$

The matrix (2) can be written as a block matrix:

$\begin{bmatrix}A&0\\\ C&B\\\ \end{bmatrix},$ (3)

in which:

$\displaystyle A$ $\displaystyle=\begin{bmatrix}a_{11}&0&a_{13}\\\ a_{21}&a_{22}&a_{23}\\\ 0&a_{32}&a_{33}\\\ \end{bmatrix},C=\begin{bmatrix}a_{41}&0&0\\\ 0&0&0\\\ 0&0&0\\\ \end{bmatrix},B=\begin{bmatrix}a_{44}&0&a_{46}\\\ a_{54}&a_{55}&a_{56}\\\ 0&a_{65}&a_{66}\\\ \end{bmatrix},$

and 0 is a matrix with all elements equal to zero. Matrices of the form (3) are called block-lower-triangular. A full explanation of the determinant computation for block-triangular matrices can be found in [gantmachertheory]; briefly: if $D$ is a block-triangular matrix, then the determinant of the matrix is equal to the product of the determinants of the diagonal blocks:

$\det{(D)}=\det{(D_{11})}\det{(D_{22})}\cdots\det{(D_{nn})}.$ (4)

The matrix $D=J-\lambda I$, where $I$ is the identity matrix of dimensions $n\times n$, is block-lower-triangular too.
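As a quick numerical illustration of this rule (a sketch with arbitrary illustrative entries, not the model's actual Jacobian values), one can check that the determinant of a matrix with the block structure (3) factorizes into the product of the determinants of its diagonal blocks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary 3x3 diagonal blocks A, B and a coupling block C, as in (3).
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
C = rng.normal(size=(3, 3))

# Assemble the 6x6 block-lower-triangular matrix [[A, 0], [C, B]].
D = np.block([[A, np.zeros((3, 3))], [C, B]])

# The determinant factorizes over the diagonal blocks, cf. (4).
assert np.isclose(np.linalg.det(D), np.linalg.det(A) * np.linalg.det(B))
```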
Using the above-mentioned rule, the characteristic equation is given as follows:

$\det{(D)}=\det{(A-\lambda I)}\det{(B-\lambda I)}=0.$ (5)

Hence, the characteristic equation (5) is given by:

$\begin{vmatrix}a_{11}-\lambda&0&a_{13}\\\ a_{21}&a_{22}-\lambda&a_{23}\\\ 0&a_{32}&a_{33}-\lambda\\\ \end{vmatrix}\begin{vmatrix}a_{44}-\lambda&0&a_{46}\\\ a_{54}&a_{55}-\lambda&a_{56}\\\ 0&a_{65}&a_{66}-\lambda\\\ \end{vmatrix}=0.$ (6)

From (6), we get that:

$\begin{vmatrix}a_{11}-\lambda&0&a_{13}\\\ a_{21}&a_{22}-\lambda&a_{23}\\\ 0&a_{32}&a_{33}-\lambda\\\ \end{vmatrix}=0,$ (7)

$\begin{vmatrix}a_{44}-\lambda&0&a_{46}\\\ a_{54}&a_{55}-\lambda&a_{56}\\\ 0&a_{65}&a_{66}-\lambda\\\ \end{vmatrix}=0.$ (8)

Now we apply this rule to the equilibrium points found above. For the equilibrium point $E_{1}=(0,0,0,0,0,0)$, the matrix (2) has a triangular form, and the eigenvalues are given by: $\lambda_{1}=r-m_{1}-n_{1},\,\lambda_{2}=-m_{2}-n_{2},\,\lambda_{3}=-m_{3},\,\lambda_{4}=-m_{4}-n_{3},\,\lambda_{5}=-m_{5}-n_{4},\,\text{and}\,\lambda_{6}=-m_{6}.$ Therefore, it follows that the equilibrium $E_{1}$ is asymptotically stable if $r<m_{1}+n_{1}$. For the equilibrium $E_{2}=\left(\frac{K}{r}(r-m_{1}-n_{1}),0,0,\frac{Kn_{1}(r-m_{1}-n_{1})}{r(m_{4}+n_{3})},0,0\right)$, which is biologically viable if $r>m_{1}+n_{1}$, the parameters of the characteristic equation (5) are given as:

$\displaystyle a_{11}$ $\displaystyle=-(r-m_{1}-n_{1}),a_{13}=-\frac{\alpha K}{r}(r-m_{1}-n_{1}),a_{21}=0,a_{22}=-m_{2}-n_{2},$ $\displaystyle a_{23}$ $\displaystyle=\frac{\alpha K}{r}(r-m_{1}-n_{1}),a_{32}=\gamma_{1}n_{2},a_{33}=-m_{3},a_{44}=-m_{4}-n_{3},$ $\displaystyle a_{46}$ $\displaystyle=-\frac{\beta Kn_{1}(r-m_{1}-n_{1})}{r(m_{4}+n_{3})},a_{54}=0,a_{55}=-m_{5}-n_{4},a_{56}=\frac{\beta Kn_{1}(r-m_{1}-n_{1})}{r(m_{4}+n_{3})},$ $\displaystyle a_{65}$ $\displaystyle=\gamma_{2}n_{4},a_{66}=-m_{6}.$

From (7) and (8), we obtain:

$\displaystyle(a_{11}-\lambda)[\lambda^{2}-(a_{22}+a_{33})\lambda+a_{22}a_{33}-a_{23}a_{32}]$ $\displaystyle=0$ (9)

$\displaystyle(a_{44}-\lambda)[\lambda^{2}-(a_{55}+a_{66})\lambda+a_{55}a_{66}-a_{56}a_{65}]$ $\displaystyle=0$ (10)

According to the Routh-Hurwitz criterion, the roots of a second-degree polynomial have negative real parts if, and only if, all of its coefficients are positive. Analyzing (9) and (10), and noting that $a_{22}+a_{33}<0$ and $a_{55}+a_{66}<0$ automatically, we can conclude that when (a) $a_{11}<0$, (b) $a_{22}a_{33}-a_{23}a_{32}>0$, and (c) $a_{55}a_{66}-a_{56}a_{65}>0$, all the eigenvalues of equation (6) have negative real parts. From conditions (a), (b) and (c), we obtain:

$\displaystyle\alpha$ $\displaystyle<\frac{rm_{3}(m_{2}+n_{2})}{\gamma_{1}n_{2}K(r-m_{1}-n_{1})}$ (11)

$\displaystyle\beta$ $\displaystyle<\frac{rm_{6}(m_{4}+n_{3})(m_{5}+n_{4})}{\gamma_{2}n_{1}n_{4}K(r-m_{1}-n_{1})}$ (12)

$\displaystyle r$ $\displaystyle>m_{1}+n_{1}$ (13)

Therefore, the equilibrium point $E_{2}$ is asymptotically stable if, and only if, the inequalities (11), (12) and (13) are satisfied.
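As an illustrative check (a sketch of ours, anticipating the parameter values reported later in (39)), the thresholds on the right-hand sides of (11) and (12) can be evaluated numerically:

```python
# Illustrative evaluation of the stability thresholds (11) and (12) for E2,
# using the parameter values listed later in (39); alpha and beta of the
# model must lie below these values for E2 to be stable.
r, K = 0.19, 25000.0
m1, m2, m3, m4, m5, m6 = 0.0, 0.03566, 1/4, 0.00257, 0.00257, 1/5
n1, n2, n3, n4 = 1/8, 1/9, 1/50, 1/16
gamma1, gamma2 = 2.29, 40.0

# Condition (13) holds here, since r - m1 - n1 = 0.065 > 0.
alpha_max = r * m3 * (m2 + n2) / (gamma1 * n2 * K * (r - m1 - n1))   # (11)
beta_max = (r * m6 * (m4 + n3) * (m5 + n4)
            / (gamma2 * n1 * n4 * K * (r - m1 - n1)))                # (12)

print(f"alpha threshold (11): {alpha_max:.3e}")   # ~1.69e-05
print(f"beta  threshold (12): {beta_max:.3e}")    # ~1.10e-07
```

The first printed value reproduces the boundary $0.169\times 10^{-4}$ for $\alpha$ quoted at the beginning of Section 4.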
The equilibrium point $E_{3}=\left(\frac{K}{r}(r-m_{1}-n_{1}),0,0,p_{4}^{\ast},p_{5}^{\ast},p_{6}^{\ast}\right)$ is biologically viable if:

$\displaystyle\beta$ $\displaystyle>\frac{rm_{6}(m_{4}+n_{3})(m_{5}+n_{4})}{\gamma_{2}n_{1}n_{4}K(r-m_{1}-n_{1})},$ (14) $\displaystyle r$ $\displaystyle>m_{1}+n_{1}.$

The characteristic equation at $E_{3}$ can be written in the form (6), where:

$\displaystyle a_{11}$ $\displaystyle=-(r-m_{1}-n_{1}),a_{13}=-\frac{\alpha K}{r}(r-m_{1}-n_{1}),a_{21}=0,a_{22}=-m_{2}-n_{2},$ $\displaystyle a_{23}$ $\displaystyle=\frac{\alpha K}{r}(r-m_{1}-n_{1}),a_{32}=\gamma_{1}n_{2},a_{33}=-m_{3},a_{44}=-m_{4}-n_{3}-\beta p_{6}^{\ast},$ $\displaystyle a_{46}$ $\displaystyle=-\beta p_{4}^{\ast},a_{54}=\beta p_{6}^{\ast},a_{55}=-m_{5}-n_{4},a_{56}=\beta p_{4}^{\ast},a_{65}=\gamma_{2}n_{4},a_{66}=-m_{6}.$

The entries of the determinant (7) are the same as for $E_{2}$, so the corresponding stability conditions are again (11) and (13). Considering the determinant (8)

$\begin{vmatrix}a_{44}-\lambda&0&a_{46}\\\ a_{54}&a_{55}-\lambda&a_{56}\\\ 0&a_{65}&a_{66}-\lambda\\\ \end{vmatrix}=0,$ (15)

we have

$\lambda^{3}+b_{1}\lambda^{2}+b_{2}\lambda+b_{3}=0,$ (16)

where

$\displaystyle b_{1}$ $\displaystyle=m_{4}+m_{5}+m_{6}+n_{3}+n_{4}+\beta p_{6}^{\ast}>0,\;\;\;\;\;b_{2}=(m_{4}+n_{3}+\beta p_{6}^{\ast})(m_{5}+n_{4}+m_{6})>0,$ $\displaystyle b_{3}$ $\displaystyle=\beta m_{6}(m_{5}+n_{4})p_{6}^{\ast}>0,\;\;\;\;\;b_{1}b_{2}-b_{3}>0.$ (17)

Therefore, we obtain that the equilibrium point $E_{3}$ is asymptotically stable if, and only if, the inequalities (11), (13) and (14) are satisfied. The equilibrium point $E_{4}=(x_{1}^{\ast},x_{2}^{\ast},x_{3}^{\ast},q^{\ast},0,0)$ is biologically viable if

$\displaystyle\alpha$ $\displaystyle>\frac{rm_{3}(m_{2}+n_{2})}{\gamma_{1}n_{2}K(r-m_{1}-n_{1})},$ (18) $\displaystyle r$ $\displaystyle>m_{1}+n_{1}.$

The characteristic equation at $E_{4}$ can be written in the form (6), where:

$\displaystyle a_{11}$ $\displaystyle=-\frac{rx_{1}^{\ast}}{K},a_{13}=-\alpha x_{1}^{\ast},a_{21}=\alpha x_{3}^{\ast},a_{22}=-m_{2}-n_{2},$ $\displaystyle a_{23}$ $\displaystyle=\alpha x_{1}^{\ast},a_{32}=\gamma_{1}n_{2},a_{33}=-m_{3},a_{44}=-m_{4}-n_{3},$ (19) $\displaystyle a_{46}$ $\displaystyle=-\beta q^{\ast},a_{54}=0,a_{55}=-m_{5}-n_{4},a_{56}=\beta q^{\ast},a_{65}=\gamma_{2}n_{4},a_{66}=-m_{6}.$

Considering the determinant (7) with the entries (19), we have:

$\lambda^{3}+c_{1}\lambda^{2}+c_{2}\lambda+c_{3}=0,$ (20)

where $c_{1}=m_{2}+m_{3}+n_{2}+\frac{rx_{1}^{\ast}}{K}>0,\;c_{2}=(m_{2}+m_{3}+n_{2})\frac{rx_{1}^{\ast}}{K}>0,\;c_{3}=\alpha m_{3}(m_{2}+n_{2})x_{3}^{\ast}>0.$ From $c_{1}c_{2}-c_{3}>0$, we obtain:

$\alpha<\frac{rm_{3}(m_{2}+n_{2})}{\gamma_{1}n_{2}zK},$ (21)

where

$\displaystyle z$ $\displaystyle=-\frac{h_{1}}{2}+\sqrt{\frac{h_{1}^{2}}{4}+h_{2}}\,,$ $\displaystyle h_{1}$ $\displaystyle=m_{2}+m_{3}+n_{2}+\frac{m_{3}(m_{2}+n_{2})}{m_{2}+m_{3}+n_{2}},$ $\displaystyle h_{2}$ $\displaystyle=\frac{m_{3}(m_{2}+n_{2})(r-m_{1}-n_{1})}{m_{2}+m_{3}+n_{2}}.$

Considering the determinant (8) with the entries (19), we have:

$(a_{44}-\lambda)[\lambda^{2}-(a_{55}+a_{66})\lambda+a_{55}a_{66}-a_{56}a_{65}]=0,$ (22)

where $a_{44}=-m_{4}-n_{3}<0,-(a_{55}+a_{66})>0.$ From $a_{55}a_{66}-a_{56}a_{65}>0$ we obtain:

$\beta<\frac{m_{6}(m_{5}+n_{4})}{\gamma_{2}n_{4}q^{\ast}}=\frac{\alpha\gamma_{1}n_{2}m_{6}(m_{4}+n_{3})(m_{5}+n_{4})}{\gamma_{2}n_{1}n_{4}m_{3}(m_{2}+n_{2})}.$ (23)

The equilibrium point $E_{4}$ is asymptotically stable if, and only if, the following inequalities are satisfied:
$\frac{rm_{3}(m_{2}+n_{2})}{\gamma_{1}n_{2}K(r-n_{1}-m_{1})}<\alpha<\frac{rm_{3}(m_{2}+n_{2})}{\gamma_{1}n_{2}zK},$ (24)

$\beta<\frac{\alpha\gamma_{1}n_{2}m_{6}(m_{4}+n_{3})(m_{5}+n_{4})}{\gamma_{2}n_{1}n_{4}m_{3}(m_{2}+n_{2})}.$ (25)

Consider the equilibrium point $E_{5}=(x_{1}^{\ast},x_{2}^{\ast},x_{3}^{\ast},x_{4}^{\ast},x_{5}^{\ast},x_{6}^{\ast})$, where

$\displaystyle x_{1}^{\ast}$ $\displaystyle=\frac{m_{3}(m_{2}+n_{2})}{\alpha\gamma_{1}n_{2}},x_{3}^{\ast}=\frac{1}{\alpha}\left[r\left(1-\frac{m_{3}(m_{2}+n_{2})}{\alpha\gamma_{1}n_{2}K}\right)-m_{1}-n_{1}\right],x_{2}^{\ast}=\frac{m_{3}}{\gamma_{1}n_{2}}x_{3}^{\ast},$ (26) $\displaystyle x_{4}^{\ast}$ $\displaystyle=\frac{m_{6}(m_{5}+n_{4})}{\beta\gamma_{2}n_{4}},x_{5}^{\ast}=\frac{m_{3}n_{1}(m_{2}+n_{2})}{\alpha\gamma_{1}n_{2}(m_{5}+n_{4})}-\frac{m_{6}(m_{4}+n_{3})}{\beta\gamma_{2}n_{4}},x_{6}^{\ast}=\frac{\gamma_{2}n_{4}}{m_{6}}x_{5}^{\ast}.$

The equilibrium point $E_{5}$ is biologically viable if $x_{3}^{\ast}>0$ and $x_{5}^{\ast}>0$. So, we get:

$\displaystyle\alpha$ $\displaystyle>\frac{rm_{3}(m_{2}+n_{2})}{\gamma_{1}n_{2}K(r-n_{1}-m_{1})},$ $\displaystyle\beta$ $\displaystyle>\frac{\alpha\gamma_{1}n_{2}m_{6}(m_{4}+n_{3})(m_{5}+n_{4})}{\gamma_{2}n_{1}n_{4}m_{3}(m_{2}+n_{2})},$ (27) $\displaystyle r$ $\displaystyle>m_{1}+n_{1}.$

The characteristic equation at $E_{5}$ can be written in the form (6), where:

$\displaystyle a_{11}$ $\displaystyle=-\frac{rx_{1}^{\ast}}{K},a_{13}=-\alpha x_{1}^{\ast},a_{21}=\alpha x_{3}^{\ast},a_{22}=-m_{2}-n_{2},$ $\displaystyle a_{23}$ $\displaystyle=\alpha x_{1}^{\ast},a_{32}=\gamma_{1}n_{2},a_{33}=-m_{3},a_{44}=-m_{4}-n_{3}-\beta x_{6}^{\ast},$ (28) $\displaystyle a_{46}$ $\displaystyle=-\beta x_{4}^{\ast},a_{54}=\beta x_{6}^{\ast},a_{55}=-m_{5}-n_{4},a_{56}=\beta x_{4}^{\ast},a_{65}=\gamma_{2}n_{4},a_{66}=-m_{6}.$

The entries of the determinant (7) are the same as for $E_{4}$, so the corresponding stability condition is again inequality (21). Considering the determinant (8), we obtain:

$\lambda^{3}+g_{1}\lambda^{2}+g_{2}\lambda+g_{3}=0,$ (29)

where

$\displaystyle g_{1}$ $\displaystyle=m_{4}+m_{5}+m_{6}+n_{3}+n_{4}+\beta x_{6}^{\ast}>0,$ $\displaystyle g_{2}$ $\displaystyle=(m_{4}+n_{3}+\beta x_{6}^{\ast})(m_{5}+n_{4}+m_{6})>0,$ (30) $\displaystyle g_{3}$ $\displaystyle=\beta m_{6}(m_{5}+n_{4})x_{6}^{\ast}>0,\;\;\;g_{1}g_{2}-g_{3}>0.$

Therefore, we obtain that the equilibrium point $E_{5}$ is asymptotically stable if, and only if, the following inequalities are satisfied:

$\frac{rm_{3}(m_{2}+n_{2})}{\gamma_{1}n_{2}K(r-m_{1}-n_{1})}<\alpha<\frac{rm_{3}(m_{2}+n_{2})}{\gamma_{1}n_{2}zK},$ (31)

$\beta>\frac{\alpha\gamma_{1}n_{2}m_{6}(m_{4}+n_{3})(m_{5}+n_{4})}{\gamma_{2}n_{1}n_{4}m_{3}(m_{2}+n_{2})}.$ (32)

We can now summarize the results of this local stability analysis, after defining the following dimensionless parameters related to the resulting conditions (regions in the parameter space) for biological viability (b.v.) and for local stability (l.s.)
of each equilibrium point:

$\displaystyle A_{1}$ $\displaystyle\equiv\frac{r}{m_{1}+n_{1}};\ A_{2}\equiv\frac{\alpha\gamma_{1}n_{2}K(r-m_{1}-n_{1})}{rm_{3}(m_{2}+n_{2})};\ A_{3}\equiv\frac{\beta\gamma_{2}n_{1}n_{4}K(r-m_{1}-n_{1})}{rm_{6}(m_{4}+n_{3})(m_{5}+n_{4})};$ (33) $\displaystyle A_{4}$ $\displaystyle\equiv\frac{\beta\gamma_{2}n_{1}n_{4}m_{3}(m_{2}+n_{2})}{\alpha\gamma_{1}n_{2}m_{6}(m_{4}+n_{3})(m_{5}+n_{4})}=\frac{A_{3}}{A_{2}};\ A_{5}\equiv\frac{\alpha\gamma_{1}n_{2}zK}{rm_{3}(m_{2}+n_{2})}=\frac{A_{2}z}{r-m_{1}-n_{1}}.$

With these dimensionless parameters, whose critical value 1 is associated with a bifurcation in the behavior of the system, all the conditions deduced above can be presented in a complete summary form, as shown in Table 1.

| Conditions | $E_{1}$ | $E_{2}$ | $E_{3}$ | $E_{4}$ | $E_{5}$ |
|---|---|---|---|---|---|
| $A_{1}<1$ | b.v., l.s. | Not b.v. | Not b.v. | Not b.v. | Not b.v. |
| $A_{1}>1$, $A_{2}<1$, $A_{3}<1$ | b.v., un. | b.v., l.s. | Not b.v. | Not b.v. | Not b.v. |
| $A_{1}>1$, $A_{2}<1$, $A_{3}>1$ | b.v., un. | b.v., un. | b.v., l.s. | Not b.v. | Not b.v. |
| $A_{1}>1$, $A_{2}>1$, $A_{5}<1$, $A_{4}<1$ | b.v., un. | b.v., un. | b.v., un. | b.v., l.s. | Not b.v. |
| $A_{1}>1$, $A_{2}>1$, $A_{5}<1$, $A_{4}>1$ | b.v., un. | b.v., un. | b.v., un. | b.v., un. | b.v., l.s. |
| $A_{1}>1$, $A_{2}>1$, $A_{5}>1$, $A_{4}>1$ | b.v., un. | b.v., un. | b.v., un. | b.v., un. | b.v., un. |

Table 1: Local stability and biological viability of the system equilibria, in which b.v. means biologically viable, l.s. means locally stable and un. means unstable.

### 3.3 Global stability analysis of the coexistence equilibrium

From the local stability analysis, the equilibrium point $E_{5}$ is the only one with no null population density. Therefore this is the point of interest for the forthcoming global stability analysis. We define a Lyapunov function as follows:

$V(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6})=\int_{x_{1}^{\ast}}^{x_{1}}\frac{y-x_{1}^{\ast}}{y}dy+\sum_{i=2}^{6}\frac{(x_{i}-x_{i}^{\ast})^{2}}{2}.$ (34)

It can be easily verified that the function $V$ is zero at the equilibrium point $E_{5}$ and positive at all other biologically viable states. The function is also radially unbounded, which means that $V\rightarrow\infty$ as $\|x\|\rightarrow\infty$. The time derivative of $V$ along (1) can be written as

$\dot{V}=e^{T}Pe,$ (35)

where the matrix $P$ is

$\begin{bmatrix}-\frac{r}{K}&0&-\alpha&0&0&0\\\ \alpha x_{3}&-m_{2}-n_{2}&\alpha x_{1}^{\ast}&0&0&0\\\ 0&\gamma_{1}n_{2}&-m_{3}&0&0&0\\\ n_{1}&0&0&-m_{4}-n_{3}-\beta x_{6}&0&-\beta x_{4}^{\ast}\\\ 0&0&0&\beta x_{6}&-m_{5}-n_{4}&\beta x_{4}^{\ast}\\\ 0&0&0&0&\gamma_{2}n_{4}&-m_{6}\\\ \end{bmatrix},$

and the elements of the vector $e$ are $e_{i}=x_{i}-x_{i}^{\ast},i=1,2,...,6.$ If conditions (31) and (32) are satisfied, then the matrix $P$ in (35) is negative definite, and so is the time derivative of $V$ along the trajectories of (1); consequently the equilibrium point $E_{5}$ is globally asymptotically stable.

### 3.4 Hopf bifurcation analysis

From Table 1, the role of the dimensionless parameter $A_{5}$ in determining the region in the parameter space in which each of the equilibrium points $E_{4}$ and $E_{5}$ is asymptotically stable is evident.
Furthermore, from the definition of $A_{5}$ in (33), it can be seen that the condition $A_{5}<1$ can be written equivalently in the form $\alpha<\alpha_{c}$, where the critical value $\alpha_{c}$ is defined by

$\alpha_{c}\equiv\frac{rm_{3}(m_{2}+n_{2})}{\gamma_{1}n_{2}zK}.$ (36)

We observe that (36) is exactly the same condition as the one obtained in (21), from the characteristic equation (20), for the local stability of the equilibria $E_{4}$ and $E_{5}$. When $A_{5}>1$, that is, $\alpha>\alpha_{c}$, the positive coexistence equilibrium $E_{5}$ becomes unstable and a Hopf bifurcation occurs. We can now analyze the bifurcation of the model (1), taking $\alpha$ as the bifurcation parameter and considering only the first three equations of the system (1), which do not depend on the variables $x_{4},x_{5}$ and $x_{6}$. The traditional Hopf bifurcation criterion is stated in terms of the properties of the eigenvalues. Alternatively, Liu (1994) [liu1994criterion] presented a criterion for Hopf bifurcation that does not use the eigenvalues of the characteristic equation. Liu's approach is the one applied in the present Hopf bifurcation analysis, as follows.

Liu's criterion. Suppose the characteristic equation of the positive equilibrium point is given by: $\lambda^{3}+c_{1}(\alpha)\lambda^{2}+c_{2}(\alpha)\lambda+c_{3}(\alpha)=0,$ where $c_{1}(\alpha),c_{2}(\alpha)$ and $c_{3}(\alpha)$ are smooth functions of $\alpha$ in an open interval about $\alpha_{c}\in\mathbb{R}$ such that:

1. (a) $c_{1}(\alpha_{c})>0,\Delta(\alpha_{c})=c_{1}(\alpha_{c})c_{2}(\alpha_{c})-c_{3}(\alpha_{c})=0,c_{3}(\alpha_{c})>0$;
2. (b) $\left(\frac{d\Delta}{d\alpha}\right)_{\alpha=\alpha_{c}}\neq 0$;

then a simple Hopf bifurcation occurs at $\alpha=\alpha_{c}$.

Applying Liu's criterion to the characteristic equation (20), we observe that $c_{1}=m_{2}+m_{3}+n_{2}+\frac{rx_{1}^{\ast}}{K}>0,\;\;c_{2}=(m_{2}+m_{3}+n_{2})\frac{rx_{1}^{\ast}}{K}>0,\;\;c_{3}=\alpha m_{3}(m_{2}+n_{2})x_{3}^{\ast}>0,$ for all positive values of $\alpha$. Solving the equation $c_{1}(\alpha_{c})c_{2}(\alpha_{c})-c_{3}(\alpha_{c})=0,$ we obtain

$\alpha_{c}=\frac{rm_{3}(m_{2}+n_{2})}{\gamma_{1}n_{2}zK},$ (37)

where

$\displaystyle z$ $\displaystyle=-\frac{h_{1}}{2}+\sqrt{\frac{h_{1}^{2}}{4}+h_{2}}\,,$ $\displaystyle h_{1}$ $\displaystyle=m_{2}+m_{3}+n_{2}+\frac{m_{3}(m_{2}+n_{2})}{m_{2}+m_{3}+n_{2}},$ $\displaystyle h_{2}$ $\displaystyle=\frac{m_{3}(m_{2}+n_{2})(r-m_{1}-n_{1})}{m_{2}+m_{3}+n_{2}}.$

Considering condition (b) of Liu's criterion, we have $\left(\frac{d\Delta}{d\alpha}\right)_{\alpha=\alpha_{c}}=-\frac{B_{1}}{\alpha_{c}^{2}}-\frac{2B_{2}}{\alpha_{c}^{3}}<0,$ where

$\displaystyle B_{1}$ $\displaystyle=\frac{rm_{3}(m_{2}+n_{2}+m_{3})^{2}(m_{2}+n_{2})+rm_{3}(m_{2}+n_{2})^{2}}{\gamma_{1}n_{2}K},$ $\displaystyle B_{2}$ $\displaystyle=\frac{r^{2}(m_{2}+n_{2}+m_{3})(m_{2}+n_{2})^{2}+m_{3}^{2}}{\gamma_{1}^{2}n_{2}^{2}K^{2}}.$

Hence, according to Liu's criterion, a simple Hopf bifurcation occurs at $\alpha=\alpha_{c}$, that is, at $A_{5}=1$.

## 4 Seasonal dynamics of the sugarcane borer-parasitoid agroecosystem

Several environmental parameters (such as air temperature, air humidity and rainfall dispersion, among others) fluctuate periodically, affecting the dynamics of an ecological system. Thus, they can be represented as periodic functions of time. In this section, the intrinsic growth rate $r$ in system (1) is considered a sinusoidal function representing these seasonal perturbations.
The parameter $r$ is defined by the following function [rinaldi1993multiple, gakkhar2003chaos, gakkhar2003seasonally, altizer2006seasonality]:

$r(t)=r_{0}\left(1+r_{1}\sin\left(\frac{2\pi t}{365}\right)\right),$ (38)

where $t$ is measured in days, so that $r_{0}$ is the average value of $r$ over an integer number of years. The parameter $r_{1}$ represents the degree of seasonality; hence $r_{0}r_{1}$ is the magnitude of the perturbation in $r$. Next, we are interested in the seasonal dynamics around the equilibrium point $E_{5}$, where the parasitoid and pest populations coexist. For this, we will keep the values of the parameters fixed as follows [parraetal2002, rafikov2012mathematical, rafikov2014dynamical]:

$\displaystyle m_{1}$ $\displaystyle=0,\ m_{2}=0.03566,\ m_{3}=\frac{1}{4},\ m_{4}=0.00257,\ m_{5}=m_{4},\ m_{6}=\frac{1}{5},\ $ $\displaystyle n_{1}$ $\displaystyle=\frac{1}{8},\ n_{2}=\frac{1}{9},\ n_{3}=\frac{1}{50},\ n_{4}=\frac{1}{16},\ r_{0}=0.19,\ $ (39) $\displaystyle\beta$ $\displaystyle=0.000009,\ \gamma_{1}=2.29,\ \gamma_{2}=40,\ K=25000.$

Setting the parameter values as specified in (39), it can be shown from the analysis of Section 3 that without seasonality, that is, $r_{1}=0$ in (38), and depending on the value of $\alpha>0.169\times 10^{-4}$, the attractor in the phase space can be an equilibrium point or a limit cycle, namely:

* 1. For $0.169\times 10^{-4}<\alpha<\alpha_{c}$, where $\alpha_{c}=0.9135\times 10^{-4}$, the corresponding attractor is the coexistence equilibrium point $E_{5}$, whose components depend on the value of $\alpha$, as specified in (26);
* 2. For $\alpha>\alpha_{c}$, that is, beyond the Hopf bifurcation, the attractor is a period-one limit cycle, and the amplitude of this limit cycle in the phase space increases with increasing $\alpha$.

The addition of seasonality to the model, through the population growth rate according to (38), induces a destabilizing effect and may even trigger chaotic behavior. This destabilizing effect will be confirmed further by computer simulations with the parameter values fixed according to (39). First, we investigate the effect of seasonality for a value of $\alpha$ for which the attractor is the equilibrium point $E_{5}$ in the 6D phase space, as shown in Fig. 1 for $\alpha=0.6\times 10^{-4}$. Considering this value of $\alpha$, the bifurcation diagram for $0\leq r_{1}\leq 0.35$ is shown in Fig. 2, where it can be immediately verified that the value of $x_{1}(t)$ for $r_{1}=0$ is, as expected, the same value of this component of the equilibrium point obtained without seasonality, given by (26) and shown in Fig. 1. Increasing the value of $r_{1}$, a periodic solution appears, followed by a period-doubling sequence, which constitutes a route to chaos; chaos occurs for $r_{1}>0.33$. The projections of the 6D strange attractor in the phase space, with $r_{1}=0.35$, are plotted in Fig. 3. Thus, the seasonality destabilizes the $E_{5}$ equilibrium, changing the attractor from an equilibrium point to a limit cycle, and then to more complex dynamics as $r_{1}$ increases.

Figure 1: The equilibrium $E_{5}$, reached by the populations of system (1), without seasonality, for the parameters fixed in (39) and $\alpha=0.6\times 10^{-4}<\alpha_{c}$.

Figure 2: Keeping the parameter values fixed in (39) and $\alpha=0.6\times 10^{-4}$, the bifurcation diagram of $x_{1}(t)$ for $0\leq r_{1}\leq 0.35$.
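Simulations of this kind can be reproduced with a standard ODE integrator. The following sketch (our own construction, not the authors' code; the initial densities are illustrative) integrates system (1) with the seasonal rate (38) and the parameter values (39):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values from (39); alpha and r1 are set per numerical experiment.
p = dict(r0=0.19, K=25000.0, m1=0.0, m2=0.03566, m3=1/4, m4=0.00257,
         m5=0.00257, m6=1/5, n1=1/8, n2=1/9, n3=1/50, n4=1/16,
         alpha=0.6e-4, beta=9e-6, gamma1=2.29, gamma2=40.0, r1=0.35)

def rhs(t, x, p):
    """Right-hand side of system (1) with the seasonal rate r(t) of (38)."""
    x1, x2, x3, x4, x5, x6 = x
    r = p["r0"] * (1 + p["r1"] * np.sin(2 * np.pi * t / 365))
    return [r * x1 * (1 - x1 / p["K"]) - (p["m1"] + p["n1"]) * x1
            - p["alpha"] * x1 * x3,
            p["alpha"] * x1 * x3 - (p["m2"] + p["n2"]) * x2,
            p["gamma1"] * p["n2"] * x2 - p["m3"] * x3,
            p["n1"] * x1 - (p["m4"] + p["n3"]) * x4 - p["beta"] * x4 * x6,
            p["beta"] * x4 * x6 - (p["m5"] + p["n4"]) * x5,
            p["gamma2"] * p["n4"] * x5 - p["m6"] * x6]

x0 = [6000, 100, 100, 2000, 100, 100]     # illustrative initial densities
sol = solve_ivp(rhs, (0, 40 * 365), x0, args=(p,), method="LSODA",
                dense_output=True, rtol=1e-8, atol=1e-8)
```

With $r_{1}=0$ the trajectory settles on $E_{5}$ (cf. Fig. 1); with $r_{1}=0.35$ it can be projected onto the subspaces $x_{1}x_{2}x_{3}$ and $x_{4}x_{5}x_{6}$ to visualize attractors such as the one in Fig. 3.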
Figure 3: Keeping the parameter values fixed according to (39) and $\alpha=0.6\times 10^{-4}$, the projections of the 6D strange attractor in the phase space, corresponding to $r_{1}=0.35$, in the subspaces $x_{1}x_{2}x_{3}$ and $x_{4}x_{5}x_{6}$.

Now the range $\alpha>\alpha_{c}$ is considered, in which the system has a period-one limit cycle in the phase space whose amplitude increases as $\alpha$ increases. We consider $\alpha=1\times 10^{-4}$ in Fig. 4; the maximum and minimum values of $x_{1}$ there are the same as those identified for $r_{1}=0$ in the bifurcation diagram of $x_{1}(t)$ plotted with seasonality in Fig. 5 for $0\leq r_{1}\leq 0.35$. Increasing the value of $r_{1}$, a period-doubling sequence can be noted, which constitutes a route to chaos; chaos occurs for $r_{1}>0.2$. The projections of the 6D strange attractor in the phase space, corresponding to $r_{1}=0.25$, are plotted in Figure 6. Increasing the value of $r_{1}$ even further, periodic attractors emerge, such as the one plotted in Fig. 7, corresponding to $r_{1}=0.28$; similar periodic attractors occur for $0.26\leq r_{1}\leq 0.3$. Therefore, the seasonality destabilizes the period-one limit cycle, changing the behavior of the system to more complex dynamics as $r_{1}$ increases. Regarding the value of $r_{1}$ at which chaos occurs, the comparison of the bifurcation diagrams in Figs. 2 and 5 shows that if $\alpha>\alpha_{c}$, chaos occurs at a lower value of $r_{1}$ than if $\alpha<\alpha_{c}$.

Figure 4: The projection of the 6D limit cycle reached by the populations of system (1), without seasonality, for the parameters fixed in (39) and $\alpha=1\times 10^{-4}>\alpha_{c}$, in the subspaces $x_{1}x_{2}x_{3}$ and $x_{4}x_{5}x_{6}$.

Figure 5: Keeping the parameter values fixed in (39) and $\alpha=1\times 10^{-4}$, the bifurcation diagram of $x_{1}(t)$ for $0\leq r_{1}\leq 0.35$.

Figure 6: Keeping the parameter values fixed in (39) and $\alpha=1\times 10^{-4}$, the projections of the 6D strange attractor in the phase space, corresponding to $r_{1}=0.25$, in the subspaces $x_{1}x_{2}x_{3}$ and $x_{4}x_{5}x_{6}$.

Figure 7: Keeping the parameter values fixed in (39) and $\alpha=1\times 10^{-4}$, the projections of the 6D periodic attractor in the phase space, corresponding to $r_{1}=0.28$, in the subspaces $x_{1}x_{2}x_{3}$ and $x_{4}x_{5}x_{6}$.

Additionally, we can investigate the effect of seasonality for fixed values of its degree $r_{1}$, while varying the parameter $\alpha$. For that, bifurcation diagrams of $x_{1}(t)$ are plotted in Figures 8 and 9, keeping the parameter values fixed in (39), for $0.169\times 10^{-4}\leq\alpha\leq 1.2\times 10^{-4}$ and set values of the parameter $r_{1}$. The diagram presented in Fig. 8 corresponds to $r_{1}=0.25$, whose strange attractor for $\alpha=1\times 10^{-4}$ was visualized in Fig. 6, while in the diagram of Fig. 9 the degree of seasonality is $r_{1}=0.35$, whose strange attractor for $\alpha=0.6\times 10^{-4}$ was visualized in Fig. 3. Comparing these two bifurcation diagrams, we conclude that the higher the value of $r_{1}$, the lower the value of $\alpha$ at which chaos is established, as it occurs at $\alpha=0.8\times 10^{-4}$ for $r_{1}=0.25$ and at $\alpha=0.6\times 10^{-4}$ for $r_{1}=0.35$.
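Bifurcation diagrams of this kind can be sketched by sweeping the bifurcation parameter and recording the local extrema of $x_{1}(t)$ after discarding a transient. A minimal version of our own (it reuses the model equations of (1) with the values (39), where $m_{1}=0$; initial densities and sweep grid are illustrative) is:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

# Parameter values from (39); m1 = 0 is already absorbed into the x1 equation.
p = dict(r0=0.19, K=25000.0, m2=0.03566, m3=1/4, m4=0.00257, m5=0.00257,
         m6=1/5, n1=1/8, n2=1/9, n3=1/50, n4=1/16, beta=9e-6,
         gamma1=2.29, gamma2=40.0, r1=0.25)

def rhs(t, x, p, alpha):
    x1, x2, x3, x4, x5, x6 = x
    r = p["r0"] * (1 + p["r1"] * np.sin(2 * np.pi * t / 365))   # rate (38)
    return [r * x1 * (1 - x1 / p["K"]) - p["n1"] * x1 - alpha * x1 * x3,
            alpha * x1 * x3 - (p["m2"] + p["n2"]) * x2,
            p["gamma1"] * p["n2"] * x2 - p["m3"] * x3,
            p["n1"] * x1 - (p["m4"] + p["n3"]) * x4 - p["beta"] * x4 * x6,
            p["beta"] * x4 * x6 - (p["m5"] + p["n4"]) * x5,
            p["gamma2"] * p["n4"] * x5 - p["m6"] * x6]

# Sweep alpha, discard a 30-year transient, then record the peaks of x1(t);
# plotting alpha against these peak values yields a diagram like Fig. 8.
for alpha in np.linspace(0.2e-4, 1.2e-4, 60):
    t_eval = np.arange(30 * 365, 60 * 365, 1.0)
    sol = solve_ivp(rhs, (0, 60 * 365), [6000, 100, 100, 2000, 100, 100],
                    args=(p, alpha), method="LSODA", t_eval=t_eval,
                    rtol=1e-8, atol=1e-8)
    peaks, _ = find_peaks(sol.y[0])
    print(alpha, sol.y[0][peaks][:5])
```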
Figure 8: Bifurcation diagram of $x_{1}(t)$ for $0.169\times 10^{-4}\leq\alpha\leq 1.2\times 10^{-4}$, keeping the parameter values fixed according to (39), for $r_{1}=0.25$.

Figure 9: Bifurcation diagram of $x_{1}(t)$ for $0.169\times 10^{-4}\leq\alpha\leq 1.2\times 10^{-4}$, keeping the parameter values fixed according to (39), for $r_{1}=0.35$.

## 5 Conclusions

According to [white2020seasonality], seasonality is a significant feature of ecological systems driven by periodic climatic conditions, but it is often not explicitly included in either empirical or theoretical studies. This article is therefore an effort toward integrating such seasonal influences into the dynamics of the sugarcane borer agroecosystem and its parasitoids. In this work, we have proposed a novel, six-dimensional, continuous-time dynamical system modeling the interactions between the sugarcane borer and its egg and larval parasitoids. On the analytical side, five equilibrium states and conditions for their local stability were found. Moreover, a Lyapunov function analysis ensured the global asymptotic stability of the equilibrium state in which all considered populations coexist. Then, the occurrence of a Hopf bifurcation was investigated by applying Liu's criterion. Numerical simulations revealed the chaotic behavior of the system with seasonality. These results show how seasonality considerably changes the agroecosystem dynamics, leading an asymptotically stable system, as shown in Fig. 1, to period doubling and subsequently to the chaotic attractor shown in Fig. 2. For a real system, this means that, in the presence of seasonal conditions, sudden changes can occur in populations that without seasonality would coexist at an equilibrium. Moreover, when populations exhibit periodic oscillations, as shown in Fig. 4, the introduction of seasonality can transform these oscillations into chaos, even for smaller values of $r_{1}$, as shown in Fig. 5. Finally, bifurcation diagrams of the maximum and minimum population values in the presence of seasonal influences show that an increase in the parasitism coefficient $\alpha$ can lead the stabilized system to a chaotic regime, as shown by Figures 8 and 9. The present results help to understand the dynamics of the six-dimensional agroecological system under seasonal forcing. Using these results, biological control strategies can be investigated in future research.
# Eigenvalue asymptotics for the one-particle density matrix

Alexander V. Sobolev, Department of Mathematics, University College London, Gower Street, London WC1E 6BT, UK <EMAIL_ADDRESS>

###### Abstract.

The one-particle density matrix $\gamma(x,y)$ for a bound state of an atom or molecule is one of the key objects in the quantum-mechanical approximation schemes. We prove the asymptotic formula $\lambda_{k}\sim A^{8/3}k^{-8/3}$, $A\geq 0$, as $k\to\infty$, for the eigenvalues $\lambda_{k}$ of the self-adjoint operator $\boldsymbol{\Gamma}\geq 0$ with kernel $\gamma(x,y)$.

###### Key words and phrases: Multi-particle Schrödinger operator, one-particle density matrix, eigenvalues, spectral asymptotics

###### 2010 Mathematics Subject Classification: Primary 35J10; Secondary 47G10, 81Q10

## 1\. Introduction

Consider on $\textup{{{L}}}^{2}(\mathbb{R}^{3N})$ the Schrödinger operator

(1.1) $\displaystyle\mathcal{H}=\sum_{k=1}^{N}\bigg{(}-\Delta_{k}-\frac{Z}{|x_{k}|}\bigg{)}+\sum_{1\leq j<k\leq N}\frac{1}{|x_{j}-x_{k}|},$

describing an atom with $N$ particles (e.g. electrons) with coordinates $\mathbf{x}=(x_{1},x_{2},\dots,x_{N})$, $x_{k}\in\mathbb{R}^{3}$, $k=1,2,\dots,N$, and a nucleus with charge $Z>0$. The notation $\Delta_{k}$ is used for the Laplacian w.r.t. the variable $x_{k}$. The operator $\mathcal{H}$ acts on the Hilbert space $\textup{{{L}}}^{2}(\mathbb{R}^{3N})$ and it is self-adjoint on the domain $D(\mathcal{H})=\textup{{{H}}}^{2}(\mathbb{R}^{3N})$, since the potential in (1.1) is an infinitesimal perturbation relative to the unperturbed operator $-\Delta=-\sum_{k}\Delta_{k}$, see e.g. [20, Theorem X.16]. Let $\psi=\psi(\mathbf{x})$ be an eigenfunction of the operator $\mathcal{H}$ with an eigenvalue $E\in\mathbb{R}$, i.e. $\psi\in D(\mathcal{H})$ and

$\displaystyle(\mathcal{H}-E)\psi=0.$

For each $j=1,\dots,N$, we represent

$\displaystyle\mathbf{x}=(\hat{\mathbf{x}}_{j},x_{j}),\quad\textup{where}\ \hat{\mathbf{x}}_{j}=(x_{1},\dots,x_{j-1},x_{j+1},\dots,x_{N}),$

with obvious modifications if $j=1$ or $j=N$. The one-particle density matrix is defined as the function

(1.2) $\displaystyle\gamma(x,y)=\sum_{j=1}^{N}\int\limits_{\mathbb{R}^{3N-3}}\overline{\psi(\hat{\mathbf{x}}_{j},x)}\psi(\hat{\mathbf{x}}_{j},y)\ d\hat{\mathbf{x}}_{j},\quad(x,y)\in\mathbb{R}^{3}\times\mathbb{R}^{3}.$

This function is one of the key objects in multi-particle quantum mechanics; see [9], [10], [18], [19] for details and further references. If one assumes that all $N$ particles are spinless fermions (resp. bosons), i.e. that the function $\psi$ is antisymmetric (resp. symmetric) under the permutations $x_{j}\leftrightarrow x_{k}$, then the definition (1.2) simplifies:

(1.3) $\displaystyle\gamma(x,y)=N\int_{\mathbb{R}^{3N-3}}\overline{\psi(\hat{\mathbf{x}},x)}\psi(\hat{\mathbf{x}},y)d\hat{\mathbf{x}},\ \quad\textup{where}\ \hat{\mathbf{x}}=\hat{\mathbf{x}}_{N}.$

Our main result, however, does not require any symmetry assumptions. For the sake of completeness we mention that, as found in [16], the function (1.2) is real-analytic for all $x\not=0,y\not=0,x\not=y$. In the current paper our focus is on spectral properties of the self-adjoint non-negative operator $\boldsymbol{\Gamma}$ with the kernel $\gamma(x,y)$, which we call the one-particle density operator. The operator $\boldsymbol{\Gamma}$ is easily shown to be trace class, and in [13] it was shown that $\boldsymbol{\Gamma}$ has infinite rank.
However, no sharp results on the behaviour of the eigenvalues $\lambda_{k}(\boldsymbol{\Gamma})>0$ as $k\to\infty$ had been available until the paper [22] (see, however, [6], [7] for relevant quantum chemistry calculations), where it was shown that $\lambda_{k}(\boldsymbol{\Gamma})=O(k^{-8/3})$. We always label eigenvalues in non-increasing order counting multiplicity. The purpose of the paper is to prove the asymptotic formula (1.4), which confirms the sharpness of the bound from [22]. Apart from being a mathematically interesting and challenging question, spectral asymptotics for the operator $\boldsymbol{\Gamma}$ are important for electronic structure computations, since they limit the accuracy of electronic properties computed with finite basis sets; see e.g. [6], [8], [13] and [15] for discussion. We assume throughout that $\psi$ decays exponentially as $|\mathbf{x}|\to\infty$:

(1) $\displaystyle|\psi(\mathbf{x})|\lesssim e^{-\varkappa_{0}|\mathbf{x}|},\ \mathbf{x}\in\mathbb{R}^{3N}.$

Here $\varkappa_{0}>0$ is a constant, and the notation “$\lesssim$” means that the left-hand side is bounded from above by the right-hand side times some positive constant whose precise value is of no importance for us. This notation is used throughout the paper. The property (1) holds for the eigenfunctions associated with discrete eigenvalues (i.e. the ones below the essential spectrum), and in particular, for the ground state. For references and detailed discussion we quote [21]. The next theorem contains a concise version of the main result.

###### Theorem 1.1.

Suppose that the eigenfunction $\psi$ satisfies the bound (1). Then the eigenvalues $\lambda_{k}(\boldsymbol{\Gamma}),k=1,2,\dots$, of the operator $\boldsymbol{\Gamma}$ with kernel (1.2) satisfy the relation

(1.4) $\displaystyle\lim_{k\to\infty}k^{\frac{8}{3}}\lambda_{k}(\boldsymbol{\Gamma})=A^{\frac{8}{3}},$

with an explicit constant $A\geq 0$.

The complete statement includes a formula for the coefficient $A$, and it is given as Theorem 2.3.

###### Remark.

Theorem 1.1 extends to the case of a molecule with several nuclei whose positions are fixed, i.e. the operator (1.1) can be replaced by

$\displaystyle\mathcal{H}=\sum_{k=1}^{N}\bigg{(}-\Delta_{k}-\sum_{l=1}^{N_{0}}\frac{Z_{l}}{|x_{k}-R_{l}|}\bigg{)}+\sum_{1\leq j<k\leq N}\frac{1}{|x_{j}-x_{k}|},$

with constants $R_{l}\in\mathbb{R}^{3}$ and nuclear charges $Z_{l}>0$, $l=1,2,\dots,N_{0}$. The modifications are straightforward.

Let us outline the main ideas of the proof. First we represent the operator $\boldsymbol{\Gamma}$ as the product $\boldsymbol{\Gamma}=\boldsymbol{\Psi}^{*}\boldsymbol{\Psi}$, where the operator $\boldsymbol{\Psi}:\textup{{{L}}}^{2}(\mathbb{R}^{3})\to\textup{{{L}}}^{2}(\mathbb{R}^{3N-3})$ with a vector-valued kernel is defined in Subsect. 2.2. Therefore we have $\lambda_{k}(\boldsymbol{\Gamma})=s_{k}(\boldsymbol{\Psi})^{2},k=1,2,\dots$, where $s_{k}(\boldsymbol{\Psi})$ are the singular values ($s$-values) of the operator $\boldsymbol{\Psi}$. As a consequence, the asymptotic formula (1.4) rewrites as

(1.5) $\displaystyle\lim_{k\to\infty}k^{\frac{4}{3}}s_{k}(\boldsymbol{\Psi})=A^{\frac{4}{3}}.$

For the sake of discussion consider the fermionic (or bosonic) case, in which the kernel $\gamma(x,y)$ is given by (1.3).
Then it is straightforward that $\boldsymbol{\Gamma}=\boldsymbol{\Psi}^{*}\boldsymbol{\Psi}$ with the operator

(1.6) $\displaystyle(\boldsymbol{\Psi}u)(\hat{\mathbf{x}})=\sqrt{N}\int_{\mathbb{R}^{3}}\psi(\hat{\mathbf{x}},x)u(x)dx,\ u\in\textup{{{L}}}^{2}(\mathbb{R}^{3}).$

For integral operators the rate of decay of singular values increases with the smoothness of their kernels, and the appropriate estimates via suitable Sobolev norms can be found in [3]. Such estimates, together with the recent regularity estimates for $\psi$ obtained in [11], were used in [22] to prove the bound $s_{k}(\boldsymbol{\Psi})\lesssim k^{-4/3}$, $k=1,2,\dots$. The study of spectral asymptotics of the operator (1.6) requires more precise information on the singularities of $\psi$. By elliptic regularity, the function $\psi$ is real analytic away from the coalescence points of the particles, i.e. for $x_{j}\not=x_{k},1\leq j<k\leq N$ and $x_{j}\not=0$, $j=1,2,\dots,N$, and hence only the coalescence points contribute to the asymptotics (1.5). As shown by T. Kato in [17], the function $\psi$ is Lipschitz. Of course, this fact alone is not sufficient to obtain an asymptotic formula for $\boldsymbol{\Psi}$ – one needs to know the precise shape of the function $\psi$ near the coalescence points. A suitable representation formula for the function $\psi$ was obtained in [12]. To explain in more detail we make a further simplifying assumption and consider the special case $N=2$, so that $\mathbf{x}=(t,x)\in\mathbb{R}^{3}\times\mathbb{R}^{3}$, and the operator $\boldsymbol{\Psi}$ acts from $\textup{{{L}}}^{2}(\mathbb{R}^{3})$ into $\textup{{{L}}}^{2}(\mathbb{R}^{3})$. According to [12], there exists a neighbourhood (open connected set) $\Omega_{1,2}\subset\big{(}\mathbb{R}^{3}\setminus\\{0\\}\big{)}\times\big{(}\mathbb{R}^{3}\setminus\\{0\\}\big{)}$ of the diagonal set $\\{(x,x):x\in\mathbb{R}^{3}\setminus\\{0\\}\\}$ and two functions $\xi_{1,2},\eta_{1,2}$, real analytic in $\Omega_{1,2}$, such that the eigenfunction $\psi=\psi(t,x)$ admits the representation

(1.7) $\displaystyle\psi(t,x)=\xi_{1,2}(t,x)+|t-x|\,\eta_{1,2}(t,x),\quad\textup{for all}\quad(t,x)\in\Omega_{1,2}.$

The form of the second term is in line with Kato's observation (see [17]) that $\psi$ is Lipschitz. The representation (1.7) is ideally suited for the study of spectral asymptotics. Indeed, the Lipschitz factor on the right-hand side of (1.7) is homogeneous of order one. The behaviour of eigenvalues for a wide class of integral operators, including those with homogeneous kernels, was studied by M. Birman and M. Solomyak in [1], [2] and [4]; see also [3]. However, the existing results are not directly applicable, since the functions $\xi_{1,2}$ and $\eta_{1,2}$ may not be smooth on the closure $\overline{\Omega_{1,2}}$. Moreover, there is no information on the integrability of $\xi_{1,2}$ and $\eta_{1,2}$ over $\Omega_{1,2}$. To circumvent this difficulty we approximate $\xi_{1,2},\eta_{1,2}$ by suitable $\textup{{{C}}}^{\infty}_{0}$-functions supported inside $\Omega_{1,2}$. The error incurred is controlled with the help of the bounds obtained in [22]. Using the Birman-Solomyak results and subsequently taking the limit of these smooth approximations we arrive at the formula (1.5) with the coefficient

$\displaystyle A=\frac{1}{3}\bigg{(}\frac{2}{\pi}\bigg{)}^{\frac{5}{4}}\int_{\mathbb{R}^{3}}|2^{1/2}\eta_{1,2}(x,x)|^{3/4}dx.$

The finiteness of the above integral is a by-product of the proof.
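For later comparison we note that this expression already has the shape of the general asymptotic coefficient (2.7) below: with the quantity $H(x):=\sqrt{2}\,|\eta_{1,2}(x,x)|$ introduced in Theorem 2.2 for the case $N=2$, one can rewrite

$\displaystyle A=\frac{1}{3}\bigg{(}\frac{2}{\pi}\bigg{)}^{\frac{5}{4}}\int_{\mathbb{R}^{3}}\big{|}2^{1/2}\eta_{1,2}(x,x)\big{|}^{3/4}dx=\frac{1}{3}\bigg{(}\frac{2}{\pi}\bigg{)}^{\frac{5}{4}}\int_{\mathbb{R}^{3}}H(x)^{\frac{3}{4}}dx.$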
Note that the coalescence points $x=0$ and $t=0$ do not affect the asymptotics. For $N\geq 3$ the application of the existing results on spectral asymptotics for integral operators is not immediate. It relies on the reduction to a certain model operator whose kernel includes the functions $\eta_{j,k}$ describing the eigenfunction $\psi$ in a neighbourhood of all pair coalescence points $x_{j}=x_{k}$, $j,k=1,2,\dots,N$, $j\not=k$. We emphasize that neither the points $x_{j}=0,j=1,2,\dots,N$, nor the coalescence points of higher orders (e.g. $x_{j}=x_{k}=x_{l}$ with pairwise distinct $j,k,l$) contribute to the asymptotics (1.5). The paper is organized as follows. In Section 2 we describe the representation of the function $\psi$ near the pair coalescence points (see (1.7) for the case $N=2$), state the main result in its complete form as Theorem 2.3, which includes the formula (2.7) for the coefficient $A$, and give the details of the factorization $\boldsymbol{\Gamma}=\boldsymbol{\Psi}^{*}\boldsymbol{\Psi}$. Section 3 contains necessary facts about compact operators, and it includes asymptotic formulas for spectra of integral operators with homogeneous kernels. Section 4 is focused on spectral asymptotics of the model integral operator that is instrumental to the case $N\geq 3$. Using the factorization $\boldsymbol{\Gamma}=\boldsymbol{\Psi}^{*}\boldsymbol{\Psi}$, in Sections 5 and 6 the main Theorem 2.3 is restated in terms of the operator $\boldsymbol{\Psi}$, see Theorem 5.1. Here we also construct suitable approximations for $\boldsymbol{\Psi}$, to which one can apply the results of Sect. 4. Section 7 completes the proof of Theorem 5.1 and hence that of Theorem 2.3. We conclude the introduction with some general notational conventions.

Coordinates. As mentioned earlier, we use the following standard notation for the coordinates: $\mathbf{x}=(x_{1},x_{2},\dots,x_{N})$, where $x_{j}\in\mathbb{R}^{3}$, $j=1,2,\dots,N$. In order to write formulas in a more compact and unified way, we sometimes use the notation $x_{0}=0$. The vector $\mathbf{x}$ is often represented in the form

$\displaystyle\mathbf{x}=(\hat{\mathbf{x}}_{j},x_{j})\quad\textup{with}\quad\hat{\mathbf{x}}_{j}=(x_{1},x_{2},\dots,x_{j-1},x_{j+1},\dots,x_{N})\in\mathbb{R}^{3N-3},$

for arbitrary $j=1,2,\dots,N$. Most frequently we use this notation with $j=N$, and write $\hat{\mathbf{x}}=\hat{\mathbf{x}}_{N}$, so that $\mathbf{x}=(\hat{\mathbf{x}},x_{N})$. For $N\geq 3$ it is also useful to introduce the notation for $\mathbf{x}$ with $x_{j}$ and $x_{k}$ taken out:

(1.8) $\displaystyle\begin{cases}\tilde{\mathbf{x}}_{j,k}=(x_{1},\dots,x_{j-1},x_{j+1},\dots,x_{k-1},x_{k+1},\dots,x_{N}),\quad\textup{if}\ j<k,\\\\[5.69046pt] \qquad\textup{and}\ \tilde{\mathbf{x}}_{j,k}=\tilde{\mathbf{x}}_{k,j},\quad\textup{if}\ j>k.\end{cases}$

If $j<k$, then we write $\mathbf{x}=(\tilde{\mathbf{x}}_{j,k},x_{j},x_{k})$. For any $j\leq N-1$ the vector $\hat{\mathbf{x}}$ can be represented as $\hat{\mathbf{x}}=(\tilde{\mathbf{x}}_{j,N},x_{j})$. The notation $B_{R}$ is used for the ball $\\{x\in\mathbb{R}^{3}:|x|<R\\}$.

Derivatives. Let $\mathbb{N}_{0}=\mathbb{N}\cup\\{0\\}$.
If $x=(x^{\prime},x^{\prime\prime},x^{\prime\prime\prime})\in\mathbb{R}^{3}$ and $m=(m^{\prime},m^{\prime\prime},m^{\prime\prime\prime})\in\mathbb{N}_{0}^{3}$, then the derivative $\partial_{x}^{m}$ is defined in the standard way: $\displaystyle\partial_{x}^{m}=\partial_{x^{\prime}}^{m^{\prime}}\partial_{x^{\prime\prime}}^{m^{\prime\prime}}\partial_{x^{\prime\prime\prime}}^{m^{\prime\prime\prime}}.$ Cut-off functions. We systematically use the following smooth cut-off functions. Let (1.9) $\displaystyle\theta\in\textup{{{C}}}^{\infty}_{0}(\mathbb{R}),\quad\zeta(t)=1-\theta(t),$ be functions such that $0\leq\theta\leq 1$ and (1.10) $\displaystyle\theta(t)=0,\quad\textup{if}\quad|t|>1;\ \quad\theta(t)=1,\quad\textup{if}\quad|t|<\frac{1}{2}.\ $ Integral operators. The notation $\operatorname{{\sf Int}}(\mathcal{K})$ is used for the integral operator with kernel $\mathcal{K}$, e.g. $\boldsymbol{\Gamma}=\operatorname{{\sf Int}}(\gamma)$. The functional spaces where $\operatorname{{\sf Int}}(\mathcal{K})$ acts are obvious from the context. Bounds. As explained earlier, for two non-negative numbers (or functions) $X$ and $Y$ depending on some parameters, we write $X\lesssim Y$ (or $Y\gtrsim X$) if $X\leq CY$ with some positive constant $C$ independent of those parameters. To avoid confusion we often make explicit comments on the nature of (implicit) constants in the bounds. ## 2\. Representation formula. Details of the main result ### 2.1. Representation formula Our approach is built on the sharp qualitative result for $\psi$ obtained in [12]. In order to write all the formulas in a more compact and unified way, we use the notation $x_{0}=0$. As before, $\mathbf{x}=(x_{1},x_{2},\dots,x_{N})\in\mathbb{R}^{3N}$. Thus, unless otherwise stated, the indices labeling the particles run from $0$ to $N$. Denote (2.1) $\displaystyle{\sf S}_{l,s}=\\{\mathbf{x}\in\mathbb{R}^{3N}:x_{l}\not=x_{s}\\},\ l\not=s.$ The function $\psi$ is real-analytic on the set $\displaystyle{\sf{U}}=\bigcap_{0\leq l<s\leq N}{\sf S}_{l,s}.$ For each pair $j,k:j\not=k$, we are interested in the behaviour of $\psi$ on the set (2.2) $\displaystyle{\sf{U}}_{j,k}=\bigcap_{\begin{subarray}{c}l\not=s\\\ (l,s)\not=(j,k)\end{subarray}}{\sf S}_{l,s}.$ In words, ${\sf{U}}_{j,k}$ includes the coalescence point $x_{j}=x_{k}$, but excludes all the others. Our main focus will be on the function $\psi$ near the “diagonal” set (2.3) $\displaystyle{\sf{U}}^{(\rm d)}_{j,k}=\\{\mathbf{x}\in{\sf{U}}_{j,k}:x_{j}=x_{k}\\}.$ The sets introduced above are obviously symmetric with respect to permutations of indices, e.g. ${\sf{U}}_{j,k}={\sf{U}}_{k,j}$, ${\sf{U}}^{(\rm d)}_{j,k}={\sf{U}}^{(\rm d)}_{k,j}$. Observe also that the sets ${\sf{U}}_{j,k}$, ${\sf{U}}_{j,k}^{(\operatorname{d})}$ are of full measure in $\mathbb{R}^{3N}$ and $\mathbb{R}^{3N-3}$ respectively, and that they are connected. The following property follows from [12, Theorem 1.4]. ###### Proposition 2.1.
For each pair of indices $j,k=0,1,\dots,N$ such that $j\not=k$, there exists an open connected set $\Omega_{j,k}=\Omega_{k,j}\subset\mathbb{R}^{3N}$, such that (2.4) $\displaystyle{\sf{U}}_{j,k}^{(d)}\subset\Omega_{j,k}\subset{\sf{U}}_{j,k},$ and two uniquely defined functions $\xi_{j,k},\eta_{j,k}$, real analytic on $\Omega_{j,k}$, such that for all $\mathbf{x}\in\Omega_{j,k}$ the following representation holds: (2.5) $\displaystyle\psi(\mathbf{x})=\xi_{j,k}(\mathbf{x})+|x_{j}-x_{k}|\eta_{j,k}(\mathbf{x}).$ Due to the uniqueness of the functions $\xi_{j,k},\eta_{j,k}$, we have the symmetry $\xi_{j,k}=\xi_{k,j}$, $\eta_{j,k}=\eta_{k,j}$ for all $j\not=k$. The asymptotic coefficient $A$ in the formula (1.4) is defined via the functions $\eta_{j,k}$, $j,k=1,2,\dots,N,j<k$, on the sets (2.3). Using the notation (1.8) we write the function $\eta_{j,k}(\mathbf{x})$ on ${\sf{U}}_{j,k}^{(\operatorname{d})}$ as $\eta_{j,k}(\tilde{\mathbf{x}}_{j,k},x,x)$. As a by-product of the proof we obtain the following integrability properties. ###### Theorem 2.2. If $N\geq 3$, then each function $\eta_{j,k}(\ \cdot\ ,x,x)$, $1\leq j<k\leq N$, belongs to $\textup{{{L}}}^{2}(\mathbb{R}^{3N-6})$ for a.e. $x\in\mathbb{R}^{3}$ and the function (2.6) $\displaystyle H(x):=\bigg{[}2\sum\limits_{1\leq j<k\leq N}\int_{\mathbb{R}^{3N-6}}\big{|}\eta_{j,k}(\tilde{\mathbf{x}}_{j,k},x,x)\big{|}^{2}d\tilde{\mathbf{x}}_{j,k}\bigg{]}^{\frac{1}{2}},$ belongs to $\textup{{{L}}}^{\frac{3}{4}}(\mathbb{R}^{3})$. If $N=2$, then the function $H(x):=\sqrt{2}|\eta_{1,2}(x,x)|$ belongs to $\textup{{{L}}}^{\frac{3}{4}}(\mathbb{R}^{3})$. Having this theorem at our disposal, we can now state the main result of the paper in its complete form. ###### Theorem 2.3. Suppose that the eigenfunction $\psi$ satisfies the bound (1). Then the eigenvalues $\lambda_{k}(\boldsymbol{\Gamma}),k=1,2,\dots,$ of the operator $\boldsymbol{\Gamma}$ satisfy the asymptotic formula (1.4) with the constant (2.7) $\displaystyle A=\frac{1}{3}\bigg{(}\frac{2}{\pi}\bigg{)}^{\frac{5}{4}}\int_{\mathbb{R}^{3}}H(x)^{\frac{3}{4}}dx.$ ###### Remark 2.4. The coefficient $A$ can be equal to zero for some eigenfunctions $\psi$. For example, if we assume that the particles are spinless fermions, i.e. the function $\psi$ is antisymmetric, then it is easy to see that for all $j,k$, $j\not=k$, both components $\xi_{j,k}$ and $\eta_{j,k}$ in (2.5) vanish on the diagonal ${\sf{U}}_{j,k}^{(\operatorname{d})}$, and as a consequence $A=0$. This means that $\lambda_{k}(\boldsymbol{\Gamma})=o(k^{-8/3})$. This fact can be interpreted by saying that antisymmetric eigenfunctions possess better than Lipschitz smoothness at the coalescence points, and hence the eigenvalues of $\boldsymbol{\Gamma}$ decay faster. The fermionic nature of particles may manifest itself differently if we introduce the spin variable. In this case the antisymmetry of the full eigenfunction comes either from the spatial component $\psi(\mathbf{x})$ or from the spin component. For illustration, first consider the case of two electrons, i.e. $N=2$. In the triplet configuration the antisymmetry is carried by the spatial component $\psi(x_{1},x_{2})$, see [15, Subsect. 3.3.2], and then, as pointed out a few lines above, we have $A=0$. If the electrons are in the singlet configuration, then the spin component is antisymmetric, whereas the function $\psi=\psi(x_{1},x_{2})$ is symmetric and the diagonal value $\eta_{1,2}(x,x)$ is not identically zero, see [15, Subsect. 3.3.1]. Thus $A>0$.
In the case $N\geq 3$ different electron pairs may form different configurations, in which case the triplet coalescences will not contribute to the coefficient $A$. ###### Remark 2.5. If we assume that the function $\psi$ is symmetric or antisymmetric, then both the proof of the main asymptotic formula (1.4) and the formula (2.6) can be simplified. Indeed, as we have seen, the factorization $\boldsymbol{\Gamma}=\boldsymbol{\Psi}^{*}\boldsymbol{\Psi}$ holds with the simple looking integral operator $\boldsymbol{\Psi}$ given by (1.6). This is in contrast with the general case, as will be evident from the next subsection. Furthermore, as discussed in Remark 2.4, for the antisymmetric $\psi$ we have $A=0$. Assume that $N\geq 3$ and that $\psi$ is totally symmetric. It follows that for all $\tilde{\mathbf{y}}=(y_{1},y_{2},\dots,y_{N-2})\in\mathbb{R}^{3N-6}$, $x,t\in\mathbb{R}^{3}$ and $j\not=k,l\not=s$, we have $\displaystyle\psi(y_{1},\dots,y_{j-1},x,y_{j},$ $\displaystyle\ \dots,y_{k-2},t,y_{k-1},\dots,y_{N-2})$ $\displaystyle=$ $\displaystyle\ \psi(y_{1},\dots,y_{l-1},x,y_{l},\dots,y_{s-2},t,y_{s-1},\dots,y_{N-2}).$ Due to the uniqueness of functions $\xi_{j,k},\eta_{j,k}$ in Proposition 2.1, the above equality leads to $\displaystyle\eta_{j,k}(y_{1},\dots,y_{j-1},x,y_{j},$ $\displaystyle\ \dots,y_{k-2},t,y_{k-1},\dots,y_{N-2})$ $\displaystyle=$ $\displaystyle\ \eta_{l,s}(y_{1},\dots,y_{l-1},x,y_{l},\dots,y_{s-2},t,y_{s-1},\dots,y_{N-2}).$ As a consequence, the formula (2.6) rewrites as $\displaystyle H(x)=\bigg{[}N(N-1)\int_{\mathbb{R}^{3N-6}}\big{|}\eta_{N-1,N}(\tilde{\mathbf{y}},x,x)\big{|}^{2}\,d\tilde{\mathbf{y}}\bigg{]}^{\frac{1}{2}}.$ ### 2.2. Factorization of $\boldsymbol{\Gamma}$: change of variables $(\hat{\mathbf{x}}_{j},x)\mapsto(\hat{\mathbf{x}},x)$ In the general case (i.e. without any symmetry assumptions on $\psi$) the operator $\boldsymbol{\Psi}$ in the identity $\boldsymbol{\Gamma}=\boldsymbol{\Psi}^{*}\boldsymbol{\Psi}$ looks more complicated than (1.6). The purpose of this subsection is to describe this factorization and the associated change of variables. Rewrite the definition (1.2) in the form: $\displaystyle\gamma(x,y)=$ $\displaystyle\ \sum_{j=1}^{N}\int_{\mathbb{R}^{3N-3}}\overline{\psi_{j}(\hat{\mathbf{x}},x)}\psi_{j}(\hat{\mathbf{x}},y)d\hat{\mathbf{x}},\quad\textup{where}$ (2.8) $\displaystyle\psi_{j}(\hat{\mathbf{x}},x)=$ $\displaystyle\ \psi(x_{1},\dots,x_{j-1},x,x_{j},\dots,x_{N-1}),\quad j=1,2,\dots,N.$ Therefore $\boldsymbol{\Gamma}$ can be represented as a product $\boldsymbol{\Gamma}=\boldsymbol{\Psi}^{*}\boldsymbol{\Psi}$, where $\boldsymbol{\Psi}:\textup{{{L}}}^{2}(\mathbb{R}^{3})\to\textup{{{L}}}^{2}(\mathbb{R}^{3N-3};\mathbb{C}^{N})$ is the integral operator with the vector-valued kernel (2.9) $\displaystyle\boldsymbol{\Psi}(\hat{\mathbf{x}},x)=\\{\psi_{j}(\hat{\mathbf{x}},x)\\}_{j=1}^{N}.$ As explained in the Introduction, given this factorization, the asymptotic relation (1.4) translates to the formula (1.5). Later we state this fact again as Theorem 5.1 using a more convenient notation. The change of variables $(\hat{\mathbf{x}}_{j},x)\mapsto(\hat{\mathbf{x}},x)$ plays an important role throughout the paper. In particular, it is crucial to recast Proposition 2.1 in terms of the new variables $(\hat{\mathbf{x}},x)$, which is done below. Let $j\not=k$, let $\Omega_{j,k}$ be the sets, and let $\xi_{j,k}(\mathbf{x})$, $\eta_{j,k}(\mathbf{x})$ be the functions from Proposition 2.1.
For all $j=1,2,\dots,N$ and all $k=0,1,\dots,N-1,$ denote $\displaystyle\tilde{\Omega}_{j,k}=\begin{cases}\\{(\hat{\mathbf{x}},x)\in\mathbb{R}^{3N}:(x_{1},\dots,x_{j-1},x,x_{j},\dots,x_{N-1})\in\Omega_{j,k}\\},\quad\textup{if}\ j\geq k+1,\\\ \\{(\hat{\mathbf{x}},x)\in\mathbb{R}^{3N}:(x_{1},\dots,x_{j-1},x,x_{j},\dots,x_{N-1})\in\Omega_{j,k+1}\\},\quad\textup{if}\ j\leq k.\end{cases}$ According to (2.4) we have (2.10) $\displaystyle{\sf{U}}_{N,k}^{(\operatorname{d})}\subset\tilde{\Omega}_{j,k}\subset{\sf{U}}_{N,k},$ for all $k\leq N-1$ and $j=1,2,\dots,N$, $j\not=k$. Together with the functions $\xi_{j,k},\eta_{j,k}$ define $\displaystyle\tilde{\xi}_{j,k}(\hat{\mathbf{x}},x)=\begin{cases}\xi_{j,k}(x_{1},\dots,x_{j-1},x,x_{j},\dots,x_{N-1}),\quad\textup{if}\ j\geq k+1,\\\\[5.69046pt] \xi_{j,k+1}(x_{1},\dots,x_{j-1},x,x_{j},\dots,x_{N-1}),\quad\textup{if}\ j\leq k,\end{cases}$ and (2.11) $\displaystyle\tilde{\eta}_{j,k}(\hat{\mathbf{x}},x)=\begin{cases}\eta_{j,k}(x_{1},\dots,x_{j-1},x,x_{j},\dots,x_{N-1}),\quad\textup{if}\ j\geq k+1,\\\\[5.69046pt] \eta_{j,k+1}(x_{1},\dots,x_{j-1},x,x_{j},\dots,x_{N-1}),\quad\textup{if}\ j\leq k.\end{cases}$ By Proposition 2.1, for each $j=1,2,\dots,N$, and each $k=0,1,\dots,N-1$, we have (2.12) $\displaystyle\psi_{j}(\hat{\mathbf{x}},x)=\tilde{\xi}_{j,k}(\hat{\mathbf{x}},x)+|x_{k}-x|\tilde{\eta}_{j,k}(\hat{\mathbf{x}},x),\quad\textup{for all}\ (\hat{\mathbf{x}},x)\in\tilde{\Omega}_{j,k}.$ Observe that the newly introduced sets $\tilde{\Omega}_{j,k}$ and the functions $\tilde{\xi}_{j,k},\tilde{\eta}_{j,k}$ are not symmetric under the permutation $j\leftrightarrow k$. The function (2.6) can be easily rewritten via the new functions $\tilde{\eta}_{j,k}$: (2.13) $\displaystyle H(x)=\begin{cases}\big{(}|\tilde{\eta}_{1,1}(x,x)|^{2}+|\tilde{\eta}_{2,1}(x,x)|^{2}\big{)}^{1/2},\quad\textup{if}\ N=2;\\\\[8.5359pt] \bigg{[}\sum\limits_{j=1}^{N}\sum\limits_{k=1}^{N-1}\int_{\mathbb{R}^{3N-6}}\big{|}\tilde{\eta}_{j,k}(\tilde{\mathbf{x}}_{k,N},x,x)\big{|}^{2}d\tilde{\mathbf{x}}_{k,N}\bigg{]}^{\frac{1}{2}},\quad\textup{if}\ N\geq 3.\end{cases}$ For $N=2$ the above formula is a consequence of the symmetry relation $\eta_{1,2}=\eta_{2,1}$ and equalities $\eta_{1,2}(x,x)=\tilde{\eta}_{1,1}(x,x)$, $\eta_{2,1}(x,x)=\tilde{\eta}_{2,1}(x,x)$, which follow from the definition (2.11). Now assume that $N\geq 3$. In view of the symmetry $\eta_{j,k}=\eta_{k,j}$ we can rewrite (2.6) extending the summation to all $j,k$ such that $j\not=k$: $\displaystyle H(x)^{2}=\sum_{j=1}^{N-1}\sum_{k=j+1}^{N}\int_{\mathbb{R}^{3N-6}}\big{|}\eta_{j,k}(\tilde{\mathbf{x}}_{j,k},x,x)\big{|}^{2}d\tilde{\mathbf{x}}_{j,k}+\sum_{j=1}^{N}\sum_{k=1}^{j-1}\int_{\mathbb{R}^{3N-6}}\big{|}\eta_{j,k}(\tilde{\mathbf{x}}_{j,k},x,x)\big{|}^{2}d\tilde{\mathbf{x}}_{j,k}.$ By (2.11), the second sum coincides with $\displaystyle\sum_{j=1}^{N}\sum_{k=1}^{j-1}\int_{\mathbb{R}^{3N-6}}\big{|}\tilde{\eta}_{j,k}(\tilde{\mathbf{x}}_{k,N},x,x)\big{|}^{2}d\tilde{\mathbf{x}}_{k,N},$ and the first one coincides with $\displaystyle\sum_{j=1}^{N-1}\sum_{k=j+1}^{N}\int_{\mathbb{R}^{3N-6}}\big{|}\tilde{\eta}_{j,k-1}(\tilde{\mathbf{x}}_{k-1,N},x,x)\big{|}^{2}d\tilde{\mathbf{x}}_{k-1,N}=\sum_{j=1}^{N-1}\sum_{k=j}^{N-1}\int_{\mathbb{R}^{3N-6}}\big{|}\tilde{\eta}_{j,k}(\tilde{\mathbf{x}}_{k,N},x,x)\big{|}^{2}d\tilde{\mathbf{x}}_{k,N}.$ Adding the first and second sums together, we obtain (2.13), as claimed. ## 3\. Compact operators ### 3.1.
Compact operators For information on compact operators we use mainly Chapter 11 of the book [5], where one can also find further references. Let $\mathcal{H}$ and $\mathcal{G}$ be separable Hilbert spaces. Let $T:\mathcal{H}\to\mathcal{G}$ be a compact operator. If $\mathcal{H}=\mathcal{G}$ and $T=T^{*}\geq 0$, then $\lambda_{k}(T)$, $k=1,2,\dots$, denote the positive eigenvalues of $T$ numbered in descending order counting multiplicity. For arbitrary spaces $\mathcal{H}$, $\mathcal{G}$ and compact $T$, by $s_{k}(T)>0$, $k=1,2,\dots$, we denote the singular values of $T$ defined by $s_{k}(T)^{2}=\lambda_{k}(T^{*}T)=\lambda_{k}(TT^{*})$. We classify compact operators by the rate of decay of their singular values. If $s_{k}(T)\lesssim k^{-1/p},k=1,2,\dots$, with some $p>0$, then we say that $T\in\mathbf{S}_{p,\infty}$ and denote $\displaystyle\|T\|_{p,\infty}=\sup_{k}s_{k}(T)k^{\frac{1}{p}}.$ These classes are discussed in detail in [5, §11.6]. The class $\mathbf{S}_{p,\infty}$ is a complete linear space with the quasi-norm $\|T\|_{p,\infty}$. For all $p>0$ the quasi-norm satisfies the following “triangle” inequality for operators $T_{1},T_{2}\in\mathbf{S}_{p,\infty}$: (3.1) $\displaystyle\|T_{1}+T_{2}\|_{p,\infty}^{\frac{p}{p+1}}\leq\|T_{1}\|_{p,\infty}^{\frac{p}{p+1}}+\|T_{2}\|_{p,\infty}^{\frac{p}{p+1}}.$ For $T\in\mathbf{S}_{p,\infty}$ the following numbers are finite: (3.2) $\displaystyle\begin{cases}{\sf{G}}_{p}(T)=\big{(}\limsup\limits_{k\to\infty}k^{\frac{1}{p}}s_{k}(T)\big{)}^{p}=\limsup\limits_{s\to 0}s^{p}n(s,T),\\\\[8.5359pt] {\sf{g}}_{p}(T)=\big{(}\liminf\limits_{k\to\infty}k^{\frac{1}{p}}s_{k}(T)\big{)}^{p}=\liminf\limits_{s\to 0}s^{p}n(s,T),\end{cases}$ where $n(s,T)$ denotes the number of singular values of $T$ exceeding $s>0$. These functionals clearly satisfy the inequalities $\displaystyle{\sf{g}}_{p}(T)\leq{\sf{G}}_{p}(T)\leq\|T\|_{p,\infty}^{p}.$ Note that ${\sf{G}}_{q}(T)=0$ for all $q>p$.
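For intuition, the quantities $\|T\|_{p,\infty}$, ${\sf{G}}_{p}(T)$ and ${\sf{g}}_{p}(T)$ admit simple finite-dimensional stand-ins. The sketch below is our own illustration only: the function name and the crude tail heuristic replacing the genuine $\limsup/\liminf$ are ours, not the paper's.

```python
import numpy as np

def weak_schatten_functionals(s, p):
    """Finite-sample stand-ins for ||T||_{p,infty} = sup_k s_k k^{1/p} and
    for the tail quantities behind G_p(T), g_p(T); the list s must contain
    singular values in descending order (indices are 1-based in the text)."""
    k = np.arange(1, len(s) + 1)
    w = s * k ** (1.0 / p)
    tail = w[len(s) // 2:]          # crude proxy for the limsup/liminf tail
    return w.max(), tail.max() ** p, tail.min() ** p

# Example: s_k = k^{-4/3} lies in S_{3/4,infty}, with G_{3/4} = g_{3/4} = 1.
s = np.arange(1, 100001, dtype=float) ** (-4.0 / 3.0)
print(weak_schatten_functionals(s, 0.75))   # approximately (1.0, 1.0, 1.0)
```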
Observe that (3.3) $\displaystyle{\sf{g}}_{p}(TT^{*})={\sf{g}}_{p}(T^{*}T)={\sf{g}}_{2p}(T),\quad{\sf{G}}_{p}(TT^{*})={\sf{G}}_{p}(T^{*}T)={\sf{G}}_{2p}(T).$ If ${\sf{G}}_{p}(T)={\sf{g}}_{p}(T)$, then the singular values of $T$ satisfy the asymptotic formula $\displaystyle s_{n}(T)=\big{(}{\sf{G}}_{p}(T)\big{)}^{\frac{1}{p}}n^{-\frac{1}{p}}+o(n^{-\frac{1}{p}}),\ n\to\infty.$ The functionals ${\sf{g}}_{p}(T)$, ${\sf{G}}_{p}(T)$ also satisfy inequalities of the type (3.1): (3.4) $\displaystyle\begin{cases}{\sf{G}}_{p}(T_{1}+T_{2})^{\frac{1}{p+1}}\leq{\sf{G}}_{p}(T_{1})^{\frac{1}{p+1}}+{\sf{G}}_{p}(T_{2})^{\frac{1}{p+1}},\\\\[8.5359pt] {\sf{g}}_{p}(T_{1}+T_{2})^{\frac{1}{p+1}}\leq{\sf{g}}_{p}(T_{1})^{\frac{1}{p+1}}+{\sf{G}}_{p}(T_{2})^{\frac{1}{p+1}}.\end{cases}$ It follows from these inequalities that the functionals ${\sf{G}}_{p}$ and ${\sf{g}}_{p}$ are continuous on $\mathbf{S}_{p,\infty}$: $\displaystyle\big{|}{\sf{G}}_{p}(T_{1})^{\frac{1}{p+1}}-{\sf{G}}_{p}(T_{2})^{\frac{1}{p+1}}\big{|}\leq{\sf{G}}_{p}(T_{1}-T_{2})^{\frac{1}{p+1}},\quad\big{|}{\sf{g}}_{p}(T_{1})^{\frac{1}{p+1}}-{\sf{g}}_{p}(T_{2})^{\frac{1}{p+1}}\big{|}\leq{\sf{G}}_{p}(T_{1}-T_{2})^{\frac{1}{p+1}}.$ We need the following two corollaries of this fact: ###### Corollary 3.1. Suppose that ${\sf{G}}_{p}(T_{1}-T_{2})=0$. Then $\displaystyle{\sf{G}}_{p}(T_{1})={\sf{G}}_{p}(T_{2}),\quad{\sf{g}}_{p}(T_{1})={\sf{g}}_{p}(T_{2}).$ The next corollary is more general: ###### Corollary 3.2. Suppose that $T\in\mathbf{S}_{p,\infty}$ and that for every $\nu>0$ there exists an operator $T_{\nu}\in\mathbf{S}_{p,\infty}$ such that ${\sf{G}}_{p}(T-T_{\nu})\to 0$, $\nu\to 0$. Then the functionals ${\sf{G}}_{p}(T_{\nu}),{\sf{g}}_{p}(T_{\nu})$ have limits as $\nu\to 0$ and $\displaystyle\lim_{\nu\to 0}{\sf{G}}_{p}(T_{\nu})={\sf{G}}_{p}(T),\quad\lim_{\nu\to 0}{\sf{g}}_{p}(T_{\nu})={\sf{g}}_{p}(T).$ ### 3.2. Estimates for singular values of integral operators The final ingredients of the proof are the results of M. S. Birman and M. Z. Solomyak on the membership of integral operators in various classes of compact operators. For estimates of the singular values we rely on [3, Corollaries 4.2, 4.4, Theorem 4.4], which we state here in a form convenient for our purposes. Below we use the following notation, standard in the theory of Sobolev spaces: $\textup{{{H}}}^{l}(\mathbb{R}^{d})=\textup{{{W}}}^{2,l}(\mathbb{R}^{d})$. ###### Proposition 3.3. Let $a\in\textup{{{L}}}^{\infty}(\mathbb{R}^{d})$, $b\in\textup{{{L}}}^{2}_{\textup{\tiny{\rm loc}}}(\mathbb{R}^{n})$. Assume that the function $a$ has compact support. Suppose that $T(t,x)$, $t\in\mathbb{R}^{n}$, $x\in\mathbb{R}^{d}$, is a kernel such that $T(t,\ \cdot\ )\in\textup{{{H}}}^{l}(\mathbb{R}^{d})$ with some $l=0,1,\dots$, for a.e. $t\in\mathbb{R}^{n}$, and the function $\|T(t,\ \cdot\ )\|_{\textup{{{H}}}^{l}}$ is in $\textup{{{L}}}^{2}(\mathbb{R}^{n},|b(t)|^{2}dt)$.
Let $T_{ba}:\textup{{{L}}}^{2}(\mathbb{R}^{d})\to\textup{{{L}}}^{2}(\mathbb{R}^{n})$ be the integral operator $\displaystyle(T_{ba}u)(t)=b(t)\int T(t,x)a(x)u(x)\,dx,\quad u\in\textup{{{L}}}^{2}(\mathbb{R}^{d}).$ Then (3.5) $\displaystyle\sum_{k=0}^{\infty}k^{\frac{2l}{d}}s_{k}(T_{ba})^{2}<\infty,$ and hence $s_{k}(T_{ba})=o(k^{-1/q})$, where $1/q=1/2+l/d$. In other words, ${\sf{G}}_{q}(T_{ba})=0$. The original results in [3, Corollaries 4.2, 4.4, Theorem 4.4] are considerably more general and more precise: instead of just the finiteness statement (3.5), they contain estimates depending explicitly on the kernel $T$ and the weights $a,b$. These estimates take a slightly different form in the cases $2l>d,2l=d$ and $2l<d$, and therefore, to avoid cumbersome formulations, we chose not to quote them in detail. The next group of results is concerned with spectral asymptotics for integral operators. ### 3.3. Integral operators with homogeneous kernels First we consider pseudo-differential operators with asymptotically homogeneous matrix-valued symbols. Spectral asymptotics for such operators were studied in [2], [4]. In fact, these papers allow for more general operators, but we need only a relatively simple special case of those results. Precisely, let $\mathcal{A}(x),\mathcal{B}(x),X(\xi)$, where $x,y,\xi\in\mathbb{R}^{d}$, be rectangular matrix-valued functions of matching dimensions, so that the product (3.6) $\displaystyle\mathcal{B}(x)X(\xi)\mathcal{A}(y)$ is again a rectangular matrix. Assume that (3.7) $\displaystyle\mathcal{B}\in\textup{{{C}}}_{0}(\mathbb{R}^{d}),\quad\mathcal{A}\in\textup{{{C}}}_{0}(\mathbb{R}^{d}).$ We do not reflect the matrix nature of the functional spaces in the notation to avoid cumbersome formulas, and this should not cause confusion. Suppose that $X(\xi)$ is a bounded function which is asymptotically homogeneous of negative order, i.e. there exists a matrix-valued function $X_{\infty}\in\textup{{{C}}}^{\infty}(\mathbb{R}^{d}\setminus\\{0\\})$ such that for some $\tau>0$, (3.8) $\displaystyle X_{\infty}(t\xi)=t^{-\tau}X_{\infty}(\xi),\ \xi\not=0,$ for all $t>0$, and (3.9) $\displaystyle X(\xi)-X_{\infty}(\xi)=o(|\xi|^{-\tau}),\quad|\xi|\to\infty.$ Define the matrix-valued function $\displaystyle\mathcal{T}_{\infty}(x,\xi)=\mathcal{B}(x)X_{\infty}(\xi)\mathcal{A}(x).$ ###### Proposition 3.4. Let the above conditions on $\mathcal{A},\mathcal{B},X$ be satisfied and let $p=d\tau^{-1}$. Then the pseudo-differential operator $T:\textup{{{L}}}^{2}(\mathbb{R}^{d})\to\textup{{{L}}}^{2}(\mathbb{R}^{d})$ defined by the formula (3.10) $\displaystyle(Tu)(x)=\frac{1}{(2\pi)^{d}}\iint\mathcal{B}(x)e^{i\xi(x-y)}X(\xi)\mathcal{A}(y)u(y)dyd\xi,$ is compact, belongs to $\mathbf{S}_{p,\infty}$ and satisfies the asymptotic formula (3.11) $\displaystyle{\sf{G}}_{p}(T)={\sf{g}}_{p}(T)=\frac{1}{d(2\pi)^{d}}\int\limits_{\mathbb{R}^{d}}\int\limits_{\mathbb{S}^{d-1}}\sum\limits_{k}\big{[}s_{k}\big{(}\mathcal{T}_{\infty}(x,\omega)\big{)}\big{]}^{p}\,d\omega dx.$ This proposition is a consequence of Theorem 2 from [2] and Remark 3 following this theorem. We apply Proposition 3.4 to integral operators with homogeneous kernels. Let $\Phi\in\textup{{{C}}}^{\infty}(\mathbb{R}^{d}\setminus\\{0\\})$ be a matrix-valued function such that (3.12) $\displaystyle\Phi(tx)=t^{\alpha}\Phi(x),\quad x\not=0,\alpha>-d,$ for all $t>0$.
Consider the integral operator $W$ with the matrix-valued kernel $\displaystyle W(x,y)=\mathcal{B}(x)\Phi(x-y)\mathcal{A}(y)$ with $\mathcal{A},\mathcal{B}$ satisfying the conditions (3.7), and with the matrix dimensions matched in the same way as for the symbol (3.6). We study spectral asymptotics of the operator $W$ by reducing it to an operator of the form (3.10). Let $\theta$ be as defined in (1.9), (1.10), and let $R_{0}>0$ be a number such that $\displaystyle W(x,y)=W(x,y)\theta\big{(}|x-y|R^{-1}\big{)},\quad\textup{for all}\quad R\geq R_{0}.$ Consequently, the operator $W$ has the form (3.10) with the function (3.13) $\displaystyle X(\xi)=X_{R}(\xi)=\int e^{-i\xi x}\theta\big{(}|x|R^{-1}\big{)}\Phi(x)dx.$ Integrating by parts, we conclude that for each $\xi\not=0$ the function $X_{R}(\xi)$ converges as $R\to\infty$ to a $\textup{{{C}}}^{\infty}(\mathbb{R}^{d}\setminus\\{0\\})$-function (3.14) $\displaystyle X_{\infty}(\xi)=\lim_{R\to\infty}X_{R}(\xi).$ The function $X_{\infty}$ satisfies (3.8) with $\tau=\alpha+d$. Indeed, using (3.12) write for $t>0$: $\displaystyle X_{R}(t\xi)=$ $\displaystyle\ t^{-\alpha-d}\int e^{-i\xi x}\theta\big{(}|x|(Rt)^{-1}\big{)}\Phi(x)dx$ (3.15) $\displaystyle=$ $\displaystyle\ t^{-\alpha-d}X_{Rt}(\xi).$ Passing to the limit as $R\to\infty$, we get (3.8) with $\tau=\alpha+d$, as claimed. The equality (3.15) also implies that $\displaystyle X_{R}(t\xi)-X_{\infty}(t\xi)=t^{-\alpha-d}\big{(}X_{Rt}(\xi)-X_{\infty}(\xi)\big{)}=o(t^{-\alpha-d}),\quad t\to\infty,$ for each $\xi\in\mathbb{R}^{d}$ and $R>0$, which entails (3.9). Thus, applying Proposition 3.4, we obtain the spectral asymptotics for the operator $W$. ###### Corollary 3.5. The operator $W$ has the form (3.10) with the function $X\in\textup{{{C}}}^{\infty}(\mathbb{R}^{d})$ defined in (3.13). The singular values of $W$ satisfy the relation (3.11) with $p^{-1}=1+\alpha d^{-1}$. We need this result for the special case of scalar $\mathcal{A}=a\in\textup{{{C}}}_{0}(\mathbb{R}^{d}),\mathcal{B}=b\in\textup{{{C}}}_{0}(\mathbb{R}^{d})$, and (3.16) $\displaystyle\Phi(x)=\\{\phi_{j}(x)\\}_{j=1}^{m}$ with scalar $\alpha$-homogeneous functions $\phi_{j}$, $j=1,2,\dots,m$. As the next assertion shows, in this case the right-hand side of (3.11) can be easily evaluated. ###### Corollary 3.6. Suppose that $\Phi$ is given as in (3.16) with some $\alpha$-homogeneous scalar functions $\phi_{j}$, $j=1,2,\dots,m,$ with $\alpha>-d$. Then $\displaystyle{\sf{G}}_{p}(W)={\sf{g}}_{p}(W)=\frac{1}{d(2\pi)^{d}}\int_{\mathbb{S}^{d-1}}|X_{\infty}(\omega)|^{p}\,d\omega\int_{\mathbb{R}^{d}}|a(x)b(x)|^{p}\,dx,$ where $p^{-1}=1+\alpha d^{-1}$. ###### Proof. The matrix $\mathcal{T}_{\infty}(x,\xi)$ is rank one and $\displaystyle s_{1}\big{(}\mathcal{T}_{\infty}(x,\xi)\big{)}=|a(x)|\,|b(x)|\,|X_{\infty}(\xi)|.$ The required formula follows from Corollary 3.5. ∎ Consider two examples in which the above formula can be simplified further. The first example is crucial for the proof of Theorem 2.3. ###### Example 3.7. Let $m=1$, and let $\Phi(x)=\phi(x)=|x|^{\alpha}$, $\alpha>-d$, be a scalar function. Then (see, e.g. [14, Ch. 2, Sect. 3.3]) $\displaystyle X_{\infty}(\xi)=2^{d+\alpha}\pi^{\frac{d}{2}}\frac{\Gamma\big{(}\frac{d+\alpha}{2}\big{)}}{\Gamma\big{(}-\frac{\alpha}{2}\big{)}}|\xi|^{-(d+\alpha)},\quad\alpha\not=0,2,4,\dots,$ and $X_{\infty}(\xi)=0$ for $\alpha=0,2,4,\dots$.
Thus, for $1/p=1+\alpha/d$ and $\alpha\not=0,2,4,\dots,$ we have (3.17) $\displaystyle\mu_{\alpha,d}:=\frac{1}{d(2\pi)^{d}}\int_{\mathbb{S}^{d-1}}|X_{\infty}(\omega)|^{p}d\omega=\bigg{[}\frac{\Gamma\big{(}\frac{d+\alpha}{2}\big{)}}{\pi^{\frac{\alpha}{2}}|\Gamma\big{(}-\frac{\alpha}{2}\big{)}|}\bigg{]}^{p}\frac{1}{\Gamma\big{(}\frac{d}{2}+1\big{)}}.$ Now Corollary 3.6 yields $\displaystyle{\sf{G}}_{p}(W)={\sf{g}}_{p}(W)=\mu_{\alpha,d}\int_{\mathbb{R}^{d}}|a(x)b(x)|^{p}dx,\quad\frac{1}{p}=1+\frac{\alpha}{d}.$ Note that the case of scalar functions $\Phi$ was studied in [1]; see also [3, Theorem 10.9]. Next we consider an important example of a vector-valued function $\Phi$. We do not need it for the current paper but record it for future use. ###### Example 3.8. Let $m=d$, and let $\Phi(x)=\nabla|x|^{\alpha+1}=(\alpha+1)|x|^{\alpha-1}x$, $\alpha>-d$. This vector-valued function is homogeneous of order $\alpha$ and (similarly to [14, Ch. 2, Sect. 3.3]) $\displaystyle X_{\infty}(\xi)=-i(\alpha+1)2^{d+\alpha}\pi^{\frac{d}{2}}\frac{\Gamma\big{(}\frac{d+\alpha+1}{2}\big{)}}{\Gamma\big{(}\frac{-\alpha+1}{2}\big{)}}|\xi|^{-(\alpha+1+d)}\xi,\quad\alpha\not=1,3,5,\dots,$ and $X_{\infty}(\xi)=0$ for $\alpha=1,3,5,\dots$. Thus, for $1/p=1+\alpha/d$ and $\alpha\not=1,3,5,\dots,$ we have (3.18) $\displaystyle\nu_{\alpha,d}:=\frac{1}{d(2\pi)^{d}}\int_{\mathbb{S}^{d-1}}|X_{\infty}(\omega)|^{p}d\omega=\bigg{[}\frac{(\alpha+1)\Gamma\big{(}\frac{d+\alpha+1}{2}\big{)}}{\pi^{\frac{\alpha}{2}}|\Gamma\big{(}\frac{-\alpha+1}{2}\big{)}|}\bigg{]}^{p}\frac{1}{\Gamma\big{(}\frac{d}{2}+1\big{)}}.$ Now Corollary 3.6 yields $\displaystyle{\sf{G}}_{p}(W)={\sf{g}}_{p}(W)=\nu_{\alpha,d}\int_{\mathbb{R}^{d}}|a(x)b(x)|^{p}dx,\quad\frac{1}{p}=1+\frac{\alpha}{d}.$ ## 4\. Spectral asymptotics for the model problem The objective of this section is to find the spectral asymptotics for a model integral operator. Recall that for any function $\mathcal{K}=\mathcal{K}(x,y)$, $x\in\mathbb{R}^{n},y\in\mathbb{R}^{d}$, we denote by $\operatorname{{\sf Int}}(\mathcal{K})$ the integral operator acting from $\textup{{{L}}}^{2}(\mathbb{R}^{d})$ into $\textup{{{L}}}^{2}(\mathbb{R}^{n})$. In each case the values of $n$ and $d$ are clear from the context. If $\mathcal{K}(x,y)$ is $\mathbb{C}^{s}$-valued then the “target” space $\textup{{{L}}}^{2}(\mathbb{R}^{n})$ is replaced by $\textup{{{L}}}^{2}(\mathbb{R}^{n};\mathbb{C}^{s})$. ### 4.1. The model operator Let $a,b_{j,k},\beta_{j,k}$, $j=1,2,\dots,N$, $k=1,2,\dots,N-1$, be scalar functions such that (4.1) $\displaystyle\begin{cases}a\in\textup{{{C}}}^{\infty}_{0}(\mathbb{R}^{3}),&\ \quad b_{j,k}\in\textup{{{C}}}^{\infty}_{0}(\mathbb{R}^{3N-3}),\\\\[5.69046pt] \beta_{j,k}\in\textup{{{C}}}^{\infty}(\mathbb{R}^{3N}),\end{cases}$ for all $j=1,2,\dots,N$, $k=1,2,\dots,N-1$. Let $\Phi\in\textup{{{C}}}^{\infty}(\mathbb{R}^{3}\setminus\\{0\\})$ be a vector-valued function with $m$ scalar components, homogeneous of order $\alpha>-3$, as defined in (3.16).
Consider the vector-valued kernel $\mathcal{M}(\hat{\mathbf{x}},x)$ with $mN$ components: (4.2) $\displaystyle\begin{cases}\mathcal{M}(\hat{\mathbf{x}},x)=\\{\mathcal{M}_{j}(\hat{\mathbf{x}},x)\\}_{j=1}^{N},\ \quad\mathcal{M}_{j}(\hat{\mathbf{x}},x)=\sum_{k=1}^{N-1}\mathcal{M}_{j,k}(\hat{\mathbf{x}},x),\\\\[8.5359pt] \mathcal{M}_{j,k}(\hat{\mathbf{x}},x)=b_{j,k}(\hat{\mathbf{x}})\Phi(x_{k}-x)a(x)\beta_{j,k}(\hat{\mathbf{x}},x).\end{cases}$ Our aim is to find an asymptotic formula for the singular values of the operator $\operatorname{{\sf Int}}(\mathcal{M}):\textup{{{L}}}^{2}(\mathbb{R}^{3})\to\textup{{{L}}}^{2}(\mathbb{R}^{3N-3};\mathbb{C}^{mN})$. Although the function $\Phi(x)$ is homogeneous, the results on homogeneous kernels, notably Corollary 3.6, are not applicable directly, since the number of “target” variables (i.e. $3N-3$) is greater than the number of the input variables (i.e. $3$), unless $N=2$. The proof of Theorem 4.1 below amounts to reducing the operator $\operatorname{{\sf Int}}(\mathcal{M})$ to a form for which Corollary 3.6 can be used. Recall that the weights $\mathcal{A}$ and $\mathcal{B}$ in Corollary 3.6 are only required to be continuous (with compact support). Thus the smoothness restrictions on the functions $a$, $b_{j,k}$, $\beta_{j,k}$ in the definition (4.2) can be relaxed, but for our purposes it suffices to assume conditions (4.1). Moreover, this assumption allows us to avoid unnecessary technical complications. We use the representations $(\hat{\mathbf{x}},x)=(\tilde{\mathbf{x}}_{k,N},x_{k},x)$ introduced in (1.8). Denote (4.3) $\displaystyle\begin{cases}h(t)=\bigg{[}\sum_{j=1}^{N}\sum_{k=1}^{N-1}\int_{\mathbb{R}^{3N-6}}|b_{j,k}(\tilde{\mathbf{x}}_{k,N},t)\beta_{j,k}(\tilde{\mathbf{x}}_{k,N},t,t)|^{2}d\tilde{\mathbf{x}}_{k,N}\bigg{]}^{\frac{1}{2}},\ \textup{if}\ N\geq 3;\\\\[8.5359pt] h(t)=\big{(}|b_{1,1}(t)\beta_{1,1}(t,t)|^{2}+|b_{2,1}(t)\beta_{2,1}(t,t)|^{2}\big{)}^{\frac{1}{2}},\ \textup{if}\ N=2.\end{cases}$ Let $X_{\infty}(\xi),\xi\in\mathbb{R}^{3}$, be the function defined by (3.13) and (3.14). ###### Theorem 4.1. Let $\mathcal{M}$ be the kernel defined above, where $\Phi\in\textup{{{C}}}^{\infty}(\mathbb{R}^{3}\setminus\\{0\\})$ is a homogeneous vector function of order $\alpha>-5/2$. Then the operator $\operatorname{{\sf Int}}(\mathcal{M})$ belongs to $\mathbf{S}_{p,\infty}$, $1/p=1+\alpha/3$, and (4.4) $\displaystyle{\sf{G}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{M})\big{)}={\sf{g}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{M})\big{)}=\frac{1}{24\pi^{3}}\int_{\mathbb{S}^{2}}|X_{\infty}(\omega)|^{p}\,d\omega\int_{\mathbb{R}^{3}}\big{(}|a(x)h(x)|\big{)}^{p}\,dx.$ Throughout the proof we assume that $N\geq 3$. For $N=2$ the argument simplifies, and we omit it. We begin the proof with the following lemma. ###### Lemma 4.2. For each $j=1,2,\dots,N$ and each pair $k,l=1,2,\dots,N-1$, $k\not=l$, we have $\displaystyle{\sf{G}}_{p/2}\big{(}\operatorname{{\sf Int}}(\mathcal{M}_{j,k})^{*}\operatorname{{\sf Int}}(\mathcal{M}_{j,l})\big{)}=0.$ ###### Proof.
Fix $j\in\\{1,2,\dots,N\\}$ and write the kernel of the operator $\operatorname{{\sf Int}}(\mathcal{M}_{j,k})^{*}\operatorname{{\sf Int}}(\mathcal{M}_{j,l})$: $\displaystyle\mathcal{P}_{k,l}(x,y)=$ $\displaystyle\ \int\overline{\mathcal{M}_{j,k}(\hat{\mathbf{x}},x)}\mathcal{M}_{j,l}(\hat{\mathbf{x}},y)d\hat{\mathbf{x}}$ $\displaystyle=$ $\displaystyle\ \overline{a(x)}a(y)\int\overline{\Phi(x-x_{k})}\cdot\Phi(x_{l}-y)\,\overline{b_{j,k}(\hat{\mathbf{x}})\beta_{j,k}(\hat{\mathbf{x}},x)}b_{j,l}(\hat{\mathbf{x}})\beta_{j,l}(\hat{\mathbf{x}},y)\,d\hat{\mathbf{x}}.$ Write $\hat{\mathbf{x}}=(\tilde{\mathbf{x}}_{l,N},x_{l})$, $d\hat{\mathbf{x}}=d\tilde{\mathbf{x}}_{l,N}dx_{l}$ and change $x_{l}$ to $x_{l}+y$, so that $\displaystyle\mathcal{P}_{k,l}(x,y)=\overline{a(x)}a(y)\int\overline{\Phi(x-x_{k})}\cdot$ $\displaystyle\ \Phi(x_{l})\overline{b_{j,k}(\tilde{\mathbf{x}}_{l,N},x_{l}+y)\beta_{j,k}(\tilde{\mathbf{x}}_{l,N},x_{l}+y,x)}$ $\displaystyle\qquad\qquad\times b_{j,l}(\tilde{\mathbf{x}}_{l,N},x_{l}+y)\beta_{j,l}(\tilde{\mathbf{x}}_{l,N},x_{l}+y,y)d\tilde{\mathbf{x}}_{l,N}dx_{l}.$ Because of the conditions (4.1), for all $x\in\mathbb{R}^{3}$ the kernel $\mathcal{P}_{k,l}$ is a $\textup{{{C}}}^{\infty}_{0}$-function of $y\in\mathbb{R}^{3}$. Hence by Proposition 3.3 the singular values of the operator $\operatorname{{\sf Int}}(\mathcal{P}_{k,l})$ decay faster than any negative power of their number. In particular, ${\sf{G}}_{p/2}\big{(}\operatorname{{\sf Int}}(\mathcal{P}_{k,l})\big{)}=0$, as required. ∎ ### 4.2. Proof of Theorem 4.1 for $\beta_{j,k}=1$ First we prove Theorem 4.1 for the simpler case $\beta_{j,k}=1$. It follows from Lemma 4.2 and from the inequality (3.4) that $\displaystyle{\sf{G}}_{p/2}\bigg{(}\sum_{j=1}^{N}\sum_{k\not=l}\operatorname{{\sf Int}}(\mathcal{M}_{j,k})^{*}\operatorname{{\sf Int}}(\mathcal{M}_{j,l})\bigg{)}=0.$ By Corollary 3.1 this implies that $\displaystyle{\sf{G}}_{p/2}\big{(}\operatorname{{\sf Int}}(\mathcal{M})^{*}\operatorname{{\sf Int}}(\mathcal{M})\big{)}={\sf{G}}_{p/2}\bigg{(}\sum_{j=1}^{N}\sum_{k=1}^{N-1}\operatorname{{\sf Int}}(\mathcal{M}_{j,k})^{*}\operatorname{{\sf Int}}(\mathcal{M}_{j,k})\bigg{)},$ and the same equality holds for the functional ${\sf{g}}_{p/2}$. Let us write the kernel $\mathcal{F}(x,y)$ of the operator on the right-hand side, remembering that $\beta_{j,k}=1$: $\displaystyle\mathcal{F}(x,y)=\overline{a(x)}a(y)$ $\displaystyle\ \sum_{j=1}^{N}\sum_{k=1}^{N-1}\int\overline{\Phi(x-x_{k})}\cdot\Phi(x_{k}-y)|b_{j,k}(\hat{\mathbf{x}})|^{2}d\hat{\mathbf{x}}$ $\displaystyle=$ $\displaystyle\ \overline{a(x)}a(y)\sum_{j=1}^{N}\sum_{k=1}^{N-1}\int_{\mathbb{R}^{3}}\overline{\Phi(x-t)}\cdot\Phi(t-y)\int_{\mathbb{R}^{3N-6}}|b_{j,k}(\tilde{\mathbf{x}}_{k,N},t)|^{2}d\tilde{\mathbf{x}}_{k,N}dt$ $\displaystyle=$ $\displaystyle\ \overline{a(x)}a(y)\int_{\mathbb{R}^{3}}\overline{\Phi(x-t)}\cdot\Phi(t-y)\ h(t)^{2}dt,$ where the function $\displaystyle h(t)=\bigg{[}\sum_{j=1}^{N}\sum_{k=1}^{N-1}\int_{\mathbb{R}^{3N-6}}|b_{j,k}(\tilde{\mathbf{x}}_{k,N},t)|^{2}d\tilde{\mathbf{x}}_{k,N}\bigg{]}^{\frac{1}{2}}$ coincides with (4.3) for $\beta_{j,k}=1$. Define the vector-valued kernel $\mathcal{G}$ by $\displaystyle\mathcal{G}(x,y)=h(x)\Phi(x-y)a(y),$ so that $\operatorname{{\sf Int}}(\mathcal{F})=\operatorname{{\sf Int}}(\mathcal{G})^{*}\operatorname{{\sf Int}}(\mathcal{G})$.
Thus the functionals ${\sf{G}}_{p/2}$ for the operators $\operatorname{{\sf Int}}(\mathcal{M})^{*}\operatorname{{\sf Int}}(\mathcal{M})$ and $\operatorname{{\sf Int}}(\mathcal{G})^{*}\operatorname{{\sf Int}}(\mathcal{G})$ coincide with each other, and the same applies to the functionals ${\sf{g}}_{p/2}$. Consequently, by virtue of (3.3), (4.5) $\displaystyle{\sf{G}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{M})\big{)}={\sf{G}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{G})\big{)},\quad{\sf{g}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{M})\big{)}={\sf{g}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{G})\big{)}.$ Since $b_{j,k}\in\textup{{{C}}}^{\infty}_{0}$, the function $h$ belongs to $\textup{{{C}}}_{0}$. Thus, to find ${\sf{G}}_{p}$ and ${\sf{g}}_{p}$ for the operator $\operatorname{{\sf Int}}(\mathcal{G})$ we can apply Corollary 3.6 with $d=3$ and with the weights $b=h\in\textup{{{C}}}_{0}$ and $a\in\textup{{{C}}}^{\infty}_{0}$, which gives $\displaystyle{\sf{G}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{G})\big{)}={\sf{g}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{G})\big{)}=\frac{1}{24\pi^{3}}\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}\big{(}|a(x)h(x)||X_{\infty}(\omega)|\big{)}^{p}\,d\omega dx,$ with $1/p=1+\alpha/3$. By (4.5), this equality implies (4.4), which completes the proof of Theorem 4.1 for $\beta_{j,k}=1$. ### 4.3. Proof of Theorem 4.1 for arbitrary $\beta_{j,k}\in\textup{{{C}}}^{\infty}$ We reduce the general case to the one considered in Subsect. 4.2. Since $b_{j,k}$ and $a$ are compactly supported, without loss of generality we may assume that $\beta_{j,k}\in\textup{{{C}}}^{\infty}_{0}(\mathbb{R}^{3N})$. For each $j=1,2,\dots,N$ represent $\displaystyle\mathcal{M}_{j}=\mathcal{A}_{j}+\sum_{k=1}^{N-1}\mathcal{F}_{j,k},$ where $\displaystyle\mathcal{A}_{j}(\hat{\mathbf{x}},x)=$ $\displaystyle\ \sum_{k=1}^{N-1}\Phi(x_{k}-x)b_{j,k}(\hat{\mathbf{x}})a(x)\beta_{j,k}(\hat{\mathbf{x}},x_{k}),$ $\displaystyle\mathcal{F}_{j,k}(\hat{\mathbf{x}},x)=$ $\displaystyle\ \Phi(x_{k}-x)b_{j,k}(\hat{\mathbf{x}})a(x)\big{(}\beta_{j,k}(\hat{\mathbf{x}},x)-\beta_{j,k}(\hat{\mathbf{x}},x_{k})\big{)},\quad k=1,2,\dots,N-1.$ Representing $\displaystyle\beta_{j,k}(\hat{\mathbf{x}},x)-\beta_{j,k}(\hat{\mathbf{x}},x_{k})=(x-x_{k})\cdot\int_{0}^{1}\nabla_{x}\beta_{j,k}(\hat{\mathbf{x}},x_{k}+s(x-x_{k}))ds=:(x_{k}-x)\cdot\sigma_{j,k}(\hat{\mathbf{x}},x),$ we can rewrite $\mathcal{F}_{j,k}$ as $\displaystyle\mathcal{F}_{j,k}(\hat{\mathbf{x}},x)=\Xi_{j,k}(\hat{\mathbf{x}},x)b_{j,k}(\hat{\mathbf{x}})a(x),\quad\textup{where}\quad\Xi_{j,k}(\hat{\mathbf{x}},x)=\Phi(x_{k}-x)\,\big{[}(x_{k}-x)\cdot\sigma_{j,k}(\hat{\mathbf{x}},x)\big{]}.$ Remembering that $\Phi$ is homogeneous of order $\alpha$ and that $\sigma_{j,k}\in\textup{{{C}}}^{\infty}_{0}(\mathbb{R}^{3N})$, we conclude that $\displaystyle\big{|}\partial^{m}_{x}\Xi_{j,k}(\hat{\mathbf{x}},x)\big{|}\lesssim|x-x_{k}|^{\alpha+1-|m|},\quad m\in\mathbb{N}_{0}^{3}.$ Since $\sigma_{j,k}$ is compactly supported, the kernel $\Xi_{j,k}(\hat{\mathbf{x}},x)$, as a function of $x\in\mathbb{R}^{3}$, belongs to $\textup{{{H}}}^{l}(\mathbb{R}^{3})$ for all $0\leq l<\alpha+5/2$. As $\alpha>-5/2$ the set of such values $l$ is non-empty. Moreover, the $\textup{{{H}}}^{l}$-norm of the kernel, as a function of $\hat{\mathbf{x}}\in\mathbb{R}^{3N-3}$, is uniformly bounded, and hence it trivially belongs to $\textup{{{L}}}^{2}(\mathbb{R}^{3N-3},|b_{j,k}(\hat{\mathbf{x}})|^{2}d\hat{\mathbf{x}})$.
By virtue of Proposition 3.3, we obtain that ${\sf{G}}_{q}(\operatorname{{\sf Int}}(\mathcal{F}_{j,k}))=0$, $1/q=1/2+l/3$. Note that $\displaystyle\frac{1}{q}\geq\frac{1}{p}=1+\frac{\alpha}{3},\quad\textup{for}\quad l\geq\alpha+\frac{3}{2}.$ Consequently, taking $l$ to be the only non-negative integer in the interval $[\alpha+3/2,\alpha+5/2)$, we conclude that ${\sf{G}}_{p}(\operatorname{{\sf Int}}(\mathcal{F}_{j,k}))=0$, and, by (3.4), $\displaystyle{\sf{G}}_{p}\bigg{(}\sum_{k=1}^{N-1}\operatorname{{\sf Int}}(\mathcal{F}_{j,k})\bigg{)}=0.$ By Corollary 3.1, (4.6) $\displaystyle{\sf{G}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{M})\big{)}={\sf{G}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{A})\big{)},\quad{\sf{g}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{M})\big{)}={\sf{g}}_{p}\big{(}\operatorname{{\sf Int}}(\mathcal{A})\big{)},$ where $\mathcal{A}$ is the vector function with components $\mathcal{A}_{j}(\hat{\mathbf{x}},x)$, $j=1,2,\dots,N$. To find ${\sf{G}}_{p}$ and ${\sf{g}}_{p}$ for the operator $\operatorname{{\sf Int}}(\mathcal{A})$, we observe that each kernel $\mathcal{A}_{j}(\hat{\mathbf{x}},x)$ has the form $\displaystyle\sum_{k=1}^{N-1}\Phi(x_{k}-x)\tilde{b}_{j,k}(\hat{\mathbf{x}})a(x)\quad\textup{with}\quad\tilde{b}_{j,k}(\hat{\mathbf{x}})=b_{j,k}(\hat{\mathbf{x}})\beta_{j,k}(\hat{\mathbf{x}},x_{k}).$ Using the result of Subsect. 4.2 we obtain the formula (4.4) for the operator $\operatorname{{\sf Int}}(\mathcal{A})$ with the function $h$ defined in (4.3). In view of (4.6) this implies (4.4) for the operator $\operatorname{{\sf Int}}(\mathcal{M})$, as claimed. ∎ ###### Corollary 4.3. Let $\Phi(x)$ be as in Example 3.7 with $\alpha=1,d=3$, i.e. $\Phi(x)=|x|$ and $1/p=1+\alpha/d=4/3$. According to (4.4) and (3.17), (4.7) $\displaystyle{\sf{G}}_{3/4}\big{(}\operatorname{{\sf Int}}(\mathcal{M})\big{)}={\sf{g}}_{3/4}\big{(}\operatorname{{\sf Int}}(\mathcal{M})\big{)}=\mu_{1,3}\int_{\mathbb{R}^{3}}\big{(}|a(x)h(x)|\big{)}^{\frac{3}{4}}\,dx,$ where $\mu_{\alpha,d}$ is defined in (3.17). ###### Corollary 4.4. Let $\Phi(x)$ be as in Example 3.8 with $\alpha=0,d=3$, i.e. $\Phi(x)=\nabla|x|=|x|^{-1}x$ and $1/p=1+\alpha/d=1$. According to (4.4) and (3.18), $\displaystyle{\sf{G}}_{1}\big{(}\operatorname{{\sf Int}}(\mathcal{M})\big{)}={\sf{g}}_{1}\big{(}\operatorname{{\sf Int}}(\mathcal{M})\big{)}=\nu_{0,3}\int_{\mathbb{R}^{3}}|a(x)h(x)|\,dx,$ where $\nu_{\alpha,d}$ is defined in (3.18). In the current paper we need only Corollary 4.3. Corollary 4.4 is recorded for future use. ## 5\. Factorization of $\boldsymbol{\Gamma}$: operator $\boldsymbol{\Psi}$ ### 5.1. Reformulation of the problem Using the functionals (3.2), one can rewrite the sought formula (1.4) as $\displaystyle{\sf{G}}_{3/8}(\boldsymbol{\Gamma})={\sf{g}}_{3/8}(\boldsymbol{\Gamma})=A.$ Since $\boldsymbol{\Gamma}=\boldsymbol{\Psi}^{*}\boldsymbol{\Psi}$ with the operator $\boldsymbol{\Psi}:\textup{{{L}}}^{2}(\mathbb{R}^{3})\to\textup{{{L}}}^{2}(\mathbb{R}^{3N-3};\mathbb{C}^{N})$ defined in (2.9), by (3.3) the above equalities rewrite as (5.1) $\displaystyle{\sf{G}}_{3/4}(\boldsymbol{\Psi})={\sf{g}}_{3/4}(\boldsymbol{\Psi})=A.$ Thus the main Theorem 2.3 can be recast as follows: ###### Theorem 5.1. Under the conditions of Theorem 2.3 the formula (5.1) holds with the constant $A$ which is defined in (2.7). The rest of the paper is focused on the proof of Theorem 5.1.
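The passage from $\boldsymbol{\Gamma}$ to $\boldsymbol{\Psi}$ is just the squaring relation (3.3) in disguise, and it can be sanity-checked on matrices. The sketch below is our own illustration under a toy setup, not part of the proof: we plant singular values $s_{k}=Ck^{-4/3}$, so that ${\sf{G}}_{3/4}$ of the factor and ${\sf{G}}_{3/8}$ of its square both equal $C^{3/4}$.

```python
import numpy as np

# Finite-dimensional check of (3.3): if s_k(Psi) = C k^{-4/3}, then
# lambda_k(Gamma) = s_k(Psi)^2 = C^2 k^{-8/3}, and
# G_{3/4}(Psi) = (lim k^{4/3} s_k)^{3/4} = C^{3/4} = (lim k^{8/3} lambda_k)^{3/8}.
rng = np.random.default_rng(0)
n, C = 400, 2.0
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = C * np.arange(1, n + 1) ** (-4.0 / 3.0)
Psi = U @ np.diag(s) @ V.T          # operator with prescribed singular values
Gamma = Psi.T @ Psi                 # its "density matrix" Gamma = Psi* Psi

k = np.arange(1, n + 1)
sv = np.linalg.svd(Psi, compute_uv=False)
lam = np.sort(np.linalg.eigvalsh(Gamma))[::-1]
print((k ** (4 / 3) * sv)[-1] ** (3 / 4))    # ~ C^{3/4} = 2^{0.75}
print((k ** (8 / 3) * lam)[-1] ** (3 / 8))   # same value, as (3.3) predicts
```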
As explained in the Introduction, at the heart of the proof is the formula (2.5) for the function $\psi$, which translates to the representation (2.12) for the kernels $\psi_{j}$ defined in (2.8). This representation allows us to reduce the problem to the model operator considered in Sect. 4 with the function $\Phi(x)=|x|$. At the first stage of this reduction we construct $\textup{{{C}}}^{\infty}_{0}$ approximations of the functions $\tilde{\xi}_{j,k}$ and $\tilde{\eta}_{j,k}$ from (2.12). ### 5.2. Cut-off functions First we construct appropriate cut-offs. Fix a $\delta>0$. Along with the sets (2.1) introduce (5.2) $\displaystyle{\sf S}_{l,s}(\delta)={\sf S}_{s,l}(\delta)=\\{\mathbf{x}\in\mathbb{R}^{3N}:|x_{l}-x_{s}|>\delta\\},\ 0\leq l<s\leq N,$ and for all $k=0,1,\dots,N-1$, define (5.3) $\displaystyle{\sf{U}}_{k}(\delta)=\bigg{(}\bigcap_{0\leq l<s\leq N-1}{\sf S}_{l,s}(\delta)\bigg{)}\bigcap\bigg{(}\bigcap_{\begin{subarray}{c}0\leq s\leq N-1\\\ s\not=k\end{subarray}}{\sf S}_{s,N}(\delta)\bigg{)}.$ Comparing with (2.2) we see that ${\sf{U}}_{k}(\delta)\subset{\sf{U}}_{k,N}$, and for $\mathbf{x}\in{\sf{U}}_{k}(\delta)$ all the coordinate pairs, except for the pair $x_{k}$, $x_{N}$, are separated by a distance greater than $\delta$. Similarly to (2.3) define the diagonal set $\displaystyle{\sf{U}}^{(\rm d)}_{k}(\delta)=\\{\mathbf{x}\in{\sf{U}}_{k}(\delta):x_{k}=x_{N}\\}\subset{\sf{U}}_{k,N}^{(\operatorname{d})}.$ Recall that the representation (2.12) holds on the domain $\tilde{\Omega}_{j,k}$ which satisfies (2.10) for all $j=1,2,\dots,N$, $k=0,1,\dots,N-1$. We construct a compact subset of $\tilde{\Omega}_{j,k}$ in the following way. For $R>0$ let $\displaystyle{\sf{U}}_{k}(\delta,R)=$ $\displaystyle\ {\sf{U}}_{k}(\delta)\bigcap\ (B_{R})^{N},$ $\displaystyle{\sf{U}}^{(\rm d)}_{k}(\delta,R)=$ $\displaystyle\ \\{\mathbf{x}\in{\sf{U}}_{k}(\delta,R):x_{k}=x_{N}\\},$ where $B_{R}=\\{x\in\mathbb{R}^{3}:|x|<R\\}$. The set ${\sf{U}}^{(\rm d)}_{k}(\delta,R)$ is bounded and its closure belongs to $\tilde{\Omega}_{j,k}$ for all $\delta>0,R>0$. Therefore, there exists an $\varepsilon_{0}=\varepsilon_{0}(\delta,R)>0$ such that the $\varepsilon$-neighbourhood (5.4) $\displaystyle\tilde{\Omega}_{k}(\delta,R,\varepsilon):=\\{\mathbf{x}\in{\sf{U}}_{k}(\delta,R):|x_{k}-x_{N}|<\varepsilon\\},$ together with its closure, belongs to $\tilde{\Omega}_{j,k}$ for all $\varepsilon\in(0,\varepsilon_{0})$: (5.5) $\displaystyle\overline{\tilde{\Omega}_{k}(\delta,R,\varepsilon)}\subset\tilde{\Omega}_{j,k},\quad\forall\varepsilon\in(0,\varepsilon_{0}).$ Now we specify $\textup{{{C}}}^{\infty}_{0}$ cut-offs supported on the domains $\tilde{\Omega}_{k}(\delta,R,\varepsilon)$. Let $\theta\in\textup{{{C}}}^{\infty}_{0}(\mathbb{R})$ and $\zeta=1-\theta$ be as defined in (1.9), (1.10). Denote (5.6) $\displaystyle Y_{\delta}(\hat{\mathbf{x}})=\prod_{0\leq l<s\leq N-1}\zeta\big{(}|x_{l}-x_{s}|(4\delta)^{-1}\big{)}.$ By the definition of $\zeta$, (5.7) $\displaystyle\operatorname{{supp}}Y_{\delta}\subset\bigcap_{0\leq l<s\leq N-1}{\sf S}_{l,s}(2\delta),$ where ${\sf S}_{l,s}(\ \cdot\ )$ is defined in (5.2). Define also cut-offs at infinity. Denote (5.8) $\displaystyle Q_{R}(\hat{\mathbf{x}})=\prod_{1\leq l\leq N-1}\theta\big{(}|x_{l}|R^{-1}\big{)},\quad K_{R}(x)=\theta\big{(}|x|R^{-1}\big{)}.$ ###### Lemma 5.2. Let $\tilde{\Omega}_{k}(\delta,R,\varepsilon)$ be the set introduced in (5.4).
Then for all $\varepsilon<\min\\{\varepsilon_{0},\delta\\}$ the support of the function (5.9) $\displaystyle Q_{R}(\hat{\mathbf{x}})K_{R}(x)Y_{\delta}(\hat{\mathbf{x}})\theta\big{(}|x-x_{k}|\varepsilon^{-1}\big{)}$ belongs to $\tilde{\Omega}_{k}(\delta,R,\varepsilon)$ for all $k=0,1,\dots,N-1$. ###### Proof. Assume that $\mathbf{x}$ belongs to the support of the function (5.9). In view of (5.7), for such $\mathbf{x}$ we have (5.10) $\displaystyle|x_{l}-x_{s}|>2\delta,\ 0\leq l<s\leq N-1,\quad\textup{and}\quad|x-x_{k}|<\varepsilon.$ As $\varepsilon<\delta$, for all $s=0,1,\dots,N-1$, $s\not=k$, we can write $\displaystyle|x-x_{s}|\geq|x_{k}-x_{s}|-|x-x_{k}|>2\delta-\varepsilon>\delta.$ By the definition (5.3), together with (5.10), this gives $\mathbf{x}\in{\sf{U}}_{k}(\delta)$. Moreover, since $\operatorname{{supp}}(Q_{R}K_{R})\subset(B_{R})^{N}$, this means that $\mathbf{x}\in{\sf{U}}_{k}(\delta,R)$. Now the claimed inclusion follows from the definition (5.4). ∎ ### 5.3. Using the cut-offs introduced above we construct a convenient approximation for the kernels $\psi_{j}(\hat{\mathbf{x}},x)$. Taking, if necessary, a smaller $\varepsilon_{0}$ in (5.5), we will assume that $\varepsilon_{0}(\delta,R)\leq\delta$, and hence for all $\varepsilon<\varepsilon_{0}(\delta,R)$, apart from the inclusion (5.5), the conclusion of Lemma 5.2 holds as well. Thus, for these values of $\varepsilon$ the real analytic functions $\tilde{\xi}_{j,k},\tilde{\eta}_{j,k}$ are well-defined on the support of (5.9), and hence the kernel (5.11) $\displaystyle\Upsilon_{j}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)=Q_{R}(\hat{\mathbf{x}})Y_{\delta}(\hat{\mathbf{x}})K_{R}(x)\sum_{k=1}^{N-1}\theta\big{(}|x-x_{k}|\varepsilon^{-1}\big{)}|x-x_{k}|\tilde{\eta}_{j,k}(\hat{\mathbf{x}},x),$ is well-defined for all $(\hat{\mathbf{x}},x)\in\mathbb{R}^{3N}$, and each of the functions $\displaystyle Q_{R}(\hat{\mathbf{x}})Y_{\delta}(\hat{\mathbf{x}})K_{R}(x)\theta\big{(}|x-x_{k}|\varepsilon^{-1}\big{)}\tilde{\eta}_{j,k}(\hat{\mathbf{x}},x),\quad k=1,2,\dots,N-1,$ belongs to $\textup{{{C}}}^{\infty}_{0}(\mathbb{R}^{3N})$. Our objective is to prove that the vector-valued kernel $\displaystyle\boldsymbol{\Upsilon}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)=\big{\\{}\Upsilon_{j}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)\big{\\}}_{j=1}^{N}$ is an approximation for $\boldsymbol{\Psi}(\hat{\mathbf{x}},x)$ (see (2.9)) in the following sense. ###### Lemma 5.3. The following relations hold: $\displaystyle{\sf{G}}_{3/4}(\boldsymbol{\Psi})=\lim\limits_{\begin{subarray}{c}\delta\to 0\\\ R\to\infty\end{subarray}}\lim_{\varepsilon\to 0}{\sf{G}}_{3/4}\big{(}\operatorname{{\sf Int}}(\boldsymbol{\Upsilon}[\delta,R,\varepsilon])\big{)},\quad{\sf{g}}_{3/4}(\boldsymbol{\Psi})=\lim\limits_{\begin{subarray}{c}\delta\to 0\\\ R\to\infty\end{subarray}}\lim_{\varepsilon\to 0}{\sf{g}}_{3/4}\big{(}\operatorname{{\sf Int}}(\boldsymbol{\Upsilon}[\delta,R,\varepsilon])\big{)},$ where the limits on the right-hand side exist. The proof of this lemma is given in the next section. ## 6\. Proof of Lemma 5.3 ### 6.1. Spectral estimates for $\boldsymbol{\Psi}$ Our proof of Lemma 5.3 relies on the bounds obtained in [22]. Let $\mathcal{C}_{n}=(0,1)^{3}+n$, $n\in\mathbb{Z}^{3}$.
Assume that $b\in\textup{{{L}}}^{\infty}(\mathbb{R}^{3N-3})$ and that $a\in\textup{{{L}}}^{2}_{\textup{\tiny loc}}(\mathbb{R}^{3})$ is such that $\displaystyle\sup_{n\in\mathbb{Z}^{3}}\|a\|_{\textup{{{L}}}^{2}(\mathcal{C}_{n})}<\infty.$ Then the functionals $\displaystyle S_{\varkappa}(a)=\bigg{[}\sum_{n\in\mathbb{Z}^{3}}e^{-\frac{3}{4}\varkappa|n|}\|a\|_{\textup{{{L}}}^{2}(\mathcal{C}_{n})}^{\frac{3}{4}}\bigg{]}^{\frac{4}{3}}$ and $\displaystyle M_{\varkappa}(b)=\biggl{[}\int_{\mathbb{R}^{3N-3}}|b(\hat{\mathbf{x}})|^{2}e^{-2\varkappa|\hat{\mathbf{x}}|}d\hat{\mathbf{x}}\biggr{]}^{\frac{1}{2}}$ are both finite for all $\varkappa>0$. Recall that the functional ${\sf{G}}_{p}$ is defined in (3.2), and $\psi_{j}$ in (2.8). The next bound for the operators $b\,\operatorname{{\sf Int}}(\psi_{j})a$ follows from [22, Theorem 3.1]. ###### Proposition 6.1. Assume that $\psi$ satisfies (1), and let $j=1,2,\dots,N$. Let the functions $a$ and $b$ be as described above. Then $b\,\operatorname{{\sf Int}}(\psi_{j})a\in\mathbf{S}_{3/4,\infty}$ and for some $\varkappa\leq\varkappa_{0}$ we have (6.1) $\displaystyle{\sf{G}}_{3/4}(b\,\operatorname{{\sf Int}}(\psi_{j})a)\lesssim\big{(}M_{\varkappa}(b)S_{\varkappa}(a)\big{)}^{\frac{3}{4}}.$ ### 6.2. Proof of Lemma 5.3 The strategy of the proof is to “trim down” the kernel (2.9) in several steps, by multiplying it by appropriate cut-offs including the functions (5.6) and (5.8), or dropping some of the components, until it reduces to the kernel (5.11). At every step of this process we justify the trimming using either Corollary 3.1 or Corollary 3.2. The first stage is described in the next lemma. ###### Lemma 6.2. The following relations hold: (6.2) $\displaystyle{\sf{G}}_{3/4}(\boldsymbol{\Psi})=\lim\limits_{\begin{subarray}{c}\delta\to 0\\\ R\to\infty\end{subarray}}{\sf{G}}_{3/4}(Q_{R}Y_{\delta}\boldsymbol{\Psi}K_{R}),\quad{\sf{g}}_{3/4}(\boldsymbol{\Psi})=\lim\limits_{\begin{subarray}{c}\delta\to 0\\\ R\to\infty\end{subarray}}{\sf{g}}_{3/4}(Q_{R}Y_{\delta}\boldsymbol{\Psi}K_{R}),$ where the limits on the right-hand side exist. ###### Proof. First we check that (6.3) $\displaystyle\begin{cases}\lim\limits_{\delta\to 0}{\sf{G}}_{3/4}\big{(}(I-Y_{\delta})\boldsymbol{\Psi}\big{)}=0,\\\\[8.5359pt] \lim\limits_{R\to\infty}{\sf{G}}_{3/4}\big{(}(I-Q_{R})\boldsymbol{\Psi}\big{)}=0,\ \lim\limits_{R\to\infty}{\sf{G}}_{3/4}\big{(}\boldsymbol{\Psi}(I-K_{R})\big{)}=0.\end{cases}$ It suffices to check the above relations for each operator $\operatorname{{\sf Int}}(\psi_{j})$, $j=1,2,\dots,N$. Consider first $(I-Y_{\delta})\operatorname{{\sf Int}}(\psi_{j})$. Since $\displaystyle 1-Y_{\delta}(\hat{\mathbf{x}})\leq\sum_{0\leq l<s\leq N-1}\theta\big{(}|x_{l}-x_{s}|(4\delta)^{-1}\big{)},$ it follows from (6.1) that $\displaystyle{\sf{G}}_{3/4}\big{(}(1-Y_{\delta})\operatorname{{\sf Int}}(\psi_{j})\big{)}\lesssim$ $\displaystyle\ \big{(}M_{\varkappa}(1-Y_{\delta})\big{)}^{3/4}$ $\displaystyle\lesssim$ $\displaystyle\ \sum_{0\leq l<s\leq N-1}\bigg{[}\int\theta\big{(}|x_{l}-x_{s}|(4\delta)^{-1}\big{)}^{2}e^{-2\varkappa|\hat{\mathbf{x}}|}d\hat{\mathbf{x}}\bigg{]}^{3/8}\lesssim\delta^{9/8}\to 0,\ \delta\to 0,$ and hence the first relation in (6.3) holds. In a similar way one estimates $(I-Q_{R})\operatorname{{\sf Int}}(\psi_{j})$ and $\operatorname{{\sf Int}}(\psi_{j})(I-K_{R})$. We estimate, for example, the first of these operators.
Since $\displaystyle 1-Q_{R}(\hat{\mathbf{x}})\leq\sum_{1\leq l\leq N-1}\zeta\big{(}|x_{l}|R^{-1}\big{)},$ it follows from (6.1) again that $\displaystyle{\sf{G}}_{3/4}\big{(}(I-Q_{R})\operatorname{{\sf Int}}(\psi_{j})\big{)}\lesssim$ $\displaystyle\ \big{(}M_{\varkappa}(1-Q_{R})\big{)}^{3/4}$ $\displaystyle\lesssim$ $\displaystyle\ \sum_{1\leq l\leq N-1}\bigg{[}\int_{\mathbb{R}^{3N-3}}\zeta(|x_{l}|R^{-1})^{2}e^{-2\varkappa|\hat{\mathbf{x}}|}\,d\hat{\mathbf{x}}\bigg{]}^{3/8}\lesssim e^{-3\varkappa R/8}\to 0,\ R\to\infty,$ whence the second equality in (6.3). Represent $\boldsymbol{\Psi}$ in the form $\displaystyle\boldsymbol{\Psi}=Q_{R}Y_{\delta}\boldsymbol{\Psi}K_{R}+(I-Q_{R})\boldsymbol{\Psi}+Q_{R}(1-Y_{\delta})\boldsymbol{\Psi}+Q_{R}Y_{\delta}\boldsymbol{\Psi}(I-K_{R}).$ According to (3.4), $\displaystyle{\sf{G}}_{3/4}\big{(}\boldsymbol{\Psi}-Q_{R}Y_{\delta}\boldsymbol{\Psi}K_{R}\big{)}^{\frac{3}{7}}\leq$ $\displaystyle\ {\sf{G}}_{3/4}\big{(}(I-Q_{R})\boldsymbol{\Psi}\big{)}^{\frac{3}{7}}+{\sf{G}}_{3/4}\big{(}Q_{R}(1-Y_{\delta})\boldsymbol{\Psi}\big{)}^{\frac{3}{7}}+{\sf{G}}_{3/4}\big{(}Q_{R}Y_{\delta}\boldsymbol{\Psi}(I-K_{R})\big{)}^{\frac{3}{7}}$ $\displaystyle\leq$ $\displaystyle\ {\sf{G}}_{3/4}\big{(}(I-Q_{R})\boldsymbol{\Psi}\big{)}^{\frac{3}{7}}+{\sf{G}}_{3/4}\big{(}(1-Y_{\delta})\boldsymbol{\Psi}\big{)}^{\frac{3}{7}}+{\sf{G}}_{3/4}\big{(}\boldsymbol{\Psi}(I-K_{R})\big{)}^{\frac{3}{7}}.$ By virtue of (6.3) the right-hand side tends to zero as $\delta\to 0,R\to\infty$. By Corollary 3.2 this implies (6.2). ∎ At the next stage we partition the kernel (6.4) $\displaystyle Q_{R}(\hat{\mathbf{x}})Y_{\delta}(\hat{\mathbf{x}})\boldsymbol{\Psi}(\hat{\mathbf{x}},x)K_{R}(x)$ of the operator $Q_{R}Y_{\delta}\boldsymbol{\Psi}K_{R}$ on the right-hand side of the formulas (6.2). We do this by introducing the cut-offs $\theta\big{(}|x-x_{k}|\varepsilon^{-1}\big{)}$, $k=0,1,\dots,N-1$, assuming that $\varepsilon<\delta$. In view of the definition (5.6) it is straightforward to check that under this condition, we have $\displaystyle Y_{\delta}(\hat{\mathbf{x}})\sum_{k=0}^{N-1}\theta\big{(}|x-x_{k}|\varepsilon^{-1}\big{)}+Y_{\delta}(\hat{\mathbf{x}})\prod_{k=0}^{N-1}\zeta\big{(}|x-x_{k}|\varepsilon^{-1}\big{)}=Y_{\delta}(\hat{\mathbf{x}}),$ and hence the $j$’th component of (6.4) can be represented as follows: (6.5) $\displaystyle Q_{R}(\hat{\mathbf{x}})Y_{\delta}(\hat{\mathbf{x}})\psi_{j}(\hat{\mathbf{x}},x)K_{R}(x)=\sum_{k=0}^{N-1}\phi_{j,k}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)+\tau_{j}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)$ with $\displaystyle\phi_{j,k}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)=$ $\displaystyle\ Q_{R}(\hat{\mathbf{x}})Y_{\delta}(\hat{\mathbf{x}})\theta\big{(}|x-x_{k}|\varepsilon^{-1}\big{)}\psi_{j}(\hat{\mathbf{x}},x)K_{R}(x),\quad k=0,1,\dots,N-1,$ $\displaystyle\tau_{j}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)=$ $\displaystyle\ Q_{R}(\hat{\mathbf{x}})Y_{\delta}(\hat{\mathbf{x}})\prod_{k=0}^{N-1}\zeta\big{(}|x-x_{k}|\varepsilon^{-1}\big{)}\psi_{j}(\hat{\mathbf{x}},x)K_{R}(x).$ First we show that the kernels $\tau_{j}[\delta,R,\varepsilon]$ and $\phi_{j,0}[\delta,R,\varepsilon]$ give negligible contributions to the asymptotics. ###### Lemma 6.3.
For each $\delta>0,R>0$ and $\varepsilon<\delta$ one has (6.6) $\displaystyle{\sf{G}}_{3/4}\big{(}\operatorname{{\sf Int}}\big{(}\tau_{j}[\delta,R,\varepsilon]\big{)}\big{)}=0,\ j=1,2,\dots,N.$ ###### Proof. By the definitions (5.6) and (1.10), the support of the kernel $\tau_{j}[\delta,R,\varepsilon]$ belongs to the bounded domain $\displaystyle{\bigcap_{0\leq l<s\leq N}{\sf S}_{l,s}(\varepsilon/2)\cap(B_{R})^{N}}.$ The function $\psi_{j}$ is real-analytic on this domain and it is uniformly bounded together with all its derivatives, so that $\tau_{j}[\delta,R,\varepsilon]\in\textup{{{C}}}^{\infty}_{0}(\mathbb{R}^{3N})$. By Proposition 3.3, ${\sf{G}}_{p}(\operatorname{{\sf Int}}(\tau_{j}[\delta,R,\varepsilon]))=0$ for all $p>0$, and in particular, for $p=3/4$, as claimed. ∎ ###### Lemma 6.4. For each $\delta>0,R>0$ one has (6.7) $\displaystyle\lim_{\varepsilon\to 0}{\sf{G}}_{3/4}\big{(}\operatorname{{\sf Int}}\big{(}\phi_{j,0}[\delta,R,\varepsilon]\big{)}\big{)}=0,\ j=1,2,\dots,N.$ ###### Proof. As $x_{0}=0$ by definition, the kernel $\phi_{j,0}[\delta,R,\varepsilon]$ has the form $\displaystyle\phi_{j,0}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)=Q_{R}(\hat{\mathbf{x}})Y_{\delta}(\hat{\mathbf{x}})\psi_{j}(\hat{\mathbf{x}},x)\theta\big{(}|x|\varepsilon^{-1}\big{)}K_{R}(x).$ Estimating $Q_{R}Y_{\delta}\leq 1$, $K_{R}\leq 1$, one sees that the singular values of $\operatorname{{\sf Int}}(\phi_{j,0}[\delta,R,\varepsilon])$ do not exceed those of the operator $\operatorname{{\sf Int}}(\psi_{j})a$ with the weight $a(x)=\theta(|x|\varepsilon^{-1})$. By (6.1), $\displaystyle{\sf{G}}_{3/4}(\operatorname{{\sf Int}}(\psi_{j})a)\lesssim S_{\varkappa}(a)^{3/4}\lesssim\bigg{(}\int_{\mathbb{R}^{3}}\theta\big{(}|x|\varepsilon^{-1}\big{)}^{2}dx\bigg{)}^{3/8}\lesssim\varepsilon^{9/8}\to 0,\ \varepsilon\to 0.$ This implies (6.7). ∎ ###### Corollary 6.5. Denote by $\boldsymbol{\alpha}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)=\\{\alpha_{j}[\delta,R,\varepsilon]\\}_{j=1}^{N}$ the vector-valued kernel with the components $\displaystyle\alpha_{j}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)=\sum_{k=1}^{N-1}\phi_{j,k}[\delta,R,\varepsilon](\hat{\mathbf{x}},x).$ Then for all $\delta>0$ and $R>0$, we have (6.8) $\displaystyle\begin{cases}{\sf{G}}_{3/4}(Q_{R}Y_{\delta}\boldsymbol{\Psi}K_{R})=&\ \lim\limits_{\varepsilon\to 0}{\sf{G}}_{3/4}(\operatorname{{\sf Int}}(\boldsymbol{\alpha}[\delta,R,\varepsilon])),\\\\[5.69046pt] {\sf{g}}_{3/4}(Q_{R}Y_{\delta}\boldsymbol{\Psi}K_{R})=&\ \lim\limits_{\varepsilon\to 0}{\sf{g}}_{3/4}(\operatorname{{\sf Int}}(\boldsymbol{\alpha}[\delta,R,\varepsilon])),\end{cases}$ where the limits on the right-hand side exist. ###### Proof. By (6.5), the kernel $Q_{R}Y_{\delta}\psi_{j}K_{R}$ has the form $\displaystyle\alpha_{j}[\delta,R,\varepsilon]+\phi_{j,0}[\delta,R,\varepsilon]+\tau_{j}[\delta,R,\varepsilon].$ By virtue of (3.4) and (6.6), (6.7), we have $\displaystyle\lim_{\varepsilon\to 0}{\sf{G}}_{3/4}\big{(}\operatorname{{\sf Int}}\big{(}\phi_{j,0}[\delta,R,\varepsilon]+\tau_{j}[\delta,R,\varepsilon]\big{)}\big{)}=0.$ Now (6.8) follows from Corollary 3.2. ∎ ###### Completion of the proof of Lemma 5.3. According to Lemma 5.2, under the condition $\varepsilon<\varepsilon_{0}(\delta,R)$, the support of each kernel $\displaystyle\phi_{j,k}[\delta,R,\varepsilon],\quad j=1,2,\dots,N,\quad k=1,2,\dots,N-1,$ belongs to $\tilde{\Omega}_{k}(\delta,R,\varepsilon)$, see (5.4) for the definition.
Therefore one can use the representation (2.12) for the function $\psi_{j}$: $\displaystyle\alpha_{j}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)=\sum_{k=1}^{N-1}\phi_{j,k}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)=\sum_{k=1}^{N-1}Q_{R}(\hat{\mathbf{x}})Y_{\delta}(\hat{\mathbf{x}})\theta\big(|x-x_{k}|\varepsilon^{-1}\big)\tilde{\xi}_{j,k}(\hat{\mathbf{x}},x)K_{R}(x)+\sum_{k=1}^{N-1}Q_{R}(\hat{\mathbf{x}})Y_{\delta}(\hat{\mathbf{x}})\theta\big(|x-x_{k}|\varepsilon^{-1}\big)|x_{k}-x|\tilde{\eta}_{j,k}(\hat{\mathbf{x}},x)K_{R}(x).$ Each term in the first sum on the right-hand side belongs to $\textup{C}^{\infty}_{0}(\mathbb{R}^{3N})$. Thus, by Proposition 3.3, the functional ${\sf{G}}_{p}$ for the associated operator equals zero for all $p>0$, and in particular, for $p=3/4$. The second sum coincides with the kernel $\Upsilon_{j}[\delta,R,\varepsilon](\hat{\mathbf{x}},x)$, defined in (5.11). Therefore, by Corollary 3.1, (6.9) $\displaystyle\begin{cases}{\sf{G}}_{3/4}(\operatorname{{\sf Int}}(\boldsymbol{\alpha}[\delta,R,\varepsilon]))={\sf{G}}_{3/4}(\operatorname{{\sf Int}}(\boldsymbol{\Upsilon}[\delta,R,\varepsilon])),\\ {\sf{g}}_{3/4}(\operatorname{{\sf Int}}(\boldsymbol{\alpha}[\delta,R,\varepsilon]))={\sf{g}}_{3/4}(\operatorname{{\sf Int}}(\boldsymbol{\Upsilon}[\delta,R,\varepsilon])),\end{cases}$ for each $\delta>0,R>0$ and $\varepsilon<\varepsilon_{0}(\delta,R)$. Putting together (6.2), (6.8) and (6.9), and using Corollary 3.2, we conclude the proof of Lemma 5.3. ∎ ## 7. Proof of Theorems 2.2, 5.1 and 2.3 ###### Lemma 7.1. The operator $\operatorname{{\sf Int}}(\boldsymbol{\Upsilon}[\delta,R,\varepsilon])$ belongs to $\mathbf{S}_{3/4,\infty}$ for all $\delta>0,R>0,\varepsilon<\varepsilon_{0}(\delta,R)$ and (7.1) $\displaystyle{\sf{G}}_{3/4}\big(\operatorname{{\sf Int}}(\boldsymbol{\Upsilon}[\delta,R,\varepsilon])\big)={\sf{g}}_{3/4}\big(\operatorname{{\sf Int}}(\boldsymbol{\Upsilon}[\delta,R,\varepsilon])\big)=\mu_{1,3}\int\big(K_{R}(t)H_{\delta,R}(t)\big)^{\frac{3}{4}}dt,$ where $\displaystyle H_{\delta,R}(t)=Q_{R}(t)Y_{\delta}(t)\,\big(|\tilde{\eta}_{1,1}(t,t)|^{2}+|\tilde{\eta}_{2,1}(t,t)|^{2}\big)^{1/2},\ \textup{if}\ N=2,$ and (7.2) $\displaystyle H_{\delta,R}(t)=\bigg[\sum_{j=1}^{N}\sum_{k=1}^{N-1}\int_{\mathbb{R}^{3N-6}}\big|Q_{R}(\tilde{\mathbf{x}}_{k,N},t)Y_{\delta}(\tilde{\mathbf{x}}_{k,N},t)\tilde{\eta}_{j,k}(\tilde{\mathbf{x}}_{k,N},t,t)\big|^{2}d\tilde{\mathbf{x}}_{k,N}\bigg]^{\frac{1}{2}},\ \textup{if}\ N\geq 3,$ and $\mu_{\alpha,d}$ is defined in (3.17). ###### Proof. The kernel $\boldsymbol{\Upsilon}[\delta,R,\varepsilon]$ (see (5.11)) has the form (4.2) with $\displaystyle a(x)=K_{2R}(x),\quad b_{j,k}(\hat{\mathbf{x}})=Q_{2R}(\hat{\mathbf{x}})Y_{\delta/2}(\hat{\mathbf{x}}),$ $\displaystyle\beta_{j,k}(\hat{\mathbf{x}},x)=\theta\big(|x-x_{k}|\varepsilon^{-1}\big)\tilde{\eta}_{j,k}(\hat{\mathbf{x}},x)Q_{R}(\hat{\mathbf{x}})Y_{\delta}(\hat{\mathbf{x}})K_{R}(x),$ and the homogeneous function $\Phi(x)=|x|$. Here we have used the fact that $\displaystyle Q_{R}(\hat{\mathbf{x}})Q_{2R}(\hat{\mathbf{x}})=Q_{R}(\hat{\mathbf{x}}),\quad Y_{\delta}(\hat{\mathbf{x}})Y_{\delta/2}(\hat{\mathbf{x}})=Y_{\delta}(\hat{\mathbf{x}})\quad\textup{and}\quad K_{R}(x)K_{2R}(x)=K_{R}(x).$ Therefore we can use Corollary 4.3. It is immediate that in this case the function $h$ defined in (4.3) coincides with $H_{\delta,R}$, so that (4.7) entails (7.1), as required.
∎ ###### Proof of Theorems 2.2, 5.1 and 2.3. By Lemma 5.3, each term in the relation (7.1) has a limit as $\delta\to 0,R\to\infty$. Therefore the integral on the right-hand side of (7.1) is bounded uniformly in $\delta>0,R>0$. Assume for convenience that the function $\theta$ defined in (1.10) is monotone decreasing for $t\geq 0$. Then the pointwise convergences $\displaystyle Y_{\delta}(\tilde{\mathbf{x}}_{k,N},t)\to 1,\ \delta\to 0\quad\textup{and}\quad K_{R}(t)\to 1,\ Q_{R}(\tilde{\mathbf{x}}_{k,N},t)\to 1,\ R\to\infty,$ are monotone increasing. By the Monotone Convergence Theorem, the integrand $K_{R}(t)H_{\delta,R}(t)$ on the right-hand side of (7.1) converges for a.e. $t\in\mathbb{R}^{3}$ as $\delta\to 0,R\to\infty$ to an $\textup{L}^{3/4}(\mathbb{R}^{3})$-function, which we denote by $\tilde{H}(t)$, and the integral in (7.1) converges to (7.3) $\displaystyle\mu_{1,3}\int\big(\tilde{H}(t)\big)^{3/4}dt.$ If $N=2$, then this concludes the proof of Theorem 2.2, since in this case $\displaystyle H_{\delta,R}(t)\to\big(|\tilde{\eta}_{1,1}(t,t)|^{2}+|\tilde{\eta}_{2,1}(t,t)|^{2}\big)^{1/2},$ a.e. $t\in\mathbb{R}^{3}$, and by virtue of (2.13) this limit coincides with $H(t)$. If $N\geq 3$, then the convergence to $\tilde{H}(t)$ implies that for a.e. $t\in\mathbb{R}^{3}$ the function $K_{R}(t)H_{\delta,R}(t)$, and hence $H_{\delta,R}(t)$, is bounded uniformly in $\delta$ and $R$. Applying the Monotone Convergence Theorem to the integral (7.2), we conclude that the a.e.-limit $\displaystyle|\tilde{\eta}_{j,k}(\tilde{\mathbf{x}}_{k,N},t,t)|=\lim_{\delta\to 0,R\to\infty}\big|Q_{R}(\tilde{\mathbf{x}}_{k,N},t)Y_{\delta}(\tilde{\mathbf{x}}_{k,N},t)\tilde{\eta}_{j,k}(\tilde{\mathbf{x}}_{k,N},t,t)\big|$ belongs to $\textup{L}^{2}(\mathbb{R}^{3N-6})$, a.e. $t\in\mathbb{R}^{3}$, and $\displaystyle\lim_{\delta\to 0,R\to\infty}H_{\delta,R}(t)=H(t),\quad\textup{a.e.}\quad t\in\mathbb{R}^{3},$ where we have used the formula (2.13) for $H$. Thus $H=\tilde{H}\in\textup{L}^{3/4}(\mathbb{R}^{3})$. As (2.13) is equivalent to (2.6), this completes the proof of Theorem 2.2. An easy calculation shows that $\mu_{1,3}=3^{-1}(2/\pi)^{5/4}$, so that the limit (7.3) coincides with the coefficient $A$ in (2.7). Together with Lemma 5.3 this completes the proof of Theorem 5.1. As explained before, Theorem 5.1 is equivalent to Theorem 2.3. This completes the proof. ∎ Acknowledgments. The author is grateful to S. Fournais, T. Hoffmann-Ostenhof, M. Lewin and T. Ø. Sørensen for stimulating discussions and advice. The author thanks J. Cioslowski for his comments and for bringing to the author's attention papers [6], [7] and [15]. The author was supported by the EPSRC grant EP/P024793/1. ## References * [1] M. S. Birman and M. Z. Solomyak, _Asymptotics of the spectrum of weakly polar integral operators_. Izv. Akad. Nauk SSSR Ser. Mat. 34: 1142–1158, 1970. * [2] M. S. Birman and M. Z. Solomyak, _Asymptotic behavior of the spectrum of pseudodifferential operators with anisotropically homogeneous symbols_. Vestnik Leningrad. Univ. Mat. Mekh. Astronom. 13(3): 13–21, 169, 1977. * [3] M. S. Birman and M. Z. Solomyak, _Estimates for the singular numbers of integral operators (Russian)_. Uspehi Mat. Nauk 32(1(193)): 17–84, 1977. * [4] M. S. Birman and M. Z. Solomyak, _Asymptotic behavior of the spectrum of pseudodifferential operators with anisotropically homogeneous symbols. II_. Vestnik Leningrad. Univ. Mat. Mekh. Astronom. 13(3): 5–10, 121, 1979. * [5] M. S. Birman and M. Z. Solomyak, _Spectral Theory of Selfadjoint Operators in Hilbert Space_. Mathematics and its Applications (Soviet Series), D. Reidel, 1987. Translated from the 1980 Russian original by S. Khrushchëv and V. Peller.
* [6] J. Cioslowski, _Off-diagonal derivative discontinuities in the reduced density matrices of electronic systems_. The Journal of Chemical Physics 153(15): 154108, 2020. * [7] J. Cioslowski and F. Pratnicki, _Universalities among natural orbitals and occupation numbers pertaining to ground states of two electrons in central potentials_. The Journal of Chemical Physics 151(184107), 2019. * [8] J. Cioslowski and K. Strasburger, _Angular-Momentum Extrapolations to the Complete Basis Set Limit: Why and When They Work_. Journal of Chemical Theory and Computation 17(6): 3403–3413, 2021. * [9] A. Coleman and V. Yukalov, _Reduced Density Matrices_, _Lecture Notes in Chemistry_, vol. 72. Springer-Verlag Berlin Heidelberg, 2000. * [10] E. Davidson, _Reduced Density Matrices in Quantum Chemistry_. Academic Press, 1976. * [11] S. Fournais and T. Ø. Sørensen, _Pointwise estimates on derivatives of Coulombic wave functions and their electron densities_. J. Reine Angew. Math., arXiv:1803.03495 [math.AP], 2018. * [12] S. Fournais, M. Hoffmann-Ostenhof, T. Hoffmann-Ostenhof and T. Ø. Sørensen, _Analytic structure of many-body Coulombic wave functions_. Comm. Math. Phys. 289(1): 291–310, 2009. * [13] G. Friesecke, _On the infinitude of non-zero eigenvalues of the single-electron density matrix for atoms and molecules_. R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci. 459(2029): 47–52, 2003. * [14] I. M. Gel'fand and G. E. Shilov, _Generalized Functions, Volume 1: Properties and Operations_. AMS Chelsea Publishing, 1964. * [15] C. Hättig, W. Klopper, A. Köhn and D. P. Tew, _Explicitly Correlated Electrons in Molecules_. Chem. Rev. 112(1): 4–74, 2012. * [16] P. Hearnshaw and A. V. Sobolev, _Analyticity of the one-particle density matrix_. arXiv:2006.11785, 2020. * [17] T. Kato, _On the eigenfunctions of many-particle systems in quantum mechanics_. Comm. Pure Appl. Math. 10: 151–177, 1957. * [18] M. Lewin, E. H. Lieb, and R. Seiringer, _Universal Functionals in Density Functional Theory_. arXiv:1912.10424, 2019. * [19] E. H. Lieb and R. Seiringer, _The stability of matter in quantum mechanics_. Cambridge University Press, Cambridge, 2010. * [20] M. Reed and B. Simon, _Methods of modern mathematical physics. II. Fourier analysis, self-adjointness_. Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1975. * [21] B. Simon, _Exponential decay of quantum wave functions_. Online notes, part of B. Simon's Online Selecta at http://www.math.caltech.edu/simon/selecta.html; http://www.math.caltech.edu/simon/Selecta/ExponentialDecay.pdf. * [22] A. V. Sobolev, _Eigenvalue estimates for the one-particle density matrix_, to appear in Journal of Spectral Theory. arXiv:2008.10935, 2020.
# Quantum phenomena inside a black hole: quantization of the scalar field inside the horizon in Schwarzschild spacetime

Pawel Gusin1, Andrzej Radosz1, Andy T. Augousti2, Janos Polonyi3, Oleg B. Zaslavskii4 and Romuald J. Ściborski5

1Department of Quantum Technologies, Wroclaw University of Science and Technology, Wroclaw, Poland 2Faculty of Science, Engineering and Computing, Kingston University London, London, UK 3CNRS-IPHC, Strasbourg University, 23 r. du Loess BP28 67037, Strasbourg Cedex 2, France 4Department of Physics and Technology, Kharkov V.N. Karazin National University, 4 Svoboda Square, Kharkov 61022, Ukraine 5Jaramogi Oginga Odinga University of Science and Technology in Bondo (Kenya)

###### Abstract

We discuss the problem of the quantization and dynamic evolution of a scalar free field in the interior of a Schwarzschild black hole. A unitary approach to the dynamics of the quantized field is proposed: a time-dependent Hamiltonian governing the Heisenberg equations is derived. It is found that the system is represented by a set of harmonic oscillators coupled via terms corresponding to the creation and annihilation of pairs of particles, and that the symmetry properties of the spacetime, homogeneity and isotropy, are obeyed by the coupling terms in the Hamiltonian. It is shown that the Heisenberg equations for annihilation and creation operators are transformed into ordinary differential equations for appropriate Bogolyubov coefficients. Such a formulation leads to a general question concerning the possibility of a gravitationally driven instability, which is however excluded in this case. ## 1 Introduction The horizon of a black hole (BH) may be regarded as a geometrical singularity (a “fake geometrical singularity”). Indeed, considering a Schwarzschild BH in Schwarzschild coordinates one finds the metric tensor exhibiting an on-horizon singularity that is absent in other, singularity-free coordinate systems. There are a variety of singularity-free coordinate systems in this case, e.g. Kruskal-Szekeres, Eddington-Finkelstein, Novikov and others [1-2]. Two interesting observations might be made here. The first is to notice that the presence of the event horizon is manifested both in coordinates revealing the horizon's singularity and in the singularity-free systems. The second is to note the surprising similarities and/or analogies between phenomena taking place outside and inside black holes. A rather well-known example of such a property is the so-called BSW effect [3]. Two-particle collisions occurring in the vicinity of the black hole's horizon may lead to a high-energy outcome according to two scenarios [4-5]. These two scenarios turn out to be the same in the exterior as well as in the interior of the BH. A variety of other aspects of the Exterior vs Interior (a)symmetry have been discussed in Ref. [6]. It was shown by Doran et al. [7] that the interior of a Schwarzschild BH, which is a dynamically changing spacetime, may be regarded as a solution of Einstein's equation. This interior spacetime, also called a “T-sphere” (see [8]), which is globally hyperbolic, then gains the status of a cosmological model. Its 3D spatial-like section is a hypercylinder $\mathbf{R}^{1}\times S^{2}$, expanding longitudinally, along the homogeneity direction $\mathbf{R}^{1}$ (see also [6-8]), and contracting transversally, perpendicularly to this direction in the angular coordinates of the sphere $S^{2}$. However, as shown in Ref.
[7], such a process may be preceded by a process of expansion of the sphere and collapse of the cylinder to its base sphere of radius $r_{S}$. Such an expansion followed by a contraction constitutes the full cycle of the cosmological model introduced in [7]. Various phenomena and processes have been considered both in the interior of the Schwarzschild BH [8, 10-13] and in its extension [7], to which we will hereafter refer as the ”T-model”, an anisotropic cosmological model. In particular, the Yang-Mills and Higgs fields in the Kantowski-Sachs anisotropic, cigar-like – referred to above as a hypercylinder – cosmological model were discussed in [14] (see also [15]). Canonical quantization of the scalar field inside a Schwarzschild BH was presented by Yajnik and Narayan [16], where a so-called tortoise coordinate was used, in consequence leading to a Hamiltonian of diagonal form and, as claimed by the authors, to “QFT set up by the freely falling observer”. Other studies of the quantum properties of the scalar field were given for instance in Refs. [17-18], and investigations of the interior of the Schwarzschild BH were presented in Refs. [19-20]. The most recent results have been given by Almeida and Rodrigues in Ref. [21], where the quantization of BH gravity was discussed, and by Giddings and Perkins in Ref. [22], in which the quantum evolution of the Hawking state in Schwarzschild spacetime was investigated. In this paper we will present a particular quantum aspect of the ”T-model”: namely, the problem of dynamics, i.e. the temporal evolution of the quantized scalar field in such a cosmology, will be introduced and briefly discussed within a unitary approach. The Hamiltonian of the system, represented by a set of harmonic oscillators coupled via creation and annihilation of pairs of particles and revealing interesting symmetry properties, will be derived. The Heisenberg equations of motion for the appropriate annihilation and creation operators will be converted into ordinary differential equations for Bogolyubov coefficients and will be shown to raise the question of an instability that is referred to as a gravitationally driven instability. The paper is organized as follows. In Sec. 2 we discuss the properties of the Schwarzschild BH and the T-model is formulated. In Sec. 3 a scalar field and its quantization are discussed. In Sec. 4 the Hamiltonian of the scalar field is derived, and a discussion is presented in the final section, Sec. 5; the Appendix is devoted to a derivation of the explicit form of the temporal part of the (factorized) Klein-Gordon equation. ## 2 ”T-sphere” model - an anisotropic cosmological model The metric $g_{\mu\nu}$ for the exterior of the Schwarzschild black hole, diagonal in the Schwarzschild coordinates $\left(t,r,\theta,\varphi\right)$, reveals the singularity on the horizon: $ds^{2}=g_{t}\left(r\right)dt^{2}-g_{r}\left(r\right)dr^{2}-g_{2}\left(r\right)d\Omega^{2},$ (2.1) where $g_{t}=1-\frac{2M}{r}=g_{r}^{-1},\qquad g_{2}\left(r\right)=r^{2},$ (2.2) and $d\Omega^{2}$ denotes the metric on the two-dimensional unit sphere $S^{2}$ with the coordinates $\left(\theta,\phi\right)$: $d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}.$ (2.3) The geometrical singularity at the horizon, $r_{S}=2M$, may be removed by a transformation to a singularity-free coordinate system, such as Kruskal-Szekeres, Eddington-Finkelstein, Novikov, Lemaître or other systems [1-2]. The coordinate system (2.1), though ill-defined on the horizon, may be applied inside the horizon (see e.g.
[6-7]). The interior of a BH, $r<r_{S}$, possesses, apart from some well-known properties, some not so well-known ones too (see [9]). The Killing vector $\partial_{t}$ becomes a spatial one, which results in momentum conservation instead of the energy conservation obeyed outside the BH (see below). This is accompanied by an interchange of the roles of the coordinates: $t$ and $r$ play the role of the spatial- and temporal-like coordinates, respectively. An interesting feature of the interior of a Schwarzschild BH is that it may be regarded as a unique spacetime, a cosmological anisotropic model called a ”T-sphere” model or simply T-model [8]. It is described by the line element (2.1) for $r<r_{S}$, but now expressed in terms of the $T\left(=-r\right)$ (temporal) and $z$ (spatial) coordinates instead of the $r$ and $t$ coordinates, respectively: $ds_{-}^{2}=g_{T}dT^{2}-g_{z}dz^{2}-g_{2}\left(T\right)\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right),$ (2.4) where $T\in\left[-r_{S},0\right]$, $z\in\left(-\infty,+\infty\right)$, $g_{T}=\left(\frac{r_{S}}{T}-1\right)^{-1}=g_{z}^{-1}$. At each instant $T_{0}$ the spatial slice is a hypercylinder $\mathbf{R}^{1}\times S^{2}$, expanding longitudinally and contracting transversally, the transverse section being a two-sphere of radius $\left|T_{0}\right|$ (see e.g. [6]). Along the cylinder axis $z$ the system is homogeneous, which results in conservation of the $z$-component of momentum. Phenomena of a classical nature have been considered in the T-model both within a more traditional approach (see e.g. [10-13]) as well as from other specific perspectives (see [9], [23-25]). Here we will consider a special quantum problem, namely the dynamics of the quantized scalar field in the T-model, which will be introduced and briefly discussed within a unitary approach. ## 3 Scalar free field in a T-model A scalar free field $\Phi$ in a space-time $M$ with a metric $g_{\mu\nu}$ is described in terms of the Lagrangian density $\mathcal{L}$: $\mathcal{L}=\frac{1}{2}\sqrt{-g}\left[g^{\mu\nu}\partial_{\mu}\Phi\partial_{\nu}\Phi-\left(\mu^{2}+\xi R\right)\Phi^{2}\right],$ (3.1) where $g=\det\left[g_{\alpha\beta}\right]$, the parameter $\mu$ can be interpreted as the mass only in asymptotically flat space-time, $R$ is the scalar curvature of $M$ and $\xi$ is the field coupling to the spacetime curvature.
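For later use, note that the metric (2.4) has a particularly simple volume element: since $g_{T}=g_{z}^{-1}$, a one-line check gives

$\sqrt{-g}=\sqrt{g_{T}\,g_{z}\,g_{2}^{2}\sin^{2}\theta}=T^{2}\sin\theta\;,$

which is the integration measure appearing in the action (3.2) below.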
In the case of the spacetime (2.4) the coupling with the gravitational field vanishes (as $R=0$) and the action of the scalar free field (3.1) takes the form $S=\frac{1}{2}\int dT\int\limits_{\mathbf{\Sigma}}dzd\Omega T^{2}\left[\frac{1}{g_{T}}\left(\partial_{T}\Phi\right)^{2}-\frac{1}{g_{z}}\left(\partial_{z}\Phi\right)^{2}+\frac{1}{T^{2}}\Phi\Delta_{S^{2}}\Phi-\mu^{2}\Phi^{2}\right],$ (3.2) where $\Sigma=\mathbf{R}^{1}\times S^{2}$, $d\Omega=\sin\theta d\varphi d\theta$, and we have integrated by parts in the sector $S^{2}$, which results in the Laplace operator $\Delta_{S^{2}}$ on $S^{2}$: $\Delta_{S^{2}}\Phi=\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\Phi}{\partial\theta}\right)+\frac{1}{\sin^{2}\theta}\frac{\partial^{2}\Phi}{\partial\varphi^{2}}.$ (3.3) The Klein-Gordon (or Euler-Lagrange) equation $\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Phi\right)+\mu^{2}\Phi=0,$ (3.4) takes in this case the following form: $\partial_{T}\left(T^{2}g_{z}\partial_{T}\Phi\right)-\frac{T^{2}}{g_{z}}\partial_{z}^{2}\Phi-\Delta_{S^{2}}\Phi+\mu^{2}T^{2}\Phi=0.$ (3.5) Taking the field $\Phi$ in the form of a product, $\Phi\left(T,z,\theta,\phi\right)=R\left(T\right)u\left(z\right)Y\left(\theta,\phi\right),$ (3.6) the wave equation (3.5) separates into the following equations: $\Delta_{S^{2}}Y=-l\left(l+1\right)Y,$ (3.7) $\frac{d^{2}u_{\varepsilon}}{dz^{2}}=-\varepsilon^{2}u_{\varepsilon},$ (3.8) $\frac{d}{dT}\left(T^{2}g_{z}\frac{dR_{\varepsilon l}}{dT}\right)+T^{2}\left(\frac{\varepsilon^{2}}{g_{z}}+\mu^{2}+\frac{l\left(l+1\right)}{T^{2}}\right)R_{\varepsilon l}=0,$ (3.9) where $\varepsilon$ is a (separation) constant. The solution of Eq. (3.7) is given by the spherical harmonics $Y_{lm}\left(\theta,\phi\right)$, $\int\limits_{S^{2}}d\Omega Y_{lm}\left(\theta,\varphi\right)Y_{l^{\prime}m^{\prime}}^{\ast}\left(\theta,\varphi\right)=\delta_{ll^{\prime}}\delta_{mm^{\prime}},$ (3.10) $\int\limits_{S^{2}}d\Omega Y_{lm}\left(\theta,\varphi\right)Y_{l^{\prime}-m^{\prime}}\left(\theta,\varphi\right)=\delta_{ll^{\prime}}\delta_{m,-m^{\prime}},$ (3.11) where $m=-l,-(l-1),\ldots,0,\ldots,l$. The solution of equation (3.8) is $u\left(z\right)=e^{\pm i\varepsilon z}.$ (3.12) One can decompose the field $\Phi$ into the complete system of functions on $\mathbf{R}^{1}$ and $S^{2}$. Thus, the real field $\Phi=\Phi^{\ast}$ is represented as: $\Phi\left(T,z,\theta,\varphi\right)=\sum\limits_{\varepsilon,l,m}\left[R_{\varepsilon l}\left(T\right)e^{i\varepsilon z}Y_{lm}\left(\theta,\varphi\right)A_{\varepsilon lm}+R_{\varepsilon l}^{\ast}\left(T\right)e^{-i\varepsilon z}Y_{lm}^{\ast}\left(\theta,\varphi\right)A_{\varepsilon lm}^{\ast}\right],$ (3.13) where $R_{\varepsilon l}\left(T\right)$ are functions of the temporal variable $T$ satisfying the second-order differential equation (3.9) and $A_{\varepsilon lm}$ are Fourier-like coefficients. The (Klein-Gordon) scalar product $\left(\cdot,\cdot\right)$ is in general defined as: $\left(\Phi,\Psi\right)=i\int\limits_{\Sigma_{t}}\left(\Phi^{\ast}\partial_{\mu}\Psi-\Psi\partial_{\mu}\Phi^{\ast}\right)n^{\mu}dvol\left(\Sigma_{t}\right),$ (3.14) where $n=n^{\mu}\partial_{\mu}$ denotes the unit time-like vector field orthogonal to a space-like hypersurface (slice) $\Sigma_{t}$ and $\Phi,\Psi$ are solutions of the Klein-Gordon equation.
In this case $\Sigma_{t}\simeq A\times S^{2}$ and the scalar product takes the form (see [17], [26]): $\left(\Phi,\Psi\right)=iT^{2}g_{z}\int\limits_{S^{2}}\sin\theta d\theta d\phi\int\limits_{A}\left(\Phi^{\ast}\partial_{T}\Psi-\Psi\partial_{T}\Phi^{\ast}\right)dz.$ (3.15) There is the following normalization condition $A_{\varepsilon lm}=\left(R_{\varepsilon l}\left(T\right)e^{i\varepsilon z}Y_{lm}\left(\theta,\varphi\right),\Phi\right),$ (3.16) where $\Phi$ is given by (3.13), which is equivalent to imposing the canonical commutation relations (see also below). After some (lengthy but simple) algebra one finds that condition (3.16) is satisfied iff $T^{2}g_{z}\left[R_{\varepsilon l}^{\ast}\overset{\cdot}{R}_{\varepsilon l}-\overset{\cdot}{R}_{\varepsilon l}^{\ast}R_{\varepsilon l}\right]=-i,$ (3.17) $R_{\varepsilon l}^{\ast}\overset{\cdot}{R}_{-\varepsilon l}^{\ast}-R_{-\varepsilon l}^{\ast}\overset{\cdot}{R}_{\varepsilon l}^{\ast}=0.$ (3.18) The condition (3.17) is derived from the differential equation (3.9). First, one writes Eq. (3.9) for the complex conjugated function $R_{\varepsilon l}^{\ast}$; then one multiplies it by $R_{\varepsilon l}$ and Eq. (3.9) by $R_{\varepsilon l}^{\ast}$; finally one subtracts the former from the latter, obtaining $d_{T}\left(T^{2}g_{z}\left[R_{\varepsilon l}^{\ast}\overset{\cdot}{R}_{\varepsilon l}-\overset{\cdot}{R}_{\varepsilon l}^{\ast}R_{\varepsilon l}\right]\right)=0.$ (3.19) Therefore, (3.17) turns out to be a normalization condition for $R_{\varepsilon l}$, i.e. for the Wronskian in this case, as it should be. On the other hand, Eq. (3.18) is satisfied identically. ### 3.1 Quantization Quantization of the field (3.1-2) is performed in a canonical way. Namely, one introduces the momentum field as the field canonically conjugated to $\Phi\left(T,z,\theta,\varphi\right)$, i.e. $\pi=\frac{\partial\mathcal{L}}{\partial\left(\partial_{T}\Phi\right)}=\frac{T^{2}}{g_{T}}\partial_{T}\Phi.$ (3.20) Then one imposes the canonical commutation relations $\left[\widehat{\Phi}\left(T,\mathbf{x}\right),\widehat{\pi}\left(T,\mathbf{y}\right)\right]=i\delta\left(\mathbf{x}-\mathbf{y}\right),\qquad\left[\widehat{\Phi}\left(T,\mathbf{x}\right),\widehat{\Phi}\left(T,\mathbf{y}\right)\right]=\left[\widehat{\pi}\left(T,\mathbf{x}\right),\widehat{\pi}\left(T,\mathbf{y}\right)\right]=0,$ (3.21) where $\mathbf{x},\mathbf{y}\in\Sigma_{t}$. In our case the slice $\Sigma_{t}$ has the topology of the product space of the set $A\subset\mathbf{R}^{1}$ and the two-dimensional sphere $S^{2}$. The momentum field given in its Fourier decomposed form is: $\widehat{\pi}\left(T,z,\theta,\phi\right)=\frac{T^{2}}{g_{T}}\sum\limits_{\varepsilon,l,m}\left[\widehat{A}_{\varepsilon lm}\overset{\cdot}{R}_{\varepsilon l}\left(T\right)e^{i\varepsilon z}Y_{lm}\left(\theta,\phi\right)+\widehat{A}_{\varepsilon lm}^{{\dagger}}\overset{\cdot}{R}_{\varepsilon l}^{\ast}\left(T\right)e^{-i\varepsilon z}Y_{lm}^{\ast}\left(\theta,\phi\right)\right].$ (3.22) The canonical commutation relations Eqs. (3.21) turn out to be satisfied under the following conditions: a) $\widehat{A}_{\varepsilon lm}$, $\widehat{A}_{\varepsilon lm}^{{\dagger}}$ are the annihilation and creation operators, respectively, i.e. the only nonvanishing commutator is $\left[\widehat{A}_{\varepsilon lm},\widehat{A}_{\varepsilon^{\prime}l^{\prime}m^{\prime}}^{{\dagger}}\right]=\delta_{\varepsilon\varepsilon^{\prime}}\delta_{ll^{\prime}}\delta_{mm^{\prime}};$ (3.23) b) the Wronskian normalization (3.17) must hold.
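Before proceeding, the mode functions and the normalization (3.17) can be examined numerically. The following minimal sketch (Python; the parameter values are illustrative, we set $\hbar=1$, $r_{S}=1$, and we follow the appendix convention $g_{z}=r_{S}/T-1$ with $T\in(0,r_{S})$, integrating over an interior interval to avoid the singular endpoints) integrates Eq. (3.9) as a first-order system and checks that $W(T)=T^{2}g_{z}\big(R^{\ast}\overset{\cdot}{R}-\overset{\cdot}{R}^{\ast}R\big)$ stays constant, in accordance with (3.19):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not taken from the paper); units hbar = 1, r_S = 1.
rS, eps, l, mu = 1.0, 2.0, 1, 0.5

def P(T):   # P(T) = T^2 g_z = T^2 (r_S/T - 1) = r_S T - T^2
    return rS * T - T**2

def Q(T):   # Q(T) = T^2 (eps^2/g_z + mu^2) + l(l+1)
    gz = rS / T - 1.0
    return T**2 * (eps**2 / gz + mu**2) + l * (l + 1)

def rhs(T, y):
    # y = [R, S] with S = P(T) dR/dT; Eq. (3.9) then reads dS/dT = -Q(T) R.
    R, S = y
    return [S / P(T), -Q(T) * R]

# Arbitrary complex initial data at T = 0.9 r_S, for illustration only.
y0 = np.array([1.0 + 0.0j, 0.3j])
sol = solve_ivp(rhs, (0.9 * rS, 0.1 * rS), y0, rtol=1e-10, atol=1e-12,
                dense_output=True)

# Wronskian W(T) = P (R* dR/dT - c.c.) = R* S - R S*; by Eq. (3.19), dW/dT = 0.
for T in [0.9 * rS, 0.5 * rS, 0.1 * rS]:
    R, S = sol.sol(T)
    W = np.conj(R) * S - R * np.conj(S)
    print(f"T = {T:.2f}:  W = {W.imag:+.10f} i")
```

Rescaling $R\to\lambda R$ with $|\lambda|^{2}=1/|W|$ then enforces the normalization $W=-i$ of (3.17).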
## 4 Hamiltonian of the scalar field in a T-model The Hamiltonian of the field described by the Lagrangian density $\mathcal{L}$ is determined as an integral over the spatial part $\mathbf{\Sigma}$ of the spacetime, $H=\int\limits_{\mathbf{\Sigma}}d^{3}x\left[\pi\partial_{T}\Phi-\mathcal{L}\right],$ (4.1) and this expression is equivalent to the (integrated) $T_{TT}$ element of the stress-energy tensor. Applying formula (4.1) to the case (2.4) and (3.1), one obtains $H=\frac{1}{2}\int\limits_{\mathbf{\Sigma}}dzd\theta d\varphi T^{2}\sin\theta\left[\frac{1}{g_{T}}\left(\partial_{T}\Phi\right)^{2}+\frac{1}{g_{z}}\left(\partial_{z}\Phi\right)^{2}-\Phi\Delta_{S^{2}}\Phi+\mu^{2}\Phi^{2}\right].$ (4.2) Using the Fourier decomposition of the quantized field and momentum (see Eqs. (3.13), (3.22)) one finds the Hamiltonian of the quantized scalar field expressed in terms of annihilation and creation operators: $H=\frac{1}{2}\sum\limits_{\varkappa}\left[\omega_{\varkappa}\widehat{A}_{\varkappa}\widehat{A}_{\varkappa}^{{\dagger}}+\gamma_{\varkappa\varkappa^{\prime}}\widehat{A}_{\varkappa}\widehat{A}_{\varkappa^{\prime}}+\mathrm{(h.c.)}\right],$ (4.3) where the indices $\varkappa,\varkappa^{\prime}$ correspond to the three-letter index sets $\varepsilon lm$. The parameters $\omega_{\varkappa},\gamma_{\varkappa\varkappa^{\prime}}$ are given as $\gamma_{\varepsilon lm/\varepsilon^{\prime}lm^{\prime}}=\left[T^{2}g_{z}\overset{\cdot}{R}_{\varepsilon l}\overset{\cdot}{R}_{-\varepsilon l}+T^{2}\left\{\frac{\varepsilon^{2}}{g_{z}}+\frac{l\left(l+1\right)}{T^{2}}+\mu^{2}\right\}R_{\varepsilon l}\left(T\right)R_{-\varepsilon l}\left(T\right)\right]\delta_{\varepsilon,-\varepsilon^{\prime}}\delta_{m,-m^{\prime}},$ (4.4) $\omega_{\varepsilon lm}=\left[T^{2}g_{z}\overset{\cdot}{R}_{\varepsilon l}\overset{\cdot}{R}_{\varepsilon l}^{\ast}+T^{2}\left\{\frac{\varepsilon^{2}}{g_{z}}+\frac{l\left(l+1\right)}{T^{2}}+\mu^{2}\right\}R_{\varepsilon l}\left(T\right)R_{\varepsilon l}^{\ast}\left(T\right)\right].$ (4.5) Therefore, the Hamiltonian of the scalar field in the T-model, i.e. the anisotropic cosmological model representing the interior of the Schwarzschild BH, turns out to be $H=\frac{1}{2}\sum\limits_{\varepsilon lm}\left[\omega_{\varepsilon lm}\left(\widehat{A}_{\varepsilon lm}\widehat{A}_{\varepsilon lm}^{{\dagger}}+\widehat{A}_{\varepsilon lm}^{{\dagger}}\widehat{A}_{\varepsilon lm}\right)+\gamma_{\varepsilon lm/-\varepsilon l-m}\widehat{A}_{\varepsilon lm}\widehat{A}_{-\varepsilon l-m}+\gamma_{\varepsilon lm/-\varepsilon l-m}^{\ast}\widehat{A}_{\varepsilon lm}^{{\dagger}}\widehat{A}_{-\varepsilon l-m}^{{\dagger}}\right],$ (4.6) representing a set of interacting, time-dependent harmonic oscillators. On this basis one can study the dynamics of the quantized scalar field.
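Given a numerically obtained mode function $R_{\varepsilon l}(T)$ (e.g. from the sketch in Sec. 3.1), the coefficients (4.4)-(4.5) are simple pointwise expressions. A minimal sketch (Python; the helper name is ours, and we use $R_{-\varepsilon l}=R_{\varepsilon l}$, which holds since Eq. (3.9) depends on $\varepsilon$ only through $\varepsilon^{2}$):

```python
import numpy as np

def omega_gamma(T, R, Rdot, eps, l, mu, rS=1.0):
    """Evaluate omega_{eps l m} and gamma_{eps lm/-eps l -m} of Eqs. (4.4)-(4.5)
    at time T, given R = R_{eps l}(T) and Rdot = dR/dT; uses R_{-eps l} = R_{eps l}."""
    gz = rS / T - 1.0
    kin = T**2 * gz                                    # coefficient of the |Rdot|^2 term
    pot = T**2 * (eps**2 / gz + mu**2) + l * (l + 1)   # coefficient of the |R|^2 term
    omega = kin * abs(Rdot)**2 + pot * abs(R)**2       # Eq. (4.5); real and positive
    gamma = kin * Rdot**2 + pot * R**2                 # Eq. (4.4); complex
    return omega, gamma
```

These coefficients feed directly into the Heisenberg dynamics discussed next.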
The evolution of the system is described by the Heisenberg equation of motion for the operators $\widehat{A}_{\varepsilon lm}$, $i\frac{d}{dt}\widehat{A}_{\varepsilon lm}=\left[\widehat{A}_{\varepsilon lm},\widehat{H}\right]=\omega_{\varepsilon lm}\left(t\right)\widehat{A}_{\varepsilon lm}\left(t\right)+\gamma_{\varepsilon lm}^{\ast}\left(t\right)\widehat{A}_{-\varepsilon l-m}^{{\dagger}}\left(t\right),$ (4.7) where $\gamma_{\varepsilon lm/-\varepsilon l-m}\equiv\gamma_{\varepsilon lm}$. One can search for solutions of the above equations by using the following ansatz: $\widehat{A}_{\varepsilon lm}\left(t\right)=\alpha_{\varepsilon lm}\left(t\right)\widehat{A}_{\varepsilon lm}+\beta_{\varepsilon lm}\left(t\right)\widehat{A}_{-\varepsilon l-m}^{{\dagger}},$ (4.8) where $\alpha_{\varepsilon lm}\left(t\right)$ and $\beta_{\varepsilon lm}\left(t\right)$ are some unknown complex functions and $\widehat{A}_{\varepsilon lm}$ and $\widehat{A}_{-\varepsilon l-m}^{{\dagger}}$ are time-independent operators. By definition, the relation (4.8) preserves the commutation relations (3.23); hence it is a Bogolyubov transformation, $\left|\alpha_{\varepsilon lm}\left(t\right)\right|^{2}-\left|\beta_{\varepsilon lm}\left(t\right)\right|^{2}=1.$ (4.9) Then, the Heisenberg equations (4.7) are converted into differential equations for the Bogolyubov coefficients $i\frac{d}{dt}\alpha_{\varepsilon lm}\left(t\right)=\omega_{\varepsilon lm}\left(t\right)\alpha_{\varepsilon lm}\left(t\right)+\gamma_{\varepsilon lm}^{\ast}\left(t\right)\beta_{\varepsilon lm}^{\ast}\left(t\right),$ (4.10) $i\frac{d}{dt}\beta_{\varepsilon lm}\left(t\right)=\omega_{\varepsilon lm}\left(t\right)\beta_{\varepsilon lm}\left(t\right)+\gamma_{\varepsilon lm}^{\ast}\left(t\right)\alpha_{\varepsilon lm}^{\ast}\left(t\right).$ (4.11) In general, one cannot expect exact solutions of Eqs. (4.10-11), and approximate schemes must therefore be developed. Our forthcoming paper will be devoted to a comprehensive discussion of this problem. ## 5 Discussion Considering the interior of a Schwarzschild BH as a unique spacetime, an anisotropic cosmological model, we have performed the quantization of the free (noninteracting) scalar field by imposing the canonical commutation relations. One decomposes the field and momentum in terms of the complete set of solutions of the Klein-Gordon (or, in fact, Euler-Lagrange) equation, with the coefficients of the expansion being annihilation and creation operators. This procedure leads to a Hamiltonian of the quantized scalar field taking the form of a set of harmonic, time-dependent oscillators coupled in a special way: there are terms in the Hamiltonian corresponding to annihilation, $\gamma_{\varepsilon lm}\widehat{A}_{\varepsilon lm}\widehat{A}_{-\varepsilon l-m}$, and creation, $\gamma_{\varepsilon lm}^{\ast}\widehat{A}_{\varepsilon lm}^{{\dagger}}\widehat{A}_{-\varepsilon l-m}^{{\dagger}}$, of particles in pairs. Such a picture, peculiar at first sight, has a deeper meaning. The spacetime considered is dynamical: there is no energy conservation, hence the Hamiltonian contains terms representing spontaneous creation and annihilation of pairs of particles. Homogeneity of the spacetime along the $z$-direction results in the presence of a spatial-like Killing vector and hence in conservation of the $z$-component of momentum.
The Hamiltonian (4.6) reflects this symmetry property: pairs of particles with opposite $z$-momenta may be created, $\widehat{A}_{\varepsilon lm}^{{\dagger}}\widehat{A}_{-\varepsilon l-m}^{{\dagger}}$, and annihilated, $\widehat{A}_{\varepsilon lm}\widehat{A}_{-\varepsilon l-m}$; the Hamiltonian of the system also obeys rotational invariance. The conservation of the $z$-momentum component in the terms represented by $\gamma_{\varepsilon lm}$ and $\gamma_{\varepsilon lm}^{\ast}$ in the Hamiltonian is an analogue of energy conservation outside the BH, i.e. the particles in a pair carry positive/negative energy; the one with negative energy cannot survive outside the BH, only within its horizon. There is a more or less obvious interpretation of the $\beta_{\varepsilon lm}\left(t\right)$ coefficient of the Bogolyubov transformation (4.8): its modulus squared gives the number of particles created during the evolution of the system, $\left\langle 0\left(t\right)\right|\widehat{A}_{\varepsilon lm}^{{\dagger}}\widehat{A}_{\varepsilon lm}\left|0\left(t\right)\right\rangle=\left\langle 0\right|\widehat{A}_{\varepsilon lm}^{{\dagger}}\left(t\right)\widehat{A}_{\varepsilon lm}\left(t\right)\left|0\right\rangle=\left|\beta_{\varepsilon lm}\left(t\right)\right|^{2},$ (5.1) where $\left|0\right\rangle$ is the vacuum state for fixed time $t=0$ and annihilation operators $\widehat{A}_{\varepsilon lm}$, while $\left|0\left(t\right)\right\rangle$ is the vacuum state for a later time $t$ and annihilation operators $\widehat{A}_{\varepsilon lm}\left(t\right)$. Due to the violent dynamics of the background spacetime, one may expect the creation and annihilation of (pairs of) particles to be equally violent, so conventional adiabatic-like approaches (see e.g. [17-18]) can hardly be regarded as a working scheme. Therefore, attempts to find an approximate solution within the treatment proposed here, which might be called a “unitary approach” as it is based on the unitarity of the evolution of the system, will be discussed in our following paper. One interesting aspect of the dynamics of the model (3.1) will, however, be briefly discussed here: the question of a possible instability of the system of interacting harmonic oscillators (4.6) (see [27-28]). The oscillators interact in pairs, $\left(\varepsilon lm\right)/\left(-\varepsilon l-m\right)$, and one can consider diagonalization (at an arbitrary instant $T^{\prime}$) of the Hamiltonian corresponding to such a subsystem. The frequency in such a diagonalized case is given as: $\Omega_{\varepsilon lm}^{2}=\omega_{\varepsilon lm}^{2}-\left|\gamma_{\varepsilon lm/-\varepsilon l-m}\right|^{2}.$ (5.2) This expression should be positive, otherwise the system is unstable (see [27]) (this problem will be discussed in detail in our following paper); such an instability would be named a “gravitationally driven instability”. One can check, using Eqs. (4.4-5), that the right-hand side of Eq. (5.2) evaluates to $\Omega_{\varepsilon lm}^{2}=\frac{1}{g_{z}}\left[\frac{\varepsilon^{2}}{g_{z}}+\frac{l\left(l+1\right)}{T^{2}}+\mu^{2}\right],$ (5.3) which is positive: there is no gravitational instability for the scalar field quantized in the Doran et al. [7] spacetime. An interesting issue is that, apart from the possible instability of type (5.2), which might be referred to as “a restoring force instability”, there is also another possible instability, namely “a friction driven instability”, but the problem of its origin and character will be discussed elsewhere.
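For completeness, the positivity claim (5.3) can be verified in a few lines using only relations derived above. Writing $P=T^{2}g_{z}$ and $Q=T^{2}\left(\frac{\varepsilon^{2}}{g_{z}}+\frac{l(l+1)}{T^{2}}+\mu^{2}\right)$, and using $R_{-\varepsilon l}=R_{\varepsilon l}$ (Eq. (3.9) depends on $\varepsilon$ only through $\varepsilon^{2}$), Eqs. (4.4)-(4.5) read $\omega_{\varepsilon lm}=P|\overset{\cdot}{R}_{\varepsilon l}|^{2}+Q|R_{\varepsilon l}|^{2}$ and $\gamma_{\varepsilon lm}=P\overset{\cdot}{R}_{\varepsilon l}^{2}+QR_{\varepsilon l}^{2}$, so that

$\Omega_{\varepsilon lm}^{2}=\omega_{\varepsilon lm}^{2}-\left|\gamma_{\varepsilon lm}\right|^{2}=PQ\left(2|\overset{\cdot}{R}_{\varepsilon l}|^{2}|R_{\varepsilon l}|^{2}-\overset{\cdot}{R}_{\varepsilon l}^{2}R_{\varepsilon l}^{\ast 2}-\overset{\cdot}{R}_{\varepsilon l}^{\ast 2}R_{\varepsilon l}^{2}\right)=PQ\left|R_{\varepsilon l}^{\ast}\overset{\cdot}{R}_{\varepsilon l}-\overset{\cdot}{R}_{\varepsilon l}^{\ast}R_{\varepsilon l}\right|^{2}=\frac{Q}{P}\;,$

where the last equality uses the Wronskian normalization (3.17), $P\big(R^{\ast}\overset{\cdot}{R}-\overset{\cdot}{R}^{\ast}R\big)=-i$; the ratio $Q/P$ is precisely the right-hand side of (5.3), which is positive wherever $g_{z}>0$.

As an illustration of the dynamics governed by Eqs. (4.10)-(4.11), the following minimal sketch (Python) integrates the Bogolyubov coefficients for placeholder profiles $\omega(t)$ and $\gamma(t)$, chosen by us purely to exercise the equations; in an actual computation these coefficients follow from (4.4)-(4.5). The conserved combination (4.9) serves as a consistency check:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder coefficient profiles (illustrative only; the physical ones
# follow from Eqs. (4.4)-(4.5) evaluated on the mode functions R_{eps l}(T)).
def omega(t):
    return 2.0 + 0.5 * np.sin(0.8 * t)

def gamma(t):
    return 0.3 * np.exp(-1j * 0.8 * t)

def rhs(t, y):
    # y = [alpha, beta]; Eqs. (4.10)-(4.11):
    #   i d(alpha)/dt = omega alpha + gamma* beta*,
    #   i d(beta)/dt  = omega beta  + gamma* alpha*.
    a, b = y
    da = -1j * (omega(t) * a + np.conj(gamma(t)) * np.conj(b))
    db = -1j * (omega(t) * b + np.conj(gamma(t)) * np.conj(a))
    return [da, db]

y0 = np.array([1.0 + 0.0j, 0.0j])        # alpha(0) = 1, beta(0) = 0
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-10, atol=1e-12)

a, b = sol.y[0, -1], sol.y[1, -1]
print("created particles |beta|^2 =", abs(b)**2)
print("Bogolyubov constraint |alpha|^2 - |beta|^2 =", abs(a)**2 - abs(b)**2)
```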
## 6 Appendix Let us briefly analyze the form of the temporal part of the Klein-Gordon equation in this case, i.e. Eq. (3.9): $\frac{1}{T^{2}}\frac{d}{dT}\left(T^{2}g_{z}\frac{dR}{dT}\right)+\left(\frac{\varepsilon^{2}}{g_{z}}+\mu^{2}+\frac{l\left(l+1\right)}{T^{2}}\right)R=0,$ (A.1) where $g_{z}=\frac{r_{S}}{T}-1$, and the lower labels have been omitted here. Making the substitution $R=f\eta$, one finds $\frac{1}{T^{2}}\frac{d}{dT}\left(T^{2}g_{z}\frac{dR}{dT}\right)=\frac{1}{T^{2}}\left[\left(r_{S}-2T\right)\left(f^{\prime}\eta+f\eta^{\prime}\right)+\left(r_{S}T-T^{2}\right)\left(f^{\prime\prime}\eta+f\eta^{\prime\prime}+2f^{\prime}\eta^{\prime}\right)\right],$ (A.2) where the prime denotes differentiation with respect to $T$. Requiring $\left(r_{S}-2T\right)f+2\left(r_{S}T-T^{2}\right)f^{\prime}=0,$ (A.3) one gets $R\left(T\right)$ in the form $R=\frac{\eta}{\sqrt{T\left(r_{S}-T\right)}},$ (A.4) and $\eta\left(T\right)$ satisfies the following confluent Heun equation $\left[\frac{d^{2}}{dT^{2}}+\nu^{2}\left(T\right)\right]\eta=0,$ (A.5) where $\nu^{2}\left(T\right)=A+\frac{B}{T}+\frac{C}{\left(r_{S}-T\right)}+\frac{D}{T^{2}}+\frac{E}{\left(r_{S}-T\right)^{2}},$ (A.6) and the five coefficients $A,\ldots,E$ are equal to: $\displaystyle A=\left(\varepsilon^{2}-\mu^{2}\right),\text{ \ \ }B=\frac{1}{2r_{S}}\left(2l\left(l+1\right)+1\right),$ $\displaystyle C=r_{S}\left(\mu^{2}+2\varepsilon^{2}\right)+B,\text{ \ \ }D=\frac{1}{4},$ $\displaystyle E=D-2\left(1+r_{S}^{2}\varepsilon^{2}\right).$ ## 7 References [1] V. P. Frolov and I. D. Novikov, Black Hole Physics: Basic Concepts and New Developments, Kluwer Academic, Dordrecht, The Netherlands, 1998 [2] J. B. Hartle, Gravitation, Addison Wesley, 2003 [3] M. Bañados, J. Silk and S. M. West, Kerr Black Holes as Particle Accelerators to Arbitrarily High Energy, Phys. Rev. Lett. 103, 111102 (2009); arXiv:0909.0169 [4] T. Harada and M. Kimura, Black holes as particle accelerators: A brief review, Classical Quantum Gravity 31, 243001 (2014) [5] O. B. Zaslavskii, The Bañados-Silk-West effect with immovable particles near static black holes and its rotational counterpart, arXiv:2207.03213v2 [gr-qc] (2022), https://doi.org/10.48550/arXiv.2207.03213 [6] P. Gusin, A. T. Augousti, F. Formalik and A. Radosz, The (A)symmetry between the Exterior and Interior of a Schwarzschild Black Hole, Symmetry 10, 366 (2018) [7] R. Doran, F. S. Lobo and P. Crawford, Interior of a Schwarzschild black hole revisited, Foundations of Physics, vol. 38, no. 2, p. 160 (2008) [8] V. A. Ruban, Spherically Symmetric T-Models in the General Theory of Relativity, Gen. Rel. Grav. 33, no. 2 (2001) [9] A. J. S. Hamilton and G. Polhemus, Stereoscopic visualization in curved spacetime: Seeing deep inside a black hole, New J. Phys. 12, 123027–123052 (2010) [10] A. Radosz, P. Gusin, A. T. Augousti and F. Formalik, Inside spherically symmetric black holes or how a uniformly accelerated particle may slow down, Eur. Phys. J. C 79, 876 (2019), https://doi.org/10.1140/epjc/s10052-019-7372-5 [11] A. V. Toporensky and O. B. Zaslavskii, Zero-momentum trajectories inside a black hole and high energy particle collisions, J. Cosmol. Astropart. Phys. 2019(12): 063 [12] A. T. Augousti, P. Gusin, B. Kuśmierz, J. Masajada and A. Radosz, On the speed of a test particle inside the Schwarzschild event horizon and other kinds of black holes, Gen. Relativ. Gravit. 50, 131 (2018), doi:10.1007/s10714-018-2445-6 [13] A. V. Toporensky and O. B. Zaslavskii, Redshift of a photon emitted along the black hole horizon, Eur. Phys. J.
C, vol. 77, no. 3, p. 179 (2017) [14] D. V. Gal'tsov and E. E. Donets, Power-law mass inflation in Einstein-Yang-Mills-Higgs black holes, arXiv:gr-qc/9706067v1, 22 Jun 1997 [15] E. E. Donets, D. V. Gal'tsov and M. Y. Zotov, Internal Structure of Einstein-Yang-Mills Black Holes, https://arxiv.org/pdf/gr-qc/9612067 [16] U. A. Yajnik and K. Narayan, CPT invariance and canonical quantization inside the Schwarzschild black hole, Class. Quantum Grav. 15, 1315 (1998) [17] L. Parker and D. J. Toms, Quantum Field Theory in Curved Spacetime: Quantized Fields and Gravity, 2009 [18] S. Habib, C. Molina-Paris and E. Mottola, Energy-Momentum Tensor of Particles Created in an Expanding Universe, Phys. Rev. D 61, 024010 (2000), arXiv:gr-qc/9906120 [19] G. Tsoupros, Conformal Scalar Propagation inside the Schwarzschild Black Hole, Gen. Rel. Grav. 44, 309–351 (2012) [20] N. Oshita, Resolution to the firewall paradox: The black hole information paradox and highly squeezed interior quantum fluctuations, Class. Quantum Grav. 34, no. 19, 195002 (2017) [21] C. R. Almeida and D. C. Rodrigues, Quantization of a black-hole gravity: geometrodynamics and the quantum, Class. Quantum Grav. 40, no. 3, 035004 (2023) [22] S. B. Giddings and J. Perkins, Quantum evolution of the Hawking state for black holes, arXiv:2204.1312 [hep-th] [23] M. Christodoulou and C. Rovelli, How big is a black hole?, Phys. Rev. D 91, 064046 (2015) [24] P. Gusin and A. Radosz, The volume of the black holes - the constant curvature slicing of the spherically symmetric spacetime, Mod. Phys. Lett. A 32, 1750115 (2017) [25] O. B. Zaslavskii, Schwarzschild Black Hole as Accelerator of Accelerated Particles, JETP Lett. 111 (2020), doi:10.1134/S0021364020050033 [26] N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space, Cambridge, 1984 [27] K. Rajeev, S. Chakraborty and T. Padmanabhan, Inverting a normal harmonic oscillator: physical interpretation and applications, Gen. Relativ. Gravit. 50, 116 (2018), https://doi.org/10.1007/s10714-018-2438-5 [28] Ya. B. Zel'dovich and A. A. Starobinsky, Particle production and vacuum polarization in an anisotropic gravitational field, Sov. Phys. JETP 34, 1159 (1972)
Identifying diffusive length scales in one-dimensional Bose gases

Frederik Møller1$\star$, Federica Cataldini1 and Jörg Schmiedmayer1

1 Vienna Center for Quantum Science and Technology (VCQ), Atominstitut, TU Wien, Vienna, Austria ⋆<EMAIL_ADDRESS>

## Abstract

In the hydrodynamics of integrable models, diffusion is a subleading correction to ballistic propagation. Here we quantify the diffusive contribution for one-dimensional Bose gases and find it most influential in the crossover between the main thermodynamic regimes of the gas. Analysing the experimentally measured dynamics of a single density mode, we find diffusion to be relevant only for high wavelength excitations. Instead, the observed relaxation is solely caused by a ballistically driven dephasing process, whose time scale is related to the phonon lifetime of the system and is thus useful to evaluate the applicability of the phonon bases typically used in quantum field simulators.

###### Contents

1 Introduction
2 Propagation of quasi-particles in the 1D Bose gas
  2.1 Generalized Hydrodynamics at the diffuse scale
  2.2 Linearization of the diffusive GHD equation
  2.3 Critical length scale of diffusion
3 Magnitude of diffusive corrections in thermal states
4 Quasi-particle dynamics following a single-mode quench in a quasi-condensate
  4.1 Experimental setup and protocol
  4.2 Analysis of quasi-particle propagation
    4.2.1 Comparing experimental observations with linearized GHD
    4.2.2 Quasi-particle dephasing and applicability of phonon basis
    4.2.3 Diffusive length scales
  4.3 Single-mode quench in the maximal diffusive regime
5 Conclusion
A Thermodynamic Bethe Ansatz
B Quasi-particle propagation versus energy

## 1 Introduction

The last few decades have seen significant advances in techniques for realizing and manipulating quantum many-body systems, particularly in the form of gases of ultracold atoms [1]. The behavior of interacting quantum many-body systems is, however, notoriously difficult to describe [2, 3], spurring the development of new theoretical models. In continuous systems with a large number of particles, microscopic descriptions are generally intractable; instead, models describing the emergent, large-scale behavior of the system are more appealing [4]. For such purposes, hydrodynamics provides a powerful framework, describing the large-scale flow of densities of conserved quantities [5, 6]. However, integrable systems, such as the one-dimensional (1D) Bose gas, feature an infinite number of conserved quantities, thus complicating the formulation of a hydrodynamic theory; this infinite number of conservation laws poses a severe constraint on dynamics and inhibits thermalization [7]. The breakthrough came with the advent of Generalized Hydrodynamics (GHD) [8, 9], which parameterizes the hydrodynamics in terms of quasi-particles given by the Bethe Ansatz solution to integrable models. At lowest order (Euler scale), the working GHD equation is a collisionless Boltzmann equation [10], where quasi-particles propagate ballistically at an effective velocity modified by collisions with other particles. Beyond the Euler scale one must account for diffusive effects [11], which arise because the number of collisions experienced by a particle is subject to equilibrium thermal fluctuations [12]. The result is a Gaussian broadening of the quasi-particle trajectories [13]. Unlike for most non-integrable systems, diffusion in GHD arises as a subleading correction to Euler-scale hydrodynamics.
Thus, diffusive effects become evident in dynamics mainly at long time scales. For instance, in the presence of an inhomogeneous potential, diffusion has been shown to cause a gradual thermalization of a 1D Bose gas [14]. In real systems, however, other mechanisms of integrability breaking [15], such as noise [16], losses [17], or violation of one-dimensionality [18, 19, 20, 21], are likely stronger and may therefore mask signatures of diffusion. Indeed, in experimental tests of GHD with 1D Bose gases, no clear signs of diffusion have been observed thus far [22, 23]. In this work, we study and quantify the influence of diffusion on the dynamics of one-dimensional Bose gases, particularly on time and length scales accessible in experimental setups. First, we calculate the diffusive spreading of quasi-particle trajectories relative to their ballistic propagation velocity in thermal states. We consider interaction strengths and temperatures ranging across several orders of magnitude, thus allowing the identification of the thermodynamic regimes where diffusive effects are most prominent. Next, we analyse the experimental results of Ref. [24] in order to identify the length scales at which diffusive effects become relevant in a quasi-condensate; the setup of Ref. [24] is especially well-suited for such analysis, as the high degree of control over the potential enabled the excitation of a single momentum mode, thus creating excitations at a desired length scale. Our analysis also provides a proxy for the phonon lifetime, enabling one to gauge the applicability of a phonon basis to describe dynamics. Phonon bases have been used extensively to describe quantum field simulators realized with ultracold atoms [25, 26, 27, 28, 29, 30, 31]; identifying the time scales at which a low-energy effective theory remains valid is thus beneficial for a number of applications. ## 2 Propagation of quasi-particles in the 1D Bose gas Upon confinement to a single spatial dimension, an ultracold gas of $N$ repulsively interacting bosons of mass $m$ is well described by the Lieb-Liniger model [32, 33], whose Hamiltonian reads ${\mathcal{H}}=-\sum_{i}^{N}\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial z_{i}^{2}}+g\sum_{i<j}^{N}\delta\left(z_{i}-z_{j}\right)\;.$ (1) Here, $z_{i}$ is the position of the $i$'th boson and $g>0$ is their coupling strength, which in an experimental system depends on the s-wave scattering length of the atoms and the potential confining the atoms to 1D [34]. Often, the coupling strength is parameterized as $c=mg/\hbar^{2}$, in units of inverse length. By virtue of integrability, the 1D Bose gas can be solved exactly using the Bethe Ansatz, which identifies the elementary excitations as fermionic quasi-particles uniquely labelled by their quasi-momentum, or rapidity, $\theta$ [32, 33]. In the thermodynamic limit, the density of occupied rapidities, i.e. the density of quasi-particles, $\rho_{\mathrm{p}}(\theta)$ fully characterises the thermodynamic properties of the local equilibrium macrostate. Similarly, a density of holes $\rho_{\mathrm{h}}(\theta)$ can be introduced, which describes the density of unoccupied rapidities; together the two densities form the density of states $\rho_{\mathrm{s}}(\theta)$. The occupational fraction of allowed rapidity states, dubbed the filling function $\vartheta(\theta)=\rho_{\mathrm{p}}(\theta)/\rho_{\mathrm{s}}(\theta)$, equivalently characterises the macrostate.
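To make this concrete: for a thermal state (the case analysed in Sec. 3 below), the filling function follows from the Yang-Yang thermodynamic Bethe Ansatz (see Appendix A and Ref. [36]) via a simple fixed-point iteration. The sketch below (Python) uses the standard form of these equations in units $\hbar=2m=k_{\mathrm{B}}=1$; the parameter values are ours and purely illustrative:

```python
import numpy as np

# Illustrative parameters in units hbar = 2m = k_B = 1 (values are ours).
c, T, mu_ch = 1.0, 1.0, 1.0
th = np.linspace(-12.0, 12.0, 601)   # rapidity grid
dth = th[1] - th[0]                  # uniform quadrature weight
K = 2 * c / (c**2 + (th[:, None] - th[None, :])**2)   # Lieb-Liniger kernel

# Yang-Yang equation for the pseudo-energy eps(theta):
#   eps = theta^2 - mu - (T/2pi) int dtheta' K(theta - theta') log(1 + e^{-eps/T})
eps = th**2 - mu_ch
for _ in range(500):
    eps_new = th**2 - mu_ch - (T / (2 * np.pi)) * dth * (K @ np.log1p(np.exp(-eps / T)))
    if np.max(np.abs(eps_new - eps)) < 1e-12:
        eps = eps_new
        break
    eps = eps_new

# Filling function: vartheta = 1/(1 + e^{eps/T}), written in overflow-safe form.
vartheta = 0.5 * (1.0 - np.tanh(eps / (2 * T)))

# Bethe equation for the density of states: 2pi rho_s = 1 + int K vartheta rho_s
A = 2 * np.pi * np.eye(th.size) - K * (vartheta * dth)[None, :]
rho_s = np.linalg.solve(A, np.ones(th.size))
rho_p = vartheta * rho_s             # quasi-particle density
print("atomic density n =", dth * rho_p.sum())
```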
At thermal equilibrium, the state of a homogeneous 1D Bose gas with density $n$ is characterized by only two dimensionless parameters: the interaction strength $\gamma=c/n$ and the reduced temperature $\mathcal{T}=\frac{2\hbar^{2}k_{\mathrm{B}}T}{mg^{2}}$, where $T$ is the temperature and $k_{\mathrm{B}}$ Boltzmann's constant. The interaction strength and temperature span an entire phase diagram of the 1D Bose gas, whose regimes can be identified via the $g^{(2)}$-function [35]. At asymptotic values of $\gamma$ and $\mathcal{T}$, the description of the Bose gas drastically simplifies. However, in order to describe the state at arbitrary values of these parameters, the Thermodynamic Bethe Ansatz (TBA) is generally needed [36]. For more detail on the TBA and GHD of the 1D Bose gas, see Appendix A and Refs. [37, 38, 39]. ### 2.1 Generalized Hydrodynamics at the diffuse scale Hydrodynamics is an expansion of dynamics in spatial derivatives. At lowest order in derivatives, where currents only depend on the local densities of conserved quantities, one obtains the Euler scaling limit, which in GHD captures the ballistic propagation of quasi-particles. Semi-classically, or in the soliton-gas interpretation of GHD [40], the bare propagation velocity of quasi-particles is modified through collisions with other particles, as each two-body elastic collision is associated with a Wigner time delay [41]. Thus, as a particle with rapidity $\theta$ traverses a thermodynamic state with quasi-particle density $\rho_{\mathrm{p}}$, averaging over all the Wigner time delays experienced by the particle results in an effective propagation velocity $v^{\mathrm{eff}}(\theta)$ (see figure 1). On the quantum mechanical level, the effective velocity can be viewed as a modified group velocity of the quasi-particles, following local interactions with other particles, whereby properties, such as the momentum $p(\theta)$ and energy $\epsilon(\theta)$, of the particle $\theta$ become modified or "dressed" [42, 43]. Figure 1: Illustration of quasi-particle propagation. A 'tagged' quasi-particle with rapidity $\theta_{i}$ (blue) propagates ballistically with an average velocity $v^{\mathrm{eff}}(\theta_{i})$ through a system with quasi-particle density $\rho_{\mathrm{p}}(\theta)$. The effective velocity is a modification of the bare velocity $v^{\mathrm{bare}}(\theta_{i})=\hbar\theta_{i}/m$ following collisions with other particles, as each collision is associated with an apparent delay of the particles (or equivalently a positional shift $\Delta$). Thermal fluctuations of $\rho_{\mathrm{p}}(\theta)$, and thus of the number of collisions experienced by the particle, result in diffusive corrections, which manifest as a broadening of the quasi-particle trajectory on the order of $\sqrt{t}$. Figure inspired by Ref. [44]. At the next order of spatial derivatives, diffusive contributions to the dynamics are accounted for [11]. Quasi-particle diffusion arises from thermal fluctuations [12, 13]. For the duration $t$, a quasi-particle will traverse a region of length $L\propto t$, through which $\rho_{\mathrm{p}}$ exhibits thermal fluctuations of order $1/\sqrt{L}$. Thus, the number of collisions experienced by the particle, and in turn the distance it travels, fluctuates by $\sqrt{t}$ [13]. The result is a diffusive broadening of the quasi-particle front, as illustrated in figure 1.
Further, the Euler scale terms of the GHD equation are unaltered by scaling $z\to zL$ and $t\to tL$, whilst the diffusive terms are rescaled by a factor $1/L$ [14], thus highlighting diffusive effects in GHD as subleading to the ballistic quasi-particle propagation. Assuming only large-scale variations in space and time, the quasi-particle distribution of an inhomogeneous system becomes time- and space-dependent, $\rho_{\mathrm{p}}=\rho_{\mathrm{p}}(z,t,\theta)$; the GHD equation describes the evolution of this distribution. Note that in the following all $(z,t)$-dependences are omitted for a more compact notation. On the diffusive scale, the advective form of the GHD equation (describing the evolution of the filling $\vartheta$) reads $\partial_{t}\vartheta(\theta)+v^{\mathrm{eff}}(\theta)\,\partial_{z}\vartheta(\theta)=\frac{1}{2\rho_{\mathrm{s}}(\theta)}\left(\mathbf{1}-\frac{\vartheta\mathbf{\Delta}}{2\pi}\right)\>\partial_{z}\left(\left(\mathbf{1}-\frac{\vartheta\mathbf{\Delta}}{2\pi}\right)^{-1}\rho_{\mathrm{s}}(\theta)\>\mathbf{D}\>\partial_{z}\vartheta(\theta)\right)\;,$ (2) where the effective velocity is given by $v^{\mathrm{eff}}(\theta)=\frac{(\partial_{\theta}\epsilon)^{\mathrm{dr}}(\theta)}{(\partial_{\theta}p)^{\mathrm{dr}}(\theta)}\;,$ (3) with $\epsilon(\theta)=\hbar^{2}\theta^{2}/(2m)$ and $p(\theta)=\hbar\theta$ being the single-particle energy and momentum, respectively. The superscript 'dr' denotes that the function has been dressed, following the equation $g^{\mathrm{dr}}(\theta)=g(\theta)-\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}\theta^{\prime}\;\Delta(\theta,\theta^{\prime})\vartheta(\theta^{\prime})g^{\mathrm{dr}}(\theta^{\prime})\;,$ (4) where $\Delta(\theta,\theta^{\prime})=-\frac{2c}{c^{2}+(\theta-\theta^{\prime})^{2}}$ is the two-body scattering kernel of the Lieb-Liniger model. Further, in eq. (2) we have employed bold symbols to indicate integral operators and the following shorthand notation for their action $\left(\mathbf{K}g\right)(\theta)\coloneqq\int_{-\infty}^{\infty}\mathrm{d}\theta^{\prime}\>K\left(\theta,\theta^{\prime}\right)g\left(\theta^{\prime}\right)\;,$ (5) where $K\left(\theta,\theta^{\prime}\right)$ is the associated kernel of the generic integral operator $\mathbf{K}$. The kernel of the integral operator $\mathbf{D}$ in eq. (2) reads $D(\theta,\theta^{\prime})=\frac{1}{\rho_{\mathrm{s}}^{2}(\theta)}\left(\delta(\theta-\theta^{\prime})w(\theta^{\prime})-W(\theta,\theta^{\prime})\right)\;,$ (6) where the off-diagonal and diagonal elements, respectively, are given by $\displaystyle W(\theta,\theta^{\prime})$ $\displaystyle=\rho_{\mathrm{p}}(\theta)(1-\vartheta(\theta))\left[\frac{\Delta^{\mathrm{dr}}(\theta,\theta^{\prime})}{2\pi}\right]^{2}\left|v^{\mathrm{eff}}(\theta)-v^{\mathrm{eff}}(\theta^{\prime})\right|\;,$ (7) $\displaystyle w(\theta)$ $\displaystyle=\int_{-\infty}^{\infty}\mathrm{d}\theta^{\prime}\>W(\theta^{\prime},\theta)\;.$ (8) The diffusive effects arise mainly from fluctuations of the state $\vartheta$, which are diagonal in rapidity and given by the expression $\langle\delta\vartheta(\theta)\>\delta\vartheta(\theta^{\prime})\rangle=\delta(\theta-\theta^{\prime})\frac{\vartheta(\theta^{\prime})\left(1-\vartheta(\theta^{\prime})\right)}{\rho_{\mathrm{s}}(\theta^{\prime})}\;.$ (9) Indeed, we see that the diffusion kernel $D(\theta,\theta^{\prime})$ of eq. (6) is proportional to the filling fluctuations. ### 2.2 Linearization of the diffusive GHD equation Simulating diffusive GHD according to eq.
(2) generally requires powerful numerics [45, 46]. However, following the approach of Ref. [47], one can diagonalize the GHD propagation kernel, enabling a direct study of diffusive length scales. Here we briefly summarize the approach. Consider the time-dependent filling function consisting of a stationary, homogeneous background $\vartheta_{0}$ and an evolving perturbation $\delta\vartheta$, such that $\vartheta(z,t,\theta)=\vartheta_{0}(\theta)+\delta\vartheta(z,t,\theta)\;.$ (10) Assuming a small perturbation $\delta\vartheta\ll\vartheta_{0}$, we can neglect interactions within the perturbation itself and only treat the interactions between the perturbation and the stationary background. Thus, both the effective velocity and the diffusion kernel are evaluated with respect to the background filling $\vartheta_{0}$, whereby each Fourier mode of the filling function perturbation evolves independently. A perturbation consisting of only a single Fourier mode $\delta\vartheta(z,t,\theta)=\delta\vartheta_{j}(t,\theta)e^{ik_{j}z}$, where $k_{j}=2\pi j/L$ with $L$ being the system size, then obeys the evolution equation $\partial_{t}\delta\vartheta_{j}(t,\theta)+\int_{-\infty}^{\infty}\mathrm{d}\theta^{\prime}\>\mathfrak{D}_{j}(\theta,\theta^{\prime})\>\delta\vartheta_{j}(t,\theta)=0\;,$ (11) where we have introduced $\mathfrak{D}_{j}$, which is the kernel of the propagation operator of the $j$'th mode and is given by $\mathfrak{D}_{j}(\theta,\theta^{\prime})=ik_{j}v_{0}^{\mathrm{eff}}(\theta)\delta(\theta-\theta^{\prime})+\frac{k_{j}^{2}}{2}{D}_{0}(\theta,\theta^{\prime})\;.$ (12) The propagation operator is a linear integral operator with eigenstates $f_{\omega,j}(\theta)$ and corresponding eigenvalues $\lambda_{\omega,j}\in\mathbb{C}$, where the index $\omega$ enumerates the eigenvalues. The solution to eq. (11) can be expressed in terms of the eigenstates as $\delta\vartheta_{j}(\theta,t)=\sum_{\omega}\eta_{\omega,j}f_{\omega,j}(\theta)e^{-\lambda_{\omega,j}t}\;,$ (13) where the coefficients $\eta_{\omega,j}$ are determined from the initial state, such that $\delta\vartheta(z,t=0,\theta)=\sum_{j}e^{ik_{j}z}\sum_{\omega}\eta_{\omega,j}f_{\omega,j}(\theta)\;.$ (14) According to eqs. (13) and (14), each $k$-mode (Fourier mode) of the filling can be expressed as a sum of eigenstates $f_{\omega,j}(\theta)$, whose evolution is determined by their corresponding eigenvalue $\lambda_{\omega,j}$. The imaginary part of $\lambda$ describes ballistic propagation, here manifesting as an oscillation of the eigenstate; the real part of $\lambda$ represents diffusion, which dampens the oscillation. Following the non-degeneracy of the spectrum of $\mathfrak{D}_{j}$, each eigenstate $f_{\omega}$ evolves ballistically at a different frequency, and the eigenstates eventually dephase with respect to one another, causing an apparent relaxation of the mode $\delta\vartheta_{j}$. On shorter time scales, accounting for diffusion merely leads to (slightly) faster apparent relaxation of the mode, thus highlighting the difficulty in experimentally observing diffusion. On long time scales, however, the redistribution of rapidities via diffusion causes the $k$-mode to relax from an incoherent superposition of waves to a stationary state. The single-mode picture above has clear analogies to that of phonons in Bogoliubov theory for quasi-condensates [48] (or more generally in Luttinger liquid theory [49, 50]): For these effective low-energy Hamiltonians, collective phase-density excitations in the form of phonons are the exact eigenstates.
The single-mode picture above has clear analogies to that of phonons in Bogoliubov theory for quasi-condensates [48] (or more generally in Luttinger liquid theory [49, 50]): For these effective low-energy Hamiltonians, collective phase-density excitations in the form of phonons are the exact eigenstates. In analog quantum field simulators, phonon bases have been used extensively to analyze experimental results (see e.g. Refs. [27, 28, 29, 30, 31]). However, when considering the underlying microscopic Hamiltonian (here the Lieb-Liniger Hamiltonian), phonons are superpositions of the true eigenstates and therefore do not have an infinite lifetime. In Ref. [51], the phonon lifetime was derived by expressing the phonon as a coherent superposition of Bethe particle-hole states; over time, dephasing of the particle-hole states leads to an apparent relaxation of the phonon, similarly to the filling of a single $k$-mode $\delta\vartheta_{j}$.

### 2.3 Critical length scale of diffusion

The propagation operator kernel (12) explicitly shows that ballistic propagation scales linearly with momentum $k$, whereas diffusion scales quadratically. Thus, we can estimate the length scale at which diffusive effects become important by diagonalizing the propagation kernel $\mathfrak{D}_{j}$ and analysing its spectrum. First, we write the eigenvalues of $\mathfrak{D}_{j}$ as $\lambda_{\omega,j}\equiv i\alpha_{\omega,j}+\beta_{\omega,j}\;,$ (15) where $\alpha_{\omega,j}$ and $\beta_{\omega,j}$ are both real. From the expression (12), we see that $\alpha_{\omega,j}$ and $\beta_{\omega,j}$ describe the ballistic propagation and diffusive propagation of the eigenstates, respectively. In the numerical study conducted in Ref. [47], it was demonstrated that the spectrum of $\mathfrak{D}_{j}$ is non-degenerate and can be ordered with respect to the value of $\alpha_{\omega}$. Further, the eigenvalues are, to a good approximation, smooth functions of $j$ scaling as $\displaystyle\alpha_{\omega,j}$ $\displaystyle\approx j\>\alpha_{\omega,j=1}\;,$ (16) $\displaystyle\beta_{\omega,j}$ $\displaystyle\approx j^{2}\>\beta_{\omega,j=1}\;.$ Thus, in order to estimate whether a given mode $j$ propagates mainly ballistically or diffusively, we compute the ratio $\frac{\alpha_{\omega,j}}{\beta_{\omega,j}}\approx\frac{1}{j}\frac{\alpha_{\omega,j=1}}{\beta_{\omega,j=1}}\coloneqq\frac{j_{\omega}^{\star}}{j}\;,$ (17) where we have defined the critical mode $j_{\omega}^{\star}\coloneqq\frac{\alpha_{\omega,j=1}}{\beta_{\omega,j=1}}\;.$ (18) Eigenstates of modes with $j>j_{\omega}^{\star}$ propagate mainly diffusively, while eigenstates of modes with $j<j_{\omega}^{\star}$ propagate mainly ballistically.

## 3 Magnitude of diffusive corrections in thermal states

Figure 2: Propagation metric $\Gamma$ scaled by the coupling strength $c$ for thermal states at different dimensionless temperatures $\mathcal{T}$ and interaction strengths $\gamma$. Lower values signify a relatively larger contribution of diffusion to quasi-particle propagation. The solid lines indicate the boundary between the three main thermodynamic regimes of the Lieb-Liniger model, namely the quasi-condensate (QC), ideal Bose gas (IBG), and Tonks-Girardeau gas (TG). The dashed lines mark additional sub-regimes, namely the thermal-quantum statistics boundary for the fluctuations of the quasi-condensate and the degeneracy transition of the ideal Bose gas.
For the purpose of quantifying the contribution of diffusion to the overall propagation of quasi-particles in the state $\rho_{\mathrm{p}}$, we introduce the measure $\Gamma=\frac{1}{n}\int_{-\infty}^{\infty}\mathrm{d}\theta\>\rho_{\mathrm{p}}(\theta)\frac{|v^{\mathrm{eff}}(\theta)|}{D^{\mathrm{diag}}(\theta)}\;,$ (19) where $D^{\mathrm{diag}}(\theta)=w(\theta)\rho_{\mathrm{s}}^{-2}(\theta)$ are the diagonal elements of the diffusion kernel and $n$ is the atomic density. The measure $\Gamma$ relates the ballistic propagation velocity to the diagonal of the diffusion kernel weighted by the quasi-particle density. For fixed values of $\gamma=c/n$, we find that $\Gamma$ scales linearly with the coupling strength $c$. Hence, we calculate $\Gamma/c$ for a large number of thermal states for various combinations of reduced temperatures $\mathcal{T}$ and interaction strengths $\gamma$. The results are plotted in figure 2, where the different regimes of the Lieb-Liniger phase diagram have been indicated by solid lines. We find that a minimum of $\Gamma/c$, indicating a maximal broadening of quasi-particle trajectories relative to their distance travelled, is located near the crossover between the three main regimes of the Lieb-Liniger model, corresponding to $\mathcal{T}\approx\gamma\approx 1$. At higher temperatures, diffusive effects appear to be most relevant for temperatures $\mathcal{T}\sim\gamma^{-3/2}$, which coincides with the crossover between the quasi-condensate and ideal Bose gas regimes. Interestingly, we find that diffusion persists even at rather low temperatures; a similar observation was made in Ref. [52], where diffusion at very low temperatures was shown to result in an effective viscous hydrodynamics. Upon approaching any of the asymptotic regimes, the quasi-particle propagation becomes completely ballistic, indicated by a very large value of $\Gamma$. In the Tonks-Girardeau and ideal Bose gas limits this behavior is expected; free particles do not diffuse, and in these two regimes the gas behaves as free fermions and bosons, respectively. Meanwhile, in the quasi-condensate regime, the interaction energy dominates the average energy per atom, thus making density fluctuations energetically costly, which in turn suppresses diffusion. The 1D Bose gas is essentially maximally diffusive in the thermodynamic regime furthest from any of the asymptotic regimes above. Note that we find no direct relation between $\Gamma$ and the $g^{(2)}$-function; instead, diffusion appears maximal for kinetic and interaction energies around $E_{\mathrm{int}}\sim E_{\mathrm{kin}}\sim Nk_{B}T/2$. See Appendix B for more details.
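The measure of eq. (19) is straightforward to evaluate once the TBA quantities are known; below is a minimal sketch, assuming `rho_p`, `v_eff`, and `D_diag` are precomputed arrays on a uniform rapidity grid (e.g. obtained as in section 2).

```python
import numpy as np

def gamma_measure(rho_p, v_eff, D_diag, dtheta):
    """Propagation measure Gamma of eq. (19): effective velocity relative to
    the diagonal of the diffusion kernel, weighted by the quasi-particle
    density and normalized by the atomic density n."""
    n = np.sum(rho_p) * dtheta
    return np.sum(rho_p * np.abs(v_eff) / D_diag) * dtheta / n
```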
Figure 3: Temperature scaling of diffusive and ballistic propagation. (a) Vertical cuts of figure 2 for three different interaction strengths $\gamma$. (b) Diagonal elements of the diffusion kernel weighted by the quasi-particle density: Diffusive contributions scale with increasing thermal fluctuations, until the gas starts transitioning into a regime of free particles. (c) Effective velocity weighted by the quasi-particle density. The ballistic contribution plateaus at lower temperatures as the filling function approaches a Fermi sea.

To unravel the individual scaling behavior of ballistic and diffusive propagation, we plot their weighted contributions as a function of temperature for different values of $\gamma$ in figure 3. For reference, figure 3(a) shows the corresponding values of $\Gamma/c$ (equivalent to vertical cuts of figure 2). Starting with the diffusive contribution, plotted in figure 3(b), we find that it is maximal at a temperature $\mathcal{T}$ inversely proportional to the interaction $\gamma$; for lower temperatures, the diffusive contribution increases linearly with $\mathcal{T}$, while for higher temperatures it decreases again, scaling as $\mathcal{T}^{-1/2}$. The vanishing of diffusion at very low temperatures follows from the suppression of thermal fluctuations (see eq. (9)), as the filling $\vartheta$ approaches a Fermi sea state. Meanwhile, for very high temperatures and fixed particle number, the occupation of individual rapidity states becomes very low as quasi-particles occupy increasingly higher rapidities; such behavior is indicative of the transition towards free bosons, resulting in the eventual decrease of diffusion. In contrast, the ballistic contribution, plotted in figure 3(c), is almost constant for lower temperatures until it suddenly starts increasing proportionally to $\mathcal{T}^{1/2}$. Near the ground state (Fermi sea of rapidity states), the effective velocity evaluated at the Fermi point is equal to the sound velocity of the gas [53], while for rapidities between the two Fermi points, $v^{\mathrm{eff}}(\theta)$ is approximately a linear function. Thus, approaching lower temperatures, the fermionic statistics of the quasi-particles ensure that rapidity states up to the Fermi point remain populated, resulting in the plateau of the weighted ballistic contribution. As the temperature increases and the Fermi sea melts, interactions in the gas become less relevant, and the effective velocity approaches $\hbar\theta/m$; for large rapidity this velocity is far greater than the elements of the diffusion kernel, thus leading to a quasi-particle propagation completely dominated by ballistic motion. Finally, it should be stressed that $\Gamma$ can be computed for any state $\vartheta$, not just thermal states. Indeed, explicitly constructing a state which maximizes fluctuations will result in a larger diffusive broadening of the quasi-particle trajectory. An important non-thermal state is the initial state of the seminal quantum Newton’s cradle experiment [54]; here, two halves of a (typically thermal) state have been boosted to large, opposite rapidities [55]. Notably, boosting the state substantially increases the quasi-particle velocities while merely shifting $D^{\mathrm{diag}}(\theta)$ towards higher rapidities, thus resulting in an increase in $\Gamma$. Nevertheless, diffusive effects are important to the long time-scale dynamics of the quantum Newton’s cradle, as purely ballistic propagation can produce quasi-particle distributions featuring very fine structures in the ($z,\theta$)-phase-space [56]. Such small-wavelength features are much more susceptible to diffusive effects, which over time lead to the features being washed out and the system thermalizing [46].

## 4 Quasi-particle dynamics following a single-mode quench in a quasi-condensate

Diffusive effects become increasingly important at shorter length scales. To quantify these length scales, we can calculate the critical mode $j^{\star}$ of eq. (18). Note that $\Gamma$ and $j^{\star}$ are closely related; states with low $\Gamma$ will likewise feature lower critical modes. One particular system, where this sort of analysis is very insightful, is the single-mode quench in a quasi-condensate realized in the recent experiment of Ref. [24].
Although the Bose gas was realized in a parameter regime where diffusive effects are rather weak (based on our previous analysis), the ability to excite a single $k$-mode (of the atomic density) makes such an experimental platform ideal for studying diffusive effects.

### 4.1 Experimental setup and protocol

The experimental setup is described in detail in Ref. [24]. Hence, we will only summarize the details most relevant to this study. In the experiment, a Bose gas of $^{87}$Rb atoms is trapped on an atom chip [57], which creates a strong magnetic potential along two (transverse) axes, while providing weak trapping along the third (longitudinal or 1D) axis, effectively realizing a quasi-1D gas. A digital micromirror device (DMD) enables the creation of arbitrary optical potentials along the 1D axis [58]; for this protocol, the atoms are initially trapped in a box potential of length $L=80\>\mu\mathrm{m}$. By modulating the bottom of the box trap in the shape of a cosine, a density perturbation in the shape of a single $k$-mode is imprinted in the condensate (see figure 4(a)). Once confined in the box, the condensate density is 60-80 $\text{atoms}/\mu\mathrm{m}$, and the initial state is well-described by a thermal state with temperature in the range 50-120 nK (depending on the degree of cooling performed). Here we consider three quenches with the following dimensionless interaction strengths and temperatures: (a) $\gamma=1.8\cdot 10^{-3}$ and $\mathcal{T}=1.2\cdot 10^{3}$; (b) $\gamma=2.1\cdot 10^{-3}$ and $\mathcal{T}=2.9\cdot 10^{3}$; (c) $\gamma=1.6\cdot 10^{-3}$ and $\mathcal{T}=2.8\cdot 10^{3}$. Comparing with figure 2, we find that the gas is realized in the quasi-condensate phase near the thermal-quantum statistics boundary for the fluctuations; in this parameter regime, diffusive effects are expected to be weak. To initiate dynamics, the confining potential is quenched to a flat-bottomed box trap at time $t=0$, and the subsequent dynamics and relaxation are monitored by recording the atomic density via absorption imaging. An example of the measured density evolution is plotted in figure 4(b). Owing to a high degree of control over the 1D potential, a single density mode of the condensate can be excited; that is, the atomic density inside the box trap is accurately given by $n(z,t)=n_{0}+\delta n_{j}(t)\cos\left(k_{j}z\right)$, where $\delta n_{j}(t)$ is the amplitude of the mode and $k_{j}=2\pi j/L$ is its momentum. For the quenches analysed here, only the $j=1$ mode is excited; however, the ability to address higher modes was also demonstrated in Ref. [24]. In the context of Bogoliubov theory, the excited density mode corresponds to an eigenstate of the effective low-energy Hamiltonian [48]. However, as previously mentioned, the phononic basis does not represent the true eigenstates of the system [51]. Further, due to the high temperature of the condensate, the applicability of the low-energy model is very limited; we will discuss and demonstrate this in the following section.

Figure 4: (a) Illustration of the experimental protocol of Ref. [24]: A 1D box trap with a cosine-modulated bottom is realized using a DMD. At time $t=0$, dynamics is initiated by quenching the box bottom to flat. (b) Measured evolution of the density perturbation in quench (c). The dashed lines indicate the position of the box walls. (c) Filling function of the initial thermal state of quench (c), along with its decomposition into a stationary homogeneous background and perturbation.
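Since the imprinted perturbation is a single cosine mode, its amplitude $\delta n_{j}(t)$ can be extracted from a measured density profile by a simple projection; the sketch below is a hypothetical analysis step (not the pipeline of Ref. [24]) and assumes the coordinate array `z` spans the box $[0,L]$.

```python
import numpy as np

def mode_amplitude(n_z, z, L, j=1):
    """Extract delta_n_j from n(z) = n0 + delta_n_j * cos(k_j z) using the
    orthogonality of the cosine modes on [0, L]."""
    k_j = 2.0 * np.pi * j / L
    n0 = np.trapz(n_z, z) / L                     # mean (j = 0) component
    return 2.0 / L * np.trapz((n_z - n0) * np.cos(k_j * z), z)
```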
### 4.2 Analysis of quasi-particle propagation

#### 4.2.1 Comparing experimental observations with linearized GHD

Figure 5: Analysis of propagation in the single-mode quench experiment of Ref. [24]. On the left, the experimentally measured evolution of the lowest cosine mode (black dots) compared with linearized GHD simulations (colored lines). The interaction strengths and reduced temperatures of the background state of the three quenches are: (a) $\gamma=1.8\cdot 10^{-3}$ and $\mathcal{T}=1.2\cdot 10^{3}$, (b) $\gamma=2.1\cdot 10^{-3}$ and $\mathcal{T}=2.9\cdot 10^{3}$, (c) $\gamma=1.6\cdot 10^{-3}$ and $\mathcal{T}=2.8\cdot 10^{3}$. On the right, (d) the critical modes $j_{\omega}^{\star}$ of eq. (18) for eigenstates of the propagation operator, and (e) the coefficients (i.e. population) of the eigenstates for the single-mode perturbation excited in the experiment. We label the eigenstates by the imaginary part of the corresponding eigenvalue $\alpha_{\omega,j=1}$.

To assess whether a linearization of the dynamics is valid for the system, we simulate the dynamics of the gas using linearized GHD [47] and compare with experimental observations. First, to extract the filling function of the background and perturbation, we spatially Fourier transform the filling of the initial thermal states and identify the $j=0$ mode as the stationary background. See figure 4(c) for an example. The perturbation $\delta\vartheta(z,\theta)$ comprises all remaining modes; since density is a non-linear function of the filling, an excitation of a single density mode generally features a perturbation $\delta\vartheta$ containing multiple modes. Indeed, after Fourier transforming $\delta\vartheta(z,\theta)$ we find higher modes of the filling to be occupied; however, their amplitude is relatively low (less than 10% of the $j=1$ mode), whereby we focus solely on the evolution of $\delta\vartheta_{j=1}(t,\theta)$. Next, given the background filling $\vartheta_{0}(\theta)$ we calculate and diagonalize the propagation kernel, whereby the evolution of the perturbation can be calculated efficiently using eq. (13). The results are plotted in figure 5 for the three single-mode quenches. We find that the linearized GHD describes the measured dynamics to high accuracy, essentially reproducing the predictions of fully interacting (Euler-scale) GHD. The agreement remains very accurate even for the higher amplitude quenches, where the density of the perturbation $\delta n_{1}$ is around 20% of the background density $n_{0}$. Note that the colored lines plotted in figures 5(a), 5(b), and 5(c) show the results of the GHD simulation without accounting for diffusion; when simulating the linearized GHD dynamics of the system with diffusion, we find no discernible difference in the results. Other mechanisms of relaxation through integrability-breaking processes can also be neglected: At such high temperatures, excitations in the transverse trapping potential would normally lead to thermalization of the system [59, 20]; however, an emergent Pauli blocking of the associated scattering events protects the one-dimensionality of the system [24]. By virtue of the high chemical potential of the condensate, all low-rapidity states are completely filled, while at higher rapidities the filling features large thermal tails.
In transverse-exciting scattering events, outgoing rapidities are much smaller than the incoming ones due to the large gain in transverse potential energy; thus, all outgoing scattering states fall within rapidity states already occupied, and the amplitude of the scattering process vanishes.

#### 4.2.2 Quasi-particle dephasing and applicability of phonon basis

With contributions of both diffusion and a dimensional crossover being negligible, the apparent relaxation of the density mode seen in figure 5 must be due to the ballistic dephasing of quasi-particle trajectories. In figure 5(e) we plot the coefficients $\eta_{\omega,j=1}$, representing the population of each propagation eigenstate for the $j=1$ perturbation. The eigenstates are ordered by the imaginary part (ballistic contribution) of their corresponding eigenvalue $\alpha_{\omega,j=1}$. At low values of $\alpha_{\omega,j=1}$ the population of the corresponding eigenstates is zero, as these states belong to the background $\vartheta_{0}(\theta)$. For all quenches, the population $\eta_{\omega,j=1}$ is peaked around the edge of the background and features long thermal tails. Comparing with other quenches of higher modes, we find that $\eta_{\omega,j}$ is mostly independent of the mode number $j$; instead, its width/extent depends on the temperature, while its total area depends on the initial mode amplitude. Thus, one can immediately understand the temperature dependence of the relaxation rate: A larger spread in populated ballistic eigenvalues $\alpha_{\omega,j=1}$ leads to a faster dephasing of the eigenstates comprising the mode $\delta\vartheta_{j=1}$. In the context of phonons, the extent of the populations $\eta_{\omega,j=1}$ is thus an indication of the phonon lifetime. Given the fast relaxation of the density mode (particularly in the hot realizations), a phonon basis is not particularly suitable for describing the system. However, as the temperature decreases, the population of large $\alpha_{\omega,j=1}$ vanishes, substantially increasing the phonon lifetime. For this particular setup, a condensate at 30 nK (half the temperature of quench (a)) would be reasonably described by an effective low-energy Hamiltonian.

#### 4.2.3 Diffusive length scales

Finally, we calculate the critical mode $j_{\omega}^{\star}$, whose spectra are shown in figure 5(d), also labelled by the imaginary part of the corresponding eigenvalue. We find that $j_{\omega}^{\star}\gg 1$ for all eigenstates, demonstrating that the GHD dynamics of the addressed mode is entirely dominated by ballistic propagation of the quasi-particles. From the spectrum of $j_{\omega}^{\star}$ we may identify the length scales at which diffusion would become relevant. First, we note that the critical mode is highly dependent on the eigenstate: Since $\alpha_{\omega,j=1}$ denotes the ballistic contribution to quasi-particle propagation, we would expect $j_{\omega}^{\star}$ to be minimal around low $\alpha$-values. This is indeed the case; however, the corresponding states all belong to the stationary background. Likewise, at large $\alpha$ the critical mode is very high. Interestingly, at intermediate values of $\alpha_{\omega,j=1}$, the spectra of $j_{\omega}^{\star}$ all feature a local minimum. The location of the minimum is related to the states with maximal thermal fluctuations (see eq. (9)).
Further, comparing with the eigenstate population of the perturbation of figure 5(e), we find that the location of the minimum coincides with the most populated eigenstates; the observed dynamics of the system is thus mainly driven by the evolution of these particular eigenstates. From figure 5(d), we find a minimum critical mode of $j_{\omega}^{\star}\sim 200$. Hence, given the same experimental parameters, diffusive effects would become dominant for quenches of the $j=200$ mode and higher. For an experimental box length of $L=80\>\mu\mathrm{m}$, the corresponding diffusive length scale is around $0.4\>\mu\mathrm{m}$, which is close to the healing length of the condensate. At such length scales, higher order (e.g. dispersive) corrections become relevant for dynamics [60, 61, 62, 63]. Further, from an experimental perspective, length scales on that order are typically not resolvable by the available imaging techniques.

### 4.3 Single-mode quench in the maximal diffusive regime

Figure 6: Single-mode quench in a thermodynamic regime of maximal diffusion (see text for parameters). (a) The critical modes $j_{\omega}^{\star}$ of the propagation operator and the coefficients (i.e. population) of the eigenstates. We label the eigenstates by the imaginary part of the corresponding eigenvalue $\alpha_{\omega,j=1}$. (b) Evolution of the addressed $j=1$ density mode calculated using the linearized GHD with and without diffusion, labelled as ballistic and diffusive, respectively. The area between the two curves has been shaded in orange for improved visibility. (c) Filling function $\vartheta$ after evolution time $t=t_{\mathrm{evol}}$.

To illustrate the difficulty in observing diffusive effects in the 1D Bose gas from measuring its density, we consider a single-mode quench similar to the experiment [24] but realized near the maximally diffusive regime identified via figure 2; here $\gamma=1$ and $\mathcal{T}=2$. The resulting parameters are close to a system considered in Ref. [47]. For a gas of $^{87}$Rb atoms, this would correspond to an extremely dilute system with a density of $1\>\mathrm{atom}/\mu\mathrm{m}$ and temperature $5.5\>\mathrm{nK}$. In order to simplify comparison to the experiment, we here also treat a quench of the $j=1$ density mode; however, we scale the length of the system $L$ to near the diffusive length scale, resulting in $L=2\>\mu\mathrm{m}$. The resulting critical mode spectrum and eigenstate population are plotted in figure 6(a). Compared with the spectrum of the experimental system (figure 5(d)), we find that the critical mode here is much lower for all populated states. Further, the lack of a Fermi sea in the background state means that the highly diffusive eigenstates (low $\alpha$) are also occupied by the perturbation. Thus, on average, we would expect modes of length scale around $L$ to exhibit diffusive behavior (following our choice of system length). Next, in figure 6(b), we plot the density of the $j=1$ mode obtained from linearized GHD simulations for a duration of $t_{\mathrm{evol}}=3.5\,\mathrm{ms}$, both with and without diffusion. Despite the system being realized around the maximally diffusive thermodynamic regime, the influence of diffusion on the density dynamics is very limited. Indeed, we find a hardly measurable difference in the relaxation of the density mode upon accounting for diffusion. Comparing higher moments [64] does not yield a clear indication of diffusion either. Finally, figure 6(c) depicts the filling function $\vartheta$ at time $t=t_{\mathrm{evol}}$.
Here, the effect of adding diffusive corrections to the ballistic propagation is evident; following ballistic propagation, the filling function develops fine structures, which are washed out in the presence of diffusion. Upon calculating expectation values of observables, the rapidity is integrated over; functions that are very sensitive to the exact shape of the filling could be used to detect the presence (or lack) of fine structure in the rapidity distribution. However, experimental setups are highly limited in the measurable observables available. Measurement of the rapidity density [65] integrates over space, thus similarly hiding the underlying structure. Hence, for a 1D Bose gas confined to a box, it is very difficult to differentiate a ballistically dephased system from one that has relaxed via diffusion.

## 5 Conclusion

To summarize, we have quantified the effect of diffusion in 1D Bose gases by calculating the diffusive spreading of quasi-particle trajectories relative to their ballistic propagation velocity. Computing this measure for a number of thermal states, we have found that diffusive effects are most prominent in the transition regions between the different thermodynamic regimes of the Lieb-Liniger model, particularly for interaction strengths and temperatures around $\gamma\sim\mathcal{T}\sim 1$. Diffusion, which scales with thermal fluctuations of the state, increases with temperature up to a certain point, where the gas transitions into a state of free particles. The parameter regime of maximal diffusion is accessible by existing experimental setups, which is encouraging for future studies. Next, by diagonalizing the linearized propagation operator, we have identified diffusive length scales in an experimental quench of a single density mode in a quasi-condensate. For this system, diffusive effects are found to be very weak. Only upon approaching length scales around the condensate healing length do diffusive effects become dominant. Thus, on experimental time scales, the effect of diffusion on the density evolution can be completely neglected; the observed relaxation can instead be understood solely from the dephasing of ballistic quasi-particle trajectories. Nevertheless, diffusive effects are still important for the dynamics at longer time scales: Following ballistic dephasing of the density mode, the system remains in a non-equilibrium state where ever finer structures in the filling develop [66, 56, 67]; these structures are integrated over to obtain the density, and are therefore not observed in the experiment. Once the length scale of said structures approaches the critical mode, diffusive effects wash out their features and drive the system towards thermal equilibrium [14, 68]. Hence, in order to observe diffusive dynamics in an experimental setup, one must realize a system where expectation values of measurable observables for ballistically dephased states and thermalized states are distinguishable. One particular setup, which fulfills this condition, is the quantum Newton’s cradle. Finally, the analysis facilitates an evaluation of the applicability of a phonon basis, which has been used extensively to describe quantum field simulators realized with ultracold gases. For the given setup, the lifetime of the excited mode is highly dependent on temperature: The particular quenches analyzed here feature a rather short lifetime; however, for lower temperatures (within experimental reach) the lifetime of the mode exceeds experimental time scales.
## Acknowledgements

We thank Jacopo de Nardis and Andrew Urichuk for enlightening discussions and for suggesting the measure $\Gamma$.

##### Funding information

This work is supported by the European Research Council: ERC-AdG 'Emergence in Quantum Physics' (EmQ) under Grant Agreement No. 101097858 and the DFG/FWF CRC 1225 'ISOQUANT' (Austrian Science Fund (FWF) P 36236) and Research Unit FOR 2724 'Thermal machines in the thermal world' (Austrian Science Fund (FWF) I 6047).

## Appendix A Thermodynamic Bethe Ansatz

The Bethe Ansatz identifies the elementary excitations of the 1D Bose gas as fermionic quasi-particles, uniquely labelled by their quasi-momentum, or rapidity, $\theta$ [32, 33]. In the thermodynamic limit, the rapidity becomes a continuous variable, with the density of occupied rapidities, i.e. the density of quasi-particles, $\rho_{\mathrm{p}}(\theta)$ fully characterising the thermodynamic properties of the local equilibrium macrostate. Similarly, a density of holes $\rho_{\mathrm{h}}(\theta)$ can be introduced, which describes the density of unoccupied rapidities; together the two densities form the density of states $\rho_{\mathrm{s}}(\theta)$, which obeys the relation $\rho_{\mathrm{s}}(\theta)=\rho_{\mathrm{p}}(\theta)+\rho_{\mathrm{h}}(\theta)=\frac{1}{2\pi}-\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}\theta^{\prime}\,\Delta(\theta,\theta^{\prime})\rho_{\mathrm{p}}(\theta^{\prime})\;,$ (20) where $\Delta(\theta,\theta^{\prime})=-\frac{2c}{c^{2}+(\theta-\theta^{\prime})^{2}}$ is the two-body scattering kernel of the Lieb-Liniger model. Given $\rho_{\mathrm{p}}$ one can compute coarse-grained expectation values of local observables, in particular the atomic density following $n=\int_{-\infty}^{\infty}\mathrm{d}\theta\>\rho_{\mathrm{p}}(\theta)$. Equivalently, one can encode the thermodynamic properties of the system in the filling function $\vartheta(\theta)=\rho_{\mathrm{p}}(\theta)/\rho_{\mathrm{s}}(\theta)$, which describes the occupational fraction of allowed rapidity states. Quasi-particles of the Lieb-Liniger model follow fermionic statistics, whereby the filling assumes values between 0 and 1. The ground (zero temperature) state is given by a Fermi sea of rapidities, where $\vartheta(\theta)=1$ for rapidities within the interval of two Fermi points (whose value is determined by $\gamma$) and 0 everywhere else. Meanwhile, the filling of a thermal state is given by $\vartheta(\theta)=\frac{1}{1+e^{\varepsilon(\theta)\beta}}\;,$ (21) where $\beta$ is the inverse temperature, and the pseudo-energy $\varepsilon(\theta)$ is obtained by solving the Yang-Yang equation [36] $\varepsilon(\theta)=\frac{\hbar^{2}\theta^{2}}{2m}-\mu+\frac{1}{2\pi\beta}\int_{-\infty}^{\infty}\mathrm{d}\theta^{\prime}\>\Delta(\theta,\theta^{\prime})\ln\left(1+e^{-\varepsilon(\theta^{\prime})\beta}\right)\;,$ (22) where $\mu$ is the local chemical potential.
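As a concrete complement to the equations above, the thermal filling can be obtained by a plain fixed-point iteration of eq. (22), and eq. (20) can then be solved for $\rho_{\mathrm{s}}$ as a linear system; the sketch below uses units with $\hbar=m=1$ and an unaccelerated iteration, both simplifying assumptions.

```python
import numpy as np

def yang_yang_filling(theta, c, mu, beta, n_iter=200):
    """Iterate the Yang-Yang equation (22) for the pseudo-energy and return
    the thermal filling of eq. (21); units with hbar = m = 1 (assumption)."""
    dtheta = theta[1] - theta[0]
    Delta = -2.0 * c / (c**2 + (theta[:, None] - theta[None, :])**2)
    eps = theta**2 / 2.0 - mu                 # free-particle initial guess
    for _ in range(n_iter):
        eps = (theta**2 / 2.0 - mu
               + Delta @ np.logaddexp(0.0, -beta * eps) * dtheta
               / (2.0 * np.pi * beta))
    # Numerically stable form of eq. (21): 1 / (1 + exp(beta * eps)).
    return 0.5 * (1.0 - np.tanh(beta * eps / 2.0))

def density_of_states(filling, theta, c):
    """Solve eq. (20), with rho_p = filling * rho_s, for rho_s."""
    dtheta = theta[1] - theta[0]
    Delta = -2.0 * c / (c**2 + (theta[:, None] - theta[None, :])**2)
    A = np.eye(len(theta)) + Delta * filling[None, :] * dtheta / (2.0 * np.pi)
    return np.linalg.solve(A, np.full(len(theta), 1.0 / (2.0 * np.pi)))
```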
## Appendix B Quasi-particle propagation versus energy

The thermodynamic regimes of the Lieb-Liniger model are identified by their two-point correlation function $g^{(2)}(0)$ [35]. However, although the minimum of the propagation measure $\Gamma$ (corresponding to a relative maximal influence of diffusion on quasi-particle propagation) aligns with the crossover between the different regimes, we find no clear relation between it and the $g^{(2)}$-function. Instead, we study the relation between $\Gamma$ and the distribution of energy in the gas, which itself is related to the $g^{(2)}$-function. In the asymptotic regimes of the Lieb-Liniger model, the system energy is dominated either by kinetic energy (free particle regimes) or interaction energy (quasi-condensate regime). To study the relation between energy and diffusion, we plot $\Gamma/c$ as a function of the mean interaction and kinetic energy per atom of the system, which, respectively, are given by [69] $\displaystyle E_{\mathrm{int}}/N$ $\displaystyle=\frac{\hbar^{2}}{2m}cng^{(2)}(0)\;,$ (23) $\displaystyle E_{\mathrm{kin}}/N$ $\displaystyle=E/N-E_{\mathrm{int}}/N\;,$ (24) where the total energy per atom $E/N$ is $E/N=\frac{1}{n}\int_{-\infty}^{\infty}\mathrm{d}\theta\>\epsilon^{\mathrm{dr}}(\theta)\vartheta(\theta)\;.$ (25) The result can be seen in figure 7, with the minimum of $\Gamma/c$ being situated around the equipartition value $E_{\mathrm{int}}/(Nk_{B}T)\sim E_{\mathrm{kin}}/(Nk_{B}T)\sim 0.5$.

Figure 7: Propagation measure $\Gamma$ of eq. (19) scaled by the coupling strength $c$ as a function of the mean kinetic and interaction energy per atom in the gas. The dashed lines mark $E_{\mathrm{kin}}/(Nk_{B}T)=0.5$ and $E_{\mathrm{int}}/(Nk_{B}T)=0.5$.

## References

* [1] I. Bloch, J. Dalibard and W. Zwerger, _Many-body physics with ultracold gases_ , Rev. Mod. Phys. 80(3), 885 (2008), 10.1103/RevModPhys.80.885. * [2] R. Zwanzig, _Nonequilibrium Statistical Mechanics_ , Oxford University Press, ISBN 9780198032151 (2001). * [3] J. Eisert, M. Friesdorf and C. Gogolin, _Quantum many-body systems out of equilibrium_ , Nature Physics 11(2), 124 (2015), 10.1038/nphys3215. * [4] M. A. Cazalilla, R. Citro, T. Giamarchi, E. Orignac and M. Rigol, _One dimensional bosons: From condensed matter systems to ultracold gases_ , Rev. Mod. Phys. 83, 1405 (2011), 10.1103/RevModPhys.83.1405. * [5] D. Pines and P. Nozières, _The Theory of Quantum Liquids: Normal Fermi liquids_ , Advanced book classics series. W.A. Benjamin (1966). * [6] H. Spohn, _Large Scale Dynamics of Interacting Particles_ , Theoretical and Mathematical Physics. Springer Berlin Heidelberg, ISBN 9783642843716 (2012). * [7] M. Rigol, _Breakdown of thermalization in finite one-dimensional systems_ , Phys. Rev. Lett. 103, 100403 (2009), 10.1103/PhysRevLett.103.100403. * [8] O. A. Castro-Alvaredo, B. Doyon and T. Yoshimura, _Emergent hydrodynamics in integrable quantum systems out of equilibrium_ , Phys. Rev. X 6(4), 041065 (2016), 10.1103/PhysRevX.6.041065. * [9] B. Bertini, M. Collura, J. De Nardis and M. Fagotti, _Transport in out-of-equilibrium XXZ chains: Exact profiles of charges and currents_ , Phys. Rev. Lett. 117(20), 207201 (2016), 10.1103/PhysRevLett.117.207201. * [10] V. B. Bulchandani, R. Vasseur, C. Karrasch and J. E. Moore, _Bethe-Boltzmann hydrodynamics and spin transport in the XXZ chain_ , Phys. Rev. B 97, 045407 (2018), 10.1103/PhysRevB.97.045407. * [11] J. De Nardis, D. Bernard and B. Doyon, _Hydrodynamic diffusion in integrable systems_ , Phys. Rev. Lett. 121(16), 160603 (2018), 10.1103/PhysRevLett.121.160603. * [12] J. D. Nardis, D. Bernard and B. Doyon, _Diffusion in Generalized Hydrodynamics and quasiparticle scattering_ , SciPost Phys. 6, 049 (2019), 10.21468/SciPostPhys.6.4.049. * [13] S. Gopalakrishnan, D. A. Huse, V. Khemani and R. Vasseur, _Hydrodynamics of operator spreading and quasiparticle diffusion in interacting integrable systems_ , Phys. Rev. B 98, 220303 (2018), 10.1103/PhysRevB.98.220303. * [14] A. Bastianello, A. De Luca, B. Doyon and J. De Nardis, _Thermalization of a trapped one-dimensional Bose gas via diffusion_ , Phys. Rev. Lett. 125(24), 240604 (2020), 10.1103/PhysRevLett.125.240604. * [15] A.
Bastianello, A. D. Luca and R. Vasseur, _Hydrodynamics of weak integrability breaking_ , J. Stat. Mech. 2021(11), 114003 (2021), 10.1088/1742-5468/ac26b2. * [16] A. Bastianello, J. De Nardis and A. De Luca, _Generalized Hydrodynamics with dephasing noise_ , Phys. Rev. B 102, 161110 (2020), 10.1103/PhysRevB.102.161110. * [17] I. Bouchoule, B. Doyon and J. Dubail, _The effect of atom losses on the distribution of rapidities in the one-dimensional Bose gas_ , SciPost Phys. 9(4), 44 (2020), 10.21468/SciPostPhys.9.4.044. * [18] I. E. Mazets, T. Schumm and J. Schmiedmayer, _Breakdown of integrability in a quasi-1D ultracold bosonic gas_ , Phys. Rev. Lett. 100, 210403 (2008), 10.1103/PhysRevLett.100.210403. * [19] I. E. Mazets and J. Schmiedmayer, _Thermalization in a quasi-one-dimensional ultracold bosonic gas_ , New J. Phys. 12(5), 055023 (2010), 10.1088/1367-2630/12/5/055023. * [20] F. Møller, C. Li, I. Mazets, H.-P. Stimming, T. Zhou, Z. Zhu, X. Chen and J. Schmiedmayer, _Extension of the Generalized Hydrodynamics to the dimensional crossover regime_ , Phys. Rev. Lett. 126(9), 090602 (2021), 10.1103/PhysRevLett.126.090602. * [21] M. Panfil, S. Gopalakrishnan and R. M. Konik, _Thermalization of interacting quasi-one-dimensional systems_ , Phys. Rev. Lett. 130, 030401 (2023), 10.1103/PhysRevLett.130.030401. * [22] M. Schemmer, I. Bouchoule, B. Doyon and J. Dubail, _Generalized Hydrodynamics on an atom chip_ , Phys. Rev. Lett. 122(9), 090601 (2019), 10.1103/PhysRevLett.122.090601. * [23] N. Malvania, Y. Zhang, Y. Le, J. Dubail, M. Rigol and D. S. Weiss, _Generalized Hydrodynamics in strongly interacting 1D Bose gases_ , Science 373(6559), 1129 (2021), 10.1126/science.abf0147. * [24] F. Cataldini, F. Møller, M. Tajik, J. Sabino, S.-C. Ji, I. Mazets, T. Schweigler, B. Rauer and J. Schmiedmayer, _Emergent Pauli blocking in a weakly interacting Bose gas_ , Phys. Rev. X 12, 041032 (2022), 10.1103/PhysRevX.12.041032. * [25] M. Gring, M. Kuhnert, T. Langen, T. Kitagawa, B. Rauer, M. Schreitl, I. Mazets, D. A. Smith, E. Demler and J. Schmiedmayer, _Relaxation and prethermalization in an isolated quantum system_ , Science 337(6100), 1318 (2012), 10.1126/science.1224953. * [26] T. Langen, S. Erne, R. Geiger, B. Rauer, T. Schweigler, M. Kuhnert, W. Rohringer, I. E. Mazets, T. Gasenzer and J. Schmiedmayer, _Experimental observation of a generalized Gibbs ensemble_ , Science 348(6231), 207 (2015), 10.1126/science.1257026, https://www.science.org/doi/pdf/10.1126/science.1257026. * [27] B. Rauer, S. Erne, S. Thomas, F. Cataldini, M. Tajik and J. Schmiedmayer, _Recurrences in an isolated quantum many-body system_ , Science 360(6386), 307 (2018), 10.1126/science.aan7938. * [28] T. Schweigler, M. Gluza, M. Tajik, S. Sotiriadis, F. Cataldini, S.-C. Ji, F. S. Møller, J. Sabino, B. Rauer, J. Eisert and J. Schmiedmayer, _Decay and recurrence of non-gaussian correlations in a quantum many-body system_ , Nature Physics 17(5), 559 (2021), 10.1038/s41567-020-01139-2. * [29] C. Viermann, M. Sparn, N. Liebster, M. Hans, E. Kath, Á. Parra-López, M. Tolosa-Simeón, N. Sánchez-Kuntz, T. Haas, H. Strobel, S. Floerchinger and M. K. Oberthaler, _Quantum field simulator for dynamics in curved spacetime_ , Nature 611(7935), 260 (2022), 10.1038/s41586-022-05313-9. * [30] M. Tajik, I. Kukuljan, S. Sotiriadis, B. Rauer, T. Schweigler, F. Cataldini, J. Sabino, F. Møller, P. Schüttelkopf, S.-C. Ji, D. Sels, E. 
Demler _et al._ , _Verification of the area law of mutual information in a quantum field simulator_ , Nature Physics 19(7), 1022 (2023), 10.1038/s41567-023-02027-1. * [31] M. Tajik, M. Gluza, N. Sebe, P. Schüttelkopf, F. Cataldini, J. Sabino, F. Møller, S.-C. Ji, S. Erne, G. Guarnieri, S. Sotiriadis, J. Eisert _et al._ , _Experimental observation of curved light-cones in a quantum field simulator_ , Proceedings of the National Academy of Sciences 120(21), e2301287120 (2023), 10.1073/pnas.2301287120, https://www.pnas.org/doi/pdf/10.1073/pnas.2301287120. * [32] E. H. Lieb and W. Liniger, _Exact analysis of an interacting Bose gas. I. The general solution and the ground state_ , Phys. Rev. 130(4), 1605 (1963), 10.1103/PhysRev.130.1605. * [33] E. H. Lieb, _Exact analysis of an interacting Bose gas. II. The excitation spectrum_ , Phys. Rev. 130, 1616 (1963), 10.1103/PhysRev.130.1616. * [34] M. Olshanii, _Atomic scattering in the presence of an external confinement and a gas of impenetrable bosons_ , Phys. Rev. Lett. 81(5), 938 (1998), 10.1103/PhysRevLett.81.938. * [35] K. V. Kheruntsyan, D. M. Gangardt, P. D. Drummond and G. V. Shlyapnikov, _Finite-temperature correlations and density profiles of an inhomogeneous interacting one-dimensional Bose gas_ , Phys. Rev. A 71, 053615 (2005), 10.1103/PhysRevA.71.053615. * [36] C. N. Yang and C. P. Yang, _Thermodynamics of a one-dimensional system of bosons with repulsive delta-function interaction_ , J. Math. Phys. 10(7), 1115 (1969), 10.1063/1.1664947. * [37] V. E. Korepin, N. M. Bogoliubov and A. G. Izergin, _Quantum Inverse Scattering Method and Correlation Functions_ , Cambridge Monographs on Mathematical Physics. Cambridge University Press, 10.1017/CBO9780511628832 (1993). * [38] M. Takahashi, _Thermodynamics of One-Dimensional Solvable Models_ , Cambridge University Press, Cambridge, 10.1017/CBO9780511524332 (1999). * [39] I. Bouchoule and J. Dubail, _Generalized Hydrodynamics in the one-dimensional Bose gas: theory and experiments_ , Journal of Statistical Mechanics: Theory and Experiment 2022(1), 014003 (2022), 10.1088/1742-5468/ac3659. * [40] B. Doyon, T. Yoshimura and J.-S. Caux, _Soliton gases and Generalized Hydrodynamics_ , Phys. Rev. Lett. 120, 045301 (2018), 10.1103/PhysRevLett.120.045301. * [41] E. P. Wigner, _Lower limit for the energy derivative of the scattering phase shift_ , Phys. Rev. 98, 145 (1955), 10.1103/PhysRev.98.145. * [42] M. Borsi, B. Pozsgay and L. Pristyák, _Current operators in Bethe ansatz and Generalized Hydrodynamics: An exact quantum-classical correspondence_ , Phys. Rev. X 10, 011054 (2020), 10.1103/PhysRevX.10.011054. * [43] M. Borsi, B. Pozsgay and L. Pristyák, _Current operators in integrable models: a review_ , Journal of Statistical Mechanics: Theory and Experiment 2021(9), 094001 (2021), 10.1088/1742-5468/ac0f6b. * [44] B. Doyon, S. Gopalakrishnan, F. Møller, J. Schmiedmayer and R. Vasseur, _Generalized hydrodynamics: a perspective_ (2023), 2311.03438. * [45] F. S. Møller and J. Schmiedmayer, _Introducing iFluid: a numerical framework for solving hydrodynamical equations in integrable models_ , SciPost Phys. 8(3), 41 (2020), 10.21468/SciPostPhys.8.3.041. * [46] F. Møller, N. Besse, I. Mazets, H. Stimming and N. Mauser, _The dissipative generalized hydrodynamic equations and their numerical solution_ , Journal of Computational Physics 493, 112431 (2023), https://doi.org/10.1016/j.jcp.2023.112431. * [47] M. Panfil and J. 
Pawełczyk, _Linearized regime of the Generalized Hydrodynamics with diffusion_ , SciPost Phys. Core 1(1), 2 (2019), 10.21468/SciPostPhysCore.1.1.002. * [48] C. Mora and Y. Castin, _Extension of Bogoliubov theory to quasicondensates_ , Phys. Rev. A 67(5), 053615 (2003), 10.1103/PhysRevA.67.053615. * [49] F. D. M. Haldane, _Effective harmonic-fluid approach to low-energy properties of one-dimensional quantum fluids_ , Phys. Rev. Lett. 47, 1840 (1981), 10.1103/PhysRevLett.47.1840. * [50] F. D. M. Haldane, _'Luttinger liquid theory' of one-dimensional quantum fluids. I. Properties of the Luttinger model and their extension to the general 1D interacting spinless Fermi gas_ , Journal of Physics C: Solid State Physics 14(19), 2585 (1981), 10.1088/0022-3719/14/19/010. * [51] I. Bouchoule, J. Dubail, L. Dubois and D. M. Gangardt, _Relaxation of phonons in the Lieb-Liniger gas by dynamical refermionization_ , Phys. Rev. Lett. 130, 140401 (2023), 10.1103/PhysRevLett.130.140401. * [52] A. Urichuk, S. Scopa and J. D. Nardis, _Navier–Stokes equations for low-temperature one-dimensional fluids_ (2023), 2309.14476. * [53] M. A. Cazalilla, _Bosonizing one-dimensional cold atomic gases_ , Journal of Physics B: Atomic, Molecular and Optical Physics 37(7), S1 (2004), 10.1088/0953-4075/37/7/051. * [54] T. Kinoshita, T. Wenger and D. S. Weiss, _A quantum Newton’s cradle_ , Nature 440(7086), 900 (2006), 10.1038/nature04693. * [55] Y. Le, Y. Zhang, S. Gopalakrishnan, M. Rigol and D. S. Weiss, _Observation of hydrodynamization and local prethermalization in 1D Bose gases_ , Nature 618(7965), 494 (2023), 10.1038/s41586-023-05979-9. * [56] J.-S. Caux, B. Doyon, J. Dubail, R. Konik and T. Yoshimura, _Hydrodynamics of the interacting Bose gas in the Quantum Newton Cradle setup_ , SciPost Phys. 6(6), 70 (2019), 10.21468/SciPostPhys.6.6.070. * [57] J. Reichel and V. Vuletić, _Atom Chips_ , Wiley-VCH, Weinheim, Germany, 10.1002/9783527633357 (2011). * [58] M. Tajik, B. Rauer, T. Schweigler, F. Cataldini, J. Sabino, F. S. Møller, S.-C. Ji, I. E. Mazets and J. Schmiedmayer, _Designing arbitrary one-dimensional potentials on an atom chip_ , Opt. Express 27(23), 33474 (2019), 10.1364/OE.27.033474. * [59] C. Li, T. Zhou, I. Mazets, H.-P. Stimming, F. S. Møller, Z. Zhu, Y. Zhai, W. Xiong, X. Zhou, X. Chen and J. Schmiedmayer, _Relaxation of Bosons in One Dimension and the Onset of Dimensional Crossover_ , SciPost Phys. 9(4), 58 (2020), 10.21468/SciPostPhys.9.4.058. * [60] M. Fagotti, _Higher-order generalized hydrodynamics in one dimension: The noninteracting test_ , Phys. Rev. B 96, 220302 (2017), 10.1103/PhysRevB.96.220302. * [61] R. S. Watson, S. A. Simmons and K. V. Kheruntsyan, _Benchmarks of Generalized Hydrodynamics for 1D Bose gases_ , arXiv:2208.06614 (2022), 10.48550/ARXIV.2208.06614. * [62] J. D. Nardis and B. Doyon, _Hydrodynamic gauge fixing and higher order hydrodynamic expansion_ , Journal of Physics A: Mathematical and Theoretical 56(24), 245001 (2023), 10.1088/1751-8121/acd153. * [63] F. Møller, P. Schüttelkopf, J. Schmiedmayer and S. Erne, _The Whitham approach to Generalized Hydrodynamics_ (2023), 2304.10533. * [64] A. Bastianello, L. Piroli and P. Calabrese, _Exact local correlations and full counting statistics for arbitrary states of the one-dimensional interacting Bose gas_ , Phys. Rev. Lett. 120, 190601 (2018), 10.1103/PhysRevLett.120.190601. * [65] J. M. Wilson, N. Malvania, Y. Le, Y. Zhang, M. Rigol and D. S.
Weiss, _Observation of dynamical fermionization_ , Science 367(6485), 1461 (2020), 10.1126/science.aaz0242. * [66] X. Cao, V. B. Bulchandani and J. E. Moore, _Incomplete thermalization from trap-induced integrability breaking: Lessons from classical hard rods_ , Phys. Rev. Lett. 120, 164101 (2018), 10.1103/PhysRevLett.120.164101. * [67] D. Bagchi, J. Kethepalli, V. B. Bulchandani, A. Dhar, D. A. Huse, M. Kulkarni and A. Kundu, _Unusual ergodic and chaotic properties of trapped hard rods_ (2023), 2306.11713. * [68] L. Biagetti, G. Cecile and J. D. Nardis, _Three-stage thermalisation of a quasi-integrable system_ (2023), 2307.05379. * [69] M. J. Davis, P. B. Blakie, A. H. van Amerongen, N. J. van Druten and K. V. Kheruntsyan, _Yang-Yang thermometry and momentum distribution of a trapped one-dimensional Bose gas_ , Phys. Rev. A 85, 031604 (2012), 10.1103/PhysRevA.85.031604.
# Active Covering

Heinrich Jiang
Afshin Rostamizadeh

###### Abstract

We analyze the problem of active covering, where the learner is given an unlabeled dataset and can sequentially label query examples. The objective is to label query all of the positive examples in the fewest number of total label queries. We show under standard non-parametric assumptions that a classical support estimator can be repurposed as an offline algorithm attaining an excess query cost of $\widetilde{\Theta}(n^{D/(D+1)})$ compared to the optimal learner, where $n$ is the number of datapoints and $D$ is the dimension. We then provide a simple active learning method that attains an improved excess query cost of $\widetilde{O}(n^{(D-1)/D})$. Furthermore, the proposed algorithms only require access to the positive labeled examples, which in certain settings provides additional computational and privacy benefits. Finally, we show that the active learning method consistently outperforms offline methods as well as a variety of baselines on a wide range of benchmark image-based datasets.

Keywords: Active Learning

## 1 Introduction

Active learning is an increasingly important practical area of machine learning as the amount of data grows faster than the resources to label these datapoints, which can be costly. In this paper, we introduce a variant of the active learning problem, called active covering, where the goal is to label query all of the positive examples given an unlabeled dataset in as few total queries as possible. Active covering arises in many machine learning problems. In credit card fraud detection, one goal is to find the instances of fraud with as few queries as possible to the user asking if a transaction was fraudulent (Awoyemi et al., 2017). In mineral exploration, the goal is to find all of the valuable resources with as little exploration as necessary (Acosta et al., 2019). In computational drug discovery, one goal is to discover all of the effective drugs with as few suggested candidates as possible, as running trials on each candidate can be costly (Ou-Yang et al., 2012). In bank loan applications, a label query means granting the loan to an applicant (as the only way to know if the applicant would default or not); as such, negative label queries can be costly (Khandani et al., 2010). Finally, many internet applications need to moderate abusive content, and the goal is to quickly identify and remove all of the abusive content with as few label queries as possible to human operators (Nobata et al., 2016). For our analysis, we assume that an unlabeled pool of $n$ datapoints is generated by drawing i.i.d. samples from a mixture of two distributions, where one of the distributions represents the positive examples and the other the negative examples. Our goal is to retrieve all of the positive examples from this pool with as few label queries as possible. Furthermore, we assume that the support of the positive examples is compact and its density function is lower bounded by a positive quantity. Finally, we leverage a few additional standard and mild nonparametric regularity assumptions on the curvature of the support of positive examples. These assumptions allow for a wide range of distributions (e.g. mixtures of truncated multivariate Gaussians), and in particular the support of positives and negatives can be of any shape and can arbitrarily overlap. We first establish the optimal active covering algorithm, which has knowledge of the support of positive examples.
This algorithm, thus, will query all of the examples that lie in the support of positive examples (which will contain some negative examples where there is an overlap with the support of negative examples). Then, the performance of any active covering algorithm can be compared to the optimal learner via the expected excess query cost that the algorithm incurs in addition to what the optimal learner is expected to incur. To begin with, we analyze an offline algorithm that is based on a classical method in support estimation, providing both upper and lower bounds on the excess query cost. We then provide an active learning procedure based on the explore-then-commit strategy, where we first explore with an initial random sample, and then exploit by querying the example closest to any of the positive examples found thus far. We show that the active learning procedure achieves a provably better excess query cost than the offline algorithm. The presented methods also have the beneficial property of using only the queried positive examples for learning. This not only has desirable computational implications in that we don’t need to store the negative examples (especially in highly label imbalanced datasets with few positives), but is also practical in situations where there are strict privacy requirements for the negative examples. For example, this arises in the problem of fake account detection faced by social network platforms (García-Recuero, 2016), where real account information is highly sensitive data and may not even be allowed to be used for training. Another such application is spam email detection, where it may be desirable (or necessary) to avoid training with non-spam emails (Li et al., 2008). While this privacy aspect is not a focus of the paper, it may be a feature of independent interest. We now summarize our contributions as follows.

* • In Section 2, we introduce and formalize the active covering problem and establish the notion of excess query cost.
* • In Section 3, we analyze the offline learner and show matching upper and lower bounds on the excess query cost of $\widetilde{\Theta}(n^{D/(D+1)})$.
* • In Section 4, we introduce and analyze the Explore-then-Commit active algorithm and show it has excess query cost $\widetilde{O}(n^{(D-1)/D})$.
* • In Section 6, we show empirical results on a wide range of benchmark image-based datasets (Letters, MNIST, Fashion MNIST, CIFAR10, CelebA) comparing the Explore-then-Commit algorithm to a number of offline and active baselines.

## 2 Active Covering

In this section, we formulate the active covering problem. We are given an unlabeled dataset of $n$ datapoints, $X$, and the goal is to minimize the number of label queries necessary until all positive examples are labeled. We provide the assumptions on the underlying distribution from which $X$ is drawn and establish the notion of excess query cost, which is the additional label queries compared to the optimal procedure, which will be defined later.

### 2.1 Theoretical Setup

We make the following assumption on the data generating process, which says that with probability $p$, we draw a positive example from distribution $\mathcal{P}_{+}$ and with probability $1-p$, we draw a negative example from distribution $\mathcal{P}_{-}$.

###### Assumption 1.

The dataset $X$ is drawn i.i.d.
from a distribution $\mathcal{P}:=p\cdot\mathcal{P}_{+}+(1-p)\cdot\mathcal{P}_{-}$, for some $p\in(0,1)$, where $\mathcal{P}_{+}$ and $\mathcal{P}_{-}$ are distributions over $\mathbb{R}^{D}$ of the positive and negative examples, respectively.

We then require the following regularity assumption on the distribution of positive examples. The first part ensures that the density of positive examples is lower bounded in its support; otherwise, some positive examples will only appear as outliers, making it impossible for any learner to find them in a non-trivial manner. The second part ensures that the support of positive examples does not become arbitrarily thin anywhere; otherwise, there may not be any samples drawn from these regions and it will be very difficult to recover the entire support from a finite sample. This is a standard assumption in a variety of density estimation scenarios, e.g. (Cuevas et al., 1997; Singh et al., 2009).

###### Assumption 2.

There exists a density function $f_{+}:\mathbb{R}^{D}\rightarrow\mathbb{R}$ corresponding to $\mathcal{P}_{+}$ with a compact support $\mathcal{X}_{+}$ that can be decomposed into a finite number of connected components. There exist $\lambda_{0},r_{0},C_{+}>0$ such that the following holds:

* • $f_{+}(x)\geq\lambda_{0}$ for all $x\in\mathcal{X}_{+}$.
* • For all $0<r<r_{0}$ and $x\in\mathcal{X}_{+}$, we have $\text{Vol}(B(x,r)\cap\mathcal{X}_{+})\geq C_{+}\cdot\text{Vol}(B(x,r))$, where $B(x,r):=\\{x^{\prime}\in\mathbb{R}^{D}:|x-x^{\prime}|\leq r\\}$.

The final assumption ensures that the negative examples’ density is upper bounded, so that there are no arbitrarily dense clumps of negative examples near the boundary of $\mathcal{X}_{+}$. Otherwise, querying in such regions may yield too few positive examples and it may be difficult to learn that such a region is part of the support of positive examples.

###### Assumption 3.

There exists a density function $f_{-}:\mathbb{R}^{D}\rightarrow\mathbb{R}$ corresponding to $\mathcal{P}_{-}$ and $\lambda_{1}>0$ such that $f_{-}(x)\leq\lambda_{1}$ for all $x\in\mathcal{X}$.

Our assumptions are quite mild as they allow for a wide range of distributions. In particular, they are non-parametric, so there are no assumptions that the data is generated based on a parameterized model. Moreover, the support of the positive examples (as well as the support of the negative examples) can be of arbitrary bounded shape, need not be a single connected component (i.e. can appear as a number of connected components), and can intersect arbitrarily with the support of the opposite label. Such mild assumptions can model the wide range of data distributions arising in practice. Perhaps the strongest of our assumptions is that the density on $\mathcal{X}_{+}$ is lower bounded. We stress that if the density of the positive examples on $\mathcal{X}_{+}$ can become arbitrarily small, then some regions can become so low-density that the positive examples in those regions can be considered outliers. For such outliers, it may be unrealistic to expect a procedure to efficiently find them. Nonetheless, in practice, our methodology can still be applied on datasets containing positive outliers, but without guarantees that those outliers will be efficiently recovered. We made this assumption to keep our theoretical analysis from becoming too involved; however, it’s worth noting that it is possible to relax this assumption and allow the density to have smooth boundaries.
To do so, we would assume parameters that bound the rate of decay of the density as it goes to $0$ and provide final guarantees that depend on the decay parameters. Assumptions used in recent analysis on densities with smooth boundaries (Zhao & Lai, 2020) can be adapted here, which is a future research direction.

### 2.2 Optimal Learner and Excess Query Cost

We will analyze a number of new learners for this problem. To do so, we establish a metric called excess query cost, which compares the learner to the optimal learner, which takes the optimal strategy given knowledge of $\mathcal{X}_{+}$. This learner is unattainable in practice, but serves as a theoretical baseline with respect to which the excess query cost of an algorithm, defined below, is quantified. The optimal learner has knowledge of $\mathcal{X}_{+}$ and therefore its strategy is to label query every point in $\mathcal{X}_{+}$. Let $Q_{\text{opt}}$ be the number of label queries incurred by the optimal learner. Thus, the optimal learner attains an expected number of queries of: $\displaystyle\mathbb{E}[Q_{\text{opt}}]=n\cdot\mathcal{P}(\mathcal{X}_{+}).$ We can then define, for any algorithm, the notion of excess query cost, which is the number of additional label queries needed compared to the optimal learner.

###### Definition 1 (Excess Query Cost).

Suppose that an algorithm needs to make $Q$ label queries before labeling all of the positive examples. Then the excess query cost of the procedure is defined as: $C:=Q-Q_{\text{opt}}$.

For the results in the paper, we analyze the expected excess query cost, where the expectation is taken over the distribution from which the sample $X$ is drawn.

### 2.3 Passive Learner

We first provide a result for the passive learning algorithm, which queries labels uniformly at random until all positive examples have been retrieved and serves as our most basic baseline. In the following theorem, we show a lower bound on the excess query cost that is linear in the pool size.

###### Theorem 1 (Lower bound for Passive Learner).

There exists a distribution satisfying Assumptions 1, 2, and 3 such that with probability at least $\frac{1}{2}$, we have (letting $C_{\text{passive}}$ be the excess query cost of the passive learner): $\mathbb{E}[C_{\text{passive}}]\geq\frac{1}{2}\cdot n$.

Algorithm | Excess Query Cost
---|---
Passive | $\Theta(n)$
Offline | $\widetilde{\Theta}(n^{D/(D+1)})$
Active | $\widetilde{O}(n^{(D-1)/D})$

Table 1: Summary of algorithms and results.

## 3 Offline Learner

Algorithm 1 Offline Learner
Inputs: Dataset $X$, initial sample size $m$.
Let $X_{0}$ be $m$ examples sampled uniformly without replacement from $X$.
Label query $X_{0}$ and let $X_{0,+}$ be the positive examples.
Label query remaining examples in ascending order of minimum distance to $X_{0,+}$ (i.e. $x\rightarrow\min_{x^{\prime}\in X_{0,+}}|x-x^{\prime}|$) until all positive examples are label queried.

The offline learner (Algorithm 1) first randomly samples $m$ points to label query and then label queries the remaining examples in ascending order of minimum distance to any of the initially sampled positive examples until all positives are labeled. It’s worth noting that we may not know when all of the positive examples are labeled; thus, in practice, we can terminate the algorithm when enough positives are found depending on the task or when the labeling budget runs out.
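A minimal Python sketch of Algorithm 1 on an in-memory dataset may look as follows; the array `labels` stands in for the label oracle, we assume the initial sample contains at least one positive, and all names are ours.

```python
import numpy as np

def offline_learner(X, labels, m, seed=0):
    """Sketch of Algorithm 1: query m random points, then query the rest in
    ascending distance to the initial positives until all positives are
    found. Returns the indices in query order."""
    rng = np.random.default_rng(seed)
    n, n_pos = len(X), int(np.sum(labels == 1))
    init = rng.choice(n, size=m, replace=False)
    queried = [int(i) for i in init]
    found = int(np.sum(labels[init] == 1))
    pos0 = X[init[labels[init] == 1]]   # assumes at least one positive here
    rest = np.setdiff1d(np.arange(n), init)
    # Minimum distance of each remaining point to the initial positives.
    d = np.linalg.norm(X[rest][:, None, :] - pos0[None, :, :], axis=-1).min(axis=1)
    for i in rest[np.argsort(d)]:
        if found == n_pos:
            break
        queried.append(int(i))
        found += int(labels[i] == 1)
    return queried
```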
The offline learner is based on a classical technique in the support estimation literature (Cuevas et al., 1997), where the support of a distribution can be covered, given a finite sample of $m$ points drawn from this distribution, by taking the $\epsilon$-neighborhood of the sample for an appropriate $\epsilon$. We apply this methodology to only the positive examples in our initial sample of $m$ points. We show that Algorithm 1 finishes once it has label queried everything within $\epsilon\approx(\log(m)/m)^{1/D}$ of the initial positive examples; thus it will cover all the examples in $\mathcal{X}_{+}$, and the excess cost will be proportional to $\epsilon\cdot n$ (all details are in the Appendix). The formal guarantee for the excess query cost is as follows:

###### Theorem 2 (Excess Query Cost for Offline Learner).
Suppose that Assumptions 1, 2, and 3 hold. Let $0<\delta<1$. There exist $C,M_{0}>0$ depending on $\mathcal{P}$ such that the following holds. In Algorithm 1, suppose that $m$ is chosen sufficiently large such that $m\geq\frac{2\log(2/\delta)}{p^{2}}$ and $\frac{m}{\log m}\geq\log(4/\delta)\cdot M_{0}$. Then, with probability at least $1-\delta$, Algorithm 1 has an expected excess query cost of:

$\displaystyle\mathbb{E}[C_{\text{offline}}]$ $\displaystyle\leq(1-p)\cdot m+C\cdot\left(\frac{\log(4/\delta)\cdot\log(p\cdot m/2)}{m}\right)^{1/D}\cdot n.$

We now have the following immediate result by optimizing $m$ as a function of $n$, trading off the cost of the initial exploration (first term) against the cost of the exploitation (second term):

###### Corollary 1.
Under the conditions of Theorem 2, setting $m\approx n^{D/(D+1)}$ results in an expected excess query cost bounded as follows:

$\displaystyle\mathbb{E}[C_{\text{offline}}]\leq\text{PolyLog}(n,1/\delta)\cdot n^{D/(D+1)}.$

We also provide the following lower bound, which shows that the offline learner cannot achieve a better rate (proof found in the Appendix).

###### Theorem 3 (Lower Bound for Offline Learner).
There exists a distribution satisfying Assumptions 1, 2, and 3 such that with probability at least $\frac{1}{4}$, we have for $n$ sufficiently large and some constant $C>0$:

$\displaystyle\mathbb{E}[C_{\text{offline}}]\geq(1-p)\cdot m+C\cdot\left(\frac{\log m}{m}\right)^{1/D}\cdot n.$

## 4 Active Explore-then-Commit Learner

We next show an active approach (Algorithm 2), inspired by the Explore-then-Commit strategy (Garivier et al., 2016), that proceeds by first exploring, i.e., randomly sampling a set of examples, and then commits to a greedy approach of choosing the closest unlabeled example to any positive example labeled thus far, until all of the positive examples are labeled.

Algorithm 2 Active Explore-then-Commit Learner
Inputs: Dataset $X$, initial sample size $m$.
Let $X_{0}$ be $m$ examples sampled uniformly without replacement from $X$.
Label query $X_{0}$ and let $X_{0,+}$ be the positive examples.
Initialize $X_{p}\leftarrow X_{0,+}$ and $X_{a}\leftarrow X_{0}$.
while not all positive examples in $X$ are labeled do
Label query $x=\operatorname*{arg\,min}_{x\in X\backslash X_{a}}d(x,X_{p})$.
if $x$ has a positive label then $X_{p}\leftarrow X_{p}\cup\\{x\\}$. end if
$X_{a}\leftarrow X_{a}\cup\\{x\\}$.
end while
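A minimal sketch of Algorithm 2 (again ours, with the hypothetical `query` oracle). Maintaining a running array of distances to the nearest labeled positive makes each greedy step an argmin plus an $O(n)$ update whenever a new positive is found, rather than a recomputation over all pairs.

```python
import numpy as np

def explore_then_commit(X, query, m, seed=0):
    """Sketch of Algorithm 2: explore with a uniform sample of size m,
    then greedily query the unlabeled point closest to any known positive.
    In practice the loop stops when the labeling budget runs out."""
    rng = np.random.default_rng(seed)
    n = len(X)
    labeled = set(int(i) for i in rng.choice(n, size=m, replace=False))  # X_a
    positives = {i for i in labeled if query(i) == 1}                    # X_{0,+}
    d = np.full(n, np.inf)                     # distance to nearest known positive
    for j in positives:
        d = np.minimum(d, np.linalg.norm(X - X[j], axis=1))
    d[list(labeled)] = np.inf                  # never re-query labeled points
    while np.isfinite(d).any():                # commit phase
        i = int(np.argmin(d))                  # greedy choice
        labeled.add(i)
        d[i] = np.inf
        if query(i) == 1:                      # new positive: update distances
            positives.add(i)
            d = np.minimum(d, np.linalg.norm(X - X[i], axis=1))
            d[list(labeled)] = np.inf
    return labeled, positives
```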
To analyze the algorithm, there are three key steps. First, we show that in the explore phase we choose at least one example from each connected component (CC) of $\mathcal{X}_{+}$ (Lemma 1). Next, we show that in any CC of $\mathcal{X}_{+}$, all of the positive examples are in the same connected component of the $\epsilon$-neighborhood graph for some $\epsilon$ specified later (Lemma 2). The final step combines these two results to show a bound on the excess query cost.

We now give the following result, which says that for $m$ sufficiently large, depending on the probability mass distribution of the CCs of $\mathcal{X}_{+}$, we will with high probability have a positive example from each of the CCs in the initial sample.

###### Lemma 1.
Suppose Assumptions 1 and 2 hold and let $0<\delta<1$. Let the connected components of $\mathcal{X}_{+}$ be $\mathcal{X}_{+,1},\ldots,\mathcal{X}_{+,c}$. Let $q:=\min_{i\in[c]}\mathcal{P}_{+}(\mathcal{X}_{+,i})$. If

$\displaystyle m\geq\max\left\\{\frac{2\log(2c/\delta)}{p\cdot\log(1/(1-q))},\frac{2\log(2/\delta)}{p^{2}}\right\\},$

then with probability at least $1-\delta$, the initial $m$ examples will contain a positive example in each of $\mathcal{X}_{+,i}$ for $i\in[c]$.

The next result shows that the positive examples in each CC of $\mathcal{X}_{+}$ appear in the same CC of the $\epsilon$-neighborhood graph of the positive examples, for an appropriate choice of $\epsilon$. This will be important in showing that, after greedily sampling enough examples in the commit phase, we will query all of the examples in $\mathcal{X}_{+}$ but not query examples that are more than $\epsilon$ away from $\mathcal{X}_{+}$.

###### Lemma 2 (Connectedness).
Suppose Assumptions 1 and 2 hold. Let $0<\delta<1$ and $\mathcal{X}_{+,1},\ldots,\mathcal{X}_{+,c}$ be the connected components of $\mathcal{X}_{+}$. The following holds with probability at least $1-\delta$. For each $i\in[c]$, we have that $\mathcal{X}_{+,i}\cap X_{+}$ is connected in the $\epsilon$-neighborhood graph of $X_{+}$, where

$\displaystyle\epsilon=3\left(\frac{C_{0}\cdot D\log(2/\delta)\cdot\log n}{p\cdot C_{+}\cdot\lambda_{0}\cdot v_{D}\cdot n}\right)^{1/D},$

and $n$ is sufficiently large so that $\epsilon\leq r_{0}$.

We now combine the two results to obtain an excess query cost guarantee for the active learner. Lemma 1 ensures that our initial sample of $m$ examples contains an example from each CC of $\mathcal{X}_{+}$, and Lemma 2 ensures that when we actively sample in a greedy manner, we eventually query all of the positive examples and never query any example that is too far from $\mathcal{X}_{+}$; this distance determines how much the active algorithm samples outside of $\mathcal{X}_{+}$ and hence determines the expected excess query cost.

###### Theorem 4 (Excess Query Cost for Algorithm 2).
Suppose Assumptions 1, 2, and 3 hold and let $0<\delta<1$. Let the connected components of $\mathcal{X}_{+}$ be $\mathcal{X}_{+,1},\ldots,\mathcal{X}_{+,c}$ and $q:=\min_{i\in[c]}\mathcal{P}_{+}(\mathcal{X}_{+,i})$. There exist constants $C,N_{0}>0$ depending on $\mathcal{P}$ such that the following holds. If

$\displaystyle m$ $\displaystyle\geq\max\left\\{\frac{2\log(4c/\delta)}{p\cdot\log(1/(1-q))},\frac{2\log(4/\delta)}{p^{2}}\right\\},$

and $\frac{n}{\log n}\geq N_{0}\log(4/\delta)$, then with probability at least $1-\delta$, we have the following excess query cost guarantee for Algorithm 2:

$\displaystyle\mathbb{E}[C_{\text{exp-commit}}]\leq m+C\cdot\left(\log(4/\delta)\cdot\log n\right)^{1/D}\cdot n^{(D-1)/D}.$
###### Remark 1.
Our requirement for $m$ is tight w.r.t. $q$ and $c$ in the case where $q=\frac{1}{c}$ (i.e. equal probability of each CC). In this case, it reduces to the classic coupon collector problem (Boneh & Hofri, 1997): each CC is a coupon, and the expected number of times we draw a coupon with replacement until we receive one example from each is $\Omega(c\log c)=\Omega(\log c/\log(1/(1-q)))$ by a Taylor expansion of $\log(1/(1-q))$.

###### Remark 2.
While our results all have a strong dependence on the dimension (commonly referred to as the curse of dimensionality), it's been shown that non-parametric techniques such as these algorithms can automatically adapt to the intrinsic dimension of the data, and the convergence rates can be shown to depend only on this dimension and not the ambient dimension (i.e. arguments from Pelletier (2005); Kpotufe (2011); Jiang (2017, 2019) can be adapted here).

## 5 Related Works

The problem of actively retrieving the positively labeled examples was studied as active search by Garnett et al. (2012), where the goal is to label query as many positive examples as possible given a fixed budget. They propose a sequential Bayesian approach that optimizes the expected number of positive labels across timesteps; however, their method is computationally expensive, requiring $O((2\cdot n)^{\ell})$ runtime, where $\ell$ is the number of lookahead timesteps to optimize over. Efficient approximations to this active search technique have been proposed (Jiang et al., 2018, 2019). In our theoretical setting, the goal is to label query all of the positive examples with as few label queries as possible, rather than having a fixed known labeling budget. A recent work by Jain & Jamieson (2019) designs an algorithm to actively identify the largest number of positive examples while minimizing or constraining the false-discovery rate. They propose a bandit-style active elimination procedure which strategically label queries datapoints (i.e. each datapoint can be seen as an arm and label querying can be seen as pulling the arm) to find the best classification. In our work, we leverage the spatial structure of the data, while Jain & Jamieson (2019) considers each datapoint as its own bandit arm and doesn't explicitly use information about the locations of these datapoints. The contributions of Jain & Jamieson (2019) are primarily in the theoretical analysis of the setting and proposed algorithm, while the practicality of the algorithm itself is limited due to its computational complexity. A related line of work is learning under one-sided feedback, first studied under the name apple tasting by Helmbold et al. (2000), where the learner receives the true labels for only the examples it predicted positively on, and the goal is to have as high accuracy as possible. Recently, Jiang et al. (2020) studied the one-sided feedback problem for generalized linear models and attained regret guarantees using an adaptive UCB-based approach under their proposed one-sided loss. This problem is similar to our proposed active covering problem in that in both cases we desire to label query the positive examples; however, a key difference is that both Helmbold et al. (2000) and Jiang et al. (2020) operate in the streaming setting, where predictions must be made in real-time, whereas here the learner has access to the entire corpus of unlabeled data. It's also worth mentioning the tangentially related set cover problem (Slavík, 1997), where the goal is to identify the smallest sub-collection of sets whose union equals the whole.
The submodular set cover problem (Iwata & Nagano, 2009) involves finding a set that minimizes a modular cost function subject to a submodular function constraint. Guillory & Bilmes (2010) propose an active approach to solve the submodular set cover problem. Active covering, however, is a different problem: it tries to recover a set of datapoints rather than a collection of subsets. Our work is also related to the support estimation literature, which has a long history; some works include Geffroy (1964); Devroye & Wise (1980); Korostelev & Tsybakov (1993); Cuevas et al. (1997); Biau et al. (2008). Our offline algorithm applies the classical support estimator to the positive samples found, covering $\mathcal{X}_{+}$ with the union of the $\epsilon$-balls around the initial positive examples. Those works established both upper and lower bounds on $\epsilon$ of order $(\log m/m)^{1/D}$, which were key to the analysis of the offline algorithm. More broadly, support estimation has also been studied under the name of one-class classification, where the goal is to identify the examples of a particular class given only training examples from that class. A wide range of approaches have been proposed, including ones using SVMs (Schölkopf et al., 2001; Manevitz & Yousef, 2001), density functions (Hempstalk et al., 2008), clustering (Ypma & Duin, 1998), optimization (Crammer & Chechik, 2004), and deep learning (Ruff et al., 2018).

Figure 1: Plots of the percentage of all positive examples retrieved after each batch. Top: Letters Recognition dataset with the first $4$ letters as the positive classes. Middle: various datasets using the label $4$ as the positive class. Bottom: CelebA using various attributes as the label. We compare our Explore-then-Commit method against the offline algorithm as well as the active variants of the baselines we tested. We see that in all these cases, our method performs the best across batch sizes. Results averaged across $100$ runs.

## 6 Experiments

In this section, we describe the details of our experimental results. We test the Explore-then-Commit algorithm (Algorithm 2) against a number of baselines, including the offline algorithm (Algorithm 1). We note that these algorithms do not come with any additional hyperparameters.

### 6.1 Baselines

We propose a number of additional baselines based on the one-class classification methods implemented in scikit-learn (Pedregosa et al., 2011) that can score examples based on the likelihood that they are in-class, namely the One-Class SVM (Schölkopf et al., 2001), Isolation Forest (Liu et al., 2008), and Robust Covariance (Rousseeuw & Driessen, 1999). For each of these baselines, we have two variants: offline and active. The offline version trains the one-class classifier on the positive examples in the initial sample (see the next subsection), scores all of the unlabeled examples, and then samples in order from most to least likely to be in-class. The active version retrains the one-class classifier after each batch on the positively labeled examples found thus far, and then scores the remaining examples to choose the next batch; a sketch of the scoring step is given after this list. All methods have the property of only utilizing the positively queried examples for learning. We thus can list the baselines: 1. Offline (O); 2. Offline Linear SVM (O-LS); 3. Active Linear SVM (A-LS); 4. Offline RBF SVM (O-RS); 5. Active RBF SVM (A-RS); 6. Offline Isolation Forest (O-IF); 7. Active Isolation Forest (A-IF); 8. Offline Robust Covariance (O-RC); 9. Active Robust Covariance (A-RC).
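As an illustration, here is a minimal sketch (ours) of the offline scoring step with scikit-learn's One-Class SVM; the other baselines swap in `IsolationForest` or `EllipticEnvelope`, and the active variants simply refit after each batch. The variable names are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def one_class_query_order(X, init_idx, init_labels, kernel="rbf"):
    """Fit a one-class model on the initial positives and rank the
    remaining pool from most to least likely to be in-class."""
    pos = X[init_idx[init_labels == 1]]          # positives in the initial sample
    clf = OneClassSVM(kernel=kernel).fit(pos)    # trained on positives only
    rest = np.setdiff1d(np.arange(len(X)), init_idx)
    scores = clf.score_samples(X[rest])          # higher = more in-class
    return rest[np.argsort(-scores)]             # offline query order
```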
### 6.2 Experiment Setup

For each of the datasets, we combine all of the data (i.e. any predetermined train/test/validation splits) into one dataset and operate on this dataset. We fix the initial sample to be a random stratified sample of $100$ datapoints. We train a neural network on the initial sample and use the activations of the second-last layer (i.e. the layer immediately before the logits) as an embedding for the data. We fix the embedding throughout, and all of the methods operate on this embedding instead of the original input. This is because recent work has shown that it's effective to use classical methods on the intermediate embeddings of the neural network (Papernot & McDaniel, 2018; Bahri et al., 2020; Bahri & Jiang, 2021); moreover, the original features may have undesirable properties (i.e. relative scaling of the features, high-dimensional pixel data, etc.) which can hurt the performance of some of the baselines. For each dataset, we run experiments using each of the classes as the positive class, as the datasets we used are all multiclass (with the exception of CelebA, which comes with a wide range of binary attributes that we use as classes). Then, for each of the datasets with the exception of SVHN and CelebA, we let the batch size be $5\%$ of the remainder of the dataset (i.e. after removing the initial sample) to obtain $20$ batches; for SVHN and CelebA, due to their size, we let the batch size be $1\%$ of the remainder of the dataset and ran for $50$ batches for SVHN and $30$ batches for CelebA. For all of the experimental results, we averaged across $100$ runs, randomizing over different initial samples, and ran on a cluster of NVIDIA Tesla V100 Tensor Core GPUs.

### 6.3 Datasets and Embeddings

We tested on the following datasets:
1: UCI Letters Recognition (Dua & Graff, 2017), which has $20000$ datapoints and $16$ features based on various statistics of the pixel intensities of the original images of the letters, with $26$ classes, one for each letter. To train the embedding, we used a fully-connected network with one hidden layer of $100$ units and ReLU activations and trained for $20$ epochs (see the sketch below).
2: MNIST, with 70000 28x28 pixel grayscale images of handwritten digits and $10$ classes. We use the same model and epochs as Letters for the embeddings.
3: Fashion MNIST (Xiao et al., 2017), with the same dimensions and embedding training procedure as that of MNIST.
4: CIFAR10, with 60000 32x32 colour images in 10 classes. For the embeddings, we use a simple VGG (Zhang et al., 2015) network, extract the second-last layer, which has $128$ dimensions, and train for $100$ epochs.
5: SVHN (Netzer et al., 2011), with 99289 color images cropped to 32x32 pixels. For the embeddings, we use LeNet5 (LeCun et al., 1998) and train for $20$ epochs.
6: CelebA (Liu et al., 2018), a large-scale face attributes dataset with more than 162770 celebrity images, which we resized to 28x28. The dataset has 40 attribute annotations, which we use as separate binary classification tasks. We use the same embedding procedure as that of SVHN.
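For concreteness, here is a sketch of the embedding step for the Letters setup described above (one hidden layer of $100$ ReLU units, $20$ epochs), written with Keras; the paper does not name a framework, so this is an illustrative reconstruction rather than the authors' code.

```python
import tensorflow as tf

def train_embedding(x_init, y_init, num_classes, epochs=20):
    """Train a small classifier on the initial labeled sample and return a
    model mapping inputs to second-last-layer activations (the embedding)."""
    inputs = tf.keras.Input(shape=x_init.shape[1:])
    h = tf.keras.layers.Flatten()(inputs)
    h = tf.keras.layers.Dense(100, activation="relu")(h)   # second-last layer
    logits = tf.keras.layers.Dense(num_classes)(h)         # layer before = embedding
    model = tf.keras.Model(inputs, logits)
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(
                      from_logits=True))
    model.fit(x_init, y_init, epochs=epochs, verbose=0)
    return tf.keras.Model(inputs, h)   # frozen embedding used by all methods
```

All learners then operate on the output of this embedding model in place of the raw features.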
### 6.4 Hyperparameters

Our method doesn't come with any additional hyperparameters; however, the baselines do require the tuning of hyperparameters. For these methods, we perform $5$-fold cross-validation on the initial sample using accuracy as the metric (these methods, as implemented in scikit-learn, have predict methods which classify whether an example is an outlier relative to the positive class). For the SVM methods, we tune the gamma parameter (kernel coefficient) and nu (an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors). For Isolation Forest, we tune the number of estimators in the ensemble. For Robust Covariance, we tune the proportion of contamination of the data set. For all of these aforementioned hyperparameters, we search over a grid of powers of two.

### 6.5 Evaluation Metrics

We plot the percentage of positive examples label queried across batches for each of the methods to illustrate their performance. We also compute the area under the curve for each method, defined as the average percentage of positive examples retrieved across the batches, and use this as the primary evaluation metric to compare the methods. Since we average over $100$ runs, we also compute an error band on the area under the curve metric. We do this by computing the standard deviation of the percentage of positive examples retrieved for each of the $20$ (or $50$ and $30$ in the case of SVHN and CelebA) batches. We average across these standard deviations and divide by the square root of the number of runs to obtain an estimate of the standard deviation of the mean. We then form a $95\%$ confidence band based on this and consider methods which have overlapping bands as statistical ties; a sketch of this computation is given below, after the results summary.

### 6.6 Results

We show the results for each baseline and dataset/label pair under our area under the curve metric in Table 2. Due to space, we could only show partial results for Letters; we defer the rest, along with the CelebA results, to the Appendix. We nonetheless summarize all of the results here:
1. Letters. Our proposed method, Explore-then-Commit, outright outperforms all the other baselines on all $26$ tasks.
2. MNIST. Our method again outright outperforms all the other baselines on all $10$ tasks.
3. Fashion MNIST. Our method performs competitively on $9$ out of the $10$ tasks, with the next most competitive baseline (Active Isolation Forest) being competitive on $7$ out of the $10$ tasks.
4. CIFAR10. Here, our method performs competitively on $6$ of the $10$ tasks. It's worth noting that we only perform poorly when all of the methods perform poorly, suggesting that in such settings not much learning is possible (i.e. from Table 2, we only perform non-competitively when the AUC metric is under $55\%$; a passive learner that samples uniformly at random is expected to have an AUC of $50\%$).
5. SVHN. Our method is competitive for all tasks and outright wins for all but one task.
6. CelebA. Our method is competitive for 32 out of the 40 tasks. Due to space, the results are shown in the Appendix. We again see a similar pattern as in CIFAR10, where our method only performs poorly when all of the methods perform poorly (i.e. we only perform non-competitively when the AUC metric is under $20\%$; for comparison, a passive learner is expected to have an AUC of $15\%$).
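The following is a sketch (our reconstruction) of the evaluation computation from Section 6.5: each run produces a retrieval curve, the AUC metric is the mean over batches and runs, and the error band follows from the averaged per-batch standard deviations.

```python
import numpy as np

def auc_with_band(curves):
    """curves: (runs, batches) array, where curves[r, b] is the cumulative
    percentage of positives retrieved after batch b in run r."""
    runs, _ = curves.shape
    auc = curves.mean()                          # mean over batches and runs
    sd_per_batch = curves.std(axis=0, ddof=1)    # std over runs, per batch
    sd_of_mean = sd_per_batch.mean() / np.sqrt(runs)
    return auc, 1.96 * sd_of_mean                # AUC and 95% band half-width

# Methods whose 95% bands overlap are treated as statistical ties.
```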
Dataset | Label | O | O-LS | A-LS | O-RS | A-RS | O-IF | A-IF | O-RC | A-RC | EC (Ours)
---|---|---|---|---|---|---|---|---|---|---|---
Letters | A | 91.14 | 52.52 | 52.49 | 84.81 | 89.02 | 59.52 | 84.69 | 64.27 | 86.87 | 97.12
B | 83.41 | 52.73 | 52.57 | 75.95 | 82.41 | 56.13 | 75.89 | 61.38 | 80.18 | 93.76
C | 84.48 | 56.19 | 56.08 | 75.92 | 83.78 | 55.78 | 78.68 | 59.21 | 81.92 | 94.2
D | 83.51 | 52.57 | 52.45 | 76.14 | 82.23 | 56.09 | 76.03 | 61.51 | 78.83 | 93.73
E | 78.6 | 52.85 | 52.99 | 74.84 | 81.41 | 55.77 | 73.7 | 60.17 | 77.27 | 89.5
F | 83.63 | 53.4 | 53.41 | 78.72 | 83.16 | 57.44 | 77.83 | 64.69 | 80.88 | 94.0
G | 82.23 | 52.72 | 52.67 | 75.76 | 81.88 | 58.15 | 74.58 | 63.45 | 79.79 | 92.35
MNIST | 0 | 86.67 | 81.81 | 81.96 | 52.53 | 52.95 | 83.44 | 90.8 | 74.31 | 86.48 | 94.44
1 | 95.89 | 55.22 | 55.11 | 58.47 | 90.31 | 90.16 | 94.27 | 87.34 | 89.8 | 96.46
2 | 75.86 | 60.64 | 60.37 | 52.54 | 52.56 | 72.66 | 80.18 | 62.81 | 77.83 | 85.47
3 | 80.77 | 60.64 | 60.36 | 52.52 | 52.65 | 76.67 | 84.31 | 66.87 | 81.03 | 87.63
4 | 83.05 | 54.23 | 54.14 | 52.47 | 53.08 | 76.86 | 83.61 | 66.71 | 78.31 | 89.14
5 | 75.59 | 52.88 | 52.81 | 52.51 | 52.63 | 61.65 | 69.44 | 57.82 | 71.18 | 87.24
6 | 86.53 | 59.98 | 59.77 | 52.49 | 54.03 | 81.19 | 88.33 | 67.37 | 81.95 | 93.24
7 | 87.05 | 55.83 | 55.63 | 52.54 | 57.02 | 80.71 | 87.26 | 70.14 | 81.76 | 91.62
8 | 75.7 | 56.27 | 56.17 | 52.49 | 52.62 | 69.73 | 78.3 | 61.71 | 77.32 | 83.37
9 | 84.91 | 54.64 | 54.7 | 52.51 | 55.06 | 77.22 | 84.66 | 67.88 | 79.79 | 90.71
Fashion MNIST | 0 | 87.81 | 54.51 | 54.49 | 52.9 | 66.76 | 86.35 | 90.14 | 81.5 | 87.44 | 89.75
1 | 94.73 | 55.84 | 55.67 | 55.12 | 85.21 | 92.67 | 94.73 | 90.42 | 92.54 | 95.93
2 | 84.44 | 55.57 | 55.57 | 52.72 | 63.65 | 82.97 | 87.6 | 78.46 | 85.6 | 87.19
3 | 88.86 | 53.15 | 53.15 | 52.64 | 63.56 | 85.39 | 89.74 | 83.08 | 87.06 | 91.29
4 | 84.9 | 56.47 | 56.41 | 52.68 | 62.54 | 83.23 | 87.25 | 79.68 | 83.61 | 88.09
5 | 88.09 | 52.54 | 52.52 | 52.62 | 57.14 | 79.59 | 84.7 | 82.92 | 75.2 | 89.16
6 | 77.31 | 52.5 | 52.5 | 52.98 | 63.62 | 75.94 | 81.63 | 71.46 | 80.18 | 81.1
7 | 94.41 | 52.47 | 52.49 | 52.97 | 69.91 | 92.66 | 95.06 | 90.72 | 93.05 | 95.17
8 | 82.86 | 55.43 | 55.46 | 52.5 | 54.1 | 78.23 | 86.56 | 78.33 | 83.97 | 85.96
9 | 92.5 | 70.39 | 70.13 | 52.59 | 58.31 | 90.93 | 94.12 | 90.35 | 91.4 | 93.49
CIFAR10 | 0 | 67.06 | 54.33 | 54.27 | 64.84 | 64.8 | 64.6 | 68.03 | 65.49 | 69.36 | 70.69
1 | 57.24 | 52.57 | 52.54 | 55.67 | 55.01 | 54.25 | 53.68 | 57.68 | 50.3 | 54.06
2 | 65.43 | 52.48 | 52.51 | 59.75 | 59.92 | 62.09 | 64.61 | 63.42 | 65.15 | 68.71
3 | 53.98 | 52.5 | 52.51 | 53.21 | 53.28 | 53.34 | 52.8 | 54.13 | 50.09 | 52.2
4 | 70.94 | 52.5 | 52.55 | 66.68 | 67.52 | 68.54 | 71.52 | 67.95 | 68.29 | 73.97
5 | 57.59 | 52.53 | 52.48 | 56.13 | 56.09 | 56.55 | 56.7 | 58.83 | 53.12 | 53.9
6 | 72.16 | 52.48 | 52.49 | 67.67 | 67.89 | 67.79 | 70.87 | 70.78 | 65.26 | 74.73
7 | 58.51 | 52.54 | 52.53 | 56.0 | 55.93 | 56.48 | 57.7 | 58.07 | 53.92 | 58.36
8 | 70.25 | 52.48 | 52.47 | 66.67 | 66.86 | 67.92 | 71.57 | 68.13 | 69.91 | 71.42
9 | 62.79 | 52.54 | 52.49 | 61.8 | 61.74 | 63.51 | 66.18 | 63.97 | 62.65 | 55.63
SVHN | 0 | 31.11 | 25.47 | 25.49 | 28.54 | 29.89 | 28.0 | 30.12 | 31.57 | 39.59 | 39.67
1 | 28.23 | 25.53 | 25.52 | 25.63 | 25.25 | 25.08 | 25.44 | 32.81 | 35.19 | 37.32
2 | 28.66 | 25.53 | 25.52 | 26.28 | 26.49 | 26.32 | 26.69 | 30.86 | 32.68 | 34.31
3 | 28.19 | 25.57 | 25.56 | 26.11 | 26.47 | 26.34 | 26.79 | 29.37 | 31.0 | 33.81
4 | 28.01 | 25.51 | 25.49 | 25.54 | 25.16 | 25.21 | 26.66 | 27.31 | 33.4 | 36.31
5 | 29.02 | 25.56 | 25.53 | 26.55 | 26.98 | 26.77 | 27.75 | 29.67 | 32.81 | 34.44
6 | 28.8 | 25.49 | 25.5 | 26.37 | 26.6 | 26.24 | 27.72 | 28.7 | 32.34 | 35.29
7 | 29.36 | 25.52 | 25.48 | 26.46 | 26.18 | 25.7 | 27.22 | 27.79 | 35.14 | 37.62
8 | 28.04 | 25.51 | 25.48 | 26.29 | 27.03 | 26.41 | 27.43 | 27.29 | 31.43 | 33.37
9 | 28.48 | 25.49 | 25.49 | 26.82 | 27.5 | 26.28 | 28.04 | 27.62 | 32.83 | 34.36

Table 2: Area under the curve metric for various benchmark image-based datasets. For each of the datasets and possible labels, we show the area under the curve metric averaged across $100$ runs, with the top value bolded (any methods whose $95\%$ confidence intervals overlap were considered statistical ties). Due to space, we show the rest of the Letters results as well as the CelebA results in the Appendix.

## 7 Conclusion

We have formalized the problem of active covering and introduced several baselines, including a principled active approach that attains better guarantees than the offline algorithm. We showed in experiments that our proposed Explore-then-Commit algorithm has strong performance against a number of baselines while having desirable properties, including not having additional hyperparameters and not needing to store or use the queried negative examples. Future work involves extending the theoretical analysis by relaxing the hard boundary density assumption on $\mathcal{X}_{+}$, letting $\mathcal{X}_{+}$ be a lower-dimensional manifold embedded in the $D$-dimensional space (rather than being full-dimensional) and attaining excess query cost guarantees that depend on this lower dimension, and investigating the computational and privacy implications of such approaches.

## References

* Acosta et al. (2019) Acosta, I. C. C., Khodadadzadeh, M., Tusa, L., Ghamisi, P., and Gloaguen, R. A machine learning framework for drill-core mineral mapping using hyperspectral and high-resolution mineralogical data fusion. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 12(12):4829–4842, 2019.
* Awoyemi et al. (2017) Awoyemi, J. O., Adetunmbi, A. O., and Oluwadare, S. A. Credit card fraud detection using machine learning techniques: A comparative analysis. In _2017 International Conference on Computing Networking and Informatics (ICCNI)_, pp. 1–9. IEEE, 2017.
* Bahri & Jiang (2021) Bahri, D. and Jiang, H. Locally adaptive label smoothing for predictive churn. _arXiv preprint arXiv:2102.05140_, 2021.
* Bahri et al. (2020) Bahri, D., Jiang, H., and Gupta, M. Deep k-nn for noisy labels. In _International Conference on Machine Learning_, pp. 540–550. PMLR, 2020.
* Biau et al. (2008) Biau, G., Cadre, B., and Pelletier, B. Exact rates in density support estimation. _Journal of Multivariate Analysis_, 99(10):2185–2207, 2008.
* Boneh & Hofri (1997) Boneh, A. and Hofri, M. The coupon-collector problem revisited—a survey of engineering problems and computational methods. _Stochastic Models_, 13(1):39–66, 1997.
* Chaudhuri & Dasgupta (2010) Chaudhuri, K. and Dasgupta, S. Rates of convergence for the cluster tree. In _Advances in Neural Information Processing Systems_, pp. 343–351, 2010.
* Crammer & Chechik (2004) Crammer, K. and Chechik, G. A needle in a haystack: Local one-class optimization. In _Proceedings of the Twenty-First International Conference on Machine Learning_, pp. 26, 2004.
* Cuevas et al. (1997) Cuevas, A., Fraiman, R., et al. A plug-in approach to support estimation. _The Annals of Statistics_, 25(6):2300–2312, 1997.
* Devroye & Wise (1980) Devroye, L. and Wise, G. L. Detection of abnormal behavior via nonparametric estimation of the support. _SIAM Journal on Applied Mathematics_, 38(3):480–488, 1980.
* Dua & Graff (2017) Dua, D. and Graff, C. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
* García-Recuero (2016) García-Recuero, Á. Discouraging abusive behavior in privacy-preserving online social networking applications. In _Proceedings of the 25th International Conference Companion on World Wide Web_, pp. 305–309, 2016.
* Garivier et al. (2016) Garivier, A., Lattimore, T., and Kaufmann, E. On explore-then-commit strategies. _Advances in Neural Information Processing Systems_, 29:784–792, 2016.
* Garnett et al. (2012) Garnett, R., Krishnamurthy, Y., Xiong, X., Schneider, J., and Mann, R. Bayesian optimal active search and surveying. In _Proceedings of the 29th International Conference on Machine Learning_, pp. 843–850, 2012.
* Geffroy (1964) Geffroy, J. Sur un problème d'estimation géométrique. _Publ. Inst. Statist. Univ. Paris_, 13:191–210, 1964.
* Gorin (1983) Gorin, A. On the volume of tubes. _Illinois Journal of Mathematics_, 27(1):158–171, 1983.
* Guillory & Bilmes (2010) Guillory, A. and Bilmes, J. Interactive submodular set cover. _arXiv preprint arXiv:1002.3345_, 2010.
* Helmbold et al. (2000) Helmbold, D. P., Littlestone, N., and Long, P. M. Apple tasting. _Information and Computation_, 161(2):85–139, 2000.
* Hempstalk et al. (2008) Hempstalk, K., Frank, E., and Witten, I. H. One-class classification by combining density and class probability estimation. In _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_, pp. 505–519. Springer, 2008.
* Iwata & Nagano (2009) Iwata, S. and Nagano, K. Submodular function minimization under covering constraints. In _2009 50th Annual IEEE Symposium on Foundations of Computer Science_, pp. 671–680. IEEE, 2009.
* Jain & Jamieson (2019) Jain, L. and Jamieson, K. G. A new perspective on pool-based active classification and false-discovery control. In _Advances in Neural Information Processing Systems_, pp. 13992–14003, 2019.
* Jiang (2017) Jiang, H. Density level set estimation on manifolds with DBSCAN. In _International Conference on Machine Learning_, pp. 1684–1693. PMLR, 2017.
* Jiang (2019) Jiang, H. Non-asymptotic uniform rates of consistency for k-nn regression. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 33, pp. 3999–4006, 2019.
* Jiang et al. (2020) Jiang, H., Jiang, Q., and Pacchiano, A. Learning the truth from only one side of the story. _arXiv preprint arXiv:2006.04858_, 2020.
* Jiang et al. (2018) Jiang, S., Malkomes, G., Abbott, M., Moseley, B., and Garnett, R. Efficient nonmyopic batch active search. In _Advances in Neural Information Processing Systems_, pp. 1099–1109, 2018.
* Jiang et al. (2019) Jiang, S., Garnett, R., and Moseley, B. Cost effective active search. In _Advances in Neural Information Processing Systems_, pp. 4880–4889, 2019.
* Khandani et al. (2010) Khandani, A. E., Kim, A. J., and Lo, A. W. Consumer credit-risk models via machine-learning algorithms. _Journal of Banking & Finance_, 34(11):2767–2787, 2010.
* Korostelev & Tsybakov (1993) Korostelev, A. P. and Tsybakov, A. B. Estimation of the density support and its functionals. _Problemy Peredachi Informatsii_, 29(1):3–18, 1993.
* Kpotufe (2011) Kpotufe, S. k-nn regression adapts to local intrinsic dimension. _arXiv preprint arXiv:1110.4300_, 2011.
* LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_, 86(11):2278–2324, 1998.
* Li et al. (2008) Li, K., Zhong, Z., and Ramaswamy, L. Privacy-aware collaborative spam filtering. _IEEE Transactions on Parallel and Distributed Systems_, 20(5):725–739, 2008.
* Liu et al. (2008) Liu, F. T., Ting, K. M., and Zhou, Z.-H. Isolation forest. In _2008 Eighth IEEE International Conference on Data Mining_, pp. 413–422. IEEE, 2008.
* Liu et al. (2018) Liu, Z., Luo, P., Wang, X., and Tang, X. Large-scale CelebFaces Attributes (CelebA) dataset. Retrieved August 15, 2018.
* Manevitz & Yousef (2001) Manevitz, L. M. and Yousef, M. One-class SVMs for document classification. _Journal of Machine Learning Research_, 2(Dec):139–154, 2001.
* Netzer et al. (2011) Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsupervised feature learning. 2011.
* Nobata et al. (2016) Nobata, C., Tetreault, J., Thomas, A., Mehdad, Y., and Chang, Y. Abusive language detection in online user content. In _Proceedings of the 25th International Conference on World Wide Web_, pp. 145–153, 2016.
* Ou-Yang et al. (2012) Ou-Yang, S.-s., Lu, J.-y., Kong, X.-q., Liang, Z.-j., Luo, C., and Jiang, H. Computational drug discovery. _Acta Pharmacologica Sinica_, 33(9):1131–1140, 2012.
* Papernot & McDaniel (2018) Papernot, N. and McDaniel, P. Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning. _arXiv preprint arXiv:1803.04765_, 2018.
* Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_, 12:2825–2830, 2011.
* Pelletier (2005) Pelletier, B. Kernel density estimation on Riemannian manifolds. _Statistics & Probability Letters_, 73(3):297–304, 2005.
* Rousseeuw & Driessen (1999) Rousseeuw, P. J. and Driessen, K. V. A fast algorithm for the minimum covariance determinant estimator. _Technometrics_, 41(3):212–223, 1999.
* Ruff et al. (2018) Ruff, L., Vandermeulen, R., Goernitz, N., Deecke, L., Siddiqui, S. A., Binder, A., Müller, E., and Kloft, M. Deep one-class classification. In _International Conference on Machine Learning_, pp. 4393–4402, 2018.
* Schölkopf et al. (2001) Schölkopf, B., Platt, J. C., Shawe-Taylor, J., Smola, A. J., and Williamson, R. C. Estimating the support of a high-dimensional distribution. _Neural Computation_, 13(7):1443–1471, 2001.
* Singh et al. (2009) Singh, A., Scott, C., Nowak, R., et al. Adaptive Hausdorff estimation of density level sets. _The Annals of Statistics_, 37(5B):2760–2782, 2009.
* Slavík (1997) Slavík, P. A tight analysis of the greedy algorithm for set cover. _Journal of Algorithms_, 25(2):237–254, 1997.
* Xiao et al. (2017) Xiao, H., Rasul, K., and Vollgraf, R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. _arXiv preprint arXiv:1708.07747_, 2017.
* Ypma & Duin (1998) Ypma, A. and Duin, R. P. Support objects for domain approximation. In _International Conference on Artificial Neural Networks_, pp. 719–724. Springer, 1998.
* Zhang et al. (2015) Zhang, X., Zou, J., He, K., and Sun, J. Accelerating very deep convolutional networks for classification and detection. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 38(10):1943–1955, 2015.
* Zhao & Lai (2020) Zhao, P. and Lai, L. Analysis of knn density estimation. _arXiv preprint arXiv:2010.00438_, 2020.

## Appendix A Proofs

### A.1 Proofs for Section 2

###### Proof of Theorem 1.
Consider the distribution where there are two connected components, one for $\mathcal{X}_{+}$ and one for $\mathcal{X}_{-}$, each with mixture probability $p=\frac{1}{2}$. Thus, Assumption 1 holds, and we are free to choose the other parameters of the distribution in any way that satisfies Assumptions 2 and 3 (e.g. a mixture of uniform density functions satisfies these assumptions). Now note that with probability $\frac{1}{2}$, the final point that is label queried by the passive learner will be positive; thus, the passive algorithm will need to query all of the points with probability $\frac{1}{2}$ in order to retrieve all positive points. In such an event, the excess query cost is at least $\frac{1}{2}\cdot n$. ∎

### A.2 Proofs for Section 3

Much of our technical results require the following uniform high-probability guarantee that balls of sufficient probability mass contain an example:

###### Lemma 3.
Let $0<\delta<1$, let $\mathcal{F}$ be some distribution over $\mathbb{R}^{D}$, and let $X$ be a sample of size $n$ drawn i.i.d. from $\mathcal{F}$. There exists a universal constant $C_{0}$ such that the following holds with probability at least $1-\delta$ uniformly over all balls $B\subset\mathbb{R}^{D}$:

$\displaystyle\mathcal{F}(B)\geq\frac{C_{0}\cdot D\cdot\log(2/\delta)\log n}{n}\Rightarrow|B\cap X|>0.$

###### Proof.
This follows by Lemma 7 of Chaudhuri & Dasgupta (2010). ∎

The following result bounds the volume of the $\epsilon$-neighborhood around $\mathcal{X}_{+}$, which will be used later to bound the excess number of points queried around $\mathcal{X}_{+}$. The result says that the volume of the $\epsilon$-neighborhood around $\mathcal{X}_{+}$ (not including $\mathcal{X}_{+}$ itself) is linear in $\epsilon$.

###### Lemma 4.
Suppose Assumption 2 holds. Then there exist constants $r_{1},C_{+}^{\prime}>0$ depending only on $\mathcal{X}_{+}$ such that for all $0<\epsilon<r_{1}$, we have

$\displaystyle\text{Vol}(B(\mathcal{X}_{+},\epsilon)\backslash\mathcal{X}_{+})\leq C_{+}^{\prime}\cdot\epsilon,$

where $B(\mathcal{X}_{+},\epsilon):=\\{x\in\mathbb{R}^{D}:\inf_{x^{\prime}\in\mathcal{X}_{+}}|x-x^{\prime}|\leq\epsilon\\}$.

###### Proof of Lemma 4.
This follows from Gorin (1983). To see this, the equation on page 159 of Gorin (1983) states that if $M$ and $N$ are respectively $d$-dimensional and $(d+k)$-dimensional compact smooth Riemannian manifolds and $f:M\rightarrow N$ is a smooth isometric embedding, then we have

$\displaystyle\text{Vol}(B(f(M),\epsilon))=V_{k}\cdot\epsilon^{k}\cdot\text{Vol}(M)+O(\epsilon^{k+1}),$

where $V_{k}$ is the volume of a $k$-dimensional ball. Here, we take $M=\mathcal{X}_{+}$ and $N=B(\mathcal{X}_{+},r_{1})$ for some $r_{1}>0$. Then we have $k=0$ and, taking $f$ to be the identity function,

$\displaystyle\text{Vol}(B(\mathcal{X}_{+},\epsilon))=\text{Vol}(\mathcal{X}_{+})+O(\epsilon),$

and the result follows immediately. ∎

###### Proof of Theorem 2.
By Hoeffding's inequality, out of the initial $m$ examples that Algorithm 1 label queries, we have with probability at least $1-\delta/2$ that at least a $p-\sqrt{\frac{1}{2m}\cdot\log(2/\delta)}$ fraction of them are positively labeled, since the event that an example is positive follows a Bernoulli distribution with probability $p$.
Then, by the condition on $m$, we have that at least a $p/2$ fraction of the points are positively labeled. Take

$\displaystyle\epsilon=\left(\frac{2\cdot C_{0}\cdot D\cdot\log(4/\delta)\log(p\cdot m/2)}{p^{2}\cdot\lambda_{0}\cdot C_{+}\cdot v_{D}\cdot m}\right)^{1/D},\qquad M_{0}=\max\left\\{\frac{2\cdot C_{0}\cdot D(\log(p/2)+1)}{p^{2}\cdot\lambda_{0}\cdot C_{+}\cdot v_{D}\cdot\min\\{r_{0},r_{1}\\}^{D}},2e\right\\},$

where $v_{D}$ is the volume of a unit ball in $\mathbb{R}^{D}$. Then the condition on $m$ and $M_{0}$ guarantees that $\epsilon<\min\\{r_{0},r_{1}\\}$. Let $x\in\mathcal{X}_{+}$. We have that the probability mass of positive examples in $B(x,\epsilon)$ w.r.t. $\mathcal{P}$ is:

$\displaystyle p\cdot\mathcal{P}_{+}(B(x,\epsilon))$ $\displaystyle\geq p\cdot\lambda_{0}\cdot\text{Vol}(B(x,\epsilon)\cap\mathcal{X}_{+})$ $\displaystyle\geq p\cdot\lambda_{0}\cdot C_{+}\cdot\text{Vol}(B(x,\epsilon))$ $\displaystyle\geq p\cdot\lambda_{0}\cdot C_{+}\cdot v_{D}\cdot\epsilon^{D}$ $\displaystyle\geq\frac{2\cdot C_{0}\cdot D\log(4/\delta)\log(p\cdot m/2)}{p\cdot m}.$

Then by Lemma 3, we have with probability at least $1-\delta/2$ that all the positive examples in $X$ are within $\epsilon$ of one of the positive examples among the initially sampled $m$ examples. Therefore, Algorithm 1 retrieves all of the positive examples. Now we bound the expected excess query cost:

$\displaystyle\mathbb{E}[C_{\text{offline}}]$ $\displaystyle\leq(1-p)\cdot m+n\cdot(1-p)\cdot\mathcal{P}_{-}(B(\mathcal{X}_{+},\epsilon)\backslash\mathcal{X}_{+})$ $\displaystyle\leq(1-p)\cdot m+n\cdot(1-p)\cdot\lambda_{1}\cdot C_{+}^{\prime}\cdot\epsilon.$

The result follows. ∎

###### Proof of Theorem 3.
Let $\mathcal{P}_{+}$ be the uniform distribution on the unit hypercube $[0,1]^{D}$ and $\mathcal{P}_{-}$ be the uniform distribution on $[-1,2]^{D}$. In the initial sampling phase of Algorithm 1, at most $m$ of the examples will be positively labeled. Let $\widehat{\mathcal{X}_{+}}=X\cap\left(\cup_{x\in X_{0,+}}B(x,\epsilon)\right)$ be the set of points that Algorithm 1 labeled. Then Theorem 3b in Cuevas et al. (1997) shows that for $n$ sufficiently large, with probability at least $1/4$, we have

$\displaystyle d_{H}(\widehat{\mathcal{X}_{+}},\mathcal{X}_{+})\geq\frac{1}{4}\left(\frac{\log m}{m}\right)^{1/D}$

for any $\epsilon>0$, where $d_{H}(A,B):=\max\\{\sup_{x\in A}d(x,B),\sup_{x\in B}d(x,A)\\}$ is the Hausdorff distance. Therefore, we have (in the case of taking $\epsilon\rightarrow 0$):

$\displaystyle d_{H}(X_{0,+},\mathcal{X}_{+})\geq\frac{1}{4}\left(\frac{\log m}{m}\right)^{1/D}.$

Since $X_{0,+}\subseteq\mathcal{X}_{+}$, it follows that $d_{H}(X_{0,+},\mathcal{X}_{+})=\sup_{x\in\mathcal{X}_{+}}d(x,X_{0,+})$. Therefore, we need $\epsilon\geq\frac{1}{4}\left(\frac{\log m}{m}\right)^{1/D}$ in order for Algorithm 1 to recover all of the positive examples. Thus, the expected excess query cost is at least (for some $C>0$)

$\displaystyle\mathbb{E}[C_{\text{offline}}]\geq(1-p)\cdot m+C\cdot\left(\frac{\log m}{m}\right)^{1/D}\cdot n,$

as desired. ∎

### A.3 Proofs for Section 4

###### Proof of Lemma 1.
By Hoeffding's inequality, out of the initial $m$ examples that Algorithm 2 label queries, we have with probability at least $1-\delta/2$ that at least a $p-\sqrt{\frac{1}{2m}\cdot\log(2/\delta)}$ fraction of them are positively labeled, since the event that an example is positive follows a Bernoulli distribution with probability $p$.
Then, by the condition on $m$, we have that at least a $p/2$ fraction of the points are positively labeled, and thus we have at least $mp/2$ positive examples. Then we have that, out of these $mp/2$ examples, the probability that none of them are in $\mathcal{X}_{+,i}$ for a given $i\in[c]$ is at most

$\displaystyle(1-\mathcal{P}_{+}(\mathcal{X}_{+,i}))^{mp/2}\leq(1-q)^{mp/2}\leq\frac{\delta}{2c}.$

The result follows by a union bound. ∎

###### Proof of Lemma 2.
Let $x,x^{\prime}\in\mathcal{X}_{+,i}$. There exists a path $x=x_{1}\to x_{2}\to\ldots\to x_{q}=x^{\prime}$ in $\mathcal{X}_{+,i}$ such that $||x_{j}-x_{j+1}||\leq\epsilon/3$. We also have that the probability mass of positive examples in $B(x_{j},\epsilon/3)$ w.r.t. $\mathcal{P}$ is:

$\displaystyle p\cdot\mathcal{P}_{+}(B(x_{j},\epsilon/3))$ $\displaystyle\geq p\cdot C_{+}\cdot\lambda_{0}\cdot v_{D}\cdot(\epsilon/3)^{D}\geq\frac{C_{0}\cdot D\log(2/\delta)\cdot\log n}{n}.$

Therefore, by Lemma 3, there exists $x_{j}^{\prime}\in B(x_{j},\epsilon/3)\cap X_{+}$, where $X_{+}$ are the positive examples in $X$. Hence, by the triangle inequality, there exists a path $x=x_{1}^{\prime}\to x_{2}^{\prime}\to\ldots\to x_{q}^{\prime}=x^{\prime}$ all in $X_{+}$ where $||x_{j}^{\prime}-x_{j+1}^{\prime}||\leq||x_{j}^{\prime}-x_{j}||+||x_{j}-x_{j+1}||+||x_{j+1}-x_{j+1}^{\prime}||\leq\epsilon$, implying that $\mathcal{X}_{+,i}\cap X_{+}$ is connected in the $\epsilon$-neighborhood graph of $X_{+}$. The result follows immediately. ∎

Finally, we combine these two results to show the final excess query cost guarantee for Algorithm 2.

###### Proof of Theorem 4.
Take

$\displaystyle N_{0}=\frac{3^{D}\cdot C_{0}\cdot D}{\min\\{r_{0},r_{1}\\}^{D}\cdot p\cdot C_{+}\cdot\lambda_{0}\cdot v_{D}}.$

By Lemma 1, there exists at least one positive example in the initial $m$ samples from each connected component of $\mathcal{X}_{+}$. Define

$\displaystyle\epsilon:=3\left(\frac{C_{0}\cdot D\log(4/\delta)\cdot\log n}{p\cdot C_{+}\cdot\lambda_{0}\cdot v_{D}\cdot n}\right)^{1/D}.$

We have that the condition on $n$ implies that $\epsilon\leq\min\\{r_{0},r_{1}\\}$. By Lemma 2, we have that all of the positive examples of each connected component of $\mathcal{X}_{+}$ are in the same CC of the $\epsilon$-neighborhood graph of the positive examples. Therefore, when the algorithm terminates, the set of examples it will have selected is contained in $B(\mathcal{X}_{+},\epsilon)$. Therefore, we have

$\displaystyle\mathbb{E}[C_{\text{exp-commit}}]\leq(1-p)\cdot m+C_{+}^{\prime}\cdot\epsilon\cdot n\leq(1-p)\cdot m+C\cdot\left(\log(4/\delta)\cdot\log n\right)^{1/D}\cdot n^{(D-1)/D},$

for some $C$ depending on $\mathcal{P}$, as desired. ∎

## Appendix B Additional Experiment Plots

In Table 3, we show the full results for the Letters dataset under the area under the curve metric. We see that in all cases, our method outperforms all baselines outright. In Table 4, we show the full results for CelebA. We see that our method is competitive for 32 out of the 40 tasks.
Dataset | Label | O | O-LS | A-LS | O-RS | A-RS | O-IF | A-IF | O-RC | A-RC | EC (Ours)
---|---|---|---|---|---|---|---|---|---|---|---
Letters | A | 91.14 | 52.52 | 52.49 | 84.81 | 89.02 | 59.52 | 84.69 | 64.27 | 86.87 | 97.12
B | 83.41 | 52.73 | 52.57 | 75.95 | 82.41 | 56.13 | 75.89 | 61.38 | 80.18 | 93.76
C | 84.48 | 56.19 | 56.08 | 75.92 | 83.78 | 55.78 | 78.68 | 59.21 | 81.92 | 94.2
D | 83.51 | 52.57 | 52.45 | 76.14 | 82.23 | 56.09 | 76.03 | 61.51 | 78.83 | 93.73
E | 78.6 | 52.85 | 52.99 | 74.84 | 81.41 | 55.77 | 73.7 | 60.17 | 77.27 | 89.5
F | 83.63 | 53.4 | 53.41 | 78.72 | 83.16 | 57.44 | 77.83 | 64.69 | 80.88 | 94.0
G | 82.23 | 52.72 | 52.67 | 75.76 | 81.88 | 58.15 | 74.58 | 63.45 | 79.79 | 92.35
H | 69.07 | 52.4 | 52.63 | 61.19 | 69.51 | 53.95 | 64.15 | 52.03 | 66.94 | 81.78
I | 84.26 | 52.37 | 52.59 | 73.96 | 80.72 | 55.95 | 77.56 | 59.11 | 83.83 | 93.89
J | 84.24 | 54.45 | 54.35 | 75.16 | 81.72 | 56.22 | 77.06 | 58.58 | 81.61 | 94.28
K | 72.7 | 52.57 | 52.62 | 65.15 | 72.03 | 55.46 | 68.12 | 51.45 | 74.38 | 89.4
L | 80.98 | 52.59 | 52.67 | 72.74 | 79.87 | 55.44 | 77.73 | 59.16 | 78.57 | 93.18
M | 81.83 | 54.41 | 54.25 | 77.48 | 83.23 | 59.55 | 78.61 | 60.62 | 81.67 | 93.07
N | 75.15 | 52.67 | 52.68 | 68.58 | 75.31 | 55.37 | 69.43 | 55.85 | 73.74 | 89.8
O | 87.71 | 52.51 | 52.62 | 80.88 | 86.08 | 57.88 | 78.08 | 67.62 | 82.71 | 94.69
P | 84.76 | 52.7 | 52.7 | 79.24 | 84.47 | 57.63 | 79.18 | 64.81 | 83.19 | 93.47
Q | 82.25 | 53.22 | 53.08 | 74.97 | 80.09 | 55.71 | 74.66 | 60.49 | 79.55 | 92.18
R | 82.68 | 52.59 | 52.48 | 76.72 | 82.47 | 56.32 | 75.3 | 61.37 | 79.26 | 92.77
S | 76.51 | 52.83 | 52.91 | 70.42 | 75.85 | 55.24 | 72.01 | 59.66 | 75.84 | 89.27
T | 83.47 | 55.32 | 55.35 | 76.38 | 82.95 | 56.87 | 79.06 | 62.87 | 81.96 | 93.34
U | 78.05 | 52.91 | 53.05 | 73.72 | 81.28 | 55.86 | 72.81 | 58.56 | 76.41 | 92.7
V | 89.82 | 54.61 | 54.59 | 82.3 | 87.65 | 59.74 | 82.18 | 61.01 | 85.18 | 96.19
W | 90.15 | 55.25 | 55.29 | 84.57 | 89.11 | 59.37 | 82.04 | 63.1 | 86.65 | 96.79
X | 80.18 | 52.63 | 52.6 | 74.84 | 80.68 | 55.92 | 73.04 | 62.48 | 78.1 | 92.93
Y | 81.73 | 54.65 | 54.7 | 72.3 | 79.67 | 57.49 | 76.38 | 53.48 | 79.33 | 92.46
Z | 83.78 | 54.6 | 54.54 | 76.89 | 84.18 | 56.46 | 78.56 | 60.14 | 82.64 | 93.35

Table 3: Letters: Area under curve metric.
Label | O | O-LS | A-LS | O-RS | A-RS | O-IF | A-IF | O-RC | A-RC | EC (Ours)
---|---|---|---|---|---|---|---|---|---|---
5-o-Clock-Shadow | 20.17 | 15.54 | 15.55 | 17.31 | 18.28 | 17.89 | 18.85 | 20.25 | 22.49 | 23.85
Arched-Eyebrows | 19.29 | 15.52 | 15.51 | 17.54 | 17.69 | 18.34 | 18.65 | 19.87 | 19.76 | 20.93
Attractive | 18.4 | 19.19 | 18.32 | 18.6 | 18.7 | 18.86 | 18.89 | 18.03 | 17.93 | 18.61
Bags-Under-Eyes | 17.06 | 15.5 | 15.51 | 16.58 | 17.54 | 17.26 | 17.11 | 17.3 | 16.33 | 16.97
Bangs | 20.88 | 15.52 | 15.52 | 20.6 | 19.97 | 20.49 | 21.43 | 22.12 | 19.9 | 22.08
Bald | 18.52 | 15.48 | 15.49 | 16.6 | 16.98 | 15.48 | 17.92 | NA | NA | 28.34
Big-Lips | 15.81 | 15.51 | 15.5 | 15.44 | 14.91 | 15.26 | 15.46 | 15.61 | 15.68 | 16.35
Big-Nose | 16.58 | 15.53 | 15.53 | 15.92 | 16.58 | 15.95 | 16.23 | 16.88 | 16.12 | 17.23
Black-Hair | 22.87 | 15.49 | 15.5 | 20.8 | 19.43 | 21.35 | 22.52 | 21.23 | 20.5 | 24.6
Blond-Hair | 36.36 | 15.67 | 15.67 | 31.41 | 34.64 | 35.92 | 37.84 | 37.31 | 39.24 | 41.72
Blurry | 16.75 | 15.44 | 15.48 | 16.38 | 16.06 | 16.48 | 16.66 | 16.66 | 16.81 | 17.79
Brown-Hair | 20.88 | 15.48 | 15.48 | 18.82 | 20.45 | 20.64 | 21.34 | 20.94 | 20.77 | 21.57
Bushy-Eyebrows | 19.11 | 15.49 | 15.49 | 17.87 | 18.12 | 18.18 | 18.7 | 18.88 | 19.2 | 21.3
Chubby | 16.77 | 15.51 | 15.5 | 15.98 | 16.2 | 15.83 | 16.15 | 16.51 | 19.01 | 18.63
Double-Chin | 18.05 | 15.53 | 15.52 | 16.73 | 17.31 | 16.17 | 17.28 | 16.66 | 22.57 | 22.3
Eyeglasses | 16.48 | 15.51 | 15.5 | 16.14 | 17.12 | 16.48 | 16.78 | 17.74 | 15.88 | 15.44
Goatee | 17.57 | 15.49 | 15.51 | 16.86 | 16.69 | 16.22 | 16.93 | 16.98 | 17.98 | 19.81
Gray-Hair | 23.19 | 15.53 | 15.55 | 19.55 | 21.77 | 17.01 | 23.12 | 20.84 | 32.86 | 31.66
Heavy-Makeup | 21.42 | 15.85 | 15.58 | 20.49 | 20.5 | 21.24 | 21.94 | 21.29 | 20.4 | 22.45
High-Cheekbones | 19.17 | 15.38 | 15.17 | 18.59 | 18.79 | 18.91 | 18.89 | 19.78 | 18.77 | 20.07
Male | 18.64 | 15.49 | 15.57 | 20.09 | 19.36 | 19.14 | 18.8 | 18.43 | 16.5 | 19.24
Mouth-Slightly-Open | 17.37 | 15.66 | 15.56 | 17.43 | 17.34 | 17.42 | 17.16 | 17.79 | 17.2 | 17.77
Mustache | 17.16 | 15.5 | 15.48 | 16.77 | 16.46 | 15.98 | 16.99 | 16.64 | 17.05 | 18.13
Narrow-Eyes | 15.48 | 15.49 | 15.5 | 15.66 | 15.95 | 15.78 | 15.78 | 15.68 | 15.36 | 14.83
No-Beard | 15.82 | 16.38 | 16.37 | 15.79 | 15.93 | 16.0 | 15.99 | 16.08 | 15.98 | 15.83
Oval-Face | 18.02 | 15.5 | 15.5 | 16.76 | 17.11 | 17.18 | 17.37 | 18.56 | 18.37 | 19.22
Pale-Skin | 18.15 | 15.45 | 15.48 | 19.89 | 19.15 | 16.95 | 20.95 | 18.75 | 17.94 | 19.55
Pointy-Nose | 18.55 | 15.49 | 15.49 | 17.03 | 17.22 | 17.23 | 17.75 | 18.69 | 18.8 | 19.97
Receding-Hairline | 18.5 | 15.5 | 15.5 | 17.09 | 17.47 | 16.67 | 17.55 | 18.61 | 22.2 | 22.38
Rosy-Cheeks | 23.96 | 15.5 | 15.51 | 19.77 | 22.49 | 18.5 | 22.3 | 22.45 | 32.64 | 34.69
Sideburns | 18.32 | 15.51 | 15.52 | 17.23 | 17.19 | 16.9 | 17.39 | 17.41 | 19.36 | 21.77
Smiling | 19.46 | 16.01 | 15.52 | 18.97 | 19.05 | 19.31 | 19.15 | 19.89 | 18.81 | 20.15
Straight-Hair | 16.1 | 15.5 | 15.5 | 15.97 | 16.25 | 15.91 | 16.02 | 16.24 | 16.13 | 16.29
Wavy-Hair | 21.0 | 15.52 | 15.52 | 20.18 | 20.09 | 20.56 | 20.97 | 21.13 | 20.42 | 21.88
Wearing-Earrings | 18.75 | 15.47 | 15.47 | 16.93 | 17.87 | 17.75 | 18.37 | 19.28 | 20.0 | 20.62
Wearing-Hat | 18.32 | 15.48 | 15.49 | 18.33 | 18.27 | 17.44 | 19.95 | 20.12 | 15.16 | 16.82
Wearing-Lipstick | 20.42 | 19.0 | 17.34 | 19.64 | 19.65 | 20.49 | 20.8 | 20.21 | 19.51 | 21.06
Wearing-Necklace | 18.99 | 15.48 | 15.49 | 16.72 | 17.85 | 17.54 | 18.15 | 19.44 | 21.28 | 21.48
Wearing-Necktie | 19.56 | 15.52 | 15.51 | 17.51 | 18.13 | 16.53 | 18.36 | 18.45 | 23.79 | 24.46
Young | 15.51 | 16.22 | 16.21 | 15.54 | 15.63 | 15.55 | 15.55 | 15.56 | 15.51 | 15.57

Table 4: CelebA: Area under curve metric. We note that for Bald, there were no results for the Robust Covariance methods. This is because, due to the low rate of positive examples, it was not possible to tune Robust Covariance's hyperparameters via cross-validation on the initial sample.
# The Spin-down of PSR J0821–4300 and PSR J1210–5226: Confirmation of Central Compact Objects as Anti-Magnetars

E. V. Gotthelf, J. P. Halpern, and J. Alford

Columbia Astrophysics Laboratory, Columbia University, 550 West 120th Street, New York, NY 10027

###### Abstract
Using XMM–Newton and Chandra, we measure period derivatives for the second and third known pulsars in the class of Central Compact Objects (CCOs) in supernova remnants, proving that these young neutron stars have exceptionally weak dipole magnetic field components. For the 112 ms PSR J0821$-$4300 in Puppis A, $\dot{P}=(9.28\pm 0.36)\times 10^{-18}$. Its proper motion, $\mu=61\pm 9$ mas yr$^{-1}$, was also measured using Chandra. This contributes a kinematic term to the period derivative via the Shklovskii effect, which is subtracted from $\dot{P}$ to derive the dipole field $B_{s}=2.9\times 10^{10}$ G, a value similar to that of the first measured CCO, PSR J1852$+$0040 in Kes 79, which has $B_{s}=3.1\times 10^{10}$ G. Antipodal surface hot spots with different temperatures and areas are deduced from the X-ray spectrum and pulse profiles. Paradoxically, such nonuniform surface temperature appears to require strong crustal magnetic fields, probably toroidal or quadrupolar components much stronger than the external dipole. A spectral feature, consisting of either an emission line at $\approx 0.75$ keV or absorption at $\approx 0.46$ keV, is modulated in strength with the rotation. It may be due to a cyclotron process in a magnetic field on the surface that is slightly stronger than the dipole deduced from the spin-down. We also timed anew the 424 ms PSR J1210$-$5226, resolving previous ambiguities about its spin-down rate. Its $\dot{P}=(2.22\pm 0.02)\times 10^{-17}$, corresponding to $B_{s}=9.8\times 10^{10}$ G. This is compatible with a cyclotron resonance interpretation of its prominent absorption line at 0.7 keV and harmonics. These results deepen the mystery of the origin and evolution of CCOs: why are their numerous descendants not evident?

###### Subject headings:
ISM: individual (Puppis A) — pulsars: individual (PSR J0821$-$4300, PSR J1210$-$5226, PSR J1852$+$0040) — stars: neutron

## 1. Introduction

The class of faint X-ray sources in supernova remnants (SNRs) known as central compact objects (CCOs) is characterized by steady flux, predominantly surface thermal X-ray emission, lack of a surrounding pulsar wind nebula, and absence of detection at any other wavelength. Table 1 lists basic data on the well-studied CCOs, as well as proposed candidates whose qualifications are not yet well established. Of the eight most secure CCOs, three are known to be neutron stars (NSs) with spin periods of 0.105, 0.424, and 0.112 s. Spin-down has been detected for two of these, the 0.105 s pulsar PSR J1852$+$0040 in Kes 79 and the 0.424 s pulsar PSR J1210$-$5226 in the SNR PKS 1209$-$51/52. For PSR J1852$+$0040, the implied surface dipole field is only $B_{s}=3.1\times 10^{10}$ G (Halpern & Gotthelf, 2010a), smaller than that of any other known young NS. In the case of PSR J1210$-$5226, archival data allow two alternative timing solutions, with $B_{s}=9.9\times 10^{10}$ or $2.4\times 10^{11}$ G (Halpern & Gotthelf, 2011). It is natural to assume that CCOs that have not yet been seen to pulse are isolated, weakly magnetized NSs of the same class as the CCO pulsars.
Where pulsar searches have been unsuccessful, it is possible that an even weaker magnetic field, a more uniform surface temperature, or an unfavorable viewing geometry prevents detection of rotational modulation. The absence of pulsations from the youngest known NS, the $\approx 330$ year old CCO in Cassiopeia A, has been used, in combination with fitting of its X-ray spectrum, to argue that it is covered with a uniform-temperature, non-magnetized atmosphere of carbon, the product of nuclear burning of H and He (Ho & Heinke, 2009). Rapid cooling of the NS in Cas A, directly detected by Chandra (Heinke & Ho, 2010), has been interpreted as evidence for neutron superfluidity in the core (Page et al., 2011; Shternin et al., 2011).

Table 1: Central Compact Objects in Supernova Remnants

CCO | SNR | Age (kyr) | $d$ (kpc) | $P$ (s) | $f_{p}$ (%)(a) | $B_{s}$ ($10^{10}$ G) | $L_{x,\rm bol}$ (erg s$^{-1}$) | References
---|---|---|---|---|---|---|---|---
RX J0822.0$-$4300 | Puppis A | 4.5 | 2.2 | 0.112 | 11 | 2.9 | $5.6\times 10^{33}$ | 1,2,3,4,5,6
CXOU J085201.4$-$461753 | G266.1$-$1.2 | 1 | 1 | … | $<7$ | … | $2.5\times 10^{32}$ | 7,8,9,10,11
1E 1207.4$-$5209 | PKS 1209$-$51/52 | 7 | 2.2 | 0.424 | 9 | 9.8 | $2.5\times 10^{33}$ | 6,12,13,14,15,16,17
CXOU J160103.1$-$513353 | G330.2$+$1.0 | $\gtrsim 3$ | 5 | … | $<40$ | … | $1.5\times 10^{33}$ | 18,19
1WGA J1713.4$-$3949 | G347.3$-$0.5 | 1.6 | 1.3 | … | $<7$ | … | $\sim 1\times 10^{33}$ | 11,20,21
XMMU J172054.5$-$372652 | G350.1$-$0.3 | 0.9 | 4.5 | … | … | … | $3.9\times 10^{33}$ | 22,23
CXOU J185238.6$+$004020 | Kes 79 | 7 | 7 | 0.105 | 64 | 3.1 | $5.3\times 10^{33}$ | 24,25,26,27
CXOU J232327.9$+$584842 | Cas A | 0.33 | 3.4 | … | $<12$ | … | $4.7\times 10^{33}$ | 27,28,29,30,31,32,33
2XMMi J115836.1$-$623516 | G296.8$-$0.3 | 10 | 9.6 | … | … | … | $1.1\times 10^{33}$ | 34
XMMU J173203.3$-$344518 | G353.6$-$0.7 | $\sim 27$ | 3.2 | … | $<9$ | … | $1.3\times 10^{34}$ | 35,36,37,38
CXOU J181852.0$-$150213 | G15.9$+$0.2 | $1-3$ | (8.5) | … | … | … | $\sim 1\times 10^{33}$ | 39

(a) Upper limits on pulsed fraction are for a search down to $P=12$ ms or smaller.

Note. — Above the line are the eight well-established CCOs; below the line are three candidates.

References. — (1) Hui & Becker 2006a; (2) Gotthelf & Halpern 2009; (3) Gotthelf et al. 2010; (4) De Luca et al. 2012; (5) Becker et al. 2012; (6) this paper; (7) Slane et al. 2001; (8) Kargaltsev et al. 2002; (9) Bamba et al. 2005; (10) Iyudin et al. 2005; (11) De Luca 2008; (12) Zavlin et al. 2000; (13) Mereghetti et al. 2002; (14) Bignami et al. 2003; (15) De Luca et al. 2004; (16) Gotthelf & Halpern 2007; (17) Halpern & Gotthelf 2011; (18) Park et al. 2006; (19) Park et al. 2009; (20) Lazendic et al. 2003; (21) Cassam-Chenaï et al. 2004; (22) Gaensler et al. 2008; (23) Lovchinsky et al. 2011; (24) Seward et al. 2003; (25) Gotthelf et al. 2005; (26) Halpern et al. 2007; (27) Halpern & Gotthelf 2010a; (28) Pavlov et al. 2000; (29) Chakrabarty et al. 2001; (30) Mereghetti et al. 2002; (31) Pavlov & Luna 2009; (32) Ho & Heinke 2009; (33) Heinke & Ho 2010; (34) Sánchez-Ayaso et al. 2012; (35) Tian et al. 2008; (36) Abramowski et al. 2011; (37) Halpern & Gotthelf 2010b; (38) Halpern & Gotthelf 2010c; (39) Reynolds et al. 2006.

The "anti-magnetar" explanation of CCOs, which is motivated by their weak magnetic fields, absence of variability, and location on the $P-\dot{P}$ diagram, remains incomplete in detail.
Specifically, it does not yet account for the hot spots that are seen on the surfaces of CCO pulsars. Since the spin-down power of a CCO pulsar is less than its X-ray luminosity, the latter must be thermal emission from residual cooling, which can only be nonuniform if there is anisotropic heat conduction. In the absence of strong magnetic fields or magnetospheric activity, it is difficult to reproduce the light curve and pulsed fraction of 64% from PSR J1852$-$0040 in Kes 79 (Halpern & Gotthelf, 2010a; Shabaltas & Lai, 2012), or the two antipodal hot spots of different temperatures and areas on RX J0822$-$4300 in Puppis A (Gotthelf & Halpern, 2009; Gotthelf et al., 2010). The latter 0.112 s pulsar, hereafter PSR J0821$-$4300, is a subject of this paper. Its spectrum is especially puzzling in also displaying a phase-dependent emission feature at 0.7–0.8 keV (Gotthelf & Halpern, 2009), which is reported to be variable in the long term (De Luca et al., 2012).

Here we report the first spin-down measurement for PSR J0821$-$4300, based on a dedicated program of phase-coherent timing jointly scheduled between XMM–Newton and Chandra. It was also necessary to incorporate Chandra HRC observations of the position and proper motion of PSR J0821$-$4300 in order to determine its small period derivative accurately. The astrometric analysis is described in Section 2; the same measurement was also carried out, with consistent results, by Becker et al. (2012). The results of the timing are presented in Section 3. The X-ray flux and spectra are discussed in Section 4, with particular attention paid to the spectral line and the question of its possible variability. We also obtained new timing observations of PSR J1210$-$5226 that resolve the prior ambiguity about its spin-down rate in favor of the smaller value; this definitive result is presented in Section 5. The nature of CCOs as anti-magnetars, and their possible evolutionary status, are discussed in Section 6. Conclusions and proposals for future work follow in Section 7.

## 2\. X-ray Position and Proper Motion

Evidence that PSR J0821$-$4300 has a high proper motion from Chandra HRC images over a 5 year baseline was reported by Hui & Becker (2006b) and Winkler & Petre (2007), but with somewhat disparate measurements of $\mu=107\pm 34$ mas yr-1 and $\mu=165\pm 25$ mas yr-1, respectively, from the same data. Here, we are concerned with timing this high-velocity pulsar over an extended period of time with millisecond accuracy. When trying to measure a small $\dot{P}$ using X-ray data, position and proper motion can contribute significant errors via three effects. The first is an instrumental property of the Chandra CCDs when used in continuous-clocking (CC) mode (see Section 3); the position of the pulsar must be known a priori to $<0.\\!^{\prime\prime}5$ in order to determine the time of arrival of each source photon. The second consideration is the accuracy of the barycentric correction. The third effect is the magnitude of the proper motion, which contributes a purely kinematic period derivative via the “train whistle” effect (Shklovskii, 1970). The original measurements of proper motion are not accurate enough to quantify this effect, which is crucial in the case of PSR J0821$-$4300.
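The first two of these effects can be illustrated with a short sketch. The geocentric calculation below (Python with astropy) estimates the photon arrival-time error induced by a $0.\\!^{\prime\prime}5$ position offset; the real barycentering also requires the spacecraft orbit ephemeris (handled by the mission tools), so the site, date, and numbers here are indicative only, and the pulsar position is the one derived later in Table 4.

```python
# Geocentric illustration (astropy) of the sensitivity of the barycentric
# correction to the assumed source position. The real correction also uses
# the spacecraft orbit ephemeris; this sketch is indicative only.
import astropy.units as u
from astropy.coordinates import SkyCoord, EarthLocation
from astropy.time import Time

t = Time("2011-05-18T00:00:00", scale="utc",
         location=EarthLocation.of_site("greenwich"))   # arbitrary site/date

psr = SkyCoord("08h21m57.3653s", "-43d00m17.074s", frame="icrs")  # Table 4
off = psr.spherical_offsets_by(0.5 * u.arcsec, 0.0 * u.arcsec)    # 0.5" error

dt_true = t.light_travel_time(psr, kind="barycentric")
dt_off = t.light_travel_time(off, kind="barycentric")
print((dt_off - dt_true).to(u.ms))  # up to ~1 ms, comparable to the 3 ms budget
```

The scale of the result follows from geometry: a position error $\theta$ corresponds to a maximum timing error of $\sim(1\,{\rm AU}/c)\,\theta\approx 500\,{\rm s}\times\theta({\rm rad})$, or about 1 ms for $0.\\!^{\prime\prime}5$.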
Accordingly, we have reanalyzed the position and proper motion of PSR J0821$-$4300 using the Chandra HRC-I data listed in Table 2, which now include a more recent pair of observations from 2010 August that extends the baseline to 10.6 yr, enabling higher precision on both the contemporary position for timing and the proper motion. We describe here the differences between our method and previous work. For example, we did not use an HRC-S observation (ObsID 1851) because of known systematic differences between HRC-S and HRC-I. Ultimately, however, our results are consistent with the recent analysis of the same data (including ObsID 1851) by Becker et al. (2012).

Table 2. Log of Chandra HRC-I Observations of PSR J0821$-$4300

ObsID | Date (UT) | Start Epoch (MJD) | Exposure (ks) | Roll angle (∘) | Star A (Counts)a | PSR J0821$-$4300 (Counts)a
---|---|---|---|---|---|---
749 | 1999 Dec 21 | 51533.95 | 18.0 | 338.7 | 47 | 3257
4612 | 2005 Apr 25 | 53485.31 | 40.2 | 261.9 | 123 | 7260
11819 | 2010 Aug 10 | 55418.72 | 33.7 | 163.4 | 101 | 5455
12201 | 2010 Aug 11 | 55419.13 | 38.7 | 162.9 | 117 | 6296

aTotal counts collected in a $1.\\!^{\prime\prime}5$ radius aperture centered on the source.

The data from all epochs were reprocessed and analyzed using the latest calibration files and software (CIAO 4.4/CALDB 4.4.8). This processing accounts for the HRC AMP_SF electronic ringing distortions discussed by Hui & Becker (2006b). The HRC detector is well suited to astrometry, with a processed pixel size of $0.\\!^{\prime\prime}1318$ that oversamples the on-axis point spread function (PSF) by a factor of 5. For all observations, the pulsar was placed close to the optical axis where the PSF is essentially symmetric. In the following analysis we assume, as there is no evidence to the contrary, that the HRC focal plane is linear and that the aspect reconstruction introduces no errors in roll angle. The two pointings on consecutive days in 2010 August are sufficiently different in their aspect reconstruction that we analyze them individually.

The nominal uncertainty in aspect reconstruction for a typical Chandra observation is $0.\\!^{\prime\prime}6$. It is often possible to remove most of this systematic error by using nearby X-ray point sources with precisely measured optical coordinates to correct the absolute astrometry. Hui & Becker (2006b) used their X-ray detected “star A” as their sole fiducial point, and fitted a model PSF interpolated from the CIAO library appropriate for its position and estimated photon energy to determine its position. Winkler & Petre (2007) used this star and two additional stars, and followed the updated method outlined in the CIAO thread for generating a Monte Carlo PSF using the CHaRT/MARX software for input into the CIAO/Sherpa spectral/image fitting software package.

Figure 1.— Chandra HRC-I images around reference star A (3UC094-058669) used to define the coordinate system for position and proper motion of PSR J0821$-$4300. Each panel shows the observed counts from star A (left) and the simulated PSF (right) for its location on the focal plane in native HRC-I pixels. The plots cover $12^{\prime\prime}\times 12^{\prime\prime}$ in celestial coordinates. The total counts in a $1.\\!^{\prime\prime}5$ radius aperture are given in Table 2.
In our analysis, we also follow the CIAO thread to characterize the HRC-I PSF, but we adopt a simpler approach to measuring source locations, one that is not dependent on model fitting in the image domain. Our method is guided by the following observations. First, the on-axis, symmetric image of the pulsar contains enough counts that a simple centroid calculation is a sufficiently accurate measurement of its position on the detector. Second, star A of Hui & Becker (2006b), which lies $2\farcm 7$ from PSR J0821$-$4300, is the only useful fiducial source for registering the X-ray image. The position and proper motion of star A are taken from the UCAC3 (Zacharias et al., 2010), where it is listed as 3UC094-058669 with coordinates (J2000.0) R.A.=$08^{\rm h}21^{\rm m}46.\\!^{\rm s}2924(16)$, decl.=$-43^{\circ}02^{\prime}03.\\!^{\prime\prime}640(49)$, and proper motion $\mu_{\alpha}\,{\rm cos}\,\delta=-14.3(2.0)$, $\mu_{\delta}=-3.6(5.5)$ mas yr-1. X-ray position measurements of the two weaker, off-axis stars used by Winkler & Petre (2007) only add to the uncertainty (as quantified below) in the absolute astrometry. Third, the few X-ray photons from star A, and its broad off-axis PSF, do not warrant a sophisticated image fitting technique. Instead, we use a “corrected centroid” method, as described below.

To determine the location of star A in the X-ray images we start with the CHaRT/MARX simulation of the PSF, as described in the CIAO user webpages (http://cxc.harvard.edu/chart), for its respective locations on the focal plane. Figure 1 shows the distribution of the counts from each HRC image and the corresponding Monte Carlo PSF. It is apparent that star A is poorly sampled in the data, with total source counts in the range $47\leq N\leq 123$ (Table 2). The maximum number of counts per pixel is typically only $2-4$, making forward fitting poorly constrained statistically, while sharp features in the model can cause systematic offsets. Furthermore, the source is immersed in a substantial diffuse background from the Puppis A SNR which, although it contributes only a few photons over the source region, adds uncertainty to the position measurement because of the small count statistics, especially when fitting over a larger area.

Figure 2.— Four position measurements of PSR J0821$-$4300 spanning 10.6 yr, after correction using the optical/X-ray star A (3UC094-058669) as a reference. The two observations in 2010 nearly coincide in time. The positions (diamonds) and their errors are fitted with a linear model (dashed lines). The fitted parameters are listed in Table 4.

To sidestep these effects, the coordinates of star A are first found from its centroid. Photons were extracted from a circular aperture of radius $1.\\!^{\prime\prime}5$, chosen to minimize the background counts. This aperture encloses essentially all of the PSF signal that has a finite probability of producing even a single count during the observation. This measurement was made using the CIAO tool dmstat and was iterated to produce the final coordinates. However, while this results in a well-defined and statistically meaningful measurement, it does not account for the shape of the complex off-axis PSF, whose orientation depends on the spacecraft roll angle (which differs for each observation; see Figure 1), or for the monotonic off-axis deviations introduced by the flat focal plane.
To quantify these systematic effects we also measure the centroid of the simulated PSF of star A in sky coordinates in each image and compare it to the coordinates that were input to CHaRT/MARX for that PSF. The difference constitutes the small but critical correction to the centroid of star A.

We estimated the uncertainty in the derived coordinates of star A using a Monte Carlo method. We generated 500 realizations of star A by sampling the PSF with a random number generator to match the observed counts, and accumulated the centroid measurements to build up a distribution in right ascension and declination. To account for the observed background, we included a random distribution of photons within the source aperture. The resulting (Gaussian) width of the distribution of centroids is typically $\sigma\approx 0.\\!^{\prime\prime}06$ in each of the two coordinates (see Table 3). These widths reproduce the expected “standard error” of a centroid, $\approx{\sigma}/\sqrt{N}$. We also simulated the uncertainties for the two other fiducial sources used in Winkler & Petre (2007) and find that their inclusion in the position determination would only increase the uncertainty on the final coordinates.

The pulsar itself is a strong source (see Table 2) whose coordinates are precisely measured using a centroid calculation, with uncertainty an order of magnitude smaller than that measured for star A. Because of the symmetry of its on-axis PSF, no systematic correction is required for the pulsar. Its coordinates are then adjusted by the difference between the optical and X-ray coordinates of star A at each epoch to produce its final astrometric position in Table 3. Fitting the position of the pulsar as a function of time then yields the proper motion. Figure 2 shows the $\chi^{2}$ fit with constant velocity in each coordinate, and Table 4 lists the formal solution and quantities derived from it. The derived total proper motion of $61\pm 9$ mas yr-1 is in agreement with the value determined by Becker et al. (2012), $71\pm 12$ mas yr-1, and is smaller than previously published values for reasons discussed in that paper.

Table 3. Position Measurements for PSR J0821$-$4300

Epoch (year) | Star A Optical R.A. (h m s) | Star A Optical Decl. (${}^{\circ}\ {}^{\prime}\ {}^{\prime\prime}$) | Star A X-ray R.A. (h m s) | Star A X-ray Decl. (${}^{\circ}\ {}^{\prime}\ {}^{\prime\prime}$) | PSR (corrected) R.A. (h m s) | PSR (corrected) Decl. (${}^{\circ}\ {}^{\prime}\ {}^{\prime\prime}$)
---|---|---|---|---|---|---
1999.975 | 08 21 46.2928(16) | $-$43 02 03.637(49) | 08 21 46.2882(65) | $-$43 02 03.397(83) | 08 21 57.4024(67) | $-$43 00 16.894(96)
2005.316 | 08 21 46.2858(21) | $-$43 02 03.657(57) | 08 21 46.3011(47) | $-$43 02 03.756(41) | 08 21 57.3685(52) | $-$43 00 17.023(71)
2010.610 | 08 21 46.2789(31) | $-$43 02 03.676(76) | 08 21 46.2621(62) | $-$43 02 04.113(47) | 08 21 57.3488(69) | $-$43 00 17.185(89)
2010.611 | 08 21 46.2789(31) | $-$43 02 03.676(76) | 08 21 46.2575(57) | $-$43 02 04.068(51) | 08 21 57.3476(65) | $-$43 00 17.190(92)

Note. — All coordinates are equinox J2000. Optical coordinates for star A (3UC094-058669) are propagated to the epoch of each observation using its proper motion. The X-ray position of star A is determined using the method described in the text. The pulsar coordinates are corrected by the difference between the optical and X-ray coordinates of star A. Uncertainties on the last digits are in parentheses.
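The corrected-centroid uncertainty estimate lends itself to a compact numerical illustration. The sketch below (Python/numpy) mimics the Monte Carlo described above, with a symmetric Gaussian as a stand-in for the CHaRT/MARX PSF; the PSF width, count levels, and aperture radius are illustrative placeholders, not the real HRC-I responses.

```python
# Schematic version of the Monte Carlo centroid-uncertainty estimate:
# draw photons from a toy PSF, add uniform background in the aperture,
# iterate the centroid, and accumulate the scatter over realizations.
import numpy as np

rng = np.random.default_rng(42)
n_src, n_bkg = 100, 10    # source/background counts in the aperture
psf_sigma = 0.4           # toy PSF width (arcsec) for an off-axis source
r_ap = 1.5                # extraction radius (arcsec)

cen = []
for _ in range(500):
    src = rng.normal(0.0, psf_sigma, size=(n_src, 2))
    # uniform background photons within the circular aperture
    r = r_ap * np.sqrt(rng.random(n_bkg))
    phi = 2 * np.pi * rng.random(n_bkg)
    bkg = np.column_stack([r * np.cos(phi), r * np.sin(phi)])
    xy = np.vstack([src, bkg])
    c = np.zeros(2)
    for _ in range(5):    # iterate the centroid within the aperture
        keep = np.hypot(xy[:, 0] - c[0], xy[:, 1] - c[1]) < r_ap
        c = xy[keep].mean(axis=0)
    cen.append(c)

scatter = np.std(cen, axis=0)
print(f"centroid scatter: {scatter[0]:.3f}, {scatter[1]:.3f} arcsec")
# expected scale: psf_sigma/sqrt(n_src) ~ 0.04 arcsec, inflated somewhat
# by the background, cf. ~0.06 arcsec quoted in the text
```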
The tangential velocity of PSR J0821$-$4300 then depends on a distance determination for Puppis A, which has ranged from 1 to 2.5 kpc according to various methods, and which remains a matter of unresolved debate in the most recent studies quoted here. One method (Reynoso et al., 1995, 2003) is based on 21 cm H I velocities, and a perceived morphological association of features in H I surrounding the pulsar and the SNR at $v_{\rm lsr}=+16$ km s-1, the latter corresponding to $d=2.2\pm 0.3$ kpc. The second method uses spectra of ground-state hydroxyl lines (Woermann et al., 2000), which show absorption at $v_{\rm lsr}<+7.6$ km s-1, and emission above this velocity, from which $d=1.3^{+0.6}_{-0.8}$ kpc is derived. Both sets of authors employ assumptions that are not mutually accepted, and which are beyond the scope of this paper to evaluate. We will adopt a fiducial distance of $2.2\pm 0.3$ kpc for purposes of further calculations, while noting the implications of a possible smaller distance where relevant.

Table 4. Ephemeris of PSR J0821$-$4300

Parameter | Value
---|---
Position and Proper Motion |
Epoch of position and $\mu$ (MJD) | 53964.0
R.A. (J2000) | $08^{\rm h}21^{\rm m}57.\\!^{\rm s}3653(31)$
Decl. (J2000) | $-43^{\circ}00^{\prime}17.\\!^{\prime\prime}074(43)$
R.A. proper motion, $\mu_{\alpha}\,{\rm cos}\,\delta$ | $-54.1\pm 8.3$ mas yr-1
Decl. proper motion, $\mu_{\delta}$ | $-28.1\pm 10.5$ mas yr-1
Total proper motion, $\mu$ | $61.0\pm 8.8$ mas yr-1
Position angle of proper motion | $242.\\!^{\circ}5\pm 9.\\!^{\circ}5$
Tangential velocitya, $v_{\perp,c}$ | $629\pm 126$ km s-1
Timing Solution |
Epoch of ephemeris (MJD TDB)b | 55580.0000006
Span of ephemeris (MJD) | 55,182–56,027
Frequency, $f$ | 8.86529105448(32) Hz
Frequency derivative, $\dot{f}$ | $(-7.29\pm 0.28)\times 10^{-16}$ Hz s-1
Period, $P$ | 0.1127994550720(41) s
Period derivative, $\dot{P}$ | $(9.28\pm 0.36)\times 10^{-18}$
Kinematic period derivativea, $\dot{P}_{k}$ | $(2.24\pm 0.72)\times 10^{-18}$
Intrinsic period derivativea, $\dot{P}_{\rm int}$ | $(7.04\pm 0.80)\times 10^{-18}$
Surface dipole magnetic field, $B_{s}$ | $2.9\times 10^{10}$ G
Spin-down luminosity, $\dot{E}$ | $1.9\times 10^{32}$ erg s-1
Characteristic age, $\tau_{c}$ | 254 Myr

aAssuming $d=2.2\pm 0.3$ kpc (Reynoso et al., 1995).
bEpoch of fitted minima of the $1.5-4.5$ keV pulse profile; phase zero in Figure 5.

The tangential velocity of PSR J0821$-$4300 at $d=2.2\pm 0.3$ kpc is $v_{\perp}=636\pm 126$ km s-1. This velocity is at the high end of the distribution of two-dimensional velocities of pulsars measured by Hobbs et al. (2005), who find mean values of $\bar{v}_{\perp}=246\pm 22$ km s-1 for 121 ordinary (non-recycled) pulsars, and $\bar{v}_{\perp}=307\pm 47$ km s-1 for 46 pulsars whose characteristic ages are $<3$ Myr. The individual pulsar velocities are corrected for the solar motion with respect to the local standard of rest and a flat Galactic rotation curve in order to express them in the frame of the rotating Galactic disk. In the case of PSR J0821$-$4300 this correction is dominated by the solar motion for the range of plausible distances, and amounts to only $-7$ km s-1, resulting in a corrected tangential velocity of $v_{\perp,c}=629$ km s-1. If the distance is as small as 1 kpc, this reduces to an unexceptional 290 km s-1. The Shklovskii (1970) effect, a purely kinematic contribution to the observed period derivative, will therefore be significant.
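A quick numerical check of the adopted velocity, and of the kinematic term defined in equation (1) below, can be made directly from the Table 4 values (a back-of-the-envelope sketch with rounded cgs constants):

```python
# Check of the tangential velocity and the Shklovskii term of equation (1)
# below, using the Table 4 values; constants are rounded cgs.
mu = 61.0e-3 / 206265.0 / 3.156e7    # 61 mas/yr -> rad/s
d  = 2.2 * 3.086e21                  # 2.2 kpc -> cm
P  = 0.112799455                     # s
c  = 2.998e10                        # cm/s

v_perp = mu * d                      # transverse velocity, cm/s
Pdot_k = mu**2 * P * d / c           # kinematic period derivative
print(f"v_perp = {v_perp/1e5:.0f} km/s")   # ~636 km/s, as quoted above
print(f"Pdot_k = {Pdot_k:.2e}")            # ~2.2e-18, cf. Table 4
```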
For a source moving at constant velocity the kinematic contribution is

$\dot{P}_{k}={\mu^{2}\,P\,d\over c}={v^{2}_{\perp}\,P\over d\,c}.$ (1)

From the proper motion measurement of PSR J0821$-$4300 we calculate $\dot{P}_{k}=(2.24\pm 0.72)\times 10^{-18}$, propagating the uncertainties on both $\mu$ and $d$.

## 3\. X-ray Timing

Previous observations of PSR J0821$-$4300 were only able to set upper limits on its period derivative (Gotthelf & Halpern, 2009; Gotthelf et al., 2010; De Luca et al., 2012). Evidently $\dot{P}$ is so small that it can only be measured by phase-coherent timing. Accordingly, we designed a sequence of observations coordinated between XMM–Newton and Chandra that would start and maintain phase connection over a 2 year span, 2010 May – 2012 April. The scheduling strategy is the same as was used for PSR J1852$+$0040 in Kes 79 (Halpern & Gotthelf, 2010a). By design, the resulting ephemeris was also connected backward to archival observations that were obtained in 2009 December and 2010 April (De Luca et al., 2012), which extended the time span to 2.3 years. All of the timing observations used in this analysis are listed in Table 5. The discovery observations from 2001 (Gotthelf & Halpern, 2009) are not included, as they are too far removed in time to be reliably connected.

Table 5. Log of X-ray Timing Observations of PSR J0821$-$4300

Mission | Instr/Mode | ObsID | Date (UT) | Elapsed time/Livetime (ks) | Start Epoch (MJD) | Perioda (s) | $Z^{2}_{1}$
---|---|---|---|---|---|---|---
XMM | EPIC-pn/SW | 0606280101 | 2009 Dec 17,18 | 85.1/54.9 | 55182.820 | 0.112799488(12) | 173.0
XMM | EPIC-pn/SW | 0606280201 | 2010 Apr 05 | 42.2/29.4 | 55291.377 | 0.112799451(20) | 199.1
XMM | EPIC-pn/SW | 0650220201 | 2010 May 02 | 28.0/19.6 | 55318.782 | 0.112799390(41) | 135.5
Chandra | ACIS-S3/CC | 12108 | 2010 Aug 16 | 34.0/34.0 | 55424.625 | 0.112799470(21) | 192.3
XMM | EPIC-pn/SW | 0650220901 | 2010 Oct 15 | 23.5/16.4 | 55484.109 | 0.112799519(44) | 147.0
XMM | EPIC-pn/SW | 0650221001 | 2010 Oct 15 | 23.5/16.4 | 55484.987 | 0.112799462(39) | 156.7
XMM | EPIC-pn/SW | 0650221101 | 2010 Oct 19 | 26.5/18.6 | 55488.332 | 0.112799518(40) | 150.5
XMM | EPIC-pn/SW | 0650221201 | 2010 Oct 25 | 24.5/17.2 | 55494.228 | 0.112799486(35) | 162.1
XMM | EPIC-pn/SW | 0650221301 | 2010 Nov 12 | 23.5/16.5 | 55512.524 | 0.112799391(52) | 144.7
XMM | EPIC-pn/SW | 0650221401 | 2010 Dec 20 | 27.2/19.0 | 55550.159 | 0.112799450(35) | 163.3
Chandra | ACIS-S3/CC | 12109 | 2011 Feb 04 | 33.0/33.0 | 55596.837 | 0.112799445(27) | 173.4
XMM | EPIC-pn/SW | 0650221501 | 2011 Apr 12 | 30.0/21.0 | 55663.857 | 0.112799449(30) | 156.6
XMM | EPIC-pn/SW | 0657600101 | 2011 May 18 | 36.5/25.6 | 55699.925 | 0.112799480(17) | 195.6
Chandra | ACIS-S3/CC | 12541 | 2011 Aug 11 | 33.0/33.0 | 55784.655 | 0.112799412(22) | 180.4
XMM | EPIC-pn/SW | 0657600201 | 2011 Nov 08 | 37.2/26.1 | 55873.289 | 0.112799459(28) | 142.2
Chandra | ACIS-S3/CC | 12542,14395 | 2012 Feb 18,19 | 33.1/33.1 | 55975.446 | 0.112799481(35) | 165.8
XMM | EPIC-pn/SW | 0657600301 | 2012 Apr 10 | 35.3/24.7 | 56027.022 | 0.112799445(17) | 196.5

aBarycentric period derived from a $Z^{2}_{1}$ test. The Leahy et al. (1983) uncertainty on the last digits is in parentheses.

The Chandra observations used the Advanced Camera for Imaging and Spectroscopy (ACIS-S3) in continuous-clocking (CC) mode to provide time resolution of 2.85 ms.
All data were reprocessed from the level 1 event files with the coordinates corrected for the proper motion of PSR J0821$-$4300 given in Table 4, and analyzed using the latest calibration files and software (CIAO 4.4/CALDB 4.4.8). Reprocessing with a source position that is accurate to $<1$ pixel ($<0.\\!^{\prime\prime}5$) ensures that the time assignment is precise to $\lesssim 3$ ms. All of the XMM–Newton observations used the pn detector of the European Photon Imaging Camera (EPIC-pn) in “small window” (SW) mode to achieve 5.7 ms time resolution and an absolute uncertainty of $\approx 3$ ms on the arrival time of any photon. Each data set was examined and cleaned of intervals of high particle background due to solar activity, as necessary. Two pairs of data sets that were acquired on consecutive days were merged to improve their statistics. The photon arrival times from all data were transformed to Barycentric Dynamical Time (TDB) using the coordinates of the pulsar corrected for proper motion.

For PSR J0821$-$4300, the pulsed signal strength is a strong function of energy, not only because of its spectrum relative to the background but also because of the cancellation by emission from the opposite pole, as described in Gotthelf et al. (2010). For each observation listed in Table 5 we extracted source photons using an aperture centered on the source and optimized for the signal strength in the hard $1.5-4.5$ keV energy band. For the XMM–Newton observations we used an aperture of radius $30^{\prime\prime}$. For the Chandra CC-mode observations we selected five columns ($2.\\!^{\prime\prime}4$). We also examined the soft, phase-shifted $0.5-1.0$ keV band. However, these data are noisier and their use did not in the end improve the timing results significantly. The phase cancellation effect also prevents pulsations from being detected by the Chandra HRC, which has insufficient energy resolution.

As in our previous timing studies of CCOs (Halpern & Gotthelf, 2010a, 2011), we employed two complementary approaches to fitting an ephemeris. First, we used the $Z^{2}_{1}$ (Rayleigh) test (Strutt, 1880; Buccheri et al., 1983) in a coherent analysis of the entire set of 17 observations. Beginning with the closely spaced set spanning 2010 October 15–19, the $Z^{2}_{1}$ test determined the pulse frequency with sufficient accuracy to connect in phase uniquely to the next observation. This procedure was iterated by adding each subsequent observation, and including a frequency derivative when it became evident. We also worked backward in time, incorporating all 17 observations in the resulting unique ephemeris. The fitted frequency derivative is $\dot{f}=(-6.94\pm 0.28)\times 10^{-16}$ Hz s-1, where the $1\sigma$ uncertainty comes from the $\Delta Z^{2}_{1}=-2.3$ contour around the peak power in ($f,\dot{f}$) space.

The second method also started with the $Z^{2}_{1}$ test statistic, this time to find the period and pulse profile separately at each epoch. The 17 profiles were cross-correlated, shifted, and summed to create a master pulse profile template. The process was iterated to generate a more accurate template and a set of time-of-arrival (TOA) measurements and their uncertainties for each epoch. These TOAs were fitted with a quadratic model in frequency and frequency derivative using a $\chi^{2}$ fitting routine to minimize their phase residuals.
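For concreteness, a minimal implementation of the $Z^{2}_{1}$ statistic and of a coarse ($f,\dot{f}$) grid search is sketched below (Python/numpy); the event file name and grid spacings are placeholders, not the actual data products, and a real analysis iterates and oversamples the grid as described in the next paragraph.

```python
# Minimal sketch of the Z^2_1 (Rayleigh) test: fold barycentered event
# times on trial (f, fdot) and sum the power in the first harmonic.
import numpy as np

def z2_1(t, f, fdot=0.0, t0=0.0):
    """Z^2_1 power of event times t (s, TDB) at trial frequency f."""
    dt = t - t0
    phi = 2.0 * np.pi * (f * dt + 0.5 * fdot * dt**2)
    return (2.0 / len(t)) * (np.cos(phi).sum()**2 + np.sin(phi).sum()**2)

t = np.sort(np.loadtxt("events_tdb.txt"))        # hypothetical event list (s)
fs = 8.86529105 + np.linspace(-2e-6, 2e-6, 201)  # Hz, around the known period
fdots = np.linspace(-3.1e-14, 1.9e-14, 101)      # Hz/s, search range of the text
power = np.array([[z2_1(t, f, fd) for f in fs] for fd in fdots])
i, j = np.unravel_index(power.argmax(), power.shape)
print(f"f = {fs[j]:.9f} Hz, fdot = {fdots[i]:.2e} Hz/s, Z^2_1 = {power[i, j]:.1f}")
# The 1-sigma region follows from the Delta Z^2_1 = -2.3 contour about the peak.
```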
We searched for a coherent phase-connected solution over a grid of $f$ and $\dot{f}$ covering the range $f=8.8652906\pm 0.0000016$ Hz (at epoch MJD 55,580) and $-3.1\times 10^{-14}<\dot{f}<1.9\times 10^{-14}$ Hz s-1, with an oversampling factor of 10 for accuracy. This range corresponds to the $3\sigma$ limits of an incoherent fit to all of the measured frequencies, including the 2001 discovery observations. The resulting frequency derivative from TOA fitting, $\dot{f}=(-7.29\pm 0.28)\times 10^{-16}$ Hz s-1, is consistent with the value found above from the coherent $Z^{2}_{1}$ search. We adopt the TOA result for the final timing solution listed in Table 4.

Figure 3.— $P-\dot{P}$ diagram of isolated pulsars (dots), binary radio pulsars (circled dots), and other types of isolated X-ray pulsars (colored symbols). The CCO pulsars (red stars) in Kes 79 and Puppis A have virtually the same spin parameters. The upper limit on Calvera’s $\dot{P}$ is from Halpern (2011). The radio pulsar death line $B/P^{2}=1.7\times 10^{11}$ G s-2 of Bhattacharya et al. (1992) is indicated. The van den Heuvel (1987) spin-up limit for recycled pulsars corresponds to $P({\rm ms})=1.9\,(B/10^{9}\,{\rm G})^{6/7}$. The exponent in this equation corrects a typographical error in the caption to Figure 7 of Halpern & Gotthelf (2010a), although the corresponding line in that figure was correct.

The observed $\dot{P}=(9.28\pm 0.36)\times 10^{-18}$ can now be split into the sum of its intrinsic and kinematic contributions, $\dot{P}=\dot{P}_{\rm int}+\dot{P}_{k}$. Since we determined in Section 2 that $\dot{P}_{k}=(2.24\pm 0.72)\times 10^{-18}$, the intrinsic period derivative is $\dot{P}_{\rm int}=(7.04\pm 0.80)\times 10^{-18}$. Parenthetically, we note that the small observed period derivative is independent evidence that the proper motion of the pulsar is not as large as the value originally quoted by Winkler & Petre (2007), $\mu=165\pm 25$ mas yr-1. If it were, and if $d=2.2$ kpc, $\dot{P}_{k}$ would be $(1.64\pm 0.55)\times 10^{-17}$, requiring $\dot{P}_{\rm int}$ to be negative, i.e., the pulsar would be spinning up.

In the vacuum dipole spin-down formalism, the values of $P$ and $\dot{P}_{\rm int}$ imply a surface magnetic field strength $B_{s}=3.2\times 10^{19}(P\dot{P})^{1/2}\,{\rm G}=2.9\times 10^{10}$ G, a spin-down luminosity $\dot{E}=-I\Omega\dot{\Omega}=4\pi^{2}I\dot{P}/P^{3}=1.9\times 10^{32}$ erg s-1, and a characteristic age $\tau_{c}\equiv P/2\dot{P}=254$ Myr. PSR J0821$-$4300 is nearly identical in its spin properties to PSR J1852$+$0040 in Kes 79, as shown in Figure 3. The uncertainties in distance and proper motion have only a small effect on the derived magnetic field. For a smaller distance of 1 kpc, $\dot{P}_{k}$ is reduced to $1.02\times 10^{-18}$, and $B_{s}=3.1\times 10^{10}$ G. An absolute upper limit regardless of distance and proper motion is $B_{s}<3.3\times 10^{10}$ G.

The phase residuals from the best-fit solution are shown in Figure 4. The weighted rms of the phase residuals is 5.1 ms, or 0.045 pulse cycles, which is comparable to the individual measurement errors (average $\sigma=3.6$ ms). It is not clear whether these residuals reflect real timing noise or systematic errors in the TOAs.

The light curves of PSR J0821$-$4300 in the soft and hard bands are shown in Figure 5. These were derived by folding all the timing data on the best-fitting ephemeris given in Table 4. As revealed by a cross-correlation, the soft and hard pulses are out of phase by $0.45\pm 0.02$ cycles, consistent with that found by De Luca et al.
(2012) using the 2009 December and 2010 April XMM–Newton data.

The energy dependence of the pulsar modulation provided an important diagnostic for modeling the viewing geometry and surface emission of PSR J0821$-$4300 (Gotthelf et al., 2010). By analyzing the light curve in narrower energy bands than in Figure 5, we can resolve the signal modulation and phase for PSR J0821$-$4300 over the $0.3-5$ keV range. The data were grouped into 23 energy bands that are at least $100$ eV in width and have a signal-to-noise ratio $N_{s}/\sqrt{N_{s}+N_{b}}>100$, where $N_{s},N_{b}$ are the source and background counts, respectively. We used the $Z_{1}^{2}$ statistic to model the unbinned light curve with its first Fourier component, a reasonable representation since the light curve is sinusoidal in each energy band to within counting statistics. The error bar for the phase is calculated by cross-correlation, with the profile of Figure 5 serving as a template. The result is presented in Figure 6. Where the energy-dependent modulation drops to near zero the phase becomes undefined; this occurs in two energy bands, whose phase points are not plotted.

The modulation is qualitatively similar to that predicted by the antipodal model of Gotthelf et al. (2010) (cf. their Figure 6), providing confirmation of the basic model. The prediction was based on fitting the modulation in only three energy bands, using much less data, and differs somewhat from the new, resolved measurements, which have an order of magnitude more counts. In particular, the energy of the minimum modulation is lower (1.12 vs. 1.28 keV), and the form of the modulation is more complex than predicted. Furthermore, the phase is seen to drift at lower energies, and the transition is not as sharp as in the strictly antipodal case, in which the phase was statistically either 0.0 or 0.5. The observed characteristics likely imply that the hot spots are offset from a strictly antipodal geometry. The high-quality data presented herein should allow a far more detailed modeling of the surface emission of PSR J0821$-$4300.

Figure 4.— For the timing observations of PSR J0821$-$4300 listed in Table 5, pulse-phase residuals from the linear term (dash-dot line) of the phase ephemeris presented in Table 4. The quadratic term (solid line) contributes $\approx\pm 0.5$ cycles to the ephemeris over the 2.3 years of timing.

Figure 5.— Summed pulse profiles of PSR J0821$-$4300 in the $1.5-4.5$ keV band (top) and the $0.5-1.0$ keV band (bottom) using all of the observations listed in Table 5, folded according to the ephemeris of Table 4. These hard and soft pulse profiles are out of phase by $\phi=0.45\pm 0.02$ cycles. The intervals between the vertical lines ($\Delta\phi=0.3$ cycles) correspond to the two phase regions used in the phase-resolved analysis of Section 4.3 and Table 8.

Figure 6.— Modulation and pulse phase as a function of energy for PSR J0821$-$4300 using all of the observations listed in Table 5, folded according to the ephemeris of Table 4. The modulation (top) and pulse phase (bottom) reproduce the form predicted for the antipodal model of Gotthelf et al. (2010). Two undefined phase points are omitted.

## 4\. Spectral Analysis

Previously, we modeled the original (year 2001) XMM–Newton spectra of PSR J0821$-$4300 as surface blackbody emission from two antipodal spots of different temperatures and areas. Crucially, this model is also able to account for the observed energy-dependent pulse modulation and phase shift (Gotthelf & Halpern, 2009; Gotthelf et al., 2010).
Additionally, we found evidence of a spectral line feature around 0.8 keV, which more recent data obtained by De Luca et al. (2012) suggest has a time-variable centroid energy. With the increased quantity of data now in hand on PSR J0821$-$4300, we can re-examine this spectral feature, first by combining all 13 XMM–Newton observations presented in Table 5 plus the two observations obtained in 2001, and then by testing for any variability among the observations.

### 4.1. Summed XMM–Newton Spectrum

The spectral analysis presented here uses exclusively the data collected with the EPIC pn. Although the background is much reduced in the EPIC MOS, this instrument is less sensitive at soft energies, collecting 4.7 times fewer photons than the EPIC pn in the $0.3-1$ keV band. Furthermore, much of the EPIC MOS data were lost because the high surface brightness of the Puppis A SNR frequently triggered an automatic shutoff of that detector. We also neglect the Chandra ACIS spectra here because of their even poorer low-energy sensitivity and the increased background and other uncertainties involved in analyzing the Chandra CC-mode data. Lastly, one ACIS image taken in timed exposure mode (ObsID 750) suffers from pileup, and was not used.

For the EPIC pn data, the main technical issue is that the SNR background exceeds the point source counts below 1 keV. Surprisingly, the background intensity and spectral shape are both strong functions of spacecraft roll angle. This effect is made evident because the XMM–Newton data sets were acquired in two narrow ranges of roll angle roughly 180∘ apart, associated with their respective visibility windows, with the time divided nearly equally between the two. We checked all of our spectral results carefully for features that might be dependent on roll angle due to systematic errors in background subtraction. No such systematic effects were found, which indicates that the background subtraction is reliable in general.

We extracted spectra for each observation from the EPIC pn detector using an aperture of radius $0\farcm 3$ and a concentric background annulus of $0\farcm 5<r<0\farcm 6$, selecting only events with ${\tt PATTERN}\leq 4$ and ${\tt FLAG}=0$. The data were filtered to exclude time intervals of high background identified by count rate $>0.1$ s-1 in the $10-12$ keV energy band. An inspection of the pattern distribution of single and double events shows no evidence of pile-up and suggests that a lower energy bound in the range $0.3-0.4$ keV is acceptable. Response matrices and effective area files were generated for each observation using the SAS software suite. We combined data from all observations using the FTOOL addascaspec to produce a single source spectrum and associated files. The combined spectrum was grouped to include at least 1000 counts per channel and was fitted using XSPEC v12.21 to a two-blackbody model with interstellar absorption over the energy range $0.3-5$ keV (see Figure 7 and Table 6).
Table 6. Models for the Summed XMM–Newton Spectrum of PSR J0821$-$4300

Model | Two Blackbody | $+$ Emis. line | $+$ Abs. line | $+$ Two Abs. lines ($E_{o}$, $2E_{o}$) | $+$ Cyclabs ($E_{o}$, $2E_{o}$)
---|---|---|---|---|---
$N_{\rm H}$ ($10^{21}$ cm-2) | $3.8\pm 0.1$ | $4.3\pm 0.3$ | $3.2\pm 0.2$ | $2.9\pm 0.4$ | $2.8\pm 1.01$
$kT_{w}$ (keV) | $0.26\pm 0.01$ | $0.25\pm 0.01$ | $0.29\pm 0.01$ | $0.29\pm 0.01$ | $0.28\pm 0.03$
$kT_{h}$ (keV) | $0.46\pm 0.01$ | $0.45\pm 0.01$ | $0.49\pm 0.02$ | $0.49\pm 0.03$ | $0.47\pm 0.02$
$L_{w}({\rm bol})$ ($10^{33}$ erg s-1)a | $3.3\pm 0.1$ | $3.6\pm 0.2$ | $3.1\pm 0.2$ | $3.0\pm 0.2$ | $3.2\pm 0.8$
$L_{h}({\rm bol})$ ($10^{33}$ erg s-1)a | $2.0\pm 0.2$ | $2.0\pm 0.2$ | $1.4\pm 0.3$ | $1.4\pm 0.3$ | $1.7\pm 0.4$
$A_{w}$ (km2) | $72\pm 11$ | $89\pm 18$ | $44\pm 6$ | $40\pm 8$ | $53\pm 14$
$A_{h}$ (km2) | $4.4\pm 0.9$ | $4.5\pm 0.9$ | $2.5\pm 0.8$ | $2.4\pm 1.0$ | $3.4\pm 0.8$
$E_{0}$ (keV) | … | $0.75\pm 0.01$ | $0.46\pm 0.05$ | $0.46\pm 0.01$, $2E_{o}$ | $0.46\pm 0.01$, $2E_{o}$
Widthb (eV) | … | $75\pm 20$ | $85\pm 50$ | $106\pm 20$, $34-62$ | $53-97$, $35-290$
EW (eV) | … | $53\pm 10$ | … | … | …
$\tau_{o}$c | … | … | $0.1-0.9$ | $0.6-1.3$, $<0.035$ | $0.9-1.7$, $<0.14$
$\chi^{2}_{\nu}({\rm DoF})$ | $1.50(359)$ | $1.08(356)$ | $1.16(356)$ | $1.08(354)$ | $1.12(354)$

aBlackbody bolometric luminosity for a distance of 2.2 kpc.
bGaussian $\sigma$ for the emission or absorption lines, natural width $W$ for the cyclotron absorption model (Makishima et al., 1990; Mihara et al., 1990).
cOptical depth at line center.

Note. — For the two-line models, paired entries give the parameters of the fundamental and the harmonic, whose centroid energy ratio is fixed at 2. The $1\sigma$ uncertainties for three interesting parameters ($\Delta\chi^{2}=3.53$) are given.

The resulting high signal-to-noise ratio of the fitted spectrum reveals significant features that are unaccounted for by the two-blackbody model. The best-fit model, with $N_{\rm H}=(3.8\pm 0.1)\times 10^{21}$ cm-2, $kT_{w}=0.26\pm 0.01$ keV, and $kT_{h}=0.46\pm 0.01$ keV, has reduced $\chi^{2}_{\nu}=1.50$ for 359 degrees of freedom (DoF), which is formally unacceptable. The deviations are evident as structure in the residuals in Figure 7a. Adding a Gaussian emission line to the model, as suggested by our previous work, improves the fit to $\chi^{2}_{\nu}=1.08$ for 356 DoF (Figure 7b). The centroid energy of the line is $0.75\pm 0.01$ keV, and its equivalent width is $53\pm 10$ eV.

Figure 7.— (a) EPIC pn spectrum of the 16 summed XMM–Newton observations of PSR J0821$-$4300 fitted to a double blackbody model. The residuals from the fit are shown in the bottom panel. (b) The same spectrum fitted with a double blackbody model plus Gaussian emission line. The parameters of these fits are given in Table 6.

Considering the shape of the residuals from the two-blackbody fit in Figure 7a, an alternative hypothesis is that an absorption feature, or features, is responsible. Accordingly, we applied a Gaussian absorption line, two Gaussian absorption lines, and finally the cyclotron absorption model of Makishima et al. (1990) and Mihara et al. (1990), which is available in XSPEC as cyclabs. As shown in Table 6 and Figure 8, all of these models gave acceptable fits. Two absorption lines, when fitted independently, are separated by nearly a factor of 2 in energy, which is suggestive of the fundamental and first harmonic in a cyclotron model. Therefore we fixed the ratio of their centroid energies at 2, with the result that their centroids are at 0.46 keV and 0.92 keV, bracketing the energy previously ascribed to an emission line.
The cyclabs model, which also fixes this ratio, yields the same centroid energies. We conclude that, from the phase-averaged spectra alone, it is not possible to distinguish an emission feature from one or two absorption lines, which leaves the physical interpretation uncertain.

The fundamental energy of the electron cyclotron resonance falls at

$E_{0}=1.16\,(B/10^{11}\,{\rm G})/(1+z)\ {\rm keV},$ (2)

where $z$ is the gravitational redshift. Assuming a typical value of $z=0.3$, and if $B\approx B_{s}=2.9\times 10^{10}$ G, the equatorial field from the vacuum dipole spin-down result, then $E_{0}\approx 0.26$ keV is expected. When compared with $E_{0}=0.46$ keV from the absorption-line fit, or $0.75$ keV from the emission-line fit, this hints that the local surface field where the line is formed is larger than the equatorial dipole field, but is perhaps more compatible with the field at the pole, $B_{p}=2\,B_{s}$. Other factors possibly affecting a comparison between the dipole spin-down $B$-field and the surface $B$-field from the cyclotron energy will be explored in Section 6.2.

Figure 8.— Same as Figure 7 but showing the residuals from fits to a double blackbody model plus, from top to bottom: no line, a Gaussian emission line, a Gaussian absorption line, two Gaussian absorption lines, and a cyclotron absorption line model. The parameters of the fits are given in Table 6.

The addition of emission or absorption lines to the model near the low-energy end of the XMM–Newton spectrum affects the fitted value of the column density $N_{\rm H}$, with values ranging from $(2.8-4.3)\times 10^{21}$ cm-2 for the different models in Table 6. Therefore, comparison with independent measurements of $N_{\rm H}$ may suggest a preference for one or another of these models. For example, the 21 cm H I emission in the foreground of Puppis A amounts to $N_{\rm HI}=2.5\times 10^{21}$ cm-2 according to Reynoso et al. (2003). To obtain this value, they integrated the H I line emission in the radial velocity range $-10$ to $+16$ km s-1, the latter velocity corresponding to their assumed 2.2 kpc distance of Puppis A. Comparison with the values of $N_{\rm H}$ in Table 6 tends to favor the X-ray spectral models that include absorption lines. While this agreement is encouraging, it is subject to the caveat that the 21 cm column density is formally a lower limit, assuming as it does that the line is optically thin. Better support for a low column density comes from the XMM–Newton RGS spectra of several regions of the Puppis A SNR fitted by Katsuda et al. (2012). These require X-ray $N_{\rm H}$ in the range $(2.58-2.85)\times 10^{21}$ cm-2, which also agrees closely with the $N_{\rm H}$ from our absorption-line models for the pulsar spectrum.

Figure 9.— Blackbody temperatures and flux in the $0.5-5$ keV band for the 16 individual XMM–Newton observations of PSR J0821$-$4300, fitted to the two-blackbody model. Data are taken from Table 7. Error bars are $1\sigma$. The weighted mean values are indicated by the dashed lines.

Figure 10.— Combined XMM–Newton EPIC pn spectrum of PSR J0821$-$4300 in two pulse-phase intervals defined in Figure 5 ($\Delta\phi_{1}$, blue; $\Delta\phi_{2}$, red), fitted to a double blackbody model plus Gaussian emission line. The blackbody temperatures and line centroid energy are linked between the two spectra. The fitted parameters are given in the last column of Table 8. The residuals of the fit are shown in the bottom panel.
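The fits of Table 6 were performed in XSPEC, folded through the EPIC pn response. Purely for orientation, the sketch below evaluates the unfolded photon-flux model (absorbed two-blackbody plus Gaussian emission line) so the roles of the parameters are explicit; the `sigma_ism` function is a crude power-law stand-in for a real photoabsorption model such as tbabs, and the normalizations are arbitrary.

```python
# Unfolded two-blackbody + Gaussian emission-line model, Table 6 values.
import numpy as np

def bbodyrad(E, kT, norm):
    """Blackbody photon spectrum ~ E^2 / (exp(E/kT) - 1); arbitrary norm."""
    return norm * E**2 / np.expm1(E / kT)

def gauss_line(E, E0, sigma, norm):
    """Gaussian emission line profile."""
    return norm * np.exp(-0.5 * ((E - E0) / sigma) ** 2)

def sigma_ism(E):
    """Toy ISM photoabsorption cross-section per H atom (cm^2), ~E^-2.5;
    a crude stand-in for tbabs, adequate only for illustration."""
    return 2.3e-22 * E**-2.5

E = np.linspace(0.3, 5.0, 500)             # keV
NH = 4.3e21                                # cm^-2, Table 6 (+Emis. line)
flux = np.exp(-NH * sigma_ism(E)) * (
    bbodyrad(E, 0.25, 1.0)                 # warm spot, kT_w = 0.25 keV
    + bbodyrad(E, 0.45, 0.06)              # hot spot,  kT_h = 0.45 keV
    + gauss_line(E, 0.75, 0.075, 0.02)     # line at 0.75 keV, sigma ~ 75 eV
)
# 'flux' is the unfolded model; a real fit convolves this with the EPIC pn
# redistribution matrix and effective area inside XSPEC.
```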
Finally, we note that the total luminosity and blackbody areas reported in Table 6 differ from those presented in Gotthelf & Halpern (2009), the areas by a factor of two or more. This is a consequence of the different best-fit column densities among the models: the derived blackbody areas depend on blackbody temperature as $T^{-4}$, and the fitted temperature is itself strongly correlated with the fitted column density. Future work simultaneously modeling the phase-dependent spectra should better constrain the column density and temperatures, and consequently provide a more accurate measurement of the blackbody areas.

### 4.2. Search for Variability

To test for long-term variability of PSR J0821$-$4300, we also fitted the individual XMM–Newton spectra, searching, for example, for temperature variations on the surface. The spectra for each observation are well fitted by the two-blackbody model, with parameters listed in Table 7; their statistics do not warrant an additional line component. For this comparison, we held the absorbing column fixed at the value in Table 6 derived from the fit of the composite spectrum to the two-blackbody model. Figure 9 displays the results. No significant variation is evident in the flux or in either of the two blackbody temperatures.

Table 7. Individual XMM–Newton Spectra of PSR J0821$-$4300

Group | ObsID | Livetime (ks) | $kT_{w}$ (keV) | $kT_{h}$ (keV) | Flux ($10^{-12}$ erg s-1 cm-2) | $\chi^{2}_{\nu}({\rm DoF})$
---|---|---|---|---|---|---
1 | 0113020101S | 15.2 | $0.27(0.25,0.29)$ | $0.47(0.44,0.52)$ | $4.23(4.14,4.30)$ | $1.06(154)$
 | 0113020301S | 16.1 | $0.27(0.25,0.28)$ | $0.46(0.43,0.50)$ | $4.21(4.12,4.27)$ | $1.05(184)$
2 | 0606280101S | 25.7 | $0.27(0.26,0.28)$ | $0.47(0.45,0.51)$ | $4.15(4.09,4.20)$ | $1.19(282)$
 | 0606280101U | 24.5 | $0.25(0.24,0.26)$ | $0.45(0.43,0.47)$ | $4.11(4.05,4.16)$ | $0.91(268)$
 | 0606280201S | 24.6 | $0.27(0.26,0.28)$ | $0.47(0.45,0.50)$ | $4.14(4.09,4.20)$ | $1.02(261)$
 | 0650220201S | 8.1 | $0.22(0.19,0.26)$ | $0.40(0.38,0.45)$ | $4.11(4.01,4.20)$ | $0.88(94)$
3 | 0650220901S | 16.4 | $0.25(0.24,0.27)$ | $0.45(0.42,0.48)$ | $4.13(4.06,4.19)$ | $1.01(191)$
 | 0650221001S | 15.5 | $0.24(0.22,0.25)$ | $0.42(0.40,0.44)$ | $4.13(4.06,4.20)$ | $0.99(183)$
 | 0650221101S | 18.6 | $0.26(0.25,0.28)$ | $0.48(0.45,0.51)$ | $4.19(4.12,4.25)$ | $0.97(222)$
 | 0650221201S | 17.1 | $0.27(0.26,0.28)$ | $0.49(0.46,0.54)$ | $4.17(4.08,4.23)$ | $1.08(201)$
 | 0650221301S | 16.4 | $0.28(0.27,0.29)$ | $0.52(0.48,0.57)$ | $4.06(3.98,4.12)$ | $1.21(182)$
 | 0650221401S | 15.5 | $0.24(0.23,0.26)$ | $0.43(0.41,0.46)$ | $4.13(4.05,4.20)$ | $1.00(174)$
4 | 0650221501S | 20.3 | $0.27(0.25,0.28)$ | $0.45(0.43,0.48)$ | $4.26(4.20,4.32)$ | $0.96(225)$
 | 0657600101S | 22.4 | $0.26(0.25,0.27)$ | $0.47(0.44,0.50)$ | $4.13(4.07,4.19)$ | $0.99(242)$
 | 0657600201S | 26.1 | $0.26(0.25,0.27)$ | $0.45(0.43,0.47)$ | $4.24(4.19,4.30)$ | $1.07(285)$
 | 0657600301S | 24.7 | $0.26(0.25,0.28)$ | $0.46(0.44,0.49)$ | $4.22(4.16,4.27)$ | $0.94(262)$

Note. — Results for a fit to an absorbed, two-blackbody model with the column density held fixed at $N_{\rm H}=3.75\times 10^{21}$ cm-2, the value obtained for the summed spectrum in Table 6. The absorbed flux in the $0.5-5$ keV range is tabulated. The range of uncertainty of each value is the $1\sigma$ confidence interval for two interesting parameters.
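The constancy test behind Figure 9 reduces to a weighted mean and a $\chi^{2}$ statistic against a constant. A sketch using the $kT_{w}$ values of Table 7, with symmetric errors approximated from the half-widths of the tabulated confidence intervals (a simplification), is:

```python
# Weighted mean of the warm-component temperatures of Table 7 and a
# chi^2 test for constancy, as summarized graphically in Figure 9.
import numpy as np

kTw = np.array([0.27, 0.27, 0.27, 0.25, 0.27, 0.22, 0.25, 0.24,
                0.26, 0.27, 0.28, 0.24, 0.27, 0.26, 0.26, 0.26])
err = np.array([0.02, 0.015, 0.01, 0.01, 0.01, 0.035, 0.015, 0.015,
                0.015, 0.01, 0.01, 0.015, 0.015, 0.01, 0.01, 0.015])

w = 1.0 / err**2
mean = np.sum(w * kTw) / np.sum(w)
chi2 = np.sum(((kTw - mean) / err) ** 2)
print(f"weighted mean kT_w = {mean:.3f} keV, "
      f"chi2 = {chi2:.1f} for {len(kTw) - 1} dof")
```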
### 4.3. Phase-Resolved Spectroscopy

We next test for pulse-phase dependence of the spectral line feature(s) by generating spectra in two phase intervals, each $\Delta\phi=0.3$ cycles in width and centered on the peaks of the soft and hard light curves, respectively (see Figure 5). This is motivated by the original finding in Gotthelf & Halpern (2009) that an emission line is more strongly associated with the warm region, i.e., the soft phase of the pulse. For simplicity, given the reduced counts in the phased spectra, we used only the Gaussian emission-line model as representative of all of the possible line models. The two phase-resolved spectra combined from all epochs are shown in Figure 10, where they are fitted to the two-blackbody plus Gaussian emission-line model. In the simultaneous fit, the two temperatures and the line centroid energy are linked. As shown in Table 8, the equivalent width of the Gaussian line is indeed about a factor of 2 larger in the soft-phase spectrum than in the hard phase.

Figure 11.— XMM–Newton EPIC pn spectra of PSR J0821$-$4300, grouped into the four sets listed in Table 7, and fitted to a double blackbody model with Gaussian emission line. As in Figure 10, red and blue represent the two pulse-phase intervals defined in Figure 5. The residuals from the fits are shown in the bottom panels. The fitted parameters are given in Table 8.

De Luca et al. (2012) presented evidence that the emission-line centroid had decreased from 0.80 keV to 0.73 keV between 2001 and 2009. Here we extend the examination for variability of the spectral feature, including its phase dependence, by combining the XMM–Newton spectra into four groups of adjacent observations as indicated in Table 7. We fitted these four sets of spectra to the two-blackbody plus Gaussian emission-line model, with the temperatures and line width ($\sigma$) linked between the two phase intervals. The four fits are displayed in Figure 11, and the fitted parameters for each set and their sum are presented in Table 8. It is clear that the temperatures and their contributed fluxes are steady in time, consistent with their phase-averaged behavior and with no time dependence in their phase ratios. On the other hand, the equivalent width of the fitted Gaussian emission line shows evidence for having increased between 2001 and 2009, while the line centroid energy decreased from $0.79^{+0.02}_{-0.03}$ keV to $0.71^{+0.02}_{-0.03}$ keV. While these results are consistent with the De Luca et al. (2012) analysis of the same data, there is little if any additional variability between 2009 and 2012.
Table 8. XMM–Newton Phase-Resolved Spectra of PSR J0821$-$4300

Parameter | Group 1 (2001 Apr–Nov) | Group 2 (2009 Dec – 2010 May) | Group 3 (2010 Oct–Dec) | Group 4 (2011 Apr – 2012 Apr) | Sum (2001 Apr – 2012 Apr)
---|---|---|---|---|---
$kT_{w}$ (keV) | $0.23(0.21,0.25)$ | $0.25(0.24,0.26)$ | $0.24(0.23,0.25)$ | $0.27(0.25,0.28)$ | $0.25(0.24,0.25)$
$kT_{h}$ (keV) | $0.41(0.39,0.44)$ | $0.45(0.43,0.47)$ | $0.44(0.42,0.46)$ | $0.47(0.44,0.52)$ | $0.44(0.43,0.45)$
$F[\Delta\phi_{1}]$a | $4.00(3.97,4.08)$ | $3.91(3.88,3.93)$ | $3.94(3.91,3.96)$ | $3.99(3.96,4.01)$ | $3.96(3.94,3.97)$
$F[\Delta\phi_{2}]$a | $4.33(4.28,4.38)$ | $4.37(4.35,4.40)$ | $4.37(4.34,4.39)$ | $4.45(4.42,4.47)$ | $4.40(4.38,4.41)$
$E_{0}$ (keV) | $0.79(0.76,0.81)$ | $0.71(0.68,0.73)$ | $0.72(0.69,0.74)$ | $0.69(0.63,0.73)$ | $0.72(0.70,0.73)$
$\sigma$ (eV) | $\leq 62$ | $44(21,70)$ | $68(45,95)$ | $133(78,195)$ | $69(52,89)$
$EW[\Delta\phi_{1}]$ (eV) | $40(24,51)$ | $77(59,89)$ | $58(46,76)$ | $86(67,107)$ | $61(53,74)$
$EW[\Delta\phi_{2}]$ (eV) | $14(3,22)$ | $34(23,42)$ | $42(32,57)$ | $39(26,52)$ | $31(26,39)$
$\chi^{2}_{\nu}{\rm(DoF)}$ | $1.01(87)$ | $1.19(229)$ | $1.05(269)$ | $0.99(250)$ | $1.20(212)$

aAbsorbed flux quoted for the $0.5-5.0$ keV band in units of $10^{-12}$ erg s-1 cm-2.

Note. — Results from simultaneous fits to XMM–Newton spectra of PSR J0821$-$4300 extracted from the two pulse-phase intervals, $\Delta\phi_{1}$ and $\Delta\phi_{2}$, defined in Figure 5. Group numbers are defined in Table 7. The blackbody temperatures and Gaussian line energy are linked between the two phases. The column density is fixed at $N_{\rm H}=4.28\times 10^{21}$ cm-2, the phase-averaged value for the summed spectrum in Table 6. Quoted uncertainties are $1\sigma$ for three interesting parameters.

We also repeated this test for variability with the Gaussian absorption-line model for the spectral feature (results not tabulated here). For either the emission-line or the absorption-line model, the measured line centroids for all epochs but 2001 cluster well within their $1\sigma$ uncertainty (for two interesting parameters, $\Delta\chi^{2}=2.3$). The 2001 set deviates from the mean defined by the other three sets by $\approx 2\sigma$. In terms of percentage deviation from the mean, the line centroid measured in 2001 was 14% and 8% higher in the emission-line and absorption-line models, respectively. We conclude that, although the deviation seems large for the emission-line model, it is not inconsistent with the expected variance. Further observations would be necessary to establish more definite evidence of long-term variability.

## 5\. A Definitive Spin-Down Measurement for PSR J1210$-$5226

Archival timing observations of PSR J1210$-$5226 spanning the years 2000–2008 were too sparse to securely determine its spin-down rate, as described in Halpern & Gotthelf (2011). Searching all possible parameter space for a phase-coherent, quadratic ephemeris, we found two equally acceptable solutions, with $\dot{f}=-1.243(22)\times 10^{-16}$ Hz s-1 and $\dot{f}=-7.084(22)\times 10^{-16}$ Hz s-1, corresponding to $B_{s}=9.9\times 10^{10}$ G (solution 1) and $B_{s}=2.4\times 10^{11}$ G (solution 2), respectively. Since such low $\dot{E}$ pulsars are generally very stable rotators with little timing noise or glitch activity, it was deemed likely that one of these is the true solution, and the other an alias with an incorrect cycle count. It is also important that no solutions with smaller dipole $B_{s}$, and no spinning-up solutions, were found.
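The logic by which the new campaign (Table 9, below) discriminates between the two archival aliases can be illustrated by extrapolating each quadratic ephemeris forward. For simplicity the sketch below anchors both solutions to the Table 10 frequency and epoch, so the numbers are illustrative only:

```python
# Extrapolate the two archival timing solutions to the first new
# observation (MJD 55890.233) and compare the predicted frequencies.
f0, mjd0 = 2.357763502865, 53562.0       # Hz, MJD (Table 10, illustrative)
fdot1, fdot2 = -1.243e-16, -7.084e-16    # Hz/s, solutions 1 and 2

dt = (55890.233 - mjd0) * 86400.0        # seconds
f1 = f0 + fdot1 * dt
f2 = f0 + fdot2 * dt
print(f"solution 1 predicts f = {f1:.10f} Hz")
print(f"solution 2 predicts f = {f2:.10f} Hz")
print(f"difference = {f1 - f2:.2e} Hz")  # ~1.2e-7 Hz
# A phase-connected fit across the ~1 yr of new data localizes f to a few
# 1e-9 Hz (rms residuals ~0.05 cycles over ~3e7 s), so only one alias survives.
```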
Table 9. Log of New X-ray Timing Observations of PSR J1210$-$5226

Mission | Instr/Mode | ObsID | Date (UT) | Exposure (ks) | Start Epoch (MJD) | Frequencya (Hz) | $Z^{2}_{1}$
---|---|---|---|---|---|---|---
Chandra | ACIS-S3/CC | 14199 | 2011 Nov 25 | 31.0 | 55890.233 | 2.3577625(28) | 48.2
Chandra | ACIS-S3/CC | 14202 | 2012 Apr 10 | 33.0 | 56027.637 | 2.3577709(43) | 22.2
XMM | EPIC-pn/SW | 0679590101 | 2012 Jun 22 | 26.5 | 56100.537 | 2.3577687(25) | 68.3
XMM | EPIC-pn/SW | 0679590201 | 2012 Jun 24 | 22.3 | 56102.752 | 2.3577621(34) | 69.0
XMM | EPIC-pn/SW | 0679590301 | 2012 Jun 28 | 24.9 | 56106.490 | 2.3577636(23) | 109.6
XMM | EPIC-pn/SW | 0679590401 | 2012 Jul 02 | 24.5 | 56110.918 | 2.3577626(28) | 61.4
XMM | EPIC-pn/SW | 0679590501 | 2012 Jul 18 | 27.3 | 56126.553 | 2.3577640(27) | 51.9
XMM | EPIC-pn/SW | 0679590601 | 2012 Aug 11 | 27.3 | 56150.408 | 2.3577637(23) | 81.8
Chandra | ACIS-S3/CC | 14200 | 2012 Dec 01 | 31.1 | 56262.095 | 2.3577634(28) | 39.8

aBarycentric frequency derived from a $Z^{2}_{1}$ test. The given uncertainty is the $1\sigma$ confidence interval.

In 2011–2012 we began a new series of observations of PSR J1210$-$5226 with Chandra and XMM–Newton that was designed to initiate and maintain a unique, phase-connected timing solution for this pulsar for the first time and eliminate the prior timing ambiguity. Table 9 is a log of the new observations. The instrumental setups and analysis methods are the same as those described in Section 3 for PSR J0821$-$4300, except that the XMM–Newton source photons were extracted from a $20^{\prime\prime}$ radius aperture instead of $30^{\prime\prime}$, and the $0.5-2.5$ keV band was selected to optimize the pulsed signal.

Table 10. Ephemeris of PSR J1210$-$5226

Parameter | Value
---|---
R.A. (J2000)a | $12^{\rm h}10^{\rm m}00^{\rm s}\\!.91$
Decl. (J2000)a | $-52^{\circ}26^{\prime}28^{\prime\prime}\\!.4$
Ephemeris Epoch (MJD TDB)b | 53562.0000006
Ephemeris Span (MJD) | 51,549–56,262
Frequency, $f$ | 2.357763502865(65) Hz
Frequency derivative, $\dot{f}$ | $(-1.2363\pm 0.0091)\times 10^{-16}$ Hz s-1
Period, $P$ | 0.424130748816(12) s
Period derivative, $\dot{P}$ | $(2.224\pm 0.016)\times 10^{-17}$
Surface dipole magnetic field, $B_{s}$ | $9.8\times 10^{10}$ G
Spin-down luminosity, $\dot{E}$ | $1.2\times 10^{31}$ erg s-1
Characteristic age, $\tau_{c}$ | 302 Myr

aMeasured from Chandra ACIS-S3 ObsID 3913. Typical uncertainty is $0.\\!^{\prime\prime}6$.
bEpoch of fitted minima of the summed pulse profile; phase zero in Figure 13.

The coherently measured period from the new observations is precise enough to reject previous timing solution 2, while solution 1, with the smaller $B_{s}$, extrapolates precisely to the new period. Furthermore, the new pulse phases are aligned accurately with solution 1, while solution 2 gives random phases. Thus a unique quadratic ephemeris fits all 12.9 years of Chandra and XMM–Newton timing. Table 10 gives this global ephemeris, derived from a least-squares fit to the TOAs as described in Section 3, with the phase residuals shown in Figure 12. The previous uncertainty on the $\dot{f}$ of solution 1 is reduced by more than a factor of 2, with the result $\dot{f}=-1.2363(91)\times 10^{-16}$ Hz s-1; the corresponding dipole magnetic field is $B_{s}=9.8\times 10^{10}$ G. Figure 13 shows the pulse profile using data from all the observations folded on the presented ephemeris.

Figure 12.— Pulse-phase residuals from the linear term (dash-dot line) of the phase ephemeris of PSR J1210$-$5226 presented in Table 10. Included are the new observations listed in Table 9 and the archival timing observations from Table 1 of Halpern & Gotthelf (2011).
The quadratic term (solid line) corresponds to the uniquely determined period derivative spanning the years 2000–2012. The error bars are generally smaller than the symbol size.

Figure 13.— Pulse profiles of PSR J1210$-$5226 in the $0.5-2.5$ keV band using data from all timing observations, folded according to the ephemeris in Table 10. Phase zero corresponds to the listed TDB epoch of the ephemeris. Included are the new observations in Table 9 and the archival timing observations from Table 1 of Halpern & Gotthelf (2011).

Strictly speaking, the derived period derivative is an upper limit to the intrinsic one, as any proper motion has not been measured and taken into account. PSR J1210$-$5226 and PSR J0821$-$4300 are at similar distances, and the location of PSR J1210$-$5226 with respect to the geometric center of SNR PKS 1209$-$51/52 allows (but does not require, since the kinematics of the SNR are unknown) a proper motion of $\sim 70$ mas yr-1, or $v_{\perp}\sim 730$ km s-1 (De Luca et al., 2011), similar to that of PSR J0821$-$4300. If so, the Shklovskii effect given by equation (1) would contribute $\dot{P}_{k}\sim 1.1\times 10^{-17}$, or half of the total $\dot{P}$, and the spin-down dipole field would be reduced somewhat, to $B_{s}\approx 7\times 10^{10}$ G. Of course, if these two CCOs both have velocities toward the high end of the distribution of pulsars, that would be a tantalizing physical result in itself. However, examination of another CCO, the NS in Cas A, does not necessarily support such high velocities for CCOs in general; Thorstensen et al. (2001) and Fesen et al. (2006) find $v_{\perp}\approx 350$ km s-1 with respect to the explosion center of Cas A.

## 6\. Discussion

### 6.1. On the Age of Puppis A

Becker et al. (2012) discussed the impact of the revised proper motion of PSR J0821$-$4300 on the inferred age of Puppis A. The age of the SNR had been derived previously as $3700\pm 300$ yr from the proper motions of optical filaments that point back to a common center, presumed to be the site of the explosion, assuming no deceleration. The motion of the NS also extrapolates to the same center, but the distance traveled, $371^{\prime\prime}\pm 31^{\prime\prime}$, corresponds to an age of $5200\pm 1000$ yr (or $6100\pm 1000$ yr for our proper motion of $61\pm 9$ mas yr-1). Becker et al. (2012) refer to these marginally contradictory measurements as independent, and choose to average them, giving the value 4.5 kyr listed in Table 1. However, they are not truly independent, because they assume the same starting location. Furthermore, the discrepancy is worse if the filaments have decelerated. Just as the new Chandra observations of the NS have improved the accuracy of its proper motion, we suggest that a contemporary observation of the optical filaments of Puppis A may improve the precision of the original Winkler et al. (1988) study on which the optical proper-motion age is based. Such an investigation could lead to a more detailed understanding of the dynamics of the filaments, and a more accurate age for Puppis A.

### 6.2. PSR J0821$-$4300 and PSR J1210$-$5226 as Anti-Magnetars

The spin-down of PSR J0821$-$4300, with $\dot{E}=1.9\times 10^{32}$ erg s-1, is consistent with magnetic braking of an isolated neutron star having a weak dipole field of $B_{s}=2.9\times 10^{10}$ G. Its spin-down power is much smaller than its observed thermal X-ray luminosity, $L_{x}\approx 5.6\times 10^{33}\,d_{2.2}^{2}$ erg s-1, which rules out rotation as a significant power source.
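The derived quantities quoted here and in Table 4 follow from the standard dipole formulae; as a check (a sketch assuming the conventional moment of inertia $I=10^{45}$ g cm$^{2}$):

```python
# Dipole spin-down quantities for PSR J0821-4300 from the Table 4 solution.
import numpy as np

P, Pdot_obs, Pdot_k = 0.112799455, 9.28e-18, 2.24e-18
Pdot = Pdot_obs - Pdot_k                 # intrinsic period derivative
I = 1e45                                 # g cm^2, conventional assumption

B_s  = 3.2e19 * np.sqrt(P * Pdot)        # G, vacuum dipole
Edot = 4 * np.pi**2 * I * Pdot / P**3    # erg/s
tau  = P / (2 * Pdot) / 3.156e13         # Myr

print(f"B_s = {B_s:.1e} G, Edot = {Edot:.1e} erg/s, tau_c = {tau:.0f} Myr")
# -> B_s ~ 2.9e10 G, Edot ~ 1.9e32 erg/s, tau_c ~ 254 Myr, as quoted.
```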
As discussed in Halpern & Gotthelf (2010a) for PSR J1852$+$0040, the very small $\dot{P}$ disfavors propeller spin-down and accretion as a source of the X-rays. Thus, residual cooling remains the most plausible source of the X-ray luminosity of PSR J0821$-$4300 and, by extension, of all CCOs. Given the meager spin-down power of PSR J0821$-$4300, we can also discount the suggestion of Reynoso et al. (2003) and Castelletti et al. (2006) that structure in the Puppis A SNR and associated H I is caused by jets emitted by the pulsar, similar to the SS433/W50 system. Parenthetically, if the H I morphology is not caused by the pulsar, then it does not provide supporting evidence of its distance.

An unresolved question about PSR J0821$-$4300 is the origin of its phase-dependent, possibly variable emission or absorption feature. In the absence of accretion, it would be difficult to understand how an emission line is generated, or why it would vary in a few years. The indication of variability is not strong, and there is no other evidence of accretion. Therefore, we will consider here an absorption-line interpretation, for which there is a good precedent in PSR J1210$-$5226, the only isolated pulsar to show a series of strong absorption lines in its spectrum (Sanwal et al., 2002; Mereghetti et al., 2002; Bignami et al., 2003). The spectral features in PSR J1210$-$5226 are now widely considered to comprise the electron cyclotron fundamental at $E_{0}=0.7$ keV and its harmonics. The surface magnetic field strength where the lines are formed is inferred to be $B\approx 8\times 10^{10}$ G according to equation (2) and assuming $z\approx 0.3$. The relative strength of the harmonics is explained by treating them as resonances in the photospheric free-free opacity in the presence of the magnetic field (Suleimanov et al., 2010, 2012).

PSR J1210$-$5226 is exceptional in having the largest known spin-down dipole magnetic field among CCOs, now confirmed as $B_{s}\leq 9.8\times 10^{10}$ G. This is only slightly larger than $B\approx 8\times 10^{10}$ G inferred from the spectral features, and is the first case in which such a comparison has been made between independent methods of measuring the surface $B$-field of an isolated pulsar. Effects that could eliminate the already minor discrepancy include the unmeasured proper motion, which would decrease the inferred spin-down dipole field, and the results of numerical models of a force-free magnetosphere, which imply a different spin-down law from the standard vacuum dipole expression

$\displaystyle B_{s}=\left(\frac{3c^{3}IP\dot{P}}{8\pi^{2}R^{6}\sin^{2}\alpha}\right)^{1/2}=3.2\times 10^{19}\left(\frac{P\dot{P}}{\sin^{2}\alpha}\right)^{1/2}\,{\rm G}\qquad(3)$

such that, more accurately,

$\displaystyle B_{s}\approx\left(\frac{c^{3}IP\dot{P}}{4\pi^{2}R^{6}\,[1+\sin^{2}\alpha]}\right)^{1/2}=2.6\times 10^{19}\left(\frac{P\dot{P}}{1+\sin^{2}\alpha}\right)^{1/2}\,{\rm G}\qquad(4)$

(Spitkovsky, 2006). This means that a given measured spin-down rate corresponds to a $B_{s}$ smaller by a factor of $0.58-0.82$ than the conventional approximation $B_{s}=3.2\times 10^{19}(P\dot{P})^{1/2}$ G. In the case of PSR J0821$-$4300, by the same logic, its weaker inferred dipole magnetic field of $B_{s}\leq 2.9\times 10^{10}$ G would not naturally account for a spectral feature at $0.7-0.8$ keV, instead predicting $E_{0}\leq 0.26$ keV. The absorption model with the cyclotron fundamental at $E_{0}=0.46$ keV comes closer to the prediction.
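The numbers in this comparison are straightforward to reproduce. The sketch below assumes the standard redshifted electron-cyclotron relation $E_{0}=11.6\,(B/10^{12}\,{\rm G})/(1+z)$ keV (quoted from the literature, since equation (2) itself is not reproduced in this section) together with the two spin-down laws above; it recovers the quoted line energies and the $0.58-0.82$ factor.

```python
import numpy as np

def e_cyc_keV(B, z=0.3):
    # Redshifted electron-cyclotron fundamental (assumed form of eq. [2]).
    return 11.6 * (B / 1e12) / (1.0 + z)

print(f"B = 8.0e10 G -> E0 = {e_cyc_keV(8e10):.2f} keV")    # ~0.71 keV line
print(f"B = 2.9e10 G -> E0 = {e_cyc_keV(2.9e10):.2f} keV")  # ~0.26 keV limit

# Ratio of the force-free estimate (eq. [4]) to the conventional
# approximation B_s = 3.2e19 (P Pdot)^(1/2) G, versus inclination alpha.
for deg in (0, 30, 60, 90):
    a = np.radians(deg)
    r = 2.6e19 / (3.2e19 * np.sqrt(1.0 + np.sin(a) ** 2))
    print(f"alpha = {deg:2d} deg -> ratio = {r:.2f}")
# spans ~0.57-0.81, i.e., the 0.58-0.82 range quoted in the text (rounding)
```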
However, additional variables can affect the comparison of a dipole field inferred from spin-down with that inferred from a cyclotron line. First is the factor of 2 variation of dipole field strength over the surface, with $B_{p}=2\,B_{s}$. Second, the actual field is not likely to be a centered dipole, and may be larger than $B_{s}$ or $B_{p}$ in places where the line is formed. The asymmetric distribution of surface temperature on PSR J0821$-$4300 already appears to require a complex magnetic field geometry, such as an off-center dipole or higher multipoles. Third, the inferred spin-down field depends on the uncertain NS mass and radius as in equations (3) and (4). $B_{s}$ scales as ${I^{1/2}\,R^{-3}}\propto M^{1/2}R^{-2}(1+z)$ (Ravenhall & Pethick, 1994), where $1+z=(1-2GM/Rc^{2})^{-1/2}\propto B/E_{0}$ from equation (2). For an astrophysically likely $M=1.4\,M_{\odot}$, theoretical NS equations of state allow $8<R<15$ km, therefore $1.18<1+z<1.44$. The spectroscopic $B$ is therefore uncertain by $\pm 10\%$ when we adopt $z=0.3$, while the dipole $B_{s}$ is uncertain by the much larger factor of $\sim 2$. (The gravitational redshift cancels out in their ratio.) Allowing for these uncertainties, it is reasonable to adopt the hypothesis that the feature(s) in the spectrum of PSR J0821$-$4300 are due to the cyclotron process.
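The quoted redshift range follows directly from $1+z=(1-2GM/Rc^{2})^{-1/2}$; a minimal numerical check, assuming $M=1.4\,M_{\odot}$ and the $8-15$ km radius range cited above:

```python
# Gravitational redshift 1+z = (1 - 2GM/(R c^2))^(-1/2) for M = 1.4 M_sun
# over the radius range allowed by NS equations of state (cgs units).
G, C, M_SUN = 6.674e-8, 2.9979e10, 1.989e33
M = 1.4 * M_SUN
for R_km in (8.0, 15.0):
    R = R_km * 1e5
    one_plus_z = (1.0 - 2.0 * G * M / (R * C**2)) ** -0.5
    print(f"R = {R_km:4.1f} km -> 1+z = {one_plus_z:.2f}")
# R = 8 km gives 1.44 and R = 15 km gives 1.18, bracketing the adopted z = 0.3
```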
The continuum X-ray spectrum and pulse profiles of PSR J0821$-$4300 are indicative of antipodal hot spots of different areas and temperatures (Gotthelf & Halpern, 2009; Gotthelf et al., 2010), which are difficult to account for if the magnetic field is weak. A related problem is the high pulsed fraction of 64% from PSR J1852$+$0040 (Halpern & Gotthelf, 2010a), a timing twin of PSR J0821$-$4300. Polar cap heating by any magnetospheric accelerator must be negligible as a source of surface heating in CCOs, being only a small fraction of the already insignificant spin-down power.

Thermal X-rays from residual cooling can be nonuniform if there is anisotropic heat conduction in the star. The effect of different magnetic field configurations on heat transport in the crust and envelope of NSs has been modelled, most recently by Geppert et al. (2004, 2006), Pérez-Azorín et al. (2006a), and Pons et al. (2009). A toroidal field is expected to be the initial configuration generated by differential rotation in the proto-neutron star dynamo (Thompson & Duncan, 1993). One of the effects of a crustal toroidal field is to insulate the magnetic equator from heat conduction, resulting in warm spots at the poles. The warm regions can even be of different sizes due to the antisymmetry of the poloidal component of the field (Geppert et al., 2006), which is evocative of the antipodal thermal structure of PSR J0821$-$4300. To have a significant effect on the heat transport, the crustal toroidal field strength required in these models is $\sim 10^{15}$ G, many orders of magnitude greater than the poloidal field if the latter is measured by the spin-down. Purely toroidal or poloidal fields are thought to be unstable in an initially fluid NS (Tayler, 1973; Flowers & Ruderman, 1977), although the toroidal field may be stabilized by a poloidal field that is several orders of magnitude weaker (Braithwaite, 2009). We suggest the latter as a viable configuration for a CCO. Shabaltas & Lai (2012) tried to model the pulse profile of PSR J1852$+$0040 with anisotropic conduction, and concluded that they needed a toroidal crustal field of $B_{\phi}>2\times 10^{14}$ G to produce its high pulsed fraction. Even then, they observe, the shape of the modelled light curve does not match the observed one. Since the shape of the light curve is not reproduced, the surface temperature distribution of PSR J1852$+$0040 and its physical origin are still not known.

Page & Sarmiento (1996), Pérez-Azorín et al. (2006b), Zane & Turolla (2006), and Zane (2007) investigated NS surface emission patterns using a combination of star-centered dipole and quadrupole magnetic field components to model asymmetric pulse profiles. The behavior of PSR J0821$-$4300 may ultimately be explained by similar models. It remains to be shown whether CCOs can have field configurations that are strong enough to affect heat transport to the extent required, while not exceeding the spin-down limits on the external dipole field.

### 6.3. Origin and Evolution of Anti-magnetars

PSR J0821$-$4300 is nearly a twin of PSR J1852$+$0040 in its spin properties, and there are no other young neutron stars with measured magnetic fields this weak. They fall in a region of the $P-\dot{P}$ diagram (Figure 3) that is devoid of ordinary (non-recycled) radio pulsars (Manchester et al., 2005). They overlap with the supposed mildly recycled pulsars in this area (Belczynski et al., 2010). The characteristic age of 254 Myr for PSR J0821$-$4300 is not meaningful because the pulsar was born spinning at its current period, as is the case for the other CCO pulsars. If their magnetic fields remain constant, they will not move in $P-\dot{P}$ space for $>10^{8}$ years (but see below).
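The characteristic age quoted above can be checked against the spin parameters given elsewhere in the text. The sketch below inverts $\tau_{c}=P/2\dot{P}$ using $P=0.112$ s and $\tau_{c}=254$ Myr, and feeds the implied $\dot{P}$ into the conventional dipole formula; note that this $\dot{P}$ is an illustrative back-calculation, not the measured value.

```python
# Consistency check of the CCO spin parameters quoted in the text:
# tau_c = P / (2 Pdot) and B_s = 3.2e19 * (P * Pdot)^(1/2) G.
YR = 3.1557e7                       # seconds per year
P = 0.112                           # spin period of PSR J0821-4300, s
tau_c = 254e6 * YR                  # characteristic age, s

P_dot = P / (2.0 * tau_c)           # implied period derivative (illustrative)
B_s = 3.2e19 * (P * P_dot) ** 0.5   # conventional vacuum-dipole estimate

print(f"P_dot ~ {P_dot:.1e}")       # ~7.0e-18
print(f"B_s   ~ {B_s:.1e} G")       # ~2.8e10 G, cf. 2.9e10 G in the text
```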
That CCOs are found in SNRs in comparable numbers to other classes of NSs implies that they must represent a significant fraction of NS births. If PSR J0821$-$4300 and PSR J1852$+$0040 are typical CCOs, the area around them in the $P-\dot{P}$ diagram should be densely populated with “orphaned CCOs” that remain after their SNRs dissipate in $\sim 10^{5}$ years. Why there are few ordinary radio pulsars and no older X-ray pulsars near their location is then a mystery, as emphasized by Kaspi (2010), which may indicate that radio luminosity is a function of spin-down power. There are not yet enough CCOs to know whether they are intrinsically radio-quiet relative to ordinary and recycled pulsars, rather than unfavorably beamed. It is also possible that some ordinary radio pulsars with $B_{s}<10^{11}$ G are actually much younger than their characteristic ages, and may be orphaned CCOs that could be recognized in X-rays.

Whether or not they are radio pulsars, nearby orphaned CCOs should be detectable as thermal X-ray sources for $10^{5}-10^{6}$ years, similar to the seven ROSAT-discovered isolated NSs (INSs: Haberl, 2007) which, however, have strong magnetic fields (Kaplan et al., 2009). It is likely that the INSs are kept hot for longer than CCOs by continuing magnetic field decay (Pons et al., 2007), which would explain their observed abundance relative to the elusive orphaned CCOs. It may be difficult to detect and/or recognize orphaned CCOs if they cool faster than ordinary neutron stars. One effect that can accelerate cooling is an accreted light-element envelope, which has higher heat conduction than an iron surface (Kaminker et al., 2006). The newly discovered 59 ms pulsar 1RXS J141256.0$+$792204 (“Calvera”), originally detected in the ROSAT All-Sky Survey, may be the first recognized example of an orphaned CCO (Zane et al., 2011; Rutledge et al., 2008; Halpern, 2011), pending a measurement of its period derivative. The isolated NS 2XMM J104608.7$-$594306 in the Carina Nebula (Pires et al., 2012) has been suggested as another orphaned CCO.

It is possible that the weak magnetic field of CCOs is causally related to slow rotation at birth through the turbulent dynamo (Thompson & Duncan, 1993) that generates the magnetic field. (See Spruit 2008 for a review of possible mechanisms for the origin of magnetic fields in neutron stars.) If the dipole magnetic field $B_{s}$ is simply related to the initial spin period, and assuming that the present period $P$ is in fact the birth period because the spin-down time is so much longer than the true age, we may expect an anti-correlation between $B_{s}$ and $P$. With only three data points to compare, and with two of them having nearly identical values, there is not much evidence to examine for a trend. In fact, the CCO with the longest period, PSR J1210$-$5226, also has the strongest dipole field of the three, which would not by itself support such a simple correlation. A population analysis of radio pulsars by Faucher-Giguère & Kaspi (2006) concludes that there is a wide distribution of birth periods, with a mean of $\sim 300$ ms and a dispersion of $\sigma\sim 150$ ms. If this is true, the birth periods of CCOs are not in fact long, and their weak dipole fields may be the effect of some as-yet unknown parameter.

In an alternative theory for CCOs, a normal ($\sim 10^{12}$ G) magnetic field is buried in the core or crust of a NS by prompt fall-back of supernova debris, and takes thousands of years to diffuse back to the surface, during which time the NS appears as an anti-magnetar. This assumes that the accreted matter is itself not magnetized. The timescale for diffusion is highly dependent on the amount of matter accreted. According to Muslimov & Page (1995), for accretion of $\sim 10^{-5}\,M_{\odot}$, the regrowth of the surface field is largely complete after $\sim 10^{3}$ yr, but if $>0.01\,M_{\odot}$ is accreted, then the diffusion time could be millions of years. Chevalier (1989) calculated that the neutron star in SN 1987A could have accreted $\sim 0.1\,M_{\odot}$ of fallback material in the hours after the SN explosion, aided by a reverse shock from the helium layer of the progenitor. If so, it may never emerge as a radio pulsar.

Interesting support for the theory of field burial and regrowth is the absence of evidence for magnetic field strengths $<10^{11}$ G in accreting high-mass X-ray binary pulsars, as inferred from their pulse periods and period derivatives (Popov & Turolla, 2013). If intrinsic fields in the range $10^{10}-10^{11}$ G were common, equilibrium spin periods of $0.1-1$ s should be frequent in HMXBs, but they are not. If the NSs in these systems are born in the same way as isolated pulsars, this could imply that birth fields are never so small; instead, field regrowth has occurred in all cases after it was initially buried. One caveat here is that $\sim 40\%$ of HMXBs do not have measured spin periods, although there is no strong selection effect against detecting $P<1$ s. Bernal et al. (2010) revisited the process of hypercritical accretion onto a magnetized neutron star, while Ho (2011) made new calculations of the subsequent diffusion to constrain the magnetic fields of CCOs at birth and the accreted mass, finding $10^{-4}-10^{-5}\,M_{\odot}$ for the latter. These models are difficult to test using CCOs, because doing so would involve measuring the braking index or the change of the dipole magnetic field directly.
Ho (2011) implied that the isolation of CCOs in the $P-\dot{P}$ diagram may be less problematic in this model because, as their dipole magnetic fields increase, they will evolve to join the bulk of the pulsar population. However, for the first $\sim 10^{5}$ yr, field growth can only move a CCO vertically upward in the $P-\dot{P}$ diagram, as the braking index has a large, negative value. Orphaned CCOs should still have periods of $\sim 0.1$ s and lie in a sparse region. For this reason, an important test for an example of field growth is to determine the $\dot{P}$ of Calvera, which will reveal whether its dipole field is greater than that of PSR J0821$-$4300 and PSR J1852$+$0040.

Previous calculations of field burial and re-emergence for CCOs were one-dimensional. The first two-dimensional calculations for this purpose were recently reported by Viganò & Pons (2012). They find that the accretion must be essentially isotropic to bury the field sufficiently to cause the required orders-of-magnitude reduction of the external dipole. If the accretion were instead confined to the equator or the magnetic poles, the reduction of the dipole component would not be significant enough to produce a CCO. Finally, it remains to be seen whether the resulting surface thermal distribution during the CCO phase can be made compatible with observed spectra and pulse profiles. Viganò & Pons (2012) briefly presented the temperature distribution from one of their models that produces hot polar caps at a time when the external dipole field is $10^{10}$ G. But even in this case, the temperature does not vary by as much as a factor of 2 over the surface, a range that would be required to match the properties of PSR J0821$-$4300 and other CCOs.

## 7\. Conclusions and Future Work

Measurement of the spin-down rate of the 112 ms PSR J0821$-$4300 in Puppis A was achieved using X-ray observations coordinated between XMM–Newton and Chandra, the resulting phase-connected ephemeris spanning 2.3 yr. We also measured the proper motion of the pulsar in Chandra HRC images over 10.6 years. The proper motion makes a non-negligible contribution to the period derivative via the Shklovskii effect, and the uncertainty in proper motion and distance limits the accuracy of the intrinsic period derivative to $\approx 16\%$. The straightforward interpretation of these results is dipole spin-down due to a weak surface magnetic field of $B_{s}=2.9\times 10^{10}$ G, the smallest measured for any young neutron star, and nearly identical to that of the CCO pulsar PSR J1852$+$0040 in Kes 79.

A phase-dependent spectral feature in PSR J0821$-$4300 can be modelled either as an emission line of energy $\approx 0.75$ keV, or as a cyclotron absorption line and its harmonic, with $E_{0}\approx 0.46$ keV. For reasons that are not clear, it is stronger during the pulse-phase interval in which the continuum spectrum is softer. There is only marginal evidence for long-term variability of this feature. The local magnetic field strength in the area where the spectral feature is produced may have to be larger than the dipole spin-down value.

The existing spin-down measurements for three CCOs, including a definitive new result for PSR J1210$-$5226, are compelling evidence that a weak dipole field component is the physical origin of the CCO class in general. It is reasonable to assume that the CCOs which have not yet been seen to pulse have magnetic fields that are similar to or weaker than those of PSR J0821$-$4300 and PSR J1852$+$0040.
Otherwise, if their fields were stronger, their spectra should show cyclotron lines similar to those of PSR J1210$-$5226 and (possibly) PSR J0821$-$4300. Deep X-ray timing searches of the remaining members of this class, to smaller limits on pulsed fraction, could still discover new pulsars and supply valuable data on their birth properties and evolution.

A remaining theoretical puzzle about CCOs is the origin of their surface temperature anisotropies, in particular, the one or two warm/hot regions that are smaller than the full neutron star surface. The measured spin-down power is too small to contribute to this emission. Continuing accretion is unlikely because of the small spin-down rates. It appears that any explanation will have to involve stronger magnetic field components in the crust, with toroidal or quadrupolar geometries, that do not contribute to the dipole spin-down torque. A physical model of thermal and magnetic field structure that self-consistently reproduces the spin-down rates, X-ray spectra, and pulse profiles is still needed. If it can be studied with better data, the details of the phase-dependent and possibly variable spectral feature from PSR J0821$-$4300 may contribute to a more definite model of its surface magnetic field geometry.

The CCO pulsars PSR J1852$+$0040 and PSR J0821$-$4300 fall in a region of $B-P$ space that overlaps with what are assumed to be moderately recycled pulsars, but is otherwise empty. An understanding of the evolutionary status of CCOs is critically dependent upon a search for their descendants, the orphaned CCOs without SNRs. Some single radio pulsars with spin parameters similar to CCOs may be orphaned CCOs rather than recycled pulsars, and they may be much younger than their characteristic ages. X-ray observations of pulsars in this region may find evidence of their relative youth via surface thermal emission. The spin-down rates of orphaned CCOs would reveal whether CCOs have intrinsically weak magnetic fields, or whether the field re-emerges on a timescale of $\sim 10^{4}$ yr from a normal magnetic field that was buried by prompt fallback of supernova debris, as is invoked in some theoretical studies. Such rapid evolution would lead to orphaned CCOs that lie directly above CCOs on the $P-\dot{P}$ diagram but are still detectable as thermal X-ray sources. The radio-quiet pulsar Calvera is one such possible candidate. Whether or not CCOs have buried fields, the paucity of radio pulsars with similar spin parameters is real and requires an explanation.

We thank Dany Page and an anonymous referee for helpful comments. This investigation is based on observations obtained with XMM–Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA, and Chandra. The opportunity to propose joint, coordinated observations between XMM–Newton and Chandra was crucial for the success of this effort. Financial support was provided by NASA grants NNX11AD19G and NNX12AD41G for the XMM–Newton observations, and by Chandra awards GO1-12001X and SAO GO1-12071X, issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060.

## References

* Abramowski et al. (2011) Abramowski, A., et al. 2011, A&A, 531, 81 * Bamba et al. (2005) Bamba, A., Yamazaki, R., & Hiraga, J. S. 2005, ApJ, 632, 294 * Becker et al. (2012) Becker, W., Prinz, T., Winkler, P. F., & Petre, R. 2012, ApJ, 755, 141 * Belczynski et al.
(2010) Belczynski, K., Lorimer, D. R., Ridley, J. P., & Curran, S. J. 2010, MNRAS, 407, 1245 * Bernal et al. (2010) Bernal, C. G., Lee, W. H., & Page, D. 2010, Rev. Mex. Astron. Astrofis., 46, 309 * Bhattacharya et al. (1992) Bhattacharya, D., Wijers, R. A. M. J., Hartman, J. W., & Verbunt, F. 1992, A&A, 254, 198 * Bignami et al. (2003) Bignami, G. F., Caraveo, P. A., De Luca, A., & Mereghetti, S. 2003, Nature, 423, 725 * Braithwaite (2009) Braithwaite, J. 2009, MNRAS, 397, 763 * Buccheri et al. (1983) Buccheri, R., et al. 1983, A&A, 128, 245 * Case & Bhattacharya (1998) Case, G. L., & Bhattacharya, D. 1998, ApJ, 504, 761 * Cassam-Chenaï et al. (2004) Cassam-Chenaï, G., Decourchelle, A., Ballet, J., Sauvageot, J.-L., Dubner, G., & Giacani, E. 2004, A&A, 427, 199 * Castelletti et al. (2006) Castelletti, G., Dubner, G., Golap, K., & Goss, W. M. 2006, A&A, 459, 535 * Chakrabarty et al. (2001) Chakrabarty, D., Pivovaroff, M. J., Hernquist, L. E., Heyl, J. S., & Narayan, R. 2001, ApJ, 548, 800 * Chevalier (1989) Chevalier, R. A. 1989, ApJ, 346, 847 * De Luca et al. (2011) De Luca, A., Mignani, R. P., Sartori, A., Hummel, W., Caraveo, P. A., Mereghetti, S., & Bignami, G. F. 2011, A&A, 525, A106 * De Luca (2008) De Luca, A. 2008, 40 Years of Pulsars: Millisecond Pulsars, Magnetars, and More (AIP Conf. Ser.. 983), ed C. Bassa, Z. Wang, A. Cumming, & V. M. Kaspi (Melville, NY: AIP), 311 * De Luca et al. (2004) De Luca, A., Mereghetti, S., Caraveo, P. A., Moroni, M., Mignani, R. P., & Bignami, G. F. 2004, A&A, 418, 625 * De Luca et al. (2012) De Luca, A., et al. 2012, MNRAS, 421, L72 * Dickey & Lockman (1990) Dickey. J. M., & Lockman, F. J. 1990, ARAA, 28, 215 * Faucher-Giguère & Kaspi (2006) Faucher-Giguère, C.-A., & Kaspi, V. M. 2006, ApJ, 643, 332 * Fesen et al. (2006) Fesen, R. A., et al. 2006, ApJ, 645, 283 * Flowers & Ruderman (1977) Flowers, E., & Ruderman, M. A. 1977, ApJ, 215, 302 * Gaensler et al. (2008) Gaensler, B. M., et al. 2008, ApJ, 680, L37 * Geppert et al. (2004) Geppert, U., Küker, M., & Page, D. 2004, A&A, 426, 267 * Geppert et al. (2006) Geppert, U., Külker, M., & Page, D. 2006, A&A, 457, 937 * Gotthelf & Halpern (2007) Gotthelf, E. V., & Halpern, J. P. 2007, ApJ, 664, L35 * Gotthelf & Halpern (2009) Gotthelf, E. V., & Halpern, J. P. 2009, ApJ, 695, L35 * Gotthelf et al. (2005) Gotthelf, E. V., Halpern, J. P., & Seward, F. D. 2005, ApJ, 627, 390 * Gotthelf et al. (2010) Gotthelf, E. V., Perna, R., & Halpern, J. P. 2010, ApJ, 724, 1316 * Haberl (2007) Haberl, F. 2007, Ap&SS, 308, 181 * Halpern (2011) Halpern, J. P. 2011 ApJ, 736, L3 * Halpern & Gotthelf (2009) Halpern, J. P., & Gotthelf, E. V. 2009, ApJ, 710, 941 * Halpern & Gotthelf (2010a) Halpern, J. P., & Gotthelf, E. V. 2010a, ApJ, 709, 436 * Halpern & Gotthelf (2010b) Halpern, J. P., & Gotthelf, E. V. 2010b, ApJ, 710, 941 * Halpern & Gotthelf (2010c) Halpern, J. P., & Gotthelf, E. V. 2010c, ApJ, 725, 1384 * Halpern & Gotthelf (2011) Halpern, J. P., & Gotthelf, E. V. 2011, ApJ, 733, L28 * Halpern et al. (2007) Halpern, J. P., Gotthelf, E. V., Camilo, F., & Seward, F. D. 2007, ApJ, 665, 1304 * Heinke & Ho (2010) Heinke, C. O., & Ho, W. C. G. 2010, ApJ, 719, L167 * Ho (2011) Ho, W. C. G. 2011, MNRAS, 414, 2567 * Ho & Heinke (2009) Ho, W. C. G., & Heinke, C. O. 2009, Nature, 462, 71 * Hobbs et al. (2005) Hobbs, G., Lorimer, D. R., Lyne, A. G., & Kramer, M. 2005, MNRAS, 360, 974 * Huang et al. (2004) Huang, U., et al. 2004, ApJ, 615, L115 * Hui & Becker (2006a) Hui, C. Y. & Becker, W. 
2006, A&A, 454, 543 * Hui & Becker (2006b) Hui, C. Y. & Becker, W. 2006, A&A, 457, L33 * Iyudin et al. (2005) Iyudin, A. F., Aschenbach, B., Becker, W., Dennerl, K., & Haberl, F. 2005, A&A, 429, 225 * Kalberla et al. (2005) Kalberla, P.M.W., et al. 2005, A&A, 440, 775 * Kaminker et al. (2006) Kaminker, A. D., Gusakov, M. E., Yakovlev, D. G., & Gnedin, O. Y. 2006, MNRAS, 365, 1300 * Kaplan et al. (2009) Kaplan, D. L., & van Kerkwijk, M. H. 2009, ApJ, 705, 798 * Kargaltsev et al. (2002) Kargaltsev, O., Pavlov, G. G., Sanwal, D., & Garmire, G. P. 2002, ApJ, 580, 1060 * Kaspi (2010) Kaspi, V. M. 2010, PNAS, 107, 7147 * Katsuda et al. (2012) Katsuda, K., et al. 2012, ApJ, 756, 49 * Lazendic et al. (2003) Lazendic, J. S., Slane, P. O., Gaensler, B. M., Plucinsky, P. P., Hughes, J. P., Galloway, D. K., & Crawford, F. 2003, ApJ, 593, L27 * Leahy et al. (1983) Leahy, D. A., Elsner, R. F., & Weisskopf, M. C. 1983, ApJ, 272, 256 * Lovchinsky et al. (2011) Lovchinsky, I., Slane, P., Gaensler, B. M., Hughes, J. P., Ng, C.-Y., Lazendic, J. S., Gelfand, J. D., & Brogan, C. L. 2011, ApJ, 731, 70 * Makishima et al. (1990) Makishima, K., et al. 1990, ApJ, 365, L59 * Manchester et al. (2005) Manchester, R. N., Hobbs, G. B., Teoh, A., & Hobbs, M. 2005, AJ, 129, 1993 (http://www.atnf.csiro.au/research/pulsar/psrcat) * Mereghetti et al. (2002) Mereghetti, S., De Luca, A., Caraveo, P. A., Becker, W., Mignani, R., & Bignami, G. F. 2002a, ApJ, 581, 1280 * Mereghetti et al. (2002) Mereghetti, S., Tiengo, A., & Israel, G. L. 2002b, ApJ, 569, 275 * Mihara et al. (1990) Mihara, T., Makishima, K., Ohashi, T., Sakao, T., & Tashiro, M. 1990, Nature, 346, 250 * Murray et al. (2002) Murray, S. S., Ransom, S. M., Juda, M., Hwang, U., & Holt, S. S. 2002, ApJ, 566, 1039 * Muslimov & Page (1995) Muslimov, A., & Page, D. 1995, ApJ, 400, L77 * Page et al. (2007) Page, D., Geppert, U., & Külker, M. 2007, Ap&SS, 308, 403 * Page et al. (2011) Page, D., Prakash, M., Lattimer, J. M., & Steiner, A. W. 2011, PhRvL, 106, 081101 * Page & Sarmiento (1996) Page, D., & Sarmiento, A. 1996, ApJ, 473, 1067 * Park et al. (2009) Park, S., Kargaltsev, O., Pavlov, G. G., Mori, K., Slane, P. O., Hughes, J. P., Burrows, D. N., & Garmire, G. P. 2009, ApJ, 695, 431 * Park et al. (2006) Park, S., Mori, K., Kargaltsev, O., Slane, P. O., Hughes, J. P., Burrows, D. N., Garmire, G. P., & Pavlov, G. G. 2006, ApJ, 653, L37 * Pavlov & Luna (2009) Pavlov, G. G., & Luna, G. J. M. 2009, ApJ, 703, 910 * Pavlov et al. (2000) Pavlov, G. G., Zavlin, V. E., Aschenbach, B., Trümper, J., & Sanwal, D. 2000, ApJ, 531, L53 * Pérez-Azorín et al. (2006a) Pérez-Azorín, J. F., Miralles, J. A., & Pons, J. A. 2006a, A&A, 451, 1009 * Pérez-Azorín et al. (2006b) Pérez-Azorín, J. F., Pons, J. A., Miralles, J. A., & Miniutti, G. 2006b, A&A, 459, 175 * Pires et al. (2012) Pires, A. M., Motch, C., Turolla, R., Schwope, A., Pilia, M., Treves, A., Popov, S. B., & Janot-Pacheco, E. 2012, A&A, 544, A17 * Pons et al. (2007) Pons, J. A., Link, B. Miralles, J. A., & Geppert, U. 2007, Phys. Rev. Lett., 98, 071101 * Pons et al. (2009) Pons, J. A., Miralles, J. A., & Geppert, U. 2009, A&A, 496, 207 * Popov & Turolla (2013) Popov, S. B., & Turolla, R. 2013, Electromagnetic Radiation from Pulsars and Magnetars (ASP Conf. Ser. XXX), (San Francisco, CA: ASP), in press, arXiv:1206.2819 * Ravenhall & Pethick (1994) Ravenhall, D. G., & Pethick, C. J. 1994, ApJ, 424, 846 * Reynolds et al. (2006) Reynolds, S. P., Borkowski, K. J., Hwang, U., Harrus, I., Petre, R., & Dubner, G. 
2006, ApJ, 652, L45 * Reynoso et al. (1995) Reynoso, E. M., Dubner, G. M., Gross, W. M., & Arnal, E. M. 1995, AJ, 110, 318 * Reynoso et al. (2003) Reynoso, E. M., Green, A.J., Johnston, S., Dubner, G. M., Giacani, E. B., & Gross, W. M. 2003, MNRAS, 345, 671 * Rutledge et al. (2008) Rutledge, R. E., Fox, D. B., & Shevchuk, A. H. 2008, ApJ, 672, 1137 * Sánchez-Ayaso et al. (2012) Sánchez-Ayaso, E., Combi, J. A., Albacete Colombi, J. F., López-Santiago, J., Martí, J., & Muñoz-Arjonilla, A. J. 2012, A&AS, 337, 573 * Sanwal et al. (2002) Sanwal, D., Pavlov, G. G., Zavlin, V. E., & Teter, M. A. 2002, ApJ, 574, 61 * Seward et al. (2003) Seward, F. D., Slane, P. O., Smith, R. K., & Sun, M. 2003, ApJ, 584, 414 * Shabaltas & Lai (2012) Shabaltas, N., & Lai, D. 2012, ApJ, 748, 148 * Shklovskii (1970) Shklovskii, I. S. 1970, Soviet Astron., 13, 562 * Shternin et al. (2011) Shternin, P. S., Yakovlev, D. G., Heinke, C. O., Ho, W. C. G., & Patnaude, D. J. 2011, MNRAS, 412, L108 * Slane et al. (2001) Slane, P., Hughes, J. P., Edgar, R. J., Plucinsky, P. P., Miyata, E., Tsunemi, H., & Aschenbach, B. 2001, ApJ, 548, 814 * Spitkovsky (2006) Spitkovsky, A. 2006, ApJ, 648 L51 * Spruit (2008) Spruit, H. C. 2008, 40 Years of Pulsars: Millisecond Pulsars, Magnetars, and More (AIP Conf. Ser. 983), ed C. Bassa, Z. Wang, A. Cumming, & V. M. Kaspi (Melville, NY: AIP), 391 * Strutt (1880) Strutt, J. W. 1880, Phil. Mag, 10, 73 * Suleimanov et al. (2010) Suleimanov, V. E., Pavlov, G. G., & Werner, K. 2010, ApJ, 714, 635 * Suleimanov et al. (2012) Suleimanov, V. E., Pavlov, G. G., & Werner, K. 2012, ApJ, 751, 15 * Tayler (1973) Tayler, R. J. 1973, MNRAS, 161, 365 * Thompson & Duncan (1993) Thompson, C., & Duncan, R. C. 1993, ApJ, 408, 194 * Thorstensen et al. (2001) Thorstensen, J. R., Fesen, R. A., & van den Bergh, S. 2001, AJ, 122, 297 * Tian et al. (2008) Tian, W. W., Leahy, D. A., Haverkorn, M., & Jiang, B. 2008, ApJ, 679, L85 * van den Heuvel (1987) van den Heuvel, E. P. J. 1987, The Origin and Evolution of Neutron Stars (IAU Symp. 125), ed. D. J. Helfand & J. Huang (Dordrecht: Reidel), 393 * Viganò & Pons (2012) Viganò, D., & Pons, J. A. 2012, MNRAS, 425, 2487 * Winkler & Petre (2007) Winkler, P. F., & Petre, R. 2007, ApJ, 670, 635 * Winkler et al. (1988) Winkler, P. F., Tuttle, J. H., Kirshner, R. P., & Irwin, M. J. 1988, Supernova Remnants and the Interstellar Medium (IAU Colloq. 101), ed. R. S. Roger & T. L. Landecker (Cambridge: Cambridge Univ. Press), 65 * Woermann et al. (2000) Woermann, B., Gaylard, M. J., & Otrupcek, R. 2000, MNRAS, 317, 421 * Zacharias et al. (2010) Zacharias, N., et al. 2010, AJ, 139, 2184 * Zane (2007) Zane, S. 2007, Ap&SS, 308, 259 * Zane & Turolla (2006) Zane, S. & Turolla, R. 2006, MNRAS, 366, 727 * Zane et al. (2011) Zane, S., et al. 2011, MNRAS, 410, 2428 * Zavlin et al. (2000) Zavlin, V. E., Pavlov, G. G., Sanwal, D., & Trümper, J. 2000, ApJ, 540, L25
that $\mathbf{x}$ satisfies (2.14) and has nonzero entries. Let $\mathbf{x}\in(\operatorname{\mathbb{C}}^{*})^{\operatorname{\mathbb{Z}}^{3}}$ be a coherent solution of the Kashaev equation satisfying (2.14). Let $A_{j}=[-j,j]^{3}\cap\operatorname{\mathbb{Z}}^{3}$ and $B_{j}=[-j,j]^{3}\cap L$ for $j\in\operatorname{\mathbb{Z}}_{\geq 0}$. We claim that if, for all $j$, there exist $\mathbf{\tilde{x}}_{j}\in(\operatorname{\mathbb{C}}^{*})^{B_{j}}$ satisfying the K-hexahedron equations that agree with $\mathbf{x}$ on $A_{j}$, then there exists $\mathbf{\tilde{x}}\in(\operatorname{\mathbb{C}}^{*})^{L}$ satisfying the K-hexahedron equations that agrees with $\mathbf{x}$ on $\operatorname{\mathbb{Z}}^{3}$. Construct an infinite tree $T$ as follows:

* • The vertices of $T$ are solutions of the K-hexahedron equations indexed by $B_{j}$ that agree with $\mathbf{x}$ on $A_{j}$ (over $j\in\operatorname{\mathbb{Z}}_{\geq 0}$).

* • Add an edge between $\mathbf{\tilde{x}}_{j}\in(\operatorname{\mathbb{C}}^{*})^{B_{j}}$ and $\mathbf{\tilde{x}}_{j+1}\in(\operatorname{\mathbb{C}}^{*})^{B_{j+1}}$ if $\mathbf{\tilde{x}}_{j+1}$ restricts to $\mathbf{\tilde{x}}_{j}$.

Thus, $T$ is an infinite tree in which every vertex has finite degree. By König’s infinity lemma (see [11, Theorem 16.3]), there exists an infinite path $\mathbf{\tilde{x}}_{0},\mathbf{\tilde{x}}_{1},\dots$ in $T$ with $\mathbf{\tilde{x}}_{j}\in(\operatorname{\mathbb{C}}^{*})^{B_{j}}$. Thus, there exists $\mathbf{\tilde{x}}\in(\operatorname{\mathbb{C}}^{*})^{L}$ restricting to $\mathbf{\tilde{x}}_{j}$ for all $j\in\operatorname{\mathbb{Z}}_{\geq 0}$, so $\mathbf{\tilde{x}}$ is a solution of the K-hexahedron equations that agrees with $\mathbf{x}$ on $\operatorname{\mathbb{Z}}^{3}$.

Given $j\in\operatorname{\mathbb{Z}}_{\geq 0}$, we claim that there exists $\mathbf{\tilde{x}}\in(\operatorname{\mathbb{C}}^{*})^{B_{j}}$ satisfying the K-hexahedron equations that agrees with $\mathbf{x}$ on $A_{j}$. It is straightforward to show that there exists a sequence $\mathbf{x}_{1},\mathbf{x}_{2},\ldots\in(\operatorname{\mathbb{C}}^{*})^{\operatorname{\mathbb{Z}}^{3}}$ of coherent solutions of the Kashaev equation that converges pointwise to $\mathbf{x}$ and whose restrictions to $\mathbb{Z}^{3}_{\textup{init}}$ are generic. By Corollary 7.22, there exist $\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\ldots\in(\operatorname{\mathbb{C}}^{*})^{L}$ satisfying the K-hexahedron equations such that $\mathbf{\tilde{x}}_{i}$ restricts to $\mathbf{x}_{i}$. However, the sequence $\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\dots$ does not necessarily converge (see Theorem 2.23). Let $\mathbf{\tilde{x}}_{1}^{\prime},\mathbf{\tilde{x}}_{2}^{\prime},\dots\in(\operatorname{\mathbb{C}}^{*})^{B_{j}}$ be the restrictions of $\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\dots$ to $B_{j}$. There exists a subsequence of $\mathbf{\tilde{x}}_{1}^{\prime},\mathbf{\tilde{x}}_{2}^{\prime},\dots$ that converges to some $\mathbf{\tilde{x}}\in(\operatorname{\mathbb{C}}^{*})^{B_{j}}$. (For each $s\in B_{j}\setminus A_{j}$, we can partition the sequence $\mathbf{\tilde{x}}_{1}^{\prime},\mathbf{\tilde{x}}_{2}^{\prime},\dots$ into two sequences, each of which converges at $s$. Because $B_{j}$ is finite, the claim follows.) The array $\mathbf{\tilde{x}}$ must satisfy the K-hexahedron equations and agree with $\mathbf{x}$ on $A_{j}$, so we are done. ∎

We shall now work towards a proof of Theorem 2.23.

###### Lemma 7.23.
Let $\mathbf{\tilde{x}}\in(\operatorname{\mathbb{C}}^{*})^{L}$ be a solution of the K-hexahedron equations. Let $\mathbf{\tilde{x}}_{\textup{init}}\in(\operatorname{\mathbb{C}}^{*})^{L_{\textup{init}}}$ denote the restriction of $\mathbf{\tilde{x}}$ to $L_{\textup{init}}$. Let $\mathbf{t}=(t_{s})\in\\{-1,1\\}^{\mathbb{Z}^{3}_{\boxdot}}$ be in the kernel of $\psi$. Then $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow L}=(y_{s})_{s\in L}$, where

$\displaystyle y_{s}=\begin{cases}x_{s}&\text{if }s\in\operatorname{\mathbb{Z}}^{3},\\\ t_{[s]}x_{s}&\text{if }s\in L-\operatorname{\mathbb{Z}}^{3}.\end{cases}$

###### Proof.

This follows from Lemma 7.18 and Proposition 7.3. ∎

###### Lemma 7.24.

Let $\mathbf{\tilde{x}}\in(\operatorname{\mathbb{C}}^{*})^{L}$ be a solution of the K-hexahedron equations. Let $\mathbf{\tilde{x}}_{\textup{init}}\in(\operatorname{\mathbb{C}}^{*})^{L_{\textup{init}}}$ denote the restriction of $\mathbf{\tilde{x}}$ to $L_{\textup{init}}$. For $\mathbf{t}\in\\{-1,1\\}^{\mathbb{Z}^{3}_{\boxdot}}$, the following are equivalent:

* • $\mathbf{\tilde{x}}$ and $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow L}$ agree on $\operatorname{\mathbb{Z}}^{3}$,

* • $\mathbf{t}$ is in the kernel of $\psi$ $($see Proposition 7.16$)$.

###### Proof.

If $\mathbf{t}$ is in the kernel of $\psi$, then $\mathbf{\tilde{x}}$ and $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow L}$ agree on $\operatorname{\mathbb{Z}}^{3}$ by Lemma 7.23. If $\mathbf{t}$ is not in the kernel of $\psi$, let $\mathbf{u}=(u_{s})_{s\in\operatorname{\mathbb{Z}}^{3}+\left(\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}}\right)}=\psi(\mathbf{t})$. Write $\mathbf{\tilde{x}}=(x_{s})_{s\in L}$ and $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow L}=(y_{s})_{s\in L}$. Choose $v\in\operatorname{\mathbb{Z}}^{3}_{\\{3,4,5,\dots\\}}$ such that $u_{v-\left(\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}}\right)}=-1$ and $u_{w-\left(\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}}\right)}=1$ for all $w\in\operatorname{\mathbb{Z}}^{3}_{\\{3,4,5,\dots\\}}$ with $w<v$. (Such a choice of $v$ exists because if $u_{v-\left(\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}}\right)}=1$ for all $v\in\operatorname{\mathbb{Z}}^{3}_{\\{3,4,5,\dots\\}}$, then $u_{v-\left(\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}}\right)}=1$ for all $v\in\operatorname{\mathbb{Z}}^{3}$ because $\mathbf{u}$ must satisfy equation (7.10) for all $v\in\operatorname{\mathbb{Z}}^{3}$.) Then by Lemma 7.18, $y_{v-s}=x_{v-s}$ for $s\in\\{0,1\\}^{3}-\\{(0,0,0)\\}$, and $y_{v-s}=t_{[v-s]}x_{v-s}$ for $s\in\left\\{\left(1,\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}}\right),\left(\operatorname{\frac{1}{2}},1,\operatorname{\frac{1}{2}}\right),\left(\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}},1\right)\right\\}$. Hence,

$\displaystyle y_{v}-x_{v}=-4\frac{x_{v-\left(1,\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}}\right)}x_{v-\left(\operatorname{\frac{1}{2}},1,\operatorname{\frac{1}{2}}\right)}x_{v-\left(\operatorname{\frac{1}{2}},\operatorname{\frac{1}{2}},1\right)}}{x_{v-\left(1,1,1\right)}^{2}}\not=0,$

so $y_{v}\not=x_{v}$. ∎

###### Proof of Theorem 2.23.

Let $\mathbf{\tilde{x}}_{\textup{init}}\in(\operatorname{\mathbb{C}}^{*})^{L_{\textup{init}}}$ denote the restriction of $\mathbf{\tilde{x}}$ to $L_{\textup{init}}$.
Suppose that for some signs $\alpha_{i},\beta_{i},\gamma_{i}\in\\{-1,1\\}$, $i\in\operatorname{\mathbb{Z}}$, $\mathbf{\tilde{y}}\in(\operatorname{\mathbb{C}}^{*})^{L}$ satisfies equations (2.19)–(2.22) for $\left(a,b,c\right)\in\operatorname{\mathbb{Z}}^{3}$. Define $\mathbf{t}=(t_{s})\in\\{-1,1\\}^{\mathbb{Z}^{3}_{\boxdot}}$ by equations (7.7)–(7.9), so $\mathbf{t}$ is in the kernel of $\psi$ by Proposition 7.16. Hence, by Lemma 7.23, $\mathbf{\tilde{y}}=(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow L}$, so $\mathbf{\tilde{y}}$ satisfies the K-hexahedron equations, proving part (a).

Next, if $\mathbf{\tilde{y}}\in(\operatorname{\mathbb{C}}^{*})^{L}$ satisfies the K-hexahedron equations, then $\mathbf{\tilde{y}}=(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow L}$ for some $\mathbf{t}=(t_{s})\in\\{-1,1\\}^{\mathbb{Z}^{3}_{\boxdot}}$. By Lemma 7.24, $\mathbf{t}$ is in the kernel of $\psi$. Hence, by Proposition 7.16, there exist signs $\alpha_{i},\beta_{i},\gamma_{i}\in\\{-1,1\\}$, $i\in\operatorname{\mathbb{Z}}$, such that $\mathbf{t}$ is given by equations (7.7)–(7.9). Hence, by Lemma 7.23, $\mathbf{\tilde{y}}$ satisfies equations (2.19)–(2.22) for $\left(a,b,c\right)\in\operatorname{\mathbb{Z}}^{3}$, proving part (b). ∎

## 8 Coherence for cubical complexes

In this section, we generalize Proposition 2.8 and Theorem 2.22 from $\operatorname{\mathbb{Z}}^{3}$ to certain classes of $3$-dimensional cubical complexes. Proposition 2.8 generalizes to arbitrary $3$-dimensional cubical complexes embedded in $\operatorname{\mathbb{R}}^{3}$ (see Proposition 8.1), while Theorem 2.22(b) generalizes to directed cubical complexes corresponding to piles of quadrangulations of a polygon (see Proposition 8.3). Theorem 2.22(a) does not hold for arbitrary directed cubical complexes corresponding to piles of quadrangulations of a polygon. It turns out that an additional property of a cubical complex is required, which we call _comfortableness_. This property is satisfied by the standard tiling of $\operatorname{\mathbb{R}}^{3}$ with unit cubes, as well as by cubical complexes corresponding to piles of $\Diamond$-tilings of $\mathbf{P}_{n}$ (see Proposition 8.8). Let $\varkappa$ be the directed cubical complex corresponding to a pile of quadrangulations of a polygon. In Theorems 8.10–8.11, we show that Theorem 2.22(a) holds for $\varkappa$ if and only if $\varkappa$ is comfortable. The proof of Theorem 8.10 is nearly identical to the proof of Theorem 2.22(a) in Section 7.

First, we note that Proposition 2.8 generalizes to arbitrary $3$-dimensional cubical complexes embedded in $\operatorname{\mathbb{R}}^{3}$ as follows:

###### Proposition 8.1.

Let $\varkappa$ be a $3$-dimensional cubical complex embedded in $\operatorname{\mathbb{R}}^{3}$. Suppose that $\mathbf{x}=(x_{s})_{s\in\varkappa^{0}}$ satisfies the Kashaev equation. Then for any interior vertex $v\in\varkappa^{0}$ $($see Definition 3.2$)$,

$\displaystyle\left(\prod_{C\ni v}K_{v}^{C}(\mathbf{x})\right)^{2}=\left(\prod_{S\ni v}(x_{v}x_{v_{2}}+x_{v_{1}}x_{v_{3}})\right)^{2},$

where

* • the first product is over $3$-dimensional cubes $C$ incident to the vertex $v$,

* • the second product is over $2$-dimensional faces $S$ incident to the vertex $v$, and

* • $v$, $v_{1}$, $v_{2}$, $v_{3}$ are the vertices of such a face $S$ listed in cyclic order.

###### Proof.

The proof is almost identical to the proof of Proposition 2.8 in Section 7. ∎

###### Remark 8.2.
With Proposition 8.1 in mind, we can think of the notion of coherence from Definition 4.17 as follows. Let $\mathbf{T}=(T_{0},\dots,T_{\ell})$ be a pile of quadrangulations of a polygon with $\varkappa=\varkappa(\mathbf{T})$. Start with an arbitrary array $\mathbf{x}_{\textup{init}}$ indexed by $\varkappa^{0}(T_{0})$ whose entries are “sufficiently generic”. We want to extend $\mathbf{x}_{\textup{init}}$ to an array $\mathbf{x}$ indexed by $\varkappa^{0}$ that is a coherent solution of the Kashaev equation. Building $\mathbf{x}$ inductively, suppose we have defined the values of $\mathbf{x}$ at $\varkappa^{0}(T_{0},\dots,T_{i-1})$, and we need to define the value $x_{w}$ of $\mathbf{x}$ at the new vertex $w$ in $T_{i}$. Let $C\in\varkappa^{3}$ be the cube corresponding to the flip between $T_{i-1}$ and $T_{i}$, and let $v$ be the bottom vertex of $C$, i.e., let $v$ be the unique vertex in $T_{i-1}$ but not $T_{i}$. In order that $\mathbf{x}$ continue to satisfy the Kashaev equation, there are $2$ possible values for $x_{w}$, say $a$ and $b$, so that $K^{C}(\mathbf{x})=0$. If the vertex $v$ is in $T_{0}$, i.e., $v$ is not an interior vertex of $\varkappa$, then we can either set $x_{w}=a$ or $x_{w}=b$, and $\mathbf{x}$ will continue to be a coherent solution of the Kashaev equation. Now, suppose $v$ is not in $T_{0}$, i.e., $v$ is an interior vertex of $\varkappa$. Because we have chosen $\mathbf{x}_{\textup{init}}$ to be “sufficiently generic”, the value of $\prod_{C\ni v}K_{v}^{C}(\mathbf{x})$ depends on whether we set $x_{w}=a$ or $x_{w}=b$. Proposition 8.1 tells us that for one of the $2$ possible values, say $x_{w}=a$, equation (4.3) holds, while for the other value, $x_{w}=b$, the following equation holds:

$\displaystyle\prod_{C\ni v}K_{v}^{C}(\mathbf{x})=-\prod_{S\ni v}(x_{v}x_{v_{2}}+x_{v_{1}}x_{v_{3}}).$

Hence, the condition of coherence tells us which of the $2$ solutions is the “correct” one when $v$ is an interior vertex of $\varkappa$.

We now prove the following generalization of Theorem 2.22(b).

###### Proposition 8.3.

Let $\mathbf{T}$ be a pile of quadrangulations of a polygon. Let $\mathbf{\tilde{x}}=(x_{s})_{s\in\varkappa^{02}(\mathbf{T})}$ be an array $($with $x_{s}\not=0$ for all $s\in\varkappa^{0}(\mathbf{T}))$ satisfying the K-hexahedron equations. Then the restriction of $\mathbf{\tilde{x}}$ to $\varkappa^{0}(\mathbf{T})$ is a coherent solution of the Kashaev equation.

###### Proof.

The proof is almost exactly the same as that of Theorem 2.22(b). For an interior vertex $v$ of $\varkappa$, there is exactly one cube $C$ for which $v$ is the top vertex, and exactly one cube $C$ for which $v$ is the bottom vertex. Let $\mathbf{x}$ be the restriction of $\mathbf{\tilde{x}}$ to $\varkappa^{0}$. By Lemma 7.2, taking the product over the cubes incident to $v$,

$\displaystyle\prod_{C\ni v}K^{C}_{v}(\mathbf{x})=(-1)^{2}\prod_{S\in\varkappa^{2}\colon S\ni v}x_{S}=\prod_{S\ni v}x_{S}^{2}=\prod_{S\ni v}(x_{v}x_{v_{2}}+x_{v_{1}}x_{v_{3}}),$

so $\mathbf{x}$, the restriction of $\mathbf{\tilde{x}}$ to $\varkappa^{0}$, is a coherent solution of the Kashaev equation. ∎

The following statement generalizes Theorem 2.9:

###### Corollary 8.4.

Let $\mathbf{T}$ be a pile of quadrangulations of a polygon. Let $\mathbf{x}=(x_{s})_{s\in\varkappa^{0}(\mathbf{T})}$ be an array satisfying the positive Kashaev recurrence. Then $\mathbf{x}$ is a coherent solution of the Kashaev equation.

###### Proof.
This follows immediately from Proposition 8.3 because $\mathbf{x}$ can be extended to an array indexed by $\varkappa^{02}(\mathbf{T})$ satisfying the K-hexahedron equations by choosing the positive solutions from equation (2.9) for $s\in\varkappa^{2}(\mathbf{T})$. ∎

###### Remark 8.5.

The converse of Proposition 8.3 (i.e., the counterpart of Theorems 2.22(a) and 4.19(a)) does not hold for an arbitrary choice of $\mathbf{T}$. In other words, there exist piles $\mathbf{T}$ and arrays $\mathbf{x}$ indexed by $\varkappa^{0}(\mathbf{T})$ with nonzero components that are coherent solutions of the Kashaev equation, where $\mathbf{x}$ cannot be extended to an array indexed by $\varkappa^{02}(\mathbf{T})$ satisfying the K-hexahedron equations. In order for a converse of Proposition 8.3 (equivalently, a generalization of Theorems 2.22(a) and 4.19(a)) to hold, one must impose an additional condition on the underlying cubical complexes; see Definition 8.6 below.

###### Definition 8.6.

Let $\varkappa$ be a three-dimensional cubical complex that can be embedded into $\operatorname{\mathbb{R}}^{3}$, cf. Definition 3.2. (While this embeddability condition can be relaxed, it is satisfied in all subsequent applications. In fact, $\varkappa$ will always be the cubical complex associated to a pile of quadrangulations.) Let $\sim$ be the equivalence relation on $\varkappa^{2}$ generated by the equivalences $s_{1}\sim s_{2}$ for all pairs $(s_{1},s_{2})$ involving opposite faces of some $3$-dimensional cube in $\varkappa^{3}$. Let $\varkappa_{\boxdot}$ denote the set of equivalence classes under this equivalence relation. Denote by $[s]\in\varkappa_{\boxdot}$ the equivalence class of $s\in\varkappa^{2}$. By analogy with Definition 7.13, denote by $\psi_{\varkappa}\colon\\{-1,1\\}^{\varkappa_{\boxdot}}\rightarrow\\{-1,1\\}^{\varkappa^{3}}$ the map sending an array $\mathbf{t}=(t_{[s]})_{[s]\in\varkappa_{\boxdot}}$ to the array $\psi_{\varkappa}(\mathbf{t})=(u_{C})_{C\in\varkappa^{3}}$ defined by $u_{C}=t_{[a]}t_{[b]}t_{[c]}$, where $a$, $b$, $c$ are representatives of the three pairs of opposite $2$-dimensional faces of $C$. We say that the cubical complex $\varkappa$ is _comfortable_ if the following statements are equivalent for every $\mathbf{u}=(u_{C})\in\\{-1,1\\}^{\varkappa^{3}}$:

* (C1) $\mathbf{u}$ is in the image of $\psi_{\varkappa}$,

* (C2) for every interior vertex $v\in\varkappa^{0}$ (cf. Definition 3.2), we have $\displaystyle\prod_{C\ni v}u_{C}=1,$ the product over $3$-dimensional cubes $C\in\varkappa^{3}$ containing $v$.

By Lemma 7.19, the standard tiling of $\operatorname{\mathbb{R}}^{3}$ by unit cubes is comfortable.

###### Remark 8.7.

In Definition 8.6, statement (C1) always implies (C2). Indeed, if $\mathbf{u}=\psi_{\varkappa}(\mathbf{t})$ with $\mathbf{t}=\left(t_{[s]}\right)_{[s]\in\varkappa_{\boxdot}}\in\\{-1,1\\}^{\varkappa_{\boxdot}}$, then for any interior vertex $v\in\varkappa^{0}$,

$\displaystyle\prod_{v\in C\in\varkappa^{3}}u_{C}=\prod_{v\in s\in\varkappa^{2}}t_{[s]}^{2}=1.$

Thus, in checking comfortableness, we need only check that (C2) implies (C1).
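As an illustrative sanity check of this definition (not taken from the paper), the following sketch verifies by brute force that a $2\times 2\times 2$ block of the standard tiling of $\operatorname{\mathbb{R}}^{3}$ by unit cubes is comfortable: the image of $\psi_{\varkappa}$ coincides with the set of arrays satisfying (C2) at the unique interior vertex. The labeling of face classes by (axis, transverse coordinates of the cube) is an assumption made for this sketch.

```python
from itertools import product
from math import prod

cubes = list(product(range(2), repeat=3))   # a 2x2x2 block of unit cubes

def face_classes(cube):
    # Opposite faces of a cube are identified, and the identification
    # propagates along each axis, so a class is determined by
    # (axis, coordinates of the cube in the two transverse axes).
    i, j, k = cube
    return ((0, j, k), (1, i, k), (2, i, j))

classes = sorted({cl for cu in cubes for cl in face_classes(cu)})  # 12 classes

# (C1): the image of psi, where u_C = t_[a] * t_[b] * t_[c].
image = set()
for t in product((-1, 1), repeat=len(classes)):
    sign = dict(zip(classes, t))
    image.add(tuple(sign[fa] * sign[fb] * sign[fc]
                    for fa, fb, fc in map(face_classes, cubes)))

# (C2): the only interior vertex of the block is its center, contained in
# all eight cubes, so (C2) requires the product of all u_C to equal 1.
c2 = {u for u in product((-1, 1), repeat=len(cubes)) if prod(u) == 1}

print(len(image), len(c2), image == c2)   # prints: 128 128 True
```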
We next state four results (Propositions 8.8–8.9 and Theorems 8.10–8.11), to whose proofs the rest of this section is dedicated. The reader may want to review Definitions 3.6–3.7 before proceeding with the following proposition.

###### Proposition 8.8.

Let $\mathbf{T}$ be a pile of quadrangulations of a polygon. Suppose that the divide associated to each quadrangulation in $\mathbf{T}$ is a pseudoline arrangement. Then $\varkappa=\varkappa(\mathbf{T})$ is comfortable. In particular, if $\mathbf{T}$ is a pile of $\Diamond$-tilings of the polygon $\mathbf{P}_{n}$, then $\varkappa=\varkappa(\mathbf{T})$ is comfortable.

###### Proposition 8.9.

There exists a pile $\mathbf{T}$ of quadrangulations of some polygon such that the cubical complex $\varkappa=\varkappa(\mathbf{T})$ is not comfortable.

We next state a generalization of Theorems 2.22 and 4.19.

###### Theorem 8.10.

Let $\mathbf{T}$ be a pile of quadrangulations of a polygon such that $\varkappa=\varkappa(\mathbf{T})$ is comfortable. Any coherent solution of the Kashaev equation $\mathbf{x}=(x_{s})_{s\in\varkappa^{0}}$ with nonzero components satisfying condition (4.4) can be extended to an array $\mathbf{\tilde{x}}=(x_{s})_{s\in\varkappa^{02}}$ satisfying the K-hexahedron equations.

However, Theorems 2.22 and 4.19 do not generalize to cubical complexes that are not comfortable.

###### Theorem 8.11.

Let $\mathbf{T}$ be a pile of quadrangulations of a polygon such that $\varkappa=\varkappa(\mathbf{T})$ is not comfortable. Then there exists a coherent solution of the Kashaev equation $\mathbf{x}$ indexed by $\varkappa^{0}$ which cannot be extended to an array indexed by $\varkappa^{02}$ satisfying the K-hexahedron equations.

Note that Theorem 4.19(a) follows directly from Proposition 8.8 and Theorem 8.10. In Remark 8.12 below, we explain that Theorem 2.22(a) follows from Proposition 8.8 and Theorem 8.10 as well.

###### Remark 8.12.

Together, Theorem 8.10 and Proposition 8.8 imply Theorem 2.22(a). For each cube $[-j,j]^{3}\subset\operatorname{\mathbb{R}}^{3}$, project the “bottom” faces (i.e., $\\{-j\\}\times[-j,j]\times[-j,j]$, $[-j,j]\times\\{-j\\}\times[-j,j]$, $[-j,j]\times[-j,j]\times\\{-j\\}$) onto $\operatorname{\mathbb{R}}^{2}$ to obtain a quadrangulation $T_{j}$ of a region $R_{j}$, as shown in Fig. 22. The divide associated to each quadrangulation $T_{j}$ is a pseudoline arrangement. Hence, by Proposition 8.8, for any pile $\mathbf{T}_{j}$ including $T_{j}$, $\varkappa(\mathbf{T}_{j})$ is comfortable. Choose $\mathbf{T}_{j}$ so that we can associate the vertices of $\varkappa(\mathbf{T}_{j})$ with $\\{-j,\dots,j\\}^{3}$, so that $\bigcup_{j=1}^{\infty}\varkappa^{0}(\mathbf{T}_{j})=\operatorname{\mathbb{Z}}^{3}$. Repeating the König infinity lemma argument from the end of the proof of Theorem 2.22(a), Theorem 8.10 implies Theorem 2.22(a).

Figure 22: The quadrangulations $T_{j}$ of regions $R_{j}$ described in Remark 8.12 (left: quadrangulation $T_{1}$ of $R_{1}$; right: quadrangulation $T_{2}$ of $R_{2}$).

The rest of this section is dedicated to proving Propositions 8.8–8.9 and Theorems 8.10–8.11. We begin by proving Proposition 8.8.

###### Lemma 8.13.

Let $\mathbf{T}=(T_{0},\dots,T_{\ell})$ be a pile of quadrangulations of a polygon such that $\varkappa(\mathbf{T})$ is comfortable. Given $0\leq i\leq j\leq\ell$, let $\mathbf{T}^{\prime}=(T_{i},\dots,T_{j})$. Then $\varkappa(\mathbf{T}^{\prime})$ is comfortable.

###### Proof.

It suffices to check that (C2) implies (C1) for $\varkappa(\mathbf{T}^{\prime})$ (see Remark 8.7). Note that any $\mathbf{u}=(u_{C})_{C\in\varkappa^{3}(\mathbf{T}^{\prime})}$ satisfying (C2) can be extended to $\mathbf{\tilde{u}}=(u_{C})_{C\in\varkappa^{3}(\mathbf{T})}$ satisfying (C2).
Identifying $\varkappa_{\boxdot}(\mathbf{T})$ and $\varkappa_{\boxdot}(\mathbf{T}^{\prime})$, the fact that there exists $\mathbf{t}$ such that $\psi_{\varkappa(\mathbf{T})}(\mathbf{t})=\mathbf{\tilde{u}}$ implies that $\psi_{\varkappa(\mathbf{T}^{\prime})}(\mathbf{t})=\mathbf{u}$, as desired. ∎

We can now prove Proposition 8.8 in the special case where $\mathbf{T}$ is a pile of $\Diamond$-tilings of $\mathbf{P}_{n}$.

###### Lemma 8.14.

Let $\mathbf{T}$ be a pile of $\Diamond$-tilings of $\mathbf{P}_{n}$. Then $\varkappa=\varkappa(\mathbf{T})$ is comfortable.

###### Proof.

Labeling the vertices of $\varkappa$ by subsets of $[n]$ (as in Section 4), we can label the cubes in $\varkappa^{3}$ by $3$-element subsets of $[n]$ by taking the symmetric difference of the labels of any opposite vertices in the cube. Note that we can extend $\mathbf{T}$ to a longer pile $\mathbf{T}^{\prime}$ so that for every $A\in\binom{[n]}{3}$, at least one cube of $\varkappa(\mathbf{T}^{\prime})$ is labeled by $A$. Hence, by Lemma 8.13, it suffices to prove the theorem under the additional assumption that each set in $\binom{[n]}{3}$ labels at least one cube in $\varkappa^{3}$.

Let $A_{1}$ be the set of $\mathbf{u}\in\\{-1,1\\}^{\varkappa^{3}}$ satisfying (C1), and $A_{2}$ be the set of $\mathbf{u}$ satisfying (C2). Because $A_{1}\subseteq A_{2}$, it suffices to show that $\left|A_{1}\right|\geq\left|A_{2}\right|$ in order to prove that $A_{1}=A_{2}$. We claim that both $A_{1}$ and $A_{2}$ have size $2^{\binom{n-1}{2}}$.

First, we claim that $\left|A_{1}\right|\geq 2^{\binom{n-1}{2}}$. Identify each element $S\in\varkappa_{\boxdot}$ with a $2$-element subset of $[n]$ by taking the symmetric difference of the labels of any pair of opposite vertices of any tile in $S$. Note that if $\mathbf{u}=\psi_{\varkappa}(\mathbf{t})$ and a cube $C$ is labeled by $\\{i,j,k\\}$, then $u_{C}=t_{\\{i,j\\}}t_{\\{i,k\\}}t_{\\{j,k\\}}$. Define a map of vector spaces $f\colon\\{-1,1\\}^{\binom{[n]}{2}}\rightarrow\\{-1,1\\}^{\binom{[n]}{3}}$ where $f\big{(}(t_{S})_{S\in\binom{[n]}{2}}\big{)}=(u_{C})_{C\in\binom{[n]}{3}}$ with

$\displaystyle u_{\\{i,j,k\\}}=t_{\\{i,j\\}}t_{\\{i,k\\}}t_{\\{j,k\\}}.$

If we fix $t_{\\{1,2\\}}=\cdots=t_{\\{1,n\\}}=1$, then $u_{\\{1,j,k\\}}=t_{\\{j,k\\}}$, so the rank of $f$ is at least the number of $2$-element subsets of $\\{2,\dots,n\\}$, i.e., $\binom{n-1}{2}$. Hence, it follows that $\left|A_{1}\right|\geq 2^{\binom{n-1}{2}}$.

Thus, in order to prove the proposition, we must show that $\left|A_{2}\right|\leq 2^{\binom{n-1}{2}}$. Note that there are $\binom{n-1}{2}$ vertices in the interior of any $\Diamond$-tiling of $\mathbf{P}_{n}$. In choosing $\mathbf{u}$ satisfying (C2), we can make an arbitrary choice of sign for any cube that shares its bottom vertex with $T_{0}$, but the signs of the remaining cubes are determined by condition (C2). Hence, because at most $\binom{n-1}{2}$ cubes can share their bottom vertices with $T_{0}$ (the bottom vertex of a cube cannot be on the boundary of $T_{0}$), there are at most $2^{\binom{n-1}{2}}$ such $\mathbf{u}$ satisfying condition (C2), proving our claim. ∎
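The rank bound in the preceding counting argument is easy to confirm numerically. The sketch below (illustrative, under the assumption arranged in the proof that every $3$-element subset of $[n]$ labels a cube) writes the multiplicative map $f$ additively over $\mathrm{GF}(2)$ and row-reduces its matrix; the computed rank equals $\binom{n-1}{2}$, so $|A_{1}|=2^{\binom{n-1}{2}}$ exactly.

```python
from itertools import combinations
import numpy as np

def rank_f(n):
    # Matrix of f over GF(2): u_{ijk} = t_{ij} + t_{ik} + t_{jk} (mod 2),
    # with rows indexed by 3-subsets of [n] and columns by 2-subsets.
    pairs = list(combinations(range(n), 2))
    col = {p: c for c, p in enumerate(pairs)}
    rows = []
    for i, j, k in combinations(range(n), 3):
        r = np.zeros(len(pairs), dtype=np.uint8)
        for p in ((i, j), (i, k), (j, k)):
            r[col[p]] = 1
        rows.append(r)
    m = np.array(rows)
    # Gaussian elimination over GF(2).
    rank = 0
    for c in range(m.shape[1]):
        piv = next((r for r in range(rank, m.shape[0]) if m[r, c]), None)
        if piv is None:
            continue
        m[[rank, piv]] = m[[piv, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, c]:
                m[r] ^= m[rank]
        rank += 1
    return rank

for n in range(3, 8):
    print(n, rank_f(n), (n - 1) * (n - 2) // 2)   # rank equals binom(n-1,2)
```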
We can now prove Proposition 8.8 in its full generality.

###### Proof of Proposition 8.8.

Let $\mathbf{T}=(T_{0},\dots,T_{\ell})$. We claim that we can “embed” the quadrangulations $T_{0},\dots,T_{\ell}$ in $\Diamond$-tilings of $\mathbf{P}_{n}$. Let $D_{0},\dots,D_{\ell}$ be the divides associated to $T_{0},\dots,T_{\ell}$. Because $D_{0},\dots,D_{\ell}$ are pseudoline arrangements connected by braid moves, we can extend $D_{0},\dots,D_{\ell}$ to pseudoline arrangements $\tilde{D}_{0},\dots,\tilde{D}_{\ell}$, still connected by braid moves, in which every pair of branches intersects exactly once. By Proposition 3.8, there exists a pile $\mathbf{\tilde{T}}=(\tilde{T}_{0},\dots,\tilde{T}_{\ell})$ of $\Diamond$-tilings of $\mathbf{P}_{n}$, for which the divides associated to $\tilde{T}_{0},\dots,\tilde{T}_{\ell}$ are $\tilde{D}_{0},\dots,\tilde{D}_{\ell}$. By Lemma 8.14, $\varkappa(\mathbf{\tilde{T}})$ is comfortable. The cubical complex $\varkappa(\mathbf{\tilde{T}})$ consists of $\varkappa=\varkappa(\mathbf{T})$, together with $2$-dimensional faces that are not part of any $3$-dimensional cube. Hence, it follows that $\varkappa$ is comfortable as well. ∎

###### Proof of Proposition 8.9.

We describe a pile $\mathbf{T}=(T_{0},\dots,T_{8})$ of quadrangulations of a square such that $\varkappa=\varkappa(\mathbf{T})$ is not comfortable. Let $T_{0}$ be as in Fig. 23. It is easier to understand this example by looking at the divides associated to $T_{0},\dots,T_{8}$, displayed in Fig. 24. Note that the divides associated to these quadrangulations are not pseudoline arrangements. Since $\varkappa$ has no interior vertices, every $\mathbf{u}\in\\{-1,1\\}^{\varkappa^{3}}$ satisfies (C2). However, it is not difficult to check that if $\mathbf{u}$ satisfies (C1), then the sign on a given cube is determined by the signs on the other $7$. Hence, $\varkappa$ is not comfortable. ∎

Figure 23: The quadrangulation $T_{0}$ from the proof of Proposition 8.9, with the associated divide drawn on top in blue.

Figure 24: The divides associated to the quadrangulations $T_{0},\dots,T_{8}$ from the proof of Proposition 8.9.

The rest of this section is dedicated to the proofs of Theorems 8.10–8.11.

###### Definition 8.15.

Let $\mathbf{T}=(T_{0},\dots,T_{\ell})$ be a pile of quadrangulations of a polygon, with $\varkappa=\varkappa(\mathbf{T})$, and $\mathbf{x}=(x_{s})_{s\in\varkappa^{0}}$. We say that $\mathbf{x}$ is _generic_ if for all extensions of $\mathbf{x}_{\textup{init}}$ (the restriction of $\mathbf{x}$ to $\varkappa^{0}(T_{0})$) to an array $\mathbf{\tilde{x}}$ indexed by $\varkappa^{02}(\mathbf{T})$ satisfying the K-hexahedron equations, the entries of $\mathbf{\tilde{x}}$ are all nonzero.

###### Definition 8.16.

Let $\mathbf{T}=(T_{0},\dots,T_{\ell})$ be a pile of quadrangulations of a polygon, with $\varkappa=\varkappa(\mathbf{T})$. Given an array $\mathbf{\tilde{x}}_{\textup{init}}=(x_{s})_{s\in\varkappa^{02}(T_{0})}$ and $\mathbf{t}=\left(t_{[s]}\right)_{[s]\in\varkappa_{\boxdot}}$, set $\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}=(y_{s})_{s\in\varkappa^{02}(T_{0})}$, where

$\displaystyle y_{s}=\begin{cases}x_{s}&\text{if }s\in\varkappa^{0}(T_{0}),\\\ t_{[s]}x_{s}&\text{if }s\in\varkappa^{2}(T_{0}).\end{cases}$

Given a generic array $\mathbf{\tilde{x}}_{\textup{init}}$ indexed by $\varkappa^{02}(T_{0})$, define $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\varkappa^{02}}$ to be the unique extension of $\mathbf{\tilde{x}}_{\textup{init}}$ to an array indexed by $\varkappa^{02}$ satisfying the K-hexahedron equations. Define $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\varkappa^{0}}$ to be the restriction of $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\varkappa^{02}}$ to $\varkappa^{0}$.
###### Lemma 8.17.

Let $\mathbf{T}=(T_{0},\dots,T_{\ell})$ be a pile of quadrangulations of a polygon, with $\varkappa=\varkappa(\mathbf{T})$. Fix a generic array $\mathbf{\tilde{x}}_{\textup{init}}$ indexed by $\varkappa^{02}(T_{0})$ satisfying equation (2.9) for $s\in\varkappa^{2}(T_{0})$, and $\mathbf{t}\in\\{-1,1\\}^{\varkappa_{\boxdot}}$. Then the following are equivalent: * • $\psi_{\varkappa}(\mathbf{t})$ has value $1$ on $C_{1},\dots,C_{i-1}$, and value $-1$ on $C_{i}$, * • $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\varkappa^{0}}$ and $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\varkappa^{0}}$ agree at $\varkappa^{0}(T_{0}),\dots,\varkappa^{0}(T_{i-1})$ but not at $\varkappa^{0}(T_{i})$.

###### Proof.

The proof follows directly from Lemma 7.11. ∎

###### Lemma 8.18.

Let $\mathbf{T}=(T_{0},\dots,T_{\ell})$ be a pile of quadrangulations of a polygon, with $\varkappa=\varkappa(\mathbf{T})$. Let $\mathbf{x}$ and $\mathbf{x}^{\prime}$ be generic and distinct coherent solutions of the Kashaev equation, both indexed by $\varkappa^{0}$, such that $\mathbf{x}$ and $\mathbf{x}^{\prime}$ agree at $\varkappa^{0}(T_{0})$. Let $i$ be the minimum value such that $\mathbf{x}$ and $\mathbf{x}^{\prime}$ do not agree at $\varkappa^{0}(T_{i})$. Then the cube $C_{i}$ shares its bottom vertex with $T_{0}$.

###### Proof.

Assume (for contradiction) that $C_{i}$ does not share its bottom vertex with $T_{0}$. Then the bottom vertex of $C_{i}$ must be an interior vertex of $\varkappa$. Hence, by the coherence and genericity of $\mathbf{x}$ and $\mathbf{x}^{\prime}$, the values of $\mathbf{x}$ and $\mathbf{x}^{\prime}$ at the top vertex of $C_{i}$ are uniquely determined by their values at $\varkappa^{0}(T_{0}),\dots,\varkappa^{0}(T_{i-1})$, which are the same for $\mathbf{x}$ and $\mathbf{x}^{\prime}$. Hence, $\mathbf{x}$ and $\mathbf{x}^{\prime}$ agree at the top vertex of $C_{i}$, and thus agree at $\varkappa^{0}(T_{i})$, a contradiction. ∎ We can now prove a weaker version of Theorem 8.10, under the additional constraint of genericity.

###### Corollary 8.19.

Let $\mathbf{T}$ be a pile of quadrangulations of a polygon such that $\varkappa=\varkappa(\mathbf{T})$ is comfortable. Any generic, coherent solution of the Kashaev equation $\mathbf{x}=(x_{s})_{s\in\varkappa^{0}}$ can be extended to $\mathbf{\tilde{x}}=(x_{s})_{s\in\varkappa^{02}}$ satisfying the K-hexahedron equations.

###### Proof.

Choose an arbitrary extension of $\mathbf{x}_{\textup{init}}$, the restriction of $\mathbf{x}$ to $\varkappa^{0}(T_{0})$, to an array $\mathbf{\tilde{x}}_{\textup{init}}^{\prime}$ indexed by $\varkappa^{02}(T_{0})$ satisfying equation (2.9) for $s\in\varkappa^{2}(T_{0})$. The result follows once we show that there exists $\mathbf{t}\in\\{-1,1\\}^{\varkappa_{\boxdot}}$ such that $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}^{\prime})^{\uparrow\varkappa^{02}}$ agrees with $\mathbf{x}$ on $\varkappa^{0}$. We proceed by induction, and assume that there exists $\mathbf{t}$ such that $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}^{\prime})^{\uparrow\varkappa^{02}}$ agrees with $\mathbf{x}$ on $\varkappa^{0}(T_{j})$ for $j=0,\dots,i-1$. If $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}^{\prime})^{\uparrow\varkappa^{02}}$ agrees with $\mathbf{x}$ on $\varkappa^{0}(T_{j})$ for $j=0,\dots,i$, we are done. Suppose that $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}^{\prime})^{\uparrow\varkappa^{02}}$ does not agree with $\mathbf{x}$ on $\varkappa^{0}(T_{i})$.
We need to find $\mathbf{t}_{i}$ such that $(\mathbf{t}_{i}\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}^{\prime})^{\uparrow\varkappa^{02}}$ agrees with $\mathbf{x}$ on $\varkappa^{0}(T_{j})$ for $j=0,\dots,i$. By Lemma 8.17, this is equivalent to finding $\mathbf{t}_{i}$ such that $\psi_{\varkappa}(\mathbf{t}_{i})$ is $1$ on $C_{1},\dots,C_{i-1}$, and $-1$ on $C_{i}$. By Lemma 8.18, the cube $C_{i}$ shares its bottom vertex with $T_{0}$. Hence, there exists $\mathbf{u}=(u_{s})\in\\{-1,1\\}^{\varkappa^{3}}$ satisfying (C2) such that $u_{C_{1}}=\cdots=u_{C_{i-1}}=1$ and $u_{C_{i}}=-1$. (For example, choose $\mathbf{u}$ so that $u_{C_{i}}=-1$, and $u_{C}=1$ for all other cubes $C$ that share a bottom vertex with $T_{0}$. Then the remaining values are determined by condition (C2).) Because $\varkappa$ is comfortable, there exists $\mathbf{t}_{i}$ such that $\psi_{\varkappa}(\mathbf{t}_{i})=\mathbf{u}$, as desired. ∎

###### Proof of Theorem 8.10.

We need to weaken the genericity assumption of Corollary 8.19 to the conditions that $\mathbf{x}$ has nonzero components and satisfies condition (4.4). Let $\mathbf{x}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{0}}$ be a coherent solution of the Kashaev equation with nonzero components that satisfies condition (4.4). It is straightforward to show that there exists a sequence $\mathbf{x}_{1},\mathbf{x}_{2},\ldots\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{0}}$ of generic, coherent solutions of the Kashaev equation that converge pointwise to $\mathbf{x}$. By Corollary 8.19, there exist $\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\ldots\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{02}}$ satisfying the K-hexahedron equations such that $\mathbf{\tilde{x}}_{i}$ restricts to $\mathbf{x}_{i}$. There exists a subsequence of $\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\dots$ that converges to an array $\mathbf{\tilde{x}}$. (For each $s\in\varkappa^{2}(\mathbf{T})$, we can partition the sequence $\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\dots$ into two subsequences, each of which converges at $s$. Because $\varkappa^{2}(\mathbf{T})$ is finite, the claim follows.) The array $\mathbf{\tilde{x}}$ must satisfy the K-hexahedron equations and restrict to $\mathbf{x}$, so we are done. ∎ In order to complete the proof of Theorem 8.11, we will need the following technical lemma.

###### Lemma 8.20.

Let $\mathbf{T}=(T_{0},\dots,T_{\ell})$ be a pile of quadrangulations of a polygon such that $\varkappa(\mathbf{T})$ is not comfortable, but $\varkappa(T_{0},\dots,T_{\ell-1})$ is comfortable. Let $C_{\ell}$ be the cube of $\varkappa=\varkappa(\mathbf{T})$ corresponding to the flip from $T_{\ell-1}$ to $T_{\ell}$. $(a)$ Let $v$ be the bottom vertex of the cube $C_{\ell}$, i.e., let $v$ be the vertex of $T_{\ell-1}$ not in $T_{\ell}$. Then $v$ is in $T_{0}$. $(b)$ Let $\mathbf{w}=(w_{C})_{C\in\varkappa^{3}}$ where $w_{C_{\ell}}=-1$, and $w_{C}=1$ for $C\not=C_{\ell}$. Then $\mathbf{w}$ is not in the image of $\psi_{\varkappa}$.

###### Proof.

Let $\varkappa^{\prime}=\varkappa(T_{0},\dots,T_{\ell-1})$. Let * • $a_{1}$ be the number of $\mathbf{u}\in\\{-1,1\\}^{(\varkappa^{\prime})^{3}}$ satisfying (C1), * • $a_{2}$ be the number of $\mathbf{u}\in\\{-1,1\\}^{(\varkappa^{\prime})^{3}}$ satisfying (C2), * • $b_{1}$ be the number of $\mathbf{u}\in\\{-1,1\\}^{\varkappa^{3}}$ satisfying (C1), and * • $b_{2}$ be the number of $\mathbf{u}\in\\{-1,1\\}^{\varkappa^{3}}$ satisfying (C2).
Because $a_{1}$, $a_{2}$, $b_{1}$, $b_{2}$ enumerate the elements of vector spaces over $\mathbb{F}_{2}$, all four quantities must be powers of $2$. Because $\varkappa^{\prime}$ is comfortable, $a_{1}=a_{2}$. Because $\varkappa$ is not comfortable, $b_{1}<b_{2}$. It is clear that $b_{1}\leq 2a_{1}$ and $b_{2}\leq 2a_{2}$. Hence, it follows that $a_{1}=a_{2}=b_{1}=b_{2}/2$. Assume (for contradiction) that $v$ is not in $T_{0}$, so $v$ is in the interior of $\varkappa$. But then if $\mathbf{u}=(u_{C})_{C\in\varkappa^{3}}$ satisfies (C2), $\displaystyle u_{C_{\ell}}=\prod_{C\in\varkappa^{3}\colon v\in C,\,C\not=C_{\ell}}u_{C},$ so $a_{2}=b_{2}$, a contradiction. Hence, we have proved (a). Because $a_{1}=b_{1}$, it follows that for each $\mathbf{u}^{\prime}\in\\{-1,1\\}^{(\varkappa^{\prime})^{3}}$ satisfying (C1), there exists exactly one $\mathbf{u}\in\\{-1,1\\}^{\varkappa^{3}}$ satisfying (C1) that restricts to $\mathbf{u}^{\prime}$. Because the arrays $\mathbf{u}=(u_{C})_{C\in\varkappa^{3}}$ and $\mathbf{u}^{\prime}=(u_{C})_{C\in(\varkappa^{\prime})^{3}}$ with $u_{C}=1$ for all $C$ both satisfy (C1), and $\mathbf{w}$ also restricts to $\mathbf{u}^{\prime}$, $\mathbf{w}$ cannot satisfy (C1), proving (b). ∎

###### Proof of Theorem 8.11.

Without loss of generality, we assume that $\varkappa(T_{0},\dots,T_{\ell-1})$ is comfortable. (If not, let $m$ be minimal such that $\varkappa(T_{0},{\dots},T_{m})$ is not comfortable, but $\varkappa(T_{0},{\dots},T_{m{-}1})$ is comfortable. If we can prove the theorem for $\varkappa(T_{0},\dots,T_{m})$, it follows that it holds for $\varkappa(\mathbf{T})$.) We now construct an array $\mathbf{x}$ satisfying the desired conditions. Let $C$ be the cube of $\varkappa$ corresponding to the flip from $T_{\ell-1}$ to $T_{\ell}$, and let $v$ be the top vertex of $C$ (i.e., $v$ is the new vertex in $T_{\ell}$). Choose arbitrary positive values for $\mathbf{x}_{\textup{init}}$. Extend $\mathbf{x}_{\textup{init}}$ to $\mathbf{x}$ by the positive Kashaev recurrence until we reach $v$, where we choose the other value such that $K^{C}(\mathbf{x})=0$. By construction, $\mathbf{x}$ restricted to $\varkappa(T_{0},\dots,T_{\ell-1})$ satisfies the positive Kashaev recurrence, and hence is a coherent solution of the Kashaev equation. By Lemma 8.20(a), no vertices of $C$ are in the interior of $\varkappa$. Hence, $\mathbf{x}$ is a coherent solution of the Kashaev equation. Next, we show that $\mathbf{x}$ cannot be extended to an array indexed by $\varkappa^{02}$ satisfying the K-hexahedron equations. Let $\mathbf{x}_{\text{pK}}$ be the array satisfying the positive Kashaev recurrence that restricts to $\mathbf{x}_{\textup{init}}$ at $T_{0}$ (so $\mathbf{x}_{\text{pK}}$ agrees with $\mathbf{x}$ everywhere except $v$). Let $\mathbf{\tilde{x}}_{\text{pK}}$ be an extension of $\mathbf{x}_{\text{pK}}$ to $\varkappa^{02}$ satisfying the K-hexahedron equations, and let $(\mathbf{\tilde{x}}_{\text{pK}})_{\textup{init}}$ denote its restriction to $\varkappa^{02}(T_{0})$. Assume (for contradiction) that there exists $\mathbf{t}\in\\{-1,1\\}^{\varkappa_{\boxdot}}$ such that $(\mathbf{t}\cdot(\mathbf{\tilde{x}}_{\text{pK}})_{\textup{init}})^{\uparrow\varkappa^{02}}$ restricts to $\mathbf{x}$. Hence, by Lemma 8.17, $\psi_{\varkappa}(\mathbf{t})$ has value $-1$ at $C$, and value $1$ everywhere else. But Lemma 8.20(b) says that this array is not in the image of $\psi_{\varkappa}$, a contradiction. Hence, no such $\mathbf{t}$ exists, so $\mathbf{x}$ cannot be extended to an array indexed by $\varkappa^{02}$ satisfying the K-hexahedron equations. ∎

## 9 Proofs of Corollary 4.23 and Theorem 4.26

This section contains the proofs of Corollary 4.23 and Theorem 4.26.
We use the following lemma in proving Corollary 4.23.

###### Lemma 9.1.

Let $\mathbf{T}$ be a pile of $\Diamond$-tilings of $\mathbf{P}_{n}$, with $\varkappa=\varkappa(\mathbf{T})$. Let $\mathbf{x}=(x_{s})\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{0}}$ be a coherent solution of the Kashaev equation satisfying condition (4.2). Suppose $s_{1},s_{2}\in\varkappa^{0}$ are labeled by the same subset of $[n]$. Then $x_{s_{1}}=x_{s_{2}}$.

###### Proof.

Note that due to the homogeneity of the Kashaev equation and the coherence equations (equation (4.17)), we can rescale the components of $\mathbf{x}$ to obtain a standard array. Hence, we can assume that $\mathbf{x}$ is standard. By Theorem 4.19(a), we can extend $\mathbf{x}$ to an array $\mathbf{\tilde{x}}$ indexed by $\varkappa^{02}$ satisfying the K-hexahedron equations. Note that we can choose a sequence $\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\dots$ of standard arrays indexed by $\varkappa^{02}$ satisfying the K-hexahedron equations converging to $\mathbf{\tilde{x}}$ such that the restriction of $\mathbf{\tilde{x}}_{i}$ to $\varkappa^{02}(T)$ for any tiling $T$ in $\mathbf{T}$ is generic. By Theorems 4.9 and 4.12, there exist symmetric $n\times n$ matrices $M_{i}$ such that $\mathbf{\tilde{x}}_{i}=\mathbf{\tilde{x}}_{\varkappa(\mathbf{T})}(M_{i})$. Hence, the components of $\mathbf{\tilde{x}}_{i}$ at $s_{1}$ and $s_{2}$ must agree, so the components of $\mathbf{\tilde{x}}$ at $s_{1}$ and $s_{2}$ must agree. ∎

###### Proof of Corollary 4.23.

The first bullet point implies the other two by Corollary 4.14, and it is obvious that the third implies the second. Thus, we need to show that the second bullet point implies the first. Suppose $\mathbf{T}=(T_{0},\dots,T_{\ell})$ is a pile of $\Diamond$-tilings of $\mathbf{P}_{n}$ in which every $I\subseteq[n]$ labels at least one vertex of $\varkappa(\mathbf{T})$, and $\mathbf{x}=\mathbf{x}_{\varkappa(\mathbf{T})}(\mathbf{\bar{x}})$ is a coherent solution of the Kashaev equation. Let $\mathbf{T}^{\prime}=(T_{0},\dots,T_{\ell},\dots,T_{\ell^{\prime}})$ be an extension of $\mathbf{T}$ that contains the tiling $T_{\textup{min},n}$. By Lemma 9.1, $\mathbf{x}_{\varkappa(\mathbf{T}^{\prime})}(\mathbf{\bar{x}})$ is the unique extension of $\mathbf{x}$ to $\varkappa^{0}(\mathbf{T}^{\prime})$. By Theorem 4.19(a), there exists an array $\mathbf{\tilde{x}}$ indexed by $\varkappa^{02}(\mathbf{T}^{\prime})$ extending $\mathbf{x}_{\varkappa(\mathbf{T}^{\prime})}(\mathbf{\bar{x}})$ that satisfies the K-hexahedron equations. By Theorem 4.10, Proposition 4.11, and Corollary 4.15 (all of which are due to Kenyon and Pemantle [5]), there exists a unique symmetric matrix $M$ such that $\mathbf{\tilde{x}}=\mathbf{\tilde{x}}_{\varkappa(\mathbf{T}^{\prime})}(M)$, so $M$ satisfies condition (4.5). ∎ Next, we shall work towards a proof of Theorem 4.26.

###### Proposition 9.2.

Let $M$ be an $n\times n$ symmetric matrix, and let $\mathbf{\bar{x}}=\mathbf{\bar{x}}(M)$. Then for all $I\subseteq[n]$ and $A\in\binom{[n]}{4}$, equation (4.10) holds.

###### Proof.

Note that it suffices to prove Proposition 9.2 for generic, symmetric $M$, because any symmetric matrix can be written as a limit of generic, symmetric matrices. Fix a generic, symmetric $n\times n$ matrix $M$ for the rest of the proof.
For $I\subseteq[n]$ and distinct $i,j\in[n]$, let $\displaystyle x_{I,\\{i,j\\}}=(-1)^{\lfloor(\left|I^{\prime}\right|+1)/2\rfloor}M_{I^{\prime}\cup\\{i\\}}^{I^{\prime}\cup\\{j\\}},$ where $I^{\prime}=I\setminus\\{i,j\\}$. Note that if $\mathbf{T}$ is a pile of $\Diamond$-tilings of $\mathbf{P}_{n}$, a cube of $\varkappa(\mathbf{T})$ containing vertices labeled by $I$ and $I\cup\\{i,j,k\\}$ for $i,j,k\not\in I$ has top/bottom vertices labeled by $I\cup\\{j\\}$ and $I\cup\\{i,k\\}$. Hence, by Lemma 7.2 and the fact that $\mathbf{\tilde{x}}_{\varkappa(\mathbf{T})}(M)$ satisfies the K-hexahedron equations, where $\mathbf{T}$ is any pile of $\Diamond$-tilings of $\mathbf{P}_{n}$, it follows that $\displaystyle K_{I,\\{i,j,k\\}}(\mathbf{\bar{x}})=\pm x_{I,\\{i,j\\}}x_{I,\\{i,k\\}}x_{I,\\{j,k\\}},$ (9.1) where the plus sign appears on the right-hand side of equation (9.1) if either * • $i,k\in I$ and $j\not\in I$, or * • $j\in I$ and $i,k\not\in I$, and the minus sign appears otherwise. Let $I\subseteq[n]$ and $A\in\binom{[n]}{4}$. It is straightforward to check that an even number of $\\{i<j<k\\}\in\binom{A}{3}$ satisfy neither of the bullet points above. Hence, by equation (9.1), it follows that $\displaystyle\prod_{J\in\binom{A}{3}}K_{I,J}(\mathbf{\bar{x}})=\prod_{J\in\binom{A}{2}}x_{I,J}^{2}=\prod_{J\in\binom{A}{2}}L_{I,J}(\mathbf{\bar{x}}),$ as desired. ∎
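The parity claim in the proof above — that an even number of triples $\\{i<j<k\\}\in\binom{A}{3}$ satisfy neither bullet point — depends only on the intersection $I\cap A$, so it can be confirmed exhaustively. A short Python check (added here for illustration; not part of the original proof):

```python
from itertools import combinations

A = (1, 2, 3, 4)
for mask in range(16):  # all possible intersections I ∩ A
    I = {a for bit, a in enumerate(A) if (mask >> bit) & 1}
    # Count triples i<j<k in A satisfying NEITHER bullet point (minus sign).
    minus = sum(1 for (i, j, k) in combinations(A, 3)
                if not ((i in I and k in I and j not in I)
                        or (j in I and i not in I and k not in I)))
    assert minus % 2 == 0  # the product of signs over binom(A,3) is +1
```

The reader may want to review Example 3.17 before proceeding with the following lemma.

###### Lemma 9.3.

Fix an array $\mathbf{\bar{x}}=(x_{I})_{I\subseteq[4]}$ satisfying the conditions that * • $x_{I}\not=0$ for all $I\subseteq[4]$, * • for any $I\subseteq[4]$ and distinct $i,j\in[4]$, $L_{I,\\{i,j\\}}\not=0$, * • for all $I\subseteq[4]$ and distinct $i,j,k\in[4]$, equation (4.8) holds, * • for all $I\subseteq[4]$, equation (4.9) holds. Let $\mathbf{T}_{1}=(T_{1,0},\dots,T_{1,4}),\mathbf{T}_{2}=(T_{2,0},\dots,T_{2,4})\in\mathcal{C}(4)$ be the two distinct piles in $\mathcal{C}(4)$. Let $\mathbf{\tilde{x}}_{1}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{02}(\mathbf{T}_{1})}$ be an extension of $\mathbf{x}_{\varkappa(\mathbf{T}_{1})}(\mathbf{\bar{x}})$ satisfying the K-hexahedron equations. Then there exists an extension $\mathbf{\tilde{x}}_{2}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{02}(\mathbf{T}_{2})}$ of $\mathbf{x}_{\varkappa(\mathbf{T}_{2})}(\mathbf{\bar{x}})$ satisfying the K-hexahedron equations that agrees with $\mathbf{\tilde{x}}_{1}$ on $\varkappa^{02}(T_{1,0})=\varkappa^{02}(T_{2,0})$.

###### Proof.

By the homogeneity of equations (4.8)–(4.9), we can rescale the components of $\mathbf{\bar{x}}$ so that $x_{\varnothing}=1$. Hence, it follows from Corollary 4.25 that there exists an extension $\mathbf{\tilde{x}}_{1}^{\prime}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{02}(\mathbf{T}_{1})}$ of $\mathbf{x}_{\varkappa(\mathbf{T}_{1})}(\mathbf{\bar{x}})$ and an extension $\mathbf{\tilde{x}}_{2}^{\prime}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{02}(\mathbf{T}_{2})}$ of $\mathbf{x}_{\varkappa(\mathbf{T}_{2})}(\mathbf{\bar{x}})$ that agree on $\varkappa^{02}(T_{1,0})=\varkappa^{02}(T_{2,0})$. Let $\mathbf{\tilde{x}}_{\textup{init}}^{\prime}$ be the restriction of $\mathbf{\tilde{x}}_{1}^{\prime}$ to $\varkappa^{02}(T_{1,0})$. Choose $\mathbf{t}\in\\{-1,1\\}^{\varkappa^{2}(T_{1,0})}$ (where we associate $\varkappa^{2}(T_{1,0})$ with $\varkappa_{\boxdot}(\mathbf{T}_{1})$) such that $\mathbf{\tilde{x}}_{1}=(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}^{\prime})^{\uparrow\varkappa^{02}(\mathbf{T}_{1})}$.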
By Lemma 8.17, because $\mathbf{\tilde{x}}_{1}^{\prime}$ and $\mathbf{\tilde{x}}_{1}$ agree on $\varkappa^{0}(\mathbf{T}_{1})$, $\psi_{\varkappa(\mathbf{T}_{1})}(\mathbf{t})$ has value $1$ at every cube of $\varkappa(\mathbf{T}_{1})$. Because $\psi_{\varkappa(\mathbf{T}_{1})}(\mathbf{t})$ has value $1$ at every cube of $\varkappa(\mathbf{T}_{1})$, $\psi_{\varkappa(\mathbf{T}_{2})}(\mathbf{t})$ has value $1$ at every cube of $\varkappa(\mathbf{T}_{2})$. Hence, $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}^{\prime})^{\uparrow\varkappa^{02}(\mathbf{T}_{2})}$ agrees with $\mathbf{\tilde{x}}_{1}$ on $\varkappa^{02}(T_{1,0})=\varkappa^{02}(T_{2,0})$ and restricts to $\mathbf{x}_{\varkappa(\mathbf{T}_{2})}(\mathbf{\bar{x}})$, so $\mathbf{\tilde{x}}_{2}=(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}^{\prime})^{\uparrow\varkappa^{02}(\mathbf{T}_{2})}$ is the desired extension. ∎ The reader may want to review Definition 3.21 before proceeding with the following definition.

###### Definition 9.4.

Let $\mathbf{T}_{1}=(T_{1,0},\dots,T_{1,\ell})$ and $\mathbf{T}_{2}=(T_{2,0},\dots,T_{2,\ell})$ be two piles such that the directed cubical complexes $\varkappa(\mathbf{T}_{1})$ and $\varkappa(\mathbf{T}_{2})$ are related by a flip. Label the vertices of $\varkappa(\mathbf{T}_{1})$ and $\varkappa(\mathbf{T}_{2})$ involved in the flip by subsets of $[4]$, as in Fig. 25. Let $\mathbf{x}_{1}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{0}(\mathbf{T}_{1})}$ and $\mathbf{x}_{2}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{0}(\mathbf{T}_{2})}$ be arrays satisfying condition (4.2). We say that the pair $(\mathbf{x}_{1},\mathbf{x}_{2})$ is _K-flipped_ when * • $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ agree everywhere, except at the vertex at which $\varkappa(\mathbf{T}_{1})$ and $\varkappa(\mathbf{T}_{2})$ differ, * • writing $\mathbf{\bar{x}}=(x_{I})_{I\subseteq[4]}$, where $x_{I}$ is the component of $\mathbf{x}_{1}$ and/or $\mathbf{x}_{2}$ at the vertex labeled by $I$, $\mathbf{\bar{x}}$ satisfies equation (4.8) for all $I\subseteq[4]$ and distinct $i,j,k\in[4]$ and equation (4.9) for all $I\subseteq[4]$.
[Figure 25 consists of two rows of five panels each — the top row showing cubes in $\varkappa(\mathbf{T}_{1})$, the bottom row cubes in $\varkappa(\mathbf{T}_{2})$ — with vertices labeled by subsets of $[4]$.]

Figure 25: Labeling the vertices involved in a flip between $\varkappa(\mathbf{T}_{1})$ and $\varkappa(\mathbf{T}_{2})$ in Lemma 9.5 with subsets of $[4]$ (in blue).

###### Lemma 9.5.

Let $\mathbf{T}_{1}=(T_{1,0},\dots,T_{1,\ell})$ and $\mathbf{T}_{2}=(T_{2,0},\dots,T_{2,\ell})$ be two piles such that the directed cubical complexes $\varkappa(\mathbf{T}_{1})$ and $\varkappa(\mathbf{T}_{2})$ are related by a flip. Let $\mathbf{x}_{1}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{0}(\mathbf{T}_{1})}$ and $\mathbf{x}_{2}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{0}(\mathbf{T}_{2})}$ be arrays satisfying condition (4.2), such that the pair $(\mathbf{x}_{1},\mathbf{x}_{2})$ is K-flipped. $(a)$ Then $\mathbf{x}_{1}$ is a coherent solution of the Kashaev equation if and only if $\mathbf{x}_{2}$ is a coherent solution of the Kashaev equation. $(b)$ Suppose $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ are both coherent solutions of the Kashaev equation. Let $\mathbf{\tilde{x}}_{1}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{02}(\mathbf{T}_{1})}$ be an extension of $\mathbf{x}_{1}$ to $\varkappa^{02}(\mathbf{T}_{1})$, and let $\mathbf{\tilde{x}}_{\textup{init}}$ be the restriction of $\mathbf{\tilde{x}}_{1}$ to $\varkappa^{02}(T_{1,0})$. Then, identifying $\varkappa^{02}(T_{1,0})$ and $\varkappa^{02}(T_{2,0})$, $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\varkappa^{02}(\mathbf{T}_{2})}$ restricts to $\mathbf{x}_{2}$.

###### Proof.

This result follows from Lemma 9.3. ∎

###### Lemma 9.6.
There exists a sequence $\mathbf{T}_{0},\dots,\mathbf{T}_{\binom{n}{3}-\binom{n-1}{2}}\in\mathcal{C}(n)$ of piles, where we write $\mathbf{T}_{i}=(T_{i,0},\dots,T_{i,\binom{n}{3}})$ for $i=0,\dots,\binom{n}{3}-\binom{n-1}{2}$, such that * • the directed cubical complexes $\varkappa(\mathbf{T}_{i-1})$ and $\varkappa(\mathbf{T}_{i})$ are related by a flip for $i=1,\dots,\binom{n}{3}-\binom{n-1}{2}$, * • the directed cubes of $\varkappa(\mathbf{T}_{0})$ corresponding to the flips between $T_{0,i-1}$ and $T_{0,i}$ for $i=1,\dots,\binom{n-1}{2}$ share their bottom vertices with $T_{0,0}$, * • for $i=1,\dots,\binom{n}{3}-\binom{n-1}{2}$, $T_{0,j}=\cdots=T_{i-1,j}$ for $j=i+\binom{n-1}{2},\dots,\binom{n}{3}$, * • for $i=1,\dots,\binom{n}{3}-\binom{n-1}{2}$, the directed cube of $\varkappa(\mathbf{T}_{i-1})$ corresponding to the flip between $T_{i-1,i-1+\binom{n-1}{2}}$ and $T_{i-1,i+\binom{n-1}{2}}$ is the top of the four cubes of $\varkappa(\mathbf{T}_{i-1})$ involved in the flip between $\varkappa(\mathbf{T}_{i-1})$ and $\varkappa(\mathbf{T}_{i})$.

###### Remark 9.7.

The idea behind Lemma 9.6 is as follows. Let $C_{i}$ be the cube of $\varkappa(\mathbf{T}_{0})$ corresponding to the flip between $T_{0,i-1}$ and $T_{0,i}$. The second bullet point states that the cubes $C_{1},\dots,C_{\binom{n-1}{2}}$ share their bottom vertex with $T_{0,0}$, and hence do not have their bottom vertices in the interior of $\varkappa(\mathbf{T}_{0})$. As a consequence of the remaining bullet points, there is a sequence of $\binom{n}{3}-\binom{n-1}{2}$ flips on the directed cubical complex $\varkappa(\mathbf{T}_{0})$ in which $C_{\binom{n-1}{2}+1},\dots,C_{\binom{n}{3}}$ (in that order) are the top cubes involved in the flips.

###### Proof of Lemma 9.6.

Define a total order $<_{\textup{lex}}$ on $\binom{[n]}{k}$, where $\\{i_{1}<\cdots<i_{k}\\}<_{\textup{lex}}\\{i_{1}^{\prime}<\cdots<i_{k}^{\prime}\\}$ when there exists $j$ such that $i_{\ell}=i_{\ell}^{\prime}$ for $\ell<j$, and $i_{j}<i_{j}^{\prime}$. Set $\\{\alpha_{1}<_{\textup{lex}}\cdots<_{\textup{lex}}\alpha_{\binom{n}{3}}\\}=\binom{[n]}{3}$. Note that the permutation $\sigma_{0}=(\alpha_{1},\dots,\alpha_{\binom{n}{3}})$ of $\binom{[n]}{3}$ is admissible (see Definition 3.23). Let $\mathbf{T}_{0}$ be the pile corresponding to $(\alpha_{1},\dots,\alpha_{\binom{n}{3}})$ (see Theorem 3.25). Note that $1\in\alpha_{i}$ for $i=1,\dots,\binom{n-1}{2}$, so the second bullet point holds. We now construct the piles $\mathbf{T}_{1},\dots,\mathbf{T}_{\binom{n}{3}-\binom{n-1}{2}}$ inductively as follows. For $i=1,\dots,\binom{n}{3}-\binom{n-1}{2}$, the admissible permutation $\sigma_{i}$ corresponding to $\mathbf{T}_{i}$ should have the following properties: * • The inversion set of $\sigma_{i}$ is $\\{\\{1\\}\cup\alpha_{\binom{n-1}{2}+1},\dots,\\{1\\}\cup\alpha_{\binom{n-1}{2}+i}\\}$. Hence, the inversion sets of $\sigma_{i-1}$ and $\sigma_{i}$ differ by the element $\\{1\\}\cup\alpha_{\binom{n-1}{2}+i}$, so $\varkappa(\mathbf{T}_{i-1})$ and $\varkappa(\mathbf{T}_{i})$ are related by a flip. Hence, the first bullet point holds. * • Writing $\sigma_{i-1}=(\beta_{1},\dots,\beta_{\binom{n}{3}})$, $\beta_{j}=\alpha_{j}$ for $j=i+\binom{n-1}{2},\dots,\binom{n}{3}$. Hence, the third bullet point holds. Because $\beta_{i+\binom{n-1}{2}}=\alpha_{i+\binom{n-1}{2}}$ and the flip between $\varkappa(\mathbf{T}_{i-1})$ and $\varkappa(\mathbf{T}_{i})$ consists of adding $\\{1\\}\cup\alpha_{i+\binom{n-1}{2}}$ to the inversion set, the fourth bullet point follows.
For $i=1,\dots,\binom{n}{3}-\binom{n-1}{2}$, write $\sigma_{i-1}=(\beta_{1},\dots,\beta_{\binom{n}{3}})$. We want to obtain $\sigma_{i}$. Write $\alpha_{i+\binom{n-1}{2}}=\beta_{i+\binom{n-1}{2}}=\\{i_{1}<i_{2}<i_{3}\\}$. Let $(\gamma_{1},\dots,\gamma_{i+\binom{n-1}{2}-4})$ be the subsequence of $(\beta_{1},\dots,\beta_{i+\binom{n-1}{2}-1})$ excluding $\\{1,i_{1},i_{2}\\}$, $\\{1,i_{1},i_{3}\\}$, and $\\{1,i_{2},i_{3}\\}$. Setting $\displaystyle\sigma_{i}=\big{(}\gamma_{1},\dots,\gamma_{i+\binom{n-1}{2}-4},\\{i_{1},i_{2},i_{3}\\},\\{1,i_{2},i_{3}\\},\\{1,i_{1},i_{3}\\},\\{1,i_{1},i_{2}\\},\beta_{i+\binom{n-1}{2}+1},\dots,\beta_{\binom{n}{3}}\big{)},$ it is straightforward to check that $\sigma_{i}$ is an admissible permutation with the desired properties. ∎
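The lexicographic step in the proof above — that the first $\binom{n-1}{2}$ triples in the order $<_{\textup{lex}}$ are exactly those containing $1$ — is easy to confirm computationally. A short Python check (an added illustration, not part of the original proof):

```python
from itertools import combinations
from math import comb

for n in range(4, 9):
    triples = list(combinations(range(1, n + 1), 3))  # lexicographic order
    head, tail = triples[:comb(n - 1, 2)], triples[comb(n - 1, 2):]
    assert all(1 in alpha for alpha in head)       # second bullet of Lemma 9.6
    assert all(1 not in alpha for alpha in tail)
```

###### Remark 9.8.

The pile $\mathbf{T}_{0}$ constructed in the proof of Lemma 9.6 is a representative for the smallest element of the third higher Bruhat order. The sequence $\big{(}\varkappa(\mathbf{T}_{0}),\dots,\varkappa\big{(}\mathbf{T}_{\binom{n}{3}-\binom{n-1}{2}}\big{)}\big{)}$, where $\mathbf{T}_{0},\dots,\mathbf{T}_{\binom{n}{3}-\binom{n-1}{2}}$ are the piles constructed in the proof of Lemma 9.6, consists of the first $\binom{n}{3}-\binom{n-1}{2}+1$ elements of a representative for the smallest element of the fourth higher Bruhat order. See [9] or [12] for further discussion of higher Bruhat orders.

###### Lemma 9.9.

Let $\mathbf{\bar{x}}=(x_{I})_{I\subseteq[n]}$ be an array satisfying the conditions that $L_{I,\\{i,j\\}}\not=0$ for any $I\subseteq[n]$ and distinct $i,j\in[n]$, and $x_{\varnothing}=1$. Suppose that for all $I\subseteq[n]$ and distinct $i,j,k\in[n]$, equation (4.8) holds, and for all $I\subseteq[n]$ and $A\in\binom{[n]}{4}$, equation (4.10) holds. Then there exists $\mathbf{T}\in\mathcal{C}(n)$ such that $\mathbf{x}_{\varkappa(\mathbf{T})}(\mathbf{\bar{x}})$ is a coherent solution of the Kashaev equation.

###### Proof.

Let $\mathbf{T}_{0},\dots,\mathbf{T}_{\binom{n}{3}-\binom{n-1}{2}}\in\mathcal{C}(n)$ be a sequence of piles satisfying the conditions of Lemma 9.6, where we write $\mathbf{T}_{i}=(T_{i,0},\dots,T_{i,\binom{n}{3}})$ for $i=0,\dots,\binom{n}{3}-\binom{n-1}{2}$. We will show that $\mathbf{x}_{\varkappa(\mathbf{T}_{0})}(\mathbf{\bar{x}})$ is a coherent solution of the Kashaev equation. We claim that $\mathbf{x}_{\varkappa(T_{0,0},\dots,T_{0,j})}(\mathbf{\bar{x}})$ is a coherent solution of the Kashaev equation for $j=1,\dots,\binom{n}{3}$, and we proceed by induction on $j$. Because equation (4.8) holds for all $I\subseteq[n]$ and distinct $i,j,k\in[n]$, $\mathbf{x}_{\varkappa(T_{0,0},\dots,T_{0,j})}(\mathbf{\bar{x}})$ satisfies the Kashaev equation. Hence, we only have to check coherence, i.e., we need to check that equation (4.3) holds for every interior vertex of $\varkappa(T_{0,0},\dots,T_{0,j})$. For $j=1,\dots,\binom{n-1}{2}$, none of the vertices in $\varkappa^{0}(T_{0,0},\dots,T_{0,j})$ are interior vertices of $\varkappa(T_{0,0},\dots,T_{0,j})$, so $\mathbf{x}_{\varkappa(T_{0,0},\dots,T_{0,j})}(\mathbf{\bar{x}})$ is a coherent solution of the Kashaev equation. Now suppose $j>\binom{n-1}{2}$. By our inductive hypothesis, $\mathbf{x}_{\varkappa(T_{0,0},\dots,T_{0,j-1})}(\mathbf{\bar{x}})$ is a coherent solution of the Kashaev equation. By construction, $\varkappa(T_{i-1,0},\dots,T_{i-1,j-1})$ and $\varkappa(T_{i,0},\dots,T_{i,j-1})$ are related by a flip for $i=1,\dots,j-\binom{n-1}{2}-1$.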
Hence, the pairs $\big{(}\mathbf{x}_{\varkappa(T_{i-1,0},\dots,T_{i-1,j-1})}(\mathbf{\bar{x}}),\mathbf{x}_{\varkappa(T_{i,0},\dots,T_{i,j-1})}(\mathbf{\bar{x}})\big{)}$ are K-flipped for $i=1,\dots,j-\binom{n-1}{2}-1$ by the conditions of the lemma. By repeated applications of Lemma 9.5(a), it follows that $\mathbf{x}_{\varkappa\big{(}T_{j-\binom{n-1}{2}-1,0},\dots,T_{j-\binom{n-1}{2}-1,j-1}\big{)}}(\mathbf{\bar{x}})$ is a coherent solution of the Kashaev equation. By construction, the cube of $\varkappa\big{(}T_{j-\binom{n-1}{2}-1,0},\dots,T_{j-\binom{n-1}{2}-1,j}\big{)}$ corresponding to the flip between $T_{j-\binom{n-1}{2}-1,j-1}$ and $T_{j-\binom{n-1}{2}-1,j}$ is the top of four cubes where a flip can take place. Hence, because equation (4.10) holds for all $I\subseteq[n]$ and $A\in\binom{[n]}{4}$, $\mathbf{x}_{\varkappa\big{(}T_{j-\binom{n-1}{2}-1,0},\dots,T_{j-\binom{n-1}{2}-1,j}\big{)}}(\mathbf{\bar{x}})$ is a coherent solution of the Kashaev equation. By construction, $\varkappa(T_{i-1,0},\dots,T_{i-1,j})$ and $\varkappa(T_{i,0},\dots,T_{i,j})$ are related by a flip for $i=1,\dots,j-\binom{n-1}{2}-1$, so the pairs $\big{(}\mathbf{x}_{\varkappa(T_{i-1,0},\dots,T_{i-1,j})}(\mathbf{\bar{x}}),\mathbf{x}_{\varkappa(T_{i,0},\dots,T_{i,j})}(\mathbf{\bar{x}})\big{)}$ are K-flipped for $i=1,\dots,j-\binom{n-1}{2}-1$. Thus, by repeated applications of Lemma 9.5(a), it follows that $\mathbf{x}_{\varkappa(T_{0,0},\dots,T_{0,j})}(\mathbf{\bar{x}})$ is a coherent solution of the Kashaev equation. ∎

###### Proof of Theorem 4.26.

By Corollary 4.23, the first two bullet points are equivalent, and by Corollary 4.23 and Proposition 9.2, the first bullet point implies the third bullet point. Hence, we just need to show that the third bullet point implies the first. Suppose that the third bullet point holds. By Lemma 9.9, there exists $\mathbf{T}\in\mathcal{C}(n)$ such that $\mathbf{x}_{\varkappa(\mathbf{T})}(\mathbf{\bar{x}})$ is a coherent solution of the Kashaev equation. Hence, by Theorem 4.19, there exists an extension $\mathbf{\tilde{x}}\in(\operatorname{\mathbb{C}}^{*})^{\varkappa^{02}(\mathbf{T})}$ of $\mathbf{x}_{\varkappa(\mathbf{T})}(\mathbf{\bar{x}})$ to $\varkappa^{02}(\mathbf{T})$ that satisfies the K-hexahedron equations. Hence, there exists a symmetric matrix $M$ such that $\mathbf{\tilde{x}}=\mathbf{\tilde{x}}_{\varkappa(\mathbf{T})}(M)$. Let $\mathbf{\tilde{x}}_{\textup{init}}$ be the restriction of $\mathbf{\tilde{x}}$ to $\varkappa^{02}(T_{\textup{min},n})$. Given any $\mathbf{T}^{\prime}\in\mathcal{C}(n)$, by Proposition 3.22, there exists a sequence of piles $\mathbf{T}_{0},\dots,\mathbf{T}_{\ell}$ with $\mathbf{T}=\mathbf{T}_{0}$ and $\mathbf{T}^{\prime}=\mathbf{T}_{\ell}$ such that $\varkappa(\mathbf{T}_{i-1})$ and $\varkappa(\mathbf{T}_{i})$ are related by a flip for $i=1,\dots,\ell$. Hence, $(\mathbf{x}_{\varkappa(\mathbf{T}_{i-1})}(\mathbf{\bar{x}}),\mathbf{x}_{\varkappa(\mathbf{T}_{i})}(\mathbf{\bar{x}}))$ is K-flipped, so by repeated applications of Lemma 9.5(a) and (b), $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\varkappa^{02}(\mathbf{T}^{\prime})}$ restricts to $\mathbf{x}_{\varkappa(\mathbf{T}^{\prime})}(\mathbf{\bar{x}})$. Because every $I\subseteq[n]$ labels a vertex in $\varkappa(\mathbf{T}^{\prime})$ for some $\mathbf{T}^{\prime}\in\mathcal{C}(n)$, $\mathbf{\bar{x}}=\mathbf{\bar{x}}(M)$, as desired. ∎

## 10 Generalizations of the Kashaev equation

In this section, we describe an axiomatic setup for equations similar to the Kashaev equation and the examples from Sections 5–6.
This allows us to prove all of the results from Sections 5–6. This section is organized as follows: * • Proposition 10.2 and Lemma 10.8 generalize Propositions 2.8, 5.4, 6.1, and 6.7. * • In Definition 10.11, given a polynomial equation resembling the Kashaev equation, we describe how to obtain a set of equations with the same properties as the K-hexahedron equations. * • In Definition 10.18, we define certain signs that appear in our generalized “coherence” equation (10.15). * • Theorem 10.24, the main result in this section, generalizes Theorems 2.22, 5.9, 6.4, and 6.10. The proof of Theorem 10.24 is nearly identical to the proof of Theorem 2.22 from Section 7.

###### Definition 10.1.

For $d\geq 1$ and $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}$, we denote by $\displaystyle[\mathbf{a}]=\big{\\{}(b_{1},\dots,b_{d})\in\operatorname{\mathbb{Z}}^{d}\colon 0\leq|b_{i}|\leq|a_{i}|\text{ and }a_{i}b_{i}\geq 0\text{ for all $i$}\big{\\}}$ the set of integer points in the $\left|a_{1}\right|\times\cdots\times\left|a_{d}\right|$ box with opposite vertices $(0,\dots,0)$ and $(a_{1},\dots,a_{d})$. For $\mathbf{b}=(b_{1},\dots,b_{d}),\mathbf{c}=(c_{1},\dots,c_{d})\in\operatorname{\mathbb{Z}}^{d}$, we write $\mathbf{b}\odot\mathbf{c}=(b_{1}c_{1},\dots,b_{d}c_{d})\in\operatorname{\mathbb{Z}}^{d}$. Denote by $\mathbf{1}=(1,\dots,1)\in\operatorname{\mathbb{Z}}^{d}$ the all-ones vector, and set $\mathbf{1}_{i}=(0,\dots,0,1,0,\dots,0)\in\operatorname{\mathbb{Z}}^{d}$, with $1$ in the $i$th place, for $i=1,\dots,d$. Let $\displaystyle\mathbf{z}_{[\mathbf{a}]}=\\{z_{\mathbf{i}}\colon\mathbf{i}\in[\mathbf{a}]\\}$ be a set of indeterminates. For $i=1,\dots,d$, let $\pi_{\mathbf{a},i}\colon\mathbf{z}_{[\mathbf{a}]}\rightarrow\mathbf{z}_{[\mathbf{a}]}$ be the involution defined by $\displaystyle z_{(j_{1},\dots,j_{d})}\mapsto z_{(j_{1},\dots,a_{i}-j_{i},\dots,j_{d})},$ i.e., we “flip” the index of each variable in its $i$th coordinate. The action of $\pi_{\mathbf{a},i}$ extends from $\mathbf{z}_{[\mathbf{a}]}$ to the polynomial ring $\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$. Given an array $\mathbf{x}=(x_{s})\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}}$ and integer vectors $v\in\operatorname{\mathbb{Z}}^{d}$, $\mathbf{a}\in\operatorname{\mathbb{Z}}_{\geq 0}^{d}$, and $\bm{\alpha}\in\\{-1,1\\}^{d}$, we denote by $\mathbf{x}_{v+[\mathbf{a}\odot\bm{\alpha}]}\in\operatorname{\mathbb{C}}^{[\mathbf{a}]}$ the array whose entries are $\displaystyle(\mathbf{x}_{v+[\mathbf{a}\odot\bm{\alpha}]})_{\mathbf{i}}=x_{v+\mathbf{i}\odot\bm{\alpha}},\qquad\text{for}\quad\mathbf{i}\in[\mathbf{a}].$ In particular, $\displaystyle(\mathbf{x}_{v+[\mathbf{a}]})_{\mathbf{i}}=x_{v+\mathbf{i}},\qquad\text{for}\quad\mathbf{i}\in[\mathbf{a}].$ Thus, given a polynomial $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$, the number $f(\mathbf{x}_{v+[\bm{\alpha}\odot\mathbf{a}]})\in\operatorname{\mathbb{C}}$ is obtained by setting $z_{\mathbf{i}}=x_{v+\bm{\alpha}\odot\mathbf{i}}$ for each variable $z_{\mathbf{i}}$, $\mathbf{i}\in[\mathbf{a}]$. We say that $\mathbf{x}\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}}$ _satisfies $f$_ if $f(\mathbf{x}_{v+[\mathbf{a}]})=0$ for all $v\in\operatorname{\mathbb{Z}}^{d}$.

###### Proposition 10.2.

Let $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}_{\geq 1}$, and let a polynomial $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ satisfy the following conditions:
$(1)$ $f$ is invariant under the action of $\pi_{\mathbf{a},i}$ for $i=1,\dots,d$; $(2)$ $f$ has degree $2$ with respect to the variable $z_{\mathbf{a}}$; as a quadratic polynomial in $z_{\mathbf{a}}$, $f$ has discriminant $D$ which factors as a product $D=f_{1}\cdots f_{d}$, where each polynomial $f_{i}\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}-\mathbf{1}_{i}]}]$ is invariant under the action of $\pi_{\mathbf{a}-\mathbf{1}_{i},j}$ for $j=1,\dots,d$. Then for any $\mathbf{x}=(x_{s})\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}}$ satisfying $f$, we have, for all $v\in\operatorname{\mathbb{Z}}^{d}$: $\displaystyle\left(\prod_{\bm{\alpha}\in\\{-1,0\\}^{d}}\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})\right)^{2}=\left(\prod_{i=1}^{d}\prod_{\begin{subarray}{c}\bm{\beta}=(\beta_{1},\dots,\beta_{d})\in\\{-1,0\\}^{d}\\\ \beta_{i}=0\end{subarray}}f_{i}(\mathbf{x}_{v+\bm{\beta}+[\mathbf{a}]})\right)^{2}.$ (10.1) Moreover, for all $v\in\operatorname{\mathbb{Z}}^{d}$, we have $\displaystyle\left(\prod_{\begin{subarray}{c}\bm{\alpha}=(\alpha_{1},\dots,\alpha_{d})\\\ \alpha_{1},\dots,\alpha_{d}\in\\{-1,0\\}\\\ \alpha_{1}+\cdots+\alpha_{d}\text{ even}\end{subarray}}\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})\right)^{2}$ $\displaystyle\qquad{}=\left(\prod_{\begin{subarray}{c}\bm{\alpha}=(\alpha_{1},\dots,\alpha_{d})\\\ \alpha_{1},\dots,\alpha_{d}\in\\{-1,0\\}\\\ \alpha_{1}+\cdots+\alpha_{d}\text{ odd}\end{subarray}}\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})\right)^{2}$ $\displaystyle\qquad{}=\prod_{i=1}^{d}\prod_{\begin{subarray}{c}\bm{\beta}=(\beta_{1},\dots,\beta_{d})\in\\{-1,0\\}^{d}\\\ \beta_{i}=0\end{subarray}}f_{i}(\mathbf{x}_{v+\bm{\beta}+[\mathbf{a}]}).$ (10.2)

###### Remark 10.3.

The subscripts $v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]$ appearing on the left-hand side of (10.1) run over all boxes of size $a_{1}\times\cdots\times a_{d}$ containing $v+[\mathbf{a}-\mathbf{1}]$. The subscripts $v+\bm{\beta}+[\mathbf{a}]$ appearing on the right-hand side of (10.1) run over $i=1,\dots,d$ and boxes of size $a_{1}\times\cdots\times a_{i-1}\times(a_{i}-1)\times a_{i+1}\times\cdots\times a_{d}$ containing $v+[\mathbf{a}-\mathbf{1}]$. In particular, when $a_{1}=\cdots=a_{d}=1$, all of these products are over boxes of a certain size containing the vertex $v$. For example, in the case $\mathbf{a}=(1,2)$ (as in Proposition 6.7), the boxes we are considering on the left-hand side of (10.1) are given in the top row of Fig. 19, while the boxes we are considering on the right-hand side of (10.1) are given in the bottom row of Fig. 19. Before proving Proposition 10.2, we give several examples of polynomials discussed in previous sections that satisfy conditions (1)–(2) from Proposition 10.2. In the examples below, we write $z_{i_{1}\cdots i_{d}}=z_{(i_{1},\dots,i_{d})}$ for $(i_{1},\dots,i_{d})\in\operatorname{\mathbb{Z}}^{d}$.

###### Example 10.4.

Our first example is the Kashaev equation.
Let $\mathbf{a}=(1,1,1)\in\operatorname{\mathbb{Z}}^{3}$, and let $\displaystyle f=2\big{(}a^{2}+b^{2}+c^{2}+d^{2}\big{)}-(a+b+c+d)^{2}-4(s+t)\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}],$ where $\displaystyle a=z_{000}z_{111},\qquad b=z_{100}z_{011},\qquad c=z_{010}z_{101},\qquad d=z_{001}z_{110},$ $\displaystyle s=z_{000}z_{011}z_{101}z_{110},\qquad t=z_{100}z_{010}z_{001}z_{111}.$ The polynomial $f$ is invariant not only under the action of the $\pi_{\mathbf{a},i}$, but under all symmetries of the cube. Its discriminant (as a polynomial in $z_{111}$) $D$ factors as a product $D=f_{1}f_{2}f_{3}$, with $\displaystyle f_{1}=16(z_{000}z_{011}+z_{010}z_{001}),\qquad f_{2}=z_{000}z_{101}+z_{100}z_{001},\qquad f_{3}=z_{000}z_{110}+z_{100}z_{010}.$ Hence, $f$ satisfies conditions (1)–(2) from Proposition 10.2. Therefore, Proposition 2.8 is a special case of Proposition 10.2.

###### Example 10.5.

Let $\mathbf{a}=(1,1)\in\operatorname{\mathbb{Z}}^{2}$, and let $\displaystyle f=z_{00}^{2}+z_{10}^{2}+z_{01}^{2}+z_{11}^{2}-2(z_{00}z_{10}+z_{10}z_{11}+z_{11}z_{01}+z_{01}z_{00})$ $\displaystyle\hphantom{f=}{}-6(z_{00}z_{11}+z_{10}z_{01})\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}].$ The polynomial $f$ is invariant not only under the action of the $\pi_{\mathbf{a},i}$, but under all symmetries of the square. Its discriminant (as a polynomial in $z_{11}$) $D$ factors as a product $D=f_{1}f_{2}$, with $\displaystyle f_{1}=32(z_{00}+z_{01}),\qquad f_{2}=z_{00}+z_{10}.$ Hence, $f$ satisfies conditions (1)–(2) from Proposition 10.2. Therefore, Proposition 5.4 is a special case of Proposition 10.2.

###### Example 10.6.

Fix $\alpha_{1},\alpha_{2},\alpha_{3}\in\operatorname{\mathbb{C}}$. Let $\mathbf{a}=(3)\in\operatorname{\mathbb{Z}}^{1}$, and let $\displaystyle f=z_{0}^{2}z_{3}^{2}+\alpha_{1}z_{1}^{2}z_{2}^{2}+\alpha_{2}z_{0}z_{1}z_{2}z_{3}+\alpha_{3}\big{(}z_{0}z_{2}^{3}+z_{1}^{3}z_{3}\big{)}\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}].$ The polynomial $f$ is invariant under the action of $\pi_{\mathbf{a},1}$, i.e., replacing $z_{i}$ by $z_{3-i}$. Its discriminant (as a polynomial in $z_{3}$) $D=f_{1}$, with $\displaystyle f_{1}=\alpha_{3}^{2}z_{1}^{6}+2\alpha_{2}\alpha_{3}z_{0}z_{1}^{4}z_{2}+\big{(}\alpha_{2}^{2}-4\alpha_{1}\big{)}z_{0}^{2}z_{1}^{2}z_{2}^{2}-4\alpha_{3}z_{0}^{3}z_{2}^{3}.$ Hence, $f$ satisfies conditions (1)–(2) from Proposition 10.2. Therefore, Proposition 6.1 is a special case of Proposition 10.2.

###### Example 10.7.

Fix $\alpha_{1},\alpha_{2}\in\operatorname{\mathbb{C}}$. Let $\mathbf{a}=(1,2)\in\operatorname{\mathbb{Z}}^{2}$, and let $\displaystyle f=z_{00}^{2}z_{12}^{2}+z_{10}^{2}z_{02}^{2}+\frac{\alpha_{2}^{2}-\alpha_{1}^{2}}{4}z_{01}^{2}z_{11}^{2}-\alpha_{1}\big{(}z_{00}z_{02}z_{11}^{2}+z_{10}z_{12}z_{01}^{2}\big{)}$ $\displaystyle\hphantom{f=}{}-2z_{00}z_{10}z_{02}z_{12}-\alpha_{2}(z_{00}z_{12}z_{01}z_{11}+z_{10}z_{02}z_{01}z_{11})\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}].$ The polynomial $f$ is invariant under the action of $\pi_{\mathbf{a},1}$ and $\pi_{\mathbf{a},2}$. Its discriminant (as a polynomial in $z_{12}$) factors as a product $D=f_{1}f_{2}$, with $\displaystyle f_{1}=\alpha_{1}z_{01}^{2}+4z_{00}z_{02},\qquad f_{2}=\alpha_{1}\big{(}z_{00}^{2}z_{11}^{2}+z_{01}^{2}z_{10}^{2}\big{)}+2\alpha_{2}z_{00}z_{01}z_{10}z_{11}.$ Hence, $f$ satisfies conditions (1)–(2) from Proposition 10.2. Therefore, Proposition 6.7 is a special case of Proposition 10.2.
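The discriminant factorizations asserted in Examples 10.4 and 10.5 can be verified symbolically. The following SymPy sketch (an added check, not part of the original text) confirms both.

```python
# Symbolic sanity check of the discriminant factorizations in
# Examples 10.4 and 10.5.
import sympy as sp

# Example 10.4: the Kashaev polynomial on the cube.
z = {s: sp.Symbol('z' + s) for s in
     ('000', '100', '010', '001', '110', '101', '011', '111')}
a = z['000'] * z['111']; b = z['100'] * z['011']
c = z['010'] * z['101']; d = z['001'] * z['110']
s4 = z['000'] * z['011'] * z['101'] * z['110']
t4 = z['100'] * z['010'] * z['001'] * z['111']
f4 = 2*(a**2 + b**2 + c**2 + d**2) - (a + b + c + d)**2 - 4*(s4 + t4)
D4 = sp.discriminant(f4, z['111'])          # discriminant in z_111
f1 = 16 * (z['000'] * z['011'] + z['010'] * z['001'])
f2 = z['000'] * z['101'] + z['100'] * z['001']
f3 = z['000'] * z['110'] + z['100'] * z['010']
assert sp.expand(D4 - f1 * f2 * f3) == 0

# Example 10.5: the square case.
z00, z10, z01, z11 = sp.symbols('z00 z10 z01 z11')
f5 = (z00**2 + z10**2 + z01**2 + z11**2
      - 2*(z00*z10 + z10*z11 + z11*z01 + z01*z00)
      - 6*(z00*z11 + z10*z01))
D5 = sp.discriminant(f5, z11)               # discriminant in z_11
assert sp.expand(D5 - 32 * (z00 + z01) * (z00 + z10)) == 0
print('discriminant factorizations verified')
```

###### Proof of Proposition 10.2.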
Let $g$ denote the coefficient of $z_{\mathbf{a}}^{2}$ in $f$ (viewed as a polynomial in $z_{\mathbf{a}}$). It is easy to check that $\displaystyle D=f_{1}\cdots f_{d}=\left(\frac{\partial f}{\partial z_{\mathbf{a}}}\right)^{2}-4fg.$ (10.3) Because $\mathbf{x}$ satisfies $f$, we have $\displaystyle\left(\frac{\partial f}{\partial z_{\mathbf{a}}}\right)^{2}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})=(f_{1}\cdots f_{d})(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]}).$ Because each $f_{i}$ is invariant under the action of $\pi_{\mathbf{a}-\mathbf{1}_{i},j}$ for $j=1,\dots,d$, we have $\displaystyle\left(\frac{\partial f}{\partial z_{\mathbf{a}}}\right)^{2}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})=\prod_{i=1}^{d}f_{i}(\mathbf{x}_{v+\bm{\alpha}\odot(\mathbf{1}-\mathbf{1}_{i})+[\mathbf{a}]}).$ (10.4) Given $i\in\\{1,\dots,d\\}$ and $\bm{\beta}=(\beta_{1},\dots,\beta_{d})\in\\{-1,0\\}^{d}$ with $\beta_{i}=0$, there exist exactly two $\bm{\alpha}=(\alpha_{1},\dots,\alpha_{d})\in\\{-1,0\\}^{d}$ with $(\mathbf{1}-\mathbf{1}_{i})\odot\bm{\alpha}=\bm{\beta}$: one with $\alpha_{i}=0$ and the other with $\alpha_{i}=-1$. Hence, $\alpha_{1}+\cdots+\alpha_{d}$ is even for one such choice of $\bm{\alpha}$, and odd for the other. Taking the product over $\bm{\alpha}=(\alpha_{1},\dots,\alpha_{d})\in\\{-1,0\\}^{d}$ with $\alpha_{1}+\cdots+\alpha_{d}$ odd (or even) in (10.4), we obtain (10.2). Equation (10.1) follows. ∎

###### Lemma 10.8.

Let $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}_{\geq 1}$, and let $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ be a polynomial satisfying conditions $(1)$–$(2)$ from Proposition 10.2. Let $\mathbf{x}=(x_{s})\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}}$ be an array satisfying $f$. Fix $v\in\operatorname{\mathbb{Z}}^{d}$ and $\gamma\in\\{-1,1\\}$. Then $\displaystyle\prod_{\bm{\alpha}\in\\{-1,0\\}^{d}}\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})=\gamma\prod_{i=1}^{d}\prod_{\begin{subarray}{c}\bm{\beta}=(\beta_{1},\dots,\beta_{d})\in\\{-1,0\\}^{d}\\\ \beta_{i}=0\end{subarray}}f_{i}(\mathbf{x}_{v+\bm{\beta}+[\mathbf{a}]})$ if and only if $\displaystyle\prod_{\begin{subarray}{c}\bm{\alpha}=(\alpha_{1},\dots,\alpha_{d})\\\ \alpha_{1},\dots,\alpha_{d}\in\\{-1,0\\}\\\ \alpha_{1}+\cdots+\alpha_{d}\text{ even}\end{subarray}}\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})=\gamma\prod_{\begin{subarray}{c}\bm{\alpha}=(\alpha_{1},\dots,\alpha_{d})\\\ \alpha_{1},\dots,\alpha_{d}\in\\{-1,0\\}\\\ \alpha_{1}+\cdots+\alpha_{d}\text{ odd}\end{subarray}}\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]}).$

###### Proof.

This follows immediately from Proposition 10.2. ∎

###### Definition 10.9.

Given $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}_{\geq 1}$ and $1\leq i\leq d$, let $\displaystyle F^{\mathbf{a}}_{i}=\big{\\{}v+[\mathbf{a}-\mathbf{1}_{i}]\colon v\in\operatorname{\mathbb{Z}}^{d}\big{\\}}$ denote the set of boxes of size $a_{1}\times\cdots\times a_{i-1}\times(a_{i}-1)\times a_{i+1}\times\cdots\times a_{d}$ in $\operatorname{\mathbb{Z}}^{d}$.
Set $\displaystyle F^{\mathbf{a}}=\bigcup_{i=1}^{d}F^{\mathbf{a}}_{i}.$ Given an array $\mathbf{\tilde{x}}=(x_{s})_{s\in\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ with $\mathbf{x}=(x_{s})_{s\in\operatorname{\mathbb{Z}}^{d}}$, $i\in\\{1,\dots,d\\}$, and $v\in\operatorname{\mathbb{Z}}^{d}$, we remark that $\mathbf{x}_{v+[\mathbf{a}-\mathbf{1}_{i}]}$ (with $\mathbf{x}$ bold) refers to the array defined in Definition 10.1, whereas $x_{v+[\mathbf{a}-\mathbf{1}_{i}]}$ (with $x$ not bold) refers to the component of $\mathbf{\tilde{x}}$ indexed by $v+[\mathbf{a}-\mathbf{1}_{i}]\in F^{\mathbf{a}}$. ###### Definition 10.10. For $\mathbf{a}\in\operatorname{\mathbb{Z}}_{\geq 1}^{d}$, define $[\mathbf{a}^{*}]$ by $\displaystyle[\mathbf{a}^{*}]=\left([\mathbf{a}]\setminus\\{\mathbf{a}\\}\right)\cup\bigcup_{i=1}^{d}\\{[\mathbf{a}-\mathbf{1}_{i}]\\}.$ In other words, the set $[\mathbf{a}^{*}]$ consists of $[\mathbf{a}]\setminus\\{\mathbf{a}\\}\subset\operatorname{\mathbb{Z}}^{d}$, along with the sets $[\mathbf{a}-\mathbf{1}_{i}]\in F^{\mathbf{a}}$ for $i=1,\dots,d$. We want to develop a generalization of the K-hexahedron equations for arrays indexed by $\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}$. Suppose that $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ is a polynomial satisfying conditions (1)–(2) from Proposition 10.2, with the polynomials $f_{1},\dots,f_{d}$ from condition (2) fixed. Let $g$ and $h$ be the coefficients of $z_{\mathbf{a}}^{2}$ and $z_{\mathbf{a}}$ in $f$, viewed as a polynomial in $z_{\mathbf{a}}$. We consider arrays $\mathbf{\tilde{x}}=(x_{s})\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ such that $\displaystyle x_{v+\mathbf{a}}=\frac{-h(\mathbf{x}_{v+[\mathbf{a}]})+\prod\limits_{i=1}^{d}x_{v+[\mathbf{a}-\mathbf{1}_{i}]}}{2g(\mathbf{x}_{v+[\mathbf{a}]})}\qquad\text{for all $v\in\operatorname{\mathbb{Z}}^{d}$},$ (10.5) $\displaystyle x_{v+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}=r_{i}(x_{v+s}\colon s\in[\mathbf{a}^{*}])\qquad\text{for $i=1,\dots,d$}\text{ and for all $v\in\operatorname{\mathbb{Z}}^{d}$},$ (10.6) $\displaystyle x_{v+[\mathbf{a}-\mathbf{1}_{i}]}^{2}=f_{i}(\mathbf{x}_{v+[\mathbf{a}-\mathbf{1}_{i}]})\qquad\text{for $i=1,\dots,d$}\text{ and for all $v\in\operatorname{\mathbb{Z}}^{d}$},$ (10.7) where $r_{1},\dots,r_{d}$ are some rational functions in the variables $z_{s}$ for $s\in[\mathbf{a}^{*}]$. Note that if $\mathbf{\tilde{x}}$ satisfies conditions (10.5) and (10.7), then by the quadratic formula, its restriction $\mathbf{x}=(x_{s})_{s\in\operatorname{\mathbb{Z}}^{d}}$ satisfies $f$. In the following definition, we formulate the properties that our tuple of rational functions $(r_{1},\dots,r_{d})$ should have in order for the subsequent developments to follow. ###### Definition 10.11. Let $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}_{\geq 1}$, and let $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ be a polynomial satisfying conditions (1)–(2) from Proposition 10.2. Fix the polynomials $f_{1},\dots,f_{d}$ from condition (2) of Proposition 10.2. Let $g$ be the coefficient of $z_{\mathbf{a}}^{2}$ in $f$, viewed as a polynomial in $z_{\mathbf{a}}$. For $i=1,\dots,d$, let $r_{i}=\frac{p_{i}}{q_{i}}$ be rational functions in the variables $z_{s}$ for $s\in[\mathbf{a}^{*}]$, with $p_{i}$, $q_{i}$ polynomials in these variables. 
We say that $(r_{1},\dots,r_{d})$ is _adapted to $(f;f_{1},\dots,f_{d})$_ if there exist signs $\beta_{1},\dots,\beta_{d}\in\\{-1,1\\}$ such that the following properties hold for $i=1,\dots,d$: * • the denominator $q_{i}$ of $r_{i}$ is of the form $\displaystyle q_{i}=g^{b_{i}}\prod_{j\in\\{1,\dots,d\\}\setminus\\{i\\}}z_{[\mathbf{a}-\mathbf{1}_{j}]}^{b_{ij}},$ where $b_{i}\in\operatorname{\mathbb{Z}}_{\geq 0}$ and $b_{ij}\in\\{0,1\\}$, * • for all arrays $\mathbf{\tilde{x}}=(x_{s})\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ satisfying (10.5), (10.7), and $\displaystyle q_{i}(x_{v+s}\colon s\in[\mathbf{a}^{*}])\not=0\qquad\text{for all $v\in\operatorname{\mathbb{Z}}^{d}$},$ (10.8) $\displaystyle g(\mathbf{x}_{v+[\mathbf{a}]})\not=0\qquad\text{for all $v\in\operatorname{\mathbb{Z}}^{d}$},$ (10.9) the following condition holds: $\displaystyle r_{i}(x_{v+s}\colon s\in[\mathbf{a}^{*}])=\beta_{i}\frac{\left(\frac{\partial f}{\partial z_{\mathbf{a}}}\right)(\mathbf{x}_{v+a_{i}\mathbf{1}_{i}+[\mathbf{a}\odot(\mathbf{1}-2\mathbf{1}_{i})]})}{\prod\limits_{j\in\\{1,\dots,d\\}-\\{i\\}}x_{v+[\mathbf{a}-\mathbf{1}_{j}]}}.$ (10.10) Note that one can obtain a tuple $(r_{1},\dots,r_{d})$ adapted to $(f;f_{1},\dots,f_{d})$ by choosing the signs $\beta_{1},\dots,\beta_{d}\in\\{-1,1\\}$ and using condition (10.5) to replace all instances of $x_{v+\mathbf{a}}$ in (10.10). In the following proposition, we show that with $(r_{1},\dots,r_{d})$ adapted to $(f;f_{1},\dots,f_{d})$, the recurrence (10.5)–(10.6) “propagates” the condition (10.7). ###### Proposition 10.12. Let $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}_{\geq 1}$, and let $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ be a polynomial satisfying conditions $(1)$–$(2)$ from Proposition 10.2. Fix the polynomials $f_{1},\dots,f_{d}$ from condition $(2)$ of Proposition 10.2. Let $g$ be the coefficient of $z_{\mathbf{a}}^{2}$ in $f$, viewed as a polynomial in $z_{\mathbf{a}}$. Let $(r_{1},\dots,r_{d})$ be a $d$-tuple of rational functions in the variables $z_{s}$ for $s\in[\mathbf{a}^{*}]$ adapted to $(f;f_{1},\dots,f_{d})$. Fix $v\in\operatorname{\mathbb{Z}}^{d}$. Let $\mathbf{\tilde{x}}=(x_{s})\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ be an array satisfying conditions (10.5)–(10.6), (10.8)–(10.9), and $\displaystyle x_{v+[\mathbf{a}-\mathbf{1}_{i}]}^{2}=f_{i}(\mathbf{x}_{v+[\mathbf{a}-\mathbf{1}_{i}]})\qquad\text{for $i=1,\dots,d$}.$ Then $\displaystyle x_{v+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}^{2}=f_{i}(\mathbf{x}_{v+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]})\qquad\text{for $i=1,\dots,d$}.$ ###### Proof. 
By identity (10.3), $\displaystyle\left(\frac{\partial f}{\partial z_{\mathbf{a}}}\right)^{2}(\mathbf{x}_{v+a_{i}\mathbf{1}_{i}+[\mathbf{a}\odot(\mathbf{1}-2\mathbf{1}_{i})]})=(f_{1}\cdots f_{d})(\mathbf{x}_{v+a_{i}\mathbf{1}_{i}+[\mathbf{a}\odot(\mathbf{1}-2\mathbf{1}_{i})]})$ $\displaystyle\hphantom{\left(\frac{\partial f}{\partial z_{\mathbf{a}}}\right)^{2}(\mathbf{x}_{v+a_{i}\mathbf{1}_{i}+[\mathbf{a}\odot(\mathbf{1}-2\mathbf{1}_{i})]})}{}=f_{i}(\mathbf{x}_{v+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]})\prod_{j\in\\{1,\dots,d\\}-\\{i\\}}f_{j}(\mathbf{x}_{v+[\mathbf{a}-\mathbf{1}_{j}]}).$ Hence, $\displaystyle x_{v+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}^{2}=\beta_{i}^{2}\frac{\left(\frac{\partial f}{\partial z_{\mathbf{a}}}\right)^{2}(\mathbf{x}_{v+a_{i}\mathbf{1}_{i}+[\mathbf{a}\odot(\mathbf{1}-2\mathbf{1}_{i})]})}{\prod\limits_{j\in\\{1,\dots,d\\}-\\{i\\}}x_{v+[\mathbf{a}-\mathbf{1}_{j}]}^{2}}$ $\displaystyle\hphantom{x_{v+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}^{2}}{}=\frac{f_{i}(\mathbf{x}_{v+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]})\prod\limits_{j\in\\{1,\dots,d\\}-\\{i\\}}f_{j}(\mathbf{x}_{v+[\mathbf{a}-\mathbf{1}_{j}]})}{\prod\limits_{j\in\\{1,\dots,d\\}-\\{i\\}}f_{j}(\mathbf{x}_{v+[\mathbf{a}-\mathbf{1}_{j}]})}=f_{i}(\mathbf{x}_{v+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}),$ as desired. ∎ We now describe $d$-tuples of rational functions adapted to $(f;f_{1},\dots,f_{d})$ for the four polynomials $f$ in Examples 10.4–10.7.

###### Example 10.13.

Continuing with Example 10.4, let us write $\displaystyle z_{i_{1}\left(i_{2}+\operatorname{\frac{1}{2}}\right)\left(i_{3}+\operatorname{\frac{1}{2}}\right)}=z_{(i_{1},i_{2},i_{3})+[\mathbf{a}-\mathbf{1}_{1}]},\qquad z_{\left(i_{1}+\operatorname{\frac{1}{2}}\right)i_{2}\left(i_{3}+\operatorname{\frac{1}{2}}\right)}=z_{(i_{1},i_{2},i_{3})+[\mathbf{a}-\mathbf{1}_{2}]},$ $\displaystyle z_{\left(i_{1}+\operatorname{\frac{1}{2}}\right)\left(i_{2}+\operatorname{\frac{1}{2}}\right)i_{3}}=z_{(i_{1},i_{2},i_{3})+[\mathbf{a}-\mathbf{1}_{3}]}.$ Set $\displaystyle r_{1}(z_{s}\colon s\in[\mathbf{a}^{*}])=\frac{4z_{\frac{1}{2}0\frac{1}{2}}z_{\frac{1}{2}\frac{1}{2}0}+z_{0\frac{1}{2}\frac{1}{2}}z_{100}}{z_{000}},\qquad r_{2}(z_{s}\colon s\in[\mathbf{a}^{*}])=\frac{z_{0\frac{1}{2}\frac{1}{2}}z_{\frac{1}{2}\frac{1}{2}0}+4z_{\frac{1}{2}0\frac{1}{2}}z_{010}}{4z_{000}},$ $\displaystyle r_{3}(z_{s}\colon s\in[\mathbf{a}^{*}])=\frac{z_{0\frac{1}{2}\frac{1}{2}}z_{\frac{1}{2}0\frac{1}{2}}+4z_{\frac{1}{2}\frac{1}{2}0}z_{001}}{4z_{000}}.$ It can be checked that $(r_{1},r_{2},r_{3})$ is adapted to $(f;f_{1},f_{2},f_{3})$ by following the construction at the end of Definition 10.11 with $\beta_{1}=\beta_{2}=\beta_{3}=-1$ and using condition (10.7). Note that $r_{1}$, $r_{2}$, $r_{3}$ match the right-hand sides of (2.15)–(2.17), and the K-hexahedron equations (2.15)–(2.18) and (2.9) are the same as conditions (10.5)–(10.7) if each $z_{v+[\mathbf{a}-\mathbf{1}_{1}]}$ for $v\in\operatorname{\mathbb{Z}}^{3}$ is rescaled by a factor of $4$.

###### Example 10.14.
Continuing with Example 10.5, let us write $\displaystyle z_{i_{1}\left(i_{2}+\operatorname{\frac{1}{2}}\right)}=z_{(i_{1},i_{2})+[\mathbf{a}-\mathbf{1}_{1}]},\qquad z_{\left(i_{1}+\operatorname{\frac{1}{2}}\right)i_{2}}=z_{(i_{1},i_{2})+[\mathbf{a}-\mathbf{1}_{2}]}.$ Set $\displaystyle r_{1}(z_{s}\colon s\in[\mathbf{a}^{*}])=z_{0\frac{1}{2}}+8z_{\frac{1}{2}0},\qquad r_{2}(z_{s}\colon s\in[\mathbf{a}^{*}])=z_{\frac{1}{2}0}+\frac{1}{4}z_{0\frac{1}{2}}.$ It can be checked that $(r_{1},r_{2})$ is adapted to $(f;f_{1},f_{2})$ by following the construction at the end of Definition 10.11 with $\beta_{1}=\beta_{2}=-1$ and using condition (10.7). Note that $r_{1}$, $r_{2}$ match the right-hand sides of (5.8)–(5.9), and the conditions (5.7)–(5.11) are the same as conditions (10.5)–(10.7) if each $z_{v+[\mathbf{a}-\mathbf{1}_{1}]}$ for $v\in\operatorname{\mathbb{Z}}^{2}$ is rescaled by a factor of $4\sqrt{2}$.

###### Example 10.15.

Continuing with Example 10.6, let us write $\displaystyle w_{i+1}=z_{i+[\mathbf{a}-\mathbf{1}_{1}]}.$ Set $\displaystyle r_{1}(z_{s}\colon s\in[\mathbf{a}^{*}])=\frac{\alpha_{3}^{2}z_{1}^{6}+\alpha_{2}\alpha_{3}z_{0}z_{1}^{4}z_{2}+2\alpha_{3}z_{0}^{3}z_{2}^{3}+w_{1}^{2}+\big{(}{-}2\alpha_{3}z_{1}^{3}-\alpha_{2}z_{0}z_{1}z_{2}\big{)}w_{1}}{2z_{0}^{3}}.$ It can be checked that $(r_{1})$ is adapted to $(f;f_{1})$ by following the construction at the end of Definition 10.11 with $\beta_{1}=1$ and using condition (10.7). Note that $r_{1}$ matches the right-hand side of equation (6.6), and conditions (6.5)–(6.7) are the same as conditions (10.5)–(10.7).

###### Example 10.16.

Continuing with Example 10.7, let us write $\displaystyle w_{i_{1}(i_{2}+1)}=z_{(i_{1},i_{2})+[\mathbf{a}-\mathbf{1}_{1}]},\qquad w_{\left(i_{1}+\operatorname{\frac{1}{2}}\right)\left(i_{2}+\operatorname{\frac{1}{2}}\right)}=z_{(i_{1},i_{2})+[\mathbf{a}-\mathbf{1}_{2}]}.$ Set $\displaystyle r_{1}(z_{s}\colon s\in[\mathbf{a}^{*}])=\frac{z_{10}w_{01}+w_{\operatorname{\frac{1}{2}}\operatorname{\frac{1}{2}}}}{z_{00}},$ $\displaystyle r_{2}(z_{s}\colon s\in[\mathbf{a}^{*}])=\frac{z_{01}(\alpha_{1}z_{01}z_{10}+\alpha_{2}z_{00}z_{11})w_{01}+\big{(}\alpha_{1}z_{01}^{2}+2z_{00}z_{02}\big{)}w_{\operatorname{\frac{1}{2}}\operatorname{\frac{1}{2}}}}{2z_{00}^{2}}.$ It can be checked that $(r_{1},r_{2})$ is adapted to $(f;f_{1},f_{2})$ by following the construction at the end of Definition 10.11 with $\beta_{1}=\beta_{2}=-1$ and using condition (10.7). Note that $r_{1}$, $r_{2}$ match the right-hand sides of equations (6.15)–(6.16), and conditions (6.14)–(6.18) are the same as conditions (10.5)–(10.7).

###### Lemma 10.17.

Let $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}_{\geq 1}$. Let $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ be a polynomial that is irreducible over $\operatorname{\mathbb{C}}$ and satisfies conditions $(1)$–$(2)$ from Proposition 10.2. Fix the polynomials $f_{1},\dots,f_{d}$ from condition $(2)$ of Proposition 10.2. Let $(r_{1},\dots,r_{d})$ be a tuple of rational functions in the variables $z_{s}$ for $s\in[\mathbf{a}^{*}]$ that is adapted to $(f;f_{1},\dots,f_{d})$.
Then for all $\bm{\alpha}\in\\{-1,1\\}^{d}$, there exists a unique sign $\gamma_{\bm{\alpha}}\in\\{-1,1\\}$ such that the following condition holds for all arrays $\mathbf{\tilde{x}}=(x_{s})\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ satisfying (10.5)–(10.7) and (10.8)–(10.9): $\displaystyle\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v+[\bm{\alpha}\odot\mathbf{a}]})=\gamma_{\bm{\alpha}}\prod_{i=1}^{d}x_{v+[\bm{\alpha}\odot(\mathbf{a}-\mathbf{1}_{i})]}\qquad\text{for all $v\in\operatorname{\mathbb{Z}}^{d}$}.$ (10.11)

###### Definition 10.18.

For $f$ and $(r_{1},\dots,r_{d})$ as in Lemma 10.17, we call the signs $(\gamma_{\bm{\alpha}})_{\bm{\alpha}\in\\{-1,1\\}^{d}}$ given in Lemma 10.17 the _propagation signs_ corresponding to $(f;f_{1},\dots,f_{d};r_{1},\dots,r_{d})$.

The proof of Lemma 10.17 relies on the following lemma.

###### Lemma 10.19.

Let $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}_{\geq 1}$. Let $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ be a polynomial that is irreducible over $\operatorname{\mathbb{C}}$ and satisfies conditions $(1)$–$(2)$ from Proposition 10.2. Let $j\not=k\in\\{1,\dots,d\\}$. Then there exists $\alpha_{jk}\in\\{-1,1\\}$ such that for all $\bm{\alpha}\in\\{0,1\\}^{d}$,

$\displaystyle\frac{\partial f}{\partial z_{\mathbf{a}\odot\bm{\alpha}}}\frac{\partial f}{\partial z_{\mathbf{a}\odot\bm{\alpha}+(\mathbf{a}-2\mathbf{a}\odot\bm{\alpha})\odot(\mathbf{1}_{j}+\mathbf{1}_{k})}}-\alpha_{jk}\frac{\partial f}{\partial z_{\mathbf{a}\odot\bm{\alpha}+(\mathbf{a}-2\mathbf{a}\odot\bm{\alpha})\odot\mathbf{1}_{j}}}\frac{\partial f}{\partial z_{\mathbf{a}\odot\bm{\alpha}+(\mathbf{a}-2\mathbf{a}\odot\bm{\alpha})\odot\mathbf{1}_{k}}}$

is a multiple of $f$.

###### Proof.

Because $f$ satisfies conditions (1)–(2) from Proposition 10.2,

$\displaystyle\left(\frac{\partial f}{\partial z_{\mathbf{a}}}\right)^{2}\left(\frac{\partial f}{\partial z_{\mathbf{a}\odot(\mathbf{1}-\mathbf{1}_{j}-\mathbf{1}_{k})}}\right)^{2}-\left(\frac{\partial f}{\partial z_{\mathbf{a}\odot(\mathbf{1}-\mathbf{1}_{j})}}\right)^{2}\left(\frac{\partial f}{\partial z_{\mathbf{a}\odot(\mathbf{1}-\mathbf{1}_{k})}}\right)^{2}$ $\displaystyle\qquad{}\equiv(f_{1}\cdots f_{d})(\pi_{\mathbf{a},j}\pi_{\mathbf{a},k}(f_{1}\cdots f_{d}))-(\pi_{\mathbf{a},j}(f_{1}\cdots f_{d}))(\pi_{\mathbf{a},k}(f_{1}\cdots f_{d}))$ $\displaystyle\qquad{}=(1-1)f_{j}f_{k}(\pi_{\mathbf{a},j}(f_{j}))(\pi_{\mathbf{a},k}(f_{k}))\prod_{i\in\\{1,\dots,d\\}\setminus\\{j,k\\}}f_{i}^{2}=0$

mod $f$. Hence, by the irreducibility of $f$, there exists $\alpha_{jk}\in\\{-1,1\\}$ such that

$\displaystyle\frac{\partial f}{\partial z_{\mathbf{a}}}\frac{\partial f}{\partial z_{\mathbf{a}\odot(\mathbf{1}-\mathbf{1}_{j}-\mathbf{1}_{k})}}-\alpha_{jk}\frac{\partial f}{\partial z_{\mathbf{a}\odot(\mathbf{1}-\mathbf{1}_{j})}}\frac{\partial f}{\partial z_{\mathbf{a}\odot(\mathbf{1}-\mathbf{1}_{k})}}$

is a multiple of $f$. The full lemma follows from condition (1) of Proposition 10.2. ∎

###### Proof of Lemma 10.17.

We proceed by induction on the number of $-1$s in $\bm{\alpha}$. When $\bm{\alpha}=\mathbf{1}$, we have $\gamma_{\bm{\alpha}}=1$ by condition (10.5). If $\bm{\alpha}$ contains one $-1$, say $\bm{\alpha}=\mathbf{1}-2\mathbf{1}_{i}$, then $\gamma_{\bm{\alpha}}=\beta_{i}$, where $\beta_{i}$ is the sign from Definition 10.11. Suppose $\ell\geq 2$, and $\bm{\alpha}$ has $\ell$ $-1$s, including $-1$s at positions $j$ and $k$.
Then by Lemma 10.19 and our inductive hypothesis,

$\displaystyle\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v+[\bm{\alpha}\odot\mathbf{a}]})=\frac{\alpha_{jk}\frac{\partial f}{\partial z_{\mathbf{a}\odot(\mathbf{1}-\mathbf{1}_{j})}}(\mathbf{x}_{v+[\bm{\alpha}\odot\mathbf{a}]})\frac{\partial f}{\partial z_{\mathbf{a}\odot(\mathbf{1}-\mathbf{1}_{k})}}(\mathbf{x}_{v+[\bm{\alpha}\odot\mathbf{a}]})}{\frac{\partial f}{\partial z_{\mathbf{a}\odot(\mathbf{1}-\mathbf{1}_{j}-\mathbf{1}_{k})}}(\mathbf{x}_{v+[\bm{\alpha}\odot\mathbf{a}]})}$ $\displaystyle\hphantom{\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v+[\bm{\alpha}\odot\mathbf{a}]})}{}=\alpha_{jk}\gamma_{\bm{\alpha}+2(\mathbf{1}_{j}+\mathbf{1}_{k})}\gamma_{\bm{\alpha}+2\mathbf{1}_{j}}\gamma_{\bm{\alpha}+2\mathbf{1}_{k}}\prod_{i=1}^{d}x_{v+[\bm{\alpha}\odot(\mathbf{a}-\mathbf{1}_{i})]},$

so setting $\gamma_{\bm{\alpha}}=\alpha_{jk}\gamma_{\bm{\alpha}+2(\mathbf{1}_{j}+\mathbf{1}_{k})}\gamma_{\bm{\alpha}+2\mathbf{1}_{j}}\gamma_{\bm{\alpha}+2\mathbf{1}_{k}}\in\\{-1,1\\}$, we obtain the desired result. ∎

###### Example 10.20.

Let us continue with Examples 10.4 and 10.13. Following the argument in the proof of Lemma 10.17, it can be shown that $\gamma_{\bm{\alpha}}=1$ if $\bm{\alpha}=\pm\mathbf{1}$, and $\gamma_{\bm{\alpha}}=-1$ otherwise. Note that this fact is equivalent to Lemma 7.2.

###### Example 10.21.

Let us continue with Examples 10.5 and 10.14. Following the argument in the proof of Lemma 10.17, it can be shown that $\gamma_{\bm{\alpha}}=1$ if $\bm{\alpha}=\mathbf{1}$, and $\gamma_{\bm{\alpha}}=-1$ otherwise. In particular, $\gamma_{-\mathbf{1}}=-1$. Hence, if $\mathbf{\tilde{x}}=(x_{s})\in(\operatorname{\mathbb{C}}^{*})^{\operatorname{\mathbb{Z}}^{2}\cup F_{\mathbf{a}}}$ satisfies conditions (5.7)–(5.11), it follows that $(x_{-s})_{s\in\operatorname{\mathbb{Z}}^{2}\cup F_{\mathbf{a}}}$ cannot satisfy conditions (5.7)–(5.11).

###### Example 10.22.

Let us continue with Examples 10.6 and 10.15. It is straightforward to show that $\gamma_{(1)}=\gamma_{(-1)}=1$.

###### Example 10.23.

Let us continue with Examples 10.7 and 10.16. Following the argument in the proof of Lemma 10.17, it can be shown that $\gamma_{\bm{\alpha}}=1$ if $\bm{\alpha}=\pm\mathbf{1}$, and $\gamma_{\bm{\alpha}}=-1$ otherwise.
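The induction above is effectively an algorithm: once the signs $\beta_{i}$ from Definition 10.11 and the pairwise signs $\alpha_{jk}$ from Lemma 10.19 are known, all propagation signs are determined. The following Python sketch (our own illustration; all names are ours) implements the recursion and, as a consistency check in the spirit of Lemma 10.17, verifies that the computed sign does not depend on which pair of $-1$-positions is used. The inputs $\beta_{i}=-1$ come from Example 10.13, while $\alpha_{jk}=-1$ is an assumption on our part, chosen because it reproduces the sign pattern stated in Example 10.20.

```python
from itertools import product

def flip_to_plus(alpha, positions):
    """Return alpha with the entries at the given positions set to +1."""
    b = list(alpha)
    for p in positions:
        b[p] = 1
    return tuple(b)

def propagation_signs(d, beta, alpha_pair):
    """Propagation signs gamma_alpha (Definition 10.18), computed by the
    induction in the proof of Lemma 10.17.

    beta[i]            -- the sign beta_i from Definition 10.11
    alpha_pair[(j, k)] -- the sign alpha_jk from Lemma 10.19 (j < k)
    """
    gamma = {(1,) * d: 1}                       # gamma_1 = 1, by (10.5)
    for i in range(d):                          # one -1, at position i
        a = [1] * d
        a[i] = -1
        gamma[tuple(a)] = beta[i]
    for ell in range(2, d + 1):                 # induction on the number of -1s
        for alpha in product([-1, 1], repeat=d):
            neg = [i for i, ai in enumerate(alpha) if ai == -1]
            if len(neg) != ell:
                continue
            candidates = set()
            for j in neg:                       # try every admissible pair j < k
                for k in neg:
                    if j < k:
                        candidates.add(alpha_pair[(j, k)]
                                       * gamma[flip_to_plus(alpha, (j, k))]
                                       * gamma[flip_to_plus(alpha, (j,))]
                                       * gamma[flip_to_plus(alpha, (k,))])
            assert len(candidates) == 1         # choice-independence (Lemma 10.17)
            gamma[alpha] = candidates.pop()
    return gamma

d = 3
gamma = propagation_signs(d, beta=[-1] * d,
                          alpha_pair={(j, k): -1 for j in range(d)
                                      for k in range(j + 1, d)})
for alpha in sorted(gamma):
    print(alpha, gamma[alpha])   # +1 exactly at (1,1,1) and (-1,-1,-1)
```

Running the same recursion with $d=2$ gives $\gamma_{-\mathbf{1}}=\alpha_{12}\beta_{1}\beta_{2}$, so with $\beta_{1}=\beta_{2}=-1$ the value $\gamma_{-\mathbf{1}}=-1$ of Example 10.21 corresponds to $\alpha_{12}=-1$.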
We now state the main theorem of this section.

###### Theorem 10.24.

Let $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}_{\geq 1}$. Let $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ be a polynomial that is irreducible over $\operatorname{\mathbb{C}}$ and satisfies conditions $(1)$–$(2)$ from Proposition 10.2. Fix the polynomials $f_{1},\dots,f_{d}$ from condition $(2)$ of Proposition 10.2. Let $g$ and $h$ be the coefficients of $z_{\mathbf{a}}^{2}$ and $z_{\mathbf{a}}$ in $f$, viewed as a polynomial in $z_{\mathbf{a}}$. Let $(r_{1},\dots,r_{d})$ be a tuple of rational functions in the variables $z_{s}$ for $s\in[\mathbf{a}^{*}]$ that is adapted to $(f;f_{1},\dots,f_{d})$. Let $(\gamma_{\bm{\alpha}})_{\bm{\alpha}\in\\{-1,1\\}^{d}}$ be the propagation signs corresponding to $(f;f_{1},\dots,f_{d};r_{1},\dots,r_{d})$.

$(a)$ Let $\mathbf{x}=(x_{s})_{s\in\operatorname{\mathbb{Z}}^{d}}$ be an array such that $\displaystyle\mathbf{x}\text{ satisfies }f,$ (10.12) $\displaystyle\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v+[\mathbf{a}]})\not=0\qquad\text{for all $v\in\operatorname{\mathbb{Z}}^{d}$ if $d>1$},$ (10.13) $\displaystyle g(\mathbf{x}_{v+[\mathbf{a}]})\not=0\qquad\text{for all $v\in\operatorname{\mathbb{Z}}^{d}$},$ (10.14) $\displaystyle\prod_{\bm{\alpha}\in\\{-1,0\\}^{d}}\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})$ (10.15) $\displaystyle\qquad{}=\left(\prod_{\bm{\alpha}\in\\{-1,1\\}^{d}}\gamma_{\bm{\alpha}}\right)\prod_{i=1}^{d}\prod_{\bm{\beta}=(\beta_{1},\dots,\beta_{d})\in\\{-1,0\\}^{d}\colon\beta_{i}=0}f_{i}(\mathbf{x}_{v+\bm{\beta}+[\mathbf{a}]})\qquad\text{for all $v\in\operatorname{\mathbb{Z}}^{d}$}.$ Then $\mathbf{x}$ can be extended to an array $\mathbf{\tilde{x}}=(x_{s})\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ satisfying (10.5)–(10.7).

$(b)$ Conversely, if $\mathbf{\tilde{x}}=(x_{s})\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ satisfies conditions (10.5)–(10.7) and (10.8)–(10.9), then the restriction of $\mathbf{\tilde{x}}$ to $\operatorname{\mathbb{Z}}^{d}$ satisfies $f$ and the condition (10.15).

###### Remark 10.25.

Equation (10.15) is a generalization of the coherence condition (equation (2.7)) for the Kashaev equation. The following proposition states that the sign $\prod\limits_{\bm{\alpha}\in\\{-1,1\\}^{d}}\gamma_{\bm{\alpha}}$ in (10.15) is independent of the choice of $(r_{1},\dots,r_{d})$ if $d\geq 2$.

###### Proposition 10.26.

Let $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}_{\geq 1}$ with $d\geq 2$. Let $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ be a polynomial that is irreducible over $\operatorname{\mathbb{C}}$ and satisfies conditions $(1)$–$(2)$ from Proposition 10.2. Then there exists a sign $\gamma\in\\{-1,1\\}$ such that for any tuple of rational functions $(r_{1},\dots,r_{d})$ in the variables $z_{s}$ for $s\in[\mathbf{a}^{*}]$ that is adapted to $(f;f_{1},\dots,f_{d})$, we have $\displaystyle\gamma=\prod_{\bm{\alpha}\in\\{-1,1\\}^{d}}\gamma_{\bm{\alpha}},$ where $(\gamma_{\bm{\alpha}})_{\bm{\alpha}\in\\{-1,1\\}^{d}}$ are the propagation signs corresponding to $(f;f_{1},\dots,f_{d};r_{1},\dots,r_{d})$.

###### Remark 10.27.
By Lemma 10.8, given $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ satisfying conditions (1)–(2) from Proposition 10.2 and an array $\mathbf{x}=(x_{s})_{s\in\operatorname{\mathbb{Z}}^{d}}$ satisfying $f$, the following are equivalent: * • $\mathbf{x}$ satisfies condition (10.15), * • $\mathbf{x}$ satisfies $\displaystyle\prod_{\begin{subarray}{c}\bm{\alpha}=(\alpha_{1},\dots,\alpha_{d})\\\ \alpha_{1},\dots,\alpha_{d}\in\\{-1,0\\}\\\ \alpha_{1}+\cdots+\alpha_{d}\text{ even}\end{subarray}}\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})$ $\displaystyle\qquad{}=\gamma\prod_{\begin{subarray}{c}\bm{\alpha}=(\alpha_{1},\dots,\alpha_{d})\\\ \alpha_{1},\dots,\alpha_{d}\in\\{-1,0\\}\\\ \alpha_{1}+\cdots+\alpha_{d}\text{ odd}\end{subarray}}\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})\qquad\text{for all $v\in\operatorname{\mathbb{Z}}^{d}$}.$ (10.16) Hence, one can replace condition (10.15) in Theorem 10.24 by condition (10.16). ###### Example 10.28. Continuing with Examples 10.4, 10.13, and 10.20, Theorem 2.22 is a special case of Theorem 10.24. Theorem 2.9 is a special case of Theorem 10.24(b), where we require all values of $\mathbf{\tilde{x}}$, including the values indexed by $F^{\mathbf{a}}$, to be positive. ###### Example 10.29. Continuing with Examples 10.5, 10.14, and 10.21, Theorem 5.9 is a special case of Theorem 10.24. Theorem 5.7 is a special case of Theorem 10.24(b), where we require $x_{s}>0$ for $s\in\operatorname{\mathbb{Z}}^{2}_{\\{0,1,2,\dots\\}}$, $s\in\operatorname{\mathbb{Z}}^{2}_{\\{0,1,2,\dots\\}}+[\mathbf{a}-\mathbf{1}_{1}]$, and $s\in\operatorname{\mathbb{Z}}^{2}_{\\{0,1,2,\dots\\}}+[\mathbf{a}-\mathbf{1}_{2}]$. ###### Example 10.30. Continuing with Examples 10.6, 10.15, and 10.22, Theorem 6.4 is a special case of Theorem 10.24. Theorem 6.3 is a special case of Theorem 10.24(b), where we require all values of $\mathbf{\tilde{x}}$, including the values indexed by $F^{\mathbf{a}}$, to be positive. ###### Example 10.31. Continuing with Examples 10.7, 10.16, and 10.23, Theorem 6.10 is a special case of Theorem 10.24. Theorem 6.9 is a special case of Theorem 10.24(b), where we require all values of $\mathbf{\tilde{x}}$, including the values indexed by $F^{\mathbf{a}}$, to be positive. Before we prove Theorem 10.24, we first prove Proposition 10.26. Proposition 10.26 follows from the lemma below. ###### Lemma 10.32. Let $\mathbf{a}=(a_{1},\dots,a_{d})\in\operatorname{\mathbb{Z}}^{d}_{\geq 1}$. Let $f\in\operatorname{\mathbb{C}}[\mathbf{z}_{[\mathbf{a}]}]$ be a polynomial that is irreducible over $\operatorname{\mathbb{C}}$ and satisfies conditions $(1)$–$(2)$ from Proposition 10.2. Fix $j\in\\{1,\dots,d\\}$. Let $(r_{1},\dots,r_{d})$ and $(\tilde{r}_{1},\dots,\tilde{r}_{d})$ be tuples of rational functions in the variables $z_{s}$ for $s\in[\mathbf{a}^{*}]$ that are adapted to $(f;f_{1},\dots,f_{d})$, such that $\tilde{r}_{j}=-r_{j}$ and $\tilde{r}_{i}=r_{i}$ for $i\not=j$. Let $(\gamma_{\bm{\alpha}})_{\bm{\alpha}\in\\{-1,1\\}^{d}}$ and $(\tilde{\gamma}_{\bm{\alpha}}\in\\{-1,1\\})_{\bm{\alpha}\in\\{-1,1\\}^{d}}$ be the propagation signs corresponding to $(f;f_{1},\dots,f_{d};r_{1},\dots,r_{d})$ and $(f;f_{1},\dots,f_{d};\tilde{r}_{1},\dots,\tilde{r}_{d})$, respectively. 
Given $\bm{\alpha}=(\alpha_{1},\dots,\alpha_{d})\in\\{-1,1\\}^{d}$, the following are equivalent:
* • $\gamma_{\bm{\alpha}}=\tilde{\gamma}_{\bm{\alpha}}$,
* • $\alpha_{j}=1$.

###### Proof.

We proceed by induction on the number of $-1$s in $\bm{\alpha}$. Note that $\tilde{\gamma}_{\mathbf{1}}=\gamma_{\mathbf{1}}=1$, $\tilde{\gamma}_{\mathbf{1}-2\mathbf{1}_{j}}=-\gamma_{\mathbf{1}-2\mathbf{1}_{j}}$, and $\tilde{\gamma}_{\mathbf{1}-2\mathbf{1}_{i}}=\gamma_{\mathbf{1}-2\mathbf{1}_{i}}$ for $i\not=j$. Suppose $\bm{\alpha}$ has at least two $-1$s. As we showed in the proof of Lemma 10.17, if $k_{1}$ and $k_{2}$ are distinct values such that $\alpha_{k_{1}}=\alpha_{k_{2}}=-1$, then $\gamma_{\bm{\alpha}}=\alpha_{k_{1}k_{2}}\gamma_{\bm{\alpha}+2(\mathbf{1}_{k_{1}}+\mathbf{1}_{k_{2}})}\gamma_{\bm{\alpha}+2\mathbf{1}_{k_{1}}}\gamma_{\bm{\alpha}+2\mathbf{1}_{k_{2}}}$ and $\tilde{\gamma}_{\bm{\alpha}}=\alpha_{k_{1}k_{2}}\tilde{\gamma}_{\bm{\alpha}+2(\mathbf{1}_{k_{1}}+\mathbf{1}_{k_{2}})}\tilde{\gamma}_{\bm{\alpha}+2\mathbf{1}_{k_{1}}}\tilde{\gamma}_{\bm{\alpha}+2\mathbf{1}_{k_{2}}}$. If $\alpha_{j}=1$, let $k_{1}$, $k_{2}$ be distinct values such that $\alpha_{k_{1}}=\alpha_{k_{2}}=-1$, so

$\displaystyle\tilde{\gamma}_{\bm{\alpha}}=\alpha_{k_{1}k_{2}}\tilde{\gamma}_{\bm{\alpha}+2(\mathbf{1}_{k_{1}}+\mathbf{1}_{k_{2}})}\tilde{\gamma}_{\bm{\alpha}+2\mathbf{1}_{k_{1}}}\tilde{\gamma}_{\bm{\alpha}+2\mathbf{1}_{k_{2}}}=\alpha_{k_{1}k_{2}}\gamma_{\bm{\alpha}+2(\mathbf{1}_{k_{1}}+\mathbf{1}_{k_{2}})}\gamma_{\bm{\alpha}+2\mathbf{1}_{k_{1}}}\gamma_{\bm{\alpha}+2\mathbf{1}_{k_{2}}}=\gamma_{\bm{\alpha}}.$

If $\alpha_{j}=-1$, let $k\not=j$ be a value such that $\alpha_{k}=-1$, so

$\displaystyle\tilde{\gamma}_{\bm{\alpha}}=\alpha_{jk}\tilde{\gamma}_{\bm{\alpha}+2(\mathbf{1}_{j}+\mathbf{1}_{k})}\tilde{\gamma}_{\bm{\alpha}+2\mathbf{1}_{j}}\tilde{\gamma}_{\bm{\alpha}+2\mathbf{1}_{k}}=-\alpha_{jk}\gamma_{\bm{\alpha}+2(\mathbf{1}_{j}+\mathbf{1}_{k})}\gamma_{\bm{\alpha}+2\mathbf{1}_{j}}\gamma_{\bm{\alpha}+2\mathbf{1}_{k}}=-\gamma_{\bm{\alpha}}.$ ∎

###### Proof of Proposition 10.26.

It suffices to prove the proposition for tuples of rational functions $(r_{1},\dots,r_{d})$ and $(\tilde{r}_{1},\dots,\tilde{r}_{d})$ satisfying the conditions of Lemma 10.32. Let $(\gamma_{\bm{\alpha}})_{\bm{\alpha}\in\\{-1,1\\}^{d}}$ and $(\tilde{\gamma}_{\bm{\alpha}}\in\\{-1,1\\})_{\bm{\alpha}\in\\{-1,1\\}^{d}}$ be the propagation signs corresponding to $(f;f_{1},\dots,f_{d};\allowbreak r_{1},\dots,r_{d})$ and $(f;f_{1},\dots,f_{d};\tilde{r}_{1},\dots,\tilde{r}_{d})$, respectively. Then by Lemma 10.32, $\displaystyle\prod_{\bm{\alpha}\in\\{-1,1\\}^{d}}\tilde{\gamma}_{\bm{\alpha}}=(-1)^{2^{d-1}}\prod_{\bm{\alpha}\in\\{-1,1\\}^{d}}\gamma_{\bm{\alpha}}=\prod_{\bm{\alpha}\in\\{-1,1\\}^{d}}\gamma_{\bm{\alpha}},$ because $d\geq 2$. ∎

The remainder of this section is dedicated to the proof of Theorem 10.24. For the rest of this section, we fix all quantities given in Theorem 10.24, and set $\displaystyle a=a_{1}+\cdots+a_{d}.$

###### Proof of Theorem 10.24(b).

Let $\mathbf{x}$ be the restriction of $\mathbf{\tilde{x}}$ to $\operatorname{\mathbb{Z}}^{d}$. It is clear that $\mathbf{x}$ must satisfy $f$.
By Lemma 10.17,

$\displaystyle\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})=\gamma_{\mathbf{1}+2\bm{\alpha}}\prod_{i=1}^{d}x_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot(\mathbf{a}-\mathbf{1}_{i})]}$ $\displaystyle\hphantom{\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})}{}=\gamma_{\mathbf{1}+2\bm{\alpha}}\prod_{i=1}^{d}x_{v+\bm{\alpha}\odot(\mathbf{1}-\mathbf{1}_{i})+[\mathbf{a}-\mathbf{1}_{i}]}.$

Taking the product over $\bm{\alpha}\in\\{-1,0\\}^{d}$, we get

$\displaystyle\prod_{\bm{\alpha}\in\\{-1,0\\}^{d}}\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{v-(\mathbf{a}-\mathbf{1})\odot\bm{\alpha}+[(\mathbf{1}+2\bm{\alpha})\odot\mathbf{a}]})$ $\displaystyle\qquad{}=\left(\prod_{\bm{\alpha}\in\\{-1,1\\}^{d}}\gamma_{\bm{\alpha}}\right)\prod_{i=1}^{d}\prod_{\bm{\beta}=(\beta_{1},\dots,\beta_{d})\in\\{-1,0\\}^{d}\colon\beta_{i}=0}x_{v+\bm{\beta}+[\mathbf{a}-\mathbf{1}_{i}]}^{2}$ $\displaystyle\qquad{}=\left(\prod_{\bm{\alpha}\in\\{-1,1\\}^{d}}\gamma_{\bm{\alpha}}\right)\prod_{i=1}^{d}\prod_{\bm{\beta}=(\beta_{1},\dots,\beta_{d})\in\\{-1,0\\}^{d}\colon\beta_{i}=0}f_{i}(\mathbf{x}_{v+\bm{\beta}+[\mathbf{a}]}).$ ∎

The proof of Theorem 10.24(a) below is nearly identical to the proof of Theorem 2.22(a).

###### Definition 10.33.

For $U\subseteq\operatorname{\mathbb{Z}}$, let $\operatorname{\mathbb{Z}}^{d}_{U}$ denote the set $\displaystyle\operatorname{\mathbb{Z}}^{d}_{U}=\big{\\{}(i_{1},\dots,i_{d})\in\operatorname{\mathbb{Z}}^{d}\colon i_{1}+\cdots+i_{d}\in U\big{\\}}.$ For $U\subseteq\operatorname{\mathbb{Z}}$, we will also use the notation $\displaystyle F_{\mathbf{a},U}=\\{v+[\mathbf{a}-\mathbf{1}_{i}]\colon v\in\operatorname{\mathbb{Z}}^{d}_{U},i\in\\{1,\dots,d\\}\\}.$ In particular, we will be interested in $\mathbb{Z}^{d}_{a,\textup{init}}=\operatorname{\mathbb{Z}}^{d}_{\\{0,\dots,a-1\\}}$ and $F^{\mathbf{a}}_{\textup{init}}=F_{\mathbf{a},\\{0\\}}$.

###### Definition 10.34.

We say that an array $\mathbf{\tilde{x}}_{\textup{init}}$ indexed by $\mathbb{Z}^{d}_{a,\textup{init}}\cup F^{\mathbf{a}}_{\textup{init}}$ satisfying condition (10.7) is _generic_ if there exists an extension of $\mathbf{\tilde{x}}_{\textup{init}}$ to an array $\mathbf{\tilde{x}}$ indexed by $\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}$ satisfying equations (10.5)–(10.7) where the restriction of $\mathbf{\tilde{x}}$ to $\operatorname{\mathbb{Z}}^{d}$ satisfies conditions (10.13)–(10.14). Similarly, we say that an array $\mathbf{x}_{\textup{init}}$ indexed by $\mathbb{Z}^{d}_{a,\textup{init}}$ is generic if every extension of $\mathbf{x}_{\textup{init}}$ to an array $\mathbf{\tilde{x}}_{\textup{init}}$ indexed by $\mathbb{Z}^{d}_{a,\textup{init}}\cup F^{\mathbf{a}}_{\textup{init}}$ satisfying condition (10.7) is generic.

###### Definition 10.35.

Let $\mathbf{\tilde{x}}_{\textup{init}}$ be a generic array indexed by $\mathbb{Z}^{d}_{a,\textup{init}}\cup F^{\mathbf{a}}_{\textup{init}}$ satisfying condition (10.7). We denote by $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\operatorname{\mathbb{Z}}^{d}\\!{\cup}F^{\mathbf{a}}}$ the unique extension of $\mathbf{\tilde{x}}_{\textup{init}}$ to $\operatorname{\mathbb{Z}}^{d}\\!{\cup}F^{\mathbf{a}}$ where $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\operatorname{\mathbb{Z}}^{d}\\!{\cup}F^{\mathbf{a}}}$ satisfies equations (10.5)–(10.7).
The next lemma generalizes Lemma 7.11.

###### Lemma 10.36.

Let $S=[\mathbf{a}]\cup\\{b\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]\colon b\in\\{0,1\\},i\in\\{1,\dots,d\\}\\}$, i.e., $S$ is the set of vertices of $[\mathbf{a}]\subset\operatorname{\mathbb{Z}}^{d}$ and boxes of $F^{\mathbf{a}}$ completely contained in $[\mathbf{a}]$. Fix values $t_{i}\in\\{-1,1\\}$ for $i=1,\dots,d$. Suppose $\mathbf{\tilde{x}}=(x_{s})_{s\in S}$ and $\mathbf{\tilde{y}}=(y_{s})_{s\in S}$ are arrays of complex numbers such that
* • $\mathbf{\tilde{x}}$ and $\mathbf{\tilde{y}}$ both satisfy equations (10.5)–(10.7), with the denominators in equations (10.5)–(10.6) non-vanishing,
* • $y_{s}=x_{s}$ for $s\in[\mathbf{a}]-\\{\mathbf{a}\\}$,
* • $y_{[\mathbf{a}-\mathbf{1}_{i}]}=t_{i}x_{[\mathbf{a}-\mathbf{1}_{i}]}$ for $i=1,\dots,d$, and
* • $\prod\limits_{i=1}^{d}t_{i}=1$.
Then the following equations hold: $\displaystyle y_{\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}=t_{i}x_{\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}\qquad\text{for $i=1,\dots,d$},$ $\displaystyle y_{\mathbf{a}}=x_{\mathbf{a}}.$

###### Proof.

Note that $\displaystyle y_{\mathbf{a}}-x_{\mathbf{a}}=\left(\prod_{i=1}^{d}t_{i}-1\right)\frac{\prod\limits_{i=1}^{d}x_{[\mathbf{a}-\mathbf{1}_{i}]}}{2g(\mathbf{x}_{[\mathbf{a}]})}=0,$ so $y_{\mathbf{a}}=x_{\mathbf{a}}$. By (10.11),

$\displaystyle\begin{split}&\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{a_{i}\mathbf{1}_{i}+[(\mathbf{1}-2\mathbf{1}_{i})\odot\mathbf{a}]})=\gamma_{(\mathbf{1}-2\mathbf{1}_{i})}\prod_{j=1}^{d}x_{a_{i}\mathbf{1}_{i}+[(\mathbf{1}-2\mathbf{1}_{i})\odot(\mathbf{a}-\mathbf{1}_{j})]}\\\ &\hphantom{\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{a_{i}\mathbf{1}_{i}+[(\mathbf{1}-2\mathbf{1}_{i})\odot\mathbf{a}]})}{}=\gamma_{(\mathbf{1}-2\mathbf{1}_{i})}x_{\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}\prod_{j\in\\{1,\dots,d\\}\setminus\\{i\\}}x_{[\mathbf{a}-\mathbf{1}_{j}]}\end{split}$

and

$\displaystyle\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{y}_{a_{i}\mathbf{1}_{i}+[(\mathbf{1}-2\mathbf{1}_{i})\odot\mathbf{a}]})=\gamma_{(\mathbf{1}-2\mathbf{1}_{i})}\prod_{j=1}^{d}y_{a_{i}\mathbf{1}_{i}+[(\mathbf{1}-2\mathbf{1}_{i})\odot(\mathbf{a}-\mathbf{1}_{j})]}$ $\displaystyle\hphantom{\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{y}_{a_{i}\mathbf{1}_{i}+[(\mathbf{1}-2\mathbf{1}_{i})\odot\mathbf{a}]})}{}=\gamma_{(\mathbf{1}-2\mathbf{1}_{i})}y_{\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}\prod_{j\in\\{1,\dots,d\\}\setminus\\{i\\}}y_{[\mathbf{a}-\mathbf{1}_{j}]}$ $\displaystyle\hphantom{\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{y}_{a_{i}\mathbf{1}_{i}+[(\mathbf{1}-2\mathbf{1}_{i})\odot\mathbf{a}]})}{}=\gamma_{(\mathbf{1}-2\mathbf{1}_{i})}y_{\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}\prod_{j\in\\{1,\dots,d\\}\setminus\\{i\\}}t_{j}x_{[\mathbf{a}-\mathbf{1}_{j}]}$ $\displaystyle\hphantom{\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{y}_{a_{i}\mathbf{1}_{i}+[(\mathbf{1}-2\mathbf{1}_{i})\odot\mathbf{a}]})}{}=\gamma_{(\mathbf{1}-2\mathbf{1}_{i})}t_{i}y_{\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}\prod_{j\in\\{1,\dots,d\\}\setminus\\{i\\}}x_{[\mathbf{a}-\mathbf{1}_{j}]}.$

Because $\displaystyle\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{x}_{a_{i}\mathbf{1}_{i}+[(\mathbf{1}-2\mathbf{1}_{i})\odot\mathbf{a}]})=\frac{\partial f}{\partial z_{\mathbf{a}}}(\mathbf{y}_{a_{i}\mathbf{1}_{i}+[(\mathbf{1}-2\mathbf{1}_{i})\odot\mathbf{a}]}),$ it follows that
$x_{\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}=t_{i}y_{\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}$. ∎

###### Definition 10.37.

Define an equivalence relation on $F^{\mathbf{a}}$ by setting $s_{1}\sim s_{2}$ if and only if $s_{1}=v+[\mathbf{a}-\mathbf{1}_{i}]$ and $s_{2}=v+\beta\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]$ for some $1\leq i\leq d$, $v\in\operatorname{\mathbb{Z}}^{d}$, and $\beta\in\operatorname{\mathbb{Z}}$. Let $F^{\mathbf{a}}_{\boxdot}$ denote the set of equivalence classes under this equivalence relation. Denote by $[s]\in F^{\mathbf{a}}_{\boxdot}$ the equivalence class of $s\in F^{\mathbf{a}}$.

###### Definition 10.38.

Define an action of $\\{-1,1\\}^{F^{\mathbf{a}}_{\boxdot}}$ on arrays indexed by $\mathbb{Z}^{d}_{a,\textup{init}}\cup F^{\mathbf{a}}_{\textup{init}}$ as follows: given $\mathbf{t}=(t_{s})_{s\in F^{\mathbf{a}}_{\boxdot}}\in\\{-1,1\\}^{F^{\mathbf{a}}_{\boxdot}}$ and $\mathbf{\tilde{x}}_{\textup{init}}=(x_{s})_{s\in\mathbb{Z}^{d}_{a,\textup{init}}\cup F^{\mathbf{a}}_{\textup{init}}}$, define $\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}=(\tilde{x}_{s})_{s\in\mathbb{Z}^{d}_{a,\textup{init}}\cup F^{\mathbf{a}}_{\textup{init}}}$, where $\displaystyle\tilde{x}_{s}=\begin{cases}x_{s}&\text{if $s\in\mathbb{Z}^{d}_{a,\textup{init}}$},\\\ t_{[s]}x_{s}&\text{if $s\in F^{\mathbf{a}}_{\textup{init}}$}.\end{cases}$

###### Definition 10.39.

For $\mathbf{t}=(t_{s})\in\\{-1,1\\}^{F^{\mathbf{a}}_{\boxdot}}$, define $\psi(\mathbf{t})=(u_{s})\in\\{-1,1\\}^{\operatorname{\mathbb{Z}}^{d}+\mathbf{a}/2}$ by $\displaystyle u_{s+\mathbf{a}/2}=\prod_{i=1}^{d}t_{[s+[\mathbf{a}-\mathbf{1}_{i}]]}$ for $s\in\operatorname{\mathbb{Z}}^{d}$.

The next lemma generalizes Lemma 7.18.

###### Lemma 10.40.

Let $\mathbf{\tilde{x}}_{\textup{init}}$ be a generic array indexed by $\mathbb{Z}^{d}_{a,\textup{init}}\cup F^{\mathbf{a}}_{\textup{init}}$ satisfying condition (10.7). Let $\mathbf{t}\in\\{-1,1\\}^{F^{\mathbf{a}}_{\boxdot}}$, and $\mathbf{u}=(u_{s})_{s\in\operatorname{\mathbb{Z}}^{d}+\mathbf{a}/2}=\psi(\mathbf{t})$. Let $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}=(x_{s})_{s\in\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$, and $(\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}=(y_{s})_{s\in\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$. Suppose $v\in\operatorname{\mathbb{Z}}^{d}_{\\{a,a+1,\dots\\}}$ satisfies the condition that $u_{w-\mathbf{a}/2}=1$ for all $w\in\operatorname{\mathbb{Z}}^{d}_{\\{a,a+1,\dots\\}}$ with $w\leq v$. Then:

$(a)$ $y_{v}=x_{v}$,

$(b)$ $y_{v-\mathbf{a}+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}=t_{[v-\mathbf{a}+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]]}x_{v-\mathbf{a}+\mathbf{1}_{i}+[\mathbf{a}-\mathbf{1}_{i}]}$ for $i=1,\dots,d$.

###### Proof.

We prove parts (a) and (b) together by induction. Assume that we have proved parts (a) and (b) for all $w\in\operatorname{\mathbb{Z}}^{d}_{\\{a,a+1,\dots\\}}$ with $w<v$. By construction, $x_{w}=y_{w}$ for all $w\in\mathbb{Z}^{d}_{a,\textup{init}}$ and statement (b) holds for all $w\in\operatorname{\mathbb{Z}}^{d}_{\\{a-1\\}}$. Hence, $y_{v-s^{\prime}}=x_{v-s^{\prime}}$ for $s^{\prime}\in[\mathbf{a}]-\\{\mathbf{0}\\}$, and $y_{v-\mathbf{a}+[\mathbf{a}-\mathbf{1}_{i}]}=t_{[v-\mathbf{a}+[\mathbf{a}-\mathbf{1}_{i}]]}x_{v-\mathbf{a}+[\mathbf{a}-\mathbf{1}_{i}]}$ for $i=1,\dots,d$. Because $u_{v-\mathbf{a}/2}=1$, statements (a) and (b) follow from Lemma 10.36. ∎

The next lemma generalizes Lemma 7.19.
###### Lemma 10.41.

An array $\mathbf{u}=(u_{s})\in\\{-1,1\\}^{\operatorname{\mathbb{Z}}^{d}+\mathbf{a}/2}$ is in the image of $\psi$ (see Definition 10.39) if and only if for every $v\in\operatorname{\mathbb{Z}}^{d}$, $\displaystyle\prod_{\bm{\alpha}\in\\{0,1\\}^{d}}u_{v+\mathbf{a}/2+\bm{\alpha}}=1.$ (10.17)

###### Proof.

First, suppose $\mathbf{u}=\psi(\mathbf{t})$, where $\mathbf{t}=(t_{s})\in\\{-1,1\\}^{F^{\mathbf{a}}_{\boxdot}}$. Then for any $v\in\operatorname{\mathbb{Z}}^{d}$,

$\displaystyle\prod_{\bm{\alpha}\in\\{0,1\\}^{d}}u_{v+\mathbf{a}/2+\bm{\alpha}}=\prod_{\bm{\alpha}\in\\{0,1\\}^{d}}\prod_{i=1}^{d}t_{[v+\bm{\alpha}+[\mathbf{a}-\mathbf{1}_{i}]]}=\prod_{i=1}^{d}\prod_{\begin{subarray}{c}\bm{\alpha}=(\alpha_{1},\dots,\alpha_{d})\in\\{0,1\\}^{d}\\\ \alpha_{i}=0\end{subarray}}t_{[v+\bm{\alpha}+[\mathbf{a}-\mathbf{1}_{i}]]}^{2}=1.$

Next, suppose that condition (10.17) holds. It is clear that $\mathbf{u}$ is uniquely determined by its components at $S=\left\\{(v_{1},\dots,v_{d})+\mathbf{a}/2\colon v_{1}\cdots v_{d}=0\right\\}$ and condition (10.17). For $v=(v_{1},\dots,v_{d})\in\operatorname{\mathbb{Z}}^{d}_{\\{0\\}}$ and $i\in\\{1,\dots,d\\}$, set $\displaystyle t_{[v+[\mathbf{a}-\mathbf{1}_{i}]]}=\begin{cases}\prod\limits_{j=1}^{i}u_{(v_{1},\dots,v_{j-1},0,v_{j+1},\dots,v_{d})+\mathbf{a}/2}&\text{if }v_{1},\dots,v_{i-1}\not=0,\\\ 1&\text{otherwise}.\end{cases}$ Set $\mathbf{t}=(t_{s})\in\\{-1,1\\}^{F^{\mathbf{a}}_{\boxdot}}$. It is straightforward to check that $\psi(\mathbf{t})$ agrees with $\mathbf{u}$ at $S$. Hence, because $\psi(\mathbf{t})$ and $\mathbf{u}$ both satisfy condition (10.17), it follows that $\mathbf{u}=\psi(\mathbf{t})$. ∎
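The parity computation in the first half of this proof is easy to machine-check on a finite window. In the sketch below (our own illustration; all names are ours), an equivalence class from Definition 10.37 is encoded as the pair consisting of the direction $i$ and the base point $v$ with its $i$-th coordinate dropped, since translating a box along $\mathbf{1}_{i}$ does not change its class; condition (10.17) then holds for $\psi(\mathbf{t})$ because each factor $t_{[\cdot]}$ occurs an even number of times over the vertices of a unit box.

```python
import random
from itertools import product

def box_class(v, i):
    """Class of the box v + [a - 1_i] (Definition 10.37): independent of v_i."""
    return (i, v[:i] + v[i + 1:])

def psi_value(t, v, d):
    """u_{v + a/2} = prod_i t_{[v + [a - 1_i]]}  (Definition 10.39)."""
    u = 1
    for i in range(d):
        u *= t[box_class(v, i)]
    return u

d, width = 3, 4
rng = random.Random(0)
# one random sign per equivalence class meeting the window
t = {box_class(v, i): rng.choice([-1, 1])
     for v in product(range(width + 1), repeat=d) for i in range(d)}

# condition (10.17): the product of u over the vertices of every unit box is 1
for v in product(range(width), repeat=d):
    prod_u = 1
    for alpha in product([0, 1], repeat=d):
        w = tuple(vi + ai for vi, ai in zip(v, alpha))
        prod_u *= psi_value(t, w, d)
    assert prod_u == 1
print("condition (10.17) holds for psi(t) on the whole window")
```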
The next lemma generalizes Lemma 7.21.

###### Lemma 10.42.

Let $\mathbf{\hat{x}}$ be an array indexed by $\operatorname{\mathbb{Z}}^{d}_{\\{0,1,\dots,a+d-1\\}}$. Assume that $\mathbf{\hat{x}}$ satisfies $f$, and, moreover, its restriction to $\mathbb{Z}^{d}_{a,\textup{init}}$ is generic. Then there exists an array $\mathbf{\tilde{x}}$ indexed by $\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}$ satisfying equations (10.5)–(10.7) and extending $\mathbf{\hat{x}}$.

###### Proof.

For $i=a,\dots,a+d-1$, we will show by induction on $i$ that there exists an array $\mathbf{\tilde{x}}_{\textup{init}}$ indexed by $\mathbb{Z}^{d}_{a,\textup{init}}\cup F^{\mathbf{a}}_{\textup{init}}$ satisfying (10.7) such that $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ agrees with $\mathbf{\hat{x}}=(x_{s})_{s\in\operatorname{\mathbb{Z}}^{d}_{\\{0,\dots,a+d-1\\}}}$ on $\operatorname{\mathbb{Z}}^{d}_{\\{0,\dots,i\\}}$. Let $\mathbf{\tilde{x}}_{\textup{init}}^{\prime}$ be an array indexed by $\mathbb{Z}^{d}_{a,\textup{init}}\cup F^{\mathbf{a}}_{\textup{init}}$ satisfying (10.7) such that $(\mathbf{\tilde{x}}_{\textup{init}}^{\prime})^{\uparrow\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}=(y_{s})_{s\in\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ agrees with $\mathbf{\hat{x}}$ on $\operatorname{\mathbb{Z}}^{d}_{\\{0,\dots,i-1\\}}$. (For $i=a$, we can obtain $\mathbf{\tilde{x}}_{\textup{init}}^{\prime}$ by taking an arbitrary extension of the restriction of $\mathbf{\hat{x}}$ to $\mathbb{Z}^{d}_{a,\textup{init}}\cup F^{\mathbf{a}}_{\textup{init}}$ satisfying condition (10.7). For $i>a$, we have shown that $\mathbf{\tilde{x}}_{\textup{init}}^{\prime}$ exists by induction.)

Choose $\mathbf{\tilde{u}}=(u_{s})\in\\{-1,1\\}^{\operatorname{\mathbb{Z}}^{d}_{\\{a,\dots,a+d-1\\}}-\mathbf{a}/2}$ so that
* • $u_{s-\mathbf{a}/2}=1$ if $s\in\operatorname{\mathbb{Z}}^{d}_{\\{i\\}}$ and $x_{s}=y_{s}$,
* • $u_{s-\mathbf{a}/2}=-1$ if $s\in\operatorname{\mathbb{Z}}^{d}_{\\{i\\}}$ and $x_{s}\not=y_{s}$,
* • $u_{s-\mathbf{a}/2}=1$ if $s\in\operatorname{\mathbb{Z}}^{d}_{\\{j\\}}$ for $0\leq j<i$.
Extend $\mathbf{\tilde{u}}$ to $\mathbf{u}=(u_{s})_{s\in\operatorname{\mathbb{Z}}^{d}+\mathbf{a}/2}$ by condition (10.17). By Lemma 10.41, there exists $\mathbf{t}\in\\{-1,1\\}^{F^{\mathbf{a}}_{\boxdot}}$ such that $\mathbf{u}=\psi(\mathbf{t})$. Set $\mathbf{\tilde{x}}_{\textup{init}}=\mathbf{t}\cdot\mathbf{\tilde{x}}_{\textup{init}}^{\prime}$. Then by Lemma 10.40, $(\mathbf{\tilde{x}}_{\textup{init}})^{\uparrow\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ agrees with $\mathbf{\hat{x}}$ on $\operatorname{\mathbb{Z}}^{d}_{\\{0,\dots,i\\}}$, as desired. ∎

We can now prove a weaker version of Theorem 10.24(a), under the additional constraint of genericity.

###### Corollary 10.43.

Let $\mathbf{x}=(x_{s})_{s\in\operatorname{\mathbb{Z}}^{d}}$ be an array that satisfies $f$ and condition (10.15), and whose restriction to $\mathbb{Z}^{d}_{a,\textup{init}}$ is generic. Then $\mathbf{x}$ can be extended to an array $\mathbf{\tilde{x}}$ indexed by $\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}$ satisfying equations (10.5)–(10.7).

###### Proof.

Let $\mathbf{x}=(x_{s})_{s\in\operatorname{\mathbb{Z}}^{d}}$ be an array that satisfies $f$ and condition (10.15), and whose restriction to $\mathbb{Z}^{d}_{a,\textup{init}}$ is generic. By Lemma 10.42, there exists an array $\mathbf{\tilde{x}}$ indexed by $\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}$ satisfying equations (10.5)–(10.7) that agrees with $\mathbf{x}$ on $\operatorname{\mathbb{Z}}^{d}_{\\{0,\dots,a+d-1\\}}$. Let $\mathbf{x}^{\prime}$ be the restriction of $\mathbf{\tilde{x}}$ to $\operatorname{\mathbb{Z}}^{d}$. By Theorem 10.24(b), $\mathbf{x}^{\prime}$ satisfies $f$ and (10.15). There is a unique solution of $f$ satisfying condition (10.15) agreeing with $\mathbf{x}$ at $\operatorname{\mathbb{Z}}^{d}_{\\{0,\dots,a+d-1\\}}$, as condition (10.15) gives the remaining values as rational expressions in the values at $\operatorname{\mathbb{Z}}^{d}_{\\{0,\dots,a+d-1\\}}$, where the denominators do not vanish because conditions (10.13)–(10.14) hold for $\mathbf{x}$ (as $\mathbf{x}$ is generic). Hence, $\mathbf{x}^{\prime}=\mathbf{x}$, as desired. ∎

###### Proof of Theorem 10.24(a).

We need to loosen the genericity condition in Corollary 10.43 to the condition that $\mathbf{x}$ satisfies (10.13)–(10.14). Let $\mathbf{x}$ satisfy $f$ along with conditions (10.13)–(10.14) and condition (10.15). Let $A_{j}=[-j,j]^{d}\cap\operatorname{\mathbb{Z}}^{d}$, and let $B_{j}=\left\\{s\in F^{\mathbf{a}}\colon s\subseteq[-j,j]^{d}\right\\}$. We claim that if there exist $\mathbf{\tilde{x}}_{j}\in\operatorname{\mathbb{C}}^{A_{j}\cup B_{j}}$ satisfying equations (10.5)–(10.7) that agree with $\mathbf{x}$ on $A_{j}$ for all $j$, then there exists $\mathbf{\tilde{x}}\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ satisfying equations (10.5)–(10.7) that agrees with $\mathbf{x}$ on $\operatorname{\mathbb{Z}}^{d}$.
Construct an infinite tree $T$ as follows:
* • The vertices of $T$ are arrays indexed by $A_{j}\cup B_{j}$ satisfying equations (10.5)–(10.7) that agree with $\mathbf{x}$ on $A_{j}$ (over $j\in\operatorname{\mathbb{Z}}_{\geq 0}$).
* • Add an edge between $\mathbf{\tilde{x}}_{j}\in\operatorname{\mathbb{C}}^{A_{j}\cup B_{j}}$ and $\mathbf{\tilde{x}}_{j+1}\in\operatorname{\mathbb{C}}^{A_{j+1}\cup B_{j+1}}$ if $\mathbf{\tilde{x}}_{j+1}$ restricts to $\mathbf{\tilde{x}}_{j}$.
Thus, $T$ is an infinite tree in which every vertex has finite degree (the values on the finitely many new boxes are determined up to sign by condition (10.7)). By König’s infinity lemma, there exists an infinite path $\mathbf{\tilde{x}}_{0},\mathbf{\tilde{x}}_{1},\dots$ in $T$ with $\mathbf{\tilde{x}}_{j}\in\operatorname{\mathbb{C}}^{A_{j}\cup B_{j}}$. Thus, there exists $\mathbf{\tilde{x}}\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ restricting to $\mathbf{\tilde{x}}_{j}$ for all $j\in\operatorname{\mathbb{Z}}_{\geq 0}$, so $\mathbf{\tilde{x}}$ satisfies equations (10.5)–(10.7) and agrees with $\mathbf{x}$ on $\operatorname{\mathbb{Z}}^{d}$.

Given $j\in\operatorname{\mathbb{Z}}_{\geq 0}$, we claim that there exists $\mathbf{\tilde{x}}\in\operatorname{\mathbb{C}}^{A_{j}\cup B_{j}}$ satisfying equations (10.5)–(10.7) that agrees with $\mathbf{x}$ on $A_{j}$. Because $\mathbf{x}$ satisfies conditions (10.13)–(10.14), there exists a sequence $\mathbf{x}_{1},\mathbf{x}_{2},\dots$ of arrays satisfying $f$ along with conditions (10.13)–(10.14) and condition (10.15), whose restrictions to $\mathbb{Z}^{d}_{a,\textup{init}}$ are generic, and which converge pointwise to $\mathbf{x}$. By Corollary 10.43, there exist $\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\ldots\in\operatorname{\mathbb{C}}^{\operatorname{\mathbb{Z}}^{d}\cup F^{\mathbf{a}}}$ satisfying equations (10.5)–(10.7) such that $\mathbf{\tilde{x}}_{i}$ restricts to $\mathbf{x}_{i}$. However, the sequence $\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\dots$ does not necessarily converge. Let $\mathbf{\tilde{x}}_{1}^{\prime},\mathbf{\tilde{x}}_{2}^{\prime},\ldots\in\operatorname{\mathbb{C}}^{A_{j}\cup B_{j}}$ be the restrictions of $\mathbf{\tilde{x}}_{1},\mathbf{\tilde{x}}_{2},\dots$ to $A_{j}\cup B_{j}$. There exists a subsequence of $\mathbf{\tilde{x}}_{1}^{\prime},\mathbf{\tilde{x}}_{2}^{\prime},\dots$ that converges to some $\mathbf{\tilde{x}}\in\operatorname{\mathbb{C}}^{A_{j}\cup B_{j}}$. (For each $s\in B_{j}$, we can partition the sequence $\mathbf{\tilde{x}}_{1}^{\prime},\mathbf{\tilde{x}}_{2}^{\prime},\dots$ into two sequences, each of which converges at $s$. Because $B_{j}$ is finite, the claim follows.) The array $\mathbf{\tilde{x}}$ must satisfy equations (10.5)–(10.7) and agree with $\mathbf{x}$ on $A_{j}$, so we are done. ∎

### Acknowledgements

I would like to thank my Ph.D. advisor, Sergey Fomin, for his invaluable mathematical insights and the countless hours he dedicated to our meetings while I was writing this paper. I am also grateful to Dmitry Chelkak for pointing out the connection with s-holomorphicity, and to Thomas Lam and John Stembridge for helpful discussions and editorial suggestions. Finally, I would like to thank the anonymous referees for their thorough reading of this manuscript, and for their suggestions that improved the quality of this paper.
# Entropy properties of mostly expanding partially hyperbolic diffeomorphisms

Jinhua Zhang

###### Abstract

The statistical properties of mostly expanding partially hyperbolic diffeomorphisms have been substantially studied. In this paper, we would like to address the entropy properties of mostly expanding partially hyperbolic diffeomorphisms. We prove that for mostly expanding partially hyperbolic diffeomorphisms with minimal strong stable foliation and one-dimensional center bundle, there exists a $C^{1}$-open neighborhood of them in which the topological entropy varies continuously and the intermediate entropy property holds. To prove this, we show that each non-hyperbolic ergodic measure is approached by horseshoes in entropy and in weak$*$-topology.

###### Contents

1 Introduction
  1.1 Statements of our main results
2 Preliminaries
  2.1 Topological entropy and metric entropy
  2.2 Liao’s shadowing lemma and Pliss lemma
  2.3 Plaque families and uniform size of invariant manifolds
  2.4 Constructions of horseshoes from hyperbolic periodic orbits
  2.5 Unstable entropy and entropy formulas
  2.6 Mostly expanding partially hyperbolic diffeomorphisms
3 Existence of periodic orbits with prescribed behavior
  3.1 Generating orbit segments with weak hyperbolicity
  3.2 Proof of Theorem 3.1
4 Approximation of ergodic measures by horseshoes: Proofs of our main results

## 1 Introduction

Entropy (topological entropy and metric entropy) is an important invariant and plays a central role in describing the complexity of a dynamical system. The dependence of entropy on the system is an interesting topic and has been substantially studied by many mathematicians.

Smoothness of a system is relevant to the upper semi-continuity of entropy. For $C^{\infty}$-diffeomorphisms, Newhouse [N] proved that the metric entropy varies upper semi-continuously and Yomdin [Yo] proved that the topological entropy also varies upper semi-continuously. For systems with lower regularity, a bifurcation phenomenon called _homoclinic tangency_ (i.e. the existence of non-transverse homoclinic intersections between the stable and unstable manifolds of a hyperbolic saddle) seems to be an obstruction to the upper semi-continuity of topological entropy. Misiurewicz [Mi] constructed $C^{r}$-diffeomorphisms with homoclinic tangencies at which the entropy fails to be upper semi-continuous. For $C^{1}$-diffeomorphisms far away from homoclinic tangencies, one can recover the upper semi-continuity of entropy (see [LVY, DFPV]).

As for the lower semi-continuity of the topological entropy, hyperbolicity is involved. For $C^{1}$-uniformly hyperbolic diffeomorphisms, the topological entropy is locally constant due to Smale’s structural stability theorem. Using Pesin’s theory, Katok [Ka] proved that hyperbolic ergodic measures of $C^{1+\alpha}$ surface diffeomorphisms are approached by hyperbolic horseshoes and thus the topological entropy is lower semi-continuous for $C^{1+\alpha}$ surface diffeomorphisms. As a consequence, the topological entropy varies continuously among $C^{\infty}$ surface diffeomorphisms.

Then it is natural to ask: among which classes of differentiable systems does the topological entropy vary continuously? The examples in [Mi] and the results in [LVY, DFPV] tell us that one should consider the systems away from homoclinic tangencies. Such diffeomorphisms are partially hyperbolic with finitely many one-dimensional center bundles due to [CSY].
Therefore, it is reasonable to first consider the continuity of topological entropy for partially hyperbolic diffeomorphisms with one-dimensional center bundle. There are three types of classical examples of partially hyperbolic diffeomorphisms with one-dimensional center bundle: skew-products with circle fiber, derived-from-Anosov diffeomorphisms, and time-one maps of Anosov flows. The topological entropy of these classical examples varies continuously. See for instance [BFSV, U, SY]. In recent years, some new anomalous examples have appeared. The continuity of the topological entropy of the anomalous examples in [BGP, BGHP] was proved in [YZ]. Saghin and Yang [SY] conjectured that _the topological entropy varies continuously among the set of partially hyperbolic diffeomorphisms with one-dimensional center bundle_.

The classical variational principle gives the relation between metric entropy and topological entropy: topological entropy equals the supremum of the metric entropies of the invariant measures. For partially hyperbolic diffeomorphisms with one-dimensional center bundle, the usual approach to prove the continuity of the topological entropy is to find horseshoes approximating ergodic measures in weak$*$-topology and in entropy. This makes the problem of the continuity of topological entropy closely related to the intermediate entropy property, since horseshoes satisfy the intermediate entropy property. Recall that a diffeomorphism $f$ satisfies the _intermediate entropy property_ if for any $h\in[0,h_{top}(f))$, there exists an ergodic measure $\nu$ such that $h_{\nu}(f)=h.$ In general, one cannot expect that $h$ can be chosen as $h_{top}(f)$, since ergodic measures whose metric entropies attain the topological entropy do not always exist [Mi]. Katok proposed a conjecture that _$C^{r}$ ($r\geq 1$)-diffeomorphisms satisfy the intermediate entropy property_. It is classical that hyperbolic systems satisfy the intermediate entropy property. Katok’s result [Ka] implies that $C^{r}$ ($r>1$) non-uniformly hyperbolic systems have the intermediate entropy property. In general, for dynamics beyond uniform hyperbolicity, this conjecture is widely open and there are not so many results. See for instance [S1, S2, S3, GSW, YZ, LSWW]. Even for partially hyperbolic diffeomorphisms with one-dimensional center bundle, Katok’s conjecture is still open. See [S1, S2, YZ] for some partial results.

There is a special class of partially hyperbolic diffeomorphisms called mostly expanding diffeomorphisms, whose study was initiated in [ABV]111The notion in [ABV] is nowadays called partially hyperbolic diffeomorphisms with non-uniformly expanding center and it is slightly different from the notion of mostly expanding that we use. See the discussions and examples in [AnV1]. It was proved in [AnV1] that mostly expanding partially hyperbolic diffeomorphisms form an open set. There are many works studying this kind of diffeomorphisms, aiming to establish the statistical properties of such systems, for instance the existence and finiteness of physical measures or SRB measures, the statistical stability of the set of SRB measures, and the mixing properties of SRB measures. See for instance [An, AnV1, AnV2, ADLP, ALi, Y] and references therein. But there are few results on the entropy properties of such systems. In this paper, we are interested in studying the continuity of topological entropy and the intermediate entropy property of such systems.
The method we use is finding horseshoes approximating ergodic measures in weak$*$-topology and in entropy.

### 1.1 Statements of our main results

A diffeomorphism $f\in\operatorname{Diff}^{1}(M)$ is _partially hyperbolic_ if there exist a $Df$-invariant continuous splitting $TM=E^{s}\oplus E^{c}\oplus E^{u}$ and some constants $C>1,\lambda\in(0,1)$ such that for any $x\in M$ and $n\in\mathbb{N}$, one has
* • $\|Df^{n}|_{E^{s}(x)}\|<C\lambda^{n}\textrm{~{}~{}and~{}~{}}\|Df^{-n}|_{E^{u}(x)}\|<C\lambda^{n}.$
* • $\|Df^{n}|_{E^{s}(x)}\|\cdot\|Df^{-n}|_{E^{c}(f^{n}(x))}\|<C\lambda^{n}\textrm{ and }\|Df^{n}|_{E^{c}(x)}\|\cdot\|Df^{-n}|_{E^{u}(f^{n}(x))}\|<C\lambda^{n}.$
By [Go], up to changing the metric, one can assume that $C=1$, and we will do so throughout this paper. Due to [HPS], the bundles $E^{s}$ and $E^{u}$ are uniquely integrable to $f$-invariant foliations, called the strong stable and strong unstable foliations and denoted by ${\cal F}^{s}$ and ${\cal F}^{u}$ respectively. We denote by ${\cal F}^{*}(x)$ the ${\cal F}^{*}$-leaf through the point $x$, for $*=s,u.$

Let $\mu$ be an invariant measure of a partially hyperbolic diffeomorphism $f$ with one-dimensional center bundle (i.e. $\operatorname{dim}(E^{c})=1$); then we define $\chi^{c}(\mu,f):=\int\log\|Df|_{E^{c}}\|\operatorname{d}\mu.$ When $\mu$ is ergodic, by Oseledec’s theorem and Birkhoff’s ergodic theorem, $\chi^{c}(\mu,f)$ coincides with the Lyapunov exponent of $\mu$ along the center bundle $E^{c}$ (called the _center Lyapunov exponent_). When there is no ambiguity, we will simply write $\chi^{c}(\mu)$.

For a partially hyperbolic diffeomorphism $f$, we denote by $G^{u}(f)$ the set of invariant measures satisfying the entropy formula for unstable entropy (see Equation (1)). We remark that for $f\in\operatorname{Diff}^{1+\alpha}(M)$, the set $G^{u}(f)$ coincides with the set of u-Gibbs states (see Theorem 2.12), and moreover, $f$ is called _mostly expanding_ if all the center Lyapunov exponents of each $u$-Gibbs state are positive (see Section 2.6 for more information). Recall that a foliation is _minimal_ if every leaf is dense in the manifold.

###### Theorem A.

Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism with $\operatorname{dim}(E^{c})=1$. Assume that
* • $\chi^{c}(\mu)>0$ for any $\mu\in G^{u}(f);$
* • the strong stable foliation is minimal.
Then there exists a $C^{1}$-open neighborhood $\mathcal{U}\subset\operatorname{Diff}^{1}(M)$ of $f$ such that
* – each $g\in{\cal U}$ satisfies the intermediate entropy property;
* – the entropy function $g\in{\cal U}\mapsto h_{top}(g)$ is continuous.

###### Remark 1.1.

In fact, each $g\in{\cal U}$ also satisfies our assumptions. The first assumption is an open condition due to the upper semi-continuity of the compact set $G^{u}(f)$ (see Lemma 2.13). In general, the second assumption is not an open condition, but combined with the first assumption, one can deduce that the strong stable foliation is $C^{1}$-robustly minimal (see Theorem 2.20).

We can apply our result to some conservative partially hyperbolic diffeomorphisms, and obtain the continuity of topological entropy and the intermediate entropy property.

###### Corollary B.

Let $f\in\operatorname{Diff}_{\operatorname{m}}^{1+\alpha}(M)$ be a partially hyperbolic diffeomorphism preserving a smooth volume $\operatorname{m}$ and with $\operatorname{dim}(E^{c})=1.$ Assume that
* • $\int\log\|Df|_{E^{c}}\|\operatorname{d}\operatorname{m}>0$;
* • the strong stable foliation is minimal.
Then there exists a $C^{1}$-neighborhood $\mathcal{U}\subset\operatorname{Diff}^{1}(M)$ of $f$ such that
* – each $g\in{\cal U}$ satisfies the intermediate entropy property;
* – the entropy function $g\in{\cal U}\mapsto h_{top}(g)$ is continuous.

###### Remark 1.2.

One can deduce that $f$ is $C^{1}$-stably ergodic using the minimality of the strong stable foliation and the positivity of the center Lyapunov exponent (see Theorem 2.22).

For the classical examples of partially hyperbolic diffeomorphisms (skew-products with circle fiber, derived-from-Anosov diffeomorphisms and time-one maps of Anosov flows), the results in [BF, TY, CPo] show that under some mild open conditions, ergodic measures with high entropy are hyperbolic. However, for the systems that we consider, it is not clear to us whether there exist hyperbolic ergodic measures of maximal entropy (i.e. ergodic measures whose metric entropies equal the topological entropy), and we are not able to exclude the existence of non-hyperbolic ergodic measures of maximal entropy. To show the continuity of topological entropy and the intermediate entropy property, we prove that non-hyperbolic ergodic measures are approached by hyperbolic horseshoes in entropy and in weak$*$-topology.

Given an invariant compact set $K\subset M$ of $f\in\operatorname{Diff}^{1}(M)$, we denote by $\mathcal{M}_{inv}(f,K)$ and $\mathcal{M}_{erg}(f,K)$ the sets of $f$-invariant and $f$-ergodic measures on $K$ respectively. When $K=M$, we will simply write $\mathcal{M}_{inv}(f)$ and $\mathcal{M}_{erg}(f)$. Recall that an $f$-invariant compact set $\Lambda$ is called a _hyperbolic basic set_ if it is a hyperbolic transitive set and there exists a neighborhood $U$ of $\Lambda$ such that $\cap_{n\in{\mathbb{Z}}}f^{n}(\overline{U})=\Lambda.$

###### Theorem C.

Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism with $\operatorname{dim}(E^{c})=1.$ Assume that
* • $\chi^{c}(\mu)>0$ for any $\mu\in G^{u}(f);$
* • the strong stable foliation is minimal.
Then there exist a $C^{1}$-neighborhood ${\cal U}\subset\operatorname{Diff}^{1}(M)$ of $f$, and constants $\kappa>0$ and $\chi_{1}>0$ such that for any $g\in{\cal U}$, any ergodic measure $\nu\in\mathcal{M}_{\rm erg}(g)$ with $-\chi_{1}\leq\chi^{c}(\nu)\leq 0$ and any ${\varepsilon}>0$, there exists a hyperbolic basic set $\Lambda_{\varepsilon}$ of $g$ whose center bundle is uniformly expanding such that
* – $h_{top}(g,\Lambda_{{\varepsilon}})>\frac{h_{\nu}(g)-{\varepsilon}}{1+\kappa\cdot(|\chi^{c}(\nu)|+{\varepsilon})};$
* – the set ${\cal M}_{\rm inv}(g,{\Lambda_{\varepsilon}})$ is contained in the $\kappa\cdot(|\chi^{c}(\nu)|+{\varepsilon})$-neighborhood of $\nu.$
Furthermore, the set $\big{\\{}\nu\in\mathcal{M}_{erg}(g)|~{}\chi^{c}(\nu)\geq 0\big{\\}}$ is path connected.

###### Remark 1.3.

1. In Theorem C, if $\nu$ is non-hyperbolic (i.e. $\chi^{c}(\nu)=0$), then $\nu$ is approached by horseshoes in weak$*$-topology and in entropy;
2. For any $g\in{\cal U}$, all the periodic points with positive center Lyapunov exponent are homoclinically related222Recall that two hyperbolic periodic orbits are homoclinically related, if the stable manifold of one periodic orbit intersects the unstable manifold of the other transversely, and vice versa. due to Theorem 2.20.
3. Comparing with the results in [BZ, DGS, YZ], we assume neither the minimality of the strong unstable foliation nor the existence of blenders.

Theorem C is not only used to prove Theorem A but also has its own interest.
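The quantity organizing Theorems C and D, $\chi^{c}(\nu)$, is simply a Birkhoff average of $\log\|Dg|_{E^{c}}\|$, so "center exponent close to zero" can be probed numerically along orbits. The sketch below is a toy computation of ours, not a system from this paper: the base map (the doubling map) is non-invertible, so this is not literally a partially hyperbolic diffeomorphism, but it illustrates how a center exponent is estimated as an ergodic average of the logarithm of the fiber derivative.

```python
import math
import random

def step(x, theta, eps=0.05):
    """One iterate of a toy skew product over the doubling map.

    The fiber derivative d(theta')/d(theta) plays the role of Dg|_{E^c}.
    """
    dtheta = 1.0 + 2.0 * math.pi * eps * math.cos(2.0 * math.pi * theta)
    theta_new = (theta + eps * math.sin(2.0 * math.pi * theta) + x) % 1.0
    x_new = (2.0 * x) % 1.0
    return x_new, theta_new, dtheta

def center_exponent(n=10**6, seed=1):
    """Birkhoff-average estimate of chi^c = int log||Dg|_{E^c}|| d(nu)."""
    rng = random.Random(seed)
    x, theta = rng.random(), rng.random()
    acc = 0.0
    for _ in range(n):
        x, theta, dtheta = step(x, theta)
        acc += math.log(abs(dtheta))
    return acc / n

print(center_exponent())   # small in absolute value for this toy orbit
```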
The approximation of ergodic measures by hyperbolic sets in various ways has been studied. See for instance the results in the $C^{1+\alpha}$-setting by [Ka, SW, Ge] and in the $C^{1}$-setting with domination by [G2, C, Ge] for hyperbolic ergodic measures, and the results in the $C^{1}$-generic setting without domination by [BCF] for ergodic measures. The approximation of non-hyperbolic ergodic measures has come into sight in recent years, and many results have been obtained. See for instance [DGR, BZ, YZ, DGS], and one can refer to [DG] for more results.

The main novelty here is that we assume neither the existence of blenders nor the minimality of the strong unstable foliation. We use the mostly expanding property to replace the existence of blenders and the minimality of the strong unstable foliation, and in some sense this non-uniformly hyperbolic property can play the same role in these problems. Besides, one can get more information on the set of points with vanishing center Lyapunov exponent, which in general is a larger set than the union of the generic points of non-hyperbolic ergodic measures.

Let $f$ be a partially hyperbolic diffeomorphism with one-dimensional center bundle, and let us define the $0$-level of the center Lyapunov regular set: $\mathcal{R}_{f}(0):=\big{\\{}x\in M|\lim_{|n|\rightarrow\infty}\frac{1}{n}\log\|Df^{n}|_{E^{c}(x)}\|=0\big{\\}}.$

###### Theorem D.

Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism with $\operatorname{dim}(E^{c})=1$. Assume that
* • $\chi^{c}(\mu)>0$ for any $\mu\in G^{u}(f);$
* • the strong stable foliation is minimal.
Then there exists a $C^{1}$-neighborhood ${\cal U}\subset\operatorname{Diff}^{1}(M)$ of $f$ such that for any $g\in{\cal U}$, and any $h\leq h_{top}(g,\mathcal{R}_{g}(0))$ and ${\varepsilon}>0$, there exists a hyperbolic basic set $\Lambda_{\varepsilon}$ of $g$ whose center is uniformly expanding such that
* – $h_{top}(g,\Lambda_{{\varepsilon}})>h-{\varepsilon},$
* – for each $\mu\in\mathcal{M}_{erg}(g,\Lambda_{\varepsilon})$, one has $0<\chi^{c}(\mu,g)<{\varepsilon}.$

###### Remark 1.4.

For any non-hyperbolic ergodic measure $\nu\in\mathcal{M}_{erg}(g)$, one has $\nu(\mathcal{R}_{g}(0))=1$. By Theorem 3 in [Bow] and the monotonicity of entropy, one has $h_{\nu}(g)\leq h_{top}(g,\mathcal{R}_{g}(0))$. Thus, Theorem D also implies that non-hyperbolic ergodic measures are approached by horseshoes in entropy.

Acknowledgments. The author would like to thank C. Bonatti, L. Díaz, S. Crovisier, K. Gelfert and R. Saghin for helpful comments. The author benefited a lot from discussions with L. Díaz, S. Crovisier, K. Gelfert, D. Yang and J. Yang. The author is partially supported by National Key R&D Program of China (2022YFA1005801), National Key R$\&$D Program of China (2021YFA1001900), NSFC 12001027 and the Fundamental Research Funds for the Central Universities.

## 2 Preliminaries

In this section, we collect the results and notions that are involved in this paper.

### 2.1 Topological entropy and metric entropy

In this section, let $f:X\to X$ be a homeomorphism on a compact metric space $(X,\operatorname{d})$.
Given $n\in\mathbb{N}$ and ${\varepsilon}>0$, let us recall that
* • an _$(n,{\varepsilon})$-Bowen ball_ centered at a point $x\in X$ is defined as $B_{n}(x,{\varepsilon})=\big{\\{}y\in X:\operatorname{d}(f^{i}(x),f^{i}(y))<{\varepsilon}\textrm{~{}for any $0\leq i<n$}\big{\\}}.$
* • a subset $S\subset X$ is called an _$(n,{\varepsilon})$-separated set_ if for any two different points $x,y\in S$, there exists $j\in\\{0,\cdots,n-1\\}$ such that $\operatorname{d}(f^{j}(x),f^{j}(y))>{\varepsilon}.$

Given an invariant compact subset $K\subset X$, let $s(n,{\varepsilon},K)$ be the maximal cardinality of $(n,{\varepsilon})$-separated sets contained in $K$. Then _the topological entropy of $f$ on $K$_ is defined as $h_{top}(f,K)=\lim_{{\varepsilon}\rightarrow 0}\limsup_{n\rightarrow+\infty}\frac{1}{n}\log s(n,{\varepsilon},K).$ When $K=X$, we simply denote $h_{top}(f):=h_{top}(f,X).$

In analogy with the definition of Hausdorff dimension, Bowen [Bow] introduced topological entropy for non-compact sets via open covers, and here we present an equivalent definition via Bowen balls (see [Pe]). Let $Y\subset X$. For any ${\varepsilon}>0$ and $h\in\mathbb{R}$, let us define ${\rm m}_{{\varepsilon},h}(Y)=\lim_{n\rightarrow+\infty}\inf\big{\\{}\sum_{i\in\mathbb{N}}e^{-hn_{i}}|~{}\textrm{$Y\subset\cup_{i\in\mathbb{N}}B_{n_{i}}(x_{i},{\varepsilon})$ and $n_{i}\geq n$}\big{\\}}.$ Define $h_{top}(f,Y,{\varepsilon})=\inf\big{\\{}h|~{}{\rm m}_{{\varepsilon},h}(Y)=0\big{\\}}=\sup\big{\\{}h|~{}{\rm m}_{{\varepsilon},h}(Y)=+\infty\big{\\}}.$ Now, the topological entropy of $f$ on $Y$ is defined as $h_{top}(f,Y)=\lim_{{\varepsilon}\rightarrow 0}h_{top}(f,Y,{\varepsilon}).$

###### Remark 2.1.

Bowen showed that if $Y\subset X$ is compact, then the dimension-like definition of topological entropy coincides with the canonical definition using separated sets (see [Bow, Proposition 1]).

One can also define the entropy for a non-compact set $Y$ by counting the cardinality of separated sets. Let $s(n,{\varepsilon},Y)$ be the maximal cardinality of $(n,{\varepsilon})$-separated sets in $Y$; then one can define the _lower capacity entropy_ and _upper capacity entropy_ of $f$ on $Y$ as follows: $\underline{Ch}_{top}(f,Y)=\lim_{{\varepsilon}\rightarrow 0}\liminf_{n\rightarrow+\infty}\frac{1}{n}\log{s(n,{\varepsilon},Y)}$ and $\overline{Ch}_{top}(f,Y)=\lim_{{\varepsilon}\rightarrow 0}\limsup_{n\rightarrow+\infty}\frac{1}{n}\log{s(n,{\varepsilon},Y)}.$

Now, we recall some basic properties of topological entropy.

###### Proposition 2.2 (Proposition 2 in [Bow] and Section 11 in [Pe]).

Let $f\in\operatorname{Homeo}(X)$ be a homeomorphism on a compact metric space $(X,\operatorname{d})$. Then one has
* • for any subsets $Y_{1}\subset Y_{2}\subset X$, one has $h_{top}(f,Y_{1})\leq h_{top}(f,Y_{2}).$
* • $h_{top}(f,Y)=\sup_{n}h_{top}(f,Y_{n})$, where $Y=\cup_{n\in\mathbb{N}}Y_{n}\subset X.$
* • $\overline{Ch}_{top}(f,Y)\geq\underline{Ch}_{top}(f,Y)\geq h_{top}(f,Y).$
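These counting definitions are directly computable for simple models. The sketch below (a toy illustration of ours, not part of the text) greedily builds $(n,{\varepsilon})$-separated sets for the doubling map $x\mapsto 2x\bmod 1$ on the circle — which is only continuous rather than a homeomorphism, but for which the definitions make sense verbatim; the printed values of $\frac{1}{n}\log s(n,{\varepsilon},X)$ decrease toward $h_{top}=\log 2\approx 0.693$ as $n$ grows.

```python
import math

def orbit(x, n):
    """The segment (x, f(x), ..., f^{n-1}(x)) for the doubling map."""
    seg = []
    for _ in range(n):
        seg.append(x)
        x = (2.0 * x) % 1.0
    return seg

def circle_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def bowen_dist(seg1, seg2):
    """d_n(x, y) = max_{0 <= i < n} d(f^i(x), f^i(y))."""
    return max(circle_dist(a, b) for a, b in zip(seg1, seg2))

def separated_count(n, eps, grid=2048):
    """Greedy lower bound for s(n, eps): keep a grid point whenever its
    orbit segment stays eps-separated from all segments kept so far."""
    kept = []
    for k in range(grid):
        seg = orbit(k / grid, n)
        if all(bowen_dist(seg, other) > eps for other in kept):
            kept.append(seg)
    return len(kept)

for n in (3, 5, 7):
    print(n, math.log(separated_count(n, eps=0.1)) / n)
```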
Given $\delta\in(0,1)$, $n\in\mathbb{N}$ and ${\varepsilon}>0$, let $s(n,{\varepsilon},\delta)$ be the minimal number of $(n,{\varepsilon})$-Bowen balls whose union covers a set with $\mu$-measure no less than $\delta.$ Then the _metric entropy of $f$ with respect to $\mu$_ is defined as $h_{\mu}(f)=\lim_{{\varepsilon}\rightarrow 0}\limsup_{n\rightarrow+\infty}\frac{1}{n}\log s(n,{\varepsilon},\delta)=\lim_{{\varepsilon}\rightarrow 0}\liminf_{n\rightarrow+\infty}\frac{1}{n}\log s(n,{\varepsilon},\delta).$ ### 2.2 Liao’s shadowing lemma and Pliss lemma A $Df$-invariant splitting $T_{\Lambda}M=E\oplus F$ over an invariant compact set $\Lambda$ of $f\in\operatorname{Diff}^{1}(M)$ is a _dominated splitting_, if there exists $N\in\mathbb{N}$ such that $\|Df^{N}|_{E(x)}\|\cdot\|Df^{-N}|_{F(f^{N}(x))}\|\leq\frac{1}{2}\textrm{~{}for any $x\in\Lambda$.}$ Now, we recall Liao’s shadowing lemma, which was improved by S. Gan. This shadowing lemma allows us to find periodic orbits chasing (long) orbit segments with "weak" center Lyapunov exponent for a large proportion of time. ###### Theorem 2.3 ([Li, G1]). Let $f\in\operatorname{Diff}^{1}(M)$ and $\Lambda$ be an $f$-invariant compact set. Assume that $\Lambda$ admits a dominated splitting of the form $T_{\Lambda}M=E\oplus F$. Then for any $\lambda\in(0,1)$, there exist constants $L>1$ and $d_{0}>0$ such that for any $d\in(0,d_{0})$, any $x\in\Lambda$ and any $n\in\mathbb{N}$ satisfying * • $\operatorname{d}(f^{n}(x),x)<d;$ * • $\|Df^{j}|_{E(x)}\|<\lambda^{j}\textrm{~{}and~{}}\|Df^{-j}|_{F(f^{n}(x))}\|<\lambda^{j},\textrm{~{}for any $1\leq j\leq n$};$ there exists a periodic point $p$ of period $n$ such that $\operatorname{d}(f^{i}(x),f^{i}(p))<L\cdot d\textrm{~{} for any $0\leq i\leq n-1$}.$ ###### Remark 2.4. We will apply this shadowing lemma to the splitting $E^{s}\oplus(E^{c}\oplus E^{u})$, and in this case one only needs to consider the norm along the bundle $E^{c}\oplus E^{u}.$ Now, we recall the Pliss lemma, which can be used for finding points with a uniform size of stable or unstable manifolds. ###### Lemma 2.5 (Pliss lemma [Pl]). Let $a_{1},\cdots,a_{k}$ be real numbers and assume that $\max_{i\leq k}a_{i}\leq C$ for some $C\in{\mathbb{R}}$. Suppose that $\sum_{i=1}^{k}a_{i}\geq k\chi_{1}$ for some $\chi_{1}$. Then for any $\chi_{2}<\chi_{1}$, there exist $1\leq j_{1}<j_{2}<\cdots<j_{l}\leq k$ such that * • $\rho:=\frac{l}{k}\geq\frac{\chi_{1}-\chi_{2}}{C-\chi_{2}}$; * • for each $1\leq n\leq l$ and each $1\leq m\leq j_{n}$, one has $\sum_{i=m}^{j_{n}}a_{i}\geq(j_{n}-m+1)\chi_{2}.$ ### 2.3 Plaque families and uniform size of invariant manifolds The existence of plaque families was given by Theorem 5.5 in [HPS] for a single diffeomorphism and was extended to a neighborhood of a diffeomorphism; see Lemma 3.5 in [CPu]. Given a diffeomorphism $f\in\operatorname{Diff}^{1}(M)$, let $\Lambda$ be an invariant compact set and $E$ be a vector bundle over $\Lambda$. For $x\in\Lambda$ and $r>0$, let us denote $E(x,r):=\big{\\{}v\in E(x)|~{}\|v\|<r\big{\\}}\textrm{~{}and~{}}E(r)=\cup_{x\in\Lambda}E(x,r).$ ###### Theorem 2.6 (Plaque Family Theorem). 
Let $f\in\operatorname{Diff}^{1}(M)$ and $\Lambda$ be an invariant compact set with a dominated splitting of the form $T_{\Lambda}M=E\oplus F.$ Then there exist a $C^{1}$-neighborhood ${\cal U}$ of $f$ and a neighborhood $U$ of $\Lambda$ such that for any $g\in{\cal U}$, the maximal invariant compact set $\Lambda_{g}$ of $g$ in $U$ admits a dominated splitting $T_{\Lambda_{g}}M=E_{g}\oplus F_{g},$ and there exist two continuous families of maps ${\cal W}^{cs}_{g}:E_{g}(1)\to M$ and ${\cal W}^{cu}_{g}:F_{g}(1)\to M$ satisfying the following properties: * • for each $x\in\Lambda_{g}$, the map ${\cal W}^{cs}_{x,g}:E_{g}(x,1)\to M$ (resp. ${\cal W}^{cu}_{x,g}:F_{g}(x,1)\to M$) is a $C^{1}$-embedding, $x={\cal W}^{cs}_{x,g}(0_{x})$ (resp. $x={\cal W}^{cu}_{x,g}(0_{x})$) and its graph is tangent to $E_{g}(x)$ (resp. $F_{g}(x)$) at the point $x$; * • the families $\\{{\cal W}^{cs}_{x,g}\\}$ and $\\{{\cal W}^{cu}_{x,g}\\}$ of $C^{1}$-embedding maps are continuous with respect to $x,g$ in the $C^{1}$-topology; * • for any $\delta\in(0,1)$, there exists $\delta^{\prime}>0$ such that $g({\cal W}^{cs}_{x,g}(E_{g}(x,\delta^{\prime})))\subset{\cal W}^{cs}_{g(x),g}(E_{g}(g(x),\delta))$ and $g^{-1}({\cal W}^{cu}_{x,g}(F_{g}(x,\delta^{\prime})))\subset{\cal W}^{cu}_{g^{-1}(x),g}(F_{g}(g^{-1}(x),\delta)).$ For $g\in{\cal U}$, $x\in\Lambda_{g}$ and $\delta\in(0,1)$, we denote ${\cal W}^{cs}_{\delta}(x,g):={\cal W}^{cs}_{x,g}(E_{g}(x,\delta))$ and ${\cal W}^{cu}_{\delta}(x,g):={\cal W}^{cu}_{x,g}(F_{g}(x,\delta))$. When there is no ambiguity, we will drop the index $g$ for simplicity. The $C^{1}$-submanifold ${\cal W}^{cs}_{\delta}(x,g)$ (resp. ${\cal W}^{cu}_{\delta}(x,g)$) is called the $cs$-plaque (resp. $cu$-plaque) centered at $x$ and of radius $\delta$. The last item in Theorem 2.6 tells us that the $cu$-plaques and $cs$-plaques are locally $g$-invariant. The following result gives the uniformity of the size of unstable manifolds for a single diffeomorphism, whose proof can be found in [ABC, Section 8.2]. ###### Lemma 2.7. Let $f\in\operatorname{Diff}^{1}(M)$ and $\Lambda$ be an invariant compact set with a dominated splitting of the form $T_{\Lambda}M=E\oplus F.$ Consider a $cu$-plaque family ${\cal W}^{cu}$ corresponding to the bundle $F$. For any $\chi>0$, there exists $\delta>0$ such that if $x\in\Lambda$ satisfies that $\prod_{i=0}^{n-1}\|Df^{-1}|_{F(f^{-i}(x))}\|\leq e^{-n\chi}\textrm{~{}for any $n\in\mathbb{N}$},$ then ${\cal W}^{cu}_{\delta}(x)$ is contained in the unstable manifold of $x$. Using the uniform continuity of the plaque families among nearby systems and Lemma 2.7, one can obtain a similar result for diffeomorphisms close to $f$; for our purpose we state it in the partially hyperbolic setting. ###### Lemma 2.8. Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism with $\operatorname{dim}(E^{c})=1$, and let $\widetilde{{\cal U}}$ be a $C^{1}$-neighborhood of $f$ given by Theorem 2.6, together with a plaque family ${\cal W}^{cu}_{g}$ with $g\in\widetilde{\cal U}$, which corresponds to the bundle $E_{g}^{c}\oplus E_{g}^{u}$ and depends continuously on $g$. 
Then for any $\chi>0$, there exist a $C^{1}$-small neighborhood ${\cal U}\subset\widetilde{{\cal U}}$ of $f$, and a small constant ${\varepsilon}_{0}>0$ such that * • for any $x\in M$, one has * – $g^{-1}({\cal W}^{cu}_{{\varepsilon}_{0}}(x,g))\subset{\cal W}^{cu}_{1/2}(g^{-1}(x),g)$; * – $\big{|}\log\|Dg^{-1}|_{E_{g}^{c}(x)}\|-\log\|Dg^{-1}|_{T_{y}{\cal W}^{cu}_{{\varepsilon}_{0}}(x,g)}\|\big{|}<\chi/4\,\textrm{~{}for any $y\in{\cal W}^{cu}_{{\varepsilon}_{0}}(x,g)$};$ * • if a point $x\in M$ satisfies that $\|Dg^{-n}|_{E_{g}^{c}(x)}\|<e^{-n\chi}$ for any $n\in{\mathbb{N}}$, then * – ${\cal W}^{cu}_{{\varepsilon}_{0}}(x,g)$ is contained in the unstable manifold of $x$ and is tangent everywhere to $E_{g}^{c}\oplus E_{g}^{u};$ * – $\|Dg^{-n}|_{E_{g}^{c}(y)}\|<e^{-3n\chi/4}$ for any $n\in{\mathbb{N}}$ and any $y\in{\cal W}^{cu}_{{\varepsilon}_{0}}(x,g)$. Given ${\varepsilon}>0$, $\ell>0$ and a plaque family ${\cal W}^{cu}$ corresponding to the bundle $E^{c}\oplus E^{u}$, one says that the strong stable foliation $\mathcal{F}^{s}$ is _$(\ell,{\varepsilon})$-dense with respect to ${\cal W}^{cu}$_, if for any $x,y\in M$ the local strong stable manifold $\mathcal{F}^{s}_{\ell}(x)$ has non-empty transverse intersection with ${\cal W}^{cu}_{\varepsilon}(y)$, where $\mathcal{F}^{s}_{\ell}(x)$ is the $\ell$-neighborhood of $x$ in $\mathcal{F}^{s}(x)$ under its intrinsic topology. The following result comes from the uniform continuity of the strong stable manifolds and of the plaque families with respect to the bundle $E^{c}\oplus E^{u}$. ###### Lemma 2.9. Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism and $\widetilde{{\cal U}}$ be a $C^{1}$-neighborhood of $f$ given by Theorem 2.6, together with a plaque family ${\cal W}^{cu}_{g}$ with $g\in\widetilde{\cal U}$, which corresponds to the bundle $E_{g}^{c}\oplus E_{g}^{u}$ and depends continuously on $g$. Assume, in addition, that the strong stable foliation of $f$ is minimal. Then for any ${\varepsilon}>0$, there exist a $C^{1}$-neighborhood ${\cal U}_{\varepsilon}\subset\widetilde{\cal U}$ of $f$ and a constant $\ell>0$ such that the strong stable foliation of any $g\in{\cal U}_{\varepsilon}$ is $(\ell,{\varepsilon})$-dense with respect to ${\cal W}_{g}^{cu}$. ### 2.4 Constructions of horseshoes from hyperbolic periodic orbits Katok and Mendoza [KM] gave a way to construct horseshoes approaching hyperbolic ergodic measures on surfaces in the $C^{1+\alpha}$-setting, and this has been generalized to higher dimensions in [Ge]. Here we state a mechanism using a collection of periodic orbits to find horseshoes in the $C^{1}$-setting with domination. Given an invariant compact set $\Lambda$ of $f\in\operatorname{Diff}^{1}(M)$ exhibiting a dominated splitting of the form $T_{\Lambda}M=E\oplus F$ and given $\chi>0$, let us define ${\rm NUH}_{\chi}=\big{\\{}x\in\Lambda|\,\prod_{i=0}^{k-1}\|Df|_{E(f^{i}(x))}\|\leq e^{-k\chi},~{}\prod_{i=0}^{k-1}\|Df^{-1}|_{F(f^{-i}(x))}\|\leq e^{-k\chi}\textrm{~{}for any $k\in{\mathbb{N}}$}\big{\\}}.$ ###### Theorem 2.10 (Theorem 4.1 in [YZ]). Let $f\in\operatorname{Diff}^{1}(M)$ and $\Lambda$ be an invariant compact set admitting a dominated splitting of the form $T_{\Lambda}M=E\oplus F$. For any $\chi>0$ and any ${\varepsilon}>0$, there exists $\xi_{0}>0$ satisfying the following properties. 
For any $n\in{\mathbb{N}}$ and any $\xi\in(0,\xi_{0})$, if there exist periodic points $p_{1},\cdots,p_{m}\in{\rm NUH}_{\chi}$ of the same period $l$ such that * • $d(p_{i},p_{j})<\xi/16$ for $1\leq i<j\leq m$; * • $\\{p_{1},\cdots,p_{m}\\}$ is an $(l,\xi)$-separated set; then there exists a hyperbolic basic set $K$ whose stable bundle has dimension $\operatorname{dim}(E)$ such that * – $h_{top}(f,K)\geq\frac{\log m}{l}$; * – the set ${\cal M}_{\rm inv}(f,K)$ is contained in the ${\varepsilon}$-neighborhood of $\big{\\{}\sum_{i=1}^{m}t_{i}\delta_{{\cal O}_{p_{i}}}|\,t_{i}\geq 0,~{}\sum_{i=1}^{m}t_{i}=1\big{\\}}$, where $\delta_{{\cal O}_{p_{i}}}=\frac{1}{l}\sum_{j=0}^{l-1}\delta_{f^{j}(p_{i})}.$ ### 2.5 Unstable entropy and entropy formulas In this section, we first recall the definition of the metric entropy along unstable manifolds, which was introduced by F. Ledrappier and L. Young [LeYo1, LeYo2]. Then we recall its applications, based on some entropy formulas. Given a Borel probability measure $\mu$ and a foliation ${\cal F}$ on the manifold $M$, a measurable partition ${\cal A}$ is called _$\mu$-subordinate to ${\cal F}$_ if for $\mu$-a.e. $x\in M$, one has * • ${\cal A}(x)\subset{\cal F}(x)$, where ${\cal A}(x)$ denotes the element of ${\cal A}$ containing $x$; * • there exists $r_{x}>0$ such that ${\cal F}_{r_{x}}(x)\subset{\cal A}(x)$, where ${\cal F}_{r_{x}}(x)$ denotes the $r_{x}$-neighborhood of $x$ in ${\cal F}(x)$ under its intrinsic topology. Given $f\in\operatorname{Diff}^{1}(M)$, a measurable partition ${\cal A}$ is _increasing_ if $f({\cal A})\prec{\cal A}$. If $f\in\operatorname{Diff}^{1}(M)$ is partially hyperbolic and $\mu$ is an $f$-invariant measure, then there exists an increasing measurable partition which is $\mu$-subordinate to the strong unstable foliation, due to [LeS] (see also [Y]). Given two increasing measurable partitions ${\cal A}_{1},{\cal A}_{2}$ which are $\mu$-subordinate to the unstable foliation, Ledrappier and Young [LeYo1] proved that $H_{\mu}({\cal A}_{1}|f({\cal A}_{1}))=H_{\mu}({\cal A}_{2}|f({\cal A}_{2}))$, and this yields the following definition. ###### Definition 2.11. Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism and $\mu\in{\cal M}_{inv}(f)$. The _unstable (metric) entropy_ of $\mu$ is defined as $h_{\mu}(f,{\cal F}^{u})=H_{\mu}\big{(}{\cal A}|f({\cal A})\big{)},$ where ${\cal A}$ is an increasing measurable partition $\mu$-subordinate to ${\cal F}^{u}.$ Recall that an invariant measure of a $C^{1+\alpha}$-partially hyperbolic diffeomorphism is called a _$u$-Gibbs state_, if its conditional measures along the strong unstable manifolds are absolutely continuous with respect to the Lebesgue measure on the strong unstable manifolds. The existence of $u$-Gibbs states was first proved by Y. Pesin and Y. Sinai [PeSi]. It has been shown by Ledrappier that the $u$-Gibbs property is equivalent to the unstable entropy formula. ###### Theorem 2.12 (Théorème in [Le]). Let $f\in\operatorname{Diff}^{1+\alpha}(M)$ be partially hyperbolic and $\mu\in{\cal M}_{inv}(f)$. Then $\mu$ is a $u$-Gibbs state if and only if $h_{\mu}(f,\mathcal{F}^{u})=\int\log|\operatorname{det}(Df|_{E^{u}})|\operatorname{d}\mu.$ For a $C^{1}$-partially hyperbolic diffeomorphism $f$, let us denote $G^{u}(f):=\big{\\{}\mu\in\mathcal{M}_{inv}(f)|h_{\mu}(f,\mathcal{F}^{u})=\int\log|\operatorname{det}(Df|_{E^{u}})|\operatorname{d}\mu\big{\\}}.$ (1) The following result describes the set $G^{u}(f)$. ###### Lemma 2.13 ([CYZ, HYY, Y]). 
Let $f$ be a $C^{1}$-partially hyperbolic diffeomorphism. Then * • $G^{u}(f)$ is a non-empty convex compact set; * • $G^{u}(f)$ varies upper semi-continuously with respect to $f$; * • if $\mu$ belongs to $G^{u}(f)$, so do its ergodic components. In the $C^{1}$-setting, the large deviation property for the set $G^{u}(f)$ holds due to [CYZ]. Recall that for the partially hyperbolic splitting $TM=E^{s}\oplus E^{c}\oplus E^{u}$, one can associate a continuous cone field of size $\beta>0$ to the strong unstable bundle $E^{u}$: $\mathcal{C}^{u}_{\beta}:=\big{\\{}v\in TM|~{}v=v^{cs}+v^{u},~{}v^{cs}\in E^{s}\oplus E^{c},~{}v^{u}\in E^{u},~{}\|v^{cs}\|\leq\beta\cdot\|v^{u}\|\big{\\}}.$ For $\beta>0$ small, the cone field $\mathcal{C}^{u}_{\beta}$ is strictly $Df$-invariant, that is, there exists $\beta^{\prime}\in(0,\beta)$ such that $Df(\mathcal{C}^{u}_{\beta}(x))\subset\mathcal{C}^{u}_{\beta^{\prime}}(f(x))$ for any $x\in M$. When there is no ambiguity, we will drop the index $\beta$ for simplicity. ###### Theorem 2.14 (Theorem D′ in [CYZ]). Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism. Then for any continuous function $\varphi:M\to\mathbb{R}$ and any $\kappa>0$, there exist positive constants $r_{0},a_{\kappa},b_{\kappa}>0$ and a $Df$-invariant cone field $\mathcal{C}^{u}$ for the strong unstable bundle such that for any disc $D$ tangent to the cone field $\mathcal{C}^{u}$ and of diameter smaller than $r_{0}$, one has $\operatorname{Leb}_{D}\bigg{(}\big{\\{}x\in D:~{}~{}\operatorname{d}\big{(}\frac{1}{n}S_{n}\varphi(x),I(\varphi)\big{)}>\kappa\big{\\}}\bigg{)}<a_{\kappa}\cdot e^{-nb_{\kappa}}\textrm{~{} for any $n\in\mathbb{N}$},$ where $\operatorname{Leb}_{D}$ denotes the Lebesgue measure on the submanifold $D$, $S_{n}\varphi=\sum_{i=0}^{n-1}\varphi\circ f^{i}$ and $I(\varphi)=\big{\\{}\int\varphi\operatorname{d}\nu:\nu\in G^{u}(f)\big{\\}}.$ In the $C^{1+\alpha}$-setting, $u$-Gibbs states are the candidates for SRB measures. Recall that an invariant measure is called an _SRB measure_ if it admits positive Lyapunov exponents and its conditional measures along the Pesin unstable manifolds are absolutely continuous with respect to the Lebesgue measure. ###### Theorem 2.15 (Theorem A in [LeYo1]). Let $f\in\operatorname{Diff}^{1+\alpha}(M)$ and $\mu$ be an invariant measure with positive Lyapunov exponents. Then $\mu$ is an SRB measure if and only if $\mu$ satisfies Pesin’s entropy formula, that is, $h_{\mu}(f)=\int\sum\lambda^{+}(x)\operatorname{d}\mu,$ where $\sum\lambda^{+}(x)$ is the sum of the positive Lyapunov exponents of $x$ with multiplicities counted. ###### Remark 2.16. Ledrappier and Young [LeYo1] proved this result in the $C^{2}$-setting, and Brown [Br] generalized it to the $C^{1+\alpha}$-setting. In [LeYo2], Ledrappier and Young also gave an entropy formula in terms of Lyapunov exponents and transverse dimensions. The following result is a direct consequence of Corollary 7.2.2 in [LeYo2], which assumes that $f$ is $C^{2}$; it has been relaxed to the $C^{1+\alpha}$-setting in [Br]. ###### Theorem 2.17. Let $f\in\operatorname{Diff}^{1+\alpha}(M)$ be a partially hyperbolic diffeomorphism and $\mu\in\mathcal{M}_{inv}(f)$. If all the center Lyapunov exponents of $\mu$ are non-positive, then $h_{\mu}(f)=h_{\mu}(f,{\cal F}^{u}).$ ### 2.6 Mostly expanding partially hyperbolic diffeomorphisms Recall that a partially hyperbolic diffeomorphism $f\in\operatorname{Diff}^{1+\alpha}(M)$ is _mostly expanding_, if all the center Lyapunov exponents of each $u$-Gibbs state of $f$ are positive. 
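To fix ideas, we record a standard linear illustration of the notions above; it is not needed in the sequel, and the matrix below is merely one convenient choice. Let $A\in\operatorname{SL}(3,\mathbb{Z})$ have three distinct real eigenvalues $0<\lambda_{s}<1<\lambda_{c}<\lambda_{u}$ (for instance, the companion matrix of $x^{3}-7x^{2}+9x-1$), and let $f_{A}$ be the induced automorphism of $\mathbb{T}^{3}$. The eigenspace splitting is a partially hyperbolic splitting with $\operatorname{dim}(E^{c})=1$ and, the bundles being constant, $\|Df_{A}^{n}|_{E^{c}(x)}\|=\lambda_{c}^{n}\,\textrm{~{}for any $x\in\mathbb{T}^{3}$ and $n\in\mathbb{N}$},$ so every invariant measure, in particular every $u$-Gibbs state, has center Lyapunov exponent $\log\lambda_{c}>0$; hence $f_{A}$ is (trivially) mostly expanding. Moreover, the Haar measure has Lebesgue conditionals along the linear strong unstable foliation, so it is a $u$-Gibbs state and, by Theorem 2.12, it belongs to $G^{u}(f_{A})$. Of course $f_{A}$ is Anosov, so this example is degenerate; the results below concern the genuinely non-uniform situation. 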
The following result comes from Lemma 4.1 and Theorem B in [AnV1]; one can also refer to the proof of Theorem C in [Y]. It tells us that the mostly expanding property is $C^{1}$-open. ###### Theorem 2.18. Let $f\in\operatorname{Diff}^{1+\alpha}(M)$ be a mostly expanding partially hyperbolic diffeomorphism. Then there exist a constant $\chi>0$, an integer $n_{0}\in\mathbb{N}$ and a $C^{1}$-neighborhood ${\cal V}$ of $f$ such that for each $g\in{\cal V}\cap\operatorname{Diff}^{1+\alpha}(M)$ and any $u$-Gibbs state $\mu$ of $g$, one has $\int\log\|Dg^{-n_{0}}|_{E^{c}}\|\operatorname{d}\mu<-\chi.$ As the set of $u$-Gibbs states of $f\in\operatorname{Diff}^{1+\alpha}(M)$ coincides with the compact set $G^{u}(f)$ (due to Theorem 2.12) and $G^{u}(f)$ varies upper semi-continuously with respect to $f$ (due to Lemma 2.13), one immediately obtains the following. ###### Corollary 2.19. Let $f\in\operatorname{Diff}^{1+\alpha}(M)$ be a mostly expanding partially hyperbolic diffeomorphism. Then there exist a constant $\chi>0$, an integer $n_{0}\in\mathbb{N}$ and a $C^{1}$-neighborhood ${\cal V}$ of $f$ such that for each $g\in{\cal V}$ and any $\mu\in G^{u}(g)$, one has $\int\log\|Dg^{-{n_{0}}}|_{E^{c}}\|\operatorname{d}\mu<-\chi.$ In general, the minimality of the strong stable foliation is not an open property. Nevertheless, the following result tells us that under the mostly expanding hypothesis, the minimality of the strong stable foliation is a $C^{1}$-open property. ###### Theorem 2.20. Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism with $\operatorname{dim}(E^{c})=1.$ Assume that * • $\int\log\|Df|_{E^{c}}\|\operatorname{d}\mu>0$ for any $\mu\in G^{u}(f)$; * • the strong stable foliation is minimal. Then there exists a $C^{1}$-neighborhood $\mathcal{U}\subset\operatorname{Diff}^{1}(M)$ of $f$ such that for each $g\in{\cal U}$, one has * – $\int\log\|Dg|_{E_{g}^{c}}\|\operatorname{d}\mu>0$ for any $\mu\in G^{u}(g)$; * – the strong stable foliation of $g$ is minimal. ###### Proof. As $\operatorname{dim}(E^{c})=1$, by Corollary 2.19, there exist a $C^{1}$-neighborhood ${\cal V}_{1}$ of $f$ and a constant $\chi>0$ such that for any $g\in{\cal V}_{1}$, each invariant measure $\mu\in G^{u}(g)$ satisfies $\int\log\|Dg|_{E^{c}_{g}}\|\operatorname{d}\mu>\chi.$ For the constant $\chi>0$, by Lemma 2.8, there exist a $C^{1}$-neighborhood $\widetilde{\cal V}_{2}\subset\operatorname{Diff}^{1}(M)$ of $f$, a plaque family ${\cal W}^{cu}_{g}$ with $g\in\widetilde{\cal V}_{2}$ and a constant ${\varepsilon}_{0}>0$ satisfying the posited properties. As the strong stable foliation of $f$ is minimal, by Lemma 2.9, there exist a $C^{1}$-neighborhood ${\cal V}_{2}\subset\widetilde{\cal V}_{2}$ of $f$ and $\ell_{0}>0$ such that for any $g\in{\cal V}_{2}$ and any $x,y\in M$, the local strong stable manifold $\mathcal{F}^{s}_{\ell_{0}}(x)$ of $g$ has non-empty transverse intersection with ${\cal W}_{{\varepsilon}_{0}}^{cu}(y,g).$ Let ${\cal U}={\cal V}_{1}\cap{\cal V}_{2}$, and fix $g\in{\cal U}$. 
Let us set $\chi^{u}:=\inf_{\mu\in G^{u}(g)}\int\log\|Dg|_{E^{c}_{g}}\|\operatorname{d}\mu>\chi,$ $\varphi(x)=\log\|Dg|_{E^{c}_{g}(x)}\|$ and $\kappa=\frac{\chi^{u}-\chi}{2}.$ Now, apply Theorem 2.14 to the continuous function $\varphi$ and $\kappa$, and one obtains positive constants $r_{0},a_{\kappa},b_{\kappa}$ such that for any $w\in M$, one has $\operatorname{Leb}_{\mathcal{F}_{r_{0}}^{u}(w)}\bigg{(}\big{\\{}y\in\mathcal{F}_{r_{0}}^{u}(w):~{}~{}\operatorname{d}\big{(}\frac{1}{n}S_{n}\varphi(y),I(\varphi)\big{)}>\kappa\big{\\}}\bigg{)}<a_{\kappa}\cdot e^{-nb_{\kappa}}\,\textrm{~{}for any $n\in\mathbb{N}.$}$ (2) Let $\chi_{\rm max}(g)=\sup_{x\in M}|\varphi(x)|.$ Apply $C=\chi_{\rm max}(g)$, $\chi_{1}=(\chi^{u}+\chi)/2$ and $\chi_{2}=\chi$ to Lemma 2.5, and one obtains $\rho\in(0,1)$ with the posited properties. Fix $x,z\in M$. We are going to show that $\mathcal{F}^{s}(z)\cap B_{2{\varepsilon}}(x)\neq\emptyset$ for any ${\varepsilon}>0$. For any ${\varepsilon}>0$, by the uniform expansion of $g$ along the strong unstable foliation, there exists an integer $k=k({\varepsilon})$ such that $\mathcal{F}^{u}_{r_{0}}(g^{k}(x))\subset g^{k}(\mathcal{F}_{\varepsilon}^{u}(x)).$ Take $N_{1}$ large enough such that for any $j\geq N_{1},$ one has * • $a_{\kappa}\cdot e^{-jb_{\kappa}}<\operatorname{Leb}_{\mathcal{F}_{r_{0}}^{u}(g^{k}(x))}\big{(}\mathcal{F}_{r_{0}}^{u}(g^{k}(x))\big{)}/2,$ * • $e^{-j\chi/2}\cdot{\varepsilon}_{0}\cdot\|Dg^{-1}\|^{k}<{\varepsilon}$, where $\|Dg^{-1}\|=\sup_{w\in M}\|Dg^{-1}(w)\|$. For $n>\rho^{-1}N_{1}+1$, by Equation (2), one has $\operatorname{Leb}_{\mathcal{F}_{r_{0}}^{u}(g^{k}(x))}\bigg{(}\big{\\{}y\in\mathcal{F}_{r_{0}}^{u}(g^{k}(x)):~{}~{}\operatorname{d}\big{(}\frac{1}{n}S_{n}\varphi(y),I(\varphi)\big{)}>\kappa\big{\\}}\bigg{)}<\operatorname{Leb}_{\mathcal{F}_{r_{0}}^{u}(g^{k}(x))}\big{(}\mathcal{F}_{r_{0}}^{u}(g^{k}(x))\big{)}/2.$ Therefore there exists a point $y\in\mathcal{F}^{u}_{r_{0}}(g^{k}(x))$ such that $\operatorname{d}\big{(}\frac{1}{n}S_{n}\varphi(y),I(\varphi)\big{)}\leq\kappa$, and thus $\frac{1}{n}S_{n}\varphi(y)\geq\chi^{u}-\kappa=\frac{\chi^{u}+\chi}{2}$. By the Pliss lemma, there exists $n^{\prime}\geq n\rho>N_{1}$ such that $\|Dg^{-j}|_{E^{c}_{g}(g^{n^{\prime}}(y))}\|\leq e^{-j\chi}\,\textrm{~{}for any $j\in[1,n^{\prime}].$}$ (3) As $g\in{\cal U}\subset{\cal V}_{2}$, the local strong stable manifold $\mathcal{F}^{s}_{\ell_{0}}(g^{k+n^{\prime}}(z))$ has non-empty transverse intersection with ${\cal W}^{cu}_{{\varepsilon}_{0}}(g^{n^{\prime}}(y),g).$ As the strong stable foliation is $g$-invariant, $\mathcal{F}^{s}(g^{k}(z))$ has non-empty transverse intersection with $g^{-n^{\prime}}\big{(}{\cal W}^{cu}_{{\varepsilon}_{0}}(g^{n^{\prime}}(y),g)\big{)}$. As $g\in{\cal V}_{2}\subset\widetilde{\cal V}_{2}$, by Equation (3), the first item in Lemma 2.8 and the locally invariant property of the plaque family, one has $g^{-n^{\prime}}\big{(}{\cal W}^{cu}_{{\varepsilon}_{0}}(g^{n^{\prime}}(y),g)\big{)}\subset{\cal W}^{cu}_{e^{-n^{\prime}\chi/2}{\varepsilon}_{0}}(y,g).$ Note that $e^{-n^{\prime}\chi/2}\cdot{\varepsilon}_{0}\cdot\|Dg^{-1}\|^{k}<{\varepsilon}$ due to $n^{\prime}>N_{1}$; then the strong stable manifold $\mathcal{F}^{s}(z)$ intersects $g^{-k}\big{(}{\cal W}^{cu}_{e^{-n^{\prime}\chi/2}{\varepsilon}_{0}}(y,g)\big{)}$, which is contained in $B_{\varepsilon}(g^{-k}(y))$. Since $g^{-k}(y)\in{\cal F}^{u}_{\varepsilon}(x)$, one has $\mathcal{F}^{s}(z)\cap B_{2{\varepsilon}}(x)\neq\emptyset.$ By the arbitrariness of $x,z$ and ${\varepsilon}$, one deduces that the strong stable foliation of $g$ is minimal. 
∎ It has been shown in [Y] that there are plenty of mostly expanding partially hyperbolic diffeomorphisms among conservative partially hyperbolic diffeomorphisms with positive center Lyapunov exponents. Recall that a partially hyperbolic diffeomorphism is _accessible_, if any pair of points can be connected by a path which is a concatenation of paths lying entirely in a strong stable or a strong unstable manifold. ###### Theorem 2.21 (Theorem D in [Y]). Let $f\in\operatorname{Diff}_{\rm m}^{1+\alpha}(M)$ be a volume preserving partially hyperbolic diffeomorphism with $\operatorname{dim}(E^{c})=1.$ Assume that $f$ is accessible and the volume measure has positive center Lyapunov exponent. Then $f$ is mostly expanding. Instead of assuming accessibility, one can assume the minimality of the strong stable foliation and obtain a similar result. It will be used to prove Corollary B. ###### Theorem 2.22. Let $f\in\operatorname{Diff}_{\rm m}^{1+\alpha}(M)$ be a partially hyperbolic diffeomorphism preserving a smooth volume ${\rm m}$. Assume that * • the strong stable foliation of $f$ is minimal; * • $\operatorname{dim}(E^{c})=1$ and $\int\log\|Df|_{E^{c}}\|\operatorname{d}\operatorname{m}>0$. Then one has that * – $f$ is mostly expanding; * – $(f,\operatorname{m})$ is $C^{1}$-stably ergodic, that is, there exists a $C^{1}$-neighborhood $\mathcal{U}\subset\operatorname{Diff}^{1}(M)$ of $f$ such that each $g\in{\cal U}\cap\operatorname{Diff}_{\rm m}^{1+\alpha}(M)$ is ergodic with respect to the volume measure $\operatorname{m}.$ ###### Proof. By the absolute continuity of the strong stable and unstable foliations, each ergodic component of the volume measure $\operatorname{m}$ is both a $u$-Gibbs state and an $s$-Gibbs state (i.e. a $u$-Gibbs state for $f^{-1}$). By assumption, there exists an ergodic component $\operatorname{m}_{0}$ of $\operatorname{m}$ such that $\chi^{c}(\operatorname{m}_{0},f)>0.$ Note that $\operatorname{m}_{0}$ is a $u$-Gibbs state for $f$ as well as a $u$-Gibbs state for $f^{-1}$. As $\operatorname{m}_{0}$ has negative center Lyapunov exponent for $f^{-1}$, it is also an SRB measure for $f^{-1}$. ###### Claim 2.23. $f^{-1}$ has a unique $u$-Gibbs state, namely $\operatorname{m}_{0}$. ###### Proof. Let $\mu$ be an ergodic $u$-Gibbs state for $f^{-1}$. Note that the conditional measures of $\operatorname{m}_{0}$ along the strong unstable manifolds of $f^{-1}$ are absolutely continuous with respect to the Lebesgue measure, and that $\operatorname{m}_{0}$ has negative center Lyapunov exponent for $f^{-1}$. Then by the absolute continuity of the Pesin stable manifolds for $(\operatorname{m}_{0},f^{-1})$, the minimality of the strong unstable foliation of $f^{-1}$ and the ergodicity of the measures $\operatorname{m}_{0}$ and $\mu$, one deduces that $\operatorname{m}_{0}=\mu$. ∎ Claim 2.23 implies that $\operatorname{m}$ has only one ergodic component, $\operatorname{m}_{0}$, proving the ergodicity of $\operatorname{m}.$ Consider an ergodic $u$-Gibbs state $\mu$ for $f$; we need to show that $\mu$ has positive center Lyapunov exponent for $f$. By Theorem 2.12, one has $h_{\mu}(f,\mathcal{F}^{u})=\int\log|\operatorname{det}(Df|_{E^{u}})|\operatorname{d}\mu$. 
If $\chi^{c}(\mu,f)\leq 0$, by Theorem 2.17, one has $h_{\mu}(f)=h_{\mu}(f,\mathcal{F}^{u})=\int\log|\operatorname{det}(Df|_{E^{u}})|\operatorname{d}\mu.$ As $f$ is volume preserving, one has $\int\log|\operatorname{det}(Df|_{E^{u}})|\operatorname{d}\mu=-\int\log|\operatorname{det}(Df|_{E^{s}\oplus E^{c}})|\operatorname{d}\mu.$ This gives that $h_{\mu}(f)=-\int\log|\operatorname{det}(Df|_{E^{s}\oplus E^{c}})|\operatorname{d}\mu$. By Theorem 2.15, $\mu$ is an SRB measure for $f^{-1}$. By the absolute continuity of the strong unstable manifolds of $f^{-1}$, $\mu$ is also a $u$-Gibbs state for $f^{-1}$. Then Claim 2.23 gives that $\mu=\operatorname{m}_{0}$, which contradicts the fact that $\chi^{c}(\mu,f)\leq 0$ and $\chi^{c}(\operatorname{m}_{0},f)>0$. This proves that all the ergodic $u$-Gibbs states of $f$ have positive center Lyapunov exponent, and hence $f$ is mostly expanding. Let ${\cal U}\subset\operatorname{Diff}^{1}(M)$ be the $C^{1}$-open neighborhood of $f$ given by Theorem 2.20. For each $g\in{\cal U}\cap\operatorname{Diff}^{1+\alpha}_{\operatorname{m}}(M),$ by the absolute continuity of the strong unstable foliation, one has $\operatorname{m}\in G^{u}(g)$, and thus $\int\log\|Dg|_{E^{c}_{g}}\|\operatorname{d}\operatorname{m}>0$ due to the first item of Theorem 2.20. Now apply the arguments above to $(g,\operatorname{m})$ with $g\in{\cal U}\cap\operatorname{Diff}^{1+\alpha}_{\operatorname{m}}(M)$, and one deduces that $(g,\operatorname{m})$ is ergodic. ∎ ## 3 Existence of periodic orbits with prescribed behavior This section gives the main ingredient for proving our main results. We will first find some periodic orbits which shadow the generic points of an ergodic measure with non-positive center Lyapunov exponent for some proportion of time. Throughout this section, given a foliation $\mathcal{F}$ on $M$ with $C^{1}$-leaves, $x\in M$ and $\ell>0$, we denote by $\mathcal{F}_{\ell}(x)$ the $\ell$-neighborhood of $x$ in the leaf $\mathcal{F}(x)$ under its intrinsic topology. Given a partially hyperbolic diffeomorphism, we simply call a plaque family corresponding to the splitting $E^{c}\oplus E^{u}$ a _$cu$-plaque family_. Given a partially hyperbolic diffeomorphism $f$ with $\operatorname{dim}(E^{c})=1$ and $\chi>0$, define the Pesin block of the following form: ${\rm NUH}_{\chi}=\big{\\{}x\in M:\|Df^{-n}|_{E^{c}(x)}\|\leq e^{-n\chi}\textrm{~{}for any $n\in\mathbb{N}$}\big{\\}}.$ ###### Theorem 3.1. Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism with $\operatorname{dim}(E^{c})=1$. Suppose that * • there exists $\chi_{0}>0$ such that $\int\log\|Df|_{E^{c}}\|\operatorname{d}\mu>\chi_{0}$ for any $\mu\in G^{u}(f)$; * • let ${\cal W}^{cu}$ be a $cu$-plaque family and ${\varepsilon}_{0}>0$ be given by Lemma 2.8 corresponding to $\chi_{0}$; then there exists a constant $\ell_{0}>0$ such that the strong stable foliation is $(\ell_{0},{\varepsilon}_{0}/2)$-dense with respect to ${\cal W}^{cu}$. Then there exist a constant $\rho_{1}>0$ and a hyperbolic periodic point $q$ with positive center Lyapunov exponent and whose unstable manifold has inner radius ${\varepsilon}_{0}$ such that the following properties hold. 
For any ergodic measure $\nu$ with $-\chi_{0}/24<\chi^{c}(\nu)\leq 0$ and any ${\varepsilon}>0$, there exists $\xi_{1}>0$ such that for any $\xi\in(0,\xi_{1})$, there exist periodic points $p_{1},\cdots,p_{m}\in{\rm NUH}_{2(|\chi^{c}(\nu)|+{\varepsilon})}$ of the same period $l\in\mathbb{N}$ satisfying that * • each $p_{i}$ is homoclinically related to $q$; * • $\operatorname{d}(p_{i},p_{j})<\xi/16$ for $1\leq i,j\leq m$; * • $\\{p_{1},\cdots,p_{m}\\}$ is an $(l,\xi)$-separated set; * • $\frac{\log m}{l}>\frac{h_{\nu}(f)-{\varepsilon}}{1+\rho_{1}(|\chi^{c}(\nu)|+{\varepsilon})};$ * • $\operatorname{d}(\delta_{{\cal O}_{p_{j}}},\nu)<\rho_{1}(|\chi^{c}(\nu)|+{\varepsilon})$ for any $1\leq j\leq m$. The periodic orbits obtained in Theorem 3.1 will be used to find horseshoes approaching the ergodic measure $\nu$ with $\chi^{c}(\nu)\leq 0$ in the weak$*$-topology and in entropy. To find the periodic orbits, we first need to find some periodic pseudo-orbits which are close to the ergodic measure in the weak$*$-topology and have weak (but not too weak) center Lyapunov exponent; then Liao’s shadowing lemma is applied. Some similar ideas can be found in [BZ, YZ], which assume both the minimality of the strong foliations and the existence of $cs$-blenders and $cu$-blenders (one can refer to [BD] for the definition of blenders). ### 3.1 Generating orbit segments with weak hyperbolicity The following is a key result to find pseudo-orbits with some weak expansion along the center, satisfying the assumption in Liao’s shadowing lemma. ###### Proposition 3.2. Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism with $\operatorname{dim}(E^{c})=1$. Suppose that * • there exists $\chi_{0}>0$ such that $\int\log\|Df|_{E^{c}}\|\operatorname{d}\mu>\chi_{0}$ for any $\mu\in G^{u}(f)$; * • let ${\cal W}^{cu}$ be a $cu$-plaque family and ${\varepsilon}_{0}>0$ be given by Lemma 2.8 corresponding to $\chi_{0}$; then there exists a constant $\ell_{0}>0$ such that the strong stable foliation is $(\ell_{0},{\varepsilon}_{0}/2)$-dense with respect to ${\cal W}^{cu}$. Then there exist a hyperbolic periodic point $q$ and a constant $\rho_{0}>0$ with the following properties: * • $\|Df^{-i}|_{E^{c}(q)}\|<e^{-i\chi_{0}}$ for any $i\in{\mathbb{N}}$; * • for any $C>1$, $\chi\in(-\chi_{0}/24,\chi_{0}/24)$, ${\varepsilon}\in(0,\chi_{0}/24)$ and $d>0$, there exist integers $\tau=\tau({\varepsilon},d)<T=T(C,\chi,{\varepsilon},d)$ such that for any $n\geq T$ and any $x\in M$ satisfying $C^{-1}\cdot e^{k(\chi-{\varepsilon})}\leq\|Df^{k}|_{E^{c}(x)}\|\leq C\cdot e^{k(\chi+{\varepsilon})}\>\textrm{~{}for any $0\leq k\leq n,$}$ there exist a point $w_{x}\in W^{u}_{d/2}(q)$ and an integer $l_{x}\in\big{(}n,\>n+\rho_{0}\cdot(|\chi|+{\varepsilon})\cdot n\big{)}$ such that * – $f^{l_{x}}(w_{x})\in{\cal F}^{s}_{d/2}(q)$; * – $\operatorname{d}(f^{\tau+i}(w_{x}),f^{i}(x))<d$ for any $i\in[\tau,n]$; * – $\|Df^{-i}|_{E^{c}(f^{l_{x}}(w_{x}))}\|\leq e^{-2i(|\chi|+{\varepsilon})}$ for any $i\in[1,l_{x}]$. 
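For orientation, let us unpack the second part of Proposition 3.2 in the special case $\chi=0$; this is a direct specialization of the statement, recorded only for the reader's convenience. The condition on $x$ then reads $C^{-1}\cdot e^{-k{\varepsilon}}\leq\|Df^{k}|_{E^{c}(x)}\|\leq C\cdot e^{k{\varepsilon}}\>\textrm{~{}for any $0\leq k\leq n$},$ that is, the center derivative is sub-exponential along the orbit segment $x,f(x),\cdots,f^{n}(x)$; this is exactly the behavior of the points in the Pesin blocks ${\rm\mathcal{L}}_{C,0,{\varepsilon}}$ introduced in Section 3.2 below. The conclusion upgrades such a segment to a genuine orbit piece running from $W^{u}_{d/2}(q)$ to ${\cal F}^{s}_{d/2}(q)$ which $d$-shadows the segment and carries uniform center expansion, at the cost of a time overhead of order $\rho_{0}\cdot(|\chi|+{\varepsilon})\cdot n$. 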
Then we iterate the strip at $f^{n}(x^{s})$ by some uniform finite time $\tau$ to make the size in the strong unstable direction at some uniform scale so that we can apply the large deviation property for the set $G^{u}(f)$ given by Theorem 2.14. Combining with Pliss lemma, we can find a point has ”Pliss-property” which guarantees that after iterating the strip at $f^{n+\tau}(x^{s})$ by $m$-times, the $cu$-strip has size at scale ${\varepsilon}_{0}$, and thus it intersects the local strong stable manifold of $q$. Our argument can guarantee that $m/n$ is at scale-$\rho_{0}\cdot(|\chi|+{\varepsilon})$ for some uniform constant $\rho_{0}.$ In this section, a _center curve_ means a $C^{1}$-curve everywhere tangent to the center bundle. ###### Proof of Proposition 3.2. Define $\chi_{\rm max}=\sup_{y\in M}|\log\|Df|_{E^{c}(y)}\||>0.$ By Lemma 2.13, the set $G^{u}(f)$ is non-empty and compact. Let us define $\chi^{u}=\inf\big{\\{}\int\log\|Df|_{E^{c}}\|\operatorname{d}\mu\,|\,~{}\mu\in G^{u}(f)\big{\\}}.$ By assumption, one has $0<\chi_{0}<\chi^{u}\leq\chi_{\rm max}.$ By the local invariant property of the plaque family ${\cal W}^{cu}$, for ${\varepsilon}_{0}$ given in the assumption, there exists $\delta_{0}>0$ such that $f^{-1}({\cal W}_{\delta_{0}}^{cu}(y))\subset{\cal W}^{cu}_{{\varepsilon}_{0}}(f^{-1}(y))\,\textrm{~{} for any $y\in M.$}$ (4) Since hyperbolic ergodic measures are approximated by periodic measures [C, G2, Ge], by the continuity of the function $\log\|Df|_{E^{c}}\|$, there exists a periodic point $q$ with $\chi^{c}(\delta_{{\cal O}_{q}})>\chi_{0}.$ By Pliss lemma (Lemma 2.5), up to replacing $q$ by some $f^{i}(q)$, one can assume that $\|Df^{-i}|_{E^{c}(q)}\|<e^{-i\chi_{0}}\textrm{~{} for any $i\in\mathbb{N}.$}$ (5) In the following, for simplicity, we shall assume that $q$ is a fixed point. Up to changing a metric, we shall assume that $E^{c}$ is orthogonal to $E^{u}$. By Lemma 2.8, the point $q$ has unstable manifold $W^{u}_{{\varepsilon}_{0}}(q)={\cal W}^{cu}_{{\varepsilon}_{0}}(q)$ of size ${\varepsilon}_{0}$ and tangent everywhere to $E^{c}\oplus E^{u}$. ###### Claim 3.3. For any $z\in W^{u}_{{\varepsilon}_{0}}(q)$, one has that ${\cal W}^{cu}_{{\varepsilon}_{0}}(z)\subset W^{u}(q)$. ###### Proof. Take $z\in W^{u}_{{\varepsilon}_{0}}(q)={\cal W}^{cu}_{{\varepsilon}_{0}}(q)$. By Equation (5) and the second item of Lemma 2.8, one has $\|Df^{-i}|_{E^{c}(z)}\|<e^{-3i\chi_{0}/4}\textrm{~{} for any $i\in\mathbb{N}.$}$ Recall that ${\varepsilon}_{0}$ is given by Lemma 2.8 corresponding to $\chi_{0}$. 
By the first item of Lemma 2.8, for any $w\in{\cal W}^{cu}_{{\varepsilon}_{0}}(z)$, one has $\|Df^{-1}|_{T_{w}{\cal W}^{cu}_{{\varepsilon}_{0}}(z)}\|<e^{-\chi_{0}/2}.$ By the local invariance of the plaque family ${\cal W}^{cu}$, one has $f^{-1}({\cal W}^{cu}_{{\varepsilon}_{0}}(z))\subset{\cal W}^{cu}_{{\varepsilon}_{0}\cdot e^{-\chi_{0}/2}}(f^{-1}(z)).$ Once again using the first item of Lemma 2.8, one has $\|Df^{-2}|_{T_{w}{\cal W}^{cu}_{{\varepsilon}_{0}}(z)}\|<e^{\chi_{0}/2}\cdot\|Df^{-2}|_{E^{c}(z)}\|<e^{-\chi_{0}}.$ By the local invariance of the plaque family, one has $f^{-2}({\cal W}^{cu}_{{\varepsilon}_{0}}(z))\subset{\cal W}^{cu}_{{\varepsilon}_{0}\cdot e^{-\chi_{0}}}(f^{-2}(z)).$ Inductively using the arguments above, one deduces that $f^{-i}({\cal W}^{cu}_{{\varepsilon}_{0}}(z))\subset{\cal W}^{cu}_{{\varepsilon}_{0}\cdot e^{-i\chi_{0}/2}}(f^{-i}(z))$ for any $i\in\mathbb{N}.$ Since $z\in W^{u}_{{\varepsilon}_{0}}(q)$, one deduces that ${\cal W}^{cu}_{{\varepsilon}_{0}}(z)\subset W^{u}(q)$. ∎ Now apply $\varphi(x)=\log\|Df|_{E^{c}(x)}\|$ and $\kappa=(\chi^{u}-\chi_{0})/2$ to Theorem 2.14, and one obtains positive constants $r_{0},a,b>0$ and a $Df$-invariant cone field $\widetilde{\mathcal{C}}^{u}$ with the posited properties. We take a smaller $Df$-invariant cone field $\mathcal{C}^{u}\subset\widetilde{\mathcal{C}}^{u}$ such that $f$ is uniformly expanding along $\mathcal{C}^{u}$, that is, there exists $\lambda^{u}>1$ such that $\|Df(v)\|\geq\lambda^{u},\,\textrm{~{}for any $x\in M$ and any unit vector $v\in\mathcal{C}^{u}(x)$}.$ Fix a $C^{1}$-foliation $\widehat{F}^{u}$ on the local unstable manifold $W^{u}_{{\varepsilon}_{0}}(q)$, whose leaves are discs tangent to the cone field $\mathcal{C}^{u}$ (of course, this foliation is not $f$-invariant and not unique, and we just fix one). Let $C_{u}>1$ be an upper bound of the Lipschitz constants of the holonomy maps (from center curves to center curves) of this foliation $\widehat{F}^{u}$. By the uniform continuity of the center bundle, for ${\varepsilon}>0$, there exists $\delta\in(0,\min\\{{\varepsilon}_{0}/2,\delta_{0}\\})$ such that $\big{|}\log\|Df|_{E^{c}(y_{1})}\|-\log\|Df|_{E^{c}(y_{2})}\|\big{|}<{\varepsilon}/4,\textrm{~{}for any $y_{1},y_{2}\in M$ with $\operatorname{d}(y_{1},y_{2})<\delta$.}$ (6) For any $d>0$, up to shrinking $\delta$, one can assume that $\delta<d/2$. As $f$ is uniformly contracting and expanding along $E^{s}$ and $\mathcal{C}^{u}$ respectively, and as $q$ has a local unstable manifold of size ${\varepsilon}_{0}$, there exists an integer $\tau=\tau({\varepsilon},d)>0$ such that for any $k\geq\tau$ one has * • $f^{k}({\cal F}^{s}_{\ell_{0}}(y))\subset{\cal F}^{s}_{\delta}(f^{k}(y))$ for any $y\in M$; * • for any disc $D$ tangent to $\mathcal{C}^{u}$ and of inner radius $\delta/2$, the disc $f^{k}(D)$ is tangent to $\mathcal{C}^{u}$ and contains a disc of inner radius $r_{0}$; * • $f^{-k}(W^{u}_{{\varepsilon}_{0}}(q))\subset W^{u}_{\delta}(q)$. Take $n>\tau$ large (to be specified later). 
Now for every $x\in M$ satisfying $C^{-1}\cdot e^{k(\chi-{\varepsilon})}\leq\|Df^{k}|_{E^{c}(x)}\|\leq C\cdot e^{k(\chi+{\varepsilon})}\>\textrm{~{}for any $0\leq k\leq n,$}$ let $x^{s}\in{\cal F}^{s}_{\ell_{0}}(x)\cap W^{u}_{{\varepsilon}_{0}/2}(q)$; then one has $C^{-1}\cdot e^{-3\tau\chi_{\rm max}}\cdot e^{i(\chi-5{\varepsilon}/4)}\leq\|Df^{i}|_{E^{c}(x^{s})}\|\leq C\cdot e^{3\tau\chi_{\rm max}}\cdot e^{i(\chi+5{\varepsilon}/4)}\>\textrm{~{}for any $0\leq i\leq n$}.$ (7) Now, consider a center curve $\sigma_{n}\subset W^{u}_{{\varepsilon}_{0}}(q)$ centered at $x^{s}$ of length $\delta\cdot e^{-2n(|\chi|+{\varepsilon})}$ and the $C^{1}$-foliation $\widehat{F}^{u}$ on $W^{u}_{{\varepsilon}_{0}}(q)$ whose leaves are disks tangent to the cone field $\mathcal{C}^{u}$. Let us denote by $\mathfrak{F}_{n}^{cu}(x^{s}):=\cup_{z\in\sigma_{n}}\widehat{F}^{u}_{\delta}(z)$, which is a $C^{1}$-submanifold of $W^{u}_{{\varepsilon}_{0}}(q)$. By Claim 3.3, one also has $\mathfrak{F}_{n}^{cu}(x^{s})\subset{\cal W}^{cu}_{{\varepsilon}_{0}}(x^{s})\subset W^{u}(q).$ We call $\cup_{z\in\partial\sigma_{n}}\widehat{F}^{u}_{\delta}(z)$ the $u$-boundary of $\mathfrak{F}_{n}^{cu}(x^{s})$, which has two connected components. One defines the center-size of $\mathfrak{F}_{n}^{cu}(x^{s})$ as the infimum of the lengths of the center curves in $\mathfrak{F}_{n}^{cu}(x^{s})$ which join the two connected components of the $u$-boundary of $\mathfrak{F}_{n}^{cu}(x^{s})$. Then the center-size of $\mathfrak{F}_{n}^{cu}(x^{s})$ is bounded from below by $C_{u}^{-1}\cdot\delta\cdot e^{-2n(|\chi|+{\varepsilon})}$ and from above by $C_{u}\cdot\delta\cdot e^{-2n(|\chi|+{\varepsilon})}$. For each $i\in[0,n]$, let $\mathfrak{F}_{n}^{cu}(f^{i}(x^{s}))$ be the connected component of $f^{i}(\mathfrak{F}_{n}^{cu}(x^{s}))\cap B(f^{i}(x^{s}),\delta)$ containing $f^{i}(x^{s})$. By the locally invariant property of the plaque family, one deduces that $\mathfrak{F}_{n}^{cu}(f^{i}(x^{s}))\subset{\cal W}^{cu}_{\delta}(f^{i}(x^{s})).$ Besides, one can analogously define the $u$-boundary and the center-size of $\mathfrak{F}_{n}^{cu}(f^{i}(x^{s}))$ respectively. Let $N_{1}=N_{1}({\varepsilon},d,C)$ be an integer such that $C_{u}\cdot C\cdot e^{5\tau\chi_{\rm max}}<e^{k{\varepsilon}/4}\,\textrm{~{}and~{}}\,C\cdot e^{6\tau\chi_{\rm max}}<e^{k{\varepsilon}^{2}/(2\chi_{\rm max})}\,\textrm{~{}for any $k\geq N_{1}$}.$ (8) Therefore for $n\geq N_{1}$, the center-size of $\mathfrak{F}_{n}^{cu}(x^{s})$ is much smaller than $\delta$. ###### Claim 3.4. For any $n\geq N_{1}$ and each $i\in[0,n]$, the disc $\mathfrak{F}_{n}^{cu}(f^{i}(x^{s}))$ has its center-size in between $\big{(}\delta\cdot e^{-4n(|\chi|+{\varepsilon})},\>\delta\cdot e^{-n(|\chi|+{\varepsilon}/4)}\big{)}$. ###### Proof. By Equation (6), for each $y\in\mathfrak{F}_{n}^{cu}(f^{i}(x^{s}))$ and any $j\leq i+1$, one has $\|Df^{j}|_{E^{c}(x^{s})}\|\cdot e^{-j{\varepsilon}/4}\leq\|Df^{j}|_{E^{c}(y)}\|\leq\|Df^{j}|_{E^{c}(x^{s})}\|\cdot e^{j{\varepsilon}/4},$ and thus by Equation (7), one has $C^{-1}\cdot e^{-3\tau\chi_{\rm max}}\cdot e^{j(\chi-3{\varepsilon}/2)}\leq\|Df^{j}|_{E^{c}(y)}\|\leq C\cdot e^{3\tau\chi_{\rm max}}\cdot e^{j(\chi+3{\varepsilon}/2)}.$ (9) We prove this claim inductively; the case $i=0$ follows from the bounds on the center-size of $\mathfrak{F}_{n}^{cu}(x^{s})$ obtained above, together with Equation (8). 
Assume now that the conclusion holds for some $i<n$, and consider a center curve $\gamma\subset f(\mathfrak{F}_{n}^{cu}(f^{i}(x^{s})))$ which joins the two $u$-boundary components of the $cu$-strip $f(\mathfrak{F}_{n}^{cu}(f^{i}(x^{s})))$; then, by the uniform expansion of $f$ along the cone field $\mathcal{C}^{u}$ and the invariance of the center bundle, $f^{-j}(\gamma)$ is a center curve joining the two $u$-boundary components of $\mathfrak{F}_{n}^{cu}(f^{i+1-j}(x^{s}))$. Let $\tilde{\gamma}:[0,1]\to M$ denote the center curve $f^{-(i+1)}(\gamma)$; then the length $\ell(\tilde{\gamma})$ of $\tilde{\gamma}$ has the following bounds $C_{u}^{-1}\cdot\delta\cdot e^{-2n(|\chi|+{\varepsilon})}\leq\ell(\tilde{\gamma})\leq C_{u}\cdot\delta\cdot e^{-2n(|\chi|+{\varepsilon})},$ and the length $\ell(\gamma)$ of the center curve $\gamma$ is given by $\ell(\gamma)=\int_{0}^{1}\|Df^{i+1}\frac{\operatorname{d}\tilde{\gamma}(t)}{\operatorname{d}t}\|\operatorname{d}t=\int_{0}^{1}\|Df^{i+1}|_{E^{c}(\tilde{\gamma}(t))}\|\cdot\|\frac{\operatorname{d}\tilde{\gamma}(t)}{\operatorname{d}t}\|\operatorname{d}t.$ By Equation (9), one deduces that $C^{-1}\cdot e^{-3\tau\chi_{\rm max}}\cdot e^{{(i+1)}(\chi-3{\varepsilon}/2)}\cdot\ell(\tilde{\gamma})\leq\ell(\gamma)\leq C\cdot e^{3\tau\chi_{\rm max}}\cdot e^{{(i+1)}(\chi+3{\varepsilon}/2)}\cdot\ell(\tilde{\gamma}).$ Combining with Equation (8), one has $\displaystyle\ell(\gamma)$ $\displaystyle\leq\delta\cdot C_{u}\cdot C\cdot e^{3\tau\chi_{\rm max}}\cdot e^{{(i+1)}(\chi+3{\varepsilon}/2)}\cdot e^{-2n(|\chi|+{\varepsilon})}$ $\displaystyle<\delta\cdot e^{n{\varepsilon}/4}\cdot e^{n(|\chi|+3{\varepsilon}/2)}\cdot e^{-2n(|\chi|+{\varepsilon})}$ $\displaystyle=\delta\cdot e^{-n(|\chi|+{\varepsilon}/4)}$ and $\displaystyle\ell(\gamma)$ $\displaystyle\geq\delta\cdot C_{u}^{-1}\cdot C^{-1}\cdot e^{-3\tau\chi_{\rm max}}\cdot e^{{(i+1)}(\chi-3{\varepsilon}/2)}\cdot e^{-2n(|\chi|+{\varepsilon})}$ $\displaystyle>\delta\cdot e^{-n{\varepsilon}/4}\cdot e^{-n(|\chi|+3{\varepsilon}/2)}\cdot e^{-2n(|\chi|+{\varepsilon})}$ $\displaystyle>\delta\cdot e^{-4n(|\chi|+{\varepsilon})}.$ ∎ By the choice of $\tau$, the $cu$-disc $f^{\tau}(\mathfrak{F}_{n}^{cu}(f^{n}(x^{s})))$ is $C^{1}$-foliated by discs of radius no less than $r_{0}$ tangent to the cone field $\mathcal{C}^{u}$, and its center-size is bounded from below by $\delta\cdot e^{-4n(|\chi|+{\varepsilon})}\cdot e^{-\tau\chi_{\rm max}}$. Let $D\subset f^{\tau}(\mathfrak{F}_{n}^{cu}(f^{n}(x^{s})))$ be the $C^{1}$-disc through the point $f^{n+\tau}(x^{s})$ of radius $r_{0}$; then there exists a small neighborhood $\widehat{D}$ of $f^{n+\tau}(x^{s})$ in $D$ such that for each point $y\in\widehat{D}$, the disc $f^{\tau}(\mathfrak{F}_{n}^{cu}(f^{n}(x^{s})))$ contains a $cu$-disc centered at $y$ of radius $\delta_{n}$, where $\delta_{n}:=2^{-1}\cdot\delta\cdot e^{-4n(|\chi|+{\varepsilon})}\cdot e^{-\tau\chi_{\rm max}}$. Now we show that the $cu$-plaque of size $\delta_{n}$ centered at $y\in\widehat{D}$ is contained in $f^{\tau}(\mathfrak{F}_{n}^{cu}(f^{n}(x^{s})))$, which is essentially due to Claim 3.3 and the locally invariant property of $cu$-plaques. ###### Claim 3.5. For any $y\in\widehat{D}$, one has ${\cal W}^{cu}_{\delta_{n}}(y)\subset f^{\tau}(\mathfrak{F}_{n}^{cu}(f^{n}(x^{s}))).$ ###### Proof. 
By the choice of $\delta_{0}$ in Equation (4), one has $f^{-1}({\cal W}^{cu}_{\delta_{0}}(f^{-n-\tau+1}(y)))\subset{\cal W}^{cu}_{{\varepsilon}_{0}}(f^{-n-\tau}(y)).$ Since $\mathfrak{F}_{n}^{cu}(x^{s})\subset W_{{\varepsilon}_{0}}^{u}(q)$, one has that $f^{-n-\tau}(y)\in\mathfrak{F}_{n}^{cu}(x^{s})\subset W_{{\varepsilon}_{0}}^{u}(q)$. By Claim 3.3, one has that ${\cal W}^{cu}_{{\varepsilon}_{0}}(f^{-n-\tau}(y))\subset W^{u}(q)$. As $\mathfrak{F}_{n}^{cu}(f(x^{s}))$ is the connected component of $f(\mathfrak{F}_{n}^{cu}(x^{s}))\cap B_{\delta}(f(x^{s}))$ containing $f(x^{s})$, combining with Claim 3.4 and $\delta<\delta_{0}$, one deduces that ${\cal W}^{cu}_{\delta_{n}}(f^{-n-\tau+1}(y))\subset\mathfrak{F}_{n}^{cu}(f(x^{s})).$ By applying the argument above inductively, one can conclude. ∎ Apply the disc $D$ to Theorem 2.14, and one deduces $\operatorname{Leb}_{D}\big{(}\big{\\{}y\in D:\operatorname{d}\big{(}\frac{1}{k}\log\|Df^{k}|_{E^{c}(y)}\|,I(\varphi)\big{)}\leq(\chi^{u}-\chi_{0})/2\big{\\}}\big{)}>\operatorname{Leb}_{D}(D)-a\cdot e^{-kb}\,\textrm{~{} for any $k\in{\mathbb{N}}$},$ where $I(\varphi)=\big{[}\chi^{u},\,\sup\\{\int\log\|Df|_{E^{c}}\|\operatorname{d}\mu\,|~{}\mu\in G^{u}(f)\\}\big{]}$ is an interval. As $b>0$, there exists $k_{0}\in{\mathbb{N}}$ such that $\operatorname{Leb}_{D}\big{(}\big{\\{}y\in\widehat{D}:\|Df^{k}|_{E^{c}(y)}\|\geq e^{k(\chi^{u}+\chi_{0})/2}\big{\\}}\big{)}>0\,\textrm{~{}for any $k\geq k_{0}$.}$ Let $N_{2}=N_{2}(\chi,{\varepsilon})\in\mathbb{N}$ be an integer such that for any $n\geq N_{2}$, one has $\frac{12n(|\chi|+{\varepsilon})}{\chi_{0}\cdot\rho}>k_{0},$ where $\rho=\frac{\chi^{u}-\chi_{0}}{2(\chi_{\rm max}-\chi_{0})}$ is given by the Pliss lemma (Lemma 2.5) with $C=\chi_{\rm max}$, $\chi_{1}=(\chi^{u}+\chi_{0})/2$ and $\chi_{2}=\chi_{0}.$ Take $~{}\widetilde{m}=\big{[}\frac{12n(|\chi|+{\varepsilon})}{\chi_{0}\cdot\rho}\big{]}+1>k_{0}.$ (10) Then there exists a point $y_{0}\in\widehat{D}$ such that $\|Df^{\widetilde{m}}|_{E^{c}(y_{0})}\|\geq e^{\widetilde{m}(\chi^{u}+\chi_{0})/2}$. By the Pliss lemma, there exists an integer $m\in[\widetilde{m}\rho,\widetilde{m}]$ such that $\|Df^{-i}|_{E^{c}(f^{m}(y_{0}))}\|\leq e^{-i\chi_{0}}\,\textrm{~{} for every $1\leq i\leq m$}.$ (11) ###### Claim 3.6. The $cu$-plaque ${\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0}))$ satisfies that * • $f^{-m}({\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0})))\subset{\cal W}^{cu}_{{\varepsilon}_{0}\cdot e^{-3m\chi_{0}/4}}(y_{0})$; * • $\|Df^{-i}|_{T_{w}{\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0}))}\|<e^{-3i\chi_{0}/4}$ for every $w\in{\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0}))$ and $1\leq i\leq m$. ###### Proof. By the first item in Lemma 2.8 and Equation (11), for any point $w\in{\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0}))$, one has $\|Df^{-1}|_{T_{w}{\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0}))}\|<e^{\chi_{0}/4}\cdot\|Df^{-1}|_{E^{c}(f^{m}(y_{0}))}\|\leq e^{-3\chi_{0}/4}.$ By the local invariance of the plaque family, one has $f^{-1}({\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0})))\subset{\cal W}^{cu}_{{\varepsilon}_{0}\cdot e^{-3\chi_{0}/4}}(f^{m-1}(y_{0}))$. Inductively applying the above arguments, one deduces that * • $f^{-m}({\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0})))\subset{\cal W}^{cu}_{{\varepsilon}_{0}\cdot e^{-3m\chi_{0}/4}}(y_{0})$; * • $\|Df^{-i}|_{T_{w}{\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0}))}\|<e^{-3i\chi_{0}/4}$ for every $w\in{\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0}))$ and $1\leq i\leq m$. 
∎ Recall that $\delta$ is determined by ${\varepsilon}$ and $d$ (by Equation (6) and the requirement $\delta<d/2$). There exists $N_{3}=N_{3}({\varepsilon},d)\in{\mathbb{N}}$ such that $2\cdot{\varepsilon}_{0}<\delta\cdot e^{k{\varepsilon}}\,\textrm{~{}for any $k\geq N_{3}.$ }$ Thus for $n>\max\\{N_{1},N_{3}\\}$, as $m\geq\widetilde{m}\rho$, by Equations (8) and (10), one has $\displaystyle{\varepsilon}_{0}\cdot e^{-3m\chi_{0}/4}<{\varepsilon}_{0}\cdot e^{-3\widetilde{m}\rho\chi_{0}/4}<2^{-1}\cdot\delta\cdot e^{n{\varepsilon}-9n(|\chi|+{\varepsilon})}<2^{-1}\cdot\delta\cdot e^{-4n(|\chi|+{\varepsilon})}\cdot e^{-\tau\chi_{\rm max}}=\delta_{n}.$ By Claims 3.5 and 3.6, one gets that $f^{-m}({\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0})))\subset{\cal W}^{cu}_{{\varepsilon}_{0}\cdot e^{-3m\chi_{0}/4}}(y_{0})\subset{\cal W}^{cu}_{\delta_{n}}(y_{0})\subset f^{\tau}(\mathfrak{F}_{n}^{cu}(f^{n}(x^{s}))).$ Since the strong stable foliation is $(\ell_{0},{\varepsilon}_{0}/2)$-dense with respect to the plaque family ${\cal W}^{cu}$, there exists a point $z_{0}\in{\cal F}^{s}_{\ell_{0}}(q)\cap{\cal W}^{cu}_{{\varepsilon}_{0}/2}(f^{m}(y_{0}))\subset{\cal F}^{s}_{\ell_{0}}(q)\cap f^{m+\tau}(\mathfrak{F}_{n}^{cu}(f^{n}(x^{s}))).$ Since $\mathfrak{F}_{n}^{cu}(f^{n}(x^{s}))$ is tangent to $E^{c}\oplus E^{u}$, so is ${\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0}))$. As $E^{c}$ is dominated by $E^{u}$, by the second item of Claim 3.6, one has that $\|Df^{-i}|_{E^{c}(z_{0})}\|<e^{-3i\chi_{0}/4}\textrm{~{}for any $1\leq i\leq m$.}$ (12) By the choices of $z_{0}$ and $\tau$, one has $f^{\tau}(z_{0})\in{\cal F}^{s}_{\delta}(q)$. By the uniform continuity of the center bundle and Equation (5), there exists an integer $t=t({\varepsilon},d)>\tau$ such that $\|Df^{-i}|_{E^{c}(f^{\tau+t}(z_{0}))}\|<e^{-3i\chi_{0}/4}\textrm{~{}for any $i\leq t+\tau$}.$ Thus one has $\|Df^{-i}|_{E^{c}(f^{\tau+t}(z_{0}))}\|<e^{-3i\chi_{0}/4}\textrm{~{} for every $i\leq t+\tau+m$}.$ (13) Since $t$ and $\tau$ depend only on ${\varepsilon}$ and $d$, there exists $N_{4}=N_{4}({\varepsilon},d)\in\mathbb{N}$ such that $(3\tau+t)\cdot\chi_{\rm max}<k\cdot{\varepsilon}\textrm{~{} for any $k\geq N_{4}$}.$ (14) Let $T=T(C,{\varepsilon},d,\chi)=\max\big{\\{}\tau,N_{1},N_{2},N_{3},N_{4}\big{\\}}.$ From now on, we require that $n>T.$ Let $w_{x}=f^{-m-2\tau-n}(z_{0})$ and $l_{x}=m+n+3\tau+t$. Note that $w_{x}\in f^{-n-\tau}(\mathfrak{F}_{n}^{cu}(f^{n}(x^{s})))\subset f^{-\tau}(W^{u}_{{\varepsilon}_{0}}(q))\subset W^{u}_{\delta}(q)\subset W^{u}_{d/2}(q)$ and $f^{l_{x}}(w_{x})=f^{\tau+t}(z_{0})\in\mathcal{F}^{s}_{\delta}(q)\subset\mathcal{F}^{s}_{d/2}(q)$ since $\delta<d/2$, proving the first item for $w_{x}$. Using Equations (10) and (14), and the fact that $m\leq\widetilde{m}$, one has $l_{x}<\frac{12n(|\chi|+{\varepsilon})}{\chi_{0}\cdot\rho}+n+\frac{n{\varepsilon}}{\chi_{\rm max}}\leq n+n\cdot(\frac{1}{\chi_{\rm max}}+\frac{12}{\chi_{0}\cdot\rho})\cdot(|\chi|+{\varepsilon}).$ It suffices to take $\rho_{0}=\frac{1}{\chi_{\rm max}}+\frac{12}{\chi_{0}\cdot\rho}.$ As $z_{0}\in{\cal W}^{cu}_{{\varepsilon}_{0}}(f^{m}(y_{0}))\subset f^{m+\tau}(\mathfrak{F}_{n}^{cu}(f^{n}(x^{s})))$, for every $0\leq i\leq n$ one deduces that $d(f^{i+\tau}(w_{x}),f^{i}(x^{s}))<\delta.$ Since $\operatorname{d}(f^{i}(x),f^{i}(x^{s}))<\delta$ for every $\tau\leq i\leq n$ and $\delta<d/2$, one gets that $d(f^{i+\tau}(w_{x}),f^{i}(x))<2\delta<d$ for every $\tau\leq i\leq n$, proving the second item for $w_{x}$. 
It remains to show the last item for $w_{x}$; since $\operatorname{dim}(E^{c})=1$, it is equivalent to show that $\|Df^{j}|_{E^{c}(f^{l_{x}-j}(w_{x}))}\|>e^{2j\cdot(|\chi|+{\varepsilon})}\,\textrm{~{}for any $1\leq j\leq l_{x}$.}$ By Equations (6) and (7), for each $0\leq i\leq n$ one has $\|Df^{i}|_{E^{c}(f^{\tau}(w_{x}))}\|\leq e^{i{\varepsilon}/4}\cdot\|Df^{i}|_{E^{c}(x^{s})}\|\leq C\cdot e^{3\tau\chi_{\rm max}}\cdot e^{i(\chi+3{\varepsilon}/2)}$ (15) and $\|Df^{i}|_{E^{c}(f^{\tau}(w_{x}))}\|\geq e^{-i{\varepsilon}/4}\cdot\|Df^{i}|_{E^{c}(x^{s})}\|\geq C^{-1}\cdot e^{-3\tau\chi_{\rm max}}\cdot e^{i(\chi-3{\varepsilon}/2)}.$ Combining with Equation (13) and the fact that $f^{\tau+t}(z_{0})=f^{l_{x}}(w_{x})$, one has $\displaystyle\|Df^{l_{x}}|_{E^{c}(w_{x})}\|=$ $\displaystyle\|Df^{\tau}|_{E^{c}(w_{x})}\|\cdot\|Df^{n}|_{E^{c}(f^{\tau}(w_{x}))}\|\cdot\|Df^{\tau}|_{E^{c}(f^{n+\tau}(w_{x}))}\|\cdot\|Df^{m+t+\tau}|_{E^{c}(f^{n+2\tau}(w_{x}))}\|$ $\displaystyle\geq$ $\displaystyle e^{-\tau\chi_{\rm max}}\cdot C^{-1}\cdot e^{-3\tau\chi_{\rm max}}\cdot e^{n(\chi-3{\varepsilon}/2)}\cdot e^{-\tau\chi_{\rm max}}\cdot e^{3(t+\tau+m)\chi_{0}/4}$ $\displaystyle=$ $\displaystyle C^{-1}\cdot e^{-5\tau\chi_{\rm max}}\cdot e^{n(\chi-3{\varepsilon}/2)}\cdot e^{3(t+\tau+m)\chi_{0}/4}$ $\displaystyle\text{\tiny by Equation (8)}>$ $\displaystyle e^{n(\chi-2{\varepsilon})}\cdot e^{3(t+\tau+m)\chi_{0}/4}.$ To show that $3(t+\tau+m)\chi_{0}/4+n(\chi-2{\varepsilon})\geq 3(m+3\tau+t+n)(|\chi|+{\varepsilon})$, it suffices to prove $3m\chi_{0}/4-n(|\chi|+2{\varepsilon})\geq 3(m+3\tau+t+n)(|\chi|+{\varepsilon})$. As $|\chi|+{\varepsilon}<\chi_{0}/12<\chi_{\rm max}$ and $n>T\geq N_{4}$, by Equation (14), it suffices to show that $3m\chi_{0}/4\geq n(|\chi|+3{\varepsilon})+3(m+n)(|\chi|+{\varepsilon})$. Since $|\chi|+{\varepsilon}<\chi_{0}/12$, one only needs to show that $m\chi_{0}/2\geq n(4|\chi|+6{\varepsilon})$, and this inequality holds due to the fact that $m\geq\widetilde{m}\rho\geq\frac{12n(|\chi|+{\varepsilon})}{\chi_{0}}.$ This proves that $\|Df^{l_{x}}|_{E^{c}(w_{x})}\|>e^{3(m+3\tau+t+n)\cdot(|\chi|+{\varepsilon})}.$ Let $i_{0}\in[1,l_{x}]$ be the largest integer satisfying that $\|Df^{j}|_{E^{c}(f^{i_{0}-j}(w_{x}))}\|>e^{2j\cdot(|\chi|+{\varepsilon})}\,\textrm{~{}for any $1\leq j\leq i_{0}$}.$ (16) Then we claim that $i_{0}=l_{x}.$ By the Pliss lemma, one has $i_{0}\geq l_{x}\cdot\frac{|\chi|+{\varepsilon}}{\chi_{\rm max}-2(|\chi|+{\varepsilon})}>l_{x}\cdot\frac{|\chi|+{\varepsilon}}{\chi_{\rm max}}>\frac{n{\varepsilon}}{\chi_{\rm max}}.$ Assume, on the contrary, that $i_{0}\leq n+2\tau$, so that $i_{0}\in(n{\varepsilon}/\chi_{\rm max},n+2\tau]$. For each $i\in[\tau,n]$, by Equation (15), one has $\|Df^{i}|_{E^{c}(w_{x})}\|=\|Df^{\tau}|_{E^{c}(w_{x})}\|\cdot\|Df^{i-\tau}|_{E^{c}(f^{\tau}(w_{x}))}\|\leq C\cdot e^{4\tau\chi_{\rm max}}\cdot e^{(i-\tau)(\chi+3{\varepsilon}/2)},$ then combining with Equations (8) and (15), one has $\displaystyle\|Df^{i_{0}}|_{E^{c}(w_{x})}\|$ $\displaystyle\leq C\cdot e^{4\tau\chi_{\rm max}}\cdot e^{(i_{0}-\tau)(\chi+3{\varepsilon}/2)}\cdot e^{2\tau\chi_{\rm max}}$ $\displaystyle<e^{n{\varepsilon}^{2}/(2\chi_{\rm max})}\cdot e^{i_{0}(|\chi|+3{\varepsilon}/2)}$ $\displaystyle<e^{i_{0}{\varepsilon}/2}\cdot e^{i_{0}(|\chi|+3{\varepsilon}/2)}$ $\displaystyle\leq e^{i_{0}(|\chi|+2{\varepsilon})},$ which contradicts $\|Df^{i_{0}}|_{E^{c}(w_{x})}\|>e^{2i_{0}(|\chi|+{\varepsilon})}.$ Thus $i_{0}\in(n+2\tau,l_{x}]$. 
By Equation (13), the estimates above and $f^{\tau+t}(z_{0})=f^{l_{x}}(w_{x})$, one has $\|Df^{j}|_{E^{c}(f^{l_{x}-j}(w_{x}))}\|>e^{2j\cdot(|\chi|+{\varepsilon})}\,\textrm{~{}for any $n+2\tau\leq j\leq l_{x}$,}$ which implies that $i_{0}=l_{x}$, since $i_{0}$ is the largest integer in $[1,l_{x}]$ satisfying Equation (16). The proof of Proposition 3.2 is now complete. ∎ According to our proof, the constant $\rho_{0}$ is given by $\rho_{0}=\frac{1}{\chi_{\rm max}}+\frac{24(\chi_{\rm max}-\chi_{0})}{\chi_{0}\cdot(\chi^{u}-\chi_{0})},$ where $\chi_{\rm max}=\sup_{y\in M}|\log\|Df|_{E^{c}(y)}\||$ and $\chi^{u}=\inf\big{\\{}\int\log\|Df|_{E^{c}}\|\operatorname{d}\mu~{}|\,~{}\mu\in G^{u}(f)\big{\\}}.$ ### 3.2 Proof of Theorem 3.1 Given a partially hyperbolic diffeomorphism $f$ with $\operatorname{dim}(E^{c})=1$, and constants $C>1$, $\chi\in\mathbb{R}$ and ${\varepsilon}>0$, let us define the following type of Pesin block: ${\rm\mathcal{L}}_{C,\chi,{\varepsilon}}=\big{\\{}x\in M|~{}~{}C^{-1}\cdot e^{k(\chi-{\varepsilon})}\leq\|Df^{k}|_{E^{c}(x)}\|\leq C\cdot e^{k(\chi+{\varepsilon})}\,\textrm{~{}for any $k\in\mathbb{N}$}\big{\\}}.$ Thus ${\rm\mathcal{L}}_{C,\chi,{\varepsilon}}$ consists of the points whose finite-time center exponents stay ${\varepsilon}$-close to $\chi$, with uniformity constant $C$. Given some separated sets, the following result allows us to find a collection of periodic points which satisfy the assumptions of Theorem 2.10. We will apply Proposition 3.2 to these separated sets to find some periodic pseudo-orbits with some weak expansion along the center bundle, and then apply Liao’s shadowing lemma to get periodic points. ###### Lemma 3.7. Let $f\in\operatorname{Diff}^{1}(M)$ be a partially hyperbolic diffeomorphism with $\operatorname{dim}(E^{c})=1$. Suppose that * • there exists $\chi_{0}>0$ such that $\int\log\|Df|_{E^{c}}\|\operatorname{d}\mu>\chi_{0}$ for any $\mu\in G^{u}(f)$; * • let ${\cal W}^{cu}$ be a $cu$-plaque family and ${\varepsilon}_{0}>0$ be given by Lemma 2.8 corresponding to $\chi_{0}$; then there exists a constant $\ell_{0}>0$ such that the strong stable foliation is $(\ell_{0},{\varepsilon}_{0}/2)$-dense with respect to ${\cal W}^{cu}$. Then there exist $\rho_{0}>0$ and a hyperbolic periodic point $q$ with positive center Lyapunov exponent such that for any continuous functions $\varphi_{1},\cdots,\varphi_{m}$ on $M$, any ${\varepsilon}\in(0,\chi_{0}/24)$, any $\chi\in(-\chi_{0}/24,0]$, any $C>1$ and any $\xi>0$, there exists $N\in\mathbb{N}$ such that for any $n>N$, any $h\geq 0$ and any $(n,2\xi)$-separated set $S\subset{\rm\mathcal{L}}_{C,\chi,{\varepsilon}}$ with cardinality $\\#S\geq e^{n(h-{\varepsilon}/2)}$, there exists a collection $P\subset{\rm NUH}_{|\chi|+{\varepsilon}}$ of periodic points of the same period $l\in\big{(}n,n+\rho_{0}(|\chi|+{\varepsilon})n\big{)}$ such that * – $\\#P>e^{n(h-{\varepsilon})};$ * – $P$ is an $(l,\xi)$-separated set; * – $\operatorname{d}(p,p^{\prime})<\xi/16$ for any $p,p^{\prime}\in P$; * – each $p\in P$ is homoclinically related to $q$; * – for any $p\in P$, there exists $x\in S$ such that for each $1\leq j\leq m$, one has $\big{|}\int\varphi_{j}\operatorname{d}\delta_{{\cal O}_{p}}-\int\varphi_{j}\operatorname{d}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{i}(x)}\big{|}<\big{(}2\rho_{0}(|\chi|+{\varepsilon})+{\varepsilon}\big{)}\|\varphi_{j}\|_{C^{0}},$ where $\|\varphi_{j}\|_{C^{0}}=\sup_{x\in M}|\varphi_{j}(x)|.$ ###### Proof. Let $\rho_{0}>0$ be the constant and $q$ be the hyperbolic periodic point given by Proposition 3.2. 
Let us fix ${\varepsilon}\in(0,\chi_{0}/24)$, $\chi\in(-\chi_{0}/24,0]$, $C>1$, $\xi>0$ and continuous functions $\varphi_{1},\cdots,\varphi_{m}$ on $M$. For ${\varepsilon}>0$, by the uniform continuity of the center bundle and the uniform continuity of the functions $(\varphi_{j})_{1\leq j\leq m}$, there exists $\eta_{1}>0$ such that for any $x,y\in M$ with $\operatorname{d}(x,y)<\eta_{1}$, one has $\big{|}\log\|Df|_{E^{c}(x)}\|-\log\|Df|_{E^{c}(y)}\|\big{|}<{\varepsilon}/2\,\textrm{~{}and~{}}\,\max_{1\leq j\leq m}\frac{|\varphi_{j}(x)-\varphi_{j}(y)|}{\|\varphi_{j}\|_{C^{0}}}<{\varepsilon}/2.$ (17) Apply $\lambda=e^{-2(|\chi|+{\varepsilon})}$ and the dominated splitting $TM=E^{s}\oplus(E^{c}\oplus E^{u})$ to Theorem 2.3, and one obtains the constants $L>1$ and $d_{0}>0$ with the posited properties. By Lemma 2.7, there exists $\delta_{0}>0$ such that if a point $x\in M$ satisfies that $\|Df^{-n}|_{E^{c}(x)}\|\leq e^{-n(|\chi|+{\varepsilon})}\,\textrm{~{}for any $n\in\mathbb{N}$},$ then $x$ has an unstable manifold $W^{u}_{\delta_{0}}(x)$ of size $\delta_{0}$, tangent everywhere to $E^{c}\oplus E^{u}.$ As the bundles $E^{s}$ and $E^{c}\oplus E^{u}$ have uniform angle, there exists $\eta_{0}>0$ such that for any $x,y\in M$ with $\operatorname{d}(x,y)<\eta_{0}$, if $D$ is a disc which is centered at $y$, of size $\delta_{0}$ and tangent everywhere to $E^{c}\oplus E^{u}$, then ${\cal F}_{\delta_{0}}^{s}(x)$ has non-empty transverse intersection with $D$. Apply $C>1$, $\chi$, ${\varepsilon}>0$ and $d=\min\\{\,d_{0},\,\eta_{0}\cdot(L+1)^{-1},\,\eta_{1}\cdot(L+1)^{-1},\,32^{-1}\cdot(L+1)^{-1}\xi\,\\}$ to Proposition 3.2, and one obtains integers $\tau=\tau({\varepsilon},d)$ and $T=T(C,\chi,{\varepsilon},d)$ with the posited properties. Let $N_{1}=N_{1}({\varepsilon},d,\xi)$ be an integer such that for any $n\geq N_{1}$, one has * • $\rho_{0}(|\chi|+{\varepsilon})n<e^{n{\varepsilon}/4}$; * • ${4\tau}<n{\varepsilon}$; * • the minimal number of $(\tau,\xi)$-Bowen balls covering the whole manifold is less than $e^{n{\varepsilon}/4}$. Let $N=\max\\{T,N_{1}\\}$. Let $n>N$ and $S$ be an $(n,2\xi)$-separated set in ${\rm\mathcal{L}}_{C,\chi,{\varepsilon}}$ with cardinality $\\#S>e^{n(h-{\varepsilon}/2)}$. Using the pigeonhole principle, one gets a subset $\widetilde{S}\subset S$ such that * • $\widetilde{S}$ is contained in a Bowen ball $B_{\tau}(z_{0},\xi)$ for some $z_{0}\in M$; * • $\\#\widetilde{S}>e^{n(h-3{\varepsilon}/4)}.$ By Proposition 3.2, for any $x\in\widetilde{S}$, there exist $w_{x}\in W^{u}_{d/2}(q)$ and an integer $l_{x}\in\big{(}n,\>n+\rho_{0}\cdot(|\chi|+{\varepsilon})\cdot n\big{)}$ such that * • $f^{l_{x}}(w_{x})\in{\cal F}^{s}_{d/2}(q)$; * • $\operatorname{d}(f^{\tau+i}(w_{x}),f^{i}(x))<d$ for any $i\in[\tau,n]$; * • $\|Df^{-i}|_{E^{c}(f^{l_{x}}(w_{x}))}\|\leq e^{-2i(|\chi|+{\varepsilon})}$ for any $i\in[1,l_{x}]$. 
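Let us also record how the choice of $d$ enters later (our own bookkeeping, not part of the original argument):
$$d\leq 32^{-1}(L+1)^{-1}\xi\implies(L+1)d\leq\xi/32,$$
which will yield $P\subset B_{\xi/32}(q)$ below; $d\leq\eta_{0}(L+1)^{-1}$ and $d\leq\eta_{1}(L+1)^{-1}$ place the shadowing orbits within the ranges where the transverse intersection provided by $\eta_{0}$ and the uniform continuity of Equation (17) apply; and $d\leq d_{0}$ permits applying Theorem 2.3.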
As $\rho_{0}(|\chi|+{\varepsilon})n<e^{n{\varepsilon}/4}$, by the pigeonhole principle, there exists a subset $\widehat{S}\subset\widetilde{S}$ such that * • $l_{x}$ is a constant on $\widehat{S}$ and we denote it by $l\in\big{(}n,\>n+\rho_{0}\cdot(|\chi|+{\varepsilon})\cdot n\big{)}$; * • $\\#\widehat{S}>e^{n(h-{\varepsilon})}.$ For each $x\in\widehat{S}$, since $\operatorname{d}(w_{x},f^{l}(w_{x}))\leq\operatorname{d}(w_{x},q)+\operatorname{d}(q,f^{l}(w_{x}))\leq d<d_{0}$, by Theorem 2.3, there exists a periodic point $p_{x}$ of period $l$ such that $\operatorname{d}(f^{i}(p_{x}),f^{i}(w_{x}))<L\cdot d\textrm{ for any $0\leq i\leq l-1$}.$ (18) This gives a collection of periodic points $P:=\\{\,p_{x}|~{}x\in\widehat{S}\;\\}$ of the same period $l\in\big{(}n,n+\rho_{0}(|\chi|+{\varepsilon})n\big{)}$. For each $x\in\widehat{S}$, one has $\operatorname{d}(p_{x},q)\leq\operatorname{d}(p_{x},w_{x})+\operatorname{d}(w_{x},q)<(L+1)d$. As $(L+1)d\leq\xi/32$, one gets that $P\subset B_{\xi/32}(q)$ which implies the third item of Lemma 3.7. It also holds that $(L+1)d<\eta_{0}$, then by the choice of $\eta_{0}$, the periodic point $p_{x}$ is homoclinically related to $q$ which proves the fourth item of Lemma 3.7. Besides, for each $x\in\widehat{S}$, one has $\operatorname{d}(f^{\tau+i}(p_{x}),f^{i}(x))\leq\operatorname{d}(f^{\tau+i}(p_{x}),f^{i}(w_{x}))+\operatorname{d}(f^{\tau+i}(w_{x}),f^{i}(x))<(L+1)d\textrm{~{} for any $i\in[\tau,n]$. }$ (19) Since $(L+1)d<\eta_{1},$ for each $x\in\widehat{S}$, by Equations (17) and (18), one has $\|Df^{-i}|_{E^{c}(p_{x})}\|\leq e^{i{\varepsilon}/2}\cdot\|Df^{-i}|_{E^{c}(f^{l}(w_{x}))}\|<e^{-i(|\chi|+{\varepsilon})}\textrm{ for any $i\in[1,l]$}$ (20) which implies that $P\subset{\rm NUH}_{|\chi|+{\varepsilon}}$. Recall that $4\tau<n{\varepsilon}$. Now, for each $1\leq j\leq m$, and $x\in\widehat{S},$ one has $\displaystyle\big{|}\int\varphi_{j}\operatorname{d}\delta_{{\cal O}_{p_{x}}}-\int\varphi_{j}\operatorname{d}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{i}(x)}\big{|}$ $\displaystyle=\big{|}\int\varphi_{j}\operatorname{d}\frac{1}{l}\sum_{i=0}^{l-1}\delta_{f^{i}(p_{x})}-\int\varphi_{j}\operatorname{d}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{i}(x)}\big{|}$ $\displaystyle\leq\big{|}\frac{1}{l}\big{(}\sum_{i=0}^{l-1}\varphi_{j}(f^{i}(p_{x}))-\sum_{i=0}^{n-1}\varphi_{j}(f^{i}(x))\big{)}+(\frac{1}{l}-\frac{1}{n})\sum_{i=0}^{n-1}\varphi_{j}(f^{i}(x))\big{|}$ $\displaystyle\leq\frac{1}{l}\sum_{i=0}^{2\tau-1}\big{|}\varphi_{j}(f^{i}(p_{x}))\big{|}+\frac{1}{l}\sum_{i=\tau}^{n-1}\big{|}\varphi_{j}(f^{i+\tau}(p_{x}))-\varphi_{j}(f^{i}(x))\big{|}+\frac{1}{l}\sum_{i=n}^{l-1}\big{|}\varphi_{j}(f^{i}(p_{x}))\big{|}$ $\displaystyle\hskip 14.22636pt+\frac{l-n}{n\cdot l}\sum_{i=0}^{n-1}\big{|}\varphi_{j}(f^{i}(x))\big{|}$ by Equations (19) and (17) $\displaystyle\leq\frac{2\tau}{l}\|\varphi_{j}\|_{C^{0}}+\frac{n-\tau}{l}\cdot\frac{{\varepsilon}}{2}\cdot\|\varphi_{j}\|_{C^{0}}+\frac{2(l-n)}{l}\cdot\|\varphi_{j}\|_{C^{0}}$ $\displaystyle\leq\big{(}\frac{{\varepsilon}}{2}+\frac{{\varepsilon}}{2}+2\rho_{0}(|\chi|+{\varepsilon})\big{)}\cdot\|\varphi_{j}\|_{C^{0}}=\big{(}{\varepsilon}+2\rho_{0}(|\chi|+{\varepsilon})\big{)}\cdot\|\varphi_{j}\|_{C^{0}},$ which gives the last item of Lemma 3.7. It remains to show that $P$ is $(l,\xi)$-separated and $\\#P>e^{n(h-{\varepsilon})}$. ###### Claim 3.8. For two different points $x,x^{\prime}\in\widehat{S}$, there exists $2\tau\leq k<n+\tau$ such that $\operatorname{d}(f^{k}(p_{x}),f^{k}(p_{x^{\prime}}))>\xi.$ ###### Proof. 
Since $\widehat{S}\subset S$ is also $(n,2\xi)$-separated and contained in $B_{\tau}(z_{0},\xi)$, there exists $\tau\leq i<n$ such that $\operatorname{d}(f^{i}(x),f^{i}(x^{\prime}))>2\xi$. As $2(L+1)d<\xi$, by Equation (19), one deduces that $\operatorname{d}(f^{i+\tau}(p_{x}),f^{i+\tau}(p_{x^{\prime}}))\geq\operatorname{d}(f^{i}(x),f^{i}(x^{\prime}))-\operatorname{d}(f^{i+\tau}(p_{x}),f^{i}(x))-\operatorname{d}(f^{i+\tau}(p_{x^{\prime}}),f^{i}(x^{\prime}))>\xi.$ ∎ By Claim 3.8, the set of periodic points $P=\\{\,p_{x}|~{}x\in\widehat{S}\;\\}$ is $(n+\tau,\xi)$-separated and has cardinality $\\#P=\\#\widehat{S}>e^{n(h-{\varepsilon})}$, proving the first and the second items of Lemma 3.7. ∎ Now, we use Lemma 3.7 to prove Theorem 3.1. Fix a sequence of continuous functions $(\varphi_{n})$ on the manifold $M$ which forms a dense subset of the space of continuous functions on $M$ under the $C^{0}$-topology. Let us define the metric on the space of Borel probability measures. The distance of two Borel probability measures $\mu_{1}$ and $\mu_{2}$ is defined as $\operatorname{d}(\mu_{1},\mu_{2})=\sum_{n\in{\mathbb{N}}}\frac{|\int\varphi_{n}\operatorname{d}\mu_{1}-\int\varphi_{n}\operatorname{d}\mu_{2}|}{2^{n}(1+\|\varphi_{n}\|_{C^{0}})},$ where $\|\varphi\|_{C^{0}}=\sup_{x\in M}|\varphi(x)|$ for any continuous function $\varphi:M\to{\mathbb{R}}$. ###### Proof of Theorem 3.1. Let $\rho_{0}>0$ be the constant and $q$ be the hyperbolic periodic point given by Lemma 3.7. Given $\nu\in{\cal M}_{erg}(f)$ with $-\chi_{0}/24<\chi^{c}(\nu)\leq 0$ and ${\varepsilon}\in(0,\chi_{0}/24)$, take $k_{0}\geq 1$ such that the set $\displaystyle{\rm Basin}_{k_{0},{\varepsilon}}(\nu):=\big{\\{}x\in M|~{}~{}$ $\displaystyle e^{k(\chi^{c}(\nu)-{\varepsilon})}\leq\|Df^{k}|_{E^{c}(x)}\|\leq e^{k(\chi^{c}(\nu)+{\varepsilon})}\textrm{~{}and~{}}$ $\displaystyle\hskip 14.22636pt\operatorname{d}(\nu,\frac{1}{k}\sum_{i=0}^{k-1}\delta_{f^{i}(x)})<{\varepsilon}/2,\>\textrm{~{}for any $k\geq k_{0}$}\big{\\}}$ has $\nu$-measure no less than $1/2.$ Furthermore, there exists $C>1$ such that for any point $x\in{\rm Basin}_{k_{0},{\varepsilon}}(\nu)$, one has $C^{-1}\cdot e^{k(\chi^{c}(\nu)-{\varepsilon})}\leq\|Df^{k}|_{E^{c}(x)}\|\leq C\cdot e^{k(\chi^{c}(\nu)+{\varepsilon})}\,\textrm{~{}for any $k\in\mathbb{N}$},$ in other words, ${\rm Basin}_{k_{0},{\varepsilon}}(\nu)\subset{\rm\mathcal{L}}_{C,\chi^{c}(\nu),{\varepsilon}}.$ For any $\xi>0$, denote by $S(n,2\xi)$ an $(n,2\xi)$-separated set of ${\rm Basin}_{k_{0},{\varepsilon}}(\nu)$ with maximal cardinality. By Katok's definition of metric entropy (see Section 2.1), one has $\lim_{\xi\rightarrow 0}\liminf_{n\rightarrow+\infty}\frac{1}{n}\log\\#S(n,2\xi)\geq h_{\nu}(f).$ Thus there exists $\xi_{1}>0$ such that for any $\xi\leq\xi_{1}$, one has $\liminf_{n\rightarrow+\infty}\frac{1}{n}\log\\#S(n,2\xi)>h_{\nu}(f)-{\varepsilon}/4.$ For any $\xi<\xi_{1}$, there exists an integer $N_{1}>k_{0}$ such that $\\#S(n,2\xi)>e^{n(h_{\nu}(f)-{\varepsilon}/2)}\textrm{~{} for any $n\geq N_{1}$.}$ For ${\varepsilon}>0$, consider an integer $m=m({\varepsilon})$ such that $2^{1-m}<{\varepsilon}.$ Apply the continuous functions $\varphi_{1},\cdots,\varphi_{m}$, ${\varepsilon}\in(0,\chi_{0}/24)$, $\chi=\chi^{c}(\nu)$, $C>1$ and $\xi<\xi_{1}$ to Lemma 3.7, and one obtains an integer $N$. 
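The role of the integer $m=m({\varepsilon})$ with $2^{1-m}<{\varepsilon}$ is to control the tail of the metric $\operatorname{d}$ on measures (a quick check of ours): since $|\int\varphi_{n}\operatorname{d}\mu_{1}-\int\varphi_{n}\operatorname{d}\mu_{2}|\leq 2\|\varphi_{n}\|_{C^{0}}$, every summand in the definition of $\operatorname{d}$ is at most $2^{1-n}$; in particular
$$\operatorname{d}(\mu_{1},\mu_{2})\leq\sum_{n\in{\mathbb{N}}}2^{1-n}\leq 2\quad\text{always, and}\quad\sum_{n>m}2^{1-n}=2^{1-m}<{\varepsilon},$$
so the indices beyond $m$ contribute less than ${\varepsilon}$ to the distance.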
For any $n>\max\\{N,N_{1}\\}$, using the $(n,2\xi)$-separated set $S(n,2\xi)$, one gets a collection $P\subset{\rm NUH}_{|\chi|+{\varepsilon}}$ of periodic points of the same period $l\in\big{(}n,n+\rho_{0}(|\chi^{c}(\nu)|+{\varepsilon})n\big{)}$ such that * • $\\#P>e^{n(h_{\nu}(f)-{\varepsilon})};$ * • $P$ is a $(l,\xi)$-separated set; * • $\operatorname{d}(p,p^{\prime})<\xi/16$ for any $p,p^{\prime}\in P$; * • for any $p\in P$, there exists a point $x_{p}\in S(n,2\xi)$ such that for each $1\leq j\leq m$, one has $\big{|}\int\varphi_{j}\operatorname{d}\delta_{{\cal O}_{p}}-\int\varphi_{j}\operatorname{d}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{i}(x_{p})}\big{|}<\big{(}2\rho_{0}(|\chi^{c}(\nu)|+{\varepsilon})+{\varepsilon}\big{)}\|\varphi_{j}\|_{C^{0}}.$ As $S(n,2\xi)\subset{\rm Basin}_{k_{0},{\varepsilon}}(\nu)$ and $n\geq N_{1}>k_{0}$, one has the following estimate $\displaystyle\operatorname{d}\big{(}\delta_{{\cal O}_{p}},\nu\big{)}$ $\displaystyle\leq\operatorname{d}\big{(}\delta_{{\cal O}_{p}},\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{i}(x_{p})}\big{)}+\operatorname{d}\big{(}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{i}(x_{p})},\nu\big{)}$ $\displaystyle<\sum_{j=1}^{\infty}\frac{\big{|}\int\varphi_{j}\operatorname{d}\delta_{{\cal O}_{p}}-\int\varphi_{j}\operatorname{d}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{i}(x_{p})}\big{|}}{2^{j}\|\varphi_{j}\|_{C^{0}}}+\frac{{\varepsilon}}{2}$ $\displaystyle\leq 2\rho_{0}(|\chi^{c}(\nu)|+{\varepsilon})+{\varepsilon}+\sum_{j=m+1}^{\infty}\frac{1}{2^{j}}+\frac{{\varepsilon}}{2}$ $\displaystyle<2\rho_{0}(|\chi^{c}(\nu)|+{\varepsilon})+2{\varepsilon}.$ Besides, one has $\frac{\log\\#P}{l}>\frac{n(h_{\nu}(f)-{\varepsilon})}{n+n\rho_{0}(|\chi^{c}(\nu)|+{\varepsilon})}=\frac{h_{\nu}(f)-{\varepsilon}}{1+\rho_{0}(|\chi^{c}(\nu)|+{\varepsilon})}.$ Now, it suffices to take $\rho_{1}=2\rho_{0}+2$: indeed, $2\rho_{0}(|\chi^{c}(\nu)|+{\varepsilon})+2{\varepsilon}\leq\rho_{1}(|\chi^{c}(\nu)|+{\varepsilon})$, and whenever $h_{\nu}(f)\geq{\varepsilon}$ one has $\frac{h_{\nu}(f)-{\varepsilon}}{1+\rho_{0}(|\chi^{c}(\nu)|+{\varepsilon})}\geq\frac{h_{\nu}(f)-{\varepsilon}}{1+\rho_{1}(|\chi^{c}(\nu)|+{\varepsilon})}$ (the entropy estimate being trivial otherwise, as $\log\\#P/l>0$). This ends the proof of Theorem 3.1. ∎ ## 4 Approximation of ergodic measures by horseshoes: Proofs of our main results In this section, we will first show that non-hyperbolic ergodic measures are approached by horseshoes in entropy and in weak$*$-topology (i.e., proving Theorem C). Then we prove the continuity of topological entropy and the intermediate entropy property (i.e., proving Theorem A). Lastly, we give the proof of Theorem D and Corollary B. ###### Proof of Theorem C. Let $f$ be as in the assumption. Recall that $G^{u}(f)$ is compact and varies upper semi-continuously with respect to $f$ (see Lemma 2.13), and thus there exist a $C^{1}$-open neighborhood ${\cal U}_{1}$ of $f$ and a constant $\chi_{0}>0$ such that $\int\log\|Dg|_{E_{g}^{c}}\|\operatorname{d}\mu>\chi_{0}\,\textrm{~{}for any $g\in{\cal U}_{1}$ and $\mu\in G^{u}(g)$.}$ By Lemma 2.8, for $\chi_{0}>0$, there exist a $C^{1}$-neighborhood $\widetilde{\cal U}_{2}$ of $f$, a $cu$-plaque family ${\cal W}^{cu}_{g}$ depending continuously on $g\in\widetilde{\cal U}_{2}$, and ${\varepsilon}_{0}>0$ satisfying the posited properties. As the strong stable foliation of $f$ is minimal, by Lemma 2.9, there exist a $C^{1}$-neighborhood ${\cal U}_{2}\subset\widetilde{\cal U}_{2}$ of $f$ and $\ell_{0}>0$ such that the strong stable foliation of $g\in{\cal U}_{2}$ is $(\ell_{0},{\varepsilon}_{0}/2)$-dense with respect to ${\cal W}^{cu}_{g}$. Now let ${\cal U}={\cal U}_{1}\cap{\cal U}_{2}$, then each $g\in{\cal U}$ satisfies the assumption of Proposition 3.2. Let $\chi_{1}=\chi_{0}/24$. 
Up to shrinking ${\cal U}$, one can assume that $\inf\big{\\{}\int\log\|Dg|_{E^{c}_{g}}\|\operatorname{d}\mu\;|\;g\in{\cal U}\,\textrm{~{}and~{}}\mu\in G^{u}(g)\big{\\}}>\chi_{0}.$ Fix $g\in{\cal U}$, then Theorem 3.1 gives a constant $\rho_{1}=\rho_{1}(g)>0$ for $g.$ Let $\nu$ be an ergodic measure of $g$ with $-\chi_{0}/24<\chi^{c}(\nu)\leq 0.$ Take ${\varepsilon}>0$ small, apply $\lambda=e^{-(|\chi^{c}(\nu)|+{\varepsilon})}$ and the dominated splitting $TM=E_{g}^{s}\oplus(E_{g}^{c}\oplus E_{g}^{u})$ to Theorem 2.10, and one obtains $\xi_{0}>0$ with the posited properties. Apply $\nu$ and ${\varepsilon}>0$ to Theorem 3.1, and one obtains $\xi_{1}>0$ with the posited properties. For $\xi<\min\\{\xi_{0},\xi_{1}\\}$, by Theorem 3.1, there exist finitely many periodic points $p_{1},\cdots,p_{m}\in{\rm NUH}_{|\chi^{c}(\nu)|+{\varepsilon}}$ of the same period $l$ such that * • $\operatorname{d}(p_{i},p_{j})<\xi/16$ for any $i,j\in\\{1,\cdots,m\\}$; * • $\\{p_{1},\cdots,p_{m}\\}$ is a $(l,\xi)$-separated set; * • $\frac{\log m}{l}\geq\frac{h_{\nu}(g)-{\varepsilon}}{1+\rho_{1}(|\chi^{c}(\nu)|+{\varepsilon})};$ * • $\operatorname{d}(\delta_{{\cal O}_{p_{i}}},\nu)<\rho_{1}(|\chi^{c}(\nu)|+{\varepsilon})$ for any $i\in\\{1,\cdots,m\\}$. Now, we apply Theorem 2.10 to the set of periodic points $p_{1},\cdots,p_{m}$ and obtain a horseshoe $\Lambda$ whose center is uniformly expanding such that * • $h_{top}(g,{\Lambda})\geq\frac{\log m}{l}>\frac{h_{\nu}(g)-{\varepsilon}}{1+\rho_{1}(|\chi^{c}(\nu)|+{\varepsilon})};$ * • ${\cal M}_{inv}(g,\Lambda)$ is contained in the ${\varepsilon}$-neighborhood of $\big{\\{}\sum_{i=1}^{m}t_{i}\cdot\delta_{{\cal O}_{p_{i}}}|~{}t_{i}\geq 0\textrm{~{}and~{}}\sum_{i=1}^{m}t_{i}=1\big{\\}},$ and thus in the $\big{(}\rho_{1}(|\chi^{c}(\nu)|+{\varepsilon})+{\varepsilon}\big{)}$-neighborhood of $\nu.$ Let us denote $\chi_{\rm max}(g)=\sup_{x\in M}|\log\|Dg|_{E_{g}^{c}(x)}\||$ and $\chi^{u}(g)=\inf_{\mu\in G^{u}(g)}\int\log\|Dg|_{E_{g}^{c}}\|\operatorname{d}\mu.$ By the proofs of Theorem 3.1 and Proposition 3.2, it suffices to take $\kappa=\sup_{g\in{\cal U}}\rho_{1}(g)+1=\sup_{g\in{\cal U}}2\big{(}\frac{1}{\chi_{\rm max}(g)}+\frac{12(2\chi_{\rm max}(g)-\chi^{u}(g)-\chi_{0})}{\chi_{0}\cdot(\chi^{u}(g)-\chi_{0})}\big{)}+3.$ Now, we need the robust minimality of the strong stable foliation, which is guaranteed by Theorem 2.20. Up to shrinking ${\cal U}$, one can assume that the strong stable foliation of $g\in{\cal U}$ is minimal, and we will show that $\\{\nu\in\mathcal{M}_{erg}(g)|\chi^{c}(\nu,g)\geq 0\\}$ is path connected for any $g\in{\cal U}$. Consider $\nu_{1},\nu_{2}\in\mathcal{M}_{erg}(g)$ with non-negative center Lyapunov exponents, then there exists a sequence of periodic orbits $\\{{\cal O}_{p_{n}}\\}_{n\in{\mathbb{Z}}}$ with positive center Lyapunov exponents such that $\lim_{n\rightarrow+\infty}\delta_{{\cal O}_{p_{n}}}=\nu_{1}$ and $\lim_{n\rightarrow-\infty}\delta_{{\cal O}_{p_{n}}}=\nu_{2}.$ For each $n\in{\mathbb{Z}}$, by the minimality of the strong stable foliation, ${\cal O}_{p_{n}}$ is homoclinically related to ${\cal O}_{p_{n+1}}$ and thus by the Birkhoff–Smale theorem, there exists a hyperbolic horseshoe $\Lambda_{n}$ of $g$ containing ${\cal O}_{p_{n}}$ and ${\cal O}_{p_{n+1}}$. 
By Theorem B in [Sig], there exists a continuous path $\alpha_{n}$ on $\mathcal{M}_{erg}(g,\Lambda_{n})$, which is defined on $[1-2^{-n},1-2^{-(n+1)}]$ provided that $n\geq 0$ or defined on $[2^{n}-1,2^{n+1}-1]$ provided that $n<0$, such that the two endpoints of $\alpha_{n}$ are $\delta_{{\cal O}_{p_{n}}}$ and $\delta_{{\cal O}_{p_{n+1}}}$ respectively; furthermore, one can require that the path $\alpha_{n}$ is contained in the $2\operatorname{d}(\delta_{{\cal O}_{p_{n}}},\delta_{{\cal O}_{p_{n+1}}})$-neighborhood of $\delta_{{\cal O}_{p_{n}}}$ (see the comments after Theorem B in [Sig]). As $\lim_{|n|\rightarrow+\infty}\operatorname{d}(\delta_{{\cal O}_{p_{n}}},\delta_{{\cal O}_{p_{n+1}}})=0$, one gets a continuous path $\alpha:[-1,1]\to\\{\nu\in\mathcal{M}_{erg}(g)|\chi^{c}(\nu)\geq 0\\}$ defined as $\alpha(t)=\begin{cases}\nu_{2}&\text{if }t=-1,\\\ \alpha_{n}(t)&\text{if }t\in[2^{n}-1,2^{n+1}-1]\text{~{}and~{}}n<0,\\\ \alpha_{n}(t)&\text{if }t\in[1-2^{-n},1-2^{-(n+1)}]\text{~{}and~{}}n\geq 0,\\\ \nu_{1}&\text{if }t=1.\end{cases}$ In particular, $\alpha$ is continuous at $t=\pm 1$: for $t\in[1-2^{-n},1-2^{-(n+1)}]$ one has $\operatorname{d}(\alpha(t),\nu_{1})\leq 2\operatorname{d}(\delta_{{\cal O}_{p_{n}}},\delta_{{\cal O}_{p_{n+1}}})+\operatorname{d}(\delta_{{\cal O}_{p_{n}}},\nu_{1})\rightarrow 0$ as $t\rightarrow 1^{-}$, and symmetrically as $t\rightarrow-1^{+}$. ∎ Now, we give the proof of Theorem D, which gives a description of the set of points with vanishing center Lyapunov exponents. The idea is that $h_{top}(\mathcal{R}_{g}(0))$ is bounded by the lower capacity entropy, which provides separated sets; one can then apply Lemma 3.7 to these separated sets and obtain periodic orbits satisfying the assumptions of Theorem 2.10. ###### Proof of Theorem D. We take the $C^{1}$-open neighborhood ${\cal U}$ of $f$ as in the proof of Theorem C. Hence each $g\in{\cal U}$ satisfies the hypothesis of Lemma 3.7 and one obtains a constant $\rho_{0}>0$ and a hyperbolic periodic point $q$ with the posited properties. Now, we fix $g\in{\cal U}.$ Denote by $\chi_{\rm max}(g)=\sup_{x\in M}\big{|}\log\|Dg|_{E^{c}_{g}(x)}\|\big{|}.$ Let ${\varepsilon}>0$ and $h\in[0,h_{top}(g,\mathcal{R}_{g}(0))].$ Take ${\varepsilon}^{\prime}\in(0,\chi_{0}/24)$ such that $\frac{h-{\varepsilon}^{\prime}}{1+\rho_{0}{\varepsilon}^{\prime}}>h-{\varepsilon}\,\textrm{~{}and~{}}\,2{\varepsilon}^{\prime}+\big{(}2\rho_{0}{\varepsilon}^{\prime}+{\varepsilon}^{\prime}\big{)}\chi_{\rm max}(g)<{\varepsilon}.$ By the uniform continuity of the center bundle, there exists $\eta_{1}>0$ such that $|\log\|Dg|_{E^{c}_{g}(x)}\|-\log\|Dg|_{E^{c}_{g}(y)}\||<{\varepsilon}^{\prime},\textrm{ for any $x,y\in M$ with $d(x,y)<\eta_{1}.$}$ (21) By the compactness of the set of Borel probability measures on $M$, there exists $\eta_{2}>0$ such that for any Borel probability measures $\nu_{1},\nu_{2}$ on $M$ with $\operatorname{d}(\nu_{1},\nu_{2})<\eta_{2}$, one has $\big{|}\int\log\|Dg|_{E^{c}_{g}}\|\operatorname{d}\nu_{1}-\int\log\|Dg|_{E^{c}_{g}}\|\operatorname{d}\nu_{2}\big{|}<{\varepsilon}^{\prime}.$ (22) Apply $\lambda=e^{-{\varepsilon}^{\prime}}$, $\eta_{2}>0$ and the dominated splitting $TM=E^{s}_{g}\oplus(E^{c}_{g}\oplus E^{u}_{g})$ to Theorem 2.10, and one gets a constant $\xi_{0}>0$ with the posited properties. For any $k\in\mathbb{N}$, define $\mathcal{R}_{g}^{k}(0)=\big{\\{}x\in\mathcal{R}_{g}(0)\;|~{}e^{-|n|{\varepsilon}^{\prime}}\leq\|Dg^{n}|_{E^{c}_{g}(x)}\|\leq e^{|n|{\varepsilon}^{\prime}},\textrm{~{}for any $n\in\mathbb{Z}$ with $|n|>k$}\big{\\}}.$ Then $\mathcal{R}_{g}^{k}(0)\subset\mathcal{R}_{g}^{k+1}(0)$ for any $k\in\mathbb{N}$ and $\mathcal{R}_{g}(0)=\cup_{k\in\mathbb{N}}\mathcal{R}_{g}^{k}(0).$ By the second item in Proposition 2.2, one has $h_{top}(g,\mathcal{R}_{g}(0))=\lim_{k\rightarrow\infty}h_{top}(g,\mathcal{R}^{k}_{g}(0))$. 
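Looking ahead, let us record an explicit choice (our own remark, not needed for the argument): once the level $k_{0}$ is fixed in the next step, the constant $C>1$ there can be taken as
$$C=e^{k_{0}\chi_{\rm max}(g)},$$
since for $|n|\leq k_{0}$ one always has $e^{-|n|\chi_{\rm max}(g)}\leq\|Dg^{n}|_{E_{g}^{c}(x)}\|\leq e^{|n|\chi_{\rm max}(g)}$, while for $|n|>k_{0}$ the defining inequalities of $\mathcal{R}_{g}^{k_{0}}(0)$ already hold with constant $1$.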
As $0\leq h\leq h_{top}(g,\mathcal{R}_{g}(0))$, there exists $k_{0}\in\mathbb{N}$ such that $h_{top}(g,\mathcal{R}_{g}^{k_{0}}(0))\geq h-{\varepsilon}^{\prime}/4.$ Let $C>1$ be a constant such that for any $x\in\mathcal{R}_{g}^{k_{0}}(0)$, one has $C^{-1}\cdot e^{-|n|{\varepsilon}^{\prime}}\leq\|Dg^{n}|_{E_{g}^{c}(x)}\|\leq C\cdot e^{|n|{\varepsilon}^{\prime}}\textrm{~{}for any $n\in\mathbb{Z}$ .}$ For $\xi>0$ and $n\in\mathbb{N}$, fix an $(n,2\xi)$-separated set $S(n,2\xi)$ of $\mathcal{R}_{g}^{k_{0}}(0)$ with maximal cardinality. By the third item of Proposition 2.2, one has $\underline{Ch}_{top}(g,\mathcal{R}_{g}^{k_{0}}(0))\geq h_{top}(g,\mathcal{R}_{g}^{k_{0}}(0))\geq h-{\varepsilon}^{\prime}/4.$ Then there exists $\xi_{1}>0$ such that for any $\xi\leq\xi_{1}$, there exists $k_{0}<\widehat{N}=\widehat{N}(\xi,{\varepsilon}^{\prime})\in\mathbb{N}$ such that $\\#S(n,2\xi)\geq e^{n(h-{\varepsilon}^{\prime}/2)}$ for any $n>\widehat{N}$. Now, apply the continuous function $\varphi(x)=\log\|Dg|_{E_{g}^{c}(x)}\|$, $\chi=0$, ${\varepsilon}^{\prime}\in(0,\chi_{0}/24)$, $C>1$, and $0<\xi<\min\\{\xi_{0},\xi_{1}\\}$ to Lemma 3.7, and one gets an integer $N\in\mathbb{N}$. For $n>\max\\{N,\widehat{N}\\}$, apply Lemma 3.7 to the $(n,2\xi)$-separated set $S(n,2\xi)$ and one obtains a collection $P\subset{\rm NUH}_{{\varepsilon}^{\prime}}$ of periodic points of the same period $l\in(n,n+\rho_{0}{\varepsilon}^{\prime}n)$ such that * • $\\#P>e^{n(h-{\varepsilon}^{\prime})};$ * • $P$ is a $(l,\xi)$-separated set; * • $\operatorname{d}(p,p^{\prime})<\xi/16$ for any $p,p^{\prime}\in P$; * • for any $p\in P$, there exists $x_{p}\in S(n,2\xi)$ such that $\big{|}\int\log\|Dg|_{E^{c}_{g}}\|\operatorname{d}\delta_{{\cal O}_{p}}-\int\log\|Dg|_{E^{c}_{g}}\|\operatorname{d}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{i}(x_{p})}\big{|}<\big{(}2\rho_{0}{\varepsilon}^{\prime}+{\varepsilon}^{\prime}\big{)}\chi_{\rm max}(g).$ For any $p\in P$, as $S(n,2\xi)\subset\mathcal{R}^{k_{0}}_{g}(0)$ and $n>k_{0}$, one has $\begin{split}\int\log\|Dg|_{E^{c}_{g}}\|\operatorname{d}\delta_{{\cal O}_{p}}&<\int\log\|Dg|_{E^{c}_{g}}\|\operatorname{d}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{f^{i}(x_{p})}+\big{(}2\rho_{0}{\varepsilon}^{\prime}+{\varepsilon}^{\prime}\big{)}\chi_{\rm max}(g)\\\ &\leq{\varepsilon}^{\prime}+\big{(}2\rho_{0}{\varepsilon}^{\prime}+{\varepsilon}^{\prime}\big{)}\chi_{\rm max}(g).\end{split}$ (23) As $\xi<\xi_{0}$, apply $P$ to Theorem 2.10, and one gets a horseshoe $\Lambda$ whose center is uniformly expanding such that * • $h_{top}(g,\Lambda)\geq\frac{\log\\#P}{l}>\frac{h-{\varepsilon}^{\prime}}{1+\rho_{0}{\varepsilon}^{\prime}}>h-{\varepsilon};$ * • each $\mu\in\mathcal{M}_{inv}(g,\Lambda)$ belongs to the $\eta_{2}$-neighborhood of $\\{\sum_{p\in P}t_{p}\delta_{{\cal O}_{p}}|\,t_{p}\geq 0\,,\sum t_{p}=1\\}.$ By Equations (22) and (23), for each $\mu\in{\cal M}_{inv}(g,\Lambda)$, one has $\int\log\|Dg|_{E_{g}^{c}}\|\operatorname{d}\mu\leq 2{\varepsilon}^{\prime}+\big{(}2\rho_{0}{\varepsilon}^{\prime}+{\varepsilon}^{\prime}\big{)}\chi_{\rm max}(g)<{\varepsilon}.$ Since $\Lambda$ has expanding center, one has $\int\log\|Dg|_{E_{g}^{c}}\|\operatorname{d}\mu>0.$ This ends the proof of Theorem D. ∎ Now, we turn to prove the continuity of topological entropy and the intermediate entropy property. ###### Proof of Theorem A. Let ${\cal U}$ be the $C^{1}$-open neighborhood of $f$ given by Theorem C. 
For partially hyperbolic diffeomorphisms with one-dimensional center, by Corollary C and Theorem D in [LVY] (see also Theorem 1.1 in [DFPV]), one has * • the metric entropy varies upper semi-continuously, and in particular, there exist measures of maximal entropy; * • the function $g\in{\cal U}\mapsto h_{top}(g)$ is upper semi-continuous. For each $g\in{\cal U}$, let $\nu$ be an ergodic measure of maximal entropy. By Theorem C and Katok's result [Ka] (see also [Ge]), $\nu$ is approximated by horseshoes in weak$*$-topology and in entropy. By the structural stability of horseshoes, one deduces that the topological entropy is lower semi-continuous at $g$, and hence is continuous at $g$, proving that $g\in{\cal U}\mapsto h_{top}(g)$ is continuous. It is classical that horseshoes satisfy the intermediate entropy property, which implies that $g$ also satisfies the intermediate entropy property. ∎ Lastly, we give the proof of Corollary B. ###### Proof of Corollary B. By Theorem 2.22, $f$ is mostly expanding, and thus all the ergodic measures in $G^{u}(f)$ have positive center Lyapunov exponents. Then Corollary B follows directly from Theorem A. ∎ ## References * [ABC] F. Abdenur, C. Bonatti and S. Crovisier, Nonuniform hyperbolicity for $C^{1}$-generic diffeomorphisms, _Israel J. Math._ 183 (2011), 1–60. * [ABV] J. Alves, C. Bonatti and M. Viana, SRB measures for partially hyperbolic systems whose central direction is mostly expanding. _Invent. Math._ 140 (2000), no. 2, 351–398. * [ADLP] J. Alves, C. Dias, S. Luzzatto, and V. Pinheiro, SRB measures for partially hyperbolic systems whose central direction is weakly expanding. _J. Eur. Math. Soc._ 19 (2017), no. 10, 2911–2946. * [ALi] J. Alves and X. Li, Gibbs-Markov-Young structures with (stretched) exponential tail for partially hyperbolic attractors. _Adv. Math._ 279 (2015), 405–437. * [An] M. Andersson, Robust ergodic properties in partially hyperbolic dynamics. _Trans. Amer. Math. Soc._ 362 (2010), no. 4, 1831–1867. * [AnV1] M. Andersson and C. H. Vásquez, On mostly expanding diffeomorphisms. _Ergodic Theory Dynam. Systems_ 38 (2018), no. 8, 2838–2859. * [AnV2] M. Andersson and C. H. Vásquez, Statistical stability of mostly expanding diffeomorphisms. _Ann. Inst. H. Poincaré C Anal. Non Linéaire_ 37 (2020), no. 6, 1245–1270. * [BD] C. Bonatti and L. J. Díaz, Persistent nonhyperbolic transitive diffeomorphisms. _Ann. of Math._ (2) 143 (1996), no. 2, 357–396. * [BGP] C. Bonatti, A. Gogolev and R. Potrie, Anomalous partially hyperbolic diffeomorphisms II: stably ergodic examples. _Invent. Math._ 206 (2016), no. 3, 801–836. * [BGHP] C. Bonatti, A. Gogolev, A. Hammerlindl, and R. Potrie. Anomalous partially hyperbolic diffeomorphisms III: Abundance and incoherence. _Geom. Topol._ 24 (2020), no. 4, 1751–1790. * [BZ] Ch. Bonatti and J. Zhang, Periodic measures and partially hyperbolic homoclinic classes. _Trans. Amer. Math. Soc._ 372 (2019), no. 2, 755–802. * [Bow] R. Bowen, Topological entropy for noncompact sets. _Trans. Amer. Math. Soc._ 184 (1973), 125–136. * [Br] A. Brown, Smoothness of stable holonomies inside center-stable manifolds. _Ergodic Theory Dynam. Systems_ 42 (2022), no. 12, 3593–3618. * [BCF] J. Buzzi, S. Crovisier and T. Fisher, The entropy of $C^{1}$-diffeomorphisms without a dominated splitting. _Trans. Amer. Math. Soc._ 370 (2018), no. 9, 6685–6734. * [BF] J. Buzzi and T. Fisher, Entropic stability beyond partial hyperbolicity, _J. Mod. Dyn._ 7 (4) (2013), 527–552. * [BFSV] J. Buzzi, T. Fisher, M. Sambarino and C. 
Vásquez, Maximal entropy measures for certain partially hyperbolic, derived from Anosov systems, _Ergodic Theory Dynam. Systems_ 32 (2012), no. 1, 63–79. * [C] S. Crovisier, Partial hyperbolicity far from homoclinic bifurcations. _Adv. Math._ 226 (2011), no. 1, 673–726. * [CPo] S. Crovisier and M. Poletti, _Invariance principle and non-compact center foliations_. arXiv:2210.14989. * [CPu] S. Crovisier and E. Pujals, Essential hyperbolicity and homoclinic bifurcations: a dichotomy phenomenon/mechanism for diffeomorphisms. _Invent. Math._ 201 (2015), no. 2, 385–517. * [CSY] S. Crovisier, M. Sambarino and D. Yang, Partial hyperbolicity and homoclinic tangencies. _J. Eur. Math. Soc._ 17 (2015), no. 1, 1–49. * [CYZ] S. Crovisier, D. Yang and J. Zhang, Empirical measures of partially hyperbolic attractors. _Comm. Math. Phys._ 375 (2020), no. 1, 725–764. * [DFPV] L. J. Díaz, T. Fisher, M. Pacifico and J. Vieitez, Entropy-expansiveness for partially hyperbolic diffeomorphisms. _Discrete Contin. Dyn. Syst._ 32 (2012), no. 12, 4195–4207. * [DG] L. J. Díaz and K. Gelfert, Nonhyperbolic dynamics by mingling, blending, and flip-flopping. _Topology Appl._ 339 (2023), Paper No. 108571, 29 pp. * [DGR] L. J. Díaz, K. Gelfert and M. Rams, Nonhyperbolic step skew-products: ergodic approximation, _Ann. Inst. H. Poincaré C Anal. Non Linéaire_ 34 (2017), no. 6, 1561–1598. * [DGS] L. J. Díaz, K. Gelfert and B. Santiago, Weak$*$ and entropy approximation of nonhyperbolic measures: a geometrical approach. _Math. Proc. Cambridge Philos. Soc._ 169 (2020), no. 3, 507–545. * [G1] S. Gan, A generalized shadowing lemma. _Discrete Contin. Dyn. Syst._ 8 (2002), no. 3, 627–632. * [G2] S. Gan, Horseshoe and entropy for $C^{1}$ surface diffeomorphisms. _Nonlinearity_ 15 (2002), no. 3, 841–848. * [Ge] K. Gelfert, Horseshoes for diffeomorphisms preserving hyperbolic measures, _Math. Z._ 283 (3-4) (2016), 685–701. * [Go] N. Gourmelon, Adapted metrics for dominated splittings. _Ergodic Theory Dynam. Systems_ 27 (2007), no. 6, 1839–1849. * [GSW] L. Guan, P. Sun and W. Wu, Measures of intermediate entropies and homogeneous dynamics. _Nonlinearity_ 9 (2017), 3349–3361. * [HPS] M. W. Hirsch, C. C. Pugh and M. Shub, _Invariant manifolds_. Lecture Notes in Mathematics, Vol. 583. Springer-Verlag, Berlin-New York, 1977. ii+149 pp. * [HYY] Y. Hua, F. Yang and J. Yang, A new criterion of physical measures for partially hyperbolic diffeomorphisms. _Trans. Amer. Math. Soc._ 373 (2020), no. 1, 385–417. * [Ka] A. Katok, Lyapunov exponents, entropy and periodic orbits for diffeomorphisms, _Publ. Math. Inst. Hautes Études Sci._ 51 (1980), 137–173. * [KM] A. Katok and L. Mendoza, _Dynamical systems with nonuniformly hyperbolic behavior_, Supplement to _Introduction to the modern theory of dynamical systems_, by A. Katok and B. Hasselblatt, Encyclopedia of Mathematics and its Applications, Volume 54 (Cambridge University Press, Cambridge, 1995). * [Le] F. Ledrappier, Propriétés ergodiques des mesures de Sinai. _Inst. Hautes Études Sci. Publ. Math._ 59 (1984), 163–188. * [LeS] F. Ledrappier and J-M. Strelcyn, A proof of the estimation from below in Pesin's entropy formula. _Ergodic Theory Dynam. Systems_ 2 (1982), no. 2, 203–219. * [LeYo1] F. Ledrappier and L. Young, The metric entropy of diffeomorphisms. I. Characterization of measures satisfying Pesin's entropy formula. _Ann. of Math._ (2) 122 (1985), no. 3, 509–539. * [LeYo2] F. Ledrappier and L. Young, The metric entropy of diffeomorphisms. II. Relations between entropy, exponents and dimension. _Ann. 
of Math._ (2) 122 (1985), no. 3, 540–574. * [LSWW] M. Li, Y. Shi, S. Wang and X. Wang, Measures of intermediate entropies for star vector fields, _Israel J. Math._ 240 (2020), no. 2, 791–819. * [LVY] G. Liao, M. Viana and J. Yang, The entropy conjecture for diffeomorphisms away from tangencies, _J. Eur. Math. Soc._ 15 (2013), no. 6, 2043–2060. * [Li] S. T. Liao, An existence theorem for periodic orbits, _Acta Sci. Natur. Univ. Pekinensis_, 1 (1979), 1–20. * [Mi] M. Misiurewicz, Diffeomorphism without any measure with maximal entropy. _Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys._ 21 (1973), 903–910. * [N] S. Newhouse, Continuity properties of entropy. _Ann. of Math._ (2) 129 (1989), no. 2, 215–235. * [Pe] Y. Pesin, _Dimension theory in dynamical systems._ Contemporary views and applications. Chicago Lectures in Math. University of Chicago Press, Chicago, IL, 1997. xii+304 pp. * [PeSi] Y. Pesin and Y. Sinai, Gibbs measures for partially hyperbolic attractors. _Ergodic Theory Dynam. Systems_ 2 (1982), no. 3-4, 417–438. * [Pl] V. Pliss, On a conjecture due to Smale, _Differ. Uravn._ 8 (1972), 262–268. * [SY] R. Saghin and J. Yang, Continuity of topological entropy for perturbation of time-one maps of hyperbolic flows, _Israel J. Math._ 215 (2) (2016), 857–875. * [Sig] K. Sigmund, On the connectedness of ergodic systems, _Manuscripta Math._ 22 (1977), no. 1, 27–32. * [S1] P. Sun, Zero-entropy invariant measures for skew product diffeomorphisms. _Ergodic Theory Dynam. Systems_ 30 (2010), no. 3, 923–930. * [S2] P. Sun, Measures of intermediate entropies for skew product diffeomorphisms. _Discrete Contin. Dyn. Syst._ 27 (2010), no. 3, 1219–1231. * [S3] P. Sun, Density of metric entropies for linear toral automorphisms. _Dyn. Syst._ 27 (2012), no. 2, 197–204. * [SW] W. Sun and Z. Wang, Lyapunov exponents of hyperbolic measures and hyperbolic periodic orbits. _Trans. Amer. Math. Soc._ 362 (2010), no. 8, 4267–4282. * [TY] A. Tahzibi and J. Yang, Invariance principle and rigidity of high entropy measures. _Trans. Amer. Math. Soc._ 371 (2019), no. 2, 1231–1251. * [U] R. Ures, Intrinsic ergodicity of partially hyperbolic diffeomorphisms with a hyperbolic linear part. _Proc. Amer. Math. Soc._ 140 (2012), no. 6, 1973–1985. * [Y] J. Yang, Entropy along expanding foliations. _Adv. Math._ 389 (2021), Paper No. 107893, 39 pp. * [YZ] D. Yang and J. Zhang, Non-hyperbolic ergodic measures and horseshoes in partially hyperbolic homoclinic classes. _J. Inst. Math. Jussieu_ 19 (2020), no. 5, 1765–1792. * [Yo] Y. Yomdin, Volume growth and entropy, _Israel J. Math._ 57 (1987), no. 3, 285–300. _Jinhua Zhang_ --- School of Mathematical Sciences, Beihang University, Beijing, 100191, P. R. China <EMAIL_ADDRESS><EMAIL_ADDRESS>
$|y_{i_{0}}\rangle,|y_{i_{1}}\rangle$ for $i_{0}\neq i_{1}$, such that $|y_{i_{0}}\rangle=|y_{i_{1}}\rangle=0$. Then, the vectors $|x_{i_{0}}\rangle,|x_{i_{1}}\rangle$ are orthonormal. We simply define $|S_{0}\rangle=|i_{0}\rangle,|S_{1}\rangle=|i_{1}\rangle$, $|R_{0}\rangle=|x_{i_{0}}\rangle$ and $|R_{1}\rangle=|x_{i_{1}}\rangle$. One can calculate that $R_{*}E_{0}S_{*}={\rm 1l}_{\mathbb{C}^{2}}$ and $R_{*}E_{1}S_{*}=0$. In the third case, for all $i\in\\{0,\ldots,3\\}$ vectors $|x_{i}\rangle,|y_{i}\rangle$ are not linearly independent and there is at most one zero vector $|y_{i_{3}}\rangle$ for some $i_{3}\in\\{0,\ldots,3\\}$. Define indices $i_{0},i_{1},i_{2}\in\\{0,\ldots,3\\}$ as the remaining labels, such that $\\{i_{0},\ldots,i_{3}\\}$ covers the whole set $\\{0,\ldots,3\\}$. Define the matrix $M=\left[\begin{array}[]{ccc}\langle{y_{i_{0}}}|{x_{i_{0}}}\rangle&\langle{y_{i_{1}}}|{x_{i_{1}}}\rangle&\langle{y_{i_{2}}}|{x_{i_{2}}}\rangle\\\ \langle{y_{i_{0}}}|{y_{i_{0}}}\rangle&\langle{y_{i_{1}}}|{y_{i_{1}}}\rangle&\langle{y_{i_{2}}}|{y_{i_{2}}}\rangle\end{array}\right].$ (111) In the first sub-case we assume that $\mathrm{rank}(M)=1$. Define $b=\frac{\langle{y_{i_{1}}}|{y_{i_{1}}}\rangle}{\langle{y_{i_{0}}}|{y_{i_{0}}}\rangle}$. We can take $|S_{0}\rangle=|i_{0}\rangle$, $|S_{1}\rangle=|i_{1}\rangle$, $|R_{0}\rangle=|y_{i_{0}}\rangle$ and $|R_{1}\rangle=\frac{1}{b}|y_{i_{1}}\rangle$. One can calculate that $R_{*}E_{0}S_{*}=\langle{y_{i_{0}}}|{x_{i_{0}}}\rangle{\rm 1l}_{\mathbb{C}^{2}}$ and $R_{*}E_{1}S_{*}=\langle{y_{i_{0}}}|{y_{i_{0}}}\rangle{\rm 1l}_{\mathbb{C}^{2}}$. In the second sub-case we assume that $\mathrm{rank}(M)=2$. Define indices $j_{1},j_{2}\in\\{0,1,2\\}$, such that $\mathrm{rank}\left(\left[\begin{array}[]{cc}M_{0,j_{1}}&M_{0,j_{2}}\\\ M_{1,j_{1}}&M_{1,j_{2}}\end{array}\right]\right)=2.$ (112) Define $j_{0}\in\\{0,1,2\\}$ as the remaining label, such that $\\{j_{0},j_{1},j_{2}\\}$ covers the whole set $\\{0,1,2\\}$. Take $|S_{0}\rangle=|i_{j_{0}}\rangle$, $|R_{0}\rangle=|y_{i_{j_{0}}}\rangle$ and define $(b_{1},b_{2})^{\top}\coloneqq\left[\begin{array}[]{cc}\langle{y_{i_{j_{1}}}}|{x_{i_{j_{1}}}}\rangle&\langle{y_{i_{j_{2}}}}|{x_{i_{j_{2}}}}\rangle\\\ \langle{y_{i_{j_{1}}}}|{y_{i_{j_{1}}}}\rangle&\langle{y_{i_{j_{2}}}}|{y_{i_{j_{2}}}}\rangle\end{array}\right]^{-1}(\langle{y_{i_{j_{0}}}}|{x_{i_{j_{0}}}}\rangle,\langle{y_{i_{j_{0}}}}|{y_{i_{j_{0}}}}\rangle)^{\top}.$ (113) We may take $|S_{1}\rangle=|i_{j_{1}}\rangle+|i_{j_{2}}\rangle$ and $|R_{1}\rangle=\bar{b}_{1}|y_{i_{j_{1}}}\rangle+\bar{b}_{2}|y_{i_{j_{2}}}\rangle$. Direct calculations reveal that $R_{*}E_{0}S_{*}=\langle{y_{i_{j_{0}}}}|{x_{i_{j_{0}}}}\rangle{\rm 1l}_{\mathbb{C}^{2}}$ and $R_{*}E_{1}S_{*}=\langle{y_{i_{j_{0}}}}|{y_{i_{j_{0}}}}\rangle{\rm 1l}_{\mathbb{C}^{2}}$. ∎ ### A.15 Proof of Theorem 16 Theorem 16. Let $\mathcal{E}_{r}\in\mathcal{C}(\mathcal{Y})$ be a random quantum channel defined according to Eq. (32). Then, the following two implications hold $\begin{split}r<\frac{\dim(\mathcal{X})\dim(\mathcal{Y})}{\dim(\mathcal{X})^{2}-1}&\implies\mathcal{P}\left(\mathcal{E}_{r}\in\xi(\mathcal{X},\mathcal{Y})\right)=1,\\\ \mathcal{P}\left(\mathcal{E}_{r}\in\xi_{1}(\mathcal{X},\mathcal{Y})\right)=1&\implies r<\sqrt{\frac{\dim(\mathcal{Y})}{\dim(\mathcal{X})-1}}.\end{split}$ (114) ###### Proof. 
For $r\in\mathbb{N}$ satisfying $r<\frac{\dim(\mathcal{X})\dim(\mathcal{Y})}{\dim(\mathcal{X})^{2}-1}$, let $(G_{i})_{i=1}^{r}\subset\mathcal{M}(\mathcal{Y})$ be a tuple of random and independent Ginibre matrices and $Q=\sum_{i=1}^{r}G_{i}^{\dagger}G_{i}$. Define the projector $\Pi=\sum_{i=0}^{\dim(\mathcal{X})-1}|i\rangle\\!\langle i|$ and consider the set $A=\left\\{(G_{i})_{i=1}^{r}:\quad\mathrm{rank}(Q)=\dim(\mathcal{Y}),\mathrm{rank}\left(\sum_{i=1}^{r}G_{i}^{\dagger}\Pi G_{i}\right)=\min\\{r\dim(\mathcal{X}),\dim(\mathcal{Y})\\}\right\\}.$ (115) One can observe that $\mathcal{P}((G_{i})_{i=1}^{r}\in A)=1$. Let $\mathcal{E}_{r}\in\mathcal{C}(\mathcal{Y})$ be a random channel defined according to Eq. (32) for $(G_{i})_{i=1}^{r}\in A$, that is, $\mathcal{E}_{r}(Y)=\sum_{i=1}^{r}\left(G_{i}Q^{-1/2}\right)Y\left(G_{i}Q^{-1/2}\right)^{\dagger}.$ Define $S=Q^{1/2}\tilde{S}$ for $\tilde{S}\in\mathcal{M}(\mathcal{X},\mathcal{Y})$ and $R=\tilde{R}\Pi$ for $\tilde{R}\in\mathcal{M}(\mathcal{Y},\mathcal{X})$. We obtain $RG_{i}Q^{-1/2}S=\tilde{R}\Pi G_{i}\tilde{S}.$ Utilizing Lemma 9, Proposition 12 and Theorem 1 $(D)$ for $\tilde{\mathcal{E}}=\mathcal{K}\left((\Pi G_{i})_{i=1}^{r}\right)\in s\mathcal{C}(\mathcal{Y})$, there exist $\tilde{S},\tilde{R}$ such that $\tilde{R}\Pi G_{i}\tilde{S}\propto{\rm 1l}_{\mathcal{X}}$ and $\tilde{R}\Pi G_{i_{0}}\tilde{S}\neq 0$ for some $i_{0}$. Hence, $\mathcal{E}_{r}\in\xi(\mathcal{X},\mathcal{Y})$. Now, for a given $r\in\mathbb{N}$, let us define $B=\\{\mathcal{E}_{r}:\,\,\mathcal{E}_{r}\in\xi_{1}(\mathcal{X},\mathcal{Y})\\}$. From the assumption $\mathcal{P}(B)=1$, we obtain that $B$ is a dense subset of $\\{\mathcal{E}\in\mathcal{C}(\mathcal{Y}):\,\,\mathrm{rank}(J(\mathcal{E}))\leq r\\}$. Imitating the proof of Theorem 7, we get that if $\mathcal{E}\in\mathcal{C}(\mathcal{Y})$ and $\mathrm{rank}(J(\mathcal{E}))\leq r$, then $\mathcal{E}\in\xi_{1}(\mathcal{X},\mathcal{Y})$. That implies $r\leq r_{1}(\mathcal{X},\mathcal{Y})$. By using Lemma 10 we obtain the desired inequality. ∎ ### A.16 Proof of Proposition 17 Proposition 17. Let $\Upsilon\subset\mathcal{C}(\mathcal{Y})$ be a nonempty and convex family of noise channels. Let $\mu$ be a probability measure on $\Upsilon$ and assume that the support of $\mu$ is equal to $\Upsilon$. Let $\bar{\mathcal{E}}=\int_{\Upsilon}\mathcal{E}\mu(d\mathcal{E})\in\mathcal{C}(\mathcal{Y})$ and fix $(\SS,\mathcal{R})\in s\mathcal{C}(\mathcal{X},\mathcal{Y})\times s\mathcal{C}(\mathcal{Y},\mathcal{X})$. The following conditions are equivalent: 1. (A) For each $\mathcal{E}\in\Upsilon$ there exists $p_{\mathcal{E}}\geq 0$ such that $\mathcal{R}\mathcal{E}\SS=p_{\mathcal{E}}\mathcal{I}_{\mathcal{X}}$ and $\int_{\Upsilon}p_{\mathcal{E}}\mu(d\mathcal{E})>0.$ 2. (B) It holds that $0\neq\mathcal{R}\bar{\mathcal{E}}\SS\propto\mathcal{I}_{\mathcal{X}}$. ###### Proof. $(B)\implies(A)$ Let us assume that $\mathcal{R}\bar{\mathcal{E}}\SS=p\mathcal{I}_{\mathcal{X}}$ for $p>0$. There exists a $k$-dimensional affine subspace $\mathcal{L}$ such that $\Upsilon\subset\mathcal{L}$ and $\mathrm{int}_{\mathcal{L}}(\Upsilon)\neq\emptyset$. Take arbitrary $\mathcal{E}_{0}\in\Upsilon$. There exist $\mathcal{E}_{1},\ldots,\mathcal{E}_{k}\in\Upsilon$ such that the convex hull of the points $\mathcal{E}_{0},\ldots,\mathcal{E}_{k}$ is a $k$-dimensional simplex $\Delta_{k}$. 
For any state $|\psi\rangle\\!\langle\psi|\in\mathcal{D}(\mathcal{X})$ it holds $p|\psi\rangle\\!\langle\psi|=\mathcal{R}\bar{\mathcal{E}}\SS(|\psi\rangle\\!\langle\psi|)=\int_{\Upsilon}\mathcal{R}\mathcal{E}\SS(|\psi\rangle\\!\langle\psi|)\mu(d\mathcal{E})\geq\int_{\Delta_{k}}\mathcal{R}\mathcal{E}\SS(|\psi\rangle\\!\langle\psi|)\mu(d\mathcal{E}).$ (116) Inside $\Delta_{k}$, each $\mathcal{E}$ can be uniquely represented as $\sum_{i=0}^{k}q_{i}(\mathcal{E})\mathcal{E}_{i}$, where $(q_{i}(\mathcal{E}))_{i=0}^{k}$ is a probability vector which depends on $\mathcal{E}$. Hence, $p|\psi\rangle\\!\langle\psi|\geq\sum_{i=0}^{k}\int_{\Delta_{k}}q_{i}(\mathcal{E})\mathcal{R}\mathcal{E}_{i}\SS(|\psi\rangle\\!\langle\psi|)\mu(d\mathcal{E})\geq\left(\int_{\Delta_{k}}q_{0}(\mathcal{E})\mu(d\mathcal{E})\right)\mathcal{R}\mathcal{E}_{0}\SS(|\psi\rangle\\!\langle\psi|).$ (117) There exists a small ball $B_{\epsilon}$ of radius $\epsilon$ around $\mathcal{E}_{0}$ such that for each channel $\mathcal{E}\in B_{\epsilon}\cap\Delta_{k}$ it holds $q_{0}(\mathcal{E})\geq\frac{1}{2}$. Hence, $\int_{\Delta_{k}}q_{0}(\mathcal{E})\mu(d\mathcal{E})\geq\frac{1}{2}\mu\left(B_{\epsilon}\cap\Delta_{k}\right)>0,$ where in the last inequality we used the fact that the support of $\mu$ is equal to $\Upsilon$. Therefore, it holds that for any $|\psi\rangle\\!\langle\psi|\in\mathcal{D}(\mathcal{X})$ we have $\mathcal{R}\mathcal{E}_{0}\SS(|\psi\rangle\\!\langle\psi|)\propto|\psi\rangle\\!\langle\psi|$ and from Lemma 18 there exists $p_{\mathcal{E}_{0}}\geq 0$ such that $\mathcal{R}\mathcal{E}_{0}\SS=p_{\mathcal{E}_{0}}\mathcal{I}_{\mathcal{X}}$. Since $\mathcal{E}_{0}\in\Upsilon$ was arbitrary, the relation $\int_{\Upsilon}p_{\mathcal{E}}\mu(d\mathcal{E})=p>0$ proves $(A)$. For the converse implication $(A)\implies(B)$, linearity gives $\mathcal{R}\bar{\mathcal{E}}\SS=\int_{\Upsilon}\mathcal{R}\mathcal{E}\SS\,\mu(d\mathcal{E})=\big{(}\int_{\Upsilon}p_{\mathcal{E}}\,\mu(d\mathcal{E})\big{)}\mathcal{I}_{\mathcal{X}}$, which is a nonzero multiple of $\mathcal{I}_{\mathcal{X}}$ by assumption. ∎
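As a concrete illustration of the random-channel construction used in the proof of Theorem 16 above, here is a minimal NumPy sketch (our own; the function name, dimensions and seed are illustrative assumptions, not part of the original text). It samples $r$ independent Ginibre matrices, normalizes them into Kraus operators $K_{i}=G_{i}Q^{-1/2}$ with $Q=\sum_{i}G_{i}^{\dagger}G_{i}$, and checks trace preservation $\sum_{i}K_{i}^{\dagger}K_{i}={\rm 1l}$:

import numpy as np
from scipy.linalg import sqrtm

def random_channel_kraus(dim_y, r, seed=0):
    # Sample r independent Ginibre matrices G_i on C^{dim_y} and return the
    # Kraus operators K_i = G_i Q^{-1/2}, where Q = sum_i G_i^dagger G_i,
    # so that sum_i K_i^dagger K_i = Id (trace preservation).
    rng = np.random.default_rng(seed)
    G = [rng.standard_normal((dim_y, dim_y)) + 1j * rng.standard_normal((dim_y, dim_y))
         for _ in range(r)]
    Q = sum(g.conj().T @ g for g in G)          # positive definite almost surely
    Q_inv_sqrt = np.linalg.inv(sqrtm(Q))        # Q^{-1/2}
    return [g @ Q_inv_sqrt for g in G]

kraus = random_channel_kraus(dim_y=4, r=2)
identity_check = sum(K.conj().T @ K for K in kraus)
print(np.allclose(identity_check, np.eye(4)))   # True, up to numerical error

Almost surely $Q$ is invertible, so $Q^{-1/2}$ is well defined; this matches the probability-one set $A$ in the proof.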
Comparing Session Type Systems derived from Linear Logic†
† Work partially supported by the Dutch Research Council (NWO) under the VIDI Project No. 016.Vidi.189.046 (Unifying Correctness for Communicating Software).
Bas van den Heuvel    Jorge A. Pérez
University of Groningen, The Netherlands

Session types are a typed approach to message-passing concurrency, where types describe sequences of intended exchanges over channels. Session type systems have been given strong logical foundations via Curry-Howard correspondences with linear logic, a resource-aware logic that naturally captures structured interactions. These logical foundations provide an elegant framework to specify and (statically) verify message-passing processes. In this paper, we rigorously compare different type systems for concurrency derived from the Curry-Howard correspondence between linear logic and session types. We address the main divide between these type systems: the classical and intuitionistic presentations of linear logic. Over the years, these presentations have given rise to separate research strands on logical foundations for concurrency; the differences between their derived type systems have only been addressed informally. To formally assess these differences, we develop πULL, a session type system that encompasses type systems derived from classical and intuitionistic interpretations of linear logic. Based on a fragment of Girard's Logic of Unity, πULL provides a basic reference framework: we compare existing session type systems by characterizing fragments of πULL that coincide with classical and intuitionistic formulations. We analyze the significance of our characterizations by considering the locality principle (enforced by intuitionistic interpretations but not by classical ones) and forms of process composition induced by the interpretations.

Session types are an approach to correct message-passing concurrency in which communication over channels is abstracted by types as sequences of exchanges. Session type systems have been given logical foundations via Curry-Howard correspondences with linear logic, both in intuitionistic and classical presentations. The type systems derived from these two logics enforce communication correctness on classes of $\pi$-calculus processes that largely coincide. However, there are significant differences between these classes. It has been informally observed that, unlike the classical type system, the intuitionistic type system enforces locality for shared channels (i.e. received channels cannot be used for replicated receives). In this paper, we revisit this observation from a formal angle. We develop United Linear Logic (ULL), a logic encompassing both classical and intuitionistic linear logic based on Girard's Logic of Unity. Simultaneously, we follow the Curry-Howard correspondences for session types to define a session type system for the $\pi$-calculus based on ULL, denoted πULL. We then use πULL to formally assess the differences between the intuitionistic and classical type systems and their classes of typable processes, and justify the role of locality and duality therein. We confirm the informal observation concerning locality, but also discuss a newly discovered difference concerning non-local empty send actions. We further corroborate the usefulness of πULL by considering extensions with rules for more expressive parallel composition and channel connection, and find that such extensions do not fit in type systems derived from intuitionistic linear logic. 
Concurrency, linear logic, $\pi$-calculus, session types.

§ INTRODUCTION

Establishing the correctness of message-passing programs is a central but challenging problem. Within formal methods for concurrency, session types [27, 49, 31] are now a consolidated approach to statically verifying safety and liveness properties of communicating programs. Session types specify communication over channels as sequences of exchanges. This way, e.g., the session type $!\mathsf{int}.?\mathsf{bool}.\mathsf{end}$ describes a channel's intended protocol: send an integer, receive a boolean, and close the channel. Due to its simplicity and expressivity, the $\pi$-calculus [39, 47]—the paradigmatic model of concurrency and interaction—is a widely used specification language for developing session types and establishing their basic theory. In this paper, we are interested in developing further the logical foundations of session type systems for the $\pi$-calculus.

Context: Linear Logic and Session Types In a line of work developed by Caires, Pfenning, Wadler, and several others, the theory of session types has been given firm logical foundations in the form of Curry-Howard-style correspondences. Caires and Pfenning [6] discovered a correspondence between session types for the $\pi$-calculus and Girard's linear logic [22]:

session types ↔ logical propositions
$\pi$-calculus processes ↔ proofs
process communication ↔ cut elimination

Based on this logical bridge, the resulting type systems simultaneously ensure important properties for processes, such as session fidelity (processes respect session types), communication safety (absence of communication errors), and progress/deadlock-freedom (processes never reach stuck states). There are two presentations of linear logic, classical and intuitionistic, and session type systems derived from linear logic have inherited this dichotomy. Under Curry-Howard interpretations, typing judgments and typing rules correspond to the logical sequents and inference rules of the underlying linear logic. In both classical and intuitionistic cases, judgments assign independent session protocols to the channels of a process. Because these judgments differ between the presentations of linear logic, there are consequences for their respective interpretations as type systems:

* Caires and Pfenning's correspondence [6] uses an intuitionistic linear logic where judgments for processes are two-sided, with zero or more channels on the left and exactly one channel on the right. Such judgments have a convenient rely-guarantee reading: the process relies on the behaviors described by the channels on the left to guarantee the behavior of the one channel on the right. In this interpretation, each logical connective thus requires two rules: the right rule specifies how to offer a behavior, while the left rule specifies how to use a behavior.

* Wadler's correspondence [52] uses classical linear logic, where judgments are single-sided (all channels appear on the right) and lack a rely-guarantee reading. In this interpretation, there is only a single rule per logical connective, which makes such type systems direct and economical.

Although the differences and relationship between intuitionistic and classical linear logic are relevant and well-known (see, e.g., [46, 5, 36]), the differences between their derived session type systems have only been addressed informally. In particular, Caires et al. [11] observe that, unlike classical interpretations, an intuitionistic interpretation guarantees locality for shared channels. 
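Schematically, in the notation commonly used in these two strands (recalled here only for orientation; the precise judgments appear in the works cited above), the two judgment shapes are:
\begin{align*}
\text{intuitionistic:}~~ & \Delta \vdash P :: z{:}A && \text{($P$ relies on the channels in $\Delta$ to guarantee $A$ at $z$)} \\
\text{classical:}~~ & P \vdash x_{1}{:}A_{1}, \ldots, x_{n}{:}A_{n} && \text{(all channels on the right, none distinguished)}
\end{align*}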
In the meantime, both interpretations have been extended in multiple, exciting directions (see, e.g., [8, 1, 12, 48, 37, 3, 7, 13, 50, 9, 32, 44, 18, 30]). This emergence of families of type systems (one classical, one intuitionistic) has deepened the original dichotomy and somewhat obscured our understanding of logical foundations of concurrency as a whole. This state of affairs calls for a formal comparison between the session type systems derived from classical and intuitionistic linear logics that goes beyond their superficial differences. This seems to us an indispensable step in consolidating the logical foundations of message-passing concurrency.

Goal of the Paper In this paper, we aim at formally comparing the session type systems derived from interpretations of intuitionistic and classical linear logic. We are concerned with two families of type systems for the $\pi$-calculus, which naturally induce two classes of typable processes, namely those typable in intuitionistic and classical interpretations, respectively. These two classes are known to largely overlap with each other; informal observations suggest that some processes are typable in a classical setting but not in an intuitionistic setting. We are interested in determining precisely how these classes of processes relate to each other, but especially in uncovering why their logical underpinnings justify such relationships. The key step in our approach is to define a basic framework of reference in which both type systems, intuitionistic and classical, can be objectively compared. Girard's Logic of Unity (LU) [23] has a similar goal: to objectively compare classical logic, intuitionistic logic, and (classical and intuitionistic) linear logic in a single system, abstracting away from syntactical differences. It is then only natural to use LU as a reference for our comparison. We develop πULL, a session type system for the $\pi$-calculus that subsumes classical and intuitionistic interpretations of linear logic as session types. Based on a fragment of LU that suits our purposes (dubbed ULL), the type system πULL allows us to define a class of typable processes that contains processes typable both in the intuitionistic and in the classical type system. In our approach, the type system πULL provides an effective framework and a yardstick for a formal comparison. Because its typing rules encompass both classical and intuitionistic formulations, we can readily characterize their two (sub-)classes of typable processes by considering the different typing rules that apply in each case. Analyzing these different portions of rules allows us to then characterize the differences (in expressiveness/typability) between the intuitionistic and classical type systems. Moreover, these characterizations allow us to determine how extensions to the type systems might be supported by the intuitionistic and classical interpretations.

Contributions and Outline Summing up, this paper makes the following contributions:

* The session type system πULL, which is derived from a concurrent interpretation of ULL (a linear fragment of LU), together with its soundness results (<Ref>, sec:ull);

* A formal comparison between (i) the classes of processes typable under interpretations of intuitionistic and classical linear logic and (ii) those typable in πULL (sec:comparison). 
We prove that (1) πULL precisely captures the class of processes typable under the classical interpretation (<Ref>), and that (2) the class of processes typable under the intuitionistic interpretation is strictly included in that of πULL (<Ref>). Together, these two results confirm the informal observation that the two interpretations induce different classes of processes.

* An analysis of the significance of our characterizations in terms of two different aspects: the locality principle enforced by intuitionistic interpretations, and the forms of process composition, which are usually more limited in type systems derived from linear logic than in session type systems not based on logical foundations (sec:discussion). This analysis corroborates the observation made in [11] concerning locality, and suitably extends it to show that intuitionistic interpretations cannot type empty sends on previously received channels.

Finally, we draw some conclusions in <Ref>.

Related Work Our work can be seen as an expressiveness study on the classes of processes induced by different type systems. Concretely, our results can be seen as providing formal evidence that session type systems derived from classical linear logic are more expressive than those based on intuitionistic linear logic, based on the fact that the former are more permissive than the latter. The study of the relative expressiveness of process calculi has a long, fruitful history (see, e.g., [25, 42, 40]). The aim is to compare two different process languages by defining an encoding (a translation up to some correctness criteria) between them or by proving its non-existence. Even though our aims are similar in spirit to expressiveness studies, at a technical level there are substantial differences, because our focus is on assessing the influence that different type systems have on the same process language—as such, our comparisons do not rely on encodings. Although most studies on relative expressiveness have addressed untyped process languages, some works have studied the influence of (behavioral) types on the expressiveness of process languages—see, e.g., [15, 21, 14, 35, 41]. Salient examples are the works [17, 43], which compare type systems that enforce the deadlock-freedom and termination properties, respectively. In particular, a main result in [17] is that (classical) linear logic induces a strict subclass of deadlock-free processes with respect to the class of typable processes in non-logically motivated type systems. Unlike our work, the focus in [17] is on different process languages, each with a different type system. This requires the definition of encodings, on processes but also on types, that abstract away from the syntactic differences between the classes of processes induced by each typed framework. This paper is an extended, revised version of the workshop paper [28]. We present several improvements and developments with respect to that work. First, we have significantly improved the correctness results for πULL (<Ref>) and considered branching/selection types (not studied in [28]). Also, we have refined the presentation of ULL in [28] with a more explicit treatment of duality, presented in <Ref>. Moreover, we have broadened the analysis of our comparison results in <Ref> by considering extensions to the type system with more expressive forms of parallel composition and restriction. 
§ A SESSION TYPE SYSTEM BASED ON LU

Girard [23] developed Logic of Unity (LU) to study and compare classical, intuitionistic, and linear logic, without having to “change the rules of the game by, e.g., passing from one style of sequent to the other.” The idea is simple: there is one form of sequent with an abundance of inference rules. The several logics subsumed by LU are then characterized by (possibly overlapping) subsets of those rules. Clearly, we find ourselves in a similar situation: we want to compare intuitionistic and classical linear logic as session type systems, by abstracting away from typing judgments and rules of different forms. To this end, in this section we introduce United Linear Logic (ULL), a logic based on the linear fragment of LU, and present the Curry-Howard interpretation of ULL as a session type system for the $\pi$-calculus, dubbed πULL, following [6, 52] (ssec:proccal). As we will see in <Ref>, we can then characterize the session type interpretations of intuitionistic and classical linear logic as subsets of the typing rules of πULL. We also present correctness properties common to (logically motivated) session type systems (ssec:procprop). Finally, we discuss an alternative presentation of ULL that has a more explicit account of duality (ssec:duality).

§.§ The Process Calculus and Type System

A session type represents a sequence of exchanges that should be performed along a channel. Propositions in ULL—interpreted as session types in πULL—are defined as follows: propositions/types are generated by the following grammar:
\begin{align*} A, B ::= \1 \bnf \bot \bnf A \tensor B \bnf A \lolli B \bnf \oplus\{i:A_i\}_{i \in I} \bnf \&\{i:A_i\}_{i \in I} \bnf \bang A \bnf \whynot A \end{align*}
<Ref> gives the intuitive reading of the interpretation of propositions as session types. Some details are worth noting. First, receiving in πULL is defined using the intuitionistic connective $\lolli$; we will see later that $\lolli$ can be used to define the classical connective $\parr$—also interpreted as receiving—which is not supported by intuitionistic linear logic. Second, ULL supports labeled $n$-ary branching constructs (following, e.g., Caires and Pérez [7]). In linear logic, $\oplus$ and $\&$ are binary connectives [22, 24, 23]. In [6, 52], Caires, Pfenning, and Wadler originally interpret them as a choice between a left and a right option. However, in session-typed $\pi$-calculi (see, e.g., [31]) $n$-ary choice is standard and more convenient.

$\1$ and $\bot$: Close the channel
$A \tensor B$: Send a channel of type $A$ and continue as $B$
$A \lolli B$: Receive a channel of type $A$ and continue as $B$
$\oplus\{i:A_i\}_{i \in I}$: Send a label $i \in I$ and continue as $A_i$
$\&\{i:A_i\}_{i \in I}$: Receive a label $i \in I$ and continue as $A_i$
$\bang A$: Repeatedly provide a service of type $A$
$\whynot A$: Connect to a service of type $A$
Interpretation of propositions as session types. 
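To connect back to the protocol sketched in the introduction (our gloss; note that ULL, as a pure linear logic, has no base data types such as $\mathsf{int}$ or $\mathsf{bool}$): a channel that first sends, then receives, and finally closes is described by a proposition of the shape
\begin{align*} A \tensor (B \lolli \1), \end{align*}
read off the table above as: send a channel of type $A$, then receive a channel of type $B$, then close.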
The duality of propositions is defined as follows. Duality is a unary operation on propositions/types, denoted $\dual{A}$, defined as follows: \begin{align*} \dual{\1} &:= \bot & \dual{(A \tensor B)} &:= A \lolli \dual{B} & \dual{(\oplus\{i:A_i\}_{i \in I})} &:= \&\{i:\dual{A_i}\}_{i \in I} & \dual{(\bang A)} &:= \whynot \dual{A} \\ \dual{\bot} &:= \1 & \dual{(A \lolli B)} &:= A \tensor \dual{B} & \dual{(\&\{i:A_i\}_{i \in I})} &:= \oplus\{i:\dual{A_i}\}_{i \in I} & \dual{(\whynot A)} &:= \bang \dual{A} \end{align*} Duality in ULL reflects the intended reciprocity of protocols between two parties: when a process on one side of a channel sends, the process on the opposite side must receive, and vice versa. It is easy to see that duality is an involution: $\dual{(\dual{A})} = A$. As mentioned before, we can use duality to define $\parr$ in terms of $\lolli$: $A \parr B := \dual{A} \lolli B$; as we will see, the rules for $\parr$ of classical linear logic can be recovered from the rules for $\lolli$ in ULL using duality. Duality also shows that $\tensor$ and $\parr$ are De Morgan-style duals in classical linear logic: \begin{align*} \dual{(A \tensor B)} &= A \lolli \dual{B} = \dual{(\dual{A})} \lolli \dual{B} = \dual{A} \parr \dual{B} \\ \dual{(A \parr B)} &= \dual{(\dual{A} \lolli B)} = \dual{A} \tensor \dual{B} \end{align*} The discipline of binary session types deals with concurrent processes that communicate through point-to-point channels. The $\pi$-calculus [39, 47] offers a rigorous yet expressive framework for defining this discipline and establishing its fundamental properties. $\pi$ULL is a type system for $\pi$-calculus processes, defined as follows: Process terms are generated by the following grammar: \begin{align*} P, Q ::= \0 \bnf \nu{x}P \bnf P \| Q \bnf \send{x}{y}.P \bnf \recv{x}{y}.P \bnf x \triangleleft \ell.P \bnf x \triangleright \{i:P_i\}_{i \in I} \\ \bnf \serv{x}{y}.P \bnf \fwd{x}{y} \bnf \send{x}{}.P \bnf \recv{x}{}.P \end{align*} Process constructs for inaction $\0$, channel restriction $\nu{x}P$, and parallel composition $P \| Q$ have standard readings. The same applies to constructs for sending, receiving, selection, branching, and replicated receive prefixes, which are respectively denoted $\send{x}{y}.P$, $\recv{x}{y}.P$, $x \triangleleft \ell.P$, $x \triangleright \{i:P_i\}_{i \in I}$, and $\serv{x}{y}.P$. Note that all these prefixes are blocking, which means that our process calculus implements synchronous communication (see, e.g., [12, 29] for interpretations of linear logic as session type systems in the asynchronous setting). Process $\fwd{x}{y}$ denotes a forwarder that “fuses” channels $x$ and $y$; it is akin to the link processes used in encodings of name passing using internal mobility [2]. We also consider the constructs $\send{x}{}.P$ and $\recv{x}{}.P$, which specify the explicit closing of channels: their synchronization represents the explicit de-allocation of linear resources. We use these constructs to give a non-silent interpretation of $\1$ and $\bot$ (see ssec:types). In $\nu{y}P$, $\recv{x}{y}.P$, and $\serv{x}{y}.P$ the occurrence of $y$ is binding, with scope $P$. The set of free names of a process $P$ is denoted $\fn(P)$ and is the complement of $\bn(P)$, the set of bound names of $P$. We identify processes up to consistent renaming of bound names, writing $\equiv_\alpha$ for this congruence. We write $P\subst{y/x}$ for the capture-avoiding substitution of free occurrences of $y$ for $x$ in $P$. 
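To make the grammar of types and the duality operation above concrete, here is a minimal executable sketch in Haskell. It is our own encoding, not part of the paper: the names `Type`, `dual`, and `parr` are ours, and we assume labels can be modeled as strings and branches as association lists.

```haskell
-- Our own Haskell sketch of ULL propositions/types and their duality.

data Type
  = One                      -- 1  : close (empty send)
  | Bot                      -- ⊥  : close (empty receive)
  | Tensor Type Type         -- A ⊗ B : send a channel of type A, continue as B
  | Lolli  Type Type         -- A ⊸ B : receive a channel of type A, continue as B
  | Plus   [(String, Type)]  -- ⊕{i:Ai} : send a label i, continue as Ai
  | With   [(String, Type)]  -- &{i:Ai} : receive a label i, continue as Ai
  | OfCourse Type            -- !A : repeatedly provide a service of type A
  | WhyNot   Type            -- ?A : connect to a service of type A
  deriving (Eq, Show)

-- Duality, mirroring the defining equations in the text.
dual :: Type -> Type
dual One          = Bot
dual Bot          = One
dual (Tensor a b) = Lolli a (dual b)                 -- (A ⊗ B)^⊥ = A ⊸ B^⊥
dual (Lolli a b)  = Tensor a (dual b)                -- (A ⊸ B)^⊥ = A ⊗ B^⊥
dual (Plus bs)    = With [ (i, dual a) | (i, a) <- bs ]
dual (With bs)    = Plus [ (i, dual a) | (i, a) <- bs ]
dual (OfCourse a) = WhyNot (dual a)
dual (WhyNot a)   = OfCourse (dual a)

-- The classical ⅋ is definable via duality: A ⅋ B := A^⊥ ⊸ B.
parr :: Type -> Type -> Type
parr a b = Lolli (dual a) b

main :: IO ()
main = do
  let a = Tensor One (Plus [("get", Bot), ("quit", Bot)])
  print (dual a)
  print (dual (dual a) == a)  -- duality is an involution: prints True
```

Running `main` prints the dual of a sample type and checks the involution property $\dual{(\dual{A})} = A$ on it; the definition of `parr` mirrors the equation $A \parr B := \dual{A} \lolli B$ above.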
Structural Congruence The following is an important notion of syntactic equivalence for processes. Structural congruence is a binary relation on process terms, denoted $P \equiv Q$. It is defined as the least congruence on processes (i.e., closed under arbitrary process contexts) that satisfies the axioms in <Ref> (top). The above formulation of structural congruence internalizes the conditions induced by typing (i.e., proof transformations in ULL), necessary for the correctness properties proven in <Ref>. As is usual for Curry-Howard correspondences, computation is related to cut reduction. Cuts are used in logic to combine two proofs that contain dual propositions. As we will see, a cut in session type interpretations of linear logic entails the parallel composition and connection of two processes. Generally, cut reduction transforms cuts into cuts on smaller types. In correspondences between linear logic and session types, cut reduction corresponds to communication—the notion of computation in the $\pi$-calculus, which expresses internal behavior of processes. The definition of the reduction relation follows. Note that, for a simpler presentation, it includes a set of rules for commuting conversions ($\kappa$-rules); an alternative presentation could treat commuting conversions as a behavioral equivalence. \begin{align*} &\ruleLabel{cutSymm} & \nu{x}(P \| Q) &\equiv \nu{x}(Q \| P) \\ &\ruleLabel{cutAssocL} & \nu{x}(P \| \nu{y}(Q \| R)) &\equiv \nu{y}(Q \| \nu{x}(P \| R)) && \text{if } x \notin \fn(Q),\ y \notin \fn(P) \\ &\ruleLabel{cutAssocR} & \nu{x}(P \| \nu{y}(Q \| R)) &\equiv \nu{y}(\nu{x}(P \| Q) \| R) && \text{if } x \notin \fn(R),\ y \notin \fn(P) \end{align*} \begin{align*} &\ruleLabel{$\beta$id} & \nu{x}(P \| \fwd{x}{y}) &\red P\subst{y/x} && \text{if } x \neq y \\ &\ruleLabel{$\beta$close} & \nu{x}(\send{x}{}.\0 \| \recv{x}{}.Q) &\red Q \\ &\ruleLabel{$\beta$send} & \nu{x}(\nu{y}\send{x}{y}.(P_1 \| P_2) \| \recv{x}{z}.Q) &\red \nu{x}(P_2 \| \nu{y}(P_1 \| Q\subst{y/z})) \\ &\ruleLabel{$\beta$sel} & \nu{x}(x \triangleleft j.P \| x \triangleright \{i:Q_i\}_{i \in I}) &\red \nu{x}(P \| Q_j) && \text{if } j \in I \\ &\ruleLabel{$\beta$serv} & \nu{x}(\nu{y}\send{x}{y}.P \| \serv{x}{z}.Q) &\red \nu{x}(\nu{y}(P \| Q\subst{y/z}) \| \serv{x}{z}.Q) \\ &\ruleLabel{$\beta$weaken} & \nu{u}(P \| \serv{u}{z}.Q) &\red P && \text{if } u \notin \fn(P) \\ &\ruleLabel{$\kappa\bot$} & \nu{y}(P \| \recv{x}{}.Q) &\red \recv{x}{}.\nu{y}(P \| Q) \\ &\ruleLabel{$\kappa\tensor_1$} & \nu{y}(P \| \nu{z}\send{x}{z}.(Q_1 \| Q_2)) &\red \nu{z}\send{x}{z}.(Q_1 \| \nu{y}(P \| Q_2)) && \text{if } y \in \fn(Q_2) \\ &\ruleLabel{$\kappa\tensor_2$} & \nu{y}(P \| \nu{z}\send{x}{z}.(Q_1 \| Q_2)) &\red \nu{z}\send{x}{z}.(\nu{y}(P \| Q_1) \| Q_2) && \text{if } y \in \fn(Q_1) \\ &\ruleLabel{$\kappa\parr$} & \nu{y}(P \| \recv{x}{z}.Q) &\red \recv{x}{z}.\nu{y}(P \| Q) \\ &\ruleLabel{$\kappa\oplus$} & \nu{y}(P \| x \triangleleft \ell.Q) &\red x \triangleleft \ell.\nu{y}(P \| Q) \\ &\ruleLabel{$\kappa\&$} & \nu{y}(x \triangleright \{i:P_i\}_{i \in I} \| Q) &\red x \triangleright \{i:\nu{y}(P_i \| Q)\}_{i \in I} \end{align*} Structural congruence and reduction for $\pi$ULL. Reduction is a binary relation on process terms, denoted $P \red Q$, defined in <Ref>. It is closed under the following rules: \[ \frac{Q \red Q'}{P \| Q \red P \| Q'}\;\ruleLabel{par} \qquad \frac{P \red Q}{\nu{y}P \red \nu{y}Q}\;\ruleLabel{res} \qquad \frac{P \equiv P' \quad P' \red Q' \quad Q' \equiv Q}{P \red Q}\;\ruleLabel{sc} \] We write $\red_\beta$ to denote reductions that follow from $\beta$-rules. In this definition, Rule $\beta$id replaces a channel $x$ with the channel $y$ that $x$ is forwarded to. Rule $\beta$close formalizes the explicit channel de-allocation mentioned above: the synchronization between an empty send and an empty receive effectively closes channel $x$. Rule $\beta$send formalizes the synchronization between a send and a corresponding receive, substituting the sent name for the received name in the continuation of the receive. Rule $\beta$sel formalizes the synchronization between a selection and a branch. Rule $\beta$serv allows a client and a service to synchronize; a copy of the service $Q$ is spawned with the sent name substituted for the received name, while the replicated receive remains. Rule $\beta$weaken allows cleaning up a service that has no clients. The $\kappa$-rules reflect commuting conversions in linear logic, which in the process calculus allow prefixes on free names to be pulled out of series of consecutive restrictions; as we will see, we use them so that we can state a general progress result. Rules par, res, and sc close reduction under parallel composition, restriction, and structural congruence, respectively (<Ref>). 
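To illustrate how the $\beta$-rules drive computation, here is a small worked example (ours, not from the original presentation): a fresh channel $y$ is sent over $x$, after which both sessions are explicitly closed.
\begin{align*}
& && \nu{x}\big(\nu{y}\send{x}{y}.(\send{y}{}.\0 \| \send{x}{}.\0) \| \recv{x}{z}.\recv{z}{}.\recv{x}{}.\0\big) \\
& \red && \nu{x}\big(\send{x}{}.\0 \| \nu{y}(\send{y}{}.\0 \| \recv{y}{}.\recv{x}{}.\0)\big) \tag{\ruleLabel{$\beta$send}} \\
& \red && \nu{x}\big(\send{x}{}.\0 \| \recv{x}{}.\0\big) \tag{\ruleLabel{$\beta$close}, under \ruleLabel{par} and \ruleLabel{res}} \\
& \red && \0 \tag{\ruleLabel{$\beta$close}}
\end{align*}
Note how the middle step applies a $\beta$-rule to a subterm, using the closure rules par and res.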
Type Checking The inference system of ULL is a sequent calculus with sequents of the form $\Gamma; \Delta \vdash \Lambda$. Here, $\Gamma$, $\Delta$ and $\Lambda$ denote regions, which collect propositions and obey different structural criteria. $\Gamma$ is the unrestricted region, which contains propositions that can be used indefinitely. $\Delta$ and $\Lambda$ are the linear regions, which contain propositions that must be used exactly once. We write `$\emptyset$' to denote an empty region. Also, we extend duality to regions: $\dual{(\Delta)}$ contains exactly the duals of the propositions in $\Delta$. The type system $\pi$ULL is an extension of ULL's inference system with process and channel name annotations on sequents, such that judgments are of the form $\Gamma; \Delta \vdash P :: \Lambda$. The regions $\Gamma$, $\Delta$ and $\Lambda$ then denote the unrestricted and linear contexts of $P$, respectively, consisting of assignments $x:A$ where $x$ is a channel name and $A$ is a proposition/type. \begin{gather*} \frac{}{\Gamma; x:A \vdash \fwd{x}{y} :: y:A}\;\ruleLabel{idR}{}^{\ast} \qquad \frac{}{\Gamma; x:A, y:\dual{A} \vdash \fwd{x}{y} :: \emptyset}\;\ruleLabel{idL} \qquad \frac{}{\Gamma; \emptyset \vdash \send{x}{}.\0 :: x:\1}\;\ruleLabel{$\1$R}{}^{\ast} \\[4pt] \frac{\Gamma; \Delta \vdash P :: \Lambda}{\Gamma; \Delta, x:\1 \vdash \recv{x}{}.P :: \Lambda}\;\ruleLabel{$\1$L}{}^{\ast} \qquad \frac{\Gamma; \Delta \vdash P :: \Lambda}{\Gamma; \Delta \vdash \recv{x}{}.P :: \Lambda, x:\bot}\;\ruleLabel{$\bot$R} \qquad \frac{}{\Gamma; x:\bot \vdash \send{x}{}.\0 :: \emptyset}\;\ruleLabel{$\bot$L} \\[4pt] \frac{\Gamma; \Delta \vdash P :: \Lambda, y:A \quad \Gamma; \Delta' \vdash Q :: \Lambda', x:B}{\Gamma; \Delta, \Delta' \vdash \nu{y}\send{x}{y}.(P \| Q) :: \Lambda, \Lambda', x:A \tensor B}\;\ruleLabel{$\tensor$R}{}^{\ast} \qquad \frac{\Gamma; \Delta, y:A, x:B \vdash P :: \Lambda}{\Gamma; \Delta, x:A \tensor B \vdash \recv{x}{y}.P :: \Lambda}\;\ruleLabel{$\tensor$L}{}^{\ast} \\[4pt] \frac{\Gamma; \Delta \vdash P :: \Lambda, y:A, x:B}{\Gamma; \Delta \vdash \recv{x}{y}.P :: \Lambda, x:A \parr B}\;\ruleLabel{$\parr$R} \qquad \frac{\Gamma; \Delta, y:A \vdash P :: \Lambda \quad \Gamma; \Delta', x:B \vdash Q :: \Lambda'}{\Gamma; \Delta, \Delta', x:A \parr B \vdash \nu{y}\send{x}{y}.(P \| Q) :: \Lambda, \Lambda'}\;\ruleLabel{$\parr$L} \\[4pt] \frac{\Gamma; \Delta, y:A \vdash P :: \Lambda, x:B}{\Gamma; \Delta \vdash \recv{x}{y}.P :: \Lambda, x:A \lolli B}\;\ruleLabel{$\lolli$R}{}^{\ast} \qquad \frac{\Gamma; \Delta \vdash P :: \Lambda, y:A \quad \Gamma; \Delta', x:B \vdash Q :: \Lambda'}{\Gamma; \Delta, \Delta', x:A \lolli B \vdash \nu{y}\send{x}{y}.(P \| Q) :: \Lambda, \Lambda'}\;\ruleLabel{$\lolli$L}{}^{\ast} \\[4pt] \frac{\Gamma; \Delta \vdash P :: \Lambda, x:A_j \quad j \in I}{\Gamma; \Delta \vdash x \triangleleft j.P :: \Lambda, x:\oplus\{i:A_i\}_{i \in I}}\;\ruleLabel{$\oplus$R}{}^{\ast} \qquad \frac{\forall i \in I.\ \Gamma; \Delta, x:A_i \vdash P_i :: \Lambda}{\Gamma; \Delta, x:\oplus\{i:A_i\}_{i \in I} \vdash x \triangleright \{i:P_i\}_{i \in I} :: \Lambda}\;\ruleLabel{$\oplus$L}{}^{\ast} \\[4pt] \frac{\forall i \in I.\ \Gamma; \Delta \vdash P_i :: \Lambda, x:A_i}{\Gamma; \Delta \vdash x \triangleright \{i:P_i\}_{i \in I} :: \Lambda, x:\&\{i:A_i\}_{i \in I}}\;\ruleLabel{$\&$R}{}^{\ast} \qquad \frac{\Gamma; \Delta, x:A_j \vdash P :: \Lambda \quad j \in I}{\Gamma; \Delta, x:\&\{i:A_i\}_{i \in I} \vdash x \triangleleft j.P :: \Lambda}\;\ruleLabel{$\&$L}{}^{\ast} \\[4pt] \frac{\Gamma, u:A; \Delta \vdash P :: \Lambda, x:\dual{A}}{\Gamma, u:A; \Delta \vdash \nu{x}\send{u}{x}.P :: \Lambda}\;\ruleLabel{copyR} \qquad \frac{\Gamma, u:A; \Delta, x:A \vdash P :: \Lambda}{\Gamma, u:A; \Delta \vdash \nu{x}\send{u}{x}.P :: \Lambda}\;\ruleLabel{copyL}{}^{\ast} \\[4pt] \frac{\Gamma; \emptyset \vdash P :: y:A}{\Gamma; \emptyset \vdash \serv{x}{y}.P :: x:\bang A}\;\ruleLabel{$\bang$R}{}^{\ast} \qquad \frac{\Gamma, u:A; \Delta \vdash P :: \Lambda}{\Gamma; \Delta, x:\bang A \vdash P\subst{x/u} :: \Lambda}\;\ruleLabel{$\bang$L}{}^{\ast} \\[4pt] \frac{\Gamma, u:A; \Delta \vdash P :: \Lambda}{\Gamma; \Delta \vdash P\subst{x/u} :: \Lambda, x:\whynot \dual{A}}\;\ruleLabel{$\whynot$R} \qquad \frac{\Gamma; y:A \vdash P :: \emptyset}{\Gamma; x:\whynot A \vdash \serv{x}{y}.P :: \emptyset}\;\ruleLabel{$\whynot$L} \end{gather*} The $\pi$ULL type system. \begin{gather*} \frac{\Gamma; \Delta \vdash P :: \Lambda, x:A \quad \Gamma; \Delta', x:A \vdash Q :: \Lambda'}{\Gamma; \Delta, \Delta' \vdash \nu{x}(P \| Q) :: \Lambda, \Lambda'}\;\ruleLabel{cutRL}{}^{\ast} \qquad \frac{\Gamma; \Delta, x:A \vdash P :: \Lambda \quad \Gamma; \Delta' \vdash Q :: \Lambda', x:A}{\Gamma; \Delta, \Delta' \vdash \nu{x}(P \| Q) :: \Lambda, \Lambda'}\;\ruleLabel{cutLR}{}^{\ast} \\[4pt] \frac{\Gamma; \Delta \vdash P :: \Lambda, x:A \quad \Gamma; \Delta' \vdash Q :: \Lambda', x:\dual{A}}{\Gamma; \Delta, \Delta' \vdash \nu{x}(P \| Q) :: \Lambda, \Lambda'}\;\ruleLabel{cutRR} \qquad \frac{\Gamma; \Delta, x:A \vdash P :: \Lambda \quad \Gamma; \Delta', x:\dual{A} \vdash Q :: \Lambda'}{\Gamma; \Delta, \Delta' \vdash \nu{x}(P \| Q) :: \Lambda, \Lambda'}\;\ruleLabel{cutLL} \\[4pt] \frac{\Gamma, u:A; \Delta \vdash P :: \Lambda \quad \Gamma; \emptyset \vdash Q :: x:A}{\Gamma; \Delta \vdash \nu{u}(P \| \serv{u}{x}.Q) :: \Lambda}\;\ruleLabel{cut$\bang$R}{}^{\ast} \qquad \frac{\Gamma; \emptyset \vdash P :: x:A \quad \Gamma, u:A; \Delta \vdash Q :: \Lambda}{\Gamma; \Delta \vdash \nu{u}(\serv{u}{x}.P \| Q) :: \Lambda}\;\ruleLabel{cut$\bang$L}{}^{\ast} \\[4pt] \frac{\Gamma, u:A; \Delta \vdash P :: \Lambda \quad \Gamma; x:\dual{A} \vdash Q :: \emptyset}{\Gamma; \Delta \vdash \nu{u}(P \| \serv{u}{x}.Q) :: \Lambda}\;\ruleLabel{cut$\whynot$R} \qquad \frac{\Gamma; x:\dual{A} \vdash P :: \emptyset \quad \Gamma, u:A; \Delta \vdash Q :: \Lambda}{\Gamma; \Delta \vdash \nu{u}(\serv{u}{x}.P \| Q) :: \Lambda}\;\ruleLabel{cut$\whynot$L} \end{gather*} Cut-rules of the $\pi$ULL type system. <Ref> gives the typing rules of $\pi$ULL, which are based directly on the linear fragment of LU in [23] (some rules are marked with $\ast$, which we will refer to later). We comment on the rules in <Ref>. Axioms idR and idL type forwarding constructs, which connect two channels of dual type. Axioms $\1$R and $\bot$L type processes that close a session with an empty send after which they become inactive. Rules $\bot$R and $\1$L type processes that close a session with an empty receive. These four rules define a non-silent interpretation for $\1$ and $\bot$ that entails process communication (cf. <Ref>), which corresponds to cut reductions in proofs. (An alternative silent interpretation of $\1$ is discussed in <Ref>.) The typing system elegantly induces processes under the internal mobility discipline, whereby only fresh channels are exchanged in communications [45, 2]. Rules $\tensor$R, $\parr$L, and $\lolli$L type bound sends, where one process provides the sent channel independently from another process which provides the continuation channel. Rules $\tensor$L, $\parr$R, and $\lolli$R type receive-prefixed processes. Rules $\oplus$R and $\&$L type selection and Rules $\oplus$L and $\&$R type branching. 
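To illustrate how the axioms and cut-rules compose, here is a small example derivation (ours, not from the original presentation); it types a process that first closes $x$ and then signals on $z$:
\begin{align*}
& 1\quad && \emptyset; \emptyset \vdash \send{x}{}.\0 :: x:\1 \tag{\ruleLabel{$\1$R}} \\
& 2\quad && \emptyset; \emptyset \vdash \send{z}{}.\0 :: z:\1 \tag{\ruleLabel{$\1$R}} \\
& 3\quad && \emptyset; x:\1 \vdash \recv{x}{}.\send{z}{}.\0 :: z:\1 \tag{\ruleLabel{$\1$L} on 2} \\
& 4\quad && \emptyset; \emptyset \vdash \nu{x}(\send{x}{}.\0 \| \recv{x}{}.\send{z}{}.\0) :: z:\1 \tag{\ruleLabel{cutRL} on 1 and 3}
\end{align*}
A single application of Rule $\beta$close reduces this process to $\send{z}{}.\0$, matching the shape of the deadlock-freedom property established later in this section.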
Our interpretation of $\bang A$ and $\whynot A$ as server and client behaviors follows the interpretation of classical linear logic in [11]. Rules copyR and copyL type clients that connect to a service by sending a fresh channel. Rules $\bang$R and $\whynot$L allow the typing of (possibly unused) services, and Rules $\bang$L and $\whynot$R allow services to be added to the unrestricted context. <Ref> gives a series of so-called cut-rules, which type channel connections. The number and shape of these rules is a difference with respect to previous presentations. Rules cutRL, cutLR, cutLL, and cutRR type pairs of processes that have a channel of dual type in common by composing them in parallel and immediately binding their common channel. The four similar rules provide for all possible sides the cut channel can appear on. This way, constructs for restriction and parallel composition are jointly treated. Rules cut$\bang$R, cut$\bang$L, cut$\whynot$R, and cut$\whynot$L type the connection of a service provider with potential clients; a process $Q$ with potential clients has a channel $u$ in its unrestricted context, so the rules create a service from a process $P$ that has a single channel $x$ of type dual to $u$'s type by prefixing it with a replicated receive on $u$ (forming $\serv{u}{x}.P$) and then composing this process in parallel with $Q$ and binding $u$. This abundance of cut-rules derives from the generality of $\pi$ULL's judgments and is necessary for proving the correctness results presented in <Ref>. In <Ref> we shall consider an alternative presentation of $\pi$ULL, which allows moving channels between the left and right regions of typing judgments using duality; as we will see, in such a presentation we will be able to drastically cut down the number of cut-rules. Differences with LU As already mentioned, for the purposes of our formal comparison we consider a linear logic derived from LU [23], restricted to linear connectives. The following are notable differences between our linear logic and the linear fragment of LU: * we include a Rule idL which is complementary to idR; * we include Rules $\1$L and $\bot$R, which are lacking in LU; * we omit rules for $\top$ and 0 (the units of $\&$ and $\oplus$, resp.), which are usually disregarded in session type interpretations of linear logic (an exception is [30], which uses $\top$ and 0 to give a local account of subtyping); * we omit rules that move propositions between the left and right linear regions using duality (in <Ref> we will return to these rules); * because the order of assumptions in typing rules makes a practical difference, we include additional symmetric cut-rules. §.§ Correctness Properties Session type systems for the $\pi$-calculus derived from the Curry-Howard correspondence enforce strong correctness properties for processes, which follow directly from properties of the logic, in particular from cut elimination. This is no different for $\pi$ULL. Our first result is the safety property of subject congruence and reduction (<Ref>), which says that typability is consistent across structural congruence and reductions. If $\Gamma ; \Delta \vdash P :: \Lambda$ and $P \equiv Q$, then $\Gamma ; \Delta \vdash Q :: \Lambda$. By induction on the derivation of the structural congruence. The only inductive case is the closure under arbitrary process contexts, which follows directly from the IH. The base cases correspond to the rules in <Ref> (top). 
In each case, we infer the typings of $P$ and $Q$ from the shapes of the processes in the rule, and show that these typing inferences have identical assumptions and conclusions. The cases of Rules cutAssocL and cutAssocR are straightforward, as usual. The analysis of the more interesting case of Rule cutSymm depends on the last-applied cut-rule. If the left-hand side uses, e.g., Rule cutLR, then the right-hand side should use Rule cutRL. If $\Gamma; \Delta \vdash P :: \Lambda$ and $P \red Q$, then $\Gamma; \Delta \vdash Q :: \Lambda$. By induction on the derivation of the reduction. The cases correspond to the rules in <Ref> (bottom), as well as the closure rules in <Ref>. In each case, we infer the typing of $P$ and construct one for $Q$ from the shapes of the processes in the rule, and show that these typing inferences have identical assumptions and conclusions. We detail two cases. The case of Rule $\beta$serv serves to illustrate the need for multiple symmetric cut-rules. Suppose, for example, that the last-applied rule is Rule cut$\bang$R, and that the client request is derived using Rule copyL: \[ \begin{bussproof} \bussAssume{\Gamma, x:A; \Delta, y:A \vdash P :: \Lambda} \bussUn[\ruleLabel{copyL}]{\Gamma, x:A; \Delta \vdash \nu{y}\send{x}{y}.P :: \Lambda} \bussAssume{\Gamma; \emptyset \vdash Q :: z:A} \bussBin[\ruleLabel{cut$\bang$R}]{\Gamma; \Delta \vdash \nu{x}(\nu{y}\send{x}{y}.P \| \serv{x}{z}.Q) :: \Lambda} \end{bussproof} \] As per Rule $\beta$serv, we need to identically type $\nu{x}(\nu{y}(P \| Q\subst{y/z}) \| \serv{x}{z}.Q)$ using the same assumptions as above. Had we only had, e.g., Rules cutRL and cutLL, this would not be possible. However, with the rules in <Ref> it is no problem. We first have to add $x:A$ to the persistent region of the proof of the typing of $Q$, and substitute $y$ for $z$, after which we derive the following: \[ \begin{bussproof} \bussAssume{\Gamma, x:A; \Delta, y:A \vdash P :: \Lambda} \bussAssume{\Gamma, x:A; \emptyset \vdash Q\subst{y/z} :: y:A} \bussBin[\ruleLabel{cutLR}]{\Gamma, x:A; \Delta \vdash \nu{y}(P \| Q\subst{y/z}) :: \Lambda} \bussAssume{\Gamma; \emptyset \vdash Q :: z:A} \bussBin[\ruleLabel{cut$\bang$R}]{\Gamma; \Delta \vdash \nu{x}(\nu{y}(P \| Q\subst{y/z}) \| \serv{x}{z}.Q) :: \Lambda} \end{bussproof} \] As another representative case, we consider Rule $\beta$send. We have $P = \nu{x}(\nu{y}\send{x}{y}.(R \| S) \| \recv{x}{z}.T) \red \nu{x}(S \| \nu{y}(R \| T\subst{y/z})) = Q$. There are multiple ways to type $P$, depending on the cut-rule applied. Here, we give the example of Rule cutRL. The proof of $\Gamma; \Delta \vdash P :: \Lambda$ looks as follows: \[ \begin{bussproof} \bussAssume{\Gamma; \Delta_1 \vdash R :: \Lambda_1, y:A} \bussAssume{\Gamma; \Delta_2 \vdash S :: \Lambda_2, x:B} \bussBin[\ruleLabel{$\tensor$R}]{\Gamma; \Delta_1, \Delta_2 \vdash \nu{y}\send{x}{y}.(R \| S) :: \Lambda_1, \Lambda_2, x:A \tensor B} \bussAssume{\Gamma; \Delta_3, z:A, x:B \vdash T :: \Lambda_3} \bussUn[\ruleLabel{$\tensor$L}]{\Gamma; \Delta_3, x:A \tensor B \vdash \recv{x}{z}.T :: \Lambda_3} \bussBin[\ruleLabel{cutRL}]{\Gamma; \underbrace{\Delta_1, \Delta_2, \Delta_3}_{\Delta} \vdash \underbrace{\nu{x}(\nu{y}\send{x}{y}.(R \| S) \| \recv{x}{z}.T)}_{P} :: \underbrace{\Lambda_1, \Lambda_2, \Lambda_3}_{\Lambda}} \end{bussproof} \] We can then construct a proof of $\Gamma; \Delta \vdash Q :: \Lambda$ using the assumptions in the above proof. 
\[ \begin{bussproof} \bussAssume{\Gamma; \Delta_2 \vdash S :: \Lambda_2, x:B} \bussAssume{\Gamma; \Delta_1 \vdash R :: \Lambda_1, y:A} \bussAssume{\Gamma; \Delta_3, y:A, x:B \vdash T\subst{y/z} :: \Lambda_3} \bussBin[\ruleLabel{cutRL}]{\Gamma; \Delta_1, \Delta_3, x:B \vdash \nu{y}(R \| T\subst{y/z}) :: \Lambda_1, \Lambda_3} \bussBin[\ruleLabel{cutRL}]{\Gamma; \Delta \vdash \underbrace{\nu{x}(S \| \nu{y}(R \| T\subst{y/z}))}_{Q} :: \Lambda} \end{bussproof} \] Our second result is the liveness property of progress, which says that the specific form of composition and restriction in $\pi$ULL following from the cut-rules enables communication, and that processes never get stuck waiting for each other: If $\Gamma; \Delta \vdash P :: \Lambda$ and $P \equiv \nu{x}(Q \| R)$, then there exists $P'$ such that $P \red P'$. By induction on the size of the proof of $\Gamma; \Delta \vdash P :: \Lambda$. By <Ref>, $\Gamma ; \Delta \vdash \nu{x} ( Q \| R ) :: \Lambda$. By assumption, the last inference of the derivation thereof is either a linear cut or an unrestricted cut. (Case linear cut) The last-applied rule is one of Rules cutRL, cutLR, cutRR, and cutLL. W.l.o.g. assume Rule cutRL. By inversion of cutRL, we have a proof $\pi_Q$ of $\Gamma; \Delta_Q \vdash Q :: \Lambda_Q, x:A$ and a proof $\pi_R$ of $\Gamma; \Delta_R, x:A \vdash R :: \Lambda_R$, where $\Delta_Q, \Delta_R = \Delta$ and $\Lambda_Q, \Lambda_R = \Lambda$. If the last-applied rules in $\pi_Q$ and $\pi_R$ are both on $x$, then we apply a $\beta$-reduction depending on $A$. For example, assume $A = B \tensor C$. Then the last-applied rules in $\pi_Q$ and $\pi_R$ are $\tensor$R and $\tensor$L, respectively. Hence, by Rule $\beta$send, $P \equiv \nu{x}(\nu{y}\send{x}{y}.(Q_y \| Q_x) \| \recv{x}{y}.R') \red \nu{x}(Q_x \| \nu{y}(Q_y \| R'))$. Otherwise, w.l.o.g. assume the last-applied rule not on $x$ is in $\pi_Q$. Then, if $Q$ is a cut, by the induction hypothesis, $Q \red Q'$, and hence $P \equiv \nu{x}(Q \| R) \red \nu{x}(Q' \| R)$. Otherwise, $Q$ is prefixed by an action on some free channel $y$ which is not a free channel of $R$. Hence, we apply a $\kappa$-conversion depending on the type of the channel the last-applied rule in $\pi_Q$ works on. For example, if this rule introduces $y:B \tensor C$ on the right and $x \in \fn(Q_y)$, then, by Rule $\kappa\tensor_1$ (up to $\equiv$), $P \equiv \nu{x}(\nu{z}\send{y}{z}.(Q_z \| Q_y) \| R) \red \nu{z}\send{y}{z}.(Q_z \| \nu{x}(Q_y \| R))$; the case $x \in \fn(Q_z)$ is symmetric, using Rule $\kappa\tensor_2$. (Case unrestricted cut) The last-applied rule is one of Rules cut$\bang$R, cut$\bang$L, cut$\whynot$R, and cut$\whynot$L. W.l.o.g. assume Rule cut$\bang$L, so $Q \equiv \serv{x}{y}.Q'$. By inversion of this rule, we have a proof $\pi_{Q'}$ of $\Gamma; \emptyset \vdash Q' :: y:A$ and a proof $\pi_R$ of $\Gamma, x:A; \Delta \vdash R :: \Lambda$. If $x \notin \fn(R)$, then, by Rule $\beta$weaken (up to $\equiv$), $P \equiv \nu{x}(\serv{x}{y}.Q' \| R) \red R$. Otherwise, the next step depends on the last-applied rule in $\pi_R$. If the last-applied rule in $\pi_R$ is on $x$, then it must be copyR or copyL. W.l.o.g. assume the former. Then $R \equiv \nu{y}\send{x}{y}.R'$, so, by Rule $\beta$serv (up to $\equiv$), $P \equiv \nu{x}(\serv{x}{y}.Q' \| \nu{y}\send{x}{y}.R') \red \nu{x}(\serv{x}{y}.Q' \| \nu{y}(Q' \| R'))$. Otherwise, the proof proceeds as in the last part of the case of linear cut. An important corollary of subject reduction and progress is that closed processes are deadlock-free. The corresponding result from linear logic is cut elimination, but this can be misleading: most reductions actually increase the number of cuts. 
Therefore, to prove deadlock-freedom we need a notion other than the size of a proof. Here, we use the cost of a proof (following, e.g., [10, 13]), which is the sum of the costs of its cuts. The cost of a cut is the size of the cut proposition/type, i.e., the number of units and connectives it contains; for example, the cost associated with the type $\1 \tensor \bot \parr \1$ is five. This way, the cost of a proof decreases upon reduction, because the new cuts are on propositions/types that cost less than before. Let us write $\red_\beta^\ast$ to denote the reflexive, transitive closure of $\red_\beta$. We have: If $\emptyset; \emptyset \vdash P :: z:\1$ or $\emptyset; z:\bot \vdash P :: \emptyset$, then $P \red_\beta^\ast \send{z}{}.\0$. By induction on the number of client requests (1) and the cut cost of the derivation of the typing of $P$ (2). In the ultimate base case, there are no client requests and the cut cost is zero, and well-typedness gives us that $P = \send{z}{}.\0$. Otherwise, the derivation of the typing of $P$ must end with a cut-rule. By <Ref> (progress), there is $Q$ s.t. $P \red Q$. Hence, by <Ref> (subject reduction), we have $\emptyset; \emptyset \vdash Q :: z:\1$ (resp. $\emptyset; z:\bot \vdash Q :: \emptyset$). By its type, the free channel $z$ cannot guard any actions on bound channels, and there are no other free channels. Therefore, following the proof of <Ref>, the reduction $P \red Q$ can only be the result of a $\beta$-reduction, i.e., $P \red_\beta Q$. The analysis depends on whether the reduction involves a client/server or not. If the reduction involves a client/server, the number of clients decreases, so the thesis follows from (1). Otherwise, the reduction replaces a cut with cuts on smaller types, so the cut cost of the typing of $Q$ is less than that of $P$, and the thesis follows from (2). §.§ On Duality It may seem that there is extensive redundancy in the system of rules in <Ref>, caused by the two-sidedness of $\pi$ULL's judgments: every connective can be inferred on either side of judgments. For example, Rules $\tensor$R, $\parr$L, and $\lolli$L all type the send of a channel; which rule to use depends on the side of the judgment the involved channels are on. However, there is no actual redundancy, for if we were to omit rules for, e.g., $\parr$ and $\lolli$, it would be impossible to type a send on a previously received channel. This abundance of typing rules in $\pi$ULL can be explained by its full support for duality: for every rule inferring a connective on one side of a judgment, there is a rule for inferring the connective's dual on the other side of a judgment. To make this duality explicit, we define an alternative type system by restricting $\pi$ULL's rules to a specific fragment and adding LU's rules for moving propositions between sides of judgments: The alternative type system, with judgments $\Gamma; \Delta \vpull P :: \Lambda$, is defined on the process calculus of <Ref>. Its rules are the $\ast$-marked rules in <Ref> plus the following rules: \[ \frac{\Gamma; \Delta \vpull P :: \Lambda, x:A}{\Gamma; \Delta, x:\dual{A} \vpull P :: \Lambda}\;\ruleLabel{$\mleft$} \qquad \frac{\Gamma; \Delta, x:A \vpull P :: \Lambda}{\Gamma; \Delta \vpull P :: \Lambda, x:\dual{A}}\;\ruleLabel{$\mright$} \] Fortunately, in the presence of these two rules, a number of other rules become truly redundant: all rules in <Ref> not marked with $\ast$ are admissible or derivable in the alternative system. Dually, Rules $\mleft$ and $\mright$ are admissible in vanilla $\pi$ULL. The following theorem formalizes these facts: * The rules in <Ref> not marked with $\ast$ are admissible or derivable in the alternative system, and * Rules $\mleft$ and $\mright$, as given in <Ref>, are admissible in $\pi$ULL. 
(Item 1) Suppose we are given a proof of $\Gamma; \Delta \vpull P :: \Lambda$. By applying induction on the structure of this proof we show that any applications of non-$\ast$-marked rules can be replaced with applications of $\ast$-marked rules in combination with uses of Rules $\mleft$ or $\mright$. We discuss every possible last-applied rule, omitting cases of $\ast$-marked rules as they follow directly from the induction hypothesis. \begin{align*} & \bullet \ruleLabel{idL} && 1\quad && \Gamma; x:A, y:\dual{A} \vpull \fwd{x}{y} :: \emptyset \tag{assumption} \\ & && 2\quad && \Gamma; x:A \vpull \fwd{x}{y} :: y:A \tag{\ruleLabel{idR}} \\ & && 3\quad && \Gamma; x:A, y:\dual{A} \vpull \fwd{x}{y} :: \emptyset \tag{\ruleLabel{$\mleft$} on 2} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\bot$R} && 1\quad && \Gamma; \Delta \vpull \recv{x}{}.P :: \Lambda, x:\bot \tag{assumption} \\ & && 2\quad && \Gamma; \Delta \vpull P :: \Lambda \tag{inversion on 1} \\ & && 3\quad && \Gamma; \Delta \vpull P :: \Lambda ~\text{with only $\ast$ rules} \tag{IH on 2} \\ & && 4\quad && \Gamma; \Delta, x:\1 \vpull \recv{x}{}.P :: \Lambda \tag{\ruleLabel{$\1$L} on 3} \\ & && 5\quad && \Gamma; \Delta \vpull \recv{x}{}.P :: \Lambda, x:\bot \tag{\ruleLabel{$\mright$} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\bot$L} && 1\quad && \Gamma; x:\bot \vpull \send{x}{}.\0 :: \emptyset \tag{assumption} \\ & && 2\quad && \Gamma; \emptyset \vpull \send{x}{}.\0 :: x:\1 \tag{\ruleLabel{$\1$R}} \\ & && 3\quad && \Gamma; x:\bot \vpull \send{x}{}.\0 :: \emptyset \tag{\ruleLabel{$\mleft$} on 2} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\parr$R} && 1\quad && \Gamma; \Delta \vpull \recv{x}{y}.P :: \Lambda, x:A \parr B \tag{assumption} \\ & && 2\quad && \Gamma; \Delta \vpull P :: \Lambda, y:A, x:B \tag{inversion on 1} \\ & && 3\quad && \Gamma; \Delta \vpull P :: \Lambda, y:A, x:B ~\text{with only $\ast$ rules} \tag{IH on 2} \\ & && 4\quad && \Gamma; \Delta, y:\dual{A}, x:\dual{B} \vpull P :: \Lambda \tag{\ruleLabel{$\mleft$} twice on 3} \\ & && 5\quad && \Gamma; \Delta, x:\dual{A} \tensor \dual{B} \vpull \recv{x}{y}.P :: \Lambda \tag{\ruleLabel{$\tensor$L} on 4} \\ & && 6\quad && \Gamma; \Delta \vpull \recv{x}{y}.P :: \Lambda, x:A \parr B \tag{\ruleLabel{$\mright$} on 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\parr$L} && 1\quad && \Gamma; \Delta, \Delta', x:A \parr B \vpull \nu{y}\send{x}{y}.(P \| Q) :: \Lambda, \Lambda' \tag{assumption} \\ & && 2\quad && \Gamma; \Delta, y:A \vpull P :: \Lambda \\ & && 3\quad && \Gamma; \Delta', x:B \vpull Q :: \Lambda' \tag{inversion on 1} \\ & && 4\quad && \Gamma; \Delta, y:A \vpull P :: \Lambda ~\text{with only $\ast$ rules} \tag{IH on 2} \\ & && 5\quad && \Gamma; \Delta', x:B \vpull Q :: \Lambda' ~\text{with only $\ast$ rules} \tag{IH on 3} \\ & && 6\quad && \Gamma; \Delta \vpull P :: \Lambda, y:\dual{A} \tag{\ruleLabel{$\mright$} on 4} \\ & && 7\quad && \Gamma; \Delta' \vpull Q :: \Lambda', x:\dual{B} \tag{\ruleLabel{$\mright$} on 5} \\ & && 8\quad && \Gamma; \Delta, \Delta' \vpull \nu{y}\send{x}{y}.(P \| Q) :: \Lambda, \Lambda', x:\dual{A} \tensor \dual{B} \tag{\ruleLabel{$\tensor$R} on 6 and 7} \\ & && 9\quad && \Gamma; \Delta, \Delta', x:A \parr B \vpull \nu{y}\send{x}{y}.(P \| Q) :: \Lambda, \Lambda' \tag{\ruleLabel{$\mleft$} on 8} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{copyR} && 1\quad && \Gamma, u:A; \Delta \vpull \nu{x}\send{u}{x}.P :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma, u:A; \Delta \vpull P :: \Lambda, x:\dual{A} \tag{inversion on 1} \\ & && 3\quad && \Gamma, u:A; \Delta 
\vpull P :: \Lambda, x:\dual{A} ~\text{with only $\ast$ rules} \tag{IH on 2} \\ & && 4\quad && \Gamma, u:A; \Delta, x:A \vpull P :: \Lambda \tag{\ruleLabel{$\mleft$} on 3} \\ & && 5\quad && \Gamma, u:A; \Delta \vpull \nu{x}\send{u}{x}.P :: \Lambda \tag{\ruleLabel{copyL} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\whynot$R} && 1\quad && \Gamma; \Delta \vpull P\subst{x/u} :: \Lambda, x:\whynot \dual{A} \tag{assumption} \\ & && 2\quad && \Gamma, u:A; \Delta \vpull P :: \Lambda \tag{inversion on 1} \\ & && 3\quad && \Gamma, u:A; \Delta \vpull P :: \Lambda ~\text{with only $\ast$ rules} \tag{IH on 2} \\ & && 4\quad && \Gamma; \Delta, x:\bang A \vpull P\subst{x/u} :: \Lambda \tag{\ruleLabel{$\bang$L} on 3} \\ & && 5\quad && \Gamma; \Delta \vpull P\subst{x/u} :: \Lambda, x:\whynot \dual{A} \tag{\ruleLabel{$\mright$} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\whynot$L} && 1\quad && \Gamma; x:\whynot A \vpull \serv{x}{y}.P :: \emptyset \tag{assumption} \\ & && 2\quad && \Gamma; y:A \vpull P :: \emptyset \tag{inversion on 1} \\ & && 3\quad && \Gamma; y:A \vpull P :: \emptyset ~\text{with only $\ast$ rules} \tag{IH on 2} \\ & && 4\quad && \Gamma; \emptyset \vpull P :: y:\dual{A} \tag{\ruleLabel{$\mright$} on 3} \\ & && 5 \quad && \Gamma; \emptyset \vpull \serv{x}{y}.P :: x:\bang \dual{A} \tag{\ruleLabel{$\bang$R} on 4} \\ & && 6 \quad && \Gamma; x:\whynot A \vpull \serv{x}{y}.P :: \emptyset \tag{\ruleLabel{$\mleft$} on 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cutRR} && 1\quad && \Gamma ; \Delta , \Delta' \vpull \nu{x} ( P \| Q ) :: \Lambda , \Lambda' \tag{assumption} \\ & && 2\quad && \Gamma ; \Delta \vpull P :: \Lambda , x:A \\ & && 3\quad && \Gamma ; \Delta' \vpull Q :: \Lambda' , x:\dual{A} \tag{inversion on 1} \\ & && 4\quad && \Gamma ; \Delta \vpull P :: \Lambda , x:A ~\text{with only $\ast$ rules} \tag{IH on 2} \\ & && 5\quad && \Gamma ; \Delta' \vpull Q :: \Lambda' , x:\dual{A} ~\text{with only $\ast$ rules} \tag{IH on 3} \\ & && 6\quad && \Gamma ; \Delta' , x:A \vpull Q :: \Lambda' \tag{\ruleLabel{$\mleft$} on 5} \\ & && 7\quad && \Gamma; \Delta, \Delta' \vpull \nu{x} ( P \| Q ) :: \Lambda, \Lambda' \tag{\ruleLabel{cutRL} on 4 and 6} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cutLL} && 1\quad && \Gamma; \Delta, \Delta' \vpull \nu{x}(P \| Q) :: \Lambda, \Lambda' \tag{assumption} \\ & && 2\quad && \Gamma; \Delta, x:A \vpull P :: \Lambda \\ & && 3\quad && \Gamma; \Delta', x:\dual{A} \vpull Q :: \Lambda' \tag{inversion on 1} \\ & && 4\quad && \Gamma; \Delta, x:A \vpull P :: \Lambda ~\text{with only $\ast$ rules} \tag{IH on 2} \\ & && 5\quad && \Gamma; \Delta', x:\dual{A} \vpull Q :: \Lambda' ~\text{with only $\ast$ rules} \tag{IH on 3} \\ & && 6\quad && \Gamma; \Delta \vpull P :: \Lambda, x:\dual{A} \tag{\ruleLabel{$\mright$} on 4} \\ & && 7\quad && \Gamma; \Delta, \Delta' \vpull \nu{x}(P \| Q) :: \Lambda, \Lambda' \tag{\ruleLabel{cutRL} on 6 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cut$\whynot$R} && 1\quad && \Gamma; \Delta \vpull \nu{u} ( P \| \serv{u}{x}.Q ) :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma, u:A; \Delta \vpull P :: \Lambda \\ & && 3\quad && \Gamma; x:\dual{A} \vpull Q :: \emptyset \tag{inversion on 1} \\ & && 4\quad && \Gamma, u:A; \Delta \vpull P :: \Lambda ~\text{with only $\ast$ rules} \tag{IH on 2} \\ & && 5\quad && \Gamma; x:\dual{A} \vpull Q :: \emptyset ~\text{with only $\ast$ rules} \tag{IH on 3} \\ & && 6\quad && \Gamma; \emptyset \vpull Q :: x:A \tag{\ruleLabel{$\mright$} on 5} \\ & && 7\quad && \Gamma; \Delta \vpull 
\nu{u} ( P \| \serv{u}{x}.Q ) :: \Lambda \tag{\ruleLabel{cut$\bang$R} on 4 and 6} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cut$\whynot$L} && 1\quad && \Gamma; \Delta \vpull \nu{u}(\serv{u}{x}.P \| Q) :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma; x:\dual{A} \vpull P :: \emptyset \\ & && 3\quad && \Gamma, u:A; \Delta \vpull Q :: \Lambda \tag{inversion on 1} \\ & && 4\quad && \Gamma; x:\dual{A} \vpull P :: \emptyset ~\text{with only $\ast$ rules} \tag{IH on 2} \\ & && 5\quad && \Gamma, u:A; \Delta \vpull Q :: \Lambda ~\text{with only $\ast$ rules} \tag{IH on 3} \\ & && 6\quad && \Gamma; \emptyset \vpull P :: x:A \tag{\ruleLabel{$\mright$} on 4} \\ & && 7\quad && \Gamma; \Delta \vpull \nu{u}(\serv{u}{x}.P \| Q) :: \Lambda \tag{\ruleLabel{cut$\bang$L} on 6 and 5} \end{align*} (Item 2) Suppose given a proof of $\Gamma; \Delta \vdash P :: \Lambda$, possibly with applications of Rules $\mleft$ and $\mright$. By applying induction on the structure of this proof we show that it can be transformed to not contain any applications of $\mleft$ and $\mright$. We discuss every possible last-applied rule. However, all cases except $\mleft$ and $\mright$ follow directly from the induction hypothesis. Therefore, we only detail the case of $\mleft$—the case of $\mright$ is analogous. Since $\mleft$ is the last-applied rule, we know the assumption is of the form $\Gamma; \Delta, x:\dual{A} \vdash P :: \Lambda$. By inversion, we have $\Gamma; \Delta \vdash P :: \Lambda, x:A$. The idea is to move the application of $\mleft$ up the proof tree, applied to a subtype of $A$. We apply the induction hypothesis to find a proof of $\Gamma; \Delta \vdash P :: \Lambda, x:A$ without applications of $\mleft$ and $\mright$. Typing rules leave all channels/types untouched except the ones they work on. Therefore, we can traverse up the proof tree—remembering which steps were taken—until we encounter the rule that introduces $x:A$. Note that these steps do not include applications of $\mleft$ and $\mright$. The consequence of this rule looks like $\Gamma'; \Delta' \vdash P' :: \Lambda', x:A$, for some $\Gamma'$, $\Delta'$, $P'$ and $\Lambda'$. Now, we apply induction on the size of $A$ (with induction hypothesis denoted 2) to prove $\Gamma'; \Delta', x:\dual{A} \vdash P' :: \Lambda'$. 
We discuss every possible last-applied rule that introduces $x:A$: \begin{align*} & \bullet \ruleLabel{idR} && 1\quad && \Gamma'; y:A \vdash \fwd{y}{x} :: x:A \tag{assumption} \\ & && 2\quad && \Gamma'; y:A, x:\dual{A} \vdash \fwd{y}{x} :: \emptyset \tag{\ruleLabel{idL}} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\1$R} && 1\quad && \Gamma'; \emptyset \vdash \send{x}{}.\0 :: x:\1 \tag{assumption} \\ & && 2\quad && \Gamma'; x:\bot \vdash \send{x}{}.\0 :: \emptyset \tag{\ruleLabel{$\bot$L}} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\bot$R} && 1\quad && \Gamma'; \Delta' \vdash \recv{x}{}.P' :: \Lambda', x:\bot ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{assumption} \\ & && 2\quad && \Gamma'; \Delta' \vdash P' :: \Lambda' \tag{inversion on 1} \\ & && 3\quad && \Gamma'; \Delta', x:\1 \vdash \recv{x}{}.P' :: \Lambda' \tag{\ruleLabel{$\1$L} on 2} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\tensor$R} && 1\quad && \Gamma'; \Delta', \Delta'' \vdash \nu{y}\send{x}{y}.(P' \| Q') :: \Lambda', \Lambda'', x:B \tensor C \\ & && && ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{assumption} \\ & && 2\quad && \Gamma'; \Delta' \vdash P' :: \Lambda', y:B \\ & && 3\quad && \Gamma'; \Delta'' \vdash Q' :: \Lambda'', x:C \tag{inversion on 1} \\ & && 4\quad && \Gamma'; \Delta', y:\dual{B} \vdash P' :: \Lambda' \tag{\ruleLabel{$\mleft$} on 2} \\ & && 5\quad && \Gamma'; \Delta'', x:\dual{C} \vdash Q' :: \Lambda'' \tag{\ruleLabel{$\mleft$} on 3} \\ & && 6\quad && \Gamma'; \Delta', y:\dual{B} \vdash P' :: \Lambda' ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{\ih2 on 4} \\ & && 7\quad && \Gamma'; \Delta'', x:\dual{C} \vdash Q' :: \Lambda'' ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{\ih2 on 5} \\ & && 8\quad && \Gamma'; \Delta', \Delta'', x:\dual{B} \parr \dual{C} \vdash \nu{y}\send{x}{y}.(P' \| Q') :: \Lambda', \Lambda'' \tag{\ruleLabel{$\parr$L} on 8 and 9} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\lolli$R} && 1\quad && \Gamma'; \Delta' \vdash \recv{x}{y}.P' :: \Lambda', x:B \lolli C ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{assumption} \\ & && 2\quad && \Gamma'; \Delta', y:B \vdash P' :: \Lambda', x:C \tag{inversion on 1} \\ & && 3\quad && \Gamma'; \Delta', y:B, x:\dual{C} \vdash P' :: \Lambda' \tag{\ruleLabel{$\mleft$} on 2} \\ & && 4\quad && \Gamma'; \Delta', y:B, x:\dual{C} \vdash P' :: \Lambda' ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{\ih2 on 3} \\ & && 5\quad && \Gamma'; \Delta', x:B \tensor \dual{C} \vdash \recv{x}{y}.P' :; \Lambda' \tag{\ruleLabel{$\tensor$L} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\oplus$R} && 1\quad && \Gamma'; \Delta' \vdash x \triangleleft j.P' :: \Lambda', x:\oplus\{i:A_i\}_{i \in I} ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{assumption} \\ & && 2\quad && \Gamma'; \Delta' \vdash P' :: \Lambda', x:A_j \\ & && 3\quad && j \in I \tag{inversion on 1} \\ & && 4\quad && \Gamma'; \Delta', x:\dual{A_j} \vdash P' :: \Lambda' \tag{\ruleLabel{$\mleft$} on 3} \\ & && 5\quad && \Gamma'; \Delta', x:\dual{A_j} \vdash P' :: \Lambda' \tag{\ih2 on 4} \\ & && 6\quad && \Gamma'; \Delta', x:\&\{i:\dual{A_i}\}_{i \in I} \vdash x \triangleleft j.P' :: \Lambda' \tag{\ruleLabel{$\&$L} on 5 and 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\&$R} && 1\quad && \Gamma'; \Delta' \vdash x \triangleright \{i:P'_i\}_{i \in I} :: \Lambda', x:\&\{i:A_i\}_{i \in I} ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{assumption} \\ & && 
2\quad && \forall i \in I.~ \Gamma'; \Delta' \vdash P'_i :: \Lambda', x:A_i \tag{inversion on 1} \\ & && 3\quad && \forall i \in I.~ \Gamma'; \Delta', x:\dual{A_i} \vdash P'_i :: \Lambda' \tag{\ruleLabel{$\mleft$} on 2} \\ & && 4\quad && \forall i \in I.~ \Gamma'; \Delta', x:\dual{A_i} \vdash P'_i :: \Lambda' ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{\ih2 on 3} \\ & && 5\quad && \Gamma'; \Delta', x:\oplus\{i:\dual{A_i}\}_{i \in I} \vdash x \triangleright \{i:P'_i\}_{i \in I} :: \Lambda' \tag{\ruleLabel{$\oplus$L} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\bang$R} && 1\quad && \Gamma'; \emptyset \vdash \serv{x}{y}.P' :: x:\bang B ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{assumption} \\ & && 2\quad && \Gamma'; \emptyset \vdash P' :: y:B \tag{inversion on 1} \\ & && 3\quad && \Gamma'; y:\dual{B} \vdash P' :: \emptyset \tag{\ruleLabel{$\mleft$} on 2} \\ & && 4\quad && \Gamma'; y:\dual{B} \vdash P' :: \emptyset ~\text{without \ruleLabel{$\mleft$}/\ruleLabel{$\mright$}} \tag{\ih2 on 3} \\ & && 5\quad && \Gamma'; x:\whynot \dual{B} \vdash \serv{x}{y}.P' :: \emptyset \tag{\ruleLabel{$\whynot$L} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\whynot$R} && 1\quad && \Gamma'; \Delta' \vdash P'\subst{x/u} :: \Lambda', x:\whynot B \tag{assumption} \\ & && 2\quad && \Gamma', u:\dual{B}; \Delta' \vdash P' :: \Lambda' \tag{inversion on 1} \\ & && 3\quad && \Gamma'; \Delta', x:\bang \dual{B} \vdash P'\subst{x/u} :: \Lambda' \tag{\ruleLabel{$\bang$L} on 2} \end{align*} Finally, we recall the steps we have traversed up the tree and remember that they do not include applications of $\mleft$ and $\mright$. We re-apply them on $\Gamma'; \Delta', x:\dual{A} \vdash P' :: \Lambda'$, without affecting $x:\dual{A}$. Instead, they only affect $\Gamma'$, $\Delta'$ and $\Lambda'$ to give us a proof of $\Gamma; \Delta, x:\dual{A} \vdash P :: \Lambda$ without applications of $\mleft$ and $\mright$. This concludes the proof of <Ref>. § COMPARING INTUITIONISTIC AND CLASSICAL INTERPRETATIONS In this section, we rigorously compare the class of $\pi$ULL-typable processes to the classes of processes typable in session type interpretations of linear logic, in classical [11, 52] and intuitionistic [6] settings. $\pi$ULL is an independent yardstick for this comparison, because it is derived from ULL, which subsumes both linear logics by design. We have discussed in <Ref> several design choices we made when defining ULL and $\pi$ULL. Besides differences stemming from the dichotomy between classical and intuitionistic linear logic, logically-motivated session type systems also present features induced by certain design choices. For a fair comparison, we want to make sure that the differences come only from typing. This means that we need to make the same design choices for both interpretations: we require explicit closing, a separate unrestricted context, and identity as forwarding. §.§ Session Type Systems Derived from Intuitionistic and Classical Linear Logic 
We want to derive session type systems from intuitionistic and classical linear logic for the same process syntax as $\pi$ULL uses (cf. <Ref>). <Ref> gives the inference rules for the type system derived from intuitionistic linear logic, denoted $\pi$ILL. This system is based on the presentation by Caires, Pfenning, and Toninho in [10]; the judgment is denoted as follows: \[ \Gamma; \Delta \vdill P :: z:C \] \begin{gather*} \frac{}{\Gamma; x:A \vdill \fwd{x}{y} :: y:A}\;\ruleLabel{id} \qquad \frac{}{\Gamma; \emptyset \vdill \send{x}{}.\0 :: x:\1}\;\ruleLabel{$\1$R} \qquad \frac{\Gamma; \Delta \vdill P :: z:C}{\Gamma; \Delta, x:\1 \vdill \recv{x}{}.P :: z:C}\;\ruleLabel{$\1$L} \\[4pt] \frac{\Gamma; \Delta \vdill P :: y:A \quad \Gamma; \Delta' \vdill Q :: x:B}{\Gamma; \Delta, \Delta' \vdill \nu{y}\send{x}{y}.(P \| Q) :: x:A \tensor B}\;\ruleLabel{$\tensor$R} \qquad \frac{\Gamma; \Delta, y:A, x:B \vdill P :: z:C}{\Gamma; \Delta, x:A \tensor B \vdill \recv{x}{y}.P :: z:C}\;\ruleLabel{$\tensor$L} \\[4pt] \frac{\Gamma; \Delta, y:A \vdill P :: x:B}{\Gamma; \Delta \vdill \recv{x}{y}.P :: x:A \lolli B}\;\ruleLabel{$\lolli$R} \qquad \frac{\Gamma; \Delta \vdill P :: y:A \quad \Gamma; \Delta', x:B \vdill Q :: z:C}{\Gamma; \Delta, \Delta', x:A \lolli B \vdill \nu{y}\send{x}{y}.(P \| Q) :: z:C}\;\ruleLabel{$\lolli$L} \\[4pt] \frac{\Gamma; \Delta \vdill P :: x:A_j \quad j \in I}{\Gamma; \Delta \vdill x \triangleleft j.P :: x:\oplus\{i:A_i\}_{i \in I}}\;\ruleLabel{$\oplus$R} \qquad \frac{\forall i \in I.\ \Gamma; \Delta, x:A_i \vdill P_i :: z:C}{\Gamma; \Delta, x:\oplus\{i:A_i\}_{i \in I} \vdill x \triangleright \{i:P_i\}_{i \in I} :: z:C}\;\ruleLabel{$\oplus$L} \\[4pt] \frac{\forall i \in I.\ \Gamma; \Delta \vdill P_i :: x:A_i}{\Gamma; \Delta \vdill x \triangleright \{i:P_i\}_{i \in I} :: x:\&\{i:A_i\}_{i \in I}}\;\ruleLabel{$\&$R} \qquad \frac{\Gamma; \Delta, x:A_j \vdill P :: z:C \quad j \in I}{\Gamma; \Delta, x:\&\{i:A_i\}_{i \in I} \vdill x \triangleleft j.P :: z:C}\;\ruleLabel{$\&$L} \\[4pt] \frac{\Gamma, u:A; \Delta, x:A \vdill P :: z:C}{\Gamma, u:A; \Delta \vdill \nu{x}\send{u}{x}.P :: z:C}\;\ruleLabel{copy} \qquad \frac{\Gamma; \emptyset \vdill P :: y:A}{\Gamma; \emptyset \vdill \serv{x}{y}.P :: x:\bang A}\;\ruleLabel{$\bang$R} \qquad \frac{\Gamma, u:A; \Delta \vdill P :: z:C}{\Gamma; \Delta, x:\bang A \vdill P\subst{x/u} :: z:C}\;\ruleLabel{$\bang$L} \\[4pt] \frac{\Gamma; \Delta \vdill P :: x:A \quad \Gamma; \Delta', x:A \vdill Q :: z:C}{\Gamma; \Delta, \Delta' \vdill \nu{x}(P \| Q) :: z:C}\;\ruleLabel{cutR} \qquad \frac{\Gamma; \Delta, x:A \vdill P :: z:C \quad \Gamma; \Delta' \vdill Q :: x:A}{\Gamma; \Delta, \Delta' \vdill \nu{x}(P \| Q) :: z:C}\;\ruleLabel{cutL} \\[4pt] \frac{\Gamma, u:A; \Delta \vdill P :: z:C \quad \Gamma; \emptyset \vdill Q :: x:A}{\Gamma; \Delta \vdill \nu{u}(P \| \serv{u}{x}.Q) :: z:C}\;\ruleLabel{cut$\bang$R} \qquad \frac{\Gamma; \emptyset \vdill P :: x:A \quad \Gamma, u:A; \Delta \vdill Q :: z:C}{\Gamma; \Delta \vdill \nu{u}(\serv{u}{x}.P \| Q) :: z:C}\;\ruleLabel{cut$\bang$L} \end{gather*} The $\pi$ILL type system. With respect to the intuitionistic interpretation introduced in [6, 11], $\pi$ILL features a non-silent interpretation of $\1$ and $\bot$, based on explicit closure of sessions (cf. Rules $\1$R and $\1$L). We adopt this interpretation because, as explained in [10], it leads to a Curry-Howard correspondence that is tighter than correspondences with silent interpretations (such as those in [6, 11]). A more superficial difference is that $\pi$ILL follows standard presentations of session type systems by supporting $n$-ary labeled choices; in contrast, the systems in [6, 11] support binary labeled choices. We also include symmetric variants of the cut-rules, as they are necessary for type preservation under Rule cutSymm of structural congruence. <Ref> gives the inference rules for the type system derived from classical linear logic. It is based on a combination of features from the systems of Caires, Pfenning, and Toninho [11] and Wadler [52]; in the following, it is denoted $\pi$CLL. The corresponding judgment is as follows: \[ P \vdcll \Gamma; \Delta \] \begin{gather*} \frac{}{\fwd{x}{y} \vdcll \Gamma; x:\dual{A}, y:A}\;\ruleLabel{id} \qquad \frac{}{\send{x}{}.\0 \vdcll \Gamma; x:\1}\;\ruleLabel{$\1$} \qquad \frac{P \vdcll \Gamma; \Delta}{\recv{x}{}.P \vdcll \Gamma; \Delta, x:\bot}\;\ruleLabel{$\bot$} \\[4pt] \frac{P \vdcll \Gamma; \Delta, y:A \quad Q \vdcll \Gamma; \Delta', x:B}{\nu{y}\send{x}{y}.(P \| Q) \vdcll \Gamma; \Delta, \Delta', x:A \tensor B}\;\ruleLabel{$\tensor$} \qquad \frac{P \vdcll \Gamma; \Delta, y:A, x:B}{\recv{x}{y}.P \vdcll \Gamma; \Delta, x:A \parr B}\;\ruleLabel{$\parr$} \\[4pt] \frac{P \vdcll \Gamma; \Delta, x:A_j \quad j \in I}{x \triangleleft j.P \vdcll \Gamma; \Delta, x:\oplus\{i:A_i\}_{i \in I}}\;\ruleLabel{$\oplus$} \qquad \frac{\forall i \in I.\ P_i \vdcll \Gamma; \Delta, x:A_i}{x \triangleright \{i:P_i\}_{i \in I} \vdcll \Gamma; \Delta, x:\&\{i:A_i\}_{i \in I}}\;\ruleLabel{$\&$} \\[4pt] \frac{P \vdcll \Gamma, u:A; \Delta, y:A}{\nu{y}\send{u}{y}.P \vdcll \Gamma, u:A; \Delta}\;\ruleLabel{copy} \qquad \frac{P \vdcll \Gamma, u:A; \Delta}{P\subst{x/u} \vdcll \Gamma; \Delta, x:\whynot A}\;\ruleLabel{$\whynot$} \qquad \frac{P \vdcll \Gamma; y:A}{\serv{x}{y}.P \vdcll \Gamma; x:\bang A}\;\ruleLabel{$\bang$} \\[4pt] \frac{P \vdcll \Gamma; \Delta, x:A \quad Q \vdcll \Gamma; \Delta', x:\dual{A}}{\nu{x}(P \| Q) \vdcll \Gamma; \Delta, \Delta'}\;\ruleLabel{cut} \\[4pt] \frac{P \vdcll \Gamma, u:\dual{A}; \Delta \quad Q \vdcll \Gamma; x:A}{\nu{u}(P \| \serv{u}{x}.Q) \vdcll \Gamma; \Delta}\;\ruleLabel{cut$\whynot$R} \qquad \frac{P \vdcll \Gamma; x:A \quad Q \vdcll \Gamma, u:\dual{A}; \Delta}{\nu{u}(\serv{u}{x}.P \| Q) \vdcll \Gamma; \Delta}\;\ruleLabel{cut$\whynot$L} \end{gather*} The $\pi$CLL type system. <Ref> summarizes the differences in the design choices between the interpretations of [11], of [52], and of this paper; these differences are merely superficial: * As we have seen, an explicit closing of sessions (as in [52] and this paper) concerns a non-silent interpretation of the atomic propositions $\1$ and $\bot$. In contrast, [11] realizes an implicit (silent) closing of sessions. * Sequents with a separate unrestricted context (as in [11] and this paper) are of the form $P \vdash \Gamma; \Delta$, which can also be written as $P \vdash \Delta, \Gamma'$ where $\Gamma'$ contains only types of the form $!A$. * The identity axiom can be interpreted as the forwarding process, which enables an account of behavioral polymorphism (i.e., universal and existential quantification over propositions/session types) [52, 8]. As already mentioned, forwarding is not a typical process construct in session $\pi$-calculi. Note that, since typing judgments are one-sided, there is no need for symmetric cut-rules. \begin{tabular}{lccc} & Explicit closing & Separate context & Identity as forwarding \\ [11] & No & Yes & No \\ [52] & Yes & No & Yes \\ (this paper) & Yes & Yes & Yes \end{tabular} Feature comparison of three session type interpretations of classical linear logic. 
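As a concrete point of contrast between the two derived systems (our own example, not from the original presentations), consider the process $\recv{x}{y}.\recv{x}{}.\send{y}{}.\0$, which performs an empty send on the previously received channel $y$. In $\pi$CLL it is typable:
\begin{align*}
& 1\quad && \send{y}{}.\0 \vdcll \emptyset; y:\1 \tag{\ruleLabel{$\1$}} \\
& 2\quad && \recv{x}{}.\send{y}{}.\0 \vdcll \emptyset; y:\1, x:\bot \tag{\ruleLabel{$\bot$} on 1} \\
& 3\quad && \recv{x}{y}.\recv{x}{}.\send{y}{}.\0 \vdcll \emptyset; x:\1 \parr \bot \tag{\ruleLabel{$\parr$} on 2}
\end{align*}
In $\pi$ILL, by contrast, Rules $\tensor$L and $\lolli$R place a received name in the left linear context, while the empty send $\send{y}{}.\0$ is typable only by Rule $\1$R, which requires $y:\1$ on the right; the process is thus not $\pi$ILL-typable. This is an instance of the phenomenon of empty sends on previously received channels mentioned in <Ref>.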
§.§ Formal Comparison Now that we have presented all three systems, we start our comparison by contrasting the shape of their typing judgments: \[ \Gamma; \Delta \vdash P :: \Lambda \qquad\qquad \Gamma; \Delta \vdill P :: x:A \qquad\qquad P \vdcll \Gamma; \Delta \] In $\pi$ULL and $\pi$ILL judgments are similar, but in $\pi$ILL they have exactly one channel/type pair on the right. We will see that the difference between $\pi$ULL and $\pi$ILL can be characterized by this fact alone. Judgments in $\pi$CLL are different from those in $\pi$ULL and $\pi$ILL: they have only one linear context, and both the linear and the unrestricted contexts are on the right. As we will see, our results reflect this with a duality relation between the contexts of $\pi$ULL and $\pi$CLL. Our formal results rely on classes of processes typable in the three typing systems: Let $\mathbb{P}$ denote the set of all processes induced by <Ref>. \begin{align*} \U &= \{P \in \mathbb{P} \mid \exists \Gamma, \Delta, \Lambda ~\text{such that}~ \Gamma; \Delta \vdash P :: \Lambda\}, \\ \C &= \{P \in \mathbb{P} \mid \exists \Gamma, \Delta ~\text{such that}~ P \vdcll \Gamma; \Delta\}, \\ \I &= \{P \in \mathbb{P} \mid \exists \Gamma, \Delta, x, A ~\text{such that}~ \Gamma; \Delta \vdill P :: x:A\}. \end{align*} Our first result is that $\U = \C$, i.e., $\pi$ULL is merely a two-sided representation of $\pi$CLL. $\U = \C$. We briefly discuss how we prove this result. On the one hand (from left to right), if $P \in \U$, there is a proof of $\Gamma; \Delta \vdash P :: \Lambda$. By backtracking on this proof, we can use rules in $\pi$CLL analogous to the $\pi$ULL-rules used to generate an equivalent proof of $P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda$, thus showing $P \in \C$. Note how the single-sidedness of $\pi$CLL judgments requires us to move $\Gamma$ and $\Delta$ to the right-hand side using duality. On the other hand (from right to left), if $P \in \C$, there is a proof of $P \vdcll \Gamma; \Delta$. Again, by backtracking on this proof, we can use rules similar to those of $\pi$ULL in combination with Rules $\mleft$ and $\mright$ from <Ref> to prove $\dual{(\Gamma)}; \emptyset \vpull P :: \Delta$ in the alternative system. Going through the alternative system simplifies this process, since using $\mleft$ and $\mright$ enables us to guarantee that all of $\Delta$ ends up on the right-hand side of the typing judgment instead of being divided unpredictably between left and right. Note that we do have to use duality to move $\Gamma$ to the left. Since $\mleft$ and $\mright$ are admissible in $\pi$ULL (cf. <Ref>) and all the other rules of the alternative system are also present in $\pi$ULL, this means we also have a proof of $\dual{(\Gamma)}; \emptyset \vdash P :: \Delta$ in $\pi$ULL. Hence, $P \in \U$. ($\,\U \subseteq \C$) Take any $P \in \U$. Then, by <Ref>, there are $\Gamma$, $\Delta$, $\Lambda$ s.t. $\Gamma; \Delta \vdash P :: \Lambda$. By showing that this implies $P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda$, we have $P \in \C$. We show this by induction on the structure of the proof of $\Gamma; \Delta \vdash P :: \Lambda$. 
\begin{align*} & \bullet \ruleLabel{$\id$R} && 1\quad && \Gamma; x:A \vdash \fwd{x}{y} :: y:A \tag{assumption} \\ & && 2\quad && \fwd{x}{y} \vdcll \dual{(\Gamma)}; x:\dual{A}, y:A \tag{\ruleLabel{id}} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{idL} && 1\quad && \Gamma; x:A, y:\dual{A} \vdash \fwd{x}{y} :: \emptyset \tag{assumption} \\ & && 2\quad && \fwd{x}{y} \vdcll \dual{(\Gamma)}; x:\dual{A}, y:A \tag{\ruleLabel{id}} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\1$R} && 1\quad && \Gamma; \emptyset \vdash \send{x}{}.\0 :: x:\1 \tag{assumption} \\ & && 2\quad && \send{x}{}.\0 \vdcll \dual{(\Gamma)}; x:\1 \tag{\ruleLabel{$\1$}} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\1$L} && 1\quad && \Gamma; \Delta, x:\1 \vdash \recv{x}{}.P :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma; \Delta \vdash P :: \Lambda \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda \tag{IH on 2} \\ & && 4\quad && \recv{x}{}.P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\bot \tag{\ruleLabel{$\bot$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\bot$R} && 1\quad && \Gamma; \Delta \vdash \recv{x}{}.P :: \Lambda, x:\bot \tag{assumption} \\ & && 2\quad && \Gamma; \Delta \vdash P :: \Lambda \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda \tag{IH on 2} \\ & && 4\quad && \recv{x}{}.P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\bot \tag{\ruleLabel{$\bot$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\bot$L} && 1\quad && \Gamma; x:\bot \vdash \send{x}{}.\0 :: \emptyset \tag{assumption} \\ & && 2\quad && \send{x}{}.\0 \vdcll \dual{(\Gamma)}; x:\1 \tag{\ruleLabel{$\1$}} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\tensor$R} && 1\quad && \Gamma; \Delta, \Delta' \vdash \nu{y}\send{x}{y}.(P \| Q) :: \Lambda, \Lambda', x:A \tensor B \tag{assumption} \\ & && 2\quad && \Gamma; \Delta \vdash P :: \Lambda, y:A \\ & && 3\quad && \Gamma; \Delta' \vdash Q :: \Lambda', x:B \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, y:A \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}; \dual{(\Delta')}, \Lambda', x:B \tag{IH on 3} \\ & && 6\quad && \nu{y}\send{x}{y}.(P \| Q) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \dual{(\Delta')}, \Lambda, \Lambda', x:A \tensor B \tag{\ruleLabel{$\tensor$} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\tensor$L} && 1\quad && \Gamma; \Delta, x:A \tensor B \vdash \recv{x}{y}.P :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma; \Delta, y:A, x:B \vdash P :: \Lambda \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, y:\dual{A}, x:\dual{B} \tag{IH on 2} \\ & && 4\quad && \recv{x}{y}.P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\dual{A} \parr \dual{B} \tag{\ruleLabel{$\parr$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\parr$R} && 1\quad && \Gamma; \Delta \vdash \recv{x}{y}.P :: \Lambda, x:A \parr B \tag{assumption} \\ & && 2\quad && \Gamma; \Delta \vdash P :: \Lambda, y:A, x:B \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, y:A, x:B \tag{IH on 2} \\ & && 4\quad && \recv{x}{y}.P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:A \parr B \tag{\ruleLabel{$\parr$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\parr$L} && 1\quad && \Gamma; \Delta, \Delta', x:A \parr B \vdash \nu{y}\send{x}{y}.(P \| Q) :: \Lambda, \Lambda' \tag{assumption} \\ & && 2\quad && \Gamma; \Delta, y:A \vdash P :: \Lambda \\ & && 3\quad && \Gamma; 
\Delta', x:B \vdash Q :: \Lambda' \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, y:\dual{A} \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}; \dual{(\Delta')}, \Lambda', x:\dual{B} \tag{IH on 3} \\ & && 6\quad && \nu{y}\send{x}{y}.(P \| Q) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \dual{(\Delta')}, \Lambda, \Lambda', x:\dual{A} \tensor \dual{B} \tag{\ruleLabel{$\tensor$} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\lolli$R} && 1\quad && \Gamma; \Delta \vdash \recv{x}{y}.P :: \Lambda, x:A \lolli B \tag{assumption} \\ & && 2\quad && \Gamma; \Delta, y:A \vdash P :: \Lambda, x:B \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, y:\dual{A}, x:B \tag{IH on 2} \\ & && 4\quad && \recv{x}{y}.P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\dual{A} \parr B \tag{\ruleLabel{$\parr$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\lolli$L} && 1\quad && \Gamma; \Delta, \Delta', x:A \lolli B \vdash \nu{y}\send{x}{y}.(P \| Q) :: \Lambda, \Lambda' \tag{assumption} \\ & && 2\quad && \Gamma; \Delta \vdash P :: \Lambda, y:A \\ & && 3\quad && \Gamma; \Delta', x:B \vdash Q :: \Lambda' \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, y:A \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}; \dual{(\Delta')}, \Lambda', x:\dual{B} \tag{IH on 3} \\ & && 6\quad && \nu{y}\send{x}{y}.(P \| Q) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \dual{(\Delta')}, \Lambda, \Lambda', x:A \tensor \dual{B} \tag{\ruleLabel{$\tensor$} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\&$R} && 1\quad && \Gamma; \Delta \vdash x \triangleright \{i:P_i\}_{i \in I} :: \Lambda, x:\&\{i:A_i\}_{i \in I} \tag{assumption} \\ & && 2\quad && \forall i \in I.~ \Gamma; \Delta \vdash P_i :: \Lambda, x:A_i \tag{inversion on 1} \\ & && 3\quad && \forall i \in I.~ P_i \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:A_i \tag{IH on 2} \\ & && 4\quad && x \triangleright \{i:P_i\}_{i \in I} \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\&\{i:A_i\}_{i \in I} \tag{\ruleLabel{$\&$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\&$L} && 1\quad && \Gamma; \Delta, x:\&\{i:A_i\} \vdash x \triangleleft j.P :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma; \Delta, x:A_j \vdash P :: \Lambda \\ & && 3\quad && j \in I \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\dual{A_j} \tag{IH on 2} \\ & && 5\quad && x \triangleright j.P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\oplus\{i:\dual{A_i}\}_{i \in I} \tag{\ruleLabel{$\oplus$} on 4 and 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\oplus$R} && 1\quad && \Gamma; \Delta \vdash x \triangleleft j.P :: \Lambda, x:\oplus\{i:A_i\}_{i \in I} \tag{assumption} \\ & && 2\quad && \Gamma; \Delta \vdash P :: \Lambda, x:A_j \\ & && 3\quad && j \in I \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:A_j \tag{IH on 2} \\ & && 5\quad && x \triangleleft j.P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\oplus\{i:A_i\}_{i \in I} \tag{\ruleLabel{$\oplus$} on 4 and 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\oplus$L} && 1\quad && \Gamma; \Delta, x:\oplus\{i:A_i\}_{i \in I} \vdash x \triangleright \{i:P_i\}_{i \in I} :: \Lambda \tag{assumption} \\ & && 2\quad && \forall i \in I.~ \Gamma; \Delta, x:A_i \vdash P_i :: \Lambda \tag{inversion on 1} \\ & && 3\quad && \forall i \in I.~ P_i \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, 
x:\dual{A_i} \tag{IH on 2} \\ & && 4\quad && x \triangleright \{i:P_i\}_{i \in I} \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\&\{i:\dual{A_i}\}_{i \in I} \tag{\ruleLabel{$\&$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{copyR} && 1\quad && \Gamma, u:A; \Delta \vdash \nu{x}\send{u}{x}.P :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma, u:A; \Delta \vdash P :: \Lambda, x:\dual{A} \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}, u:\dual{A}; \dual{(\Delta)}, \Lambda, x:\dual{A} \tag{IH on 2} \\ & && 4\quad && \nu{x}\send{u}{x}.P \vdcll \dual{(\Gamma)}, u:\dual{A}; \dual{(\Delta)}, \Lambda \tag{\ruleLabel{copy} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{copyL} && 1\quad && \Gamma, u:A; \Delta \vdash \nu{x}\send{u}{x}.P :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma, u:A; \Delta, x:A \vdash P :: \Lambda \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}, u:\dual{A}; \dual{(\Delta)}, \Lambda, x:\dual{A} \tag{IH on 2} \\ & && 4\quad && \nu{x}\send{u}{x}.P \vdcll \dual{(\Gamma)}, u:\dual{A}; \dual{(\Delta)}, \Lambda \tag{\ruleLabel{copy} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\bang$R} && 1\quad && \Gamma; \emptyset \vdash \serv{x}{y}.P :: x:\bang A \tag{assumption} \\ & && 2\quad && \Gamma; \emptyset \vdash P :: y:A \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}; y:A \tag{IH on 2} \\ & && 4\quad && \serv{x}{y}.P \vdcll \dual{(\Gamma)}; x:\bang A \tag{\ruleLabel{$\bang$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\bang$L} && 1\quad && \Gamma; \Delta, x:\bang A \vdash P\subst{x/u} :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma, u:A; \Delta \vdash P :: \Lambda \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}, u:\dual{A}; \dual{(\Delta)}, \Lambda \tag{IH on 2} \\ & && 4\quad && P\subst{x/u} \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\whynot \dual{A} \tag{\ruleLabel{$\whynot$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\whynot$R} && 1\quad && \Gamma; \Delta \vdash P\subst{x/u} :: \Lambda, x:\whynot \dual{A} \tag{assumption} \\ & && 2\quad && \Gamma, u:A; \Delta \vdash P :: \Lambda \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}, u:\dual{A}; \dual{(\Delta)}, \Lambda \tag{IH on 2} \\ & && 4\quad && P\subst{x/u} \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\whynot \dual{A} \tag{\ruleLabel{$\whynot$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\whynot$L} && 1\quad && \Gamma; x:\whynot A \vdash \serv{x}{y}.P :: \emptyset \tag{assumption} \\ & && 2\quad && \Gamma; y:A \vdash P :: \emptyset \tag{inversion on 1} \\ & && 3\quad && P \vdcll \dual{(\Gamma)}; y:\dual{A} \tag{IH on 2} \\ & && 4\quad && \serv{x}{y}.P \vdcll \dual{(\Gamma)}; x:\bang \dual{A} \tag{\ruleLabel{$\bang$} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cutRL} && 1\quad && \Gamma; \Delta, \Delta' \vdash \nu{x}(P \| Q) :: \Lambda, \Lambda' \tag{assumption} \\ & && 2\quad && \Gamma; \Delta \vdash P :: \Lambda, x:A \\ & && 3\quad && \Gamma; \Delta', x:A \vdash Q :: \Lambda' \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:A \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}; \dual{(\Delta')}, \Lambda', x:\dual{A} \tag{IH on 3} \\ & && 6\quad && \nu{x}(P \| Q) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \dual{(\Delta')}, \Lambda, \Lambda' \tag{\ruleLabel{cut} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cutLR} && 1\quad && \Gamma; \Delta, \Delta' \vdash \nu{x}(P \| Q) :: \Lambda, \Lambda' \tag{assumption} \\ & 
&& 2\quad && \Gamma; \Delta, x:A \vdash P :: \Lambda \\ & && 3\quad && \Gamma; \Delta' \vdash Q :: \Lambda', x:A \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\dual{A} \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}; \dual{(\Delta')}, \Lambda', x:A \tag{IH on 3} \\ & && 6\quad && \nu{x}(P \| Q) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \dual{(\Delta')}, \Lambda, \Lambda' \tag{\ruleLabel{cut} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cutRR} && 1\quad && \Gamma; \Delta, \Delta' \vdash \nu{x}(P \| Q) :: \Lambda, \Lambda' \tag{assumption} \\ & && 2\quad && \Gamma; \Delta \vdash P :: \Lambda, x:A \\ & && 3\quad && \Gamma; \Delta' \vdash Q :: \Lambda', x:\dual{A} \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:A \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}; \dual{(\Delta')}, \Lambda', x:\dual{A} \tag{IH on 3} \\ & && 6\quad && \nu{x}(P \| Q) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \dual{(\Delta')}, \Lambda, \Lambda' \tag{\ruleLabel{cut} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cutLL} && 1\quad && \Gamma; \Delta, \Delta' \vdash \nu{x}(P \| Q) :: \Lambda, \Lambda' \tag{assumption} \\ & && 2\quad && \Gamma; \Delta, x:A \vdash P :: \Lambda \\ & && 3\quad && \Gamma; \Delta', x:\dual{A} \vdash Q :: \Lambda' \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda, x:\dual{A} \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}; \dual{(\Delta')}, \Lambda', x:A \tag{IH on 3} \\ & && 6\quad && \nu{x}(P \| Q) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \dual{(\Delta')}, \Lambda, \Lambda' \tag{\ruleLabel{cut} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cut$\bang$R} && 1\quad && \Gamma; \Delta \vdash \nu{u}(P \| \serv{u}{x}.Q) :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma, u:A; \Delta \vdash P :: \Lambda \\ & && 3\quad && \Gamma; \emptyset \vdash Q :: x:A \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}, u:\dual{A}; \dual{(\Delta)}, \Lambda \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}; x:A \tag{IH on 3} \\ & && 6\quad && \nu{u} ( P \| \serv{u}{x}.Q ) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda \tag{\ruleLabel{cut$\whynot$R} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cut$\bang$L} && 1\quad && \Gamma; \Delta \vdash \nu{u}(\serv{u}{x}.P \| Q) :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma; \emptyset \vdash P :: x:A \\ & && 3\quad && \Gamma, u:A; \Delta \vdash Q :: \Lambda \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; x:A \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}, u:\dual{A}; \dual{(\Delta)}, \Lambda \tag{IH on 3} \\ & && 6\quad && \nu{u}(\serv{u}{x}.P \| Q) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda \tag{\ruleLabel{cut$\whynot$L} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cut$\whynot$R} && 1\quad && \Gamma; \Delta \vdash \nu{u} ( P \| \serv{u}{x}.Q ) :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma, u:A; \Delta \vdash P :: \Lambda \\ & && 3\quad && \Gamma; x:\dual{A} \vdash Q :: \emptyset \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}, u:\dual{A}; \dual{(\Delta)}, \Lambda \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}; x:A \tag{IH on 3} \\ & && 6\quad && \nu{u} ( P \| \serv{u}{x}.Q ) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda \tag{\ruleLabel{cut$\whynot$R} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cut$\whynot$L} && 1\quad && \Gamma; \Delta \vdash 
\nu{u}(\serv{u}{x}.P \| Q) :: \Lambda \tag{assumption} \\ & && 2\quad && \Gamma; x:\dual{A} \vdash P :: \emptyset \\ & && 3\quad && \Gamma, u:A; \Delta \vdash Q :: \Lambda \tag{inversion on 1} \\ & && 4\quad && P \vdcll \dual{(\Gamma)}; x:A \tag{IH on 2} \\ & && 5\quad && Q \vdcll \dual{(\Gamma)}, u:\dual{A}; \dual{(\Delta)}, \Lambda \tag{IH on 3} \\ & && 6\quad && \nu{u}(\serv{u}{x}.P \| Q) \vdcll \dual{(\Gamma)}; \dual{(\Delta)}, \Lambda \tag{\ruleLabel{cut$\whynot$L} on 4 and 5} \end{align*} ($\,\C \subseteq \U$) Take any $P \in \C$. Then, there are $\Gamma$, $\Delta$ s.t. $P \vdcll \Gamma; \Delta$. By showing that this implies $\dual{(\Gamma)}; \emptyset \vdash P :: \Delta$, we have $P \in \U$. As per the second item of <Ref>, Rules $\mleft$ and $\mright$ from <Ref> are admissible in the fragment of the type system without them, for which we write $\vpull$. Therefore, it suffices to show $\dual{(\Gamma)}; \emptyset \vpull P :: \Delta$. We do so by induction on the structure of the proof of $P \vdcll \Gamma; \Delta$. \begin{align*} & \bullet \ruleLabel{id} && 1\quad && \fwd{x}{y} \vdcll \Gamma; x:A, y:\dual{A} \tag{assumption} \\ & && 2\quad && \dual{(\Gamma)}; x:\dual{A} \vpull \fwd{x}{y} :: y:\dual{A} \tag{\ruleLabel{idR}} \\ & && 3\quad && \dual{(\Gamma)} ; \emptyset \vpull \fwd{x}{y} :: x:A , y:\dual{A} \tag{\ruleLabel{$\mright$}} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\1$} && 1\quad && \send{x}{}.\0 \vdcll \Gamma; x:\1 \tag{assumption} \\ & && 2\quad && \dual{(\Gamma)}; \emptyset \vpull \send{x}{}.\0 :: x:\1 \tag{\ruleLabel{$\1$R}} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\bot$} && 1\quad && \recv{x}{}.P \vdcll \Gamma; \Delta, x:\bot \tag{assumption} \\ & && 2\quad && P \vdcll \Gamma; \Delta \tag{inversion on 1} \\ & && 3\quad && \dual{(\Gamma)}; \emptyset \vpull P :: \Delta \tag{IH on 2} \\ & && 4\quad && \dual{(\Gamma)}; x:\1 \vpull \recv{x}{}.P :: \Delta \tag{\ruleLabel{$\1$L} on 3} \\ & && 5\quad && \dual{(\Gamma)}; \emptyset \vpull \recv{x}{}.P :: \Delta, x:\bot \tag{\ruleLabel{$\mright$} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\tensor$} && 1\quad && \nu{y}\send{x}{y}.(P \| Q) \vdcll \Gamma; \Delta, \Delta', x:A \tensor B \tag{assumption} \\ & && 2\quad && P \vdcll \Gamma; \Delta, y:A \\ & && 3\quad && Q \vdcll \Gamma; \Delta', x:B \tag{inversion on 1} \\ & && 4\quad && \dual{(\Gamma)}; \emptyset \vpull P :: \Delta, y:A \tag{IH on 2} \\ & && 5\quad && \dual{(\Gamma)}; \emptyset \vpull Q :: \Delta', x:B \tag{IH on 3} \\ & && 6\quad && \dual{(\Gamma)}; \emptyset \vpull \nu{y}\send{x}{y}.(P \| Q) :: \Delta, \Delta', x:A \tensor B \tag{\ruleLabel{$\tensor$R} on 4 and 5} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\parr$} && 1\quad && \recv{x}{y}.P \vdcll \Gamma; \Delta, x:A \parr B \tag{assumption} \\ & && 2\quad && P \vdcll \Gamma; \Delta, y:A, x:B \tag{inversion on 1} \\ & && 3\quad && \dual{(\Gamma)}; \emptyset \vpull P :: \Delta, y:A, x:B \tag{IH on 2} \\ & && 4\quad && \dual{(\Gamma)}; y:\dual{A} \vpull P :: \Delta, x:B \tag{\ruleLabel{$\mleft$} on 3} \\ & && 5\quad && \dual{(\Gamma)}; \emptyset \vpull \recv{x}{y}.P :: \Delta, x:\underbrace{\dual{A} \lolli B}_{A \parr B} \tag{\ruleLabel{$\lolli$R} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\oplus$} && 1\quad && x \triangleleft j.P \vdcll \Gamma; \Delta, x:\oplus\{i:A_i\}_{i \in I} \tag{assumption} \\ & && 2\quad && P \vdcll \Gamma; \Delta, x:A_j \\ & && 3\quad && j \in I \tag{inversion on 1} \\ & && 4\quad && \dual{(\Gamma)}; \emptyset \vpull P :: \Delta, x:A_j \tag{IH on 2} \\ & && 5\quad && \dual{(\Gamma)}; \emptyset \vpull x \triangleleft j.P :: \Delta,
x:\oplus\{i:A_i\}_{i \in I} \tag{\ruleLabel{$\oplus$R} on 4 and 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\&$} && 1\quad && x \triangleright \{i:P_i\}_{i \in I} \vdcll \Gamma; \Delta, x:\&\{i:A_i\}_{i \in I} \tag{assumption} \\ & && 2\quad && \forall i \in I.~ P_i \vdcll \Gamma; \Delta, x:A_i \tag{inversion on 1} \\ & && 3\quad && \forall i \in I.~ \dual{(\Gamma)}; \emptyset \vpull P_i :: \Delta, x:A_i \tag{IH on 2} \\ & && 4\quad && \dual{(\Gamma)}; \emptyset \vpull x \triangleright \{i:P_i\}_{i \in I} :: \Delta, x:\&\{i:A_i\}_{i \in I} \tag{\ruleLabel{$\&$R} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{copy} && 1\quad && \nu{x}\send{u}{x}.P \vdcll \Gamma, u:A; \Delta \tag{assumption} \\ & && 2\quad && P \vdcll \Gamma, u:A; \Delta, x:A \tag{inversion on 1} \\ & && 3\quad && \dual{(\Gamma)}, u:\dual{A}; \emptyset \vpull P :: \Delta, x:A \tag{IH on 2} \\ & && 4\quad && \dual{(\Gamma)}, u:\dual{A}; x:\dual{A} \vpull P :: \Delta \tag{\ruleLabel{$\mleft$} on 3} \\ & && 5\quad && \dual{(\Gamma)}, u:\dual{A}; \emptyset \vpull \nu{x}\send{u}{x}.P :: \Delta \tag{\ruleLabel{copyL} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\bang$} && 1\quad && \serv{x}{y}.P \vdcll \Gamma; x:\bang A \tag{assumption} \\ & && 2\quad && P \vdcll \Gamma; y:A \tag{inversion on 1} \\ & && 3\quad && \dual{(\Gamma)}; \emptyset \vpull P :: y:A \tag{IH on 2} \\ & && 4\quad && \dual{(\Gamma)}; \emptyset \vpull \serv{x}{y}.P :: x:\bang A \tag{\ruleLabel{$\bang$R} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\whynot$} && 1\quad && P\subst{x/u} \vdcll \Gamma; \Delta, x:\whynot A \tag{assumption} \\ & && 2\quad && P \vdcll \Gamma, u:A; \Delta \tag{inversion on 1} \\ & && 3\quad && \dual{(\Gamma)}, u:\dual{A}; \emptyset \vpull P :: \Delta \tag{IH on 2} \\ & && 4\quad && \dual{(\Gamma)}; x:\bang \dual{A} \vpull P\subst{x/u} :: \Delta \tag{\ruleLabel{$\bang$L} on 3} \\ & && 5\quad && \dual{(\Gamma)}; \emptyset \vpull P\subst{x/u} :: \Delta, x:\whynot A \tag{\ruleLabel{$\mright$} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cut} && 1\quad && \nu{x}(P \| Q) \vdcll \Gamma; \Delta, \Delta' \tag{assumption} \\ & && 2\quad && P \vdcll \Gamma; \Delta, x:A \\ & && 3\quad && Q \vdcll \Gamma; \Delta', x:\dual{A} \tag{inversion on 1} \\ & && 4\quad && \dual{(\Gamma)}; \emptyset \vpull P :: \Delta, x:A \tag{IH on 2} \\ & && 5\quad && \dual{(\Gamma)}; \emptyset \vpull Q :: \Delta', x:\dual{A} \tag{IH on 3} \\ & && 6\quad && \dual{(\Gamma)}; x:A \vpull Q :: \Delta' \tag{\ruleLabel{$\mleft$} on 5} \\ & && 7\quad && \dual{(\Gamma)}; \emptyset \vpull \nu{x}(P \| Q) :: \Delta, \Delta' \tag{\ruleLabel{cutRL} on 4 and 6} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cut$\whynot$R} && 1\quad && \nu{u} ( P \| \serv{u}{x}.Q ) \vdcll \Gamma; \Delta \tag{assumption} \\ & && 2\quad && P \vdcll \Gamma, u:A; \Delta \\ & && 3\quad && Q \vdcll \Gamma; x:\dual{A} \tag{inversion on 1} \\ & && 4\quad && \dual{(\Gamma)}, u:\dual{A}; \emptyset \vpull P :: \Delta \tag{IH on 2} \\ & && 5\quad && \dual{(\Gamma)}; \emptyset \vpull Q :: x:\dual{A} \tag{IH on 3} \\ & && 6\quad && \dual{(\Gamma)}; \emptyset \vpull \nu{u} ( P \| \serv{u}{x}.Q ) :: \Delta \tag{\ruleLabel{cut$\bang$R}} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{cut$\whynot$L} && 1\quad && \nu{u}(\serv{u}{x}.P \| Q) \vdcll \Gamma; \Delta \tag{assumption} \\ & && 2\quad && P \vdcll \Gamma; x:\dual{A} \\ & && 3\quad && Q \vdcll \Gamma, u:A; \Delta \tag{inversion on 1} \\ & && 4\quad && \dual{(\Gamma)}; \emptyset \vpull P :: x:\dual{A} \tag{IH on 2} \\ & && 5\quad && 
\dual{(\Gamma)}, u:\dual{A}; \emptyset \vpull Q :: \Delta \tag{IH on 3} \\ & && 6\quad && \dual{(\Gamma)}; \emptyset \vpull \nu{u}(\serv{u}{x}.P \| Q) :: \Delta \tag{\ruleLabel{cut$\bang$L} on 4 and 5} \end{align*} This concludes the proof of <Ref>.

We now turn our attention to $\I$, the class of processes typable under the intuitionistic interpretation. The observation by Caires, Pfenning and Toninho in [11] entails that $\I$ (the intuitionistically typable processes) should be a strict subset of $\C$ (the classically typable processes). We formalize this fact by characterizing the intuitionistic fragment of the unified type system by limiting its typing rules, and by showing that the class of processes typable in this fragment coincides with $\I$ (<Ref>). It then follows that this class of processes is strictly contained in $\U$ (<Ref>). Since we have just shown that $\U = \C$, this formalizes the fact that $\I$ is a strict subset of $\C$.

<Ref> below formalizes two equivalent characterizations of the intuitionistic fragment of the unified type system. One characterization is based on the limited form of duality of the intuitionistic system: the Rules $\mleft$ and $\mright$ in <Ref> may not be used. The other characterization is based on the restricted two-sided form of sequents: the rules in <Ref> are limited to have exactly one channel/type pair on the right. This requires the following auxiliary definition: the degree of a sequent $\Gamma; \Delta \vdash P :: \Lambda$ is the size of $\Lambda$. We say this sequent has degree $|\Lambda|$. We now have: Given a process $P$, the following are equivalent:
* there are $\Gamma$, $\Delta$, $x$ and $A$ such that $\Gamma; \Delta \vdill P :: x:A$;
* there are $\Gamma$, $\Delta$ and $\Lambda$ such that $\Gamma; \Delta \vdash P :: \Lambda$ where all sequents in its proof have degree 1;
* there are $\Gamma$, $\Delta$ and $\Lambda$ such that $\Gamma; \Delta \vpull P :: \Lambda$ where its proof never uses Rules $\mleft$ and $\mright$.
We first argue that items 1 and 2 are equivalent; then, we argue that items 2 and 3 imply each other:

(Equivalence of items 1 and 2) We show that restricting the sequents of the unified system to degree 1 entails the limitation of its rules to a strict subset: those marked with $\ast$ in <Ref>. This set of rules coincides with the set of rules of the intuitionistic system in <Ref>. Hence, a proof of typability in the intuitionistic system (item 1) can be replicated in the unified system with degree 1 (item 2) and vice versa. Because the proof for each rule follows the same pattern, we only detail two cases: one without $\ast$ and one with $\ast$.
* Rule $\parr$L is not $\ast$-marked. Suppose we have a proof of typability with degree 1 and suppose this includes an application of $\parr$L. By assumption, this application's antecedents have degree 1, so its consequence must have degree 2. This contradicts the assumption that all sequents have degree 1, showing that Rule $\parr$L is not usable in this fragment of the unified system.
* Rule $\lolli$L is $\ast$-marked. By assumption, in any application of this rule its antecedents would have degree 1. Then, its consequence also has degree 1, so this rule can be used without problems.

(Item 2 implies item 3) By <Ref>, the set of rules of the unified system consists of only $\mleft$, $\mright$, and the $\ast$-marked rules in <Ref>. Among these, the only rules that alter the degrees of judgments are $\mleft$ and $\mright$. Hence, if these rules may not be used, proofs of typability have a constant degree. Since all axioms (the $\ast$-marked Axioms $\id$R and $\1$R in <Ref>) have degree 1, this constant is 1. Therefore, a proof of typability without using $\mleft$ and $\mright$ (item 3) can be replicated exactly with degree restricted to 1 (item 2).
(Item 3 implies item 2) The proof of equivalence of items 1 and 2 above shows that with degree 1 the only usable rules of the unified system are those marked with $\ast$ in <Ref>. By <Ref>, without Rules $\mleft$ and $\mright$, the system consists only of these $\ast$-marked rules. Hence, a proof of typability with degree restricted to 1 (item 2) can be replicated exactly without using $\mleft$ and $\mright$ (item 3).

We may now state our final result: $\I \subsetneq \U$. <Ref> ensures that $\I \subseteq \U$. To show that this inclusion is strict, we give a process $P \in \U$ such that $P \notin \I$. Assuming a proof for $u:B; \emptyset \vdash P' :: z:A$, let $P = \recv{x}{y}.\serv{y}{z}.P' \subst{x/u}$; there are precisely three ways to prove $P \in \U$: they follow from the three ways to infer a receive ($\parr$R, $\lolli$R, and $\tensor$L). The proofs are as follows:
\begin{align*} & \bullet \ruleLabel{$\parr$R} && 1\quad && u:B; \emptyset \vdash P' :: z:A \tag{assumption} \\ & && 2\quad && u:B; \emptyset \vdash \serv{y}{z}.P' :: y:\bang A \tag{\ruleLabel{$\bang$R} on 1} \\ & && 3\quad && \emptyset; \emptyset \vdash \serv{y}{z}.P' \subst{x/u} :: y:\bang A, x:\whynot \dual{B} \tag{\ruleLabel{$\whynot$R} on 2} \\ & && 4\quad && \emptyset; \emptyset \vdash \recv{x}{y}.\serv{y}{z}.P' \subst{x/u} :: x:(\bang A) \parr (\whynot \dual{B}) \tag{\ruleLabel{$\parr$R} on 3} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\lolli$R} && 1\quad && u:B; \emptyset \vdash P' :: z:A \tag{assumption} \\ & && 2\quad && u:B; z:\dual{A} \vdash P' :: \emptyset \tag{\ruleLabel{$\mleft$} on 1} \\ & && 3\quad && u:B; y:\whynot \dual{A} \vdash \serv{y}{z}.P' :: \emptyset \tag{\ruleLabel{$\whynot$L} on 2} \\ & && 4\quad && \emptyset; y:\whynot \dual{A} \vdash \serv{y}{z}.P' \subst{x/u} :: x:\whynot \dual{B} \tag{\ruleLabel{$\whynot$R} on 3} \\ & && 5\quad && \emptyset; \emptyset \vdash \recv{x}{y}.\serv{y}{z}.P' \subst{x/u} :: x:(\whynot \dual{A}) \lolli (\whynot \dual{B}) \tag{\ruleLabel{$\lolli$R} on 4} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\tensor$L} && 1\quad && u:B; \emptyset \vdash P' :: z:A \tag{assumption} \\ & && 2\quad && u:B; z:\dual{A} \vdash P' :: \emptyset \tag{\ruleLabel{$\mleft$} on 1} \\ & && 3\quad && u:B; y:\whynot \dual{A} \vdash \serv{y}{z}.P' :: \emptyset \tag{\ruleLabel{$\whynot$L} on 2} \\ & && 4\quad && \emptyset; y:\whynot \dual{A}, x:\bang B \vdash \serv{y}{z}.P' \subst{x/u} :: \emptyset \tag{\ruleLabel{$\bang$L} on 3} \\ & && 5\quad && \emptyset; x:(\whynot \dual{A}) \tensor (\bang B) \vdash \recv{x}{y}.\serv{y}{z}.P' \subst{x/u} :: \emptyset \tag{\ruleLabel{$\tensor$L} on 4} \end{align*}
Clearly, all these proofs contain judgments of degree different from 1 (first case) or require using $\mleft$/$\mright$ (second and third cases). Hence, by <Ref>, $P \notin \I$.

§ ANALYSIS

Now that we have established characterizations for the intuitionistic and classical fragments of the unified system, we analyze the meaning of these results and discuss possible extensions. Our analysis is twofold. First, we consider the informal observation by Caires, Pfenning, and Toninho [11] that intuitionistic type systems enforce the locality principle (ssec:locality); unlike the classical formulation, the intuitionistic fragment of the unified system (and thus the intuitionistic system itself) has a partial form of duality, which ensures that typability guarantees locality. Next, we discuss how the unified system can support alternative, more expressive forms of parallel composition and restriction, and how such extensions transfer to classical or intuitionistic type systems (ssec:mixcycle); we will see that the complete duality of classical type systems allows for such extensions, while the rely-guarantee reading of intuitionistic type systems cannot account for them.

§.§ Locality

Locality is a well-known principle in concurrency research [38]. The idea is that freshly created channels are local. Local channels are mutable, in the sense that they can be used for receives. Once a channel has been transmitted to another location, it becomes non-local, and thus immutable: it can only be used for sends—receives are no longer allowed. This makes locality particularly relevant for giving formal semantics to distributed programming languages; a prime example is the join calculus [20], whose theory relies on (and is deeply influenced by) the locality principle [19]. Locality also makes an appearance in other contexts. Honda and Laurent's [26] correspondence between a typed $\pi$-calculus and polarized proof-nets enforces locality due to receives having negative polarity while received names may only be of positive polarity. Also, Dal Lago et al.'s [16] typed $\pi$-calculus with intersection types goes further: processes are hyperlocalized, meaning that there may be no receives on free channels after a prior receive at all. Neither the intuitionistic nor the classical interpretation guarantees full-fledged locality through typing: both systems allow receives on previously received channels.
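Since the comparisons above and below hinge on duality—total in the classical setting, partial in the intuitionistic one—it may help to make it concrete. The following minimal Python sketch (ours, with an illustrative AST; the $\oplus$/$\&$ connectives are elided) implements the syntactic involution and checks the $A \parr B = \dual{A} \lolli B$ correspondence used in the proofs above:

```python
# A minimal sketch (not from the paper) of session-type duality as a
# syntactic involution over an illustrative AST of linear-logic types.
from dataclasses import dataclass

@dataclass(frozen=True)
class One: pass                           # 1
@dataclass(frozen=True)
class Bot: pass                           # bottom
@dataclass(frozen=True)
class Tensor:
    a: "object"; b: "object"              # A (x) B
@dataclass(frozen=True)
class Par:
    a: "object"; b: "object"              # A par B
@dataclass(frozen=True)
class Bang:
    a: "object"                           # !A
@dataclass(frozen=True)
class WhyNot:
    a: "object"                           # ?A

def dual(t):
    # Flip every connective for its De Morgan dual.
    match t:
        case One(): return Bot()
        case Bot(): return One()
        case Tensor(a, b): return Par(dual(a), dual(b))
        case Par(a, b): return Tensor(dual(a), dual(b))
        case Bang(a): return WhyNot(dual(a))
        case WhyNot(a): return Bang(dual(a))

# Duality is involutive: dual(dual(A)) == A.
t = Par(Bang(One()), WhyNot(Bot()))
assert dual(dual(t)) == t

# Classically, C -o D abbreviates dual(C) par D, so A par B
# coincides with dual(A) -o B:
lolli = lambda c, d: Par(dual(c), d)
assert lolli(dual(Bang(One())), WhyNot(Bot())) == t
```

The last assertion is exactly the underbraced identity in the $\parr$ case of the proof above; intuitionistically, $\lolli$ is primitive while $\parr$, $\bot$, and $\whynot$ have no rules, which is the asymmetry explored next.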
The exception to full-fledged locality is replicated receive, which is used to define a shared channel that can continuously receive linear channels over which to perform a service. The intuitionistic interpretation guarantees locality for shared channels; in other words, the intuitionistic system cannot type replicated receives on non-local channels. The following example, taken from the work by Caires, Pfenning and Toninho [11], is typable classically but not intuitionistically: \[ \nu{x}(\recv{x}{y}.\serv{y}{z}.P \| \nu{q}\send{x}{q}.Q) \] Consider the left process in the parallel composition, $\recv{x}{y}.\serv{y}{z}.P$. It first receives a channel $y$ over channel $x$; then, it uses $y$ for replicated receive, thus defining it as a shared channel. In the classical system, channel $x$ has type $\bang A \parr B$. The intuitionistic variant of this type is $\dual{(\bang A)} \lolli B = \whynot \dual{A} \lolli B$ on the right of the typing judgment, and $\whynot \dual{A} \tensor \dual{B}$ on the left. It is impossible to type a process with a channel of such a type intuitionistically, because the intuitionistic system lacks rules to type `$\whynot$' channels.

The fact that the intuitionistic system cannot type non-local shared channels—due to the absence of a dual for `$\bang$'—suggests that there should be another kind of non-local channels that it cannot type: there is no dual for `$\1$'. Indeed, the intuitionistic system cannot type empty sends on previously received channels, a feature that is not addressed in [11]. The following example is typable classically but not intuitionistically: \[ \recv{x}{y}.\recv{x}{}.\send{y}{}.\0 \] In the classical system, the type of $x$ in this process is $\1 \parr \bot$. The intuitionistic variant of this type is $\dual{\1} \lolli \bot = \bot \lolli \bot$ on the right of the typing judgment, and $\bot \tensor \1$ on the left. This process is not typable intuitionistically, because there are no rules to type `$\bot$' channels.

We make this more precise by giving an alternative proof to <Ref> ($\I \subsetneq \U$) based on this observation: By <Ref>, it is sufficient to give $P \in \U$ such that $P \notin \I$. Let $P := \recv{x}{y} . \recv{x}{} . \send{y}{} . \0$. There are three ways to prove that $P \in \U$:
\begin{align*} & \bullet \ruleLabel{$\tensor$L} && 1\quad && \Gamma; y:\bot \vdash \send{y}{}.\0 :: \emptyset \tag{\ruleLabel{$\bot$L}} \\ & && 2\quad && \Gamma; y:\bot, x:\1 \vdash \recv{x}{}.\send{y}{}.\0 :: \emptyset \tag{\ruleLabel{$\1$L} on 1} \\ & && 3\quad && \Gamma; x:\bot \tensor \1 \vdash \recv{x}{y}.\recv{x}{}.\send{y}{}.\0 :: \emptyset \tag{\ruleLabel{$\tensor$L} on 2} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\parr$R} && 1\quad && \Gamma; \emptyset \vdash \send{y}{}.\0 :: y:\1 \tag{\ruleLabel{$\1$R}} \\ & && 2\quad && \Gamma; \emptyset \vdash \recv{x}{}.\send{y}{}.\0 :: y:\1, x:\bot \tag{\ruleLabel{$\bot$R} on 1} \\ & && 3\quad && \Gamma; \emptyset \vdash \recv{x}{y}.\recv{x}{}.\send{y}{}.\0 :: x:\1 \parr \bot \tag{\ruleLabel{$\parr$R} on 2} \displaybreak[1] \\[5pt] & \bullet \ruleLabel{$\lolli$R} && 1\quad && \Gamma; y:\bot \vdash \send{y}{}.\0 :: \emptyset \tag{\ruleLabel{$\bot$L}} \\ & && 2\quad && \Gamma; y:\bot \vdash \recv{x}{}.\send{y}{}.\0 :: x:\bot \tag{\ruleLabel{$\bot$R} on 1} \\ & && 3\quad && \Gamma; \emptyset \vdash \recv{x}{y}.\recv{x}{}.\send{y}{}.\0 :: x:\bot \lolli \bot \tag{\ruleLabel{$\lolli$R} on 2} \end{align*}
Clearly, all these proofs contain judgments of degree different from 1. Hence, by <Ref>, $P \notin \I$. Based on these two proofs for <Ref>, we have corroborated Caires et al.'s observation on locality, and extended it to the case of empty sends and receives.

Judgments, Revisited
The absence of rules for `$\bot$' and `$\whynot$' in the intuitionistic system is not a design choice, but an inherent consequence of the form of its judgments. As shown by <Ref>, rules for `$\bot$' and `$\whynot$' are an impossibility when judgments are required to have exactly one assignment (channel/proposition) on the right. <Ref> also shows this is closely related to duality, which is not a complete relation in the intuitionistic interpretation. In the classical case, the type system is symmetrical (as shown by its support for $\mleft$/$\mright$-rules), while the intuitionistic type system is asymmetrical.

§.§ Parallel Composition and Restriction

The type systems we have discussed so far are rather restrictive in terms of how processes can be composed and connected: there is only the cut-rule, which composes and connects two processes that have exactly one channel of dual type in common.
Indeed, the cut-rule jointly handles constructs for parallel composition and restriction, which contrasts with many non-logical type systems for the $\pi$-calculus (e.g., [34, 33, 51]) where parallel composition and restriction typically have a dedicated rule each. In this section we discuss extensions to the type systems that decompose cut into two separate rules: mix for parallel composition, and cycle for channel connection (restriction). As we will see, these notions form another clear distinction point between Curry-Howard interpretations of classical and intuitionistic linear logic.

§.§.§ Independent Parallel Composition

Caires et al. [11] discuss an alternative form of parallel composition. In the intuitionistic interpretation, the so-called independent parallel composition connects (i) an arbitrary process $Q$ with (ii) a process $P$ with a channel of type $\1$ on the right; it is derivable in the silent interpretation of the intuitionistic system, in which Axioms $\1$R and $\bot$L type the inactive process $\0$ and Rules $\bot$R and $\1$L leave processes untouched. This way, the rules for $\1$ would be the following (the rules for $\bot$ are similar):
\[
\frac{}{\Gamma; \emptyset \vdash \0 :: x:\1}\;\ruleLabel{$\1$R}
\qquad
\frac{\Gamma; \Delta \vdash P :: \Lambda}{\Gamma; \Delta, x:\1 \vdash P :: \Lambda}\;\ruleLabel{$\1$L}
\]
In this case, the correspondence relies on structural congruences in processes and proofs (suitably extended), rather than on reduction. To obtain independent parallel composition, one uses Rule cut to connect the right channel of $P$ with a corresponding (dual) channel in $Q$, which is exposed on the left using Rule $\1$L: \[ \begin{bussproof} \bussAssume{\Gamma; \Delta \vdill P :: z:\1} \bussAssume{\Gamma; \Delta' \vdill Q :: x:C} \bussUn[\ruleLabel{$\1$L}]{\Gamma; \Delta', z:\1 \vdill Q :: x:C} \bussBin[\ruleLabel{cutRL}]{\Gamma; \Delta, \Delta' \vdill \nu{z}(P \| Q) :: x:C} \end{bussproof} \] The requirement that process $P$ above has a channel of type $\1$ on the right is rather restrictive: only left-rules can be used for typing, so $P$ must be a process that relies on multiple behaviors but can only offer a behavior of type $\1$. Related to this constraint, Caires et al. show that in the silent interpretation $\Gamma; \Delta \vdill P :: x:A$ implies $\Gamma; \Delta,x:\dual{A} \vdill P :: z:\1$, for some fresh $z$, provided that $A$ is exponential-free <cit.>. However, this “movement” from the right to the left is only possible under the absence of the identity axiom, which is in turn necessary when considering, e.g., behavioral polymorphism [8]. Indeed, given any type $A$ we can prove $\emptyset; x:A \vdill \fwd{x}{y} :: y:A$ using the identity axiom, but there is no way to prove $\emptyset; x:A, y:\dual{A} \vdill \fwd{x}{y} :: z:\1$.

§.§.§ Parallel Composition: Mix

Girard [22] discusses an extension to linear logic which allows the combination of two independent proofs. Wadler gives a Curry-Howard interpretation of this rule as the parallel composition of two processes that have no channels in common [52]. In [4], Caires complements the extension with an axiom that types the inactive process with an empty context. We now extend the single-sided (classical) system with this form of parallel composition. The only change is the addition of the following rules:
\[
\frac{P \vdcll \Gamma; \Delta \qquad Q \vdcll \Gamma; \Delta'}{P \| Q \vdcll \Gamma; \Delta, \Delta'}\;\ruleLabel{mix}
\qquad
\frac{}{\0 \vdcll \Gamma; \emptyset}\;\ruleLabel{empty}
\]
We straightforwardly define a similar extension of the two-sided system by adding the following rules:
\[
\frac{\Gamma; \Delta \vdash P :: \Lambda \qquad \Gamma; \Delta' \vdash Q :: \Lambda'}{\Gamma; \Delta, \Delta' \vdash P \| Q :: \Lambda, \Lambda'}\;\ruleLabel{mix}
\qquad
\frac{}{\Gamma; \emptyset \vdash \0 :: \emptyset}\;\ruleLabel{empty}
\]
It is easy to see that <Ref>—the equivalence of the single-sided and two-sided systems—still holds for these extensions.
Following the reasoning of <Ref>—the characterization of the intuitionistic system in terms of the unified one—Rules mix and empty do not belong to the intuitionistic fragment: Rule empty has degree 0, and if the antecedents of Rule mix both have degree 1 then its consequence would have degree 2. This leads us to conclude that the more flexible ways of composition induced by mix cannot be supported by intuitionistic systems.

The Equivalence of $\1$ and $\bot$
Caires [4] notes that, in the classical setting, Rules mix and empty make it possible to prove $\bot \lolli \1$ and $\1 \lolli \bot$ (where $\lolli$ denotes linear implication). Hence, it is possible to consider $\1$ and $\bot$ equivalent, writing a single symbol (say, `$\bullet$') for either—very similar to the singular, self-dual type $\mathtt{end}$ in standard session types. The work of Atkey et al. [1] is also relevant, as it develops a detailed treatment of the conflation of $\bot$ and $\1$. Due to the absence of `$\bot$' and the impossibility of mix and empty in the intuitionistic interpretation, it is not possible to prove $\1$ and $\bot$ equivalent there. This reinforces the fact that in the intuitionistic setting there is a significant difference between channels/propositions in the left and the right contexts.

§.§.§ Channel Connection: Cycle

The cut-rule only allows us to connect two processes on a single channel. Although this neatly guarantees deadlock-freedom, it prevents many deadlock-free processes from being typable (see [17] for comparisons between these classes of processes). For example, we can construct a process $\nu{x}\nu{y}(P \| Q)$ where
\begin{align*}
P &:= \nu{u}\send{x}{u}.(\recv{y}{v}.(\recv{u}{}.\recv{v}{}.\0 \| \send{y}{}.\0) \| \send{x}{}.\0) ~\text{and} \\
Q &:= \recv{x}{w}.\nu{z}\send{y}{z}.(\recv{x}{}.\send{z}{}.\0 \| \recv{y}{}.\send{w}{}.\0)
\end{align*}
This process is deadlock-free, but not typable using cut. Connecting on multiple channels at once does have the danger of introducing the circular dependencies between sessions that are at the heart of deadlocked processes. For example, suppose that we replace $P$ with $P'$, where we swap the send on $x$ and the receive on $y$: \[ P' := \recv{y}{v}.\nu{u}\send{x}{u}.(\recv{u}{}.\recv{v}{}.\0 \| \send{y}{}.\0 \| \send{x}{}.\0). \] Now, the composed and connected process would be stuck, with $P'$ waiting for a receive on $y$ and $Q$ for a receive on $x$.

Dardha and Gay [13] present a session type system based on classical linear logic in which they replace the cut-rule with a rule called cycle, which connects two channels of dual types in the same judgment. This new rule has a side-condition that allows their type system to maintain deadlock-freedom. We can extend the single-sided system with the cycle-rule. Since we are specifically interested in this form of channel restriction, below we abstract away from Dardha and Gay's side-condition by simply writing $\phi$. Note that we first require some modifications to restriction and reduction, based on [51]. Restriction should now involve two channel names, i.e. writing $\nu{x y}P$ instead of $\nu{x}P$, because the involved channels appear in the same judgment and thus need to be uniquely named. Indeed, $\nu{x y}P$ says that $x$ and $y$ are the two endpoints of the same channel in $P$. \[ \begin{bussproof}[cycle] \bussAssume{P \vdcll \Gamma; \Delta, x:A, y:\dual{A}} \bussAssume{\phi} \bussBin{\nu{x y}P \vdcll \Gamma; \Delta} \end{bussproof} \] Defining a similar extension of the two-sided system is again straightforward.
Similarly to the cut-rules in <Ref>, we add four rules that operate on different sides of the typing judgment:
\[
\frac{\Gamma; \Delta, x:A \vdash P :: \Lambda, y:A \qquad \phi}{\Gamma; \Delta \vdash \nu{y x}P :: \Lambda}\;\ruleLabel{cycleLR}
\qquad
\frac{\Gamma; \Delta, x:A \vdash P :: \Lambda, y:A \qquad \phi}{\Gamma; \Delta \vdash \nu{x y}P :: \Lambda}\;\ruleLabel{cycleRL}
\]
\[
\frac{\Gamma; \Delta \vdash P :: \Lambda, x:A, y:\dual{A} \qquad \phi}{\Gamma; \Delta \vdash \nu{x y}P :: \Lambda}\;\ruleLabel{cycleRR}
\qquad
\frac{\Gamma; \Delta, x:A, y:\dual{A} \vdash P :: \Lambda \qquad \phi}{\Gamma; \Delta \vdash \nu{x y}P :: \Lambda}\;\ruleLabel{cycleLL}
\]
Again, it is easy to see that <Ref> still holds for these extensions. Interestingly, if we follow <Ref>, then Rule cycleLL actually is in the intuitionistic fragment. Unfortunately, this rule is useless intuitionistically. Every type has to contain at least one atomic type, which in the intuitionistic case can only be $\1$. Since cycleLL requires two channels of dual types, one of the channels then must contain the atomic type $\bot$, which we have seen earlier to be impossible in the intuitionistic setting. We again conclude that the intuitionistic setting imposes restrictions that make more flexible ways of connecting processes than through cut an impossibility.

Adding mix- and cycle-rules enables a more flexible formulation of structural congruence, as well as more flexible typing of sends. For example, Rule $\tensor$R would only require a single continuation that provides both the payload and the continuation of the send. We can then add structural congruence rules that are found in traditional, untyped settings, such as $P \| Q \equiv Q \| P$ and $P \| \0 \equiv P$.

Preventing Circular Dependencies
We briefly discuss some solutions to the side-condition for deadlock-freedom $\phi$. Dardha and Gay annotate types with priorities and include side-conditions on these priorities in other rules [13]. This approach is based on Kobayashi's type systems for deadlock-freedom <cit.>. A completely orthogonal approach is by Toninho and Yoshida in [50]. It is based on multiparty session type theory, in which session types describe interactions between two or more participants (as opposed to the binary session types we study in this paper, which always involve exactly two participants) <cit.>. Toninho and Yoshida generate partial multiparty session types from binary typings of processes. Processes can then be connected on multiple channels at once if their partial multiparty types are compatible, in which case deadlock-freedom results transfer from the multiparty theory to the binary system.

§ CONCLUSION

Curry-Howard correspondences between linear logic and session types explain how linear logic can provide an ample, principled framework to conceive different type disciplines for message-passing processes specified in the $\pi$-calculus. There is no single canonical interpretation, as there are multiple interpretations depending on design choices involving, e.g., the logical connectives considered and their respective process operators. In this context, this paper has pursued a very concrete goal: to formally compare the interpretations of classical and intuitionistic linear logic as session types. The comparison results reported in <Ref> are an indispensable step toward consolidating the logical foundations of message-passing concurrency; they also have a number of relevant ramifications, as reported in <Ref>. Our technical approach to this goal relies on a fragment of Girard's Logic of Unity (LU) [23] as a basis to develop a new, unified session type system that can type all processes typable in the classical and intuitionistic interpretations alike. The linear logic on which this type system stands is an admittedly modest fragment of Girard's LU.
Still, we emphasize that this fragment is sufficient for our purposes, as it gives us a fair and rigorous basis for addressing our declared goal of comparing type systems based on different linear logics. The development of computational interpretations for LU, including but going beyond the classical and intuitionistic fragments, is surely an interesting direction but one that lies outside the scope of our work.

In the intuitionistic system, judgments have a particular reading in which a process relies on several channels (on the left of judgments) and guarantees a behavior along a single designated channel (on the right). The unified system uses two-sided judgments as well, similar to LU. However, it does not retain this rely-guarantee reading, because it does not distinguish between the sides of its sequents. The consequence is that the unified system supports a full duality relation, as opposed to the intuitionistic one. This allows it to mimic the explicit duality of the single-sided classical system. On the other hand, restricting the right side of the unified judgments to exactly one channel—thus limiting support for duality—characterizes a fragment of the typing rules that precisely coincides with the intuitionistic system.

Our results confirm the informal observation by Caires et al. [11] that the difference between session type systems based on classical and intuitionistic linear logic is in the enforcement of locality of shared names. We have not only confirmed that the intuitionistic system cannot type processes that do not respect locality for shared channels: a new insight obtained in this paper is that it also forbids empty sends on received channels. Our results show that these constraints are a consequence of the strict requirement that intuitionistic typing judgments must have exactly one channel/type on the right. The classical and unified systems do not have such constraints and thus fully support duality.

Our work can be seen as providing a technical answer to long-suspected but fuzzily understood differences between the intuitionistic and classical interpretations, concerning the role of duality (implicit in the former, explicit in the latter) and locality of shared channels (enforced by the former but not by the latter). We believe it is important to give a formal footing to this kind of informal observations. Formally stating the required comparisons is not obvious, and insightful in itself, as implicit assumptions may emerge in the process. In this respect, we adopted the unified system as it provides an effective yardstick for comparisons, independent from both interpretations, and based on an already existing framework (Girard's LU). In future work, it would be interesting to explore other approaches towards our technical results (<Ref>).

If one adopts the stance that a permissive type system is also an expressive one, then the results from our comparisons (notably, the strict inclusion of $\I$ in $\C$) indicate that classical linear logic induces a more expressive class of typed processes than intuitionistic linear logic. There is an alternative stance, which considers less permissive type systems to be more precise than more permissive ones. The locality property for shared channels, as enforced by intuitionistic interpretations, is a case in point here. In our view, the connection between permissiveness, expressiveness, and precision is an interesting question, which largely depends on the value of the intended properties. Indeed, the precision of type systems derived from intuitionistic interpretations may not be meaningful in settings/applications where locality is simply not a relevant property. On the other hand, the conditions that make intuitionistic interpretations less permissive can always be imposed on more permissive interpretations as side-conditions, making them more precise.
Finally, as observed by Caires et al. [11], it is surely remarkable that intuitionistic interpretations of session types precisely capture a principle such as locality of shared names, which was known and exploited in different contexts [38] long before the Curry-Howard interpretations were first spelled out [6]. To conclude, we believe that LU and the unified type system have other useful applications besides the comparison of type systems based on vanilla linear logic; our discussion of more expressive parallel composition and channel connection in <Ref> corroborates this. We have been able to rely on the unified system to study the effects of mix- and cycle-rules in the intuitionistic interpretation, by transferring extensions of the classical interpretation and following the methodology of our comparison results. In this case, extensions of intuitionistic interpretations with such rules do not make sense due to the constraints of typing judgments. This makes sense from a logical perspective: intuitionistic argumentation is based on a constructivist philosophy in which putting unrelated things together (mix) and then relating them later (cycle) may not be acceptable.

We are grateful to Juan Jaramillo, Joseph Paulus, and Revantha Ramanayake for helpful discussions. We would also like to thank the anonymous reviewers of PLACES'20 for their suggestions, which were helpful to improve the presentation.

[1] Robert Atkey, Sam Lindley, and J. Garrett Morris. Conflation Confers Concurrency. In Sam Lindley, Conor McBride, Phil Trinder, and Don Sannella, editors, A List of Successes That Can Change the World: Essays Dedicated to Philip Wadler on the Occasion of His 60th Birthday, Lecture Notes in Computer Science, pages 32–55. Springer International Publishing, Cham, 2016.
[2] Michele Boreale. On the expressiveness of internal mobility in name-passing calculi. Theoretical Computer Science, 195(2):205–226, March 1998.
[3] Stephanie Balzer and Frank Pfenning. Manifest Sharing with Session Types. Proc. ACM Program. Lang., 1(ICFP):37:1–37:29, August 2017.
[4] Luís Caires. Types and Logic, Concurrency and Non-Determinism. Technical Report MSR-TR-2014-104, in Essays for the Luca Cardelli Fest, Microsoft Research, September 2014.
[5] Bor-Yuh Evan Chang, Kaustuv Chaudhuri, and Frank Pfenning. A judgmental analysis of linear logic. Technical Report CMU-CS-03-131R, Department of Computer Science, Carnegie Mellon University, November 2003.
[6] Luís Caires and Frank Pfenning. Session Types as Intuitionistic Linear Propositions. In Paul Gastin and François Laroussinie, editors, CONCUR 2010 - Concurrency Theory, Lecture Notes in Computer Science, pages 222–236, Berlin, Heidelberg, 2010. Springer.
[7] Luís Caires and Jorge A. Pérez. Linearity, Control Effects, and Behavioral Types. In Hongseok Yang, editor, Programming Languages and Systems, Lecture Notes in Computer Science, pages 229–259, Berlin, Heidelberg, 2017. Springer.
[8] Luís Caires, Jorge A. Pérez, Frank Pfenning, and Bernardo Toninho. Behavioral polymorphism and parametricity in session-based communication. In Matthias Felleisen and Philippa Gardner, editors, Programming Languages and Systems, pages 330–349, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.
[9] Luís Caires, Jorge A. Pérez, Frank Pfenning, and Bernardo Toninho. Domain-Aware Session Types. In Wan Fokkink and Rob van Glabbeek, editors, 30th International Conference on Concurrency Theory (CONCUR 2019), volume 140 of Leibniz International Proceedings in Informatics (LIPIcs), pages 39:1–39:17, Dagstuhl, Germany, 2019. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
[10] Luís Caires, Frank Pfenning, and Bernardo Toninho. Towards concurrent type theory. In Proceedings of the 8th ACM SIGPLAN Workshop on Types in Language Design and Implementation, pages 1–12. ACM, January 2012.
[11] Luís Caires, Frank Pfenning, and Bernardo Toninho. Linear logic propositions as session types. Mathematical Structures in Computer Science, 26(3):367–423, March 2016.
[12] Henry DeYoung, Luís Caires, Frank Pfenning, and Bernardo Toninho. Cut Reduction in Linear Logic as Asynchronous Session-Typed Communication. In Patrick Cégielski and Arnaud Durand, editors, Computer Science Logic (CSL'12) - 26th International Workshop/21st Annual Conference of the EACSL, volume 16 of Leibniz International Proceedings in Informatics (LIPIcs), pages 228–242, Dagstuhl, Germany, 2012. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
[13] Ornela Dardha and Simon J. Gay. A New Linear Logic for Deadlock-Free Session-Typed Processes. In Christel Baier and Ugo Dal Lago, editors, Foundations of Software Science and Computation Structures, Lecture Notes in Computer Science, pages 91–109. Springer International Publishing, 2018.
[14] Ornela Dardha, Elena Giachino, and Davide Sangiorgi. Session types revisited. Information and Computation, 256:253–286, October 2017.
[15] Romain Demangeon and Kohei Honda. Full Abstraction in a Subtyped pi-Calculus with Linear Types. In Joost-Pieter Katoen and Barbara König, editors, CONCUR 2011 – Concurrency Theory, Lecture Notes in Computer Science, pages 280–296, Berlin, Heidelberg, 2011. Springer.
[16] Ugo Dal Lago, Marc de Visme, Damiano Mazza, and Akira Yoshimizu. Intersection types and runtime errors in the pi-calculus. Proceedings of the ACM on Programming Languages, 3(POPL):7:1–7:29, January 2019.
[17] Ornela Dardha and Jorge A. Pérez. Comparing type systems for deadlock freedom. Journal of Logical and Algebraic Methods in Programming, 124:100717, January 2022.
[18] Dan Frumin, Emanuele D'Osualdo, Bas van den Heuvel, and Jorge A. Pérez. A bunch of sessions: A propositions-as-sessions interpretation of bunched implications in channel-based concurrency. Proceedings of the ACM on Programming Languages, 6(OOPSLA2):155:841–155:869, October 2022.
[19] Cédric Fournet, Georges Gonthier, Jean-Jacques Levy, Luc Maranget, and Didier Rémy. A calculus of mobile agents. In Ugo Montanari and Vladimiro Sassone, editors, CONCUR '96: Concurrency Theory, Lecture Notes in Computer Science, pages 406–421, Berlin, Heidelberg, 1996. Springer.
[20] Cédric Fournet and Cosimo Laneve. Bisimulations in the join-calculus. Theoretical Computer Science, 266(1):569–603, September 2001.
[21] Simon J. Gay, Nils Gesbert, and António Ravara. Session Types as Generic Process Types. arXiv:1408.1459 [cs], August 2014.
[22] Jean-Yves Girard. Linear logic. Theoretical Computer Science, 50(1):1–101, January 1987.
[23] Jean-Yves Girard. On the unity of logic. Annals of Pure and Applied Logic, 59(3):201–217, February 1993.
[24] Jean-Yves Girard and Yves Lafont. Linear logic and lazy computation. In Hartmut Ehrig, Robert Kowalski, Giorgio Levi, and Ugo Montanari, editors, TAPSOFT '87, Lecture Notes in Computer Science, pages 52–66, Berlin, Heidelberg, 1987. Springer.
[25] Daniele Gorla. A taxonomy of process calculi for distribution and mobility. Distributed Computing, 23(4):273–299, December 2010.
[26] Kohei Honda and Olivier Laurent. An exact correspondence between a typed pi-calculus and polarised proof-nets. Theoretical Computer Science, 411(22):2223–2238, May 2010.
[27] Kohei Honda. Types for dyadic interaction. In Eike Best, editor, CONCUR'93, Lecture Notes in Computer Science, pages 509–523, Berlin, Heidelberg, 1993. Springer.
[28] Bas van den Heuvel and Jorge A. Pérez. Session type systems based on linear logic: Classical versus intuitionistic. In Stephanie Balzer and Luca Padovani, editors, Proceedings of the 12th International Workshop on Programming Language Approaches to Concurrency- and Communication-cEntric Software, Dublin, Ireland, 26th April 2020, volume 314 of Electronic Proceedings in Theoretical Computer Science, pages 1–11. Open Publishing Association, 2020.
[29] Bas van den Heuvel and Jorge A. Pérez. Deadlock freedom for asynchronous and cyclic process networks. In Julien Lange, Anastasia Mavridou, Larisa Safina, and Alceste Scalas, editors, Proceedings 14th Interaction and Concurrency Experience, Online, 18th June 2021, volume 347 of Electronic Proceedings in Theoretical Computer Science, pages 38–56. Open Publishing Association, 2021.
[30] Ross Horne and Luca Padovani. A Logical Account of Subtyping for Session Types. Electronic Proceedings in Theoretical Computer Science, 378:26–37, April 2023.
[31] Kohei Honda, Vasco T. Vasconcelos, and Makoto Kubo. Language primitives and type discipline for structured communication-based programming. In Chris Hankin, editor, Programming Languages and Systems, Lecture Notes in Computer Science, pages 122–138, Berlin, Heidelberg, 1998. Springer.
[32] Wen Kokke, Fabrizio Montesi, and Marco Peressotti. Better late than never: A fully-abstract semantics for classical processes. Proceedings of the ACM on Programming Languages, 3(POPL):24:1–24:29, January 2019.
[33] Naoki Kobayashi. Type Systems for Concurrent Programs. In Bernhard K. Aichernig and Tom Maibaum, editors, Formal Methods at the Crossroads. From Panacea to Foundational Support: 10th Anniversary Colloquium of UNU/IIST, the International Institute for Software Technology of The United Nations University, Lisbon, Portugal, March 18-20, 2002. Revised Papers, Lecture Notes in Computer Science, pages 439–453. Springer, Berlin, Heidelberg, 2003.
[34] Naoki Kobayashi, Benjamin C. Pierce, and David N. Turner. Linearity and the pi-calculus. ACM Transactions on Programming Languages and Systems, 21(5):914–947, September 1999.
[35] Dimitrios Kouzapas, Jorge A. Pérez, and Nobuko Yoshida. On the relative expressiveness of higher-order session processes. Information and Computation, 268:104433, October 2019.
[36] Olivier Laurent. Around Classical and Intuitionistic Linear Logics. In Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '18, pages 629–638, New York, NY, USA, 2018. ACM.
[37] Sam Lindley and J. Garrett Morris. Talking Bananas: Structural Recursion for Session Types. In Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming, ICFP 2016, pages 434–447, New York, NY, USA, 2016. ACM.
[38] Massimo Merro. Locality and Polyadicity in Asynchronous Name-Passing Calculi. In Jerzy Tiuryn, editor, Foundations of Software Science and Computation Structures, Lecture Notes in Computer Science, pages 238–251, Berlin, Heidelberg, 2000. Springer.
[39] Robin Milner, Joachim Parrow, and David Walker. A calculus of mobile processes, I. Information and Computation, 100(1):1–40, September 1992.
[40] Jorge A. Pérez. Higher-Order Concurrency: Expressiveness and Decidability Results. PhD thesis, University of Bologna, Italy, May 2010.
[41] Jorge A. Pérez. The Challenge of Typed Expressiveness in Concurrency. In Elvira Albert and Ivan Lanese, editors, Formal Techniques for Distributed Objects, Components, and Systems, Lecture Notes in Computer Science, pages 239–247, Cham, 2016. Springer International Publishing.
[42] Kirstin Peters. Comparing Process Calculi Using Encodings. Electronic Proceedings in Theoretical Computer Science, 300:19–38, August 2019.
[43] Joseph W. N. Paulus, Jorge A. Pérez, and Daniele Nantes-Sobrinho. Termination in concurrency, revisited. In Santiago Escobar and Vasco T. Vasconcelos, editors, International Symposium on Principles and Practice of Declarative Programming, PPDP 2023, Lisboa, Portugal, October 22-23, 2023, pages 3:1–3:14. ACM, 2023.
[44] Zesen Qian, G. A. Kavvos, and Lars Birkedal. Client-server sessions in linear logic. Proceedings of the ACM on Programming Languages, 5(ICFP):62:1–62:31, August 2021.
[45] Davide Sangiorgi. Locality and interleaving semantics in calculi for mobile processes. Theoretical Computer Science, 155(1):39–83, February 1996.
[46] Harold Schellinx. Some Syntactical Observations on Linear Logic. Journal of Logic and Computation, 1(4):537–559, September 1991.
[47] Davide Sangiorgi and David Walker. The Pi-Calculus: A Theory of Mobile Processes. Cambridge University Press, October 2003.
[48] Bernardo Toninho, Luis Caires, and Frank Pfenning. Corecursion and Non-divergence in Session-Typed Processes. In Matteo Maffei and Emilio Tuosto, editors, Trustworthy Global Computing, Lecture Notes in Computer Science, pages 159–175, Berlin, Heidelberg, 2014. Springer.
[49] Kaku Takeuchi, Kohei Honda, and Makoto Kubo. An interaction-based language and its typing system. In Costas Halatsis, Dimitrios Maritsas, George Philokyprou, and Sergios Theodoridis, editors, PARLE'94 Parallel Architectures and Languages Europe, Lecture Notes in Computer Science, pages 398–413, Berlin, Heidelberg, 1994. Springer.
[50] Bernardo Toninho and Nobuko Yoshida. Interconnectability of Session-Based Logical Processes. ACM Transactions on Programming Languages and Systems (TOPLAS), 40(4):17, December 2018.
[51] Vasco T. Vasconcelos. Fundamentals of session types. Information and Computation, 217:52–70, August 2012.
[52] Philip Wadler. Propositions As Sessions. In Proceedings of the 17th ACM SIGPLAN International Conference on Functional Programming, ICFP '12, pages 273–286, New York, NY, USA, 2012. ACM.
## 0.1 Feature Transformation for Style Transfer

Once we have the feature grid representation of a scene, we can tackle the task of stylizing 3D scenes. Given a reference style image, our goal is to render stylized novel views of the 3D scene with multi-view consistency. To achieve this, we apply transformations to the features of the grid. One plausible solution to this task is to apply style transfer to the feature grid directly. This solution is efficient at evaluation time, as it can render any stylized view with a single style transfer process. However, such a transformation is impractical to train, as it needs to stylize the whole feature grid in every iteration. Another solution is to apply an off-the-shelf zero-shot style transfer method to the features of the sampled 3D points. While this solution can reduce computational cost by decreasing the size of the training patch and the number of sampled points, it has two problems: 1) vanilla zero-shot style transformation is conditioned on holistic statistics of the sampled point batch [li2019learning, huang2017arbitrary, liu2021adaattn], which violates multi-view consistency in volume rendering because the feature transformation of a specific 3D point varies across different sampled point batches; 2) volume rendering requires sampling hundreds of points along a single ray, which makes transformation on the point batch memory-intensive. Motivated by the observation that style transformation is conditioned on both content information and style information, we decompose the style transformation into sampling-invariant content transformation (SICT) and deferred style transformation (DST). After the decomposition, SICT is conditioned solely on the content information while DST is conditioned solely on the style information, as elaborated in the ensuing subsections.

### 0.1.1 Sampling-invariant Content Transformation

Figure 3: Comparison between vanilla instance normalization (IN) in (a) and volume-adaptive IN in (b). During evaluation, volume-adaptive IN uses learned mean and standard-deviation, discarding the dependency on the sampled point batch’s holistic statistics (indicated by the red arrows in the left graph).

Given a batch of sampled points, we can get their corresponding features $F_{i}\in\mathbb{R}^{C},i\in\{1,2,...,N\}$ from the feature grid, where $N$ is the number of sampled points along a ray and $C$ is the number of feature channels. The goal of SICT is to transform the extracted features $F_{i}$ so that they can be better stylized. We formulate SICT as a channel-wise self-attention operation applied to the features after instance normalization (IN) [ulyanov2016instance]. Specifically, we formulate $Q$ (query), $K$ (key), and $V$ (value) as: $\displaystyle Q$ $\displaystyle=q(Norm(F_{i})),$ (1) $\displaystyle K$ $\displaystyle=k(Norm(F_{i})),$ (2) $\displaystyle V$ $\displaystyle=v(Norm(F_{i})),$ (3) where $q,k,v$ are $1\times 1$ convolution layers which reduce the channel number from $C$ to $C^{\prime}$ for computational efficiency, and $Norm$ denotes the IN. However, as shown in fig:IN, vanilla IN calculates per-dimension mean and standard-deviation of the batch of sampled points, which varies with different sampled point batches and incurs multi-view inconsistency accordingly.
Thus we design volume-adaptive IN which, during training, keeps running estimates of the computed mean and standard-deviation, and uses them for normalization during evaluations (instead of computing from the sampled point batch). Through volume-adaptive IN, we can ensure that the content transformation is consistent regardless of the sampled point batch’s holistic statistics. Channel-wise self-attention can thus be implemented by: $\bar{F_{i}}=V\otimes\mathrm{Softmax}\left(\widetilde{cov}(Q,K)\right),$ (4) where $\otimes$ denotes matrix multiplication and $\widetilde{cov}(Q,K)\in\mathbb{R}^{N\times C^{\prime}\times C^{\prime}}$ denotes the covariance matrix in the channel dimension.

### 0.1.2 Deferred Style Transformation

Figure 4: Deferred style transformation. We apply the style transformation to the volume-rendered feature maps $\bar{F_{c}}$ according to the style feature maps $F_{s}$. To ensure multi-view consistency, we modulate the bias (i.e. the mean value of the style feature maps $\mu(F_{s})$) with the sum weight of sampled points along each ray $w_{\textbf{r}}$.

After applying SICT to the features of each 3D point, we apply DST to the volume-rendered 2D feature maps $\bar{F_{c}}$ rather than 3D point features $\bar{F_{i}}$. To ensure multi-view consistency, we formulate the transformation as matrix multiplication and adaptive bias addition, as illustrated in fig:DST. Specifically, we first extract feature maps $F_{s}$ of the reference style $S$ using a pre-trained VGG [simonyan2014very], and then generate the style transformation matrix $T\in\mathbb{R}^{C^{\prime}\times C^{\prime}}$ using the feature covariance $cov(F_{s})$ following [li2019learning]. Next, we apply matrix multiplication with $T$ to the feature maps $\bar{F_{c}}$ and use a $1\times 1$ convolution layer $conv$ without bias to restore the channel number from $C^{\prime}$ to $C$. Though these operations can partially instill style information, they are not expressive enough without a bias addition containing style information [wu2021styleformer]. Thus, following [huang2017arbitrary], we multiply the feature maps with the standard-deviation value $\sigma(F_{s})$ and add the mean value $\mu(F_{s})$. To ensure it is equivalent to apply the transformation to either 3D point features or 2D feature maps, we adaptively modulate the mean value $\mu(F_{s})$ with the sum weight of sampled points along each ray $w_{\textbf{r}}$. DST can be mathematically formulated as: $F_{cs}=conv\left(T\otimes\bar{F_{c}}\right)\times\sigma(F_{s})+w_{\textbf{r}}\times\mu(F_{s}),$ (5) $\text{where}\quad\bar{F_{c}}=\sum_{i=1}^{N}w_{i}\bar{F_{i}},\quad w_{\textbf{r}}=\sum_{i=1}^{N}w_{i},\quad\textbf{r}\in\mathcal{R}$ (6) where $w_{i}$ denotes the weight of sampled point $i$ (eq:weight), $\bar{F_{i}}$ denotes the feature of sample $i$ after SICT, and $\mathcal{R}$ is the set of rays in each training batch. Note that $conv$ is a $1\times 1$ convolution layer without bias, so it is basically a matrix multiplication operation, and $\sigma(F_{s}),\mu(F_{s})$ are scalars. Together with the adaptive bias modulation $w_{\textbf{r}}$, eq:styletrans can be reformulated as: $F_{cs}=\sum_{i=1}^{N}w_{i}\left(\underbrace{conv\left(T\otimes\bar{F_{i}}\right)\times\sigma(F_{s})+\mu(F_{s})}_{\mbox{(i)}}\right),$ (7) where part (i) can be seen as applying the style transformation on every 3D point feature independently before volume rendering.
This proves that applying DST on 2D feature maps is equivalent to applying the transformation on 3D points’ features, maintaining multi-view consistency. The full derivation of eq:styletrans2 is provided in the appendix. Finally, we adopt a 2D CNN decoder to project the stylized feature maps $F_{cs}$ to RGB space to generate the final stylized novel view images.
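To make the volume-adaptive IN described above concrete, here is a minimal PyTorch sketch (ours, not the authors' released code; the module name, momentum, and epsilon are illustrative assumptions):

```python
# A minimal sketch of volume-adaptive instance normalization, assuming the
# features of the sampled points arrive as a tensor of shape (N, C).
import torch
import torch.nn as nn

class VolumeAdaptiveIN(nn.Module):
    def __init__(self, num_channels: int, momentum: float = 0.1, eps: float = 1e-5):
        super().__init__()
        self.momentum = momentum
        self.eps = eps
        # Running estimates replace per-batch statistics at evaluation time,
        # so normalization no longer depends on which points were sampled.
        self.register_buffer("running_mean", torch.zeros(num_channels))
        self.register_buffer("running_var", torch.ones(num_channels))

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (N, C) features of N sampled points along the rays.
        if self.training:
            mean = f.mean(dim=0)
            var = f.var(dim=0, unbiased=False)
            with torch.no_grad():
                # Exponential moving averages of the batch statistics.
                self.running_mean.lerp_(mean, self.momentum)
                self.running_var.lerp_(var, self.momentum)
        else:
            mean, var = self.running_mean, self.running_var
        return (f - mean) / torch.sqrt(var + self.eps)
```

During training the batch statistics are used while the running estimates are updated; during evaluation only the stored estimates are used, so the normalization of a point's feature no longer depends on which other points happen to be sampled—matching the multi-view consistency argument above.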
# The n-th root of sequential effect algebras††thanks: This project is supported by Natural Science Foundation of China (10771191 and 10471124) and Natural Science Foundation of Zhejiang Province of China (Y6090105). Shen Jun1,2, Wu Junde1 Tel: 86-571-87951609-8111, E-mail: <EMAIL_ADDRESS> ###### Abstract In 2005, Professor Gudder presented 25 open problems of sequential effect algebras; the 20th problem asked: In a sequential effect algebra, if the square root of some element exists, is it unique? We can strengthen the problem as follows: For each given positive integer $n>1$, is there a sequential effect algebra such that the n-th root of some element $c$ is not unique and the n-th root of $c$ is not the k-th root of $c$ ($k<n$)? In this paper, we answer the strengthened problem affirmatively. 1Department of Mathematics, Zhejiang University, Hangzhou 310027, P. R. China 2Department of Mathematics, Anhui Normal University, Wuhu 241003, P. R. China Keywords. Effect algebra, sequential effect algebra, root. PACS numbers: 02.10-v, 02.30.Tb, 03.65.Ta. Let $H$ be a complex Hilbert space and ${\cal D}(H)$ the set of density operators on $H$, i.e., the trace class positive operators on $H$ of unit trace, which represent the states of a quantum system. A self-adjoint operator $A$ on $H$ such that $0\leq A\leq I$ is called a quantum effect ([1, 2]); the set of quantum effects on $H$ is denoted by ${\cal E}(H)$. The set of orthogonal projection operators on $H$ is denoted by ${\cal P}(H)$. To each $P\in{\cal P}(H)$ is associated a so-called Lüders transformation $\Phi_{L}^{P}:{\cal D}(H)\rightarrow{\cal D}(H)$ such that for each $T\in{\cal D}(H)$, $\Phi_{L}^{P}(T)=PTP$. Moreover, each quantum effect $B\in{\cal E}(H)$ gives rise also to a general Lüders transformation $\Phi_{L}^{B}$ such that for each $T\in{\cal D}(H)$, $\Phi_{L}^{B}(T)=B^{\frac{1}{2}}TB^{\frac{1}{2}}$ ([3-4]). Let $B,C\in{\cal E}(H)$ be two quantum effects. It is easy to prove that the composition $\Phi_{L}^{B}\circ\Phi_{L}^{C}$ satisfies that for each $T\in{\cal D}(H)$, $(\Phi_{L}^{B}\circ\Phi_{L}^{C})(T)=(B^{\frac{1}{2}}CB^{\frac{1}{2}})^{\frac{1}{2}}T(B^{\frac{1}{2}}CB^{\frac{1}{2}})^{\frac{1}{2}}$ ([4]). Professor Gudder called $B^{\frac{1}{2}}CB^{\frac{1}{2}}$ the sequential product of $B$ and $C$, and denoted it by $B\circ C$ ([5-7]). This sequential product has been generalized to an algebraic structure called a sequential effect algebra ([8]). Now, we state the basic definitions and results of sequential effect algebras. An effect algebra is a system $(E,0,1,\oplus)$, where 0 and 1 are distinct elements of $E$ and $\oplus$ is a partial binary operation on $E$ satisfying that [9]: (EA1). If $a\oplus b$ is defined, then $b\oplus a$ is defined and $b\oplus a=a\oplus b$. (EA2). If $a\oplus(b\oplus c)$ is defined, then $(a\oplus b)\oplus c$ is defined and $(a\oplus b)\oplus c=a\oplus(b\oplus c).$ (EA3). For each $a\in E$, there exists a unique element $b\in E$ such that $a\oplus b=1$. (EA4). If $a\oplus 1$ is defined, then $a=0$. In an effect algebra $(E,0,1,\oplus)$, if $a\oplus b$ is defined, we write $a\bot b$. For each $a\in(E,0,1,\oplus)$, it follows from (EA3) that there exists a unique element $b\in E$ such that $a\oplus b=1$; we denote $b$ by $a^{\prime}$. Let $a,b\in(E,0,1,\oplus)$. If there exists a $c\in E$ such that $a\bot c$ and $a\oplus c=b$, then we say that $a\leq b$; if in addition $a\neq b$, then we write $a<b$.
It follows from [9] that $\leq$ is a partial order on $(E,0,1,\oplus)$ and satisfies that for each $a\in E$, $0\leq a\leq 1$, and $a\bot b$ if and only if $a\leq b^{\prime}$. A sequential effect algebra is an effect algebra $(E,0,1,\oplus)$ with another binary operation $\circ$ defined on $(E,0,1,\oplus)$ satisfying the following [8]: (SEA1). The map $b\mapsto a\circ b$ is additive for each $a\in E$, that is, if $b\bot c$, then $a\circ b\bot a\circ c$ and $a\circ(b\oplus c)=a\circ b\oplus a\circ c$. (SEA2). $1\circ a=a$ for each $a\in E$. (SEA3). If $a\circ b=0$, then $a\circ b=b\circ a$. (SEA4). If $a\circ b=b\circ a$, then $a\circ b^{\prime}=b^{\prime}\circ a$ and $a\circ(b\circ c)=(a\circ b)\circ c$ for each $c\in E$. (SEA5). If $c\circ a=a\circ c$ and $c\circ b=b\circ c$, then $c\circ(a\circ b)=(a\circ b)\circ c$ and $c\circ(a\oplus b)=(a\oplus b)\circ c$ whenever $a\bot b$. Let $(E,0,1,\oplus,\circ)$ be a sequential effect algebra. Then the operation $\circ$ is said to be a sequential product on $(E,0,1,\oplus,\circ)$. If $a,b\in(E,0,1,\oplus,\circ)$ and $a\circ b=b\circ a$, then $a$ and $b$ are said to be sequentially independent, and we write $a|b$ ([8]). Let $a\in(E,0,1,\oplus,\circ)$. If there exists an element $b\in(E,0,1,\oplus,\circ)$ such that $\underbrace{b\circ b\circ\cdots\circ b}_{n\ \text{factors}}=a$, then we write $b^{n}=a$ and $b$ is said to be an n-th root of $a$. Note that if $b$ is an n-th root of $a$, then $a$ can be obtained by measuring $b$ $n$ times repeatedly. The sequential effect algebra is an important and interesting mathematical model for studying quantum measurement theory [5-8]. In [10], Professor Gudder presented 25 open problems to motivate the study of sequential effect algebra theory. The 20th problem asked: Problem 1 ([10]). In a sequential effect algebra $(E,0,1,\oplus,\circ)$, if the square root of some element exists, is it unique? Now, we can strengthen Problem 1 as follows: Problem 2. For each given positive integer $n>1$, is there a sequential effect algebra $(E,0,1,\oplus,\circ)$ such that the n-th root of some element $c$ is not unique and an n-th root of $c$ is not a k-th root of $c$ ($k<n$)? I.e., are there $a,b\in E$ such that $a\neq b$, $a^{n}=c=b^{n}$, and $a^{k}\neq c$, $b^{k}\neq c$ for $k<n$? In this paper, we present an example to answer Problem 2 affirmatively. Actually, we will construct a sequential effect algebra $E_{0}$ such that there are elements $a,b,c\in E_{0}$ satisfying the relations $a>a^{2}>\cdots>a^{n},$ $b>b^{2}>\cdots>b^{n},$ $a^{k}\neq b^{k}$ for $k<n$, and $a^{n}=b^{n}=c\neq 0$. In order to construct our example, we need some preliminary steps: Let $Z$ be the set of integers and $n>1$ a given positive integer. Let $p(x)=\sum\limits_{i=1}^{n-1}k_{i}x^{i}$, where $k_{i}\in Z$ and either all $k_{i}$ vanish or the first nonzero $k_{i}$ is positive; we denote the set of all polynomials characterized above by $I_{0}$. Suppose $p_{1},p_{2}\in I_{0}$ with $p_{1}(x)=\sum\limits_{i=1}^{n-1}k_{1,i}x^{i}$ and $p_{2}(x)=\sum\limits_{i=1}^{n-1}k_{2,i}x^{i}$, and let $F(p_{1},p_{2})(x)=\sum\limits_{i+j\leq n-1}k_{1,i}k_{2,j}x^{i+j}$ and $G(p_{1},p_{2})=\sum\limits_{i+j=n}k_{1,i}k_{2,j}$. Then it is easy to see that $F(p_{1},p_{2})\in I_{0}$ and $G(p_{1},p_{2})\in Z$. Thus we have defined mappings $F:I_{0}\times I_{0}\longrightarrow I_{0}$ and $G:I_{0}\times I_{0}\longrightarrow Z$.
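To make $F$ and $G$ concrete, here is a small Python sketch, representing a polynomial $p=\sum_{i=1}^{n-1}k_{i}x^{i}$ by its coefficient tuple $(k_{1},\ldots,k_{n-1})$; the function names are ours, and the membership test for $I_{0}$ is included only as a sanity check.

```python
n = 5  # the fixed exponent n > 1 from the construction

def in_I0(p):
    # p belongs to I_0: all coefficients zero, or the first nonzero one positive
    nz = [k for k in p if k != 0]
    return not nz or nz[0] > 0

def F(p, q):
    # truncated product: keep only the terms of degree <= n - 1
    r = [0] * (n - 1)                      # r[i-1] = coefficient of x^i
    for i, ki in enumerate(p, start=1):
        for j, kj in enumerate(q, start=1):
            if i + j <= n - 1:
                r[i + j - 1] += ki * kj
    return tuple(r)

def G(p, q):
    # the coefficient of x^n in the (untruncated) product
    return sum(ki * kj
               for i, ki in enumerate(p, start=1)
               for j, kj in enumerate(q, start=1)
               if i + j == n)

p1 = (1, 0, 0, 0)   # the polynomial x
p2 = (0, 0, 0, 1)   # the polynomial x^4
assert in_I0(p1) and in_I0(p2)
assert F(p1, p2) == (0, 0, 0, 0) and G(p1, p2) == 1   # x * x^4 = x^5
assert F(p1, p1) == (0, 1, 0, 0) and G(p1, p1) == 0   # x * x   = x^2
```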
Moreover, suppose $p_{1},p_{2},p_{3}\in I_{0}$ with $p_{1}(x)=\sum\limits_{i=1}^{n-1}k_{1,i}x^{i}$, $p_{2}(x)=\sum\limits_{i=1}^{n-1}k_{2,i}x^{i}$, $p_{3}(x)=\sum\limits_{i=1}^{n-1}k_{3,i}x^{i}$, and let $\overline{F}(p_{1},p_{2},p_{3})(x)=\sum\limits_{i+j+m\leq n-1}k_{1,i}k_{2,j}k_{3,m}x^{i+j+m}$ and $\overline{G}(p_{1},p_{2},p_{3})=\sum\limits_{i+j+m=n}k_{1,i}k_{2,j}k_{3,m}$. Then it is also easy to see that $\overline{F}(p_{1},p_{2},p_{3})\in I_{0}$ and $\overline{G}(p_{1},p_{2},p_{3})\in Z$. Thus we have defined mappings $\overline{F}:I_{0}\times I_{0}\times I_{0}\longrightarrow I_{0}$ and $\overline{G}:I_{0}\times I_{0}\times I_{0}\longrightarrow Z$. Lemma 1. Suppose $p,p_{1},p_{2},p_{3}\in I_{0}$. Then: (1). $F(p_{1},p_{2})=F(p_{2},p_{1})$, $G(p_{1},p_{2})=G(p_{2},p_{1})$; (2). $F(p_{1},p_{2}+p_{3})=F(p_{1},p_{2})+F(p_{1},p_{3})$, $G(p_{1},p_{2}+p_{3})=G(p_{1},p_{2})+G(p_{1},p_{3})$; (3). $F(0,p)=0$, $G(0,p)=0$; (4). if $F(p_{1},p_{2})=0$, then $G(p_{1},p_{2})\geq 0$; (5). $p_{1}-F(p_{1},p_{2})\in I_{0}$, and $p_{1}=F(p_{1},p_{2})\Longleftrightarrow p_{1}=0$; (6). $F(F(p_{1},p_{2}),p_{3})=\overline{F}(p_{1},p_{2},p_{3})$, $G(F(p_{1},p_{2}),p_{3})=\overline{G}(p_{1},p_{2},p_{3})$; (7). $p_{1}+p_{2}\in I_{0}$, and $p_{1}+p_{2}=0\Longleftrightarrow p_{1}=p_{2}=0$. Proof. (1), (2), (3), (6) and (7) are trivial. (4). Except for the trivial cases, we may suppose $p_{1}(x)=\sum\limits_{i=n_{1}}^{n-1}k_{1,i}x^{i}$, $p_{2}(x)=\sum\limits_{i=n_{2}}^{n-1}k_{2,i}x^{i}$, with $k_{1,n_{1}}>0$ and $k_{2,n_{2}}>0$. Then from $F(p_{1},p_{2})=0$ we have $n_{1}+n_{2}\geq n$. If $n_{1}+n_{2}=n$, then $G(p_{1},p_{2})=k_{1,n_{1}}k_{2,n_{2}}>0$; otherwise $n_{1}+n_{2}>n$ and $G(p_{1},p_{2})=0$. (5). Except for the trivial cases, we may suppose $p_{1}(x)=\sum\limits_{i=n_{1}}^{n-1}k_{1,i}x^{i}$, $p_{2}(x)=\sum\limits_{i=n_{2}}^{n-1}k_{2,i}x^{i}$, with $k_{1,n_{1}}>0$ and $k_{2,n_{2}}>0$. Then the lowest-degree term of $p_{1}-F(p_{1},p_{2})$ is $k_{1,n_{1}}x^{n_{1}}$, so $p_{1}-F(p_{1},p_{2})\in I_{0}$. If $p_{1}\neq 0$, then for the same reason $p_{1}-F(p_{1},p_{2})\neq 0$. Thus, the lemma is proved. Now, we take two infinite sets $U$ and $V$ such that $U\cap V=\emptyset$. Let $f:I_{0}\times I_{0}\times Z\rightarrow U$ and $g:I_{0}\times I_{0}\times Z\rightarrow V$ be two one-to-one maps. Then, we construct our example as follows: Let $E_{0}=\{f(p,q,m),\,g(p,q,m)\mid p,q\in I_{0},\ m\in Z,\ \text{and}\ m\geq 0\ \text{whenever}\ p=q=0\}$. First, we define a partial binary operation $\oplus$ on $E_{0}$ as follows (when we write $x\oplus y=z$, we always mean that $x\oplus y=z=y\oplus x$): (i). $f(p_{1},q_{1},m_{1})\oplus f(p_{2},q_{2},m_{2})=f(p_{1}+p_{2},q_{1}+q_{2},m_{1}+m_{2})$ (the right side is well-defined, see Lemma 1(7)); (ii). if $p_{2}-p_{1}\in I_{0}$, $q_{2}-q_{1}\in I_{0}$, and $m_{2}\geq m_{1}$ whenever $p_{2}=p_{1}$ and $q_{2}=q_{1}$, then $f(p_{1},q_{1},m_{1})\oplus g(p_{2},q_{2},m_{2})=g(p_{2}-p_{1},q_{2}-q_{1},m_{2}-m_{1})$. No other $\oplus$ operation is defined. Next, we define a binary operation $\circ$ on $E_{0}$ as follows (when we write $x\circ y=z$, we always mean that $x\circ y=z=y\circ x$): (i). $f(p_{1},q_{1},m_{1})\circ f(p_{2},q_{2},m_{2})=f\Big{(}F(p_{1},p_{2}),F(q_{1},q_{2}),G(p_{1},p_{2})+G(q_{1},q_{2})\Big{)}$ (the right side is well-defined, see Lemma 1(4)); (ii). $f(p_{1},q_{1},m_{1})\circ g(p_{2},q_{2},m_{2})=f\Big{(}p_{1}-F(p_{1},p_{2}),q_{1}-F(q_{1},q_{2}),m_{1}-G(p_{1},p_{2})-G(q_{1},q_{2})\Big{)}$ (the right side is well-defined, see Lemma 1(3), (5)); (iii).
$g(p_{1},q_{1},m_{1})\circ g(p_{2},q_{2},m_{2})=g\Big{(}p_{1}+p_{2}-F(p_{1},p_{2}),q_{1}+q_{2}-F(q_{1},q_{2}),m_{1}+m_{2}-G(p_{1},p_{2})-G(q_{1},q_{2})\Big{)}$ (the right side is well-defined, see Lemma 1(3), (5), (7)). We denote $f(0,0,0)$ by $0$ and $g(0,0,0)$ by $1$. Proposition 1. $(E_{0},0,1,\oplus,\circ)$ is a sequential effect algebra. Proof. In the proof below, we will use Lemma 1 frequently without annotation. First, we verify that $(E_{0},0,1,\oplus)$ is an effect algebra. (EA1) is obvious. We verify (EA2) as follows: (i). $f(p_{1},q_{1},m_{1})\oplus\Big{(}f(p_{2},q_{2},m_{2})\oplus f(p_{3},q_{3},m_{3})\Big{)}=\Big{(}f(p_{1},q_{1},m_{1})\oplus f(p_{2},q_{2},m_{2})\Big{)}\oplus f(p_{3},q_{3},m_{3})=f(p_{1}+p_{2}+p_{3},q_{1}+q_{2}+q_{3},m_{1}+m_{2}+m_{3})$; (ii). $f(p_{1},q_{1},m_{1})\oplus\Big{(}f(p_{2},q_{2},m_{2})\oplus g(p_{3},q_{3},m_{3})\Big{)}$ or $\Big{(}f(p_{1},q_{1},m_{1})\oplus f(p_{2},q_{2},m_{2})\Big{)}\oplus g(p_{3},q_{3},m_{3})$ is defined iff $p_{3}-p_{1}-p_{2}\in I_{0}$, $q_{3}-q_{1}-q_{2}\in I_{0}$, and $m_{3}\geq m_{1}+m_{2}$ when $p_{3}=p_{1}+p_{2}$ and $q_{3}=q_{1}+q_{2}$; in that case, both equal $g(p_{3}-p_{1}-p_{2},q_{3}-q_{1}-q_{2},m_{3}-m_{1}-m_{2})$. Noting that $f(p,q,m)\oplus g(p,q,m)=g(0,0,0)=1$, we have verified (EA3). For (EA4), we note from our construction that the unique element orthogonal to $g(0,0,0)(=1)$ is $f(0,0,0)(=0)$, that is, $f(0,0,0)\bot g(0,0,0)$ and $f(0,0,0)\oplus g(0,0,0)=g(0,0,0)$. So far, we have proved that $(E_{0},0,1,\oplus)$ is an effect algebra. Next, we verify that $(E_{0},0,1,\oplus,\circ)$ is a sequential effect algebra. (SEA3) and (SEA5) are obvious. We verify (SEA1) as follows: (i). $f(p_{1},q_{1},m_{1})\circ\Big{(}f(p_{2},q_{2},m_{2})\oplus f(p_{3},q_{3},m_{3})\Big{)}=f(p_{1},q_{1},m_{1})\circ f(p_{2},q_{2},m_{2})\oplus f(p_{1},q_{1},m_{1})\circ f(p_{3},q_{3},m_{3})=f\Big{(}F(p_{1},p_{2}+p_{3}),F(q_{1},q_{2}+q_{3}),G(p_{1},p_{2}+p_{3})+G(q_{1},q_{2}+q_{3})\Big{)}$, $g(p_{1},q_{1},m_{1})\circ\Big{(}f(p_{2},q_{2},m_{2})\oplus f(p_{3},q_{3},m_{3})\Big{)}=g(p_{1},q_{1},m_{1})\circ f(p_{2},q_{2},m_{2})\oplus g(p_{1},q_{1},m_{1})\circ f(p_{3},q_{3},m_{3})=f\Big{(}p_{2}+p_{3}-F(p_{1},p_{2}+p_{3}),q_{2}+q_{3}-F(q_{1},q_{2}+q_{3}),m_{2}+m_{3}-G(p_{1},p_{2}+p_{3})-G(q_{1},q_{2}+q_{3})\Big{)}$; (ii). when $f(p_{2},q_{2},m_{2})\oplus g(p_{3},q_{3},m_{3})$ is defined, i.e., when $p_{3}-p_{2}\in I_{0}$, $q_{3}-q_{2}\in I_{0}$, and $m_{3}\geq m_{2}$ if $p_{3}=p_{2}$ and $q_{3}=q_{2}$, $f(p_{1},q_{1},m_{1})\circ\Big{(}f(p_{2},q_{2},m_{2})\oplus g(p_{3},q_{3},m_{3})\Big{)}=f(p_{1},q_{1},m_{1})\circ f(p_{2},q_{2},m_{2})\oplus f(p_{1},q_{1},m_{1})\circ g(p_{3},q_{3},m_{3})=f\Big{(}p_{1}-F(p_{1},p_{3}-p_{2}),q_{1}-F(q_{1},q_{3}-q_{2}),m_{1}-G(p_{1},p_{3}-p_{2})-G(q_{1},q_{3}-q_{2})\Big{)}$, $g(p_{1},q_{1},m_{1})\circ\Big{(}f(p_{2},q_{2},m_{2})\oplus g(p_{3},q_{3},m_{3})\Big{)}=g(p_{1},q_{1},m_{1})\circ f(p_{2},q_{2},m_{2})\oplus g(p_{1},q_{1},m_{1})\circ g(p_{3},q_{3},m_{3})=g\Big{(}p_{1}+p_{3}-p_{2}-F(p_{1},p_{3}-p_{2}),q_{1}+q_{3}-q_{2}-F(q_{1},q_{3}-q_{2}),m_{1}+m_{3}-m_{2}-G(p_{1},p_{3}-p_{2})-G(q_{1},q_{3}-q_{2})\Big{)}$. We verify (SEA2) as follows: $1\circ f(p,q,m)=g(0,0,0)\circ f(p,q,m)=f(p,q,m);$ $1\circ g(p,q,m)=g(0,0,0)\circ g(p,q,m)=g(p,q,m).$ We verify (SEA4) as follows: (i).
$f(p_{1},q_{1},m_{1})\circ\Big{(}f(p_{2},q_{2},m_{2})\circ f(p_{3},q_{3},m_{3})\Big{)}$ $=f(p_{1},q_{1},m_{1})\circ f\Big{(}F(p_{2},p_{3}),F(q_{2},q_{3}),G(p_{2},p_{3})+G(q_{2},q_{3})\Big{)}$ $=f\Big{(}F(p_{1},F(p_{2},p_{3})),F(q_{1},F(q_{2},q_{3})),G(p_{1},F(p_{2},p_{3}))+G(q_{1},F(q_{2},q_{3}))\Big{)}$ $=f\Big{(}\overline{F}(p_{1},p_{2},p_{3}),\overline{F}(q_{1},q_{2},q_{3}),\overline{G}(p_{1},p_{2},p_{3})+\overline{G}(q_{1},q_{2},q_{3})\Big{)}$, by symmetry, $\Big{(}f(p_{1},q_{1},m_{1})\circ f(p_{2},q_{2},m_{2})\Big{)}\circ f(p_{3},q_{3},m_{3})$ $=f(p_{3},q_{3},m_{3})\circ\Big{(}f(p_{1},q_{1},m_{1})\circ f(p_{2},q_{2},m_{2})\Big{)}$ $=f\Big{(}\overline{F}(p_{1},p_{2},p_{3}),\overline{F}(q_{1},q_{2},q_{3}),\overline{G}(p_{1},p_{2},p_{3})+\overline{G}(q_{1},q_{2},q_{3})\Big{)}$, so we have $f(p_{1},q_{1},m_{1})\circ\Big{(}f(p_{2},q_{2},m_{2})\circ f(p_{3},q_{3},m_{3})\Big{)}=\Big{(}f(p_{1},q_{1},m_{1})\circ f(p_{2},q_{2},m_{2})\Big{)}\circ f(p_{3},q_{3},m_{3})$. (ii). $f(p_{1},q_{1},m_{1})\circ\Big{(}f(p_{2},q_{2},m_{2})\circ g(p_{3},q_{3},m_{3})\Big{)}$ $=f(p_{1},q_{1},m_{1})\circ f\Big{(}p_{2}-F(p_{2},p_{3}),q_{2}-F(q_{2},q_{3}),m_{2}-G(p_{2},p_{3})-G(q_{2},q_{3})\Big{)}$ $=f\Big{(}F(p_{1},p_{2}-F(p_{2},p_{3})),F(q_{1},q_{2}-F(q_{2},q_{3})),G(p_{1},p_{2}-F(p_{2},p_{3}))+G(q_{1},q_{2}-F(q_{2},q_{3}))\Big{)}$ $=f\Big{(}F(p_{1},p_{2})-F(p_{1},F(p_{2},p_{3})),F(q_{1},q_{2})-F(q_{1},F(q_{2},q_{3})),G(p_{1},p_{2})-G(p_{1},F(p_{2},p_{3}))+G(q_{1},q_{2})-G(q_{1},F(q_{2},q_{3}))\Big{)}$ $=f\Big{(}F(p_{1},p_{2})-\overline{F}(p_{1},p_{2},p_{3}),F(q_{1},q_{2})-\overline{F}(q_{1},q_{2},q_{3}),G(p_{1},p_{2})-\overline{G}(p_{1},p_{2},p_{3})+G(q_{1},q_{2})-\overline{G}(q_{1},q_{2},q_{3})\Big{)}$, $\Big{(}f(p_{1},q_{1},m_{1})\circ f(p_{2},q_{2},m_{2})\Big{)}\circ g(p_{3},q_{3},m_{3})$ $=f\Big{(}F(p_{1},p_{2}),F(q_{1},q_{2}),G(p_{1},p_{2})+G(q_{1},q_{2})\Big{)}\circ g(p_{3},q_{3},m_{3})$ $=f\Big{(}F(p_{1},p_{2})-F(F(p_{1},p_{2}),p_{3}),F(q_{1},q_{2})-F(F(q_{1},q_{2}),q_{3}),G(p_{1},p_{2})+G(q_{1},q_{2})-G(F(p_{1},p_{2}),p_{3})-G(F(q_{1},q_{2}),q_{3})\Big{)}$ $=f\Big{(}F(p_{1},p_{2})-\overline{F}(p_{1},p_{2},p_{3}),F(q_{1},q_{2})-\overline{F}(q_{1},q_{2},q_{3}),G(p_{1},p_{2})-\overline{G}(p_{1},p_{2},p_{3})+G(q_{1},q_{2})-\overline{G}(q_{1},q_{2},q_{3})\Big{)}$, so we have $f(p_{1},q_{1},m_{1})\circ\Big{(}f(p_{2},q_{2},m_{2})\circ g(p_{3},q_{3},m_{3})\Big{)}=\Big{(}f(p_{1},q_{1},m_{1})\circ f(p_{2},q_{2},m_{2})\Big{)}\circ g(p_{3},q_{3},m_{3})$. (iii).
$f(p_{1},q_{1},m_{1})\circ\Big{(}g(p_{2},q_{2},m_{2})\circ g(p_{3},q_{3},m_{3})\Big{)}$ $=f(p_{1},q_{1},m_{1})\circ g\Big{(}p_{2}+p_{3}-F(p_{2},p_{3}),q_{2}+q_{3}-F(q_{2},q_{3}),m_{2}+m_{3}-G(p_{2},p_{3})-G(q_{2},q_{3})\Big{)}$ $=f\Big{(}p_{1}-F(p_{1},p_{2}+p_{3}-F(p_{2},p_{3})),q_{1}-F(q_{1},q_{2}+q_{3}-F(q_{2},q_{3})),m_{1}-G(p_{1},p_{2}+p_{3}-F(p_{2},p_{3}))-G(q_{1},q_{2}+q_{3}-F(q_{2},q_{3}))\Big{)}$ $=f\Big{(}p_{1}-F(p_{1},p_{2}+p_{3})+\overline{F}(p_{1},p_{2},p_{3}),q_{1}-F(q_{1},q_{2}+q_{3})+\overline{F}(q_{1},q_{2},q_{3}),m_{1}-G(p_{1},p_{2}+p_{3})+\overline{G}(p_{1},p_{2},p_{3})-G(q_{1},q_{2}+q_{3})+\overline{G}(q_{1},q_{2},q_{3})\Big{)}$, $\Big{(}f(p_{1},q_{1},m_{1})\circ g(p_{2},q_{2},m_{2})\Big{)}\circ g(p_{3},q_{3},m_{3})$ $=f\Big{(}p_{1}-F(p_{1},p_{2}),q_{1}-F(q_{1},q_{2}),m_{1}-G(p_{1},p_{2})-G(q_{1},q_{2})\Big{)}\circ g(p_{3},q_{3},m_{3})$ $=f\Big{(}p_{1}-F(p_{1},p_{2})-F(p_{1}-F(p_{1},p_{2}),p_{3}),q_{1}-F(q_{1},q_{2})-F(q_{1}-F(q_{1},q_{2}),q_{3}),m_{1}-G(p_{1},p_{2})-G(q_{1},q_{2})-G(p_{1}-F(p_{1},p_{2}),p_{3})-G(q_{1}-F(q_{1},q_{2}),q_{3})\Big{)}$ $=f\Big{(}p_{1}-F(p_{1},p_{2}+p_{3})+\overline{F}(p_{1},p_{2},p_{3}),q_{1}-F(q_{1},q_{2}+q_{3})+\overline{F}(q_{1},q_{2},q_{3}),m_{1}-G(p_{1},p_{2}+p_{3})+\overline{G}(p_{1},p_{2},p_{3})-G(q_{1},q_{2}+q_{3})+\overline{G}(q_{1},q_{2},q_{3})\Big{)}$, so we have $f(p_{1},q_{1},m_{1})\circ\Big{(}g(p_{2},q_{2},m_{2})\circ g(p_{3},q_{3},m_{3})\Big{)}=\Big{(}f(p_{1},q_{1},m_{1})\circ g(p_{2},q_{2},m_{2})\Big{)}\circ g(p_{3},q_{3},m_{3})$. (iv). $g(p_{1},q_{1},m_{1})\circ\Big{(}g(p_{2},q_{2},m_{2})\circ g(p_{3},q_{3},m_{3})\Big{)}$ $=g(p_{1},q_{1},m_{1})\circ g\Big{(}p_{2}+p_{3}-F(p_{2},p_{3}),q_{2}+q_{3}-F(q_{2},q_{3}),m_{2}+m_{3}-G(p_{2},p_{3})-G(q_{2},q_{3})\Big{)}$ $=g\Big{(}p_{1}+p_{2}+p_{3}-F(p_{2},p_{3})-F(p_{1},p_{2}+p_{3}-F(p_{2},p_{3})),q_{1}+q_{2}+q_{3}-F(q_{2},q_{3})-F(q_{1},q_{2}+q_{3}-F(q_{2},q_{3})),m_{1}+m_{2}+m_{3}-G(p_{2},p_{3})-G(q_{2},q_{3})-G(p_{1},p_{2}+p_{3}-F(p_{2},p_{3}))-G(q_{1},q_{2}+q_{3}-F(q_{2},q_{3}))\Big{)}$ $=g\Big{(}p_{1}+p_{2}+p_{3}-F(p_{2},p_{3})-F(p_{1},p_{2})-F(p_{1},p_{3})+\overline{F}(p_{1},p_{2},p_{3}),q_{1}+q_{2}+q_{3}-F(q_{2},q_{3})-F(q_{1},q_{2})-F(q_{1},q_{3})+\overline{F}(q_{1},q_{2},q_{3}),m_{1}+m_{2}+m_{3}-G(p_{2},p_{3})-G(p_{1},p_{2})-G(p_{1},p_{3})+\overline{G}(p_{1},p_{2},p_{3})-G(q_{2},q_{3})-G(q_{1},q_{2})-G(q_{1},q_{3})+\overline{G}(q_{1},q_{2},q_{3})\Big{)}$, by symmetry, we have $g(p_{1},q_{1},m_{1})\circ\Big{(}g(p_{2},q_{2},m_{2})\circ g(p_{3},q_{3},m_{3})\Big{)}=\Big{(}g(p_{1},q_{1},m_{1})\circ g(p_{2},q_{2},m_{2})\Big{)}\circ g(p_{3},q_{3},m_{3})$. Thus, we have proved that $(E_{0},0,1,\oplus,\circ)$ is a sequential effect algebra, and the proposition is proved. Now, let $P_{i}(x)=x^{i}$.
Then it is easy to see that $F(P_{1},P_{j})=\begin{cases}P_{1+j},&\text{if }j<n-1;\\ 0,&\text{if }j=n-1,\end{cases}\qquad\text{and}\qquad G(P_{1},P_{j})=\begin{cases}0,&\text{if }j<n-1;\\ 1,&\text{if }j=n-1.\end{cases}$ Thus we have $[f(P_{1},0,0)]^{k}=f(P_{1},0,0)\circ f(P_{k-1},0,0)=f(P_{k},0,0)$ for $k<n$, $[f(P_{1},0,0)]^{n}=f(P_{1},0,0)\circ f(P_{n-1},0,0)=f(0,0,1)$, $[f(P_{1},0,0)]^{n+1}=f(P_{1},0,0)\circ f(0,0,1)=0$, and $[f(0,P_{1},0)]^{k}=f(0,P_{1},0)\circ f(0,P_{k-1},0)=f(0,P_{k},0)$ for $k<n$, $[f(0,P_{1},0)]^{n}=f(0,P_{1},0)\circ f(0,P_{n-1},0)=f(0,0,1)$, $[f(0,P_{1},0)]^{n+1}=f(0,P_{1},0)\circ f(0,0,1)=0$. If we denote $f(P_{1},0,0)$ by $a$, $f(0,P_{1},0)$ by $b$, and $f(0,0,1)$ by $c$, then it is easy to get the relations $a>a^{2}>\cdots>a^{n}>a^{n+1},$ $b>b^{2}>\cdots>b^{n}>b^{n+1},$ $a^{k}\neq b^{k}$ for $k<n$, $a^{n}=b^{n}=c\neq 0$, and $a^{n+1}=b^{n+1}=0$. That is, $a$ and $b$ are n-th roots of $c$, but $a$ and $b$ are not k-th roots of $c$ for $k=2,3,\cdots,n-1$; moreover, $a$ and $b$ are also (n+1)-th roots of $0$. So Problem 2 is answered affirmatively. Finally, we would like to point out that for recent advances on sequential effect algebras, see [11-16]. Acknowledgement The authors wish to express their thanks to the referee for his valuable comments and suggestions. References [1]. Ludwig, G. Foundations of Quantum Mechanics (I-II), Springer, New York, 1983. [2]. Ludwig, G. An Axiomatic Basis for Quantum Mechanics (II), Springer, New York, 1986. [3]. Davies, E. B. Quantum Theory of Open Systems, Academic Press, London, 1976. [4]. Busch, P, Grabowski, M and Lahti, P. J. Operational Quantum Physics, Springer-Verlag, Beijing World Publishing Corporation, 1999. [5]. Gudder, S, Nagy, G. Sequential quantum measurements. J. Math. Phys. 42 (2001), 5212-5222. [6]. Gheondea, A, Gudder, S. Sequential product of quantum effects. Proc. Amer. Math. Soc. 132 (2004), 503-512. [7]. Gudder, S, Latrémolière, F. Characterization of the sequential product on quantum effects. J. Math. Phys. 49 (2008), 052106-052112. [8]. Gudder, S, Greechie, R. Sequential products on effect algebras. Rep. Math. Phys. 49 (2002), 87-111. [9]. Foulis, D. J, Bennett, M. K. Effect algebras and unsharp quantum logics. Found. Phys. 24 (1994), 1331-1352. [10]. Gudder, S. Open problems for sequential effect algebras. Internat. J. Theoret. Phys. 44 (2005), 2219-2230. [11]. Shen Jun and Wu Junde. Not each sequential effect algebra is sharply dominating. Phys. Lett. A 373 (2009), 1708-1712. [12]. Shen Jun and Wu Junde. Remarks on the sequential effect algebras. Rep. Math. Phys. 63 (2009), 441-446. [13]. Shen Jun and Wu Junde. Sequential product on standard effect algebra ${\cal E}(H)$. J. Phys. A: Math. Theor. 44 (2009), 345203-345214. [14]. Shen Jun and Wu Junde. The average value inequality in sequential effect algebras. Acta Math. Sinica, English Series 25 (2009), 1330-1336. [15]. Liu Weihua and Wu Junde. A uniqueness problem of the sequence product on operator effect algebra ${\cal E}(H)$. J. Phys. A: Math. Theor. 42 (2009), 185206-185215. [16]. Liu Weihua and Wu Junde. On fixed points of Lüders operation. J. Math. Phys. 50 (2009), 103531-103532.
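As a computational cross-check of these relations, the following sketch reuses the coefficient representation and the $F$, $G$ code above, encodes $f(p,q,m)$ as the triple $(p,q,m)$, and applies rule (i) of the definition of $\circ$ to verify $a^{n}=b^{n}=c$, $a^{k}\neq c\neq b^{k}$ for $1<k<n$, and $a^{n+1}=b^{n+1}=0$.

```python
def seq_prod_ff(e1, e2):
    # rule (i): f(p1,q1,m1) o f(p2,q2,m2)
    #           = f(F(p1,p2), F(q1,q2), G(p1,p2) + G(q1,q2))
    (p1, q1, m1), (p2, q2, m2) = e1, e2
    return (F(p1, p2), F(q1, q2), G(p1, p2) + G(q1, q2))

def power(e, k):
    # the k-th sequential power e o e o ... o e
    r = e
    for _ in range(k - 1):
        r = seq_prod_ff(r, e)
    return r

zero = tuple([0] * (n - 1))
P1 = (1,) + (0,) * (n - 2)            # the polynomial x
a, b = (P1, zero, 0), (zero, P1, 0)   # f(P1, 0, 0) and f(0, P1, 0)
c = (zero, zero, 1)                   # f(0, 0, 1)

assert power(a, n) == power(b, n) == c
assert all(power(a, k) != c != power(b, k) for k in range(2, n))
assert power(a, n + 1) == (zero, zero, 0)   # = f(0, 0, 0) = 0
```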
By (10.15), $\left(\int_{A}\phi_{x_{0}}^{2}dm\right)^{\frac{1}{2}}\leq Cn^{\frac{\psi}{2}\left(\frac{2}{\alpha}-1\right)}$ and by Proposition 2.6, $\left(\int f_{n}^{2}dm\right)^{\frac{1}{2}}\leq Cn^{\frac{1}{\alpha}-\frac{1}{2}}$. Hence we may bound (10.16) by $Cn^{\left(1+\psi\right)\left(\frac{1}{\alpha}-\frac{1}{2}\right)}$. To bound (II), let $B=U_{n}\cap(T_{\sigma^{i}\omega}^{j-i})^{-1}(U_{n}^{c})$. Then, (10.18) $\displaystyle\int_{U_{n}\cap(T_{\sigma^{i}\omega}^{j-i})^{-1}(U_{n}^{c})}f_{n}\cdot f_{n}\circ T_{\sigma^{i}\omega}^{j-i}dm\leq\int_{B}f_{n}\cdot\phi_{x_{0}}\circ T_{\sigma^{i}\omega}^{j-i}dm\leq\left(\int f_{n}^{2}dm\right)^{\frac{1}{2}}\left(\int_{B}\phi_{x_{0}}^{2}\circ T_{\sigma^{i}\omega}^{j-i}dm\right)^{\frac{1}{2}}.$ As before, $\left(\int f_{n}^{2}dm\right)^{\frac{1}{2}}\leq Cn^{\frac{1}{\alpha}-\frac{1}{2}}$ and $\displaystyle\left(\int_{B}\phi_{x_{0}}^{2}\circ T_{\sigma^{i}\omega}^{j-i}dm\right)^{\frac{1}{2}}$ $\displaystyle\leq\left(\int\phi_{x_{0}}^{2}\circ T_{\sigma^{i}\omega}^{j-i}\mathbf{1}_{(T_{\sigma^{i}\omega}^{j-i})^{-1}(U_{n}^{c})}dm\right)^{\frac{1}{2}}\leq C\left(\int_{U_{n}^{c}}\phi_{x_{0}}^{2}dm\right)^{\frac{1}{2}}\leq Cn^{\frac{\psi}{2}\left(\frac{2}{\alpha}-1\right)}$ by (10.15), and so (10.18) is bounded by $Cn^{\left(1+\psi\right)\left(\frac{1}{\alpha}-\frac{1}{2}\right)}$. It follows that ${\rm(II)}+{\rm(III)}\leq C(\log n)n^{1+\left(1+\psi\right)\left(\frac{1}{\alpha}-\frac{1}{2}\right)}=o(n^{\frac{2}{\alpha}})$, since $\psi<1$. This proves that (10.14) is $o(b_{n}^{2})$ and concludes the proof of (10.13). Finally, from (10.11), (10.12) and (10.13), we obtain (10.19) $\displaystyle\lim_{\varepsilon\to 0}\limsup_{n\to\infty}\frac{1}{b_{n}^{2}}{\mathbb{E}}_{\nu^{\omega}}[(S_{\omega,n,n})^{2}]=0,$ which gives the result by taking the limit first in $n$ and then in $\varepsilon$ in (10.10). ∎ ### 10.2. Intermittent maps We prove convergence to a stable law in the setting of Example 5.12 when $\alpha\in(0,1)$. ###### Proof of Theorem 6.9. We apply Proposition 5.8. By Theorem 6.6, it remains to prove (5.7), since $\alpha\in(0,1)$. We will need an estimate for ${\mathbb{E}}_{\nu^{\omega}}(|\phi_{x_{0}}|\mathbf{1}_{\left\{\phi_{x_{0}}\leq\varepsilon b_{n}\right\}})$ which is independent of $\omega$. For this purpose, we introduce the absolutely continuous probability measure $\nu_{\rm max}$ whose density is given by $h_{\rm max}(x)=\kappa x^{-\gamma_{\rm max}}$. Since all densities $h_{\omega}$ belong to the cone $L$, we have that $h_{\omega}\leq\frac{a}{\kappa}h_{\rm max}$ for all $\omega$. Thus, $\frac{1}{b_{n}}\sum_{j=0}^{n-1}{\mathbb{E}}_{\nu^{\sigma^{j}\omega}}(\phi_{x_{0}}\mathbf{1}_{\left\{|\phi_{x_{0}}|\leq\varepsilon b_{n}\right\}})\leq\frac{n}{b_{n}}\frac{a}{\kappa}{\mathbb{E}}_{\nu_{\rm max}}(\phi_{x_{0}}{\mathbf{1}}_{\left\{|\phi_{x_{0}}|\leq\varepsilon b_{n}\right\}}).$ We can easily verify that $\phi_{x_{0}}$ is regularly varying of index $\alpha$ with respect to $\nu_{\rm max}$, with scaling sequence equal to $(b_{n})_{n\geq 1}$ up to a multiplicative constant factor. Consequently, by Proposition 2.6, we have that, for some constant $c>0$, ${\mathbb{E}}_{\nu_{\rm max}}(\phi_{x_{0}}\mathbf{1}_{\left\{|\phi_{x_{0}}|\leq\varepsilon b_{n}\right\}})\sim c\varepsilon^{1-\alpha}n^{\frac{1}{\alpha}-1}.$ Multiplying by $n/b_{n}$, which is of order $n^{1-\frac{1}{\alpha}}$, the bound above is thus of order $\varepsilon^{1-\alpha}$, uniformly in $\omega$, and letting $\varepsilon\to 0$ yields (5.7). ∎ ## 11. The annealed case In this section, we consider the annealed counterparts of our results.
Even though the annealed versions do not seem to follow immediately from the quenched ones, it is easy to obtain them from our proofs in the quenched case. We take $\phi_{x_{0}}(x)=d(x,x_{0})^{-\frac{1}{\alpha}}$ as before, and we consider the convergence on the measure space $\Omega\times[0,1]$ with respect to $\nu_{F}(d\omega,dx)={\mathbb{P}}(d\omega)\nu^{\omega}(dx)$. We give precise annealed results in the case of Theorems 6.7 and 6.9, where we consider $X^{a}_{n}(\omega,x)(t):=\frac{1}{b_{n}}\sum_{j=0}^{\lfloor nt\rfloor-1}\phi_{x_{0}}(T_{\omega}^{j}x)-tc_{n},\>t\geq 0,$ viewed as a random process defined on the probability space $(\Omega\times[0,1],\nu_{F})$. ###### Theorem 11.1. Under the same assumptions as Theorem 6.7, the random process $X^{a}_{n}(t)$ converges in the $J_{1}$ topology to the Lévy $\alpha$-stable process $X_{(\alpha)}(t)$ under the probability measure $\nu_{F}$. ###### Proof. We apply [TK10b, Theorem 1.2] to the skew-product system $(\Omega\times[0,1],F,\nu_{F})$ and the observable $\phi_{x_{0}}$ naturally extended to $\Omega\times[0,1]$. Recall that $\nu_{F}$ is given by the disintegration $\nu_{F}(d\omega,dx)={\mathbb{P}}(d\omega)\nu^{\omega}(dx)$. We have to prove that (a) $N_{n}\,{\stackrel{{\scriptstyle d}}{{\to}}\,}N_{(\alpha)}$, and (b) if $\alpha\in[1,2)$, then for all $\delta>0$, $\lim_{\varepsilon\to 0}\limsup_{n\to\infty}\nu_{F}\left((\omega,x)\,:\,\max_{1\leq k\leq n}\left|\frac{1}{b_{n}}\sum_{j=0}^{k-1}\left[\phi_{x_{0}}(T_{\omega}^{j}x){\mathbf{1}}_{\left\{|\phi_{x_{0}}\circ T_{\omega}^{j}|\leq\varepsilon b_{n}\right\}}(x)-{\mathbb{E}}_{\nu}(\phi_{x_{0}}{\mathbf{1}}_{\left\{|\phi_{x_{0}}|\leq\varepsilon b_{n}\right\}})\right]\right|\geq\delta\right)=0,$ where $N_{n}(\omega,x)(B):=N_{n}^{\omega}(x)(B)=\#\left\{j\geq 1\,:\,\left(\frac{j}{n},\frac{\phi_{x_{0}}(T_{\omega}^{j-1}(x))}{b_{n}}\right)\in B\right\},\;n\geq 1.$ To prove (a), we take an arbitrary $f\in C_{K}^{+}((0,\infty)\times({\mathbb{R}}\setminus\{0\}))$. Then, by Theorem 6.5, we have for ${\mathbb{P}}$-a.e. $\omega$ $\lim_{n\to\infty}{\mathbb{E}}_{\nu^{\omega}}(e^{-N_{n}^{\omega}(f)})={\mathbb{E}}(e^{-N(f)}).$ Integrating with respect to ${\mathbb{P}}$ and using the dominated convergence theorem yields $\lim_{n\to\infty}{\mathbb{E}}_{\nu_{F}}(e^{-N_{n}(f)})={\mathbb{E}}(e^{-N(f)}),$ which proves (a). To prove (b), we simply have to integrate with respect to ${\mathbb{P}}$ the estimates in the proof of Theorem 6.7, which hold uniformly in $\omega\in\Omega$, and then to take the limits as $n\to\infty$ and $\varepsilon\to 0$. ∎ Similarly, we have: ###### Theorem 11.2. Under the same assumptions as Theorem 6.9, $X_{n}^{a}(1)\,{\stackrel{{\scriptstyle d}}{{\to}}\,}X_{(\alpha)}(1)$ under the probability measure $\nu_{F}$. ###### Proof. We can proceed as for Theorem 11.1 in order to check the assumptions of [TK10b, Theorem 1.3] for the skew-product system $(\Omega\times[0,1],F,\nu_{F})$ and the observable $\phi_{x_{0}}$. ∎ ## 12. Appendix The observation that our distributional limit theorems hold for any measure $\mu\ll\nu^{\omega}$ follows from Theorem 1, Corollary 1 and Corollary 3 of Zweimüller’s work [Zwe07]. Let $S_{n}(x)=\frac{1}{b_{n}}[\sum_{j=0}^{n-1}\phi\circ T_{\omega}^{j}(x)-a_{n}]$ and suppose $S_{n}\rightarrow_{\nu_{\omega}}Y$, where $Y$ is a Lévy random variable. We consider first the setup of Example 5.12. We will show that for any measure $\nu$ with density $h$, i.e.,
$d\nu=hdm$ in the cone $L$ of Example 5.12, in particular Lebesgue measure $m$ with $h=1$, we have $S_{n}\rightarrow_{\nu}Y$. We focus on $m$. According to [Zwe07, Theorem 1], it is enough to show that $\int\psi(S_{n})d\nu_{\omega}-\int\psi(S_{n})dm\to 0$ for any $\psi:{\mathbb{R}}\rightarrow{\mathbb{R}}$ which is bounded and uniformly Lipschitz. Fix such a $\psi$ and consider $\int\psi(\frac{1}{b_{n}}[\sum_{j=0}^{n-1}\phi\circ T_{\omega}^{j}(x)-a_{n}])(h_{\omega}-1)dm$ $\leq\int\psi(\frac{1}{b_{n}}[\sum_{j=0}^{n-1}\phi\circ T_{\sigma^{k}\omega}^{j}(x)-a_{n}])P_{\omega}^{k}(h_{\omega}-1)dm$ $\leq\|\psi\|_{\infty}\|P_{\omega}^{k}(h_{\omega}-1)\|_{L^{1}(m)}.$ Since $\|P_{\omega}^{k}(h_{\omega}-1)\|_{L^{1}(m)}\to 0$ in the case of Example 5.12 and of maps satisfying (LY), (Dec) and (Min), the assertion is proved. By [Zwe07, Corollary 3], the proof for continuous time distributional limits follows immediately. ## References * [AA16] Mohamed Abdelkader and Romain Aimino. On the quenched central limit theorem for random dynamical systems. J. Phys. A, 49(24):244002, 13, 2016. * [ADSZ04] J. Aaronson, M. Denker, O. Sarig, and R. Zweimüller. Aperiodicity of cocycles and conditional local limit theorems. Stoch. Dyn., 4(1):31–62, 2004. * [AFV15] Hale Aytaç, Jorge Milhazes Freitas, and Sandro Vaienti. Laws of rare events for deterministic and random dynamical systems. Trans. Amer. Math. Soc., 367(11):8229–8278, 2015. * [AHN+15] Romain Aimino, Huyi Hu, Matthew Nicol, Andrei Török, and Sandro Vaienti. Polynomial loss of memory for maps of the interval with a neutral fixed point. Discrete Contin. Dyn. Syst., 35(3):793–806, 2015. * [ANV15] Romain Aimino, Matthew Nicol, and Sandro Vaienti. Annealed and quenched limit theorems for random expanding dynamical systems. Probab. Theory Related Fields, 162(1-2):233–274, 2015. * [AR16] Romain Aimino and Jérôme Rousseau. Concentration inequalities for sequential dynamical systems of the unit interval. Ergodic Theory Dynam. Systems, 36(8):2384–2407, 2016. * [BB16] Wael Bahsoun and Christopher Bose. Corrigendum: Mixing rates and limit theorems for random intermittent maps (2016 Nonlinearity 29 1417) [MR3476513]. Nonlinearity, 29(12):C4, 2016. * [BG97] Abraham Boyarsky and Paweł Góra. Laws of chaos. Probability and its Applications. Birkhäuser Boston, Inc., Boston, MA, 1997. Invariant measures and dynamical systems in one dimension. * [BGT87] N. H. Bingham, C. M. Goldie, and J. L. Teugels. Regular variation, volume 27 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1987. * [CF20] Harry Crimmins and Gary Froyland. Fourier approximation of the statistical properties of Anosov maps on tori. Nonlinearity, 33(11):6244–6296, 2020. * [CR07] Jean-Pierre Conze and Albert Raugi. Limit theorems for sequential expanding dynamical systems on $[0,1]$. In Ergodic theory and related fields, volume 430 of Contemp. Math., pages 89–121. Amer. Math. Soc., Providence, RI, 2007. * [Eag76] G. K. Eagleson. Some simple conditions for limit theorems to be mixing. Teor. Verojatnost. i Primenen., 21(3):653–660, 1976. * [Fel71] William Feller. An introduction to probability theory and its applications. Vol. II. John Wiley & Sons, Inc., New York-London-Sydney, second edition, 1971. * [FFV17] Ana Cristina Moreira Freitas, Jorge Milhazes Freitas, and Sandro Vaienti. Extreme value laws for non stationary processes generated by sequential and random dynamical systems. Ann. Inst. Henri Poincaré Probab. Stat., 53(3):1341–1370, 2017. * [Gou] Sébastien Gouëzel.
Stable laws for the doubling map. Preprint (2008), https://www.math.sciences.univ-nantes.fr/~gouezel/articles/DoublingStable.pdf. * [Gou04] Sébastien Gouëzel. Central limit theorem and stable laws for intermittent maps. Probab. Theory Related Fields, 128(1):82–122, 2004. * [HNT12] Mark Holland, Matthew Nicol, and Andrei Török. Extreme value theory for non-uniformly expanding dynamical systems. Trans. Amer. Math. Soc., 364(2):661–688, 2012. * [HNTV17] Nicolai Haydn, Matthew Nicol, Andrew Török, and Sandro Vaienti. Almost sure invariance principle for sequential and non-stationary dynamical systems. Trans. Amer. Math. Soc., 369(8):5293–5316, 2017. * [HRY20] Nicolai T. A. Haydn, Jérôme Rousseau, and Fan Yang. Exponential law for random maps on compact manifolds. Nonlinearity, 33(12):6760–6789, 2020. * [HSV99] Masaki Hirata, Benoît Saussol, and Sandro Vaienti. Statistics of return times: a general framework and new applications. Comm. Math. Phys., 206(1):33–55, 1999. * [Kal76] Olav Kallenberg. Random measures. Akademie-Verlag, Berlin; Academic Press, London-New York, 1976. * [KL21] A. Korepanov and J. Leppänen. Loss of memory and moment bounds for nonstationary intermittent dynamical systems. Comm. Math. Phys., 385(2):905–935, 2021. * [KW69] Eustratios G. Kounias and Teng-shan Weng. An inequality and almost sure convergence. Ann. Math. Statist., 40:1091–1093, 1969. * [LSV99] Carlangelo Liverani, Benoît Saussol, and Sandro Vaienti. A probabilistic approach to intermittency. Ergodic Theory Dynam. Systems, 19(3):671–685, 1999. * [NPT21] Matthew Nicol, Felipe Perez Pereira, and Andrew Török. Large deviations and central limit theorems for sequential and random systems of intermittent maps. Ergodic Theory Dynam. Systems, 41(9):2805–2832, 2021. * [NTV18] Matthew Nicol, Andrew Török, and Sandro Vaienti. Central limit theorems for sequential and random intermittent dynamical systems. Ergodic Theory Dynam. Systems, 38(3):1127–1153, 2018. * [Res87] Sidney I. Resnick. Extreme values, regular variation, and point processes, volume 4 of Applied Probability. A Series of the Applied Probability Trust. Springer-Verlag, New York, 1987. * [RSV14] Jérôme Rousseau, Benoit Saussol, and Paulo Varandas. Exponential law for random subshifts of finite type. Stochastic Process. Appl., 124(10):3260–3276, 2014. * [RT15] Jérôme Rousseau and Mike Todd. Hitting times and periodicity in random dynamics. J. Stat. Phys., 161(1):131–150, 2015. * [Rud87] Walter Rudin. Real and complex analysis. McGraw-Hill Book Co., New York, third edition, 1987. * [TK10a] Marta Tyran-Kamińska. Convergence to Lévy stable processes under some weak dependence conditions. Stochastic Process. Appl., 120(9):1629–1650, 2010. * [TK10b] Marta Tyran-Kamińska. Weak convergence to Lévy stable processes in dynamical systems. Stoch. Dyn., 10(2):263–289, 2010. * [Zwe07] Roland Zweimüller. Mixing limit theorems for ergodic transformations. J. Theoret. Probab., 20(4):1059–1071, 2007.
where $D_{t}=(\overline{\nabla f(y_{t})}^{\top}p_{t})^{2}$ and $y_{t}$ depends on $\theta_{t}$. This corresponds to finding the fixed point of $g$, so we apply the fixed-point iteration method. Specifically, we first let $\theta_{t}=0$, so that $y_{t}=x_{t}$, and let $\theta_{t}\leftarrow g(\theta_{t})$ (the above corresponding to Line 7 of Algorithm 4); then we calculate $y_{t}$ again using the new value of $\theta_{t}$ (corresponding to Line 8), and let $\theta_{t}\leftarrow g(\theta_{t})$ (corresponding to Line 9). We find that two iterations suffice to reach satisfactory performance. Note that two additional queries are required to obtain $\nabla f(y_{t}^{(0)})^{\top}p_{t}$ and $\nabla f(y_{t}^{(1)})^{\top}p_{t}$ used in Line 7 and Line 9. Since $D_{t}=(\overline{\nabla f(y_{t})}^{\top}p_{t})^{2}=\frac{(\nabla f(y_{t})^{\top}p_{t})^{2}}{\|\nabla f(y_{t})\|^{2}}$, we need to estimate $\|\nabla f(y_{t})\|^{2}$ as introduced in Section C.3. However, $y_{t}^{(0)}$ and $y_{t}^{(1)}$ in Algorithm 4 are different from both $y_{t}$ and $y_{t-1}$, so estimating $\|\nabla f(y_{t}^{(0)})\|^{2}$ and $\|\nabla f(y_{t}^{(1)})\|^{2}$ as in Section C.3 would require many additional queries (since the query results of the directional derivative at $y_{t-1}$ or $y_{t}$ cannot be reused). Therefore, we introduce one additional approximation: we use the estimate of $\|\nabla f(y_{t-1})\|^{2}$ as the approximation of $\|\nabla f(y_{t}^{(0)})\|^{2}$ and $\|\nabla f(y_{t}^{(1)})\|^{2}$. Since the gradient norm itself is relatively large (compared with, e.g., directional derivatives) and the single-step update in zeroth-order optimization is relatively small, we expect that $\|\nabla f(y_{t}^{(0)})\|^{2}$ and $\|\nabla f(y_{t}^{(1)})\|^{2}$ are close to $\|\nabla f(y_{t-1})\|^{2}$. In Algorithm 4, Line 14 estimates $\|\nabla f(y_{t})\|^{2}$ by Eq. (153), and the estimator is denoted $\|\hat{\nabla}f_{t}\|^{2}$; in the next iteration, it is used to approximate $\|\nabla f(y_{t+1}^{(0)})\|^{2}$ and $\|\nabla f(y_{t+1}^{(1)})\|^{2}$. Finally, we note that in the experiments, when using Algorithm 4, the error brought by the approximation of $\|\nabla f(y_{t}^{(0)})\|^{2}$ and $\|\nabla f(y_{t}^{(1)})\|^{2}$ sometimes makes the performance of the algorithm not robust, especially when $q$ is small (e.g. $q=10$), which could cause the algorithm to diverge. Therefore, we propose two tricks to suppress the influence of the approximation error (we note that in practice, the second trick is more important, while the first trick is often not necessary given the application of the second trick): * • To reduce the variance of $\|\hat{\nabla}f_{t}\|$ when $q$ is small, we let $\displaystyle\|\hat{\nabla}f_{t}^{\mathrm{avg}}\|^{2}=\frac{1}{k}\sum_{s=t-k+1}^{t}\|\hat{\nabla}f_{s}\|^{2},$ (167) and use $\|\hat{\nabla}f_{t-1}^{\mathrm{avg}}\|^{2}$ to replace $\|\hat{\nabla}f_{t-1}\|^{2}$ in Line 7 and Line 9. In our experiments we choose $k=10$. Compared with $\|\hat{\nabla}f_{t-1}\|^{2}$, using $\|\hat{\nabla}f_{t-1}^{\mathrm{avg}}\|^{2}$ to estimate $\|\nabla f(y_{t}^{(0)})\|^{2}$ and $\|\nabla f(y_{t}^{(1)})\|^{2}$ could reduce the variance at the cost of increased bias. * • Although $D_{t}\leq 1$, it is possible that $\hat{D}_{t}$ in Line 7 and Line 9 is larger than $1$, which could lead to a negative $\theta_{t}$. Therefore, a clipping of $\hat{D}_{t}$ is required.
In our experiments, we observe that a $\hat{D}_{t}$ which is less than but very close to $1$ (when caused by an accidentally large approximation error) could also lead to instability of the optimization, perhaps because it leads to a too large value of $\theta_{t}$ used to determine $y_{t}$ and to update $m_{t}$. Therefore, we let $\hat{D}_{t}\leftarrow\min\{\hat{D}_{t},B_{\mathrm{ub}}\}$ in Line 7 and Line 9 before calculating $\theta_{t}$, where $0<B_{\mathrm{ub}}\leq 1$ is fixed. In our experiments we set $B_{\mathrm{ub}}$ to $0.6$. We leave a more systematic study of the approximation error as future work. ### C.5 Implementation of History-PARS in practice (History-PARS-Impl) In PARS, when using a specific prior instead of a prior from a general source, we can utilize some properties of the prior. When using the historical prior, we find that $D_{t}$ is usually similar to $D_{t-1}$; intuitively, this happens when the smoothness of the objective function does not change quickly along the optimization trajectory. Therefore, the best value of $\theta_{t}$ should also be similar to the best value of $\theta_{t-1}$. Based on this observation, we can directly use $\theta_{t-1}$ as the value of $\theta_{t}$ in step $t$, where the value of $\theta_{t-1}$ is obtained with $y_{t-1}$ in step $t-1$. Following this thread, we present our implementation of History-PARS, i.e. History-PARS-Impl, in Algorithm 5. Algorithm 5 History-PARS in implementation (History-PARS-Impl) 1:The $L$-smooth black-box function $f$; initialization $x_{0}$; $\hat{L}$ as an upper bound of $L$ ($\hat{L}\geq L$); query count per iteration $q$ (cannot be too small); total number of iterations $T$; $\gamma_{0}>0$. 2:$x_{T}$ as the approximate minimizer of $f$. 3:$m_{0}\leftarrow x_{0}$; 4:$\theta_{-1}\leftarrow 0$; 5:$v_{-1}\sim\mathcal{U}(\mathbb{S}_{d-1})$; 6:for $t=0$ to $T-1$ do 7: $y_{t}\leftarrow(1-\alpha_{t})x_{t}+\alpha_{t}m_{t}$, where $\alpha_{t}\geq 0$ is a positive root of the equation $\alpha_{t}^{2}=\theta_{t-1}(1-\alpha_{t})\gamma_{t}$; $\gamma_{t+1}\leftarrow(1-\alpha_{t})\gamma_{t}$; 8: Sample an orthonormal set $\{u_{i}\}_{i=1}^{q}$ in the subspace perpendicular to $v_{t-1}$; 9: $g_{1}(y_{t})\leftarrow\sum_{i=1}^{q}\nabla f(y_{t})^{\top}u_{i}\cdot u_{i}+\nabla f(y_{t})^{\top}v_{t-1}\cdot v_{t-1}$; 10: $g_{2}(y_{t})\leftarrow\frac{d-1}{q}\sum_{i=1}^{q}\nabla f(y_{t})^{\top}u_{i}\cdot u_{i}+\nabla f(y_{t})^{\top}v_{t-1}\cdot v_{t-1}$; 11: $\theta_{t}\leftarrow\frac{D_{t}+\frac{q}{d-1}(1-D_{t})}{\hat{L}\left(D_{t}+\frac{d-1}{q}(1-D_{t})\right)}$, where $D_{t}$ is estimated using Eq. (154) with $p_{t}=v_{t-1}$; 12: $x_{t+1}\leftarrow y_{t}-\frac{1}{\hat{L}}g_{1}(y_{t})$, $m_{t+1}\leftarrow m_{t}-\frac{\theta_{t-1}}{\alpha_{t}}g_{2}(y_{t})$; 13: $v_{t}\leftarrow g_{1}(y_{t})$; 14:end for 15:return $x_{T}$. ### C.6 Full version of Algorithm 8 considering the strong convexity parameter and its convergence theorem Algorithm 6 Extended accelerated random search framework for $\tau\geq 0$ 1:The $L$-smooth and $\tau$-strongly convex black-box function $f$; initialization $x_{0}$; $\hat{L}$ as an upper bound of $L$ ($\hat{L}\geq L$); $\hat{\tau}$ as a lower bound of $\tau$ ($0\leq\hat{\tau}\leq\tau$); iteration number $T$; $\gamma_{0}\geq\hat{\tau}$. 2:$x_{T}$ as the approximate minimizer of $f$.
3:$m_{0}\leftarrow x_{0}$; 4:for $t=0$ to $T-1$ do 5: Find a $\theta_{t}$ such that $\theta_{t}\leq\frac{\mathbb{E}_{t}\left[\left(\nabla f(y_{t})^{\top}v_{t}\right)^{2}\right]}{\hat{L}\cdot\mathbb{E}_{t}[\|g_{2}(y_{t})\|^{2}]}$, in which $\theta_{t}$, $y_{t}$ and $g_{2}(y_{t})$ are defined in the following 3 steps: 6: Step 1: $y_{t}\leftarrow(1-\beta_{t})x_{t}+\beta_{t}m_{t}$, where $\beta_{t}:=\frac{\alpha_{t}\gamma_{t}}{\gamma_{t}+\alpha_{t}\hat{\tau}}$, $\alpha_{t}\geq 0$ is a positive root of the equation $\alpha_{t}^{2}=\theta_{t}((1-\alpha_{t})\gamma_{t}+\alpha_{t}\hat{\tau})$; $\gamma_{t+1}\leftarrow(1-\alpha_{t})\gamma_{t}+\alpha_{t}\hat{\tau}$; 7: Step 2: Let $v_{t}$ be a random vector s.t. $\|v_{t}\|=1$; $g_{1}(y_{t})\leftarrow\nabla f(y_{t})^{\top}v_{t}\cdot v_{t}$; 8: Step 3: Let $g_{2}(y_{t})$ be an unbiased estimator of $\nabla f(y_{t})$, i.e. $\mathbb{E}_{t}[g_{2}(y_{t})]=\nabla f(y_{t})$; 9: $\lambda_{t}\leftarrow\frac{\alpha_{t}}{\gamma_{t+1}}\hat{\tau}$; 10: $x_{t+1}\leftarrow y_{t}-\frac{1}{\hat{L}}g_{1}(y_{t})$, $m_{t+1}\leftarrow(1-\lambda_{t})m_{t}+\lambda_{t}y_{t}-\frac{\theta_{t}}{\alpha_{t}}g_{2}(y_{t})$; 11:end for 12:return $x_{T}$. In fact, the ARS algorithm proposed in [26] requires knowledge of the strong convexity parameter $\tau$ of the objective function, and the original algorithm depends on $\tau$. The ARS algorithm has a convergence rate for general smooth convex functions, and also has another, potentially better convergence rate if $\tau>0$. In previous sections, for simplicity, we supposed $\tau=0$ and illustrated the corresponding extension in Algorithm 8. In fact, for the general case $\tau\geq 0$, the original ARS can also be extended to allow for the incorporation of prior information. We present the extension of ARS to $\tau\geq 0$ in Algorithm 6. Note that our modification is similar to that in Algorithm 8. For Algorithm 6, we can also provide a convergence guarantee, as shown in Theorem 7. Note that after considering the strong convexity parameter in the algorithm, we have an additional convergence guarantee, i.e. Eq. (169). In the corresponding PARS algorithm, we have $\theta_{t}\geq\frac{q^{2}}{\hat{L}d^{2}}$, so the convergence rate of PARS is not worse than that of ARS and admits improvement given a good prior. ###### Theorem 7. Let $x^{*}:=\mathrm{argmin}_{x}f(x)$. Then in Algorithm 6, if $f$ is convex, we have $\displaystyle\mathbb{E}\left[(f(x_{T})-f(x^{*}))\left(1+\frac{\sqrt{\gamma_{0}}}{2}\sum_{t=0}^{T-1}\sqrt{\theta_{t}}\right)^{2}\right]\leq f(x_{0})-f(x^{*})+\frac{\gamma_{0}}{2}\|x_{0}-x^{*}\|^{2}$ (168) and $\displaystyle\mathbb{E}\left[(f(x_{T})-f(x^{*}))\exp\left(\sqrt{\hat{\tau}}\sum_{t=0}^{T-1}\sqrt{\theta_{t}}\right)\right]\leq f(x_{0})-f(x^{*})+\frac{\gamma_{0}}{2}\|x_{0}-x^{*}\|^{2}.$ (169) ###### Proof. Let $L_{e}:=\frac{\hat{L}}{2\hat{L}-L}\cdot\hat{L}$. We still have Eq. (18), so $\displaystyle\mathbb{E}_{t}[f(x_{t+1})]$ $\displaystyle\leq f(y_{t})-\frac{\mathbb{E}_{t}\left[\left(\nabla f(y_{t})^{\top}v_{t}\right)^{2}\right]}{2L_{e}}$ (170) $\displaystyle\leq f(y_{t})-\frac{\mathbb{E}_{t}\left[\left(\nabla f(y_{t})^{\top}v_{t}\right)^{2}\right]}{2\hat{L}}.$ (171) For any $x$, define $\delta_{t}(x):=\frac{\gamma_{t}}{2}\|m_{t}-x\|^{2}+f(x_{t})-f(x)$. Let $p_{t}:=(1-\lambda_{t})m_{t}+\lambda_{t}y_{t}$. We first establish an auxiliary identity. Since $(1-\beta_{t})x_{t}+\beta_{t}m_{t}=y_{t}=(1-\beta_{t})y_{t}+\beta_{t}y_{t}$, we have $m_{t}-y_{t}=\frac{1-\beta_{t}}{\beta_{t}}(y_{t}-x_{t})$.
So $\displaystyle p_{t}$ $\displaystyle=(1-\lambda_{t})m_{t}+\lambda_{t}y_{t}=y_{t}+(1-\lambda_{t})(m_{t}-y_{t})=y_{t}+(1-\lambda_{t})\frac{1-\beta_{t}}{\beta_{t}}(y_{t}-x_{t}).$ (172) By $\beta_{t}=\frac{\alpha_{t}\gamma_{t}}{\gamma_{t}+\alpha_{t}\hat{\tau}}$, $\gamma_{t+1}=(1-\alpha_{t})\gamma_{t}+\alpha_{t}\hat{\tau}$ and $\lambda_{t}=\frac{\alpha_{t}}{\gamma_{t+1}}\hat{\tau}$, after eliminating $\gamma_{t}$ and $\gamma_{t+1}$, we have $(1-\lambda_{t})\frac{1-\beta_{t}}{\beta_{t}}=\frac{1-\alpha_{t}}{\alpha_{t}}$. Hence $p_{t}=y_{t}+\frac{1-\alpha_{t}}{\alpha_{t}}(y_{t}-x_{t})$, which means $\displaystyle y_{t}=(1-\alpha_{t})x_{t}+\alpha_{t}p_{t}.$ (173) Now we start the main proof. $\displaystyle\delta_{t+1}(x)$ $\displaystyle=\frac{\gamma_{t+1}}{2}\|m_{t+1}-x\|^{2}+f(x_{t+1})-f(x)$ (174) $\displaystyle=\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}-\frac{\gamma_{t+1}\theta_{t}}{\alpha_{t}}g_{2}(y_{t})^{\top}(p_{t}-x)+\frac{\gamma_{t+1}\theta_{t}^{2}}{2\alpha_{t}^{2}}\|g_{2}(y_{t})\|^{2}+f(x_{t+1})-f(x)$ (175) $\displaystyle=\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}-\alpha_{t}g_{2}(y_{t})^{\top}(p_{t}-x)+\frac{\theta_{t}}{2}\|g_{2}(y_{t})\|^{2}+f(x_{t+1})-f(x).$ (176) Hence $\displaystyle\mathbb{E}_{t}[\delta_{t+1}(x)]$ $\displaystyle=\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}-\alpha_{t}\nabla f(y_{t})^{\top}(p_{t}-x)+\frac{\theta_{t}}{2}\mathbb{E}_{t}[\|g_{2}(y_{t})\|^{2}]+\mathbb{E}_{t}[f(x_{t+1})]-f(x)$ (177) $\displaystyle\leq\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}-\alpha_{t}\nabla f(y_{t})^{\top}(p_{t}-x)+\frac{\mathbb{E}_{t}\left[\left(\nabla f(y_{t})^{\top}v_{t}\right)^{2}\right]}{2\hat{L}}+\mathbb{E}_{t}[f(x_{t+1})]-f(x)$ (178) $\displaystyle\leq\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}-\alpha_{t}\nabla f(y_{t})^{\top}(p_{t}-x)+f(y_{t})-f(x)$ (179) $\displaystyle=\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}-\nabla f(y_{t})^{\top}(\alpha_{t}p_{t}-\alpha_{t}x)+f(y_{t})-f(x)$ (180) $\displaystyle=\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}+\nabla f(y_{t})^{\top}(-y_{t}+(1-\alpha_{t})x_{t}+\alpha_{t}x)+f(y_{t})-f(x)$ (181) $\displaystyle=\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}+\alpha_{t}\left(f(y_{t})+\nabla f(y_{t})^{\top}(x-y_{t})\right)$ (182) $\displaystyle\quad+(1-\alpha_{t})\left(f(y_{t})+\nabla f(y_{t})^{\top}(x_{t}-y_{t})\right)-f(x)$ (183) $\displaystyle\leq\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}+(1-\alpha_{t})f(x_{t})-(1-\alpha_{t})f(x)-\frac{\alpha_{t}\tau}{2}\|x-y_{t}\|^{2}.$ (184) We also have $\displaystyle\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}$ $\displaystyle=\frac{\gamma_{t+1}}{2}\|(1-\lambda_{t})m_{t}+\lambda_{t}y_{t}-x\|^{2}$ (185) $\displaystyle=\frac{\gamma_{t+1}}{2}\|(1-\lambda_{t})(m_{t}-x)+\lambda_{t}(y_{t}-x)\|^{2}$ (186) $\displaystyle\leq\frac{\gamma_{t+1}(1-\lambda_{t})}{2}\|m_{t}-x\|^{2}+\frac{\gamma_{t+1}\lambda_{t}}{2}\|y_{t}-x\|^{2}$ (187) $\displaystyle=\frac{\gamma_{t+1}(1-\lambda_{t})}{2}\|m_{t}-x\|^{2}+\frac{\alpha_{t}\hat{\tau}}{2}\|y_{t}-x\|^{2}$ (188) $\displaystyle=(1-\alpha_{t})\frac{\gamma_{t}}{2}\|m_{t}-x\|^{2}+\frac{\alpha_{t}\hat{\tau}}{2}\|x-y_{t}\|^{2},$ (189) where the inequality is due to Jensen’s inequality applied to the convex function $\|\cdot\|^{2}$, and the third equality is obtained after substituting $\lambda_{t}\gamma_{t+1}=\alpha_{t}\hat{\tau}$ by the definition of $\lambda_{t}$. Since $\gamma_{t+1}=(1-\alpha_{t})\gamma_{t}+\alpha_{t}\hat{\tau}=(1-\alpha_{t})\gamma_{t}+\lambda_{t}\gamma_{t+1}$, we have $\gamma_{t+1}(1-\lambda_{t})=(1-\alpha_{t})\gamma_{t}$, which leads to the last equality. 
Hence $\displaystyle\mathbb{E}_{t}[\delta_{t+1}(x)]$ $\displaystyle\leq\frac{\gamma_{t+1}}{2}\|p_{t}-x\|^{2}+(1-\alpha_{t})f(x_{t})-(1-\alpha_{t})f(x)-\frac{\alpha_{t}\tau}{2}\|x-y_{t}\|^{2}$ (190) $\displaystyle=(1-\alpha_{t})\delta_{t}(x)+\frac{\alpha_{t}(\hat{\tau}-\tau)}{2}\|x-y_{t}\|^{2}$ (191) $\displaystyle\leq(1-\alpha_{t})\delta_{t}(x).$ (192) Therefore, $\displaystyle\delta_{0}(x)$ $\displaystyle\geq\frac{1}{1-\alpha_{0}}\mathbb{E}[\delta_{1}(x)]=\mathbb{E}\left[\frac{\delta_{1}(x)}{1-\alpha_{0}}\right]$ $\displaystyle\geq\mathbb{E}\left[\frac{\mathbb{E}_{1}[\delta_{2}(x)]}{(1-\alpha_{0})(1-\alpha_{1})}\right]=\mathbb{E}\left[\mathbb{E}_{1}\left[\frac{\delta_{2}(x)}{(1-\alpha_{0})(1-\alpha_{1})}\right]\right]=\mathbb{E}\left[\frac{\delta_{2}(x)}{(1-\alpha_{0})(1-\alpha_{1})}\right]$ $\displaystyle\geq\ldots$ $\displaystyle\geq\mathbb{E}\left[\frac{\delta_{T}(x)}{\prod_{t=0}^{T-1}(1-\alpha_{t})}\right].$ We have $\delta_{T}(x)\geq f(x_{T})-f(x)$. To prove the theorem, let $x=x^{*}$. It remains to give an upper bound on $\prod_{t=0}^{T-1}(1-\alpha_{t})$. Let $\psi_{k}:=\prod_{t=0}^{k-1}(1-\alpha_{t})$ and $a_{k}:=\frac{1}{\sqrt{\psi_{k}}}$; using $\sqrt{\psi_{k+1}}\leq\sqrt{\psi_{k}}$, we have $\displaystyle a_{k+1}-a_{k}$ $\displaystyle=\frac{1}{\sqrt{\psi_{k+1}}}-\frac{1}{\sqrt{\psi_{k}}}=\frac{\sqrt{\psi_{k}}-\sqrt{\psi_{k+1}}}{\sqrt{\psi_{k}\psi_{k+1}}}=\frac{\psi_{k}-\psi_{k+1}}{\sqrt{\psi_{k}\psi_{k+1}}(\sqrt{\psi_{k}}+\sqrt{\psi_{k+1}})}$ (193) $\displaystyle\geq\frac{\psi_{k}-\psi_{k+1}}{2\sqrt{\psi_{k}\psi_{k+1}}\sqrt{\psi_{k}}}$ (194) $\displaystyle=\frac{\psi_{k}-(1-\alpha_{k})\psi_{k}}{2\psi_{k}\sqrt{\psi_{k+1}}}=\frac{\alpha_{k}}{2\sqrt{\psi_{k+1}}}=\frac{\sqrt{\gamma_{k+1}\theta_{k}}}{2\sqrt{\psi_{k+1}}}=\frac{\sqrt{\theta_{k}}}{2}\sqrt{\frac{\gamma_{k+1}}{\psi_{k+1}}}$ (195) $\displaystyle\geq\frac{\sqrt{\gamma_{0}\theta_{k}}}{2}.$ (196) The last step is because $\gamma_{t+1}\geq(1-\alpha_{t})\gamma_{t}$, so $\frac{\gamma_{k+1}}{\gamma_{0}}\geq\prod_{t=0}^{k}(1-\alpha_{t})=\psi_{k+1}$. Since $\psi_{0}=1$, $a_{0}=1$. Hence $a_{T}\geq 1+\frac{\sqrt{\gamma_{0}}}{2}\sum_{t=0}^{T-1}\sqrt{\theta_{t}}$. Therefore, $\displaystyle\psi_{T}\leq\frac{1}{\left(1+\frac{\sqrt{\gamma_{0}}}{2}\sum_{t=0}^{T-1}\sqrt{\theta_{t}}\right)^{2}}.$ (197) Meanwhile, since $\gamma_{0}\geq\hat{\tau}$ and $\gamma_{t+1}=(1-\alpha_{t})\gamma_{t}+\alpha_{t}\hat{\tau}$, we have $\gamma_{t}\geq\hat{\tau}$ for all $t$. Then $\alpha_{t}^{2}=\theta_{t}((1-\alpha_{t})\gamma_{t}+\alpha_{t}\hat{\tau})\geq\theta_{t}\hat{\tau}$, hence $\alpha_{t}\geq\sqrt{\hat{\tau}\theta_{t}}$. Therefore, $\displaystyle\psi_{T}\leq\prod_{t=0}^{T-1}\left(1-\sqrt{\hat{\tau}\theta_{t}}\right)\leq\exp\left(-\sqrt{\hat{\tau}}\sum_{t=0}^{T-1}\sqrt{\theta_{t}}\right).$ (198) The proof is completed. ∎ ## Appendix D Supplemental materials for Section 5 ### D.1 More experimental settings in Section 5.1 In the experiments in this section, we set the step size $\mu$ used in finite differences (Eq. (1)) to $10^{-6}$. #### D.1.1 Experimental settings for Figure 1 ##### Prior We adopt the setting in Section 4.1 of [20] to mimic the case where the prior is a biased version of the true gradient. Specifically, we let $p_{t}=\overline{\overline{\nabla f(x_{t})}+(b+n)}$, where $b$ is a fixed vector and $n$ is a random vector uniformly sampled each iteration, with $\|b\|=1$ and $\|n\|=1.5$. ##### Test functions Our test functions are as follows.
We choose $f_{1}$ as the "worst-case smooth convex function" used to construct the lower-bound complexity of first-order optimization, as used in [26]: $\displaystyle f_{1}(x)=\frac{1}{2}(x^{(1)})^{2}+\frac{1}{2}\sum_{i=1}^{d-1}(x^{(i+1)}-x^{(i)})^{2}+\frac{1}{2}(x^{(d)})^{2}-x^{(1)},\text{ where }x_{0}=\mathbf{0}.$ (199) We choose $f_{2}$ as a simple smooth and strongly convex function with a worst-case initialization: $\displaystyle f_{2}(x)=\sum_{i=1}^{d}\left(\frac{i}{d}\cdot(x^{(i)})^{2}\right),\text{ where }x_{0}^{(1)}=d,x_{0}^{(i)}=0\text{ for }i\geq 2.$ (200) We choose $f_{3}$ as the Rosenbrock function ($f_{8}$ in [13]), a well-known non-convex function used to test the performance of optimization algorithms: $\displaystyle f_{3}(x)=\sum_{i=1}^{d-1}\left(100\left((x^{(i)})^{2}-x^{(i+1)}\right)^{2}+(x^{(i)}-1)^{2}\right),\text{ where }x_{0}=\mathbf{0}.$ (201) We note that ARS, PARS-Naive and PARS could depend on a strong convexity parameter (see Section C.6) when applied to a strongly convex function. Therefore, for $f_{2}$ we set this parameter to the ground-truth value. For $f_{1}$ and $f_{3}$ we set it to zero, i.e. we use Algorithm 8. #### D.1.2 Experimental settings for Figure 2 In this part we set $d=500$ and set $q$ such that each iteration of each algorithm costs $11$ queries. Since, when using the historical prior, we aim to build algorithms agnostic to the parameters of the objective function, we set the strong convexity parameter in ARS-based methods to $0$, even though we know that, e.g., $f_{2}$ is strongly convex. Correspondingly, we adopt the adaptive restart scheme based on function values [27] to reach the ideal performance. We describe our implementation here. In each iteration (say step $t$) of Algorithm 5, we check whether $f(y_{t})\leq f(y_{t-1})$. If not, we set $m_{t+1}\leftarrow x_{t+1}$ and $\gamma_{t+1}\leftarrow\gamma_{0}$ as the restart. ### D.2 More experimental settings in Section 5.2 We perform attacks under the $\ell_{2}$ norm with the perturbation bound set to $3.514$ ($=32/255\times\sqrt{784}$), assuming each pixel value has the range $[0,1]$. To deal with the constraints in the optimization, in each iteration we perform a projection after the update to ensure that the constraints are satisfied. The objective function to maximize is the C&W loss [7], i.e. $f(x)=Z(x)_{t}-\max_{i\neq t}Z(x)_{i}$, where $Z(x)$ denotes the logits given the input $x$. The network architecture is from the PyTorch example (https://github.com/pytorch/examples/tree/master/mnist). We set the step size $\mu$ used in finite differences (Eq. (1)) to $10^{-4}$. ## Appendix E Potential negative societal impacts As a theoretical work, we think this paper can provide valuable insights for understanding existing algorithms and may inspire new algorithms for zeroth-order optimization, while having no significant potential negative societal impacts. One may, however, pay attention to its application to query-based black-box adversarial attacks.
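As a supplement to these settings, a minimal NumPy sketch of the three test functions in Eqs. (199)–(201) and their initializations could read as follows; the function names are ours, and $d=500$ is taken from Section D.1.2 for illustration.

```python
import numpy as np

def f1(x):
    # Eq. (199): Nesterov's "worst-case smooth convex function"
    return (0.5 * x[0]**2 + 0.5 * np.sum((x[1:] - x[:-1])**2)
            + 0.5 * x[-1]**2 - x[0])

def f2(x):
    # Eq. (200): axis-wise scaled quadratic
    d = x.shape[0]
    return np.sum(np.arange(1, d + 1) / d * x**2)

def f3(x):
    # Eq. (201): the Rosenbrock function
    return np.sum(100.0 * (x[:-1]**2 - x[1:])**2 + (x[:-1] - 1.0)**2)

d = 500
x0_f1 = np.zeros(d)                                    # x0 = 0 for f1
x0_f2 = np.concatenate(([float(d)], np.zeros(d - 1)))  # x0^(1) = d, rest 0
x0_f3 = np.zeros(d)                                    # x0 = 0 for f3
```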
# Training Semantic Segmentation on Heterogeneous Datasets Panagiotis Meletis and Gijs Dubbelman P. Meletis<EMAIL_ADDRESS>and G. Dubbelman<EMAIL_ADDRESS>are with the Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands. 0000-0001-8054-1760 0000-0001-6635-3245 ###### Abstract We explore semantic segmentation beyond the conventional, single-dataset homogeneous training and bring forward the problem of Heterogeneous Training of Semantic Segmentation (HTSS). HTSS involves simultaneous training on multiple heterogeneous datasets, _i.e_. datasets with conflicting label spaces and different (weak) annotation types from the perspective of semantic segmentation. The HTSS formulation exposes deep networks to a larger and previously unexplored aggregation of information that can potentially enhance semantic segmentation in three directions: i) performance: increased segmentation metrics on seen datasets, ii) generalization: improved segmentation metrics on unseen datasets, and iii) knowledgeability: increased number of recognizable semantic concepts. To research these benefits of HTSS, we propose a unified framework that incorporates heterogeneous datasets in a single-network training pipeline following the established FCN standard. Our framework first curates heterogeneous datasets to bring them into a common format and then trains a single-backbone FCN on all of them simultaneously. To achieve this, it transforms weak annotations, which are incompatible with semantic segmentation, to per-pixel labels, and hierarchizes their label spaces into a universal taxonomy. The trained HTSS models demonstrate performance and generalization gains over a wide range of datasets and extend the inference label space to hundreds of semantic classes. ###### Index Terms: semantic segmentation, heterogeneous datasets, multi-dataset/domain training, weakly/semi-supervised training ## I Introduction Semantic Segmentation [1, 2, 3] is an indispensable building block of upstream systems in various domains, such as automated driving [4, 5], biomedical image analysis [6], virtual/augmented reality, and surveillance [3]. Semantic segmentation is part of the bigger family of image recognition tasks, which includes, among others, image classification and object detection. The success of supervised Convolutional Neural Networks (CNNs) in image classification [7, 8, 9] has established them as the de facto solution for related image recognition tasks, which is in large part attributed to the successful utilization of very large datasets. Unlike CNNs trained for classification or detection, where data collection and labeling are easier, CNNs for semantic segmentation face two fundamental data challenges, which are analyzed in the next paragraphs. Limited size of existing datasets. Datasets for semantic segmentation [10, 11, 12] typically contain 100 to 1000 times fewer annotated images than those for image classification and object detection, as visualized in Fig. 1. The lack of rich datasets causes CNNs for semantic segmentation to exhibit limited performance when evaluated on seen datasets and poor generalization capabilities on unseen datasets [13, 14, 15]. This challenge becomes particularly acute in data-scarce areas, such as street-scene understanding. The main reason for the difference in dataset sizes is the required level-of-detail of the annotations for the task at hand.
For example, COCO creators [16] report that annotating pixels for semantic segmentation of general scenes is 15x slower than drawing bounding boxes, while according to [17] annotating pixels is 78x slower than choosing image-level tags. Figure 1: Comparison of various image understanding datasets with respect to: i) annotation type, ii) number of semantic classes, and iii) number of images (visualized by the area of the circles). Networks for semantic segmentation are typically trained using a single per-pixel (fine) labeled dataset. Low diversity of represented semantic concepts. The second fundamental challenge is related to the number of recognizable semantic concepts a CNN can predict. Typically, a CNN trained with segmentation datasets can recognize a few dozen concepts, since fine-grained semantic classes are absent from these datasets. The complexity of (manual) per-pixel labeling practically constrains the annotated semantic classes to represent on the order of 100 different scene concepts, while datasets with less spatially-detailed annotations (bounding boxes or tags) [18, 19, 20] can reach up to $1,000-10,000$ unique semantic classes (see the vertical axis of Fig. 1). The straightforward way to address the two aforementioned challenges is to annotate more images at the pixel level, or to refine existing annotations with finer-grained (sub-)classes by manual or semi-automated means. This is a natural yet costly approach, since manual labeling is laborious (_e.g_. 90 min. per image for Cityscapes and Vistas [11, 10]) and semi-automated procedures result in insufficient annotation quality when not complemented with human quality control. These two practical challenges motivate us to research an alternative approach. Having analyzed the annotation quality and semantic diversity of existing image understanding datasets (Fig. 1), we observed that, as a whole, they cover thousands of semantic concepts and contain millions of images. Thus, we contemplate a solution that can combine many existing datasets and solve both aforementioned challenges. However, combining datasets is not an easy task, due to their structural differences, but when successful, it allows for training more capable CNN models with minimal extra manual effort. In this work, we investigate the challenges and benefits of combined dataset training in three directions: segmentation performance on seen datasets (validation/testing splits of datasets on which the CNN is trained), generalization on unseen datasets (splits of datasets not used during training), and Knowledgeability, _i.e_., the number of recognizable semantic concepts with sufficient segmentation quality. Instead of combining only datasets for semantic segmentation, _i.e_. pixel-labeled datasets, a larger candidate pool of heterogeneous datasets from various image understanding tasks is admitted. These datasets can significantly increase the available training data and the number of semantic classes, leading to improvements in all three directions. Combining different heterogeneous datasets brings up new challenges, since these datasets are created for different tasks or application domains, and thus they contain conflicting or unrelated label spaces and incompatible (weak) annotations. As a consequence, it is impossible to employ established training strategies for semantic segmentation, _e.g_. a fully convolutional network pipeline (FCN) [21].
To advance the state-of-the-art in multi-dataset training, we generalize semantic segmentation training over heterogeneous datasets and analyze the related challenges. Subsequently, we propose a unified methodology that decouples dataset specifics (structure, annotation types, label spaces) from the task formulation of semantic segmentation. In this way, a plethora of existing image understanding datasets can be leveraged under the same consistent and robust FCN-based framework. The contributions of this work can be summarized as follows.

* • The formulation of the heterogeneous datasets training problem for semantic segmentation (HTSS) and a characterization of its challenges.
* • A methodology for combining label spaces with different semantic granularity (level-of-detail) and with different semantic concepts, thus enabling simultaneous training on datasets with disjoint or conflicting label spaces.
* • A methodology for consolidating strong (pixel) and weak (bounding-box or image-tag) supervision, thus enabling simultaneous training with mixed supervision.
* • The novel Knowledgeability metric, which quantifies the number of semantic classes recognizable by a network wrt. the achievable performance for these classes, and which can be used to compare the performance of a network across datasets irrespective of the number of classes.

## II Related work

Multi-dataset training is gaining traction in various areas, _e.g_. in object detection [22, 23], depth estimation [24], and domain adaptation [25, 26], since it improves model robustness and generalization capabilities. This work focuses on semantic segmentation and on relaxing the requirements that a dataset has to comply with in order to be suited for multi-dataset training. The proposed work generalizes related approaches in the literature for semantic segmentation [27, 28] and complements recent work [29], [30], [31], [32].

Figure 2: Motivation and overview of the proposed framework. HTSS aims at using a wide range of heterogeneous image understanding datasets with incompatible annotation formats and conflicting label spaces (1). Our methodology derives a unified label space (2) and consolidates supervision (3) so they can be used to simultaneously train an FCN (4) on all datasets. The trained network (5) has better performance on the training datasets, generalizes better to unseen images, and recognizes finer-grained semantic classes.

### II-A Multi-dataset semantic segmentation

The majority of previous works focus on using multiple datasets with possibly different label spaces, but a single type of supervision, _i.e_. pixel-level labels. Most works solve the challenges that arise from conflicts in label semantics through dataset-based solutions [33, 27], architecture-based solutions [34, 35, 36, 37, 38, 39], or loss-based solutions [40, 35, 38]. In these works, all label spaces of the employed datasets are combined into a common taxonomy, by merging, splitting, or ignoring semantic concepts, or by manual re-labeling when needed. Early works extend the conventional FCN architecture with multiple heads/decoders, up to one for each dataset [37], or multiple (hierarchical) classifiers [35], thereby effectively approaching the problem from the multi-task learning perspective. The authors of [33] combine six datasets, while in [27] thirteen datasets are combined to create a large-scale training and testing platform.
Contrary to existing works, the proposed Heterogeneous Training of Semantic Segmentation (HTSS) framework does not require any dataset relabeling and does not ignore classes for simultaneous training of an FCN with multiple datasets. Moreover, it maintains the network architecture, since it solves all label space conflicts at the stage of loss calculation, and therefore the method can be applied to any semantic segmentation network without requiring architectural changes.

### II-B Semantic Segmentation with weak supervision

Semantic segmentation is by definition a pixel-based task and it is conventionally realized by training a CNN with per-pixel homogeneous supervision. Previous works have used a diverse set of less detailed (weak) heterogeneous supervision, either to complement strong supervision or independently in a semi/weakly-supervised setting [41, 42, 43, 44, 36]. Several methods generate candidate masks from bounding-box supervision [45, 43, 46, 42, 35] using external modules, internal network predictions, or heuristics to refine weak annotations. These masks are used to train networks alone or together with strong supervision. Even weaker forms of supervision have been employed, examples of which include point-level [17] and image-level [47, 48, 49] annotations, mainly within a multiple instance learning formulation. Finally, methods that use a combination of multiple weaker types of supervision have been proposed, such as bounding boxes and image-level tags [50, 51, 44, 52, 53].

Inspired by earlier works, the proposed framework achieves pixel-accurate training using weak supervision by a pre-processing step that generates pseudo-labels and a refinement process during training. Moreover, unlike previous methods, the HTSS framework treats all types of weakly-labeled datasets uniformly and uses them in combination with strongly-labeled datasets.

### II-C Other related tasks

Two related semantic segmentation tasks that encapsulate multiple datasets in their formulation are transfer learning [54] and domain adaptation [26, 25]. These tasks aim at transferring rich knowledge from a source dataset/domain to a target dataset/domain, where knowledge is scarce or even non-existing. They mainly concentrate on the performance in the target domain, which may be available during training in some limited form. Recently, variations of these tasks also track performance in the source domain and investigate multiple-source versions of the problems [55, 56]. The HTSS formulation considers performance on all employed datasets, and during training it does not depend explicitly on information from the testing datasets, as in domain adaptation.

The following four tasks are briefly addressed for their relevance to aspects of HTSS. First, multi-dataset semantic segmentation has been tackled in the literature using multi-task learning [24, 57, 58, 59, 60], where a network head/branch is devoted to each dataset independently, _i.e_. segmentation for each dataset is modeled as a separate “task”. Second, continual learning is relevant to the concept of knowledgeability, since in this task new classes are discovered or added during training or inference [61, 62] and old data may not be available. Third, self-training or pseudo-label approaches [63, 64, 65, 41, 66] have addressed the absence of labels during training for some datasets/domains. Finally, learning with partial labels [67, 68] is related to conflicting label spaces.
The partial-label formulation associates a training sample with a set of candidate labels, among which at most one is the correct label. This plethora of research shows that training with multiple and heterogeneous datasets is a desired capability of modern training pipelines.

### II-D Conventional semantic segmentation

We recap here the formulation of conventional image semantic segmentation, which we use in the following section to define Heterogeneous Training of Semantic Segmentation. The task of semantic segmentation [1, 21, 2, 10, 3] involves the per-pixel classification of an image into a predetermined set of mutually-exclusive semantic classes. A semantic segmentation system has a 2-D image $\mathbf{x}$ as input and uses a given label space $\mathcal{L}^{\text{pred}}$ of semantic classes. The aim of the system is to predict a 2-D matrix $\mathbf{y}^{\text{pred}}$, where each element corresponds to an image pixel with a semantic class from $\mathcal{L}^{\text{pred}}$ assigned to it.

In the conventional supervised learning setting, the task entails a dataset $\mathcal{S}=(\mathcal{D},\mathcal{L})$, which consists of $N$ image-label pairs $\mathcal{D}=\\{\left(\mathbf{x}_{i},\mathbf{y}_{i}\right),~{}i=1,\dots,N\\}$ and a label space $\mathcal{L}=\left\\{l_{j},~{}j=0,\dots,L\right\\}$ of $L$ semantic classes. Every label $\mathbf{y}\in\mathcal{L}^{H\times W}$ is a 2-D matrix with spatial size $H\times W$, and every position corresponds to a single pixel in the image $\mathbf{x}$. It is common that a semantic class (_e.g_. the $l_{0}$) corresponds to a special label that denotes unlabeled or void pixels.

The semantic classes represent semantic entities of a scene, _e.g_. vehicle, person, tree, or sky. It is essential that all $l_{j}$ have unambiguous and mutually-exclusive semantic definitions $def(l_{j})$ within a dataset, _e.g_. all possible types of cars in a dataset should not resemble any of the trucks and vice versa. If this does not hold, _i.e_. annotations are noisy or concepts overlap between classes, then a classifier trained on this dataset may be “confused” and the evaluation is inaccurate. Although existing datasets in the literature strive to define unambiguous and non-overlapping classes within their label space, in practice some ambiguity exists, mainly due to defining semantic classes using a single word. In this work, we assume that such noise in labels is negligible.

## III Heterogeneous Training for Semantic Segmentation

In this section we describe the problem and challenges of Heterogeneous Training of Semantic Segmentation (HTSS), prior to presenting the proposed framework in Sec. IV. The terminology follows the conventional semantic segmentation task described in Sec. II-D.

In the conventional setting described in Section II-D, the formats of the task definition and the given dataset are in full agreement with each other. In this case, the output label space $\mathcal{L}^{\text{pred}}$ can be set to be the dataset label space $\mathcal{L}^{\text{pred}}\equiv\mathcal{L}$ and the predictions are congruent to the per-pixel annotations $\mathbf{y}^{\text{pred}}\cong\mathbf{y}$. The symmetry between the training data and the task goal is advantageous; however, it limits the available information that a system can be exposed to, _e.g_. only one dataset from the first column of Fig. 1. These datasets are limited in size and semantic concepts, which renders them inadequate for large-scale, in-the-wild semantic segmentation.
We consider two generalizations to the traditional semantic segmentation problem and encapsulate them in a unified formulation. The first aims at enriching the output semantic space of the task using multiple heterogeneous label spaces. The second intends to increase the amount of available supervision by generalizing the task input to more types of supervision. We incorporate these generalizations within a unified formulation by maintaining the task output identical, while relaxing the requirements for the given datasets. This enables the potential inclusion of datasets which were originally created for other scene understanding tasks, _e.g_. multiple datasets from all columns of Fig. 1, when training a network for semantic segmentation.

Heterogeneous semantic segmentation enables trained networks to aggregate information from diverse datasets and could demonstrate potential improvements in the following three aspects. First, multi-dataset training increases examples for underrepresented classes and provides diversity in recognizable semantics, which could be advantageous for performance, _i.e_. segmentation accuracy on seen (training) datasets. Second, the diversity of employed datasets should bring benefits to generalization, _i.e_. segmentation accuracy on unseen (testing) datasets. We evaluate this aspect under the cross-dataset zero-shot setting [27]. Third, network predictions become more fine-grained and semantically rich by incorporating semantics from multiple datasets. As can be observed from Fig. 1, label spaces of pixel-labeled datasets are generally smaller than those of weakly-labeled datasets, _i.e_. $|\mathcal{L}^{\text{pixel}}|\ll|\mathcal{L}^{\text{bbox}}|\ll|\mathcal{L}^{\text{tag}}|$. We propose a metric, detailed in Sec. V-A, to quantify the semantic richness of the predicted classes w.r.t. the segmentation performance for these classes.

Label type | Definition
---|---
Pixel (dense) | $\mathbf{y}\in\mathcal{L}^{H\times W},~{}\left|y_{k}=l_{0}\right|\ll\left|y_{k}\neq l_{0}\right|,~{}k\in H\times W$
Pixel (coarse) | $\mathbf{y}\in\mathcal{L}^{H\times W},~{}\left|y_{k}=l_{0}\right|\gg\left|y_{k}\neq l_{0}\right|,~{}k\in H\times W$
Bound. boxes | $\mathbf{y}=\left\\{\left(l_{k},~{}\text{bbox-coords}_{k}\right),~{}k=1,\dots,B\right\\}$
Image tags | $\mathbf{y}=\left\\{l_{k},~{}k=1,\dots,T\right\\}$

TABLE I: A variety of annotation formats are considered. The dataset indexing superscript $(i)$ and the dataset sample subscript $j$ are omitted for clarity. $l_{0}$ is the void class.
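To make the annotation formats of Tab. I concrete, the sketch below models them as minimal Python containers. It is illustrative only; the class and field names are our own assumptions and not part of any released HTSS code.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

VOID = 0  # l_0: the special void/unlabeled class of Tab. I


@dataclass
class PixelLabel:
    """Dense or coarse per-pixel annotation: y in L^(H x W)."""
    y: np.ndarray  # (H, W) integer class map; VOID marks unlabeled pixels


@dataclass
class BBoxLabel:
    """Bounding-box annotation: B pairs (l_k, bbox-coords_k)."""
    boxes: List[Tuple[int, Tuple[int, int, int, int]]]  # (class, (x0, y0, x1, y1))


@dataclass
class TagLabel:
    """Image-level tags: the classes present somewhere in the image."""
    tags: List[int]  # {l_k, k = 1, ..., T}
```

Under this view, a coarse pixel label differs from a dense one only in the fraction of VOID pixels, which is exactly the distinction drawn in the first two rows of Tab. I.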
### III-A Problem formulation

Similarly to the conventional segmentation task definition, heterogeneous semantic segmentation aims at predicting a 2-D matrix $\mathbf{y}^{\text{pred}}$ with semantic classes, given a 2-D image $\mathbf{x}$ and a label space $\mathcal{L}^{\text{pred}}$. Contrary to the traditional single-dataset formulation, we assume that a collection $\mathbb{S}$ of $D$ heterogeneous datasets is available, where each dataset $\mathcal{S}^{(i)}$ includes $N^{(i)}$ image-label pairs $\mathcal{D}^{(i)}$ and the corresponding label space $\mathcal{L}^{(i)}$ with $L^{(i)}$ semantic classes:

$\displaystyle\mathbb{S}=\left\\{\mathcal{S}^{(i)},~{}i=1,\dots,D\right\\}~{},$ (1)
$\displaystyle\mathcal{S}^{(i)}=\left(\mathcal{D}^{(i)},\mathcal{L}^{(i)}\right)~{},$ (2)
$\displaystyle\mathcal{D}^{(i)}=\left\\{\left(\mathbf{x}^{(i)}_{j},\mathbf{y}^{(i)}_{j}\right),~{}j=1,\dots,N^{(i)}\right\\}~{},$ (3)
$\displaystyle\mathcal{L}^{(i)}=\left\\{l^{(i)}_{m},~{}m=0,\dots,L^{(i)}\right\\}~{}.$ (4)

The types of labels $\mathbf{y}$ considered in this work are provided in Tab. I. The goal is to train a system for semantic segmentation such that it utilizes information from all heterogeneous datasets in $\mathbb{S}$. The system should have a consistent label space and recognize the semantic concepts from all considered label spaces $\mathcal{L}^{\text{pred}}=\uplus_{i=1}^{D}\mathcal{L}^{(i)}$. Within the researched problem formulation, the following conditions should hold for the employed datasets.

1. Intra-dataset label space consistency. Each label space $\mathcal{L}^{(i)}$ should include consistent and mutually-exclusive semantic classes, as explained in Sec. II-D:

$def\left(l^{(i)}_{m}\right)\cap def\left(l^{(i)}_{n}\right)=\emptyset,~{}\forall~{}m\neq n,~{}\forall~{}i=1,\dots,D~{},$ (5)

where $def(l)$ denotes the proper definition of the semantic class $l$.

2. Condition for weakly-labeled classes. Any semantic class from a weakly-labeled dataset $\mathcal{S}^{(W)}$ should either be identical to, or be semantically contained in, a class of a strongly-labeled dataset $\mathcal{S}^{(S)}$:

$\exists~{}l^{(S)}~{}\text{so that}~{}def\left(l^{(W)}\right)\subseteq def\left(l^{(S)}\right)~{}.$ (6)

Note that: i) cond. 2 implies that there must be at least one pixel-labeled dataset available for training, and ii) cond. 1 does not imply inter-dataset label space consistency, which is one of the challenges addressed by our HTSS framework.

Label spaces | Label type | Preparation (once) | Supervision during training (each step)
---|---|---|---
no conflicts | strong | - | standard cross-entropy (CE)
no conflicts | weak (Sec. IV-B) | create pseudo-labels (Sec. IV-B1, Fig. 5) | conditional CE, refine pseudo-labels (Sec. IV-B2, Fig. 6)
with conflicts | strong (Sec. IV-A) | build taxonomy (Sec. IV-A1, Fig. 3) | supervise semantic atoms (Sec. IV-A2, Fig. 4)
with conflicts | weak (Sec. IV-C) | all above | all above

TABLE II: Overview of the HTSS components developed in this work. The first two columns characterize the challenge; the last two the corresponding HTSS components. Each row includes methods for combining a strongly-labeled dataset and any other dataset(s) with the type of supervision denoted by the second column.

### III-B Challenges

Heterogeneous semantic segmentation brings forward incompatibilities in the annotation formats and conflicting label spaces among datasets. The following two paragraphs analyze the related challenges.

Label space conflicts. Datasets are annotated over different label spaces on a vast spectrum of semantic detail, as they are collected to serve different purposes, which leads to conflicting or overlapping definitions of classes between datasets. If the class definitions of all labels match between datasets, then a simple union of the label spaces is feasible. However, this is usually not possible because of potential conflicts.
The main source of conflicts stems from partially overlapping semantic class definitions between two arbitrary datasets $\mathcal{S}^{(X)}$ and $\mathcal{S}^{(Y)}$, which is specified by:

$def\left(l^{(X)}\right)\cap def\left(l^{(Y)}\right)\neq\emptyset~{}.$ (7)

Since the class definitions can overlap only partially, merging them or including them both in the combined label space will introduce ambiguity to the output label space of a trained network. A special common case occurs when conflicts arise from differences in the semantic level-of-detail between classes. For example, a class $l^{(X)}$ from dataset $\mathcal{S}^{(X)}$ describes a high-level concept that contains multiple fine-grained classes of dataset $\mathcal{S}^{(Y)}$, giving:

$def\left(l^{(X)}\right)=\bigcup_{m}def\left(l_{m}^{(Y)}\right)~{}.$ (8)

The inclusion of all classes $l^{(X)}$, $l^{(Y)}_{m},\forall m$ in a single label space would also introduce conflicts.

Annotation format incompatibilities. A plethora of diverse datasets for scene understanding cannot be used in semantic segmentation with standard training schemes, due to their incompatible annotation formats. The most common of them are shown in Tab. I. Semantic segmentation is by definition a pixel-wise task, thus it is convenient for training datasets to provide annotations in the same pixel-level format. The spatial localizability of labels from the other datasets of Fig. 1 is not adequate to train a network, because they introduce spatial localization uncertainties during pixel-wise training for segmentation. The incompatibilities in annotation formats are even more pronounced in a multi-dataset training scenario, where a variety of incompatible annotation formats can coexist. Heterogeneous semantic segmentation thus requires extracting useful supervision at the pixel level from a much coarser source of information.

## IV Methodology

The development of our methodology for heterogeneous multi-dataset training abides by the design principle of maintaining the established single-backbone FCN pipeline. This desideratum enables straightforward applicability of the proposed methods to current or future FCN-based architectures, and scalability to an arbitrary number of datasets.

Figure 3: Combined taxonomy of 118 semantic atoms (void atom is not shown) merging a total of 174 semantic classes from the Cityscapes (27), Vistas (65), IDD (33), and MTS (50) datasets. Each dataset section corresponds to the grouping of the combined taxonomy labels in order to apply strong supervision from the respective dataset.

Figure 4: The HTSS classifier is supervised using the standard cross-entropy (CE) loss between the semantic atoms vector and a different ground truth vector per dataset (in boxes). This is achieved by combining the CNN output into a new multinomial distribution vector that matches each dataset’s labels. A selection of classes from Fig. 3 is shown in order to explain the procedure of probability accumulation.

The proposed Heterogeneous Training of Semantic Segmentation (HTSS) framework addresses the challenges of Sec. III-B by introducing a methodology for combining disjoint or conflicting label spaces (Sec. IV-A) and for training a single-backbone network with strong and weak supervision simultaneously (Sec. IV-B, Sec. IV-C). An overview of the methodology is depicted in Fig. 2. Tab. II associates the components of our methodology with the challenges they address.
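To illustrate the two conflict types of Sec. III-B in code, the sketch below represents class definitions as sets of fine-grained concepts; the helper names are hypothetical, and the rider/motorcyclist/bicyclist example is borrowed from Sec. V-C.

```python
def partially_overlaps(def_x: set, def_y: set) -> bool:
    # Eq. (7): the definitions share semantics without being identical
    return bool(def_x & def_y) and def_x != def_y


def is_union_of(def_x: set, fine_defs: list) -> bool:
    # Eq. (8): l^(X) is exactly the union of finer classes l_m^(Y)
    return def_x == set().union(*fine_defs)


# Cityscapes 'rider' vs. the finer Vistas classes (example from Sec. V-C):
rider = {"motorcyclist", "bicyclist"}
motorcyclist, bicyclist = {"motorcyclist"}, {"bicyclist"}

assert partially_overlaps(rider, motorcyclist)        # conflict per Eq. (7)
assert is_union_of(rider, [motorcyclist, bicyclist])  # level-of-detail case, Eq. (8)
```

Naively merging such classes would place 'rider' and 'motorcyclist' in the same output space with ambiguous targets, which is precisely what the unified taxonomy of Sec. IV-A avoids.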
### IV-A Combine datasets with different label spaces

This subsection describes our approach for training a single-backbone, single-classifier FCN on multiple pixel-labeled datasets which have disjoint or conflicting semantic label spaces. The approach enables training on an arbitrary number of datasets and consists of two steps.

Before training. First, the label spaces of the training datasets have to be consolidated into a unified taxonomy (Sec. IV-A1). This procedure can be automated if the label spaces are part of a semantic ontology or hierarchical lexical database (_e.g_. WordNet [69] or BabelNet [70]). For example, the classes of ImageNet [19] are organized according to the WordNet hierarchy. An ontology [71] contains lexico-semantic relations (_e.g_. hypernymy/hyponymy, holonymy/meronymy) and the semantic relatedness (is-a, has-a) of concepts, which can be used to group classes together or split them into finer semantic concepts. In this work, the label spaces of the employed datasets are not constructed according to an ontology, thus we used lexico-semantic relations from WordNet, BabelNet, thesaurus.com, and merriam-webster.com to generate a unified taxonomy of semantic concepts and minimize manual effort.

During training. A single classifier makes predictions over the consistent, unified taxonomy of labels obtained from the previous step. Only during training, these predictions are mapped back to the label space of each dataset with specific per-dataset converters (Sec. IV-A2), in which case the segmentation loss can be directly applied.

#### IV-A1 Generate unified taxonomy of label spaces

When combining multiple datasets, it is highly probable that classes between datasets have conflicting definitions, as described in Sec. III-B. In order to solve these conflicts and generate a unified taxonomy, we introduce the concept of semantic atoms. A semantic atom $\alpha$ is a fine-level semantic primitive (class), whose definition coincides either fully or partially with the definition of a semantic class from a dataset in $\mathbb{S}$. A set of properly chosen semantic atoms $\mathcal{A}=\left\\{\alpha_{m},~{}m=1,\dots,A\right\\}$ fully covers the semantics of all employed datasets without any conflicts. Using the ontologies from the aforementioned online sources, we automate the construction of the set $\mathcal{A}$. The set is subsequently validated by a human merely for inconsistencies due to ambiguities in their (natural language) description (Algorithm 1).

Data: label spaces from all datasets $\mathcal{L}^{(i)}$, $i=1,\dots,D$
Result: the set of semantic atoms $\mathcal{A}$
/* Initialize semantic atoms with the multi-set of all labels, then remove ones not complying with the conditions. */
all_labels $\leftarrow$ $\uplus_{i=1}^{D}\mathcal{L}^{(i)}$
repeat
  /* generate combinations of label pairs */
  pairs = generate_combinations(all_labels)
  for _pair in pairs_ do
    ls, lo = pair
    if _(synonym(ls, lo) or hypernym(ls, lo) or holonym(ls, lo))_ then
      all_labels.remove(ls)
      break
until _no label is removed from all_labels_
$\mathcal{A}$ $\leftarrow$ all_labels

Algorithm 1: Combine the labels from existing datasets to create the set of semantic atoms using the lexico-semantic relations from WordNet, BabelNet, thesaurus.com, and merriam-webster.com.
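For intuition, Algorithm 1 can be approximated in a few lines of Python using NLTK's WordNet interface as a stand-in for the four lexico-semantic sources. Mapping a raw label string to the correct synset generally requires manual disambiguation, so the predicates below are a simplification of the actual curation process, not a reproduction of it.

```python
from itertools import combinations
from nltk.corpus import wordnet as wn  # requires nltk and the 'wordnet' corpus


def _synsets(label):
    # crude label-to-synset mapping; ambiguous labels need manual checking
    return wn.synsets(label.replace(" ", "_"))


def synonym(a, b):
    return any(sa == sb for sa in _synsets(a) for sb in _synsets(b))


def hypernym(a, b):
    # True if some sense of `a` is an ancestor (hypernym) of a sense of `b`
    return any(sa in sb.closure(lambda s: s.hypernyms())
               for sa in _synsets(a) for sb in _synsets(b))


def holonym(a, b):
    whole = lambda s: (s.member_holonyms() + s.part_holonyms()
                       + s.substance_holonyms())
    return any(sa in sb.closure(whole) for sa in _synsets(a) for sb in _synsets(b))


def semantic_atoms(label_spaces):
    # multi-set union of all labels, then iteratively drop duplicates and
    # coarse labels subsumed by finer ones, as in Algorithm 1
    all_labels = [l for space in label_spaces for l in space]
    removed = True
    while removed:
        removed = False
        for ls, lo in combinations(list(all_labels), 2):
            if synonym(ls, lo) or hypernym(ls, lo) or holonym(ls, lo):
                all_labels.remove(ls)
                removed = True
                break
    return all_labels
```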
The following three properties hold for all semantic atoms. First, each semantic atom should have a concise and unique semantic definition that does not overlap with any of the other semantic atoms:

$\text{def}\left(\alpha_{k}\right)\cap\text{def}\left(\alpha_{m}\right)=\emptyset,~{}\forall~{}k\neq m~{}.$ (9)

Second, its definition matches fully or partially the definition of a semantic class from a dataset, and thus every semantic atom corresponds to (is-a, hyponym, meronym) at most one semantic class:

$\text{def}(\alpha_{m})\subseteq\text{def}(l_{n}),~{}\forall\alpha_{m}\in\mathcal{A},~{}l_{n}\in\mathfrak{L}~{},$ (10)

where $\mathfrak{L}=\cup_{i}\mathcal{L}^{(i)},~{}i=1,\dots,D$ is the set of labels from all label spaces of the datasets to be combined. Third, the set of all semantic atoms should completely describe the semantics of all datasets, which yields that every semantic class $l_{n}$ consists of (has-a, hypernym, holonym) at least one semantic atom:

$\text{def}(l_{n})=\bigcup_{m\in M_{n}}\text{def}(\alpha_{m}),~{}\forall l_{n}\in\mathfrak{L}~{},$ (11)

where $M_{n}$ is a set of indices corresponding to class $l_{n}$.

Once the manual process for the extraction of semantic atoms is completed, the taxonomy of semantic classes from all datasets can be generated (see Figure 3). Then, we can train a single-classifier FCN using the semantic atoms as output classes and supervise this CNN using the original label spaces from each dataset, by combining the atoms according to the generated unified taxonomy, as described in the next section.

#### IV-A2 Supervise semantic atoms with the original label spaces

Having extracted the set of semantic atoms $\mathcal{A}$ that fully covers the semantics of the employed datasets $\mathbb{S}$, we can train a single-backbone, single-classifier FCN with output label space $\mathcal{A}$. This procedure is shown in Fig. 4 for a selection of semantic atoms from Fig. 3. The output of the classifier for spatial position (pixel) $p$ and dataset $i$ is the categorical probability vector $\bm{\sigma}^{(i)}_{p}\in\left[0,1\right]^{A}$, of cardinality $A=|\mathcal{A}|$, where each element corresponds to the probability of a semantic atom in $\mathcal{A}$. Since $\bm{\sigma}^{(i)}_{p}$ represents a categorical probability, it holds that $\sum_{m}\sigma^{(i)}_{p,m}=1$. In the following, we describe how $\bm{\sigma}^{(i)}_{p}$ is transformed to be compatible with the original label space $\mathcal{L}^{(i)}$ of each dataset of the taxonomy, in order to train the classifier using the conventional cross-entropy loss. Conceptually, for each supervising dataset $i$, we map the categorical output $\bm{\sigma}_{p}^{(i)}$ to the categorical labels. Via this mapping, the labels of the original dataset can (in)directly supervise the training of the semantic atoms.

The extraction of semantic atoms induces a collection of sets $\\{G^{(i)}_{m},~{}i=1,\dots,D,~{}m=0,\dots,L^{(i)}\\}$. Each $G^{(i)}_{m}$ contains the semantic atoms that correspond to class $l^{(i)}_{m}$ from dataset $i$. According to the taxonomy construction process (Section IV-A1), the extracted semantic atoms fully describe the semantics of all classes from all selected datasets. As a consequence, an arbitrary dataset class is represented by either a single semantic atom or a combination of semantic atoms. Using this property, we can partition $\bm{\sigma}^{(i)}_{p}$ into groups according to the sets $G^{(i)}_{m}$ and accumulate their probabilities into a reduced vector $\bm{s}_{p}^{(i)}\in{[0,1]}^{L^{(i)}}$ for each dataset $i$.
This process can be written concisely as:

$s^{(i)}_{p,m}(\bm{\sigma}_{p})=\sum_{\alpha\in G^{(i)}_{m}}\sigma^{(i)}_{p,\alpha}~{},$ (12)

where, for now, an integer number is assigned as “name” to every atom $\alpha$, so that they can be used for indexing $\mathcal{A}=\\{1,2,\dots,A\\}$. Since $\bm{\sigma}^{(i)}_{p}$ is a categorical distribution, $\bm{s}^{(i)}_{p}$ is also a categorical distribution. Moreover, it contains classes that correspond one-to-one to the ground truth $\bm{y}^{(i)}_{p}$. Thus, they can be used in the standard cross-entropy loss formulation.

During batch-wise training, a batch can contain images from many datasets. Without loss of generality, we formulate the cross-entropy loss for a single image $j$ from dataset $i$ in the batch. The label $\bm{y}^{(i)}_{j}$ (Eq. (3)) has shape $H\times W\times L^{(i)}$ (one-hot encoding), and the output $\bm{\sigma}^{(i)}_{j}$ has shape $H\times W\times A$. By using a single index $p\in P$ to enumerate spatial positions ($H,W$) and omitting the dataset index $i$ and image index $j$ from the notation, the cross-entropy loss can be expressed as:

$\text{Loss}_{j}\left(\bm{y},\bm{\sigma}\right)=-\dfrac{1}{\left|P\right|}\sum_{p\in\mathcal{P}}\sum_{m}y_{p,m}\log s_{p,m}~{}.$ (13)

In the following, we derive the gradients of the loss wrt. the logits of the network and show that our method is a generalization of the standard formulation. As this is independent of the dataset and position indices, we drop them to minimize notation clutter. The logits $\bm{\lambda}\in\mathbb{R}^{A}$ are the input of the softmax $\bm{\sigma}$, where $\sigma_{i}(\bm{\lambda})=e^{\lambda_{i}}/\sum_{j}e^{\lambda_{j}}$, and the converted outputs of the network can be expressed as $\bm{s}\left(\bm{\sigma}\left(\bm{\lambda}\right)\right)$. Using the backpropagation rule, the gradient of the loss wrt. the logits is:

$\frac{\partial\text{Loss}}{\partial\bm{\lambda}}=\frac{\partial\text{Loss}}{\partial\bm{s}}\cdot\frac{\partial\bm{s}}{\partial\bm{\sigma}}\cdot\frac{\partial\bm{\sigma}}{\partial\bm{\lambda}}~{}.$ (14)

Since each pixel in the annotations has a single class (one-hot), _i.e_. $y_{m}=1$ for class $m=m^{*}$ and $y_{m}=0,~{}m\neq m^{*}$, the loss of Eq. (13) (omitting the summation over positions $p$) reduces to $-\sum_{m}\llbracket m=m^{*}\rrbracket\log s_{m}=-\log s_{m^{*}}$, where $\llbracket\cdot\rrbracket$ is the Iverson bracket. The partial derivatives of the factors in Eq. (14) are: $\partial\text{Loss}/\partial s_{i}=-\llbracket i=m^{*}\rrbracket/s_{i}$, $\partial s_{i}/\partial\sigma_{j}=\llbracket j\in G_{i}\rrbracket$, and $\partial\sigma_{i}/\partial\lambda_{j}=\sigma_{i}\left(\llbracket i=j\rrbracket-\sigma_{j}\right)$. Substituting these into Eq. (14) yields:

$\frac{\partial\text{Loss}}{\partial\lambda_{m}}=\sigma_{m}-\frac{\sigma_{m}}{s_{m^{*}}}\llbracket m\in G_{m^{*}}\rrbracket~{},$ (15)

which generalizes the loss derivative of the original FCN framework, $\partial\text{Loss}/\partial\lambda_{m}=\sigma_{m}-\llbracket m=m^{*}\rrbracket$, and reduces to it when every group $G_{m}$ contains a single atom (so that $s_{m^{*}}=\sigma_{m^{*}}$). This property ensures comparable gradient flows between the FCN and our framework, and thus no architectural changes or loss modifications are needed.

Figure 5: Creation of pseudo labels from weakly-annotated datasets before training. For each pixel in the image, a categorical probability vector is created with elements corresponding to the set of annotated classes ($L=3$ in this example).
For each position of $\hat{\mathbf{y}}$, each element along the third dimension is assigned a probability, _i.e_. the normalized number of respective bounding boxes covering that pixel. During training, these pseudo labels are further refined using input from the classifier itself, see Fig. 6.

### IV-B Combine datasets with different annotation types

This section describes how a single-backbone, single-classifier FCN is trained on multiple weakly- or strongly-labeled datasets. For now, we assume that the label spaces of all datasets are identical. This limitation is lifted in Sec. IV-C, where combined training with different annotation types and conflicting label spaces is investigated.

As explained in Sec. III-B, the spatial localization of annotations in weakly-labeled datasets, _e.g_. bounding boxes and image tags, is inadequate for providing useful pixel-level supervision. However, if properly conditioned or refined, these annotations have the potential to provide helpful cues for increasing segmentation performance. A two-step approach is followed, abiding by the FCN design principles without adding extra modules to the network. First, as described in Sec. IV-B1, weak annotations from all datasets are converted to pseudo per-pixel labels, so they can be seamlessly used together with pixel labels from strongly-labeled datasets for pixel-wise training. Second, during each training step, the pseudo labels are refined, using only information from the network at this step, without requiring any external knowledge. The process is analyzed in Sec. IV-B2.

#### IV-B1 Unifying weak and strong annotations

The objective of unifying heterogeneous annotations is to transform the weak annotations into per-pixel pseudo labels, so they can be integrated in the pixel-wise training loss. The pseudo labels are then refined during training, to provide a best-effort approximation of the ideal fine labels (Sec. IV-B2). We consider weak supervision from bounding boxes and image-level tags, as listed in Tab. I. Bounding-box annotations have rectangular, axis-aligned boundaries, which rarely match the smooth object boundaries, _e.g_. of poles, humans, or bicycles, while image tags have even coarser localization. We treat image tags as bounding boxes that extend to the whole image. This forms a basis to handle both annotation formats within a common formulation.

The core of the method involves representing the per-pixel label as a categorical probability vector $\hat{\bm{y}}_{p}\in\left[0,1\right]^{L}$ over the set of all classes $\mathcal{L}$ of the dataset it belongs to. This choice enables including information from all bounding boxes, even if they heavily overlap, and does not require hard choices to assign a single class to each pixel, _e.g_. assigning randomly or by heuristics. The algorithm is described in the following paragraph and visualized in Figure 5.

Given a weak label $\mathbf{y}$, we initialize a 3-D label canvas, with two spatial dimensions equal to the label size and a depth dimension of size $L$ for the semantic classes. Each bounding box in $\mathcal{B}$ casts a unity vote to all spatial locations (pixels) covered by it, at the single position in the depth dimension that corresponds to the semantic class of the bounding box. After the voting is completed for all boxes, the label canvas is normalized along the depth dimension (semantic classes) to unity, by dividing by the sum of votes for that position.
Then, another 2-D slice is concatenated along the same spatial dimensions, corresponding to the unlabeled semantic class. Finally, for pixels that are not covered by any bounding box, the unlabeled probability is set to unity. At this point, a valid categorical probability vector for each image pixel is obtained, which can be directly used as-is in the conventional per-pixel cross-entropy loss.

#### IV-B2 Supervising semantic atoms with unified annotations

The previous section sketched how weak annotations are transformed into per-pixel pseudo-labels that are used in the cross-entropy loss formulation after their spatial locality is refined. Here, the refinement of these coarse labels is described. The refinement is performed in an online fashion during training and generates more (spatially) localized labels for supervision. It is achieved by applying two conditions to the pseudo-labels that omit supervision for uncertain or ambiguous pixels, _e.g_. pixels that may reside outside of an object.

Figure 6: Refinement of pseudo labels during each training step using the predictions of the network in that step.

We assume that a collection of $\mathbb{S}^{(S)}$ strongly-labeled and $\mathbb{S}^{(W)}$ weakly-labeled datasets is given and that all datasets have an identical label space. First, the weak labels $\bm{y}^{(W)}$ of $\mathbb{S}^{(W)}$ are converted into per-pixel pseudo-labels $\hat{\mathbf{y}}$, using the procedure of Sec. IV-B1. Then, during a training step, we rely on the network predictions, which are the best estimate that can be attained, to improve the pseudo-labels. If $\bm{\sigma}_{p}\in[0,1]^{L}$ is the softmax output for position $p$ of an image with weak labels, the predictions can be expressed as $\pi_{p}=\operatorname*{arg\,max}_{m}\sigma_{p,m}$. The refined pseudo-labels $\tilde{\mathbf{y}}_{p}$ are obtained for pixel $p$ by keeping the pseudo-label if the prediction agrees and the corresponding probability is higher than a threshold $T$, as follows:

$\tilde{\mathbf{y}}_{p}=\begin{cases}\hat{\mathbf{y}}_{p},&\text{if}~{}\pi_{p}=\operatorname*{arg\,max}_{m}\hat{y}_{p,m}~{}\text{and}~{}\sigma_{p,\pi_{p}}\geq T,\\\ \text{unlabeled},&\text{otherwise}\end{cases}$ (16)

The first condition of Eq. (16) is illustrated in Fig. 6. The second condition refers to the magnitude of the probability of the predictions, which can be viewed as a measure of confidence. Specifically, a heuristically chosen threshold $T$ is used, which should be exceeded by the probability of the highest predicted class in order to be deemed reliable. This threshold provides a trade-off between utilizing enough weak labels and maintaining high confidence in them. For the experiments, we have empirically chosen $T=0.9$. The final loss for a batch with images from both weakly-labeled $(W)$ and strongly-labeled $(S)$ datasets is computed as:

$\text{Loss}=-\sum_{p\in\mathcal{P}}\sum_{j}z_{p,j}\log\sigma_{p,j}~{},$ (17)

where $\mathcal{P}=\mathcal{P}^{(S)}\cup\mathcal{P}^{(W)}$ is the set of all pixels and $z_{p,j}$ is defined as:

$\bm{z}_{p}=\begin{cases}\frac{1}{\lvert\mathcal{P}^{(S)}\rvert}\bm{y}^{(S)}_{p},&p\in\mathcal{P}^{(S)}\\\ \frac{1}{\lvert\mathcal{P}^{(W)}\rvert}\tilde{\bm{y}}^{(W)}_{p},&p\in\mathcal{P}^{(W)}~{}.\end{cases}$ (18)
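The two mechanisms of Sec. IV-B admit a compact NumPy sketch: the voting scheme of Sec. IV-B1 and the refinement rule of Eq. (16). The function names are ours, and for simplicity the unlabeled class occupies the last channel of the label canvas (the paper indexes it as $l_{0}$); both are illustrative assumptions.

```python
import numpy as np


def bbox_pseudo_label(boxes, H, W, L):
    """Sec. IV-B1: per-pixel categorical pseudo-label from weak labels.
    boxes: list of (class_index, (x0, y0, x1, y1)); an image tag is treated
    as a box covering the whole image. Returns an (H, W, L+1) canvas whose
    last channel is the 'unlabeled' class."""
    canvas = np.zeros((H, W, L), dtype=np.float32)
    for cls, (x0, y0, x1, y1) in boxes:
        canvas[y0:y1, x0:x1, cls] += 1.0  # unity vote per covering box
    votes = canvas.sum(axis=2, keepdims=True)
    dense = np.divide(canvas, votes, out=np.zeros_like(canvas), where=votes > 0)
    unlabeled = (votes[..., 0] == 0).astype(np.float32)[..., None]
    return np.concatenate([dense, unlabeled], axis=2)


def refine(y_hat, sigma, T=0.9):
    """Eq. (16): keep a pseudo-label only where the network prediction agrees
    and is confident; otherwise fall back to 'unlabeled' (last channel).
    y_hat and sigma are (H, W, L+1) categorical maps over the same classes."""
    agree = sigma.argmax(axis=2) == y_hat.argmax(axis=2)
    confident = sigma.max(axis=2) >= T
    keep = agree & confident
    out = np.zeros_like(y_hat)
    out[keep] = y_hat[keep]
    out[~keep, -1] = 1.0
    return out
```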
### IV-C Conflicting label spaces and annotation types

Sec. IV-A proposed a solution for label-space conflicts considering only strongly-labeled datasets. Sec. IV-B proposed a solution for training networks with multiple annotation types considering only datasets with identical label spaces. This section describes the combination of the two approaches, which enables simultaneous training of networks on any datasets (case of the last row of Tab. II).

The extraction of semantic atoms (Sec. IV-A1) and the conversion of weak to per-pixel annotations (Sec. IV-B1) can be directly applied to the selected datasets. However, for supervising the semantic atoms, the formulas of Secs. IV-A2 and IV-B2 cannot be directly applied to every semantic atom, and a small set of atoms requires different handling. According to this distinction, the semantic atoms are split into two sets, $\mathcal{A}^{a}$ and $\mathcal{A}^{s}$, the latter containing the classes that need special care. Each atom in $\mathcal{A}^{a}$ is either a class with only strong labels, or a class with weak and strong labels from different datasets. Each atom in $\mathcal{A}^{s}$ is strictly a class with weak labels for which the condition of Eq. (6) holds.

The localization cues of the atoms in $\mathcal{A}^{s}$ are extremely sparse, due to their weak annotations and the fact that they do not appear in strongly-labeled datasets. Consequently, the refinement step (Eq. (16)) is ineffective for pixel-accurate segmentation. As a solution, for these atoms (_e.g_. the traffic sign subclasses in the taxonomy of Fig. 3), we use the parent (strongly-labeled) classes $\mathcal{A}^{p}$ (_e.g_. traffic sign front) as cues for pixel-accurate segmentation. Then, fine-grained semantics can be attained using classification over $\mathcal{A}^{s}$.

The predictions of the classifier, as illustrated in Fig. 7, are over two sets of classes: $\mathcal{A}^{ap}=\mathcal{A}^{a}\cup\mathcal{A}^{p}$ and $\mathcal{A}^{s}$. For the subclasses in $\mathcal{A}^{s}$, the relationship with their parent classes in $\mathcal{A}^{p}$ is leveraged. Specifically, the predictions of the corresponding parent class in $\mathcal{A}^{p}$ are used to provide cues for the refinement process (for example, the predicted segmentation masks of the traffic sign front class are used to refine the bounding boxes of the traffic sign subclasses of Fig. 3). The classifier is trained using the losses from Eqs. (13) and (17). During inference, the subclasses of $\mathcal{A}^{s}$ simply replace their parent classes from $\mathcal{A}^{p}$ in the final predictions.
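Before moving to the experiments, the loss-side mechanics of Sec. IV-A2 can be tied together in a short PyTorch sketch of the probability accumulation of Eq. (12) and the cross-entropy of Eq. (13); the function names and the toy atom grouping are hypothetical.

```python
import torch


def accumulate(sigma, groups):
    """Eq. (12): sigma (P, A), softmax over atoms -> s (P, L) over a
    dataset's classes; groups[m] lists the atom indices of the set G_m."""
    return torch.stack([sigma[:, g].sum(dim=1) for g in groups], dim=1)


def htss_loss(logits, target, groups, eps=1e-8):
    """Eq. (13): standard CE applied to the accumulated distribution.
    logits: (P, A) atom logits; target: (P,) dataset-class indices."""
    sigma = logits.softmax(dim=1)
    s = accumulate(sigma, groups)
    return -(s.gather(1, target.unsqueeze(1)) + eps).log().mean()


# toy usage: 4 atoms {car, bus, motorcyclist, bicyclist} and a dataset with
# two classes, vehicle = G_0 = {0, 1} and rider = G_1 = {2, 3}:
groups = [[0, 1], [2, 3]]
logits = torch.randn(5, 4, requires_grad=True)  # 5 pixels, 4 atoms
target = torch.tensor([0, 1, 1, 0, 0])          # dataset-class ground truth
htss_loss(logits, target, groups).backward()    # gradients reach the atom
                                                # logits as derived in Eq. (15)
```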
Finally, Sec. V-F contains ablations for specific design choices in this work. A selection of diverse datasets for street scene understanding with strong and weak supervision are used. An overview of the employed datasets is shown in Table III. For each dataset a collection of label spaces is defined. ### V-A Evaluation metrics We use two metric families to quantify the performance of the models. The first family consists of the standard Intersection over Union (IoU) metric [16, 10] and averages of it (arithmetic – mIoU) to summarize performance and generalizability over multiple classes and across different datasets. The second family is based on a new metric, namely Knowledgeability that we define in the following subsection. Dataset name | | Labels | # imgs | Train./Eval. Lab. space ---|---|---|---|--- Training datasets | | | | Cityscapes [10] | | pixel | 2,975 | C-28 Cityscapes Coarse [10] | | pixel, bbox | 19,997 | C-28 Cityscapes T. Signs [35] | | pixel | 2,975 | CT-62 Mapillary Vistas [11] | | pixel | 18,000 | V-66 Indian Driving D. [12] | | pixel | 20,000 | I-33 EuroCity-Persons [72] | | bbox | 40,000 | E-2 Map. Traffic Signs [73] | | bbox | 36,589 | T-51 Open Images [20] | | bbox, im. tag | 100,000 | O-51 Testing datasets | | | | Cityscapes (val) [10] | | pixel | 500 | C-20 Cityscapes T. Signs [35] | | pixel | 500 | CT-34 Mapil. Vistas (val) [11] | | pixel | 2,000 | V-66 IDD (val) [12] | | pixel | 2,036 | I-33 Generalization (unseen) datasets | | Wild Dash (val) [74] | | pixel | 70 | W-19 KITTI (train) [75] | | pixel | 200 | K-20 TABLE III: Overview of employed datasets for experimentation. The type of annotations, the number of images and the label spaces are shown. The networks are trained with the label spaces of the training datasets. The evaluation label spaces may be smaller than the training counterparts for the same dataset, due to missing classes in testing/generalization datasets or smaller official splits. Train datasets | Output label space | Accuracy [mIoU %] | Knowledgeability [$\mathcal{K}^{L(\text{dataset})}$ %] ---|---|---|--- Citys | IDD | Vistas | Zero-shot Generalization | Val-split Generalization | Unseen datasets | Seen datasets WildDash | KITTI | Citys | IDD | Vistas | WildDash | KITTI | Citys | IDD | Vistas ✓ | | | C-20 | 27.8 | 48.0 | 63.0 | 29.8 | 22.5 | 39.2 | 43.1 | 50.3 | 30.4 | 26.0 | ✓ | | I-33 | 36.8 | 40.2 | 47.3 | 63.4 | 23.7 | 48.6 | 50.9 | 55.3 | 65.0 | 49.1 | | ✓ | V-66 | 42.4 | 49.5 | 67.6 | 41.0 | 40.9 | 53.2 | 57.5 | 60.3 | 53.0 | 63.5 ✓ | ✓ | | HTSS-34 | 36.3 | 55.3 | 73.0 | 61.3 | 26.4 | 49.9 | 50.5 | 56.8 | 58.5 | 49.8 ✓ | | ✓ | HTSS-66 | 44.4 | 51.6 | 61.2 | 43.7 | 43.1 | 60.0 | 62.4 | 62.3 | 61.0 | 62.9 | ✓ | ✓ | HTSS-68 | 44.0 | 53.8 | 69.2 | 58.0 | 42.6 | 59.5 | 63.0 | 60.1 | 62.4 | 63.0 ✓ | ✓ | ✓ | HTSS-68 | 44.4 $\left\lceil+16.6\right\rceil$ | 56.5 $\left\lceil+16.3\right\rceil$ | 74.9 +11.9 | 57.9 -5.5 | 43.1 +2.2 | 64.2 $\left\lceil+25.0\right\rceil$ | 64.4 $\left\lceil+21.3\right\rceil$ | 66.7 +16.4 | 65.0 +0.0 | 68.3 +4.8 TABLE IV: HTSS on pixel-labeled datasets with conflicting label spaces. Performance of combined training (bottom rows) compared to single-dataset training (top rows). $L(\text{dataset})$ = 19, 20, 20, 33, 66 for the 5 datasets. $\left\lceil\text{+x}\right\rceil$: up to +x%. Knowledgeability metric. 
This metric quantifies in a single value the semantic richness of the output of a semantic segmentation system, by evaluating how many semantic concepts (atoms) it can recognize with sufficient segmentation accuracy. Using existing metrics, one way to achieve this is to report the size of the system’s output label space and, separately, the IoU performance per class. However, this approach has some pitfalls. First, merely reporting the label space size is not a reliable metric for the semantic richness of predictions, since the IoU performance for some classes can be very low or even zero. Second, IoU-based average aggregates, _e.g_. mIoU, do not reflect the number of recognizable classes, because they assess a system purely at the segmentation level. Finally, these aggregates are intrinsically dependent on the size of the evaluated label space: as the size increases, the difficulty of assigning the correct class increases, eventually leading to mIoU reduction (due to the smoothing properties of averaging).

The new metric is designed to explicitly consider the size of the label spaces of both the system output and the evaluated dataset, together with the segmentation performance for the output classes. The core of the metric is based on counting the number of classes that achieve an IoU higher than a threshold $t$ wrt. the total number of classes $c$ that are considered for computing the metric. To make the metric independent of the proper selection of $t$, the counting is averaged over a set of $N_{T}$ thresholds, which in this work are chosen to be equidistant, _i.e_. $T=\\{0.0,~{}1/N_{T},~{}\dots,~{}1.0-1/N_{T}\\}$. Other values for $T$ can be chosen depending on the application and dataset specifics.

Assuming an output label space of a model that contains $L$ discrete semantic classes, the set of all per-class IoUs $\mathcal{E}=\\{\text{IoU}_{i}\\}_{i=1}^{L}$ can be constructed by evaluating the model output against the ground truth. Subsequently, the set $\mathcal{E}$ is used to generate all the subsets $\tilde{\mathcal{E}}_{t}=\\{\text{IoU}~{}|~{}\text{IoU}>t,~{}\text{IoU}\in\mathcal{E}\\}$ containing the IoUs above each threshold $t$ from $T$. To this end, Knowledgeability is defined as the $c$-normalized number of classes averaged over $T$:

$\mathcal{K}^{c}_{T}=\frac{1}{N_{T}}\sum_{t\in T}\frac{\min(|\tilde{\mathcal{E}}_{t}|,c)}{c}~{}.$ (19)

This definition guarantees $\mathcal{K}^{c}_{T}$ to be between 0 and 1, _i.e_. $0\leq\mathcal{K}\leq\mathcal{K}_{\text{max}}=\min\left(L,c\right)/c\leq 1$, which is achieved by creating the sets $\tilde{\mathcal{E}}_{t}$ using the strictly greater condition ($\text{IoU}>t$) and by employing the $\min$ function. The bounds enable the use of the metric for comparison across datasets with different numbers of classes and across semantic segmentation systems with different numbers of output classes. The new metric Knowledgeability allows us to express the increase in the number of recognizable classes and at the same time consider the performance on these classes, without the need to choose a specific single threshold (a minimal computational sketch is given at the end of Sec. V-B).

### V-B Implementation details

Convolutional network architecture. The convolutional network follows the PSP architecture [76], since it provides a good trade-off between segmentation performance, training time, and memory requirements.
The backbone feature extractor is a ResNet-50 [9] modified for segmentation by i) changing the block strides and dilation rates [77, 76], ii) projecting the output 2048-dimensional features to 256-dimensional features to reduce memory requirements, and iii) using a Pyramid Pooling Module [76], as described in [35].

Hyper-parameter tuning and implementation details. Two factors that emerge in multi-dataset training and significantly affect the accuracy of CNNs for segmentation are the batch size and the input image size. The global and per-dataset batch sizes are connected with the robustness of the optimization algorithm (SGD) and determine the training balance across all classes. As the number of output classes increases, larger batch sizes are required. The second factor, _i.e_. the input image size of the feature extractor, determines the scale and detail of the extracted features throughout the network. Ideally, one would like to use large batch sizes and image sizes given the available (fixed) compute infrastructure. However, using more datasets for multi-dataset training forces larger batch sizes, to guarantee that all datasets are represented in the batch. This, in turn, requires reducing the image size to satisfy the constraints of the available (fixed) compute infrastructure. Each of our experiments (subsections) involves training with a different number of datasets and classes, thus we tune these two hyper-parameters separately per experiment to obtain optimal performance given the constraints. This leads to a different baseline performance for each experiment, and results can therefore not be compared across different experiments. However, within a single experiment, we guarantee that all methods achieve optimal results by finding the empirically best trade-off between batch size and image size given the memory constraints. The experiments are conducted on a machine with 4 Titan V GPUs.

Citys (pixel) | CitysC (pixel) | ECP (bbox) | CitysC (bbox) | Output label space | WildDash ECP-2 | WildDash CitysC-10 | WildDash All | Citys ECP-2 | Citys CitysC-10 | Citys All
---|---|---|---|---|---|---|---|---|---|---
✓ | | | | C-20 | 12.9 | 16.7 | 23.0 | 65.0 | 69.6 | 61.3
✓ | ✓ | | | C-20 | 13.8 | 17.4 | 23.1 | 65.2 | 70.1 | 61.5
✓ | | ✓ | | HTSS-20 | 31.7 $+18.8$ | 20.7 $+4.0$ | 22.7 $-0.3$ | 66.7 $+1.7$ | 70.5 $+0.9$ | 61.3 $+0.0$
✓ | | | ✓ | HTSS-20 | 31.4 $\left\lceil+17.6\right\rceil$ | 20.8 $\left\lceil+3.4\right\rceil$ | 24.4 $\left\lceil+1.4\right\rceil$ | 65.9 $\left\lceil+0.9\right\rceil$ | 70.5 $\left\lceil+0.9\right\rceil$ | 63.4 $\left\lceil+2.1\right\rceil$
✓ | | ✓ | ✓ | HTSS-20 | 31.1 $\left\lceil+18.2\right\rceil$ | 26.2 $\left\lceil+9.5\right\rceil$ | 22.4 $\left\lceil-0.6\right\rceil$ | 64.0 $\left\lceil-1.0\right\rceil$ | 71.6 $\left\lceil+2.0\right\rceil$ | 61.7 $\left\lceil+0.4\right\rceil$

TABLE V: HTSS on pixel-labeled and bounding-box-labeled datasets with non-conflicting label spaces. Accuracy [mIoU %] on the unseen WildDash and the seen Cityscapes (val) datasets, for the specific class subsets (ECP-2, CitysC-10) that receive the extra weak supervision from ECP and CitysC, and for all classes. The pixel-labeled CitysC (row 2) is used to set the oracle for the experiments involving the weakly-labeled CitysC (rows 4, 5). The C-20 and HTSS-20 label spaces coincide; the HTSS prefix is used to emphasize mixed-supervision training using the HTSS framework. $\left\lceil\text{+x}\right\rceil$: up to +x%.
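As referenced in Sec. V-A, Eq. (19) reduces to a few lines of code. The sketch below is our own minimal implementation, using the equidistant thresholds $T=\\{0,~{}1/N_{T},~{}\dots,~{}1-1/N_{T}\\}$.

```python
def knowledgeability(ious, c, n_thresholds=10):
    """Eq. (19): ious = per-class IoUs of the model's output label space;
    c = number of classes considered (e.g., the evaluated label-space size)."""
    thresholds = [k / n_thresholds for k in range(n_thresholds)]
    # for each threshold t, count classes with IoU strictly greater than t,
    # clip at c, normalize by c, and average over all thresholds
    return sum(min(sum(iou > t for iou in ious), c) / c
               for t in thresholds) / n_thresholds


# e.g., three evaluated classes against c = 3:
print(knowledgeability([0.8, 0.6, 0.0], c=3))  # -> 0.4666...
```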
### V-C Strong supervision, conflicting label spaces

In the first set of experiments (Tab. IV), we focus on combining pixel-labeled datasets with conflicting label spaces (case of the third row in Tab. II). This scenario occurs commonly, since different pixel-labeled datasets (for semantic segmentation) are annotated at conflicting levels of semantic granularity. In the experiments, the label spaces of three datasets (Cityscapes, Vistas, IDD) are combined, as described in Sec. IV-A, resulting in the taxonomy of Fig. 3 (without the MTS dataset). The direct solution of training with the union of the datasets and their label spaces is not applicable, since the semantic conflicts among the label spaces introduce ambiguities in the concatenated output label space. For example, the rider Cityscapes class conflicts with the motorcyclist and bicyclist Vistas classes. These conflicts are resolved by generating a universal taxonomy (Sec. IV-A).

We train on all combinations of the three pixel-labeled datasets using the HTSS methodology after solving the conflicts in their label spaces. We compare the accuracy and Knowledgeability (Sec. V-A) of HTSS networks against single-dataset trainings. For these experiments, the input image size is $799\times 799$ and the batch formation is 1 image from Cityscapes, 2 from Vistas, and 1 from IDD, for each GPU.

Accuracy results. As can be seen in Tab. IV, the HTSS-68 network trained on all datasets outperforms 6 out of 7 other networks on seen (columns 3-5) and unseen datasets (columns 1-2). The improvements are due to training a single model with diverse images from multiple datasets (9 times more than Cityscapes and 2 times more than Vistas), which is possible after solving conflicts in the label spaces. An exception is the case of the IDD val split, where the single-dataset training matches or outperforms the HTSS networks. After careful visual examination of the dataset, we have observed that the semantic annotations of IDD have a high degree of overlapping concepts. For example, the road class and the drivable fallback class have partially overlapping definitions, _i.e_. they both contain semantic atoms like pothole or crosswalk. This contradicts our hypothesis of non-conflicting semantic class definitions (see Eq. (5)) and possibly explains the discrepancy in the results.

Knowledgeability results. The HTSS networks are able to segment, in a single pass, an image over more semantic concepts with high attainable mIoU, as demonstrated by Knowledgeability. Moreover, this is proportional to the size of the output label space (_e.g_. columns 6, 7, 10).

### V-D Strong & weak supervision, non-conflicting label spaces

This section explores HTSS on a mix of strongly- and weakly-labeled datasets that have non-conflicting label spaces. The hypothesis under investigation is that a very large quantity of weak labels (exponentially more samples, but with poor localization) can be beneficial if used in combination with strong labels, especially for under-represented classes.

Using the approach developed in Sec. IV-B, we train HTSS networks on the pixel-labeled Cityscapes and Cityscapes Coarse, the bounding-box-labeled ECP, and the bounding-box-labeled Cityscapes Coarse, which we generated from the pixel labels (see Tab. III). The label space used is the common 20 classes of Cityscapes (C-20), which we also call HTSS-20, as it is a subset of the complete taxonomy of Fig. 3. The results are provided in Tab. V.
For these experiments, the input image size is $699\times 699$ and the batch formation is 1 image from Cityscapes, 2 from Cityscapes Coarse, and 2 from ECP, for each GPU.

There are two interesting comparisons to discuss. First, training with Cityscapes and Cityscapes Coarse, irrespective of the supervision type (rows 2 and 4), should in principle give similar performance, since the supervision information is the same. However, the HTSS network shows higher accuracy for all class subsets, either in the same or different image domains. Specifically, the two underrepresented classes of Cityscapes (person and rider, _i.e_. ECP-2) show the largest difference in accuracy, +18.8% for Wild Dash. This indicates that the pixel pseudo-labels generated by HTSS (see Fig. 6) provide more complete training cues than the original pixel labels of Cityscapes Coarse. The second comparison (rows 1 and 3) highlights that bounding-box labels from ECP (different image domain) increase the generalization accuracy compared to training on Cityscapes alone (row 1).

From the results of Tab. V we conclude that the HTSS framework can successfully leverage weak supervision to increase segmentation accuracy on selected classes (_e.g_. vulnerable road users, _i.e_. ECP-2) and at the same time maintain or slightly improve the accuracy achieved by strong supervision (Cityscapes).

A second experiment examines the segmentation accuracy of specific classes when adding weak supervision from bounding boxes and image tags. The results are provided in Tab. VI and demonstrate that an increasing amount of weak supervision improves the segmentation accuracy accordingly. Using weak supervision from the Open Images dataset (rows 2 and 3) improves the average mIoU accuracy by up to +2.9%, while specific classes benefit substantially, with an increase of up to +13.2% IoU. The inclusion of image-tag supervision improves or maintains IoUs for 6 out of 8 classes; however, the improvement is less significant compared to bounding-box supervision only. This shows that weaker and less localized forms of supervision yield smaller gains in segmentation performance.

Citys (pixel) | Open Im. (bbox) | Open Im. (tag) | Bicycle | Bus | Car | Motorc. | Train | Truck | Vehicle mIoU | Person | Rider | Human mIoU
---|---|---|---|---|---|---|---|---|---|---|---|---
✓ | | | 67.8 | 80.1 | 92.3 | 51.9 | 69.6 | 63.2 | 70.8 | 70.9 | 48.5 | 59.7
✓ | ✓ | | 68.7 | 82.1 | 92.9 | 50.2 | 69.8 | 71.9 | 72.6 | 72.5 | 51.2 | 61.9
✓ | ✓ | ✓ | 69.1 | 79.7 | 92.8 | 48.9 | 69.6 | 76.3 | 72.7 | 72.9 | 52.5 | 62.7

TABLE VI: HTSS on pixel-, bounding-box-, and image-tag-labeled images with non-conflicting label spaces. Accuracies (mIoU %) on the seen Cityscapes val split for the specific classes that receive extra supervision from the weakly-labeled Open Images dataset.

Citys (pixel) | CitysT (pixel) | CitysT (bbox) | MTS (bbox) | Output label space | WildDash W-19 mIoU | CitysT-14 mIoU | C-20 mIoU | CitysT-34 mIoU | $\mathcal{K}^{34}$
---|---|---|---|---|---|---|---|---|---
✓ | | | | C-20 | 27.1 | n/a | 70.3 | n/a | 39.3
✓ | ✓ | | | CT-34 | 27.8 | 17.7 | 69.5 | 47.5 | 46.2
✓ | | ✓ | | HTSS-34 | 30.2 $\left\lceil+3.1\right\rceil$ | 17.0 | 69.8 $\left\lceil+0.3\right\rceil$ | 46.9 | 44.5 $\left\lceil+5.2\right\rceil$
✓ | | | ✓ | HTSS-70 | 28.9 | 11.6 | 70.7 | 45.6 | 44.3

TABLE VII: HTSS on pixel-labeled and bounding-box-labeled datasets with conflicting label spaces.
Performance on the unseen WildDash (W-19) and the seen Cityscapes Traffic Signs, for all classes (W-19, CitysT-34) and for class subsets (C-20, CitysT-14). Cityscapes Traffic Signs (CitysT) is originally a pixel-labeled dataset, which we convert to bounding boxes [43] (bbox column). CitysT-34 = C-20 $\cup$ CitysT-14, HTSS-70 = C-20 $\cup$ MTS-50.

### V-E Strong & weak supervision, conflicting label spaces

The most challenging scenario involves multi-dataset training with mixed supervision and conflicting label spaces. To investigate this scenario, we augment the label space of Cityscapes with traffic sign classes from the bounding-box-labeled datasets Cityscapes Traffic Signs (14 t. signs) and MTS (50 t. signs). In this case, the methodology developed in Sec. IV-C is employed, the HTSS network is trained with an input image size of $599\times 599$, and each per-GPU batch contains 1 image from Cityscapes, 3 from Cityscapes Traffic Signs, and 3 from Mapillary Traffic Signs.

Table VII shows the results. We evaluate all networks on the original Cityscapes classes (C-20), the traffic sign class subset (CitysT-14), all CitysT-34 classes (CitysT-14 traffic signs + C-20), as well as on the unseen Wild Dash, whose classes are a subset of Cityscapes (W-19 $\subset$ C-20). Comparing row 3 (HTSS-34) with the first two rows, we observe that with the HTSS methodology the network is able to learn from weak supervision and, in some cases, even surpass the fully pixel-level supervised oracle (row 2). Moreover, the results of the last row indicate that, even when trained with MTS, whose image domain differs from that of Cityscapes, the network is able to segment Cityscapes traffic signs to some extent (11.6%). Overall, these experiments demonstrate the ability to: i) learn from mixed supervision, ii) learn from the same or different image domains, and iii) maintain or slightly improve generalization accuracy for the pixel-labeled classes (columns 1, 3).

Figure 8: Progress of features as more datasets are added, shown as 2-D t-SNE visualizations with identical t-SNE hyper-parameters: (a) single-dataset training on Cityscapes; (b) HTSS on Cityscapes + Vistas + IDD. Clusters become less scattered (smaller intra-class distance) and better separated (larger inter-class distance).

| Model | Conflicts resolution | Memory $\Delta$ Params | Inference $\Delta$ ms | Unseen dts. mmIoU | $\mathcal{K}^{66}$ |
|---|---|---|---|---|---|
| one per dataset | post-proc. merging | $+5.4\cdot 10^{7}$ | +127.1 | 46.2 | 62.6 |
| shared backbone, head per dataset | common classes | $+2.1\cdot 10^{5}$ | +1.7 | 34.2 | 45.6 |
| shared backbone, head per dataset | post-proc. merging | $+2.1\cdot 10^{5}$ | +1.8 | 45.4 | 62.8 |
| shared backbone, single head | common classes | reference | reference | 30.2 | 42.1 |
| shared backbone, single head | semantic atoms (ours) | $+5.1\cdot 10^{3}$ | +0.0 | 50.5 | 68.3 |

TABLE VIII: Common baseline methodologies for combining three pixel-labeled datasets (Cityscapes, Vistas, IDD) with conflicting label spaces. All methods use ResNet-50 backbones and softmax classifiers. The $\Delta$'s for the total number of parameters (Params) and the single-image inference time (ms) are w.r.t. the reference row 4, i.e., keeping only the common, non-conflicting classes from all datasets.

### V-F Ablations and Insights

We now provide further analysis of the conducted experiments. First, an ablation on how the amount of weak supervision affects performance is presented in Tab. IX for the experiment of Sec. V-E. An increasing number of images and bounding boxes from the weakly-labeled dataset are added per step.
We observe that, as weakly-labeled images are included, the segmentation performance increases accordingly. Second, we provide t-SNE plots in Fig. 8 for experiments from Tab. IV. The plots capture the 2-D projections of the output of the feature extractor before and after adding multiple datasets. It can be seen that the representations have better properties from the perspective of classifiability/separability. Finally, we compare the HTSS methodology against various baselines in Tab. VIII, examining memory and time factors w.r.t. the attained performance. The first three rows describe direct solutions using existing trained networks and post-processing for solving conflicts. The single-network approach (fourth row) is the closest to HTSS, but resolves conflicts by maintaining only the common classes. This leads to a significant loss in Knowledgeability, as the number of recognizable classes is reduced. Overall, the HTSS approach uses a reduced number of parameters and performs fast inference, since it uses a common backbone and a single classifier.

| Citys (pixel) | Open Images (bbox-labeled) | mAcc | mIoU |
|---|---|---|---|
| ✓ | - | 81.2 | 70.2 |
| ✓ | $1k$ images ($17.3k$ bboxes) | 80.7 | 69.8 |
| ✓ | $10k$ images ($140.4k$ bboxes) | 81.6 | 70.6 |
| ✓ | $100k$ images ($1185.8k$ bboxes) | 83.7 | 72.3 |

TABLE IX: Segmentation accuracy on Cityscapes (C-20) with different numbers of bounding boxes used to generate pseudo-labels from the weakly-labeled Open Images.

## VI Conclusion

We presented the HTSS framework for simultaneously training FCNs on multiple heterogeneous datasets for semantic segmentation. We explored heterogeneous training with various combinations of strongly (pixel) and weakly (bounding boxes, image tags) labeled datasets. The experiments showed that HTSS improved, in the majority of the combinations, three aspects of the predicted results: i) segmentation performance on test splits of seen (training) datasets, ii) generalization on unseen datasets, and iii) awareness of semantic concepts, as expressed by the proposed Knowledgeability metric.

Our HTSS framework does not require any extra labeling for weakly-labeled datasets; the only manual step consists of defining a semantic taxonomy of the label spaces of the employed datasets, and only in the case of conflicting label spaces. HTSS allows supplementing the pixel-labeled training data with other relevant datasets that otherwise would not be compatible. These properties make the HTSS approach useful for many applications in which training data for semantic segmentation are too scarce to achieve the required performance.

## References

* [1] Y. Guo, Y. Liu, T. Georgiou, and M. Lew, “A review of semantic segmentation using deep neural networks,” _International Journal of Multimedia Information Retrieval_, vol. 7, pp. 87–93, 2017.
* [2] A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez, P. Martinez-Gonzalez, and J. Garcia-Rodriguez, “A survey on deep learning techniques for image and video semantic segmentation,” _Applied Soft Computing_, vol. 70, pp. 41–65, 2018.
* [3] S. Minaee, Y. Y. Boykov, F. Porikli, A. J. Plaza, N. Kehtarnavaz, and D. Terzopoulos, “Image segmentation using deep learning: A survey,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2021.
* [4] H. Zhu, K. Yuen, L. Mihaylova, and H. Leung, “Overview of environment perception for intelligent vehicles,” _IEEE Transactions on Intelligent Transportation Systems_, vol. 18, no. 10, pp. 2584–2601, 2017.
* [5] J. Janai, F. Güney, A. Behl, and A.
Geiger, “Computer vision for autonomous vehicles: Problems, datasets and state of the art,” _Foundations and Trends® in Computer Graphics and Vision_ , vol. 12, no. 1–3, pp. 1–308, 2020\. * [6] S. A. Taghanaki, K. Abhishek, J. P. Cohen, J. Cohen-Adad, and G. Hamarneh, “Deep semantic segmentation of natural and medical images: a review,” _Artificial Intelligence Review_ , pp. 1–42, 2020. * [7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in _Advances in Neural Information Processing Systems_ , vol. 25. Curran Associates, Inc., 2012, pp. 1097–1105. * [8] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in _Proc. of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2015, pp. 1–9. * [9] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proc. of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 770–778. * [10] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in _Proc. of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016. * [11] G. Neuhold, T. Ollmann, S. R. Bulò, and P. Kontschieder, “The mapillary vistas dataset for semantic understanding of street scenes,” in _Proc. of the International Conference on Computer Vision_ , 2017, pp. 22–29. * [12] G. Varma, A. Subramanian, A. Namboodiri, M. Chandraker, and C. Jawahar, “Idd: A dataset for exploring problems of autonomous navigation in unconstrained environments,” in _IEEE Winter Conference on Applications of Computer Vision_. IEEE, 2019, pp. 1743–1751. * [13] J. Zhang, L. Qi, Y. Shi, and Y. Gao, “Generalizable semantic segmentation via model-agnostic learning and target-specific normalization,” _arXiv preprint arXiv:2003.12296_ , 2020. * [14] Q. Dou, D. Coelho de Castro, K. Kamnitsas, and B. Glocker, “Domain generalization via model-agnostic learning of semantic features,” _Advances in Neural Information Processing Systems_ , vol. 32, pp. 6450–6461, 2019. * [15] R. Romijnders, P. Meletis, and G. Dubbelman, “A domain agnostic normalization layer for unsupervised adversarial domain adaptation,” in _IEEE Winter Conference on Applications of Computer Vision_. IEEE, 2019, pp. 1866–1875. * [16] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in _European conference on computer vision_. Springer, 2014, pp. 740–755. * [17] A. Bearman, O. Russakovsky, V. Ferrari, and L. Fei-Fei, “What’s the point: Semantic segmentation with point supervision,” in _European conference on computer vision_. Springer, 2016, pp. 549–565. * [18] A. Gupta, P. Dollar, and R. Girshick, “Lvis: A dataset for large vocabulary instance segmentation,” in _Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 5356–5364. * [19] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in _Conference on Computer Vision and Pattern Recognition_. IEEE, 2009, pp. 248–255. * [20] A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, A. Kolesnikov _et al._ , “The open images dataset v4,” _International Journal of Computer Vision_ , pp. 1–26, 2020. * [21] J. Long, E. Shelhamer, and T. 
Darrell, “Fully convolutional networks for semantic segmentation,” in _Proc. of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2015, pp. 3431–3440. * [22] X. Zhou, V. Koltun, and P. Krähenbühl, “Simple multi-dataset detection,” 2021\. * [23] X. Zhao, S. Schulter, G. Sharma, Y.-H. Tsai, M. Chandraker, and Y. Wu, “Object detection with a unified label space from multiple datasets,” in _European Conference on Computer Vision_. Springer, 2020, pp. 178–193. * [24] R. Ranftl, K. Lasinger, D. Hafner, K. Schindler, and V. Koltun, “Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , pp. 1–1, 2020. * [25] S. Zhao, B. Li, C. Reed, P. Xu, and K. Keutzer, “Multi-source domain adaptation in the deep learning era: A systematic survey,” 2020. * [26] S. Sun, H. Shi, and Y. Wu, “A survey of multi-source domain adaptation,” _Information Fusion_ , vol. 24, pp. 84–92, 2015. * [27] J. Lambert, Z. Liu, O. Sener, J. Hays, and V. Koltun, “Mseg: A composite dataset for multi-domain semantic segmentation,” in _Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 2879–2888. * [28] S. Jain, D. P. Pani, M. Danelljan, and L. Van Gool, “Scaling semantic segmentation beyond 1k classes on a single gpu,” _arXiv preprint arXiv:2012.07489_ , 2020. * [29] P. Bevandić, M. Oršić, I. Grubišić, J. Šarić, and S. Šegvić, “Multi-domain semantic segmentation on datasets with overlapping classes,” _arXiv e-prints_ , pp. arXiv–2009, 2020. * [30] G. Ghiasi, B. Zoph, E. D. Cubuk, Q. V. Le, and T.-Y. Lin, “Multi-task self-training for learning general representations,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 8856–8865. * [31] L. Wang, D. Li, H. Liu, J. Peng, L. Tian, and Y. Shan, “Cross-dataset collaborative learning for semantic segmentation in autonomous driving,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 36, no. 3, 2022, pp. 2487–2494. * [32] J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 7263–7271. * [33] G. Ros, S. Stent, P. F. Alcantarilla, and T. Watanabe, “Training constrained deconvolutional networks for road scene semantic segmentation,” _arXiv preprint arXiv:1604.01545_ , 2016. * [34] X. Liang, H. Zhou, and E. Xing, “Dynamic-structured semantic propagation network,” in _Proc. of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 752–761. * [35] P. Meletis and G. Dubbelman, “Training of convolutional networks on multiple heterogeneous datasets for street scene semantic segmentation,” in _IEEE IV 2018_. IEEE, 6 2018, pp. 0–8. * [36] T. Kalluri, G. Varma, M. Chandraker, and C. Jawahar, “Universal semi-supervised semantic segmentation,” in _Proc. of the IEEE International Conference on Computer Vision_ , 2019, pp. 5259–5270. * [37] M. Leonardi, D. Mazzini, and R. Schettini, “Training efficient semantic segmentation cnns on multiple datasets,” in _International Conference on Image Analysis and Processing_. Springer, 2019, pp. 303–314. * [38] X. Fang and P. Yan, “Multi-organ segmentation over partially labeled datasets with multi-scale feature abstraction,” _arXiv preprint arXiv:2001.00208_ , 2020. * [39] L. Sun, K. Yang, X. Hu, W. Hu, and K. 
Wang, “Real-time fusion network for rgb-d semantic segmentation incorporating unexpected obstacle detection for road-driving images,” _IEEE Robotics and Automation Letters_ , vol. 5, no. 4, pp. 5558–5565, 2020. * [40] F. Kong, C. Chen, B. Huang, L. M. Collins, K. Bradbury, and J. M. Malof, “Training a single multi-class convolutional segmentation network using multiple datasets with heterogeneous labels: preliminary results,” in _IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium_. IEEE, 2019, pp. 3903–3906. * [41] Y. Zou, Z. Zhang, H. Zhang, C.-L. Li, X. Bian, J.-B. Huang, and T. Pfister, “Pseudoseg: Designing pseudo labels for semantic segmentation,” _arXiv preprint arXiv:2010.09713_ , 2020. * [42] M. S. Ibrahim, A. Vahdat, M. Ranjbar, and W. G. Macready, “Semi-supervised semantic image segmentation with self-correcting networks,” in _Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 12 715–12 725. * [43] P. Meletis and G. Dubbelman, “On boosting semantic street scene segmentation with weak supervision,” in _2019 IEEE Intelligent Vehicles Symposium (IV)_. * [44] G. Papandreou, L.-C. Chen, K. P. Murphy, and A. L. Yuille, “Weakly-and semi-supervised learning of a deep convolutional network for semantic image segmentation,” in _International Conference on Computer Vision_. IEEE, 2015, pp. 1742–1750. * [45] J. Zhu, J. Mao, and A. L. Yuille, “Learning from weakly supervised data by the expectation loss svm (e-svm) algorithm,” in _Advances in Neural Information Processing Systems_ , 2014, pp. 1125–1133. * [46] J. Dai, K. He, and J. Sun, “Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation,” in _Proc. of the IEEE International Conference on Computer Vision_ , 2015, pp. 1635–1643. * [47] X. Wang, S. Liu, H. Ma, and M.-H. Yang, “Weakly-supervised semantic segmentation by iterative affinity learning,” _International Journal of Computer Vision_ , pp. 1–14, 2020. * [48] D. Pathak, P. Krahenbuhl, and T. Darrell, “Constrained convolutional neural networks for weakly supervised segmentation,” in _Proc. of the IEEE International Conference on Computer Vision_ , 2015, pp. 1796–1804. * [49] F. Meng, K. Luo, H. Li, Q. Wu, and X. Xu, “Weakly supervised semantic segmentation by a class-level multiple group cosegmentation and foreground fusion strategy,” _IEEE Transactions on Circuits and Systems for Video Technology_ , 2019. * [50] L. Ye, Z. Liu, and Y. Wang, “Learning semantic segmentation with diverse supervision,” in _2018 IEEE Winter Conference on Applications of Computer Vision_. IEEE, 2018, pp. 1461–1469. * [51] P. Meletis, R. Romijnders, and G. Dubbelman, “Data selection for training semantic segmentation cnns with cross-dataset weak supervision,” in _IEEE Intelligent Transportation Systems Conference_. IEEE, 2019, pp. 3682–3688. * [52] X. Li, H. Ma, and X. Luo, “Weaklier supervised semantic segmentation with only one image level annotation per category,” _IEEE Transactions on Image Processing_ , vol. 29, pp. 128–141, 2019. * [53] Q. Li, A. Arnab, and P. H. Torr, “Weakly- and semi-supervised panoptic segmentation,” in _The European Conference on Computer Vision_ , September 2018. * [54] K. Weiss, T. M. Khoshgoftaar, and D. Wang, “A survey of transfer learning,” _Journal of Big data_ , vol. 3, no. 1, pp. 1–40, 2016. * [55] J. He, X. Jia, S. Chen, and J. Liu, “Multi-source domain adaptation with collaborative learning for semantic segmentation,” _arXiv preprint arXiv:2103.04717_ , 2021. * [56] F. 
J. Piva and G. Dubbelman, “Exploiting image translations via ensemble self-supervised learning for unsupervised domain adaptation,” _arXiv preprint arXiv:2107.06235_ , 2021. * [57] V. Nekrasov, T. Dharmasiri, A. Spek, T. Drummond, C. Shen, and I. Reid, “Real-time joint semantic segmentation and depth estimation using asymmetric annotations,” in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 7101–7107. * [58] I. Kokkinos, “Ubernet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory,” in _Proc. of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 6129–6138. * [59] S. Vandenhende, S. Georgoulis, W. Van Gansbeke, M. Proesmans, D. Dai, and L. Van Gool, “Multi-task learning for dense prediction tasks: A survey,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2021. * [60] M. Crawshaw, “Multi-task learning with deep neural networks: A survey,” 2020. * [61] Y. Nakajima, B. Kang, H. Saito, and K. Kitani, “Incremental class discovery for semantic segmentation with rgbd sensing,” in _Proc. of the IEEE/CVF International Conference on Computer Vision_ , 2019, pp. 972–981. * [62] M. Klingner, A. Bär, P. Donn, and T. Fingscheidt, “Class-incremental learning for semantic segmentation re-using neither old data nor old labels,” in _2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)_. IEEE, 2020, pp. 1–8. * [63] A. Saporta, T.-H. Vu, M. Cord, and P. Pérez, “Esl: Entropy-guided self-supervised learning for domain adaptation in semantic segmentation,” _arXiv preprint arXiv:2006.08658_ , 2020. * [64] W. P. Sanberg, G. Dubbleman _et al._ , “Free-space detection with self-supervised and online trained fully convolutional networks,” _Electronic Imaging_ , vol. 2017, no. 19, pp. 54–61, 2017. * [65] B. Zoph, G. Ghiasi, T.-Y. Lin, Y. Cui, H. Liu, E. D. Cubuk, and Q. Le, “Rethinking pre-training and self-training,” _Advances in Neural Information Processing Systems_ , vol. 33, 2020. * [66] X. Zhan, Z. Liu, P. Luo, X. Tang, and C. Loy, “Mix-and-match tuning for self-supervised semantic segmentation,” in _Proc. of the AAAI Conference on Artificial Intelligence_ , vol. 32, no. 1, 2018. * [67] M.-L. Zhang, F. Yu, and C.-Z. Tang, “Disambiguation-free partial label learning,” _IEEE Transactions on Knowledge and Data Engineering_ , vol. 29, no. 10, pp. 2155–2167, 2017. * [68] J. Cid-Sueiro, “Proper losses for learning from partial labels,” in _Advances in Neural Information Processing Systems_. Citeseer, 2012, pp. 1565–1573. * [69] G. A. Miller, “Wordnet: a lexical database for english,” _Communications of the ACM_ , vol. 38, no. 11, pp. 39–41, 1995. * [70] R. Navigli and S. P. Ponzetto, “Babelnet: Building a very large multilingual semantic network,” in _Proc. of the 48th annual meeting of the association for computational linguistics_ , 2010, pp. 216–225. * [71] M. Kulmanov, F. Z. Smaili, X. Gao, and R. Hoehndorf, “Semantic similarity and machine learning with ontologies,” _Briefings in Bioinformatics_ , 2021. * [72] M. Braun, S. Krebs, F. Flohr, and D. M. Gavrila, “Eurocity persons: A novel benchmark for person detection in traffic scenes,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 41, no. 8, pp. 1844–1861, 2019. * [73] C. Ertler, J. Mislej, T. Ollmann, L. Porzi, G. Neuhold, and Y. 
Kuang, “The mapillary traffic sign dataset for detection and classification on a global scale,” in _European Conference on Computer Vision_. Springer, 2020, pp. 68–84.
* [74] O. Zendel, K. Honauer, M. Murschitz, D. Steininger, and G. Fernandez Dominguez, “Wilddash-creating hazard-aware benchmarks,” in _Proc. of the European Conference on Computer Vision_, 2018, pp. 402–416.
* [75] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The kitti dataset,” _The International Journal of Robotics Research_, vol. 32, no. 11, pp. 1231–1237, 2013.
* [76] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in _Proc. of the IEEE Conference on Computer Vision and Pattern Recognition_, 2017, pp. 2881–2890.
* [77] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” in _International Conference on Learning Representations_, Y. Bengio and Y. LeCun, Eds., 2016.

Panagiotis Meletis holds a PhD from the Eindhoven University of Technology. He works on advancing deep learning techniques for autonomous vehicles within the Mobile Perception Systems lab. His research involves the design of novel algorithms towards holistic visual scene understanding. Panagiotis received his MSc in Electrical and Computer Engineering from the National Technical University of Athens in 2015. His current research interests include deep learning techniques for machine perception, automated visual reasoning, and explainable and sustainable artificial intelligence.

Gijs Dubbelman, PhD, is an associate professor with the Eindhoven University of Technology, heading the Mobile Perception Systems (MPS) lab, which aims to improve the real-time perception capabilities of mobile sensor platforms through research on artificial intelligence. Key research areas of the MPS lab are 3-D computer vision, multi-modal pattern recognition, and deep learning. Formerly, Gijs Dubbelman was a member of the Field Robotics Center of Carnegie Mellon's Robotics Institute, where he performed research on Visual-SLAM for autonomous robots and vehicles.
# Factored couplings in multi-marginal optimal transport via difference of convex programming

Quang Huy Tran (Univ. Bretagne-Sud, CNRS, IRISA, F-56000 Vannes) <EMAIL_ADDRESS>
Hicham Janati (École Polytechnique, CMAP, UMR 7641, F-91120 Palaiseau) <EMAIL_ADDRESS>
Ievgen Redko (Univ Lyon, UJM-Saint-Etienne, CNRS, UMR 5516, F-42023 Saint-Etienne) <EMAIL_ADDRESS>
Rémi Flamary (École Polytechnique, CMAP, UMR 7641, F-91120 Palaiseau) <EMAIL_ADDRESS>
Nicolas Courty (Univ. Bretagne-Sud, CNRS, IRISA, F-56000 Vannes) <EMAIL_ADDRESS>

###### Abstract

Optimal transport (OT) theory underlies many emerging machine learning (ML) methods nowadays, solving a wide range of tasks such as generative modeling, transfer learning and information retrieval. These latter works, however, usually build upon the traditional OT setup with two distributions, while leaving the more general multi-marginal OT formulation somewhat unexplored. In this paper, we study the multi-marginal OT (MMOT) problem and unify several popular OT methods under its umbrella by promoting structural information on the coupling. We show that incorporating such structural information into MMOT results in an instance of a difference of convex (DC) programming problem, allowing us to solve it numerically. Despite the high computational cost of the latter procedure, the solutions provided by DC optimization are usually of comparable quality to those obtained using currently employed optimization schemes.

## 1 Introduction

Broadly speaking, the classic OT problem provides a principled approach for transporting one probability distribution onto another following the principle of least effort. Such a problem, and the distance on the space of probability distributions derived from it, arise in many areas of machine learning (ML) including generative modeling, transfer learning and information retrieval, where OT has been successfully applied. A natural extension of classic OT, in which the admissible transport plan (a.k.a. coupling) can have more than two prescribed marginal distributions, is called multi-marginal optimal transport (MMOT) (Gangbo and Swiech, 1998). The latter has several attractive properties: it enjoys a duality theory (Kellerer, 1984) and finds connections with the Wasserstein barycenter problem (Agueh and Carlier, 2011) used for data averaging, and with probabilistic graphical models (Haasler et al., 2020). While being far less popular than the classic OT with two marginals, MMOT is a very useful framework on its own, with some notable recent applications in generative adversarial networks (Cao et al., 2019), clustering (Mi and Bento, 2020) and domain adaptation (Hui et al., 2018; He et al., 2019), to name a few.

The recent success of OT in ML is often attributed to the entropic regularization (Cuturi, 2013), where the authors imposed a constraint on the coupling matrix forcing it to be closer to the independent coupling given by the rank-one product of the marginals. Such a constraint leads to the appearance of a strongly convex entropy term in the objective function and allows the entropic OT problem to be solved efficiently using the simple Sinkhorn-Knopp matrix-balancing algorithm. In addition to this, it was also noticed that structural constraints allow one to reduce the prohibitively high sample complexity of the classic OT problem (Genevay et al., 2019; Forrow et al., 2019).
While these and several other recent works (Lin et al., 2021; Scetbon et al., 2021) addressed structural constraints in the classic OT problem with two marginals, none of them considered the much more challenging case of doing so in a multi-marginal setting. On the other hand, while the work of Haasler et al. (2020) considers the MMOT problem in which the cost tensor is induced by a graphical structure, it does not naturally promote the factorizability of transportation plans.

#### Contributions

In this paper, we define and study a general MMOT problem with structural penalization on the coupling matrix. We start by showing that such a formulation includes several popular OT methods as special cases and allows us to gain deeper insights into them. We further consider a relaxed problem where hard constraints are replaced by a regularization term and show that it leads to an instance of a difference of convex (DC) programming problem. A numerical study of the solutions obtained when solving the latter in cases of interest highlights their competitive performance compared to solutions provided by previously used optimization strategies.

## 2 Preliminary knowledge

#### Notations.

For each integer $n\geq 1$, we write $[n]:=\{1,...,n\}$. For each discrete probability measure $\mu$ with finite support, its negative entropy is defined as $H(\mu)=\langle\mu,\log\mu\rangle$, where the logarithm operator is element-wise, with the convention that $0\log 0=0$. Here, $\langle\cdot,\cdot\rangle$ denotes the Frobenius inner product. The Kullback-Leibler divergence between two discrete probability measures $\mu$ and $\nu$ with finite supports is defined as

$\text{KL}(\mu|\nu)=\begin{cases}\langle\mu,\log\frac{\mu}{\nu}\rangle,&\text{if }\mu\text{ is absolutely continuous with respect to }\nu,\\ \infty,&\text{otherwise},\end{cases}$

where the division operator in the logarithm is element-wise. In what follows, we write $\mathcal{T}=(1,...,N)$, for some integer $N\geq 1$. For any positive integers $a_{1},...,a_{N}$, we call $P\in\mathbb{R}^{a_{1}\times...\times a_{N}}$ an $N$-D tensor. In particular, a $1$-D tensor is a vector and a $2$-D tensor is a matrix. A tensor is a probability tensor if its entries are nonnegative and the sum of all entries is $1$. Given $N$ probability vectors $\mu_{1},...,\mu_{N}$, we write $\mu=(\mu_{n})_{n=1}^{N}$. We denote by $\Sigma_{\mathcal{T}}$ the set of $N$-D probability tensors and by $U_{\mathcal{T}}\subset\Sigma_{\mathcal{T}}$ the set of tensors whose $N$ marginal distributions are $\mu_{1},...,\mu_{N}$. Any coupling in $U_{\mathcal{T}}$ is said to be admissible. We write $\mu_{\mathcal{T}}:=\mu_{1}\otimes...\otimes\mu_{N}$ for the tensor product (or product measure) of $\mu_{1},...,\mu_{N}$.

#### Multi-marginal OT problem.

Given a collection of $N$ probability vectors $\mu=(\mu_{n}\in\mathbb{R}^{a_{n}})_{n=1}^{N}$ and an $N$-D cost tensor $C\in\mathbb{R}^{a_{1}\times...\times a_{N}}$, the MMOT problem reads

$\text{MMOT}(\mu)=\inf_{P\in U_{\mathcal{T}}}\langle C,P\rangle.$ (1)

In practice, such a formulation is intractable to optimize in a discrete setting, as it results in a linear program where the number of constraints grows exponentially in $N$. A more tractable strategy for solving MMOT is to consider the following entropic regularization problem

$\inf_{P\in U_{\mathcal{T}}}\langle C,P\rangle+\varepsilon\text{KL}(P|\mu_{\mathcal{T}}),$ (2)

which can be solved using Sinkhorn's algorithm (Benamou et al., 2014); a sketch of the resulting updates is given below.
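Concretely, the following is a minimal numpy sketch of the log-domain fixed-point updates for problem (2), as derived in the Appendix and summarized in Algorithm 3; it is an illustration, not an optimized implementation:

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_mmot(C, mus, eps, n_iter=200):
    """Entropic MMOT, problem (2): C is an N-D cost tensor,
    mus is a list of its N marginal histograms, eps > 0."""
    N = C.ndim
    fs = [np.zeros(a) for a in C.shape]  # dual potentials f_1, ..., f_N

    def log_kernel():
        # (sum_n f_n[i_n] - C) / eps, with each f_n broadcast along axis n
        M = -C / eps
        for n, f in enumerate(fs):
            M = M + (f / eps).reshape([-1 if k == n else 1 for k in range(N)])
        return M

    for _ in range(n_iter):
        for n in range(N):
            other = tuple(k for k in range(N) if k != n)
            # f_n <- eps*log(mu_n) - eps*log sum_{i_-n} exp((sum_{j!=n} f_j - C)/eps)
            fs[n] += eps * np.log(mus[n]) - eps * logsumexp(log_kernel(), axis=other)
    return np.exp(log_kernel())  # optimal plan, cf. Eq. (5) in the Appendix
```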
We refer the interested reader to the Supplementary materials for further algorithmic details.

## 3 Factored Multi-marginal Optimal Transport

In this section, we first define a factored MMOT (F-MMOT) problem where we seek to promote a structure on the optimal coupling, such as a factorization into a tensor product. Interestingly, such a formulation can be shown to include several other OT problems as special cases. Then, we introduce a relaxed version called MMOT-DC, where the factorization constraint is smoothly promoted through a Kullback-Leibler penalty.

### 3.1 Motivation

Before a formal statement of our problem, we first give a couple of motivating examples showing why and when structural constraints on the coupling matrix can be beneficial. To this end, first note that a trivial example of the usefulness of such constraints in OT is the famous entropic regularization. Indeed, while most works define the latter by adding the negative entropy of the coupling to the classic OT objective function directly, the original idea was to constrain the sought coupling to remain close (to some extent) to the rank-one product of the two marginal distributions. The appearance of the negative entropy in the final objective function is then only a byproduct of such a constraint, due to the decomposition of the KL divergence into a sum of three terms, two of which are constant. Below we give two more examples of real-world applications related to the MMOT problem where a certain decomposition imposed on the coupling tensor can be desirable.

#### Multi-source multi-target translation.

A popular task in computer vision is to match images across different domains in order to perform the so-called image translation. Such tasks are often tackled within the GAN framework, where one source domain, from which the translation is performed, is matched with multiple target domains modeled using generators. While MMOT was applied in this context by Cao et al. (2019) when only one source was considered, its application in a multi-source setting may benefit from structural constraints on the coupling tensor incorporating the human prior on which target domains each source domain should be matched to.

#### Multi-task reinforcement learning.

In this application, the goal is to learn individual policies for a set of agents while taking into account the similarities between them, hoping that the latter will improve the individual policies. A common approach is to consider an objective function consisting of two terms, where the first term is concerned with learning individual policies, while the second forces a consensus between them. Similar to the example considered above, the MMOT problem was used to promote the consensus across different agents' policies in (Cohen et al., 2021), even though such a consensus could have benefited from a prior regarding the semantic relationships between the learned tasks.

### 3.2 Factored MMOT and its relaxation

We start by giving several definitions used in the following parts of the paper.

###### Definition 3.1 (Tuple partition)

A sequence of tuples $(\mathcal{T}_{m})_{m=1}^{M}$, with $M\leq N$, is called a partition of an $N$-tuple $\mathcal{T}$ if the tuples $\mathcal{T}_{1},...,\mathcal{T}_{M}$ are nonempty and disjoint, and their concatenation gives $\mathcal{T}$. Here, we implicitly take into account the order of the tuple, which is not the case for the partition of a set.
If there exists a tuple in $(\mathcal{T}_{m})_{m=1}^{M}$ which contains only one element, then we say that $(\mathcal{T}_{m})_{m=1}^{M}$ is degenerate.

###### Definition 3.2 (Marginal tensor)

Given a tensor $P\in\mathbb{R}^{a_{1}\times...\times a_{N}}$ and a partition $(\mathcal{T}_{m})_{m=1}^{M}$ of $\mathcal{T}=(1,...,N)$, we call $P_{\#m}$ the $\mathcal{T}_{m}$-marginal tensor, obtained by summing $P$ over all dimensions not in $\mathcal{T}_{m}$. We write $P_{\#\mathcal{T}}=P_{\#1}\otimes...\otimes P_{\#M}\in\mathbb{R}^{a_{1}\times...\times a_{N}}$ for the tensor product of its marginal tensors.

For example, for $M=N=2$ and $\mathcal{T}=(1,2)$, we have $\mathcal{T}_{1}=(1)$ and $\mathcal{T}_{2}=(2)$. So, given a matrix $P\in\mathbb{R}^{a_{1}\times a_{2}}$, its marginal tensors $P_{\#1}$ and $P_{\#2}$ are vectors in $\mathbb{R}^{a_{1}}$ and $\mathbb{R}^{a_{2}}$, respectively, defined by $(P_{\#1})_{i}=\sum_{j}P_{ij}$ and $(P_{\#2})_{j}=\sum_{i}P_{ij}$ for $(i,j)\in[a_{1}]\times[a_{2}]$. The tensor product $P_{\#\mathcal{T}}\in\mathbb{R}^{a_{1}\times a_{2}}$ is defined by $(P_{\#\mathcal{T}})_{ij}=(P_{\#1})_{i}(P_{\#2})_{j}$. Clearly, if $P$ is a probability tensor, then so are its marginal tensors.

Suppose $\mathcal{T}_{m}=(p,...,q)$ for $m=1,...,M$ and $1\leq p\leq q\leq N$. We denote by $\Sigma_{\mathcal{T}_{m}}$ the set of probability tensors in $\mathbb{R}^{a_{p}\times...\times a_{q}}$ and by $U_{\mathcal{T}_{m}}$ the set of probability tensors in $\mathbb{R}^{a_{p}\times...\times a_{q}}$ whose $(r)$-marginal vector is $\mu_{r}$, for every $r=p,...,q$. We define $\mu_{\mathcal{T}_{m}}:=\mu_{p}\otimes...\otimes\mu_{q}$ by $(\mu_{\mathcal{T}_{m}})_{i_{p}...i_{q}}:=\prod_{r}(\mu_{r})_{i_{r}}$, for $i_{r}\in[a_{r}]$, with $r=p,...,q$. Clearly, we have $\mu_{\mathcal{T}}=\mu_{\mathcal{T}_{1}}\otimes...\otimes\mu_{\mathcal{T}_{M}}$.

###### Definition 3.3 (Factored MMOT)

Given a partition $(\mathcal{T}_{m})_{m=1}^{M}$ of $\mathcal{T}$ and a collection of histograms $\mu$, we consider the following OT problem

$\text{F-MMOT}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}=\inf_{P\in\mathcal{F}_{M}}\langle C,P\rangle,$ (F-MMOT)

where $\mathcal{F}_{M}\subset U_{\mathcal{T}}$ is the set of admissible couplings which can be factorized as a tensor product of $M$ component probability tensors in $\Sigma_{\mathcal{T}_{1}},...,\Sigma_{\mathcal{T}_{M}}$.

Several remarks are in order here. First, one should note that the partition considered above is in general not degenerate, meaning that the decomposition can involve tensors of an arbitrary order $<N$. Second, the decomposition in this setting depicts the prior knowledge regarding the tuples of measures which should be independent: the couplings for the measures from different tuples will be degenerate, and the optimal coupling tensor will be reconstructed from the couplings of each tuple separately. Third, suppose the partition $(\mathcal{T}_{m})_{m=1}^{M}$ is not degenerate and $M=2$, i.e., the tensor is factorized as a product of two tensors; then the problem F-MMOT is equivalent to a variation of low nonnegative rank OT (see Appendix for a proof). As for the existence of a solution to this problem, note that $\mathcal{F}_{M}$ is compact because it is a closed subset of the compact set $U_{\mathcal{T}}$, which implies that F-MMOT always admits a solution.
Furthermore, observe that

$\begin{split}\mathcal{F}_{M}&=\{P\in U_{\mathcal{T}}:P=P_{1}\otimes...\otimes P_{M},\text{ where }P_{m}\in\Sigma_{\mathcal{T}_{m}},\forall m=1,...,M\}\\ &=\{P\in\Sigma_{\mathcal{T}}:P=P_{1}\otimes...\otimes P_{M},\text{ where }P_{m}\in U_{\mathcal{T}_{m}},\forall m=1,...,M\}.\end{split}$

Thus, the problem F-MMOT can be rewritten as

$\text{F-MMOT}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}=\inf_{\begin{subarray}{c}P_{m}\in U_{\mathcal{T}_{m}}\\ \forall m=1,...,M\end{subarray}}\langle C,P_{1}\otimes...\otimes P_{M}\rangle.$

So, if $\mathcal{T}_{1},...,\mathcal{T}_{M}$ are $2$-tuples and the two marginal distributions corresponding to each $U_{\mathcal{T}_{m}}$ are identical and uniform, then by Birkhoff's theorem (Birkhoff, 1946), F-MMOT admits an optimal solution in which each component tensor $P_{m}$ is a permutation matrix.

#### COOT and GW as special cases.

When $N=4$ and $M=2$, with $\mathcal{T}_{1}=(1,2)$ and $\mathcal{T}_{2}=(3,4)$, the problem F-MMOT becomes CO-Optimal Transport (Redko et al., 2020), where the two component tensors are known as the sample and feature couplings. If, furthermore, $a_{1}=a_{3},a_{2}=a_{4}$, and $\mu_{1}=\mu_{3},\mu_{2}=\mu_{4}$, it becomes a lower bound of the discrete Gromov-Wasserstein distance (Mémoli, 2011). This means that our formulation can be seen as a generalization of several OT formulations.

Observe that if a probability tensor $P$ can be factorized as a tensor product of probability tensors, i.e., $P=P_{1}\otimes...\otimes P_{M}$, then each $P_{m}$ is also the $\mathcal{T}_{m}$-marginal tensor of $P$. This prompts us to consider the following relaxation of factored MMOT, where the hard constraint $\mathcal{F}_{M}$ is replaced by a regularization term.

###### Definition 3.4 (Relaxed Factored MMOT)

Given $\varepsilon\geq 0$, a partition $(\mathcal{T}_{m})_{m=1}^{M}$ of $\mathcal{T}$ and a collection of measures $\mu$, we define the following problem:

$\text{MMOT-DC}_{\varepsilon}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}=\inf_{P\in U_{\mathcal{T}}}\langle C,P\rangle+\varepsilon\text{KL}(P|P_{\#\mathcal{T}}).$ (MMOT-DC)

From the exposition above, one can see that this relaxation is reminiscent of the entropic regularization in MMOT: in the special case $M=N$, it coincides with entropic MMOT and, as such, also recovers classical entropic OT. One should note that the choice of the KL divergence is not arbitrary, and its advantage will become clear when it comes to the algorithm. After having defined the two optimization problems, we now set out to explore their theoretical properties.

### 3.3 Theoretical properties

Intuitively, the relaxed problem is expected to allow for solutions with a lower value of the final objective function. We formally prove the validity of this intuition below.

###### Proposition 3.1 (Preliminary properties)

1. $\forall\varepsilon\geq 0,\text{MMOT}(\mu)\leq\text{MMOT-DC}_{\varepsilon}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}\leq\text{F-MMOT}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}$.
2. $\forall\varepsilon>0,\text{MMOT-DC}_{\varepsilon}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}=0$ if and only if $\text{F-MMOT}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}=0$.

An interesting property of $\text{MMOT-DC}_{\varepsilon}$ is that it interpolates between MMOT and F-MMOT. Informally, for very large $\varepsilon$, the KL divergence term dominates, so the optimal transport plans tend to be factorizable.
On the other hand, for very small $\varepsilon$, the KL divergence term becomes negligible and we approach MMOT. The result below formalizes this intuition.

###### Proposition 3.2 (Interpolation between MMOT and F-MMOT)

For any partition $(\mathcal{T}_{m})_{m=1}^{M}$ of $\mathcal{T}$ and for $\varepsilon>0$, let $P_{\varepsilon}$ be a minimiser of the problem $\text{MMOT-DC}_{\varepsilon}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}$.

1. When $\varepsilon\to\infty$, one has $\text{MMOT-DC}_{\varepsilon}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}\to\text{F-MMOT}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}$. In this case, any cluster point of the sequence of minimisers $(P_{\varepsilon})_{\varepsilon}$ is a minimiser of $\text{F-MMOT}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}$.
2. When $\varepsilon\to 0$, then $\text{MMOT-DC}_{\varepsilon}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}\to\text{MMOT}(\mu)$. In this case, any cluster point of the sequence of minimisers $(P_{\varepsilon})_{\varepsilon}$ is a minimiser of $\text{MMOT}(\mu)$.

#### GW distance revisited.

Somewhat surprisingly, the relaxation MMOT-DC also allows us to prove the equality between the GW distance and COOT in the discrete setting. Let $\mathcal{X}$ be a finite subset (of size $m$) of a certain metric space. Denote by $C_{x}\in\mathbb{R}^{m\times m}$ its similarity matrix (e.g., distance matrix). We define similarly the set $\mathcal{Y}$ of size $n$ and the corresponding similarity matrix $C_{y}\in\mathbb{R}^{n\times n}$. We also assign two discrete probability measures $\mu_{x}\in\mathbb{R}^{m}$ and $\mu_{y}\in\mathbb{R}^{n}$ to $\mathcal{X}$ and $\mathcal{Y}$, respectively. The GW distance is then defined as

$\text{GW}(C_{x},C_{y})=\inf_{P\in U(\mu_{x},\mu_{y})}\langle L_{p}(C_{x},C_{y}),P\otimes P\rangle,$

and the COOT reads

$\text{COOT}(C_{x},C_{y})=\inf_{\begin{subarray}{c}P\in U(\mu_{x},\mu_{y})\\ Q\in U(\mu_{x},\mu_{y})\end{subarray}}\langle L_{p}(C_{x},C_{y}),P\otimes Q\rangle,$

where the $4$-D tensor $L_{p}(X,Y)\in\mathbb{R}^{m\times n\times m\times n}$ is defined by $\big{(}L_{p}(X,Y)\big{)}_{i,j,k,l}=|X_{i,k}-Y_{j,l}|^{p}$, for some $p\geq 1$, and $U(\mu,\nu)$ is the set of couplings in $\mathbb{R}^{m\times n}$ whose two marginal distributions are $\mu$ and $\nu$. When $C_{x}$ and $C_{y}$ are two squared Euclidean distance matrices and $p=2$, it can be shown that the GW distance is equal to COOT (Redko et al., 2020). This is also true when $L_{p}(C_{x},C_{y})$ is a negative definite kernel (Séjourné et al., 2020). Here, we establish another case where this equality still holds.

###### Corollary 3.3

If $L_{p}(C_{x},C_{y})$ is a (symmetric) Lipschitz function with respect to both inputs, and induces a strictly positive definite kernel $\exp\big{(}-\frac{L_{p}(C_{x},C_{y})}{\varepsilon}\big{)}$ on $(\mathcal{X}\times\mathcal{Y})^{2}$, for every $\varepsilon>0$, then there exists a solution $(P,Q)$ of COOT such that $P=Q$. As a consequence, we have the equality between the GW distance and COOT.

The proof relies on the connection between MMOT-DC and COOT shown in Proposition 3.2, and on the fact that, under the assumptions on the cost tensor, the two $\mathcal{T}_{1},\mathcal{T}_{2}$-marginal matrices of the $4$-D solution of MMOT-DC are in fact identical. The proof of the second claim is deferred to the Appendix.

## 4 Numerical solution

We now turn to the computational aspects of the problem MMOT-DC.
First, note that the KL divergence term can be decomposed as

$\text{KL}(P|P_{\#\mathcal{T}})=H(P)-\sum_{m=1}^{M}H(P_{\#m}),$

where each function $H_{m}$, defined by $H_{m}(P):=H(P_{\#m})$, is continuous and convex with respect to $P$. Now, the problem MMOT-DC becomes

$\text{MMOT-DC}_{\varepsilon}\big{(}(\mathcal{T}_{m})_{m=1}^{M},\mu\big{)}=\inf_{P\in U_{\mathcal{T}}}\langle C,P\rangle+\varepsilon H(P)-\varepsilon\sum_{m=1}^{M}H_{m}(P).$ (3)

This is nothing but a Difference of Convex (DC) programming problem (which explains the name MMOT-DC), thanks to the convexity of the set $U_{\mathcal{T}}$ and of the entropy function $H$. Thus, it can be solved by the DC algorithm (Tao and Souad, 1986; Pham and Thi, 1997) as follows: at iteration $t$,

1. Calculate $G^{(t)}\in\partial\big(\sum_{m=1}^{M}H_{m}\big)(P^{(t)})$.
2. Solve $P^{(t+1)}\in\arg\min_{P\in U_{\mathcal{T}}}\langle C-\varepsilon G^{(t)},P\rangle+\varepsilon H(P)$.

This algorithm is very easy to implement. Indeed, the second step is a classical entropic-regularized MMOT problem and can be solved by the Sinkhorn algorithm (Algorithm 3). In the first step, the gradient can be calculated explicitly. For the sake of simplicity, we illustrate the calculation in a simple case, where $M=2$ and $N=4$, with $\mathcal{T}_{1},\mathcal{T}_{2}$ two $2$-tuples. The function $H_{1}+H_{2}$ is differentiable at strictly positive tensors, so $G^{(t)}=\nabla_{P}(H_{1}+H_{2})(P^{(t)})$. Given a $4$-D probability tensor $P$, we have

$H_{1}(P)+H_{2}(P)=\sum_{i,j,k,l}P_{i,j,k,l}\log\Big{(}\sum_{i',j'}P_{i',j',k,l}\Big{)}+P_{i,j,k,l}\log\Big{(}\sum_{k',l'}P_{i,j,k',l'}\Big{)}.$

So,

$\frac{\partial(H_{1}+H_{2})}{\partial P_{i,j,k,l}}=\log\Big{(}\sum_{i',j'}P_{i',j',k,l}\Big{)}+\log\Big{(}\sum_{k',l'}P_{i,j,k',l'}\Big{)}+2,$

where the additive constant $2$ (arising from the chain rule) can be dropped in step 2, since it only shifts the objective by a constant over $U_{\mathcal{T}}$. The complete DC algorithm for problem (3) can be found in Algorithm 1.

Algorithm 1 DC algorithm for the problem MMOT-DC

Input. Cost tensor $C$, partition $(\mathcal{T}_{m})_{m=1}^{M}$ of $\mathcal{T}$, histograms $\mu_{1},...,\mu_{N}$, hyperparameter $\varepsilon>0$, initialization $P^{(0)}$, tuple of initial dual vectors for the Sinkhorn step $(f_{1}^{(0)},...,f_{N}^{(0)})$.

Output. Tensor $P\in U_{\mathcal{T}}$.

While not converged:

1. Compute $G^{(t)}=\sum\limits_{m=1}^{M}\nabla_{P}H_{m}(P^{(t)})$, the gradient of the concave term.
2. Solve $P^{(t+1)}\in\arg\min_{P\in U_{\mathcal{T}}}\langle C-\varepsilon G^{(t)},P\rangle+\varepsilon H(P),$ using the Sinkhorn algorithm (Algorithm 3), with the tuple of initial dual vectors $(f_{1}^{(0)},...,f_{N}^{(0)})$.

We observed that initialization is crucial to the convergence of the algorithm, which is not surprising for a non-convex problem. To accelerate the algorithm for large $\varepsilon$, we propose to use a warm-start strategy, similar to the one used in the entropic OT problem with a very small regularization parameter (Schmitzer, 2019). Its idea is simple: we consider an increasing finite sequence $(\varepsilon_{n})_{n=0}^{N}$ approaching $\varepsilon$ such that the solution $P_{\varepsilon_{0}}$ of the problem $\text{MMOT-DC}_{\varepsilon_{0}}(X,Y)$ can be estimated quickly and accurately using the initialization $P^{(0)}$. Then we solve each successive problem $\text{MMOT-DC}_{\varepsilon_{n}}(X,Y)$ using the previous solution $P_{\varepsilon_{n-1}}$ as initialization. Finally, the problem $\text{MMOT-DC}_{\varepsilon}(X,Y)$ is solved using the solution $P_{\varepsilon_{N}}$ as initialization.
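For illustration, one DC iteration of Algorithm 1 can be sketched as follows, reusing the sinkhorn_mmot sketch above (the partition is given as tuples of tensor axes; the additive constants in the gradient are dropped since they do not change the argmin over $U_{\mathcal{T}}$, and, on $U_{\mathcal{T}}$, $\text{KL}(P|\mu_{\mathcal{T}})$ and $H(P)$ differ only by a constant, so the same Sinkhorn routine solves step 2; a warm start as in Algorithm 2 simply chains calls to this routine with increasing $\varepsilon$):

```python
import numpy as np

def dc_iteration(C, mus, partition, P, eps):
    """One step of Algorithm 1; partition is a list of axis tuples,
    e.g. [(0, 1), (2, 3)] for M = 2 and N = 4."""
    N = C.ndim
    G = np.zeros_like(C)
    for axes_m in partition:
        other = tuple(k for k in range(N) if k not in axes_m)
        marg = P.sum(axis=other, keepdims=True)      # T_m-marginal P_#m of P
        G = G + np.log(np.clip(marg, 1e-300, None))  # grad of H_m, up to a constant
    # convex subproblem: entropic MMOT with the linearized cost C - eps * G
    return sinkhorn_mmot(C - eps * G, mus, eps)
```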
Algorithm 2 DC algorithm with warm start for the problem MMOT-DC

Input. Cost tensor $C$, partition $(\mathcal{T}_{m})_{m=1}^{M}$ of $\mathcal{T}$, histograms $\mu_{1},...,\mu_{N}$, hyperparameter $\varepsilon>0$, initialization $P^{(0)}$, initial $\varepsilon_{0}>0$, step size $s>1$, tuple of initial dual vectors $(f_{1}^{(0)},...,f_{N}^{(0)})$.

Output. Tensor $P\in U_{\mathcal{T}}$.

1. While $\varepsilon_{0}<\varepsilon$:
   (a) Using Algorithm 1, solve the problem $\text{MMOT-DC}_{\varepsilon_{0}}(X,Y)$ with initialization $P^{(0)}$ and $(f_{1}^{(0)},...,f_{N}^{(0)})$ to find the solution $P_{\varepsilon_{0}}$ and its associated tuple of dual vectors $(f_{1}^{(\varepsilon_{0})},...,f_{N}^{(\varepsilon_{0})})$.
   (b) Set $P^{(0)}=P_{\varepsilon_{0}}$ and $f_{i}^{(0)}=f_{i}^{(\varepsilon_{0})}$, for $i=1,...,N$.
   (c) Increase the regularization: $\varepsilon_{0}:=s\varepsilon_{0}$.
2. Using Algorithm 1, solve the problem $\text{MMOT-DC}_{\varepsilon}(X,Y)$ using the initialization $P^{(0)}$ and $(f_{1}^{(0)},...,f_{N}^{(0)})$.

## 5 Experimental evaluation

In this section, we illustrate the use of MMOT-DC on simulated data. Rather than performing experiments in full generality, we choose the setting where $N=4$ and $M=2$ with $\mathcal{T}_{1}=(1,2)$ and $\mathcal{T}_{2}=(3,4)$, so that we can compare MMOT-DC with other popular solvers of COOT and the GW distance. Given two matrices $X$ and $Y$, we always consider the $4$-D cost tensor $C$, where $C_{i,j,k,l}=|X_{i,k}-Y_{j,l}|^{2}$. We are not interested in the $4$-D minimiser of MMOT-DC itself, but only in its two $\mathcal{T}_{1},\mathcal{T}_{2}$-marginal matrices.

#### Solving COOT on a toy example.

We generate a random matrix $X\in\mathbb{R}^{30\times 25}$, whose entries are drawn independently from the uniform distribution on the interval $[0,1)$. We equip the rows and columns of $X$ with two discrete uniform distributions on $[30]$ and $[25]$. We fix two permutation matrices $P\in\mathbb{R}^{30\times 30}$ (called the sample permutation) and $Q\in\mathbb{R}^{25\times 25}$ (called the feature permutation), then calculate $Y=PXQ$. We also equip the rows and columns of $Y$ with two discrete uniform distributions on $[30]$ and $[25]$. It is not difficult to see that $\text{COOT}(X,Y)=0$, because the pair $(P,Q)$ (suitably normalized) is a solution. As COOT is a special case of F-MMOT, we see that $\text{MMOT-DC}_{\varepsilon}\big{(}(\mathcal{T}_{m})_{m=1}^{2},\mu\big{)}=0$, for every $\varepsilon>0$, by Proposition 3.1. In this experiment, we check whether marginalizing the minimizer of MMOT-DC allows us to recover the permutation matrices $P$ and $Q$.

As can be seen from Figure 1, MMOT-DC can recover the permutation positions for various values of $\varepsilon$. On the other hand, it cannot recover the true sparse permutation matrices, because the Sinkhorn algorithm applied to the MMOT problem implicitly results in a dense tensor, thus having dense marginal matrices. For this reason, the loss remains very close to zero, but never reaches it exactly.

Figure 1: Couplings generated by COOT and MMOT-DC on the matrix recovery task.

We also plot, with some abuse of notation, the histograms of the error between the $(1,3),(1,4),(2,3),(2,4)$-marginal matrices of MMOT-DC and their independent counterparts from F-MMOT. In this example, in theory, as the F-MMOT optimal tensor $P$ can be factorized as $P=P_{12}\otimes P_{34}$, it is immediate to see that $P_{13}=P_{14}=P_{23}=P_{24}\in\mathbb{R}^{30\times 25}$ are uniform matrices whose entries are $\frac{1}{750}$.
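A minimal sketch of this toy setup follows; only the data generation and the marginalization mirror the text, while the mmot_dc driver (standing in for Algorithms 1-2) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 25))
P_perm = np.eye(30)[rng.permutation(30)]   # sample permutation
Q_perm = np.eye(25)[rng.permutation(25)]   # feature permutation
Y = P_perm @ X @ Q_perm

# 4-D cost C[i, j, k, l] = |X[i, k] - Y[j, l]|^2, of shape (30, 30, 25, 25)
C = (X[:, None, :, None] - Y[None, :, None, :]) ** 2
mus = [np.full(a, 1.0 / a) for a in C.shape]   # uniform marginals

# Hypothetical driver implementing Algorithms 1-2 with T1 = (1,2), T2 = (3,4):
# P = mmot_dc(C, mus, partition=[(0, 1), (2, 3)], eps=1.0)
# P12 = P.sum(axis=(2, 3))   # recovered sample coupling (30 x 30)
# P34 = P.sum(axis=(0, 1))   # recovered feature coupling (25 x 25)
```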
Figure 2: Histograms of the differences between the true independent marginal matrices and their approximations. We see that the marginal matrices obtained by Algorithm 1 approximate the theoretical uniform matrices well.

#### Quality of the MMOT-DC solutions.

Now, we consider the situation where the true matching between two matrices is not known in advance, and investigate the quality of the solutions returned by MMOT-DC for solving the COOT and GW problems. This means that we will look at the COOT loss $\langle C,P\otimes Q\rangle$ (the smaller the loss, the better) when using both the exact COOT and GW solvers and our relaxation. We generate two random matrices $X\in\mathbb{R}^{20\times 3}$ and $Y\in\mathbb{R}^{30\times 2}$, whose entries are drawn independently from the uniform distribution on the interval $[0,1)$. Then we calculate the two corresponding squared Euclidean distance matrices, of sizes $20$ and $30$. Their rows and columns are equipped with discrete uniform distributions. In this case, the COOT loss coincides with the GW distance (Redko et al., 2020). We compare the following solvers:

1. The Frank-Wolfe algorithm (Frank and Wolfe, 1956) to solve GW (GW-FW).
2. The projected gradient algorithm to solve the entropic GW distance (Peyré et al., 2016) (EGW-PG). We choose the regularization parameter from $\{0.0008,0.0016,0.0032,0.0064,0.0128,0.0256\}$ and pick the one which corresponds to the smallest COOT loss.
3. The Block Coordinate Descent algorithm to approximate COOT and its entropic approximation (Redko et al., 2020) (GW-BCD and EGW-BCD, respectively), where two additional KL divergences corresponding to the two couplings are introduced. Both regularization parameters are tuned over $\{0,0.0005,0.001,0.005,0.01,0.05,0.1,0.5,1\}$, where $0$ means that there is no regularization term for the corresponding coupling, and we pick the pair whose COOT loss is the smallest.
4. Algorithm 1 to solve MMOT-DC. We tune $\varepsilon\in\{1,1.4,1.8,2.2,2.6\}$ and pick the value which corresponds to the smallest COOT loss.

For GW-FW and EGW-PG, we use the implementations from the library PythonOT (Flamary et al., 2021). Given two random matrices, we record the COOT loss corresponding to the solution generated by each of these methods. We repeat this process $70$ times and compare the overall performance of the methods. Table 1 reports the average value and standard deviation of the loss, and Figure 3 compares the loss values of the different algorithms. The performance is quite similar across methods, with a slight advantage for EGW-PG. This is in itself a very interesting result that, to the best of our knowledge, has never been noted: the entropic version of GW can provide better solutions than solving the exact problem, possibly because of the "convexification" of the problem due to the entropic regularization. Our approach is also, interestingly, better than the exact GW-FW, which illustrates that the relaxation might help in finding better solutions despite the non-convexity of the problem.

| GW-FW | EGW-PG | GW-BCD | EGW-BCD | MMOT-DC |
|---|---|---|---|---|
| 0.083 ($\pm$ 0.035) | 0.079 ($\pm$ 0.035) | 0.083 ($\pm$ 0.035) | 0.080 ($\pm$ 0.035) | 0.082 ($\pm$ 0.036) |

Table 1: Average and standard deviation of the COOT loss of the solvers. MMOT-DC is competitive with the other solvers, except for EGW-PG and EGW-BCD.

Figure 3: Scatter plots of MMOT-DC versus the other solvers.
In all four plots, the points tend to concentrate around the line $y=x$, which indicates the comparable performance of MMOT-DC. On the other hand, the top-right plot shows the clear superiority of EGW-PG.

## 6 Discussion and conclusion

In this paper, we present a novel relaxation of the factored MMOT problem, called MMOT-DC. More precisely, we replace the hard factorization constraint by a smooth regularization term. The resulting problem not only enjoys an interpolation property between MMOT and factored MMOT, but is also a DC problem, which can be solved easily by the DC algorithm. We illustrate the use of MMOT-DC via some simulated experiments and show that it is competitive with the existing popular solvers of COOT and the GW distance.

One limitation of the current DC algorithm is that it is not scalable, because it requires storing a full-size tensor in the gradient-step computation. Thus, future work may focus on more efficiently designed algorithms, in terms of both time and memory footprint. Moreover, incorporating additional structure on the cost tensor may also be computationally and practically beneficial. From a theoretical viewpoint, it is also interesting to study the extension of MMOT-DC to the continuous setting, which can potentially allow us to further understand the connection between the GW distance and COOT.

#### Acknowledgements.

The authors thank Thibault Séjourné for the fruitful discussion on the connection between the GW distance and COOT. The authors thank Titouan Vayer for pointing out the error in the definition of the lower bound of the GW distance.

## References

* Agueh and Carlier [2011] Martial Agueh and Guillaume Carlier. Barycenters in the Wasserstein Space. _SIAM Journal on Mathematical Analysis_, 43:904–924, 2011.
* Benamou et al. [2014] Jean-David Benamou, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyré. Iterative Bregman Projections for Regularized Transportation Problems. _SIAM Journal on Scientific Computing_, 37:1111–1138, 2014.
* Birkhoff [1946] George David Birkhoff. Tres observaciones sobre el algebra lineal. _Universidad Nacional de Tucuman, Revista_, 5:147–150, 1946.
* Borgwardt et al. [2006] Karsten M. Borgwardt, Arthur Gretton, Malte J. Rasch, Hans-Peter Kriegel, Bernhard Schölkopf, and Alex J. Smola. Integrating structured biological data by Kernel Maximum Mean Discrepancy. _Bioinformatics_, 22(14):49–57, 7 2006.
* Cao et al. [2019] Jiezhang Cao, Langyuan Mo, Yifan Zhang, Kui Jia, Chunhua Shen, and Mingkui Tan. Multi-marginal Wasserstein GAN. _Advances in Neural Information Processing Systems_, pages 1774–1784, 2019.
* Cohen and Rothblum [1993] Joel E. Cohen and Uriel G. Rothblum. Nonnegative ranks, decompositions, and factorizations of nonnegative matrices. _Linear Algebra and its Applications_, 190:149–168, 1993.
* Cohen et al. [2021] Samuel Cohen, K. S. Sesh Kumar, and Marc Peter Deisenroth. Sliced multi-marginal optimal transport. _ICML_, 2021.
* Cuturi [2013] Marco Cuturi. Sinkhorn Distances: Lightspeed Computation of Optimal Transport. In _NeurIPS_, pages 2292–2300, 2013.
* Feydy et al. [2019] Jean Feydy, Thibault Séjourné, François-Xavier Vialard, Shun-ichi Amari, Alain Trouvé, and Gabriel Peyré. Interpolating between Optimal Transport and MMD using Sinkhorn Divergences. _Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2019.
* Flamary et al. [2021] Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z.
Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, and Titouan Vayer. Pot: Python optimal transport. _Journal of Machine Learning Research_ , 22(78):1–8, 2021. URL http://jmlr.org/papers/v22/20-451.html. * Forrow et al. [2019] Aden Forrow, Jan-Christian Hütter, Mor Nitzan, Philippe Rigollet, Geoffrey Schiebinger, and Jonathan Weed. Statistical Optimal Transport via Factored Couplings. _The 22nd International Conference on Artificial Intelligence and Statistics_ , pages 2454–2465, 2019. * Frank and Wolfe [1956] Marguerite Frank and Philip Wolfe. An algorithm for quadratic programming. _Naval Research Logistics Quarterly_ , 3(1-2):95–110, 1956. * Gangbo and Swiech [1998] Wilfrid Gangbo and Andrzej Swiech. Optimal maps for the multidimensional Monge-Kantorovich problem. _Communications on Pure and Applied Mathematics_ , 51:23–45, 1998. * Genevay et al. [2019] Aude Genevay, Lénaic Chizat, Francis Bach, Marco Cuturi, and Gabriel Peyré. Sample complexity of sinkhorn divergences. _Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics_ , 89:1574–1583, 2019. * Haasler et al. [2020] Isabel Haasler, Rahul Singh, Qinsheng Zhang, Johan Karlsson, and Yongxin Chen. Multi-marginal optimal transport and probabilistic graphical models. _arXiv preprint arXiv:2006.14113_ , 2020. * He et al. [2019] Zhenliang He, Wangmeng Zuo, Meina Kan, Shiguang Shan, and Xilin Chen. Attgan: Facial attribute editing by only changing what you want. _IEEE Trans. Image Process._ , 28(11):5464–5478, 2019. * Hui et al. [2018] Le Hui, Xiang Li, Jiaxin Chen, Hongliang He, and Jian Yang. Unsupervised multi-domain image translation with domain-specific encoders/decoders. In _24th International Conference on Pattern Recognition, ICPR 2018, Beijing, China, August 20-24, 2018_ , pages 2044–2049. IEEE Computer Society, 2018. * Kellerer [1984] Hans G Kellerer. Duality theorems for marginal problems. _Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete_ , 67:399–432, 1984. * Lin et al. [2021] Chi-Heng Lin, Mehdi Azabou, and Eva Dyer. Making transport more robust and interpretable by moving data through a small number of anchor points. _Proceedings of the 38th International Conference on Machine Learning_ , 139:6631–6641, 2021. * Mi and Bento [2020] Liang Mi and José Bento. Multi-marginal optimal transport defines a generalized metric. _CoRR_ , abs/2001.11114, 2020. * Mémoli [2011] Facundo Mémoli. Gromov-Wasserstein distances and the metric approach to object matching. _Foundations of Computational Mathematics_ , pages 1–71, 2011. * Peyré et al. [2016] Gabriel Peyré, Marco Cuturi, and Justin Solomon. Gromov-Wasserstein Averaging of Kernel and Distance Matrices. _International Conference on Machine Learning_ , 48, 2016. * Pham and Thi [1997] Tao Dinh Pham and Hoai An Le Thi. Convex analysis approach to D.C. programming: Theory, Algorithm and Applications. _Acta Mathematica Vietnamica_ , 22:289–355, 1997. * Ramdas et al. [2017] Aaditya Ramdas, Nicolás García Trillos, and Marco Cuturi. On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests. _Entropy_ , 19, 2017. * Redko et al. [2020] Ievgen Redko, Titouan Vayer, Rémi Flamary, and Nicolas Courty. CO-Optimal Transport. 
_Advances in Neural Information Processing Systems_, 2020.
* Scetbon et al. [2021] Meyer Scetbon, Marco Cuturi, and Gabriel Peyré. Low-Rank Sinkhorn Factorization. _Proceedings of the 38th International Conference on Machine Learning_, 139:9344–9354, 2021.
* Schmitzer [2019] Bernhard Schmitzer. Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems. _SIAM Journal on Scientific Computing_, 41:1443–1481, 2019.
* Séjourné et al. [2020] Thibault Séjourné, François-Xavier Vialard, and Gabriel Peyré. The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation. _arXiv preprint arXiv:2009.04266_, 2020.
* Tao and Souad [1986] Pham Dinh Tao and El Bernoussi Souad. Algorithms for Solving a Class of Nonconvex Optimization Problems. Methods of Subgradients. _North-Holland Mathematics Studies_, 129:249–271, 1986.
* Vo [2015] Thanh Xuan Vo. _Learning with sparsity and uncertainty by Difference of Convex functions optimization_. PhD thesis, Université de Lorraine, 2015.

## Appendix A Appendix

#### Derivation of the Sinkhorn algorithm in MMOT.

The entropic dual of the primal problem 2 reads

$\sup_{f_{n}\in\mathbb{R}^{a_{n}}}\sum_{n=1}^{N}\langle f_{n},\mu_{n}\rangle-\varepsilon\sum_{i_{1},...,i_{N}}\exp\Big(\frac{\sum_{n}(f_{n})_{i_{n}}-C_{i_{1},...,i_{N}}}{\varepsilon}\Big)+\varepsilon.$ (4)

For each $n=1,...,N$ and $i_{n}\in[a_{n}]$, the first-order optimality condition reads

$0=(\mu_{n})_{i_{n}}-\exp\big(\frac{(f_{n})_{i_{n}}}{\varepsilon}\big)\sum_{i_{-n}}\exp\Big(\frac{\sum_{j\neq n}(f_{j})_{i_{j}}-C_{i_{1},...,i_{N}}}{\varepsilon}\Big),$

where, with some abuse of notation, we write $i_{-n}=(i_{1},...,i_{n-1},i_{n+1},...,i_{N})$. Equivalently,

$(f_{n})_{i_{n}}=\varepsilon\log(\mu_{n})_{i_{n}}-\varepsilon\log\sum_{i_{-n}}\exp\Big(\frac{\sum_{j\neq n}(f_{j})_{i_{j}}-C_{i_{1},...,i_{N}}}{\varepsilon}\Big),$

or, in a more compact form,

$f_{n}=\varepsilon\log\mu_{n}-\varepsilon\log\sum_{i_{-n}}\exp\Big(\frac{\sum_{j\neq n}(f_{j})_{i_{j}}-C_{\cdot,i_{-n}}}{\varepsilon}\Big).$

Using the primal-dual relation, we obtain the minimiser of the primal problem 2 as

$P_{i_{1},...,i_{N}}=\exp\Big(\frac{\sum_{n}(f_{n})_{i_{n}}-C_{i_{1},...,i_{N}}}{\varepsilon}\Big),$

for $i_{n}\in[a_{n}]$, with $n=1,...,N$. As in entropic OT, the Sinkhorn algorithm 3 is also usually implemented in the log-domain to avoid numerical instability; a minimal implementation sketch follows the algorithm below.

Algorithm 3 Sinkhorn algorithm for the entropic MMOT problem 2, from Benamou et al. [2014]

Input. Histograms $\mu_{1},...,\mu_{N}$, hyperparameter $\varepsilon>0$, cost tensor $C$ and tuple of initial dual vectors $(f^{(0)}_{1},...,f^{(0)}_{N})$.

Output. Optimal transport plan $P$ and tuple of dual vectors $(f_{1},...,f_{N})$ (optional).

1. While not converged: for $n=1,...,N$,

$f^{(t+1)}_{n}=\varepsilon\log\mu_{n}-\varepsilon\log\sum_{i_{-n}}\Big[\exp\Big(\frac{\sum_{j<n}(f^{(t+1)}_{j})_{i_{j}}+\sum_{j>n}(f^{(t)}_{j})_{i_{j}}-C_{\cdot,i_{-n}}}{\varepsilon}\Big)\Big].$

2. Return the tensor $P$, where for $i_{n}\in[a_{n}]$, with $n=1,...,N$,

$P_{i_{1},...,i_{N}}=\exp\Big(\frac{\sum_{n}(f_{n})_{i_{n}}-C_{i_{1},...,i_{N}}}{\varepsilon}\Big).$ (5)
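Below is a minimal log-domain NumPy/SciPy sketch of Algorithm 3; the fixed iteration budget and the function name are our own choices rather than part of the algorithm's specification.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_mmot(C, mus, eps, n_iter=500):
    """Log-domain Sinkhorn for the entropic MMOT problem (sketch of Algorithm 3).

    C   : N-dimensional cost tensor of shape (a_1, ..., a_N)
    mus : list of the N marginal histograms; mus[n] has shape (a_n,)
    eps : entropic regularization parameter epsilon > 0
    """
    N = C.ndim
    fs = [np.zeros(a) for a in C.shape]  # dual vectors f_n, initialized at zero
    for _ in range(n_iter):
        for n in range(N):  # Gauss-Seidel updates: f_j^{(t+1)} is used for j < n
            S = -C
            for j in range(N):
                if j != n:
                    shape = [1] * N
                    shape[j] = -1
                    S = S + fs[j].reshape(shape)  # broadcast (f_j)_{i_j}
            other_axes = tuple(k for k in range(N) if k != n)
            fs[n] = eps * np.log(mus[n]) - eps * logsumexp(S / eps, axis=other_axes)
    # primal-dual relation (5): recover the transport tensor P
    S = -C
    for j in range(N):
        shape = [1] * N
        shape[j] = -1
        S = S + fs[j].reshape(shape)
    return np.exp(S / eps), fs
```

As noted in the main text, this sketch still materializes the full tensor $S$ in each update, which is exactly what limits scalability.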
#### F-MMOT of two components (i.e. $M=2$) as a variation of low nonnegative rank OT.

For the sake of notational ease, we only consider the simplest case, but the same argument holds in the general case. Consider $\mathcal{T}_{1}=(1,2)$ and $\mathcal{T}_{2}=(3,4)$; then the factored MMOT problem reads

$\text{F-MMOT}\big((\mathcal{T}_{m})_{m=1}^{2},\mu\big)=\min_{P\in\mathcal{F}_{2}}\langle C,P\rangle.$ (6)

First, we define three reshaping operations.

* Vectorization: concatenates the rows of a matrix into a vector, $\text{vec}:\mathbb{R}^{m\times n}\to\mathbb{R}^{mn}$, where each element $A_{i,j}$ of the matrix $A\in\mathbb{R}^{m\times n}$ is mapped to the unique element $b_{(i-1)n+j}$ of the vector $b\in\mathbb{R}^{mn}$, with $A_{i,j}=b_{(i-1)n+j}$, for $i=1,...,m$ and $j=1,...,n$. Conversely, each element $b_{k}$, for $k=1,...,mn$, is mapped to the unique element $A_{i,j}$ with $i=\lceil k/n\rceil$ and $j=k-(i-1)n$; writing $k=qn+r$ with $0\leq r<n$, this is $A_{q+1,r}$ when $r>0$ and $A_{q,n}$ when $r=0$.

* Matrization: transforms a $4$D tensor into a matrix by vectorizing the first two and the last two dimensions of the tensor, $\text{mat}:\mathbb{R}^{n_{1}\times n_{2}\times n_{3}\times n_{4}}\to\mathbb{R}^{(n_{1}n_{2})\times(n_{3}n_{4})}$, where, similarly to the vectorization, each element $P_{i,j,k,l}$ of the tensor $P\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}\times n_{4}}$ is mapped to the unique element $A_{(i-1)n_{2}+j,(k-1)n_{4}+l}$ of the matrix $A\in\mathbb{R}^{(n_{1}n_{2})\times(n_{3}n_{4})}$, with $P_{i,j,k,l}=A_{(i-1)n_{2}+j,(k-1)n_{4}+l}$.

* Concatenation: stacks two matrices with equal numbers of columns vertically,

$\begin{split}\text{con}_{v}:&\mathbb{R}^{m\times d}\times\mathbb{R}^{n\times d}\to\mathbb{R}^{(m+n)\times d}\\ &\big((u_{1},...,u_{m}),(v_{1},...,v_{n})\big)\to(u_{1},...,u_{m},v_{1},...,v_{n})^{T},\end{split}$

or stacks two matrices with equal numbers of rows horizontally,

$\begin{split}\text{con}_{h}:&\mathbb{R}^{n\times p}\times\mathbb{R}^{n\times q}\to\mathbb{R}^{n\times(p+q)}\\ &\big((u_{1},...,u_{p}),(v_{1},...,v_{q})\big)\to(u_{1},...,u_{p},v_{1},...,v_{q}).\end{split}$

###### Lemma A.1

For any $4$D tensor $\pi\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}\times n_{4}}$, denote by $P$ its matrization. We have

$\text{vec}\Big(\sum_{k,l}\pi_{\cdot,\cdot,k,l}\Big)=\sum_{n=1}^{n_{3}n_{4}}P_{\cdot,n}=P1_{n_{3}n_{4}},$

where $1_{n}$ is the vector of ones in $\mathbb{R}^{n}$.

#### Proof of lemma A.1.

For $(i,j)\in[n_{1}]\times[n_{2}]$, we have

$\text{vec}\Big(\sum_{k,l}\pi_{\cdot,\cdot,k,l}\Big)_{(i-1)n_{2}+j}=\sum_{k,l}\pi_{i,j,k,l}=\sum_{k,l}P_{(i-1)n_{2}+j,(k-1)n_{4}+l}=\sum_{n=1}^{n_{3}n_{4}}P_{(i-1)n_{2}+j,n}.$

Now, let $(e_{i})_{i=1}^{n_{1}n_{2}}$ be the standard basis vectors of $\mathbb{R}^{(n_{1}n_{2})}$, i.e. $(e_{i})_{k}=1_{\{i=k\}}$. For each $\pi\in U_{\mathcal{T}}$, denote by $P$ its matrix form; then by Lemma A.1, we have, for $i\in[n_{1}]$,

$(\mu_{1})_{i}=\sum_{j}\sum_{k,l}\pi_{i,j,k,l}=\sum_{j=1}^{n_{2}}\sum_{n=1}^{n_{3}n_{4}}P_{(i-1)n_{2}+j,n},$

which can be written in matrix form as $A_{1}^{T}P1_{n_{3}n_{4}}=\mu_{1}$, where the matrix $A_{1}=\text{con}_{h}(v_{1},...,v_{n_{1}})\in\mathbb{R}^{(n_{1}n_{2})\times n_{1}}$, with $v_{i}=\sum_{j=(i-1)n_{2}+1}^{in_{2}}e_{j}\in\mathbb{R}^{(n_{1}n_{2})}$, for $i\in[n_{1}]$. Similarly, $A_{2}P1_{n_{3}n_{4}}=\mu_{2}$, where the matrix $A_{2}=\text{con}_{h}(I_{n_{2}},...,I_{n_{2}})\in\mathbb{R}^{n_{2}\times(n_{1}n_{2})}$ and $I_{n}\in\mathbb{R}^{n\times n}$ is the identity matrix. These identities are verified numerically in the sketch below.
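As a sanity check (our own illustration, with arbitrary small dimensions), the following snippet verifies Lemma A.1 and the marginal identities $A_{1}^{T}P1=\mu_{1}$ and $A_{2}P1=\mu_{2}$; it relies on NumPy's row-major reshape matching the vec/mat conventions above.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3, n4 = 2, 3, 4, 5
pi = rng.random((n1, n2, n3, n4))
pi /= pi.sum()                      # a random 4D tensor with total mass 1

# mat(pi): vectorize the first two and the last two axes (row-major reshape)
P = pi.reshape(n1 * n2, n3 * n4)

# Lemma A.1: vec(sum_{k,l} pi[:, :, k, l]) == P @ 1
assert np.allclose(pi.sum(axis=(2, 3)).reshape(-1), P @ np.ones(n3 * n4))

# A_1 = con_h(v_1, ..., v_{n1}): column i sums the rows (i-1)n2+1, ..., i*n2
A1 = np.kron(np.eye(n1), np.ones((n2, 1)))
# A_2 = con_h(I_{n2}, ..., I_{n2}): n1 horizontally stacked identity blocks
A2 = np.hstack([np.eye(n2)] * n1)

mu1 = pi.sum(axis=(1, 2, 3))        # first marginal of pi
mu2 = pi.sum(axis=(0, 2, 3))        # second marginal of pi
assert np.allclose(A1.T @ (P @ np.ones(n3 * n4)), mu1)
assert np.allclose(A2 @ (P @ np.ones(n3 * n4)), mu2)
```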
Both conditions can be compactly written as

$A_{12}^{T}P1_{n_{3}n_{4}}=\mu_{12},$

where the matrix $A_{12}=\text{con}_{h}(A_{1},A_{2}^{T})\in\mathbb{R}^{(n_{1}n_{2})\times(n_{1}+n_{2})}$ and $\mu_{12}=\text{con}_{v}(\mu_{1},\mu_{2})\in\mathbb{R}^{(n_{1}+n_{2})}$. Note that $\mu_{12}$ is not a probability vector, because its mass is $2$. The matrix $A_{12}$ has exactly $2n_{1}n_{2}$ ones, and the rest of its entries are zeros. Similarly, for $A_{34}$ and $\mu_{34}$ defined in the same way as $A_{12}$ and $\mu_{12}$, respectively, we establish the equality $A_{34}^{T}P^{T}1_{n_{1}n_{2}}=\mu_{34}$. As a side remark, both matrices $A_{12}^{T}$ and $A_{34}^{T}$ are totally unimodular, i.e. every square submatrix has determinant $-1,0$, or $1$.

Figure 4: An example of the matrix $A_{12}$ when $n_{1}=2$ and $n_{2}=3$.

To handle the factorization constraint, we first recall the following concept.

###### Definition A.1

Given a nonnegative matrix $A$, we define its nonnegative rank by

$\text{rank}_{+}(A):=\min\big\{r\geq 1:A=\sum_{i=1}^{r}M_{i},\text{ where }\text{rank}(M_{i})=1,M_{i}\geq 0,\forall i\big\}.$

By convention, the zero matrix has nonnegative rank zero.

So, the constraint $P=P_{1}\otimes P_{2}$ is equivalent to $\text{mat}(P)=\text{vec}(P_{1})\text{vec}(P_{2})^{T}$. By Lemma 2.1 in [Cohen and Rothblum, 1993], $\text{rank}_{+}(A)=1$ if and only if there exist two nonnegative vectors $u,v$ such that $A=uv^{T}$. Thus, the factorization constraint can be rewritten as $\text{rank}_{+}\big(\text{mat}(P)\big)=1$; a small numerical check of this equivalence is given below. Denote $L=\text{mat}(C)$, $M=n_{1}n_{2}$ and $N=n_{3}n_{4}$. Problem 6 can then be rewritten as

$\begin{split}\min_{Q\in\mathbb{R}^{M\times N}_{\geq 0}}&\langle L,Q\rangle\\ \text{ such that }&A_{12}^{T}Q1_{N}=\mu_{12}\\ &A_{34}^{T}Q^{T}1_{M}=\mu_{34}\\ &\text{rank}_{+}(Q)=1.\end{split}$
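As a quick illustration of this equivalence (our own check, with arbitrary dimensions), the following snippet confirms that the matrization of a product tensor $P_{1}\otimes P_{2}$ is the rank-one matrix $\text{vec}(P_{1})\text{vec}(P_{2})^{T}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3, n4 = 2, 3, 4, 5
P1 = rng.random((n1, n2)); P1 /= P1.sum()
P2 = rng.random((n3, n4)); P2 /= P2.sum()

# P = P1 (x) P2 as a 4D tensor: P[i, j, k, l] = P1[i, j] * P2[k, l]
P = np.einsum("ij,kl->ijkl", P1, P2)

# mat(P) equals the rank-one matrix vec(P1) vec(P2)^T, so rank_+(mat(P)) = 1
M = P.reshape(n1 * n2, n3 * n4)
assert np.allclose(M, np.outer(P1.reshape(-1), P2.reshape(-1)))
assert np.linalg.matrix_rank(M) == 1
```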
#### Proof of proposition 3.1.

The inequality $\text{MMOT}(\mu)\leq\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$ follows from the nonnegativity of the KL divergence. On the other hand,

$\text{F-MMOT}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)=\inf_{P\in\mathcal{F}_{M}}\langle C,P\rangle+\varepsilon\text{KL}(P|P_{\#}),$

because $\text{KL}(P|P_{\#})=0$ for every $P\in\mathcal{F}_{M}$. As $\mathcal{F}_{M}\subset U_{\mathcal{T}}$, we have $\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)\leq\text{F-MMOT}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$. Now, if $\text{F-MMOT}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)=0$, then $\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)=0$. Conversely, if $\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)=0$ for $\varepsilon>0$, then there exists $P^{*}\in U_{\mathcal{T}}$ such that $\langle C,P^{*}\rangle=0$ and $P^{*}=P^{*}_{\#}$. Thus $\langle C,P^{*}_{\#}\rangle=0$, which means $\text{F-MMOT}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)=0$.

#### Proof of proposition 3.2.

The function $\varepsilon\to\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$ is increasing on $\mathbb{R}_{\geq 0}$ and bounded, and thus admits a finite limit $L\leq\text{F-MMOT}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$ as $\varepsilon\to\infty$, and a finite limit $l\geq\text{MMOT}(\mu)$ as $\varepsilon\to 0$. Let $P_{\varepsilon}$ be a solution of the problem $\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$. As $U_{\mathcal{T}}$ is compact, when either $\varepsilon\to 0$ or $\varepsilon\to\infty$, one can extract a converging subsequence (after reindexing) $(P_{\varepsilon_{k}})_{k}\to\widetilde{P}\in U_{\mathcal{T}}$. Thus, the convergence of the marginal distributions is also guaranteed, i.e. $(P_{\varepsilon_{k}})_{\#m}\to\widetilde{P}_{\#m}\in U_{\mathcal{T}_{m}}$, for every $m=1,...,M$, which implies that $P_{\varepsilon_{k}}-(P_{\varepsilon_{k}})_{\#\mathcal{T}}\to\widetilde{P}-\widetilde{P}_{\#\mathcal{T}}$.

When $\varepsilon\to 0$, let $P^{*}$ be a solution of the problem $\text{MMOT}(\mu)$. Then,

$\langle C,P^{*}\rangle\leq\langle C,P_{\varepsilon}\rangle+\varepsilon\text{KL}(P_{\varepsilon}|(P_{\varepsilon})_{\#\mathcal{T}})\leq\langle C,P^{*}\rangle+\varepsilon\text{KL}(P^{*}|P^{*}_{\#\mathcal{T}}).$

By the sandwich theorem, when $\varepsilon\to 0$, we have $\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)\to\langle C,P^{*}\rangle=\text{MMOT}(\mu)$. Furthermore, as

$0\leq\langle C,P_{\varepsilon_{k}}\rangle-\langle C,P^{*}\rangle\leq\varepsilon_{k}\text{KL}(P^{*}|P^{*}_{\#\mathcal{T}}),$

when $\varepsilon_{k}\to 0$, it follows that $\langle C,\widetilde{P}\rangle=\langle C,P^{*}\rangle$. So $\widetilde{P}$ is a solution of the problem $\text{MMOT}(\mu)$. We conclude that any cluster point of the sequence of minimisers of $\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$ when $\varepsilon\to 0$ is a minimiser of $\text{MMOT}(\mu)$. As a byproduct, since

$\text{KL}(P^{*}|P^{*}_{\#\mathcal{T}})-\text{KL}(P_{\varepsilon_{k}}|(P_{\varepsilon_{k}})_{\#\mathcal{T}})\geq\frac{\langle C,P_{\varepsilon_{k}}\rangle-\langle C,P^{*}\rangle}{\varepsilon_{k}}\geq 0,$

we deduce that $\text{KL}(\widetilde{P}|\widetilde{P}_{\#\mathcal{T}})\leq\text{KL}(P^{*}|P^{*}_{\#\mathcal{T}})$ (so the cluster point $\widetilde{P}$ has minimal mutual information).

On the other hand, when $\varepsilon\to\infty$, for the product coupling $\mu=\mu_{1}\otimes...\otimes\mu_{N}$, one has

$\langle C,\mu\rangle+\varepsilon\times 0\geq\langle C,P_{\varepsilon}\rangle+\varepsilon\text{KL}(P_{\varepsilon}|(P_{\varepsilon})_{\#\mathcal{T}})\geq\varepsilon\text{KL}(P_{\varepsilon}|(P_{\varepsilon})_{\#\mathcal{T}}).$

Thus,

$0\leq\text{KL}(P_{\varepsilon}|(P_{\varepsilon})_{\#\mathcal{T}})\leq\frac{1}{\varepsilon}\langle C,\mu\rangle\to 0,\text{ when }\varepsilon\to\infty,$

which means $\text{KL}(P_{\varepsilon}|(P_{\varepsilon})_{\#\mathcal{T}})\to 0$ when $\varepsilon\to\infty$. In particular, when $\varepsilon_{k}\to\infty$, we have $\text{KL}(P_{\varepsilon_{k}}|(P_{\varepsilon_{k}})_{\#\mathcal{T}})\to 0$. Thus, $\text{KL}(\widetilde{P}|\widetilde{P}_{\#\mathcal{T}})=0$, which implies $\widetilde{P}=\widetilde{P}_{\#\mathcal{T}}$. Now, as $\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)\geq\langle C,P_{\varepsilon}\rangle$, taking the limit when $\varepsilon\to\infty$ gives $L\geq\langle C,\widetilde{P}\rangle=\langle C,\widetilde{P}_{\#\mathcal{T}}\rangle\geq\text{F-MMOT}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$. Thus $L=\langle C,\widetilde{P}\rangle=\text{F-MMOT}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$, i.e. $\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)\to\text{F-MMOT}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$ when $\varepsilon\to\infty$.
We also have that any cluster point of the sequence of minimisers of $\text{MMOT-DC}_{\varepsilon}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$ when $\varepsilon\to\infty$ is a minimiser of $\text{F-MMOT}\big((\mathcal{T}_{m})_{m=1}^{M},\mu\big)$.

#### Proof of corollary 3.3.

In this proof, we write $C:=L_{p}(C_{x},C_{y})$ for notational ease. In the setting of the GW distance, we have $N=4$, $M=2$ and $\mathcal{T}_{1}=(1,2)$, $\mathcal{T}_{2}=(3,4)$. Given a solution $P_{\varepsilon}$ of the problem MMOT-DC, let $Q_{i}\in U\big((P_{\varepsilon})_{\#i},(P_{\varepsilon})_{\#i}\big)\subset U_{\mathcal{T}}$, for $i=1,2$. The optimality of $P_{\varepsilon}$ implies that

$\langle C,P_{\varepsilon}\rangle+\varepsilon\Big[H(P_{\varepsilon})-H\big((P_{\varepsilon})_{\#1}\big)-H\big((P_{\varepsilon})_{\#2}\big)\Big]\leq\langle C,Q_{i}\rangle+\varepsilon\Big[H(Q_{i})-2H\big((P_{\varepsilon})_{\#i}\big)\Big].$

Thus,

$2\big(\langle C,P_{\varepsilon}\rangle+\varepsilon H(P_{\varepsilon})\big)\leq\sum_{i=1}^{2}\langle C,Q_{i}\rangle+\varepsilon H(Q_{i}).$

As this holds for every $Q_{i}\in U\big((P_{\varepsilon})_{\#i},(P_{\varepsilon})_{\#i}\big)$, we have

$\begin{split}\frac{1}{2}\sum_{i=1}^{2}\text{OT}_{\varepsilon}\big((P_{\varepsilon})_{\#i},(P_{\varepsilon})_{\#i}\big)&=\frac{1}{2}\sum_{i=1}^{2}\inf_{Q_{i}\in U\big((P_{\varepsilon})_{\#i},(P_{\varepsilon})_{\#i}\big)}\langle C,Q_{i}\rangle+\varepsilon H(Q_{i})\\ &\geq\langle C,P_{\varepsilon}\rangle+\varepsilon H(P_{\varepsilon})\\ &\geq\inf_{P\in U\big((P_{\varepsilon})_{\#1},(P_{\varepsilon})_{\#2}\big)}\langle C,P\rangle+\varepsilon H(P)\\ &=\text{OT}_{\varepsilon}\big((P_{\varepsilon})_{\#1},(P_{\varepsilon})_{\#2}\big).\end{split}$

The second inequality holds because $P_{\varepsilon}\in U\big((P_{\varepsilon})_{\#1},(P_{\varepsilon})_{\#2}\big)$. Thus,

$\text{OT}_{\varepsilon}\big((P_{\varepsilon})_{\#1},(P_{\varepsilon})_{\#2}\big)-\frac{1}{2}\sum_{i=1}^{2}\text{OT}_{\varepsilon}\big((P_{\varepsilon})_{\#i},(P_{\varepsilon})_{\#i}\big)\leq 0.$ (7)

The left-hand side of 7 is nothing but the Sinkhorn divergence $\text{SD}_{\varepsilon}$ between $(P_{\varepsilon})_{\#1}$ and $(P_{\varepsilon})_{\#2}$ [Ramdas et al., 2017]. As a strictly positive definite kernel is necessarily universal in the finite setting (see, for example, Section 2.3 in [Borgwardt et al., 2006]), by Theorem 1 in [Feydy et al., 2019], the inequality 7 is in fact an equality, and we must have $(P_{\varepsilon})_{\#1}=(P_{\varepsilon})_{\#2}$. Now, by Proposition 3.2, when $\varepsilon\to\infty$, a cluster point $P_{\#1}\otimes P_{\#2}$ of the sequence of minimisers $(P_{\varepsilon})_{\varepsilon}$ induces a solution $(P^{(1)},P^{(2)})$ of the COOT. The above result implies that $P_{\#1}=P_{\#2}$, and the equality between GW and COOT then follows.

#### An empirical variation.

Intuitively, for sufficiently large $\varepsilon$, the minimisation of the KL divergence is prioritised over the linear term in the objective function of the MMOT-DC problem, which implies that the optimal tensor $P^{*}$ is "close" to its corresponding tensor product $P^{*}_{\#\mathcal{T}}$. So, instead of calculating the gradient at $P$, one may calculate it at $P_{\#\mathcal{T}}$. In this case, the gradient reads

$\sum_{m=1}^{M}\nabla_{P}H_{m}(P_{\#\mathcal{T}})=\big[\log P_{\#1}+P_{\#1}\big]\oplus...\oplus\big[\log P_{\#M}+P_{\#M}\big],$

where $\oplus$ denotes the tensor sum operator between two arbitrary-size tensors: $(P\oplus Q)_{i,j}:=P_{i}+Q_{j}$, where, with some abuse of notation, $i$ or $j$ can be understood as a tuple of indices. A sketch of how this gradient can be stored implicitly is given below.
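The following NumPy sketch (our own illustration, not code from the paper) stores only the $M$ per-block terms $[\log P_{\#m}+P_{\#m}]$ and evaluates entries of their tensor sum on demand, so the full $N$-dimensional gradient tensor is never materialized:

```python
import numpy as np

def grad_terms(P_marginals):
    """Per-block terms [log P#m + P#m] of the surrogate gradient; only these
    M small tensors are stored, never their full tensor sum."""
    return [np.log(Pm) + Pm for Pm in P_marginals]

def tensor_sum_entry(terms, block_indices):
    """Evaluate (t_1 (+) ... (+) t_M) at a multi-index, given as one index
    tuple per block (each t_m may itself be a multi-dimensional tensor)."""
    return sum(t[idx] for t, idx in zip(terms, block_indices))

# Example with M = 2 blocks of shapes (2, 3) and (4,):
terms = grad_terms([np.full((2, 3), 1.0 / 6), np.full(4, 0.25)])
value = tensor_sum_entry(terms, [(1, 2), (3,)])  # entry at (i1, i2, i3) = (1, 2, 3)
```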
Thus, we avoid storing the $N$-D gradient tensor (as in Algorithm 1) and only need to store $M$ smaller tensors. Besides saving memory, this variation also seems to be empirically competitive with the original Algorithm 1, if not sometimes better, in terms of the COOT loss. The underlying reason might be related to the approximate DCA scheme [Vo, 2015], where one replaces both steps in each DC iteration by their approximations. We leave the formal theoretical justification of this variation to future work. We call this variation MMOT-DC-v1 and use the same setup as in the experiments of Section 5.

Figure 5: Scatter plots of MMOT-DC-v1 versus other solvers. In all four plots, the points tend to concentrate around the line $y=x$, which indicates the comparable performance of MMOT-DC-v1. On the other hand, the top-right plot shows the clear superiority of EGW-PG.

MMOT-DC | MMOT-DC-v1
---|---
0.0822 ($\pm$ 0.0364) | 0.0820 ($\pm$ 0.0361)

Table 2: Average and standard deviation of the COOT loss of MMOT-DC and MMOT-DC-v1. The performance of the two algorithms is very similar.
# Directional dependence of the event-by-event neutron-$\gamma$ multiplicity correlations in $^{252}$Cf(sf)

Stefano Marin <EMAIL_ADDRESS> Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109, USA Eoin P. Sansevero <EMAIL_ADDRESS> Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109, USA M. Stephan Okar Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109, USA Isabel E. Hernandez Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109, USA Ramona Vogt Nuclear and Chemical Sciences Division, Lawrence Livermore National Laboratory, Livermore, CA 94550, USA Physics and Astronomy Department, University of California, Davis, CA 95616, USA Jørgen Randrup Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA Shaun D. Clarke Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109, USA Vladimir A. Protopopescu Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA Sara A. Pozzi Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48109, USA Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA

###### Abstract

We differentiate the event-by-event $n$-$\gamma$ multiplicity data from $^{252}$Cf(sf) with respect to the energies of the emitted particles as well as their relative angles of emission. We determine that neutron emission enhances $\gamma$-ray emission around $0.7$ and $1.2$ MeV, but directional alignment was observed only for $E_{\gamma}\leq 0.7$ MeV, where $\gamma$ rays tended to be emitted parallel and antiparallel to the neutrons emitted in the same event. The emission of $\gamma$ rays at other energies was determined to be nearly isotropic. The presence of the emission and alignment enhancements is explained by positive correlations between neutron emission and quadrupole $\gamma$-ray emission along rotational bands in the de-exciting fragments. This observation corroborates the hypothesis of positive correlations between the angular momentum of a fragment and its intrinsic excitation energy. The results of this work are especially relevant in view of the recent theoretical and experimental interest in the generation of angular momentum in fission. Specifically, we have determined an alignment of the fragment angular momenta in a direction perpendicular to the direction of motion. We interpret the lack of $n$-$\gamma$ angular correlations for fission fragments near closed shells as a weakening of the alignment process for spherical nuclei. Lastly, we have observed that statistical $\gamma$ rays are emitted isotropically, indicating that the average angular momentum removed by this radiation is small. These results, and the analysis tools presented in this work, represent a stepping stone for future analyses of $n$-$\gamma$ emission correlations and their connection to angular momentum properties.

neutron-gamma multiplicity competition; fission fragment de-excitation

## I Introduction

Neutrons and $\gamma$ rays emitted from fission fragments reveal important features of the nuclear fission process and the state of the fragments immediately following fission. Among several open questions in fission, the $n$-$\gamma$ angular correlations are particularly interesting because of their intimate relation to the fission fragment angular momenta.
The angular momentum of a fragment plays a pivotal role in the emission of $\gamma$ rays and in the $n$-$\gamma$ angular distribution. The characterization of the fragment angular momenta is one of the most important open questions in fission physics. The first experimental investigations of fission fragment angular momenta were carried out in the 1960s [1] and 1970s [2], and some early theoretical work was done in the 1980s on the character of the distribution of the fragment angular momentum [3] and the underlying mechanism for its generation [4]. Due to the advances in instrumentation, modeling, and computation since then, the topic of fission fragment angular momenta has gained renewed interest in recent years [5, 6, 7, 8, 9, 10, 11, 12]. Of particular interest are the event-by-event correlations between fragment energy and angular momentum, the directional alignment of the angular momentum with respect to the motion of the fragment, and the correlations, both in magnitude and direction, between the angular momenta of the two fragments.

The evaporation of neutrons from fragments is highly correlated with the fragment intrinsic excitation energy [13, 14, 15], whereas $\gamma$-ray emission correlates strongly with the fragment angular momentum [16, 17]. By analyzing the correlations between neutrons and $\gamma$ rays, it is possible to infer the underlying correlations between the fragment energy and angular momentum. To reduce noise and systematic biases associated with the emission of these particles, it is necessary to differentiate the emission based on its kinematic properties, namely the kinetic energies and directions of the emitted particles.

In this work, we continue our analysis of the event-by-event neutron-$\gamma$ multiplicity correlations presented in Refs. [18, 19]. In Ref. [18], we determined that neutron and $\gamma$-ray emissions are slightly negatively correlated, as a result of energy and angular momentum conservation. The more recent analysis of Ref. [19] examined how the correlations depend on the energy of the emitted particles. We observed predominantly negative correlations with notable positive enhancements at specific $\gamma$-ray energies: $E_{\gamma}\approx 0.7$ and $1.2$ MeV. With the aid of model calculations, we concluded that the positive enhancements originated from positive correlations between the angular momenta and energies of the fragments in a fission event. In this work, we extend the previous investigations by analyzing the correlations differentiated with respect to both energy and direction.

The paper is structured as follows. In Section II, we discuss the origins of the angular distribution of neutron and $\gamma$ radiation. In Section III, we present the new analysis of the experimental data, which takes into account both the energy and the angular dependence of the emitted radiation. Section IV presents the analysis of the data collected using the Chi-Nu liquid organic scintillator array at LANSCE, Los Alamos; the experimental results and possible theoretical interpretations are also discussed. Lastly, in Section V we discuss how the observed $n$-$\gamma$ emission alignments, and lack thereof, indicate that the fragment angular momentum is polarized in a direction perpendicular to the fragment direction of motion, and what this implies for the relationship between the fragment angular momentum and excitation energy.

## II Sources of Angular Correlations

In the fragment center-of-mass frame (CoM), neutrons are emitted with mean velocities comparable to the speed of the fragment in the lab frame. Thus, the neutron kinematic boost effectively determines the angular distribution of neutrons in the lab frame. The emission of neutrons in the CoM is often approximated as isotropic, and this approximation has been validated experimentally [20, 13]. Other effects, related to the coupling of angular momenta and the possibility of scission neutrons [21], would only result in small corrections. Thus, we expect neutrons to primarily follow the direction of the fragment motion. Because the light and heavy fragments are emitted back-to-back in the CoM of the initial fissioning nucleus, the angular distribution of neutrons appears as two distributions focused parallel and anti-parallel to the motion of the fragments, i.e., the fission axis. The kinematic focusing causes neutrons with greater kinetic energy in the lab frame to be more tightly aligned with the fission axis, making their distributions more anisotropic in the lab frame; a small Monte Carlo illustration of this effect is sketched below. Larger lab-frame energies are also associated with larger CoM energies, which biases the sample toward symmetric fission (see Figs. 6 and 18 in Ref. [13]), i.e., fission events resulting in two similar-mass fragments.
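As a toy illustration of kinematic focusing (our own sketch; the fragment speed and the exponential CoM spectrum are arbitrary assumptions, not values from this experiment), the following Monte Carlo shows that neutrons selected at higher lab-frame energies are, on average, more tightly aligned with the fission axis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
v_frag = 1.2                             # fragment speed (cm/ns), assumed value
E_cm = rng.exponential(1.0, size=n)      # toy CoM neutron energy spectrum (MeV)
v_cm = 1.383 * np.sqrt(E_cm)             # neutron speed: v [cm/ns] ~ 1.383 sqrt(E [MeV])
mu = rng.uniform(-1.0, 1.0, size=n)      # cos(theta_CoM), isotropic CoM emission

# Galilean boost to the lab frame along the fission axis
v_par = v_frag + v_cm * mu
v_perp = v_cm * np.sqrt(1.0 - mu**2)
E_lab = (v_par**2 + v_perp**2) / 1.383**2
cos_lab = v_par / np.sqrt(v_par**2 + v_perp**2)

for lo, hi in [(0.5, 1.5), (3.0, 7.0)]:
    sel = (E_lab > lo) & (E_lab < hi)
    print(f"E_lab in ({lo}, {hi}) MeV: <cos(theta_lab)> = {cos_lab[sel].mean():.2f}")
```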
The angular distribution of $\gamma$ rays is also affected by kinematic boosting, an effect known as $\gamma$-ray aberration [22]. However, the effect is significantly weaker, given the relatively low velocity of the fragments. The effect of a weak aberration depends on the angular distribution in the CoM, but can be approximated as a linear term in the cosine of the angle of emission in the lab frame. The kinematic boosting of both neutrons and $\gamma$ rays tends to make the $n$-$\gamma$ angular distribution more parallel when the particles are emitted by the same fragment, and more antiparallel when the particles are emitted by different fragments. Because we observe $n$-$\gamma$ correlations irrespective of the fragments emitting them, these two effects tend to cancel each other. Thus, we do not expect the aberration of $\gamma$ rays to play a dominant role in the $n$-$\gamma$ angular correlations.

The coupling of the fragment angular momentum with that of the emitted $\gamma$ rays gives rise to strong observable angular correlations [1]. We can observe the angular correlations of $\gamma$ rays relative to the fission axis because the angular momenta of the fragments are aligned perpendicular to the fission axis [1, 23, 3, 24, 8]. The emission of $\gamma$ rays following fission is usually divided into two stages: first, $\gamma$ rays are emitted in the continuum to dissipate the intrinsic excitation energy left over after neutron evaporation; second, $\gamma$ rays are emitted to dissipate the energy stored in the collective degrees of freedom. We call the first type of $\gamma$ emission statistical, because the transition strengths are determined from a statistical analysis of the level densities. We call the second type of emission discrete, since the transitions are determined by the available levels in the discrete region of the level scheme. The angular distributions of these two categories of $\gamma$ rays are very different. Statistical $\gamma$-ray emission is assumed to be primarily electric dipole radiation.
The angular distributions of statistical $\gamma$ rays have been described as either isotropic [1], or aligned parallel to the fragment angular momentum and thus perpendicular to the fission axis [25]. The difference between these two alternatives lies in the angular momenta of the initial and final states, $J_{i}$ and $J_{f}$ respectively. Transitions with $J_{f}=J_{i}\pm 1$ contribute $\gamma$ rays emitted predominantly perpendicular to the fission axis, whereas transitions with $J_{f}=J_{i}$ contribute $\gamma$ rays emitted predominantly parallel to it [26]. Depending on the proportion of the two types of dipole transitions, statistical $\gamma$ rays can therefore have different angular distributions.

Discrete emission along the yrast band is primarily electric quadrupole in nature, although magnetic dipole contributions at the lowest energies have also been observed. Discrete quadrupole emission along a rotational band tends to be stretched, i.e., the angular momentum removed by the radiation is maximized, $J_{f}=J_{i}-2$. Because of their stretched character, the angular distributions of $\gamma$ rays from quadrupole band transitions are directed approximately perpendicular to the angular momentum axis and are thus predominantly parallel to the fission axis [26].

Based on the discussion presented above, we expect both neutron and $\gamma$-ray emission to be correlated with the fission axis, with no direct correlations between them. Intrinsic correlations between sequential emissions are possible, however, even in the case of a nucleus that is not initially oriented. These angular correlations arise because the fragment, as it de-excites from energy level to energy level, is in a superposition of magnetic substates of angular momentum. Angular momentum conservation dictates that the magnetic quantum numbers of successive levels are entangled with one another, introducing intrinsic correlations between them. The intrinsic angular correlations between neutrons and $\gamma$ rays can be quite strong. Thus, while we cannot currently exclude that these angular correlations play an important role in determining the $n$-$\gamma$ correlations, there are several factors that reduce their strength. First, we expect these intrinsic correlations only to affect the emission from a single fragment. Second, the emission of other particles in the same decay sequence will diminish the observed correlations. This is particularly important for the correlations between neutrons and the discrete $\gamma$ rays, which are generally emitted after $1-2$ statistical $\gamma$ rays have been emitted. Third, intrinsic correlations can only be expected if neutrons are emitted with some orbital angular momentum. It is usually assumed that neutrons below approximately $1-2$ MeV are emitted predominantly as $s$-waves, significantly reducing the effects of intrinsic angular correlations. A recent investigation [5] showed that the optical model of the nucleus can predict much larger values of the neutron orbital angular momentum, but more evidence is needed. In light of these considerations, the intrinsic angular correlations are not explicitly discussed in this paper and will be investigated in future work.

## III Experimental Analysis

In this analysis, we differentiate the covariance of the event-by-event neutron and $\gamma$-ray multiplicities, $N_{n}$ and $N_{\gamma}$, with respect to the angle between them, $\theta_{n\gamma}$, as well as their energies, $E_{n}$ and $E_{\gamma}$.
This analysis is an improvement over the analysis presented in Ref. [19], which only differentiated the correlations with respect to energy. The differentiated normalized covariance, $C_{E_{n}E_{\gamma}\theta_{n\gamma}}$, is

$C_{E_{n}E_{\gamma}\theta_{n\gamma}}=\frac{\partial^{3}\text{cov}(N_{n},N_{\gamma})}{\partial E_{n}\partial E_{\gamma}\partial\theta_{n\gamma}}\left[\frac{\partial}{\partial\theta_{n\gamma}}\left(\frac{\partial\langle N_{n}\rangle}{\partial E_{n}}\frac{\partial\langle N_{\gamma}\rangle}{\partial E_{\gamma}}\right)\right]^{-1}\ .$ (1)

The quantity $C_{E_{n}E_{\gamma}\theta_{n\gamma}}$ is bounded from below by $-1$, but has no upper bound. The three differentiations we perform here serve distinct purposes. Because the discrete level transitions tend to be lower in energy than the statistical emission, the differentiation with respect to $E_{\gamma}$ helps to sharpen the separation between statistical and discrete emission. The differentiation with respect to neutron energy sharpens the neutron-$\gamma$ angular correlations by narrowing the angular distribution of neutrons with respect to the fission axis. This differentiation also allows the identification of correlations that exist due to sample biasing, and are thus unrelated to the more interesting correlations between a fragment's energy and angular momentum. Lastly, the differentiation with respect to angle is used to identify the angular momentum properties of the $\gamma$ rays.

The data analyzed in this paper were collected with the Chi-Nu array at Los Alamos National Laboratory. These data are the same as those analyzed in Refs. [18, 19, 27, 28]. A detailed description of the experiment and the detector can be found in those references. In this paper, we focus on the capabilities of angular measurements with the Chi-Nu array; see Ref. [19] for a discussion of the energy acceptance of the Chi-Nu detectors. Because spontaneous fission lacks a preferred direction and the fission axis is not experimentally measured, the directional distribution is measured between pairs of emitted particles. Specifically, in the present experiment we measure the neutron-$\gamma$ covariance between the measured neutron and $\gamma$-ray multiplicities in two detectors whose geometric centers are separated by an angle $\theta_{n\gamma}$. The normalization of the covariance, the factor in square brackets in Eq. (1), is calculated from the product of the mean measured multiplicities in each detector. Because of different gain settings and distances to the source, the efficiencies of the detectors varied considerably. These variations in individual detector behavior translate directly into variations in the product of efficiencies for each detector pair. With $42$ active detectors during the experiment, a total of $861$ detector pairs are possible. However, because we take the first detector to detect a neutron and the second a $\gamma$ ray, each detector pair is doubly degenerate, and thus a total of $1722$ detector-pair combinations are considered. We show the angular efficiency of the system in the upper half of Fig. 1. Each point in the polar plot represents a detector pair, and the distance from the origin represents the energy-averaged efficiency of the detector pair, i.e., the product of the measured neutron-$\gamma$ multiplicities across all energies. The red circle represents the average detector-pair efficiency across all detector pairs.
The average detector efficiency, expressed as the rate of double counts per fission event in a specific detector pair, $\langle d_{n}d_{\gamma}\rangle$, is indicated on the figure. In the lower half of Fig. 1, we group detector pairs in angular bins. The size of the marker represents the number of detector pairs included in that group, while the position of the marker represents the average in angle and efficiency over all pairs in the group. The legend indicates the number of detector pairs in each bin. The standard deviations in both angle and efficiency are shown as error bars.

Figure 1: The angular efficiency of the Chi-Nu detection system. See text for details.

Because the Chi-Nu array is hemispherical, the distribution of angles between detector pairs is neither isotropic nor symmetric with respect to $\pi/2$. The variations observed in the efficiencies are significantly reduced if the mean of each angular group is taken. In fact, we see from the lower half of Fig. 1 that the mean pair efficiency of the angular groups falls close to the average over all detector pairs. However, it should be noted that, because of the hemispherical geometry of Chi-Nu, we expect the measurements to be biased toward neutrons and $\gamma$ rays emitted at acute angles.

The angular resolution of the detection system is defined as the spread in the angles measured by the experimental system when the particles are emitted at a fixed angle. The resolution depends on the room return, where the radiation interacts with materials around the detector system, or with the detector system itself, before being measured, and on the finite width of the detectors. Given the large size of the liquid organic scintillators employed, the resolution due to the finite width of the detectors dominates, introducing an uncertainty in the measured angles of $\approx 0.13$ rad. This result was confirmed by an MCNPX-PoliMi [29] simulation of the detector array. Using the same simulation, we have also determined that systematic biases are negligible: the mean angles between measured particles are the same as the emitted angles, while the width of the measured angular distribution is broadened by angular resolution effects. Cross-talk effects are negligible because neutrons and $\gamma$ rays are easily distinguished by the organic scintillators.

The unfolding of the neutron and $\gamma$-ray energies is performed using the method described in Ref. [19]. This type of unfolding does not completely recover the initial distribution, but addresses systematic biases in the average spectra. Specifically, the unfolded distribution remains broadened with a system-characteristic resolution. Because the angular response of Chi-Nu does not introduce systematic biases and only introduces broadening, we do not unfold the angle, and we take the angle between two particles to be the angle between the centers of the detectors where the particles interacted. We consider energy ranges of $1.0<E_{n}<7.4$ MeV and $0.24<E_{\gamma}<3$ MeV, with bin widths of $0.4$ and $0.16$ MeV for neutrons and $\gamma$ rays, respectively.

## IV Results

For every fixed neutron and $\gamma$-ray energy $E_{n}$ and $E_{\gamma}$, we obtain curves defining the magnitude of the neutron-$\gamma$ correlations as a function of the angle $\theta_{n\gamma}$ between the two emissions, and we extract the Legendre coefficients from them. In the CoM frame, only even-order polynomials can describe the angular distribution of $\gamma$ rays [26]. However, the aberration of $\gamma$ rays can introduce odd-order, antisymmetric terms into the angular distribution. We have fit the experimental data using Legendre polynomials and have determined that the best fit is provided by retaining only the symmetric 0th- and 2nd-order polynomials. Thus, the $n$-$\gamma$ correlations at fixed energies are fit using

$C_{E_{n}E_{\gamma}\theta_{n\gamma}}=A_{0}(E_{n},E_{\gamma})+A_{2}(E_{n},E_{\gamma})P_{2}(\text{cos}\theta_{n\gamma})\ .$ (2)

A sketch of this least-squares extraction is given below.
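For concreteness, the following snippet (our own sketch; the optional inverse-uncertainty weighting is our choice) extracts $A_{0}$ and $A_{2}$ for one $(E_{n},E_{\gamma})$ bin by linear least squares:

```python
import numpy as np

def fit_a0_a2(theta, C, sigma=None):
    """Fit C(theta) ~ A0 + A2 * P2(cos(theta)) for one (E_n, E_gamma) bin.

    theta : angles between detector pairs (rad)
    C     : normalized covariance values at those angles
    sigma : optional per-point uncertainties used as inverse weights
    """
    x = np.cos(theta)
    P2 = 0.5 * (3.0 * x**2 - 1.0)                 # 2nd-order Legendre polynomial
    A = np.column_stack([np.ones_like(x), P2])    # design matrix [1, P2]
    y = np.asarray(C, dtype=float)
    if sigma is not None:
        A, y = A / sigma[:, None], y / sigma      # weighted least squares
    (a0, a2), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a0, a2
```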
Neutron-$\gamma$ correlations obtained from the data are shown in Fig. 2 for several selected combinations of $E_{n}$ and $E_{\gamma}$. On the same plot, we show the fit to the data retaining only the 0th- and 2nd-order polynomials. The Legendre coefficients across all neutron and $\gamma$-ray energies are shown in Fig. 3. Lastly, the statistical uncertainties of the coefficients, determined by randomly resampling the data, are also shown in Fig. 3.

Figure 2: Curves of $C_{E_{n},E_{\gamma}\theta_{n\gamma}}$ for selected neutron and $\gamma$-ray energies. Legendre polynomial fits of $0^{\text{th}}$ and $2^{\text{nd}}$ order are shown.

Figure 3: Legendre polynomial fit parameters to $C_{E_{n}E_{\gamma},\theta_{n\gamma}}$.

The parameter $A_{0}$ has a simple physical interpretation: it represents the magnitude of the $n$-$\gamma$ covariance averaged over all emission angles. The result shown in Fig. 3(a), as expected from our previous investigation [19], shows structure developing in the regions $E_{\gamma}\approx 0.7$ and $E_{\gamma}\approx 1.2$ MeV. Using model calculations, we showed that the presence of the enhancement at $0.7$ MeV can be explained by positive correlations between the fragment angular momentum and energy, increasing the feeding of rotational band states with increasing neutron multiplicity, and thus excitation energy. The enhancement at $1.2$ MeV is explained in part by the same correlations and in part by a biasing of the fission samples toward symmetric fission. For a more detailed analysis of the angle-independent $n$-$\gamma$ correlations, see Ref. [19].

The coefficient of the second Legendre polynomial, $A_{2}$, shown in Fig. 3(c), captures the dependence of the correlations on the emission angle between neutrons and $\gamma$ rays. Positive $A_{2}$ indicates that $\gamma$ rays are aligned predominantly along the direction of neutron emission, both parallel and antiparallel, while negative $A_{2}$ indicates that $\gamma$ rays are aligned perpendicular to the neutron emission. The statistical uncertainties for both $A_{0}$ and $A_{2}$ are shown in Fig. 3(b) and (d), respectively. The uncertainties are larger at the higher energies, where fewer particles were measured. However, the uncertainties are several times smaller than the magnitude of the Legendre coefficients in the regions of enhancement that we discuss below.

We note an enhanced positive structure at $E_{\gamma}\approx 0.7$ MeV in $A_{2}$. This enhancement closely resembles the structure observed in $A_{0}$, but it extends to lower energies rather than to higher ones. Importantly, we do not observe pronounced angular correlations at $E_{\gamma}\approx 1.2$ MeV, as we do in $A_{0}$. We observe a trend of enhanced correlations with increasing neutron energies. These results are not surprising considering the discussion presented in Sec. II.
With increasing neutron lab-frame energies, we bias toward more kinematically-boosted neutrons; thus the angle between $\gamma$ rays and neutrons becomes more representative of the angle between $\gamma$ rays and the fission axis. At high $E_{\gamma}$ the angular correlations are much smaller and very close to $0$. Angular correlations are also weak at the lowest $\gamma$-ray energies, but caution should be used in interpreting this region, as it borders the lower edge of the $E_{\gamma}$ acceptance and the unfolding might lead to artifacts. We do not observe a significant dependence on neutron energy in either the low or the high $E_{\gamma}$ region.

The alignment of $\gamma$ rays with the fission axis at $E_{\gamma}\approx 0.7$ MeV indicates that these $\gamma$ rays come predominantly from stretched quadrupole transitions along rotational bands. As noted above, these transitions generate $\gamma$ rays predominantly perpendicular to the angular momentum, and thus parallel to the fission axis. This is not the first experimental observation of these angular correlations; see, for example, Refs. [30, 31, 1]. However, the results of this work combine these angular correlations with the observed positive overall covariance between neutrons and $\gamma$ rays in this energy region, manifested as $A_{0}$, thus giving further confirmation of the presence of positive correlations between the fragment angular momentum and energy.

The positively-correlated region at $E_{\gamma}\approx 1.2$ MeV shows some deviations from the expected behavior. In Ref. [19], we identified high-energy transitions in spherical nuclei near the shell closure of $^{132}$Sn as the main contributor to the enhanced correlations. We expect these $\gamma$ rays to be predominantly stretched, thus giving rise to positive angular correlations. However, the experimental observation shows that the angular correlations in this region, while still positive, are significantly reduced with respect to the enhancement at $E_{\gamma}\approx 0.7$ MeV. This reduction can occur if, in spherical nuclei, the direction of the angular momentum is not strongly oriented perpendicular to the fission axis, as it is for more deformed fragments.

In the higher-energy region of $E_{\gamma}\gtrapprox 1.8$ MeV the emission should be dominated by statistical transitions. We observe that the $\gamma$ rays in this region are predominantly isotropic. This is in good agreement with earlier observations by Val'skii et al. [30] and Hoffman [1], but in disagreement with the simplified model of stretched $E1$ transitions mentioned by Oberstedt et al. [31]. Therefore, the results of this analysis indicate that the statistical transitions in the continuum are not dominantly stretched: a significant component connects states of equal angular momentum.

In the low-energy region of $E_{\gamma}\approx 0.4$ MeV we again find $\gamma$ rays uncorrelated with the neutron direction, and hence with the fission axis. Val'skii [30] investigated the multipolar character of $\gamma$-ray transitions and determined that at the lowest energies, and significantly for $E_{\gamma}<0.5$ MeV, $M1$ transitions become important. We can explain the relative isotropy of the transitions at these energies, also observed by Val'skii, by considering that these transitions have a lower probability of being stretched since, in some situations, two connected levels in different bands will have the same angular momentum.
Even for stretched transitions, the angular correlation at low $E_{\gamma}$ will be reduced because the contributions of $E2$ and $M1$ transitions are both important in this energy region and their angular distributions carry opposite signs.

## V Concluding remarks

We have expanded our previous analysis of event-by-event $n$-$\gamma$ emission correlations by considering the angle between the emitted particles in addition to their individual energies. We have observed enhanced emission correlations, as well as alignments of the emitted particles, for $\gamma$-ray energies associated with rotational band transitions. We conclude that this enhancement is related to positive correlations between the fragment energies and angular momenta. Theoretical models can explain these correlations in terms of excitations of rotational modes of the fragments during the fission process [3, 8, 24]. With increasing excitation energy of the fissioning system, these modes are excited more strongly and give rise to an increase in the fragment angular momenta.

An enhancement in the emission of isotropic discrete $\gamma$ rays was observed at higher $\gamma$-ray energies. These emissions are predominantly from stretched electric quadrupole transitions from heavy fragments with masses close to the shell closure of $^{132}$Sn. The results of our analysis indicate that the angular momenta of these fragments are not strongly polarized in the plane perpendicular to the fission axis. These results can be explained by the nearly spherical shape of the fragments in this region, which makes it harder to generate angular momentum and to align it.

In addition to providing further evidence for positive correlations between the energy and angular momentum of a fragment, the results also indicate that statistical $\gamma$-ray emission is not dominantly stretched radiation. The impact of transitions where the angular momenta of the initial and final states are equal is strongly affected by the energy-dependent level densities. It will be interesting to investigate the magnitude of the mixing theoretically and to compare model calculations with the experimental results shown here. Understanding the magnitude of the angular momentum carried by statistical $\gamma$ rays is an essential step toward the determination of the fission fragment initial angular momenta.

Having experimentally determined the alignment of neutrons and $\gamma$ rays, the next step will be to refine the current theoretical models and generate predictions to compare to experiment. Along with several other important observables, these correlations will provide significant help in refining the modeling of angular momentum in fission. The improved models will, in turn, shed light on the fundamental dynamics of the fission process [12], as well as provide predictions of $n$-$\gamma$ emission where experimental data are lacking.

###### Acknowledgements.

S.M. thanks the Chi-Nu experimental group at LANSCE-LANL and M.J. Marcath for sharing the experimental data used in this analysis. This work was in part supported by the Office of Defense Nuclear Nonproliferation Research & Development (DNN R&D), National Nuclear Security Administration, US Department of Energy. This work was funded in part by the Consortium for Monitoring, Technology, and Verification under Department of Energy National Nuclear Security Administration award number DE-NA0003920. The work of V.A.P. was performed under the auspices of UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S.
Department of Energy. The work of R.V. was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. J. R. acknowledges support from the Office of Nuclear Physics in the U.S. Department of Energy under Contract DE-AC02-05CH11231. ## References * Hoffman [1964] M. M. Hoffman, Phys. Rev. 133, B714 (1964). * Wilhelmy _et al._ [1972] J. B. Wilhelmy, E. Cheifetz, R. C. Jared, S. G. Thompson, H. R. Bowman, and J. O. Rasmussen, Phys. Rev. C 5, 2041 (1972). * Moretto and Schmitt [1980] L. G. Moretto and R. P. Schmitt, Phys. Rev. C 21, 204 (1980). * Døssing and Randrup [1985] T. Døssing and J. Randrup, Nuclear Physics A 433, 215 (1985). * Stetcu _et al._ [2021] I. Stetcu, A. E. Lovell, P. Talou, T. Kawano, S. Marin, S. A. Pozzi, and A. Bulgac, Phys. Rev. Lett. 127, 222502 (2021). * Bulgac _et al._ [2021] A. Bulgac, I. Abdurrahman, K. Godbey, and I. Stetcu, “Fragment intrinsic spins and fragments’ relative orbital angular momentum in nuclear fission,” (2021), arXiv:2108.03763 [nucl-th] . * Vogt and Randrup [2021] R. Vogt and J. Randrup, Phys. Rev. C 103, 014610 (2021). * Randrup and Vogt [2021] J. Randrup and R. Vogt, Phys. Rev. Lett. 127, 062502 (2021). * Wilson _et al._ [2021] J. N. Wilson, D. Thisse, M. Lebois, N. Jovančević, D. Gjestvang, R. Canavan, M. Rudigier, D. Étasse, R.-B. Gerst, L. Gaudefroy, E. Adamska, P. Adsley, A. Algora, M. Babo, K. Belvedere, J. Benito, G. Benzoni, A. Blazhev, A. Boso, S. Bottoni, M. Bunce, R. Chakma, N. Cieplicka-Oryńczak, S. Courtin, M. L. Cortés, P. Davies, C. Delafosse, M. Fallot, B. Fornal, L. Fraile, A. Gottardo, V. Guadilla, G. Häfner, K. Hauschild, M. Heine, C. Henrich, I. Homm, F. Ibrahim, Ł. W. Iskra, P. Ivanov, S. Jazrawi, A. Korgul, P. Koseoglou, T. Kröll, T. Kurtukian-Nieto, L. Le Meur, S. Leoni, J. Ljungvall, A. Lopez-Martens, R. Lozeva, I. Matea, K. Miernik, J. Nemer, S. Oberstedt, W. Paulsen, M. Piersa, Y. Popovitch, C. Porzio, L. Qi, D. Ralet, P. H. Regan, K. Rezynkina, V. Sánchez-Tembleque, S. Siem, C. Schmitt, P.-A. Söderström, C. Sürder, G. Tocabens, V. Vedia, D. Verney, N. Warr, B. Wasilewska, J. Wiederhold, M. Yavahchova, F. Zeiser, and S. Ziliani, Nature 590, 566 (2021). * Chebboubi _et al._ [2017] A. Chebboubi, G. Kessedjian, O. Litaize, O. Serot, H. Faust, D. Bernard, A. Blanc, U. Köster, O. Méplan, P. Mutti, and C. Sage, Physics Letters B 775, 190 (2017). * Rakopoulos _et al._ [2018] V. Rakopoulos, M. Lantz, A. Solders, A. Al-Adili, A. Mattera, L. Canete, T. Eronen, D. Gorelov, A. Jokinen, A. Kankainen, V. S. Kolhinen, I. D. Moore, D. A. Nesterenko, H. Penttilä, I. Pohjalainen, S. Rinta-Antila, V. Simutkin, M. Vilén, A. Voss, and S. Pomp, Phys. Rev. C 98, 024612 (2018). * Marević _et al._ [2021] P. Marević, N. Schunck, J. Randrup, and R. Vogt, Phys. Rev. C 104, L021601 (2021). * Göök _et al._ [2014] A. Göök, F.-J. Hambsch, and M. Vidali, Physical Review C 90, 064611 (2014). * Signarbieux _et al._ [1972] C. Signarbieux, J. Poitou, M. Ribrag, and J. Matuszek, Physics Letters B 39, 503 (1972). * Gavron and Fraenkel [1971] A. Gavron and Z. Fraenkel, Phys. Rev. Lett. 27, 1148 (1971). * Travar _et al._ [2021] M. Travar, V. Piau, A. Göök, O. Litaize, J. Nikolov, A. Oberstedt, S. Oberstedt, J. Enders, M. Peck, W. Geerts, and M. Vidali, Physics Letters B 817, 136293 (2021). * Nifenecker _et al._ [1972] H. Nifenecker, C. Signarbieux, M. Ribrag, J. Poitou, and J. Matuszek, Nuclear Physics A 189, 285 (1972). * Marin _et al._ [2020] S. Marin, V. A. Protopopescu, R. Vogt, M. 
J. Marcath, S. Okar, M. Y. Hua, P. Talou, P. F. Schuster, S. D. Clarke, and S. A. Pozzi, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 968, 163907 (2020). * Marin _et al._ [2021] S. Marin, M. S. Okar, E. P. Sansevero, I. E. Hernandez, C. A. Ballard, R. Vogt, J. Randrup, P. Talou, A. E. Lovell, I. Stetcu, O. Serot, O. Litaize, A. Chebboubi, S. D. Clarke, V. A. Protopopescu, and S. A. Pozzi, Phys. Rev. C 104, 024602 (2021). * Chietera _et al._ [2015] A. Chietera, L. Stuttgé, F. Gönnenwein, Y. Kopatch, M. Mutterer, I. Guseva, A. Gagarski, E. Chernysheva, O. Dorvaux, F. Hambsch, F. Hanappe, Z. Mezentsevah, and S. Telezhnikovch, Acta Physica Polonica B 46, 569 (2015). * Petrov _et al._ [2008] G. A. Petrov, A. M. Gagarski, I. S. Guseva, V. E. Sokolov, G. V. Val’ski, A. S. Vorobiev, D. O. Krinitcin, O. A. Shcherbakov, D. V. Nikolaev, Y. S. Pleva, V. I. Petrova, and T. A. Zavarukhina, Physics of Atomic Nuclei 71, 1137 (2008). * Landau _et al._ [1975] L. D. Landau, E. M. Lifshitz, and M. Hamermesh, _Classical Theory of Fields_ , Course of Theoretical Physics (Elsevier Science, 1975). * Randrup [1982] J. Randrup, Nuclear Physics A 383, 468 (1982). * Randrup and Vogt [2014] J. Randrup and R. Vogt, Phys. Rev. C 89, 044601 (2014). * Oberstedt _et al._ [2018] A. Oberstedt, R. Billnert, A. Gatera, A. Göök, and S. Oberstedt, EPJ Web Conf. 193, 03005 (2018). * Tolhoek and Cox [1953] H. Tolhoek and J. Cox, Physica 19, 101 (1953). * Marcath _et al._ [2018] M. J. Marcath, R. C. Haight, R. Vogt, M. Devlin, P. Talou, I. Stetcu, J. Randrup, P. F. Schuster, S. D. Clarke, and S. A. Pozzi, Phys. Rev. C 97, 044622 (2018). * Schuster _et al._ [2019] P. F. Schuster, M. J. Marcath, S. Marin, S. D. Clarke, M. Devlin, R. C. Haight, R. Vogt, P. Talou, I. Stetcu, T. Kawano, J. Randrup, and S. A. Pozzi, Phys. Rev. C 100, 014605 (2019). * Pozzi _et al._ [2003] S. A. Pozzi, E. Padovani, and M. Marseguerra, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 513, 550 (2003). * Val’ski _et al._ [1969] G. V. Val’ski, B. M. Aleksandrov, I. A. Baranov, A. S. Krivokatskii, G. A. Petrov, and Y. S. Pleva, Soviet Journal of Nuclear Physics 10, 137 (1969). * Oberstedt _et al._ [2019] A. Oberstedt, A. Gatera, A. Göök, and S. Oberstedt, Acta Physica Polonica B 50, 275 (2019).
# Design a Win-Win Strategy That Is Fair to Both Service Providers and Tasks When Rejection Is Not an Option

Yohai Trabelsi1, Pan Xu2, Sarit Kraus1

1Department of Computer Science, Bar-Ilan University, Ramat Gan, Israel

2New Jersey Institute of Technology, Newark, NJ, USA

<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

Assigning tasks to service providers is a frequent procedure across various applications. Often the tasks arrive dynamically while the service providers remain static. Preventing task rejection caused by service provider overload is of utmost significance. To ensure a positive experience in relevant applications for both service providers and tasks, fairness must be considered. To address the issue, we model the problem as an online matching within a bipartite graph and tackle two minimax problems: one focuses on minimizing the highest waiting time of a task, while the other aims to minimize the highest workload of a service provider. We show that the second problem can be expressed as a linear program and thus solved efficiently while maintaining a reasonable approximation to the objective of the first problem. We develop novel methods that utilize the two minimax problems. We conduct extensive simulation experiments using real data and demonstrate that our novel heuristics, based on the linear program, perform remarkably well.

## 1 Introduction

In resource allocation, numerous problems can be represented as online matching in bipartite graphs. One side of the graph comprises service providers (interchangeably called workers in this paper), while the other consists of the task types to be allocated. The graph's edges indicate the qualifications of service providers to perform tasks of specific types. In online matching problems, a common scenario involves one dynamic side and one static side. This dynamic-static setup finds application in various contexts, such as matching riders (dynamic) to drivers (static) Dickerson et al. (2021), connecting search queries (dynamic) to advertisers (static) in sponsored search Delong et al. (2022), and facilitating the teleoperation of autonomous vehicles (AVs) Ackerman Viden et al. (2023). The primary objective in these problems is to optimize some criteria from the perspective of the allocator. Some other works are dedicated to optimizing allocation fairness. For example, in the domain of ride-sourcing, a method to achieve allocation fairness was proposed in Lesmana et al. (2019). Additionally, certain studies address cases where fairness should be maintained for both online tasks and offline workers Esmaeili et al. (2023).

Our work is motivated by the teleoperation of AVs, which has garnered increasing attention recently (e.g., Zhang (2020); Ackerman Viden et al. (2023); Tener and Lanir (2022)). The primary role of teleoperation is to aid AVs by intervening in challenging driving situations (as noted in Tener and Lanir (2022), AVs will need this intervention at least in the near future). Ensuring a fair allocation of teleoperators to driving tasks is crucial for enhancing the satisfaction of both teleoperators and AVs' users. In particular, if certain intervention requests have significantly longer waiting times, or if some teleoperators are disproportionately busier than others, such imbalances can lead to dissatisfaction among those affected. In addition, since a person in the vehicle is awaiting the teleoperator's intervention, a rejection of a request is unacceptable.
Another property of this application is that the teleoperators (workers) are reusable, which means they are ready to perform a new intervention request (task) once they finish a previously allocated request. We model the problem as online matching in a bipartite graph and propose several approaches to optimize fairness for both the tasks (e.g., intervention requests) and the workers (e.g., teleoperators) involved in the process. Our notion of fairness is aligned with Rawls' theory of justice Rawls (1999). We introduce two minimax problems within the given context. The first concerns fairness regarding tasks relative to waiting times, while the second focuses on Rawlsian fairness for service providers based on their workload. In both scenarios, task rejection is not permissible. We demonstrate that the second problem can be efficiently formulated as a linear program. Notably, the solution to the second problem mirrors the first when task durations from each worker conform to the same distribution. In cases where this does not hold, we show that the second problem's solution approximates the first problem's solution, supported by a provable approximation ratio. Our study concludes with extensive simulations that underscore the efficacy of these minimax problems. Furthermore, we devise innovative heuristics that leverage the minimax solutions. These heuristics enhance task fairness while preserving favorable outcomes for worker fairness.

Our main contributions are: (1) We propose two models to promote fairness among tasks and workers. (2) We present an LP-based algorithmic framework, which can exactly solve fairness maximization among workers and approximately among tasks, and we provide a tight approximation bound. (3) We empirically implement and compare different methods, including several baselines, on datasets involving the teleoperation of AVs.

### 1.1 Related Work

In this section, we describe previous works on fair allocation and on allocation with delays. Notably, to our knowledge, our work is the first to consider fairness and allocation delays together.

#### Fair allocation

Some studies address fair allocation focusing on only one side of the graph, as seen in Ma et al. (2020). Although their fairness approach resembles ours, it pertains solely to one side of the graph, which falls short of our requirements. Other research, like Patro et al. (2020), deals with fairness in recommendation systems. However, the fairness objectives in recommendation systems significantly differ from those in task allocation contexts. Practical solutions for enhancing fairness for both service providers and tasks are explored in works such as Zhou et al. (2023). Regrettably, this branch of research lacks theoretical performance bounds for their solutions. The fairness principles in Esmaeili et al. (2023) closely align with ours. They consider both workers (offline side) and tasks (online side), embracing Rawlsian welfare Rawls (1958). Nonetheless, task rejection is permissible in their scenario if workers are unavailable.

#### Allocation with delayed assignments

The original online matching problem was introduced in Karp et al. (1990), where static nodes (workers) are instantly paired with dynamic nodes (tasks) upon arrival. However, real scenarios often lack immediate worker availability for tasks, prompting consideration for task execution delays over outright rejection. Numerous works tackle resource allocation with potential task delays.
However, many of these approaches (e.g., Righter (1987); Li et al. (2023)) prioritize utility maximization without factoring in task wait times or worker workload. Some leverage reinforcement learning for such issues yet often make batch decisions, leading to suboptimal outcomes. Moreover, theoretical guarantees are frequently absent. An LP-based method for delayed allocations is presented in Ackerman Viden et al. (2023), optimizing a complex utility function that accounts for task waiting times but overlooks worker workload. Another pertinent domain involves queue admission control systems with multiple classes. Here, diverse customer types (tasks) arrive dynamically, and a decision-maker determines which task to accept, as demonstrated in Rigter et al. (2022). However, several studies in this realm do not distinguish between workers, while others permit task rejection. To our knowledge, the problem of two-sided fair allocation when task rejection is not allowed has not yet been addressed.

## 2 Preliminaries

| Notation | Meaning |
|---|---|
| $G$ | Input network graph $G=(I,J,E)$. |
| $I$ ($J$) | Set of worker (task) types. |
| $\mathcal{N}_{i}$ ($\mathcal{N}_{j}$) | Set of neighbors of $i$ ($j$). |
| $i\sim j$ ($j\sim i$) | Equivalent to $i\in\mathcal{N}_{j}$ ($j\in\mathcal{N}_{i}$). |
| $\lambda_{j}$ | Arrival rate of task type $j\in J$. |
| $\lambda_{i}$ | Arrival rate on worker $i\in I$. |
| $\mathrm{Exp}(\mu)$ | Exponential distribution of rate $\mu>0$. |
| $\mathrm{Exp}(\mu_{ij})$ | Service time taken by worker $i$ to serve $j$. |
| $\rho_{i}\in[0,1]$ | Workload of worker $i\in I$. |
| $w_{j}$ | Expected (absolute) waiting time of $j$; see Eqn. (4). |
| $\bar{w}_{j}$ | Expected (relative) waiting time of $j$; see Eqn. (5). |
| $\kappa\geq 1$ | $\max_{i\in I}\big(\max_{j\sim i,j^{\prime}\sim i}\mu_{ij}/\mu_{ij^{\prime}}\big)$. |

Table 1: A glossary of notations throughout this paper.

Suppose we use a bipartite graph $G=(I,J,E)$ to model the worker-task network, where $I$ denotes the set of offline workers (e.g., teleoperators), $J$ the set of types of tasks, and an edge $e=(i,j)$ indicates the feasibility of worker $i$ to serve the task (of type) $j$. Note that at certain points within this paper, we abuse the notation by referring to $j$ as a task instead of a task type. We also abuse the notation by referring to an edge $e=(i,j)$ as $(ij)$. Tasks of type $j\in J$ arrive following an independent Poisson process of rate $\lambda_{j}>0$. For each edge $e=(i,j)\in E$, we assume it takes worker $i$ an exponentially distributed service time of rate $\mu_{ij}>0$ to complete a task of type $j$, i.e., with mean $1/\mu_{ij}$. (This assumption is justified in Devore (2008); the theoretical analysis does not depend on it, and we could use any distribution if the mean and the variance of the service time are known. Note also that the assumption does not necessarily suggest the most likely outcome is for tasks to be finished in an extremely short time: consider a task type with an exponentially distributed service time of rate $\mu$, denoted as $X=\mathrm{Exp}(\mu)$; for any given threshold $a>0$, $\Pr[X\geq a]=e^{-\mu a}$, which can be close to one when $\mu$ is small.) For each worker $i$ and task $j$, let $\mathcal{N}_{i}\subseteq J$ and $\mathcal{N}_{j}\subseteq I$ denote the sets of neighbors of $i$ and $j$ in the graph $G$. The assigning rule is as follows.
Upon the arrival of a task of type $j$, we (as the central coordinator) have to assign it to a feasible worker $i\in\mathcal{N}_{j}$ immediately: if $i$ is free (or available) at that time, then $i$ will serve $j$ right away; otherwise, $j$ will join the virtual queue of $i$ and will stay there until being served by $i$.

### 2.1 Allocation Policy and Related Concepts

Consider an allocation policy $\pi(\mathbf{x})$ (possibly randomized), characterized as a vector $\mathbf{x}=\{x_{ij}\,|\,(ij)\in E\}$, where $x_{ij}\in[0,1]$ denotes the percentage of tasks (of type) $j$ assigned to and served by worker $i$. In the following, we discuss a few important properties and concepts related to $\pi(\mathbf{x})$.

**Arrival rate on $\mathcal{Q}_{i}$, denoted by $\lambda_{i}$.** Let $\mathcal{Q}_{i}$ be the virtual queue maintained by worker $i\in I$. Observe that $\mathbf{x}=(x_{ij})$ can be viewed alternatively as the probability that $\pi$ assigns each arriving $j$ to $i$. Thus, we claim that $\mathcal{Q}_{i}$ admits a Poisson arrival process of rate $\lambda_{i}:=\sum_{j\in\mathcal{N}_{i}}\lambda_{j}\cdot x_{ij}$. By the property of the Poisson process (see Section 2.3.2 of Gallager (2011)), conditioning on the arrival of a task (of type) $\bar{j}\in J$ on $i$, we claim that $\Pr[\bar{j}=j]=x_{ij}\cdot\lambda_{j}/\lambda_{i}$ for each $j\in\mathcal{N}_{i}$.

**Service time on $\mathcal{Q}_{i}$, denoted by $\mathcal{S}_{i}$.** The analysis above shows that the task joining $\mathcal{Q}_{i}$ is of type $j\in\mathcal{N}_{i}$ with probability equal to $x_{ij}\cdot\lambda_{j}/\lambda_{i}$. Thus, the overall service time is $\mathcal{S}_{i}=\sum_{j\in\mathcal{N}_{i}}\chi_{ij}\cdot\mathrm{Exp}(\mu_{ij})$, where $\chi_{ij}=1$ indicates that the task joining $i$ is of type $j$, with $\mathsf{E}[\chi_{ij}]=x_{ij}\cdot\lambda_{j}/\lambda_{i}$, and $\mathrm{Exp}(\mu_{ij})$ represents the exponentially distributed service time of $i$ for $j$ of rate $\mu_{ij}$. Thus, $\mathcal{S}_{i}$ follows a _hyperexponential distribution_ Gupta and Goyal (1964) with mean equal to

$\displaystyle s_{i}:=\mathsf{E}[\mathcal{S}_{i}]=\sum_{j\in\mathcal{N}_{i}}(x_{ij}\lambda_{j})/(\lambda_{i}\mu_{ij}).$ (1)

**Workload of worker $i$, denoted by $\rho_{i}$.** By definition,

$\displaystyle\rho_{i}:=\lambda_{i}\cdot\mathsf{E}[\mathcal{S}_{i}]=\sum_{j\in\mathcal{N}_{i}}(x_{ij}\lambda_{j})/\mu_{ij},$ (2)

where $\rho_{i}$ can be re-interpreted as the probability that the worker $i$ is busy, or the proportion of time the worker $i$ is busy averaged over a long period. Note that $\rho_{i}<1$ is the key condition ensuring the virtual queue $\mathcal{Q}_{i}$ can enter a stable state. This is also a condition we should impose on every worker $i\in I$ when designing policy $\pi(\mathbf{x})$, since otherwise $i$ could always stay occupied in the long run (thus, not acceptable to $i$) and every task $j$ assigned to $i$ could risk an infinitely long waiting time (not acceptable to $j$).

**Waiting time on worker $i$, denoted by $W_{i}$.** By the analysis above, we see that the queue $\mathcal{Q}_{i}$ on worker $i$ qualifies as an $M/G/1$ queue (using the standard Kendall notation Kendall (1953)), which means it admits a Poisson arrival process, a general service time distribution, and a single worker.
By the Pollaczek-Khinchin mean formula Asmussen (2003),

$\displaystyle w_{i}:=\mathsf{E}[W_{i}]=\frac{\lambda_{i}\mathsf{E}[\mathcal{S}^{2}_{i}]}{2(1-\rho_{i})}=\frac{\sum_{j\in\mathcal{N}_{i}}x_{ij}\lambda_{j}/\mu^{2}_{ij}}{1-\sum_{j\in\mathcal{N}_{i}}x_{ij}\lambda_{j}/\mu_{ij}},$ (3)

where the numerator is equal to

$\displaystyle\lambda_{i}\cdot\mathsf{E}[\mathcal{S}_{i}^{2}]=\lambda_{i}\cdot\mathsf{E}\Big[\Big(\sum_{j\in\mathcal{N}_{i}}\chi_{ij}\cdot\mathrm{Exp}(\mu_{ij})\Big)^{2}\Big]=\lambda_{i}\cdot\sum_{j\in\mathcal{N}_{i}}\mathsf{E}\Big[\chi_{ij}\cdot\mathrm{Exp}^{2}(\mu_{ij})\Big]=\lambda_{i}\cdot\sum_{j\in\mathcal{N}_{i}}(x_{ij}\lambda_{j}/\lambda_{i})\cdot(2/\mu^{2}_{ij})=2\sum_{j\in\mathcal{N}_{i}}(x_{ij}\lambda_{j}/\mu^{2}_{ij}).$

**Absolute and relative waiting time of $j$, denoted by $W_{j}$ and $\overline{W}_{j}$.** Recall that under $\pi(\mathbf{x})$, a task $j$ will be assigned to a feasible worker $i\in\mathcal{N}_{j}$ with probability $x_{ij}$. Thus, the expected (absolute) waiting time of $j$ should be

$\displaystyle w_{j}:=\mathsf{E}[W_{j}]=\sum_{i\in\mathcal{N}_{j}}x_{ij}\cdot w_{i},$ (4)

where $w_{i}$ is the expected waiting time on queue $\mathcal{Q}_{i}$, as shown in (3). The _relative_ waiting time of $j$ on $\mathcal{Q}_{i}$ is defined as the ratio of the waiting time on $\mathcal{Q}_{i}$ to the service time of $i$ for $j$, which has a mean of $1/\mu_{ij}$. Thus, the expected relative waiting time of $j$ should be

$\displaystyle\bar{w}_{j}:=\mathsf{E}[\overline{W}_{j}]=\sum_{i\in\mathcal{N}_{j}}x_{ij}\cdot w_{i}/(1/\mu_{ij})=\sum_{i\in\mathcal{N}_{j}}x_{ij}\cdot\mu_{ij}\cdot\frac{\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}^{2}}{1-\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}}.$ (5)

### 2.2 Two Fairness-Related Objectives

In this paper, we propose the following two fairness metrics and objectives when optimizing a policy $\pi(\mathbf{x})$.

#### FAIR-T: Fairness promotion among tasks, denoted by $\min\max_{j\in J}\bar{w}_{j}$

We quantify the overall fairness among users achieved by policy $\pi(\mathbf{x})$ as the maximum expected _relative_ waiting time among all task types, i.e., $\max_{j\in J}\bar{w}_{j}$. A formula for calculating the relative waiting time is shown in (5). Note that here we choose the relative version instead of the absolute one (i.e., $\max_{j\in J}w_{j}$) following, for example, Maister and others (1984), which asserts that "the more valuable the service, the longer the customer will wait." A compelling example: "Special checkout counters were originally provided because customers with only a few items felt resentful at having to wait a long time for what was seen as a simple transaction. Customers with a full cart of groceries were much more inclined to tolerate lines."

#### FAIR-S: Fairness promotion among workers, denoted by $\min\max_{i\in I}\rho_{i}$

Recall that for each worker $i\in I$, the workload $\rho_{i}\in(0,1)$, as defined in (2), captures the percentage of busy time of worker $i$. Thus, the maximum workload, i.e., $\max_{i\in I}\rho_{i}$, reflects the highest degree of being occupied among all workers under policy $\pi(\mathbf{x})$. By minimizing the maximum workload, $\min\max_{i\in I}\rho_{i}$, we aim to reduce the occupation time of the most occupied worker as much as possible.
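To make these quantities concrete, the following minimal Python sketch evaluates Eqns. (2)-(5) and the two objectives for a given policy $\mathbf{x}$. The instance, the numbers, and the variable names are purely illustrative assumptions, not part of our experimental setup.

```python
import numpy as np

# Toy instance (illustrative numbers): 2 workers, 3 task types.
lam = np.array([0.2, 0.2, 0.2])            # lambda_j: arrival rate of task type j
mu = np.array([[1.0, 2.0, 0.0],            # mu[i, j]: service rate of worker i on type j;
               [0.0, 1.5, 1.0]])           # 0 marks a non-edge, i.e., (i, j) not in E
x = np.array([[1.0, 0.5, 0.0],             # x[i, j]: fraction of type j routed to worker i
              [0.0, 0.5, 1.0]])            # each column sums to 1: no rejection

E = mu > 0
assert np.allclose(x.sum(axis=0), 1.0) and np.all(x[~E] == 0)
safe_mu = np.where(E, mu, 1.0)             # placeholder on non-edges; x is 0 there anyway

rho = (x * lam / safe_mu).sum(axis=1)                    # workload rho_i, Eqn. (2)
w_i = (x * lam / safe_mu**2).sum(axis=1) / (1 - rho)     # P-K mean wait on Q_i, Eqn. (3)
w_j = (x * w_i[:, None]).sum(axis=0)                     # absolute waiting time, Eqn. (4)
wbar_j = (x * mu * w_i[:, None]).sum(axis=0)             # relative waiting time, Eqn. (5)

print("FAIR-S objective (max workload):     ", rho.max())
print("FAIR-T objective (max relative wait):", wbar_j.max())
```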
### 2.3 Two Optimization Programs

Consider an allocation policy $\pi(\mathbf{x})$ parameterized by $\mathbf{x}=(x_{ij})$, where $x_{ij}$ with $(ij)\in E$ denotes the percentage of tasks of type $j$ assigned to worker $i$. For ease of notation, we will use $i\sim j$ (and $j\sim i$) to represent $i\in\mathcal{N}_{j}$ (and $j\in\mathcal{N}_{i}$) throughout this paper. We formulate FAIR-T and FAIR-S as minimax programs as follows.

$\displaystyle(\operatorname{\textbf{PT}})\quad\min\ \max_{j\in J}\Big(\bar{w}_{j}=\sum_{i\sim j}x_{ij}\cdot\mu_{ij}\cdot\frac{\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}^{2}}{1-\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}}\Big),$ (6)

$\displaystyle x_{j}:=\sum_{i\sim j}x_{ij}=1,\quad\forall j\in J$ (7)

$\displaystyle\rho_{i}=\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}\leq 1,\quad\forall i\in I$ (8)

$\displaystyle 0\leq x_{ij}\leq 1,\quad\forall(ij)\in E.$ (9)

$\displaystyle(\operatorname{\textbf{PS}})\quad\min\ \max_{i\in I}\rho_{i},$ (10)

$\displaystyle x_{j}:=\sum_{i\sim j}x_{ij}=1,\quad\forall j\in J$ (11)

$\displaystyle\rho_{i}=\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}\leq 1,\quad\forall i\in I$ (12)

$\displaystyle 0\leq x_{ij}\leq 1,\quad\forall(ij)\in E.$ (13)

We refer to the above programs as $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$, respectively. Let $\mathbf{x}_{t}^{*}$ and $\mathbf{x}^{*}_{s}$ be optimal solutions to $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$, respectively.

###### Lemma 1.

$\pi(\mathbf{x}_{t}^{*})$ and $\pi(\mathbf{x}_{s}^{*})$ are optimal policies under FAIR-T and FAIR-S, respectively.

###### Proof.

We focus on the case of FAIR-T and the program $\operatorname{\textbf{PT}}$; the proof for the other case is similar. Note that the term shown in (6) captures the precise objective we aim to optimize. To prove our claim, we need to demonstrate that all constraints in $\operatorname{\textbf{PT}}$ hold true for any viable policy $\pi(\mathbf{x})$. Constraint (7) holds because every policy must assign each incoming task to a feasible worker without rejection, thereby ensuring that the total percentages assigned for each type sum up to one. Constraint (8) is valid since the workload of any worker (i.e., the percentage of busy time) should not exceed one. Constraint (9) holds true since $x_{ij}$ represents the percentage of tasks of type $j$ assigned to worker $i$. ∎

Lemma 1 suggests that the optimal policies for FAIR-T and FAIR-S can each be obtained by solving the minimax programs $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$, respectively. Note that $\operatorname{\textbf{PS}}$ can be reformulated as a linear program ($\operatorname{LP}$) by introducing an auxiliary variable $\rho$ and modifying the objective as $\min\rho$, along with additional constraints $\rho\geq\rho_{i}$ for all $i\in I$. Consequently, we can efficiently solve $\operatorname{\textbf{PS}}$ and obtain an optimal policy for FAIR-S. However, for program $\operatorname{\textbf{PT}}$, the objective is non-linear and can be neither convex nor concave even under very special settings, posing a technical challenge for direct optimization; see detailed discussions in the Appendix. Nevertheless, under certain conditions, $\operatorname{\textbf{PT}}$ can be effectively and accurately approximated by $\operatorname{\textbf{PS}}$, as proven in Theorem 1.
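As a concrete illustration of this LP reformulation, the sketch below solves $\operatorname{\textbf{PS}}$ with an off-the-shelf LP solver. The instance, the function name `solve_ps`, and the solver choice are our assumptions for exposition, not the released implementation.

```python
import numpy as np
from scipy.optimize import linprog

def solve_ps(edges, lam, mu):
    """Solve PS as an LP. Variables: x_e for each e in edges, plus rho.
    min rho  s.t.  sum_{i ~ j} x_ij = 1 for each j (no rejection),
                   sum_{j ~ i} x_ij * lam_j / mu_ij - rho <= 0 for each i,
                   0 <= x_ij <= 1, 0 <= rho <= 1."""
    n = len(edges)
    n_i = 1 + max(i for i, _ in edges)
    n_j = 1 + max(j for _, j in edges)
    c = np.zeros(n + 1)
    c[-1] = 1.0                                # objective: min rho
    A_eq = np.zeros((n_j, n + 1))
    A_ub = np.zeros((n_i, n + 1))
    for k, (i, j) in enumerate(edges):
        A_eq[j, k] = 1.0                       # each type is fully assigned
        A_ub[i, k] = lam[j] / mu[(i, j)]       # contribution to rho_i
    A_ub[:, -1] = -1.0                         # rho_i - rho <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_i),
                  A_eq=A_eq, b_eq=np.ones(n_j),
                  bounds=[(0, 1)] * (n + 1))
    return res.x[:-1], res.x[-1]

# Small star instance (the one used later in Example 1) with 4 types, phi = 0.1:
edges = [(0, 0), (1, 0), (1, 1), (1, 2), (1, 3)]
lam = [0.1, 0.1, 0.1, 0.1]
mu = {e: 1.0 for e in edges}
x_opt, max_workload = solve_ps(edges, lam, mu)
print(max_workload)   # 0.3 = (n - 1) * phi: type 0 is routed entirely to worker 0
```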
###### Lemma 2.

The optimal values of $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$ each remain invariant if we treat any task type $j\in J$ with an arrival rate of $\lambda_{j}$ as $k$ different online types, each having the same set of neighbors as $j$ and an arrival rate of $\lambda_{j}/k$, for any positive integer $k$.

The above lemma suggests that for fairness maximization among either workers under metric FAIR-S or tasks under metric FAIR-T, we can assume without loss of generality that all tasks have a uniform arrival rate, by creating an appropriate number of copies of each task type. In other words, unlike the differences among service times, the variation among tasks' arrival rates makes no difference to fairness promotion. _In the remaining sections, we assume without loss of generality that $\lambda_{j}=\lambda$ for all $j\in J$._

## 3 The Relation Between the Two Fairness Optimization Problems

Consider a general setting denoted by ${\boldsymbol{\mu}}:=(\mu_{ij})$, where $\mu_{ij}$ with $(ij)\in E$ represents the parameter of the exponential distribution of the service time taken by worker $i$ to serve task $j$. Let $\eta_{t}({\boldsymbol{\mu}},\mathbf{x})$ denote the objective value of $\operatorname{\textbf{PT}}({\boldsymbol{\mu}})$ with respect to the input ${\boldsymbol{\mu}}$ and a feasible solution $\mathbf{x}=(x_{ij})$. Similarly, $\eta_{s}({\boldsymbol{\mu}},\mathbf{x})$ denotes the objective value of $\operatorname{\textbf{PS}}({\boldsymbol{\mu}})$. When the context is clear, we may omit either the first or second argument of $\eta_{t}$ and $\eta_{s}$. For any given input ${\boldsymbol{\mu}}$, let $\eta^{*}_{t}({\boldsymbol{\mu}})$ and $\eta_{s}^{*}({\boldsymbol{\mu}})$ denote the optimal values of $\operatorname{\textbf{PT}}({\boldsymbol{\mu}})$ and $\operatorname{\textbf{PS}}({\boldsymbol{\mu}})$, respectively.

###### Theorem 1.

Let $\mathbf{x}^{*}_{s}$ be an optimal solution to $\operatorname{\textbf{PS}}({\boldsymbol{\mu}})$. We have

$\displaystyle\eta_{t}({\boldsymbol{\mu}},\mathbf{x}_{s}^{*})\leq\kappa^{3}\left(1+\Big(1-\frac{1}{\kappa}\Big)\cdot\frac{\eta^{*}_{s}({\boldsymbol{\mu}})}{1-\eta^{*}_{s}({\boldsymbol{\mu}})}\right)\cdot\eta_{t}^{*}({\boldsymbol{\mu}}),$ (14)

where $\kappa=\max_{i\in I}\big(\max_{j\sim i,j^{\prime}\sim i}\mu_{ij}/\mu_{ij^{\prime}}\big)\geq 1$, which captures the maximum pairwise ratio among the expectations of all service times on each given worker.

For the special case of Theorem 1 where $\kappa=1$, we prove that $\eta_{t}({\boldsymbol{\mu}},\mathbf{x}_{s}^{*})=\eta_{t}^{*}({\boldsymbol{\mu}})$ (Theorem 2). This result serves as the bedrock of the whole proof of Theorem 1, which is deferred to the Appendix for space reasons. Toward the proof of the $\kappa=1$ case, we first define the following minimax programs and show their equivalence to $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$, respectively, for $\kappa=1$.
$\displaystyle(\overline{\operatorname{\textbf{PT}}})\quad\min\ \max_{j\in J}\Big(\bar{w}_{j}=\sum_{i\sim j}\frac{x_{ij}}{1-\rho_{i}}-1\Big),$ (15)

$\displaystyle\sum_{i\sim j}x_{ij}=1,\quad\forall j\in J$ (16)

$\displaystyle\rho_{i}=(\lambda/\mu_{i})\sum_{j\sim i}x_{ij}\leq 1,\quad\forall i\in I$ (17)

$\displaystyle 0\leq x_{ij}\leq 1,\quad\forall(ij)\in E.$ (18)

$\displaystyle(\overline{\operatorname{\textbf{PS}}})\quad\min\ \max_{i\in I}\rho_{i},$ (19)

$\displaystyle\sum_{i\sim j}x_{ij}=1,\quad\forall j\in J$ (20)

$\displaystyle\rho_{i}=(\lambda/\mu_{i})\sum_{j\sim i}x_{ij}\leq 1,\quad\forall i\in I$ (21)

$\displaystyle 0\leq x_{ij}\leq 1,\quad\forall(ij)\in E.$ (22)

###### Lemma 3.

For $\kappa=1$, the programs $\operatorname{\textbf{PT}}$ and $\overline{\operatorname{\textbf{PT}}}$ are equivalent, and the programs $\operatorname{\textbf{PS}}$ and $\overline{\operatorname{\textbf{PS}}}$ are also equivalent.

###### Proof.

Note that $\kappa=1$ implies that $\mu_{ij}$ takes some uniform value $\mu_{ij}=\mu_{i}$ for every $j\sim i$. Recall that $\lambda_{j}=\lambda$ for all $j\in J$ due to Lemma 2. Under these assumptions, we see that the expressions of $\rho_{i}$ and $\bar{w}_{j}$ in (2) and (5) can be simplified as

$\displaystyle\rho_{i}=\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}=(\lambda/\mu_{i})\cdot\sum_{\ell\sim i}x_{i\ell},$

$\displaystyle\bar{w}_{j}:=\mathsf{E}[\overline{W}_{j}]=\sum_{i\sim j}x_{ij}\cdot w_{i}/(1/\mu_{ij})=\sum_{i\sim j}\frac{x_{ij}\cdot\mu_{i}\cdot\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i}^{2}}{1-\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i}}=\sum_{i\sim j}\frac{x_{ij}\cdot\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i}}{1-\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i}}=\sum_{i\sim j}x_{ij}\Big(-1+\frac{1}{1-\rho_{i}}\Big)=-1+\sum_{i\sim j}\frac{x_{ij}}{1-\rho_{i}},$

where the last equality is due to $\sum_{i\sim j}x_{ij}=1$ for every $j\in J$ (no rejection allowed). Substituting the expression for $\rho_{i}$ into (17) and (21), and the expression for $\bar{w}_{j}$ into (15), shows that the programs $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$ are equivalent to $\overline{\operatorname{\textbf{PT}}}$ and $\overline{\operatorname{\textbf{PS}}}$, respectively. ∎

Consider a given setting with ${\boldsymbol{\mu}}=(\mu_{ij})$ satisfying $\mu_{ij}=\mu_{i}$ for all $j\sim i$ and $\lambda_{j}=\lambda$ for all $j\in J$. For ease of notation, we use $\overline{\eta^{*}_{t}}$ and $\overline{\eta^{*}_{s}}$ to denote the optimal values of $\overline{\operatorname{\textbf{PT}}}$ and $\overline{\operatorname{\textbf{PS}}}$, respectively, with respect to the given setting. By default, we assume both have feasible solutions. (Infeasibility of either program $\overline{\operatorname{\textbf{PT}}}$ or $\overline{\operatorname{\textbf{PS}}}$ suggests that no policy can lead to meaningful fairness among tasks, i.e., a finite maximum expected waiting time, or among workers, i.e., a non-zero ratio of being free.) We denote by $\overline{\eta_{t}(\mathbf{x}_{s}^{*})}$ the value of $\overline{\operatorname{\textbf{PT}}}$ on $\mathbf{x}_{s}^{*}$ in the given setting.
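Before proceeding, we note that the simplification established in Lemma 3 is easy to check numerically. The following sketch (with made-up rates; the instance is our assumption) compares the general expression (5) for $\bar{w}_{j}$ against the simplified form $-1+\sum_{i\sim j}x_{ij}/(1-\rho_{i})$ on a uniform-rate ($\kappa=1$) instance.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 3, 5, 0.05                       # workers, task types, uniform arrival rate
mu_i = np.array([1.0, 2.0, 1.5])             # kappa = 1: mu_ij = mu_i for every j ~ i
mu = np.repeat(mu_i[:, None], n, axis=1)     # complete bipartite graph for simplicity

x = rng.random((m, n))
x /= x.sum(axis=0)                           # random feasible policy: columns sum to 1

rho = (x * lam / mu).sum(axis=1)             # here rho < 1 since lam is small
w_i = (x * lam / mu**2).sum(axis=1) / (1 - rho)

wbar_general = (x * mu * w_i[:, None]).sum(axis=0)          # Eqn. (5)
wbar_simplified = (x / (1 - rho[:, None])).sum(axis=0) - 1  # Lemma 3's form

assert np.allclose(wbar_general, wbar_simplified)
```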
It is tempting to prove that $\eta_{t}(\mathbf{x}_{s}^{*})=\eta_{t}^{*}$ for $\kappa=1$ by showing that $\overline{\operatorname{\textbf{PT}}}$ and $\overline{\operatorname{\textbf{PS}}}$ each possess an optimal solution such that the $\{\rho_{i}\}$ all take a uniform value, say $\rho$. Following this "claim", $\overline{\operatorname{\textbf{PT}}}$ is then reduced to $\min 1/(1-\rho)-1$ with $\rho=\rho_{i}$ for all $i\in I$, while $\overline{\operatorname{\textbf{PS}}}$ is reduced to $\min\rho$ with $\rho=\rho_{i}$ for all $i\in I$. This would establish Theorem 2, since minimizing $1/(1-\rho)-1$ is equivalent to minimizing $\rho$. Unfortunately, the example below disproves this idea.

###### Example 1.

[_$\overline{\operatorname{\textbf{PT}}}$_ and _$\overline{\operatorname{\textbf{PS}}}$_ each possess a unique optimal solution with non-uniform values of $\{\rho_{i}\}$ and $\{\bar{w}_{j}\}$.] Consider a graph $G=(I,J,E)$ such that $|I|=m=2$ and $|J|=n\gg 1$ (see Figure 1). The input setting is as follows: $\mu_{ij}=\mu$ for all $(ij)\in E$ and $\lambda_{j}=\lambda$ for all $j\in J$. Let $\phi=\lambda/\mu$ with $n\cdot\phi<1$. Worker $i=2$ is connected to all $j\in J$, while $i=1$ is connected only to $j=1$. We can verify that (1) _$\overline{\operatorname{\textbf{PT}}}$_ and _$\overline{\operatorname{\textbf{PS}}}$_ each have a _unique_ optimal solution and the two are the same, namely $\mathbf{x}^{*}=(x_{ij})$ with $x_{11}=1$, $x_{21}=0$, and $x_{2j}=1$ for all $1<j\leq n$; (2) for _$\overline{\operatorname{\textbf{PS}}}$_ : $\rho_{1}(\mathbf{x}^{*})=\phi$ and $\rho_{2}(\mathbf{x}^{*})=(n-1)\phi<1$; for _$\overline{\operatorname{\textbf{PT}}}$_ : $\bar{w}_{1}(\mathbf{x}^{*})=1/(1-\rho_{1}(\mathbf{x}^{*}))-1=1/(1-\phi)-1$ and $\bar{w}_{j}(\mathbf{x}^{*})=1/(1-\rho_{2}(\mathbf{x}^{*}))-1=1/(1-(n-1)\phi)-1$ for $1<j\leq n$.

We will now present two lemmas that together establish the correctness of Theorem 2.

###### Lemma 4.

$\overline{\eta_{t}(\mathbf{x}_{s}^{*})}\leq 1/(1-\overline{\eta_{s}^{*}})-1$.

###### Proof.

Since $\mathbf{x}^{*}=(x_{ij})$ is an optimal solution to $\overline{\operatorname{\textbf{PS}}}$, $\overline{\eta_{s}^{*}}=\max_{i\in I}\rho_{i}(\mathbf{x}^{*}):=\rho^{*}$. Observe that for each $j\in J$,

$\bar{w}_{j}(\mathbf{x}^{*})=-1+\sum_{i\sim j}\frac{x_{ij}}{1-\rho_{i}(\mathbf{x}^{*})}\leq-1+\sum_{i\sim j}\frac{x_{ij}}{1-\rho^{*}}=\frac{1}{1-\rho^{*}}-1,$

which suggests that $\overline{\eta_{t}(\mathbf{x}^{*})}=\max_{j}\bar{w}_{j}(\mathbf{x}^{*})\leq 1/(1-\rho^{*})-1=1/(1-\overline{\eta^{*}_{s}})-1$. ∎

Figure 1: An example where $\overline{\operatorname{\textbf{PT}}}$ and $\overline{\operatorname{\textbf{PS}}}$ each have a unique optimal solution with non-uniform values of $\{\rho_{i}\}$ and $\{\bar{w}_{j}\}$: $\rho_{1}(\mathbf{x}^{*})=\phi$, $\rho_{2}(\mathbf{x}^{*})=(n-1)\phi$, $\bar{w}_{1}(\mathbf{x}^{*})=\frac{1}{1-\phi}-1$, and $\bar{w}_{j}(\mathbf{x}^{*})=\frac{1}{1-(n-1)\phi}-1$ for $1<j\leq n$. Nevertheless, $\eta_{t}({\boldsymbol{\mu}},\mathbf{x}_{s}^{*})=\eta^{*}_{t}$ here, since the two programs share the same unique optimal solution.

###### Lemma 5.

$1/(1-\overline{\eta_{s}^{*}})-1\leq\overline{\eta_{t}^{*}}$.

The lemma's proof is in the Appendix. We are now set to present the result for $\kappa=1$.
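As a quick numeric sanity check of Example 1 and Lemma 4 (with illustrative parameters of our choosing):

```python
n, phi = 6, 0.1                      # n task types, phi = lambda / mu with n * phi < 1
rho1, rho2 = phi, (n - 1) * phi      # workloads under the unique optimum x*

wbar_1 = 1 / (1 - rho1) - 1          # relative wait of task type j = 1
wbar_j = 1 / (1 - rho2) - 1          # relative wait of every type j > 1

eta_s = max(rho1, rho2)              # optimal value of PS-bar
eta_t = max(wbar_1, wbar_j)          # optimal value of PT-bar
assert eta_t <= 1 / (1 - eta_s) - 1  # Lemma 4; here it holds with equality
```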
###### Theorem 2.

Consider an input ${\boldsymbol{\mu}}=(\mu_{ij})$ with $\kappa=1$, and let $\mathbf{x}_{s}^{*}$ be an optimal solution to $\overline{\operatorname{\textbf{PS}}}$. Then the value of $\overline{\operatorname{\textbf{PT}}}$ on the solution $\mathbf{x}_{s}^{*}$ is equal to its optimal value, i.e., $\overline{\eta_{t}(\mathbf{x}_{s}^{*})}=\overline{\eta_{t}^{*}}$.

###### Proof.

The above two lemmas together imply that $\overline{\eta_{t}(\mathbf{x}_{s}^{*})}\leq\overline{\eta_{t}^{*}}$. Moreover, $\mathbf{x}_{s}^{*}$ is feasible for $\overline{\operatorname{\textbf{PT}}}$ since $\overline{\operatorname{\textbf{PT}}}$ and $\overline{\operatorname{\textbf{PS}}}$ share the same set of constraints, and thus $\overline{\eta_{t}(\mathbf{x}_{s}^{*})}\geq\overline{\eta_{t}^{*}}$, which establishes Theorem 2. ∎

## 4 Experiments

### 4.1 Algorithms and Heuristics

This section presents an algorithm derived from solutions to one of the minimax problems. We also describe a heuristic based on this algorithm, which gives preference to assigning tasks to available workers, thereby enhancing the allocation through the effective use of free workers. In addition, this section introduces two real-time greedy heuristics, which serve as baseline methods.

#### Minimax-problem-based algorithm

We first describe Algorithm 1. This algorithm has an offline and an online phase. In the offline phase (line 2), a solution to one of the minimax problems is computed. In the online phase (lines 4-7), when a task arrives, it is assigned to the queue of a worker according to the probabilities computed by the program in the offline phase. This algorithm has two variants: one solves $\operatorname{\textbf{PT}}$ in the offline phase while the other solves $\operatorname{\textbf{PS}}$.

#### Minimax-problem-based heuristic

A notable issue with Algorithm 1 is that tasks can wait for a busy worker even when other workers are available, which leads to suboptimal performance. To address this, we create a heuristic based on Algorithm 1. As in Algorithm 1, task assignment probabilities are computed offline. In the online phase, incoming tasks are assigned to free workers whenever possible: if multiple workers are free, their precomputed probabilities (from the offline phase) are normalized to sum to 1, and a worker is then chosen randomly according to these normalized probabilities. If there are no free workers, the tasks are assigned according to their probabilities as in Algorithm 1. Algorithm 2 describes this heuristic. As with Algorithm 1, there are two variants of Algorithm 2: one solves $\operatorname{\textbf{PT}}$ in the offline phase, while the other solves $\operatorname{\textbf{PS}}$. This method targets reduced waiting times, especially during low-load periods. However, the change might hurt worker workload balance or the waiting times of other tasks, as it deviates from the calculated optimal probabilities. In practice, we find this trade-off between worker and task fairness reasonable, given the substantial benefits for the fairness of all tasks.

#### Computational complexity of Algorithms 1 and 2

Both Algorithms 1 and 2 have an offline and an online phase. The offline phase is identical for both algorithms and requires solving $\operatorname{\textbf{PT}}$ or $\operatorname{\textbf{PS}}$. Following Cohen et al. (2021), the runtime for solving the linear program $\operatorname{\textbf{PS}}$ can be as low as $O^{*}(N^{2+1/6}\log(N/\delta))$, where $\delta$ is the relative accuracy and $N=|E|$ is the number of edges in the graph $G$. We leave the complexity of solving $\operatorname{\textbf{PT}}$ to future work.
In any case, the complexity of the offline phase dominates the complexity of the online phase.

```
Algorithm 1: An LP-based algorithm for FAIR-T and FAIR-S.
1: Offline Phase:
2:   Solve PT (PS) and let {x_ij} be an optimal solution.
3: Online Phase:
4:   for each task of type j that arrives at time t do
5:     Let Q_i be the queue of worker i.
6:     Choose a worker i randomly, following the probabilities in {x_ij}.
7:     Update Q_i = Q_i ∪ {(j, t)}.
```

#### The two greedy heuristics

Similar to Ackerman Viden et al. (2023), we devised two greedy heuristics as baselines for comparison. The first minimizes the maximum task waiting time, and the second minimizes the maximum worker workload. In the first, incoming tasks are assigned to the worker with the shortest estimated waiting time, calculated by summing the average expected durations of the tasks in the queue; the elapsed time of an ongoing task is subtracted from its average duration to update the estimate. In the second, upon task arrival, tasks are assigned to the least utilized worker based on the current workload. This approach considers executed tasks, using actual durations rather than expected durations. The first heuristic is denoted GTW (Greedy Task Waiting time) and the second GWU (Greedy Worker Workload).

### 4.2 Experimental Settings

We ran experiments in the teleoperation domain. As mentioned in Section 1, the teleoperation of AVs involves intervention tasks that are assigned to the teleoperators who perform them. We adapted the dataset of Ackerman Viden et al. (2023) for our two-sided fairness study. More details about the experimental settings and additional experimental results are given in the Appendix. Source code and data for running the experiments are available at Trabelsi (2024).

Figure 2: The Y axis is the maximum waiting time (in seconds) of a task (panels a, c, and e) and the maximum worker workload (panels b, d, and f). The X axis in (a, b) is the value of $\kappa$, with a task load of 120000 tasks per day. In (c, d), the X axis is the task load, and $\kappa$ is set to 1. In (e, f), the X axis covers different balances of the arrival distribution: the first bar is for an equal distribution over the task types; in the second bar, the first type has probability 70% and the others 10%; the remaining bars are defined similarly for the second, third, and fourth task types (the task load was 80000 tasks per day and $\kappa$ is set to 1). In the legend: SIM(PT) and SIM(PS) denote Algorithm 1's results for $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$ in simulation. SIM-F(PT) and SIM-F(PS) are Algorithm 2's results (in which we assign to a free worker first) for $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$ in simulation. GTW and GWU are the results of the greedy heuristics targeting task waiting time and worker workload, respectively. Error bars represent a confidence interval of 0.95.

#### The tasks, their durations, and their arrival rates

Our study built upon the four task types defined by Ackerman Viden et al. (2023). Their dataset provided valuable insights into the average duration times for each teleoperator (worker) and task type in a simulation. We explored three distinct approaches to defining task durations in our experiments. All approaches involved sampling durations from exponential distributions; the difference lay in the means of these distributions.
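For concreteness, the basic event streams of such a simulation can be sketched as follows; the rates and horizon below are placeholders of our choosing, not the dataset's values.

```python
import numpy as np

rng = np.random.default_rng(42)

def poisson_arrival_times(rate_per_day, horizon_days):
    """Arrival times of a Poisson process on [0, horizon]: cumulative sums of
    i.i.d. Exp(rate) inter-arrival gaps (numpy's `scale` is 1 / rate)."""
    n_max = int(3 * rate_per_day * horizon_days) + 10   # generous oversampling
    gaps = rng.exponential(1.0 / rate_per_day, size=n_max)
    times = np.cumsum(gaps)
    return times[times <= horizon_days]

# Placeholder rates: one task type and one (worker, type) edge.
lam_j = 100.0                 # arrivals per day for task type j
mu_ij = 400.0                 # service rate of worker i on type j (tasks per day)

arrivals = poisson_arrival_times(lam_j, horizon_days=28)      # a virtual 4-week run
durations = rng.exponential(1.0 / mu_ij, size=arrivals.size)  # Exp(mu_ij) service times
print(arrivals.size, durations.mean())   # roughly 2800 tasks, mean duration ~ 1/400 day
```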
#### The teleoperators and the tasks they can perform

Using the dataset of Ackerman Viden et al. (2023), we initially had 10 teleoperators (workers) and 4 task types. We form a bipartite graph with the 10 teleoperators on one side and the 4 task types on the other. The dataset provides average task completion times for each teleoperator-task pair. An edge is established between a teleoperator and a task type if their average time matches or exceeds the task type's median value. Following this process, a teleoperator who consistently performed tasks slower than the median was identified and subsequently excluded from the graph. More experiments on a synthetic dataset, in which the numbers of teleoperators and task types are varied, can be found in the Appendix.

```
Algorithm 2: A heuristic for FAIR-T (FAIR-S).
1: Offline Phase:
2:   Solve PT (PS) and let {x_ij} be an optimal solution.
3: Online Phase:
4:   for each task of type j that arrives at time t do
5:     Let Q_i be the queue of worker i, and let F be the subset of free workers at time t.
6:     If F ≠ ∅, randomly choose a free worker i with probability x_ij / Σ_{i' ∈ F} x_{i'j}.
7:     Otherwise, choose a worker i randomly, following the probabilities in {x_ij}.
8:     Update Q_i = Q_i ∪ {(j, t)}.
```

#### Experimental environment and more settings

Each experiment spanned a virtual 4-week period. Due to algorithmic stochasticity, each experiment was repeated 10 times.

### 4.3 Results and Discussion

#### Effect of changing $\kappa$

In Figures 2(a, b), we illustrate the performance of the various methods across diverse $\kappa$ values. In Figure 2(a), we measure the maximum task waiting time. We see that the gap between SIM($\operatorname{\textbf{PS}}$) and SIM($\operatorname{\textbf{PT}}$), as well as the gap between SIM-F($\operatorname{\textbf{PS}}$) and SIM-F($\operatorname{\textbf{PT}}$), increases with $\kappa$. This aligns with the fact that for higher values of $\kappa$, the approximation ratio of $\operatorname{\textbf{PS}}$'s solution relative to $\operatorname{\textbf{PT}}$'s objective is greater. However, the ratio between the different methods measured in practice is lower than the worst-case theoretical ratio given by Theorem 1 (which is greater than $\kappa^{3}$). In Figure 2(b), we measure the maximum worker workload. The differences between SIM($\operatorname{\textbf{PT}}$) and SIM($\operatorname{\textbf{PS}}$) are very small for $\kappa\leq 3$, but become more significant for $\kappa\in\{4,5\}$. Surprisingly, there is a different effect for SIM-F($\operatorname{\textbf{PT}}$) and SIM-F($\operatorname{\textbf{PS}}$): SIM-F($\operatorname{\textbf{PT}}$) performs slightly better than SIM-F($\operatorname{\textbf{PS}}$). We conjecture that the initial selection of free workers has a more detrimental effect in SIM-F($\operatorname{\textbf{PS}}$), which integrates two distinctly different methods, in contrast to the relatively similar approaches combined in SIM-F($\operatorname{\textbf{PT}}$). We also see that for larger values of $\kappa$, both SIM($\operatorname{\textbf{PT}}$) and SIM($\operatorname{\textbf{PS}}$) perform worse than for lower values.

#### Effect of changing the task load

Figures 2(c, d) might help the teleoperation center's owner decide whether the current number of workers is sufficient. It is noticeable that in Figure 2(c) there is a significant jump from 120000 to 140000 tasks per day.
This means that perhaps the owner should employ more workers in this case. Figure 2(d) leads to similar conclusions: employing more workers is advisable if individual worker workload is excessively high.

#### Effect of changing the task balance

Figures 2(e, f) show the performance of the different algorithms when the task balance is changed. The left bar represents an even distribution over the task types (0.25 each). The second bar represents a higher probability for the first type (0.7) and a lower probability for the other types (0.1 each). The other bars are defined similarly for the other task types. In Figures 2(e, f), a higher arrival rate for the first task type leads to elevated waiting times and worker workload. Consequently, the teleoperation center's owner could enhance fairness by upskilling operators who are not qualified for that task or by hiring new ones proficient in it. Alternatively, training could be provided to expedite task completion. The negligible error bars in all figures show that the error approaches 0 if the experiments are carried out over a sufficiently long period of time, as we have done.

#### Computed optimal values vs. simulation values

In all the experiments that we ran, the computed expected maximum waiting time (OPT($\operatorname{\textbf{PT}}$)) and the computed expected maximum worker workload (OPT($\operatorname{\textbf{PS}}$)) closely align with the simulation-derived values (SIM($\operatorname{\textbf{PT}}$) and SIM($\operatorname{\textbf{PS}}$), respectively). Additionally, the alignment of OPT($\operatorname{\textbf{PT}}$) and OPT($\operatorname{\textbf{PS}}$) at $\kappa=1$ is consistent with Theorem 1.

#### Choosing the best algorithm

The heuristic GTW, which minimizes the maximum task waiting time, performs well on the maximum task waiting time and poorly on the maximum worker workload. Conversely, the greedy heuristic that minimizes the maximum worker workload, GWU, performs well on the maximum worker workload and poorly on the maximum task waiting time. The methods that offer the best trade-off between the two dimensions of fairness are SIM-F($\operatorname{\textbf{PT}}$) and SIM-F($\operatorname{\textbf{PS}}$). However, since $\operatorname{\textbf{PT}}$ is nonlinear, no tool is guaranteed to find an optimal solution for $\operatorname{\textbf{PT}}$, and therefore $\operatorname{\textbf{PT}}$-dependent approaches such as SIM-F($\operatorname{\textbf{PT}}$) might be unsolvable. Therefore, if $\kappa=1$, or is at least close to $1$, we might want to use SIM-F($\operatorname{\textbf{PS}}$); SIM($\operatorname{\textbf{PS}}$) might be slightly better if worker workload is more important than task waiting times (but the latter are still important). If $\kappa$ is large, it is advisable to consider using a tool that approximates a solution for $\operatorname{\textbf{PT}}$ with SIM-F($\operatorname{\textbf{PT}}$). The figures show that the available tools work adequately in such cases, despite the lack of theoretical guarantees (at least for small problems). Another option is to try both SIM-F($\operatorname{\textbf{PT}}$) and SIM-F($\operatorname{\textbf{PS}}$) and pick the one that gives the best results.

## 5 Conclusion

This paper addresses two-sided fairness problems represented as online bipartite matching with accommodated delays. We introduce two minimax problems: $\operatorname{\textbf{PT}}$, to minimize the maximum relative waiting time of tasks, and $\operatorname{\textbf{PS}}$, to minimize the maximum workload of workers.
We show that the second problem can be formulated as a linear program and thus solved efficiently. Moreover, we show that the policy using a solution for $\operatorname{\textbf{PS}}$ approximates the solution for $\operatorname{\textbf{PT}}$, and we present an upper bound on the approximation ratio. Finally, we compare the performance of different approaches (most of them using the solutions to these problems) and empirically evaluate their performance. Future research may explore different definitions of fairness. In addition, it is promising to extend our approach to scenarios where workers also arrive dynamically; to see the need for such scenarios, consider the teleoperation application, where teleoperators (workers) can join or leave the crew. Considering different distributions for both task arrivals and task durations can provide more depth and insight into the study. Finally, it might be beneficial to consider a robust version, say, minimization of the maximum possible absolute waiting time among users, which is equivalent to the minimization of the maximum absolute waiting time among all workers.

## Acknowledgements

This research has been partially supported by the Israel Science Foundation under grant 1958/20 and the EU Project TAILOR under grant 952215. The work of Pan Xu was partially supported by NSF CRII Award IIS-1948157.

## References

* Ackerman Viden et al. [2023] Osnat Ackerman Viden, Yohai Trabelsi, Pan Xu, Karthik Abinav Sankararaman, Oleg Maksimov, and Sarit Kraus. Allocation problem in remote teleoperation: Online matching with offline reusable resources and delayed assignments. In Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, pages 513–521, 2023.
* Asmussen [2003] Søren Asmussen. Random walks. Applied Probability and Queues, pages 220–243, 2003.
* Cohen et al. [2021] Michael B. Cohen, Yin Tat Lee, and Zhao Song. Solving linear programs in the current matrix multiplication time. Journal of the ACM (JACM), 68(1):1–39, 2021.
* Delong et al. [2022] Steven Delong, Alireza Farhadi, Rad Niazadeh, and Balasubramanian Sivan. Online bipartite matching with reusable resources. In Proceedings of the 23rd ACM Conference on Economics and Computation, pages 962–963, 2022.
* Devore [2008] Jay L. Devore. Probability and Statistics for Engineering and the Sciences. 2008.
* Dickerson et al. [2021] John P. Dickerson, Karthik A. Sankararaman, Aravind Srinivasan, and Pan Xu. Allocation problems in ride-sharing platforms: Online matching with offline reusable resources. ACM Transactions on Economics and Computation (TEAC), 9(3):1–17, 2021.
* Esmaeili et al. [2023] Seyed Esmaeili, Sharmila Duppala, Davidson Cheng, Vedant Nanda, Aravind Srinivasan, and John P. Dickerson. Rawlsian fairness in online bipartite matching: Two-sided, group, and individual. In Proc. 37th AAAI, number 5, pages 5624–5632, 2023.
* Gallager [2011] Robert G. Gallager. Discrete Stochastic Processes. OpenCourseWare: Massachusetts Institute of Technology, 2011.
* Gupta and Goyal [1964] S. K. Gupta and J. K. Goyal. Queues with Poisson input and hyper-exponential output with finite waiting space. Operations Research, 12(1):75–81, 1964.
* Karp et al. [1990] Richard M. Karp, Umesh V. Vazirani, and Vijay V. Vazirani. An optimal algorithm for on-line bipartite matching. STOC-90, 1990.
* Kendall [1953] David G. Kendall. Stochastic processes occurring in the theory of queues and their analysis by the method of the imbedded Markov chain. The Annals of Mathematical Statistics, pages 338–354, 1953.
* Lesmana et al. [2019] Nixie S. Lesmana, Xuan Zhang, and Xiaohui Bei. Balancing efficiency and fairness in on-demand ridesourcing. Advances in Neural Information Processing Systems, 32, 2019.
* Li et al. [2023] Zihao Li, Hao Wang, and Zhenzhen Yan. Fully online matching with stochastic arrivals and departures. In Proc. 37th AAAI, number 10, pages 12014–12021, 2023.
* Ma et al. [2020] Will Ma, Pan Xu, and Yifan Xu. Group-level fairness maximization in online bipartite matching. arXiv preprint arXiv:2011.13908, 2020.
* Maister and others [1984] David H. Maister et al. The psychology of waiting lines. Citeseer, 1984.
* Patro et al. [2020] Gourab K. Patro, Arpita Biswas, Niloy Ganguly, Krishna P. Gummadi, and Abhijnan Chakraborty. FairRec: Two-sided fairness for personalized recommendations in two-sided platforms. In Proceedings of The Web Conference 2020, pages 1194–1204, 2020.
* Rawls [1958] John Rawls. Justice as fairness. The Philosophical Review, 67(2):164–194, 1958.
* Rawls [1999] John Rawls. A Theory of Justice. Harvard University Press, Cambridge, MA, 1999.
* Righter [1987] Rhonda Righter. The stochastic sequential assignment problem with random deadlines. Probability in the Engineering and Informational Sciences, 1(2):189–202, 1987.
* Rigter et al. [2022] Marc Rigter, Danial Dervovic, Parisa Hassanzadeh, Jason Long, Parisa Zehtabi, and Daniele Magazzeni. Optimal admission control for multiclass queues with time-varying arrival rates via state abstraction. In Proc. 36th AAAI, number 9, pages 9918–9925, 2022.
* Tener and Lanir [2022] Felix Tener and Joel Lanir. Driving from a distance: Challenges and guidelines for autonomous vehicle teleoperation interfaces. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1–13, 2022.
* Trabelsi [2024] Yohai Trabelsi. Code and data: Design a win-win strategy that is fair to both service providers and tasks when rejection is not an option. https://github.com/yohayt/two_sided_fairness, 2024. Accessed: 05/05/2024.
* Zhang [2020] Tao Zhang. Toward automated vehicle teleoperation: Vision, opportunities, and challenges. IEEE Internet of Things Journal, 7(12):11347–11354, 2020.
* Zhou et al. [2023] Quan Zhou, Jakub Mareček, and Robert Shorten. Subgroup fairness in two-sided markets. PLOS ONE, 18(2):e0281443, 2023.

# Technical Appendix

## Appendix A Objective Function in Program $\operatorname{\textbf{PT}}$ Can Be Neither Convex nor Concave (Even When $\kappa=1$)

We show that the optimization program $\operatorname{\textbf{PT}}$ can amount to the minimization of a non-convex function. Consider the example shown in Figure 3, where $\lambda_{j}=\lambda$ for all $j\in J$, and $\mu_{ij}=\mu_{i}$ for all $j\sim i$ and $i\in I$. Set $\phi_{i}=\lambda/\mu_{i}$ for each $i\in I$. Let $x$ be the value on edge $(i=1,j=1)$, and thus $1-x$ that on edge $(i=2,j=1)$. Similarly, let $y$ and $1-y$ be the values on edges $(i=3,j=2)$ and $(i=2,j=2)$.
We can verify that

$\displaystyle\rho_{1}=\phi_{1}\cdot x,\quad\rho_{2}=\phi_{2}(1-x+1-y),\quad\rho_{3}=\phi_{3}\cdot y;$

$\displaystyle\bar{w}_{1}=\frac{x}{1-\phi_{1}\cdot x}+\frac{1-x}{1-\phi_{2}(1-x+1-y)}-1,\qquad\bar{w}_{2}=\frac{y}{1-\phi_{3}\cdot y}+\frac{1-y}{1-\phi_{2}(1-x+1-y)}-1.$

Let ${\boldsymbol{\phi}}=(\phi_{1},\phi_{2},\phi_{3})$, and assume $\phi_{1}\leq 1$, $\phi_{3}\leq 1$, $\phi_{2}\leq 1/2$. Under these assumptions, the feasible region of $\operatorname{\textbf{PT}}$ reduces to $\Omega=\{(x,y):0\leq x,y\leq 1\}$, and the objective is

$\displaystyle f_{{\boldsymbol{\phi}}}(x,y)=\max(\bar{w}_{1},\bar{w}_{2})=\max\left(\frac{x}{1-\phi_{1}\cdot x}+\frac{1-x}{1-\phi_{2}(2-x-y)}-1,\ \frac{y}{1-\phi_{3}\cdot y}+\frac{1-y}{1-\phi_{2}(2-x-y)}-1\right).$

Furthermore, set $\phi_{1}=\phi_{3}=p\in[0,1]$ and $\phi_{2}=q\in[0,1/2]$, and define

$\displaystyle g_{p,q}(x,y)=\frac{x}{1-p\cdot x}+\frac{1-x}{1-q(2-x-y)}-1,\qquad h_{p,q}(x,y)=\frac{y}{1-p\cdot y}+\frac{1-y}{1-q(2-x-y)}-1.$

Thus, the original program $\operatorname{\textbf{PT}}$ can be simplified as

$\displaystyle\min f_{p,q}(x,y):=\max\Big(g_{p,q}(x,y),h_{p,q}(x,y)\Big):\quad 0\leq x,y\leq 1.$ (23)

By symmetry, we can assume WLOG that $x\geq y$. Then we see that

$\displaystyle f_{p,q}(x,y)=h_{p,q}(x,y)\quad\mbox{if }2q+p^{2}xy\geq(p+q)\cdot(x+y),~{}0\leq y\leq x\leq 1;$

$\displaystyle f_{p,q}(x,y)=g_{p,q}(x,y)\quad\mbox{if }2q+p^{2}xy\leq(p+q)\cdot(x+y),~{}0\leq y\leq x\leq 1.$

We can verify that the region $\Omega_{1}:=\{(x,y):2q+p^{2}xy\geq(p+q)\cdot(x+y),~{}0\leq y\leq x\leq 1\}$ is convex, while the function $f_{p,q}(x,y)=h_{p,q}(x,y)$ is _neither convex nor concave_ over $\Omega_{1}$ under many settings of $(p,q)$, e.g., $p=0.7$ and $q=0.4$. This establishes that $f_{p,q}(x,y)$ is neither convex nor concave over the original region $\Omega=\{(x,y):0\leq x,y\leq 1\}$.

Figure 3: A toy example (three workers $i_{1},i_{2},i_{3}$ and two task types $j_{1},j_{2}$, with routing fractions $x$, $1-x$, $y$, $1-y$ as above) showing that the objective function in Program $\operatorname{\textbf{PT}}$ can be neither convex nor concave even when $\kappa=1$.

## Appendix B Proof of Lemma 2

Consider a given input setting characterized by ${\boldsymbol{\mu}}:=(\mu_{ij})$ and ${\boldsymbol{\lambda}}:=(\lambda_{j})$. Recall that $\eta_{t}^{*}({\boldsymbol{\lambda}},{\boldsymbol{\mu}})$ and $\eta_{s}^{*}({\boldsymbol{\lambda}},{\boldsymbol{\mu}})$ denote the optimal values of $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$ with respect to ${\boldsymbol{\lambda}}$ and ${\boldsymbol{\mu}}$. We focus on the case $k=2$; the analysis generalizes straightforwardly to any integer $k$. Consider a modified version of ${\boldsymbol{\lambda}}$ in which the first request type $j=1$ is split into two copies, $j=0$ and $j=1$, each having an arrival rate of $\lambda_{1}/2$. Let $\bar{\boldsymbol{\lambda}}=(\bar{\lambda}_{j})$ be this modified version with $\bar{\lambda}_{0}=\bar{\lambda}_{1}=\lambda_{1}/2$ and $\bar{\lambda}_{j}=\lambda_{j}$ for all $j\geq 2$. Let $\bar{J}=J\cup\{0\}$ be the resulting set of online types.
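Since $\operatorname{\textbf{PS}}$ is an LP, the invariance claimed in Lemma 6 below is easy to probe numerically. The sketch re-solves the LP before and after splitting a task type into two half-rate copies; the instance and names are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import linprog

def ps_opt(edges, lam, mu):
    """Optimal value of PS (min over x of max_i rho_i), solved as an LP."""
    n = len(edges)
    n_i = 1 + max(i for i, _ in edges)
    n_j = 1 + max(j for _, j in edges)
    c = np.zeros(n + 1); c[-1] = 1.0           # min rho
    A_eq = np.zeros((n_j, n + 1))
    A_ub = np.zeros((n_i, n + 1))
    for k, (i, j) in enumerate(edges):
        A_eq[j, k] = 1.0                       # no rejection
        A_ub[i, k] = lam[j] / mu[(i, j)]       # rho_i - rho <= 0
    A_ub[:, -1] = -1.0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_i),
                  A_eq=A_eq, b_eq=np.ones(n_j), bounds=[(0, 1)] * (n + 1))
    return res.x[-1]

edges = [(0, 0), (0, 1), (1, 1), (1, 2)]
lam = [0.3, 0.2, 0.4]
mu = {(0, 0): 1.0, (0, 1): 2.0, (1, 1): 1.5, (1, 2): 1.0}

# Split type 0 (rate lam_0) into types 0 and 3, each of rate lam_0 / 2,
# with the same neighbors and the same service rates.
edges2 = edges + [(0, 3)]
lam2 = [0.15, 0.2, 0.4, 0.15]
mu2 = {**mu, (0, 3): mu[(0, 0)]}

assert abs(ps_opt(edges, lam, mu) - ps_opt(edges2, lam2, mu2)) < 1e-6
```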
###### Lemma 6.

$\eta_{t}^{*}({\boldsymbol{\lambda}},{\boldsymbol{\mu}})=\eta_{t}^{*}(\bar{\boldsymbol{\lambda}},{\boldsymbol{\mu}})$ and $\eta_{s}^{*}({\boldsymbol{\lambda}},{\boldsymbol{\mu}})=\eta_{s}^{*}(\bar{\boldsymbol{\lambda}},{\boldsymbol{\mu}})$.

###### Proof.

Let us focus on showing the first equality from Lemma 6. Suppose $\mathbf{x}=(x_{ij})$ is an optimal solution to $\operatorname{\textbf{PT}}({\boldsymbol{\lambda}},{\boldsymbol{\mu}})$. Consider a modified solution $\mathbf{y}=(y_{ij})$ that satisfies: (1) $y_{ij}=x_{ij}$ for all $j\geq 2$ and $i\sim j$, and (2) $y_{i,0}=y_{i,1}=x_{i,1}$. Let $\rho_{i}(\mathbf{x})$ and $\rho_{i}(\mathbf{y})$ be the values of $\rho_{i}$ with respect to $\mathbf{x}$ under the setting $({\boldsymbol{\lambda}},{\boldsymbol{\mu}})$ and $\mathbf{y}$ under the setting $(\bar{\boldsymbol{\lambda}},{\boldsymbol{\mu}})$, respectively. Similarly, let $\bar{w}_{j}(\mathbf{x})$ and $\bar{w}_{j}(\mathbf{y})$ represent the values of $\bar{w}_{j}$ with respect to $\mathbf{x}$ and $\mathbf{y}$, respectively. We can verify that $\mathbf{y}$ is feasible for $\operatorname{\textbf{PT}}(\bar{\boldsymbol{\lambda}},{\boldsymbol{\mu}})$. Furthermore, $\rho_{i}(\mathbf{x})=\rho_{i}(\mathbf{y})$ for all $i\in I$, $\bar{w}_{j}(\mathbf{x})=\bar{w}_{j}(\mathbf{y})$ for all $j\geq 2$, and $\bar{w}_{0}(\mathbf{y})=\bar{w}_{1}(\mathbf{y})=\bar{w}_{1}(\mathbf{x})$. Thus, we claim that

$\eta_{t}^{*}(\bar{\boldsymbol{\lambda}},{\boldsymbol{\mu}})\leq\max_{j\in\bar{J}}\bar{w}_{j}(\mathbf{y})=\max_{j\in J}\bar{w}_{j}(\mathbf{x})=\eta_{t}^{*}({\boldsymbol{\lambda}},{\boldsymbol{\mu}}).$

Now we prove the other direction. Let $\mathbf{y}=(y_{ij})$ be an optimal solution to $\operatorname{\textbf{PT}}(\bar{\boldsymbol{\lambda}},{\boldsymbol{\mu}})$. Consider a modified solution $\mathbf{x}=(x_{ij})$ that satisfies: (1) $x_{ij}=y_{ij}$ for all $j\geq 2$ and $i\sim j$, and (2) $x_{i,1}=(y_{i,0}+y_{i,1})/2$ for all $i\sim 1$. We can verify that $\mathbf{x}$ is feasible for $\operatorname{\textbf{PT}}({\boldsymbol{\lambda}},{\boldsymbol{\mu}})$. Furthermore, $\rho_{i}(\mathbf{x})=\rho_{i}(\mathbf{y})$ for all $i\in I$, $\bar{w}_{j}(\mathbf{x})=\bar{w}_{j}(\mathbf{y})$ for all $j\geq 2$, and $\bar{w}_{1}(\mathbf{x})=(\bar{w}_{0}(\mathbf{y})+\bar{w}_{1}(\mathbf{y}))/2$. Thus, we claim that

$\eta_{t}^{*}({\boldsymbol{\lambda}},{\boldsymbol{\mu}})\leq\max_{j\in J}\bar{w}_{j}(\mathbf{x})\leq\max_{j\in\bar{J}}\bar{w}_{j}(\mathbf{y})=\eta_{t}^{*}(\bar{\boldsymbol{\lambda}},{\boldsymbol{\mu}}).$

This shows that the first equality holds, and the second equality follows from the same argument. ∎

## Appendix C Proof of Theorem 1

Now, we consider a general setting ${\boldsymbol{\mu}}=(\mu_{ij})$ with $\kappa=\max_{i\in I}\max_{j\sim i,j^{\prime}\sim i}\mu_{ij}/\mu_{ij^{\prime}}\geq 1$. Consider a virtual instance in which, for each $i\in I$, all of the $\mu_{ij}$ with $j\sim i$ are replaced with ${\mu}_{i}:=\max_{j\sim i}\mu_{ij}$. Let $\bar{\boldsymbol{\mu}}=(\bar{\mu}_{ij})$ be the modified version of ${\boldsymbol{\mu}}$ such that $\bar{\mu}_{ij}=\mu_{i}$ for every $j\sim i$ and $i\in I$. By Claims 1 and 2, we see that $\eta_{t}^{*}(\bar{\boldsymbol{\mu}})=-1+1/(1-\eta_{s}^{*}(\bar{\boldsymbol{\mu}}))$, where $\eta_{t}^{*}(\bar{\boldsymbol{\mu}})$ and $\eta_{s}^{*}(\bar{\boldsymbol{\mu}})$ denote the optimal values of $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$ under $\bar{\boldsymbol{\mu}}$, respectively.
Similarly, $\eta_{t}^{*}({\boldsymbol{\mu}})$ and $\eta_{s}^{*}({\boldsymbol{\mu}})$ denote the optimal values of $\operatorname{\textbf{PT}}$ and $\operatorname{\textbf{PS}}$ under ${\boldsymbol{\mu}}$, respectively. ###### Lemma 7. (1) $\eta_{t}^{*}({\boldsymbol{\mu}})\geq\eta_{t}^{*}(\bar{\boldsymbol{\mu}})/\kappa$; (2) $\eta_{s}^{*}(\bar{\boldsymbol{\mu}})\geq\eta_{s}^{*}({\boldsymbol{\mu}})/\kappa$. ###### Proof. We prove the first inequality as follows. Consider any optimal solution $\mathbf{x}=(x_{ij})$ to $\operatorname{\textbf{PT}}({\boldsymbol{\mu}})$. Observe that for any $j\in J$, $\displaystyle\bar{w}_{j}({\boldsymbol{\mu}},\mathbf{x})$ $\displaystyle=\sum_{i\sim j}x_{ij}\cdot\mu_{ij}\cdot\frac{\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}^{2}}{1-\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}}$ $\displaystyle\geq(1/\kappa)\cdot\sum_{i\sim j}x_{ij}\cdot\frac{\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}}{1-\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}}=(1/\kappa)\cdot\sum_{i\sim j}x_{ij}\cdot\frac{\rho_{i}({\boldsymbol{\mu}},\mathbf{x})}{1-\rho_{i}({\boldsymbol{\mu}},\mathbf{x})}$ $\displaystyle=(1/\kappa)\cdot\sum_{i\sim j}x_{ij}\cdot\Big{(}-1+\frac{1}{1-\rho_{i}({\boldsymbol{\mu}},\mathbf{x})}\Big{)}\geq(1/\kappa)\cdot\sum_{i\sim j}x_{ij}\cdot\Big{(}-1+\frac{1}{1-\rho_{i}(\bar{\boldsymbol{\mu}},\mathbf{x})}\Big{)},$ where the inequality on the last line is due to $\displaystyle\rho_{i}({\boldsymbol{\mu}},\mathbf{x})=\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}\geq\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i}=\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\bar{\mu}_{i\ell}=\rho_{i}(\bar{\boldsymbol{\mu}},\mathbf{x}).$ (24) Therefore, $\displaystyle\eta_{t}^{*}({\boldsymbol{\mu}})$ $\displaystyle=\max_{j\in J}\bar{w}_{j}({\boldsymbol{\mu}},\mathbf{x})\geq(1/\kappa)\cdot\max_{j\in J}\sum_{i\sim j}x_{ij}\cdot\Big{(}-1+\frac{1}{1-\rho_{i}(\bar{\boldsymbol{\mu}},\mathbf{x})}\Big{)}\geq(1/\kappa)\cdot\eta_{t}^{*}(\bar{\boldsymbol{\mu}}),$ where the last inequality is valid since $\mathbf{x}$, being optimal for $\operatorname{\textbf{PT}}({\boldsymbol{\mu}})$, is also feasible for $\operatorname{\textbf{PT}}(\bar{\boldsymbol{\mu}})$: for each $j\in J$, $\sum_{i\sim j}x_{ij}=1$, and for each $i\in I$, $\rho_{i}(\bar{\boldsymbol{\mu}},\mathbf{x})\leq\rho_{i}({\boldsymbol{\mu}},\mathbf{x})\leq 1$ due to Inequality (24). Now we show the second inequality. Note that we assume by default that $\operatorname{\textbf{PS}}({\boldsymbol{\mu}})$ is feasible with $\eta^{*}_{s}({\boldsymbol{\mu}})\leq 1$; then so is $\operatorname{\textbf{PS}}(\bar{\boldsymbol{\mu}})$. We claim that the optimal value $\eta^{*}_{s}({\boldsymbol{\mu}})$ remains unchanged after removing the constraints $\rho_{i}\leq 1$ for all $i\in I$. Let $\operatorname{\textbf{PC}}$ be the program obtained from $\operatorname{\textbf{PS}}$ by removing the constraints $\rho_{i}\leq 1$ for all $i\in I$, and let $\eta_{c}({\boldsymbol{\mu}},\mathbf{x})$ be the corresponding value of $\operatorname{\textbf{PC}}$ with respect to ${\boldsymbol{\mu}}$ and $\mathbf{x}$. Observe that $\eta_{s}^{*}({\boldsymbol{\mu}})=\eta_{c}^{*}({\boldsymbol{\mu}})$ and $\eta_{s}^{*}(\bar{\boldsymbol{\mu}})=\eta_{c}^{*}(\bar{\boldsymbol{\mu}})$. Consider an optimal solution $\mathbf{x}=(x_{ij})$ to $\operatorname{\textbf{PS}}(\bar{\boldsymbol{\mu}})$. We can verify that it is feasible for $\operatorname{\textbf{PC}}({\boldsymbol{\mu}})$.
Thus, $\displaystyle\eta^{*}_{s}(\bar{\boldsymbol{\mu}})$ $\displaystyle=\eta_{s}(\bar{\boldsymbol{\mu}},\mathbf{x})=\eta_{c}(\bar{\boldsymbol{\mu}},\mathbf{x})\geq\eta_{c}({\boldsymbol{\mu}},\mathbf{x})/\kappa\geq\eta_{c}^{*}({\boldsymbol{\mu}})/\kappa=\eta_{s}^{*}({\boldsymbol{\mu}})/\kappa,$ (25) where (a) the first inequality in (25) follows from the fact that $\displaystyle\eta_{c}(\bar{\boldsymbol{\mu}},\mathbf{x})=\max_{i\in I}\sum_{\ell\sim i}x_{i\ell}\cdot\lambda_{\ell}/\bar{\mu}_{i\ell}=\max_{i\in I}\sum_{\ell\sim i}x_{i\ell}\cdot\lambda_{\ell}/\mu_{i}\geq\max_{i\in I}\sum_{\ell\sim i}x_{i\ell}\cdot\lambda_{\ell}/(\kappa\cdot\mu_{i\ell})=\eta_{c}({\boldsymbol{\mu}},\mathbf{x})/\kappa;$ and (b) the second inequality in (25) is valid since $\mathbf{x}$ is feasible for $\operatorname{\textbf{PC}}({\boldsymbol{\mu}})$ and $\eta_{c}^{*}({\boldsymbol{\mu}})$ is the optimal value. ∎ ###### Proof of Theorem 1. Let $\mathbf{x}=(x_{ij})$ be an optimal solution to $\operatorname{\textbf{PS}}({\boldsymbol{\mu}})$. Observe that $\mathbf{x}$ is feasible for $\operatorname{\textbf{PT}}({\boldsymbol{\mu}})$ since the two programs $\operatorname{\textbf{PT}}({\boldsymbol{\mu}})$ and $\operatorname{\textbf{PS}}({\boldsymbol{\mu}})$ share the same set of constraints. $\displaystyle\eta_{t}({\boldsymbol{\mu}},\mathbf{x})$ $\displaystyle=\max_{j\in J}\sum_{i\sim j}x_{ij}\cdot\mu_{ij}\cdot\frac{\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}^{2}}{1-\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}}$ $\displaystyle\leq\max_{j\in J}\kappa\cdot\sum_{i\sim j}x_{ij}\cdot\frac{\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}}{1-\sum_{\ell\sim i}x_{i\ell}\lambda_{\ell}/\mu_{i\ell}}=\kappa\cdot\max_{j\in J}\sum_{i\sim j}x_{ij}\cdot\Big{(}-1+\frac{1}{1-\rho_{i}({\boldsymbol{\mu}},\mathbf{x})}\Big{)}$ $\displaystyle\leq\kappa\cdot\Big{(}-1+\frac{1}{1-\max_{i\in I}\rho_{i}({\boldsymbol{\mu}},\mathbf{x})}\Big{)}=\kappa\cdot\Big{(}-1+\frac{1}{1-\eta_{s}^{*}({\boldsymbol{\mu}})}\Big{)}$ $\displaystyle=\kappa\cdot{\frac{\eta_{s}^{*}({\boldsymbol{\mu}})}{1-\eta_{s}^{*}({\boldsymbol{\mu}})}}=\kappa\cdot{\frac{\eta_{s}^{*}(\bar{\boldsymbol{\mu}})}{1-\eta_{s}^{*}(\bar{\boldsymbol{\mu}})}}\cdot\Big{(}\frac{\eta_{s}^{*}({\boldsymbol{\mu}})}{1-\eta_{s}^{*}({\boldsymbol{\mu}})}\Big{)}/\Big{(}\frac{\eta_{s}^{*}(\bar{\boldsymbol{\mu}})}{1-\eta_{s}^{*}(\bar{\boldsymbol{\mu}})}\Big{)}$ $\displaystyle=\kappa\cdot\eta_{t}^{*}(\bar{\boldsymbol{\mu}})\cdot\frac{\eta_{s}^{*}({\boldsymbol{\mu}})}{\eta_{s}^{*}(\bar{\boldsymbol{\mu}})}\cdot\frac{1-\eta_{s}^{*}(\bar{\boldsymbol{\mu}})}{1-\eta_{s}^{*}({\boldsymbol{\mu}})}~{}~{}\big{(}\mbox{by Claims 1 and 2 on $\bar{\boldsymbol{\mu}}$}\big{)}$ $\displaystyle\leq\kappa^{3}\cdot\eta_{t}^{*}({\boldsymbol{\mu}})\cdot\frac{1-\eta_{s}^{*}({\boldsymbol{\mu}})/\kappa}{1-\eta_{s}^{*}({\boldsymbol{\mu}})}=\kappa^{3}\cdot\eta_{t}^{*}({\boldsymbol{\mu}})\cdot\Big{(}1+\Big{(}1-\frac{1}{\kappa}\Big{)}\cdot\frac{\eta_{s}^{*}({\boldsymbol{\mu}})}{1-\eta_{s}^{*}({\boldsymbol{\mu}})}\Big{)}~{}~{}\big{(}\mbox{by Lemma 7}\big{)}.$ ∎ ## Appendix D Proof of Lemma 5 We split the proof of Lemma 5 into the following two claims. For any $S\subseteq J$ with $S\neq\emptyset$, let $\lambda(S)=\sum_{j\in S}\lambda_{j}=\lambda\cdot|S|$, and $\mu(S)=\sum_{i\in\partial(S)}\mu_{i}$ with $\partial(S)=\\{i:\exists j\in S,i\sim j\\}$ being the set of neighbors of $S$. (Note that $\partial(S)$ includes all possible neighbors of nodes in $S$, but nodes in $\partial(S)$ may have neighbors beyond $S$.) ###### Claim 1.
$\eta_{s}^{*}=\max_{S\subseteq J,S\neq\emptyset}\lambda(S)/\mu(S)$. ###### Claim 2. $\eta_{t}^{*}+1\geq 1/\big{(}1-\lambda(S)/\mu(S)\big{)}$ for any $S\subseteq J$ with $S\neq\emptyset$. The two claims above together establish Lemma 5. We now present the proofs of the two claims. ###### Proof of Claim 1. We first show $\eta_{s}^{*}\geq\lambda(S)/\mu(S)$ for any $S\neq\emptyset,S\subseteq J$. Consider an optimal solution $\mathbf{x}^{*}=(x_{ij})$ for $\overline{\operatorname{\textbf{PS}}}$. Let $x_{i}=\sum_{j\sim i,j\in J}x_{ij}$. $\displaystyle\eta_{s}^{*}=\max_{i\in I}\rho_{i}(\mathbf{x}^{*})\geq\max_{i\in\partial(S)}\rho_{i}(\mathbf{x}^{*})=\max_{i\in\partial(S)}(\lambda x_{i}/\mu_{i})$ $\displaystyle\geq\Big{(}\sum_{i\in\partial(S)}\lambda x_{i}\Big{)}/\Big{(}\sum_{i\in\partial(S)}\mu_{i}\Big{)}\geq\lambda(S)/\mu(S),$ where the last inequality follows from $\displaystyle\sum_{i\in\partial(S)}\lambda x_{i}$ $\displaystyle=\lambda\sum_{i\in\partial(S)}\sum_{j\sim i,j\in J}x_{ij}$ $\displaystyle\geq\lambda\sum_{j\in S}\sum_{i\in I,i\sim j}x_{ij}=\lambda\cdot|S|=\lambda(S).$ Now we show $\eta_{s}^{*}\leq\max_{S\neq\emptyset,S\subseteq J}\lambda(S)/\mu(S)$. For an optimal solution $\mathbf{x}^{*}=(x_{ij})$ of $\overline{\operatorname{\textbf{PS}}}$, let $I(\mathbf{x}^{*})=\\{i\in I:\rho_{i}(\mathbf{x}^{*})=\eta_{s}^{*}\\}$ be the set of nodes with saturated load under $\mathbf{x}^{*}$. Let $\mathbf{y}^{*}=(y_{ij})$ be an optimal solution of $\overline{\operatorname{\textbf{PS}}}$ such that $I(\mathbf{y}^{*})$ has the smallest size. Let $J(\mathbf{y}^{*})=\\{j\in J:\exists i\in I(\mathbf{y}^{*})\mbox{~{}with~{}}i\sim j,y_{ij}>0\\}$ be the set of non-zero neighbors of $I(\mathbf{y}^{*})$ with respect to $\mathbf{y}^{*}$. We claim that $\partial(J(\mathbf{y}^{*}))=I(\mathbf{y}^{*})$. We show this by contradiction. Suppose there is some $j\in J(\mathbf{y}^{*})$ such that (1) there exists some $i\sim j,i\in I(\mathbf{y}^{*})$ with $y_{ij}>0$ and $\rho_{i}(\mathbf{y}^{*})=\eta_{s}^{*}$ and (2) there exists some $\bar{i}\sim j$ with $\bar{i}\notin I(\mathbf{y}^{*})$, _i.e.,_ $\rho_{\bar{i}}(\mathbf{y}^{*})<\eta_{s}^{*}$. Consider the following perturbation: $\bar{y}_{ij}\leftarrow y_{ij}-\epsilon$ and $\bar{y}_{\bar{i},j}\leftarrow y_{\bar{i},j}+\epsilon$ for an appropriately small $\epsilon>0$. This yields another optimal solution that either has a strictly smaller set $I(\bar{\mathbf{y}}^{*})$ if $|I(\mathbf{y}^{*})|>1$, or a strictly better optimal value if $|I(\mathbf{y}^{*})|=1$, which contradicts our assumption. Observe that $\eta_{s}^{*}=\rho_{i}(\mathbf{y}^{*})=(\lambda y_{i}/\mu_{i})$ for every $i\in I(\mathbf{y}^{*})$ with $y_{i}=\sum_{j\sim i}y_{ij}$. Setting $S^{*}=J(\mathbf{y}^{*})$, so that $\partial(S^{*})=I(\mathbf{y}^{*})$, we have $\displaystyle\eta_{s}^{*}$ $\displaystyle=\Big{(}\sum_{i\in I(\mathbf{y}^{*})}\lambda y_{i}\Big{)}/\Big{(}\sum_{i\in I(\mathbf{y}^{*})}\mu_{i}\Big{)}=\Big{(}\sum_{i\in I(\mathbf{y}^{*})}\lambda\sum_{j\sim i}y_{ij}\Big{)}/\mu(S^{*})$ $\displaystyle=\Big{(}\sum_{j\in J(\mathbf{y}^{*})}\lambda\sum_{i\sim j}y_{ij}\Big{)}/\mu(S^{*})=\lambda(S^{*})/\mu(S^{*}).$ Therefore, we get $\eta_{s}^{*}\leq\max_{S\neq\emptyset,S\subseteq J}\lambda(S)/\mu(S)$. ∎ ###### Proof of Claim 2. Consider any optimal solution $\mathbf{x}^{*}=(x_{ij})$ for $\overline{\operatorname{\textbf{PT}}}$. Recall that $\Lambda=\lambda(S)=\lambda\cdot|S|$ and $\Phi=\mu(S)=\sum_{i\in\partial(S)}\mu_{i}$. We see $\Lambda/\Phi=\lambda(S)/\mu(S)\leq\eta_{s}^{*}\leq 1$ due to Claim 1.
Set $x_{i}=\sum_{j\sim i,j\in J}x_{ij}$ and $x_{i}(S)=\sum_{j\sim i,j\in S}x_{ij}$. Let $\rho_{i}(\mathbf{x}^{*})=(\lambda/\mu_{i})\sum_{j\sim i,j\in J}x_{ij}=(\lambda/\mu_{i})\cdot x_{i}$ and $\rho_{i,S}(\mathbf{x}^{*})=(\lambda/\mu_{i})\sum_{j\sim i,j\in S}x_{ij}=(\lambda/\mu_{i})\cdot x_{i}(S)$. Thus, $\rho_{i}(\mathbf{x}^{*})\geq\rho_{i,S}(\mathbf{x}^{*})$. $\displaystyle\eta_{t}^{*}+1=\max_{j\in J}\big{(}\bar{w}_{j}(\mathbf{x}^{*})+1\big{)}\geq\frac{1}{|S|}\sum_{j\in S}\big{(}\bar{w}_{j}(\mathbf{x}^{*})+1\big{)}=\frac{1}{|S|}\sum_{j\in S}\sum_{i\sim j}\frac{x_{ij}}{1-\rho_{i}(\mathbf{x}^{*})}=\frac{1}{|S|}\sum_{i\in\partial(S)}\sum_{j\sim i,j\in S}\frac{x_{ij}}{1-\rho_{i}(\mathbf{x}^{*})}$ $\displaystyle=\frac{1}{|S|}\sum_{i\in\partial(S)}\frac{x_{i}(S)}{1-\rho_{i}(\mathbf{x}^{*})}\geq\frac{1}{|S|}\sum_{i\in\partial(S)}\frac{x_{i}(S)}{1-\rho_{i,S}(\mathbf{x}^{*})}$ $\displaystyle=\frac{1}{\lambda\cdot|S|}\sum_{i\in\partial(S)}\mu_{i}\cdot\frac{(\lambda/\mu_{i})\cdot x_{i}(S)}{1-\rho_{i,S}(\mathbf{x}^{*})}=\frac{1}{\Lambda}\sum_{i\in\partial(S)}\frac{\mu_{i}\cdot\rho_{i,S}(\mathbf{x}^{*})}{1-\rho_{i,S}(\mathbf{x}^{*})}=\frac{1}{\Lambda}\sum_{i\in\partial(S)}\mu_{i}\cdot\Big{(}-1+\frac{1}{1-\rho_{i,S}(\mathbf{x}^{*})}\Big{)}$ $\displaystyle=-\frac{\Phi}{\Lambda}+\frac{1}{\Lambda}\sum_{i\in\partial(S)}\frac{\mu_{i}}{1-\rho_{i,S}(\mathbf{x}^{*})}=-\frac{\Phi}{\Lambda}+\frac{1}{\Lambda}\sum_{i\in\partial(S)}\frac{\mu_{i}}{1-(\lambda/\mu_{i})\cdot x_{i}(S)}=-\frac{\Phi}{\Lambda}+\frac{1}{\Lambda}\sum_{i\in\partial(S)}\frac{\mu_{i}^{2}}{\mu_{i}-\lambda\cdot x_{i}(S)}.$ Set $z_{i}=\lambda\cdot x_{i}(S)$. Observe that (1) $\displaystyle\sum_{i\in\partial(S)}z_{i}$ $\displaystyle=\sum_{i\in\partial(S)}\lambda\cdot x_{i}(S)=\lambda\sum_{i\in\partial(S)}\sum_{j\sim i,j\in S}x_{ij}=\lambda\cdot\sum_{j\in S}\sum_{i\sim j,i\in I}x_{ij}=\lambda\cdot|S|=\Lambda,$ and (2) for any $i\in I$, we have $z_{i}\leq\mu_{i}$ since $z_{i}/\mu_{i}=(\lambda/\mu_{i})\cdot x_{i}(S)=\rho_{i,S}(\mathbf{x}^{*})\leq\rho_{i}(\mathbf{x}^{*})\leq 1$. For any fixed values of $(\mu_{i})$, let $g_{i}(z_{i}):=\mu_{i}^{2}/(\mu_{i}-z_{i})$ be a function of $z_{i}\in[0,\mu_{i}]$. Consider a minimization program below, $\displaystyle\Big{(}\min\sum_{i\in\partial(S)}g_{i}(z_{i}):0\leq z_{i}\leq\mu_{i},\forall i\in\partial(S);\sum_{i\in\partial(S)}z_{i}=\Lambda.\Big{)}$ (26) By local perturbation, we claim that for any given $(\mu_{i})$, Program (26) has a unique optimal solution $\mathbf{z}^{*}=(z_{i}^{*})$ such that $\mu_{i}/(\mu_{i}-z^{*}_{i})$ takes a uniform value for every $i\in\partial(S)$, and so does $z^{*}_{i}/\mu_{i}$. Indeed, each $g_{i}$ is convex with $g_{i}^{\prime}(z_{i})=\mu_{i}^{2}/(\mu_{i}-z_{i})^{2}$, so at the optimum these derivatives must coincide across $i\in\partial(S)$, which forces $\mu_{i}/(\mu_{i}-z^{*}_{i})$, and hence $z^{*}_{i}/\mu_{i}$, to be uniform. Let $\beta=z^{*}_{i}/\mu_{i}$ for every $i\in\partial(S)$. We see that $\beta=\sum_{i\in\partial(S)}z^{*}_{i}/\sum_{i\in\partial(S)}\mu_{i}=\Lambda/\Phi$.
Thus, we conclude that $\displaystyle\eta_{t}^{*}+1$ $\displaystyle\geq-\frac{\Phi}{\Lambda}+\frac{1}{\Lambda}\sum_{i\in\partial(S)}\frac{\mu_{i}^{2}}{\mu_{i}-\lambda\cdot x_{i}(S)}=-\frac{\Phi}{\Lambda}+\frac{1}{\Lambda}\sum_{i\in\partial(S)}g_{i}(z_{i})$ $\displaystyle\geq-\frac{\Phi}{\Lambda}+\frac{1}{\Lambda}\sum_{i\in\partial(S)}g_{i}(z_{i}^{*})=-\frac{\Phi}{\Lambda}+\frac{1}{\Lambda}\sum_{i\in\partial(S)}\frac{\mu_{i}}{1-z_{i}^{*}/\mu_{i}}$ $\displaystyle=-\frac{\Phi}{\Lambda}+\frac{\Phi}{\Lambda}\cdot\frac{1}{1-\Lambda/\Phi}=\frac{\Phi}{\Lambda}\cdot\Big{(}\frac{1}{1-\Lambda/\Phi}-1\Big{)}=\frac{1}{1-\Lambda/\Phi}=\frac{1}{1-\lambda(S)/\mu(S)}.$ ∎ ## Appendix E Details of the Experimental Settings ### E.1 Approaches Used for Determining the Duration of the Tasks We propose three approaches for determining the duration of the tasks. In the first approach, we aim to study settings where $\kappa=1$. We calculated the average duration time for each (human) teleoperator across all allowed tasks and computed the overall average. This average was then used as the mean of the exponential distribution. With $\kappa=1$ in this method, an optimal solution of $\operatorname{\textbf{PS}}$ is also an optimal solution of $\operatorname{\textbf{PT}}$. In the second approach, we aim to vary $\kappa$ by assigning different mean values to the various tasks. These mean values were chosen around the mean calculated in the previous approach. This allowed us to assess the performance of our algorithms and heuristics in scenarios with significant variance in the duration times of the different tasks. Finally, the third approach considers the average time taken by the teleoperators to perform the tasks in the simulation for each specific teleoperator-task combination. This approach helps us make the settings as similar as possible to the real data. The exact implementation of the second approach is as follows: the mean duration for each teleoperator is the average duration of the tasks she is authorized to undertake. The durations follow an exponential distribution with means calculated as $(2\cdot\kappa\cdot x)/(1+\kappa)$ and $2x-(2\cdot\kappa\cdot x)/(1+\kappa)$ for the first and last tasks, respectively, where $x$ represents the average task duration for the teleoperator. The average values range from 3.33 to 8 seconds, with a standard deviation of 1.58. For the remaining tasks, the means are selected uniformly within the range between the first and last task means. In cases where a teleoperator is permitted to perform only one task, the mean of the exponential distribution is set to $x$. The experiments in the main paper use the second approach, while both the second and third approaches are presented in the appendix. ### E.2 Generating the Task Arrival Rates To establish the task arrival rates, we followed a specific procedure. We used the average number of task arrivals per day (100,000) reported in Ackerman Viden et al. [2023] as a basis and examined the neighborhood of this number (e.g., 60,000-140,000). For each task type, we multiplied this average by the weight assigned to that specific task type, resulting in the final $\lambda$ parameter for that task type. Given the task arrival rates, the actual arrival times are generated by the following procedure: using an exponential distribution with mean $1/\lambda$, we draw inter-arrival times and accumulate them to obtain the arrival times; for each arrival time, we decide the type of the arriving task using configurable probabilities.
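A minimal Python sketch of this generation procedure follows; the daily arrival count and the type probabilities below are hypothetical placeholders, not the values used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical placeholder values, not the ones used in the experiments.
arrivals_per_day = 100_000
lam = arrivals_per_day / (24 * 3600)     # overall arrival rate (per second)
type_probs = [0.25, 0.25, 0.25, 0.25]    # configurable task-type probabilities

def generate_arrivals(horizon_seconds):
    """Draw exponential inter-arrival times with mean 1/lam, accumulate them
    into arrival times, and assign each arrival a task type according to the
    configurable probabilities."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam)  # inter-arrival time
        if t > horizon_seconds:
            break
        times.append(t)
    types = rng.choice(len(type_probs), size=len(times), p=type_probs)
    return np.array(times), types

arrival_times, task_types = generate_arrivals(3600.0)  # one simulated hour
```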
This approach allowed us to model the arrival times of tasks and analyze the system’s performance under various task-type balances. ### E.3 Experiments Environment and Some More Technical Details In our experiments, we ran simulations using Python and Matlab. The simulation ran for a (virtual) period of 4 weeks. The nonlinear minimax problem $\operatorname{\textbf{PT}}$ was solved with the Matlab function fmincon, and the problem $\operatorname{\textbf{PS}}$ was cast as a linear program and solved with the Matlab function linprog. Most of the experiments were carried out on a Windows laptop. Other experiments were run on a Linux server with 98 cores to save time. It is important to emphasize that the $\operatorname{\textbf{PS}}$ method is valid only when the available workers can handle all the tasks. Therefore, we focus on problems for which $\operatorname{\textbf{PS}}$ can provide a feasible solution. ## Appendix F Additional Figures for the Experiments Section ### F.1 Results for a Different Duration Distribution Figures 4 and 5 are similar to Figures 2(c,d,e,f) from the Experiments section. The only difference is that the parameter $\mu_{i,j}$ is defined as the real average time it takes teleoperator $i$ to perform a task of type $j$. Therefore, the value of $\kappa$ varies for the different workers. Figure 4: Maximum task waiting time (left) and maximum worker workload (right) under varying task arrival loads. The x-axis represents the expected number of arrivals per day. In these graphs, the arrival distributions of all tasks are equal. Figure 5: Task waiting time (left) and worker workload (right) for different arrival distributions: equal task arrival vs. one task type arriving seven times more often than the others (the task load was 60,000 requests per day). ### F.2 Results for Different Numbers of Task Types In this section, we vary the number of task types and analyze the impact on the maximum waiting time of tasks and the maximum workload of workers. In Figure 7 we used the second approach (see Section E.1) to define the task duration parameter ($\mu_{i,j}$) for tasks 1-4, while in Figure 6 we used the third approach. In both figures, to define the parameters for a new task not present in the dataset (5 and 6), we selected a mean value uniformly at random from the range between 4 and 11 (the means of the original tasks) and used the standard deviation of the actual duration values to define a normal distribution. This normal distribution was used to determine the duration for each worker. To obtain a reasonable duration, we set a minimum duration of at least 1 second for all workers. The graph of the input network was created according to these durations. An edge exists between a worker and a task type only if the average time required by the worker to complete a task of that type is at most equal to the median duration for that task type. In the final step, the duration parameters for the new tasks were determined based on these durations, using the second and third approaches for Figures 7 and 6, respectively. In Figures 6 and 7 we see that having only 2 task types leads to worse performance in both the maximum task waiting time and the maximum worker workload. For 3-6 task types, we find that the performance in Figure 6 decreases for both measures as the number of task types increases (at least for some methods), while the performance in Figure 7 is very similar in this range.
We conclude that although the number of task types is irrelevant when tasks are distributed uniformly among the workers, it is quite important if they are distributed differently (as in the real data). This fact should be taken into account when modeling task types in an application. Figure 6: Task waiting time (left) and worker workload (right) for different numbers of task types (the task load was 90,000 requests per day; $\mu_{i,j}$ varies according to the real and synthetic data and $\kappa$ is derived accordingly). Figure 7: Task waiting time (left) and worker workload (right) for different numbers of task types (the task load was 120,000 requests per day and $\kappa=1$). ### F.3 Results for Different Numbers of Workers In this section, we vary the number of workers and analyze the effects on the maximum waiting time of the tasks and the maximum workload of the workers. In Figure 8 we used the second approach (Section E.1) to define the task duration parameter ($\mu_{i,j}$) for workers 1-9, while in Figure 9 we used the third approach. In both figures, we chose a (uniformly) random duration from the range of existing durations to define the average durations for the new workers (10 to 14). The graph of the input network was defined using these durations, where an edge between a worker and a task type exists only if the average duration of the worker performing a task of that type is at most equal to the median duration for that task type. Finally, the duration parameters of the new tasks were determined based on these durations according to the second and third approaches. We find that, as expected, the performance of the system improves as the number of workers increases. Therefore, when deciding how many workers to acquire, we should look for a reasonable balance between system performance and the cost of additional workers. Figure 8: Task waiting time (left) and worker workload (right) for different numbers of workers (the task load was 90,000 requests per day and $\kappa$ is determined by the values of $\mu_{i,j}$). Figure 9: Task waiting time (left) and worker workload (right) for different numbers of workers (the task load was 90,000 requests per day and $\kappa=1$).
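As a complement to the solver description in Section E.3, the following minimal sketch shows how $\operatorname{\textbf{PS}}$ can be cast as a linear program. In the experiments we solved it with Matlab's linprog; the Python formulation below is an equivalent illustration, written under the assumption that $\operatorname{\textbf{PS}}$ minimizes the maximum worker load $\max_{i}\rho_{i}$ subject to $\sum_{i\sim j}x_{ij}=1$ and $\rho_{i}\leq 1$. The instance data are hypothetical, and the full bipartite graph is assumed for simplicity; a missing edge $(i,j)$ can be handled by fixing $x_{ij}=0$.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical small instance: 3 workers (rows of mu), 2 task types.
lam = np.array([0.5, 0.8])            # arrival rates lambda_j (assumed values)
mu = np.array([[1.0, 2.0],            # service rates mu_ij (assumed values)
               [1.5, 1.5],
               [2.0, 1.0]])
nI, nJ = mu.shape

# Variables: x_ij flattened row-major, plus the epigraph variable t;
# minimizing t enforces t = max_i rho_i at the optimum.
nvar = nI * nJ + 1
c = np.zeros(nvar)
c[-1] = 1.0

# rho_i - t <= 0 for every worker i, where rho_i = sum_j x_ij * lam_j / mu_ij.
A_ub = np.zeros((nI, nvar))
for i in range(nI):
    for j in range(nJ):
        A_ub[i, i * nJ + j] = lam[j] / mu[i, j]
    A_ub[i, -1] = -1.0
b_ub = np.zeros(nI)

# sum_i x_ij = 1 for every task type j.
A_eq = np.zeros((nJ, nvar))
for j in range(nJ):
    for i in range(nI):
        A_eq[j, i * nJ + j] = 1.0
b_eq = np.ones(nJ)

# x_ij in [0, 1]; t <= 1 encodes the feasibility requirement rho_i <= 1.
bounds = [(0, 1)] * (nI * nJ) + [(0, 1)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("eta_s* =", res.fun)            # optimal maximum load
```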
# Comparison theorems for multi-dimensional BSDEs with jumps and applications to constrained stochastic linear-quadratic control Ying Hu, Xiaomin Shi and Zuo Quan Xu Y. Hu: Univ Rennes, CNRS, IRMAR-UMR 6625, F-35000 Rennes, France<EMAIL_ADDRESS>X. Shi: School of Statistics and Mathematics, Shandong University of Finance and Economics, Jinan 250100, China<EMAIL_ADDRESS>Z. Xu: Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong. <EMAIL_ADDRESS> ###### Abstract. In this paper, we, for the first time, establish two comparison theorems for multi-dimensional backward stochastic differential equations with jumps. Our approach is novel and completely different from the existing results for the one-dimensional case. Using these and other delicate tools, we then construct solutions to a coupled two-dimensional stochastic Riccati equation with jumps in both standard and singular cases. In the end, these results are applied to solve a cone-constrained stochastic linear-quadratic control problem and a mean-variance portfolio selection problem with jumps. In contrast to problems without jumps, the optimal (relative) state processes may change their signs, which is, of course, due to the presence of jumps. ### Keywords: Backward stochastic differential equations with jumps, multi-dimensional comparison theorem, stochastic Riccati equation with jumps, cone-constrained linear-quadratic control, mean-variance problem. ### Mathematics Subject Classification (2020): 60H30, 60J76, 93E20, 91G10. ## 1\. Introduction The study of backward stochastic differential equations (BSDEs, for short) dates back to Bismut [3], who studied the linear case, as an adjoint equation in the Pontryagin stochastic maximum principle. The general Lipschitz continuous case was later resolved in the seminal paper of Pardoux and Peng [28]. Since then, BSDEs have attracted the strong interest of many researchers and found wide applications in partial differential equations, stochastic control, stochastic differential games and mathematical finance; see, e.g., [6, 8, 9, 10, 12, 16, 30]. In particular, the solvability of quadratic BSDEs in the one-dimensional case was first obtained in Kobylanski [20], and then generalized to the multi-dimensional case by [11, 15, 25, 38]. BSDEs that are driven by a Brownian motion and a Poisson random measure, which are called BSDEs with jumps (BSDEJs) in this paper, were first tackled by Tang and Li [37], then followed notably by Barles, Buckdahn and Pardoux [2], Royer [33], and Quenez and Sulem [32] in the Lipschitz case. Quadratic BSDEs with jumps and their applications in utility maximization problems have also been investigated; see, e.g., Antonelli and Mancini [1], Kazi-Tani, Possamaï and Zhou [17], Laeven and Stadje [21], Morlais [26, 27] among many others. Please refer to Papapantoleon, Possamaï, Saplaouras [29] for a synopsis of these topics. BSDEs arising from stochastic linear quadratic (LQ) control problems, called stochastic Riccati equations (SREs), form an important class of BSDEs. In these BSDEs, the first unknown variable appears in the denominator and the second unknown variable grows quadratically in the generator. These features distinguish them from those well-studied BSDEs with Lipschitz or quadratic growth generators, so that they have to be studied by new methods. Bismut [4] first showed that the optimal control of a stochastic LQ control problem takes a linear state feedback form, provided that its associated SRE admits a solution in some suitable space.
Unfortunately, he could not show the existence of such a solution in general. Since then, numerous advances have been made in solving SREs. Kohlmann and Tang [19] resolved the existence and uniqueness issues for one-dimensional SREs; then Tang [35, 36] resolved the matrix-valued case using the stochastic maximum principle and the dynamic programming method, respectively. Sun, Xiong and Yong [34] studied the indefinite case. Inspired by Tang’s [36] dynamic programming method, Zhang, Dong and Meng [39] established the existence and uniqueness of solutions to SREs with jumps (SREJ). Li, Wu and Yu [22] studied the indefinite case using a so-called relax compensator. Motivated by the mean-variance (MV) portfolio selection problem with no-shorting constraints, Hu and Zhou [16] studied a cone-constrained stochastic LQ problem and found that the optimal control takes a piecewise linear state feedback form (with $0$ the unique break point). The associated SRE is a two-dimensional, but decoupled, BSDE. Hence it can be treated separately as two one-dimensional BSDEs. The solvability was established with the aid of quadratic BSDE theory and truncation techniques. The decoupling phenomenon lies in the fact that the optimal state process will not change its sign (namely, not cross $0$), i.e. it will stay positive (resp. negative) if the initial state is positive (resp. negative). Dong [7] generalized the model in [16] to incorporate a jump via the filtration enlargement framework. The corresponding SRE is a coupled two-dimensional BSDEJ, whose solvability is obtained by solving two recursive systems of BSDEs driven only by Brownian motions. This decomposition approach works only in the filtration enlargement theory; see also Kharroubi, Lim and Ngoupeyou [18], Hu, Shi and Xu [14] for the unconstrained or regime switching case. Czichowsky and Schweizer [5] extended the cone-constrained MV model to a general semi-martingale framework, but they could not solve the two-dimensional SREJ. They claimed that “finding a solution by general BSDE techniques seems a formidable challenge” in [5, Remark 4.8]. This paper is intended as an attempt to cope with the formidable challenge indicated in [5]. Our main contribution is to resolve the solvability of a two-dimensional coupled SREJ in the Wiener-Poisson world via pure BSDE techniques. Although one can consider the more general semi-martingale framework, we will focus on the Wiener-Poisson world, as the SREJ in this case takes a more concrete structure for presentation and illustration. We establish the solvability for both standard and singular cases, containing the SREJ emerging in the cone-constrained MV problem as a special example. Since the existing approximation procedures in Kohlmann and Tang [19] and our previous work [13] cannot be applied to the present problem, we provide a new approximation procedure to achieve the goal. The crucial and novel tools used in the approximation approach are our new comparison theorems for BSDEJs. We establish two comparison theorems, which seem to be the first ones for the multi-dimensional case. The first one requires a locally Lipschitz condition for one generator (see Remark 2.3) and works for bounded state processes, whereas the second one requires a globally Lipschitz condition for both generators and works for square integrable state processes.
Most existing comparison theorems for BSDEJs require the condition $\gamma>-1$ (see Remark 2.2) or the even stronger $\gamma>-1+\varepsilon$ in order to utilize the Girsanov theorem; see, e.g., Barles, Buckdahn and Pardoux [2] and Royer [33]. To the best of our knowledge, Quenez and Sulem’s [32] comparison theorem is the only one that relaxes the condition to $\gamma\geq-1$. Without resorting to the Girsanov theorem, they used the conditional expectation representation of one-dimensional linear BSDEJs to establish their comparison theorem. Nevertheless, all of these existing comparison theorems for BSDEJs can only deal with the one-dimensional case. In our approximation procedure, however, the SREJ is a fully coupled two-dimensional BSDEJ, so comparison theorems for multi-dimensional BSDEJs are strongly needed. It is worth pointing out that the conditional expectation representation method used in [32] cannot be applied to multi-dimensional BSDEJs. In this paper, we propose a completely different approach to establish our comparison theorems for multi-dimensional BSDEJs for the first time. We achieve the goal by directly analyzing $((\delta Y_{t})^{+})^{2}$ with the aid of the Meyer-Itô formula and utilizing a tricky elementary inequality (Lemma 2.1) that works for $\gamma\geq-1$. Note that one cannot expect to extend the results to the case $\gamma<-1$, since counter-examples do exist in this case; see [2, Remark 2.7]. With the help of the new comparison theorems for multi-dimensional BSDEJs, we can construct solutions to the two-dimensional coupled SREJ in both standard and singular cases. We then apply the result to solve a cone-constrained stochastic LQ problem with jumps and obtain the efficient portfolio for an MV problem with jumps. It is worth pointing out that even without the cone constraint, MV problems with jumps have not been investigated thoroughly. Lim [23] studied such a problem, but he assumed all the coefficients are predictable with respect to the Brownian filtration, rendering the corresponding SRE exactly the same as that in the model without jumps. On the other hand, Zhang, Dong and Meng [39] examined stochastic LQ problems with jumps, but they assumed the control weight in the running cost is uniformly positive, so that their result cannot solve the corresponding MV problem where the control weight is $0$. We will not only solve the MV problem with jumps, but also incorporate a convex cone constraint, in particular covering the famous no-shorting constraint. With the cone constraint, the associated SREJ becomes a fully coupled two-dimensional BSDEJ, which causes notable nontrivial difficulty in its solvability. The rest of this paper is organized as follows. Section 2 is devoted to proving two comparison theorems for multi-dimensional BSDEJs. In Section 3, we study a cone-constrained stochastic LQ control problem with jumps and prove the existence and uniqueness of the solution to the associated SREJ. In Section 4, we solve a cone-constrained MV problem. Appendix A provides a heuristic derivation of the SREJ. A lengthy and complementary proof of Theorem 3.1 is relegated to Appendix B. ## 2\. Comparison theorems for multi-dimensional BSDEJs Let $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a fixed complete filtered probability space.
The filtration $\mathbb{F}=\\{\mathcal{F}_{t},t\geq 0\\}$ is generated by two independent random sources, augmented by all $\mathbb{P}$-null sets: one is a standard $n$-dimensional Brownian motion $W_{t}=(W_{1,t},\ldots,W_{n,t})^{\top}$, and the other one is a Poisson random measure $N(\operatorname{d}\\!t,\operatorname{d}\\!e)$ defined on ${\mathbb{R}}_{+}\times\mathcal{E}$ induced by a stationary Poisson point process with a stationary compensator (intensity measure) given by $\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ satisfying $\nu(\mathcal{E})<\infty$, where $\mathcal{E}\subseteq{\mathbb{R}}^{\ell}\setminus\\{0\\}$ is a nonempty Borel subset of the $\ell$-dimensional Euclidean space ${\mathbb{R}}^{\ell}$. We use an increasing sequence $\\{T_{n}\\}_{n\in\mathbb{N}}$ to denote the jump times of the underlying Poisson point process. The compensated Poisson random measure is denoted by $\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e)$. For ease of notation, we only consider a one-dimensional Poisson random measure, although the results of this paper can be generalized to the multi-dimensional case without essential difficulties. Throughout the paper, let $T$ denote a fixed positive constant, $\mathcal{P}$ denote the $\mathbb{F}$-predictable $\sigma$-field on $\Omega\times[0,T]$, and $\mathcal{B}(\mathcal{E})$ denote the Borel $\sigma$-algebra of $\mathcal{E}$. We denote by ${\mathbb{R}}^{\ell}$ the set of $\ell$-dimensional column vectors, by ${\mathbb{R}}^{\ell}_{+}$ the set of vectors in ${\mathbb{R}}^{\ell}$ whose components are nonnegative, by ${\mathbb{R}}^{\ell\times n}$ the set of $\ell\times n$ real matrices, and by $\mathbb{S}^{n}$ the set of symmetric $n\times n$ real matrices. Therefore, ${\mathbb{R}}^{\ell}\equiv{\mathbb{R}}^{\ell\times 1}$. For any vector $Y$, we denote by $Y_{i}$ its $i$-th component. For any matrix $M=(m_{ij})$, we denote its transpose by $M^{\top}$, and its norm by $|M|=\sqrt{\sum_{ij}m_{ij}^{2}}$. If $M\in\mathbb{S}^{n}$ is positive definite (resp. positive semidefinite), we write $M>$ (resp. $\geq$) $0.$ We write $A>$ (resp. $\geq$) $B$ if $A,B\in\mathbb{S}^{n}$ and $A-B>$ (resp. $\geq$) $0.$ We use the standard notations $x^{+}=\max\\{x,0\\}$ and $x^{-}=\max\\{-x,0\\}$ for $x\in{\mathbb{R}}$ and define the set $\mathcal{M}=\\{1,2,...,\ell\\}$. We will use the elementary inequality $|a^{\top}b|\leq c|a|^{2}+\frac{|b|^{2}}{2c}$ for any $a,b\in{\mathbb{R}}^{n},c>0$ frequently throughout the paper without further mention.
We use the following spaces throughout the paper: $\displaystyle L^{2}_{\mathcal{F}_{T}}(\Omega;{\mathbb{R}})$ $\displaystyle=\Big{\\{}\xi:\Omega\to{\mathbb{R}}\;\Big{|}\;\mbox{$\xi$ is $\mathcal{F}_{T}$-measurable, and ${\mathbb{E}}|\xi|^{2}<\infty$}\Big{\\}},$ $\displaystyle L^{\infty}_{\mathcal{F}_{T}}(\Omega;{\mathbb{R}})$ $\displaystyle=\Big{\\{}\xi:\Omega\to{\mathbb{R}}\;\Big{|}\;\mbox{$\xi$ is $\mathcal{F}_{T}$-measurable, and essentially bounded}\Big{\\}},$ $\displaystyle L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}})$ $\displaystyle=\Big{\\{}\phi:\Omega\times[0,T]\to{\mathbb{R}}\;\Big{|}\;\mbox{$\phi$ is $\mathcal{P}$-measurable and ${\mathbb{E}}\int_{0}^{T}|\phi_{t}|^{2}\operatorname{d}\\!t<\infty$}\Big{\\}},$ $\displaystyle L^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}})$ $\displaystyle=\Big{\\{}\phi:\Omega\times[0,T]\to{\mathbb{R}}\;\Big{|}\;\mbox{$\phi$ is $\mathcal{P}$-measurable and essentially bounded}\Big{\\}},$ $\displaystyle L^{2,\nu}({\mathbb{R}})$ $\displaystyle=\Big{\\{}\phi:\mathcal{E}\to{\mathbb{R}}\;\Big{|}\;\mbox{$\phi$ is $\mathcal{B}(\mathcal{E})$-measurable and $||\phi||^{2}_{\nu}:=\int_{\mathcal{E}}|\phi(e)|^{2}\nu(\operatorname{d}\\!e)<\infty$}\Big{\\}},$ $\displaystyle L^{\infty,\nu}({\mathbb{R}})$ $\displaystyle=\Big{\\{}\phi:\mathcal{E}\to{\mathbb{R}}\;\Big{|}\;\mbox{$\phi$ is $\mathcal{B}(\mathcal{E})$-measurable and essentially bounded w.r.t. $\operatorname{d}\\!\nu$}\Big{\\}},$ $\displaystyle L^{2,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$ $\displaystyle=\Big{\\{}\phi:\Omega\times[0,T]\times\mathcal{E}\to{\mathbb{R}}\;\Big{|}\;\mbox{$\phi$ is $\mathcal{P}\otimes\mathcal{B}(\mathcal{E})$-measurable }$ $\displaystyle\qquad\quad\ \mbox{and ${\mathbb{E}}\int_{0}^{T}\int_{\mathcal{E}}|\phi_{t}(e)|^{2}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t<\infty$}\Big{\\}},$ $\displaystyle L^{\infty,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$ $\displaystyle=\Big{\\{}\phi:\Omega\times[0,T]\times\mathcal{E}\to{\mathbb{R}}\;\Big{|}\;\phi\mbox{ is $\mathcal{P}\otimes\mathcal{B}(\mathcal{E})$-measurable and}$ $\displaystyle\qquad\quad\mbox{ essentially bounded w.r.t. $\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\otimes\operatorname{d}\\!\nu$}\Big{\\}},$ $\displaystyle S^{2}_{\mathbb{F}}(0,T;{\mathbb{R}})$ $\displaystyle=\Big{\\{}\phi:\Omega\times[0,T]\to{\mathbb{R}}\;\Big{|}\;(\phi_{t})_{0\leq t\leq T}\mbox{ is c\\`{a}d-l\\`{a}g, $\mathbb{F}$-adapted}$ $\displaystyle\qquad\quad\mbox{ and ${\mathbb{E}}\sup_{0\leq t\leq T}|\phi_{t}|^{2}<\infty$}\Big{\\}},$ $\displaystyle S^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}})$ $\displaystyle=\Big{\\{}\phi:\Omega\times[0,T]\to{\mathbb{R}}\;\Big{|}\;(\phi_{t})_{0\leq t\leq T}\mbox{ is c\\`{a}d-l\\`{a}g, $\mathbb{F}$-adapted}$ $\displaystyle\qquad\quad\mbox{ and essentially bounded}\Big{\\}}.$ These definitions are generalized in the obvious way to the cases where ${\mathbb{R}}$ is replaced by ${\mathbb{R}}^{n}$, ${\mathbb{R}}^{n\times\ell}$ or $\mathbb{S}^{n}$. Arguments $s$, $t$ and $\omega$, or statements “almost surely” (a.s.) and “almost everywhere” (a.e.), may be suppressed for simplicity in many circumstances when no confusion occurs. We shall use $c$ to represent a generic positive constant which can be different from line to line. All the equations and inequalities in subsequent analysis shall be understood to hold $\operatorname{d}\\!\mathbb{P}$-a.s. or $\operatorname{d}\\!\nu$-a.e.
or $\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.}$ or $\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\otimes\operatorname{d}\\!\nu$-a.e., etc. In this paper, any $\ell$-dimensional backward stochastic differential equation with jumps (BSDEJ) (on $[0,T]$) is characterized by a pair $(\xi,f)$, in which $\xi:\Omega\to{\mathbb{R}}^{\ell}$ is called the terminal value, which is an $\mathcal{F}_{T}$-measurable random vector, and $f:\Omega\times[0,T]\times{\mathbb{R}}^{\ell}\times{\mathbb{R}}^{n\times\ell}\times L^{2,\nu}({\mathbb{R}}^{\ell})\to{\mathbb{R}}^{\ell}$ is called the generator, which is a $\mathcal{P}\otimes\mathcal{B}({\mathbb{R}}^{\ell})\otimes\mathcal{B}({\mathbb{R}}^{n\times\ell})\otimes\mathcal{B}(L^{2,\nu}({\mathbb{R}}^{\ell}))$-measurable process. We call the BSDEJ $\ell$-dimensional as its state process is ${\mathbb{R}}^{\ell}$-valued. We often rewrite it in its component form for ease of presentation. ### 2.1. Comparison theorem for bounded processes We first prove a comparison theorem where the state processes are essentially bounded. ###### Theorem 2.1. Suppose, for every $i\in\mathcal{M}$, $\displaystyle(Y_{i},Z_{i},\Phi_{i}),(\overline{Y}_{i},\overline{Z}_{i},\overline{\Phi}_{i})\in S^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}})\times L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n})\times L^{2,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}}),$ and they satisfy the BSDEJs $\displaystyle Y_{i,t}$ $\displaystyle=\xi_{i}+\int_{t}^{T}f_{i}(s,Y_{s-},Z_{i,s},\Phi_{s})\operatorname{d}\\!s$ (2.1) $\displaystyle\qquad-\int_{t}^{T}Z_{i,s}^{\top}\operatorname{d}\\!W_{s}-\int_{t}^{T}\int_{\mathcal{E}}\Phi_{i,s}(e)\widetilde{N}(\operatorname{d}\\!s,\operatorname{d}\\!e)~{}\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.},$ and $\displaystyle\overline{Y}_{i,t}$ $\displaystyle=\overline{\xi}_{i}+\int_{t}^{T}\overline{f}_{i}(s,\overline{Y}_{s-},\overline{Z}_{i,s},\overline{\Phi}_{s})\operatorname{d}\\!s$ (2.2) $\displaystyle\qquad-\int_{t}^{T}\overline{Z}_{i,s}^{\top}\operatorname{d}\\!W_{s}-\int_{t}^{T}\int_{\mathcal{E}}\overline{\Phi}_{i,s}(e)\widetilde{N}(\operatorname{d}\\!s,\operatorname{d}\\!e)~{}\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.}$ Also suppose that, for all $i\in\mathcal{M}$ and $s\in[0,T]$, 1. (1) $\xi_{i}\leq\overline{\xi}_{i}$; 2. (2) there exists a constant $c>0$ such that $\displaystyle\quad\;f_{i}(s,Y_{s-},Z_{i,s},\Phi_{1,s},\cdots,\Phi_{i,s},\cdots,\Phi_{\ell,s})$ $\displaystyle\qquad\quad- f_{i}(s,Y_{s-},Z_{i,s},\Phi_{1,s},\cdots,\overline{\Phi}_{i,s},\cdots,\Phi_{\ell,s})$ $\displaystyle\leq c\int_{\mathcal{E}}(\Phi_{i,s}(e)-\overline{\Phi}_{i,s}(e))^{+}\nu(\operatorname{d}\\!e)+\int_{\mathcal{E}}|\Phi_{i,s}(e)-\overline{\Phi}_{i,s}(e)|\nu(\operatorname{d}\\!e);$ 3. (3) there exists a constant $c>0$ such that $\displaystyle\quad\;f_{i}(s,Y_{s-},Z_{i,s},\Phi_{1,s},\cdots,\overline{\Phi}_{i,s},\cdots,\Phi_{\ell,s})-f_{i}(s,\overline{Y}_{s-},\overline{Z}_{i,s},\overline{\Phi}_{s})$ $\displaystyle\leq c\Big{(}|Y_{i,s-}-\overline{Y}_{i,s-}|+\sum_{j\neq i}(Y_{j,s-}-\overline{Y}_{j,s-})^{+}+|Z_{i,s}-\overline{Z}_{i,s}|$ $\displaystyle\qquad\quad+\sum_{j\neq i}\int_{\mathcal{E}}(Y_{j,s-}+\Phi_{j,s}(e)-\overline{Y}_{j,s-}-\overline{\Phi}_{j,s}(e))^{+}\nu(\operatorname{d}\\!e)\Big{)};~{}\text{and}$ 4. (4) $f_{i}(s,\overline{Y}_{s-},\overline{Z}_{i,s},\overline{\Phi}_{s})\leq\overline{f}_{i}(s,\overline{Y}_{s-},\overline{Z}_{i,s},\overline{\Phi}_{s}).$ Then $Y_{i}\leq\overline{Y}_{i}$ for all $i\in\mathcal{M}$.
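Before turning to the proof, we note that the elementary inequality it hinges on (Lemma 2.1 below) is easy to sanity-check numerically. The following minimal Python sketch is an illustration only, not part of the argument; all sampled ranges are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10**5):
    x, y = rng.uniform(-10.0, 10.0, size=2)
    c = rng.uniform(-1.0, 10.0)                 # the lemma requires c >= -1
    xp = max(x, 0.0)
    lhs = max(x + y, 0.0) ** 2 - xp ** 2 - 2.0 * (1.0 + c) * xp * y
    rhs = -max(c * c, 1.0) * xp ** 2
    assert lhs >= rhs - 1e-9                    # tolerance for rounding error
```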
To prove this theorem, we need the following critical elementary result. ###### Lemma 2.1. For all $(x,y)\in{\mathbb{R}}\times{\mathbb{R}}$ and $c\geq-1$, we have $\displaystyle[(x+y)^{+}]^{2}-(x^{+})^{2}-2(1+c)x^{+}y\geq-(c^{2}\vee 1)(x^{+})^{2}.$ Proof: There are three cases: * • If $x\leq 0$, then $\displaystyle[(x+y)^{+}]^{2}-(x^{+})^{2}-2(1+c)x^{+}y=[(x+y)^{+}]^{2}\geq 0=-(c^{2}\vee 1)(x^{+})^{2}.$ * • If $y\leq 0$, then, since $c\geq-1$, $\displaystyle[(x+y)^{+}]^{2}-(x^{+})^{2}-2(1+c)x^{+}y\geq[(x+y)^{+}]^{2}-(x^{+})^{2}\geq-(x^{+})^{2}\geq-(c^{2}\vee 1)(x^{+})^{2}.$ * • If $x\geq 0$ and $y\geq 0$, then $\displaystyle[(x+y)^{+}]^{2}-(x^{+})^{2}-2(1+c)x^{+}y=y^{2}-2cxy$ $\displaystyle=(y-cx)^{2}-c^{2}x^{2}$ $\displaystyle\geq-(c^{2}\vee 1)x^{2}=-(c^{2}\vee 1)(x^{+})^{2}.\qquad$ The proof is complete. $\Box$ Proof of Theorem 2.1. For $t\in[0,T]$ and $i\in\mathcal{M}$, set $\delta Y_{i,t}=Y_{i,t}-\overline{Y}_{i,t},\ \delta Z_{i,t}=Z_{i,t}-\overline{Z}_{i,t},\ \delta\Phi_{i,t}=\Phi_{i,t}-\overline{\Phi}_{i,t}.$ Applying the Meyer-Itô formula [31, Chapter IV, Theorem 70] to $(\delta Y_{i,t})^{+}$, we get $\displaystyle\operatorname{d}\\!\;(\delta Y_{i,t})^{+}$ $\displaystyle=-\mathbf{1}_{\\{\delta Y_{i,t-}>0\\}}[f_{i}(t,Y_{t-},Z_{i,t},\Phi_{t})-\overline{f}_{i}(t,\overline{Y}_{t-},\overline{Z}_{i,t},\overline{\Phi}_{t})]\operatorname{d}\\!t$ $\displaystyle\quad\;+\int_{\mathcal{E}}[(\delta Y_{i,t-}+\delta\Phi_{i,t}(e))^{+}-(\delta Y_{i,t-})^{+}-\mathbf{1}_{\\{\delta Y_{i,t-}>0\\}}\delta\Phi_{i,t}(e)]\nu(\operatorname{d}\\!e)\operatorname{d}\\!t+\frac{1}{2}\operatorname{d}\\!L_{i,t}$ $\displaystyle\quad\;+\mathbf{1}_{\\{\delta Y_{i,t-}>0\\}}\delta Z_{i,t}^{\top}\operatorname{d}\\!W_{t}+\int_{\mathcal{E}}[(\delta Y_{i,t-}+\delta\Phi_{i,t}(e))^{+}-(\delta Y_{i,t-})^{+}]\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),$ where $L_{i,t}$ is the local time of $\delta Y_{i,t}$ at $0$.
Since $\delta Y_{i,t-}\operatorname{d}\\!L_{i,t}=0$, applying Itô’s formula to $((\delta Y_{i,t})^{+})^{2}$ yields $\displaystyle\operatorname{d}\\!\;((\delta Y_{i,t})^{+})^{2}$ $\displaystyle=-2(\delta Y_{i,t-})^{+}[f_{i}(t,Y_{t-},Z_{i,t},\Phi_{t})-\overline{f}_{i}(t,\overline{Y}_{t-},\overline{Z}_{i,t},\overline{\Phi}_{t})]\operatorname{d}\\!t+\mathbf{1}_{\\{\delta Y_{i,t-}>0\\}}|\delta Z_{i,t}|^{2}\operatorname{d}\\!t$ $\displaystyle\quad\;+\int_{\mathcal{E}}[((\delta Y_{i,t-}+\delta\Phi_{i,t}(e))^{+})^{2}-((\delta Y_{i,t-})^{+})^{2}-2(\delta Y_{i,t-})^{+}\delta\Phi_{i,t}(e)]\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ (2.3) $\displaystyle\quad\;+2(\delta Y_{i,t-})^{+}\delta Z_{i,t}^{\top}\operatorname{d}\\!W_{t}+\int_{\mathcal{E}}[((\delta Y_{i,t-}+\delta\Phi_{i,t}(e))^{+})^{2}-((\delta Y_{i,t-})^{+})^{2}]\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e).$ Using condition 4 and inserting two zero-sum terms, we get $\displaystyle\quad\;f_{i}(t,Y_{t-},Z_{i,t},\Phi_{t})-\overline{f}_{i}(t,\overline{Y}_{t-},\overline{Z}_{i,t},\overline{\Phi}_{t})$ $\displaystyle\leq f_{i}(t,Y_{t-},Z_{i,t},\Phi_{t})-f_{i}(t,\overline{Y}_{t-},\overline{Z}_{i,t},\overline{\Phi}_{t})$ $\displaystyle=[f_{i}(t,Y_{t-},Z_{i,t},\Phi_{t})-f_{i}(t,Y_{t-},Z_{i,t},\Phi_{1,t},\cdots,\overline{\Phi}_{i,t},\cdots,\Phi_{\ell,t})]$ $\displaystyle\quad\;+[f_{i}(t,Y_{t-},Z_{i,t},\Phi_{1,t},\cdots,\overline{\Phi}_{i,t},\cdots,\Phi_{\ell,t})-f_{i}(t,\overline{Y}_{t-},\overline{Z}_{i,t},\overline{\Phi}_{t})].$ By condition 2, the first difference on the right-hand side (RHS) above is bounded from above by $\int_{\mathcal{E}}\gamma_{i,t}(e)\delta\Phi_{i,t}(e)\nu(\operatorname{d}\\!e),$ where $\displaystyle\gamma_{i,t}(e)=\begin{cases}c&\quad\text{ if }\delta\Phi_{i,t}(e)\geq 0;\\\ -1&\quad\text{ if }\delta\Phi_{i,t}(e)<0.\end{cases}$ By condition 3, the second difference on the RHS is bounded from above by $c\Big{(}|\delta Y_{i,t-}|+\sum_{j\neq i}(\delta Y_{j,t-})^{+}+|\delta Z_{i,t}|+\sum_{j\neq i}\int_{\mathcal{E}}(\delta Y_{j,t-}+\delta\Phi_{j,t}(e))^{+}\nu(\operatorname{d}\\!e)\Big{)}.$ Using these estimates and $\nu(\mathcal{E})<\infty$, we deduce that $\displaystyle 2(\delta Y_{i,t-})^{+}[f_{i}(t,Y_{t-},Z_{i,t},\Phi_{t})-\overline{f}_{i}(t,\overline{Y}_{t-},\overline{Z}_{i,t},\overline{\Phi}_{t})]$ $\displaystyle\leq c\sum_{i=1}^{\ell}((\delta Y_{i,t-})^{+})^{2}+\mathbf{1}_{\\{\delta Y_{i,t-}>0\\}}|\delta Z_{i,t}|^{2}+\mathbf{1}_{\\{\delta Y_{i,t-}>0\\}}\sum_{j\neq i}\int_{\mathcal{E}}((\delta Y_{j,t-}+\delta\Phi_{j,t}(e))^{+})^{2}\nu(\operatorname{d}\\!e)$ (2.4) $\displaystyle\quad\;+2(\delta Y_{i,t-})^{+}\int_{\mathcal{E}}\gamma_{i,t}(e)\delta\Phi_{i,t}(e)\nu(\operatorname{d}\\!e).$ Integrating (2.3) from $t$ to $T$, taking conditional expectations and using (2.4), we obtain $\displaystyle\quad((\delta Y_{i,t})^{+})^{2}$ $\displaystyle\leq\mathbb{E}_{t}\int_{t}^{T}\Big{(}c\sum_{i=1}^{\ell}((\delta Y_{i,s-})^{+})^{2}+\mathbf{1}_{\\{\delta Y_{i,s-}>0\\}}\sum_{j\neq i}\int_{\mathcal{E}}((\delta Y_{j,s-}+\delta\Phi_{j,s}(e))^{+})^{2}\nu(\operatorname{d}\\!e)\Big{)}\operatorname{d}\\!s$ $\displaystyle-\mathbb{E}_{t}\int_{t}^{T}\int_{\mathcal{E}}\Big{[}((\delta Y_{i,s-}+\delta\Phi_{i,s}(e))^{+})^{2}-((\delta Y_{i,s-})^{+})^{2}-2(1+\gamma_{i,s}(e))(\delta Y_{i,s-})^{+}\delta\Phi_{i,s}(e)\Big{]}\nu(\operatorname{d}\\!e)\operatorname{d}\\!s.$ Because $\gamma_{i}\in L^{\infty,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$ and $\gamma_{i}\geq-1$, it follows from Lemma 2.1 that (2.5) $-[((\delta Y_{i,s-}+\delta\Phi_{i,s}(e))^{+})^{2}-((\delta Y_{i,s-})^{+})^{2}-2(1+\gamma_{i,s}(e))(\delta Y_{i,s-})^{+}\delta\Phi_{i,s}(e)]\\\ \qquad\leq(\gamma_{i,s}(e)^{2}\vee 1)((\delta Y_{i,s-})^{+})^{2}\leq c((\delta Y_{i,s-})^{+})^{2}.$ Combining the above estimates and using $\nu(\mathcal{E})<\infty$, we obtain (2.6) $\displaystyle((\delta Y_{i,t})^{+})^{2}$ $\displaystyle\leq c\mathbb{E}_{t}\int_{t}^{T}\sum_{i=1}^{\ell}((\delta Y_{i,s-})^{+})^{2}\operatorname{d}\\!s+\sum_{j\neq i}{\mathbb{E}}_{t}\int_{t}^{T}\int_{\mathcal{E}}((\delta Y_{j,s-}+\delta\Phi_{j,s}(e))^{+})^{2}\nu(\operatorname{d}\\!e)\operatorname{d}\\!s,$ where the constant $c$ is independent of $t$, $T$ and $i$. Note that $\displaystyle{\mathbb{E}}_{t}\int_{t}^{T}\int_{\mathcal{E}}((\delta Y_{j,s-}+\delta\Phi_{j,s}(e))^{+})^{2}\nu(\operatorname{d}\\!e)\operatorname{d}\\!s$ $\displaystyle={\mathbb{E}}_{t}\int_{t}^{T}\int_{\mathcal{E}}((\delta Y_{j,s-}+\delta\Phi_{j,s}(e))^{+})^{2}N(\operatorname{d}\\!s,\operatorname{d}\\!e)$ $\displaystyle={\mathbb{E}}_{t}\Big{[}\sum_{n\in\mathbb{N},\ t<T_{n}\leq T}((\delta Y_{j,T_{n}-}+\delta\Phi_{j,T_{n}}(\Delta U_{T_{n}}))^{+})^{2}\Big{]}$ (2.7) $\displaystyle={\mathbb{E}}_{t}\Big{[}\sum_{n\in\mathbb{N},\ t<T_{n}\leq T}((\delta Y_{j,T_{n}})^{+})^{2}\Big{]},$ where $U_{t}:=\int_{0}^{t}\int_{\mathcal{E}}zN(\operatorname{d}\\!s,\operatorname{d}\\!e)$, $\Delta U_{T_{n}}:=U_{T_{n}}-U_{T_{n}-}$, recalling that $\\{T_{n}\\}_{n\in\mathbb{N}}$ denotes the jump times of the underlying Poisson point process. Substituting (2.7) into (2.6) yields $\displaystyle((\delta Y_{i,t})^{+})^{2}$ $\displaystyle\leq c\mathbb{E}_{t}\int_{t}^{T}\sum_{i=1}^{\ell}((\delta Y_{i,s-})^{+})^{2}\operatorname{d}\\!s+\sum_{j\neq i}{\mathbb{E}}_{t}\Big{[}\sum_{n\in\mathbb{N},\ t<T_{n}\leq T}((\delta Y_{j,T_{n}})^{+})^{2}\Big{]}.$ Since the jumps of $\delta Y$ are countable, we can replace $\delta Y_{i,s-}$ by $\delta Y_{i,s}$ in the above integral to get (2.8) $\displaystyle((\delta Y_{i,t})^{+})^{2}$ $\displaystyle\leq c\mathbb{E}_{t}\int_{t}^{T}\sum_{i=1}^{\ell}((\delta Y_{i,s})^{+})^{2}\operatorname{d}\\!s+\sum_{j\neq i}{\mathbb{E}}_{t}\Big{[}\sum_{n\in\mathbb{N},\ t<T_{n}\leq T}((\delta Y_{j,T_{n}})^{+})^{2}\Big{]}.$ For any constant $h\in(0,T]$, set $M(h):=\operatorname*{ess\;sup}\limits_{(t,i)\in[T-h,T]\times\mathcal{M}}((\delta Y_{i,t})^{+})^{2},$ which is finite since $\delta Y$ is bounded. For any $t\in[T-h,T]$, we obtain from (2.8) that $\displaystyle((\delta Y_{i,t})^{+})^{2}$ $\displaystyle\leq c\int_{t}^{T}\sum_{i=1}^{\ell}M(h)\operatorname{d}\\!s+\sum_{j\neq i}{\mathbb{E}}_{t}\Big{[}\sum_{n\in\mathbb{N},\ t<T_{n}\leq T}M(h)\Big{]}$ $\displaystyle=c\ell M(h)(T-t)+M(h)\sum_{j\neq i}{\mathbb{E}}_{t}\int_{t}^{T}\int_{\mathcal{E}}1N(\operatorname{d}\\!s,\operatorname{d}\\!e)$ $\displaystyle=c\ell M(h)(T-t)+M(h)(\ell-1)\nu(\mathcal{E})(T-t)$ $\displaystyle\leq(c\ell+(\ell-1)\nu(\mathcal{E}))M(h)h.$ Taking the essential supremum over $(t,i)\in[T-h,T]\times\mathcal{M}$ on both sides leads to (2.9) $\displaystyle M(h)$ $\displaystyle\leq(c\ell+(\ell-1)\nu(\mathcal{E}))M(h)h.$ Set $h=\min\\{1/(c\ell+(\ell-1)\nu(\mathcal{E})+1),T\\}$ from now on. It then follows from the above that $M(h)=0$; thus $\delta Y_{i,t}\leq 0$ for all $t\in[T-h,T]$. Similarly, using $\delta Y_{i,T-h}\leq 0$ and repeating the above argument on $[0\vee(T-2h),T-h]$, one can get $\delta Y_{i,t}\leq 0$ for all $t\in[0\vee(T-2h),T-h]$. Repeating this procedure, the desired comparison result follows. $\Box$ ###### Remark 2.1.
If the inequalities in conditions 1 and 4 are reversed, then so is the conclusion. ###### Remark 2.2. It is not hard to see that condition 2 is equivalent to the existence of a process $\gamma_{i}\in L^{\infty,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$ with $\gamma_{i}\geq-1$ such that $\displaystyle\quad\;f_{i}(s,Y_{s-},Z_{i,s},\Phi_{1,s},\cdots,\Phi_{i,s},\cdots,\Phi_{\ell,s})$ $\displaystyle\qquad\quad- f_{i}(s,Y_{s-},Z_{i,s},\Phi_{1,s},\cdots,\overline{\Phi}_{i,s},\cdots,\Phi_{\ell,s})$ $\displaystyle\leq\int_{\mathcal{E}}\gamma_{i,s}(e)(\Phi_{i,s}(e)-\overline{\Phi}_{i,s}(e))\nu(\operatorname{d}\\!e).$ Most existing comparison theorems for BSDEJs require the condition $\gamma>-1$ or the even stronger $\gamma>-1+\varepsilon$ in order to utilize the Girsanov theorem; see, e.g., Barles, Buckdahn and Pardoux [2] and Royer [33]. Our requirement, namely $\gamma\geq-1$, is the same as Quenez and Sulem’s [32]. But all these existing comparison theorems work for one-dimensional BSDEJs only. ###### Remark 2.3. Condition 3 holds if, for every $K>0$, there exists a constant $c>0$ (depending on $K$) such that $f_{i}(s,y,z,\phi)-f_{i}(s,\overline{y},\overline{z},\overline{\phi})\\\ \qquad\leq c\Big{(}|y_{i}-\overline{y}_{i}|+\sum_{j\neq i}(y_{j}-\overline{y}_{j})^{+}+|z-\overline{z}|+\sum_{j\neq i}\int_{\mathcal{E}}(y_{j}-\overline{y}_{j}+\phi_{j}(e)-\overline{\phi}_{j}(e))^{+}\nu(\operatorname{d}\\!e)\Big{)}$ holds for all $(y,z,\phi)$ and $(\overline{y},\overline{z},\overline{\phi})\in{\mathbb{R}}^{\ell}\times{\mathbb{R}}^{n}\times L^{2,\nu}({\mathbb{R}}^{\ell})$ satisfying $\phi_{i}\equiv\overline{\phi}_{i}$ and $|y|+|\overline{y}|\leq K$. Since $|y|+|\overline{y}|\leq K$, it is a locally Lipschitz condition w.r.t. $y$. The condition implies that $f_{i}$ is increasing w.r.t. $y_{j}$ and $\phi_{j}$ for every $j\neq i$. Also, the term $\sum_{j\neq i}(y_{j}-\overline{y}_{j})^{+}$ can be removed if jumps do exist, i.e. $\nu(\mathcal{E})>0$. ###### Remark 2.4. We say a generator $f$ is Lipschitz in $(y,z,\phi)$ with Lipschitz constant $c$ if $\displaystyle|f(\omega,t,y,z,\phi)-f(\omega,t,\overline{y},\overline{z},\overline{\phi})|\leq c(|y-\overline{y}|+|z-\overline{z}|+||\phi-\overline{\phi}||_{\nu})~{}\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.}$ holds for all $(y,z,\phi)$, $(\overline{y},\overline{z},\overline{\phi})\in{\mathbb{R}}^{\ell}\times{\mathbb{R}}^{n\times\ell}\times L^{2,\nu}({\mathbb{R}}^{\ell})$. Then condition 3 holds if 1. (1) $f_{i}(s,y,z,\phi)$ is Lipschitz in $(y,z,\phi)$; 2. (2) $f_{i}(s,y,z,\phi)$ is increasing w.r.t. $y_{j}$ for every $j\neq i$; and 3. (3) there exists a constant $c>0$ such that $\displaystyle f_{i}(s,Y_{s-},Z_{i,s},\Phi_{1,s},...,\Phi_{i-1,s},\overline{\Phi}_{i,s},\Phi_{i+1,s},...,\Phi_{\ell,s})$ $\displaystyle\qquad- f_{i}(s,\overline{Y}_{1,s-},...,\overline{Y}_{i-1,s-},Y_{i,s-},\overline{Y}_{i+1,s-},...,\overline{Y}_{\ell,s-},Z_{i,s},\overline{\Phi}_{s})$ $\displaystyle\leq c\sum_{j\neq i}\int_{\mathcal{E}}(Y_{j,s-}+\Phi_{j,s}(e)-\overline{Y}_{j,s-}-\overline{\Phi}_{j,s}(e))^{+}\nu(\operatorname{d}\\!e).$ ###### Remark 2.5.
In (2.5) the condition $\gamma\in L^{\infty,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}}^{\ell})$ can be replaced by the following weaker one: there exist constants $0<h,\varepsilon<1$ such that $\operatorname*{ess\;sup}_{t\in[0,T]}\mathbb{E}_{t}\int_{t}^{T\wedge(t+h)}\int_{\mathcal{E}}|\gamma_{s}(e)|^{2}\nu(\operatorname{d}\\!e)\operatorname{d}\\!s\leq 1-\varepsilon.$ This condition is satisfied, for instance, when $\int_{\mathcal{E}}|\gamma_{\cdot}(e)|^{2}\nu(\operatorname{d}\\!e)\in L^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}})$. Indeed, the above condition implies, for $t\in[T-h,T]$, $\displaystyle\mathbb{E}_{t}\int_{t}^{T}((\delta Y_{i,s-})^{+})^{2}\int_{\mathcal{E}}(\gamma_{i,s}(e)^{2}\vee 1)\nu(\operatorname{d}\\!e)\operatorname{d}\\!s$ $\displaystyle\leq M(h)~{}\mathbb{E}_{t}\int_{t}^{T}\int_{\mathcal{E}}(\gamma_{i,s}(e)^{2}+1)\nu(\operatorname{d}\\!e)\operatorname{d}\\!s$ $\displaystyle\leq M(h)(1-\varepsilon+h\nu(\mathcal{E}))\leq(1-\varepsilon/2)M(h),$ by choosing $h$ small enough. This, together with the remaining arguments in the above proof, leads to an estimate similar to (2.9). ### 2.2. Comparison theorem for square integrable processes Theorem 2.1 requires the state processes to be bounded, which may be too restrictive for applications. The following result relaxes this assumption to square integrable processes, but we have to assume in addition that both $f$ and $\overline{f}$ are globally Lipschitz. ###### Theorem 2.2. We shall use the same notations as in Theorem 2.1. Suppose, for all $i\in\mathcal{M}$, $\displaystyle(Y_{i},Z_{i},\Phi_{i}),(\overline{Y}_{i},\overline{Z}_{i},\overline{\Phi}_{i})\in S^{2}_{\mathbb{F}}(0,T;{\mathbb{R}})\times L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n})\times L^{2,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}}),$ and they satisfy the BSDEJs (2.1) and (2.2). Also suppose that, for every $i\in\mathcal{M}$, 1. (1) conditions 1, 2, 3 and 4 hold; 2. (2) $f_{i}(\cdot,0,0,0)$ and $\overline{f}_{i}(\cdot,0,0,0)\in L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}})$; 3. (3) both $f_{i}$ and $\overline{f}_{i}$ are Lipschitz in $(y,z,\phi)$. Then $Y_{i}\leq\overline{Y}_{i}$ for all $i\in\mathcal{M}$. Proof: For each $m\geq 1$ and $i\in\mathcal{M}$, we denote $\displaystyle\xi^{m}_{i}=\xi_{i}\mathbf{1}_{|\xi|+|\overline{\xi}|\leq m},~{}~{}f^{m}_{i}(t,y,z,\phi)=f_{i}(t,y,z,\phi)\mathbf{1}_{|f(t,0,0,0)|+|\overline{f}(t,0,0,0)|\leq m},$ $\displaystyle\overline{\xi}^{m}_{i}=\overline{\xi}_{i}\mathbf{1}_{|\xi|+|\overline{\xi}|\leq m},~{}~{}\overline{f}^{m}_{i}(t,y,z,\phi)=\overline{f}_{i}(t,y,z,\phi)\mathbf{1}_{|f(t,0,0,0)|+|\overline{f}(t,0,0,0)|\leq m}.$ Note that $\xi^{m}_{i}$, $\overline{\xi}^{m}_{i}$, $f^{m}_{i}(\cdot,0,0,0)$ and $\overline{f}^{m}_{i}(\cdot,0,0,0)$ are bounded by $m$, and the generators $f^{m}=(f^{m}_{1},...,f^{m}_{\ell})$ and $\overline{f}^{m}=(\overline{f}^{m}_{1},...,\overline{f}^{m}_{\ell})$ are both Lipschitz in $(y,z,\phi)$ with the same Lipschitz constant as $f$ and $\overline{f}$.
It then follows from [37, Theorem 2.4] or [2, Theorem 2.1, Proposition 2.2] that the following BSDEJs: $\displaystyle Y_{i,t}^{m}$ $\displaystyle=\xi^{m}_{i}+\int_{t}^{T}f^{m}_{i}(s,Y_{s-}^{m},Z_{i,s}^{m},\Phi_{s}^{m})\operatorname{d}\\!s$ $\displaystyle\qquad\qquad\qquad-\int_{t}^{T}(Z_{i,s}^{m})^{\top}\operatorname{d}\\!W_{s}-\int_{t}^{T}\int_{\mathcal{E}}\Phi_{i,s}^{m}(e)\widetilde{N}(\operatorname{d}\\!s,\operatorname{d}\\!e)~{}\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.},\ i\in\mathcal{M},$ and $\displaystyle\overline{Y}_{i,t}^{m}$ $\displaystyle=\overline{\xi}^{m}_{i}+\int_{t}^{T}\overline{f}^{m}_{i}(s,\overline{Y}_{s-}^{m},\overline{Z}_{i,s}^{m},\overline{\Phi}_{s}^{m})\operatorname{d}\\!s$ $\displaystyle\qquad\qquad\qquad-\int_{t}^{T}(\overline{Z}_{i,s}^{m})^{\top}\operatorname{d}\\!W_{s}-\int_{t}^{T}\int_{\mathcal{E}}\overline{\Phi}_{i,s}^{m}(e)\widetilde{N}(\operatorname{d}\\!s,\operatorname{d}\\!e)~{}\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.},\ i\in\mathcal{M},$ admit unique solutions $(Y^{m},Z^{m},\Phi^{m})$ and $(\overline{Y}^{m},\overline{Z}^{m},\overline{\Phi}^{m})$ respectively, such that $\displaystyle(Y^{m}_{i},Z^{m}_{i},\Phi^{m}_{i}),~{}(\overline{Y}^{m}_{i},\overline{Z}^{m}_{i},\overline{\Phi}^{m}_{i})\in S^{2}_{\mathbb{F}}(0,T;{\mathbb{R}})\times L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n})\times L_{\mathcal{P}}^{2,\nu}(0,T;{\mathbb{R}})\ \mbox{ for all}\ i\in\mathcal{M}.$ We temporarily suppose that (2.10) $\displaystyle Y^{m}_{i},~{}~{}\overline{Y}^{m}_{i}\in S^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}})\ \mbox{ for all}\ i\in\mathcal{M}.$ Then applying Theorem 2.1 leads to (2.11) $\displaystyle Y_{i}^{m}\leq\overline{Y}_{i}^{m}\ \mbox{for all}\ i\in\mathcal{M}.$ From [2, Proposition 2.2], we know there is a constant $c>0$ independent of $m$ such that $\displaystyle{\mathbb{E}}\Big{[}\sup_{0\leq t\leq T}|Y_{t}-Y_{t}^{m}|^{2}\Big{]}\leq c{\mathbb{E}}\Big{[}|\xi-\xi^{m}|^{2}+\int_{0}^{T}|f(t,Y_{t},Z_{t},\Phi_{t})-f^{m}(t,Y_{t},Z_{t},\Phi_{t})|^{2}\operatorname{d}\\!t\Big{]},$ $\displaystyle{\mathbb{E}}\Big{[}\sup_{0\leq t\leq T}|\overline{Y}_{t}-\overline{Y}_{t}^{m}|^{2}\Big{]}\leq c{\mathbb{E}}\Big{[}|\overline{\xi}-\overline{\xi}^{m}|^{2}+\int_{0}^{T}|\overline{f}(t,\overline{Y}_{t},\overline{Z}_{t},\overline{\Phi}_{t})-\overline{f}^{m}(t,\overline{Y}_{t},\overline{Z}_{t},\overline{\Phi}_{t})|^{2}\operatorname{d}\\!t\Big{]}.$ These estimates together with the definitions of $\xi^{m},\overline{\xi}^{m},f^{m},\overline{f}^{m}$ and the dominated convergence theorem lead to $\displaystyle\lim_{m\to\infty}{\mathbb{E}}\Big{[}\sup_{0\leq t\leq T}|Y_{t}-Y_{t}^{m}|^{2}+\sup_{0\leq t\leq T}|\overline{Y}_{t}-\overline{Y}_{t}^{m}|^{2}\Big{]}=0.$ Applying the elementary inequalities $(x^{+})^{2}\leq 2(y^{+})^{2}+2(x-y)^{2}$, $(x+y)^{2}\leq 2x^{2}+2y^{2}$ for $x$, $y\in{\mathbb{R}}$ and (2.11), we have $\displaystyle\quad{\mathbb{E}}\Big{[}\sup_{0\leq t\leq T}\sum_{i=1}^{\ell}[(Y_{i,t}-\overline{Y}_{i,t})^{+}]^{2}\Big{]}$ $\displaystyle\leq{\mathbb{E}}\Big{[}2\sup_{0\leq t\leq T}\sum_{i=1}^{\ell}[(Y_{i,t}^{m}-\overline{Y}_{i,t}^{m})^{+}]^{2}+2\sup_{0\leq t\leq T}\sum_{i=1}^{\ell}(Y_{i,t}-Y_{i,t}^{m}+\overline{Y}^{m}_{i,t}-\overline{Y}_{i,t})^{2}\Big{]}$ $\displaystyle={\mathbb{E}}\Big{[}2\sup_{0\leq t\leq T}\sum_{i=1}^{\ell}(Y_{i,t}-Y_{i,t}^{m}+\overline{Y}^{m}_{i,t}-\overline{Y}_{i,t})^{2}\Big{]}$ $\displaystyle\leq{\mathbb{E}}\Big{[}4\sup_{0\leq t\leq T}\sum_{i=1}^{\ell}(Y_{i,t}-Y_{i,t}^{m})^{2}+4\sup_{0\leq t\leq
T}\sum_{i=1}^{\ell}(\overline{Y}^{m}_{i,t}-\overline{Y}_{i,t})^{2}\Big{]}.$ Sending $m\to\infty$ in the above, we get the desired result $Y_{i}\leq\overline{Y}_{i}$ for all $i\in\mathcal{M}$. It remains to establish (2.10). To this end, let $\beta>0$ be a large constant to be chosen later. Applying Itô’s formula to $e^{\beta t}(Y^{m}_{i,t})^{2}$, for each $i\in\mathcal{M}$, yields $\displaystyle\quad\;e^{\beta t}(Y^{m}_{i,t})^{2}+{\mathbb{E}}_{t}\int_{t}^{T}e^{\beta s}\Big{(}\beta(Y^{m}_{i,s})^{2}+|Z^{m}_{i,s}|^{2}+||\Phi^{m}_{i,s}||^{2}_{\nu}\Big{)}\operatorname{d}\\!s$ $\displaystyle={\mathbb{E}}_{t}[e^{\beta T}(\xi^{m}_{i})^{2}]+{\mathbb{E}}_{t}\int_{t}^{T}2e^{\beta s}Y^{m}_{i,s-}f^{m}_{i}(s,Y_{s-}^{m},Z_{i,s}^{m},\Phi_{s}^{m})\operatorname{d}\\!s$ $\displaystyle\leq m^{2}e^{\beta T}+{\mathbb{E}}_{t}\int_{t}^{T}2e^{\beta s}|Y^{m}_{s-}||f^{m}_{i}(s,Y_{s-}^{m},Z_{i,s}^{m},\Phi_{s}^{m})-f^{m}_{i}(s,0,0,0)|\operatorname{d}\\!s$ $\displaystyle\qquad\qquad\;+{\mathbb{E}}_{t}\int_{t}^{T}2e^{\beta s}|Y^{m}_{s-}||f^{m}_{i}(s,0,0,0)|\operatorname{d}\\!s$ $\displaystyle\leq m^{2}e^{\beta T}+{\mathbb{E}}_{t}\int_{t}^{T}2e^{\beta s}|Y^{m}_{s-}|c\Big{(}|Y_{s-}^{m}|+|Z_{i,s}^{m}|+||\Phi_{s}^{m}||_{\nu}\Big{)}\operatorname{d}\\!s$ $\displaystyle\qquad\qquad\;+{\mathbb{E}}_{t}\int_{t}^{T}e^{\beta s}|Y^{m}_{s-}|^{2}\operatorname{d}\\!s+{\mathbb{E}}_{t}\int_{t}^{T}e^{\beta s}|f^{m}_{i}(s,0,0,0)|^{2}\operatorname{d}\\!s$ $\displaystyle\leq m^{2}(1+T)e^{\beta T}+{\mathbb{E}}_{t}\int_{t}^{T}e^{\beta s}\Big{(}c|Y_{s-}^{m}|^{2}+|Z_{i,s}^{m}|^{2}+\frac{1}{\ell}||\Phi_{s}^{m}||_{\nu}^{2}\Big{)}\operatorname{d}\\!s,$ where the last constant $c$ does not depend on $t$, $\beta$ and $i$. Canceling the common terms involving $|Z_{i,s}^{m}|^{2}$, we get $\displaystyle\quad e^{\beta t}(Y^{m}_{i,t})^{2}+{\mathbb{E}}_{t}\int_{t}^{T}e^{\beta s}\Big{(}\beta(Y^{m}_{i,s})^{2}+||\Phi_{i,s}^{m}||_{\nu}^{2}\Big{)}\operatorname{d}\\!s$ $\displaystyle\leq m^{2}(1+T)e^{\beta T}+{\mathbb{E}}_{t}\int_{t}^{T}e^{\beta s}\Big{(}c|Y_{s-}^{m}|^{2}+\frac{1}{\ell}||\Phi_{s}^{m}||_{\nu}^{2}\Big{)}\operatorname{d}\\!s.$ Summing $i$ from $1$ to $\ell$ gives $\displaystyle\quad e^{\beta t}|Y^{m}_{t}|^{2}+{\mathbb{E}}_{t}\int_{t}^{T}e^{\beta s}\Big{(}\beta|Y^{m}_{s}|^{2}+||\Phi^{m}_{s}||^{2}_{\nu}\Big{)}\operatorname{d}\\!s$ $\displaystyle\leq\ell m^{2}(1+T)e^{\beta T}+{\mathbb{E}}_{t}\int_{t}^{T}e^{\beta s}\Big{(}c\ell|Y_{s-}^{m}|^{2}+||\Phi_{s}^{m}||_{\nu}^{2}\Big{)}\operatorname{d}\\!s$ $\displaystyle=\ell m^{2}(1+T)e^{\beta T}+{\mathbb{E}}_{t}\int_{t}^{T}e^{\beta s}\Big{(}c\ell|Y_{s}^{m}|^{2}+||\Phi_{s}^{m}||_{\nu}^{2}\Big{)}\operatorname{d}\\!s,$ where the last equality is due to the fact that the jumps of $Y$ are countable. By setting $\beta=c\ell$ and canceling the common integrals in the above estimate, we obtain $Y^{m}\in S^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}}^{\ell})$. The assertion for $(\overline{Y}^{m}_{i},\overline{Z}^{m}_{i},\overline{\Phi}^{m}_{i})$ in (2.10) can be similarly proved. This completes the proof. $\Box$ ## 3\. A stochastic LQ control problem with jumps and the related two-dimensional BSDEJ ### 3.1.
Cone-constrained stochastic LQ control with jumps Consider the following ${\mathbb{R}}$-valued linear stochastic differential equation (SDE): (3.1) $\displaystyle\begin{cases}\operatorname{d}\\!X_{t}=\left[A_{t}X_{t-}+B_{t}^{\top}u_{t}\right]\operatorname{d}\\!t+\left[C_{t}X_{t-}+D_{t}u_{t}\right]^{\top}\operatorname{d}\\!W_{t}\\\ \qquad\qquad\qquad+\int_{\mathcal{E}}\left[E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t}\right]\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\ t\in[0,T],\\\ X_{0}=x,\end{cases}$ where $A,\ B,\ C,\ D$ are all $\mathcal{P}$-measurable processes, $E(\cdot),\ F(\cdot)$ are $\mathcal{P}\otimes\mathcal{B}(\mathcal{E})$-measurable stochastic processes of suitable sizes, and $x\in{\mathbb{R}}$ is known. Let $\Pi$ be a given closed cone in ${\mathbb{R}}^{m}$; that is, if $u\in\Pi$, then $\lambda u\in\Pi$ for all $\lambda\geq 0$. It represents the constraint set for controls. The class of admissible controls is defined as the set $\displaystyle\mathcal{U}:=\Big{\\{}u\in L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}}^{m})\;\Big{|}\;u_{t}\in\Pi,~{}\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.}\Big{\\}}.$ If $u\in\mathcal{U}$, then (3.1) admits a unique solution $X$, and we call $(X,u)$ an admissible pair. The cone-constrained stochastic LQ problem is stated as follows: (3.2) $\displaystyle\begin{cases}\mathrm{Minimize}&\ J(x,u)\\\ \mbox{subject to}&\ (X,u)\mbox{ is admissible for}\ (3.1),\end{cases}$ where the cost functional is given by the following quadratic form: (3.3) $\displaystyle J(x,u):=$ $\displaystyle\mathbb{E}\Big{[}\int_{0}^{T}\Big{(}Q_{t}X_{t}^{2}+u_{t}^{\top}R_{t}u_{t}+2X_{t}S_{t}^{\top}u_{t}\Big{)}\operatorname{d}\\!t+GX_{T}^{2}\Big{]}.$ The associated value function is defined as $\displaystyle V(x):=\inf_{u\in\mathcal{U}}J(x,u).$ Problem (3.2) is said to be solvable (at $x$) if there exists a control $u^{*}\in\mathcal{U}$ such that $\displaystyle-\infty<J(x,u^{*})\leq J(x,u),\quad\forall\;u\in\mathcal{U},$ in which case $u^{*}$ is called an optimal control for problem (3.2), and the optimal value is $\displaystyle V(x)=J(x,u^{*}).$ Our aim is to solve problem (3.2). We impose the following assumptions on the coefficients in this section. ###### Assumption 3.1 (Bounded coefficients). It holds that $\displaystyle\begin{cases}A\in L^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}}),\ B\in L_{\mathbb{F}}^{\infty}(0,T;{\mathbb{R}}^{m}),\ C\in L_{\mathbb{F}}^{\infty}(0,T;{\mathbb{R}}^{n}),\\\ D\in L_{\mathbb{F}}^{\infty}(0,T;{\mathbb{R}}^{n\times m}),\ E\in L^{\infty,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}}),\ F\in L_{\mathcal{P}}^{\infty,\nu}(0,T;{\mathbb{R}}^{m}),\\\ Q\in L_{\mathbb{F}}^{\infty}(0,T;{\mathbb{R}}_{+}),\ R\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{S}^{m}),\ S\in L_{\mathbb{F}}^{\infty}(0,T;{\mathbb{R}}^{m}),\ G\in L_{\mathcal{F}_{T}}^{\infty}(\Omega;{\mathbb{R}}_{+}).\end{cases}$ ###### Assumption 3.2 (Standard case). It holds that $\left(\begin{smallmatrix}R&S\\\ S^{\top}&Q\end{smallmatrix}\right)\geq 0$, and there exists a constant $\delta>0$ such that $R\geq\delta\mathbf{1}_{m}$, where $\mathbf{1}_{m}$ denotes the $m$-dimensional identity matrix. ###### Assumption 3.3 (Singular case). It holds that $\left(\begin{smallmatrix}R&S\\\ S^{\top}&Q\end{smallmatrix}\right)\geq 0$ and there exists a constant $\delta>0$ such that $G\geq\delta$ and $D^{\top}D+\int_{\mathcal{E}}F(e)F(e)^{\top}\nu(\operatorname{d}\\!e)\geq\delta\mathbf{1}_{m}$. ### 3.2.
Coupled SRE with jumps Nowadays, it is well known that solutions to stochastic LQ problems depend heavily on the solvability of the related SREs. We now introduce the associated SRE for our problem (3.2).111We will give a heuristic derivation in Appendix A for the readers’ convenience. See also Dong [7] for a special SRE with a single jump stemming from the theory of filtration enlargement. For $(\omega,t,v,P_{i},\Lambda,\Gamma_{i})\in\Omega\times[0,T]\times\Pi\times{\mathbb{R}}\times{\mathbb{R}}^{n}\times L^{\infty,\nu}({\mathbb{R}})$, $i=1,2$, define the following mappings: $\displaystyle H_{1}(\omega,t,v,P_{1},P_{2},\Lambda,\Gamma_{1},\Gamma_{2})$ $\displaystyle:=v^{\top}(R+P_{1}D^{\top}D)v+2(P_{1}(B+D^{\top}C)+D^{\top}\Lambda+S)^{\top}v$ $\displaystyle\qquad+\int_{\mathcal{E}}\Big{[}(P_{1}+\Gamma_{1})\Big{(}((1+E+F^{\top}v)^{+})^{2}-1\Big{)}-2P_{1}(E+F^{\top}v)$ $\displaystyle\qquad\qquad\quad+(P_{2}+\Gamma_{2})((1+E+F^{\top}v)^{-})^{2}\Big{]}\nu(\operatorname{d}\\!e),$ $\displaystyle H_{2}(\omega,t,v,P_{1},P_{2},\Lambda,\Gamma_{1},\Gamma_{2})$ $\displaystyle:=v^{\top}(R+P_{2}D^{\top}D)v-2(P_{2}(B+D^{\top}C)+D^{\top}\Lambda+S)^{\top}v$ $\displaystyle\qquad+\int_{\mathcal{E}}\Big{[}(P_{2}+\Gamma_{2})\Big{(}((-1-E+F^{\top}v)^{-})^{2}-1\Big{)}+2P_{2}(-E+F^{\top}v)$ $\displaystyle\qquad\qquad\quad+(P_{1}+\Gamma_{1})((-1-E+F^{\top}v)^{+})^{2}\Big{]}\nu(\operatorname{d}\\!e),$ and set (3.4) $\displaystyle H_{1}^{*}(\omega,t,P_{1},P_{2},\Lambda,\Gamma_{1},\Gamma_{2}):=\inf_{v\in\Pi}H_{1}(\omega,t,v,P_{1},P_{2},\Lambda,\Gamma_{1},\Gamma_{2}),$ (3.5) $\displaystyle H_{2}^{*}(\omega,t,P_{1},P_{2},\Lambda,\Gamma_{1},\Gamma_{2}):=\inf_{v\in\Pi}H_{2}(\omega,t,v,P_{1},P_{2},\Lambda,\Gamma_{1},\Gamma_{2}).$ The associated SRE for our problem (3.2) is given as follows: (3.6) $\displaystyle\begin{cases}\operatorname{d}\\!P_{1,t}=-\Big{[}(2A+C^{\top}C)P_{1,t-}+2C^{\top}\Lambda_{1,t}+Q+H_{1}^{*}(t,P_{1,t-},P_{2,t-},\Lambda_{1,t},\Gamma_{1,t},\Gamma_{2,t})\Big{]}\operatorname{d}\\!t\\\ \hfill+\Lambda_{1,t}^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\Gamma_{1,t}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\qquad\quad\\\ \operatorname{d}\\!P_{2,t}=-\Big{[}(2A+C^{\top}C)P_{2,t-}+2C^{\top}\Lambda_{2,t}+Q+H_{2}^{*}(t,P_{1,t-},P_{2,t-},\Lambda_{2,t},\Gamma_{1,t},\Gamma_{2,t})\Big{]}\operatorname{d}\\!t\\\ \hfill+\Lambda_{2,t}^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\Gamma_{2,t}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\qquad\quad\\\ P_{1,T}=G,\ P_{2,T}=G,\\\ P_{1,t}\geq 0,\ P_{1,t-}+\Gamma_{1,t}\geq 0,\ P_{2,t}\geq 0,\ P_{2,t-}+\Gamma_{2,t}\geq 0.\end{cases}$ This is a new two-dimensional coupled nonlinear BSDEJ. ###### Remark 3.1. Hu and Zhou [16] studied a cone-constrained LQ problem without jumps; the associated SREs [16, Eq. (3.5) and (3.6)] are decoupled, so that one can solve $P_{1}$ and $P_{2}$ separately. As is well known, $P_{1}$ and $P_{2}$ correspond to the optimal values with positive and negative initial states, respectively. When there is no jump in the model, the optimal state process does not change sign, so that only one of $P_{1}$ and $P_{2}$ is involved. Therefore, they are decoupled. Things become notably different in models with jumps. Because of jumps, the sign of the optimal state process may switch between positive and negative values, so $P_{1}$ and $P_{2}$ are coupled together and one cannot treat them separately. Hence our SRE (3.6) is actually a system of coupled BSDEJs whose solvability is far from trivial compared to the decoupled BSDEJs in [16, Eq.
(3.5) and (3.6)]. If all the coefficients in Assumption 3.1 are predictable with respect to the Brownian filtration, then $\Gamma_{1}=\Gamma_{2}=0$ and the SRE becomes a two-dimensional coupled BSDE without jumps. Even without jumps, the BSDE is still new and cannot be covered by existing results on multi-dimensional BSDEs; see, e.g., Fan, Hu and Tang [11], Hu and Tang [15]. ###### Remark 3.2. If $\Pi$ is symmetric, namely, $-v\in\Pi$ whenever $v\in\Pi$, then $H_{1}^{*}=H_{2}^{*}$ and (3.6) degenerates to a single equation since $(P_{1},\Lambda_{1},\Gamma_{1})=(P_{2},\Lambda_{2},\Gamma_{2})$. In particular, if there is no control constraint, that is, $\Pi={\mathbb{R}}^{m}$, then both $H_{1}^{*}$ and $H_{2}^{*}$ are equal to $\displaystyle(P+\Gamma)E^{2}+2\Gamma E$ $\displaystyle+\Big{(}P(B+D^{\top}C)+D^{\top}\Lambda+S+\int_{\mathcal{E}}((P+\Gamma)E+\Gamma)F\nu(\operatorname{d}\\!e)\Big{)}^{\top}$ $\displaystyle\qquad\times\Big{(}R+PD^{\top}D+\int_{\mathcal{E}}(P+\Gamma)FF^{\top}\nu(\operatorname{d}\\!e)\Big{)}^{-1}$ $\displaystyle\qquad\times\Big{(}P(B+D^{\top}C)+D^{\top}\Lambda+S+\int_{\mathcal{E}}((P+\Gamma)E+\Gamma)F\nu(\operatorname{d}\\!e)\Big{)}.$ Under $\Pi={\mathbb{R}}^{m}$, Zhang, Dong and Meng [39] addressed the solvability of the matrix-valued SREJ under the assumption $R\geq\delta\mathbf{1}_{m}$ and $S\equiv 0$. By contrast, we will solve the BSDEJ (3.6) in both standard and singular cases for a general cone $\Pi$. ###### Definition 3.1. A stochastic process $(P_{1},\Lambda_{1},\Gamma_{1},P_{2},\Lambda_{2},\Gamma_{2})$ is called a solution to the BSDEJ (3.6) if it satisfies (3.6), and $(P_{i},\Lambda_{i},\Gamma_{i})\in S^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}})\times L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n})\times L^{\infty,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$, $i=1,2$. The solution is called nonnegative if $P_{i}\geq 0$, and called uniformly positive if $P_{i}\geq c$ for some deterministic constant $c>0$, $i=1,2$. ### 3.3. Existence of solution to the BSDEJ (3.6) Dong [7] constructed a solution to an SRE with a single jump using two recursive systems of BSDEs driven only by Brownian motions. His decomposition approach is tailor-made for the filtration enlargement framework, and hence fails in the Poisson random measure model, which accommodates possibly countably many jumps. Czichowsky and Schweizer [5] characterized the optimal value process of a cone-constrained mean-variance problem in terms of a coupled system of BSDEs [5, Eq.(4.18)] in a semimartingale model. They claimed in [5, Remark 4.8] that _“Due to the coupling term coming from $\mathfrak{h}$, the BSDE system (4.18) is very complicated. It has a nonlinear non-Lipschitz generator plus a generator with jumps, so that finding a solution by general BSDE techniques seems a formidable challenge”_. We now respond to this formidable challenge in the Wiener-Poisson world by providing a proof of the existence of a solution to (3.6) by pure BSDE techniques. ###### Theorem 3.1 (Existence in Standard case). Suppose Assumptions 3.1 and 3.2 hold. Then the BSDEJ (3.6) admits a nonnegative solution $(P_{1},\Lambda_{1},\Gamma_{1},P_{2},\Lambda_{2},\Gamma_{2})$.
Proof: For $k=1,2,...$, define maps (3.7) $\displaystyle H_{1}^{k}(\omega,t,P_{1},P_{2},\Lambda_{1},\Gamma_{1},\Gamma_{2}):=\inf_{v\in\Pi,|v|\leq k}H_{1}(\omega,t,v,P_{1},P_{2},\Lambda_{1},\Gamma_{1},\Gamma_{2}),$ (3.8) $\displaystyle H_{2}^{k}(\omega,t,P_{1},P_{2},\Lambda_{2},\Gamma_{1},\Gamma_{2}):=\inf_{v\in\Pi,|v|\leq k}H_{2}(\omega,t,v,P_{1},P_{2},\Lambda_{2},\Gamma_{1},\Gamma_{2}).$ Then they are uniformly Lipschitz in $(P_{1},\Lambda_{1},\Gamma_{1},P_{2},\Lambda_{2},\Gamma_{2})$ and decrease to $H_{1}^{*}(\omega,t,P_{1},P_{2},\Lambda_{1},\Gamma_{1},\Gamma_{2})$ and $H_{2}^{*}(\omega,t,P_{1},P_{2},\Lambda_{2},\Gamma_{1},\Gamma_{2})$, respectively, as $k$ goes to infinity. For each $k$, the following system (3.9) $\displaystyle\begin{cases}\operatorname{d}\\!P_{1,t}^{k}=-\Big{[}(2A+C^{\top}C)P_{1,t-}^{k}+2C^{\top}\Lambda_{1,t}^{k}+Q+H_{1}^{k}(t,P_{1,t-}^{k},P_{2,t-}^{k},\Lambda_{1,t}^{k},\Gamma_{1,t}^{k},\Gamma_{2,t}^{k})\Big{]}\operatorname{d}\\!t\\\ \hfill+(\Lambda_{1,t}^{k})^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\Gamma_{1,t}^{k}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\qquad\quad\\\ \operatorname{d}\\!P_{2,t}^{k}=-\Big{[}(2A+C^{\top}C)P_{2,t-}^{k}+2C^{\top}\Lambda_{2,t}^{k}+Q+H_{2}^{k}(t,P_{1,t-}^{k},P_{2,t-}^{k},\Lambda_{2,t}^{k},\Gamma_{1,t}^{k},\Gamma_{2,t}^{k})\Big{]}\operatorname{d}\\!t\\\ \hfill+(\Lambda_{2,t}^{k})^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\Gamma_{2,t}^{k}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\qquad\quad\\\ P_{1,T}^{k}=G,\ P_{2,T}^{k}=G,\end{cases}$ is a two-dimensional BSDEJ with a Lipschitz generator, so by [37, Lemma 2.4], it admits a unique solution $(P_{1}^{k},\Lambda_{1}^{k},\Gamma_{1}^{k},P_{2}^{k},\Lambda_{2}^{k},\Gamma_{2}^{k})$ such that $(P_{i}^{k},\Lambda_{i}^{k},\Gamma_{i}^{k})\in S^{2}_{\mathbb{F}}(0,T;{\mathbb{R}})\times L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n})\times L^{2,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}}),\ i=1,2.$ From the definition of $H_{1}^{k}$, we have $\displaystyle\qquad H_{1}^{k}(t,P_{1},P_{2},\Lambda_{1},\Gamma_{1},\Gamma_{2})-H_{1}^{k}(t,P_{1},P^{\prime}_{2},\Lambda_{1},\Gamma_{1},\Gamma^{\prime}_{2})$ $\displaystyle\leq\sup_{v\in\Pi,|v|\leq k}\int_{\mathcal{E}}((1+E+F^{\top}v)^{-})^{2}(P_{2}+\Gamma_{2}(e)-P^{\prime}_{2}-\Gamma^{\prime}_{2}(e))\nu(\operatorname{d}\\!e)$ $\displaystyle\leq c_{k}\int_{\mathcal{E}}(P_{2}+\Gamma_{2}(e)-P^{\prime}_{2}-\Gamma^{\prime}_{2}(e))^{+}\nu(\operatorname{d}\\!e),$ and $\displaystyle\qquad H_{1}^{k}(t,P_{1},P_{2},\Lambda_{1},\Gamma_{1},\Gamma_{2})-H_{1}^{k}(t,P_{1},P_{2},\Lambda_{1},\Gamma^{\prime}_{1},\Gamma_{2})$ $\displaystyle\leq\sup_{v\in\Pi,|v|\leq k}\int_{\mathcal{E}}(\Gamma_{1}(e)-\Gamma^{\prime}_{1}(e))\Big{(}((1+E+F^{\top}v)^{+})^{2}-1\Big{)}\nu(\operatorname{d}\\!e)$ $\displaystyle\leq\sup_{v\in\Pi,|v|\leq k}\int_{\mathcal{E}}(\Gamma_{1}(e)-\Gamma^{\prime}_{1}(e))((1+E+F^{\top}v)^{+})^{2}\nu(\operatorname{d}\\!e)+\int_{\mathcal{E}}|\Gamma_{1}(e)-\Gamma^{\prime}_{1}(e)|\nu(\operatorname{d}\\!e)$ $\displaystyle\leq c_{k}\int_{\mathcal{E}}(\Gamma_{1}(e)-\Gamma^{\prime}_{1}(e))^{+}\nu(\operatorname{d}\\!e)+\int_{\mathcal{E}}|\Gamma_{1}(e)-\Gamma^{\prime}_{1}(e)|\nu(\operatorname{d}\\!e),$ where $c_{k}<\infty$ is defined as $\displaystyle c_{k}$ $\displaystyle=\operatorname*{ess\;sup}_{v\in\Pi,|v|\leq k}|1+E+F^{\top}v|^{2},~{}\nu(\operatorname{d}\\!e)\textrm{-a.e.}$ Similar estimates for $H_{2}^{k}$ can be established. Hence, according to Theorem 2.2, $P^{k}_{i}$ is decreasing in $k$, for $i=1,2$.
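To make the truncation (3.7)–(3.8) concrete, the following small numerical sketch evaluates $H_{1}^{k}$ in a toy instance: scalar state ($n=1$), two controls ($m=2$), cone $\Pi={\mathbb{R}}_{+}^{2}$, and a Lévy measure consisting of a single unit atom. Every coefficient value below is a hypothetical placeholder, and the grid search is purely illustrative; it plays no role in the proof.

```python
import numpy as np

# Toy data: n = 1, m = 2, Pi = R_+^2, nu = unit mass at one atom (all values hypothetical).
R = np.array([[2.0, 0.3], [0.3, 1.5]]); S = np.array([0.1, -0.2])
B = np.array([0.4, 0.1]); C = np.array([0.3]); D = np.array([[0.5, 0.2]])  # D is n x m
E, F = 0.2, np.array([0.3, -0.1])              # jump coefficients at the single atom
P1, P2, Gam1, Gam2 = 1.0, 0.8, 0.1, -0.05      # current values of (P_i, Gamma_i)
Lam1 = np.array([0.2])

def H1(v):
    """H_1(t, v, P1, P2, Lam1, Gam1, Gam2) when nu is a unit mass at a single atom."""
    quad = v @ (R + P1 * (D.T @ D)) @ v
    lin = 2.0 * (P1 * (B + D.T @ C) + D.T @ Lam1 + S) @ v
    a = 1.0 + E + F @ v
    jump = ((P1 + Gam1) * (max(a, 0.0) ** 2 - 1.0)
            - 2.0 * P1 * (E + F @ v)
            + (P2 + Gam2) * min(a, 0.0) ** 2)
    return quad + lin + jump

def H1k(k, n=121):
    """Grid approximation of H_1^k = inf over {v in Pi : |v| <= k}."""
    s = np.linspace(0.0, k, n)
    return min(H1(np.array([a, b])) for a in s for b in s if a * a + b * b <= k * k)

print([round(H1k(k), 4) for k in (1, 2, 4, 8)])  # non-increasing in k (up to grid error)
```

Since $v=0$ is feasible for every $k$ and the feasible sets grow with $k$, the printed values decrease towards $H_{1}^{*}$, mirroring the monotone approximation scheme used in the proof.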
Next, we show that the sequence $\\{P_{i}^{k}\\}_{k=1,2,...}$ is nonnegative and uniformly bounded from above, for $i=1,2$. From Assumption 3.1, there exists a constant $c>0$ such that $\displaystyle 2A+C^{\top}C+\int_{\mathcal{E}}E(e)^{2}\nu(\operatorname{d}\\!e)\leq c,\ Q\leq c,\ G\leq c.$ It is easy to check that $(\overline{P}_{1,t},\overline{\Lambda}_{1,t},\overline{\Gamma}_{1,t})=(\overline{P}_{2,t},\overline{\Lambda}_{2,t},\overline{\Gamma}_{2,t})=((c+1)e^{c(T-t)}-1,0,0)$ satisfies the following two-dimensional BSDEJ (3.10) $\displaystyle\begin{cases}\operatorname{d}\\!\overline{P}_{1}=-\Big{[}c\overline{P}_{1}+C^{\top}\overline{\Lambda}_{1}+c+\int_{\mathcal{E}}\Big{(}\overline{\Gamma}_{1}\big{(}((1+E)^{+})^{2}-1\big{)}+\overline{\Gamma}_{2}((1+E)^{-})^{2}\Big{)}\nu(\operatorname{d}\\!e)\Big{]}\operatorname{d}\\!t\\\ \hfill+\overline{\Lambda}_{1}^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\overline{\Gamma}_{1}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\qquad\quad\\\ \operatorname{d}\\!\overline{P}_{2}=-\Big{[}c\overline{P}_{2}+C^{\top}\overline{\Lambda}_{2}+c+\int_{\mathcal{E}}\Big{(}\overline{\Gamma}_{2}\big{(}((1+E)^{+})^{2}-1\big{)}+\overline{\Gamma}_{1}((1+E)^{-})^{2}\Big{)}\nu(\operatorname{d}\\!e)\Big{]}\operatorname{d}\\!t\\\ \hfill+\overline{\Lambda}_{2}^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\overline{\Gamma}_{2}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\qquad\quad\\\ \overline{P}_{1,T}=c,~{}\overline{P}_{2,T}=c.\end{cases}$ By the definition of $H_{1}^{k}$, we have $\displaystyle H_{1}^{k}(t,\overline{P}_{1},\overline{P}_{2},\overline{\Lambda}_{1},\overline{\Gamma}_{1},\overline{\Gamma}_{2})$ $\displaystyle\leq H_{1}(t,0,\overline{P}_{1},\overline{P}_{2},\overline{\Lambda}_{1},\overline{\Gamma}_{1},\overline{\Gamma}_{2})$ $\displaystyle=\int_{\mathcal{E}}\Big{(}\overline{P}_{1}E^{2}+\overline{\Gamma}_{1}\big{(}((1+E)^{+})^{2}-1\big{)}+\overline{\Gamma}_{1}((1+E)^{-})^{2}\Big{)}\nu(\operatorname{d}\\!e),$ so $\displaystyle\quad(2A+C^{\top}C)\overline{P}_{1}+2C^{\top}\overline{\Lambda}_{1}+Q+H_{1}^{k}(t,\overline{P}_{1},\overline{P}_{2},\overline{\Lambda}_{1},\overline{\Gamma}_{1},\overline{\Gamma}_{2})$ $\displaystyle\leq c\overline{P}_{1}+C^{\top}\overline{\Lambda}_{1}+c+\int_{\mathcal{E}}\Big{(}\overline{\Gamma}_{1}\big{(}((1+E)^{+})^{2}-1\big{)}+\overline{\Gamma}_{1}((1+E)^{-})^{2}\Big{)}\nu(\operatorname{d}\\!e).$ Similarly, we have $\displaystyle\quad(2A+C^{\top}C)\overline{P}_{2}+2C^{\top}\overline{\Lambda}_{2}+Q+H_{2}^{k}(t,\overline{P}_{1},\overline{P}_{2},\overline{\Lambda}_{2},\overline{\Gamma}_{1},\overline{\Gamma}_{2})$ $\displaystyle\leq c\overline{P}_{2}+C^{\top}\overline{\Lambda}_{2}+c+\int_{\mathcal{E}}\Big{(}\overline{\Gamma}_{2}\big{(}((1+E)^{+})^{2}-1\big{)}+\overline{\Gamma}_{1}((1+E)^{-})^{2}\Big{)}\nu(\operatorname{d}\\!e).$ Keeping the above two inequalities in mind and applying Theorem 2.2 to BSDEJs (3.9) and (3.10), we have for $i=1,2$, $k=1,2,...$ (3.11) $\displaystyle P_{i,t}^{k}\leq\overline{P}_{i,t}\leq M,$ where $M:=(c+1)e^{cT}-1$.
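As a quick sanity check on the choice of $\overline{P}$ (a symbolic sketch only): with $\overline{\Lambda}=\overline{\Gamma}=0$ the system (3.10) reduces to the scalar ODE $\operatorname{d}\\!\overline{P}/\operatorname{d}\\!t=-(c\overline{P}+c)$ with $\overline{P}_{T}=c$, and sympy confirms that $(c+1)e^{c(T-t)}-1$ is its solution.

```python
import sympy as sp

# Deterministic reduction of (3.10): dP/dt = -(c*P + c), P(T) = c (martingale parts zero).
t, T, c = sp.symbols('t T c', positive=True)
Pbar = (c + 1) * sp.exp(c * (T - t)) - 1

print(sp.simplify(sp.diff(Pbar, t) + c * Pbar + c))  # 0, so the ODE holds
print(sp.simplify(Pbar.subs(t, T) - c))              # 0, so the terminal condition holds
```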
On the other hand, notice that $(\underline{P}_{1,t},\underline{\Lambda}_{1,t},\underline{\Gamma}_{1,t})=(\underline{P}_{2,t},\underline{\Lambda}_{2,t},\underline{\Gamma}_{2,t}):=(0,0,0)$ satisfies $\displaystyle\begin{cases}\operatorname{d}\\!\underline{P}=\underline{\Lambda}^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\underline{\Gamma}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\\\ \underline{P}_{T}=0,\end{cases}$ and $\displaystyle\quad(2A+C^{\top}C)\underline{P}_{1}+2C^{\top}\underline{\Lambda}_{1}+Q+H_{1}^{k}(t,\underline{P}_{1},\underline{P}_{2},\underline{\Lambda}_{1},\underline{\Gamma}_{1},\underline{\Gamma}_{2})$ $\displaystyle\geq Q+\inf_{v\in{\mathbb{R}}^{m}}(v^{\top}Rv+2S^{\top}v)=Q-S^{\top}R^{-1}S\geq 0,$ thanks to Assumption 3.2. Hence, by Theorem 2.2 again, (3.12) $\displaystyle P^{k}_{i,t}\geq\underline{P}_{t}=0,\ i=1,2,~{}k=1,2,...$ Notice, for $i=1,2$, $\displaystyle{\mathbb{E}}\int_{0}^{T}\int_{\mathcal{E}}\mathbf{1}_{\\{P_{i,t-}^{k}+\Gamma_{i,t}^{k}(e)<0\\}}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ $\displaystyle={\mathbb{E}}\int_{0}^{T}\int_{\mathcal{E}}\mathbf{1}_{\\{P_{i,t-}^{k}+\Gamma_{i,t}^{k}(e)<0\\}}N(\operatorname{d}\\!t,\operatorname{d}\\!e)$ $\displaystyle={\mathbb{E}}\Big{[}\sum_{n\in\mathbb{N},T_{n}\leq T}\mathbf{1}_{\\{P_{i,T_{n}-}^{k}+\Gamma_{i,T_{n}}^{k}(\Delta U_{T_{n}})<0\\}}\Big{]}$ $\displaystyle={\mathbb{E}}\Big{[}\sum_{n\in\mathbb{N},T_{n}\leq T}\mathbf{1}_{\\{P_{i,T_{n}}^{k}<0\\}}\Big{]}=0,$ where $U_{t}=\int_{0}^{t}\int_{\mathcal{E}}e\,N(\operatorname{d}\\!s,\operatorname{d}\\!e)$ and $\Delta U_{T_{n}}=U_{T_{n}}-U_{T_{n}-}$, hence, (3.13) $\displaystyle P_{i,t-}^{k}+\Gamma_{i,t}^{k}\geq 0.$ Similarly, we can establish (3.14) $\displaystyle P_{i,t-}^{k}+\Gamma_{i,t}^{k}\leq M.$ Now we obtain $-M\leq-P_{i,t-}^{k}\leq\Gamma_{i,t}^{k}\leq M-P_{i,t-}^{k}\leq M.$ Hence, $\Gamma_{i}^{k}$, $k=1,2,\cdots,$ are uniformly bounded by $M$, and thus belong to $L^{\infty,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$. Since $P_{i}^{k}$ is decreasing w.r.t. $k$, we can define $P_{i,t}:=\lim_{k\to\infty}P_{i,t}^{k},\ i=1,2$. Combining (3.11) and (3.12), it follows $0\leq P_{i,t}\leq M,\ i=1,2,~{}t\in[0,T].$ Applying Itô’s formula to $(P_{1,t}^{k})^{2}$, we deduce that $\displaystyle\begin{cases}\operatorname{d}\\!\;(P_{1,t}^{k})^{2}=\Big{\\{}-2P_{1}^{k}\Big{[}(2A+C^{\top}C)P_{1,t-}^{k}+2C^{\top}\Lambda_{1,t}^{k}+Q+H_{1}^{k}(t,P_{1,t-}^{k},P_{2,t-}^{k},\Lambda_{1,t}^{k},\Gamma_{1,t}^{k},\Gamma_{2,t}^{k})\Big{]}\\\ \qquad\qquad\qquad+|\Lambda_{1}^{k}|^{2}+\int_{\mathcal{E}}\Gamma_{1}^{k}(e)^{2}\nu(\operatorname{d}\\!e)\Big{\\}}\operatorname{d}\\!t\\\ \hfill+2P_{1}^{k}(\Lambda_{1}^{k})^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}[(P_{1,t-}^{k}+\Gamma_{1,t}^{k}(e))^{2}-(P_{1,t-}^{k})^{2}]\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\qquad\quad\\\ (P_{1,T}^{k})^{2}=G^{2}.\end{cases}$ Since $0\leq P_{i}^{k},P_{i}^{k}+\Gamma_{i}^{k}\leq M,i=1,2$, and $H_{1}^{k}\leq\int_{\mathcal{E}}\Big{[}(P_{1}+\Gamma_{1})\Big{(}((1+E)^{+})^{2}-1\Big{)}-2P_{1}E+(P_{2}+\Gamma_{2})((1+E)^{-})^{2}\Big{]}\nu(\operatorname{d}\\!e)\leq c,$ by taking expectations on both sides of the above and integrating over $[0,T]$, we have (3.15) $\displaystyle\quad(P_{1,0}^{k})^{2}+\frac{1}{2}{\mathbb{E}}\int_{0}^{T}|\Lambda_{1}^{k}|^{2}\operatorname{d}\\!s+{\mathbb{E}}\int_{0}^{T}\int_{\mathcal{E}}\Gamma_{1}^{k}(e)^{2}\nu(\operatorname{d}\\!e)\operatorname{d}\\!s\leq c,$ where $c>0$ is a constant independent of $k$.
Therefore, the sequence $(\Lambda_{1}^{k},\Gamma_{1}^{k})$, $k=1,2,\cdots,$ is bounded in $L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n})\times L^{2,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$, thus we can extract a subsequence (which is still denoted by $(\Lambda_{1}^{k},\Gamma_{1}^{k})$) converging in the weak sense to some $(\Lambda_{1},\Gamma_{1})\in L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n})\times L^{2,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$. Similar considerations applied to $(P_{2,t}^{k})^{2}$ yield some $(\Lambda_{2},\Gamma_{2})\in L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n})\times L^{2,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$ which is the weak limit of $(\Lambda_{2}^{k},\Gamma_{2}^{k})$. Following Kobylanski’s argument [20, Proposition 2.4] (see also Antonelli and Mancini [1, Theorem 1], Kohlmann and Tang [19, Theorem 2.1]), we establish in Appendix B the following strong convergence: ###### Lemma 3.1. It holds that (3.16) $\displaystyle\lim_{k\to\infty}{\mathbb{E}}\int_{0}^{T}|\Lambda_{i}^{k}-\Lambda_{i}|^{2}\operatorname{d}\\!t=0,\ \lim_{k\to\infty}{\mathbb{E}}\int_{0}^{T}\int_{\mathcal{E}}|\Gamma_{i}^{k}-\Gamma_{i}|^{2}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t=0,\ i=1,2.$ Furthermore, $(P_{1},\Lambda_{1},\Gamma_{1},P_{2},\Lambda_{2},\Gamma_{2})$ is a nonnegative solution to the BSDEJ (3.6). This completes the proof. $\Box$ ###### Theorem 3.2 (Existence in Singular case). Suppose Assumptions 3.1 and 3.3 hold. Then the BSDEJ (3.6) admits a uniformly positive solution $(P_{1},\Lambda_{1},\Gamma_{1},P_{2},\Lambda_{2},\Gamma_{2})$. Proof: Similar to the proof of Theorem 3.1, one can show the existence of a nonnegative solution $(P_{1},\Lambda_{1},\Gamma_{1},P_{2},\Lambda_{2},\Gamma_{2})$ to the BSDEJ (3.6), so we omit the details. We only give a sketch of how to find a uniformly positive lower bound for such a solution. Under Assumptions 3.1 and 3.3, there exists a constant $c_{2}>0$ such that $2A+C^{\top}C+\int_{\mathcal{E}}E^{2}\nu(\operatorname{d}\\!e)-\delta^{-1}\Big{|}B+D^{\top}C\pm\int_{\mathcal{E}}EF\nu(\operatorname{d}\\!e)\Big{|}^{2}\geq-c_{2},$ where $\delta$ is the constant in Assumption 3.3.
Notice $(\underline{P}_{1,t},\underline{\Lambda}_{1,t},\underline{\Gamma}_{1,t})=(\underline{P}_{2,t},\underline{\Lambda}_{2,t},\underline{\Gamma}_{2,t}):=(\delta e^{-c_{2}(T-t)},0,0)$ solves the following BSDEJ (3.17) $\displaystyle\begin{cases}\operatorname{d}\\!\underline{P}=-(-c_{2}\underline{P}+C^{\top}\underline{\Lambda})\operatorname{d}\\!t+\underline{\Lambda}^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\underline{\Gamma}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\\\ \underline{P}_{T}=\delta.\end{cases}$ We have the following estimates: $\displaystyle\qquad H_{1}^{k}(t,\underline{P}_{1},\underline{P}_{2},\underline{\Lambda}_{1},\underline{\Gamma}_{1},\underline{\Gamma}_{2})$ $\displaystyle\geq\inf_{v\in{\mathbb{R}}^{m}}H_{1}(t,v,\underline{P}_{1},\underline{P}_{2},\underline{\Lambda}_{1},\underline{\Gamma}_{1},\underline{\Gamma}_{2})$ $\displaystyle=\inf_{v\in{\mathbb{R}}^{m}}H_{1}(t,v,\underline{P}_{1},\underline{P}_{2},0,0,0)$ $\displaystyle\geq\inf_{v\in{\mathbb{R}}^{m}}\Big{[}v^{\top}Rv+2S^{\top}v\Big{]}+\underline{P}_{1}\int_{\mathcal{E}}E^{2}\nu(\operatorname{d}\\!e)$ $\displaystyle\qquad+\underline{P}_{1}\inf_{v\in{\mathbb{R}}^{m}}\Big{[}v^{\top}(D^{\top}D+\int_{\mathcal{E}}F(e)F(e)^{\top}\nu(\operatorname{d}\\!e))v+2\Big{(}B+D^{\top}C+\int_{\mathcal{E}}EF\nu(\operatorname{d}\\!e)\Big{)}^{\top}v\Big{]}$ $\displaystyle\geq-Q+\underline{P}_{1}\Big{[}\int_{\mathcal{E}}E^{2}\nu(\operatorname{d}\\!e)-\delta^{-1}\Big{|}B+D^{\top}C+\int_{\mathcal{E}}EF\nu(\operatorname{d}\\!e)\Big{|}^{2}\Big{]},$ where we used $\underline{\Lambda}_{1}=0$, $\underline{\Gamma}_{1}=\underline{\Gamma}_{2}=0$ in the equality, $\underline{P}_{1}=\underline{P}_{2}>0$ in the second inequality, and $\left(\begin{smallmatrix}R&S\\\ S^{\top}&Q\end{smallmatrix}\right)\geq 0,\quad D^{\top}D+\int_{\mathcal{E}}F(e)F(e)^{\top}\nu(\operatorname{d}\\!e)\geq\delta\mathbf{1}_{m}$ in the last inequality. A similar result also holds for $H_{2}^{k}(t,\underline{P}_{1},\underline{P}_{2},\underline{\Lambda}_{2},\underline{\Gamma}_{1},\underline{\Gamma}_{2})$. Applying Theorem 2.2 to the BSDEJs (3.9) and (3.17), we get, for $i=1,2$, (3.18) $\displaystyle P_{i,t}^{k}\geq\underline{P}_{i,t}=\delta e^{-c_{2}(T-t)}\geq\delta e^{-c_{2}T},\ t\in[0,T],$ which leads to the desired lower bound. $\Box$ ### 3.4. Solution to the LQ problem (3.2) In this subsection we will present an explicit solution to the LQ problem (3.2) in terms of solutions to the BSDEJ (3.6). For $P_{i}>0$, $\Lambda_{i}\in{\mathbb{R}}^{n}$, $\Gamma_{i}\in L^{2,\nu},\ i=1,2$, define $\displaystyle\hat{v}_{1}(\omega,t,P_{1},P_{2},\Lambda_{1},\Gamma_{1},\Gamma_{2})=\operatorname*{argmin}_{v\in\Pi}H_{1}(\omega,t,v,P_{1},P_{2},\Lambda_{1},\Gamma_{1},\Gamma_{2}),$ (3.19) $\displaystyle\hat{v}_{2}(\omega,t,P_{1},P_{2},\Lambda_{2},\Gamma_{1},\Gamma_{2})=\operatorname*{argmin}_{v\in\Pi}H_{2}(\omega,t,v,P_{1},P_{2},\Lambda_{2},\Gamma_{1},\Gamma_{2}).$ ###### Theorem 3.3. Let $(P_{i},\Lambda_{i},\Gamma_{i})\in S^{\infty}_{\mathbb{F}}(0,T;{\mathbb{R}})\times L^{2}_{\mathbb{F}}(0,T;{\mathbb{R}}^{n})\times L^{\infty,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}}),\ i=1,2$, be a nonnegative (in the standard case) or uniformly positive (in the singular case) solution to the BSDEJ (3.6). Then the state feedback control given by $\displaystyle u^{*}(t,X)=\hat{v}_{1}(\omega,t,P_{1,t-},P_{2,t-},\Lambda_{1,t},\Gamma_{1,t},\Gamma_{2,t})X_{t-}^{+}+\hat{v}_{2}(\omega,t,P_{1,t-},P_{2,t-},\Lambda_{2,t},\Gamma_{1,t},\Gamma_{2,t})X_{t-}^{-},$ is optimal for the LQ problem (3.2).
Moreover, the optimal value is $\displaystyle V(x)=P_{1,0}(x^{+})^{2}+P_{2,0}(x^{-})^{2}.$ The proof of Theorem 3.3 is standard, and thus omitted here; please see [16, Theorem 5.2] or [39, Theorem 5.2] for the standard verification argument. As a byproduct of Theorem 3.3, we have the following uniqueness result. ###### Theorem 3.4. Suppose Assumptions 3.1 and 3.2 (resp. Assumptions 3.1 and 3.3) hold. Then the BSDEJ (3.6) admits at most one nonnegative (resp. uniformly positive) solution. It seems a challenging task to establish this result by pure BSDE techniques. ## 4\. Application to mean-variance portfolio selection problem Consider a financial market consisting of a risk-free asset (the money market instrument or bond) whose price is $S_{0}$ and $m$ risky securities (the stocks) whose prices are $S_{1},\ldots,S_{m}$. We assume $m\leq n$, i.e. the number of risky securities is no more than the dimension of the Brownian motion. The asset prices $S_{k}$, $k=0,1,\ldots,m,$ are driven by stochastic differential equations (SDEs): $\displaystyle\begin{cases}\operatorname{d}\\!S_{0,t}=r_{t}S_{0,t}\operatorname{d}\\!t,\\\ S_{0,0}=s_{0},\end{cases}$ and $\displaystyle\begin{cases}\operatorname{d}\\!S_{k,t}=S_{k,t}\Big{(}(\mu_{k,t}+r_{t})\operatorname{d}\\!t+\sum\limits_{j=1}^{n}\sigma_{kj,t}\operatorname{d}\\!W_{j,t}+\int_{\mathcal{E}}F_{k,t}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e)\Big{)},\\\ S_{k,0}=s_{k},\end{cases}$ where $r$ is the interest rate process and, for every $k=1,\ldots,m$, $\mu_{k}$, $\sigma_{k}:=(\sigma_{k1},\ldots,\sigma_{kn})$ and $F_{k}$ are, respectively, the mean excess return rate process, the volatility process and the jump coefficient process of the $k$-th risky security. Define the vectors $\mu=(\mu_{1},\ldots,\mu_{m})^{\top}$, $F=(F_{1},\ldots,F_{m})^{\top}$ and matrix $\displaystyle\sigma=\left(\begin{array}[]{c}\sigma_{1}\\\ \vdots\\\ \sigma_{m}\\\ \end{array}\right)\equiv(\sigma_{kj})_{m\times n}.$ We shall assume, in this section, ###### Assumption 4.1. The interest rate $r$ is a bounded deterministic measurable function of $t$, $\mu\in L_{\mathbb{F}}^{\infty}(0,T;{\mathbb{R}}^{m}),\ \sigma\in L_{\mathbb{F}}^{\infty}(0,T;{\mathbb{R}}^{m\times n}),\ F\in L_{\mathcal{P}}^{\infty,\nu}(0,T;{\mathbb{R}}^{m}),$ and there exists a constant $\delta>0$ such that $\sigma\sigma^{\top}+\int_{\mathcal{E}}F(e)F(e)^{\top}\nu(\operatorname{d}\\!e)\geq\delta\mathbf{1}_{m}$ for all $t\in[0,T]$. A small investor, whose actions cannot affect the asset prices, will decide at every time $t\in[0,T]$ the amount $\pi_{j,t}$ of his wealth to invest in the $j$-th risky asset, $j=1,\ldots,m$. The vector process $\pi:=(\pi_{1},\ldots,\pi_{m})^{\top}$ is called a portfolio of the investor. Then the investor’s self-financing wealth process $X$ corresponding to a portfolio $\pi$ is the unique strong solution of the SDE: (4.1) $\displaystyle\begin{cases}\operatorname{d}\\!X_{t}=[r_{t}X_{t-}+\pi_{t}^{\top}\mu_{t}]\operatorname{d}\\!t+\pi_{t}^{\top}\sigma_{t}\operatorname{d}\\!W_{t}+\int_{\mathcal{E}}\pi_{t}^{\top}F_{t}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\\\ X_{0}=x.\end{cases}$ The admissible portfolio set is defined as $\displaystyle\mathcal{U}=\Big{\\{}\pi\in L^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{m})\;\Big{|}\;\pi_{t}\in\Pi~{}\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.}\Big{\\}},$ where $\Pi\subseteq{\mathbb{R}}^{m}$ is a given closed convex cone.
For instance, $\Pi={\mathbb{R}}^{m}$ means there is no trading constraint, while $\Pi={\mathbb{R}}_{+}^{m}$ means shorting is not allowed in the market. For any $\pi\in\mathcal{U}$, the SDE (4.1) has a unique strong solution. Unlike in the previous sections, in this section we require the constraint set $\Pi$ to be convex in order to apply the dual approach below. For a given expectation level $z\geq xe^{\int_{0}^{T}r_{s}\operatorname{d}\\!s}$, the investor’s mean-variance problem is to $\displaystyle\mathrm{Minimize}$ $\displaystyle\quad\mathrm{Var}(X_{T})\equiv{\mathbb{E}}[X_{T}^{2}-z^{2}],$ (4.2) $\displaystyle\mathrm{s.t.}$ $\displaystyle\quad\begin{cases}{\mathbb{E}}[X_{T}]=z,\\\ \pi\in\mathcal{U}.\end{cases}$ ###### Remark 4.1. Lim [23] studied a mean-variance problem with jumps without portfolio constraints, i.e. $\Pi={\mathbb{R}}^{m}$. In his model, all the coefficients in (4.1) are assumed to be predictable with respect to the Brownian motion filtration, so no jump term enters his SRE, which is exactly the same as in the model without jumps. We shall say that the mean-variance problem (4.2) is feasible for a given level $z\geq xe^{\int_{0}^{T}r_{s}\operatorname{d}\\!s}$ if there is a portfolio $\pi\in\mathcal{U}$ which satisfies the target constraint ${\mathbb{E}}[X_{T}]=z$. An optimal portfolio for (4.2) is called an efficient portfolio corresponding to $z$ and the corresponding $(\sqrt{\mathrm{Var}(X_{T})},z)$ is called an efficient point. The set of all efficient points, with $z\geq xe^{\int_{0}^{T}r_{s}\operatorname{d}\\!s}$, is called the efficient frontier. Define the dual cone of $\Pi$ as $\widehat{\Pi}:=\Big{\\{}y\in{\mathbb{R}}^{m}\;\Big{|}\;x^{\top}y\leq 0\mbox{ for all $x\in\Pi$}\Big{\\}}.$ The following result gives an equivalent condition for the feasibility of (4.2). The proof is exactly the same as that of [13, Theorem 5.3], so we omit it. ###### Theorem 4.1 (Feasibility). Under Assumption 4.1, the mean-variance problem (4.2) is feasible for any $z\geq xe^{\int_{0}^{T}r_{t}\operatorname{d}\\!t}$ if and only if (4.3) $\int_{0}^{T}\mathbb{P}(\mu_{t}\notin\widehat{\Pi})\operatorname{d}\\!t>0.$ For the rest of this section, we will always assume (4.3) holds. The way to solve (4.2) is by now rather standard. To deal with the constraint ${\mathbb{E}}[X_{T}]=z$, we introduce a Lagrange multiplier $-2\lambda\in{\mathbb{R}}$ and obtain the following _relaxed_ optimization problem: (4.4) $\displaystyle\inf$ $\displaystyle\quad J(x,\pi,\lambda;z)={\mathbb{E}}[(X_{T}-\lambda)^{2}]-(\lambda-z)^{2},$ $\displaystyle\mathrm{s.t.}$ $\displaystyle\quad\pi\in\mathcal{U}.$ Denote its optimal value as $V(x,\lambda;z)=\inf_{\pi\in\mathcal{U}}J(x,\pi,\lambda;z).$ According to the Lagrange duality theorem (see Luenberger [24]), (4.5) $\displaystyle\inf_{\pi\in\mathcal{U},{\mathbb{E}}[X_{T}]=z}\mathrm{Var}(X_{T})=\sup_{\lambda\in{\mathbb{R}}}V(x,\lambda;z).$ So we can solve problem (4.2) by a two-step procedure: first determine $V(x,\lambda;z)$ for every $\lambda$, and then find a $\lambda^{*}$ to maximize $\lambda\mapsto V(x,\lambda;z)$.
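As a numerical illustration of this two-step procedure (a sketch with made-up inputs, anticipating the explicit formula for $V(x,\lambda;z)$ in Theorem 4.2 below and the closed-form $\lambda^{*}$ computed at the end of this section): for hypothetical values of $P_{1,0}$, $P_{2,0}$, the interest rate and the target $z$, a grid search over $\lambda$ recovers the closed-form maximizer.

```python
import numpy as np

# Hypothetical inputs: disc = e^{-int_0^T r_s ds}; P10, P20 stand for P_{1,0}, P_{2,0}
# from (4.7) and are made up here, subject only to 0 < P20 * disc**2 < 1 (Lemma 4.1).
disc = float(np.exp(-0.03))       # r = 3%, T = 1
P10, P20 = 0.9, 0.8
x, z = 1.0, 1.1                   # initial wealth and target mean, z >= x / disc

def V(lam):
    """V(x, lam; z) as given in Theorem 4.2 below."""
    d = x - lam * disc
    return P10 * max(d, 0.0) ** 2 + P20 * max(-d, 0.0) ** 2 - (lam - z) ** 2

# Step 2: maximize lam -> V(x, lam; z); V is concave, and the maximizer lies above z here.
grid = np.linspace(z, 5.0, 20001)
lam_grid = grid[np.argmax([V(l) for l in grid])]

# Closed-form maximizer derived at the end of Section 4.
lam_star = (z - x * P20 * disc) / (1.0 - P20 * disc ** 2)
print(lam_grid, lam_star)         # agree up to the grid spacing
```

Note that on this grid only the $P_{2,0}$ term is active, since $x-\lambda e^{-\int_{0}^{T}r_{s}\operatorname{d}\\!s}\leq 0$ for $\lambda\geq z\geq xe^{\int_{0}^{T}r_{s}\operatorname{d}\\!s}$.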
The relaxed problem (4.4) is a special stochastic LQ problem (3.2) studied in Section 3, where (4.6) $\displaystyle A=r,\ B=\mu,\ C=0,\ D=\sigma^{\top},\ E=0,\ Q=0,\ R=0,\ S=0,\ G=1.$ The associated BSDEJ (3.6) becomes (4.7) $\displaystyle\begin{cases}\operatorname{d}\\!P_{1,t}=-\Big{[}2rP_{1,t-}+H_{1}^{*}(t,P_{1,t-},P_{2,t-},\Lambda_{1,t},\Gamma_{1,t},\Gamma_{2,t})\Big{]}\operatorname{d}\\!t+\Lambda_{1,t}^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\Gamma_{1,t}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\\\ \operatorname{d}\\!P_{2,t}=-\Big{[}2rP_{2,t-}+H_{2}^{*}(t,P_{1,t-},P_{2,t-},\Lambda_{2,t},\Gamma_{1,t},\Gamma_{2,t})\Big{]}\operatorname{d}\\!t+\Lambda_{2,t}^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\Gamma_{2,t}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\\\ P_{1,T}=1,\ P_{2,T}=1,\end{cases}$ where $H_{1}^{*},\ H_{2}^{*}$, $\hat{v}_{1}$, $\hat{v}_{2}$ are defined as in (3.4), (3.5) and (3.19) with coefficients given in (4.6): $\displaystyle H_{1}(\omega,t,v,P_{1},P_{2},\Lambda_{1},\Gamma_{1},\Gamma_{2})$ $\displaystyle=P_{1}v^{\top}\sigma\sigma^{\top}v+2(P_{1}\mu+\sigma\Lambda_{1})^{\top}v$ $\displaystyle\qquad+\int_{\mathcal{E}}\Big{[}(P_{1}+\Gamma_{1})\Big{(}((1+F^{\top}v)^{+})^{2}-1\Big{)}-2P_{1}F^{\top}v$ $\displaystyle\qquad\qquad+(P_{2}+\Gamma_{2})((1+F^{\top}v)^{-})^{2}\Big{]}\nu(\operatorname{d}\\!e),$ $\displaystyle H_{2}(\omega,t,v,P_{1},P_{2},\Lambda_{2},\Gamma_{1},\Gamma_{2})$ $\displaystyle=P_{2}v^{\top}\sigma\sigma^{\top}v-2(P_{2}\mu+\sigma\Lambda_{2})^{\top}v$ $\displaystyle\qquad+\int_{\mathcal{E}}\Big{[}(P_{2}+\Gamma_{2})\Big{(}((-1+F^{\top}v)^{-})^{2}-1\Big{)}+2P_{2}F^{\top}v$ $\displaystyle\qquad\qquad+(P_{1}+\Gamma_{1})((-1+F^{\top}v)^{+})^{2}\Big{]}\nu(\operatorname{d}\\!e).$ Clearly, Theorems 3.2 and 3.4 can be applied to the BSDEJ (4.7) to ensure that it admits a unique uniformly positive solution $(P_{1},\Lambda_{1},\Gamma_{1},P_{2},\Lambda_{2},\Gamma_{2})$. Accordingly, Theorem 3.3 leads to the following solution to the relaxed problem (4.4). ###### Theorem 4.2. Let $(P_{1},\Lambda_{1},\Gamma_{1},P_{2},\Lambda_{2},\Gamma_{2})$ be the unique uniformly positive solution to (4.7). Then the state feedback control given by $\displaystyle\pi^{*}(t,X)$ $\displaystyle=\hat{v}_{1}(\omega,t,P_{1},P_{2},\Lambda_{1},\Gamma_{1},\Gamma_{2})\Big{(}X_{t-}-\lambda e^{-\int_{t}^{T}r_{s}\operatorname{d}\\!s}\Big{)}^{+}$ (4.8) $\displaystyle\qquad+\hat{v}_{2}(\omega,t,P_{1},P_{2},\Lambda_{2},\Gamma_{1},\Gamma_{2})\Big{(}X_{t-}-\lambda e^{-\int_{t}^{T}r_{s}\operatorname{d}\\!s}\Big{)}^{-},$ is optimal for the relaxed problem (4.4). Moreover, the optimal value is $\displaystyle V(x,\lambda;z)=P_{1,0}\Big{[}\Big{(}x-\lambda e^{-\int_{0}^{T}r_{s}\operatorname{d}\\!s}\Big{)}^{+}\Big{]}^{2}+P_{2,0}\Big{[}\Big{(}x-\lambda e^{-\int_{0}^{T}r_{s}\operatorname{d}\\!s}\Big{)}^{-}\Big{]}^{2}-(\lambda-z)^{2}.$ This resolves the first step. To carry out the second step, i.e., to maximize $\lambda\mapsto V(x,\lambda;z)$, the following result is critical. ###### Lemma 4.1. Assume Assumption 4.1 and condition (4.3) hold.
Then (4.9) $\displaystyle P_{1,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}-1\leq 0,\ P_{2,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}-1<0.$ Proof: Applying Itô’s formula to $P_{2,t}e^{-2\int_{t}^{T}r_{s}\operatorname{d}\\!s}$ on $[0,T]$, we have (4.10) $\displaystyle 1-P_{2,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}=-{\mathbb{E}}\int_{0}^{T}H_{2}^{*}(t,P_{1,t-},P_{2,t-},\Lambda_{2,t},\Gamma_{1,t},\Gamma_{2,t})\operatorname{d}\\!t.$ Since $H_{2}^{*}(t,P_{1,t-},P_{2,t-},\Lambda_{2,t},\Gamma_{1,t},\Gamma_{2,t})\leq 0$ by its very definition, it follows that $P_{2,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}-1\leq 0$. Similarly, we can prove that $P_{1,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}-1\leq 0$. It remains to prove the strict inequality $P_{2,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}-1<0$. Suppose, on the contrary, $P_{2,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}-1=0$. It then follows from (4.10) that $H_{2}^{*}(t,P_{1,t-},P_{2,t-},\Lambda_{2,t},\Gamma_{1,t},\Gamma_{2,t})=0~{}\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.}$. Thus we deduce, from the uniqueness (Theorem 3.4) of solution to the BSDE (4.7), that $P_{2,t}=e^{2\int_{t}^{T}r_{s}\operatorname{d}\\!s}$, $\Lambda_{2,t}=0$ and $\Gamma_{2,t}=0$. On the other hand, (4.11) $\displaystyle((1-F^{\top}v)^{+})^{2}+2F^{\top}v-1\leq(1-F^{\top}v)^{2}+2F^{\top}v-1=|F^{\top}v|^{2}.$ Since $F\in L_{\mathcal{P}}^{\infty,\nu}(0,T;{\mathbb{R}}^{m})$, there exists $c_{1}>0$ such that $|F(e)|\leq c_{1}$ for almost all $e\in\mathcal{E}$. Hence (4.12) $\displaystyle 1-F^{\top}v\geq 0,\ \mbox{if}\ v\in\Pi\ \mbox{and }\ |v|\leq c_{1}^{-1}.$ Combining (4.11) and (4.12), we have $\displaystyle H_{2}^{*}(t,P_{1},P_{2},0,\Gamma_{1},0)$ $\displaystyle=\inf_{v\in\Pi}\Big{[}P_{2}v^{\top}\sigma\sigma^{\top}v-2P_{2}\mu^{\top}v+\int_{\mathcal{E}}\Big{[}P_{2}\Big{(}((1-F^{\top}v)^{+})^{2}+2F^{\top}v-1\Big{)}$ $\displaystyle\qquad\qquad+(P_{1}+\Gamma_{1})((1-F^{\top}v)^{-})^{2}\Big{]}\nu(\operatorname{d}\\!e)\Big{]}$ $\displaystyle\leq\inf_{v\in\Pi,|v|\leq c_{1}^{-1}}\Big{[}P_{2}v^{\top}\Big{(}\sigma\sigma^{\top}+\int_{\mathcal{E}}FF^{\top}\nu(\operatorname{d}\\!e)\Big{)}v-2P_{2}\mu^{\top}v\Big{]}$ $\displaystyle\leq P_{2}\inf_{v\in\Pi,|v|\leq c_{1}^{-1}}\Big{(}c|v|^{2}-2\mu^{\top}v\Big{)}.$ Note that (4.3) implies the existence of a set $\mathcal{O}\subseteq\Omega\times[0,T]$ with positive $\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t$ measure such that $\mu\notin\widehat{\Pi}$ on $\mathcal{O}$. Hence, there exists $v_{0}\in\Pi$ such that $\mu^{\top}v_{0}>0$ on $\mathcal{O}$. By choosing $v_{1}=\varepsilon v_{0}$ with $\varepsilon>0$ being sufficiently small so that $|v_{1}|\leq c_{1}^{-1}$, we get $\displaystyle H_{2}^{*}(t,P_{1},P_{2},0,\Gamma_{1},0)$ $\displaystyle\leq P_{2}\varepsilon\Big{(}c\varepsilon|v_{0}|^{2}-2\mu^{\top}v_{0}\Big{)}~{}\textrm{on $\mathcal{O}$.}$ The RHS is negative for sufficiently small $\varepsilon>0$ on $\mathcal{O}$, leading to a contradiction. Therefore $P_{2,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}-1<0$.
$\Box$ To find a $\lambda^{*}$ maximizing $\lambda\mapsto V(x,\lambda;z)$, a tedious but straightforward calculation (using (4.9)) gives $\max_{\lambda}V(x,\lambda;z)=V(x,\lambda^{*};z)=\frac{P_{2,0}}{1-P_{2,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}}\Big{(}x-ze^{-\int_{0}^{T}r_{s}\operatorname{d}\\!s}\Big{)}^{2},$ where $\lambda^{*}=\frac{z-xP_{2,0}e^{-\int_{0}^{T}r_{s}\operatorname{d}\\!s}}{1-P_{2,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}}.$ The above analysis boils down to the following solution to the mean-variance problem (4.2). ###### Theorem 4.3. Let $(P_{1},\Lambda_{1},\Gamma_{1},P_{2},\Lambda_{2},\Gamma_{2})$ be the unique uniformly positive solution to (4.7). Then the state feedback portfolio given by $\displaystyle\pi^{*}(t,X)$ $\displaystyle=\hat{v}_{1}(\omega,t,P_{1},P_{2},\Lambda_{1},\Gamma_{1},\Gamma_{2})\Big{(}X_{t-}-\lambda^{*}e^{-\int_{t}^{T}r_{s}\operatorname{d}\\!s}\Big{)}^{+}$ (4.13) $\displaystyle\qquad+\hat{v}_{2}(\omega,t,P_{1},P_{2},\Lambda_{2},\Gamma_{1},\Gamma_{2})\Big{(}X_{t-}-\lambda^{*}e^{-\int_{t}^{T}r_{s}\operatorname{d}\\!s}\Big{)}^{-},$ is optimal for the mean-variance problem (4.2). Moreover, the efficient frontier is determined by $\displaystyle\mathrm{Var}(X_{T})=\frac{P_{2,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}}{1-P_{2,0}e^{-2\int_{0}^{T}r_{s}\operatorname{d}\\!s}}\Big{(}{\mathbb{E}}[X_{T}]-xe^{\int_{0}^{T}r_{s}\operatorname{d}\\!s}\Big{)}^{2},$ where ${\mathbb{E}}[X_{T}]\geq xe^{\int_{0}^{T}r_{s}\operatorname{d}\\!s}$. ###### Remark 4.2. In the constrained mean-variance model without jumps studied in Hu and Zhou [16], the efficient portfolio only takes the second term on the RHS of (4.13), i.e. the optimal wealth $X_{t}$ will never exceed $\lambda^{*}e^{-\int_{t}^{T}r_{s}\operatorname{d}\\!s}$ on $[0,T]$, and it only depends on $(P_{2},\Lambda_{2})$ as $\hat{v}_{2}$ does. But in our cone-constrained MV problem with jumps, the optimal wealth $X_{t}$ may cross the bliss point $\lambda^{*}e^{-\int_{t}^{T}r_{s}\operatorname{d}\\!s}$. ## Appendix A Heuristic derivation of the BSDEJ (3.6) By the Meyer-Itô formula [31, Theorem 70], we have $\displaystyle\operatorname{d}\\!X_{t}^{+}$ $\displaystyle=\mathbf{1}_{\\{X_{t-}>0\\}}\Big{[}\big{(}A_{t}X_{t-}+B_{t}^{\top}u_{t}-\int_{\mathcal{E}}(E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})\nu(\operatorname{d}\\!e)\big{)}\operatorname{d}\\!t+(C_{t}X_{t-}+D_{t}u_{t})^{\top}\operatorname{d}\\!W_{t}\Big{]}$ $\displaystyle\qquad+\int_{\mathcal{E}}\Big{[}(X_{t-}+E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})^{+}-X_{t-}^{+}\Big{]}N(\operatorname{d}\\!t,\operatorname{d}\\!e)+\frac{1}{2}\operatorname{d}\\!L_{t},$ and $\displaystyle\operatorname{d}\\!X_{t}^{-}$ $\displaystyle=-\mathbf{1}_{\\{X_{t-}\leq 0\\}}\Big{[}\big{(}A_{t}X_{t-}+B_{t}^{\top}u_{t}-\int_{\mathcal{E}}(E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})\nu(\operatorname{d}\\!e)\big{)}\operatorname{d}\\!t+(C_{t}X_{t-}+D_{t}u_{t})^{\top}\operatorname{d}\\!W_{t}\Big{]}$ $\displaystyle\qquad+\int_{\mathcal{E}}\Big{[}(X_{t-}+E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})^{-}-X_{t-}^{-}\Big{]}N(\operatorname{d}\\!t,\operatorname{d}\\!e)+\frac{1}{2}\operatorname{d}\\!L_{t},$ where $L$ is the local time of $X$ at $0$.
Since $X_{t}^{\pm}\operatorname{d}\\!L_{t}=0$, applying the Itô formula yields $\displaystyle\operatorname{d}\\!\;(X_{t}^{+})^{2}$ $\displaystyle=\mathbf{1}_{\\{X_{t-}>0\\}}\Big{[}2X_{t-}^{+}\big{(}A_{t}X_{t-}+B_{t}^{\top}u_{t}-\int_{\mathcal{E}}(E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})\nu(\operatorname{d}\\!e)\big{)}+|C_{t}X_{t-}+D_{t}u_{t}|^{2}\Big{]}\operatorname{d}\\!t$ $\displaystyle\qquad+\mathbf{1}_{\\{X_{t-}>0\\}}2X_{t-}^{+}(C_{t}X_{t-}+D_{t}u_{t})^{\top}\operatorname{d}\\!W_{t}$ $\displaystyle\qquad+\int_{\mathcal{E}}\Big{[}((X_{t-}+E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})^{+})^{2}-(X_{t-}^{+})^{2}\Big{]}N(\operatorname{d}\\!t,\operatorname{d}\\!e),$ and $\displaystyle\operatorname{d}\\!\;(X_{t}^{-})^{2}$ $\displaystyle=\mathbf{1}_{\\{X_{t-}\leq 0\\}}\Big{[}-2X_{t-}^{-}\big{(}A_{t}X_{t-}+B_{t}^{\top}u_{t}-\int_{\mathcal{E}}(E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})\nu(\operatorname{d}\\!e)\big{)}+|C_{t}X_{t-}+D_{t}u_{t}|^{2}\Big{]}\operatorname{d}\\!t$ $\displaystyle\qquad-\mathbf{1}_{\\{X_{t-}\leq 0\\}}2X_{t-}^{-}(C_{t}X_{t-}+D_{t}u_{t})^{\top}\operatorname{d}\\!W_{t}$ $\displaystyle\qquad+\int_{\mathcal{E}}\Big{[}((X_{t-}+E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})^{-})^{2}-(X_{t-}^{-})^{2}\Big{]}N(\operatorname{d}\\!t,\operatorname{d}\\!e).$ Assume that $P_{1}$ and $P_{2}$ are semimartingales of the following form: $\displaystyle\begin{cases}\operatorname{d}\\!P_{1}=-f_{1}\operatorname{d}\\!t+\Lambda_{1}^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\Gamma_{1}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\\\ P_{1,T}=G,\end{cases}$ and $\displaystyle\begin{cases}\operatorname{d}\\!P_{2}=-f_{2}\operatorname{d}\\!t+\Lambda_{2}^{\top}\operatorname{d}\\!W+\int_{\mathcal{E}}\Gamma_{2}(e)\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),\\\ P_{2,T}=G.\end{cases}$ Applying Itô’s formula to $P_{1,t}(X_{t}^{+})^{2}$, $\displaystyle\operatorname{d}\\!P_{1,t}(X_{t}^{+})^{2}$ $\displaystyle=P_{1,t-}\mathbf{1}_{\\{X_{t-}>0\\}}\Big{[}2X_{t-}^{+}(A_{t}X_{t-}+B_{t}^{\top}u_{t})+|C_{t}X_{t-}+D_{t}u_{t}|^{2}\Big{]}\operatorname{d}\\!t$ $\displaystyle\qquad+2X_{t-}^{+}(C_{t}X_{t-}+D_{t}u_{t})^{\top}\Lambda_{1,t}\operatorname{d}\\!t$ $\displaystyle\qquad-2P_{1,t-}X_{t-}^{+}\mathbf{1}_{\\{X_{t-}>0\\}}\int_{\mathcal{E}}(E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ $\displaystyle\qquad+\int_{\mathcal{E}}(P_{1,t-}+\Gamma_{1,t}(e))\Big{(}((X_{t-}+E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})^{+})^{2}-(X_{t-}^{+})^{2}\Big{)}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ $\displaystyle\qquad-(X_{t-}^{+})^{2}f_{1}\operatorname{d}\\!t+\Big{[}(X_{t-}^{+})^{2}\Lambda_{1,t}+2P_{1,t-}X_{t-}^{+}(C_{t}X_{t-}+D_{t}u_{t})\Big{]}^{\top}\operatorname{d}\\!W$ $\displaystyle\qquad+\int_{\mathcal{E}}\Big{[}(P_{1,t-}+\Gamma_{1}(e))[((X_{t-}+E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})^{+})^{2}-(X_{t-}^{+})^{2}]$ $\displaystyle\qquad\qquad\qquad+(X_{t-}^{+})^{2}\Gamma_{1}(e)\Big{]}\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e),$ and applying Itô’s formula to $P_{2,t}(X_{t}^{-})^{2}$, $\displaystyle\operatorname{d}\\!P_{2,t}(X_{t}^{-})^{2}$ $\displaystyle=P_{2,t-}\mathbf{1}_{\\{X_{t-}\leq 0\\}}\Big{[}-2X_{t-}^{-}(A_{t}X_{t-}+B_{t}^{\top}u_{t})+|C_{t}X_{t-}+D_{t}u_{t}|^{2}\Big{]}\operatorname{d}\\!t$ $\displaystyle\qquad-2X_{t-}^{-}(C_{t}X_{t-}+D_{t}u_{t})^{\top}\Lambda_{2,t}\operatorname{d}\\!t$ $\displaystyle\qquad+2P_{2,t-}X_{t-}^{-}\mathbf{1}_{\\{X_{t-}\leq 0\\}}\int_{\mathcal{E}}(E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$
$\displaystyle\qquad+\int_{\mathcal{E}}(P_{2,t-}+\Gamma_{2,t}(e))\Big{(}((X_{t-}+E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})^{-})^{2}-(X_{t-}^{-})^{2}\Big{)}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ $\displaystyle\qquad-(X_{t-}^{-})^{2}f_{2}\operatorname{d}\\!t+\Big{[}(X_{t-}^{-})^{2}\Lambda_{2,t}-2P_{2,t-}X_{t-}^{-}(C_{t}X_{t-}+D_{t}u_{t})\Big{]}^{\top}\operatorname{d}\\!W$ $\displaystyle\qquad+\int_{\mathcal{E}}\Big{[}(P_{2,t-}+\Gamma_{2}(e))[((X_{t-}+E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})^{-})^{2}-(X_{t-}^{-})^{2}]$ $\displaystyle\qquad\qquad\qquad+(X_{t-}^{-})^{2}\Gamma_{2}(e)\Big{]}\widetilde{N}(\operatorname{d}\\!t,\operatorname{d}\\!e).$ Then $\displaystyle J(x;u)$ $\displaystyle=\mathbb{E}\Big{[}\int_{0}^{T}\Big{(}Q_{t}X_{t}^{2}+u_{t}^{\top}R_{t}u_{t}+2X_{t}S_{t}^{\top}u_{t}\Big{)}\operatorname{d}\\!t+GX_{T}^{2}\Big{]}$ $\displaystyle=P_{1,0}(x^{+})^{2}+P_{2,0}(x^{-})^{2}+\mathbb{E}\int_{0}^{T}\Big{(}Q_{t}X_{t}^{2}+u_{t}^{\top}R_{t}u_{t}+2X_{t}S_{t}^{\top}u_{t}\Big{)}\operatorname{d}\\!t$ $\displaystyle\qquad+P_{1,t-}\mathbf{1}_{\\{X_{t-}>0\\}}\Big{[}2X_{t-}^{+}(A_{t}X_{t-}+B_{t}^{\top}u_{t})+|C_{t}X_{t-}+D_{t}u_{t}|^{2}\Big{]}\operatorname{d}\\!t$ $\displaystyle\qquad+2X_{t-}^{+}(C_{t}X_{t-}+D_{t}u_{t})^{\top}\Lambda_{1,t}\operatorname{d}\\!t$ $\displaystyle\qquad-2P_{1,t-}X_{t-}^{+}\mathbf{1}_{\\{X_{t-}>0\\}}\int_{\mathcal{E}}(E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ $\displaystyle\qquad+\int_{\mathcal{E}}(P_{1,t-}+\Gamma_{1,t}(e))\Big{(}((X_{t-}+E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})^{+})^{2}-(X_{t-}^{+})^{2}\Big{)}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ $\displaystyle\qquad-(X_{t-}^{+})^{2}f_{1}\operatorname{d}\\!t-(X_{t-}^{-})^{2}f_{2}\operatorname{d}\\!t$ $\displaystyle\qquad+P_{2,t-}\mathbf{1}_{\\{X_{t-}\leq 0\\}}\Big{[}-2X_{t-}^{-}(A_{t}X_{t-}+B_{t}^{\top}u_{t})+|C_{t}X_{t-}+D_{t}u_{t}|^{2}\Big{]}\operatorname{d}\\!t$ $\displaystyle\qquad-2X_{t-}^{-}(C_{t}X_{t-}+D_{t}u_{t})^{\top}\Lambda_{2,t}\operatorname{d}\\!t$ $\displaystyle\qquad+2P_{2,t-}X_{t-}^{-}\mathbf{1}_{\\{X_{t-}\leq 0\\}}\int_{\mathcal{E}}(E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ $\displaystyle\qquad+\int_{\mathcal{E}}(P_{2,t-}+\Gamma_{2,t}(e))\Big{(}((X_{t-}+E_{t}(e)X_{t-}+F_{t}(e)^{\top}u_{t})^{-})^{2}-(X_{t-}^{-})^{2}\Big{)}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t.$ Let $\phi(X_{t-},u_{t})$ denote the integrand on the RHS of the above equation. * • If $X_{t-}>0$, then $\displaystyle\phi(X_{t-},u_{t})$ $\displaystyle=\Big{[}Q+v^{\top}(R+P_{1,t-}D^{\top}D)v+2S^{\top}v+P_{1,t-}(2A+C^{\top}C)+2C^{\top}\Lambda_{1,t}$ $\displaystyle\qquad+2(P_{1,t-}(B+D^{\top}C)+D^{\top}\Lambda_{1})^{\top}v$ $\displaystyle\qquad-2P_{1,t-}\int_{\mathcal{E}}(E+F^{\top}v)\nu(\operatorname{d}\\!e)+\int_{\mathcal{E}}(P_{1,t-}+\Gamma_{1,t-})\Big{(}((1+E+F^{\top}v)^{+})^{2}-1\Big{)}\nu(\operatorname{d}\\!e)$ $\displaystyle\qquad- f_{1}+\int_{\mathcal{E}}(P_{2,t-}+\Gamma_{2,t-})((1+E+F^{\top}v)^{-})^{2}\nu(\operatorname{d}\\!e)\Big{]}X_{t-}^{2},$ where $v_{t}=\frac{u_{t}}{|X_{t-}|}$.
* • If $X_{t-}<0$, then $\displaystyle\phi(X_{t-},u_{t})$ $\displaystyle=\Big{[}Q+v^{\top}(R+P_{2,t-}D^{\top}D)v-2S^{\top}v+P_{2,t-}(2A+C^{\top}C)+2C^{\top}\Lambda_{2,t}$ $\displaystyle\qquad-2(P_{2,t-}(B+D^{\top}C)+D^{\top}\Lambda_{2})^{\top}v$ $\displaystyle\qquad-2P_{2,t-}\int_{\mathcal{E}}(E-F^{\top}v)\nu(\operatorname{d}\\!e)+\int_{\mathcal{E}}(P_{2,t-}+\Gamma_{2,t-})\Big{(}((-1-E+F^{\top}v)^{-})^{2}-1\Big{)}\nu(\operatorname{d}\\!e)$ $\displaystyle\qquad- f_{2}+\int_{\mathcal{E}}(P_{1,t-}+\Gamma_{1,t-})((-1-E+F^{\top}v)^{+})^{2}\nu(\operatorname{d}\\!e)\Big{]}X_{t-}^{2},$ where $v_{t}=\frac{u_{t}}{|X_{t-}|}$. * • If $X_{t-}=0$, then $\phi(0,0)=0$ and $\displaystyle\phi(X_{t-},u_{t})$ $\displaystyle=u_{t}^{\top}Ru_{t}+P_{2,t-}|Du_{t}|^{2}+\int_{\mathcal{E}}(P_{1,t-}+\Gamma_{1,t})((F^{\top}u_{t})^{+})^{2}\nu(\operatorname{d}\\!e)$ $\displaystyle\qquad+\int_{\mathcal{E}}(P_{2,t-}+\Gamma_{2,t})((F^{\top}u_{t})^{-})^{2}\nu(\operatorname{d}\\!e)\geq 0.$ In order to ensure $\min_{u\in\Pi}\phi(X_{t},u)=0$, it is evident that one should take $f_{1}$ and $f_{2}$ to be the drivers in (3.6), with $H_{1}^{*}$ and $H_{2}^{*}$ given by (3.4) and (3.5). ## Appendix B Proof of Lemma 3.1 For any positive integers $k<l$, set $P_{i,t}^{k,l}:=P_{i,t}^{k}-P_{i,t}^{l}\geq 0,\ \Lambda_{i,t}^{k,l}:=\Lambda_{i,t}^{k}-\Lambda_{i,t}^{l},\ \Gamma_{i,t}^{k,l}:=\Gamma_{i,t}^{k}-\Gamma_{i,t}^{l},\ i=1,2.$ Let $\kappa>0$ be a constant to be specified later, and write $\Psi(x)=\frac{1}{\kappa}\big{(}e^{\kappa x}-\kappa x-1\big{)}.$ Notice $\Psi^{\prime}(x)\geq 0$ for $x\geq 0$. Applying Itô’s formula to $\Psi(P_{1,t}^{k,l})$, we get $\displaystyle\quad\Psi(P_{1,0}^{k,l})+\frac{1}{2}{\mathbb{E}}\int_{0}^{T}\Psi^{\prime\prime}(P_{1,t}^{k,l})|\Lambda_{1}^{k,l}|^{2}\operatorname{d}\\!t$ $\displaystyle\qquad\qquad+{\mathbb{E}}\int_{0}^{T}\int_{\mathcal{E}}\Big{[}\Psi(P_{1,t-}^{k,l}+\Gamma_{1,t}^{k,l})-\Psi(P_{1,t-}^{k,l})-\Psi^{\prime}(P_{1,t-}^{k,l})\Gamma_{1}^{k,l}\Big{]}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ $\displaystyle=\Psi(0)+{\mathbb{E}}\int_{0}^{T}\Psi^{\prime}(P_{1,t}^{k,l})\Big{[}(2A+C^{\top}C)P^{k,l}_{1,t-}+2C^{\top}\Lambda^{k,l}_{1,t}$ $\displaystyle\qquad+H_{1}^{k}(t,P_{1,t-}^{k},P_{2,t-}^{k},\Lambda_{1,t}^{k},\Gamma_{1,t}^{k},\Gamma_{2,t}^{k})-H_{1}^{l}(t,P_{1,t-}^{l},P_{2,t-}^{l},\Lambda_{1,t}^{l},\Gamma_{1,t}^{l},\Gamma_{2,t}^{l})\Big{]}\operatorname{d}\\!t.$ Using the following facts: $\displaystyle\Psi(0)=0,\ P_{1,t}^{k,l}\geq 0,\ \Psi^{\prime}(P_{1,t}^{k,l})=e^{\kappa P_{1,t}^{k,l}}-1\geq 0,\ H_{1}^{l}\geq H_{1}^{*},$ $\displaystyle H_{1}^{k}\leq\int_{\mathcal{E}}\Big{[}(P_{1}+\Gamma_{1})\Big{(}((1+E)^{+})^{2}-1\Big{)}-2P_{1}E+(P_{2}+\Gamma_{2})((1+E)^{-})^{2}\Big{]}\nu(\operatorname{d}\\!e)\leq c,$ where $c$ is independent of $k$ and $l$, we obtain $\displaystyle\quad\Psi(P_{1,0}^{k,l})+\frac{1}{2}{\mathbb{E}}\int_{0}^{T}\Psi^{\prime\prime}(P_{1,t}^{k,l})|\Lambda_{1}^{k,l}|^{2}\operatorname{d}\\!t$ $\displaystyle\qquad\qquad+{\mathbb{E}}\int_{0}^{T}\int_{\mathcal{E}}\Big{[}\Psi(P_{1,t-}^{k,l}+\Gamma_{1,t}^{k,l})-\Psi(P_{1,t-}^{k,l})-\Psi^{\prime}(P_{1,t-}^{k,l})\Gamma_{1}^{k,l}\Big{]}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ $\displaystyle\leq{\mathbb{E}}\int_{0}^{T}\Psi^{\prime}(P_{1,t}^{k,l})\Big{[}c+2C^{\top}\Lambda_{1,t}^{k,l}-H_{1}^{*}(t,P_{1,t-}^{l},P_{2,t-}^{l},\Lambda_{1,t}^{l},\Gamma_{1,t}^{l},\Gamma_{2,t}^{l})\Big{]}\operatorname{d}\\!t.$ Keeping in mind $P^{l}_{i}$, $\Gamma^{l}_{i}$, $i=1,2$, are uniformly bounded, we have the following estimates: $\displaystyle-
H_{1}^{*}(t,P_{1,t-}^{l},P_{2,t-}^{l},\Lambda_{1,t}^{l},\Gamma_{1,t}^{l},\Gamma_{2,t}^{l})$ $\displaystyle\leq-\inf_{v\in{\mathbb{R}}^{m}}H_{1}(t,0,P_{1,t-}^{l},P_{2,t-}^{l},\Lambda_{1,t}^{l},\Gamma_{1,t}^{l},\Gamma_{2,t}^{l})$ $\displaystyle\leq c+c|\Lambda_{1}^{l}|^{2}$ $\displaystyle\leq c+3c(|\Lambda_{1}^{k,l}|^{2}+|\Lambda_{1}^{k}-\Lambda_{1}|^{2}+|\Lambda_{1}|^{2}),$ where $c>0$ is a constant independent of $l$ and $k$. The above estimates lead to $\displaystyle\quad\Psi(P_{1,0}^{k,l})+{\mathbb{E}}\int_{0}^{T}\Big{(}\frac{1}{2}\Psi^{\prime\prime}(P_{1,t}^{k,l})-3c\Psi^{\prime}(P_{1,t}^{k,l})\Big{)}|\Lambda_{1}^{k,l}|^{2}\operatorname{d}\\!t$ $\displaystyle\qquad\qquad+{\mathbb{E}}\int_{0}^{T}\int_{\mathcal{E}}\Big{[}\Psi(P_{1,t-}^{k,l}+\Gamma_{1,t}^{k,l})-\Psi(P_{1,t-}^{k,l})-\Psi^{\prime}(P_{1,t-}^{k,l})\Gamma_{1}^{k,l}\Big{]}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t$ (B.1) $\displaystyle\leq{\mathbb{E}}\int_{0}^{T}\Psi^{\prime}(P_{1,t}^{k,l})\Big{[}2c+2C^{\top}\Lambda_{1,t}^{k,l}+3c|\Lambda_{1}^{k}-\Lambda_{1}|^{2}+3c|\Lambda_{1}|^{2})\Big{]}\operatorname{d}\\!t.$ Take $\kappa=12c$. Then $\frac{1}{2}\Psi^{\prime\prime}(x)-3c\Psi^{\prime}(x)=3c(e^{\kappa x}+1)=3c\Psi^{\prime}(x)+6c\geq 6c$ for $x\geq 0$. So by the dominated convergence theorem, the sequence $\sqrt{\frac{1}{2}\Psi^{\prime\prime}(P_{1,t}^{k,l})-3c\Psi^{\prime}(P_{1,t}^{k,l})}$ converges strongly to $\sqrt{3c\Psi^{\prime}(P_{1,t}^{k}-P_{1,t})+6c},$ as $l\to\infty$, and they are uniformly bounded. Therefore, $\sqrt{3c\Psi^{\prime}(P_{1,t}^{k,l})+6c}\;\Lambda_{1}^{k,l}$ converges weakly to $\sqrt{3c\Psi^{\prime}(P_{1,t}^{k}-P_{1,t})+6c}\;(\Lambda_{1}^{k}-\Lambda_{1}).$ By the mean value theorem and the uniformly boundedness of $P_{1,t-}^{k,l}$ and $\Gamma_{1,t-}^{k,l}$, we obtain (B.2) $\displaystyle\Psi(P_{1,t-}^{k,l}+\Gamma_{1,t}^{k,l})-\Psi(P_{1,t-}^{k,l})-\Psi^{\prime}(P_{1,t-}^{k,l})\Gamma_{1}^{k,l}$ $\displaystyle=\frac{1}{\kappa}e^{\kappa P_{1,t-}^{k,l}}\Big{[}e^{\kappa\Gamma_{1,t}^{k,l}}-\kappa\Gamma_{1,t}^{k,l}-1\Big{]}\geq\varepsilon|\Gamma_{1,t}^{k,l}|^{2},$ for some constant $\varepsilon>0$ independent of $l$ and $k$. 
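As a quick sanity check on the algebra behind the choice $\kappa=12c$ (our addition, not part of the original proof), the identity $\frac{1}{2}\Psi^{\prime\prime}(x)-3c\Psi^{\prime}(x)=3c\Psi^{\prime}(x)+6c\geq 6c$ can be verified symbolically:

```python
# Symbolic sanity check (our addition, not part of the original argument) of
# the identity used above: with Psi(x) = (exp(kappa*x) - kappa*x - 1)/kappa
# and kappa = 12c, one has (1/2)Psi'' - 3c*Psi' = 3c*Psi' + 6c >= 6c for x >= 0.
import sympy as sp

x, c = sp.symbols('x c', positive=True)
kappa = 12 * c
Psi = (sp.exp(kappa * x) - kappa * x - 1) / kappa

lhs = sp.Rational(1, 2) * Psi.diff(x, 2) - 3 * c * Psi.diff(x)
rhs = 3 * c * Psi.diff(x) + 6 * c

assert sp.simplify(lhs - rhs) == 0      # the two expressions agree identically
assert rhs.subs(x, 0) == 6 * c          # the lower bound 6c is attained at x = 0
```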
We then get from (B.1) that $\displaystyle\quad{\mathbb{E}}\int_{0}^{T}\big{(}3c\Psi^{\prime}(P_{1}^{k}-P_{1})+6c\big{)}|\Lambda_{1}^{k}-\Lambda_{1}|^{2}\operatorname{d}\\!t$ $\displaystyle\leq\varliminf_{l\to\infty}{\mathbb{E}}\int_{0}^{T}\big{(}3c\Psi^{\prime}(P_{1}^{k,l})+6c\big{)}|\Lambda_{1}^{k,l}|^{2}\operatorname{d}\\!t$ $\displaystyle\leq{\mathbb{E}}\int_{0}^{T}\Psi^{\prime}(P_{1}^{k}-P_{1})\Big{[}2c+2C^{\top}(\Lambda_{1}^{k}-\Lambda_{1})+3c|\Lambda_{1}^{k}-\Lambda_{1}|^{2}+3c|\Lambda_{1}|^{2}\Big{]}\operatorname{d}\\!t.$ Canceling the common terms yields $\displaystyle{\mathbb{E}}\int_{0}^{T}6c|\Lambda_{1}^{k}-\Lambda_{1}|^{2}\operatorname{d}\\!t\leq{\mathbb{E}}\int_{0}^{T}\Psi^{\prime}(P_{1}^{k}-P_{1})\Big{[}2c+2C^{\top}(\Lambda_{1}^{k}-\Lambda_{1})+3c|\Lambda_{1}|^{2}\Big{]}\operatorname{d}\\!t.$ By passing to the limit $k\to\infty$, applying the dominated convergence theorem and noticing $\Psi^{\prime}(0)=0$, we have $\displaystyle\lim_{k\to\infty}{\mathbb{E}}\int_{0}^{T}|\Lambda_{1}^{k}-\Lambda_{1}|^{2}\operatorname{d}\\!t=0.$ Using (B.2), we can similarly get $\displaystyle\lim_{k\to\infty}{\mathbb{E}}\int_{0}^{T}\int_{\mathcal{E}}|\Gamma_{1}^{k}-\Gamma_{1}|^{2}\nu(\operatorname{d}\\!e)\operatorname{d}\\!t=0.$ Because $\Gamma_{1}^{k}$, $k=1,2,\cdots,$ are uniformly bounded in $L^{\infty,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$, we conclude that $\Gamma_{1}\in L^{\infty,\nu}_{\mathcal{P}}(0,T;{\mathbb{R}})$. Along an appropriate subsequence (which is still denoted by $(\Lambda_{1}^{k},\Gamma_{1}^{k})$) we may obtain $\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.}$ convergence of $\displaystyle\int_{t}^{T}(\Lambda_{1}^{k})^{\top}\operatorname{d}\\!W+\int_{t}^{T}\int_{\mathcal{E}}\Gamma_{1}^{k}(e)\widetilde{N}(\operatorname{d}\\!s,\operatorname{d}\\!e)\to\int_{t}^{T}\Lambda_{1}^{\top}\operatorname{d}\\!W+\int_{t}^{T}\int_{\mathcal{E}}\Gamma_{1}(e)\widetilde{N}(\operatorname{d}\\!s,\operatorname{d}\\!e),$ and $\displaystyle\lim_{k\to\infty}\int_{t}^{T}\Big{[}(2A+C^{\top}C)P_{1,s-}^{k}+2C^{\top}\Lambda_{1,s}^{k}+Q\Big{]}\operatorname{d}\\!s=\int_{t}^{T}\Big{[}(2A+C^{\top}C)P_{1,s-}+2C^{\top}\Lambda_{1,s}+Q\Big{]}\operatorname{d}\\!s.$ We now turn to prove (B.3) $\displaystyle\lim_{k\to\infty}\int_{t}^{T}H_{1}^{k}(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})\operatorname{d}\\!s=\int_{t}^{T}H_{1}^{*}(s,P_{1,s-},P_{2,s-},\Lambda_{1,s},\Gamma_{1,s},\Gamma_{2,s})\operatorname{d}\\!s.$ We have $\displaystyle\quad|H_{1}^{k}(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})-H_{1}^{*}(s,P_{1,s-},P_{2,s-},\Lambda_{1,s},\Gamma_{1,s},\Gamma_{2,s})|$ $\displaystyle\leq|H_{1}^{k}(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})-H_{1}^{*}(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})|$ $\displaystyle\qquad+|H_{1}^{*}(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})-H_{1}^{*}(s,P_{1,s-},P_{2,s-},\Lambda_{1,s},\Gamma_{1,s},\Gamma_{2,s})|.$ Recall that $\Lambda_{1,s}^{k}\to\Lambda_{1,s}~{}\operatorname{d}\\!\mathbb{P}\otimes\operatorname{d}\\!t\textrm{-a.e.}$, so there exists $k_{1}(\omega,s)$ such that $|\Lambda_{1,s}^{k}|\leq 1+|\Lambda_{1,s}|$ for $k\geq k_{1}$.
Notice that $H_{1}(s,0,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})$ is upper bounded by some $c>0$, and $\displaystyle H_{1}(s,v,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})$ $\displaystyle\geq\delta|v|^{2}-2c_{1}|v|(1+|\Lambda_{1}^{k}|)-c_{1}$ $\displaystyle\geq\delta|v|^{2}-2c_{1}|v|(2+|\Lambda_{1}|)-c_{1}\geq c,$ if $|v|>c_{2}(1+|\Lambda_{1,s}|)$ with $c_{2}>0$ being sufficiently large. Hence, for $k\geq\max\\{c_{2}(1+|\Lambda_{1,s}|),k_{1}\\}$, we have $\displaystyle H_{1}^{*}(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})$ $\displaystyle=\inf_{\begin{subarray}{c}v\in\Pi\\\ |v|\leq c_{2}(1+|\Lambda_{1}|)\end{subarray}}H_{1}(s,v,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})$ $\displaystyle\geq\inf_{\begin{subarray}{c}v\in\Pi\\\ |v|\leq k\end{subarray}}H_{1}(s,v,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})$ $\displaystyle=H_{1}^{k}(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k}).$ We also have the reverse inequality $H_{1}^{*}\leq H_{1}^{k}$ by definition. Therefore, when $k\geq\max\\{c_{2}(1+|\Lambda_{1,s}|),k_{1}\\}$, $H_{1}^{k}$ and $H_{1}^{*}$ coincide at $(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})$. Notice that $\displaystyle\quad|H_{1}^{*}(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})-H_{1}^{*}(s,P_{1,s-},P_{2,s-},\Lambda_{1,s},\Gamma_{1,s},\Gamma_{2,s})|$ $\displaystyle\leq\sup_{\begin{subarray}{c}v\in\Pi\\\ |v|\leq c_{2}(1+|\Lambda_{1}|)\end{subarray}}|H_{1}(s,v,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})-H_{1}(s,v,P_{1,s-},P_{2,s-},\Lambda_{1,s},\Gamma_{1,s},\Gamma_{2,s})|.$ From the definition of $H_{1}$, one can then easily see that, as long as $|P_{1}^{k}-P_{1}|+|\Lambda_{1}^{k}-\Lambda_{1}|+\int_{\mathcal{E}}|\Gamma_{1}^{k}-\Gamma_{1}|\nu(\operatorname{d}\\!e)+|P_{2}^{k}-P_{2}|+\int_{\mathcal{E}}|\Gamma_{2}^{k}-\Gamma_{2}|\nu(\operatorname{d}\\!e)\to 0,$ we have $\lim_{k\to\infty}|H_{1}^{*}(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})-H_{1}^{*}(s,P_{1,s-},P_{2,s-},\Lambda_{1,s},\Gamma_{1,s},\Gamma_{2,s})|=0.$ Since $\displaystyle|H_{1}^{k}(s,P_{1,s-}^{k},P_{2,s-}^{k},\Lambda_{1,s}^{k},\Gamma_{1,s}^{k},\Gamma_{2,s}^{k})|\leq c(1+|\Lambda_{1,s}^{k}|^{2}),$ the dominated convergence theorem leads to (B.3). Now it is standard to show that $\lim_{k\to\infty}{\mathbb{E}}\Big{[}\sup_{t\in[0,T]}|P^{k}_{1,t}-P_{1,t}|\Big{]}=0;$ please refer to Antonelli and Mancini [1, Theorem 1] for details. ## References * [1] Antonelli F, Mancini C. Solutions of BSDEs with jumps and quadratic/locally Lipschitz generator. Stochastic Process. Appl., 2016, 126(10): 3124-3144. * [2] Barles G, Buckdahn R, Pardoux E. Backward stochastic differential equations and integral-partial differential equations. Stochastics, 1997, 60(1-2): 57-83. * [3] Bismut J M. Conjugate convex functions in optimal stochastic control. J. Math. Anal. Appl., 1973, 44(2): 384-404. * [4] Bismut J M. Linear quadratic optimal stochastic control with random coefficients. SIAM J. Control Optim., 1976, 14(3): 419-444. * [5] Czichowsky C, Schweizer M. Cone-constrained continuous-time Markowitz problems. Ann. Appl. Probab., 2013, 23(2): 764-810. * [6] Darling R W R, Pardoux E. Backwards SDE with random terminal time and applications to semilinear elliptic PDE. Ann. Probab., 1997, 25(3): 1135-1159.
* [7] Dong Y. Constrained LQ problem with a random jump and application to portfolio selection. Chin. Ann. Math. Ser. B, 2018, 39(5): 829-848. * [8] El-Karoui N, Hamadène S. BSDEs and risk-sensitive control, zero-sum and nonzero-sum game problems of stochastic functional differential equations. Stochastic Process. Appl., 2003, 107(1): 145-169. * [9] El Karoui N, Peng S, Quenez M C. Backward stochastic differential equations in finance. Math. Finance, 1997, 7(1): 1-71. * [10] El Karoui N, Peng S, Quenez M C. A dynamic maximum principle for the optimization of recursive utilities under constraints. Ann. Appl. Probab., 2001, 11(3): 664-693. * [11] Fan S, Hu Y, Tang S. Multi-dimensional backward stochastic differential equations of diagonally quadratic generators: the general result. J. Differential Equations, 2023, 368: 105-140. * [12] Hu Y, Peng S. On the comparison theorem for multidimensional BSDEs. C. R. Acad. Sci. Paris, Ser. I, 2006, 343: 135-140. * [13] Hu Y, Shi X, Xu Z Q. Constrained stochastic LQ control with regime switching and application to portfolio selection. Ann. Appl. Probab., 2022, 32(1): 426-460. * [14] Hu Y, Shi X, Xu Z Q. Stochastic linear-quadratic control with a jump and regime switching on a random horizon. Math. Control Relat. Fields, 2023, 13(4): 1597-1617. * [15] Hu Y, Tang S. Multi-dimensional backward stochastic differential equations of diagonally quadratic generators. Stochastic Process. Appl., 2016, 126(4): 1066-1086. * [16] Hu Y, Zhou X. Constrained stochastic LQ control with random coefficients, and application to portfolio selection. SIAM J. Control Optim., 2005, 44(2): 444-466. * [17] Kazi-Tani N, Possamaï D, Zhou C. Quadratic BSDEs with jumps: a fixed-point approach. Electron. J. Probab., 2015, 20(66): 1-28. * [18] Kharroubi I, Lim T, Ngoupeyou A. Mean-variance hedging on uncertain time horizon in a market with a jump. Appl. Math. Optim., 2013, 68(3): 413-444. * [19] Kohlmann M, Tang S. Global adapted solution of one-dimensional stochastic Riccati equations, with application to the mean-variance hedging. Stochastic Process. Appl., 2002, 97(2): 255-288. * [20] Kobylanski M. Backward stochastic differential equations and partial differential equations with quadratic growth. Ann. Probab., 2000, 28(2): 558-602. * [21] Laeven R J A, Stadje M. Robust portfolio choice and indifference valuation. Math. Oper. Res., 2014, 39(4): 1109-1141. * [22] Li N, Wu Z, Yu Z. Indefinite stochastic linear-quadratic optimal control problems with random jumps and related stochastic Riccati equations. Sci. China Math., 2018, 61: 563-576. * [23] Lim A. Mean-variance hedging when there are jumps. SIAM J. Control Optim., 2005, 44(5): 1893-1922. * [24] Luenberger D. Optimization by vector space methods. 1997, John Wiley and Sons. * [25] Luo P. A type of globally solvable BSDEs with triangularly quadratic generators. Electron. J. Probab., 2020, 25: 1-23. * [26] Morlais M. Utility maximization in a jump market model. Stochastics, 2009, 81(1): 1-27. * [27] Morlais M. A new existence result for quadratic BSDEs with jumps with application to the utility maximization problem. Stochastic Process. Appl., 2010, 120(10): 1966-1995. * [28] Pardoux E, Peng S. Adapted solution of a backward stochastic differential equation. Systems Control Lett., 1990, 14(1): 55-61. * [29] Papapantoleon A, Possamaï D, Saplaouras A. Existence and uniqueness results for BSDE with jumps: the whole nine yards. Electron. J. Probab., 2018, 23: 1-68. * [30] Peng S.
Backward stochastic differential equations and applications to optimal control. Appl. Math. Optim., 1993, 27(2): 125-144. * [31] Protter P. Stochastic Integration and Differential Equations, 2nd edition, 2005, Springer Berlin Heidelberg. * [32] Quenez M, Sulem A. BSDEs with jumps, optimization and applications to dynamic risk measures. Stochastic Process. Appl., 2013, 123(8): 3328-3357. * [33] Royer M. Backward stochastic differential equations with jumps and related non-linear expectations. Stochastic Process. Appl., 2006, 116(10): 1358-1376. * [34] Sun J, Xiong J, Yong J. Indefinite stochastic linear-quadratic optimal control problems with random coefficients: Closed-loop representation of open-loop optimal controls. Ann. Appl. Probab., 2021, 31(1): 460-499. * [35] Tang S. General linear quadratic optimal stochastic control problems with random coefficients: linear stochastic Hamilton systems and stochastic Riccati equations. SIAM J. Control Optim., 2003, 42(1): 53-75. * [36] Tang S. Dynamic programming for general linear quadratic optimal stochastic control with random coefficients. SIAM J. Control Optim., 2015, 53(2): 1082-1106. * [37] Tang S, Li X. Necessary conditions for optimal control of stochastic systems with random jumps. SIAM J. Control Optim., 1994, 32(5): 1447-1475. * [38] Tevzadze R. Solvability of backward stochastic differential equations with quadratic growth. Stochastic Process. Appl., 2008, 118(3): 503-515. * [39] Zhang F, Dong Y, Meng Q. Stochastic Riccati equation with jumps associated with stochastic linear quadratic optimal control with jumps and random coefficients. SIAM J. Control Optim., 2020, 58(1): 393-424.
Yerim<EMAIL_ADDRESS> Nur Suriza<EMAIL_ADDRESS> Sang-Chul<EMAIL_ADDRESS> Department of Electrical and Computer Engineering Inha University Incheon, South Korea # Local Feature Extraction from Salient Regions by Feature Map Transformation ###### Abstract Local feature matching is essential for many applications, such as localization and 3D reconstruction. However, it is challenging to match feature points accurately across varying camera viewpoints and illumination conditions. In this paper, we propose a framework that robustly extracts and describes salient local features regardless of changing light and viewpoints. The framework suppresses illumination variations and encourages structural information to ignore the noise from light and to focus on edges. We classify the elements of the feature covariance matrix, an implicit form of feature map information, into two components. Our model extracts feature points from salient regions, leading to fewer incorrect matches. In our experiments, the proposed method achieved higher accuracy than state-of-the-art methods on public datasets such as HPatches, Aachen Day-Night, and ETH, which exhibit highly variant viewpoints and illumination. ## 1 Introduction Extracting and describing local features for matching is essential, especially in computer vision tasks that involve image matching, searching, tracking, and 3D reconstruction [Heinly et al.(2015)Heinly, Schonberger, Dunn, and Frahm, Svärm et al.(2016)Svärm, Enqvist, Kahl, and Oskarsson, Noh et al.(2017)Noh, Araujo, Sim, Weyand, and Han]. Feature matching involves three main phases when two similar images are to be matched: feature detection, feature description, and feature matching [Lowe(2004), Cheng et al.(2014)Cheng, Leng, Wu, Cui, and Lu]. The primary goal of feature matching is to optimize matching accuracy while minimizing the memory footprint of the application. The extracted features should be sparse, highly repeatable, and precise. Each image’s salient features, such as its corners, are initially recognized as interest points during the detection phase. Then, local descriptors are extracted based on the neighborhood regions of these interest points and used in the matching algorithms. Classical approaches [Lowe(2004), Bay et al.(2006)Bay, Tuytelaars, and Van Gool] concentrate on the detect-then-describe method, where they first detect the points by analyzing the gradient of the image before describing the points with directional information. Research trends have since shifted away from these approaches [Lowe(2004), Bay et al.(2006)Bay, Tuytelaars, and Van Gool] with the emergence of deep learning methods [Tian et al.(2017)Tian, Fan, and Wu, Zagoruyko and Komodakis(2015), DeTone et al.(2018)DeTone, Malisiewicz, and Rabinovich]. Since deep convolutional neural networks (DCNNs) can automatically learn features, mimic traditional detector behaviors, and process complex images, CNN-based methods have achieved far better performance than before [Luo et al.(2020)Luo, Zhou, Bai, Chen, Zhang, Yao, Li, Fang, and Quan, Tian et al.(2020)Tian, Balntas, Ng, Barroso-Laguna, Demiris, and Mikolajczyk, Tyszkiewicz et al.(2020)Tyszkiewicz, Fua, and Trulls]. These data-driven methods concentrate on sparse points by leveraging descriptors’ information for corresponding points.
At the same time, several networks [DeTone et al.(2018)DeTone, Malisiewicz, and Rabinovich, Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel] attempted to achieve better performance by learning detection and description simultaneously, with improved repeatability and sparsity of detected points. Detector-free local feature matchers [Sun et al.(2021)Sun, Shen, Wang, Bao, and Zhou, Zhou et al.(2021)Zhou, Sattler, and Leal-Taixe] and a decoupled pipeline for the detection and description modules have also been studied [Li et al.(2022)Li, Wang, Liu, Ran, Xu, and Guo]. Despite these achievements, there is insufficient consideration of light and structure information in an image. Examining this information to robustly locate and describe matched points, regardless of camera viewpoint or illumination variance, is critical. Nighttime images are challenging due to the uncertainties of light and structure [Sun et al.(2021)Sun, Shen, Wang, Bao, and Zhou]. When viewpoints change significantly, it is also difficult to match correctly [Balntas et al.(2017)Balntas, Lenc, Vedaldi, and Mikolajczyk]. Although some studies investigated viewpoint and light information for the local feature domain, they used only hand-crafted techniques such as detecting corners or simply rotating features [Pautrat et al.(2020)Pautrat, Larsson, Oswald, and Pollefeys, Liu et al.(2019)Liu, Shen, Lin, Peng, Bao, and Zhou, Melekhov et al.(2020)Melekhov, Brostow, Kannala, and Turmukhambetov]. In this work, we propose a new strategy that uses both style and structure information to address the issue of mismatches under image variance. Specifically, we apply the concept of the Instance Selective Whitening (ISW) loss, introduced by RobustNet [Choi et al.(2021)Choi, Jung, Yun, Kim, Kim, and Choo], where the features are transformed using implicit information about the style component so that they are robust under variations in light. Since this idea has limitations and considers only the style factor, we revise ISW to also take the structure factor into account and apply it to the local feature field. Furthermore, we focus on salient points to reduce matching time. Contributions. In this paper, we propose a framework that addresses the problems arising from differing light and structural information. We first extract features using the Feature Map Generation (FMG) module. Then, the Feature Map Transformation (FMT) module divides the extracted features into two components: a style matrix and a structure matrix. Each component independently captures the information carried by the learned features. Consequently, regardless of changes in either component, the feature map will still select salient and matchable points. We introduce a loss function to maximize the influence of the structure information and minimize that of the style information in the feature map. The main contributions are summarized as follows: * • We overcome the limitation of feature matching under image variance by distinguishing between structure- and style-dependent features and transforming the feature maps. * • We propose the Feature Map Transformation (FMT) module, which extends an existing style-transfer concept that concentrates only on style components, transforming the feature map during training so that it focuses on salient features. * • Extensive experiments on different benchmark datasets demonstrate that the proposed method achieves high accuracy in matching tasks in a short time and with fewer parameters. ## 2 Related Work Local feature learning.
The joint learning of feature detectors and descriptors requires a unified network to construct feature maps and allows the two tasks to share the majority of computations for improved performance. DELF [Noh et al.(2017)Noh, Araujo, Sim, Weyand, and Han] proposed an image retrieval technique that learns local features as a by-product of a classification loss combined with an attention mechanism to improve performance on large-scale images. It performs well under changing light conditions but has limitations under structural variation. SuperPoint [DeTone et al.(2018)DeTone, Malisiewicz, and Rabinovich] suggested a method for learning from the manual annotation of significant points, such as corners and edges, on simple images. However, because of its low repeatability and descriptor accuracy, it produces many outliers, so matched points tend to be judged incorrectly. R2D2 [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel] overcame this issue by learning the descriptor reliability in parallel with the detection and description phases and selecting only keypoints that are both repeatable and reliable with respect to the descriptor. To find only the matchable points, D2-Net [Dusmanu et al.(2019)Dusmanu, Rocco, Pajdla, Pollefeys, Sivic, Torii, and Sattler] proposed a describe-and-detect method for joint detection and description that uses a single CNN with shared weights. The detection is based on local maxima across the channels and the spatial dimensions of the feature map. DISK [Tyszkiewicz et al.(2020)Tyszkiewicz, Fua, and Trulls] applied reinforcement learning, relying on policy gradients, to an end-to-end network inspired by D2-Net [Dusmanu et al.(2019)Dusmanu, Rocco, Pajdla, Pollefeys, Sivic, Torii, and Sattler]. Furthermore, ASLFeat [Luo et al.(2020)Luo, Zhou, Bai, Chen, Zhang, Yao, Li, Fang, and Quan] demonstrated significant improvement using a score map that exploits local shape estimation to select matching points. Feature covariance. Previous research [Gatys et al.(2015)Gatys, Ecker, and Bethge, Gatys et al.(2016)Gatys, Ecker, and Bethge] showed that image style information can be captured via feature correlations such as a Gram or covariance matrix. Since then, feature correlation has been applied to several different research areas, including style transfer [Li et al.(2017)Li, Fang, Yang, Wang, Lu, and Yang], image-to-image translation [Cho et al.(2019)Cho, Choi, Park, Shin, and Choo], domain adaptation [Roy et al.(2019)Roy, Siarohin, Sangineto, Bulo, Sebe, and Ricci, Sun and Saenko(2016)], and network architecture [Luo(2017), Pan et al.(2019)Pan, Zhan, Shi, Tang, and Luo, Huang et al.(2018)Huang, Yang, Lang, and Deng]. Whitening transformation (WT) [Li et al.(2017)Li, Fang, Yang, Wang, Lu, and Yang, Cho et al.(2019)Cho, Choi, Park, Shin, and Choo, Pan et al.(2019)Pan, Zhan, Shi, Tang, and Luo], which eliminates feature correlation and assigns unit variance to each feature, aids in the removal of style information from the feature representations. Since region-specific styles and region-invariant content are simultaneously written to the covariance vector of the feature maps, whitening all the correlation components reduces feature identification and distorts the boundaries of objects [Li et al.(2017)Li, Fang, Yang, Wang, Lu, and Yang, Li et al.(2018)Li, Liu, Li, Yang, and Kautz]. RobustNet [Choi et al.(2021)Choi, Jung, Yun, Kim, Kim, and Choo] proposed the ISW loss, which extracts only the style information, to solve this problem.
We want to focus on both style and structure information, so we modify the ISW loss to satisfy our objective. ## 3 Method ### 3.1 Feature Map Generation Module The Feature Map Generation (FMG) module first extracts the features of an image pair, $I_{1}$ and $I_{2}$, independently, and outputs two branches: descriptors and point-extraction feature maps. The point-extraction branch consists of two feature maps. One produces the other through a 1$\times$1 convolution layer; the former is the reliability map $\mathbf{S}$ and the latter is the repeatability map $\mathbf{R}$. The covariance matrix derived from the descriptor map $\mathbf{X}$ is used to transform the feature map to focus on saliency with style and structure information. Then the FMG module uses the feature maps $\mathbf{X}$, $\mathbf{S}$, and $\mathbf{R}$ to calculate the loss functions for repeatability and reliability. The FMG module’s network architecture differs from R2D2 [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel] but uses the same loss functions in this part, so only the architecture is described here. The proposed method pipeline is shown in Figure 1. Figure 1: Proposed framework. The network, consisting of a feature map generation module and a transformation module, is shown. The reliability map $\mathbf{S}$ and repeatability map $\mathbf{R}$ learn the regions of points of interest, while the covariance matrix derived from the descriptor map $\mathbf{X}$ is used to transform the feature map to focus on saliency with style and structure information. As introduced in MobileNet [Howard et al.(2017)Howard, Zhu, Chen, Kalenichenko, Wang, Weyand, Andreetto, and Adam], depthwise separable convolution (DSC) is a factorized convolution method that significantly reduces computation and model size while learning new representations. Motivated by this, we use DSC to focus on the salient area. Inspired by [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel, Luo et al.(2020)Luo, Zhou, Bai, Chen, Zhang, Yao, Li, Fang, and Quan], we adopt a modified L2Net [Tian et al.(2017)Tian, Fan, and Wu]—where the last layer is replaced by a set of three consecutive layers—as our backbone network to extract feature information from an image pair. Since the inputs are image pairs, two backbone networks are needed. We use weight sharing in the DSC layer when adding it behind the backbone network to reduce the model weight. Furthermore, the relationship between descriptors and points of interest can be maintained by sharing the weights. The backbone network then generates three feature maps: $\mathbf{X}$ by $\ell$2 normalization, $\mathbf{S}$ by the element-wise square, and $\mathbf{R}$ obtained from $\mathbf{S}$ with a 1$\times$1 convolution layer. In contrast to [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel], $\mathbf{S}$ and $\mathbf{R}$ come from the same branch since they depend on one another. We assume that $\mathbf{S}$ is a feature map that learns points that reduce the matching distance, which may lead to a high repeatability rate in $\mathbf{R}$. Therefore, the model becomes lighter and develops more robust features through information sharing. The FMG module calculates the reliability loss, $\mathcal{L}_{reli}$, to obtain discriminative feature points. Let $\mathbf{X}_{ij}$ be the local descriptor at each pixel $(i,j)$ of the image $I_{1}$; we then predict the individual reliability scores $\mathbf{S}_{ij}$ from $\mathbf{X}_{ij}$ and $\mathbf{X}^{{}^{\prime}}_{uv}$.
Here, we specify the exact coordinate $(u,v)$ that corresponds to $(i,j)$ via the ground-truth correspondence mapping $T$, where $T\in\mathbb{R}^{H\times W\times 2}$ is the ground-truth correspondence between images $I_{1}$ and $I_{2}$. $\mathbf{X}_{ij}$ is compared with $\mathbf{X}^{{}^{\prime}}_{uv}$, where $\mathbf{X}^{{}^{\prime}}_{uv}$ is extracted from $I_{2}$. Then, average precision is used to calculate $\mathcal{L}_{reli}$, optimized with a differentiable approximation [He et al.(2018)He, Lu, and Sclaroff, Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel], using $\mathbf{S}_{ij}$. In addition, the FMG module calculates the repeatability loss, $\mathcal{L}_{repeat}$, for extracting repeatable feature points as in [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel]. It uses peakiness prediction and the similarity between feature pairs from the input image pair. For the similarity, let $\mathbf{R}$ and $\mathbf{R^{{}^{\prime}}}$ be the repeatability maps corresponding to $I_{1}$ and $I_{2}$. We set $\mathbf{R}^{{}^{\prime}}_{T}$ to be the map obtained by transforming $\mathbf{R^{{}^{\prime}}}$ with the ground-truth homography relationship between the image pair $I_{1}$ and $I_{2}$. Because the prime objective is to predict keypoints with high repeatability, we train the network so that the positions of the local maxima in $\mathbf{R}$ are covariant with actual image transformations, such as viewpoint or light shifts. Assuming that all the local maxima of $\mathbf{R}$ coincide with the local maxima of $\mathbf{R}^{{}^{\prime}}_{T}$, we define the loss function. The basic concept is to maximize the cosine similarity between $\mathbf{R}$ and $\mathbf{R}^{{}^{\prime}}_{T}$ so that the two heatmaps become identical and their maxima coincide. The loss may settle at a constant value, which would terminate the learning process, so we prevent this using the peakiness prediction. The final repeatability loss is calculated by considering both the similarity and the peakiness of the input image pair. ### 3.2 Feature Map Transformation Module One problem in feature matching is differences in structural perspective and lighting (style). The style corresponds to noise, such as weather and light. Concentrating on the structure, which relates to the image’s point of view or the edges, can yield better results. The Feature Map Transformation (FMT) module suppresses the style information by performing task adaptation and supplementing the existing WT loss with a broader transformation. We improve the previous loss by including structural information to overcome the limitations of using only style information. Style/Structure Covariance Matrix. Previous studies [Li et al.(2019)Li, Zhang, Yang, Liu, Song, and Hospedales, Choi et al.(2021)Choi, Jung, Yun, Kim, Kim, and Choo] claimed that applying WT to each instance in style transfer can successfully erase style information. WT is a linear transformation that equalizes the variance in each channel to one and reduces the covariances between channels to zero. The intermediate feature map is $\textbf{X}\in\mathbb{R}^{C\times HW}$, where $C$ is the number of channels, and $H$ and $W$ are the height and width of the feature map, respectively.
The covariances between pairs of channels can be defined as follows: $\Sigma=\frac{1}{HW}\left(\textbf{X}-\mathbf{\mu}\cdot\textbf{O}^{\top}\right)\left(\textbf{X}-\mathbf{\mu}\cdot\textbf{O}^{\top}\right)^{\top}\in\mathbb{R}^{C\times C}$ (1) where $\textbf{O}\in\mathbb{R}^{HW}$ is a column vector of ones, and $\mathbf{\mu}$ and $\Sigma$ are the mean vector and covariance matrix, respectively. When the loss function is designed so that the elements of the covariance matrix $\Sigma$ decrease, the feature extraction is less affected by the style element, because the generated feature maps carry less style information. Based on this concept, we adopt a method of transforming feature maps using the style elements. Furthermore, we use the structure information, i.e., the elements of the Gram matrix left over beside the style elements, to design the loss function in the direction of expansion rather than suppression. Figure 2: Transformed feature map. The comparison of feature maps from each network indicates the direction in which we change the feature map: the style information, which corresponds to noise, is eliminated, while the structure complexity is enlarged. Transformation Loss. In order to transform and manipulate feature maps using the aforementioned style and structure information, we introduce the FMT module, which uses the feature map X to calculate $\mathcal{L}_{cov}$. This is done since the descriptor has the most information among the three output feature maps of the FMG module. The FMT module then separates style and structure characteristics from the feature representation’s higher-order statistics by selectively modifying each attribute differently. Replacing the old feature map X with a standardized feature map $\mathbf{X}_{s}$ simplifies the optimization of both the diagonal and off-diagonal elements of the covariance matrix simultaneously [Ulyanov et al.(2016)Ulyanov, Vedaldi, and Lempitsky, Choi et al.(2021)Choi, Jung, Yun, Kim, Kim, and Choo]. $\Sigma_{s}$ is the matrix built from the standardized feature map $\mathbf{X}_{s}$, and we define $\Sigma_{s}$ as follows: $\Sigma_{s}=\frac{1}{HW}\mathbf{X}_{s}\cdot\mathbf{X}_{s}^{\top}\in\mathbb{R}^{C\times C}$ (2) After obtaining the covariance matrices $\Sigma_{s}(I_{1})$ and $\Sigma_{s}(I_{2})$, we calculate the difference between the two matrices to produce the matrix $\Sigma_{C}$, as defined in Eq. 3. The matrix $\Sigma_{C}$ indicates the sensitivity of the corresponding covariance to the photometric transformation [Choi et al.(2021)Choi, Jung, Yun, Kim, Kim, and Choo]. Elements with a high variance value retain the style information, whereas elements with a low variance value retain the structure information. The vertical bars denote the element-wise absolute value. $\Sigma_{C}=\left|\Sigma_{s}(I_{1})-\Sigma_{s}(I_{2})\right|$ (3) We cluster the style and structural components in equal amounts, using the mean value to determine the threshold for separating the two components. If an element of the matrix $\Sigma_{C}$ is greater than the threshold, that element is classified as a style factor, while the rest of the elements are classified as structural factors. This definition is established because the prominent entries of the matrix $\Sigma_{C}$ are assumed to carry the style factor [Choi et al.(2021)Choi, Jung, Yun, Kim, Kim, and Choo]. In this case, the style factor refers to changes in light or color, and the structural factor refers to complexities with many objects, edges, or viewpoints.
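To make Eqs. (1)–(3) concrete, the following is a minimal PyTorch sketch (our illustration, not the authors' released code; all function and variable names are ours) of the channel covariance, its standardized variant, and the sensitivity matrix $\Sigma_{C}$ for an image pair:

```python
# A minimal PyTorch sketch of Eqs. (1)-(3); shapes and names are our own choices.
import torch

def covariance(X):
    """Eq. (1): X has shape (C, H*W); returns the C x C channel covariance."""
    C, HW = X.shape
    mu = X.mean(dim=1, keepdim=True)      # per-channel mean, shape (C, 1)
    Xc = X - mu                           # subtract mu * O^T
    return (Xc @ Xc.T) / HW

def standardized_covariance(X, eps=1e-5):
    """Eq. (2): covariance of the standardized feature map X_s."""
    mu = X.mean(dim=1, keepdim=True)
    std = X.std(dim=1, keepdim=True)
    Xs = (X - mu) / (std + eps)           # instance standardization
    return (Xs @ Xs.T) / X.shape[1]

# Eq. (3): element-wise absolute difference between the standardized
# covariances of the two images; large entries are read as style-sensitive,
# small entries as structure-related.
C, H, W = 128, 24, 24
X1 = torch.randn(C, H * W)                # stand-ins for descriptor maps
X2 = torch.randn(C, H * W)
Sigma_C = (standardized_covariance(X1) - standardized_covariance(X2)).abs()
print(Sigma_C.shape)                      # torch.Size([128, 128])
```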
The proposed loss function, $\mathcal{L}_{cov}$, is formulated as follows: $\mathcal{L}_{cov}=\begin{cases}\mathbb{E}\left[\left\|\Sigma_{C}\odot\mathbf{M}_{sty}\right\|_{1}\right]&\text{ if }\Sigma_{C}>\mathbf{\mu}\\\ 1-\mathbb{E}\left[\left\|\Sigma_{C}\odot\mathbf{M}_{str}\right\|_{1}\right]&\text{ otherwise}\end{cases}$ (4) where $\mathbb{E}$ and $\mathbf{M}\in\mathbb{R}^{C\times C}$ are the arithmetic mean and a mask matrix, respectively. $\mathbf{M}_{sty}$ and $\mathbf{M}_{str}$ are masks that select the style and structure values, and $\odot$ is element-wise multiplication. Finally, the total loss function can be represented by Eq. 5, with the weights $\lambda_{i}$ empirically tuned to the optimal ratio 1:1:2. We define $i\in\left\\{1,2,3\right\\}$ because the number of loss terms is three. Comparing the feature map of R2D2 [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel] with ours in Figure 2 strengthens the argument that adding the transformation loss is superior for selecting only salient features. $\mathcal{L}_{total}=\lambda_{1}\cdot\mathcal{L}_{reli}+\lambda_{2}\cdot\mathcal{L}_{repeat}+\lambda_{3}\cdot\mathcal{L}_{cov}$ (5) This aggregated loss function is used to select salient points and to minimize predictions in less informative regions, such as the sky or ground. The FMT module induces the feature map transformation so that the proposed transformation loss can extract robust features whenever images are taken at the same location, regardless of changes in structure and style. ## 4 Experiment ### 4.1 Implementation details Training. We apply Adam to optimize the network for 25 epochs with a fixed learning rate of 0.0001, a weight decay of 0.0005, and a batch size of 8 pairs of cropped images of 192 by 192 pixels, as in R2D2 [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel]. Our experiments used the training dataset and ground-truth correspondences used in R2D2. Since our model uses a modified version of R2D2, we trained the network from scratch. Nonetheless, we fixed the patch size N used in the repeatability loss to 16 in all training parts to improve the performance of the transformation loss. Testing. We used the sum over different image scales to diversify the resolution of the feature maps at test time. The descriptors were interpolated at the modified locations. This multi-scale feature extraction enables the extraction of more tentative keypoints and provides improved localization. Experiment Settings. This study used an NVIDIA GeForce RTX 3090 GPU and CUDA toolkit, version 11.2, with Python 3 and PyTorch 1.8 in the training environment. Figure 3: Quantitative Results on HPatches. Comparison in terms of MMA on the HPatches dataset. MMA@3 | Overall | Illumi. | Viewp. Hes.Aff. [Perd’och et al.(2009)Perd’och, Chum, and Matas] | 56.24 | 51.35 | 60.79 DELF(new) [Noh et al.(2017)Noh, Araujo, Sim, Weyand, and Han] | 49.43 | 89.73 | 12.02 SuperPoint [DeTone et al.(2018)DeTone, Malisiewicz, and Rabinovich] | 64.45 | 69.38 | 59.88 LF-Net [Ono et al.(2018)Ono, Trulls, Fua, and Yi] | 53.01 | 57.31 | 49.02 D2-Net [Dusmanu et al.(2019)Dusmanu, Rocco, Pajdla, Pollefeys, Sivic, Torii, and Sattler] | 39.76 | 44.99 | 34.91 R2D2 [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel] | 70.06 | 75.56 | 64.96 ASLFeat [Luo et al.(2020)Luo, Zhou, Bai, Chen, Zhang, Yao, Li, Fang, and Quan] | 72.28 | 75.47 | 68.28 DISK [Tyszkiewicz et al.(2020)Tyszkiewicz, Fua, and Trulls] | 75.34 | 79.43 | 71.53 Ours | 78.41 | 83.22 | 73.94 Table 1: MMA@3 on HPatches.
Comparison at 3px threshold. ### 4.2 Feature Matching Quantitative Evaluation. We evaluated the performance of selecting meaningful points by calculating the mean matching accuracy (MMA). If the distance between the transformed point and the reference point lies within the threshold under the ground-truth homography, the transformed point is classified as correctly matched. Figure 3 illustrates the comparisons on the HPatches dataset [Balntas et al.(2017)Balntas, Lenc, Vedaldi, and Mikolajczyk], with MMA measured at various error thresholds. Figure 3 was drawn from the cache data provided by the D2-Net [Dusmanu et al.(2019)Dusmanu, Rocco, Pajdla, Pollefeys, Sivic, Torii, and Sattler] repository. The comparison was performed with DELF [Noh et al.(2017)Noh, Araujo, Sim, Weyand, and Han], SuperPoint [DeTone et al.(2018)DeTone, Malisiewicz, and Rabinovich], LF-Net [Ono et al.(2018)Ono, Trulls, Fua, and Yi], mono- and multi-scale D2-Net [Dusmanu et al.(2019)Dusmanu, Rocco, Pajdla, Pollefeys, Sivic, Torii, and Sattler], R2D2 [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel], ASLFeat [Luo et al.(2020)Luo, Zhou, Bai, Chen, Zhang, Yao, Li, Fang, and Quan], DISK [Tyszkiewicz et al.(2020)Tyszkiewicz, Fua, and Trulls], and a hand-crafted Hessian affine detector with a RootSIFT descriptor [Perd’och et al.(2009)Perd’och, Chum, and Matas]. Our network, denoted "Ours", outperformed almost all state-of-the-art networks. Regarding illumination, DELF [Noh et al.(2017)Noh, Araujo, Sim, Weyand, and Han] outperformed our method since it identifies keypoints in a low-resolution feature map with a fixed grid. However, ours still exhibited the highest performance at the five-pixel threshold, because the fixed grid of keypoints lacks spatial variation in this subgroup. The overall scores for the illumination and viewpoint subsets, together with their respective individual scores, are presented in Table 1, with the MMA threshold set to 3. Qualitative Evaluation. Figure 4 shows the comparison between our baseline network R2D2 [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel] and the current state-of-the-art method DISK [Tyszkiewicz et al.(2020)Tyszkiewicz, Fua, and Trulls] with a severe change in illumination and viewpoint in the first and second row, respectively. The experiment was done with nearest-neighbor matching at a 3-pixel error threshold. Looking at the yellow box, we see that we succeeded in focusing on structural information; the network that focuses less on structure failed to match in this part. In the red box, it can be seen that the error rate is lowered by focusing less on the background or natural objects corresponding to noise. The matching time is also shorter, as shown in Figure 5. ### 4.3 Visual Localization In a local reconstruction task [Sattler et al.(2017)Sattler, Torii, Sivic, Pollefeys, Taira, Okutomi, and Pajdla], we evaluate our technique on the Aachen Day-Night dataset v1.1 [Sattler et al.(2018)Sattler, Maddern, Toft, Torii, Hammarstrand, Stenborg, Safari, Okutomi, Pollefeys, Sivic, et al.] as in D2-Net [Dusmanu et al.(2019)Dusmanu, Rocco, Pajdla, Pollefeys, Sivic, Torii, and Sattler]. In this section, we present the results of a visual localization task. Given the daytime photographs with known camera positions, our goal is to localize the nighttime images of the same area. The known locations of the daytime photos in each set are used to triangulate the 3D structure of the scene after extensive feature matching.
Finally, these 3D models are used to localize the query photographs taken at night. We followed the guidelines of the Visual Localization Benchmark, for which we used our matches as input to a pre-defined visual localization pipeline based on COLMAP [Schonberger and Frahm(2016), Schönberger et al.(2016)Schönberger, Zheng, Frahm, and Pollefeys]. We also adopted hierarchical localization [Sarlin et al.(2019)Sarlin, Cadena, Siegwart, and Dymczyk] in every network for higher performance. This pipeline was then used to build an SfM model with the registered test photos. We used NetVLAD [Arandjelovic et al.(2016)Arandjelovic, Gronat, Torii, Pajdla, and Sivic] for the global features and nearest-neighbor matching. The percentages of correctly localized photos under three error levels are reported in Table 2. Our results demonstrate our method’s strong generalization capability, given its high localization performance compared with SuperPoint [DeTone et al.(2018)DeTone, Malisiewicz, and Rabinovich], D2-Net [Dusmanu et al.(2019)Dusmanu, Rocco, Pajdla, Pollefeys, Sivic, Torii, and Sattler], R2D2 [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel] and DISK [Tyszkiewicz et al.(2020)Tyszkiewicz, Fua, and Trulls]. Our network also performed fairly well when compared with matcher methods [Sun et al.(2021)Sun, Shen, Wang, Bao, and Zhou, Zhou et al.(2021)Zhou, Sattler, and Leal-Taixe]. Figure 4: Qualitative Results on HPatches. Pairs with illumination and viewpoint variations are illustrated. Green dots represent correctly matched points, whereas red dots represent incorrectly matched points. The boxed areas show significantly better outcomes than previous methods, owing to the focus on the salient region. Method | Day: 0.5m, $2^{\circ}$ | 1m, $5^{\circ}$ | 5m, $10^{\circ}$ | Night: 0.5m, $2^{\circ}$ | 1m, $5^{\circ}$ | 5m, $10^{\circ}$ SuperPoint [DeTone et al.(2018)DeTone, Malisiewicz, and Rabinovich] | 85.3 | 91.9 | 94.5 | 58.6 | 74.3 | 85.9 D2-Net [Dusmanu et al.(2019)Dusmanu, Rocco, Pajdla, Pollefeys, Sivic, Torii, and Sattler] | 81.6 | 89.3 | 96.2 | 62.8 | 80.6 | 92.7 R2D2 [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel] | 89.9 | 95.4 | 98.4 | 69.6 | 85.9 | 96.3 DISK [Tyszkiewicz et al.(2020)Tyszkiewicz, Fua, and Trulls] | - | - | - | 72.3 | 86.4 | 97.9 Ours | 90.4 | 96.1 | 98.9 | 72.3 | 89.0 | 96.9 LoFTR∗ [Sun et al.(2021)Sun, Shen, Wang, Bao, and Zhou] | - | - | - | 72.8 | 88.5 | 99.0 Patch2Pix∗ [Zhou et al.(2021)Zhou, Sattler, and Leal-Taixe] | 86.4 | 93.0 | 97.5 | 72.3 | 88.5 | 97.9 Table 2: Aachen Evaluation. Comparison on Aachen for the visual localization task. Matcher methods are marked with *. Figure 5: Matching Time. Comparison of matching time. ### 4.4 3D Reconstruction For the 3D reconstruction evaluation, we used the ETH-Microsoft dataset [Schonberger et al.(2017)Schonberger, Hardmeier, Sattler, and Pollefeys] and, for the evaluation protocol, ran the SfM algorithm of COLMAP [Schonberger and Frahm(2016), Schönberger et al.(2016)Schönberger, Zheng, Frahm, and Pollefeys]. In Table 3, we compare our method with the results presented in [Luo et al.(2020)Luo, Zhou, Bai, Chen, Zhang, Yao, Li, Fang, and Quan], considering only the jointly learned models. Since the number of matched points for R2D2 [Revaud et al.(2019)Revaud, De Souza, Humenberger, and Weinzaepfel] was less than 100K, we do not include it in the comparison. For sparse reconstruction, we report the number of registered images ($\\#$ Reg), the number of sparse points ($\\#$ Sparse), the track length (Track), and the reprojection error (Reproj).
The data in Table 3 show that our network performs favorably against previous methods on the 3D reconstruction task. Furthermore, the number of registered images obtained the best value, and the number of sparse points also obtained the best or second-best value. This indicates that the salient points were selected well. Method | Madrid Metropolis (1344 images): $\\#$ Reg, $\\#$ Sparse, Track, Reproj | Gendarmenmarkt (1463 images): $\\#$ Reg, $\\#$ Sparse, Track, Reproj | Tower of London (1576 images): $\\#$ Reg, $\\#$ Sparse, Track, Reproj SuperPoint [DeTone et al.(2018)DeTone, Malisiewicz, and Rabinovich] | 438, 29K, 9.03, 1.02px | 967, 93K, 7.22, 1.03px | 681, 52K, 8.67, 0.96px D2-Net [Dusmanu et al.(2019)Dusmanu, Rocco, Pajdla, Pollefeys, Sivic, Torii, and Sattler] | 495, 144K, 6.39, 1.35px | 965, 310K, 5.55, 1.28px | 708, 287K, 5.20, 1.34px ASLFeat [Luo et al.(2020)Luo, Zhou, Bai, Chen, Zhang, Yao, Li, Fang, and Quan] | 649, 129K, 9.56, 0.95px | 1061, 320K, 8.98, 1.05px | 846, 252K, 13.16, 0.95px Ours | 766, 142K, 8.13, 1.19px | 1316, 516K, 6.81, 1.19px | 1186, 315K, 8.63, 1.21px Table 3: ETH Evaluation. 3D reconstruction held with the ETH-Microsoft dataset. Method | HPatches: Overall, Illum, Viewp | Aachen (Day): 0.5m $2^{\circ}$, 1m $5^{\circ}$, 5m $10^{\circ}$ | Aachen (Night): 0.5m $2^{\circ}$, 1m $5^{\circ}$, 5m $10^{\circ}$ w/o $\mathcal{L}_{sty}\&\mathcal{L}_{str}$ | 70.06, 75.56, 64.96 | 89.9, 95.4, 98.4 | 69.6, 85.9, 96.3 w/o $\mathcal{L}_{sty}$ | 72.08, 78.04, 66.55 | 89.8, 96.1, 98.7 | 73.3, 88.5, 95.3 w/o $\mathcal{L}_{str}$ | 76.53, 81.43, 71.99 | 89.7, 95.8, 98.5 | 69.6, 84.8, 96.3 w/o DSC | 76.89, 81.95, 72.19 | 89.9, 95.3, 98.2 | 69.1, 85.9, 93.7 w/ All | 78.41, 83.22, 73.94 | 90.4, 96.1, 98.9 | 72.3, 89.0, 96.9 Table 4: Ablation Studies. Ablation experiment on MMA@3 and visual localization to see how each component affects the transformation loss. The best results occur when the style and structure losses are included together with depthwise separable convolution. ### 4.5 Ablation Studies We validated the significance of the components that comprise our proposed transformation loss function by performing two ablation studies. We studied the presence of the style and structural component losses to determine how each characteristic contributes to learning. $\mathcal{L}_{sty}$ and $\mathcal{L}_{str}$ denote the loss terms that use the masks with the same subscripts in Eq. 4. The results in Table 4 reveal that the matching accuracy is influenced not only by the style factor but also by the structure factor. This experiment confirms the efficiency of ISW when applied to our method and its ability to provide additional information about the structure. Furthermore, ablation studies were conducted with and without the DSC layer. The performance improved when using the DSC layer, with a model weight reduction of 2 MB. ## 5 Conclusion We proposed a robust network using a self-transformation loss, which transforms a feature map that contributes to the repeatability of local features. We separated structure and style characteristics by clustering the covariance vector and influenced the features of each characteristic. The feature maps were brought closer together by reducing the light component and sharpening the structure component. Consequently, this step increases the repeatability of the points, reduces outliers, and assigns robustly matched descriptors. A comparison with similar research reveals that our method more effectively extracts robust matching points in various scenes. Nevertheless, the current work has limitations. Points tend to be retrieved clustered in a particular region. This analysis reveals that adaptively selecting from sparse points is a promising avenue for future research.
## 6 Acknowledgment This work was supported in part by the National Research Foundation of Korea (NRF) under Grant NRF-2021R1A2C2010893 and in part by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155915, Artificial Intelligence Convergence Innovation Human Resources Development (Inha University)). ## References * [Arandjelovic et al.(2016)Arandjelovic, Gronat, Torii, Pajdla, and Sivic] Relja Arandjelovic, Petr Gronat, Akihiko Torii, Tomas Pajdla, and Josef Sivic. Netvlad: Cnn architecture for weakly supervised place recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 5297–5307, 2016. * [Balntas et al.(2017)Balntas, Lenc, Vedaldi, and Mikolajczyk] Vassileios Balntas, Karel Lenc, Andrea Vedaldi, and Krystian Mikolajczyk. Hpatches: A benchmark and evaluation of handcrafted and learned local descriptors. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 5173–5182, 2017. * [Bay et al.(2006)Bay, Tuytelaars, and Van Gool] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. Surf: Speeded up robust features. In _European conference on computer vision_ , pages 404–417. Springer, 2006. * [Cheng et al.(2014)Cheng, Leng, Wu, Cui, and Lu] Jian Cheng, Cong Leng, Jiaxiang Wu, Hainan Cui, and Hanqing Lu. Fast and accurate image matching with cascade hashing for 3d reconstruction. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 1–8, 2014. * [Cho et al.(2019)Cho, Choi, Park, Shin, and Choo] Wonwoong Cho, Sungha Choi, David Keetae Park, Inkyu Shin, and Jaegul Choo. Image-to-image translation via group-wise deep whitening-and-coloring transformation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 10639–10647, 2019. * [Choi et al.(2021)Choi, Jung, Yun, Kim, Kim, and Choo] Sungha Choi, Sanghun Jung, Huiwon Yun, Joanne T Kim, Seungryong Kim, and Jaegul Choo. Robustnet: Improving domain generalization in urban-scene segmentation via instance selective whitening. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 11580–11590, 2021. * [DeTone et al.(2018)DeTone, Malisiewicz, and Rabinovich] Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. Superpoint: Self-supervised interest point detection and description. In _Proceedings of the IEEE conference on computer vision and pattern recognition workshops_ , pages 224–236, 2018. * [Dusmanu et al.(2019)Dusmanu, Rocco, Pajdla, Pollefeys, Sivic, Torii, and Sattler] Mihai Dusmanu, Ignacio Rocco, Tomas Pajdla, Marc Pollefeys, Josef Sivic, Akihiko Torii, and Torsten Sattler. D2-net: A trainable cnn for joint detection and description of local features. _arXiv preprint arXiv:1905.03561_ , 2019. * [Gatys et al.(2015)Gatys, Ecker, and Bethge] Leon Gatys, Alexander S Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. _Advances in neural information processing systems_ , 28:262–270, 2015. * [Gatys et al.(2016)Gatys, Ecker, and Bethge] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 2414–2423, 2016. * [He et al.(2018)He, Lu, and Sclaroff] Kun He, Yan Lu, and Stan Sclaroff. Local descriptors optimized for average precision.
# The linear span of uniform matrix product states

Claudia De Lazzari, Dipartimento di Matematica, Università di Trento, Via Sommarive 14, 38123 Povo (TN), Italy. <EMAIL_ADDRESS>

Harshit J Motwani, Department of Mathematics: Algebra and Geometry, Ghent University, 9000 Gent, Belgium. <EMAIL_ADDRESS>

Tim Seynnaeve, Mathematical Institute, University of Bern, Alpeneggstrasse 22, 3012 Bern, Switzerland. <EMAIL_ADDRESS>

###### Abstract.

The variety of uniform matrix product states arises both in algebraic geometry as a natural generalization of the Veronese variety, and in quantum many-body physics as a model for a translation-invariant system of sites placed on a ring. Using methods from linear algebra, representation theory, and invariant theory of matrices, we study the linear span of this variety.

###### Key words and phrases: Matrix product states, invariant theory of matrices

###### 2020 Mathematics Subject Classification: 15A69, 20G05, 81P45

## Introduction

Tensor networks are a powerful tool for the description of tensors living in a high-dimensional space. Since their original conception in quantum many-body physics [AKLT88, FNW92, ÖR95], they have found a wide range of applications in different fields, such as numerical tensor calculus [Ose11, Hac12, CLO+16, BSU16], geometric complexity theory [LQY12], graphical models [RS19], and machine learning [CLO+16, CCX+18]. Methods from differential and complex geometry were introduced in the study of these varieties of tensors in [HMOV14], and some important developments were achieved using methods from algebraic geometry and representation theory, see for instance [BBM15, LQY12, CM14, CMS19, CLVW20, CGFW21].

In this article, we focus on a particular type of tensor networks: uniform matrix product states. From a quantum mechanics perspective, they model translation-invariant physical systems of sites placed on a ring. The geometry of uniform matrix product states has been extensively studied [PGVWC07, HMOV14, CM14, CMS19], but our understanding of them is still far from complete, and several fundamental mathematical problems remain open.

Our geometric point of view is the following: we fix a tensor space $(\mathbb{C}^{n})^{\otimes d}$, and consider the set of all tensors in this space that admit a translation-invariant matrix product state representation, with a given bond dimension $m$. After taking the closure of this set, we obtain an algebraic variety, which we denote by $\operatorname{uMPS}(m,n,d)$. The ultimate goal would be to obtain a complete description of this variety, i.e. to find all defining equations. Phrased in this generality, this question is likely to be intractable. Indeed, even just determining which _linear_ equations, if any, vanish on our variety is poorly understood. This is precisely the goal of this paper:

###### Problem 0.1.

Describe the linear span $\langle\operatorname{uMPS}(m,n,d)\rangle$ of the variety $\operatorname{uMPS}(m,n,d)$. More precisely:

* • What is the dimension of $\langle\operatorname{uMPS}(m,n,d)\rangle$? (Question 1.4)
* • For which parameters $m,n,d\in\mathbb{N}$ does $\langle\operatorname{uMPS}(m,n,d)\rangle$ fill the ambient space? (Question 1.6)
* • How does $\langle\operatorname{uMPS}(m,n,d)\rangle$ decompose as a $GL_{n}$-representation? (Question 1.9)

The variety $\operatorname{uMPS}(m,n,d)$ does not only arise from tensor networks; it is also a very natural geometric construction in its own right.
Indeed, as we will soon see, $\operatorname{uMPS}(m,n,d)$ is the closed image of the polynomial map that takes as input an $n$-tuple of $m\times m$-matrices, and outputs the traces of all $d$-fold products of the given matrices. In this way, it is a natural generalization of the classical Veronese variety. Our main Problem 0.1 is therefore equivalent to the following:

###### Problem 0.2.

Let $A_{0},\ldots,A_{n-1}$ be $m\times m$ matrices with generic entries. Which linear relations hold between the polynomials $\operatorname{Tr}(A_{i_{1}}\cdots A_{i_{d}}),$ where $(i_{1},\ldots,i_{d})\in[n]^{d}$?

The ring generated by all polynomials $\operatorname{Tr}(A_{i_{1}}\cdots A_{i_{d}})$, where the generic matrices are fixed but we allow $d$ to vary, is known as the _trace algebra_. In his classical work [Pro76], Procesi described how, in principle, to obtain all relations between the generators of this ring. Note, however, that the question we are asking is slightly different: we are only interested in relations between traces of matrix products of a _fixed length_ $d$.

This article is divided into three sections. In Section 1, we define the variety of uniform matrix product states, and describe its natural symmetries. In Section 2, we undertake a computational study of the space $\langle\operatorname{uMPS}(m,n,d)\rangle$ in the smallest nontrivial case $m=n=2$. We describe an algorithm that can compute this space, viewed as a $GL_{2}$-representation. Based on this result, we obtain a conjectured formula for the character (and in particular: the dimension) of $\langle\operatorname{uMPS}(2,2,d)\rangle$. Section 3 contains our main theoretical results. In Section 3.1, we introduce a powerful method to find linear equations that vanish on $\langle\operatorname{uMPS}(m,n,d)\rangle$, based on the Cayley-Hamilton theorem. As a corollary, we show that for $d\geq(m+1)(m+2)$, the linear span $\langle\operatorname{uMPS}(m,n,d)\rangle$ does not fill its natural ambient space $\operatorname{Cyc}^{d}(\mathbb{C}^{n})$, the space of cyclically symmetric tensors; significantly improving the state of the art. In Section 3.2 we study the special case $m=n=2$, based on the computations in Section 2. Using the trace parametrization we show an upper bound on the dimension of $\langle\operatorname{uMPS}(2,2,d)\rangle$ which is close to optimal, and we take some first steps towards proving our conjectured character formula, again using our Cayley-Hamilton technique.

## Acknowledgements

The third author would like to thank Alessandra Bernardi, Jarosław Buczyński, Joseph Landsberg, and Frank Verstraete for many helpful discussions. The second author is supported by FWO grants (G023721N, G0F5921N) and UGent BOF grants (BOF21/DOC/182, STA/201909/038).

## 1. Preliminaries

We once and for all fix three parameters $m,n,d\in\mathbb{N}$. We will consider tensors in the space $(\mathbb{C}^{n})^{\otimes d}$. The standard basis of $\mathbb{C}^{n}$ will be written as $\\{e_{0},\ldots,e_{n-1}\\}$. We abbreviate the set $\\{0,\ldots,n-1\\}$ to $[n]$.

###### Definition 1.1.

The uniform Matrix Product State parametrization is given by the map

(1) $\displaystyle\begin{split}\varphi:(\mathbb{C}^{m\times m})^{n}&\to(\mathbb{C}^{n})^{\otimes d}\\\ (A_{0},\dots,A_{n-1})&\mapsto\sum_{0\leq i_{1},\dots,i_{d}\leq n-1}\operatorname{Tr}(A_{i_{1}}\cdots A_{i_{d}})\ e_{i_{1}}\otimes\cdots\otimes e_{i_{d}}.\end{split}$

We denote the image of this map by $\operatorname{uMPS}^{\circ}(m,n,d)$.
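For concreteness, the map (1) is straightforward to evaluate numerically. The following is a minimal Python/NumPy sketch of our own (it is independent of our SageMath code [DLMS22], and the name `umps_tensor` is purely illustrative):

```python
import itertools

import numpy as np

def umps_tensor(As, d):
    """Evaluate the map (1): As has shape (n, m, m), encoding (A_0, ..., A_{n-1})."""
    n, m = As.shape[0], As.shape[1]
    T = np.empty((n,) * d)
    for I in itertools.product(range(n), repeat=d):
        M = np.eye(m)
        for i in I:
            M = M @ As[i]          # the product A_{i_1} ... A_{i_d}
        T[I] = np.trace(M)         # the coefficient of e_{i_1} x ... x e_{i_d}
    return T

# a sample point of uMPS°(2, 2, 4):
rng = np.random.default_rng(0)
T = umps_tensor(rng.standard_normal((2, 2, 2)), d=4)
```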
The closure of $\operatorname{uMPS}^{\circ}(m,n,d)$ in the Euclidean, or equivalently the Zariski, topology over the complex numbers is the algebraic variety of uniform matrix product states, denoted by ${\operatorname{uMPS}(m,n,d)}$.

###### Remark 1.2.

If we think of $(\mathbb{C}^{m\times m})^{n}$ as the space of $m\times m\times n$ tensors, then the $\operatorname{uMPS}$ parametrization takes a tensor in this space and contracts it $d$ times with itself in a circle; see Figure 1.

Figure 1. Graphical representation of the map (1) defining $\operatorname{uMPS}(m,n,d)$: a ring of $d$ copies of one $m\times m\times n$ tensor. The weights on the edges represent the dimensions of the vector spaces along which the tensor contractions take place. The parameter $m$ is called the bond dimension and $n$ the physical dimension.

###### Remark 1.3.

An alternative name for $\operatorname{uMPS}$ is _translation invariant matrix product states with periodic boundary conditions_ [PGVWC07, CM14]. The name “uniform matrix product states” is sometimes reserved for the thermodynamic limit, where the number $d$ of sites approaches infinity. Our terminology is consistent with [HMOV14, CMS19].

As mentioned in the introduction, the main question we will try to answer in this article is the following:

###### Question 1.4.

Determine the linear span of $\operatorname{uMPS}(m,n,d)$; i.e. the smallest vector subspace of $(\mathbb{C}^{n})^{\otimes d}$ containing $\operatorname{uMPS}(m,n,d)$. In particular: what is the dimension of this space?

### 1.1. Cyclic and symmetric invariance

The space $(\mathbb{C}^{n})^{\otimes d}$ comes equipped with an action of the symmetric group $\mathfrak{S}_{d}$: for $\sigma\in\mathfrak{S}_{d}$ and $\omega=v_{1}\otimes\cdots\otimes v_{d}\in(\mathbb{C}^{n})^{\otimes d}$ we have $\sigma\cdot(v_{1}\otimes\cdots\otimes v_{d})=v_{\sigma^{-1}(1)}\otimes\cdots\otimes v_{\sigma^{-1}(d)}.$ The symmetric group $\mathfrak{S}_{d}$ naturally contains the cyclic group $C_{d}$ and the dihedral group $D_{2d}$ as subgroups. To be precise: let $r,s\in\mathfrak{S}_{d}$ be the cyclic permutation and reflection defined respectively by $r(i)=i+1\ (\operatorname{mod}d)\quad\text{and}\quad s(i)=d+1-i,$ then $C_{d}\subseteq\mathfrak{S}_{d}$ is the cyclic subgroup generated by $r$, and $D_{2d}$ is the subgroup generated by $r$ and $s$. The _cyclically symmetric tensors_ and _dihedrally symmetric tensors_ are then the elements of $(\mathbb{C}^{n})^{\otimes d}$ that are invariant under the action of these subgroups: $\displaystyle\operatorname{Cyc}^{d}(\mathbb{C}^{n}):=\\{\omega\in(\mathbb{C}^{n})^{\otimes d}\mid\sigma\cdot\omega=\omega\quad\forall\sigma\in C_{d}\\},$ $\displaystyle\operatorname{Dih}^{d}(\mathbb{C}^{n}):=\\{\omega\in(\mathbb{C}^{n})^{\otimes d}\mid\sigma\cdot\omega=\omega\quad\forall\sigma\in D_{2d}\\}.$ Note that both are linear subspaces of $(\mathbb{C}^{n})^{\otimes d}$, and that $\operatorname{Dih}^{d}(\mathbb{C}^{n})\subseteq\operatorname{Cyc}^{d}(\mathbb{C}^{n})$, where the inclusion is strict unless $d\leq 2$, or $n=2$ and $d\leq 5$.

###### Observation 1.5.

The set $\operatorname{uMPS}(m,n,d)$ is a subset of the space of cyclically invariant tensors $\operatorname{Cyc}^{d}(\mathbb{C}^{n})\subset(\mathbb{C}^{n})^{\otimes d}$, because the trace is invariant under cyclic permutations of the matrices: given $M_{1},\dots,M_{d}\in\mathbb{C}^{m\times m}$, we have $\operatorname{Tr}(M_{1}\cdots M_{d})=\operatorname{Tr}(M_{\sigma(1)}\cdots M_{\sigma(d)})$ for all $\sigma\in C_{d}$.
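Both this invariance and the stronger symmetry of Observation 1.7 below are easy to confirm numerically; a small sketch, reusing the illustrative `umps_tensor` function from Section 1:

```python
import numpy as np

# reusing umps_tensor from the sketch after Definition 1.1
rng = np.random.default_rng(1)

# Observation 1.5: invariance under a cyclic shift of the tensor factors
T = umps_tensor(rng.standard_normal((3, 2, 2)), d=5)    # m = 2, n = 3, d = 5
assert np.allclose(T, np.moveaxis(T, 0, -1))

# for m = n = 2 the reversal symmetry also holds (Observation 1.7 below)
T2 = umps_tensor(rng.standard_normal((2, 2, 2)), d=5)
assert np.allclose(T2, np.transpose(T2, axes=[4, 3, 2, 1, 0]))
```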
In other words, we can think of $\operatorname{uMPS}(m,n,d)$ as a subvariety of the ambient space $\operatorname{Cyc}^{d}(\mathbb{C}^{n})$. As noted in [CMS19, Corollary 3.18], if we fix $n$ and $d$ and let $m$ grow, the space $\operatorname{uMPS}(m,n,d)$ will eventually fill this entire ambient space.

###### Question 1.6.

For fixed $n$ and $d$, what is the smallest $m$ such that $\langle\operatorname{uMPS}(m,n,d)\rangle=\operatorname{Cyc}^{d}(\mathbb{C}^{n})$?

It is known that for $m=d$, equality holds [CMS19, Proposition 3.11]. On the other hand, it follows from a dimension count ([CMS19, Theorem 3.14], see also [NV18]) that if $d\gg m$, the inclusion $\langle\operatorname{uMPS}(m,n,d)\rangle\subset\operatorname{Cyc}^{d}(\mathbb{C}^{n})$ is strict. In Section 3.1 we will prove that already for $d=O(m^{2})$, we have a strict inclusion.

###### Observation 1.7.

In the case $m=n=2$, we have the stronger inclusion $\operatorname{uMPS}(2,2,d)\subseteq\operatorname{Dih}^{d}(\mathbb{C}^{2}).$ This is a consequence of the identity $\operatorname{Tr}(A_{i_{1}}\cdots A_{i_{d}})=\operatorname{Tr}(A_{i_{d}}\cdots A_{i_{1}})$, which holds for any pair of $2\times 2$ matrices $A_{0}$, $A_{1}$ and any sequence $i_{1},\ldots,i_{d}$ with $i_{j}\in\\{0,1\\}$; see [Gre14, Theorem 1.1].

###### Example 1.8.

We consider $\operatorname{uMPS}(2,2,6)$. For every $A_{0},A_{1}\in\mathbb{C}^{2\times 2}$, we have the trace relation $\operatorname{Tr}(A_{0}^{2}A_{1}^{2}A_{0}A_{1})=\operatorname{Tr}(A_{1}A_{0}A_{1}^{2}A_{0}^{2})$, which does not come from the invariance of the trace under cyclic permutations of the matrices.

### 1.2. $GL_{n}$-invariance

The general linear group $GL_{n}$ naturally acts on the space $(\mathbb{C}^{n})^{\otimes d}$: given $g\in GL_{n}$ and $\omega=v_{1}\otimes\cdots\otimes v_{d}\in(\mathbb{C}^{n})^{\otimes d}$, we have

(2) $g\cdot(v_{1}\otimes\cdots\otimes v_{d})=(g\cdot v_{1})\otimes\cdots\otimes(g\cdot v_{d}).$

Clearly, $\operatorname{Cyc}^{d}(\mathbb{C}^{n})$ and $\operatorname{Dih}^{d}(\mathbb{C}^{n})$ are invariant under this action. The following computation shows that $\operatorname{uMPS}(m,n,d)$ is invariant under this action as well:

(3) $\displaystyle g\cdot\varphi(A_{1},\dots,A_{n})$ $\displaystyle=\sum_{j_{1},\dots,j_{d}=1}^{n}\operatorname{Tr}(A_{j_{1}}\cdots A_{j_{d}})\ (\sum_{i_{1}=1}^{n}{g_{i_{1},j_{1}}e_{i_{1}}})\otimes\cdots\otimes(\sum_{i_{d}=1}^{n}{g_{i_{d},j_{d}}e_{i_{d}}})$ $\displaystyle=\sum_{j_{1},\dots,j_{d}=1}^{n}\sum_{i_{1},\dots,i_{d}=1}^{n}g_{i_{1},j_{1}}\cdots g_{i_{d},j_{d}}\operatorname{Tr}(A_{j_{1}}\cdots A_{j_{d}})\ e_{i_{1}}\otimes\cdots\otimes e_{i_{d}}$ $\displaystyle=\sum_{j_{1},\dots,j_{d}=1}^{n}\sum_{i_{1},\dots,i_{d}=1}^{n}\operatorname{Tr}(g_{i_{1},j_{1}}A_{j_{1}}\cdots g_{i_{d},j_{d}}A_{j_{d}})\ e_{i_{1}}\otimes\cdots\otimes e_{i_{d}}$ $\displaystyle=\sum_{i_{1},\dots,i_{d}=1}^{n}\operatorname{Tr}\Big{[}\Big{(}\sum_{j_{1}=1}^{n}g_{i_{1},j_{1}}A_{j_{1}}\Big{)}\cdots\Big{(}\sum_{j_{d}=1}^{n}g_{i_{d},j_{d}}A_{j_{d}}\Big{)}\Big{]}e_{i_{1}}\otimes\cdots\otimes e_{i_{d}}$ $\displaystyle=\varphi\Big{(}\sum_{j=1}^{n}g_{1,j}A_{j},\dots,\sum_{j=1}^{n}g_{n,j}A_{j}\Big{)}.$

This means that the space $\langle\operatorname{uMPS}(m,n,d)\rangle$ we are interested in is naturally a representation of $GL_{n}$.
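The equivariance computed in (3) can likewise be sanity-checked numerically; a minimal sketch, again reusing the illustrative `umps_tensor` function:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, d = 2, 3, 4
As = rng.standard_normal((n, m, m))
g = rng.standard_normal((n, n))

lhs = umps_tensor(As, d)
for axis in range(d):    # act with g on every tensor factor, as in (2)
    lhs = np.moveaxis(np.tensordot(g, lhs, axes=([1], [axis])), 0, axis)

# the right-hand side of (3): phi applied to the matrices sum_j g_{i,j} A_j
rhs = umps_tensor(np.einsum('ij,jkl->ikl', g, As), d)
assert np.allclose(lhs, rhs)
```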
In the remainder of this subsection, we briefly recall what we will use about the representation theory of $GL_{n}$, and fix notation. We once and for all fix a torus $T\subset GL_{n}$, consisting of all diagonal matrices, and identify $T$ with $(\mathbb{C}^{*})^{n}$.

For $\lambda=(\lambda_{0},\ldots,\lambda_{n-1})\in\mathbb{Z}^{n}$ and $t=\operatorname{diag}(t_{0},\ldots,t_{n-1})\in T$, we write $t^{\lambda}=t_{0}^{\lambda_{0}}\cdots t_{n-1}^{\lambda_{n-1}}\in\mathbb{C}^{*}$. Let $V$ be any representation of $GL_{n}$. For any $\lambda\in\mathbb{Z}^{n}$, the _weight space_ $V_{\lambda}$ is defined as $V_{\lambda}=\\{v\in V\mid t\cdot v=t^{\lambda}v\quad\forall t\in T\\}.$ It is a well-known fact from representation theory that $V=\bigoplus_{\lambda\in\mathbb{Z}^{n}}{V_{\lambda}}$ as vector spaces, and that knowing the dimensions of the weight spaces uniquely determines the representation $V$ up to isomorphism. The polynomial $\chi_{V}=\sum_{\lambda\in\mathbb{Z}^{n}}{(\dim V_{\lambda})t^{\lambda}}$ is known as the _character_ of $V$. If we view our representation as a morphism $\rho:GL_{n}\to GL(V)$, the character $\chi_{V}$ is equal to $\operatorname{Tr}(\rho(\operatorname{diag}(t_{0},\ldots,t_{n-1})))$. So we can refine our main Question 1.4 to:

###### Question 1.9.

Let $V=\langle\operatorname{uMPS}(m,n,d)\rangle$, viewed as a $GL_{n}$-representation. For every weight $\lambda\in\mathbb{Z}^{n}$, determine the dimension of the weight space $V_{\lambda}$.

### 1.3. Words, necklaces and bracelets

As a warm-up, let us consider the classical representation $V=(\mathbb{C}^{n})^{\otimes d}$. Its character is equal to $(t_{0}+\cdots+t_{n-1})^{d}$, and by expanding we find that $\dim{V_{\lambda}}$ is equal to the multinomial coefficient $\binom{d}{\lambda_{0},\ldots,\lambda_{n-1}}$ if $\sum_{i}\lambda_{i}=d$, and zero otherwise. We can also see this in terms of coordinates, and this will be useful later:

###### Definition 1.10.

A _word_ of length $d$ on the alphabet $[n]$ is just an ordered tuple $I=(i_{1},\ldots,i_{d})$, with $i_{j}\in[n]$. The _weight_ of a word $I$ is a tuple $w(I)=(w_{0},\ldots,w_{n-1})\in\mathbb{Z}^{n}$, where $w_{i}$ is the number of entries in $I$ that are equal to $i$.

For every word $I=(i_{1},\ldots,i_{d})$, we can define a vector $e_{I}:=e_{i_{1}}\otimes\cdots\otimes e_{i_{d}}$. The space $(\mathbb{C}^{n})^{\otimes d}$ has a basis given by the $e_{I}$, where $I$ runs over all words of length $d$ on the alphabet $[n]$. In addition, every $e_{I}$ is a weight vector of weight $w(I)$. So the dimension of the weight space $V_{\lambda}$ is the number of words of weight $\lambda$, which is indeed the multinomial coefficient $\binom{d}{\lambda_{0},\ldots,\lambda_{n-1}}$.

We move on to the subrepresentations $\operatorname{Cyc}^{d}(\mathbb{C}^{n})$ and $\operatorname{Dih}^{d}(\mathbb{C}^{n})$, the natural ambient spaces for uniform matrix product states.

###### Definition 1.11.

A _necklace_ (of length $d$ on the alphabet $[n]$) is an equivalence class of words, where two words are equivalent if they agree up to the action of $C_{d}$. A _bracelet_ (of length $d$ on the alphabet $[n]$) is an equivalence class of words, where two words are equivalent if they agree up to the action of $D_{2d}$.

For a fixed necklace or bracelet, all words in the equivalence class clearly have the same weight; this is the _weight_ of the necklace or bracelet. We denote by $N(n,d)$, resp. $B(n,d)$, the set of necklaces, resp. bracelets, of length $d$ on $[n]$, and by $N_{\lambda}(n,d)\subset N(n,d)$, resp. $B_{\lambda}(n,d)\subset B(n,d)$, the subset of elements of weight $\lambda\in\mathbb{Z}^{n}$. To every necklace $N\in N(n,d)$, we associate a basis vector $e_{N}:=\frac{1}{d}\sum_{\sigma\in C_{d}}{\sigma\cdot e_{I}}$, where $I$ is any representative of $N$.
Then $\operatorname{Cyc}^{d}(\mathbb{C}^{n})$ has a basis given by $\\{e_{N}:\ N\in N(n,d)\\}$. Moreover, $e_{N}$ is a weight vector of weight $w(N)$, hence we find that the dimension of the weight space of weight $\lambda$ is given by the number $|N_{\lambda}(n,d)|$ of necklaces of weight $\lambda$.

###### Remark 1.12.

The number $|N(n,d)|$ of necklaces of length $d$ on $[n]$ can be counted using Polya’s enumeration theorem, see for instance [Sta13, Theorem 7.10]:

(4) $\dim\operatorname{Cyc}^{d}(\mathbb{C}^{n})=|N(n,d)|=\frac{1}{d}\sum_{l|d}\varphi(l)n^{\frac{d}{l}},$

where $\varphi$ is Euler’s totient function. There is also a formula for $|N_{\lambda}(n,d)|=\dim\operatorname{Cyc}^{d}(\mathbb{C}^{n})_{\lambda}$: it is equal to the coefficient of $x_{0}^{\lambda_{0}}\cdots x_{n-1}^{\lambda_{n-1}}$ in the polynomial $\frac{1}{d}\sum_{\ell\mid d}{(x_{0}^{d/\ell}+\cdots+x_{n-1}^{d/\ell})^{\ell}}\varphi(\frac{d}{\ell}).$

To every bracelet $b\in B(n,d)$, we associate a basis vector $e_{b}:=\frac{1}{2d}\sum_{\sigma\in D_{2d}}{\sigma\cdot e_{I}}$, where $I$ is a representative of $b$. Then $\operatorname{Dih}^{d}(\mathbb{C}^{n})$ has a basis given by $\\{e_{b}:\ b\in B(n,d)\\}$, and the dimension of the weight space of weight $\lambda$ is given by the number $|B_{\lambda}(n,d)|$ of bracelets of weight $\lambda$.

###### Remark 1.13.

The number of bracelets of length $d$ on $[n]$ is given by

(5) $\dim\operatorname{Dih}^{d}(\mathbb{C}^{n})=|B(n,d)|=\begin{cases}\frac{1}{2}|N(n,d)|+\frac{1}{4}(n+1)n^{d/2}&\text{for }d\text{ even,}\\\ \frac{1}{2}|N(n,d)|+\frac{1}{4}n^{(d+1)/2}&\text{for }d\text{ odd.}\end{cases}$

We only state the formula for $|B_{\lambda}(n,d)|$ in the case of binary bracelets (i.e. $n=2$), as that is the only case that is relevant to us: $|B_{(w,d-w)}(2,d)|=\begin{cases*}\left(\frac{1}{2d}\sum_{l|\gcd(d,w)}{\varphi(l)\binom{\frac{d}{l}}{\frac{w}{l}}}\right)+\frac{1}{2}\binom{\frac{d}{2}-1}{\frac{w-1}{2}}&\text{for $w$ odd,}\\\ \left(\frac{1}{2d}\sum_{l|\gcd(d,w)}{\varphi(l)\binom{\frac{d}{l}}{\frac{w}{l}}}\right)+\frac{1}{2}\binom{\frac{d}{2}}{\frac{w}{2}}&\text{for $w$ even.}\end{cases*}$

## 2. Computations

In this section, we describe how to computationally answer Question 1.9 for fixed parameters. We focus on the smallest interesting case $m=n=2$. In this case we are dealing with representations of $GL_{2}$, so the weights are in $\mathbb{Z}^{2}$. Moreover, the only weights occurring in $(\mathbb{C}^{2})^{\otimes d}$ are $(w,d-w)$ for $w=0,\ldots,d$. For subrepresentations of $(\mathbb{C}^{2})^{\otimes d}$, we will abbreviate the weight spaces $V_{(w,d-w)}$ to $V_{w}$. Our goal is to determine the dimension of the weight spaces $\langle\operatorname{uMPS}(2,2,d)\rangle_{w}$. All of our dimension counts in this section and the next use the following easy observation.

###### Observation 2.1.

For $p_{1},\ldots,p_{N}\in\mathbb{C}[y_{1},\ldots,y_{s}]$ polynomials, and $X$ the image of the polynomial map

(6) $\displaystyle\begin{split}\mathbb{C}^{s}&\to\mathbb{C}^{N}\\\ (y_{1},\ldots,y_{s})&\mapsto(p_{1}(y_{1},\ldots,y_{s}),\ldots,p_{N}(y_{1},\ldots,y_{s})),\end{split}$

a linear form $\sum{\alpha_{i}x_{i}}$ vanishes on $X$ if and only if the identity $\sum{\alpha_{i}p_{i}}=0$ holds in the polynomial ring $\mathbb{C}[y_{1},\ldots,y_{s}]$. In particular, the dimension of the linear span of $X$ is equal to the dimension of the subspace of $\mathbb{C}[y_{1},\ldots,y_{s}]$ spanned by the $p_{i}$’s.
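To illustrate Observation 2.1 on a toy example (a hypothetical sketch using sympy, with illustrative names): the image of $(y_{1},y_{2})\mapsto(y_{1}^{2},\,y_{1}y_{2},\,y_{2}^{2},\,y_{1}^{2}+y_{2}^{2})$ spans only a $3$-dimensional subspace of $\mathbb{C}^{4}$, cut out by the linear equation $x_{4}=x_{1}+x_{3}$.

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
ps = [y1**2, y1*y2, y2**2, y1**2 + y2**2]      # note p4 = p1 + p3
polys = [sp.Poly(p, y1, y2) for p in ps]
monoms = sorted({mo for p in polys for mo in p.monoms()})
# rows = the p_i, columns = monomials; the rank is the dimension of the span
C = sp.Matrix([[p.nth(*mo) for mo in monoms] for p in polys])
print(C.rank())    # 3
```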
### 2.1. The trace parametrization

If we directly use Definition 1.1, we see that $\operatorname{uMPS}(2,2,d)$ is the closed image of a polynomial map $\mathbb{C}^{8}\to\operatorname{Dih}^{d}(\mathbb{C}^{2})$. However, in this specific case there is an alternative parametrization by $\mathbb{C}^{5}$ instead of $\mathbb{C}^{8}$: the _trace parametrization_, which appears to be computationally more efficient in practice. It is based on the connection between uniform matrix product states and the invariant theory of matrices.

###### Definition 2.2.

Let $R=\mathbb{C}[(a_{ij}^{k})_{1\leq i,j\leq m;\,0\leq k\leq n-1}]$ be the polynomial ring in $m^{2}n$ variables, and for $k=0,\ldots,n-1$, let $A_{k}:=(a_{ij}^{k})_{1\leq i,j\leq m}$ be generic $m\times m$ matrices. The _trace algebra_ $\mathcal{C}_{m,n}$ is the subalgebra of $R$ generated by the polynomials $\operatorname{Tr}(A_{i_{1}}\cdots A_{i_{s}})$, where $(i_{1},\ldots,i_{s})$ runs over all words (or equivalently: necklaces) in $[n]$.

###### Remark 2.3.

The trace algebra is precisely the subring of $R$ consisting of all elements that are invariant under simultaneous conjugation: $f\in\mathcal{C}_{m,n}\iff f(P^{-1}A_{0}P,\ldots,P^{-1}A_{n-1}P)=f(A_{0},\ldots,A_{n-1})\quad\forall P\in GL_{m}.$ This is known as the _first fundamental theorem_ in the invariant theory of matrices [Sib68, Pro76].

###### Proposition 2.4 ([Sib68, Corollary 2]).

The trace algebra $\mathcal{C}_{2,2}$ is generated by the following five polynomials: $\operatorname{Tr}(A_{0}),\operatorname{Tr}(A_{1}),\operatorname{Tr}(A_{0}^{2}),\operatorname{Tr}(A_{0}A_{1}),\operatorname{Tr}(A_{1}^{2}),$ and moreover, there are no polynomial relations between these generators.

###### Corollary 2.5.

For every bracelet $b=(b_{1},\ldots,b_{k})$, there is a unique polynomial $P_{b}(T_{0},T_{1},T_{00},T_{01},T_{11})\in\mathbb{C}[T_{0},T_{1},T_{00},T_{01},T_{11}]$ such that for every pair $(A_{0},A_{1})$ of $2\times 2$ matrices, the following equality holds:

(7) $\operatorname{Tr}(A_{b_{1}}\cdots A_{b_{k}})=P_{b}(\operatorname{Tr}(A_{0}),\operatorname{Tr}(A_{1}),\operatorname{Tr}(A_{0}^{2}),\operatorname{Tr}(A_{0}A_{1}),\operatorname{Tr}(A_{1}^{2})).$

###### Remark 2.6.

If we give the ring $\mathbb{C}[T_{0},T_{1},T_{00},T_{01},T_{11}]$ a grading by putting $\deg(T_{0})=\deg(T_{1})=1$ and $\deg(T_{00})=\deg(T_{01})=\deg(T_{11})=2$, then the polynomial $P_{b}$ is homogeneous of degree $\operatorname{length}(b)$.

The above means that $\operatorname{uMPS}(2,2,d)$ is the image of the polynomial map

(8) $\displaystyle\begin{split}\psi:\mathbb{C}^{5}\to&\operatorname{Dih}^{d}(\mathbb{C}^{2})\\\ (T_{0},T_{1},T_{00},T_{01},T_{11})\mapsto&\sum_{b}{P_{b}(T_{0},T_{1},T_{00},T_{01},T_{11})e_{b}},\end{split}$

where $b$ runs over all bracelets of length $d$. This is the trace parametrization.
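By Proposition 2.4 and Remark 2.6, each $P_{b}$ can be found by plain linear algebra: expand $\operatorname{Tr}(A_{b_{1}}\cdots A_{b_{k}})$ in the matrix entries and match it against an ansatz over all monomials in the five generators of the correct weighted degree. A minimal sympy sketch of this idea (ours, with illustrative naming; it is not the implementation of [DLMS22]) reproduces the length-$3$ formulas displayed next:

```python
import itertools

import sympy as sp

A0 = sp.Matrix(2, 2, sp.symbols('a1:5'))
A1 = sp.Matrix(2, 2, sp.symbols('b1:5'))
entries = sp.symbols('a1:5') + sp.symbols('b1:5')
# the five generators of C_{2,2} (Proposition 2.4), with degrees as in Remark 2.6
gen_polys = [A0.trace(), A1.trace(), (A0*A0).trace(), (A0*A1).trace(), (A1*A1).trace()]
T = sp.symbols('T0 T1 T00 T01 T11')
degs = [1, 1, 2, 2, 2]

def find_Pb(bracelet):
    M = sp.eye(2)
    for i in bracelet:
        M = M * (A0 if i == 0 else A1)
    target = sp.expand(M.trace())
    d = len(bracelet)
    # ansatz: all monomials in the generators of weighted degree d
    expos = [e for e in itertools.product(range(d + 1), repeat=5)
             if sum(ei * di for ei, di in zip(e, degs)) == d]
    cs = sp.symbols(f'c0:{len(expos)}')
    ansatz = sum(c * sp.Mul(*[g**ei for g, ei in zip(gen_polys, e)])
                 for c, e in zip(cs, expos))
    eqs = sp.Poly(sp.expand(ansatz - target), *entries).coeffs()
    sol = sp.solve(eqs, cs, dict=True)[0]   # unique solution, by Proposition 2.4
    return sum(sol.get(c, 0) * sp.Mul(*[t**ei for t, ei in zip(T, e)])
               for c, e in zip(cs, expos))

print(find_Pb((0, 0, 0)))    # -T0**3/2 + 3*T0*T00/2, as displayed below
```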
In order to compute the polynomials $P_{b}$: for bracelets of length $3$, one verifies that

$\displaystyle P_{000}$ $\displaystyle=-\frac{1}{2}T_{0}^{3}+\frac{3}{2}T_{0}T_{00},$ $\displaystyle P_{100}$ $\displaystyle=-\frac{1}{2}T_{0}^{2}T_{1}+\frac{1}{2}T_{1}T_{00}+T_{0}T_{01},$ $\displaystyle P_{110}$ $\displaystyle=-\frac{1}{2}T_{0}T_{1}^{2}+\frac{1}{2}T_{0}T_{11}+T_{1}T_{01},$ $\displaystyle P_{111}$ $\displaystyle=-\frac{1}{2}T_{1}^{3}+\frac{3}{2}T_{1}T_{11}.$

For bracelets of length $\geq 4$, we can inductively use the following identity [Sib68], which holds for every quadruple $(A,B,C,D)$ of $2\times 2$-matrices:

$\displaystyle 2\operatorname{Tr}(ABCD)=$ $\displaystyle\operatorname{Tr}(A)\left(\operatorname{Tr}(BCD)-\operatorname{Tr}(B)\operatorname{Tr}(CD)\right)+\operatorname{Tr}(B)\left(\operatorname{Tr}(CDA)-\operatorname{Tr}(C)\operatorname{Tr}(DA)\right)$ $\displaystyle+\operatorname{Tr}(C)\left(\operatorname{Tr}(DAB)-\operatorname{Tr}(D)\operatorname{Tr}(AB)\right)+\operatorname{Tr}(D)\left(\operatorname{Tr}(ABC)-\operatorname{Tr}(A)\operatorname{Tr}(BC)\right)$ (9) $\displaystyle-\operatorname{Tr}(AC)\operatorname{Tr}(BD)+\operatorname{Tr}(AB)\operatorname{Tr}(CD)+\operatorname{Tr}(AD)\operatorname{Tr}(BC)$ $\displaystyle+\operatorname{Tr}(A)\operatorname{Tr}(B)\operatorname{Tr}(C)\operatorname{Tr}(D).$

### 2.2. Computing the character

The weight space $\langle\operatorname{uMPS}(2,2,d)\rangle_{w}$ is the image of the map

$\displaystyle\mathbb{C}^{5}\to$ $\displaystyle\operatorname{Dih}^{d}(\mathbb{C}^{2})_{w}$ $\displaystyle(T_{0},T_{1},T_{00},T_{01},T_{11})\mapsto$ $\displaystyle\sum_{b}{P_{b}(T_{0},T_{1},T_{00},T_{01},T_{11})e_{b}},$

where $b$ ranges over all bracelets of weight $w$. By Observation 2.1, we need to compute the dimension of the linear subspace of $\mathbb{C}[T_{0},T_{1},T_{00},T_{01},T_{11}]$ spanned by the $P_{b}$’s. This can be computed by putting the coefficients of the $P_{b}$’s in a matrix and computing its rank.

Data: $d$
Result: Character of the representation $\langle\operatorname{uMPS}(2,2,d)\rangle$
$T_{0}\leftarrow\operatorname{Tr}(A_{0})$; $T_{1}\leftarrow\operatorname{Tr}(A_{1})$; $T_{00}\leftarrow\operatorname{Tr}(A_{0}^{2})$; $T_{01}\leftarrow\operatorname{Tr}(A_{0}A_{1})$; $T_{11}\leftarrow\operatorname{Tr}(A_{1}^{2})$;
for $\ell=3$ to $d$ do
  $bracelets$ = bracelets of length $\ell$;
  for $b$ in $bracelets$ do
    P[b] $\leftarrow$ $\operatorname{Tr}(A_{b_{1}}\cdots A_{b_{\ell}})$, expressed in the $T_{i}$;
  end for
end for
for $w=0$ to $d$ do
  List the monomials $\mathbf{y}^{\alpha_{1}},\ldots,\mathbf{y}^{\alpha_{t}}$ appearing in the $P[b]$, where $b$ ranges over all bracelets of weight $w$;
  Write $P[b]=\sum_{j}\beta_{b,j}\mathbf{y}^{\alpha_{j}}$;
  Compute the rank of the matrix $(\beta_{b,j})_{b,j}$;
end for
Algorithm 1 Linear span of $\langle\operatorname{uMPS}(2,2,d)\rangle$

Putting everything together, we obtain Algorithm 1, which for a given $d$ computes the character (in particular the dimension) of $\langle\operatorname{uMPS}(2,2,d)\rangle$. We implemented this algorithm in SageMath [DLMS22]. The results are summarized in Table 1, where we write $D_{w}:=\dim\langle\operatorname{uMPS}(2,2,d)\rangle_{w}$. For $d<8$, the space $\langle\operatorname{uMPS}(2,2,d)\rangle$ is equal to the ambient space $\operatorname{Dih}^{d}(\mathbb{C}^{2})$.
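As an independent cross-check of the table below, the entries $D_{w}$ can also be computed by brute force, working directly with the matrix entries instead of the trace parametrization; this is equivalent by Observation 2.1, though slower. A hedged Python/sympy sketch (illustrative naming; feasible only for small $d$):

```python
import itertools

import sympy as sp

def bracelets(d, w):
    """Canonical representatives of the binary bracelets of length d and weight w."""
    reps = set()
    for pos in itertools.combinations(range(d), w):
        v = tuple(1 if i in pos else 0 for i in range(d))
        orbit = [v[k:] + v[:k] for k in range(d)]
        orbit += [tuple(reversed(u)) for u in orbit]
        reps.add(min(orbit))
    return sorted(reps)

def D(d, w):
    """dim <uMPS(2,2,d)>_w, via the rank computation of Observation 2.1."""
    A = {0: sp.Matrix(2, 2, sp.symbols('a1:5')),
         1: sp.Matrix(2, 2, sp.symbols('b1:5'))}
    gens = sp.symbols('a1:5') + sp.symbols('b1:5')
    polys = []
    for b in bracelets(d, w):
        M = sp.eye(2)
        for i in b:
            M = M * A[i]
        polys.append(sp.Poly(sp.expand(M.trace()), *gens))
    monoms = sorted({mo for p in polys for mo in p.monoms()})
    C = sp.Matrix([[p.nth(*mo) for mo in monoms] for p in polys])
    return C.rank()

print([D(8, w) for w in range(5)])    # [1, 1, 4, 5, 7]: the d = 8 row of Table 1
```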
Table 1. Character of $\langle\operatorname{uMPS}(2,2,d)\rangle$. Since $D_{w}=D_{d-w}$ (more generally, for every $GL_{2}$-representation $V$ we have $\dim V_{(b_{0},b_{1})}=\dim V_{(b_{1},b_{0})}$), we only list $D_{w}$ for $w\leq\lceil\frac{d}{2}\rceil$.

$\begin{array}[]{l|lllllllllll|l|l}\hline\cr\hline\cr d&D_{0}&D_{1}&D_{2}&D_{3}&D_{4}&D_{5}&D_{6}&D_{7}&D_{8}&D_{9}&D_{10}&\dim\langle\operatorname{uMPS}(2,2,d)\rangle&\dim\operatorname{Dih}^{d}(\mathbb{C}^{2})\\\ \hline\cr 8&1&1&4&5&7&&&&&&&29&30\\\ 9&1&1&4&6&8&&&&&&&40&46\\\ 10&1&1&5&7&11&11&&&&&&61&78\\\ 11&1&1&5&8&12&14&&&&&&82&126\\\ 12&1&1&6&9&15&17&20&&&&&118&224\\\ 13&1&1&6&10&16&20&23&&&&&154&380\\\ 14&1&1&7&11&19&23&29&29&&&&211&687\\\ 15&1&1&7&12&20&26&32&35&&&&268&1224\\\ 16&1&1&8&13&23&29&38&41&45&&&353&2250\\\ 17&1&1&8&14&24&32&41&47&51&&&438&4112\\\ 18&1&1&9&15&27&35&47&53&61&61&&559&7685\\\ 19&1&1&9&16&28&38&50&59&67&71&&680&14310\\\ 20&1&1&10&17&31&41&56&65&77&81&86&846&27012\\\ \hline\cr\hline\cr\end{array}$

Based on these computations, we make the following conjecture:

###### Conjecture 2.7.

$\dim\langle\operatorname{uMPS}(2,2,d)\rangle_{w}=\begin{cases}1+\frac{d(v-1)v}{2}-\frac{2(v-1)v(2v-1)}{3}+v\lfloor\frac{d}{2}\rfloor-2v^{2}+v&\text{ for $w=2v$,}\\\ 1+\frac{dv(v+1)}{2}-\frac{2v(v+1)(2v+1)}{3}&\text{for $w=2v+1$.}\end{cases}$

This would in particular imply

###### Conjecture 2.8 (Corollary of Conjecture 2.7).

$\dim\langle\operatorname{uMPS}(2,2,d)\rangle=\begin{cases}\frac{1}{192}(d^{4}-4d^{2}+192d+192)&\text{ for $d$ even,}\\\ \frac{1}{192}(d^{4}-10d^{2}+192d+201)&\text{ for $d$ odd.}\end{cases}$
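For reference, the two cases of Conjecture 2.7 are easy to evaluate; the following short sketch (illustrative naming) reproduces the rows of Table 1, for instance the row $d=8$ and the total $846$ for $d=20$:

```python
def conjectured_D(d, w):
    """The conjectured dimension of <uMPS(2,2,d)>_w (Conjecture 2.7)."""
    v = w // 2
    if w % 2 == 0:    # w = 2v
        return (1 + d*(v - 1)*v//2 - 2*(v - 1)*v*(2*v - 1)//3
                + v*(d//2) - 2*v**2 + v)
    return 1 + d*v*(v + 1)//2 - 2*v*(v + 1)*(2*v + 1)//3    # w = 2v + 1

print([conjectured_D(8, w) for w in range(5)])          # [1, 1, 4, 5, 7]
print(sum(conjectured_D(20, w) for w in range(21)))     # 846
```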
### 2.3. Higher degree equations

Equations of tensor varieties, such as secant varieties of the Segre variety or Veronese variety, are well known to be hard to compute. For uniform matrix product states, Critch and Morton gave a complete description of the ideal of $\operatorname{uMPS}(2,2,4)$ and $\operatorname{uMPS}(2,2,5)$, and several linear equations of $\operatorname{uMPS}(2,2,d)$ are given for $d$ up to $12$, cf. [CM14]. The generators of the ideal of $\operatorname{uMPS}(2,2,d)$ for $d=4,5,6$ are given in [CMS19]. Our algorithm from the previous section computes the linear span of $\operatorname{uMPS}(2,2,d)$, which is equivalent to computing the degree $1$ part of its defining ideal. With some small modifications, one can obtain an algorithm to compute the degree $k$ part of the defining ideal, viewed as a $GL_{2}$-representation. In this paper we will only give a brief sketch of this algorithm. For a more detailed account, including the use of raising operators to speed up the algorithm, we refer the reader to [DL22].

We first consider a slightly more general setting: let $X$ be any variety that is given as the closed image of a homogeneous polynomial map $p:\mathbb{C}^{s}\to\mathbb{C}^{N}$ as in (6), and let $I$ be its (homogeneous) defining ideal. Suppose that $p$ is compatible with a given $GL_{n}$-action on $\mathbb{C}^{s}$ and $\mathbb{C}^{N}$. Then for every $k\in\mathbb{N}$, the degree $k$ part $I_{k}$ is naturally a $GL_{n}$-representation. Fix a weight $w$ and consider the map $p^{k}_{w}:\mathbb{C}^{s}\xrightarrow{p}\mathbb{C}^{N}\xrightarrow{\nu_{k}}S^{k}\mathbb{C}^{N}\xrightarrow{}(S^{k}\mathbb{C}^{N})_{w},$ where $\nu_{k}$ is the $k$’th Veronese embedding. Then the weight space $I_{k,w}\subseteq(S^{k}\mathbb{C}^{N})_{w}^{*}$ is equal to $\langle\mathrm{Im}\;(p^{k}_{w})\rangle^{\perp}\cong\left({S^{k}\mathbb{C}^{N}}/{\langle\mathrm{Im}\;(p^{k}_{w})\rangle}\right)^{*}.$ The character of this representation is given by the difference of the characters of $S^{k}\mathbb{C}^{N}$ and $\langle\mathrm{Im}\;(p^{k}_{w})\rangle$. The former can be computed from the character of $\mathbb{C}^{N}$, and hence we are left to compute the character of $\langle\mathrm{Im}\;(p^{k}_{w})\rangle$.

In the case $X=\operatorname{uMPS}(2,2,d)$, we can compute the character of $\langle\mathrm{Im}\;(p^{k}_{w})\rangle$ using a minor modification of Algorithm 1, where in the second loop we replace the collection of polynomials $\\{P[b]\mid\operatorname{weight}(b)=w\\}$ with their $k$-fold products $\left\\{\prod_{j=1}^{k}P[b_{j}]\mid\sum_{j}\operatorname{weight}(b_{j})=w\right\\}.$ Using this algorithm, we computed $I(\operatorname{uMPS}(2,2,d))_{2}$ for $d\leq 10$ and $I(\operatorname{uMPS}(2,2,d))_{3}$ for $d\leq 9$. Our code is available at [DLMS22], and our results are summarized in Tables 2 and 3.

Table 2. Character of $I(\operatorname{uMPS}(2,2,d))^{*}_{2}$. Since $D_{w}=D_{2d-w}$ (by the same symmetry as in Table 1), we only list $D_{w}$ for $w\leq d$.

$\begin{array}[]{l|llllllll}\hline\cr\hline\cr d&D_{3}&D_{4}&D_{5}&D_{6}&D_{7}&D_{8}&D_{9}&D_{10}\\\ \hline\cr 6&0&1&1&2&&&&\\\ 7&0&1&3&6&7&&&\\\ 8&0&5&10&25&32&42&&\\\ 9&1&7&21&48&79&110&119&\\\ 10&1&14&38&100&176&290&360&408\\\ \hline\cr\hline\cr\end{array}$

Table 3. Character of $I(\operatorname{uMPS}(2,2,d))^{*}_{3}$. Since $D_{w}=D_{3d-w}$ (again by the same symmetry), we only list $D_{w}$ for $w\leq\lceil\frac{3d}{2}\rceil$.

$\begin{array}[]{l|lllllllllll}\hline\cr\hline\cr d&D_{3}&D_{4}&D_{5}&D_{6}&D_{7}&D_{8}&D_{9}&D_{10}&D_{11}&D_{12}&D_{13}\\\ \hline\cr 6&0&1&2&8&11&17&17&&&&\\\ 7&0&1&4&15&29&49&67&77&&&\\\ 8&0&5&14&51&101&198&292&414&478&532&\\\ 9&1&7&26&83&191&388&671&1039&1431&1784&1983\\\ \hline\cr\hline\cr\end{array}$

## 3. Results

### 3.1. Linear relations via Cayley-Hamilton

We recall the classical Cayley-Hamilton theorem. We use this result in order to find linear equations for ${\operatorname{uMPS}(m,n,d+k)}$, $k\geq m$, based on linear equations for ${\operatorname{uMPS}(m,n,N)}$, $N=d,d+1,\dots,d+m-1$.

###### Theorem 3.1 (Cayley-Hamilton).

Let $A\in\mathbb{C}^{m\times m}$ be an $m\times m$ complex matrix and $p_{A}(\lambda)=\det(\lambda\mathrm{Id}_{m}-A)$ its characteristic polynomial. Then $p_{A}(A)=0$.

In fact, the only thing we will use is the following statement, which (since $\deg p_{A}=m$) immediately follows from Theorem 3.1.

###### Corollary 3.2.

Let $A\in\mathbb{C}^{m\times m}$. Then every power $A^{k}$ with $k\geq m$ can be written as a linear combination of the lower powers $\mathrm{Id}_{m},A,\ldots,A^{m-1}$.
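Both statements are easy to confirm numerically; a small NumPy sketch (using `numpy.poly`, which returns the coefficients of the characteristic polynomial of a matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4
A = rng.standard_normal((m, m))
c = np.poly(A)    # monic characteristic polynomial; c[0] = 1
# Theorem 3.1: p_A(A) = 0
p_of_A = sum(c[i] * np.linalg.matrix_power(A, m - i) for i in range(m + 1))
assert np.allclose(p_of_A, 0)
# Corollary 3.2: hence A^m = -(c[1] A^{m-1} + ... + c[m] Id) and, iterating,
# every power A^k with k >= m lies in span{Id, A, ..., A^{m-1}}.
```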
###### Lemma 3.3.

Let $c=(c_{1},\ldots,c_{s})\in\mathbb{C}^{s}$ be a vector of coefficients and $\\{i_{\ell}^{j}\\}_{1\leq\ell\leq d,1\leq j\leq s}$ be indices, with $i_{\ell}^{j}\in[n]$. Assume that for every $n$-tuple $(A_{0},\ldots,A_{n-1})$ of $m\times m$ matrices and every $k<m$ the following identity holds:

(10) $\sum_{j=1}^{s}c_{j}\operatorname{Tr}(A_{i_{1}^{j}}\cdots A_{i_{d}^{j}}A_{0}^{k})=0.$

Then the same identity holds for arbitrary $k\in\mathbb{N}$.

###### Proof.

Let $k\geq m$. By Corollary 3.2, $A_{0}^{k}$ can be written as a linear combination $\sum_{l=0}^{m-1}\gamma_{l}A_{0}^{l}$ of the powers $A_{0}^{l}$, $l=0,\dots,m-1$. Therefore, we have $\displaystyle\sum_{j=1}^{s}c_{j}\operatorname{Tr}(A_{i_{1}^{j}}\cdots A_{i_{d}^{j}}A_{0}^{k})$ $\displaystyle=\sum_{j=1}^{s}c_{j}\operatorname{Tr}\Big{[}A_{i_{1}^{j}}\cdots A_{i_{d}^{j}}\Big{(}\sum_{l=0}^{m-1}\gamma_{l}A_{0}^{l}\Big{)}\Big{]}$ $\displaystyle=\sum_{j=1}^{s}c_{j}\sum_{l=0}^{m-1}\gamma_{l}\big{(}\operatorname{Tr}(A_{i_{1}^{j}}\cdots A_{i_{d}^{j}}A_{0}^{l})\big{)}$ $\displaystyle=\sum_{l=0}^{m-1}\gamma_{l}\Big{(}\sum_{j=1}^{s}c_{j}\big{(}\operatorname{Tr}(A_{i_{1}^{j}}\cdots A_{i_{d}^{j}}A_{0}^{l})\big{)}\Big{)}=0.$ ∎

The usefulness of Lemma 3.3 stems from the fact that one can find expressions of the form (10) which are trivial for small $k$, in the sense that they follow from the cyclic invariance of the trace, but nontrivial for large $k$. We illustrate this in the example below.

###### Example 3.4.

We show that for any $2\times 2$ matrices $A_{0},A_{1},A_{2},A_{3}$ and any $k\geq 0$, the following identity holds:

$\displaystyle\operatorname{Tr}(A_{1}A_{2}A_{0}A_{3}A_{0}^{k})+\operatorname{Tr}(A_{2}A_{3}A_{0}A_{1}A_{0}^{k})+\operatorname{Tr}(A_{3}A_{1}A_{0}A_{2}A_{0}^{k})$ $\displaystyle=$ $\displaystyle\operatorname{Tr}(A_{1}A_{0}A_{2}A_{3}A_{0}^{k})+\operatorname{Tr}(A_{2}A_{0}A_{3}A_{1}A_{0}^{k})+\operatorname{Tr}(A_{3}A_{0}A_{1}A_{2}A_{0}^{k}).$

By Lemma 3.3 (with $m=2$), it suffices to show the identity for $k=0$ and $k=1$. But both of these follow from the cyclic invariance of the trace. For $k=0$:

$\operatorname{Tr}(A_{1}A_{2}A_{0}A_{3})+\operatorname{Tr}(A_{2}A_{3}A_{0}A_{1})+\operatorname{Tr}(A_{3}A_{1}A_{0}A_{2})=\operatorname{Tr}(A_{1}A_{0}A_{2}A_{3})+\operatorname{Tr}(A_{2}A_{0}A_{3}A_{1})+\operatorname{Tr}(A_{3}A_{0}A_{1}A_{2}),$

since each trace on the left equals one on the right after a cyclic rotation; for instance $\operatorname{Tr}(A_{1}A_{2}A_{0}A_{3})=\operatorname{Tr}(A_{2}A_{0}A_{3}A_{1})$. Similarly, for $k=1$:

$\operatorname{Tr}(A_{1}A_{2}A_{0}A_{3}A_{0})+\operatorname{Tr}(A_{2}A_{3}A_{0}A_{1}A_{0})+\operatorname{Tr}(A_{3}A_{1}A_{0}A_{2}A_{0})=\operatorname{Tr}(A_{1}A_{0}A_{2}A_{3}A_{0})+\operatorname{Tr}(A_{2}A_{0}A_{3}A_{1}A_{0})+\operatorname{Tr}(A_{3}A_{0}A_{1}A_{2}A_{0}),$

where now $\operatorname{Tr}(A_{1}A_{2}A_{0}A_{3}A_{0})=\operatorname{Tr}(A_{3}A_{0}A_{1}A_{2}A_{0})$, $\operatorname{Tr}(A_{2}A_{3}A_{0}A_{1}A_{0})=\operatorname{Tr}(A_{1}A_{0}A_{2}A_{3}A_{0})$, and $\operatorname{Tr}(A_{3}A_{1}A_{0}A_{2}A_{0})=\operatorname{Tr}(A_{2}A_{0}A_{3}A_{1}A_{0})$, again by cyclic rotations.

This immediately gives us a nontrivial linear relation on $\operatorname{uMPS}(2,4,d)$, for $d\geq 6$. But we can also find linear relations on $\operatorname{uMPS}(2,2,d)$. For instance, if we put $k=2$, $A_{2}=A_{1}^{2}$ and $A_{3}=A_{0}A_{1}$, we find

$\displaystyle\operatorname{Tr}(A_{1}^{3}A_{0}^{2}A_{1}A_{0}^{2})+\operatorname{Tr}(A_{1}^{2}A_{0}A_{1}A_{0}A_{1}A_{0}^{2})+\operatorname{Tr}(A_{1}^{2}A_{0}^{3}A_{1}^{2}A_{0})$ $\displaystyle=$ $\displaystyle\operatorname{Tr}(A_{1}^{2}A_{0}A_{1}A_{0}^{2}A_{1}A_{0})+\operatorname{Tr}(A_{1}^{2}A_{0}^{2}A_{1}^{2}A_{0}^{2})+\operatorname{Tr}(A_{1}^{3}A_{0}^{3}A_{1}A_{0}),$

which is the unique linear relation on $\operatorname{uMPS}(2,2,8)$ that doesn’t follow from dihedral symmetry.

###### Theorem 3.5.

Let $A_{0},\ldots,A_{m},B$ be $m\times m$ matrices. Then for every $\ell\in\mathbb{N}$ it holds that

(11) $\sum_{\sigma\in\mathfrak{S}_{m},\tau\in C_{m+1}}{\operatorname{sgn}(\sigma)\operatorname{sgn}(\tau)\operatorname{Tr}(A_{\tau(0)}B^{\sigma(0)}A_{\tau(1)}B^{\sigma(1)}\cdots A_{\tau(m-1)}B^{\sigma(m-1)}A_{\tau(m)}B^{\ell})}=0.$

Here $\mathfrak{S}_{m}$ is the symmetric group acting on $\\{0,1,\ldots,m-1\\}$, and $C_{m+1}$ is the cyclic group acting on $\\{0,1,\ldots,m\\}$.

###### Proof.

We will first show the statement for $\ell\in\\{0,1,\ldots,m-1\\}$. So let us fix such an $\ell$. We will write $T(\sigma,\tau):=\operatorname{Tr}(A_{\tau(0)}B^{\sigma(0)}A_{\tau(1)}B^{\sigma(1)}\cdots A_{\tau(m-1)}B^{\sigma(m-1)}A_{\tau(m)}B^{\ell}).$ Let us write $c_{a}$ for the permutation that cyclically permutes the first $a$ elements. Precisely, $c_{a}(i)=\begin{cases}i+1&\text{for }i<a-1\\\ 0&\text{for }i=a-1\\\ i&\text{for }i>a-1.\end{cases}$

Step 1. For $\sigma\in\mathfrak{S}_{m}$ and $\tau\in C_{m+1}$, we define $\displaystyle\widetilde{\sigma}$ $\displaystyle:=\sigma\circ c_{\sigma^{-1}(\ell)+1}^{-1}\circ c_{m}^{\sigma^{-1}(\ell)+1}$ $\displaystyle\widetilde{\tau}$ $\displaystyle:=\tau\circ c_{m+1}^{\sigma^{-1}(\ell)+1}.$ Then we have $T(\sigma,\tau)=T(\widetilde{\sigma},\widetilde{\tau}).$ Indeed, if we write $k=\sigma^{-1}(\ell)$, then $\displaystyle T(\sigma,\tau)$ $\displaystyle=\operatorname{Tr}(A_{\tau(0)}B^{\sigma(0)}\cdots A_{\tau(k)}B^{\sigma(k)}A_{\tau(k+1)}B^{\sigma(k+1)}\cdots A_{\tau(m-1)}B^{\sigma(m-1)}A_{\tau(m)}B^{\sigma(k)})$ $\displaystyle=\operatorname{Tr}(A_{\tau(k+1)}B^{\sigma(k+1)}\cdots A_{\tau(m)}B^{\sigma(k)}A_{\tau(0)}B^{\sigma(0)}\cdots A_{\tau(k-1)}B^{\sigma(k-1)}A_{\tau(k)}B^{\sigma(k)})$ $\displaystyle=T(\widetilde{\sigma},\widetilde{\tau}),$ where for the last step we note that

* • $k+1=c_{m+1}^{k+1}(0)$, …, $m=c_{m+1}^{k+1}(m-k-1)$, $0=c_{m+1}^{k+1}(m-k)$, …, $k=c_{m+1}^{k+1}(m)$.
* • $k+1=c_{k+1}^{-1}(c_{m}^{k+1}(0))$, …, $m-1=c_{k+1}^{-1}(c_{m}^{k+1}(m-k-2))$, $k=c_{k+1}^{-1}(c_{m}^{k+1}(m-k-1))$, $0=c_{k+1}^{-1}(c_{m}^{k+1}(m-k))$, …, $k-1=c_{k+1}^{-1}(c_{m}^{k+1}(m-1))$.

Step 2. Note that the assignment $\displaystyle\mathfrak{S}_{m}\times C_{m+1}$ $\displaystyle\to\mathfrak{S}_{m}\times C_{m+1}$ $\displaystyle(\sigma,\tau)$ $\displaystyle\mapsto(\widetilde{\sigma},\widetilde{\tau})$ is an involution.
Indeed, we have $\displaystyle\widetilde{\sigma}^{-1}(\ell)=c_{m}^{-\sigma^{-1}(\ell)-1}(c_{\sigma^{-1}(\ell)+1}(\sigma^{-1}(\ell)))=c_{m}^{-\sigma^{-1}(\ell)-1}(0)=m-\sigma^{-1}(\ell)-1.$ So, again writing $k=\sigma^{-1}(\ell)$, $\displaystyle\widetilde{\widetilde{\sigma}}$ $\displaystyle=\widetilde{\sigma}\circ c_{\widetilde{\sigma}^{-1}(\ell)+1}^{-1}\circ c_{m}^{\widetilde{\sigma}^{-1}(\ell)+1}$ $\displaystyle=\sigma\circ c_{k+1}^{-1}\circ c_{m}^{k+1}\circ c_{m-k}^{-1}\circ c_{m}^{m-k}$ $\displaystyle=\sigma.$ To see the last equality: * • For $i<k$: $c_{k+1}^{-1}(c_{m}^{k+1}(c_{m-k}^{-1}(c_{m}^{m-k}(i))))=c_{k+1}^{-1}(c_{m}^{k+1}(c_{m-k}^{-1}(m-k+i)))=\\ c_{k+1}^{-1}(c_{m}^{k+1}(m-k+i))=c_{k+1}^{-1}(i+1)=i$. * • For $i=k$: $c_{k+1}^{-1}(c_{m}^{k+1}(c_{m-k}^{-1}(c_{m}^{m-k}(k))))=c_{k+1}^{-1}(c_{m}^{k+1}(c_{m-k}^{-1}(0)))=\\ c_{k+1}^{-1}(c_{m}^{k+1}(m-k-1))=c_{k+1}^{-1}(0)=k$. * • For $i>k$: $c_{k+1}^{-1}(c_{m}^{k+1}(c_{m-k}^{-1}(c_{m}^{m-k}(i))))=c_{k+1}^{-1}(c_{m}^{k+1}(c_{m-k}^{-1}(i-k)))=\\ c_{k+1}^{-1}(c_{m}^{k+1}(i-k-1))=c_{k+1}^{-1}(i)=i$. And furthermore $\widetilde{\widetilde{\tau}}=\tau\circ c_{m+1}^{k+1}\circ c_{m+1}^{m-k}=\tau$. Step 3. Note that $\displaystyle\operatorname{sgn}(\widetilde{\sigma})\operatorname{sgn}(\widetilde{\tau})$ $\displaystyle=(-1)^{k+(k+1)(m-1)+(k+1)m}\operatorname{sgn}(\sigma)\operatorname{sgn}(\tau)$ $\displaystyle=-\operatorname{sgn}(\sigma)\operatorname{sgn}(\tau).$ By Step 1 we have $T(\sigma,\tau)=T(\widetilde{\sigma},\widetilde{\tau})$, while by Step 3 the corresponding terms in (11) carry opposite signs. Since by Step 2 the assignment $(\sigma,\tau)\mapsto(\widetilde{\sigma},\widetilde{\tau})$ is an involution (without fixed points, by Step 3), the terms of (11) cancel in pairs, which establishes the identity for $0\leq\ell\leq m-1$. Now using Lemma 3.3, we conclude that the identity (11) holds for all $\ell\in\mathbb{N}$. ∎ ###### Corollary 3.6. If $n\geq 3$ and $d\geq\frac{(m+1)(m+2)}{2}$, then ${\operatorname{uMPS}(m,n,d)}$ is contained in a proper linear subspace of the space of cyclically invariant tensors. ###### Proof. Let $\ell\geq m$ and let $\mathfrak{S}_{m+1}$ denote the symmetric group acting on $\{0,1,\ldots,m-1,\ell\}$. Then we can rewrite (11) as follows: (12) $\sum_{\sigma\in\mathfrak{S}_{m+1}}{\operatorname{sgn}(\sigma)\operatorname{Tr}(A_{0}B^{\sigma(0)}A_{1}B^{\sigma(1)}\cdots A_{m-1}B^{\sigma(m-1)}A_{m}B^{\sigma(\ell)})}=0.$ Let $X_{0},X_{1},X_{2}$ be $m\times m$ matrices, and in (12) substitute $A_{0}=X_{0}$, $B=X_{1}$, and $A_{i}=X_{2}$ for $i=1,\ldots,m$. Note that even after that substitution, the ternary bracelets corresponding to the $(m+1)!$ terms in (12) are all distinct. Hence no two terms will cancel, and we get a nontrivial linear relation on $\operatorname{uMPS}(m,3,d)$, where $d=1+2+\cdots+(m-1)+\ell+(m+1)\geq 1+2+\cdots+(m+1)=\binom{m+2}{2}$. ∎ ###### Remark 3.7. With a bit more care, one can also get nontrivial relations on $\operatorname{uMPS}(m,2,d)$ in this way. For instance, if we take $\ell=m$ and in (12) we substitute $A_{0}=X_{0}X_{1}^{m+1}X_{0}$, $B=X_{1}$, and $A_{i}=X_{0}$ for $i=1,\ldots,m$, one verifies that again no terms cancel, and hence we find a nontrivial linear relation on $\operatorname{uMPS}(m,2,d)$, where $d=\binom{m+3}{2}$. ### 3.2. Linear equations for $\operatorname{uMPS}(2,2,d)$ From the trace parametrization, we can give an upper bound on $\dim\langle\operatorname{uMPS}(2,2,d)\rangle$. ###### Theorem 3.8.
For every $d\in\mathbb{N}$, we have the inequality $\dim\langle\operatorname{uMPS}(2,2,d)\rangle\leq\begin{cases}\frac{1}{192}(d+6)(d+4)^{2}(d+2)&\text{for $d$ even},\\ \frac{1}{192}(d+7)(d+5)(d+3)(d+1)&\text{for $d$ odd}.\end{cases}$ ###### Proof. It follows from (8) and Remark 2.6 that $\dim\langle\operatorname{uMPS}(2,2,d)\rangle$ can be at most the number of degree $d$ monomials in $\mathbb{C}[T_{0},T_{1},T_{00},T_{01},T_{11}]$. Counting these monomials gives the above formula. ∎ Note that asymptotically for $d\to\infty$, the above bound agrees with our conjectured formula in 2.8. As in the previous section, we abbreviate “weight $\lambda=(w,d-w)\in\mathbb{Z}^{2}$” to “weight $w$”. In the rest of this section, we provide a proof of Conjecture 2.7 in the cases $w=0,1,2,3$. Consider the parametrization of $\operatorname{uMPS}(2,2,d)$ in coordinates $\displaystyle\varphi:(\mathbb{C}^{2\times 2})^{2}$ $\displaystyle\to\operatorname{Dih}^{d}(\mathbb{C}^{2})$ $\displaystyle(A_{0},A_{1})$ $\displaystyle\mapsto(\operatorname{Tr}(A_{0}^{d}),\operatorname{Tr}(A_{0}^{d-1}A_{1}),\dots,\operatorname{Tr}(A_{1}^{d})).$ It is in particular a polynomial map in the unknown entries of the matrices $A_{0},A_{1}\in\mathbb{C}^{2\times 2}$, which we denote by $A_{0}=\begin{pmatrix}a_{1}&a_{2}\\ a_{3}&a_{4}\end{pmatrix},\quad A_{1}=\begin{pmatrix}b_{1}&b_{2}\\ b_{3}&b_{4}\end{pmatrix}.$ We will write $T_{i_{1}\dots i_{d}}:=\operatorname{Tr}(A_{i_{1}}\dots A_{i_{d}})\in\mathbb{C}[a_{1},\dots,b_{4}]_{d}\quad\text{and}\quad W_{w}:=\langle T_{b}:b\in B_{w}(2,d)\rangle.$ By 2.1, we have $\dim\langle\operatorname{uMPS}(2,2,d)\rangle_{w}=\dim W_{w}.$ The cases $w=0$ and $w=1$ are easy: ###### Proposition 3.9. The space $W_{0}$ is a $1$-dimensional vector space generated by the polynomial $T_{0,\dots,0}=\operatorname{Tr}(A_{0}^{d})$. The space $W_{1}$ is a $1$-dimensional vector space generated by the polynomial $T_{10\dots 0}=\operatorname{Tr}(A_{1}A_{0}^{d-1})$. ###### Proof. If $w=0$ then $b=({0\dots 0})\in B_{0}(2,d)$ is the only binary bracelet of weight $0$, and if $w=1$ then $b=({10\dots 0})\in B_{1}(2,d)$ is the only binary bracelet of weight $1$. ∎ We now turn to the case $w=2$. Then 2.7 states that $\dim W_{w}=\lfloor\frac{d}{2}\rfloor$. But $\lfloor\frac{d}{2}\rfloor$ is exactly the number of generators $T_{b}$, $b\in B_{2}(2,d)$, of $W_{w}$; hence we need to show that they are linearly independent. ###### Proposition 3.10. The polynomials $\{T_{b}:b\in B_{2}(2,d)\}$ are linearly independent. ###### Proof. Note that $W_{2}=\langle T_{10^{i}10^{d-2-i}}\mid i=0,\dots,\lfloor\frac{d}{2}\rfloor-1\rangle.$ If we make the following substitutions: $A_{1}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad A_{0}=\begin{pmatrix}1&0\\ 0&x\end{pmatrix},$ our generators $T_{b}$ become (13) $T_{10^{i}10^{d-2-i}}=\operatorname{Tr}(A_{1}A_{0}^{i}A_{1}A_{0}^{d-2-i})=x^{i}+x^{d-2-i},\quad i=0,\dots,\lfloor\frac{d}{2}\rfloor-1.$ Since for the given choice of $A_{0},A_{1}$ the polynomials (13) are $\lfloor\frac{d}{2}\rfloor$ linearly independent polynomials, the same holds for a generic choice of matrices. ∎ Finally, we prove the case $w=3$. In this case our conjectured formula states that $\dim W_{w}=d-3$. Consider the following subset of $B_{3}(2,d)$: $\widetilde{B}_{3}:=\{b\in B_{3}(2,d):b\text{ contains }11\text{ or }101\}\subset B_{3}(2,d).$ ###### Lemma 3.11. The cardinality of $\widetilde{B}_{3}$ equals $d-3$. ###### Proof.
The cardinality of $\widetilde{B}_{3}$ is the sum of the number of binary bracelets of weight $3$ containing $11$ and the number of binary bracelets of weight $3$ containing $101$ but not $11$, which are $\lceil\frac{d-2}{2}\rceil$ and $\lceil\frac{d-5}{2}\rceil$, respectively. Therefore the cardinality of $\widetilde{B}_{3}$ is $\lceil\frac{d-2}{2}\rceil+\lceil\frac{d-5}{2}\rceil=d-3.$ ∎ In order to prove the case $w=3$ we need to show that $\{T_{b}:b\in\widetilde{B}_{3}\}$ is a basis of $W_{3}$. We first show linear independence: ###### Lemma 3.12. The polynomials $\{T_{b}:b\in\widetilde{B}_{3}\}$ are linearly independent. ###### Proof. We will show that the polynomials are linearly independent even after the following substitution: $A_{1}=\begin{pmatrix}0&1\\ 1&1\end{pmatrix},\quad A_{0}=\begin{pmatrix}1&0\\ 0&x\end{pmatrix}.$ Then $W_{3}$ is spanned by the following polynomials: $\displaystyle f_{b}$ $\displaystyle:=T_{110^{b}10^{d-b-3}}=x^{b}+x^{d-b-3}+2x^{d-3},\quad b\in\{0,\dots,\lfloor\frac{d-3}{2}\rfloor\},$ $\displaystyle g_{b}$ $\displaystyle:=T_{1010^{b}10^{d-b-4}}=x^{b+1}+x^{d-b-3}+x^{d-4}+x^{d-3},\quad b\in\{1,\dots,\lfloor\frac{d-4}{2}\rfloor\}.$ We now simply have to put the coefficients of these polynomials in a matrix and show that it has full rank. For $d$ even the matrix of coefficients is given by $S=\begin{pmatrix}1&&&&\dots&&&&&3\\ &1&&&&&&&1&2\\ &&1&&&&&1&&2\\ \vdots&&&\ddots&&&\iddots&&&\vdots\\ 0&&&&1&1&&&&2\\ 0&0&1&0&&\dots&&&2&1\\ &&0&1&&&&1&1&1\\ \vdots&&&&\ddots&&\iddots&&&\vdots\\ 0&&&\dots&&2&\dots&0&1&1\end{pmatrix}$ and for $d$ odd, given by $S=\setcounter{MaxMatrixCols}{11}\begin{pmatrix}1&&&&&\dots&&&&0&3\\ &1&&&&&&&0&1&2\\ &&1&&&&&&1&0&2\\ \vdots&&&\ddots&&&&\iddots&&\vdots&\vdots\\ &&&&1&&1&&&0&2\\ &&&&&2&&&&0&2\\ 0&0&1&0&&&\dots&&0&2&1\\ &&0&1&0&&&&1&1&1\\ &&&&\ddots&&&\iddots&&\vdots&\vdots\\ 0&0&&&&1&1&&0&1&1\end{pmatrix}.$ By elementary row operations, we can reduce the left upper part to a diagonal matrix of order $\lfloor\frac{d-1}{2}\rfloor$. The left lower part is filled with zeros. The (rectangular) right lower block of dimension $\lfloor\frac{d-4}{2}\rfloor\times\lfloor\frac{d-2}{2}\rfloor$ can be put in the following upper triangular forms, for $d$ even and odd, respectively: $\begin{pmatrix}0&&\dots&-1&2&-1\\ \vdots&&-1&1&1&-1\\ &\iddots&\iddots&0&1&-1\\ -1&1&&\vdots&\vdots&\vdots\\ 2&0&\dots&0&1&1\end{pmatrix}\to\begin{pmatrix}2&0&\dots&&&1&1\\ 0&2&&&&3&-1\\ &&\ddots&&&5&-3\\ \vdots&&&&&\vdots&\vdots\\ 0&\dots&&0&2&*&*\\ 0&\dots&&&0&*&*\end{pmatrix}\quad\text{for }d\text{ even,}$ $\begin{pmatrix}0&&\dots&-1&2&-1\\ \vdots&&-1&1&1&-1\\ &\iddots&\iddots&0&1&-1\\ &&&\vdots&\vdots&\vdots\\ -1&1&&&&-1\\ 1&0&\dots&0&1&0\end{pmatrix}\to\begin{pmatrix}1&0&\dots&&0&1&0\\ 0&1&0&\dots&0&2&-1\\ \vdots&0&\ddots&&&3&-2\\ &&&&&\vdots&\vdots\\ &&&0&1&*&*\\ &&&&0&*&*\end{pmatrix}\quad\text{for }d\text{ odd.}$ Both of the obtained blocks have rank $\lfloor\frac{d-4}{2}\rfloor$. Hence the rank of $S$ is $\lfloor\frac{d-1}{2}\rfloor+\lfloor\frac{d-4}{2}\rfloor=d-3$, and this concludes the proof. ∎ We finish our proof by showing that $\{T_{b}:b\in\widetilde{B}_{3}\}$ spans $W_{3}$: ###### Lemma 3.13. Every polynomial $T_{10^{a}10^{b}10^{c}}=\operatorname{Tr}(A_{1}A_{0}^{a}A_{1}A_{0}^{b}A_{1}A_{0}^{c})$, with $1<a\leq b\leq c$ and $a+b+c=d-3$, is an element of the linear span $\langle T_{b}:b\in\widetilde{B}_{3}\rangle$. ###### Proof.
Notice that the elements of $B_{3}(2,d)\setminus\widetilde{B}_{3}$ can be written without loss of generality in the form $10^{a}10^{b}10^{c},\quad\text{with }1<a\leq b\leq c.$ We use induction on $a$. If $a=0$ or $a=1$, then $(10^{a}10^{b}10^{c})\in\widetilde{B}_{3}$ and we are done. If we substitute $A_{1}\to A_{1}A_{0}^{a-1}$, $A_{2}\to A_{1}A_{0}^{b}$ and $A_{3}\to A_{1}$ in the identity from Example 3.4 (with $k=c$), we get $\displaystyle\operatorname{Tr}(A_{1}A_{0}^{a-1}A_{1}A_{0}^{b+1}A_{1}A_{0}^{c})+\operatorname{Tr}(A_{1}A_{0}^{b}A_{1}A_{0}A_{1}A_{0}^{a+c-1})+\operatorname{Tr}(A_{1}^{2}A_{0}^{a}A_{1}A_{0}^{b+c})=$ $\displaystyle\operatorname{Tr}(A_{1}A_{0}^{a}A_{1}A_{0}^{b}A_{1}A_{0}^{c})+\operatorname{Tr}(A_{1}A_{0}^{b+1}A_{1}^{2}A_{0}^{a+c-1})+\operatorname{Tr}(A_{1}A_{0}A_{1}A_{0}^{a-1}A_{1}A_{0}^{b+c}).$ Reordering the summands, we obtain $\displaystyle T_{10^{a}10^{b}10^{c}}=(T_{10^{b}1010^{a+c-1}}+T_{110^{a}10^{b+c}}-T_{10^{b+1}110^{a+c-1}}-T_{1010^{a-1}10^{b+c}})+T_{10^{a-1}10^{b+1}10^{c}}.$ All terms in the parentheses have as subscript an element of $\widetilde{B}_{3}$, and the last term lies in $\langle\{T_{b}:b\in\widetilde{B}_{3}\}\rangle$ by the induction hypothesis. This concludes the proof. ∎
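Theorem 3.5 can likewise be checked numerically for small $m$. The sketch below (Python/NumPy; all helper names are our own) evaluates the signed sum in (11) for random $m\times m$ matrices and confirms that it vanishes up to rounding, relative to the total size of the individual terms:

```python
import numpy as np
from functools import reduce
from itertools import permutations

def sign(p):
    # Sign of the permutation i -> p[i], via its cycle decomposition.
    s, seen = 1, set()
    for i in range(len(p)):
        length, j = 0, i
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        if length:
            s *= (-1) ** (length - 1)
    return s

def relation_sum(A, B, ell):
    # Signed sum in (11); A = [A_0, ..., A_m] and B are m x m matrices.
    m = len(A) - 1
    Bpow = [np.linalg.matrix_power(B, k) for k in range(max(m - 1, ell) + 1)]
    cyclics = [tuple((i + s) % (m + 1) for i in range(m + 1)) for s in range(m + 1)]
    total, scale = 0.0, 0.0
    for sigma in permutations(range(m)):
        for tau in cyclics:
            factors = []
            for i in range(m):
                factors += [A[tau[i]], Bpow[sigma[i]]]
            factors += [A[tau[m]], Bpow[ell]]
            term = sign(sigma) * sign(tau) * np.trace(reduce(np.matmul, factors))
            total += term
            scale += abs(term)
    return total, scale

rng = np.random.default_rng(1)
m = 3
A = [rng.standard_normal((m, m)) for _ in range(m + 1)]
B = rng.standard_normal((m, m))
for ell in range(2 * m):
    total, scale = relation_sum(A, B, ell)
    assert abs(total) < 1e-10 * scale
```

For $m=2$ the signed sum reduces exactly to the identity of Example 3.4 (all elements of $C_{3}$ are even), which is how the recursion in the proof of Lemma 3.13 arises.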
# The implications of high BH spins on the origin of BH-BH mergers A. Olejak$^{1}$, K. Belczynski$^{1}$ $^{1}$ Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, Poland ###### Abstract The LIGO/Virgo collaboration has reported $50$ black hole–black hole (BH-BH) mergers and $8$ candidates recovered from a deeper search of the detectors' noise. The majority of these mergers have low effective spins, pointing toward low BH spins and efficient angular momentum (AM) transport in massive stars, as proposed by several models (e.g., the Tayler-Spruit dynamo). However, out of these $58$ mergers, $7$ are consistent with having a high effective spin parameter ($\chi_{\rm eff}>0.3$). Additionally, $2$ events seem to have high effective spins sourced from the spin of the primary (more massive) BH. These particular observations could be used to discriminate between the isolated binary and dynamical formation channels. It might seem that high BH spins point to a dynamical origin if AM transport in stars is efficient and forms low-spinning BHs. In such a case, dynamical formation is required to produce second and third generations of BH-BH mergers with typically high-spinning BHs. Here we show, however, that isolated binary BH-BH formation naturally reproduces such highly spinning BHs. Our models start with efficient AM transport in massive stars, which is needed to reproduce the majority of BH-BH mergers with low effective spins. Later, some of the binaries are subject to tidal spin-up, allowing the formation of a moderate fraction ($\sim 10\%$) of BH-BH mergers with high effective spins ($\chi_{\rm eff}\gtrsim 0.4-0.5$). In addition, isolated-binary evolution can produce a small fraction of BH-BH mergers with almost maximally spinning primary BHs. Therefore, the formation scenario of these atypical BH-BH mergers remains to be found. ###### Subject headings: stars: black holes, compact objects, massive stars ## 1\. Introduction The LIGO/Virgo collaboration has announced the detection of gravitational waves from $\sim 50$ double black hole (BH-BH) mergers (Abbott et al., 2019a, b; Fishbach & Holz, 2020; Abbott et al., 2021a). An additional $8$ BH-BH merger candidates have recently been reported (Abbott et al., 2021b). The majority of all these events have low effective spin parameters: $\chi_{\rm eff}=\frac{m_{1}a_{\rm 1}\cos\theta_{1}+m_{2}a_{\rm 2}\cos\theta_{2}}{m_{1}+m_{2}}\approx 0,$ where $m_{i}$ are BH masses, $a_{\rm i}=cJ_{i}/Gm_{i}^{2}$ are dimensionless BH spin magnitudes ($J_{i}$ being the BH angular momentum (AM), $c$ the speed of light, $G$ the gravitational constant), and $\theta_{i}$ are angles between the individual BH spins and the system orbital AM. However, among the detections there are also several BH-BH mergers which are characterized by higher (non-zero) positive effective spins. In Table 1 we list the parameters of the five BH-BH mergers with the highest effective spins reported by Abbott et al. (2021a), together with two additional high effective spin systems reported by Abbott et al. (2021b). Table 1. BH-BH mergers with high effective spins
| No. | Name$^{a}$ | $\chi_{\rm eff}$ | $m_{1}$ [${\rm M}_{\odot}$] | $m_{2}$ [${\rm M}_{\odot}$] | $a_{1}$ |
|---|---|---|---|---|---|
| 1 | GW190517 | $0.52^{+0.19}_{-0.19}$ | $37.4^{+11.7}_{-7.6}$ | $25.3^{+7.0}_{-7.3}$ | – |
| 2 | GW170729 | $0.37^{+0.21}_{-0.25}$ | $50.2^{+16.2}_{-10.2}$ | $34.0^{+9.1}_{-10.1}$ | – |
| 3 | GW190620 | $0.33^{+0.22}_{-0.25}$ | $57.1^{+16.0}_{-12.7}$ | $35.5^{+12.2}_{-12.3}$ | – |
| 4 | GW190519 | $0.31^{+0.20}_{-0.22}$ | $66.0^{+10.7}_{-12.0}$ | $40.5^{+11.0}_{-11.1}$ | – |
| 5 | GW190706 | $0.28^{+0.26}_{-0.29}$ | $67.0^{+14.6}_{-13.3}$ | $38.2^{+14.6}_{-13.3}$ | – |
| 6 | GW190403 | $0.70^{+0.15}_{-0.27}$ | $88.0^{+28.2}_{-32.9}$ | $22.1^{+23.8}_{-9.0}$ | $0.92^{+0.07}_{-0.22}$ |
| 7 | GW190805 | $0.35^{+0.30}_{-0.36}$ | $48.2^{+17.5}_{-12.5}$ | $32.0^{+13.4}_{-11.4}$ | $0.74^{+0.22}_{-0.60}$ |

a: Names are abbreviated. We include candidate detections as full astrophysical events. Parameters of the first $5$ events are from the original LIGO/Virgo analysis (Abbott et al., 2021a), while the remaining $2$ are from a deeper search of the detectors' noise (Abbott et al., 2021b). The formation of close BH-BH systems is an open issue with several formation channels proposed and discussed in the context of the LIGO/Virgo mergers. The major formation scenarios include the isolated binary evolution (Bond & Carr, 1984; Tutukov & Yungelson, 1993; Lipunov et al., 1997; Voss & Tauris, 2003; Belczynski et al., 2010b; Dominik et al., 2012; Kinugawa et al., 2014; Hartwig et al., 2016; de Mink & Mandel, 2016; Mandel & de Mink, 2016; Marchant et al., 2016; Spera et al., 2016; Belczynski et al., 2016a; Eldridge & Stanway, 2016; Woosley, 2016; van den Heuvel et al., 2017; Stevenson et al., 2017; Kruckow et al., 2018; Hainich et al., 2018; Marchant et al., 2018; Spera et al., 2019; Neijssel et al., 2019; du Buisson et al., 2020; Bavera et al., 2020, 2021; Qin et al., 2021), the dense stellar system dynamical channel (Portegies Zwart & McMillan, 2000; Miller & Hamilton, 2002b, a; Portegies Zwart et al., 2004; Gültekin et al., 2004, 2006; O’Leary et al., 2007; Sadowski et al., 2008; Downing et al., 2010; Antonini & Perets, 2012a; Benacquista & Downing, 2013; Mennekens & Vanbeveren, 2014; Bae et al., 2014; Chatterjee et al., 2016; Mapelli, 2016; Hurley et al., 2016; Rodriguez et al., 2016; VanLandingham et al., 2016; Askar et al., 2017; Arca-Sedda & Capuzzo-Dolcetta, 2017; Samsing, 2017; Morawski et al., 2018; Banerjee, 2018; Di Carlo et al., 2019; Zevin et al., 2019; Rodriguez et al., 2018; Perna et al., 2019; Kremer et al., 2020), isolated multiple (triple, quadruple) systems (Antonini et al., 2017; Silsbee & Tremaine, 2017; Arca-Sedda et al., 2018; Liu & Lai, 2018; Fragione & Kocsis, 2019), mergers of binaries in galactic nuclei (Antonini & Perets, 2012b; Hamers et al., 2018; Hoang et al., 2018; Fragione et al., 2019) and primordial BH formation (Sasaki et al., 2016; Green, 2017; Clesse & García-Bellido, 2017; Carr & Silk, 2018; De Luca et al., 2020). BH spins and their orientations can play an important role in distinguishing between various BH-BH formation models. If the BH spins are not small, then their orientation may possibly distinguish between a binary evolution origin (predominantly aligned spins) and dynamical formation channels (more or less isotropic distribution of spin orientations).
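For concreteness, evaluating the effective-spin formula quoted above is straightforward; the sketch below (Python/NumPy, with illustrative input values where noted) makes the mass weighting explicit:

```python
import numpy as np

def chi_eff(m1, m2, a1, a2, theta1, theta2):
    # Mass-weighted sum of the BH spin components projected on the
    # orbital angular momentum (tilt angles in radians).
    return (m1 * a1 * np.cos(theta1) + m2 * a2 * np.cos(theta2)) / (m1 + m2)

# Median m_1, m_2, a_1 of GW190403 from Table 1; the secondary spin and
# the two tilt angles are made-up values, for illustration only.
print(chi_eff(88.0, 22.1, 0.92, 0.1, 0.0, 0.0))        # ~0.76 for aligned spins
print(chi_eff(88.0, 22.1, 0.92, 0.1, np.pi / 3, 0.0))  # tilting the primary lowers it
```

This makes explicit why a high $\chi_{\rm eff}$ in an unequal-mass system such as GW190403 must be sourced mainly by the primary BH: the secondary's contribution is suppressed by its small mass fraction.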
If the BHs formed out of stars have small spins (Spruit, 2002; Zaldarriaga et al., 2017; Hotokezaka & Piran, 2017; Fuller et al., 2019; Qin et al., 2019; Olejak et al., 2020; Bavera et al., 2020; Belczynski et al., 2020), then BH-BH mergers with high effective spins may challenge their isolated evolution origin. In dense stellar clusters, BHs may merge several times, easily producing BHs with high spins and making the dynamical channel a prime site for such events (Gerosa & Berti, 2017; Fishbach et al., 2017). However, the assumption about the BH natal spin (and the AM transport efficiency) also plays a role in the effective spin distribution for the dynamical channel (Banerjee, 2021). In this study we show that the current understanding of stellar/binary astrophysics (Belczynski et al., 2021) and the degeneracy between the different formation channels do not allow for such a simple test of the origin of the LIGO/Virgo BH-BH mergers. To demonstrate this, we show that although the isolated binary evolution channel produces mostly BH-BH mergers with low effective spins, a small but significant fraction of mergers is expected to have moderate or even high effective spins. Despite the assumption that stars slow down their rotation due to efficient AM transport, we find that tidal interactions are capable of spinning up some stars, allowing the formation of rapidly spinning BHs (Detmers et al., 2008; Kushnir et al., 2017; Qin et al., 2018). ## 2\. Method We use the population synthesis code StarTrack (Belczynski et al., 2002, 2008) with a model of star formation rates and metallicity distribution based on Madau & Dickinson (2014), as described in Belczynski et al. (2020). We employ the delayed core-collapse supernova (SN) engine for neutron star/BH mass calculation (Fryer et al., 2012), with weak mass loss from pulsational pair-instability supernovae (Belczynski et al., 2016b). BH natal kicks are calculated from a Maxwellian distribution with $\sigma=265{\rm~{}km}{\rm~{}s}^{-1}$ and decreased by fallback during core collapse; this makes a significant fraction of BHs form without a natal kick (Mirabel & Rodrigues, 2003). We assume our standard wind losses for massive O/B stars (Vink et al., 2001) and LBV winds (specific prescriptions for these winds are listed in Sec. 2.2 of Belczynski et al., 2010a). BH natal spins are calculated under the assumption that AM in massive stars is transported by the Tayler-Spruit magnetic dynamo (Spruit, 2002), as adopted in the MESA stellar evolutionary code (Paxton et al., 2015). Such BH natal spins take values in the range $a\in[0.05,0.15]$ (see Belczynski et al., 2020). Note that the modified classic Tayler-Spruit dynamo with a different non-linear saturation mechanism of the Tayler instability (Fuller et al., 2019; Fuller & Ma, 2019) causes larger magnetic field amplitudes, more efficient AM transport, and even lower final natal spins $(a\sim 0.01)$. BH spin may be increased if the immediate BH progenitors, Wolf-Rayet (WR) stars, in close binaries are subject to tidal spin-up. In our calculations, for BH-WR, WR-BH and WR-WR binary systems with orbital periods in the range $P_{\rm orb}=0.1-1.3$ d, the BH natal spin magnitude is fit from spun-up WR star MESA models (see eq. 15 of Belczynski et al. 2020), while for systems with $P_{\rm orb}<0.1$ d the BH spin is taken to be $a=1$. BH spins may also be increased by accretion in binary systems.
We treat accretion onto a compact object during Roche lobe overflow (RLOF) and from stellar winds using the analytic approximations presented in King et al. (2001) and Mondal et al. (2020). In the adopted approach the accumulation of matter on a BH is very inefficient, so accretion does not noticeably affect the final BH spin. Note, however, that, e.g., van Son et al. (2020) and Bavera et al. (2021) tested different super-Eddington accretion prescriptions, finding that some BHs may be significantly spun up by accretion. For common envelope (CE) evolution, we assume $100\%$ ($\alpha_{\rm CE}=1$) orbital energy transfer for CE ejection, and we adopt a $5\%$ Bondi accretion rate onto the BHs during CE (Ricker & Taam, 2008; MacLeod & Ramirez-Ruiz, 2015; MacLeod et al., 2017). During stable RLOF (whether it is thermal- or nuclear-timescale mass transfer: TTMT/NTMT), we adopt the following input physics. If the accretor is a compact object (neutron star or BH), we allow for super-Eddington accretion, with the excess transferred mass lost with the specific AM of the accretor (Mondal et al., 2020). In all other cases, we allow a fraction $f_{\rm a}=0.5$ of the transferred mass to be lost from the binary with the specific AM of the binary orbit, $j_{\rm loss}=1.0$ (expressed in units of $2\pi A^{2}/P_{\rm orb}$, $A$ being the orbital separation; see eq. 33 of Belczynski et al. 2008). RLOF stability is an important issue in the context of BH-BH system formation in the framework of the isolated binary evolution (Neijssel et al., 2019; Olejak et al., 2021; Gallegos-Garcia et al., 2021; Belczynski et al., 2021). In the standard StarTrack evolution we impose rather liberal limits for CE (dynamical-timescale RLOF) to develop (see Belczynski et al. 2008): binaries with a donor star more massive than $2-3$ times the mass of the accretor are subject to CE. In this model (for simplicity tagged here as the CE model), the vast majority of BH-BH mergers form through CE evolution, although we find some cases ($\lesssim 1\%$) of BH-BH merger formation without any CE event. In the alternative model (the non-CE model; detailed description in Olejak et al. 2021), we allow CE to be suppressed for some systems even with mass ratios as high as $6-8$ (Pavlovskii et al., 2017). In this model the majority of the BH-BH mergers form without any CE event (the orbital decrease is obtained through angular momentum loss during stable RLOF), although some ($<10\%$) BH-BH mergers form with the assistance of CE. For each model we calculate the evolution of $64$ million massive, Population I/II binary systems. We use the star formation history and chemical evolution of the Universe to obtain the BH-BH merger properties within the approximate reach of LIGO/Virgo (redshift $z<1$). We use the same method as described in Belczynski et al. (2020). ## 3\. Results Figure 1 shows a typical example of binary system evolution without a CE phase leading to the formation of a BH-BH merger with a tidally spun-up primary BH (restricted RLOF stability criteria; Olejak et al., 2021). The rather unequal-mass massive stellar system ($112{~{}M}_{\odot}$ and $68{~{}M}_{\odot}$) with a metallicity of $Z=0.002$ goes through two RLOF events. RLOF I is initiated by the more massive star; first by an NTMT when the donor is still on the main sequence, and then through a TTMT when the donor evolves off the main sequence.
After RLOF I, the system mass ratio is reversed: the initially more massive star lost over $80\%$ of its mass, while the companion gained $\sim 40{~{}M}_{\odot}$. Next, the initially more massive star ends its evolution, directly collapsing to the less massive (secondary) BH with a mass of $m_{2}=15{~{}M}_{\odot}$ and spin $a_{2}=0.14$. When the companion star expands, it initiates a second stable RLOF. At the onset of RLOF II the system has highly unequal masses: the donor is almost $6.5$ times more massive than the BH. The thermal timescale for a donor with mass $M_{\rm don}\approx 97{~{}M}_{\odot}$, radius $R_{\rm don}\approx 300{~{}R}_{\odot}$ and luminosity $L_{\rm don}\approx 3\times 10^{6}L_{\odot}$ (parameters at the RLOF II onset; such parameter values are in line with other predictions for massive stars, e.g., using the Geneva stellar evolution code, Yusof et al. 2013), calculated with the formula of Kalogera & Webbink (1996), is $\tau_{\rm th}\approx 330$ $\rm{yr}.$ This corresponds to a very high mass transfer rate, $\dot{M}=M_{\rm don}/\tau_{\rm th}\approx 0.3{\rm~{}M}_{\odot}{\rm~{}yr}^{-1}$, which does not allow the BH to accrete much mass (despite the fact that we allow for super-Eddington accretion). Half of the donor's mass is lost from the binary with the specific AM of the BH (as the matter was transferred to the vicinity of the BH accretor). This has a huge effect on the orbital separation, which decreases from $A=467{~{}R}_{\odot}$ to only $A=7.1{~{}R}_{\odot}$. After RLOF II the binary consists of a BH and a WR star that are close enough to allow for the tidal spin-up of the WR star. Finally, the WR star directly collapses to the more massive (primary) BH with a mass $m_{1}=36{~{}M}_{\odot}$ and spin $a_{1}=0.68$. The BH-BH system merges after $\sim 67$ Myr. Figure 2 shows a typical CE evolution scenario (standard StarTrack RLOF stability criteria) leading to the formation of a BH-BH merger with both BHs spun up by tidal interactions. At the beginning, the binary system of two $\sim 36{~{}M}_{\odot}$ stars with $Z=0.0025$ is on a wide ($A\approx 1340{~{}R}_{\odot}$) and eccentric orbit ($e=0.1$). When the initially more massive star expands, the system goes through stable RLOF, after which the donor loses its H-rich envelope and the orbit circularizes. Soon after RLOF I, the system goes through another (unstable) RLOF initiated by the initially less massive companion star. The ensuing CE evolution leads to significant orbital contraction from $A=3100{~{}R}_{\odot}$ to $A=4.5{~{}R}_{\odot}$ and leaves two WR stars subject to strong tidal interactions. Both stars end their evolution at a similar time, forming two $\sim 9{~{}M}_{\odot}$ BHs via supernova explosions. At formation, both BHs receive significant natal kicks, which make the orbit larger ($A\approx 19{~{}R}_{\odot}$) and eccentric ($e=0.44$), leading to a merger time of $\sim 6.7$ Gyr. In Table 2 we present the statistical spin properties of BH-BH systems merging at redshifts $z<1$ for the two tested RLOF stability criteria models. In rows $1-6$ we list the percentages of BH-BH mergers with effective-spin parameter values $\chi_{\rm eff}>0.0,\ 0.1,\ 0.2,\ 0.3,\ 0.4,\ 0.5$. Rows $7-9$ list the percentages of BH-BH mergers with a highly spinning primary BH ($a_{1}>0.5,\ 0.7,\ 0.9$), while rows $10-12$ give the percentages of mergers with a highly spinning secondary BH ($a_{2}>0.5,\ 0.7,\ 0.9$).
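Before turning to the statistics, the RLOF II numbers quoted above are easy to cross-check. Approximating the thermal timescale by the Kelvin-Helmholtz time, $\tau_{\rm th}\sim GM_{\rm don}^{2}/(R_{\rm don}L_{\rm don})$ (a simplification, not the exact Kalogera & Webbink 1996 expression), reproduces both estimates:

```python
# Back-of-the-envelope cross-check of the RLOF II estimates in the text.
G = 6.674e-11                                   # m^3 kg^-1 s^-2
Msun, Rsun, Lsun = 1.989e30, 6.957e8, 3.828e26  # kg, m, W
yr = 3.156e7                                    # s

M, R, L = 97 * Msun, 300 * Rsun, 3e6 * Lsun     # donor at the RLOF II onset
tau_th = G * M**2 / (R * L) / yr                # ~3.3e2 yr, matching the ~330 yr quoted
mdot = 97 / tau_th                              # ~0.3 Msun/yr, as in the text
print(f"tau_th ~ {tau_th:.0f} yr, Mdot ~ {mdot:.2f} Msun/yr")
```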
The full distributions of the primary spin, the secondary spin, and the effective-spin parameter for both the CE and non-CE models are plotted in Figure 3 in Appendix A. Figure 1.— Typical example of the non-CE evolutionary scenario leading to the formation of a BH-BH merger with a tidally spun-up primary: $a_{1}=0.68$ and $\chi_{\rm eff}=0.52$. The binary system goes through two phases of RLOF with episodes of nuclear- and thermal-timescale mass transfer. RLOF I ends with a reversal of the system mass ratio. After RLOF II the orbital separation significantly decreases and the WR star is subject to tidal spin-up by the BH. Soon thereafter the close BH-BH system is formed, with a short merger time of $\sim 67$ Myr (see Sec. 3). Figure 2.— Typical example of an evolutionary scenario with a CE phase leading to the formation of a BH-BH merger with $a_{1}=0.79$, $a_{2}=0.79$ and $\chi_{\rm eff}=0.77$. First, the binary system goes through a stable RLOF phase with episodes of nuclear- and thermal-timescale mass transfer initiated by the initially more massive star. Then the initially less massive star expands and initiates CE, after which the orbital separation is significantly decreased. After CE, the binary hosts two compact WR stars that are subject to tidal spin-up. Both stars explode as supernovae and form BHs on an eccentric orbit with a merger time of $\sim 6.7$ Gyr (see Sec. 3). Table 2. Predictions for BH-BH mergers from binary evolution

| No. | condition$^{a}$ | CE model | non-CE model |
|---|---|---|---|
| 1 | $\chi_{\rm eff}>0.0$ | 97% | 93% |
| 2 | $\chi_{\rm eff}>0.1$ | 95% | 85% |
| 3 | $\chi_{\rm eff}>0.2$ | 70% | 60% |
| 4 | $\chi_{\rm eff}>0.3$ | 36% | 39% |
| 5 | $\chi_{\rm eff}>0.4$ | 10% | 21% |
| 6 | $\chi_{\rm eff}>0.5$ | 2% | 7% |
| 7 | $a_{1}>0.5$ | 3% | 34% |
| 8 | $a_{1}>0.7$ | 2% | 15% |
| 9 | $a_{1}>0.9$ | 1% | 1% |
| 10 | $a_{2}>0.5$ | 52% | 11% |
| 11 | $a_{2}>0.7$ | 33% | 7% |
| 12 | $a_{2}>0.9$ | 12% | 2% |

a: We list fractions of BH-BH mergers (redshift $z<1$) produced in our two population synthesis models satisfying a given condition. ## 4\. Discussion and Conclusions The rapidly increasing number of detected BH-BH mergers does allow for some general population statements (Roulet et al., 2021; Galaudage et al., 2021; Abbott et al., 2021b). It appears that (i) the majority ($\sim 70-90\%$) of BH-BH mergers have low effective spins consistent with $\chi_{\rm eff}\approx 0$ and that (ii) a small fraction ($\sim 10-30\%$) of mergers have positive non-zero spins that can be as high as $\chi_{\rm eff}\gtrsim 0.5$. Additionally, the population is consistent with (iii) no systems having negative effective spins and (iv) a non-isotropic distribution of effective spins (an isotropic distribution could indicate a dynamical origin). Finally, (v) there is at least one case of a primary (more massive) BH in a BH-BH merger with a very high spin ($a_{1}>0.7$ at 90% credibility). These properties are noted to be broadly consistent with BH-BH mergers being formed via isolated binary evolution. In our study we have tested whether we can reproduce the above spin characteristics with our binary evolution models that employ efficient AM transport in massive stars and that impose tidal spin-up of compact massive Wolf-Rayet stars in close binaries. The two presented models employ our standard input physics but allow for the formation of BH-BH mergers assisted either by a CE or by a stable RLOF. We find that the observed population and its spin characteristics (i–v) are consistent with our isolated-binary-evolution predictions (see Tab. 2).
In particular, we find that the majority of BH-BH mergers have small positive effective spins: $\sim 70\%$ of mergers have $0<\chi_{\rm eff}<0.3$ (efficient AM transport), while a smaller fraction has significant spins: $36-39\%$ of mergers have $\chi_{\rm eff}>0.3$ and $2-7\%$ have $\chi_{\rm eff}>0.5$ (tidal spin-up). The fraction of systems with negative effective spins is small ($3-7\%$), as most BHs do not receive strong natal kicks in our simulations. Individual BH spins can reach high values. A large fraction ($11-52\%$) of secondary BHs may have significant spin values ($a_{2}>0.5$), as it is the less massive stars that are most often subject to tidal spin-up. Nevertheless, primary BHs may also form with high spins ($3-34\%$ with $a_{1}>0.5$) if both stars have similar masses and both are subject to tidal spin-up (see Fig. 2), or due to mass ratio reversal caused by the RLOF (see Fig. 1). We also note the formation of a small fraction of almost maximally spinning BHs: $2-12\%$ for $a_{2}>0.9$ (secondary BH) and $1\%$ for $a_{1}>0.9$ (primary BH). These results on effective spins and individual BH spins are consistent with the current LIGO/Virgo population of BH-BH mergers. Note that Qin et al. (2021) came to different conclusions, finding the high-spinning detections challenging for the Tayler-Spruit dynamo, especially for the unequal-mass event with a high-spinning primary (GW190403). Our non-CE model reproduces this type of merger due to the mass ratio reversal (see Fig. 1). In this channel, at the onset of the second stable RLOF, the donor may be even $5-6$ times more massive than the accretor, with the system ending as an unequal-mass $(q\leq 0.4)$ BH-BH merger. Qin et al. (2021) have not considered the case of a stable RLOF in such unequal-mass systems. The above fractions correspond to just two different modes of spin-up during classical isolated binary BH-BH formation. Had we varied several other factors that influence BH spins and their orientations in BH-BH mergers, the ranges of these fractions would have broadened. Some obvious physical processes that can affect BH spins and their orientations include: initial star spin alignment (or lack thereof) with the binary AM, the alignment of stellar spins (or lack thereof) during RLOF phases, the treatment of accretion, the initial mass ratio distribution that can alter the ratio of systems going through stable and unstable (CE) RLOF, and the natal kicks that can misalign spin orientations. Above all, the three major uncertainties include the initial stellar rotation of stars forming BHs, the efficiency of AM transport, and the strength of tides in close binary systems. All of the above are only weakly constrained. Note that this is a proof-of-principle study that is limited only to BH spins in BH-BH mergers. In particular, we did not try to match BH masses and BH-BH merger rates for the highly spinning LIGO/Virgo BHs. In this study we have only shown that it is possible to produce highly spinning BHs by tidal interactions of stars in close binaries, in evolutionary channels both with and without CE. Our two examples of evolution (Figs. 1 and 2) have much smaller masses than the LIGO/Virgo mergers with highly spinning BHs (Tab. 1). Note, however, that we have not used here the input physics that allows for the formation of BHs with mass over $50{~{}M}_{\odot}$. Such a model is already incorporated and tested within our population synthesis code (Belczynski, 2020).
An attempt to match all observed parameters simultaneously is projected to happen in the future, when LIGO/Virgo delivers a larger sample of highly spinning BHs. Given the results presented in this study, albeit limited only to BH spins, we conclude that (i) the isolated binary evolution channel reproduces well the BH spins of the LIGO/Virgo mergers, and (ii) if, in fact, the binary channel produces the majority of the LIGO/Virgo BH-BH mergers, then this indicates that AM transport is efficient in massive stars and that tidal interactions in close binaries are strong. We thank the anonymous reviewer, Jean-Pierre Lasota, Ilya Mandel and Sambaran Banerjee for their useful comments on the manuscript. KB and AO acknowledge support from the Polish National Science Center (NCN) grant Maestro (2018/30/A/ST9/00050). ## References * Abbott et al. (2019a) Abbott, B. P., Abbott, R., Abbott, T. D., Abraham, S., LIGO Scientific Collaboration, & Virgo Collaboration. 2019a, ApJ, 882, L24 * Abbott et al. (2019b) —. 2019b, Physical Review X, 9, 031040 * Abbott et al. (2021a) Abbott, R., et al. 2021a, Physical Review X, 11, 021053 * Abbott et al. (2021b) —. 2021b, arXiv e-prints, arXiv:2108.01045 * Antonini & Perets (2012a) Antonini, F., & Perets, H. B. 2012a, ApJ, 757, 27 * Antonini & Perets (2012b) —. 2012b, ApJ, 757, 27 * Antonini et al. (2017) Antonini, F., Toonen, S., & Hamers, A. S. 2017, ApJ, 841, 77 * Arca-Sedda & Capuzzo-Dolcetta (2017) Arca-Sedda, M., & Capuzzo-Dolcetta, R. 2017, ArXiv e-prints * Arca-Sedda et al. (2018) Arca-Sedda, M., Li, G., & Kocsis, B. 2018, arXiv e-prints, arXiv:1805.06458 * Askar et al. (2017) Askar, A., Szkudlarek, M., Gondek-Rosińska, D., Giersz, M., & Bulik, T. 2017, MNRAS, 464, L36 * Bae et al. (2014) Bae, Y.-B., Kim, C., & Lee, H. M. 2014, MNRAS, 440, 2714 * Banerjee (2018) Banerjee, S. 2018, MNRAS, 473, 909 * Banerjee (2021) —. 2021, MNRAS, 500, 3002 * Bavera et al. (2020) Bavera, S. S., et al. 2020, A&A, 635, A97 * Bavera et al. (2021) —. 2021, A&A, 647, A153 * Belczynski (2020) Belczynski, K. 2020, ApJ, 905, L15 * Belczynski et al. (2010a) Belczynski, K., Bulik, T., Fryer, C. L., Ruiter, A., Valsecchi, F., Vink, J. S., & Hurley, J. R. 2010a, ApJ, 714, 1217 * Belczynski et al. (2010b) Belczynski, K., Dominik, M., Bulik, T., O’Shaughnessy, R., Fryer, C. L., & Holz, D. E. 2010b, ApJ, 715, L138 * Belczynski et al. (2016a) Belczynski, K., Holz, D. E., Bulik, T., & O’Shaughnessy, R. 2016a, Nature, 534, 512 * Belczynski et al. (2002) Belczynski, K., Kalogera, V., & Bulik, T. 2002, ApJ, 572, 407 * Belczynski et al. (2008) Belczynski, K., Kalogera, V., Rasio, F. A., Taam, R. E., Zezas, A., Bulik, T., Maccarone, T. J., & Ivanova, N. 2008, ApJS, 174, 223 * Belczynski et al. (2016b) Belczynski, K., et al. 2016b, A&A, 594, A97 * Belczynski et al. (2020) —. 2020, A&A, 636, A104 * Belczynski et al. (2021) —. 2021, arXiv e-prints, arXiv:2108.10885 * Benacquista & Downing (2013) Benacquista, M. J., & Downing, J. M. B. 2013, Living Reviews in Relativity, 16, 4 * Bond & Carr (1984) Bond, J. R., & Carr, B. J. 1984, MNRAS, 207, 585 * Carr & Silk (2018) Carr, B., & Silk, J. 2018, MNRAS, 478, 3756 * Chatterjee et al. (2016) Chatterjee, S., Rodriguez, C. L., Kalogera, V., & Rasio, F. A. 2016, ArXiv e-prints * Clesse & García-Bellido (2017) Clesse, S., & García-Bellido, J. 2017, Physics of the Dark Universe, 15, 142 * De Luca et al. (2020) De Luca, V., Desjacques, V., Franciolini, G., Pani, P., & Riotto, A. 2020, arXiv e-prints, arXiv:2009.01728 * de Mink & Mandel (2016) de Mink, S.
E., & Mandel, I. 2016, MNRAS, 460, 3545 * Detmers et al. (2008) Detmers, R. G., Langer, N., Podsiadlowski, P., & Izzard, R. G. 2008, A&A, 484, 831 * Di Carlo et al. (2019) Di Carlo, U. N., Giacobbo, N., Mapelli, M., Pasquato, M., Spera, M., Wang, L., & Haardt, F. 2019, MNRAS, 487, 2947 * Dominik et al. (2012) Dominik, M., Belczynski, K., Fryer, C., Holz, D., Berti, B., Bulik, T., Mandel, I., & O’Shaughnessy, R. 2012, ApJ, 759, 52 * Downing et al. (2010) Downing, J. M. B., Benacquista, M. J., Giersz, M., & Spurzem, R. 2010, MNRAS, 407, 1946 * du Buisson et al. (2020) du Buisson, L., et al. 2020, arXiv e-prints, arXiv:2002.11630 * Eldridge & Stanway (2016) Eldridge, J. J., & Stanway, E. R. 2016, MNRAS, 462, 3302 * Fishbach & Holz (2020) Fishbach, M., & Holz, D. E. 2020, ApJ, 891, L27 * Fishbach et al. (2017) Fishbach, M., Holz, D. E., & Farr, B. 2017, ApJ, 840, L24 * Fragione et al. (2019) Fragione, G., Grishin, E., Leigh, N. W. C., Perets, H. B., & Perna, R. 2019, MNRAS, 488, 47 * Fragione & Kocsis (2019) Fragione, G., & Kocsis, B. 2019, MNRAS, 486, 4781 * Fryer et al. (2012) Fryer, C. L., Belczynski, K., Wiktorowicz, G., Dominik, M., Kalogera, V., & Holz, D. E. 2012, ApJ, 749, 91 * Fuller & Ma (2019) Fuller, J., & Ma, L. 2019, ApJ, 881, L1 * Fuller et al. (2019) Fuller, J., Piro, A. L., & Jermyn, A. S. 2019, MNRAS * Galaudage et al. (2021) Galaudage, S., Talbot, C., Nagar, T., Jain, D., Thrane, E., & Mandel, I. 2021, arXiv e-prints, arXiv:2109.02424 * Gallegos-Garcia et al. (2021) Gallegos-Garcia, M., Berry, C. P. L., Marchant, P., & Kalogera, V. 2021, arXiv e-prints, arXiv:2107.05702 * Gerosa & Berti (2017) Gerosa, D., & Berti, E. 2017, Phys. Rev. D, 95, 124046 * Green (2017) Green, A. M. 2017, ArXiv e-prints * Gültekin et al. (2004) Gültekin, K., Miller, M. C., & Hamilton, D. P. 2004, ApJ, 616, 221 * Gültekin et al. (2006) —. 2006, ApJ, 640, 156 * Hainich et al. (2018) Hainich, R., et al. 2018, A&A, 609, A94 * Hamers et al. (2018) Hamers, A. S., Bar-Or, B., Petrovich, C., & Antonini, F. 2018, ApJ, 865, 2 * Hartwig et al. (2016) Hartwig, T., Volonteri, M., Bromm, V., Klessen, R. S., Barausse, E., Magg, M., & Stacy, A. 2016, MNRAS, 460, L74 * Hoang et al. (2018) Hoang, B.-M., Naoz, S., Kocsis, B., Rasio, F. A., & Dosopoulou, F. 2018, ApJ, 856, 140 * Hotokezaka & Piran (2017) Hotokezaka, K., & Piran, T. 2017, ArXiv e-prints * Hurley et al. (2016) Hurley, J. R., Sippel, A. C., Tout, C. A., & Aarseth, S. J. 2016, MNRAS, 33, e036 * Kalogera & Webbink (1996) Kalogera, V., & Webbink, R. F. 1996, Astrophys. J., 458, 301 * King et al. (2001) King, A. R., Davies, M. B., Ward, M. J., Fabbiano, G., & Elvis, M. 2001, ApJ, 552, L109 * Kinugawa et al. (2014) Kinugawa, T., Inayoshi, K., Hotokezaka, K., Nakauchi, D., & Nakamura, T. 2014, MNRAS, 442, 2963 * Kremer et al. (2020) Kremer, K., et al. 2020, ApJS, 247, 48 * Kruckow et al. (2018) Kruckow, M. U., Tauris, T. M., Langer, N., Kramer, M., & Izzard, R. G. 2018, ArXiv e-prints * Kushnir et al. (2017) Kushnir, D., Zaldarriaga, M., Kollmeier, J. A., & Waldman, R. 2017, MNRAS, 467, 2146 * Lipunov et al. (1997) Lipunov, V. M., Postnov, K. A., & Prokhorov, M. E. 1997, Astronomy Letters, 23, 492 * Liu & Lai (2018) Liu, B., & Lai, D. 2018, ApJ, 863, 68 * MacLeod et al. (2017) MacLeod, M., Antoni, A., Murguia-Berthier, A., Macias, P., & Ramirez-Ruiz, E. 2017, ApJ, 838, 56 * MacLeod & Ramirez-Ruiz (2015) MacLeod, M., & Ramirez-Ruiz, E. 2015, ApJ, 803, 41 * Madau & Dickinson (2014) Madau, P., & Dickinson, M. 
2014, ARA&A, 52, 415 * Mandel & de Mink (2016) Mandel, I., & de Mink, S. E. 2016, MNRAS, 458, 2634 * Mapelli (2016) Mapelli, M. 2016, MNRAS, 459, 3432 * Marchant et al. (2016) Marchant, P., Langer, N., Podsiadlowski, P., Tauris, T. M., & Moriya, T. J. 2016, A&A, 588, A50 * Marchant et al. (2018) Marchant, P., Renzo, M., Farmer, R., Pappas, K. M. W., Taam, R. E., de Mink, S., & Kalogera, V. 2018, arXiv e-prints * Mennekens & Vanbeveren (2014) Mennekens, N., & Vanbeveren, D. 2014, A&A, 564, A134 * Miller & Hamilton (2002a) Miller, M. C., & Hamilton, D. P. 2002a, ApJ, 576, 894 * Miller & Hamilton (2002b) —. 2002b, MNRAS, 330, 232 * Mirabel & Rodrigues (2003) Mirabel, I. F., & Rodrigues, I. 2003, Science, 300, 1119 * Mondal et al. (2020) Mondal, S., Belczyński, K., Wiktorowicz, G., Lasota, J.-P., & King, A. R. 2020, MNRAS, 491, 2747 * Morawski et al. (2018) Morawski, J., Giersz, M., Askar, A., & Belczynski, K. 2018, ArXiv e-prints * Neijssel et al. (2019) Neijssel, C. J., et al. 2019, MNRAS, 490, 3740 * O’Leary et al. (2007) O’Leary, R. M., O’Shaughnessy, R., & Rasio, F. A. 2007, Phys. Rev. D, 76, 061504 * Olejak et al. (2021) Olejak, A., Belczynski, K., & Ivanova, N. 2021, A&A, 651, A100 * Olejak et al. (2020) Olejak, A., Fishbach, M., Belczynski, K., Holz, D. E., Lasota, J. P., Miller, M. C., & Bulik, T. 2020, ApJ, 901, L39 * Pavlovskii et al. (2017) Pavlovskii, K., Ivanova, N., Belczynski, K., & Van, K. X. 2017, MNRAS, 465, 2092 * Paxton et al. (2015) Paxton, B., et al. 2015, ApJS, 220, 15 * Perna et al. (2019) Perna, R., Wang, Y.-H., Farr, W. M., Leigh, N., & Cantiello, M. 2019, ApJ, 878, L1 * Portegies Zwart et al. (2004) Portegies Zwart, S. F., Baumgardt, H., Hut, P., Makino, J., & McMillan, S. L. W. 2004, Nature, 428, 724 * Portegies Zwart & McMillan (2000) Portegies Zwart, S. F., & McMillan, S. L. W. 2000, ApJ, 528, L17 * Qin et al. (2018) Qin, Y., Fragos, T., Meynet, G., Andrews, J., Sørensen, M., & Song, H. F. 2018, A&A, 616, A28 * Qin et al. (2019) Qin, Y., Marchant, P., Fragos, T., Meynet, G., & Kalogera, V. 2019, ApJ, 870, L18 * Qin et al. (2021) Qin, Y., Wang, Y.-Z., Dong-Hong, Wu, Meynet, G., & Song, H. 2021, arXiv e-prints, arXiv:2108.04821 * Ricker & Taam (2008) Ricker, P. M., & Taam, R. E. 2008, ApJ, 672, L41 * Rodriguez et al. (2018) Rodriguez, C. L., Amaro-Seoane, P., Chatterjee, S., Kremer, K., Rasio, F. A., Samsing, J., Ye, C. S., & Zevin, M. 2018, Phys. Rev. D, 98, 123005 * Rodriguez et al. (2016) Rodriguez, C. L., Haster, C.-J., Chatterjee, S., Kalogera, V., & Rasio, F. A. 2016, ApJ, 824, L8 * Roulet et al. (2021) Roulet, J., Chia, H. S., Olsen, S., Dai, L., Venumadhav, T., Zackay, B., & Zaldarriaga, M. 2021, arXiv e-prints, arXiv:2105.10580 * Sadowski et al. (2008) Sadowski, A., Belczynski, K., Bulik, T., Ivanova, N., Rasio, F. A., & O’Shaughnessy, R. 2008, ApJ, 676, 1162 * Samsing (2017) Samsing, J. 2017, ArXiv e-prints * Sasaki et al. (2016) Sasaki, M., Suyama, T., Tanaka, T., & Yokoyama, S. 2016, ArXiv e-prints * Silsbee & Tremaine (2017) Silsbee, K., & Tremaine, S. 2017, ApJ, 836, 39 * Spera et al. (2016) Spera, M., Giacobbo, N., & Mapelli, M. 2016, Mem. Soc. Astron. Italiana, 87, 575 * Spera et al. (2019) Spera, M., Mapelli, M., Giacobbo, N., Trani, A. A., Bressan, A., & Costa, G. 2019, MNRAS, 485, 889 * Spruit (2002) Spruit, H. C. 2002, A&A, 381, 923 * Stevenson et al. (2017) Stevenson, S., Vigna-Gómez, A., Mandel, I., Barrett, J. W., Neijssel, C. J., Perkins, D., & de Mink, S. E. 2017, Nature Communications, 8, 14906 * Tutukov & Yungelson (1993) Tutukov, A. 
V., & Yungelson, L. R. 1993, MNRAS, 260, 675 * van den Heuvel et al. (2017) van den Heuvel, E. P. J., Portegies Zwart, S. F., & de Mink, S. E. 2017, MNRAS, 471, 4256 * van Son et al. (2020) van Son, L. A. C., et al. 2020, ApJ, 897, 100 * VanLandingham et al. (2016) VanLandingham, J. H., Miller, M. C., Hamilton, D. P., & Richardson, D. C. 2016, ApJ, 828, 77 * Vink et al. (2001) Vink, J. S., de Koter, A., & Lamers, H. J. G. L. M. 2001, A&A, 369, 574 * Voss & Tauris (2003) Voss, R., & Tauris, T. M. 2003, MNRAS, 342, 1169 * Woosley (2016) Woosley, S. E. 2016, ApJ, 824, L10 * Yusof et al. (2013) Yusof, N., et al. 2013, MNRAS, 433, 1114 * Zaldarriaga et al. (2017) Zaldarriaga, M., Kushnir, D., & Kollmeier, J. A. 2017, ArXiv e-prints * Zevin et al. (2019) Zevin, M., Samsing, J., Rodriguez, C., Haster, C.-J., & Ramirez-Ruiz, E. 2019, ApJ, 871, 91 ## APPENDIX A Figure 3.— Distributions of the primary BH spin ($a_{1}$; top panel), the secondary BH spin ($a_{2}$; middle panel), and the effective spin parameter ($\chi_{\rm eff}$; bottom panel) of BH-BH mergers at redshifts $z<1.0$. The results are for the two tested models: the non-CE model plotted with a red line and the CE model plotted with a blue line. The figure is a supplement to the statistical spin predictions shown in Table 2 and described in Section 3.
# Arbitrage equilibria in active matter systems Venkat Venkatasubramanian, Complex Resilient Intelligent Systems Laboratory, Department of Chemical Engineering, Columbia University, New York, NY 10027 Abhishek Sivaram, Department of Chemical and Biochemical Engineering, Technical University of Denmark, 2800 Kongens Lyngby, Denmark N. Sanjeevrajan, Department of Materials Engineering, Indian Institute of Technology-Madras, Chennai, India 600036 Arun Sankar, School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, AZ ## Abstract The motility-induced phase separation (MIPS) phenomenon in active matter has been of great interest for the past decade or so. A central conceptual puzzle is that this behavior, which is generally characterized as a nonequilibrium phenomenon, can yet be explained using simple equilibrium models of thermodynamics. Here, we address this problem using a new theory, statistical teleodynamics, which is a conceptual synthesis of game theory and statistical mechanics. In this framework, active agents compete in their pursuit of maximum effective utility, and this self-organizing dynamics results in an arbitrage equilibrium in which all agents have the same effective utility. We show that MIPS is an example of arbitrage equilibrium and that it is mathematically equivalent to other phase-separation phenomena in entirely different domains, such as sociology and economics. As examples, we present the behavior of Janus particles in a potential trap and the effect of chemotaxis on MIPS. ## I Introduction Active matter describes systems composed of a large number of self-actualizing dynamical agents that consume and dissipate energy and exhibit interesting macroscopic behaviors [1, 2, 3, 4, 5, 6, 7]. Biological examples of such self-organizing systems include bacteria, ants, birds, mussels, etc. Nonliving active matter examples include self-propelled Janus particles, layers of vibrated granular rods, and so on. A central conceptual puzzle in our evolving understanding of active matter is why and when a collection of active agents that looks like an out-of-equilibrium system on the microscopic scale behaves macroscopically like a simple equilibrium system of passive matter [5, 7, 8, 6, 9, 10]. Recently, a new framework called statistical teleodynamics has been developed to predict the macroscopic emergent behavior of active matter systems that comprise biological, ecological, and socioeconomic agents [11, 12, 13, 14, 15, 16]. Statistical teleodynamics is a natural generalization of statistical thermodynamics for goal-driven agents in active matter. It is a conceptual synthesis of the central concepts and techniques of population game theory with those of statistical mechanics towards a unified theory of emergent equilibrium phenomena and pattern formation in active matter. The name comes from the Greek word telos, which means goal. Just as the dynamical behavior of molecules is driven by thermal agitation (hence thermodynamics), the dynamics of purposeful agents are driven by the pursuit of their goals, hence teleodynamics. The fundamental quantity in statistical teleodynamics is the effective utility of an agent, which measures the net benefit of an agent after subtracting all the costs the agent incurred in acquiring it. All agents in a population compete to increase their effective utilities.
Population game theory proves that this competitive pursuit of maximum utility, under certain conditions, leads to an equilibrium, called the arbitrage equilibrium, where the effective utilities of all agents are equal [17]. There is an important philosophical difference between statistical teleodynamics and statistical thermodynamics. Statistical teleodynamics acknowledges the importance of recognizing the individual active agent and its behavioral properties explicitly in developing a bottom-up analytical framework of emergent phenomena. It also accounts overtly for the role of an agent’s purpose (e.g., survival and growth) and the naturally attendant concept of the pursuit of maximum utility. The role of purpose is not acknowledged in statistical mechanics, as there is no way to express that concept in that framework. However, it is the central concept in game theory and hence fits in naturally in statistical teleodynamics. Since our theory is a bottom-up emergentist framework, it emphasizes the agent perspective in contrast to statistical mechanics, which stresses the system view. For example, whenever equilibrium is addressed in statistical mechanics, it is usually formulated in terms of minimizing free energy, which is a top-down system perspective. However, in statistical teleodynamics, while the system view is also present via the maximization of the game-theoretic potential (as we discuss in the following sections), the equality of effective utilities for all agents as the equilibrium criterion from the agent perspective is conspicuously recognized and exploited. In our previous work, using statistical teleodynamics, we have shown that emergent self-organized behaviors of active agents in different disciplines, such as bacterial chemotaxis [13], ant crater formation [13], flocking of birds [14], mussel bed patterning [15], social segregation [16], and income distributions [12], can be understood as the result of arbitrage equilibria in their respective contexts. The self-actualizing agents in these studies are examples of living agents found in biology, ecology, sociology, and economics. Therefore, they are self-driven by the survival purpose and the pursuit of maximum utility. Although the statistical teleodynamics framework was developed to model and predict the emergent behavior of a large population of living goal-driven agents, we believe that it could also be helpful in understanding the emergent behavior of nonliving self-actualizing agents, such as Janus particles. Although nonliving agents are not purposeful, their self-actualizing behavior appears seemingly purposeful, as if they are pursuing some goal in their persistent dynamics. In this paper, by applying the statistical teleodynamics framework, we show that nonliving active matter systems are in arbitrage equilibria. Under certain mathematical conditions that we discuss in the rest of the paper, this arbitrage equilibrium is equivalent to a statistical or thermodynamic equilibrium, thereby resolving the above-mentioned conceptual puzzle. The remainder of the paper is organized as follows. First, we introduce the statistical teleodynamics framework. Then, we show how the self-organizing dynamics of ant-crater formation is similar to the self-actualizing behavior of Janus particles in a potential trap [18], and the emergent distribution for both weak and strong traps is indeed an equilibrium distribution, a Weibull distribution.
We then present a model game-theoretic system that predicts the emergent macroscopic behavior of a population of agents that spontaneously segregate under certain conditions at arbitrage equilibrium. We show how this arbitrage equilibrium is equivalent to motility-induced phase separation (MIPS) [5, 7, 6, 19]. We further extend this discussion to develop a utility-based model of chemotaxis-driven MIPS [20]. In all these cases, we show that the final configurations are indeed in arbitrage equilibrium, which is equivalent to thermodynamic or statistical equilibrium.

## II Statistical Teleodynamics, Potential Games, and Arbitrage Equilibrium

As noted, statistical teleodynamics is a synthesis of the central concepts and techniques of population game theory with those of statistical mechanics. The theory of population games is concerned with predicting the final outcome(s) of a large population of goal-driven agents. Given a large collection of strategically interacting rational agents, where each agent is trying to decide and execute the best possible course of actions that maximizes the agent’s payoff or utility in light of similar strategies executed by the other agents, can we predict which strategies would be executed and what outcomes are likely [21, 17]? In particular, one would like to know whether such a game would lead to an equilibrium situation.

For some population games, one can identify a single scalar-valued global function, called a potential ($\phi(\boldsymbol{x})$), that captures the necessary information about the utilities (where $\boldsymbol{x}$ is the state vector of the system). The gradient of the potential is the payoff or utility. Such games are called potential games [22, 17, 21, 23]. A potential game reaches strategic equilibrium, called Nash equilibrium, when the potential $\phi(\boldsymbol{x})$ is maximized. Furthermore, this equilibrium is unique if $\phi(\boldsymbol{x})$ is strictly concave (i.e., $\partial^{2}\phi/\partial x^{2}<0$) [17]. In potential games, the utility $h_{i}$ of an agent in state $i$ is the gradient of the potential $\phi(\boldsymbol{x})$, i.e.,

${h}_{i}(\boldsymbol{x})\equiv{\partial\phi(\boldsymbol{x})}/{\partial x_{i}}$ (1)

where $x_{i}=N_{i}/N$ and $\boldsymbol{x}$ is the population vector. $N_{i}$ is the number of agents in state $i$, and $N$ is the total number of agents. Therefore, we have

$\phi(\boldsymbol{x})=\sum_{i=1}^{n}\int{h}_{i}(\boldsymbol{x})\,{d}x_{i}$ (2)

where $n$ is the total number of states. To determine the maximum potential, one can use the method of Lagrange multipliers, with $L$ as the Lagrangian and $\lambda$ as the Lagrange multiplier for the constraint $\sum_{i=1}^{n}x_{i}=1$:

$L=\phi+\lambda(1-\sum_{i=1}^{n}x_{i})$ (3)

At equilibrium, all agents enjoy the same utility, i.e., $h_{i}=h^{*}$. It is an arbitrage equilibrium [24] in which agents are no longer incentivized to switch states, as all states provide the same utility $h^{*}$. In other words, equilibrium is reached when the opportunity for arbitrage, i.e., the ability to increase one’s utility simply by switching to another option or state at no cost, disappears. Thus, the maximization of $\phi$ and the condition $h_{i}=h^{*}$ are equivalent when the equilibrium is unique (i.e., $\phi(\boldsymbol{x})$ is strictly concave [17]), and both specify the same outcome, namely, an arbitrage equilibrium. The former stipulates it from the top-down system perspective, whereas the latter is the bottom-up agent perspective.
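To see this duality concretely, here is a minimal numerical sketch (ours, not drawn from the cited references) of a three-state potential game. We assume an illustrative utility $h_{i}(\boldsymbol{x})=b_{i}-\ln x_{i}$ with hypothetical benefits $b_{i}$; maximizing the resulting potential $\phi$ under $\sum_{i}x_{i}=1$ (Eqs. 1-3) yields equal utilities across states, i.e., an arbitrage equilibrium, and the same answer follows directly from the agent-level criterion $h_{i}=h^{*}$.

```python
# Sketch: arbitrage equilibrium of a potential game (Eqs. 1-3).
# Illustrative utility (our assumption): h_i(x) = b_i - ln(x_i), so that
# phi(x) = sum_i (b_i x_i - x_i ln x_i + x_i) satisfies h_i = d(phi)/dx_i.
import numpy as np
from scipy.optimize import minimize

b = np.array([1.0, 0.5, 0.0])               # hypothetical per-state benefits

def neg_phi(x):
    return -np.sum(b * x - x * np.log(x) + x)

res = minimize(neg_phi, x0=np.full(3, 1 / 3), method="SLSQP",
               bounds=[(1e-9, 1.0)] * 3,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1}])

x_star = res.x                               # matches softmax: exp(b_i) / sum_j exp(b_j)
h_star = b - np.log(x_star)                  # h_i at the optimum, Eq. 1
print(np.round(x_star, 4), np.round(h_star, 4))  # all h_i equal -> arbitrage equilibrium
```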
Thus, this formulation exhibits a duality property. Just as mechanical equilibrium is reached when the forces balance each other, thermal equilibrium is reached when the temperatures are equal, and phase equilibrium is achieved when the chemical potentials are equal, our theory demonstrates that a system of active agents will reach an arbitrage equilibrium when their effective utilities are equal. Whenever the effective utility is of a particular mathematical form (as explained in Sections VI and VII), this game-theoretic Nash equilibrium is equivalent to the statistical Boltzmann equilibrium. In fact, our theory reveals the critical insight that both living and non-living agents are driven by arbitrage opportunities towards equilibrium, except that their arbitrage currencies are different. For nonliving matter, the currency is the chemical potential, whereas for living matter, it is the effective utility.

Although the chemical potential (and free energy) is appropriate for describing nonliving physicochemical systems, its usage for living agents, such as bacteria, ants, birds, and so on, seems a bit awkward. We believe that effective utility (and the game-theoretic potential) is a more natural choice for active agents driven by survival and growth goals found in biology, ecology, sociology, and economics. Thus, the utility-oriented perspective helps us to extend the concepts and techniques of statistical thermodynamics more naturally to teleological agents by smoothly connecting with game theory. This is what statistical teleodynamics accomplishes.

We wish to emphasize that we are not claiming that the lower forms of living agents, such as bacteria and ants, pursue the survival goal and strategies rationally. Our view is that the biological survival instincts of such agents cause particular dynamical behaviors that evolved over millions of years to help them improve their survival chances. Therefore, they act in a goal-driven manner _instinctively_, which can be modeled using our framework of the pursuit of maximum utility or survival fitness.

Our goal is to identify the fundamental principles and mechanisms of self-organization of goal-driven agents. Towards that, we develop simple models that offer an appropriate coarse-grained description. The spirit of our modeling is similar to that of the van der Waals or the Ising model in statistical thermodynamics.

## III Ant crater model and Janus particles

As an example of an active matter system, the dynamical behavior of self-actualized Janus particles in a potential trap has attracted considerable attention [18, 25]. Takatori et al. [18] discuss the behavior of Janus particles in two regimes: (i) weak trap ($\alpha<1$) and (ii) strong trap ($\alpha>1$). They observe that the particle behavior in the weak regime can be considered an equilibrium outcome, whereas, in the strong regime, they suggest a nonequilibrium behavior.

We approach this system from the perspective of statistical teleodynamics. For us, this system resembles the behavior of a large population of ants building an ant colony underground. The dynamics of this activity involves the transport of sand grains by ants from an underground nest to the surface. The resulting grain pile, called the ant crater, has a particular shape known as the Weibull distribution [13]. Since grain transport involves an effort that increases with the distance the ant travels, the ants would prefer to drop the grains sooner rather than later to minimize the effort.
However, if they drop them too close to the nest, the sand-grain pile could collapse back into the nest, which would mean more work for them later on. Therefore, ants innately balance the need to transport grains as far away as possible while trying to minimize the effort (i.e., the disutility) expended in doing so. As we showed recently [13], this process can be modeled by considering the effective utility of ants and their self-organizing competitive dynamics.

In our model, the utility of an ant is determined by three factors. The first factor is the utility or benefit that it gains from having a home, the nest, given by $b>0$. The second factor describes the disutility (i.e., the cost) it incurs by transporting the grains away from the nest. We assume that the ants move outward radially from the nest with some average velocity $v$. We model the rate at which the ants drop off the grains as $sr^{a-1}$, where $r$ is the distance an ant travels from the nest to the drop-off point ($s>0$ and $a>1$ are constant parameters). The disutility of the effort $W$ an ant expends then depends on how much grain it carries and for how long. This results in

$W=\int_{0}^{t}sr^{a-1}\,dt=\int_{0}^{r}sr^{a-1}\frac{dr}{v}=\frac{sr^{a}}{va}=\frac{\omega r^{a}}{a}$

where $\omega=s/v$. In chemical engineering, $W$ is known as the Damköhler number, which quantifies the ratio of the time scale of flow to that of the reaction. Higher values of the Damköhler number, in this case, suggest a higher propensity of an ant to drop off the grains. The third factor accounts for the disutility of competition among ants. As ants ($N_{i}$) try to crowd at the same location ($r_{i}$) to drop off their grains, this term forces them to spread out to minimize the cost of the competition. As Venkatasubramanian et al. [13] discuss, this term is modeled as $-\ln N_{i}$. Combining all three, the effective utility $h_{i}$ that an ant gains by dropping a grain at a distance $r_{i}$ is given by

$h_{i}(r_{i},N_{i})=b-\frac{\omega r_{i}^{a}}{a}-\ln N_{i}$ (4)

The potential for this system then becomes

$\phi(\mathbf{x})=\sum_{i=1}^{n}\int h_{i}(\mathbf{x})\,dx_{i}$ (5)

$\phantom{\phi(\mathbf{x})}=b-\frac{\omega}{a}\langle r^{a}\rangle+\frac{1}{N}\ln\frac{N!}{\prod_{i=1}^{n}(Nx_{i})!}$ (6)

where $\langle r^{a}\rangle$ is the expectation of the quantity $r^{a}$, based on the locations of the ants ($N$ is the total number of ants). As the reader might recognize, the last term in Eq. 6 is entropy. Therefore, by maximizing the potential $\phi$, one is equivalently maximizing entropy subject to the constraints in the first two terms. This deep connection between statistical mechanics (through entropy) and game theory (through potential) has been discussed in great detail in [12, 24]. We discuss this connection at some length in Section V.

As Venkatasubramanian et al. [13] show, there is a unique arbitrage equilibrium outcome for this collective behavior, where all ants have the same utility, i.e., $h_{i}=h^{*}$. Therefore, we have

$h^{*}=b-\frac{\omega r_{i}^{a}}{a}-\ln N_{i}^{*}$ (7)

which can be rearranged to show

$x_{i}^{*}=\frac{N_{i}^{*}}{N}=\frac{\exp\left(-\dfrac{\omega r_{i}^{a}}{a}\right)}{\sum_{j}\exp\left(-\dfrac{\omega r_{j}^{a}}{a}\right)}$ (8)

where $N_{i}^{*}$ is the value at equilibrium. In the continuum limit, the states are continuous, with the state defined as the radial location $r$.
This results in the simplification $x_{i}^{*}=N_{i}^{*}/N=\rho^{*}(r)2\pi r\,dr/N$, where $\rho^{*}(r)$ is the number density of ants at location $r$ at equilibrium. With this result in Eq. (8), it can be shown that the number density of ants follows the distribution

$\rho^{*}(r)=\frac{A}{r}\exp\left(-\frac{\omega r^{a}}{a}\right)$ (9)

where $A$ is a constant that satisfies the boundary condition of a constant flux of ants from the center of the nest. Note that this emergent distribution is that of the number of ants. Given this distribution $\rho^{*}(r)$, the grain distribution can be calculated using the cumulative distribution

$F(r)=\frac{\int_{0}^{r}sr'^{\,a-1}\rho^{*}(r')\,2\pi r'\,dr'}{\int_{0}^{\infty}sr'^{\,a-1}\rho^{*}(r')\,2\pi r'\,dr'}$

This gives

$F(r)=1-\exp\left(-\frac{\omega r^{a}}{a}\right)$

with the grain distribution $f(r)$ given by

$f(r)=\frac{dF}{dr}=\omega r^{a-1}\exp\left(-\frac{\omega r^{a}}{a}\right)$ (10)

which is the Weibull distribution of ant craters that is observed empirically. It is important to emphasize that this distribution results from an equilibrium, namely, the arbitrage equilibrium. This is not a nonequilibrium or far-from-equilibrium outcome.

### III.1 Janus particles in a potential trap

We now consider the Janus particles from this perspective. We will show that the Janus particle dynamics can be considered an arbitrage equilibrium outcome in both the weak and strong traps. Our analysis is based on the work of Takatori et al. [18]. They report on the behavior of self-propelled Janus particles in an acoustic trap whose strength can be tuned. The trap force is modeled by

$F^{trap}(r)=-kr\exp(-2(r/w)^{2})$ (11)

where $k$ is the spring constant and $w$ is the width of the trap. They show that the probability distribution $P(r)$ due to the active Brownian motion of the swimmer is given by

$P(\bar{r})(U_{0}\tau_{R})^{2}=(\alpha/\pi)\exp(-\alpha\bar{r}^{2})$ (12)

where $\bar{r}=r/(U_{0}\tau_{R})$ and $\alpha:=k\tau_{R}/\zeta$ is the nondimensional trap strength.

In our approach, we consider the “struggle” of Janus particles swimming against the trap force to be similar to the effort expended by ants transporting sand grains. Therefore, we propose that the effective utility $H_{i}$ for a Janus particle be

$H_{i}(r_{i},N_{i})=-\frac{\omega r_{i}^{a}}{a}-\ln N_{i}$ (13)

where the first term is the disutility of the work done by the Janus particle against the trap force, and the second term is the disutility of the competition among the particles, as before. We do not need the benefit term $b$ here, as it is not relevant for Janus particles. Following the same analysis as for the dynamics of the ants, we conclude that the Janus particles will also reach the same kind of arbitrage equilibrium outcome, $H_{i}=H^{*}$, with the probability distribution given by the Weibull distribution

$P(r)=\omega r^{a-1}\exp\left(-\frac{\omega r^{a}}{a}\right)$ (14)

Now, we expect that the exponent $a$ will depend on the trap strength $\alpha$. As the trap strength increases, the work done by the particles against the trap force increases superlinearly with distance $r$. We postulate that this nonlinear dependence can be modeled as

$a=1+\alpha+0.5\alpha^{2}$ (15)

We fit Eq. 14 to the weak ($\alpha<1$) and strong ($\alpha>1$) trap force data reported by Takatori et al. [18] (Figs. 2a and 2b in their paper). The best-fit plots are shown in Figs. 1-4.
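The fitting procedure just described is straightforward to reproduce; the sketch below (ours, with synthetic rather than the published data) samples from the Weibull law of Eq. 14, recovers the shape exponent $a$ by maximum likelihood, and inverts Eq. 15 to estimate $\hat{\alpha}=-1+\sqrt{2a-1}$.

```python
# Sketch: recover the trap strength from Weibull-distributed data (Eqs. 14-15).
# Synthetic data stand in for the Takatori et al. measurements.
import numpy as np
from scipy.stats import weibull_min

alpha_true = 0.5                          # hypothetical nondimensional trap strength
a = 1 + alpha_true + 0.5 * alpha_true**2  # Eq. 15
omega = 1.0
scale = (a / omega) ** (1 / a)            # Eq. 14 is Weibull(shape=a, scale=(a/omega)^(1/a))

r = weibull_min.rvs(a, scale=scale, size=20_000, random_state=0)
a_hat, _, _ = weibull_min.fit(r, floc=0)  # maximum-likelihood fit of the shape exponent
alpha_hat = -1 + np.sqrt(2 * a_hat - 1)   # Eq. 15 inverted for the trap strength
print(round(a_hat, 3), round(alpha_hat, 3))  # close to a and alpha_true
```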
As we can see, the Weibull distribution fits both regimes (weak and strong) quite well (the $R^{2}$ values are reported in the figures). We determined the exponent $a$ from the fitted distributions and used Eq. 15 to estimate the $\hat{\alpha}$ values. We see that the estimated values ($\hat{\alpha}$, shown in the figures) are in good agreement with the actual $\alpha$ values reported by Takatori et al. [18]. Thus, the data appear to support our prediction that the final distributions of the Janus particles in both the weak- and strong-trap regimes are the results of arbitrage equilibria. They are not out-of-equilibrium or nonequilibrium systems, but arbitrage equilibrium systems. More experimental and simulation studies are needed at other values of $\alpha$ to confirm this more conclusively. In addition, it would be helpful to derive the first term on the right-hand side of Eq. 13, and Eq. 15, from first principles.

Figure 1: Weibull distribution fit for the weak trap: Experimental data

Figure 2: Weibull distribution fit for the weak trap: Simulation data

Figure 3: Weibull distribution fit for the strong trap: Experimental data

Figure 4: Weibull distribution fit for the strong trap: Simulation data

## IV Schelling-agents model and MIPS

We now consider another game-theoretic model system to address motility-induced phase separation (MIPS) from a statistical teleodynamics perspective [13]. Here again, we demonstrate the emergence of arbitrage equilibrium as a result of the self-organizing dynamics of a large population of active agents competing for benefits. There are numerous examples of such behavior in the living world. In ecology, for example, we have the formation of mussel beds in the sea [26, 27, 28, 15]; in sociology, the social segregation of different groups [29, 30]; and in economics, the emergence of income inequality in societies [12].

We develop our model by considering a large lattice of local neighborhoods or blocks, each with $M$ sites that agents can occupy. There are $n$ such blocks, $nM$ sites, and a total of $N$ agents, with an average agent density of $\rho_{0}=N/(nM)$. The state of an agent is defined by specifying the block $i$ in which it is located, and the state of the system is defined by specifying the number of agents, $N_{i}$, in block $i$, for all blocks ($i\in\{1,\dots,n\}$). Let block $i$ also have $V_{i}$ vacant sites, so $V_{i}=M-N_{i}$.

Next, as we did in the case of the ant crater and the Janus particles, we define the effective utility, $h_{i}$, for agents in block $i$, which agents try to maximize by moving to better locations (i.e., other blocks), if possible. The effective utility is, again, the net sum of the benefits minus the costs, but the benefits and costs here are different from those of the ant-crater situation, as one might expect. Here, an agent prefers to have more members in its neighborhood, as this aggregation provides certain benefits. In ecology, for example, aggregation improves the chances of survival of mussels against predators and sea-wave stress [28, 27]. In sociology, it increases social benefits such as meeting life partners, finding better jobs, etc. [30]. Therefore, this affinity benefit term, which represents cooperation between agents, is proportional to the number of agents in the neighborhood. We model this as $\alpha N_{i}$, where $\alpha>0$ is a parameter. However, this affinity benefit comes with a cost. As more and more agents aggregate, they all compete for the same limited resources in the neighborhood.
This congestion cost is widely modeled by the quadratic disutility term $-\beta N_{i}^{2}$ ($\beta>0$) [12, 31].

In addition, agents also balance two competing search strategies: exploitation and exploration. Exploitation takes advantage of the opportunities in the immediate, local neighborhood of the agents. On the other hand, exploration examines possibilities outside. This is a widely used strategy in biology, ecology, and sociology. For example, a genetic mutation can be thought of as exploitation, which is searching locally in the design space, while crossover is exploration, which is searching more globally. Regarding exploration, the agents derive a benefit from having a large number of vacant sites to potentially move to in the future, should such a need arise. This is the instinct to explore other opportunities, as new vacant sites are potentially new sources of food and other benefits. We call this the option benefit term, as agents have the option to move elsewhere if needed. Again, following Venkatasubramanian [12, 24, 13], we model this as $\gamma\ln(M-N_{i})$, $\gamma>0$. The logarithmic function captures the diminishing utility of this option, a commonly used feature in economics and game theory. As before, this benefit is also associated with a cost due to competition among agents for these vacant sites. As in the ant-crater case, we model this competition cost as $-\delta\ln N_{i}$, $\delta>0$ [11, 12, 24]. Combining all these, we obtain the following effective utility function $h_{i}$ for the agents in block $i$:

$h_{i}(N_{i})=\alpha N_{i}-\beta N_{i}^{2}+\gamma\ln(M-N_{i})-\delta\ln N_{i}$ (16)

Intuitively, the first two terms in the equation model the benefit-cost trade-off in the exploitation behavior, while the last two model a similar trade-off in exploration. Rewriting this in terms of the density ($\rho_{i}$) of agents in block $i$, $\rho_{i}=N_{i}/M$, and absorbing the constant $M$ into $\alpha$ and $\beta$, we have

$h_{i}(\boldsymbol{\rho})=\alpha\rho_{i}-\beta\rho_{i}^{2}+\gamma\ln(1-\rho_{i})-\delta\ln\rho_{i}$ (17)

Note that in certain cases, agents may not occupy all $M$ sites, and only a fraction of the sites can be occupied due to restrictions such as steric factors. This corresponds to a maximum occupancy density ($\rho_{\text{max}}$). In such a situation, the utility of vacant sites will be $\ln(1-\rho_{i}/\rho_{\text{max}})$, resulting in the formulation

$h_{i}(\boldsymbol{\rho})=\alpha\rho_{i}-\beta\rho_{i}^{2}+\gamma\ln(1-\rho_{i}/\rho_{\mathrm{max}})-\delta\ln\rho_{i}$ (18)

We can set $\delta=1$ without any loss of generality. In addition, we set $\gamma=1$ and $\rho_{\mathrm{max}}=1$ to gain analytical simplicity, but these can be relaxed later if necessary. Therefore, we now have

$h_{i}(\boldsymbol{\rho})=\alpha\rho_{i}-\beta\rho_{i}^{2}+\ln(1-\rho_{i})-\ln\rho_{i}$ (19)

For simplicity, we define $u(\rho_{i})=\alpha\rho_{i}-\beta\rho_{i}^{2}$. Therefore, the potential $\phi(\boldsymbol{\rho})$ in Eq.
2 becomes

$\phi(\boldsymbol{\rho})=\sum_{i=1}^{n}\int h_{i}(\boldsymbol{x})\,dx_{i}=\frac{M}{N}\sum_{i=1}^{n}\int h_{i}(\boldsymbol{\rho})\,d\rho_{i}=\frac{M}{N}\sum_{i=1}^{n}\int_{0}^{\rho_{i}}\left[u(\rho)+\ln(1-\rho)-\ln\rho\right]d\rho$ (20)

One can generalize the discrete formulation to a continuous one by replacing $\rho_{i}$ by $\rho(r)$, where the density is a continuous function of the radius $r$ of the neighborhood, as demonstrated by Sivaram and Venkatasubramanian [14] in the self-organized flocking behavior of birds.

Now, according to the theory of potential games [17], an arbitrage equilibrium is reached when the potential is maximized. We can determine the equilibrium utility, $h^{*}$, by the Lagrange multiplier approach mentioned above (Eq. 3), but there exists a simpler alternative that is more convenient for our purposes here. To analyze the equilibrium behavior, we can take the simpler agent-based perspective and exploit the fact that at equilibrium all agents have the same effective utility, i.e., $h_{i}=h^{*}$ for all $i$. In other words,

$\alpha\rho^{*}-\beta\rho^{*2}+\ln(1-\rho^{*})-\ln\rho^{*}=h^{*}$ (21)

We explore numerically the behavior of $h$ as a function of $\rho$ (Eq. 19), as shown in Fig. 5 ($\beta=0$, different $\alpha$) and Fig. 6 ($\alpha=6$, different $\beta$). As we can see, these two plots are qualitatively similar. Below a threshold value of $\alpha$ and $\beta$, the utility function is monotonic, and there is a unique density (blue curve) for a given utility value. Above the threshold, the utility is non-monotonic (green curve) and can have multiple density values for the same utility. This behavior is known as the van der Waals loop in thermodynamics. In mathematical terms, the cubic-like equation $h(\rho)=h^{*}$ then has three real and positive roots; the red dotted line shows this. The orange curve represents the threshold behavior.

Figure 5: Effective utility vs density: $h$ vs $\rho$ for different $\alpha$. The black points are the spinodal points ($\rho_{s1}=0.211$, $h_{s1}=2.585$; $\rho_{s2}=0.789$, $h_{s2}=3.415$). The red points are the binodal points ($\rho_{b1}=0.071$, $h_{b1}=3.00$; $\rho_{b2}=0.929$, $h_{b2}=3.00$).

Figure 6: Effective utility vs density: $h$ vs $\rho$ for different $\beta$. The black points are the spinodal points ($\rho_{s1}=0.237$, $h_{s1}=2.535$; $\rho_{s2}=0.685$, $h_{s2}=2.864$). The red points are the binodal points ($\rho_{b1}=0.117$, $h_{b1}=2.708$; $\rho_{b2}=0.829$, $h_{b2}=2.708$).

Note that whether all agents remain in a single phase of uniform density dispersed throughout the region or separate into various groups is determined by the slope $\partial h/\partial\rho\big|_{\rho^{*}}$, which is the second derivative of $\phi$, $\partial^{2}\phi/\partial\rho^{2}\big|_{\rho^{*}}$, and by the roots of $h(\rho)=h^{*}$. This behavior is mathematically equivalent to spinodal decomposition in thermodynamics, widely studied, for example, in the phase separation of alloys and polymer blends [32, 33]. In thermodynamics, the phase between the spinodal points (discussed below in more detail) is unstable because it corresponds to increasing the free energy of the system, and hence the single phase splits into two phases of different densities to lower the free energy. For the same reason, the phases between the spinodal and binodal points are metastable, and the phases at the binodal points are stable.
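The threshold behavior seen in Figs. 5 and 6 is easy to verify numerically. The sketch below (ours) scans the slope of $h(\rho)$ from Eq. 19 with $\beta=0$: for $\alpha\leq 4$ the slope never turns positive, while for $\alpha>4$ it changes sign twice, producing the van der Waals loop.

```python
# Sketch: detect the van der Waals loop in Eq. 19 (beta = 0, as in Fig. 5).
import numpy as np

def dh_drho(rho, alpha):                  # dh/drho = d^2(phi)/drho^2 for Eq. 19
    return alpha - 1 / (1 - rho) - 1 / rho

rho = np.linspace(1e-4, 1 - 1e-4, 100_000)
for alpha in (0.0, 4.0, 6.0):
    sign_changes = int(np.sum(np.diff(np.sign(dh_drho(rho, alpha))) != 0))
    print(alpha, sign_changes)            # 0, 0, 2: the loop appears only for alpha > 4
```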
A similar behavior happens here in statistical teleodynamics as well. Here, the goal is to maximize the potential $\phi$ in Eq. 20. Thus, in Fig. 5 and Fig. 6, we observe that for $\alpha=0$ (blue curve) and $\alpha=4$ (orange curve), $\partial h/\partial\rho\leq 0$ (i.e., a non-positive slope; recall that $\partial h/\partial\rho=\partial^{2}\phi/\partial\rho^{2}$). In such a parameter regime, phase separation does not occur. However, for higher values of $\alpha$, in regions with $\partial h/\partial\rho>0$ (i.e., a positive slope), phase separation develops.

We can understand this better from Fig. 7. The upper part of this figure shows the potential ($\phi$) vs density ($\rho$) curve (in green) for $\alpha=6$, $\beta=0$. The plotted equation is

$\phi=\alpha\frac{\rho^{2}}{2}-\beta\frac{\rho^{3}}{3}-\rho\ln{\rho}-(1-\rho)\ln{(1-\rho)}-2.6\rho$ (22)

The linear term $2.6\rho$ is subtracted from the actual potential function as a way of rescaling to highlight the double-hump nature of the $\phi$-$\rho$ curve. This subtraction is done purely for illustration, as the double hump is otherwise not visible at the scale of the figure. In all our calculations and simulations, this subtraction is not needed and hence is not done. The spinodal points are shown as black dots, where $\partial h/\partial\rho\big|_{\rho^{*}}=\partial^{2}\phi/\partial\rho^{2}\big|_{\rho^{*}}=0$. The corresponding spinodal points are also shown in Fig. 5 as black dots on the green curve ($\alpha=6$, $\beta=0$). Fig. 7 also shows the binodal points (in red, connected by the common tangent line), where $\partial h/\partial\rho\big|_{\rho^{*}}=\partial^{2}\phi/\partial\rho^{2}\big|_{\rho^{*}}<0$. The corresponding binodal points are seen in Fig. 5 as red points connected by the red dotted line. As we see, the two binodal points enjoy the same effective utility (3.00), which is the arbitrage equilibrium.

The bottom part of Fig. 7 shows the loci of the binodal points (red curve) and of the spinodal points (black curve) for different values of $\alpha$ ($\beta=0$). As $\alpha$ changes, the binodal and spinodal points change, and for $\alpha\leq 4$ ($\beta=0$) they disappear. Within the spinodal region (the miscibility gap), shown in dark gray, a single phase of uniform density is unstable and would split into two phases of different densities. The reason is that the potential $\phi$ of a single large agent group here is less than the sum of the two potentials of the low-density group and the high-density group at the binodal points. We see this geometrically from the fact that the common tangent line connecting the binodal points lies above the single-phase green curve between the spinodal points. Agents in such regions will be self-driven towards the high-density binodal point to increase their utility. So $\phi$ increases, and the system splits into two groups of different densities. Thus, for the green curve in Fig. 5, a self-organized, utility-driven, stable phase separation occurs spontaneously at the binodal points (red dotted line) at the arbitrage equilibrium. While the miscibility gap is unstable, the region immediately outside of it, between the black and red curves, is metastable. Beyond the red curve, one has a stable single phase of uniform density; no phase separation occurs here.

Figure 7: Game potential ($\phi$) curve and the spinodal and binodal points. For $\alpha=6$, $\beta=0$, the spinodal densities are 0.211 and 0.789; the binodal densities are 0.071 and 0.929.
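The numbers quoted in the Fig. 7 caption can be cross-checked directly. For $\beta=0$, Eq. 19 satisfies $h(\rho)+h(1-\rho)=\alpha$, so the common-tangent construction reduces to solving $h(\rho)=h^{*}=\alpha/2$, while the spinodal condition $\partial h/\partial\rho=0$ reduces to $\alpha\rho(1-\rho)=1$. A minimal check (ours):

```python
# Sketch: spinodal and binodal points of Eq. 19 for alpha = 6, beta = 0 (Fig. 7).
import numpy as np
from scipy.optimize import brentq

alpha = 6.0

def h(rho):                               # Eq. 19 with beta = 0
    return alpha * rho + np.log(1 - rho) - np.log(rho)

# Spinodal: dh/drho = 0  <=>  alpha * rho * (1 - rho) = 1
rho_s1 = (1 - np.sqrt(1 - 4 / alpha)) / 2
print(round(rho_s1, 3), round(1 - rho_s1, 3))        # 0.211, 0.789

# Binodal: h(rho) = h* = alpha/2 = 3.00, using the symmetry h(rho) + h(1-rho) = alpha
rho_b1 = brentq(lambda r: h(r) - alpha / 2, 1e-9, rho_s1)
print(round(rho_b1, 3), round(1 - rho_b1, 3), round(h(rho_b1), 2))  # 0.071, 0.929, 3.0
```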
In summary, for high values of $\alpha$ (e.g., the green curve in Fig. 5), combined with average densities in the miscibility gap, we observe the spontaneous emergence of two phases, high- and low-density groups of agents, at arbitrage equilibrium, driven by the self-actuated pursuit of maximum utility by the agents. In Fig. 8, we show the region (shaded in yellow) within which phase separation occurs at the arbitrage equilibrium for values of the average density $\rho_{0}$, $\alpha$, and $\beta$ within the region. In Fig. 9 and Fig. 10, we show 2-D slices of the yellow region of spontaneous phase separation. For a given value of $\alpha$, $\beta$, and $\rho_{0}$, they show the loci of the two densities (i.e., low- and high-density groups) of the corresponding equilibrium states of the agents.

Intuitively, in the high-density phase, agents derive so much more benefit from the affinity term (due to the high $\alpha$) that it more than compensates for the disutilities due to congestion and competition, thus yielding a high effective utility. Similarly, in the low-density phase, the benefits of reduced congestion and lower competition, combined with the increased option benefit, more than compensate for the loss of utility from the affinity term. Thus, every agent enjoys the same effective utility $h^{*}$ in one phase or the other at the arbitrage equilibrium. Equilibrium results because, as noted, there is no arbitrage incentive left for agents to switch neighborhoods.

As noted, this analysis is mathematically equivalent to spinodal decomposition in statistical thermodynamics, with an important difference. In statistical thermodynamics, agents try to minimize their chemical potentials and the free energy of the system. Here, in statistical teleodynamics, agents try to maximize their utilities ($h_{i}$) and the game-theoretic potential ($\phi$). In thermodynamics, chemical potentials are equal in phase equilibrium. In teleodynamics, the effective utilities are equal in arbitrage equilibrium. The parallel is striking, but it is not surprising because, as Venkatasubramanian has shown [12, 13], statistical teleodynamics is a generalization of statistical thermodynamics for goal-driven agents. Therefore, given this mathematical equivalence, one should expect to observe “macroscopic” phenomena generally associated with thermodynamics (such as phase separation and equilibrium) in entirely different contexts (such as MIPS or social segregation in socioeconomic systems). The “microscopic” mechanisms of the self-organizing dynamics might differ in different contexts. Thus, the driving force for the movements of nonliving agents could be temperature, pressure, or chemical potential gradients, whereas the driving force for living agents is effective utility. As noted, since our theory is “mesoscopic” in character, it is agnostic to the “microscopic” details.

Figure 8: Phase separation region at the arbitrage equilibrium

Figure 9: Constant $\alpha$ (vertical) 2-D slices of the yellow region

Figure 10: Constant $\beta$ (horizontal) 2-D slices of the yellow region

### IV.1 Agent-based simulation results

We tested our model using an agent-based simulation on a $300\times 300$ lattice (90,000 cells in total), for $\alpha=6$, $\beta=0$, and for different $N$ ($N$ = 22,500, 45,000, 55,000). For details of the simulations, the reader is referred to the Methods section.

Figure 11: Equilibrium patterns for different average densities, $\rho_{0}$, at the end of 10,000 iterations, for $\alpha=6$, $\beta=0$.
(A), (B), and (C) represent, respectively, the sparsely distributed dots pattern (I) obtained for 22,500 agents ($\rho_{0}=0.25$), the corresponding density distribution at the end of 10,000 iterations, and the evolution of the average utility over the iterations. Similar results are shown for the labyrinthine pattern (II) in (D), (E), and (F) for 45,000 agents ($\rho_{0}=0.5$), and for the “gapped” pattern (III) in (G), (H), and (I) for 55,000 agents ($\rho_{0}=0.61$).

In our simulations, we observe (Fig. 11) the three basic types of patterns, or “macroscopic” states, namely, (I) sparsely distributed dots, (II) labyrinthine or worm-like structures, and (III) “gapped” patterns, which are seen empirically, for example, in mussel beds [27] at different mussel densities. As one might expect, the size of the interaction neighborhood (see the Methods section) plays a role in determining the specific details, i.e., the “microscopic” features, of these “macroscopic” states. That is, for example, the detailed “microscopic” features of the labyrinthine or worm-like structures might look different for different neighborhood sizes, but their “macroscopic” state would remain labyrinthine. Thus, the basic “macroscopic” states are found to be robust. The “macroscopic” state transitions appear like phase transitions, moving from category I to category II to category III as the density increases.

We believe that these “macroscopic” and “microscopic” characteristics reflect the structure of the phase-space landscape of $\phi$. Thus, as the agents move around in the physical space, the system wanders around in the phase-space landscape, settling into one state or the other. Since a “macroscopic” state could be achieved via many different “microscopic” states, i.e., _multiplicity_, one gets different “microscopic” outcomes in different simulation runs, while the “macroscopic” outcome remains the same and robust. The “macroscopic” states are the attractors seen in many nonlinear dynamical systems. The corresponding density histograms and the evolution of the average utility as a function of iteration are also shown in Fig. 11. For this configuration (i.e., $\alpha=6$, $\beta=0$), the spinodal densities (the black points in Fig. 5 and Fig. 7) are 0.211 and 0.789, and the binodal densities (the red points in Fig. 5 and Fig. 7) are 0.071 and 0.929.

A word of caution as we discuss the results. As noted, our simple coarse-grained model is equivalent in spirit to the van der Waals or the Ising model in statistical mechanics. Therefore, we do not expect our analytical model and the associated simulations to capture all the nuances and complexities of the real-world patterns in active matter systems.

As predicted by our theory, the density histograms show (Fig. 11, B, E, H) spinodal decomposition for the three agent populations ($N$ = 22,500, 45,000, 55,000). That is, there are two phases, one with low-density groups and the other with high-density groups. These two densities are around the binodal densities predicted by the theory. Although the theory predicts two sharp binodal density values ($\rho_{1}$ = 0.071 and $\rho_{2}$ = 0.929), it would be hard to see such precise results in the simulation for one main reason. Theoretical predictions are based on concepts from statistical mechanics, which only work well for an extremely large number of agents (such as the Avogadro number of molecules, $\sim 10^{23}$).
This is when the statistical estimates and outcomes are extremely precise, as in the case of, for example, alloys in materials science. In our simulation, we have only 22,500-55,000 agents, so the statistics are not that precise. Therefore, one should expect to see a distribution of values instead of singular peaks. That is what we observe in our simulations.

We also observe that the distributions around the low binodal density are narrower, whereas they are broader at the high binodal density. The reason is the following. As we see in Fig. 5, an individual agent reaches its maximum utility at the upper spinodal point, at the spinodal density of 0.789, while the entire agent bed reaches its maximum potential $\phi$ (and hence the arbitrage equilibrium) at the binodal density of 0.929, as seen from Fig. 7. Thus, in high-density clusters, individual agents constantly compete to reach the upper spinodal point (density = 0.789) of higher individual utility, while the competition of the other agents to reach the same state drives the agent bed away from the spinodal point to the binodal point (density = 0.929). Therefore, agents mainly bounce around between these two points, the spinodal density of 0.789 and the binodal density of 0.929, with a weighted average density of about 0.85 (see Table 1), right in the middle.

We also observe in Fig. 11 (C, F, I) that the average utility improves as the simulation proceeds, as the agents maneuver around to increase their effective utilities, and then finally settles and fluctuates around the arbitrage equilibrium value.

The key statistics are summarized in Table 1. The spinodal and binodal densities are the same for the three different cases of $N$ because $\alpha=6$, $\beta=0$ for all cases (see Fig. 5, green curve). We also find that the average utility of Phase-1 is almost the same as that of the corresponding Phase-2, as predicted by the theory. Thus, we see that a vast majority (86-90%) of the agents are in their arbitrage equilibrium states, either in Phase-1 or Phase-2.

Table 1: Summary of key metrics in phase separation

| Category | $N$ | $\rho_{s1}$ | $\rho_{s2}$ | $\rho_{b1}$ | $\rho_{b2}$ | Phase-1 share (%) | Phase-1 avg. density | Phase-1 avg. utility | Phase-2 share (%) | Phase-2 avg. density | Phase-2 avg. utility |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Sparsely distributed dots | 22,500 | 0.211 | 0.789 | 0.071 | 0.929 | 33.28 | 0.052 | 3.217 | 52.63 | 0.855 | 3.330 |
| Labyrinthine | 45,000 | 0.211 | 0.789 | 0.071 | 0.929 | 8.71 | 0.051 | 3.234 | 79.85 | 0.850 | 3.338 |
| Gapped | 55,000 | 0.211 | 0.789 | 0.071 | 0.929 | 4.32 | 0.050 | 3.250 | 85.42 | 0.848 | 3.341 |

### IV.2 Stability of the Arbitrage Equilibrium

We can determine the stability of this equilibrium by performing a Lyapunov stability analysis [11, 12]. A Lyapunov function $V$ is a continuously differentiable function that takes positive values everywhere except at the equilibrium point (i.e., $V$ is positive definite), and decreases (or is non-increasing) along every trajectory traversed by the dynamical system ($\dot{V}$ is negative definite or negative semi-definite). A dynamical system is locally stable at equilibrium if $\dot{V}$ is negative semi-definite and is asymptotically stable if $\dot{V}$ is negative definite. Following Venkatasubramanian [12], we identify our Lyapunov function $V(\boldsymbol{\rho})$ as

$V(\boldsymbol{\rho})=\phi^{*}(\boldsymbol{\rho})-\phi(\boldsymbol{\rho})$ (23)

where $\phi^{*}$ is the potential at the arbitrage equilibrium (AE) (recall that $\phi^{*}$ is at its maximum at AE) and $\phi(\boldsymbol{\rho})$ is the potential at any other state.
Note that $V(\boldsymbol{\rho})$ has the desirable properties we seek: (i) $V(\boldsymbol{\rho^{*}})=0$ at AE and $V(\boldsymbol{\rho})>0$ elsewhere, i.e., $V(\boldsymbol{\rho})$ is positive definite; (ii) since $\phi(\boldsymbol{\rho})$ increases as it approaches the maximum, $V(\boldsymbol{\rho})$ decreases with time, so it is easy to see that $\dot{V}$ is negative definite. Therefore, the arbitrage equilibrium is not only stable but also asymptotically stable.

Figure 12: Stability analysis. (A) Equilibrium configuration of agents at the end of 10,000 iterations; (B) disturbed configuration at the 10,001st iteration; (C) equilibrium configuration at the end of 15,000 iterations; (D) evolution of the average utility over iterations. The sharp decrease in the average utility is due to the disturbance introduced at the 10,001st iteration.

Our simulation results confirm this theoretical prediction (see Fig. 12). We show the stability results for the configuration of Fig. 11-A as an example. After the agent population reached equilibrium (10,000 iterations), we disturbed the equilibrium state by randomly changing the positions of the agents. As a result, the average utility of the population goes down, as seen from the sharp drop at the 10,001st iteration in Fig. 12-D. Fig. 12-B shows the disturbed state of the agent groups. The simulation is then continued from this new disturbed far-from-equilibrium state. As we see from Fig. 12-C, the agent population recovers quickly to reach the original category-I “macroscopic” state, even though some of the “microscopic” features are different this time. The reader might have noticed that small groups in Fig. 12-A have merged to become larger groups in two different locations in Fig. 12-C. This phenomenon is called Ostwald ripening in materials science. We also notice that the average utility is back to its old level. This analysis shows that the arbitrage equilibrium region is not only stable but asymptotically stable. That is, the “macroscopic” structures are resilient and self-healing. Given the speed of recovery, the equilibrium could possibly even be exponentially stable, but we have not proved this analytically here. It is interesting to observe that this result is similar to that of the dynamics of the income game [11, 12] and the dynamics of flocking birds [14].

## V Connection with Statistical Thermodynamics

To appreciate how statistical teleodynamics is similar to, and different from, statistical thermodynamics, let us consider a familiar example, the thermodynamic system of gas molecules in a container, from the perspective of statistical teleodynamics. We call this the Thermodynamic Game [11, 12]. Now, real gas molecules are, of course, purpose-free, and hence do not pursue maximum utility. However, in this game-theoretic formulation, we show that our imaginary molecule-like agents, when they pursue a particular form of “utility” as given in Eq. 24, behave like gas molecules. So, approaching this from the perspective of a potential game, we introduce the following “utility,” $h_{i}$, for our molecular agents in state $i$:

${h}_{i}(E_{i},N_{i})=-\beta E_{i}-\ln N_{i}$ (24)

where $E_{i}$ is the energy of an agent in state $i$, $N_{i}$ is the number of agents in state $i$, $\beta=1/k_{B}T$, $k_{B}$ is the Boltzmann constant, and $T$ is the temperature.
The first term models the tendency of molecules to prefer low-energy states (since our “molecular” agent tries to maximize its utility, the negative sign leads to smaller values of $E_{i}$). The $-\ln N_{i}$ term models the disutility of competition we have used in the sections above. This term models the “restless” nature of molecules and their propensity to spread out. This is so because the $-\ln N_{i}$ term incentivizes the agent to leave its current location of higher $N_{i}$ to a location of lower $N_{i}$ all the time. By integrating this effective utility, we obtain the potential $\phi(\mathbf{x})$ as $\phi(\mathbf{x})=-\frac{\beta}{N}E+\frac{1}{N}\ln\frac{N!}{\prod_{i=1}^{n}(Nx_{i})!}$ (25) where $E=N\sum_{i=1}^{n}x_{i}E_{i}$ is the total energy that is conserved, $N$ is the total number of molecules, $n$ is the total number of states, and $x_{i}=N_{i}/N$. This game reaches a unique Nash equilibrium when $\phi(\mathbf{x})$ is maximized [17]. To determine the equilibrium distribution, we maximize the Lagrangian given by Eq. 3 $L=\phi+\lambda(1-\sum_{i=1}^{n}x_{i})$ and obtain the well-known Boltzmann exponential distribution of energy at equilibrium $x_{i}^{*}=\frac{\exp(-\beta E_{i})}{\sum_{j=1}^{n}\exp(-\beta E_{j})}$ (26) Thus, the arbitrage equilibrium of this game is the same as the statistical thermodynamic equilibrium, as expected. The critical insight here is, as Venkatasubramanian et al. first showed [11], that the second term in Eq. 25 is the same as entropy (except for the Boltzmann constant). Thus, maximizing potential in population game theory is equivalent to maximizing entropy in statistical mechanics subject to the constraints given by the first term in Eq. 25, i.e., the constraint on total energy $E$. This is a deep and beautiful connection between statistical mechanics and potential game theory. This fundamental connection allows us to generalize statistical thermodynamics to statistical teleodynamics and lays the foundation towards a universal theory of emergent equilibrium behavior of both purpose-free and purpose-driven agents. Pursuing this line of enquiry some more, we recognize another important connection. From Eq. 25, we have $\phi=-\frac{1}{Nk_{B}T}(E-TS)=-\frac{\beta}{N}A$ (27) where $A=E-TS$ is the Helmholtz free energy. Indeed, in statistical thermodynamics, $A$ is called a thermodynamic potential. Again, we see the correspondence between the game-theoretic potential and the thermodynamic potential. Furthermore, we see the correspondence between utility and the chemical potential, with an important difference. Active agents try to increase their utilities, whereas thermodynamic agents try to decrease their chemical potential. In the Thermodynamic Game, an arbitrage equilibrium is reached when all agents have the same effective utility, which is equivalent to the chemical potential being equal in thermodynamics. In fact, our theory reveals the critical insight that both living and non-living agents are driven by arbitrage opportunities towards equilibrium, except that their arbitrage currencies are different. For thermodynamic agents, the currency is the chemical potential, whereas for living agents, the utility. Thus, we see that statistical teleodynamics is a natural generalization of statistical thermodynamics for goal-driven agents. The perspective of statistical teleodynamics reveals an underappreciated insight in statistical mechanics, which is the recognition that when $N_{i}$ (or $x_{i}$) follows the exponential distribution in Eq. 
26, $h_{i}=h^{*}$ for all molecules. In other words, it is an invariant. As noted, this is the defining criterion for arbitrage equilibrium. Thus, the exponential distribution is a curve of constant effective utility (or, equivalently, constant chemical potential) for all values of $E_{i}$. That is, it is an isoutility curve for the $h_{i}$ defined by Eq. 24. This turns out to be a particularly valuable insight in the context of the emergence of a fair income distribution in the dynamics of the free market [12].

This recognition reveals another valuable insight that is also not readily appreciated in statistical mechanics. We realize that we do not necessarily have to maximize the potential $\phi(x)$ to derive the equilibrium distribution (when the equilibrium is unique, i.e., $\partial^{2}\phi/\partial x^{2}<0$). We can adopt the agent perspective and recognize that equilibrium is reached when all agents enjoy the same utility $h_{i}=h^{*}$. Therefore, we have

${h}_{i}={h^{*}}=-\beta E_{i}-\ln N_{i}^{*},\quad i\in\{1,\dots,n\}$ (28)

where $N_{i}^{*}$ is given by the equilibrium distribution. From this, it is easy to derive the Boltzmann energy distribution (Eq. 26) by rearranging and solving for $N_{i}^{*}$. In fact, this is the trick we used to derive Eq. 8 from Eq. 7. Thus, we see that, without necessarily invoking the system perspective of maximizing the potential (which is equivalent to maximizing entropy with constraints), we can derive the Boltzmann distribution easily. This important property is not seen that clearly in statistical mechanics. In statistical mechanics, we typically maximize entropy or minimize Gibbs free energy to arrive at equilibrium results. That is, the emphasis is on the system perspective; the individual agent’s view is not given importance. This is one of the important philosophical differences between statistical thermodynamics and statistical teleodynamics. While the former is generally a top-down, system-oriented perspective, the latter is decidedly a bottom-up, agent-oriented perspective.

Both Eq. 27 and Eq. 24 reveal another interesting feature of the statistical teleodynamics framework with respect to the thermodynamic laws [34, 35, 36]. Since maximizing the potential $\phi$ is the same as maximizing the entropy $S$ subject to the constraint on $E$ (Eq. 27), we see the two laws of thermodynamics embedded in this equation. The second term is the root of the second law of maximizing entropy, and the first term is the source of the first law of energy conservation. Similarly, in Eq. 24, the two laws are embedded in the same manner. Eq. 27 is the system-based view of the two laws, while Eq. 24 is the agent-based view. Therefore, we realize that the non-entropic terms in the utility and the potential serve the role of constraints on entropy maximization.

This interpretation provides a deeper understanding of the effective utility functions (and their potential versions) of living agents, such as ants. Consider the effective utility of an ant given by Eq. 4. The $-\ln N_{i}$ term, as before, corresponds to the second law of entropy maximization, and the other two terms play the role of the first law of enforcing constraints. However, unlike the thermodynamic first law of energy conservation, the teleodynamic “first law” does not enforce conservation, but rather constraints. Thus, we see that the teleodynamic “first law” is the general case, and the thermodynamic first law is a special case where the constraint on the total energy is also the conservation of it.
The “first law” of teleodynamics allows for more complicated and “non-thermodynamic” constraints to be imposed on entropy maximization. We also see that the “second law” of teleodynamics is similar to the second law of thermodynamics, but with the broader concept of arbitrage equilibrium. The “zeroth law” of teleodynamics is that all agents continually try to increase their effective utilities. This is essentially the Darwinian survival-of-the-fittest principle in biology and the Smithian constant pursuit of self-interest principle in economics.

## VI Connection with MIPS

It is instructive to compare the phase-separation phenomena described by our analysis of non-thermodynamic systems, such as mussel beds [15] or social groups [16], with MIPS. MIPS is generally described as a non-equilibrium or out-of-equilibrium behavior of active matter systems [5, 7, 6, 19, 3, 4]. Based on our analysis above, we argue that MIPS is indeed an equilibrium phenomenon, but a different kind of equilibrium, an arbitrage equilibrium. As we show above, under certain conditions, this arbitrage equilibrium is equivalent to a statistical or thermodynamic equilibrium. In this section, we explore this perspective at some length.

We start with the work reported by Takatori and Brady [6]. In particular, four of their equations are relevant to our discussion here. They are for the active pressure ($\Pi^{act}$), the Helmholtz free energy ($F^{act}$), the Gibbs free energy ($G^{act}$), and the chemical potential ($\mu^{act}$) of the self-propelled particles, respectively, as given below (we have changed their volume-fraction symbol $\phi$ to $\theta$ so that it is not confused with our $\phi$, which is the game-theoretic potential).

$\Pi^{act}=nk_{s}T_{s}\left[1-\theta-\theta^{2}+3\theta\,\mathrm{Pe_{R}}\,(1-\theta/\theta_{0})^{-1}\right]$ (29)

$\frac{F^{act}}{Nk_{s}T_{s}}=\ln\theta-\frac{\theta(\theta+2)}{2}-3\mathrm{Pe_{R}}\theta_{0}\ln(1-\theta/\theta_{0})+F^{0}(k_{s}T_{s},\mathrm{Pe_{R}})$ (30)

$\frac{G^{act}}{Nk_{s}T_{s}}=\frac{F^{act}}{Nk_{s}T_{s}}+\frac{\Pi^{act}}{nk_{s}T_{s}}$ (31)

$\mu^{act}(k_{s}T_{s},\theta,\mathrm{Pe_{R}})=\mu^{0}(k_{s}T_{s},\mathrm{Pe_{R}})+k_{s}T_{s}\ln\theta+k_{s}T_{s}\ln\Gamma(\theta,\mathrm{Pe_{R}})$ (32)

where $\theta$ is the volume fraction, $\theta_{0}$ is the volume fraction at close packing, $N$ is the number of active swimmers, $k_{s}T_{s}=\zeta U_{0}^{2}\tau_{R}/6$, $U_{0}$ is the intrinsic swim speed, $\tau_{R}$ is the reorientation time, $\zeta$ is the hydrodynamic drag factor, $\mathrm{Pe_{R}}$ is the Peclet number, $\Pi^{act}$ is the active pressure, $F^{act}$ is the nonequilibrium Helmholtz free energy, $G^{act}$ is the nonequilibrium Gibbs free energy, $F^{0}$ is the reference-state Helmholtz free energy, $\mu^{act}$ is the nonequilibrium chemical potential, $\mu^{0}$ is the reference state of the chemical potential, and $\Gamma$ is a nonlinear expression (for more details on these quantities, see [6]).

In Eq. 32, the second term on the right-hand side represents the entropic, ideal-gas contribution to the chemical potential. The third term is the nonideal term that is the analog of enthalpic attraction between active swimmers, and is represented by $\Gamma(\theta,\mathrm{Pe_{R}})$, which resembles the fugacity coefficient in classical thermodynamics. We observe that our Eq.
22 is equivalent to Eq. 31. The game-theoretic potential is the negative of the Gibbs free energy, since our active agents maximize utility rather than minimizing the chemical potential. Our Fig. 7, showing the spinodal and binodal regions, is equivalent to Figure 3 in their paper (except for a sign reversal due to the game potential $\phi=-G^{act}$). Note that we are not saying that they are equal; they are equivalent in the sense that both produce a function ($\phi$ or $G^{act}$) with two maxima (for $\phi$) or, equivalently, two minima (for $G^{act}$), so that the common tangent line can be drawn to determine the binodal points. Thus, they are two equivalent models of phase separation in active matter systems.

Now, from Eq. 18 ($\delta=1$), we have

$h_{i}(\boldsymbol{\rho})=\alpha\rho_{i}-\beta\rho_{i}^{2}+\gamma\ln(1-\rho_{i}/\rho_{\mathrm{max}})-\ln\rho_{i}$ (33)

The nonequilibrium chemical potential $\mu^{act}$ (Eq. 32) is equivalent to our effective utility $h_{i}$ in Eq. 33. We see that our entropic term $-\ln\rho_{i}$ corresponds to their $k_{s}T_{s}\ln\theta$ (the signs are opposite because the utility is the negative of the chemical potential), and the rest are reflected in $\ln\Gamma(\theta,\mathrm{Pe_{R}})$, where $\Gamma(\theta,\mathrm{Pe_{R}})$ is given by [6]

$\Gamma(\theta,\mathrm{Pe_{R}})=(1-\theta/\theta_{0})^{-3\theta_{0}\mathrm{Pe_{R}}}\exp\left[\theta^{3}-\frac{\theta^{2}}{2}+\frac{3\mathrm{Pe_{R}}\theta_{0}(1-\theta_{0})}{1-\theta/\theta_{0}}-3\theta(1-\theta_{0}\mathrm{Pe_{R}})\right]$ (34)

and therefore

$\ln\Gamma(\theta,\mathrm{Pe_{R}})=-3\mathrm{Pe_{R}}\theta_{0}\ln(1-\theta/\theta_{0})+\theta^{3}-\frac{\theta^{2}}{2}+\frac{3\mathrm{Pe_{R}}\theta_{0}(1-\theta_{0})}{1-\theta/\theta_{0}}-3\theta(1-\theta_{0}\mathrm{Pe_{R}})$ (35)

Their $\ln\Gamma(\theta,\mathrm{Pe_{R}})$ is much more complicated than our equivalent terms, as it captures all the detailed phenomenology of the active-particle dynamics, whereas our model, again, is agnostic of such details. We show in Fig. 13 the fit of Eq. 33 to the data from Eq. 32, for $\theta_{0}=1$ ($\rho_{\mathrm{max}}=1$) and $\mathrm{Pe_{R}}=0.05$, as an example. We see that, despite ignoring the complicated details in Eq. 32, the simpler Eq. 33 fits the true $\mu^{act}$ quite well. The purpose of this exercise is not to accurately predict Eq. 32 using Eq. 33. Obviously, Eq. 32 will always do better than Eq. 33, as it incorporates more of the “microscopic” details of the dynamics that Eq. 33 ignores. The objective is to show that the simpler “mesoscopic” generic model in Eq. 33, which can be used as a template for MIPS in different domains, including non-physicochemical phenomena such as social segregation [16], has sufficient predictive and explanatory power to do just as well as the more detailed customized model in Eq. 32.

Figure 13: Effective utility vs density with $\theta_{0}=1$ and $\mathrm{Pe_{R}}=0.05$

Therefore, we argue that what Takatori and Brady describe as the “nonequilibrium chemical potential” and the “free energy of nonequilibrium active matter” in their paper are actually equilibrium quantities, namely, arbitrage equilibrium quantities. As our analysis above (Figures 7-10), and their own Figures 1 and 3 [6], show, these active agents are in phase equilibrium, since their effective utilities are equal.
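The Fig. 13 comparison can be sketched as follows (our reconstruction, with the reference constant $\mu^{0}$ absorbed into a fitted offset): evaluate $-\mu^{act}$ from Eqs. 32 and 35 for $\theta_{0}=1$ and $\mathrm{Pe_{R}}=0.05$, and fit the generic utility of Eq. 33 to it.

```python
# Sketch: fit the generic utility (Eq. 33) to -mu_act from Eqs. 32 and 35,
# with theta_0 = 1 (the third term of Eq. 35 then vanishes) and Pe_R = 0.05.
import numpy as np
from scipy.optimize import curve_fit

Pe_R = 0.05
theta = np.linspace(0.05, 0.90, 200)

def ln_gamma(t):                          # Eq. 35 with theta_0 = 1
    return -3 * Pe_R * np.log(1 - t) + t**3 - t**2 / 2 - 3 * t * (1 - Pe_R)

mu_act = np.log(theta) + ln_gamma(theta)  # Eq. 32 in units of k_s T_s, up to mu^0

def h_model(t, c0, a, b, g):              # Eq. 33 plus a fitted offset c0 (our choice)
    return c0 + a * t - b * t**2 + g * np.log(1 - t) - np.log(t)

popt, _ = curve_fit(h_model, theta, -mu_act, p0=[0.0, 1.0, 0.0, 1.0])
resid = -mu_act - h_model(theta, *popt)
print(np.round(popt, 3), float(np.abs(resid).max()))  # fitted parameters, max |residual|
```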
This phase separation is not driven by thermodynamic quantities such as the chemical potential and the Gibbs free energy, but rather by other factors that are relevant to the “microscopic” mechanisms in the domain of the agents. These mechanisms could differ in different domains. They are different, for example, for bacteria in biology [13], ants that ferry sand grains, mussels in the sea [27], birds in a flock [14], and humans in social and economic settings [12, 16]. But they all belong to the same universality class, which is governed by the same mathematical structures and conditions described in Eqs. 1-4 and Eq. 19 of statistical teleodynamics. This mathematical structure determines the properties of the arbitrage equilibrium. When the competition cost in the effective utilities of agents has the form $-\ln N_{i}$ (or $-\ln\rho$), this arbitrage equilibrium corresponds to the thermodynamic or statistical equilibrium, as we discussed in Section V.

We see a similar correspondence for active Brownian particles whose velocities are density-dependent, as discussed by Cates and Tailleur [7]. They write the chemical potential $\mu$ of active particles without Brownian diffusion as

$\mu=\mu_{\mathrm{id}}+\mu_{\mathrm{ex}}=\ln\rho(\mathbf{r})+\ln v(\rho(\mathbf{r}))$ (36)

where the propulsion speed $v(\rho)$ monotonically decreases with increasing density. In our framework, this would be equivalent to the effective utility

$h(\rho)=-\ln v(\rho(\mathbf{r}))-\ln\rho(\mathbf{r})$ (37)

As Cates and Tailleur [7] show, this dynamics will lead to phase separation if there exists a concave region in the free energy corresponding to Eq. 36. In our case, this would correspond to the existence of a convex region in the potential $\phi$, as seen in Eq. 22 and Fig. 7.

However, a more complete version of this model should include the utility provided by the empty spaces into which the particles can potentially move. This becomes particularly important at high particle density $\rho$, when the empty space becomes valuable for motile particles. This is what we call the option benefit in Eq. 19. Therefore, Eq. 37 now becomes

$h(\rho)=-\ln v(\rho(\mathbf{r}))+\gamma\ln(1-\rho(\mathbf{r}))-\ln\rho(\mathbf{r})$ (38)

Furthermore, if the propulsion speed decays exponentially with respect to density, as Cates and Tailleur suggest [7], then we have $v(\rho)=\exp(-\alpha_{v}\rho)$, which leads to

$h(\rho)=\alpha_{v}\rho+\gamma\ln(1-\rho(\mathbf{r}))-\ln\rho(\mathbf{r})$ (39)

For $\gamma=1$, $\alpha_{v}>4$ results in a non-monotonic cubic profile (i.e., the van der Waals loop), as seen in Fig. 5, and hence phase separation for the appropriate densities.

### VI.1 Connection with Chemotaxis and MIPS

Zhao et al. [20] discuss MIPS in the context of chemotaxis, where there is directed motion of the active particles along a chemical gradient. While they predict and explain the effect of chemotaxis on MIPS from a dynamic perspective, here we do the same from the perspective of statistical teleodynamics. Venkatasubramanian et al. [13] proposed the effective utility of a particle in a chemoattractant environment as

$h_{i}=\alpha c_{i}-\ln n_{i}$ (40)

where $c_{i}$ is the concentration of the chemoattractant at a given state $i$, and $n_{i}$ is the number of particles in that state. The first term corresponds to an affinity for states (in this case, regions) with higher concentrations of the chemoattractant, and the second term corresponds to the entropic component, as before.
They also derive the game-theoretic potential $\phi(\mathbf{x})$ as

$\displaystyle\phi(\mathbf{x})=\alpha\frac{C}{N}+\frac{1}{N}\ln\frac{N!}{\prod_{i=1}^{n}(Nx_{i})!}$ (41)

where $N$ is the number of active particles, $x_{i}=n_{i}/N$, and $C$ is the total amount of chemoattractant. This formulation is equivalent to that of O’Byrne and Tailleur [10]. In the continuum limit, $c_{i}$ can be replaced by $c(\mathbf{r})$, the concentration distribution of the chemoattractant in the particle environment at location $\mathbf{r}$. Similarly, $n_{i}$ is replaced by $\rho(\mathbf{r})$, the density of the active particles at a given location. Now, the utility becomes

$\displaystyle h(c,\rho)=\alpha c(\mathbf{r})-\ln\rho(\mathbf{r})+\ln N$ (42)

and, similarly, the game-theoretic potential becomes

$\displaystyle\phi=\frac{\int\alpha c(\mathbf{r})\rho(\mathbf{r})d\mathbf{r}}{N}-\frac{1}{N}\int\rho(\mathbf{r})\ln\rho(\mathbf{r})d\mathbf{r}$ (43)

O’Byrne and Tailleur [10], who proposed a coarse-grained diffusive model for active matter dynamics driven by a chemorepellent concentration field $c$, describe the resultant coarse-grained dynamics using an effective free energy functional $\mathcal{F}$ and a deterministic flux $J_{D}$:

$\displaystyle J_{D}=-D\rho\nabla\left\\{\left[v_{1}+v_{0}\frac{\alpha_{1}+(d-1)\Gamma_{1}}{\alpha_{0}+(d-1)\Gamma_{0}}\right]\frac{c}{dD}+\log\rho\right\\}$ (44)

where $J_{D}=\nabla\left(\delta\mathcal{F}/\delta\rho\right)$, the gradient of the functional derivative of the free energy ($\rho$ is the particle density), and

$\displaystyle\delta\mathcal{F}/\delta\rho=-D\left\\{\left[v_{1}+v_{0}\frac{\alpha_{1}+(d-1)\Gamma_{1}}{\alpha_{0}+(d-1)\Gamma_{0}}\right]\frac{c}{dD}+\log\rho\right\\}$ (45)

We identify that this functional derivative, $\left(\delta\mathcal{F}/\delta\rho\right)$, is the chemical potential of the active particle, which is the negative of the utility in Eq. 42. Once again, as before, we see the equivalence of these two frameworks. Observe that our Eq. 42 maps to Eq. 45 up to a proportionality constant, with $\alpha=-\left[v_{1}+v_{0}\dfrac{\alpha_{1}+(d-1)\Gamma_{1}}{\alpha_{0}+(d-1)\Gamma_{0}}\right]\dfrac{1}{dD}$. The simulations performed in the study (with the parameter values $v_{0}=1,v_{1}=0.2,\alpha_{0}=50,\alpha_{1}=\Gamma_{1}=0$) correspond to $\alpha<0$ in our formulation, the chemorepellent case.

If the initial amount of the chemoattractant is fixed, then, as the particles consume the chemoattractant, its local concentration decreases with increasing local density $\rho(\mathbf{r})$ of the particles. This decrease can be modeled as $c(\mathbf{r})=-k^{\prime}\rho(\mathbf{r})$, giving the density-dependent utility

$\displaystyle h(\rho)=-\alpha^{\prime}\rho-\ln\rho+\text{constant}$ (46)

where $\alpha^{\prime}=\alpha k^{\prime}$. Now, consider the density-dependent velocity model of the active Brownian particles. Incorporating that dynamics (Eq. 39) into Eq. 46, we have

$\displaystyle h(\rho)$ $\displaystyle=$ $\displaystyle(\alpha_{v}-\alpha^{\prime})\rho(\mathbf{r})+\gamma\ln(1-\rho(\mathbf{r}))-\ln\rho(\mathbf{r})$ (47)

Rewriting Eq. 47 in terms of the chemical potential ($\mu^{act}$), we have

$\displaystyle\mu^{act}(\rho)$ $\displaystyle=$ $\displaystyle-(\alpha_{v}-\alpha^{\prime})\rho(\mathbf{r})-\gamma\ln(1-\rho(\mathbf{r}))+\ln\rho(\mathbf{r})$ (48)

This system can show phase separation depending on the $\alpha_{v}$ and $\alpha^{\prime}$ values.
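A short sketch helps to see how the interplay of $\alpha_{v}$ and $\alpha^{\prime}$ in Eq. 47 controls the van der Waals loop: with $\gamma=1$, setting $dh/d\rho=0$ gives $\alpha_{v}-\alpha^{\prime}=1/(\rho(1-\rho))$, whose minimum over $\rho$ is 4, so two stationary points (and hence a loop) exist only when $\alpha_{v}-\alpha^{\prime}>4$. The parameter values below are illustrative choices anticipating the three cases plotted in Fig. 14.

```python
import numpy as np

def h(rho, a_v, a_p, gamma=1.0):
    """Effective utility of Eq. 47: (a_v - a_p)*rho + gamma*ln(1 - rho) - ln(rho)."""
    return (a_v - a_p) * rho + gamma * np.log(1.0 - rho) - np.log(rho)

def has_vdw_loop(a_v, a_p, gamma=1.0, n=100000):
    """Detect the van der Waals loop by counting sign changes of dh/drho."""
    rho = np.linspace(1e-4, 1.0 - 1e-4, n)
    dh = (a_v - a_p) - gamma / (1.0 - rho) - 1.0 / rho   # closed-form derivative
    return np.count_nonzero(np.diff(np.sign(dh))) == 2   # two stationary points

# Illustrative choices (gamma = 1): the loop requires a_v - a_p > 4.
for a_v, a_p in [(9.0, 0.0), (9.0, 2.0), (9.0, 7.0)]:
    print(f"a_v = {a_v}, a_p = {a_p}: van der Waals loop -> {has_vdw_loop(a_v, a_p)}")
```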
We show, for example, three cases ($\gamma=1$): (i) no chemotaxis: $\alpha_{v}=9$ and $\alpha^{\prime}=0$, (ii) weak chemotaxis: $\alpha_{v}=9$ and $\alpha^{\prime}=2$, and (iii) strong chemotaxis: $\alpha_{v}=9$ and $\alpha^{\prime}=7$ in Fig. 14. We see that the non-monotonicity of $h$ changes depending on the values of $\alpha_{v}$ and $\alpha^{\prime}$, thus determining whether a phase separation occurs or not. Specifically, the non-monotonic cubic profile (i.e., the van der Waals loop) occurs when $\alpha_{v}-\alpha^{\prime}>4$. We see that a strong presence of chemotaxis can prevent phase separation (blue curve).

Figure 14: Effect of chemotaxis on utility based on Eq. 47 with $\gamma=1$

## VII Conclusions

In developing a theory of emergent behavior of self-propelled agents that dynamically change states, one faces three questions from the very beginning: (i) Why do the agents change states? (ii) How do they change states? (iii) What happens to the system, i.e., the population of such agents, eventually? The answers to the first two questions depend on the “microscopic” fundamental mechanisms of the particular situation. For example, gas molecules change states driven by thermal agitation, ants by pheromone signals, flocking birds by visual and auditory cues, humans by socioeconomic considerations, and so on. Thus, the “microscopic” details of why and how agents move around depend on the appropriate fundamental mechanisms of physics, chemistry, biology, ecology, sociology, or economics, as the case may be. Statistical teleodynamics is agnostic of such “microscopic” mechanisms. It has a “mesoscopic” view of the agents, as it answers the question of what happens eventually. Towards this goal, we have developed simple models that offer an appropriate coarse-grained description that is not restricted by system-specific details and nuances, but without losing key conceptual insights and relevance to empirical macroscopic phenomena. The spirit of our modeling is similar to that of van der Waals in developing his equation of state.

In our theory, the central concept is the effective utility $h_{i}$ of an agent. All agents constantly compete to increase their utilities in an environment with resource constraints. The resources may be energy, space, food, money, etc. This competition, under certain conditions, eventually leads to an arbitrage equilibrium in which all agents have the same effective utility. Given the “mesoscopic” coarse-graining in our models, the effective utility $h_{i}$ is agnostic of the “microscopic” details of its components. Consider, for example, the emergent behaviors of ants and Janus particles. Their “microscopic” mechanisms of motion, i.e., the “whys” and “hows,” are very different for the two cases. However, these different “microscopic” mechanisms result in the same “mesoscopic” model of their effective utilities (Eq. 4 and Eq. 13), thereby predicting the same “macroscopic” emergent behavior, namely, the Weibull distribution.

We find a similar situation with motility-induced phase separation phenomena. As our mathematical analysis shows in Section V, the necessary and sufficient conditions for phase separation are, respectively: (i) the effective utility function at arbitrage equilibrium ($h^{*}$) must be at least a cubic or cubic-equivalent of the order parameter (e.g., $\rho$ in Eq. 21) with three distinct real and positive zeros, and (ii) the initial value of the order parameter must lie within the spinodal region of the miscibility gap (e.g., Figs. 7-8).
Again, these “mesoscopic” requirements can be met by using many different “microscopic” mechanisms. Janus particles in the work of Takatori et al. [6] do it one way, but mussels in the sea use different mechanisms for their pattern formation [37]. In sociology and economics, agents use entirely different “microscopic” socioeconomic processes to dynamically switch states that lead to social and economic segregation [12, 29, 16]. Comparing our model in Eq. 21 and Eq. 22 with that of Takatori et al. [6] in Eq. 29-32, we see that the latter is more complicated, as it reflects the “microscopic” details of this dynamics. In our simpler model, such details have been avoided. However, their phase diagram (Fig. 1 in [6]) and the Gibbs free energy vs. volume fraction plot (Fig. 3 in [6]) have the same qualitative features as our model shown in Fig. 7-8. These qualitative features are the existence of two minima in the Gibbs free energy (corresponding to two maxima in our game-theoretic potential $\phi$) and the miscibility gap in the phase diagram. Although these motility-induced phase separation phenomena are generally characterized as nonequilibrium or out-of-equilibrium phenomena [4, 7, 6], our analysis recognizes them as the result of the arbitrage equilibrium. As we discussed in Section V, this is, in principle, the same as thermodynamic equilibrium, with the only difference being the arbitrage currency. In thermodynamics, the currency is the chemical potential, and in teleodynamics, it is the effective utility. Otherwise, the mathematical structure in both situations is the same. This resolves the puzzle noted by many [9, 10, 5, 7, 8, 6] that some active matter systems that look like out-of-equilibrium systems at the microscopic scale behave macroscopically like simple equilibrium systems of passive matter. Statistical teleodynamics is applicable to the entire range of the agency spectrum going from purpose-free thermodynamic agents (e.g., molecules) to purpose-driven rational agents (e.g., humans) with the different kinds of biological and ecological agents (e.g., bacteria, ants, birds, mussels, etc.) situated somewhere in between in this spectrum of self-actualizing capabilities. This is summarized in Table 2, which lists utility function templates in different domains. In this paper, we have already discussed examples of the thermodynamic game, ant-crater formation, and social segregation. In economics, the emergence of the income distribution can be modeled by [11, 12] $h_{i}=\alpha\ln S_{i}-\beta\left(\ln S_{i}\right)^{2}-\ln N_{i}.$ (49) where the first term is the benefit of income ($S_{i}$), the second is the cost of work expended to earn this income, and the last is the cost of competition. In ecology, the flocking behavior of bird-like agents is modeled by [14] $h_{i}=\alpha N_{i}-\beta N_{i}^{2}+\gamma N_{i}l_{i}-\ln N_{i}$ (50) where the first term is the security benefit of having many birds in the neighborhood, the second is the congestion cost of these neighbors, the third is the alignment ($l_{i}$) benefit of flying in the same direction as the neighbors, and the last is again the cost of competition. We wish to stress that all of these systems reach arbitrage equilibria. Some of the equilibria have well-known distributions as outcomes, such as exponential (energy), lognormal (income), or Weibull (ant craters, Janus particles). But some others have “messy” distributions, as in the case of social segregation and birds flocking. 
The specific outcome depends on the non-entropic terms in the effective utility function, i.e., the “first law” of teleodynamics terms that enforce the constraints on entropy maximization.

Table 2: Utility functions in different domains

Domain | System | Utility function ($h_{i}$)
---|---|---
Physics | Energy distribution | $-\beta E_{i}-\ln N_{i}$
Physics | Janus particles | $-\dfrac{\omega r_{i}^{a}}{a}-\ln N_{i}$
Biology | Bacterial chemotaxis | $\alpha c_{i}-\ln N_{i}$
Ecology | Ant craters | $b-\dfrac{\omega r_{i}^{a}}{a}-\ln N_{i}$
Ecology | Birds flocking | $\alpha N_{i}-\beta N_{i}^{2}+\gamma N_{i}l_{i}-\ln N_{i}$
Sociology | Social segregation | $\eta N_{i}-\xi N_{i}^{2}+\ln(H-N_{i})-\ln N_{i}$
Economics | Income distribution | $\alpha\ln S_{i}-\beta\left(\ln S_{i}\right)^{2}-\ln N_{i}$

By comparing the mathematical structure of the various effective utility functions, we observe a certain universality across the different domains. They are all based on benefit-cost trade-offs, but the actual nature of the benefits and costs depends on the details of the specific domain, as one would expect. The main requirement to belong to this universality class is that the disutility due to competition can be modeled (or at least reasonably approximated) as $-\ln N_{i}$ (discrete case) or $-\ln\rho$ (continuous case). This is a critical requirement, as Kanbur and Venkatasubramanian explain [24]. This agent-based property directly leads to the system-wide property of entropy, thereby connecting the agents and the system in a cohesive mathematical framework. This term also facilitates the integration of potential game theory with statistical mechanics, paving the way for a universal theory of emergent equilibrium phenomena in active and passive matter.

One might argue that our theory is not that different from what has already been proposed to explain MIPS using the chemical potential and free-energy-based approaches reported in the literature [7, 6]. We agree that the conventional thermodynamic perspective is suitable for physicochemical systems, although even here, one has to invoke the new concept of “nonequilibrium chemical potential” [7, 25]. Thus, conventional concepts based on thermodynamics are already proving inadequate for handling even physicochemical systems. We believe that it would be even more problematic for higher-order living agents, such as those listed in Table 2. For example, how would one relate the salary $S_{i}$ of an economic agent to its “nonequilibrium chemical potential”? A utility-based perspective is much more intuitive and, hence, a natural framework. The other important contribution, we believe, is the conceptual progress we have made in recognizing that many of these so-called nonequilibrium systems are actually systems in equilibrium, an arbitrage equilibrium.

Our analysis suggests that the pursuit of maximum utility or survival fitness, combined with competitive dynamics under constraints, could be a universal self-organizing mechanism for active matter. In biology, in general, the search for improving one’s fitness occurs in the design space of genetic features. Here, mutation and crossover operations facilitate movements in the feature space such that an agent improves itself genetically via Darwinian evolution to increase its utility, i.e., survival fitness.
In economics, on the other hand, agents search in the products and/or services space so that they can offer better products/services to improve their economic survival fitness in a competitive marketplace. Thus, this mechanism is reminiscent of Adam Smith’s invisible hand. In all these different domains, each agent pursues its own self-interest to increase its own $h_{i}$, but a stable collective order emerges spontaneously as a result of the competitive dynamics among all agents under constraints. Therefore, we suggest that it is a pair of invisible hands: one is the pursuit of maximum utility, and the other is the competitive dynamics under constraints. We need both principles for the arbitrage equilibrium to emerge spontaneously.

By formulating equilibria in active matter more broadly, unrestricted by the narrow confines of thermodynamics, statistical teleodynamics and arbitrage equilibria open up conceptual possibilities that coherently accommodate active matter in the entire range of nonliving to living agents more naturally. However, what we have presented is a van der Waals-like version of statistical teleodynamics. Much more needs to be done to further develop this framework to address challenging emergent phenomena in physics, chemistry, biology, ecology, sociology, and economics.

## Methods

The agent-based simulation was performed using Python. We distributed agents on a 2-D $300\times 300$ grid with 90,000 cells. Three simulation studies are reported in this paper – with 22,500 agents, 45,000 agents, and 55,000 agents. For each case, initially, the agents were randomly distributed on the grid, with each agent occupying one cell. The dynamical evolution of the system is determined by two neighborhoods around an agent $i$. One is the local neighborhood of _interaction_, which is an area with 49 cells that surround the agent $i$ (including the cell that $i$ occupies). The other is the _exploration_ neighborhood (which is larger than the interaction neighborhood and contains it) within which an agent $i$ can explore and move to another cell to improve its utility $h_{i}$. The exploration neighborhood has 1680 cells. The neighborhood sizes are parameters that can be varied to balance the need to allow complex patterns to emerge at arbitrage equilibrium against the need to accomplish this in a reasonable amount of computational time. We found that our combination (49 and 1680) accomplishes this well. The density of agents in any cell is defined as the ratio of the number of agents in the interaction neighborhood to the total number of cells in that neighborhood. At each iteration, every agent is given the opportunity to move to a vacant cell in the exploration neighborhood where it would have higher utility than in its current cell. If the agent does not find such a cell, it stays at its current location. After an agent moves, its utility and its neighbors’ densities and utilities are updated. The simulations were carried out for 10,000 iterations, at which time the system typically reached the arbitrage equilibrium.

## Acknowledgements

The first author thanks John Brady, Kyle Bishop, and Chris Durning for helpful discussions. This work was supported in part by a grant to the Center for Managing Systemic Risk from Columbia University.

## Author Contributions

VV: Conceptualization, Theory, Methodology, Analysis, Investigation, Supervision, Funding Acquisition, and Writing; A. Sivaram: Methodology, Analysis, Investigation, and Writing; NS: Software development and analysis; A.
Sankar: Software development and analysis. The authors have no conflicts of interest to declare.

## Data Availability and Reproducibility Statement

Data for Fig. 1-4 are from Takatori et al. [18]. Figures 5-6 are plotted using Eq. 19. Figures 7-10 are plotted using Eq. 22. Figures 11-12 are from agent-based simulations, the code of which will be made available upon request to the corresponding author. We are currently updating the code for better ease of use. Figure 13 is plotted using Eqs. 32-33, and Figure 14 using Eq. 48.

## References

* Toner _et al._ [2005] J. Toner, Y. Tu, and S. Ramaswamy, Hydrodynamics and phases of flocks, Annals of Physics 318, 170 (2005).
* Narayan _et al._ [2007] V. Narayan, S. Ramaswamy, and N. Menon, Long-lived giant number fluctuations in a swarming granular nematic, Science 317, 105 (2007).
* Ramaswamy [2010] S. Ramaswamy, The mechanics and statistics of active matter, Annu. Rev. Condens. Matter Phys. 1, 323 (2010).
* Marchetti _et al._ [2013] M. C. Marchetti, J.-F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Hydrodynamics of soft active matter, Reviews of Modern Physics 85, 1143 (2013).
* Cates and Tailleur [2013] M. E. Cates and J. Tailleur, When are active Brownian particles and run-and-tumble particles equivalent? Consequences for motility-induced phase separation, EPL (Europhysics Letters) 101, 20010 (2013).
* Takatori and Brady [2015] S. C. Takatori and J. F. Brady, Towards a thermodynamics of active matter, Physical Review E 91 (2015).
* Cates and Tailleur [2015] M. E. Cates and J. Tailleur, Motility-induced phase separation, Annu. Rev. Condens. Matter Phys. 6, 219 (2015).
* Gonnella _et al._ [2015] G. Gonnella, D. Marenduzzo, A. Suma, and A. Tiribocchi, Motility-induced phase separation and coarsening in active matter, Comptes Rendus Physique 16, 316 (2015).
* Berkowitz [2020] R. Berkowitz, Active particles map to passive random walks, Physics 13, s146 (2020).
* O’Byrne and Tailleur [2020] J. O’Byrne and J. Tailleur, Lamellar to micellar phases and beyond: When tactic active systems admit free energy functionals, Physical Review Letters 125, 208003 (2020).
* Venkatasubramanian _et al._ [2015] V. Venkatasubramanian, Y. Luo, and J. Sethuraman, How much inequality in income is fair?: A microeconomic game theoretic perspective, Physica A: Statistical Mechanics and its Applications 435, 120 (2015).
* Venkatasubramanian [2017] V. Venkatasubramanian, _How Much Inequality Is Fair?: Mathematical Principles of a Moral, Optimal, and Stable Capitalist Society_ (Columbia University Press, 2017).
* Venkatasubramanian _et al._ [2022] V. Venkatasubramanian, A. Sivaram, and L. Das, A unified theory of emergent equilibrium phenomena in active and passive matter, Computers & Chemical Engineering 164, 107887 (2022).
* Sivaram and Venkatasubramanian [2022] A. Sivaram and V. Venkatasubramanian, Arbitrage equilibrium, invariance, and the emergence of spontaneous order in the dynamics of birds flocking, arXiv preprint arXiv:2207.13743v4 (2022).
* Venkatasubramanian _et al._ [2023] V. Venkatasubramanian, A. Sankar, and A. Sivaram, Invisible hand and arbitrage equilibrium in the self-organizing dynamics of pattern formation in ecological systems, arXiv preprint arXiv:2312.05765 (2023).
* Venkatasubramanian _et al._ [2024] V. Venkatasubramanian, J. Shi, L. Goldman, A. Sankar, and A. Sivaram, Density and affinity dependent social segregation and arbitrage equilibrium in a multi-class Schelling game, arXiv preprint arXiv:5450525 (2024).
* Sandholm [2010] W. H. Sandholm, _Population Games and Evolutionary Dynamics_ (MIT Press, 2010).
* Takatori _et al._ [2016] S. C. Takatori, R. De Dier, J. Vermant, and J. F. Brady, Acoustic trapping of active matter, Nature Communications 7:10694 (2016).
* Omar _et al._ [2023] A. K. Omar, H. Row, S. A. Mallory, and J. F. Brady, Mechanical theory of nonequilibrium coexistence and motility-induced phase separation, PNAS 120 (2023).
* Zhao _et al._ [2023] H. Zhao, A. Košmrlj, and S. S. Datta, Chemotactic motility-induced phase separation, Phys. Rev. Lett. 131, 118301 (2023).
* Easley _et al._ [2010] D. Easley, J. Kleinberg, _et al._, _Networks, Crowds, and Markets_, Vol. 8 (Cambridge University Press, Cambridge, 2010).
* Rosenthal [1973] R. W. Rosenthal, A class of games possessing pure-strategy Nash equilibria, International Journal of Game Theory 2, 65 (1973).
* Monderer and Shapley [1996] D. Monderer and L. S. Shapley, Potential games, Games and Economic Behavior 14, 124 (1996).
* Kanbur and Venkatasubramanian [2020] R. Kanbur and V. Venkatasubramanian, Occupational arbitrage equilibrium as an entropy maximizing solution, The European Physical Journal Special Topics 229, 1661 (2020).
* Takatori and Brady [2016] S. C. Takatori and J. F. Brady, Forces, stresses and the (thermo?) dynamics of active matter, Current Opinion in Colloid & Interface Science 21 (2016).
* van de Koppel _et al._ [2005] J. van de Koppel, M. Rietkerk, N. Dankers, and P. M. J. Herman, Scale-dependent feedback and regular spatial patterns in young mussel beds, The American Naturalist 165, E66 (2005).
* Liu _et al._ [2013] Q.-X. Liu, A. Doelman, V. Rottschäfer, M. de Jager, P. M. J. Herman, M. Rietkerk, and J. van de Koppel, Phase separation explains a new class of self-organized spatial patterns in ecological systems, Proceedings of the National Academy of Sciences 110, 11905 (2013).
* de Jager _et al._ [2020] M. de Jager, J. van de Koppel, E. J. Weerman, and F. J. Weissing, Patterning in mussel beds explained by the interplay of multi-level selection and spatial self-organization, Frontiers in Ecology and Evolution 8 (2020).
* Schelling [1969] T. C. Schelling, Models of segregation, The American Economic Review 59, 488 (1969).
* Nilforoshan _et al._ [2023] H. Nilforoshan, W. Looi, E. Pierson, B. Villanueva, N. Fishman, Y. Chen, J. Sholar, B. Redbird, D. Grusky, and J. Leskovec, Human mobility networks reveal increased segregation in large cities, Nature 624, 586 (2023).
* Choné and Linnemer [2019] P. Choné and L. Linnemer, The quasilinear quadratic utility model: an overview, CESifo Working Paper, No. 7640 (2019).
* Cahn [1961] J. W. Cahn, On spinodal decomposition, Acta Metallurgica 9, 795 (1961).
* Favvas and Mitropoulos [2008] E. Favvas and A. C. Mitropoulos, What is spinodal decomposition, Journal of Engineering Science and Technology Review 1, 25 (2008).
* Venkatasubramanian _et al._ [2004] V. Venkatasubramanian, S. Katare, P. R. Patkar, and F.-p. Mu, Spontaneous emergence of complex optimal networks through evolutionary adaptation, Computers & Chemical Engineering 28, 1789 (2004).
* Venkatasubramanian _et al._ [2006] V. Venkatasubramanian, D. N. Politis, and P. R. Patkar, Entropy maximization as a holistic design principle for complex optimal networks, AIChE Journal 52, 1004 (2006).
* Venkatasubramanian [2007] V. Venkatasubramanian, A theory of design of complex teleological systems: Unifying the Darwinian and Boltzmannian perspectives, Complexity 12, 14 (2007).
* Liu _et al._ [2012] Q.-X. Liu, E. J.
Weerman, P. M. J. Herman, H. Olff, and J. van de Koppel, Alternative mechanisms alter the emergent properties of self-organization in mussel beds, Proceedings of the Royal Society B: Biological Sciences 279, 2744 (2012).
# Unknown Domain Inconsistency Minimization for Domain Generalization

Seungjae Shin1,∗, HeeSun Bae1,∗, Byeonghu Na1, Yoon-Yeong Kim2 & Il-Chul Moon1,3

1Department of Industrial and Systems Engineering, KAIST; 2Department of Statistics, University of Seoul; 3summary.ai

<EMAIL_ADDRESS> <EMAIL_ADDRESS>

∗Equal contribution

###### Abstract

The objective of domain generalization (DG) is to enhance the transferability of the model learned from a source domain to unobserved domains. To prevent overfitting to a specific domain, Sharpness-Aware Minimization (SAM) reduces the source domain’s loss sharpness. Although SAM variants have delivered significant improvements in DG, we highlight that there is still potential for improvement in generalizing to unknown domains through exploration of the data space. This paper introduces an objective rooted in both parameter and data perturbed regions for domain generalization, coined Unknown Domain Inconsistency Minimization (UDIM). UDIM reduces the loss landscape inconsistency between the source domain and unknown domains. As unknown domains are inaccessible, these domains are empirically crafted by perturbing instances from the source domain dataset. In particular, by aligning the loss landscape acquired in the source domain to the loss landscape of perturbed domains, we expect to achieve generalization grounded on these flat minima for the unknown domains. Theoretically, we validate that merging SAM optimization with the UDIM objective establishes an upper bound for the true objective of the DG task. Empirically, UDIM consistently outperforms SAM variants across multiple DG benchmark datasets. Notably, UDIM shows statistically significant improvements in scenarios with more restrictive domain information, underscoring UDIM’s generalization capability in unseen domains. Our code is available at https://github.com/SJShin-AI/UDIM.

## 1 Introduction

Domain Generalization (DG) (Zhou et al., 2022; Wang et al., 2022) focuses on the domain shift that arises when training and testing occur across distinct domains, e.g., a domain of real pictures in training and a separate domain of cartoon images in testing. The objective of DG is to train a model on a given source domain dataset so that it generalizes well to other unobserved domains. To address the domain discrepancy between the source domain and other domains, various methods have been proposed: 1) alignment-based methods (Li et al., 2021a; Wald et al., 2021); 2) augmentation-based methods (Qiao et al., 2020a; Zhou et al., 2021); and 3) regularization-based methods (Arjovsky et al., 2019; Krueger et al., 2021; Rame et al., 2022). While these methodologies have demonstrated promising results, they often underperform in settings where the given domain information is particularly limited (Wang et al., 2021b; Qiao et al., 2020b). Also, most methods lack theoretical guarantees on the minimization of the target risk at the distribution level.

In contrast to the aforementioned methods, sharpness-aware optimizers, which flatten the loss landscape over a perturbed parameter region, demonstrate promising performances in DG tasks (Zhang et al., 2023b; Wang et al., 2023). By optimizing over perturbed local parameter regions, these approaches alleviate overfitting of the model to a specific domain, thereby enhancing the adaptability of the model across various domains. Also, this concept has a solid theoretical foundation based on parameter-space analysis with PAC-Bayes theories (McAllester, 1999; Dziugaite & Roy, 2017).
While the perturbation methods based on the parameter space have shown promising improvements in DG tasks, this paper theoretically claims that perturbation rooted in the data space is essential for robust generalization to unobserved domains. Accordingly, this paper introduces an objective that leverages both parameter and data perturbed regions for domain generalization. In implementation, our objective minimizes the loss landscape discrepancy between a source domain and unknown domains, where unknown domains are emulated by perturbing instances from the source domain datasets. Recognizing the loss landscape discrepancy as an Inconsistency score across different domains, we name our objective Unknown Domain Inconsistency Minimization (UDIM).

Introduction of UDIM to the DG framework has two contributions. First, we theoretically prove that the integration of sharpness-aware optimization and the UDIM objective becomes the upper bound of the population risk for all feasible domains, without introducing unoptimizable terms. Second, we reformulate the UDIM objective into a practically implementable term. This is accomplished by deriving the worst-case perturbations for both the parameter space and the data space, each in a closed-form expression. Our experiments on various DG benchmark datasets illustrate that UDIM consistently improves the generalization ability of parameter-region based methods. Moreover, we found that these improvements become more significant as domain information becomes more limited.

## 2 Preliminary

### 2.1 Problem Definition of Domain Generalization

This paper investigates the task of domain generalization in the context of multi-class classification (Arjovsky et al., 2019; Sagawa et al., 2019; Nam et al., 2021). We define an input as $x\in\mathbb{R}^{d}$ and its associated class label as $y\in\\{1,..,C\\}$. Let $\mathscr{D}_{e}$ represent the distribution of the $e$-th domain. $\mathcal{E}$ is the set of indices for all domains, and $\mathscr{D}:=\\{\mathscr{D}_{e}\\}_{e\in\mathcal{E}}$ denotes the set of distributions for all domains, where every domain shares the same class set. For instance, if video streams from autonomous cars are being collected, the data collected during the day and at night constitute two distinct domains. A sampled dataset from the $e$-th domain is denoted by $D_{e}=\left\\{(x_{i},y_{i})\right\\}_{i=1}^{n_{e}}$ where $(x_{i},y_{i})\sim\mathscr{D}_{e}$ and $n_{e}$ is the number of data instances of the $e$-th domain. Throughout this paper, let $\theta\in\Theta$ represent the parameter of the trained model $f_{\theta}$, where $\Theta$ is the set of model parameters. Using $D_{e}$, we define a loss function as $\mathcal{L}_{D_{e}}(\theta)=\frac{1}{n_{e}}\sum_{(x_{i},y_{i})\in D_{e}}\ell(f_{\theta}(x_{i}),y_{i})$, where we sometimes denote $\ell(f_{\theta}(x_{i}),y_{i})$ as $\ell(x_{i},\theta)$. The population risk for domain $e$ is given by $\mathcal{L}_{\mathscr{D}_{e}}(\theta)=\mathbb{E}_{(x,y)\sim\mathscr{D}_{e}}[\ell(f_{\theta}(x),y)]$. Then, the population risk over all domains is defined as $\mathcal{L}_{\mathscr{D}}(\theta)=\sum_{e\in\mathcal{E}}p(e)\mathcal{L}_{\mathscr{D}_{e}}(\theta)$, where $p(e)$ represents the occurrence probability of domain $e$. In essence, the primary goal of training a model, $f_{\theta}$, is to minimize the population risk, $\mathcal{L}_{\mathscr{D}}(\theta)$. In practical scenarios, we only have access to datasets derived from a subset of all domains.
We refer to these accessible domains and datasets as source domains and source domain datasets, respectively; denoted as $\mathscr{D}_{S}=\\{\mathscr{D}_{s}\\}_{s\in\mathcal{S}}$ and $D_{S}=\\{D_{s}\\}_{s\in\mathcal{S}}$, where $\mathcal{S}$ is the set of indices for the source domains. As $\mathscr{D}_{S}\neq\mathscr{D}$, $D_{S}$ deviates from the distribution $\mathscr{D}$ under the sampling bias of $\mathscr{D}_{S}$. As a consequence, a model parameter ${\theta}^{*}_{S}=\text{argmin}_{\theta}\mathcal{L}_{D_{S}}(\theta)$, which is trained exclusively on $D_{S}$, might not be optimal for $\mathcal{L}_{\mathscr{D}}(\theta)$. Accordingly, domain generalization emerges as a pivotal task to optimize $\theta^{*}=\text{argmin}_{\theta}\mathcal{L}_{\mathscr{D}}(\theta)$ by only utilizing the source domain dataset, $D_{S}$.

### 2.2 Variants of Sharpness-Aware Minimization for Domain Generalization

Recently, a new research area has emerged that considers optimization over the parameter space (Foret et al., 2020; Kwon et al., 2021). Several studies have focused on the problem of $\theta$ overfitting to a training dataset (Wang et al., 2023; Zhang et al., 2023b; a). These studies confirmed that optimization over a perturbed parameter region improves the generalization performance of the model. To construct a model that can adapt to unknown domains, it is imperative that an optimized parameter point is not overfitted to the source domain datasets. Accordingly, several studies have utilized parameter perturbation to avoid such overfitting, as elaborated below (Table 1).

Among the variants in Table 1, Sharpness-Aware Minimization (SAM) (Foret et al., 2020) is the most basic form of parameter perturbation, which regularizes the local region of $\theta$ to be a flat minimum on the loss curvature as $\min_{\theta}\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+\|\theta\|_{2}$. Here, $\epsilon$ is the perturbation vector for $\theta$, and $\rho$ is the maximum size of the perturbation vector. Subsequently, methodologies for regularizing stronger sharpness were introduced (Zhang et al., 2023b; Wang et al., 2023; Zhang et al., 2023a). These approaches exhibited incremental improvements in domain generalization tasks. See Appendix A for further explanation.

Table 1: Objectives of SAM variants for DG

Method | Objective
---|---
SAM (Foret et al., 2020) | $\max\limits_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)$
GAM (Zhang et al., 2023b) | $\mathcal{L}_{D_{s}}(\theta)+\rho\max\limits_{\|\epsilon\|_{2}\leq\rho}\|\nabla\mathcal{L}_{D_{s}}(\theta+\epsilon)\|$
SAGM (Wang et al., 2023) | $\mathcal{L}_{D_{s}}(\theta)+\mathcal{L}_{D_{s}}(\theta+\rho\nabla\mathcal{L}_{D_{s}}(\theta)/\|\nabla\mathcal{L}_{D_{s}}(\theta)\|-\alpha\nabla\mathcal{L}_{D_{s}}(\theta))$
FAM (Zhang et al., 2023a) | $\mathcal{L}_{D_{s}}(\theta)+\max\limits_{\|\epsilon\|_{2}\leq\rho}\left(\mathcal{L}_{D_{s}}(\theta+\epsilon)-\mathcal{L}_{D_{s}}(\theta)\right)+\rho\max\limits_{\|\epsilon\|_{2}\leq\rho}\|\nabla\mathcal{L}_{D_{s}}(\theta+\epsilon)\|$

We are motivated by this characteristic to extend the parameter perturbation towards data perturbation, which could be effective for the exploration of unknown domains. None of the SAM variants proposes data-based perturbation for unknown domain discovery. Fundamentally, SAM variants predominantly focus on identifying the flat minima for the given source domain dataset, $D_{S}$.
In Section 3.1, we highlight that finding flat minima in the source domain cannot theoretically ensure generalization to unknown target domains. Consequently, we demonstrate that generalization should be obtained from unobserved domains, rather than solely from the source domain. Since SAM imposes the perturbation radius of $\rho$ only on $\theta$, we hypothesize that $D_{s}$ could be perturbed by an additional mechanism to generalize $D_{s}$ toward $\mathscr{D}$.

While SAM minimizes the loss over the $\rho$-ball region of $\theta$, its actual implementation minimizes the maximally perturbed loss w.r.t. $\theta+\epsilon^{*}$, where the maximal perturbation, $\epsilon^{*}$, is approximated in a closed-form solution via Taylor expansion as follows:

$\displaystyle\operatorname*{max}_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)\approx\mathcal{L}_{D_{s}}(\theta+\epsilon^{*})\text{ where }\epsilon^{*}\approx\rho\cdot\text{sign}(\nabla_{\theta}\mathcal{L}_{D_{s}}(\theta))\frac{|\nabla_{\theta}\mathcal{L}_{D_{s}}(\theta)|}{\|\nabla_{\theta}\mathcal{L}_{D_{s}}(\theta)\|_{2}^{2}}.$ (1)

The existence of this closed-form solution for $\epsilon^{*}$ simplifies the learning procedure of SAM by avoiding a min-max game and its attendant problems, such as parameter oscillation (Chu et al., 2019). This closed-form solution can also be applied to the perturbation on the data space.

## 3 Method

### 3.1 Motivation: Beyond Source Domain-based Flatness

Based on the context of domain generalization, Theorem 3.1 derives the relationship between the SAM loss on the source domain dataset, denoted as $\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)$, and the generalization loss on an arbitrary unknown domain, $\mathcal{L}_{\mathscr{D}_{e}}(\theta)$, for a model parameter $\theta\in\Theta$ as follows:

###### Theorem 3.1.

(Rangwani et al., 2022) For $\theta\in\Theta$ and an arbitrary domain $\mathscr{D}_{e}\in\mathscr{D}$, with probability at least $1-\delta$ over the realized dataset $D_{s}$ from $\mathscr{D}_{s}$ with $|D_{s}|=n$, the following holds under some technical conditions on $\mathcal{L}_{\mathscr{D}_{e}}(\theta)$, where $h_{0}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ is a strictly increasing function.

$\displaystyle\mathcal{L}_{\mathscr{D}_{e}}(\theta)$ $\displaystyle\leq\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+\mathcal{D}^{\text{f}}({\mathscr{D}_{s}}||{\mathscr{D}_{e}})+h_{0}(\frac{\|\theta\|_{2}^{2}}{\rho^{2}})$ (2)

Theorem 3.1 provides an upper bound for $\mathcal{L}_{\mathscr{D}_{e}}(\theta)$. The second term of this upper bound, represented as $\mathcal{D}^{\text{f}}({\mathscr{D}_{s}}||{\mathscr{D}_{e}})$, corresponds to the distribution discrepancy between $\mathscr{D}_{s}$ and $\mathscr{D}_{e}$. Notably, $\mathcal{D}^{\text{f}}$ denotes the discrepancy based on the $f$-divergence. When $e\neq s$, $D_{e}$ is inaccessible in the context of domain generalization. Accordingly, the SAM optimization on $D_{s}$ leaves an unoptimizable term in its upper bound, posing challenges for domain generalization. In a setting where only $D_{s}$ and the model parameter $\theta$ are accessible, generating unseen domain data becomes infeasible. Nonetheless, by perturbing $D_{s}$ in the direction that is most sensitive given $\theta$, we can emulate the worst-case scenario for an unobserved domain (Sagawa et al., 2019).
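To make Eq. 1 concrete, below is a minimal PyTorch-style sketch of a single SAM update using the common $p=q=2$ choice, under which the closed-form perturbation reduces to $\epsilon^{*}=\rho\nabla_{\theta}\mathcal{L}/\|\nabla_{\theta}\mathcal{L}\|_{2}$; `model`, `loss_fn`, and the batch `(x, y)` are generic placeholders rather than our exact implementation.

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One SAM update: ascend to theta + eps* (closed form), then descend
    with the gradient taken at the perturbed point."""
    optimizer.zero_grad()

    # 1) Gradient at the current parameters theta.
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params))

    # 2) Closed-form worst-case perturbation, eps* = rho * g / ||g||_2
    #    (Eq. 1 with the common p = q = 2 choice).
    eps = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # 3) Gradient at the perturbed point theta + eps*.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()

    # 4) Restore theta and descend with the perturbed gradient.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
```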
While parameters trained via the SAM optimizer may exhibit flat regions based on the source domain dataset, $D_{s}$, there is no theoretical study on the flat minima under unobserved domains. By identifying the worst-case scenario that maximizes the loss landscape difference between domains, our methodology seeks generalization across the unknown domains.

### 3.2 UDIM : Unknown Domain Inconsistency Minimization

This section proposes an objective based on both parameter and data perturbed regions for domain generalization, coined Unknown Domain Inconsistency Minimization (UDIM). UDIM minimizes the loss landscape discrepancy between the source domain and unknown domains, where unknown domains are empirically realized by perturbing instances from the source domain dataset, $D_{s}$.

Let $\Theta_{\theta,\rho}=\\{\theta^{\prime}|\|\theta^{\prime}-\theta\|_{2}\leq\rho\\}$, which is the $\rho$-ball perturbed region of a specific parameter point $\theta$. When training a parameter $\theta$ on an arbitrary domain dataset $D_{e}$ with the SAM optimizer, some regions within $\Theta_{\theta,\rho}$ are expected to be optimized as flat regions for the source domain. Following the notation of Parascandolo et al. (2020b), define $N^{\gamma,\rho}_{e,\theta}:=\\{\theta^{\prime}\in\Theta_{\theta,\rho}\text{ }|\text{ }\big{|}\mathcal{L}_{D_{e}}(\theta^{\prime})-\mathcal{L}_{D_{e}}(\theta)\big{|}\leq\gamma\\}$, which is the region in $\Theta_{\theta,\rho}$ where the loss value of $\theta$ deviates by no more than $\gamma$. Given small enough values of $\mathcal{L}_{D_{e}}(\theta)$ and $\gamma$, $N^{\gamma,\rho}_{e,\theta}$ could be recognized as flat minima for the $e$-th domain.

We aim to utilize the flat minima of $\theta$ obtained through training on $D_{s}$ with the SAM optimizer, where the $s$-th domain is our source domain. The goal is to regularize the loss and its corresponding landscape of unknown domains, so that the flat minima of the unknown domains align with that of the source domain. By optimizing the domain with the worst-case deviation in the loss landscape from $D_{s}$, we facilitate regularization across a range of intermediary domains. Eq. 3 formalizes our motivation, which is the cross-domain inconsistency score:

$\displaystyle\mathcal{I}^{\gamma}_{s}(\theta)=\max_{e\in\mathcal{E}}\max_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}|\mathcal{L}_{D_{e}}(\theta^{\prime})-\mathcal{L}_{D_{s}}(\theta^{\prime})|$ (3)

In the above equation, the inner maximization seeks the worst-case parameter point $\theta^{\prime}$ that amplifies the domain-wise loss disparity, while the outer maximization identifies the $e$-th domain that maximizes $\max_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}|\mathcal{L}_{D_{e}}(\theta^{\prime})-\mathcal{L}_{D_{s}}(\theta^{\prime})|$. Parascandolo et al. (2020a) utilizes an equation similar to Eq. 3, where it indicates a $\theta$ with a loss surface that is invariant to environment changes. Our methodology differs from Parascandolo et al. (2020a), which simply uses inconsistency as a metric, in that we employ it directly as the objective we aim to optimize. Let's assume that $\theta$ exhibits sufficiently low loss along with a flat loss landscape on $D_{s}$. Further, if we can identify a $\theta$ with a low value of $\mathcal{I}^{\gamma}_{s}(\theta)$, then this $\theta$ would also demonstrate consistently good generalization for unknown domains.
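Although the inner maximization of Eq. 3 over $N^{\gamma,\rho}_{s,\theta}$ is intractable, a crude diagnostic in the spirit of the inconsistency curves in Figure 2 can be obtained by random sampling. The sketch below samples parameter points in the full $\rho$-ball (a superset of $N^{\gamma,\rho}_{s,\theta}$) and tracks the largest domain-vs-source loss gap; `loss_on` is a hypothetical helper returning the average loss of a model on a dataset, so this is a rough Monte-Carlo estimate rather than the exact score.

```python
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

@torch.no_grad()
def inconsistency_score(model, loss_on, source_data, domain_datasets,
                        rho=0.05, n_samples=20):
    """Monte-Carlo estimate of the inner term of Eq. 3: sample theta' around
    the current parameters and track the largest domain-vs-source loss gap."""
    params = list(model.parameters())
    theta = parameters_to_vector(params).clone()
    score = 0.0
    for _ in range(n_samples):
        d = torch.randn_like(theta)
        vector_to_parameters(theta + rho * d / d.norm(), params)  # theta' in the rho-ball
        src = loss_on(model, source_data)
        for data_e in domain_datasets:
            score = max(score, abs(loss_on(model, data_e) - src))
    vector_to_parameters(theta, params)                           # restore theta
    return score
```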
This motivation leads to the UDIM objective, which specifically targets the reduction of the cross-domain inconsistency score across unobserved domains.

Figure 1: Illustration of our model, UDIM, based on parameter space (a) and data space (b). (a) Changes in the loss landscape across domains based on the perturbed parameter space of $\theta$: initial state (left), post-SAM optimization (center), and subsequent to UDIM application (right). We define flatness within a perturbed region by minimizing the inconsistency loss relative to the unknown domains, around the flat region derived from the source domain. (b) Changes in domain-wise inconsistency sharpness based on the data space of $D_{s}$ before (left) and after (right) applying UDIM. By reducing the domain-wise inconsistency within the input perturbed regions, where $\rho_{x}$ denotes the perturbation length, our method can also be interpreted as a data-space perspective of SAM.

##### Objective Formulation of UDIM

While we define the cross-domain inconsistency as $\mathcal{I}^{\gamma}_{s}(\theta)$, the final formulation of UDIM is the optimization of $\theta$ regarding both source and unknown domain losses from the flat-minima perspective. Eq. 4 is the parameter optimization of our proposal:

$\displaystyle\min_{\theta}\Big{(}\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+\lambda_{1}\max_{e\in\mathcal{E}}\max_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}|\mathcal{L}_{D_{e}}(\theta^{\prime})-\mathcal{L}_{D_{s}}(\theta^{\prime})|+\lambda_{2}\|\theta\|_{2}\Big{)}$ (4)

The objective of UDIM consists of three components. The first term is the SAM loss on $D_{s}$, guiding $\theta$ toward flat minima in the context of $D_{s}$. Concurrently, as $\theta$ comes to provide a flattened local landscape, the second term, weighted by $\lambda_{1}$, measures the region-based loss disparity between the worst-case domain $e$ and the source domain, which the algorithm then seeks to diminish. Figure 1 (a) illustrates the update of the loss landscape in the parameter space for each domain, based on the optimization of Eq. 4. As the SAM optimization on $D_{s}$ progresses, $N^{\gamma,\rho}_{s,\theta}$ is expected to broaden. However, SAM optimization does not imply the minimization of $\mathcal{I}^{\gamma}_{s}(\theta)$ (center). Given that the optimization of $\mathcal{I}^{\gamma}_{s}(\theta)$ is conducted on $N^{\gamma,\rho}_{s,\theta}$, we expect the formation of flat minima spanning all domains in the optimal state (rightmost).

Figure 2: Inconsistency score of each method on the PACS training dataset (x-axis: training iteration; y-axis in log scale).

The optimization of Eq. 4 can also be interpreted as minimizing another form of sharpness in the data space. Let's suppose all other domains lie within a finite perturbation of the source domain's dataset. In the context of the data space over $D_{s}$, the optimization can be seen as identifying the worst-case perturbed dataset, $D_{e}$, by evaluating $\max_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}|\mathcal{L}_{D_{e}}(\theta^{\prime})-\mathcal{L}_{D_{s}}(\theta^{\prime})|$. If we view the perturbation of $D_{s}$ not as discrete choices but as a continuum within the data space, our optimization can be viewed as minimizing the sharpness of domain-wise inconsistency in the data space. Figure 1 (b) illustrates this interpretation.
After such optimization, the resulting parameter $\theta$ can offer consistent generalization over domains located within the perturbed data space. At this juncture, a crucial consideration is whether the SAM optimization on $D_{s}$ minimizes the second term, $\mathcal{I}^{\gamma}_{s}(\theta)$, or not. We illustrate the limited capability of SAM variants in minimizing this second term by the analytical illustration in Figure 1 (a) and by the empirical demonstration in Figure 2. Therefore, UDIM covers the optimization region in which SAM does not operate.

##### Theoretical Analysis of UDIM

Given the definition of $\Theta_{\theta,\rho}=\\{\theta^{\prime}|\|\theta^{\prime}-\theta\|_{2}\leq\rho\\}$, we introduce $\Theta_{\theta,\rho^{\prime}}=\operatorname*{arg\,max}_{\Theta_{\theta,\hat{\rho}}\subseteq N^{\gamma,\rho}_{s,\theta}}\hat{\rho}$, which is the largest $\rho^{\prime}$-ball region around $\theta$ in $N^{\gamma,\rho}_{s,\theta}$. (For the sake of simplicity, we do not attach the notation of $s$ or $\gamma$ to $\Theta_{\theta,\rho^{\prime}}$.) Theorem 3.2 introduces the generalization bound of Eq. 4, which is the objective of UDIM. Theorem 3.2 states that Eq. 4 can become the upper bound of $\mathcal{L}_{\mathscr{D}}(\theta)$, which is the population risk over all domains.

###### Theorem 3.2.

For $\theta\in\Theta$ and an arbitrary domain $e\in\mathcal{E}$, with probability at least $1-\delta$ over the realized dataset $D_{e}$ from $\mathscr{D}_{e}$, the following holds under technical conditions on $\mathcal{L}_{\mathscr{D}_{e}}(\theta)$ and $\mathcal{L}_{D_{e}}(\theta)$, where $h:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ is a strictly increasing function. (Proof in Appendix B.1.)

$\displaystyle\mathcal{L}_{\mathscr{D}}(\theta)$ $\displaystyle\leq\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+(1-\frac{1}{|\mathcal{E}|})\max_{e\in\mathcal{E}}\max_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}|\mathcal{L}_{D_{e}}(\theta^{\prime})-\mathcal{L}_{D_{s}}(\theta^{\prime})|+h(\frac{\|\theta\|_{2}^{2}}{\rho^{2}})$ (5)

Theorem 3.2 retains the same form of weight-decay term as presented in Theorem 3.1. Unlike Theorem 3.1, which contains terms that are inherently unoptimizable, our objective is capable of minimizing every term encompassed in the upper bound of Theorem 3.2, because we do not have an inaccessible term, $\mathcal{D}^{\text{f}}({\mathscr{D}_{s}}||{\mathscr{D}_{e}})$.

### 3.3 Implementation of UDIM

This section reformulates the second term of Eq. 4 into an implementable form. We first formalize the perturbation of $D_{s}$ to emulate the worst-case domain for $\max_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}|\mathcal{L}_{D_{e}}(\theta^{\prime})-\mathcal{L}_{D_{s}}(\theta^{\prime})|$.

##### Inconsistency-Aware Domain Perturbation on $D_{s}$

For clarification, we explain the perturbation process based on an arbitrary input instance $x$ from $D_{s}$.
Given that the magnitude of the perturbation vector for input $x$ is constrained to $\rho_{x}$, the perturbed input $\tilde{x}$ can be expressed as follows:

$\displaystyle\tilde{x}=x+\underset{\epsilon_{x}:\|\epsilon_{x}\|_{2}\leq\rho_{x}}{\text{argmax}}\max\limits_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}\Big{(}\ell(x+\epsilon_{x},\theta^{\prime})-\ell(x,\theta^{\prime})\Big{)}\approx x+\underset{\epsilon_{x}:\|\epsilon_{x}\|_{2}\leq\rho_{x}}{\text{argmax}}\max\limits_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}\ell(x+\epsilon_{x},\theta^{\prime})$ (6)

$\displaystyle\underset{\text{1st Taylor}}{\approx}x+\underset{\epsilon_{x}:\|\epsilon_{x}\|_{2}\leq\rho_{x}}{\text{argmax}}\Big{(}\ell(x+\epsilon_{x},\theta)+\rho^{\prime}\|\nabla_{\theta}\ell(x+\epsilon_{x},\theta)\|_{2}\Big{)}$ (7)

Assuming $N^{\gamma,\rho}_{s,\theta}$ to be flat minima of $\theta$, $\ell(x,\theta^{\prime})$ is almost invariant for $\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}$. We assume that this invariant value of $\ell(x,\theta^{\prime})$ does not significantly influence the direction of $x$'s perturbation because it is almost constant in $N^{\gamma,\rho}_{s,\theta}$; therefore, we cancel out this term in Eq. 6. Additionally, as we cannot specify the regional shape of $N^{\gamma,\rho}_{s,\theta}$, it is infeasible to search for the maximal point $\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}$. As a consequence, we utilize $\Theta_{\theta,\rho^{\prime}}$, which is the largest $\rho^{\prime}$-ball region within $N^{\gamma,\rho}_{s,\theta}$, to approximately search for the maximal point $\theta^{\prime}$. It should be noted that $\Theta_{\theta,\rho^{\prime}}\subseteq N^{\gamma,\rho}_{s,\theta}\subseteq\Theta_{\theta,\rho}$, where we assume that $\rho^{\prime}$ gradually approaches $\rho$ during the SAM optimization. Through a first-order Taylor expansion for the maximum point within $\Theta_{\theta,\rho^{\prime}}$, we can design the aforementioned perturbation loss as Eq. 7. (We empirically found that extending it to second order does not affect the resulting performance.) Consequently, the perturbation is carried out in a direction that maximizes the loss and the $\rho^{\prime}$-weighted gradient norm of the original input $x$.

##### Inconsistency Minimization on $\theta$

After the perturbation of $x\in D_{s}$, we get an inconsistency-aware perturbed dataset, $\tilde{D}_{s}$, which approximates the worst case of unobserved domains. Accordingly, we can formulate the optimization of $\mathcal{I}^{\gamma}_{s}(\theta)$ based on $\theta$ as $\min_{\theta}\max\limits_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}\Big{(}\mathcal{L}_{\tilde{D}_{s}}(\theta^{\prime})-\mathcal{L}_{{D}_{s}}(\theta^{\prime})\Big{)}$. For the above min-max optimization, we approximate the search for the maximum parameter $\theta^{\prime}$ in closed form using a second-order Taylor expansion, similar to the approach of SAM in Eq. 1.
$\displaystyle\max\limits_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}\Big{(}\mathcal{L}_{\tilde{D}_{s}}(\theta^{\prime})-\mathcal{L}_{{D}_{s}}(\theta^{\prime})\Big{)}\approx\mathcal{L}_{\tilde{D}_{s}}(\theta)-\mathcal{L}_{{D}_{s}}(\theta)+\rho^{\prime}\|\nabla_{\theta}\mathcal{L}_{\tilde{D}_{s}}(\theta)\|_{2}+\max\limits_{\theta^{\prime}\in N^{\gamma,\rho}_{s,\theta}}\frac{1}{2}\theta^{\prime\top}\mathbf{H}_{\tilde{D}_{s}}\theta^{\prime}$ (8)

$\displaystyle=\Big{(}\mathcal{L}_{\tilde{D}_{s}}(\theta)-\mathcal{L}_{{D}_{s}}(\theta)\Big{)}+\rho^{\prime}\|\nabla_{\theta}\mathcal{L}_{\tilde{D}_{s}}(\theta)\|_{2}+\gamma\max\limits_{i}\lambda^{\tilde{D}_{s}}_{i}/\lambda^{{D}_{s}}_{i}$ (9)

The full derivation and approximation procedure of Eqs. 8 and 9 are in Appendix B.2. $\mathbf{H}_{\tilde{D}_{s}}$ in Eq. 8 denotes the Hessian matrix of the perturbed dataset, $\tilde{D}_{s}$. Also, $\lambda^{\tilde{D}_{s}}_{i}$ in Eq. 9 denotes the $i$-th eigenvalue of $\mathbf{H}_{\tilde{D}_{s}}$. Finally, Eq. 9 becomes the objective with three components: 1) the loss difference between $\tilde{D}_{s}$ and $D_{s}$, 2) the gradient norm of $\tilde{D}_{s}$, and 3) the maximum eigenvalue ratio between $\tilde{D}_{s}$ and $D_{s}$. Note that $\lambda^{\tilde{D}_{s}}_{i}/\lambda^{{D}_{s}}_{i}$ is minimized when $\mathbf{H}_{\tilde{D}_{s}}$ becomes equivalent to $\mathbf{H}_{D_{s}}$.

While Eq. 9 is differentiable with respect to $\theta$ and can thus be utilized as a tractable objective, computing the Hessian matrix for an over-parameterized $\theta$ is computationally demanding. In line with Rame et al. (2022), we replace the Hessian-based objective (Eq. 9) with an optimization based on gradient variance. Accordingly, Eq. 10 represents the gradient variance-based objective as an optimization with respect to $\theta$ as follows (see detailed derivations in Appendix B.3):

$\displaystyle\min_{\theta}\rho^{\prime}\|\nabla_{\theta}\mathcal{L}_{\tilde{D}_{s}}(\theta)\|_{2}+\|\text{Var}(\mathbf{G}_{\tilde{D}_{s}})-\text{Var}(\mathbf{G}_{{D}_{s}})\|_{2}$ (10)

Here, ${\mathbf{g}}_{i}$ is the per-sample gradient for the $i$-th sample, and $\mathbf{G}_{{D}}=\\{{\mathbf{g}}_{i}\\}^{|D|}_{i=1}$ is the set of per-sample gradients for $x_{i}\in D$. Accordingly, the variance of $\mathbf{G}_{{D}}$, which we denote as $\text{Var}(\mathbf{G}_{{D}})$, is calculated as $\text{Var}(\mathbf{G}_{{D}})=\frac{1}{|D|-1}\sum^{|D|}_{i=1}\big{(}\mathbf{g}_{i}-\bar{\mathbf{g}}\big{)}^{2}$. Matching the gradient variances between two distinct datasets encapsulates a specific form of loss matching between them (Rame et al., 2022). This allows us to unify loss matching and Hessian matching under a single optimization using $\text{Var}(\mathbf{G}_{{D}})$.
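As an illustration of the variance-matching term in Eq. 10, the sketch below computes per-sample gradients with a naive loop; our actual implementation instead uses BackPACK and restricts the variance to the classifier parameters, as described in Section 4.1, so `model.classifier` in the usage comment is an assumption about the model structure.

```python
import torch

def grad_variance(model, loss_fn, xs, ys, params):
    """Element-wise variance of per-sample gradients,
    Var(G_D) = 1/(|D|-1) * sum_i (g_i - g_bar)^2, via a naive per-sample loop."""
    grads = []
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        g = torch.autograd.grad(loss, params, create_graph=True)  # keep graph so the
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))     # penalty is differentiable
    G = torch.stack(grads)              # |D| x P matrix of per-sample gradients
    return G.var(dim=0, unbiased=True)  # element-wise variance over the samples

# Variance-matching penalty of Eq. 10 (classifier-only, as in our implementation):
# params = list(model.classifier.parameters())   # assumed model attribute
# penalty = (grad_variance(model, loss_fn, x_tilde, y, params)
#            - grad_variance(model, loss_fn, x_src, y, params)).norm(p=2)
```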
##### Summary

Our methodology applies the perturbation technique to both the input $x$ and the parameter $\theta$. In particular, the perturbation of inputs and parameters is necessary to minimize $\mathcal{I}^{\gamma}_{s}(\theta)$ under unknown domains, which is the unique contribution of this work. The ablation study in Section 4.3 supports that UDIM, which is the combination of Eq. 7 and Eq. 10, yields the best performance compared to various perturbations and optimizations. The algorithm of UDIM is in Appendix C. A single iteration of UDIM optimization with the SAM loss can be described as the following procedure:

1\. Construction of $\tilde{D}_{s}$ via Inconsistency-Aware Domain Perturbation on $D_{s}$:

$\displaystyle\tilde{D}_{s}=\\{(\tilde{x}_{i},y_{i})\,|\,(x_{i},y_{i})\in D_{s}\\}{\text{ where }}\tilde{x}_{i}=x_{i}+\rho_{x}\frac{\nabla_{x_{i}}\big{(}\ell(x_{i},\theta_{t})+\rho^{\prime}\|\nabla_{\theta_{t}}\ell(x_{i},\theta_{t})\|_{2}\big{)}}{\big{\|}\nabla_{x_{i}}\big{(}\ell(x_{i},\theta_{t})+\rho^{\prime}\|\nabla_{\theta_{t}}\ell(x_{i},\theta_{t})\|_{2}\big{)}\big{\|}_{2}}$ (11)

2\. SAM loss and Inconsistency Minimization on the current parameter $\theta_{t}$:

$\displaystyle\theta_{t+1}=\theta_{t}-\eta\nabla_{\theta_{t}}\Big{(}\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta_{t}+\epsilon)+\rho^{\prime}\|\nabla_{\theta_{t}}\mathcal{L}_{\tilde{D}_{s}}(\theta_{t})\|_{2}+\|\text{Var}(\mathbf{G}_{\tilde{D}_{s}})-\text{Var}(\mathbf{G}_{{D}_{s}})\|_{2}+\lambda_{2}\|\theta_{t}\|_{2}\Big{)}$ (12)
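A simplified PyTorch-style sketch of this two-step iteration follows. It is a sketch under simplifying assumptions: the input-space normalization in Eq. 11 is applied per batch rather than per sample, a single value `rho_p` stands in for both $\rho$ and $\rho^{\prime}$, and `sam_perturbed_loss` and `grad_variance` are hypothetical helpers (e.g., the SAM step of Eq. 1 and the variance sketch above). It is not our exact implementation.

```python
import torch

def udim_iteration(model, loss_fn, x, y, optimizer,
                   sam_perturbed_loss, grad_variance,
                   rho_x=0.1, rho_p=0.05, lam2=1e-4):
    """One UDIM iteration: Eq. 11 (inconsistency-aware input perturbation),
    then a descent step on the Eq. 12 objective."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Step 1 (Eq. 11): perturb x along grad_x[ loss + rho' * ||grad_theta loss||_2 ].
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    g_theta = torch.autograd.grad(loss, params, create_graph=True)
    obj = loss + rho_p * torch.cat([g.reshape(-1) for g in g_theta]).norm()
    g_x = torch.autograd.grad(obj, x_adv)[0]
    x_tilde = (x + rho_x * g_x / (g_x.norm() + 1e-12)).detach()

    # Step 2 (Eq. 12): SAM loss + gradient norm on the perturbed set
    # + gradient-variance matching + weight decay.
    optimizer.zero_grad()
    sam_loss = sam_perturbed_loss(model, loss_fn, x, y, rho_p)  # max_eps L_{D_s}(theta+eps)
    g_tilde = torch.autograd.grad(loss_fn(model(x_tilde), y), params, create_graph=True)
    grad_norm_tilde = torch.cat([g.reshape(-1) for g in g_tilde]).norm()
    var_match = (grad_variance(model, loss_fn, x_tilde, y, params)
                 - grad_variance(model, loss_fn, x, y, params)).norm()
    weight_decay = torch.sqrt(sum((p ** 2).sum() for p in params))
    total = sam_loss + rho_p * grad_norm_tilde + var_match + lam2 * weight_decay
    total.backward()
    optimizer.step()
```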
Table 2: Test accuracy on CIFAR-10-C. Each level states the severity of corruption. Bold indicates the best result in each column, or an improvement over the corresponding sharpness-based optimizer.

Method | level1 | level2 | level3 | level4 | level5 | Avg
---|---|---|---|---|---|---
_LOO DG Based_ |  |  |  |  |  |
ERM | 75.9$\pm$0.5 | 72.9$\pm$0.4 | 70.0$\pm$0.4 | 65.9$\pm$0.4 | 59.9$\pm$0.5 | 68.9
IRM (Arjovsky et al., 2019) | 37.6$\pm$2.7 | 36.0$\pm$2.8 | 34.6$\pm$2.6 | 32.8$\pm$2.1 | 30.8$\pm$1.9 | 34.3
GroupDRO (Sagawa et al., 2019) | 76.0$\pm$0.1 | 72.9$\pm$0.1 | 69.8$\pm$0.2 | 65.5$\pm$0.3 | 59.5$\pm$0.5 | 68.7
OrgMixup (Zhang et al., 2018) | 77.1$\pm$0.0 | 74.2$\pm$0.1 | 71.4$\pm$0.1 | 67.4$\pm$0.2 | 61.2$\pm$0.1 | 70.3
Mixup (Yan et al., 2020) | 76.3$\pm$0.3 | 73.2$\pm$0.2 | 70.2$\pm$0.2 | 66.1$\pm$0.1 | 60.1$\pm$0.1 | 69.2
CutMix (Yun et al., 2019) | 77.9$\pm$0.0 | 74.2$\pm$0.1 | 70.8$\pm$0.2 | 66.3$\pm$0.3 | 60.0$\pm$0.4 | 69.8
MTL (Blanchard et al., 2021) | 75.6$\pm$0.5 | 72.7$\pm$0.4 | 69.9$\pm$0.2 | 65.9$\pm$0.0 | 60.2$\pm$0.3 | 68.9
MMD (Li et al., 2018b) | 76.4$\pm$0.1 | 73.2$\pm$0.2 | 70.0$\pm$0.3 | 65.7$\pm$0.3 | 59.6$\pm$0.5 | 69.0
CORAL (Sun & Saenko, 2016) | 76.0$\pm$0.4 | 72.9$\pm$0.2 | 69.9$\pm$0.0 | 65.8$\pm$0.1 | 59.6$\pm$0.1 | 68.8
SagNet (Nam et al., 2021) | 76.6$\pm$0.2 | 73.6$\pm$0.3 | 70.5$\pm$0.4 | 66.4$\pm$0.4 | 60.1$\pm$0.4 | 69.5
ARM (Zhang et al., 2021) | 75.7$\pm$0.1 | 72.9$\pm$0.1 | 69.9$\pm$0.2 | 65.9$\pm$0.2 | 59.8$\pm$0.3 | 68.8
DANN (Ganin et al., 2016) | 75.4$\pm$0.4 | 72.6$\pm$0.3 | 69.7$\pm$0.2 | 65.6$\pm$0.0 | 59.6$\pm$0.2 | 68.6
CDANN (Li et al., 2018c) | 75.3$\pm$0.2 | 72.3$\pm$0.2 | 69.4$\pm$0.2 | 65.3$\pm$0.1 | 59.4$\pm$0.2 | 68.3
VREx (Krueger et al., 2021) | 76.0$\pm$0.2 | 73.0$\pm$0.2 | 70.0$\pm$0.2 | 66.0$\pm$0.1 | 60.0$\pm$0.2 | 69.0
RSC (Huang et al., 2020) | 76.1$\pm$0.4 | 73.2$\pm$0.5 | 70.1$\pm$0.5 | 66.2$\pm$0.5 | 60.1$\pm$0.5 | 69.1
Fishr (Rame et al., 2022) | 76.3$\pm$0.3 | 73.4$\pm$0.3 | 70.4$\pm$0.5 | 66.3$\pm$0.8 | 60.1$\pm$1.1 | 69.3
_SDG Based_ |  |  |  |  |  |
M-ADA (Qiao et al., 2020a) | 77.2$\pm$0.2 | 74.2$\pm$0.1 | 71.2$\pm$0.0 | 67.1$\pm$0.1 | 61.1$\pm$0.1 | 70.2
LTD (Wang et al., 2021a) | 75.3$\pm$0.2 | 73.0$\pm$0.0 | 70.6$\pm$0.0 | 67.2$\pm$0.2 | 61.7$\pm$0.2 | 69.6
_SAM Based_ |  |  |  |  |  |
SAM (Foret et al., 2020) | 79.0$\pm$0.3 | 76.0$\pm$0.3 | 72.9$\pm$0.3 | 68.7$\pm$0.2 | 62.5$\pm$0.3 | 71.8
UDIM w/ SAM | 80.3$\pm$0.0 | 77.7$\pm$0.1 | 75.1$\pm$0.0 | 71.5$\pm$0.1 | 66.2$\pm$0.1 | 74.2
SAGM (Wang et al., 2023) | 79.0$\pm$0.1 | 76.2$\pm$0.0 | 73.2$\pm$0.2 | 69.0$\pm$0.3 | 62.7$\pm$0.4 | 72.0
UDIM w/ SAGM | 80.1$\pm$0.1 | 77.5$\pm$0.1 | 74.8$\pm$0.1 | 71.2$\pm$0.2 | 65.9$\pm$0.2 | 73.9
GAM (Zhang et al., 2023b) | 79.5$\pm$0.1 | 76.8$\pm$0.1 | 74.0$\pm$0.2 | 69.9$\pm$0.1 | 64.1$\pm$0.2 | 72.8
UDIM w/ GAM | 81.4$\pm$0.1 | 78.9$\pm$0.0 | 76.3$\pm$0.0 | 72.8$\pm$0.1 | 67.4$\pm$0.1 | 75.3

## 4 Experiment

### 4.1 Implementation

Datasets and Implementation Details We validate the efficacy of our method, UDIM, via experiments across multiple datasets for domain generalization. First, we conduct an evaluation on CIFAR-10-C (Hendrycks & Dietterich, 2019), a synthetic dataset that emulates various domains by applying synthetic corruptions to CIFAR-10 (Krizhevsky et al., 2009). Furthermore, we extend our evaluation to real-world datasets with multiple domains, namely PACS (Li et al., 2017), OfficeHome (Venkateswara et al., 2017), and DomainNet (Peng et al., 2019). Since UDIM can operate regardless of the number of source domains, we evaluate UDIM under both scenarios on the real-world datasets: 1) when multiple domains are present in the source (Leave-One-Out Domain Generalization, LOODG); and 2) when a single domain serves as the source (Single Source Domain Generalization, SDG). Unless specified, we report the mean and standard deviation of accuracies over three replications. Appendix D.1 provides information on the datasets and our implementations.

Implementation of UDIM To leverage a parameter $\theta$ exhibiting flat minima on $D_{s}$ during the minimization of $\mathcal{I}^{\gamma}_{s}(\theta)$, we perform warm-up training on $\theta$ with the SAM loss on $D_{s}$. While keeping the total number of iterations consistent with the other methods, we allocate the initial iterations to warm-up based on the SAM loss. As a result, we expect to optimize Eq. 4 with a sufficiently wide region $N^{\gamma,\rho}_{s,\theta}$ for sufficiently low $\gamma$. The gradient variance $\text{Var}(\mathbf{G}_{D_{s}})$ in Eq. 10 necessitates the costly computation of per-sample gradients with respect to $\theta$. We utilize BackPACK (Dangel et al., 2020), which provides fast computation of per-sample gradients, as sketched below. Also, we compute the gradient variance only for the classifier parameters, which is an efficient practice that improves performance at low computational cost (Shin et al., 2023). Appendix D.1 specifies additional hyperparameter settings of UDIM.
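For reference, a minimal sketch of this BackPACK-based computation is given below. The `Variance` extension and the `p.variance` attribute follow BackPACK's documented first-order API; `num_classes` and the feature tensors are assumed placeholders, and extending only the classifier head mirrors the practice of Shin et al. (2023). Note that this returns the value of the matching term; differentiating through it requires the per-sample autograd route sketched earlier.

```python
import torch
from backpack import backpack, extend
from backpack.extensions import Variance

num_classes = 7                                          # placeholder (e.g., PACS)
classifier = extend(torch.nn.Linear(512, num_classes))   # classifier head only
ce = extend(torch.nn.CrossEntropyLoss())

def head_grad_variance(features, labels):
    """Per-sample gradient variance of the classifier in one backward pass."""
    for p in classifier.parameters():
        p.grad = None
    with backpack(Variance()):
        ce(classifier(features), labels).backward()
    # Variance() stores the estimated Var(g_i) in p.variance (same shape as p).
    return torch.cat([p.variance.reshape(-1) for p in classifier.parameters()])

# Placeholder penultimate-layer features of D_s and of the perturbed dataset.
feat_src, y_src = torch.randn(32, 512), torch.randint(0, num_classes, (32,))
feat_tilde, y_tilde = torch.randn(32, 512), torch.randint(0, num_classes, (32,))

inconsistency = (head_grad_variance(feat_tilde, y_tilde)
                 - head_grad_variance(feat_src, y_src)).norm(p=2)
```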
Baselines for Comparison Since our approach, UDIM, is tested under both LOODG and SDG scenarios, we employ methods tailored for each scenario as baselines. These include strategies for robust optimization (Arjovsky et al., 2019; Sagawa et al., 2019) and augmentations for novel domain discovery (Zhang et al., 2018; Yun et al., 2019; Nam et al., 2021). Appendix D.2 enumerates the baselines for comparison. For methods that leverage relationships between source domains, a straightforward application is not feasible in SDG scenarios; in such cases, we treat each batch as if it came from a different domain to measure experimental performance. ERM denotes the base model trained with the standard classification loss. We also utilize sharpness-based approaches as baselines. Note that the first term of Eq. 4 (the SAM loss on $D_{s}$) can be substituted with objectives yielding similar flatness, such as SAGM (Wang et al., 2023) and GAM (Zhang et al., 2023b).

Table 3: Test accuracy for PACS. Each column represents the test domain for LOODG, and the train domain for SDG. ∗ denotes performances taken from the original paper. Bold indicates the best case in each column or improved performance when combined with the respective sharpness-based optimizer.

| Leave-One-Out Source Domain Generalization | Single Source Domain Generalization
---|---|---
Method | Art | Cartoon | Photo | Sketch | Avg | Art | Cartoon | Photo | Sketch | Avg
Fishr∗ (best among LOODG methods) | 88.4∗$\pm$0.2 | 78.7∗$\pm$0.7 | 97.0∗$\pm$0.1 | 77.8∗$\pm$2.0 | 85.5∗ | 75.9$\pm$1.7 | 81.1$\pm$0.7 | 46.9$\pm$0.7 | 57.2$\pm$4.4 | 65.3
RIDG (Chen et al., 2023b) | 86.3$\pm$1.1 | 81.0$\pm$1.0 | 97.4$\pm$0.7 | 77.5$\pm$2.5 | 85.5 | 76.2$\pm$1.4 | 80.0$\pm$1.8 | 48.5$\pm$2.8 | 54.8$\pm$2.4 | 64.9
ITTA (Chen et al., 2023a) | 87.9$\pm$1.4 | 78.6$\pm$2.7 | 96.2$\pm$0.2 | 80.7$\pm$2.2 | 85.8 | 78.4$\pm$1.5 | 79.8$\pm$1.3 | 56.5$\pm$3.7 | 60.7$\pm$0.9 | 68.8
M-ADA | 85.5$\pm$0.7 | 80.7$\pm$1.5 | 97.2$\pm$0.5 | 78.4$\pm$1.4 | 85.4 | 78.0$\pm$1.1 | 79.5$\pm$1.2 | 47.1$\pm$0.4 | 55.7$\pm$0.5 | 65.1
LTD | 85.7$\pm$1.9 | 79.9$\pm$0.9 | 96.9$\pm$0.5 | 83.3$\pm$0.5 | 86.4 | 76.8$\pm$0.7 | 82.5$\pm$0.4 | 56.2$\pm$2.5 | 53.6$\pm$1.4 | 67.3
SAM | 86.8$\pm$0.6 | 79.6$\pm$1.4 | 96.8$\pm$0.1 | 80.2$\pm$0.7 | 85.9 | 77.7$\pm$1.1 | 80.5$\pm$0.6 | 46.7$\pm$1.1 | 54.2$\pm$1.5 | 64.8
UDIM w/ SAM | 88.5$\pm$0.1 | 86.1$\pm$0.1 | 97.3$\pm$0.1 | 82.7$\pm$0.1 | 88.7 | 81.5$\pm$0.1 | 85.3$\pm$0.4 | 67.4$\pm$0.8 | 64.6$\pm$1.7 | 74.7
SAGM | 85.3$\pm$2.5 | 80.9$\pm$1.1 | 97.1$\pm$0.4 | 77.8$\pm$0.5 | 85.3 | 78.9$\pm$1.2 | 79.8$\pm$1.0 | 44.7$\pm$1.8 | 55.6$\pm$1.1 | 64.8
UDIM w/ SAGM | 88.9$\pm$0.2 | 86.2$\pm$0.3 | 97.4$\pm$0.4 | 79.5$\pm$0.8 | 88.0 | 81.6$\pm$0.3 | 84.8$\pm$1.2 | 68.1$\pm$0.8 | 63.3$\pm$0.9 | 74.5
GAM | 85.5$\pm$0.6 | 81.1$\pm$1.0 | 96.4$\pm$0.2 | 81.0$\pm$1.7 | 86.0 | 79.1$\pm$1.3 | 79.7$\pm$0.9 | 46.3$\pm$0.6 | 56.6$\pm$1.1 | 65.4
UDIM w/ GAM | 87.1$\pm$0.9 | 86.3$\pm$0.4 | 97.2$\pm$0.1 | 81.8$\pm$1.1 | 88.1 | 82.4$\pm$0.9 | 84.2$\pm$0.4 | 68.8$\pm$0.8 | 64.0$\pm$0.7 | 74.9

Figure 3: (a) Sensitivity analyses of UDIM on $\rho'$, $\lambda_{1}$, and $\rho_{x}$. (b) Test accuracy of UDIM and sharpness-based approaches over training iterations. Shaded regions represent the standard deviation.

### 4.2 Classification Accuracies on Various Benchmark Datasets

To assess the effectiveness of each method under unknown domains, we present accuracy results on unknown target domains. Table 2 shows the results on CIFAR-10-C. The sharpness-based methods exhibit excellent performance compared to the other lines of methods.
This underscores the importance of training that avoids overfitting to a specific domain. By adding UDIM (specifically, the inconsistency term of Eq. 4) to these SAM-based approaches, we consistently observe improved performance compared to the same approaches without UDIM. Table 3 shows the results on the PACS dataset. Similar to the results in Table 2, UDIM consistently outperforms the existing baselines in each scenario. This improvement is particularly amplified in the single source scenario. Unlike the Leave-One-Out scenario, the single source scenario is more challenging, as it requires generalizing to unknown domains using information from a single domain. These results emphasize the robust generalization capability of UDIM under unknown domains.

### 4.3 Sensitivity Analyses and Ablation Study

This section examines the efficacy of UDIM by analyzing the sensitivity of each hyper-parameter used in UDIM's implementation. Additionally, we perform an ablation study on each part of UDIM's objective in Eq. 12. Unless specified, each experiment is carried out in the single source domain generalization scenario using the PACS dataset.

Figure 4: Ablation study of UDIM.

Sensitivity Analyses Figure 3 (a) shows the sensitivity of $\rho'$, $\lambda_{1}$, and $\rho_{x}$, the main hyper-parameters of UDIM, over a feasible range of values. As the default setting of UDIM, we set $\rho'=0.05$, $\lambda_{1}=1$, and $\rho_{x}=1$. In the implementation, we multiply $\rho_{x}$ by the unnormalized gradient of $x$; therefore, the values presented in the sensitivity analysis have a larger scale. We also compare SAM against UDIM w/ SAM to assess the effectiveness of the UDIM objective. Each figure demonstrates that UDIM's performance remains robust and favorable, invariant to changes in each hyper-parameter. Figure 3 (b) presents the test accuracies over training iterations while varying the sharpness-based approach used alongside UDIM. Regardless of which method is used in conjunction, additional performance improvements over the iterations are observed compared to the original SAM variants.

Ablation Studies Figure 4 presents the ablation results of UDIM, which were carried out by replacing a subpart of UDIM's objective with alternative candidates and subsequently assessing the performance. We conduct various ablations: 'SourceOpt' represents the optimization method for $D_{s}$, 'Perturb' indicates the perturbation method utilized to emulate unknown domains, and 'PerturbOpt' indicates the optimization for the perturbed dataset $\tilde{D}_{s}$. Appendix D.3 enumerates each ablation candidate and its implementation. UDIM, depicted by the red bar, consistently performs best in all ablations, suggesting the effectiveness of our derived objective formulation.

Figure 5: Sharpness plots for models trained using various methods ((a) SAM, (b) SAGM, (c) GAM, (d) UDIM): the upper row shows sharpness in the perturbed parameter space, while the lower row displays sharpness in the perturbed data space. The colormap of each row is normalized to the same scale for fair comparison.

### 4.4 Sharpness Analyses

Figure 1 claims that UDIM reduces the suggested sharpness in both the parameter space and the data space. To support this claim with experiments, Figure 5 enumerates the sharpness plots for models trained with sharpness-based methods and those trained with UDIM. These figures are obtained by training models in the single source domain setting of the PACS dataset, and each plot is drawn using target domain datasets, which are not utilized for training. The top row of Figure 5 measures sharpness in the parameter space by introducing random perturbations to the parameter $\theta$. Additionally, to examine sharpness in the perturbed data space of unobserved domains, the bottom row of Figure 5 illustrates sharpness over the input-perturbed region of the target domain datasets. Current SAM variants struggle to maintain sufficient flatness in both the perturbed parameter space and the data space. In contrast, UDIM effectively preserves flatness in the perturbed parameter space and in the data space of unknown domains. Within these regions, the model trained using UDIM also exhibits a lower loss value compared to the other methods. By preserving flatness in each space, we confirm that optimization with UDIM, in both parameter and data space, is practically effective. A minimal sketch of this perturbation-based sharpness measurement follows below.
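The following sketch measures the parameter-space profile, assuming a trained `model`, a `loss_fn`, and a target-domain `loader`; the random-direction normalization is one common choice and is our assumption, as the exact plotting protocol is not pinned down here. Replacing the parameter perturbation with an input perturbation yields the data-space (bottom-row) profiles analogously.

```python
import copy
import torch

@torch.no_grad()
def loss_at_offset(model, loss_fn, loader, direction, alpha):
    """Average loss at theta + alpha * d, for a fixed random direction d."""
    probe = copy.deepcopy(model).eval()
    for p, d in zip(probe.parameters(), direction):
        p.add_(alpha * d)
    total, count = 0.0, 0
    for x, y in loader:
        total += loss_fn(probe(x), y).item() * len(x)
        count += len(x)
    return total / count

def sharpness_profile(model, loss_fn, loader, radii, seed=0):
    torch.manual_seed(seed)
    # Random direction, rescaled so each tensor matches its parameter's norm
    # (a filter-norm-style normalization; this exact choice is our assumption).
    direction = [torch.randn_like(p) for p in model.parameters()]
    direction = [d * (p.norm() / d.norm().clamp_min(1e-12))
                 for p, d in zip(model.parameters(), direction)]
    return [loss_at_offset(model, loss_fn, loader, direction, a) for a in radii]
```

Evaluating `sharpness_profile` on target-domain loaders gives the flavor of the top row of Figure 5: a profile whose loss grows slowly over `radii` corresponds to the flatness UDIM aims for.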
## 5 Conclusion

We introduce UDIM, a novel approach to minimize the discrepancy in the loss landscape between the source domain and unobserved domains. Combined with SAM variants, UDIM consistently improves generalization performance on unobserved domains. This performance is achieved by perturbing both the domain and the parameter spaces, where the UDIM optimization alternates between updating the dataset and the parameters. Experimental results demonstrate accuracy gains, up to $9.9\%$ in some settings, from adopting UDIM on top of current sharpness-based approaches.

#### Acknowledgments

This research was supported by AI Technology Development for Commonsense Extraction, Reasoning, and Inference from Heterogeneous Data (IITP) funded by the Ministry of Science and ICT (2022-0-00077).

## References

* Arjovsky et al. (2019) Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. _arXiv preprint arXiv:1907.02893_, 2019.
* Blanchard et al. (2021) Gilles Blanchard, Aniket Anand Deshmukh, Ürun Dogan, Gyemin Lee, and Clayton Scott. Domain generalization by marginal transfer learning. _The Journal of Machine Learning Research_, 22(1):46–100, 2021.
* Chen et al. (2023a) Liang Chen, Yong Zhang, Yibing Song, Ying Shan, and Lingqiao Liu. Improved test-time adaptation for domain generalization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 24172–24182, 2023a.
* Chen et al. (2023b) Liang Chen, Yong Zhang, Yibing Song, Anton van den Hengel, and Lingqiao Liu. Domain generalization via rationale invariance. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 1751–1760, 2023b.
* Chu et al. (2019) Casey Chu, Kentaro Minami, and Kenji Fukumizu. Smoothness and stability in gans. In _International Conference on Learning Representations_, 2019.
* Csurka (2017) Gabriela Csurka. Domain adaptation for visual applications: A comprehensive survey. _arXiv preprint arXiv:1702.05374_, 2017.
* Dangel et al. (2020) Felix Dangel, Frederik Kunstner, and Philipp Hennig. BackPACK: Packing more into backprop. In _International Conference on Learning Representations_, 2020. URL https://openreview.net/forum?id=BJlrF24twB.
* Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_, pp. 248–255. IEEE, 2009.
* Dziugaite & Roy (2017) Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. _arXiv preprint arXiv:1703.11008_ , 2017. * Feldman (2020) Dan Feldman. Introduction to core-sets: an updated survey. _arXiv preprint arXiv:2011.09384_ , 2020. * Foret et al. (2020) Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In _International Conference on Learning Representations_ , 2020. * Ganin et al. (2016) Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. _The journal of machine learning research_ , 17(1):2096–2030, 2016. * Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. _arXiv preprint arXiv:1412.6572_ , 2014. * He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 770–778, 2016. * Hendrycks & Dietterich (2019) Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. _arXiv preprint arXiv:1903.12261_ , 2019. * Huang et al. (2020) Zeyi Huang, Haohan Wang, Eric P Xing, and Dong Huang. Self-challenging improves cross-domain generalization. In _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16_ , pp. 124–140. Springer, 2020. * Jang et al. (2022) JoonHo Jang, Byeonghu Na, Dong Hyeok Shin, Mingi Ji, Kyungwoo Song, and Il chul Moon. Unknown-aware domain adversarial learning for open-set domain adaptation. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), _Advances in Neural Information Processing Systems_ , 2022. URL https://openreview.net/forum?id=IwC_x50fvU. * Kim et al. (2022) Minyoung Kim, Da Li, Shell X Hu, and Timothy Hospedales. Fisher sam: Information geometry and sharpness aware minimisation. In _International Conference on Machine Learning_ , pp. 11148–11161. PMLR, 2022. * Kim et al. (2023) Yoon-Yeong Kim, Youngjae Cho, Joonho Jang, Byeonghu Na, Yeongmin Kim, Kyungwoo Song, Wanmo Kang, and Il-Chul Moon. SAAL: Sharpness-aware active learning. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), _Proceedings of the 40th International Conference on Machine Learning_ , volume 202 of _Proceedings of Machine Learning Research_ , pp. 16424–16440. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/kim23c.html. * Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014. * Krizhevsky et al. (2009) Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009\. * Krueger et al. (2021) David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In _International Conference on Machine Learning_ , pp. 5815–5826. PMLR, 2021. * Kwon et al. (2021) Jungmin Kwon, Jeongseop Kim, Hyunseo Park, and In Kwon Choi. 
Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. In _International Conference on Machine Learning_ , pp. 5905–5914. PMLR, 2021. * Laurent & Massart (2000) Beatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model selection. _Annals of statistics_ , pp. 1302–1338, 2000. * Li et al. (2017) Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In _Proceedings of the IEEE international conference on computer vision_ , pp. 5542–5550, 2017. * Li et al. (2018a) Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy Hospedales. Learning to generalize: Meta-learning for domain generalization. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 32, 2018a. * Li et al. (2018b) Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 5400–5409, 2018b. * Li et al. (2021a) Lei Li, Ke Gao, Juan Cao, Ziyao Huang, Yepeng Weng, Xiaoyue Mi, Zhengze Yu, Xiaoya Li, and Boyang Xia. Progressive domain expansion network for single domain generalization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 224–233, 2021a. * Li et al. (2021b) Pan Li, Da Li, Wei Li, Shaogang Gong, Yanwei Fu, and Timothy M Hospedales. A simple feature augmentation for domain generalization. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 8886–8895, 2021b. * Li et al. (2018c) Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. Deep domain generalization via conditional invariant adversarial networks. In _Proceedings of the European conference on computer vision (ECCV)_ , pp. 624–639, 2018c. * Madry et al. (2017) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. _arXiv preprint arXiv:1706.06083_ , 2017. * McAllester (1999) David A McAllester. Pac-bayesian model averaging. In _Proceedings of the twelfth annual conference on Computational learning theory_ , pp. 164–170, 1999. * Nam et al. (2021) Hyeonseob Nam, HyunJae Lee, Jongchan Park, Wonjun Yoon, and Donggeun Yoo. Reducing domain gap by reducing style bias. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 8690–8699, 2021. * Parascandolo et al. (2020a) Giambattista Parascandolo, Alexander Neitz, ANTONIO ORVIETO, Luigi Gresele, and Bernhard Schölkopf. Learning explanations that are hard to vary. In _International Conference on Learning Representations_ , 2020a. * Parascandolo et al. (2020b) Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, and Bernhard Schölkopf. Learning explanations that are hard to vary. _arXiv preprint arXiv:2009.00329_ , 2020b. * Peng et al. (2019) Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 1406–1415, 2019. * Qiao et al. (2020a) Fengchun Qiao, Long Zhao, and Xi Peng. Learning to learn single domain generalization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 12556–12565, 2020a. * Qiao et al. (2020b) Fengchun Qiao, Long Zhao, and Xi Peng. 
Learning to learn single domain generalization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 12556–12565, 2020b. * Rame et al. (2022) Alexandre Rame, Corentin Dancette, and Matthieu Cord. Fishr: Invariant gradient variances for out-of-distribution generalization. In _International Conference on Machine Learning_ , pp. 18347–18377. PMLR, 2022. * Rangwani et al. (2022) Harsh Rangwani, Sumukh K Aithal, Mayank Mishra, Arihant Jain, and Venkatesh Babu Radhakrishnan. A closer look at smoothness in domain adversarial training. In _International Conference on Machine Learning_ , pp. 18378–18399. PMLR, 2022. * Sagawa et al. (2019) Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks. In _International Conference on Learning Representations_ , 2019. * Schraudolph (2002) Nicol N Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. _Neural computation_ , 14(7):1723–1738, 2002. * Shin et al. (2023) Seungjae Shin, Heesun Bae, Donghyeok Shin, Weonyoung Joo, and Il-Chul Moon. Loss-curvature matching for dataset selection and condensation. In _International Conference on Artificial Intelligence and Statistics_ , pp. 8606–8628. PMLR, 2023. * Shui et al. (2022) Changjian Shui, Boyu Wang, and Christian Gagné. On the benefits of representation regularization in invariance based domain generalization. _Machine Learning_ , 111(3):895–915, 2022. * Sun & Saenko (2016) Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. In _Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part III 14_ , pp. 443–450. Springer, 2016. * Venkateswara et al. (2017) Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 5018–5027, 2017. * Wald et al. (2021) Yoav Wald, Amir Feder, Daniel Greenfeld, and Uri Shalit. On calibration and out-of-domain generalization. _Advances in neural information processing systems_ , 34:2215–2227, 2021. * Wang et al. (2022) Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip Yu. Generalizing to unseen domains: A survey on domain generalization. _IEEE Transactions on Knowledge and Data Engineering_ , 2022. * Wang et al. (2023) Pengfei Wang, Zhaoxiang Zhang, Zhen Lei, and Lei Zhang. Sharpness-aware gradient matching for domain generalization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 3769–3778, 2023. * Wang et al. (2021a) Zijian Wang, Yadan Luo, Ruihong Qiu, Zi Huang, and Mahsa Baktashmotlagh. Learning to diversify for single domain generalization. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 834–843, 2021a. * Wang et al. (2021b) Zijian Wang, Yadan Luo, Ruihong Qiu, Zi Huang, and Mahsa Baktashmotlagh. Learning to diversify for single domain generalization. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 834–843, 2021b. * Yan et al. (2020) Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, and Liu Ren. Improve unsupervised domain adaptation with mixup training. _arXiv preprint arXiv:2001.00677_ , 2020. * Yun et al. (2019) Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. 
Cutmix: Regularization strategy to train strong classifiers with localizable features. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 6023–6032, 2019.
* Zhang et al. (2018) Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In _International Conference on Learning Representations_, 2018.
* Zhang et al. (2021) Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, and Chelsea Finn. Adaptive risk minimization: Learning to adapt to domain shift. _Advances in Neural Information Processing Systems_, 34:23664–23678, 2021.
* Zhang et al. (2023a) Xingxuan Zhang, Renzhe Xu, Han Yu, Yancheng Dong, Pengfei Tian, and Peng Cui. Flatness-aware minimization for domain generalization. _arXiv preprint arXiv:2307.11108_, 2023a.
* Zhang et al. (2023b) Xingxuan Zhang, Renzhe Xu, Han Yu, Hao Zou, and Peng Cui. Gradient norm aware minimization seeks first-order flatness and improves generalization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 20247–20257, 2023b.
* Zhong et al. (2022) Zhun Zhong, Yuyang Zhao, Gim Hee Lee, and Nicu Sebe. Adversarial style augmentation for domain generalized urban-scene segmentation. _Advances in Neural Information Processing Systems_, 35:338–350, 2022.
* Zhou et al. (2021) Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain generalization with mixstyle. _arXiv preprint arXiv:2104.02008_, 2021.
* Zhou et al. (2022) Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2022.

## Appendix A Explanation of Sharpness Variants for Domain Generalization

Gradient norm-Aware Minimization (GAM) Zhang et al. (2023b) introduces first-order flatness, which minimizes the maximal gradient norm within a perturbation radius, to regularize a stronger notion of flatness than SAM. Accordingly, GAM seeks minima with uniformly flat curvature across all directions.

Sharpness-Aware Gradient Matching (SAGM) Wang et al. (2023) minimizes the original loss, the corresponding perturbed loss, and the gap between them. This optimization aims to identify minima that are both flat and possess a sufficiently low loss value. Interpreting the given formula, this optimization inherently regularizes the gradient alignment between the original loss and the perturbed loss.

Flatness-Aware Minimization (FAM) Zhang et al. (2023a) concurrently optimizes both zeroth-order and first-order flatness to identify flatter minima. Because it computes several sharpness metrics of different orders, it incurs a higher computational cost.

## Appendix B Proofs and Discussions

### B.1 Proof for Theorem 3.2

First, we provide the theorem, definition, and assumptions needed to prove Theorem 3.2.

###### Theorem B.1.

(Foret et al., 2020) For any $\rho>0$ which satisfies $\mathcal{L}_{\mathscr{D}_{e}}(\theta)\leq\mathbb{E}_{\epsilon\sim p(\epsilon)}\mathcal{L}_{\mathscr{D}_{e}}(\theta+\epsilon)$, with probability at least $1-\delta$ over the realized dataset $D_{e}$ from $\mathscr{D}_{e}$ with $|D_{e}|=n$, the following holds under some technical conditions on $\mathcal{L}_{\mathscr{D}_{e}}(\theta)$:

$\mathcal{L}_{\mathscr{D}_{e}}(\theta)\leq\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{e}}(\theta+\epsilon)+h_{e}(\frac{\|\theta\|_{2}^{2}}{\rho^{2}}),$

where $h_{e}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ is a strictly increasing function.

###### Definition B.2.
Let $\Theta_{\theta,\rho}=\{\theta'\mid\|\theta'-\theta\|_{2}\leq\rho\}$ and $N^{\gamma,\rho}_{e,\theta}=\{\theta'\in\Theta_{\theta,\rho}\mid|\mathcal{L}_{D_{e}}(\theta')-\mathcal{L}_{D_{e}}(\theta)|\leq\gamma\}$.

###### Assumption B.3.

$\mathcal{L}_{\mathscr{D}}(\theta)\leq\mathbb{E}_{\epsilon\sim p(\epsilon)}\mathcal{L}_{\mathscr{D}}(\theta+\epsilon)$, where $p(\epsilon)=\mathcal{N}(0,\sigma^{2}I)$ for some $\sigma>0$.

###### Assumption B.4.

$\max_{e\in\mathcal{E}}\mathcal{L}_{D_{e}}(\theta')\geq\mathcal{L}_{D_{s}}(\theta')$ for all $\theta'\in N^{\gamma,\rho}_{s,\theta}$.

In practice, Assumption B.4 is acceptable. In contrast to a dataset $D_{e}$ from an unobserved domain $e$, $D_{s}$ is the source domain dataset provided to us and hence available for training. Moreover, the region $N^{\gamma,\rho}_{s,\theta}$ would be flat around a local minimum $\theta^{*}$ from the perspective of the source domain dataset. Therefore, the perturbed loss on the source domain dataset, $\mathcal{L}_{D_{s}}(\theta')$, is likely to have a sufficiently low value.

###### Theorem B.5.

For $\theta\in\Theta$ and an arbitrary domain $e\in\mathcal{E}$, with probability at least $1-\delta$ over the realized dataset $D_{e}$ from $\mathscr{D}_{e}$ with $|D_{e}|=n$, the following holds under some technical conditions on $\mathcal{L}_{\mathscr{D}_{e}}(\theta)$:

$\displaystyle\mathcal{L}_{\mathscr{D}}(\theta)\leq\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+(1-\frac{1}{|\mathcal{E}|})\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}|\mathcal{L}_{D_{e}}(\theta')-\mathcal{L}_{D_{s}}(\theta')|+h(\frac{\|\theta\|_{2}^{2}}{\rho^{2}})$ (13)

where $h:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ is a strictly increasing function.

###### Proof.

For the derivation of Theorem 3.2, we assume the case of single source domain generalization, where $s$ represents a single domain in $\mathcal{E}$. It should be noted that the number of available source domains does not affect the validity of this proof, because multiple source domains can be treated as a single source domain via $D_{s}=\cup_{i\in\mathcal{S}}D_{i}$. Based on the definition, $\mathcal{L}_{\mathscr{D}}(\theta)=\frac{1}{|\mathcal{E}|}\sum_{e\in\mathcal{E}}\mathcal{L}_{\mathscr{D}_{e}}(\theta)=\frac{1}{|\mathcal{E}|}\Big(\mathcal{L}_{\mathscr{D}_{s}}(\theta)+\sum_{e\in\mathcal{E},e\neq s}\mathcal{L}_{\mathscr{D}_{e}}(\theta)\Big)$. From Theorem B.1, we can derive the generalization bound of the source domain $s$ as follows:

$\displaystyle\mathcal{L}_{\mathscr{D}_{s}}(\theta)\leq\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+h_{s}(\frac{\|\theta\|_{2}^{2}}{\rho^{2}})$ (14)

where $h_{s}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ is a strictly increasing function. To derive the upper bound of $\mathcal{L}_{\mathscr{D}}(\theta)$, we need to find the upper bound of the remaining term, $\sum_{e\in\mathcal{E},e\neq s}\mathcal{L}_{\mathscr{D}_{e}}(\theta)$. Here, we introduce a parameter set $\Theta_{\rho'}:=\operatorname*{arg\,max}_{\Theta_{\hat{\rho}}\subseteq N^{\gamma,\rho}_{s,\theta}}\hat{\rho}$, which is the largest $\rho'$-ball region around $\theta$ contained in $N^{\gamma,\rho}_{s,\theta}$.
Then, we can construct the following inequality:

$\displaystyle\max_{\|\epsilon\|_{2}\leq\rho'}\mathcal{L}_{D_{e}}(\theta+\epsilon)\leq\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')\leq\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{e}}(\theta+\epsilon)$ (15)

This holds because $\Theta_{\rho'}\subset N^{\gamma,\rho}_{s,\theta}\subset\Theta_{\theta,\rho}$. Similar to Foret et al. (2020) and Kim et al. (2022), we make use of the following result from Laurent & Massart (2000):

$\displaystyle z\sim\mathcal{N}(0,\sigma^{2}I)\Rightarrow\|z\|^{2}_{2}\leq k\sigma^{2}\Bigg(1+\sqrt{\frac{\log n}{k}}\Bigg)^{2}\,\,\text{with probability at least}\,\,1-\frac{1}{\sqrt{n}}$ (16)

We set $\rho=\sigma(\sqrt{k}+\sqrt{\log n})$. This enables us to connect the expected perturbed loss and the maximum perturbed loss as follows:

$\displaystyle\mathbb{E}_{\epsilon\sim\mathcal{N}(0,\sigma^{2}I)}\Big[\mathcal{L}_{D_{e}}(\theta+\epsilon)\Big]\leq(1-\frac{1}{\sqrt{n}})\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{e}}(\theta+\epsilon)+\frac{1}{\sqrt{n}}l_{e,max}$ (17)

Here, $l_{e,max}$ is the maximum loss bound over the event $\|z\|^{2}_{2}\geq\rho^{2}$. Also, we introduce $\sigma'$ such that $\rho'=\sigma'(\sqrt{k}+\sqrt{\log n})$. Then, analogously to Eq. 17, we derive the corresponding inequality for $\rho'$:

$\displaystyle\mathbb{E}_{\epsilon\sim\mathcal{N}(0,(\sigma')^{2}I)}\Big[\mathcal{L}_{D_{e}}(\theta+\epsilon)\Big]$ $\displaystyle\leq(1-\frac{1}{\sqrt{n}})\max_{\|\epsilon\|_{2}\leq\rho'}\mathcal{L}_{D_{e}}(\theta+\epsilon)+\frac{1}{\sqrt{n}}l'_{e,max}$ (18)

$\displaystyle\leq(1-\frac{1}{\sqrt{n}})\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')+\frac{1}{\sqrt{n}}l'_{e,max}$ (19)

where $l'_{e,max}$ is the maximum loss bound over the event $\|z\|^{2}_{2}\geq(\rho')^{2}$. Then, we obtain the derivation below by summing over all $e\neq s$ and using the fact that $l_{e,max}\leq l'_{e,max}$:

$\displaystyle\frac{1}{(|\mathcal{E}|-1)}\sum_{e\in\mathcal{E},e\neq s}\mathbb{E}_{\epsilon\sim\mathcal{N}(0,(\sigma')^{2}I)}\Big[\mathcal{L}_{D_{e}}(\theta+\epsilon)\Big]$ (20)

$\displaystyle\leq(1-\frac{1}{\sqrt{n}})\frac{1}{(|\mathcal{E}|-1)}\sum_{e\in\mathcal{E},e\neq s}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')+\frac{1}{\sqrt{n}}\frac{1}{(|\mathcal{E}|-1)}\sum_{e\in\mathcal{E},e\neq s}l'_{e,max}$ (21)

$\displaystyle\leq(1-\frac{1}{\sqrt{n}})\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')+\frac{1}{\sqrt{n}}\max_{e\in\mathcal{E}}l'_{e,max}$ (22)

Using Assumption B.3 and the PAC-Bayesian generalization bound (McAllester, 1999; Dziugaite & Roy, 2017; Foret et al., 2020), we find the upper bound of the sum of the target domain losses.
$\displaystyle\frac{1}{(|\mathcal{E}|-1)}\sum_{e\in\mathcal{E},e\neq s}\mathcal{L}_{\mathscr{D}_{e}}(\theta)\leq\frac{1}{(|\mathcal{E}|-1)}\sum_{e\in\mathcal{E},e\neq s}\mathbb{E}_{\epsilon\sim\mathcal{N}(0,(\sigma')^{2}I)}[\mathcal{L}_{\mathscr{D}_{e}}(\theta+\epsilon)]$ (23)

$\displaystyle\leq\frac{1}{(|\mathcal{E}|-1)}\sum_{e\in\mathcal{E},e\neq s}\Big\{\mathbb{E}_{\epsilon\sim\mathcal{N}(0,(\sigma')^{2}I)}[\mathcal{L}_{D_{e}}(\theta+\epsilon)]+h_{e}(\frac{\|\theta\|_{2}^{2}}{\rho^{2}})\Big\}$ (24)

$\displaystyle\leq(1-\frac{1}{\sqrt{n}})\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')+\frac{1}{\sqrt{n}}\max_{e\in\mathcal{E}}l'_{e,max}+\frac{1}{(|\mathcal{E}|-1)}\sum_{e\in\mathcal{E},e\neq s}h_{e}(\frac{\|\theta\|_{2}^{2}}{\rho^{2}})$ (25)

$\displaystyle\Rightarrow\sum_{e\in\mathcal{E},e\neq s}\mathcal{L}_{\mathscr{D}_{e}}(\theta)\leq(|\mathcal{E}|-1)(1-\frac{1}{\sqrt{n}})\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')+\tilde{h}(\frac{\|\theta\|_{2}^{2}}{\rho^{2}})$ (26)

where $h_{e},\tilde{h}$ are strictly increasing functions; we use the fact that a sum of strictly increasing functions is also strictly increasing. By integrating the results of Eq. 14 and Eq. 26,

$\displaystyle\frac{1}{|\mathcal{E}|}\Big(\mathcal{L}_{\mathscr{D}_{s}}(\theta)+\sum_{e\in\mathcal{E},e\neq s}\mathcal{L}_{\mathscr{D}_{e}}(\theta)\Big)\leq\frac{1}{|\mathcal{E}|}\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+(1-\frac{1}{|\mathcal{E}|})(1-\frac{1}{\sqrt{n}})\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')+h(\frac{\|\theta\|_{2}^{2}}{\rho^{2}})$ (27)

where $h:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ is a strictly increasing function. The first and second terms on the RHS of this inequality are further upper bounded by the maximum perturbed loss on the source domain and the unknown domain inconsistency loss.
$\displaystyle\frac{1}{|\mathcal{E}|}\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+(1-\frac{1}{|\mathcal{E}|})(1-\frac{1}{\sqrt{n}})\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')$ (28)

$\displaystyle\leq\frac{1}{|\mathcal{E}|}\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+(1-\frac{1}{|\mathcal{E}|})\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')$ (29)

$\displaystyle=\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+(1-\frac{1}{|\mathcal{E}|})\Big(\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')-\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)\Big)$ (30)

$\displaystyle\leq\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+(1-\frac{1}{|\mathcal{E}|})\Big(\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{e}}(\theta')-\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{D_{s}}(\theta')\Big)$ (31)

$\displaystyle\leq\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+(1-\frac{1}{|\mathcal{E}|})\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}(\mathcal{L}_{D_{e}}(\theta')-\mathcal{L}_{D_{s}}(\theta'))$ (32)

$\displaystyle=\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+(1-\frac{1}{|\mathcal{E}|})\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}|\mathcal{L}_{D_{e}}(\theta')-\mathcal{L}_{D_{s}}(\theta')|$ (33)

The last equality comes from Assumption B.4. To sum up, we can derive the upper bound of the population loss over all domains using the maximum perturbed loss on the source domain and the unknown domain inconsistency loss, together with a weight decay term.

$\displaystyle\mathcal{L}_{\mathscr{D}}(\theta)\leq\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)+(1-\frac{1}{|\mathcal{E}|})\max_{e\in\mathcal{E}}\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}|\mathcal{L}_{D_{e}}(\theta')-\mathcal{L}_{D_{s}}(\theta')|+h(\frac{\|\theta\|_{2}^{2}}{\rho^{2}})$ (34)

∎

### B.2 Detailed Explanation on Approximation

Here, we show the full derivation of Eq. 8 and Eq. 9:
$\displaystyle\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\Big(\mathcal{L}_{\tilde{D}_{s}}(\theta')-\mathcal{L}_{D_{s}}(\theta')\Big)\approx\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\mathcal{L}_{\tilde{D}_{s}}(\theta')-\mathcal{L}_{D_{s}}(\theta)+\gamma'$ (35)

$\displaystyle\underset{\text{2nd Taylor}}{\approx}\mathcal{L}_{\tilde{D}_{s}}(\theta)-\mathcal{L}_{D_{s}}(\theta)+\rho'\|\nabla_{\theta}\mathcal{L}_{\tilde{D}_{s}}(\theta)\|_{2}+\max_{\theta'\in N^{\gamma,\rho}_{s,\theta}}\frac{1}{2}\theta'^{\top}\mathbf{H}_{\tilde{D}_{s}}\theta'$ (36)

$\displaystyle=\Big(\mathcal{L}_{\tilde{D}_{s}}(\theta)-\mathcal{L}_{D_{s}}(\theta)\Big)+\rho'\|\nabla_{\theta}\mathcal{L}_{\tilde{D}_{s}}(\theta)\|_{2}+\gamma\max_{i}\lambda^{\tilde{D}_{s}}_{i}/\lambda^{D_{s}}_{i}$ (37)

Here, the first approximation uses the defining constraint of $N^{\gamma,\rho}_{s,\theta}$, which lets us replace $\mathcal{L}_{D_{s}}(\theta')$ by $\mathcal{L}_{D_{s}}(\theta)$ up to a constant $\gamma'\leq\gamma$; the second applies a second-order Taylor expansion of $\mathcal{L}_{\tilde{D}_{s}}$ around $\theta$.

### B.3 Discussion on Hessian Matrix and Gradient Variance

##### Hessian Matching

This section first discusses how Hessian matrix matching between two different datasets can be substituted by gradient variance matching on the respective datasets. It should be noted that we follow the motivation and derivation of Rame et al. (2022); this section simply re-formalizes the derivations in our notation. ${\mathbf{g}}_{i}$ is the per-sample gradient for the $i$-th sample, and $\mathbf{G}_{D}=\{{\mathbf{g}}_{i}\}^{|D|}_{i=1}$ is the set of per-sample gradients for $x_{i}\in D$. Accordingly, the variance of $\mathbf{G}_{D}$, which we denote as $\text{Var}(\mathbf{G}_{D})$, is calculated as $\text{Var}(\mathbf{G}_{D})=\frac{1}{|D|-1}\sum^{|D|}_{i=1}\big(\mathbf{g}_{i}-\bar{\mathbf{g}}\big)^{2}$. We first revisit Eq. 37, which is our intermediate objective:

$\displaystyle\Big(\mathcal{L}_{\tilde{D}_{s}}(\theta)-\mathcal{L}_{D_{s}}(\theta)\Big)+\rho'\|\nabla_{\theta}\mathcal{L}_{\tilde{D}_{s}}(\theta)\|_{2}+\gamma\max_{i}\lambda^{\tilde{D}_{s}}_{i}/\lambda^{D_{s}}_{i}$ (38)

The last term of Eq. 38, $\max_{i}\lambda^{\tilde{D}_{s}}_{i}/\lambda^{D_{s}}_{i}$, is the maximum eigenvalue ratio of the Hessian matrices of the two datasets, $\tilde{D}_{s}$ and $D_{s}$. This ratio is minimized when the Hessians of $\tilde{D}_{s}$ and $D_{s}$ become equivalent. Then, $\max_{i}\lambda^{\tilde{D}_{s}}_{i}/\lambda^{D_{s}}_{i}$ is approximated by Hessian matching between $\tilde{D}_{s}$ and $D_{s}$, i.e. $\|\mathbf{H}_{\tilde{D}_{s}}-\mathbf{H}_{D_{s}}\|_{2}$. Computing the full Hessian matrix of an over-parameterized network is computationally challenging. Therefore, we express the formula using the diagonalized Hessian matrix, denoted as $\hat{\mathbf{H}}_{D_{s}}$, which results in $\|\hat{\mathbf{H}}_{\tilde{D}_{s}}-\hat{\mathbf{H}}_{D_{s}}\|_{2}$. Define the Fisher Information Matrix (Rame et al., 2022) as ${\bm{F}}=\sum_{i=1}^{n}\mathbb{E}_{\hat{y}\sim P_{\theta}(\cdot|x_{i})}\left[\nabla_{\theta}\log p_{\theta}(\hat{y}|x_{i})\nabla_{\theta}\log p_{\theta}(\hat{y}|x_{i})^{\top}\right]$, where $p_{\theta}(\cdot|x_{i})$ is the density of $f_{\theta}$ on an input instance $x_{i}$. The Fisher Information Matrix (FIM) approximates the Hessian $\mathbf{H}$ with provably bounded errors under mild assumptions (Schraudolph, 2002). Then, diagonalized Hessian matching between $\tilde{D}_{s}$ and $D_{s}$, $\|\hat{\mathbf{H}}_{\tilde{D}_{s}}-\hat{\mathbf{H}}_{D_{s}}\|_{2}$, can be replaced by $\|\hat{{\bm{F}}}_{\tilde{D}_{s}}-\hat{{\bm{F}}}_{D_{s}}\|_{2}$, where $\hat{{\bm{F}}}$ denotes the diagonalized version of ${\bm{F}}$. Empirically, $\hat{{\bm{F}}}$ coincides with the gradient variance of the trained model $f_{\theta}$. This finally confirms the validity of our objective, the gradient variance difference between $\tilde{D}_{s}$ and $D_{s}$: $\|\text{Var}(\mathbf{G}_{\tilde{D}_{s}})-\text{Var}(\mathbf{G}_{D_{s}})\|_{2}$. Table 2 in the main paper of Rame et al. (2022) empirically supports that the similarity between Hessian diagonals and gradient variances is over 99.99$\%$.

##### Loss Matching

Matching the gradient variances for all parameters of our model $f_{\theta}$ incurs significant computational overhead. In this study, we restrict gradient variance matching to a subset of the parameters, specifically the classifier parameters. In this section, we demonstrate that matching the gradient variances of two different datasets on the classifier parameters inherently achieves loss matching across those datasets; a toy numerical check at the end of this subsection illustrates this equality. For simplicity of notation, we refer to an arbitrary domain index as $e$. Let $x_{e}^{i}$ represent the $i$-th sample and $y_{e}^{i}$ its corresponding class label. We denote by $z_{e}^{i}\in\mathbb{R}^{d}$ the features of the $i$-th sample from domain $e$. The associated classifier layer $W$ is characterized by weights $\{w_{k}\}_{k=1}^{d}$ and bias $b$. We assume the mean squared error as the loss function for our analysis. For the $i$-th sample, the gradient of the loss with respect to $b$ is given by $\nabla_{b}\ell(f_{\theta}(x_{e}^{i}),y_{e}^{i})=(f_{\theta}(x_{e}^{i})-y_{e}^{i})$. Hence, the (uncentered) gradient variance with respect to the parameter $b$ for domain $e$ is given by $\text{Var}(\mathbf{G}_{D_{e}}^{b})=\frac{1}{n_{e}}\sum_{i=1}^{n_{e}}(f_{\theta}(x_{e}^{i})-y_{e}^{i})^{2}$, which directly aligns with the mean squared error (MSE) between the predictions and the target labels in domain $e$. Considering our objective, $\|\text{Var}(\mathbf{G}_{\tilde{D}_{s}})-\text{Var}(\mathbf{G}_{D_{s}})\|_{2}$, gradient variance matching on $b$ can thus be recognized as mean squared error loss matching, i.e. $\|\frac{1}{|\tilde{D}_{s}|}\sum_{(x^{i},y^{i})\in\tilde{D}_{s}}(f_{\theta}(x^{i})-y^{i})^{2}-\frac{1}{|D_{s}|}\sum_{(x^{j},y^{j})\in D_{s}}(f_{\theta}(x^{j})-y^{j})^{2}\|_{2}$.

##### Analysis on the Remaining Term

We also investigate gradient variance matching based on $\{w_{k}\}_{k=1}^{d}\in W$, the remaining part of the classifier parameter $W$. The gradient with respect to $w_{k}$ is derived as $\nabla_{w_{k}}\ell(y_{e}^{i},\hat{y}_{e}^{i})=(\hat{y}_{e}^{i}-y_{e}^{i})z_{e}^{i,k}$. Thus, the uncentered gradient variance in $w_{k}$ for domain $e$ is $\text{Var}(\textbf{G}_{D_{e}}^{w_{k}})=\frac{1}{n_{e}}\sum_{i=1}^{n_{e}}\left((\hat{y}_{e}^{i}-y_{e}^{i})z_{e}^{i,k}\right)^{2}$. Different from the case of $b$, $\text{Var}(\textbf{G}_{D_{e}}^{w_{k}})$ weights the squared error by $z_{e}^{i,k}$. As $z_{e}^{i,k}$ acts as a weight, gradient variance matching on $w_{k}$ still drives the MSE losses of the two datasets toward each other.
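As an illustration, the following toy check (our addition, assuming a scalar linear regressor and the uncentered variance used above) numerically confirms that the second moment of the per-sample bias gradients recovers the MSE.

```python
import torch

torch.manual_seed(0)
d, n = 5, 64
w = torch.randn(d, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
z = torch.randn(n, d)    # features z^i
y = torch.randn(n)       # targets  y^i

# Per-sample loss l_i = 1/2 * (w.z_i + b - y_i)^2  =>  dl_i/db = (pred_i - y_i)
grads_b = []
for i in range(n):
    loss_i = 0.5 * (z[i] @ w + b - y[i]) ** 2
    grads_b.append(torch.autograd.grad(loss_i, b)[0])
g = torch.cat(grads_b)                        # the n per-sample bias gradients

uncentered_var = (g ** 2).mean()              # (1/n) * sum_i g_i^2
mse = ((z @ w + b - y) ** 2).mean()           # MSE of the same predictions
print(torch.allclose(uncentered_var, mse))    # True: Var(G^b) recovers the MSE
```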
## Appendix C Algorithm of UDIM

Here, we present the algorithm of UDIM as follows.
Input: Source dataset $D_{s}$; perturbation thresholds for the model parameter $\theta$ and the data, $\rho'$ and $\rho_{x}$; learning rate $\eta$; warm-up ratio for source domain flatness, $p$; number of total training iterations, $N$; other hyperparameters.
Output: Trained model $f_{\theta}(x)$

for $t=1,\ldots,N/p$ do
  Warm up $f_{\theta}(x)$ with SAM optimization as in Eq. 1
end for
for $t=N/p,\ldots,N$ do
  Define $D_{B}=\{(x_{i},y_{i})\}_{i=1}^{|B|}$, i.i.d. sampled from $D_{s}$
  Construct $\tilde{D}_{B}=\{(\tilde{x}_{i},y_{i})\}_{i=1}^{|B|}$, where for all $i$, $\tilde{x}_{i}=x_{i}+\rho_{x}\frac{\nabla_{x_{i}}\big(\ell(x,\theta_{t})+\rho'\|\nabla_{\theta_{t}}\ell(x,\theta_{t})\|_{2}\big)}{\big\|\nabla_{x_{i}}\big(\ell(x,\theta_{t})+\rho'\|\nabla_{\theta_{t}}\ell(x,\theta_{t})\|_{2}\big)\big\|_{2}}$
  Update $f_{\theta}(x)$ by $\theta_{t+1}\leftarrow\theta_{t}-\eta\nabla_{\theta_{t}}\Big(\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{B}}(\theta_{t}+\epsilon)+\rho'\|\nabla_{\theta_{t}}\mathcal{L}_{\tilde{D}_{B}}(\theta_{t})\|_{2}+\|\text{Var}(\mathbf{G}_{\tilde{D}_{B}})-\text{Var}(\mathbf{G}_{D_{B}})\|_{2}+\lambda_{2}\|\theta_{t}\|_{2}\Big)$
end for

Algorithm 1: Training algorithm of UDIM w/ SAM

We present the algorithm of UDIM with SAM as the default setting. It should be noted that our method can be used orthogonally with other sharpness-based optimization methods.

## Appendix D Experiment

### D.1 Implementation details

##### Dataset Explanation

* PACS (Li et al., 2017) comprises four domains: photo, art, cartoon, and sketch. This dataset contains 9,991 images and consists of 7 classes.
* OfficeHome (Venkateswara et al., 2017) includes four domains: art, clipart, product, and real. This dataset contains 15,588 images and consists of 65 classes.
* DomainNet (Peng et al., 2019) consists of six domains: clipart, infograph, painting, quickdraw, real, and sketch. This dataset contains 586,575 images and consists of 345 classes.
* CIFAR-10-C (Hendrycks & Dietterich, 2019) is utilized to evaluate the robustness of a trained classifier. Images from the CIFAR-10 (Krizhevsky et al., 2009) test dataset are corrupted under 5 levels of severity. The 19 corruption types are brightness, contrast, defocus blur, elastic transform, fog, frost, Gaussian blur, Gaussian noise, glass blur, impulse noise, JPEG compression, motion blur, pixelate, saturate, shot noise, snow, spatter, speckle noise, and zoom blur.

##### Network Architecture and Optimization

We use ResNet-18 (He et al., 2016) for CIFAR-10-C and ResNet-50 (He et al., 2016) for the other datasets, both pretrained on ImageNet (Deng et al., 2009), and we use the Adam (Kingma & Ba, 2014) optimizer by default. The learning rate is set to $3\times 10^{-5}$ following Wang et al. (2023). For calculating gradient-related quantities, e.g., per-sample gradients, we utilize the BackPACK (Dangel et al., 2020) package.

##### Experimental settings

For PACS and OfficeHome, we train for a total of 5,000 iterations. For DomainNet, we train for 15,000 iterations. Since CIFAR-10 is usually trained for 100 epochs, we translate this into iterations, giving a total of $781\times 100=78{,}100$ iterations. Unless specified, we use a batch size of 32 for PACS, OfficeHome, and DomainNet, and 64 for CIFAR-10-C. For other hyperparameters, we follow the experimental settings of Wang et al. (2023) unless specified.
Although our method mainly focuses on domain generalization, our concept could also be effectively utilized for domain adaptation (Csurka, 2017) and open-set domain adaptation (Jang et al., 2022).

##### Hyperparameter setting of UDIM

The main hyperparameters of UDIM are $\rho$, $\rho'$, $\lambda_{1}$, and $\rho_{x}$. Throughout all experiments, we set $\rho=0.05$ without any hyperparameter tuning. For $\rho'$, we used values [0.01, 0.025, 0.05], and in most experiments the value 0.05 consistently showed good performance. It should be noted that a warm-up using the SAM loss is required before the full UDIM optimization to ensure that $\rho'=0.05$ can be utilized validly. For both $\lambda_{1}$ and $\rho_{x}$, we used values in the range [1, 10] and reported the best performance observed among these results. Our methodology applies perturbations to each instance of the original source domain dataset, effectively doubling the number of unique instances in a single batch compared to the experiments for the baselines. As a result, we utilized half the batch size of the other baselines.

##### Evaluation Detail

For reporting model performance, the model selection criterion is important. We report the test performance at the checkpoint whose accuracy on the source validation dataset is best. For PACS and OfficeHome, we evaluate every 100 iterations; for DomainNet, we evaluate every 1,000 iterations for Leave-One-Out Source Domain Generalization and every 5,000 iterations for Single Source Domain Generalization.

### D.2 Baseline description

In this section, we explain the baselines used for comparison. Specifically, we compare our method with (1) methods whose objectives are mainly related to Leave-One-Out Source Domain Generalization, (2) methods mainly designed for Single Source Domain Generalization, and (3) sharpness-aware minimization related methods, as reported repeatedly in the tables.

IRM (Arjovsky et al., 2019) tries to learn a data representation such that the optimal classifier matches for all training distributions. Specifically, it minimizes the empirical risk and a regularization term, the multiplication of samples' gradients, to motivate the invariance of the predictor.

GroupDRO (Sagawa et al., 2019) minimizes the loss by giving a different weight to each domain. The weight for each domain is proportional to the domain's current loss.

OrgMixup (Zhang et al., 2018) represents the naive mixup technique that is generally utilized in the machine learning community to boost generalization.

Mixup (Yan et al., 2020) is a mixup among domains.

CutMix (Yun et al., 2019) is another augmentation widely used in the machine learning community to boost generalization. Specifically, it randomly mixes up parts of inputs pixel-wise.

MixStyle (Zhou et al., 2021) mixes up the statistics (specifically, the mean and standard deviation) of the features. The mixed feature statistics are applied to the style-normalized input. We did not consider the domain label.

MTL (Blanchard et al., 2021) considers the exponential moving average (EMA) of features.

MLDG (Li et al., 2018a) is a meta-learning-based method for domain generalization. Specifically, it simulates the domain shift between train and test during the training procedure by synthesizing virtual testing domains within each mini-batch. Then it optimizes the meta-loss using the synthesized dataset.
MMD (Li et al., 2018b) minimizes the discrepancy of feature distributions for every domain pair, while minimizing the empirical risk on the source domains.

CORAL (Sun & Saenko, 2016) is similar to MMD. However, while MMD employs the Gaussian kernel to measure feature discrepancy, CORAL aligns the second-order statistics between different distributions with a nonlinear transformation. This alignment is achieved by matching the correlations of layer activations in deep neural networks.

SagNet (Nam et al., 2021) disentangles style features from class categories to prevent bias. Specifically, it builds two networks, a content network and a style network, and trains both networks to be invariant to their counterpart by feeding randomized features (updating the content network with randomized-style features, and vice versa).

ARM (Zhang et al., 2021) represents adaptive risk minimization. Specifically, it constructs an adaptive risk that represents context.

DANN represents Domain Adversarial Neural Networks; it iteratively trains a discriminator, which discriminates the domain, and a featurizer, so that the learned features become invariant to domain information. CDANN is a class-conditional version of DANN.

VREx (Krueger et al., 2021) controls the discrepancy between domains by minimizing the variance of losses across domains.

RSC (Huang et al., 2020) challenges the dominant features of the training domain (by masking a specific percentage of the dominant gradients), so the model can focus on label-related, domain-invariant features.

Fishr (Rame et al., 2022) approximates the Hessian by the variance of the gradients and aligns the gradient variance of each domain.

M-ADA (Qiao et al., 2020a) perturbs input data to simulate unseen domain data, with adequate regularization to keep the data from drifting too far from the original; the adversarial perturbation direction is shaped by a Wasserstein autoencoder. Note that this method is specifically designed for single source domain generalization.

LTD (Wang et al., 2021a) perturbs source domain data with an augmentation network, maximizes the mutual information between the original feature and the perturbed feature so that the perturbed feature does not stray too far from the original (via a contrastive loss), and maximizes the likelihood of the original feature. Note that this method is also specifically designed for single source domain generalization.

SAM (Foret et al., 2020) is an optimization technique that considers the sharpness of the loss surface. It first perturbs the parameter in its worst direction, computes the gradient there, and applies the resulting update at the original parameter point.

SAGM (Wang et al., 2023) minimizes the original loss, the corresponding perturbed loss, and the gap between them. This optimization aims to identify minima that are both flat and possess a sufficiently low loss value. Interpreting the given formula, this optimization inherently regularizes the gradient alignment between the original loss and the perturbed loss.

GAM (Zhang et al., 2023b) introduces first-order flatness, which minimizes the maximal gradient norm within a perturbation radius, to regularize a stronger notion of flatness than SAM. Accordingly, GAM seeks minima with uniformly flat curvature across all directions.

RIDG (Chen et al., 2023b) presents a new approach to decision-making in the classifier layer of deep neural networks, diverging from the traditional emphasis on features.
It introduces a 'rationale matrix', derived from the relationship between features and weights, to guide decisions for each input. A novel regularization term is proposed to align each sample's rationale with the class mean, enhancing stability across samples and domains.

ITTA (Chen et al., 2023a) proposes an Improved Test-Time Adaptation (ITTA) method for domain generalization. ITTA uses a learnable consistency loss for the TTT task to better align with the main prediction task and introduces adaptive parameters in the model, recommending updates solely during the test phase. This approach aims to address the issues of auxiliary task selection and parameter updating in test-time training.

### D.3 Ablation

Figure 4 of the main paper presents the ablation results of UDIM, which were carried out by replacing a subpart of UDIM's objective with alternative candidates and subsequently assessing the performance. This section enumerates each ablation candidate and its implementation. We conduct various ablations: 'SourceOpt' represents the optimization method for $D_{s}$, 'Perturb' indicates the perturbation method utilized to emulate unknown domains, and 'PerturbOpt' indicates the optimization for the perturbed dataset $\tilde{D}_{s}$. It should be noted that each ablation substitutes only the specified part, keeping the other parts of UDIM unchanged.

##### SourceOpt

The optimization for the source domain dataset in UDIM is originally based on the SAM loss, $\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{D_{s}}(\theta+\epsilon)$. To verify the significance of flatness modeling, we replace this optimization with simple ERM, which we refer to as the ERM ablation.

##### Perturb

The perturbation process of UDIM is conducted based on Eq. 11. We substitute the perturbation method in Eq. 11 with traditional adversarial attack techniques, namely FGSM (Goodfellow et al., 2014) and PGD (Madry et al., 2017), to compare their performance outcomes.

##### PerturbOpt

Lastly, to ablate the inconsistency minimization between the perturbed domain dataset $\tilde{D}_{s}$ and $D_{s}$, we replace the optimization in Eq. 10 with ERM- and SAM-based optimizations. The corresponding cases in the PerturbOpt ablation are denoted as ERM and SAM.

### D.4 Additional Results

In this section, we report further results that were omitted from the main paper due to space constraints. Table 4 shows the performance of all baselines (Table 3 of the main paper reports only a subset of the baselines). As the table shows, our method, UDIM, consistently improves the SAM-based optimization variants and achieves the best performance in each column. We mark '-' for training failures (when the model performance is near 1$\%$).

Table 4: Test accuracy for PACS. For Leave-One-Out Source Domain Generalization, each column represents the test domain; for Single Source Domain Generalization, it represents the train domain. ∗ denotes performances taken from the original paper, considering LOODG. For the SDG scenario, we generated the experimental results for all baselines. Bold indicates the best case in each column or improved performance when combined with the respective sharpness-based optimizer.
| Leave-One-Out Source Domain Generalization | Single Source Domain Generalization
---|---|---
Method | Art | Cartoon | Photo | Sketch | Avg | Art | Cartoon | Photo | Sketch | Avg
ERM | 86.9$\pm$2.3 | 79.5$\pm$1.5 | 96.6$\pm$0.5 | 78.2$\pm$4.1 | 85.3 | 79.9$\pm$0.9 | 79.9$\pm$0.8 | 48.1$\pm$5.8 | 59.6$\pm$1.1 | 66.9
IRM∗ | 85.0∗$\pm$1.6 | 77.6∗$\pm$0.9 | 96.7∗$\pm$0.3 | 78.5∗$\pm$2.6 | 84.4∗ | 73.3$\pm$1.5 | 77.8$\pm$2.3 | 46.9$\pm$0.8 | 49.7$\pm$3.0 | 61.9
GroupDRO | 84.8$\pm$2.2 | 79.4$\pm$1.2 | 97.3$\pm$0.3 | 75.8$\pm$1.0 | 84.3 | 79.0$\pm$0.5 | 79.0$\pm$0.6 | 42.0$\pm$2.9 | 60.8$\pm$3.9 | 65.2
OrgMixup | 87.7$\pm$0.3 | 77.4$\pm$1.3 | 97.6$\pm$0.3 | 76.3$\pm$0.9 | 84.8 | 74.5$\pm$1.1 | 79.8$\pm$0.4 | 46.8$\pm$2.1 | 55.5$\pm$1.9 | 64.1
Mixup | 86.9$\pm$1.3 | 78.2$\pm$0.7 | 97.8$\pm$0.4 | 73.7$\pm$2.9 | 84.2 | 77.4$\pm$1.4 | 80.0$\pm$1.2 | 47.3$\pm$1.6 | 58.2$\pm$1.2 | 65.7
CutMix | 80.5$\pm$0.7 | 75.7$\pm$1.4 | 97.0$\pm$0.5 | 74.8$\pm$1.7 | 82.0 | 71.1$\pm$0.5 | 76.4$\pm$3.1 | 37.7$\pm$0.3 | 50.4$\pm$4.0 | 58.9
Mixstyle | 84.4$\pm$2.3 | 80.4$\pm$0.6 | 95.6$\pm$0.1 | 80.5$\pm$1.1 | 85.2 | 78.1$\pm$2.8 | 78.8$\pm$1.1 | 56.1$\pm$3.9 | 54.7$\pm$2.9 | 66.9
MTL | 85.4$\pm$2.2 | 78.8$\pm$2.2 | 96.5$\pm$0.2 | 74.4$\pm$2.0 | 83.8 | 76.7$\pm$1.2 | 78.7$\pm$1.7 | 44.7$\pm$2.0 | 59.5$\pm$1.4 | 64.9
MLDG | 87.7$\pm$0.6 | 77.5$\pm$0.7 | 96.6$\pm$0.6 | 75.3$\pm$1.9 | 84.3 | - | - | - | - | -
MMD∗ | 84.5∗$\pm$0.6 | 79.7∗$\pm$0.7 | 97.5∗$\pm$0.4 | 78.1∗$\pm$1.3 | 85.0∗ | 75.4$\pm$1.1 | 80.1$\pm$0.5 | 45.2$\pm$1.2 | 58.2$\pm$0.6 | 64.7
CORAL∗ | 87.7∗$\pm$0.6 | 79.2∗$\pm$1.1 | 97.6∗$\pm$0.0 | 79.4∗$\pm$0.7 | 86.0∗ | 76.3$\pm$0.8 | 79.2$\pm$2.0 | 45.9$\pm$1.7 | 57.0$\pm$1.4 | 64.6
SagNet | 87.1$\pm$1.1 | 78.0$\pm$1.9 | 96.8$\pm$0.2 | 78.4$\pm$1.4 | 85.1 | 77.4$\pm$0.0 | 78.9$\pm$1.8 | 47.6$\pm$2.4 | 56.4$\pm$4.0 | 65.1
ARM | 86.4$\pm$0.1 | 78.8$\pm$0.6 | 96.1$\pm$0.1 | 75.1$\pm$3.3 | 84.1 | 76.2$\pm$0.5 | 75.5$\pm$4.0 | 45.2$\pm$5.7 | 61.9$\pm$2.0 | 64.7
DANN∗ | 85.9∗$\pm$0.5 | 79.9∗$\pm$1.4 | 97.6∗$\pm$0.2 | 75.2∗$\pm$2.8 | 84.6∗ | 79.0$\pm$1.4 | 76.5$\pm$2.0 | 48.7$\pm$2.1 | 57.9$\pm$4.7 | 65.5
CDANN∗ | 84.0∗$\pm$0.9 | 78.5∗$\pm$1.5 | 97.0∗$\pm$0.4 | 71.8∗$\pm$3.9 | 82.8∗ | 78.5$\pm$1.5 | 78.7$\pm$2.0 | 48.3$\pm$3.1 | 56.9$\pm$2.2 | 65.6
VREx | 87.2$\pm$0.5 | 77.8$\pm$0.8 | 96.8$\pm$0.3 | 75.2$\pm$3.4 | 84.3 | 75.3$\pm$2.1 | 80.2$\pm$0.4 | 44.9$\pm$2.8 | 56.8$\pm$2.6 | 64.3
RSC | 81.0$\pm$0.7 | 77.6$\pm$1.0 | 95.3$\pm$0.8 | 75.0$\pm$1.4 | 82.2 | 68.9$\pm$2.3 | 70.6$\pm$3.6 | 41.1$\pm$3.1 | 45.9$\pm$3.1 | 56.6
Fishr∗ | 88.4∗$\pm$0.2 | 78.7∗$\pm$0.7 | 97.0∗$\pm$0.1 | 77.8∗$\pm$2.0 | 85.5∗ | 75.9$\pm$1.7 | 81.1$\pm$0.7 | 46.9$\pm$0.7 | 57.2$\pm$4.4 | 65.3
RIDG | 86.3$\pm$1.1 | 81.0$\pm$1.0 | 97.4$\pm$0.7 | 77.5$\pm$2.5 | 85.5 | 76.2$\pm$1.4 | 80.0$\pm$1.8 | 48.5$\pm$2.8 | 54.8$\pm$2.4 | 64.9
ITTA | 87.9$\pm$1.4 | 78.6$\pm$2.7 | 96.2$\pm$0.2 | 80.7$\pm$2.2 | 85.8 | 78.4$\pm$1.5 | 79.8$\pm$1.3 | 56.5$\pm$3.7 | 60.7$\pm$0.9 | 68.8
M-ADA | 85.5$\pm$0.7 | 80.7$\pm$1.5 | 97.2$\pm$0.5 | 78.4$\pm$1.4 | 85.4 | 78.0$\pm$1.1 | 79.5$\pm$1.2 | 47.1$\pm$0.4 | 55.7$\pm$0.5 | 65.1
LTD | 85.7$\pm$1.9 | 79.9$\pm$0.9 | 96.9$\pm$0.5 | 83.3$\pm$0.5 | 86.4 | 76.8$\pm$0.7 | 82.5$\pm$0.4 | 56.2$\pm$2.5 | 53.6$\pm$1.4 | 67.3
SAM | 86.8$\pm$0.6 | 79.6$\pm$1.4 | 96.8$\pm$0.1 | 80.2$\pm$0.7 | 85.9 | 77.7$\pm$1.1 | 80.5$\pm$0.6 | 46.7$\pm$1.1 | 54.2$\pm$1.5 | 64.8
UDIM w/ SAM | 88.5$\pm$0.1 | 86.1$\pm$0.1 | 97.3$\pm$0.1 | 82.7$\pm$0.1 | 88.7 | 81.5$\pm$0.1 | 85.3$\pm$0.4 | 67.4$\pm$0.8 | 64.6$\pm$1.7 | 74.7
SAGM | 85.3$\pm$2.5 | 80.9$\pm$1.1 | 97.1$\pm$0.4 | 77.8$\pm$0.5 | 85.3 | 78.9$\pm$1.2 | 79.8$\pm$1.0 | 44.7$\pm$1.8 | 55.6$\pm$1.1 | 64.8
UDIM w/ SAGM | 88.9$\pm$0.2 | 86.2$\pm$0.3 | 97.4$\pm$0.4 | 79.5$\pm$0.8 | 88.0 | 81.6$\pm$0.3 | 84.8$\pm$1.2 | 68.1$\pm$0.8 | 63.3$\pm$0.9 | 74.5
GAM | 85.5$\pm$0.6 | 81.1$\pm$1.0 | 96.4$\pm$0.2 | 81.0$\pm$1.7 | 86.0 | 79.1$\pm$1.3 | 79.7$\pm$0.9 | 46.3$\pm$0.6 | 56.6$\pm$1.1 | 65.4
UDIM w/ GAM | 87.1$\pm$0.9 | 86.3$\pm$0.4 | 97.2$\pm$0.1 | 81.8$\pm$1.1 | 88.1 | 82.4$\pm$0.9 | 84.2$\pm$0.4 | 68.8$\pm$0.8 | 64.0$\pm$0.7 | 74.9

Table 5: Results on the OfficeHome dataset. ∗ indicates results taken from Wang et al. (2023) and ′ from Rame et al. (2022) for Leave-One-Out Source Domain Generalization. For Single Source Domain Generalization, we report model performances generated under our experimental setting.

| Leave-One-Out Source Domain Generalization | Single Source Domain Generalization
---|---|---
Method | Art | Clipart | Product | Real World | Avg | Art | Clipart | Product | Real World | Avg
ERM | 61.4$\pm$1.0 | 53.5$\pm$0.2 | 75.9$\pm$0.2 | 77.1$\pm$0.2 | 67.0 | 55.6$\pm$0.6 | 52.8$\pm$1.6 | 50.3$\pm$1.1 | 59.4$\pm$0.3 | 54.5
IRM∗ | 61.8∗$\pm$1.0 | 52.3∗$\pm$1.0 | 75.2∗$\pm$0.8 | 77.2∗$\pm$1.1 | 66.6∗ | 54.9$\pm$0.3 | 53.2$\pm$0.6 | 48.6$\pm$0.7 | 59.2$\pm$0.1 | 54.0
GroupDRO | 61.3$\pm$2.0 | 53.3$\pm$0.4 | 75.4$\pm$0.3 | 76.0$\pm$1.0 | 66.5 | 55.1$\pm$0.2 | 52.0$\pm$0.5 | 50.3$\pm$1.1 | 59.3$\pm$0.3 | 54.2
OrgMixup | 63.7$\pm$1.1 | 55.4$\pm$0.1 | 77.1$\pm$0.2 | 78.9$\pm$0.6 | 68.8 | 56.0$\pm$1.1 | 54.4$\pm$1.3 | 50.4$\pm$0.4 | 61.0$\pm$0.7 | 55.5
Mixup | 64.1$\pm$0.6 | 54.9$\pm$0.7 | 76.6$\pm$0.4 | 78.7$\pm$0.5 | 68.6 | 55.5$\pm$0.6 | 54.1$\pm$1.1 | 49.4$\pm$1.7 | 59.4$\pm$0.6 | 54.6
CutMix | 63.2$\pm$0.3 | 52.1$\pm$1.7 | 77.2$\pm$0.8 | 78.1$\pm$0.6 | 67.7 | 53.5$\pm$0.8 | 52.2$\pm$1.2 | 47.7$\pm$1.7 | 60.2$\pm$0.4 | 53.4
Mixstyle∗ | 51.1∗$\pm$0.3 | 53.2∗$\pm$0.4 | 68.2∗$\pm$0.7 | 69.2∗$\pm$0.6 | 60.4 | 44.3$\pm$0.5 | 29.8$\pm$1.2 | 33.6$\pm$0.5 | 48.5$\pm$0.9 | 39.0
MTL | 60.1$\pm$0.5 | 52.0$\pm$0.3 | 75.7$\pm$0.3 | 77.2$\pm$0.4 | 66.3 | 55.3$\pm$0.3 | 53.3$\pm$0.4 | 49.0$\pm$0.3 | 60.4$\pm$0.1 | 54.5
MLDG∗ | 63.7∗$\pm$0.3 | 54.5∗$\pm$0.6 | 75.9∗$\pm$0.4 | 78.6∗$\pm$0.1 | 68.2∗ | - | - | - | - | -
MMD∗ | 63.0∗$\pm$0.1 | 53.7∗$\pm$0.9 | 76.1∗$\pm$0.3 | 78.1∗$\pm$0.5 | 67.7∗ | 55.1$\pm$0.2 | 52.0$\pm$0.5 | 50.3$\pm$1.1 | 59.3$\pm$0.3 | 54.2
CORAL | 64.1$\pm$0.5 | 54.5$\pm$1.7 | 76.2$\pm$0.4 | 77.8$\pm$0.5 | 68.2 | 55.6$\pm$0.6 | 52.8$\pm$1.6 | 50.3$\pm$1.1 | 59.4$\pm$0.3 | 54.5
SagNet | 62.3$\pm$1.1 | 51.7$\pm$0.3 | 75.4$\pm$1.0 | 78.1$\pm$0.2 | 66.9 | 56.9$\pm$1.2 | 53.4$\pm$2.1 | 50.8$\pm$0.3 | 61.2$\pm$0.8 | 55.6
ARM | 59.9$\pm$0.6 | 51.8$\pm$0.5 | 73.3$\pm$0.5 | 75.7$\pm$0.8 | 65.2 | 55.0$\pm$0.1 | 51.6$\pm$1.1 | 47.3$\pm$0.8 | 59.3$\pm$0.7 | 53.3
DANN∗ | 59.9∗$\pm$1.3 | 53.0∗$\pm$0.3 | 73.6∗$\pm$0.7 | 76.9∗$\pm$0.5 | 65.9∗ | 55.2$\pm$0.8 | 49.3$\pm$1.5 | 48.4$\pm$1.5 | 58.4$\pm$0.2 | 52.8
CDANN∗ | 61.5∗$\pm$1.4 | 50.4∗$\pm$2.4 | 74.4∗$\pm$0.9 | 76.6∗$\pm$0.8 | 65.7∗ | 55.2$\pm$0.7 | 49.9$\pm$1.4 | 47.6$\pm$1.3 | 58.6$\pm$0.5 | 52.8
VREx | 61.4$\pm$1.0 | 52.2$\pm$0.2 | 76.1$\pm$0.6 | 77.6$\pm$0.9 | 66.8 | 55.5$\pm$0.6 | 52.6$\pm$0.2 | 49.1$\pm$1.0 | 59.3$\pm$0.6 | 54.1
RSC | - | - | - | - | - | - | - | - | - | -
Fishr′ | 62.4$\pm$0.5 | 54.4$\pm$0.4 | 76.2$\pm$0.5 | 78.3$\pm$0.1 | 67.8 | 55.1$\pm$0.4 | 51.2$\pm$0.1 | 49.2$\pm$1.0 | 59.9$\pm$1.4 | 53.9
RIDG | 63.6$\pm$0.7 | 55.0$\pm$0.9 | 76.0$\pm$0.8 | 77.5$\pm$0.7 | 68.0 | 56.8$\pm$0.5 | 55.4$\pm$0.7 | 50.5$\pm$0.3 | 60.9$\pm$0.1 | 55.9
ITTA | 61.8$\pm$0.9 | 57.0$\pm$1.0 | 74.3$\pm$0.3 | 77.3$\pm$0.3 | 67.6 | 56.0$\pm$0.4 | 51.5$\pm$0.8 | 50.5$\pm$0.6 | 61.6$\pm$0.4 | 54.9
SAM | 62.2$\pm$0.7 | 55.9$\pm$0.1 | 77.0$\pm$0.9 | 78.8$\pm$0.6 | 68.5 | 56.9$\pm$0.4 | 53.8$\pm$1.1 | 50.9$\pm$0.7 | 61.5$\pm$0.8 | 55.8
UDIM (w/ SAM) | 63.5$\pm$1.3 | 58.6$\pm$0.4 | 76.9$\pm$0.6 | 79.1$\pm$0.3 | 69.5 | 58.1$\pm$0.6 | 55.0$\pm$0.9 | 53.8$\pm$0.1 | 64.3$\pm$0.2 | 57.8
SAGM | 63.1$\pm$2.1 | 56.2$\pm$0.4 | 77.3$\pm$0.2 | 78.4$\pm$0.4 | 68.8 | 57.7$\pm$0.3 | 54.8$\pm$1.0 | 51.5$\pm$1.2 | 61.4$\pm$0.1 | 56.3
UDIM (w/ SAGM) | 64.4$\pm$0.3 | 57.3$\pm$0.5 | 77.1$\pm$0.4 | 79.1$\pm$0.3 | 69.5 | 58.5$\pm$0.4 | 55.7$\pm$0.6 | 54.5$\pm$0.1 | 64.5$\pm$0.4 | 58.3
GAM | 64.0$\pm$0.5 | 58.6$\pm$1.2 | 77.5$\pm$0.1 | 79.3$\pm$0.2 | 69.8 | 59.4$\pm$0.6 | 56.1$\pm$0.9 | 53.3$\pm$0.6 | 63.4$\pm$0.2 | 58.1
UDIM (w/ GAM) | 64.2$\pm$0.3 | 57.4$\pm$0.9 | 77.5$\pm$0.1 | 79.3$\pm$0.3 | 69.6 | 58.7$\pm$0.3 | 55.7$\pm$0.0 | 53.6$\pm$0.4 | 64.4$\pm$0.0 | 58.1

Tables 5–7 show the model performances for the OfficeHome (Venkateswara et al., 2017) and DomainNet (Peng et al., 2019) datasets. As in the tables above, UDIM shows consistent improvement over the sharpness-aware baselines and performs well in each column.

Table 6: Results on the DomainNet dataset - Leave-One-Out (Multi) Source Domain Generalization

Method | clipart | infograph | painting | quickdraw | real | sketch | Avg
---|---|---|---|---|---|---|---
ERM | 62.8$\pm$0.4 | 20.2$\pm$0.3 | 50.3$\pm$0.3 | 13.7$\pm$0.5 | 63.7$\pm$0.2 | 52.1$\pm$0.5 | 43.8
IRM | 48.5$\pm$2.8 | 15.0$\pm$1.5 | 38.3$\pm$4.3 | 10.9$\pm$0.5 | 48.2$\pm$5.2 | 42.3$\pm$3.1 | 33.9
GroupDRO | 47.2$\pm$0.5 | 17.5$\pm$0.4 | 33.8$\pm$0.5 | 9.3$\pm$0.3 | 51.6$\pm$0.4 | 40.1$\pm$0.6 | 33.3
MTL | 57.9$\pm$0.5 | 18.5$\pm$0.4 | 46.0$\pm$0.1 | 12.5$\pm$0.1 | 59.5$\pm$0.3 | 49.2$\pm$0.1 | 40.6
MLDG | 59.1$\pm$0.2 | 19.1$\pm$0.3 | 45.8$\pm$0.7 | 13.4$\pm$0.3 | 59.6$\pm$0.2 | 50.2$\pm$0.4 | 41.2
MMD | 32.1$\pm$13.3 | 11.0$\pm$4.6 | 26.8$\pm$11.3 | 8.7$\pm$2.1 | 32.7$\pm$13.8 | 28.9$\pm$11.9 | 23.4
CORAL | 59.2$\pm$0.1 | 19.7$\pm$0.2 | 46.6$\pm$0.3 | 13.4$\pm$0.4 | 59.8$\pm$0.2 | 50.1$\pm$0.6 | 41.5
SagNet | 57.7$\pm$0.3 | 19.0$\pm$0.2 | 45.3$\pm$0.3 | 12.7$\pm$0.5 | 58.1$\pm$0.5 | 48.8$\pm$0.2 | 40.3
ARM | 49.7$\pm$0.3 | 16.3$\pm$0.5 | 40.9$\pm$1.1 | 9.4$\pm$0.1 | 53.4$\pm$0.4 | 43.5$\pm$0.4 | 35.5
DANN∗ | 53.8∗$\pm 0.7$ | 17.8∗$\pm 0.3$ | 43.5∗$\pm 0.3$ | 11.9∗$\pm 0.5$ | 56.4∗$\pm 0.3$ | 46.7∗$\pm 0.5$ | 38.4∗
CDANN∗ | 53.4∗$\pm 0.4$ | 18.3∗$\pm 0.7$ | 44.8∗$\pm 0.3$ | 12.9∗$\pm 0.2$ | 57.5∗$\pm 0.4$ | 46.7∗$\pm 0.2$ | 38.9∗
VREx | 47.3$\pm$3.5 | 16.0$\pm$1.5 | 35.8$\pm$4.6 | 10.9$\pm$0.3 | 49.6$\pm$4.9 | 42.0$\pm$3.0 | 33.6
RSC | 55.0$\pm$1.2 | 18.3$\pm$0.5 | 44.4$\pm$0.6 | 12.2$\pm$0.2 | 55.7$\pm$0.7 | 47.8$\pm$0.9 | 38.9
Fishr∗ | 58.2∗$\pm$0.5 | 20.2∗$\pm$0.2 | 47.7∗$\pm$0.3 | 12.7∗$\pm$0.2 | 60.3∗$\pm$0.2 | 50.8∗$\pm$0.1 | 41.7∗
SAM | 64.5$\pm$0.3 | 20.7$\pm$0.2 | 50.2$\pm$0.1 | 15.1$\pm$0.3 | 62.6$\pm$0.2 | 52.7$\pm$0.3 | 44.3
UDIM (w/ SAM) | 63.5$\pm$0.1 | 21.01$\pm$0.1 | 50.63$\pm$0.1 | 14.76$\pm$0.1 | 62.5$\pm$0.1 | 53.39$\pm$0.1 | 44.3

Table 7: Results on the DomainNet dataset - Single Source Domain Generalization

Method | clipart | infograph | painting | quickdraw | real | sketch | Avg
---|---|---|---|---|---|---|---
ERM | 27.5$\pm$0.5 | 26.6$\pm$0.1 | 28.5$\pm$0.3 | 7.1$\pm$0.1 | 28.9$\pm$0.4 | 29.4$\pm$0.3 | 24.7
IRM | - | - | - | - | - | - | -
GroupDRO | 27.6$\pm$0.4 | 26.6$\pm$0.8 | 28.9$\pm$0.3 | 7.3$\pm$0.3 | 28.7$\pm$0.2 | 29.4$\pm$0.7 | 24.8
OrgMixup | 28.3$\pm$0.1 | 27.3$\pm$0.1 | 29.4$\pm$0.3 | 7.7$\pm$0.4 | 30.2$\pm$0.2 | 30.0$\pm$0.7 | 25.5
Mixup | 27.5$\pm$0.3 | 26.9$\pm$0.8 | 28.6$\pm$0.4 | 7.0$\pm$0.1 | 28.7$\pm$0.2 | 29.6$\pm$0.6 | 24.7
CutMix | 28.0$\pm$0.2 | 26.5$\pm$0.2 | 28.4$\pm$0.6 | 6.4$\pm$0.1 | 29.0$\pm$0.3 | 29.7$\pm$0.8 | 24.6
Mixstyle | 18.3$\pm$0.8 | 14.5$\pm$0.3 | 19.6$\pm$0.2 | 5.0$\pm$0.1 | 20.5$\pm$1.3 | 21.3$\pm$0.8 | 16.5
MTL | 27.3$\pm$0.2 | 26.6$\pm$0.3 | 28.7$\pm$0.1 | 7.8$\pm$0.1 | 28.2$\pm$0.2 | 28.7$\pm$0.6 | 24.5
MLDG | - | - | - | - | - | - | -
MMD | 27.6$\pm$0.4 | 26.6$\pm$0.7 | 28.5$\pm$0.3 | 7.4$\pm$0.3 | 28.9$\pm$0.3 | 29.5$\pm$0.9 | 24.8
CORAL | 18.1$\pm$12.8 | 17.4$\pm$12.3 | 18.9$\pm$13.4 | 4.8$\pm$3.4 | 19.5$\pm$13.8 | 20.2$\pm$14.3 | 16.5
SagNet | 27.6$\pm$0.3 | 25.6$\pm$0.5 | 28.5$\pm$0.7 | 7.3$\pm$0.2 | 28.8$\pm$0.5 | 28.8$\pm$0.8 | 24.4
ARM | 17.0$\pm$12.0 | 16.8$\pm$11.9 | 17.7$\pm$12.5 | 4.1$\pm$2.9 | 17.5$\pm$12.4 | 18.8$\pm$13.3 | 15.3
DANN | 26.8$\pm$0.8 | 25.2$\pm$0.9 | 28.0$\pm$0.4 | 6.8$\pm$0.6 | 27.6$\pm$0.2 | 28.0$\pm$0.2 | 23.8
CDANN | 26.8$\pm$0.7 | 25.8$\pm$0.2 | 27.6$\pm$0.2 | 6.8$\pm$0.6 | 27.6$\pm$0.2 | 28.0$\pm$0.2 | 23.8
VREx | 18.6$\pm$13.1 | 17.3$\pm$12.3 | 19.3$\pm$13.6 | 4.7$\pm$3.3 | 19.3$\pm$13.7 | 19.9$\pm$14.1 | 16.5
RSC | - | - | - | - | - | - | -
Fishr | 30.0$\pm$0.3 | 26.6$\pm$0.7 | 28.9$\pm$0.3 | 7.5$\pm$0.7 | 28.4$\pm$0.7 | 28.9$\pm$0.3 | 24.7
SAM | 28.4$\pm$0.2 | 26.9$\pm$0.1 | 29.1$\pm$0.4 | 6.9$\pm$0.5 | 30.0$\pm$0.2 | 29.8$\pm$0.7 | 25.2
UDIM (w/ SAM) | 30.0$\pm$0.1 | 23.8$\pm$0.4 | 31.0$\pm$0.1 | 12.6$\pm$0.1 | 30.7$\pm$0.2 | 34.0$\pm$0.3 | 27.0

### D.5 Additional Analyses

In this section, we provide two additional analyses: first, a comparison with Shui et al. (2022), which utilizes both data-based perturbation and parameter-based regularization in its own framework; second, visual inspections of the perturbed domain instances constructed by Eq. 11.

#### D.5.1 Analytical Comparison with Shui et al. (2022)

UDIM and Shui et al. (2022) share a similar direction, as both methodologies involve 1) generating virtual samples through distinct data perturbation methods, and 2) implementing their own types of regularization on these samples. Shui et al. (2022) introduced novel regularization techniques for the embedding function based on theoretical analysis. Their approach establishes an upper bound on the balanced error rate in the test environment, obtained from a combination of the balanced error rates in the source environments, feature-conditional invariance, and the smoothness of the embedding function. In particular, they focused on minimizing the smoothness term by reducing the Frobenius norm of the Jacobian matrix of the embedding function. To implement this regularization term over unobserved regions, they use virtual samples generated by a linear combination of samples from each source. On the other hand, UDIM also introduces an upper bound on the generalization loss in an unknown domain. This bound includes the SAM loss on the source domain and a region-based loss disparity between the worst-case domain and the source domain. This disparity is implemented by the gradient variance-based objective outlined in Eq. 10.
The objective involves a perturbed dataset constructed by perturbing samples, where the direction of perturbation is determined by the inconsistencies described in Eq. 7.

#### D.5.2 Analytic comparison with domain augmentation and adversarial attack

The proposed method, UDIM, applies a perturbation to a given instance to create a new instance, which is similar in spirit to domain augmentation and adversarial attacks. However, UDIM has the following differences and advantages compared with them.

##### Comparison with domain augmentation

First, we taxonomize domain augmentation by its dependency on learning signals such as parameters or loss, dividing it into non-learnable domain augmentation Zhang et al. (2018); Zhou et al. (2021); Li et al. (2021b) and learnable domain augmentation Zhang et al. (2018); Zhou et al. (2021); Li et al. (2021b). Please note that we only cite representative methodologies among the wide range of augmentation techniques. While non-learnable domain augmentations are effective at generating new styles and may generalize well to specific types of domains, they do not guarantee generalization across a wide range of unseen domains, as discussed in Theorem 3.2. In contrast, UDIM's data perturbation method is designed to generate perturbations towards the most vulnerable or worst-case domain from a parameter-space perspective, enabling a reduction of the generalization bound in Eq. 5 even in scenarios involving numerous unobserved domains. Additionally, it is important to note that these domain augmentation techniques can be applied orthogonally to the UDIM framework, for instance by applying domain augmentation prior to UDIM's data perturbation. Learnable augmentations, similar to UDIM, determine the augmentation direction based on the current parameter response. However, these methodologies do not link their augmentation to a theoretical analysis that assures minimization of the target objective, which is the left-hand side of Eq. 5 of our manuscript. UDIM's data perturbation affects the generalization bound from a parameter perspective, as it takes into account the parameter loss curvature information, rather than just a single parameter point, when determining perturbations.

##### Comparison with adversarial attack

Adversarial attacks also introduce perturbations in the direction most vulnerable to the current parameters, but methodologies such as FGSM Goodfellow et al. (2014) and PGD Madry et al. (2017) do not consider the local parameter curvature in their perturbation process. By integrating instance perturbations that attend to the parameter loss curvature with parameter perturbation, we facilitate the modeling of inconsistency in unknown domains, as described in Eq. 3. Having said that, Kim et al. (2023) also utilize worst-case instance selection in the active learning framework by exploiting parameter perturbation. In the coreset selection literature Feldman (2020), Shin et al. (2023) likewise utilize the perturbed parameter region to obtain samples that effectively represent the whole dataset. From a mathematical perspective, UDIM's data perturbation receives not only gradients of the simple cross-entropy loss but also additional gradients of the gradient norm, as elaborated in Eq. 7 and sketched below.
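To make the contrast concrete, the following is a minimal PyTorch sketch (ours, not the authors' released code) of a plain FGSM step versus a perturbation whose input gradient additionally carries a gradient-norm term, in the spirit of Eq. 7; the function names and the weight `lam` are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    # Standard FGSM: one signed step on the input gradient of the CE loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad_x, = torch.autograd.grad(loss, x)
    return (x + eps * grad_x.sign()).detach()

def curvature_aware_perturb(model, x, y, eps, lam=1.0):
    # Illustrative UDIM-flavored step: the perturbation direction also receives
    # the gradient of the parameter-gradient norm, so it reflects the local
    # loss curvature rather than the loss value alone.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    # create_graph=True keeps the parameter gradients differentiable in x.
    g = torch.autograd.grad(loss, params, create_graph=True)
    g_norm = torch.sqrt(sum((gi ** 2).sum() for gi in g))
    grad_x, = torch.autograd.grad(loss + lam * g_norm, x)
    return (x + eps * grad_x.sign()).detach()
```

The second function is what distinguishes a curvature-aware perturbation from FGSM/PGD: it requires a second-order pass (`create_graph=True`), which is the extra cost paid for steering the perturbation by the parameter loss landscape.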
##### Combination of domain augmentation with SAM optimizer

We accordingly report in Table 8 the experimental results of models that combine various domain augmentation techniques with SAM optimization. We report the average test accuracy over the domains of each setting. Applying SAM optimization to data instances of augmented domains led to mixed results: some methodologies improved, others did not, and all still under-performed compared to UDIM. We hypothesize that the performance decline observed for certain augmentations combined with the SAM optimizer may stem from an unstable learning process. This instability may arise from attempting to minimize sharpness in the perturbed domain prematurely, before ensuring flatness in the source domain.

Method | Leave-One-Out Source Domain Generalization | Single Source Domain Generalization
---|---|---
SAM | 85.9% | 64.8%
SAM w/ Mixup | 84.51% | 62.28%
SAM w/ Mixstyle | 86.56% | 68.59%
SAM w/ Simple Augment Li et al. (2021b) | 86.26% | 64.97%
SAM w/ advstyle Zhong et al. (2022) | 85.28% | 61.02%
UDIM | 88.7% | 74.7%

Table 8: Performance comparison of models using combinations of domain augmentation variants and the SAM optimizer, alongside the UDIM model, on the PACS dataset.

#### D.5.3 Visual Inspection on the Perturbed Domain

In this section, we illustrate how perturbed instances are created depending on the size of $\rho$, which represents the magnitude of the data perturbation. Each plot utilizes the PACS dataset and a model trained under the Single Source Domain Generalization setting. Figure 6 displays instances perturbed by the original UDIM methodology. As the perturbation size increases, we observe a gradual distortion of the image's semantics, highlighting the importance of selecting an appropriate perturbation size. Figure 7 shows the scenario where the data perturbation of the original UDIM method is applied to the input channels instead of the input pixels. Perturbing channels rather than pixels has the advantage of maintaining the basic shape and line information of the image, ensuring the preservation of essential visual features.

Figure 6: Pixel-perturbed domain instances by the UDIM model trained on the PACS dataset, varying the perturbation size

Figure 7: Channel-perturbed domain instances by the UDIM model trained on the PACS dataset, varying the perturbation size
# Towards End-to-End Synthetic Speech Detection

Guang Hua, _Member, IEEE_, Andrew Beng Jin Teoh, and Haijian Zhang

This work was supported by the 2020–2021 International Scholar Exchange Fellowship (ISEF) Program at the Chey Institute for Advanced Studies, South Korea. _(Corresponding Author: Andrew Beng Jin Teoh)_ G. Hua and H. Zhang are with the School of Electronic Information, Wuhan University, Wuhan 430072, China (e-mail: <EMAIL_ADDRESS>; [email protected]). A. B. J. Teoh is with the School of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul 120749, South Korea (e-mail: [email protected]).

###### Abstract

The constant Q transform (CQT) has been shown to be one of the most effective speech signal pre-transforms for facilitating synthetic speech detection, followed by either hand-crafted (subband) constant Q cepstral coefficient (CQCC) feature extraction and a back-end binary classifier, or a deep neural network (DNN) directly for further feature extraction and classification. Despite the rich literature on such a pipeline, we show in this paper that the pre-transform and hand-crafted features can simply be replaced by end-to-end DNNs. Specifically, we experimentally verify that, using only standard components, a light-weight neural network can outperform the state-of-the-art methods for the ASVspoof2019 challenge. The proposed model is termed Time-domain Synthetic Speech Detection Net (TSSDNet), having ResNet- or Inception-style structures. We further demonstrate that the proposed models also have attractive generalization capability. Trained on ASVspoof2019, they achieve promising detection performance when tested on the disjoint ASVspoof2015, significantly better than the existing cross-dataset results. This paper reveals the great potential of end-to-end DNNs for synthetic speech detection, without hand-crafted features.

###### Index Terms:

Synthetic speech detection, speech forensics, ASVspoof2019, ASVspoof2015, cross-dataset testing, end-to-end.

## I Introduction

The success of deep learning technology has shifted the paradigm of speech synthesis from the classic hidden Markov model based framework [1] to neural speech synthesis. Equipped with powerful deep neural network (DNN) architectures, e.g., [2], and fueled by massive training data, today's text-to-speech (TTS) systems can synthesize high-quality speech that is hard to distinguish from human voices. Despite the multitude of benefits, these advances have also improved the quality of voice spoofing attacks, including voice conversion [3], impersonation [4], cloning [5], etc., posing new challenges to synthetic speech detection.

For nearly a decade, the combination of a front-end feature extractor and a back-end binary classifier has been the _de facto_ framework for synthetic speech detection. Within this framework, an overwhelming majority of the existing works have focused on the development of hand-crafted front-end features, including fundamental frequency, power spectrum, octave spectrum, linear frequency cepstral coefficient (LFCC), mel-frequency cepstral coefficient (MFCC), cepstral mean and variance (CMVN), cochlear filter cepstral coefficient (CFCC), filter bank based cepstral coefficient, linear prediction cepstral coefficient (LPCC), modified group delay (MGD), relative phase shift (RPS), constant Q cepstral coefficient (CQCC), and many of their variations and combinations [6, 7, 8, 9, 10, 11, 12, 13, 14, 15].
Usually, one or a few of these features are used to train a Gaussian mixture model (GMM) or a support vector machine (SVM) for classification. Taking advantage of DNNs in classification tasks, multilayer perceptron (MLP) and convolutional neural network (CNN) based classifiers have been used to replace the conventional back-end classifiers [16, 17, 18, 15, 19, 20, 21, 22]. On the other hand, DNN structures have also been used at the front-end to facilitate feature extraction [23, 22, 24, 25], followed by conventional classifiers. DNNs can also work across the front- and back-end, with pre-transformed features as input [26, 27, 28, 29, 30].

Figure 1: Relationship between the existing front-end$\rightarrow$back-end pipeline and the proposed end-to-end framework for synthetic speech detection.

Among hand-crafted features, CQCC has been found to be the best choice, and it is also the baseline feature in the ASVspoof2019 challenge [31]. Recently, Yang _et al._ developed a set of subband CQCC features for better detection performance [19]. Subsequently, Das _et al._ [21] further fused $8$ hand-crafted features, followed by an MLP classifier. For deep learning based approaches, Lavrentyeva _et al._ [29] proposed the use of FFT, LFCC, and CMVN, followed by a CNN for classification, while Li _et al._ [30], using the CQT as model input, incorporated the so-called Res2Net structure and squeeze-and-excitation (SE) blocks. With score level fusion, Lavrentyeva _et al._ [29] and Li _et al._ [30] have achieved the state-of-the-art performance on the ASVspoof2019 dataset.

Based on the above overview, the existing mainstream workflow for synthetic speech detection is summarized in the brown blocks of Fig. 1. It can be seen that a time-frequency transform (e.g., CQT) of the speech waveform before hand-crafted feature extraction (e.g., CQCC), or before feeding the data into a DNN, has become an implicit standard routine in the existing works. However, since DNNs are best known for their excellent capability of feature extraction, the question naturally arises of whether these pre-transforms are necessary at all. In fact, these transforms usually discard some information about the observed speech signal. For example, the CQT feature, more precisely the log power spectrum of the CQT [30], does not carry the phase information of the signal. To further generate the CQCC, even more information is discarded [19]. From a hand-crafted feature engineering point of view, a good feature captures discriminative information between classes and is also compact in size, but the same principle may not apply to the DNN regime.

In this paper, we show that the pre-transforms, as well as the hand-crafted features, are in fact not a must for DNN based synthetic speech detection. Despite the rich hand-crafted features, we experimentally verify that, via the use of standard DNN structures, an end-to-end light-weight neural network taking only the raw speech waveform as input can achieve even better results. Our proposal is motivated by recent works analyzing raw-waveform based DNNs [32] and the attempts to apply end-to-end DNNs to related speech processing tasks, e.g., speech separation [33]. The proposed model is thus termed the Time-domain Synthetic Speech Detection Net (TSSDNet). We note that the first work on end-to-end synthetic speech detection was probably carried out by Muckenhirn _et al._ [34], in which a basic feedforward sequential CNN was used. It was tested on older datasets, without achieving state-of-the-art results.
In our design of the TSSDNet, two types of advanced CNN structures are considered, namely ResNet-style skip connections with $1\times 1$ kernels [35] and Inception-style parallel convolutions [36]. We demonstrate that, via proper training, the proposed networks outperform the state-of-the-art hand-crafted feature based detectors as well as DNN based ones on the challenging ASVspoof2019 dataset [31]. To analyze the practical merits of the proposed methods, we further perform a cross-dataset evaluation between ASVspoof2019 and ASVspoof2015 [37] (the ASVspoof2017 dataset is not considered in this paper because it only contains replay attacks; although replay attacks are sometimes considered together with synthesis attacks, the underlying mechanism is very different, and the physical access portion of ASVspoof2019 is excluded for the same reason), demonstrating their promising generalization capability.

(a) ResNet style, Res-TSSDNet. (b) Inception style, Inc-TSSDNet.

Figure 2: Structures of the proposed models, where all the conv layers apply “SAME” padding, and in all local max pooling layers, stride$=$kernel size. $M$: number of stacked ResNet- and Inception-style modules. $C_{\text{R}}$: number of channels in Res-TSSDNet. $C_{\text{I}}$: number of channels in Inc-TSSDNet.

## II The Proposed Models

In many deep learning tasks such as object recognition or semantic understanding, it has been found that, generally, the deeper the network, the better the performance [35, 36, 38]. However, in synthetic speech detection, the critical feature is the artifact left behind by data forgery, which may not contain any semantic information. Since deeper features lean towards higher-level semantic information, which may not be suitable for representing the subtle forgery artifacts, we hypothesize that the network for synthetic speech detection should be relatively shallow. Grabbing the essence of the popular ResNet [35] and Inception network [36], the proposed end-to-end TSSDNets are designed as follows.

### II-A Model Structure

The proposed Res-TSSDNet and Inc-TSSDNet are depicted in Fig. 2 (a) and (b), respectively; they share the same first layer, $3$ final fully-connected linear layers, and global max pooling before the linear layers. The ResNet-style and Inception-style blocks are repeated $M$ times, respectively, and batch normalization (BN) is applied in both networks. $C_{\text{R}}$ and $C_{\text{I}}$ denote the numbers of channels in the corresponding modules, which may vary across layers. Noticeably, to increase the receptive field and control model complexity, dilated convolutions [39] with dilation $d$ are incorporated in the Inc-TSSDNet, which differs from the original Inception network [36]. All the convolution layers apply “SAME” padding with $\text{stride}=1$, while for the pooling layers the stride equals the corresponding kernel size. A minimal sketch of the Res-TSSDNet topology is given below.
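For concreteness, the following is a minimal PyTorch sketch of the Res-TSSDNet topology as we read it from Fig. 2 (a); this is an illustration rather than the released implementation. The $1\times 7$ first-layer kernel, $M=4$ blocks with $C_{\text{R}}=\{32,64,128,128\}$, the $1\times 1$ skip connections, global max pooling, and the linear widths $C_{\text{L}}=\{64,32\}$ follow the paper, whereas the stem width, the $3$-tap block kernels, and the pooling sizes are our assumptions:

```python
import torch
import torch.nn as nn

class Res1DBlock(nn.Module):
    """One ResNet-style 1D block: two conv-BN stages with a 1x1 conv on the
    skip path to match channels, followed by local max pooling."""
    def __init__(self, c_in, c_out, pool=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),  # "SAME" padding
            nn.BatchNorm1d(c_out), nn.ReLU(),
            nn.Conv1d(c_out, c_out, kernel_size=3, padding=1),
            nn.BatchNorm1d(c_out),
        )
        self.skip = nn.Conv1d(c_in, c_out, kernel_size=1)      # 1x1 skip connection
        self.pool = nn.MaxPool1d(pool)                         # stride = kernel size

    def forward(self, x):
        return self.pool(torch.relu(self.body(x) + self.skip(x)))

class ResTSSDNetSketch(nn.Module):
    """Raw waveform in, two logits out; M = 4, C_R = {32, 64, 128, 128}."""
    def __init__(self, channels=(32, 64, 128, 128), stem_width=16):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(1, stem_width, kernel_size=7, padding=3),  # 1x7 first layer
            nn.BatchNorm1d(stem_width), nn.ReLU(), nn.MaxPool1d(4))
        blocks, c_in = [], stem_width
        for c in channels:
            blocks.append(Res1DBlock(c_in, c))
            c_in = c
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.Linear(channels[-1], 64), nn.ReLU(),
                                  nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):                  # x: (batch, 1, 96000) for 6 s at 16 kHz
        h = self.blocks(self.stem(x))
        h = h.max(dim=-1).values           # global max pooling over time
        return self.head(h)
```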
### II-B Training Strategy

#### II-B1 Data Preparation

Normally, the training data contain raw speech recordings of varied durations. To align the training data, we adopt the treatment in [30], where the training examples are truncated or repeated until the duration is $6.4$ seconds to generate the CQT feature; here we instead keep every example at $6$ seconds, with the default $16$ kHz sample rate. These $6$-second examples are then directly fed into the networks for end-to-end training. Since all convolution layers have “SAME” padding, the length of the feature vector ($9.6\times 10^{4}$ at the input) is reduced solely by the pooling layers. Note that hand-crafted feature based methods, e.g., CQCC [19], are insensitive to the length of the recording since all time slices contribute to classifier training. The batch size is set to $32$.

#### II-B2 Weighted Cross-Entropy Loss

Considering the fact that in general data-driven media content forgery detection tasks the number of genuine examples is usually much smaller than the number of fake ones, we apply a weighted cross-entropy (WCE) loss during the training phase to cope with data imbalance. Let $\\{x_{i},y_{i}\\}$ compose the labeled training set, where $\forall i$, label $y_{i}\in\\{0,1\\}$; then the WCE loss is given by

${\mathop{\rm WCE}\nolimits}\left({{\bf{z}},{y_{i}}}\right)=-{w_{{y_{i}}}}\log\left({{z_{{y_{i}}}}}\right),$ (1)

where $\mathbf{z}=[z_{0},z_{1}]$ contains the softmax probabilities of the $2$ classes, and $w_{y_{i}}$ is the inverse ratio of label $y_{i}$ in the training set. For all the training processes, we use the Adam [40] optimizer with default settings. Exponential learning rate decay with a multiplicative factor of $0.95$ is adopted. The model yielding the lowest equal error rate (EER) on the development set within $100$ epochs is selected for evaluation.

#### II-B3 Mixup Regularization

For practical forensic merits, the trained model is expected to generalize to unseen attacks, and the ASVspoof datasets have been specially designed for this purpose. In this paper, we consider mixup regularization [41] as a booster to further improve the generalization capability. Specifically, it uses a set of mixed examples and labels, instead of the original set, to train the network, i.e.,

$\tilde{x}_{i}=\lambda x_{i}+(1-\lambda)x_{j},\quad\tilde{y}_{i}=\lambda y_{i}+(1-\lambda)y_{j},$ (2)

where $\\{x_{i},y_{i}\\}$ and $\\{x_{j},y_{j}\\}$ are two randomly selected training pairs, $\lambda\sim\mathop{\rm Beta}(\alpha,\alpha)$, and $\alpha\in(0,\infty)$ is a hyperparameter. Mixup regularization can be implemented via the following equivalent loss function,

${{\mathop{\rm CE}}_{{\rm{mixup}}}}\left({\tilde{\bf{z}},{y_{i}},{{y}_{j}}}\right)=\lambda{\mathop{\rm CE}}(\tilde{\bf{z}},{y_{i}})+(1-\lambda){\mathop{\rm CE}}(\tilde{\bf{z}},{{y}_{j}}),$ (3)

where $\tilde{\bf{z}}$ contains the softmax probabilities of the mixed examples, and ${\mathop{\rm CE}}(\cdot,\cdot)$ is the standard cross-entropy (CE) loss, equivalent to setting $w_{0}=w_{1}$ in (1).
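As a minimal sketch of how (1)–(3) translate into training code (our illustration; it assumes a model with two output logits, and the function names are ours):

```python
import torch
import torch.nn.functional as F

def wce_loss(logits, y, w):
    # Weighted cross-entropy (1): w[c] is the inverse ratio of class c in the
    # training set, which up-weights the under-represented genuine class.
    return F.cross_entropy(logits, y, weight=w)

def mixup_ce_loss(model, x, y, alpha=1.0):
    # Mixup (2)-(3): mix randomly paired batches with lambda ~ Beta(alpha, alpha)
    # and combine the two standard CE losses with the same lambda.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))                  # random pairing within the batch
    logits = model(lam * x + (1.0 - lam) * x[idx])   # tilde{x} in (2)
    return (lam * F.cross_entropy(logits, y)
            + (1.0 - lam) * F.cross_entropy(logits, y[idx]))
```

Note that, as stated after (3), the mixup loss uses the standard CE, i.e., $w_{0}=w_{1}$ in (1).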
## III Results

We first present the main results obtained by the proposed networks in comparison with the benchmark and state-of-the-art solutions on the latest ASVspoof2019 dataset. We then perform an ablation study, followed by a cross-dataset evaluation on the ASVspoof2015 dataset. All the results are generated using a single GeForce GTX 1080 or 1080Ti GPU. PyTorch implementations of the proposed TSSDNets are available at: _https://github.com/ghuawhu/end-to-end-synthetic-speech-detection_.

TABLE I: EER (%) of the proposed and state-of-the-art methods on ASVspoof2019 LA dev and eval sets, $M=4$, $C_{\text{L}}=\\{64,32\\}$, $C_{\text{R}}=\\{32,64,128,128\\}$, $C_{\text{I}}=\\{8,16,32,32\\}$.

Method | #Param | Dev | Eval
---|---|---|---
Baseline LFCC+GMM [42] | - | $0.43$ | $9.57$
Baseline CQCC+GMM [42] | - | $2.71$ | $8.09$
Subband CQCC+MLP [19] | - | - | $8.04$
$8$ Features+MLP [21] | - | $0.00$ | $4.13$
Spec+VGG+SincNet [28] | $>4.32$M | $0.00$ | $8.01$
Spec+CQCC+ResNet+SE [27] | $5.80$M | $0.00$ | $6.70$
FFT+CNN [29] | $10.2$M | $0.04$ | $4.53$
$3$ Features+CNN [29] | $30.6$M | $0.00$ | $1.86$
CQT+Res2Net+SE [30] | $0.92$M | $0.43$ | $2.50$
$3$ Features+Res2Net+SE [30] | $2.76$M | $0.00$ | $1.89$
CQT+2D-Res-TSSDNet | $0.97$M | $0.59$ | $5.89$
End-to-End Res-TSSDNet | $0.35$M | $0.74$ | $\mathbf{1.64}$
End-to-End Inc-TSSDNet | $\mathbf{0.09}$M | $1.09$ | $4.04$

### III-A Main Results

The comparison of the results in terms of EER obtained on the logical access (LA) development and evaluation sets of the ASVspoof2019 challenge is presented in Table I, where the 2D-Res-TSSDNet is the 2D version of the Res-TSSDNet, having the same architecture except that all the convolution and pooling ($2\times 2$ pooling) layers use 2D kernels instead.

TABLE II: Ablation study of Res-TSSDNet and Inc-TSSDNet, using ASVspoof2019 LA eval EER (%), $C_{\text{L}}=\\{64,32\\}$.

Res-TSSDNet:

$M$ | $C_{\text{R}}$ | $1\times 1$ | #Param | Eval
---|---|---|---|---
$3$ | $\\{32,64,128\\}$ | Yes | $0.18$M | $11.37$
$4$ | $\\{32,64,128,128\\}$ | No | $0.32$M | $2.69$
$4$ | $\\{32,64,128,128\\}$ | Yes | $0.35$M | $\mathbf{1.64}$
$5$ | $\\{32,64,128,128,128\\}$ | No | $0.47$M | $5.14$
$5$ | $\\{32,64,128,128,128\\}$ | Yes | $0.51$M | $4.58$

Inc-TSSDNet:

$M$ | $C_{\text{I}}$ | Dilation $d$ | #Param | Eval
---|---|---|---|---
$3$ | $\\{8,16,32\\}$ | $\\{2^{0},\ldots,2^{3}\\}$ | $0.04$M | $10.39$
$4$ | $\\{8,16,32,32\\}$ | $\\{2^{0},\ldots,2^{3}\\}$ | $0.09$M | $4.04$
$5$ | $\\{8,16,32,64,64\\}$ | $\\{2^{0},\ldots,2^{3}\\}$ | $0.35$M | $5.31$
$4$ | $\\{8,16,32,32\\}$ | $\\{2^{0},\ldots,2^{7}\\}$ | $0.34$M | $\mathbf{3.75}$
$5$ | $\\{8,16,32,64,64\\}$ | $\\{2^{0},\ldots,2^{7}\\}$ | $1.34$M | $4.20$

We make the following remarks on the main results. i) The works [19] and [21] represent the best results of sophisticated hand-crafted feature engineering plus an MLP as the back-end classifier. ii) The majority of recent works belong to the type of pre-transform (or light feature engineering) plus DNNs that further perform feature extraction and classification. iii) All the works incorporating DNNs rely on feature and model fusion for performance improvement, and in [29] and [30] the fused results have achieved EERs below $2\%$. iv) The 2D-Res-TSSDNet result is obtained with experimental settings identical to [30] without fusion, and it can be seen that when working with 2D pre-transform input, the use of advanced DNN components, i.e., Res2Net and SE, becomes very necessary. v) Most importantly, the proposed Res-TSSDNet is a single end-to-end network (no fusion, no feature engineering) containing less than half the trainable weights of the one in [30] and only about one-tenth of those in [29], yet it achieves the overall lowest evaluation EER by a clear margin. vi) Lastly, the Inc-TSSDNet is extremely light, having only $0.09$M parameters, but it still achieves an EER lower than those of the heavy models in [27, 28, 29]. The EER metric used in all of these comparisons can be computed as sketched below.
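Since all comparisons above are reported in terms of EER, we include a simple reference implementation of how an EER can be computed from detection scores (a generic sketch; any standard DET-based routine would do):

```python
import numpy as np

def compute_eer(scores, labels):
    """Equal error rate: the operating point where the false-acceptance rate
    (spoof accepted) meets the false-rejection rate (genuine rejected).
    scores: higher means more likely genuine; labels: 1 genuine, 0 spoof."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    fars, frrs = [], []
    for t in np.sort(np.unique(scores)):
        accept = scores >= t
        fars.append(np.mean(accept[labels == 0]))
        frrs.append(np.mean(~accept[labels == 1]))
    fars, frrs = np.array(fars), np.array(frrs)
    i = np.argmin(np.abs(fars - frrs))   # closest crossing on the finite grid
    return (fars[i] + frrs[i]) / 2.0
```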
### III-B Ablation Study

We first perform an ablation study by varying the depth or width of the networks; the results are summarized in Table II. For the Res-TSSDNet, the column “$1\times 1$” indicates whether the “skip connection” in Fig. 2 (a) is used. It can be seen that going either shallower or deeper results in a rise in EER, while with the ResNet skip connection the network achieves a $1.05\%$ EER reduction over the one without it. Similarly, for the Inc-TSSDNet the sweet spot also lies at a moderate depth and width.

We further perform an intra-model sensitivity analysis using the two proposed end-to-end models in Table I. Fixing all hyperparameters, the two models are trained from scratch on the ASVspoof2019 training set more than $30$ times, and the dev and eval EERs are summarized in Fig. 3. It can be seen that the EERs of both models are bounded within certain ranges (except one outlier eval EER $>4\%$ for the Res-TSSDNet). The Inc-TSSDNet yields tighter dev EERs than the Res-TSSDNet, but the eval EERs of the former are clearly higher. We can see from Fig. 3 and Table II that the intra-model differences may be as significant as the differences across model configurations. Relatively lighter models are hence recommended for their better trade-off between accuracy and efficiency.

Figure 3: Intra-model performance on ASVspoof2019.

In addition, we have also discovered that i) changing all the activations from ReLU to leaky or parametric ReLU does not lead to a clear performance difference; ii) the first layer with a $1\times 7$ convolution kernel, adopted from the ResNet setting, is slightly better than a $1\times 3$ kernel; iii) global max pooling is found to be more effective than global average pooling before the linear layers for both networks, but for the 2D-Res-TSSDNet we stick to global average pooling; iv) the EERs obtained with the standard CE are slightly higher than those with the WCE; v) the duration of the training examples also matters: experiments with $5$-second truncation yielded a slight performance degradation, but with $2$-second truncation the EER on the evaluation set increased drastically.

### III-C Cross-Dataset Testing

We now perform the cross-dataset experiments. Since the ASVspoof2015 training set contains relatively old speech synthesis methods, we focus on networks trained on the training set of the more advanced ASVspoof2019 and tested on the dev and eval sets of ASVspoof2015. The intra- and inter-dataset EERs are presented in Table III. It can be seen that the GMMs learned from LFCC and CQCC features on ASVspoof2019 are generally inconsistent with the data in ASVspoof2015. The best Res-TSSDNet on ASVspoof2019 could not generalize to ASVspoof2015 either; its EERs indicate almost indistinguishable softmax probability distributions for the real and fake classes. However, by incorporating mixup regularization and increasing the mixup level $\alpha$, we observe that the Res-TSSDNet can significantly reduce the cross-dataset EERs to less than $2\%$, while slightly sacrificing the performance on the original dataset. Further, all the Inc-TSSDNets have very attractive generalization capability, even the lightest model. The $M=5$, $8$-branch version yields the best cross-dataset performance with $1.96\%$ eval EER. This is a significant result compared to the existing cross-dataset results reported in [43, 44, 45]. Noticeably, in [45], also trained on the ASVspoof2019 training set and tested on ASVspoof2015, the use of CQT based features could only achieve EERs greater than $20\%$ (see Table 2 in [45]). For completeness, the detection error trade-off (DET) curves on the ASVspoof2015 evaluation set using a few methods in Table III are provided in Fig. 4.
TABLE III: EER (%) of networks trained on the ASVspoof2019 training set, tested on the ASVspoof2015 dev and eval sets.

Method | 2019 Eval | 2015 Dev | 2015 Eval
---|---|---|---
Baseline LFCC+GMM [42] | $9.57$ | $19.82$ | $15.91$
Baseline CQCC+GMM [42] | $8.09$ | $47.72$ | $39.90$
Res-TSSDNet | $1.64$ | $39.42$ | $42.52$
Mixup, $\alpha=0.1$, Res-TSSDNet | $2.07$ | $5.48$ | $5.46$
Mixup, $\alpha=0.5$, Res-TSSDNet | $2.29$ | $3.50$ | $5.75$
Mixup, $\alpha=1.0$, Res-TSSDNet | $2.16$ | $\mathbf{0.71}$ | $\mathbf{1.95}$
$M=3$, $4$-branch, Inc-TSSDNet | $10.39$ | $5.31$ | $5.24$
$M=4$, $4$-branch, Inc-TSSDNet | $4.04$ | $2.78$ | $3.29$
$M=4$, $8$-branch, Inc-TSSDNet | $3.75$ | $1.84$ | $2.16$
$M=5$, $8$-branch, Inc-TSSDNet | $4.20$ | $\mathbf{1.31}$ | $\mathbf{1.96}$

Figure 4: DET curves of cross-dataset testing on the ASVspoof2015 eval set.

## IV Conclusion

We have shown that a light-weight end-to-end neural network, significantly different from the existing front- and back-end pipeline, can achieve the best synthetic speech detection results to date. It reduces the ASVspoof2019 eval EER by a clear margin compared to much heavier networks fed with pre-transformed inputs, sophisticated hand-crafted features plus MLP classifiers, or the fusion of many systems of such kinds. We have further shown via cross-dataset testing that the proposed networks can also generalize to an unseen dataset. In the ongoing ASVspoof2021 challenge, a new speech deepfake (DF) detection task is introduced specially for synthetic deepfake speech detection, and end-to-end methods are being given more attention, e.g., RawNet2 [46] is used as a baseline.

## References

* [1] K. Tokuda, Y. Nankaku, T. Toda, H. Zen, J. Yamagishi, and K. Oura, “Speech synthesis based on hidden Markov models,” _Proc. IEEE_ , vol. 101, no. 5, pp. 1234–1252, May 2013.
* [2] Y. Ren, C. Hu, X. Tan, T. Qin, S. Zhao, Z. Zhao, and T.-Y. Liu, “FastSpeech 2: Fast and high-quality end-to-end text to speech,” in _Proc. Int. Conf. Learning Representations (ICLR)_ , 2021, pp. 1–15.
* [3] X. Tian, S. W. Lee, Z. Wu, E. S. Chng, and H. Li, “An exemplar-based approach to frequency warping for voice conversion,” _IEEE/ACM Trans. Audio, Speech, Lang. Process._ , vol. 25, no. 10, pp. 1863–1876, 2017.
* [4] Y. Gao, R. Singh, and B. Raj, “Voice impersonation using generative adversarial networks,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018)_ , Apr. 2018, pp. 2506–2510.
* [5] S. Ö. Arık, J. Chen, K. Peng, W. Ping, and Y. Zhou, “Neural voice cloning with a few samples,” in _Proc. the 32nd International Conference on Neural Information Processing Systems (NeurIPS)_ , 2018, pp. 10 019–10 029.
* [6] J. Sanchez, I. Saratxaga, I. Hernáez, E. Navas, D. Erro, and T. Raitio, “Toward a universal synthetic speech spoofing detection using phase information,” _IEEE Trans. Inf. Forensics Security_ , vol. 10, no. 4, pp. 810–820, Apr. 2015.
* [7] M. Sahidullah, T. Kinnunen, and C. Hanilçi, “A comparison of features for synthetic speech detection,” in _Proc. Interspeech_ , 2015.
* [8] I. Saratxaga, J. Sanchez, Z. Wu, I. Hernaez, and E. Navas, “Synthetic speech detection using phase information,” _Speech Communication_ , vol. 81, pp. 30–41, 2016.
* [9] T. B. Patel and H. A. Patil, “Significance of source–filter interaction for classification of natural vs. spoofed speech,” _IEEE J. Sel. Topics Signal Process._ , vol. 11, no. 4, pp. 644–659, 2017.
* [10] ——, “Cochlear filter and instantaneous frequency based features for spoofed speech detection,” _IEEE J. Sel. Topics Signal Process._ , vol. 11, no. 4, pp. 618–631, 2017. * [11] D. Paul, M. Pal, and G. Saha, “Spectral features for synthetic speech detection,” _IEEE J. Sel. Topics Signal Process._ , vol. 11, no. 4, pp. 605–617, Jun. 2017. * [12] L. Wang, S. Nakagawa, Z. Zhang, Y. Yoshida, and Y. Kawakami, “Spoofing speech detection using modified relative phase information,” _IEEE J. Sel. Topics Signal Process._ , vol. 11, no. 4, pp. 660–670, 2017\. * [13] M. Todisco, H. Delgado, and N. Evans, “Constant Q cepstral coefficients: A spoofing countermeasure for automatic speaker verification,” _Computer Speech & Language_, vol. 45, pp. 516–535, 2017. * [14] M. Pal, D. Paul, and G. Saha, “Synthetic speech detection using fundamental frequency variation and spectral features,” _Computer Speech & Language_, vol. 48, pp. 31–50, 2018. * [15] J. Yang, R. K. Das, and N. Zhou, “Extraction of octave spectra information for spoofing attack detection,” _IEEE/ACM Trans. Audio, Speech, Lang. Process._ , vol. 27, no. 12, pp. 2373–2384, Dec. 2019. * [16] X. Tian, X. Xiao, E. S. Chng, and H. Li, “Spoofing speech detection using temporal convolutional neural network,” in _2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)_ , 2016, pp. 1–6. * [17] H. Muckenhirn, P. Korshunov, M. Magimai-Doss, and S. Marcel, “Long-term spectral statistics for voice presentation attack detection,” _IEEE/ACM Trans. Audio, Speech, Lang. Process._ , vol. 25, no. 11, pp. 2098–2111, 2017. * [18] H. Yu, Z. Tan, Z. Ma, R. Martin, and J. Guo, “Spoofing detection in automatic speaker verification systems using dnn classifiers and dynamic acoustic features,” _IEEE Trans. Neural Netw. Learn. Syst._ , vol. 29, no. 10, pp. 4633–4644, Oct. 2018. * [19] J. Yang, R. K. Das, and H. Li, “Significance of subband features for synthetic speech detection,” _IEEE Trans. Inf. Forensics Security_ , vol. 15, pp. 2160–2170, 2020. * [20] C. Zhang, C. Yu, and J. H. L. Hansen, “An investigation of deep-learning frameworks for speaker verification antispoofing,” _IEEE J. Sel. Topics Signal Process._ , vol. 11, no. 4, pp. 684–694, 2017. * [21] R. K. Das, J. Yang, and H. Li, “Long range acoustic features for spoofed speech detection,” in _Proc. Interspeech_ , 2019, pp. 1058–1062. * [22] Z. Chen, Z. Xie, W. Zhang, and X. Xu, “ResNet and model fusion for automatic spoofing detection,” in _Proc. Interspeech_ , Aug. 2017, pp. 102–106. * [23] Y. Qian, N. Chen, and K. Yu, “Deep features for automatic spoofing detection,” _Speech Communication_ , vol. 85, pp. 43–52, 2016. * [24] M. Adiban, H. Sameti, and S. Shehnepoor, “Replay spoofing countermeasure using autoencoder and siamese networks on asvspoof 2019 challenge,” _Computer Speech & Language_, vol. 64, no. 101105, pp. 1–13, 2020. * [25] Y. Qian, N. Chen, H. Dinkel, and Z. Wu, “Deep feature engineering for noise robust spoofing detection,” _IEEE/ACM Trans. Audio, Speech, Lang. Process._ , vol. 25, no. 10, pp. 1942–1955, 2017. * [26] B. Chettri, D. Stoller, V. Morfi, M. A. M. Ramírez, E. Benetos, and B. L. Sturm, “Ensemble Models for Spoofing Detection in Automatic Speaker Verification,” in _Proc. Interspeech_ , 2019, pp. 1018–1022. * [27] C.-I. Lai, N. Chen, J. Villalba, and N. Dehak, “ASSERT: Anti-Spoofing with Squeeze-Excitation and Residual Networks,” in _Proc. Interspeech 2019_ , 2019, pp. 1013–1017. * [28] H. Zeinali, T. Stafylakis, G. Athanasopoulou, J. 
Rohdin, I. Gkinis, L. Burget, and J. Černocký, “Detecting Spoofing Attacks Using VGG and SincNet: BUT-Omilia Submission to ASVspoof 2019 Challenge,” in _Proc. Interspeech_ , 2019, pp. 1073–1077. * [29] G. Lavrentyeva, S. Novoselov, A. Tseren, M. Volkova, A. Gorlanov, and A. Kozlov, “STC antispoofing systems for the ASVspoof2019 challenge,” in _Proc. Interspeech_ , 2019, pp. 1033–1037. * [30] X. Li, N. Li, C. Weng, X. Liu, D. Su, D. Yu, and H. Meng, “Replay and synthetic speech detection with Res2Net architecture,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021)_ , 2021. * [31] X. Wang and _et al._ , “ASVspoof 2019: A large-scale public database of synthesized, converted and replayed speech,” _Computer Speech & Language_, vol. 64, no. 101114, pp. 1–24, 2020. * [32] H. Muckenhirn, V. Abrol, M. Magimai-Doss, and S. Marcel, “Understanding and Visualizing Raw Waveform-Based CNNs,” in _Proc. Interspeech 2019_ , 2019, pp. 2345–2349. * [33] Y. Luo and N. Mesgarani, “Conv-TasNet: Surpassing ideal time-frequency magnitude masking for speech separation,” _IEEE/ACM Trans. Audio, Speech, Lang. Process._ , vol. 27, no. 8, pp. 1256–1266, 2019. * [34] H. Muckenhirn, M. Magimai-Doss, and S. Marcel, “End-to-end convolutional neural network-based voice presentation attack detection,” in _2017 IEEE International Joint Conference on Biometrics (IJCB)_ , 2017, pp. 335–341. * [35] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” _arXiv, 1512.03385_ , pp. 1–14, 2015. * [36] C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in _2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2015, pp. 1–9. * [37] Z. Wu, T. Kinnunen, N. Evans, J. Yamagishi, C. Hanilci, M. Sahidullah, and A. Sizov, “ASVspoof 2015: the first automatic speaker verification spoofing and countermeasures challenge,” in _Proc. Interspeech_ , 2015, pp. 1–5. * [38] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in _Proc. International Conference on Learning Representations (ICLR)_ , 2015, pp. 1–14. * [39] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” in _Proc. International Conference on Learning Representations (ICLR)_ , 2016, pp. 1–13. * [40] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _Proc. International Conference on Learning Representations (ICLR)_ , 2017, pp. 1–15. * [41] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, “Mixup: Beyond empirical risk minimization,” in _Proc. International Conference on Learning Representations (ICLR)_ , 2018, pp. 1–13. * [42] M. Todisco, X. Wang, V. Vestman, M. Sahidullah, H. Delgado, A. Nautsch, J. Yamagishi, N. Evans, T. Kinnunen, and K. A. Lee, “ASVspoof 2019: Future horizons in spoofed and fake audio detection,” in _Proc. Interspeech_ , 2019, pp. 1008–1012. * [43] D. Paul, M. Sahidullah, and G. Saha, “Generalization of spoofing countermeasures: A case study with ASVspoof 2015 and BTAS 2016 corpora,” in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2017, pp. 2047–2051. * [44] P. Korshunov and S. Marcel, “A cross-database study of voice presentation attack detection,” in _Handbook of Biometric Anti-Spoofing–Presentation Attack Detection, 2nd Ed._ Springer, 2019, pp. 363–389. * [45] R. K. Das, J. Yang, and H. 
Li, “Assessing the scope of generalized countermeasures for anti-spoofing,” in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2020, pp. 6589–6593. * [46] H. Tak, J. Patino, M. Todisco, A. Nautsch, N. Evans, and A. Larcher, “End-to-end anti-spoofing with RawNet2,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2021, pp. 6369–6373.
# Stability analysis and control of decision-making of miners in blockchain

Kosuke Toda Graduate School of Engineering Science, Osaka University, Machikaneyama 1-3, Toyonaka-shi, Osaka, 560–8531, Japan. Naomi Kuze Graduate School of Engineering Science, Osaka University, Machikaneyama 1-3, Toyonaka-shi, Osaka, 560–8531, Japan. Toshimitsu Ushio Graduate School of Engineering Science, Osaka University, Machikaneyama 1-3, Toyonaka-shi, Osaka, 560–8531, Japan.

###### Abstract

To maintain blockchain-based services while ensuring their security, an important issue is how to set the mining reward so that the number of miners participating in the mining increases. We propose a dynamical model of the decision-making of miners using an evolutionary game approach and analyze the stability of the equilibrium points of the proposed model. The proposed model is described by a first-order differential equation; it is simple, but its theoretical analysis gives insight into the characteristics of the decision-making. Through the analysis of the equilibrium points, we show transcritical bifurcations and hysteresis phenomena of the equilibrium points. We also design a controller that determines the mining reward based on the number of participating miners to stabilize the state where all miners participate in the mining. Numerical simulation shows that there is a trade-off in the choice of the design parameters.

Keywords— Blockchain, Proof-of-work, Decision-making, Evolutionary game, Bifurcation, Hysteresis, Feedback control.

## 1 Introduction

Blockchain is a distributed ledger technology for recording transactions that underlies various fields such as digital currency like Bitcoin [1], data sharing [2], and computer security [3]. Blockchain-based services use cryptography to record transactions as a chain of blocks. A block consists of a block header and transaction data. The block header contains a cryptographic hash of its previous block, which makes blockchain-based services resistant to tampering. In these services, participants called miners create blocks in a distributed manner, and the longest chain of blocks is considered to be legitimate. When a miner succeeds in creating a block, he/she gets a reward called a mining reward.

Blockchain-based services approve transactions through a consensus algorithm. As a consensus algorithm, proof-of-work (PoW) is typically used. In this algorithm, the mining difficulty is set using a scalar value called a nonce in the block header. To create a block, miners must find a nonce such that the cryptographic hash value for the previous block satisfies specific conditions. The process of creating blocks is called mining. In general, the cryptographic hash value of a block is unique according to the nonce contained in the block. Moreover, a nonce that satisfies the specific conditions cannot be calculated directly. As a result, an exhaustive search imposes a large computational cost on miners, which contributes to the resistance to tampering.

Because transaction approvals depend on miner calculations (such calculations are very costly and require a lot of energy [4, 5]), the participation of many miners is needed to maintain blockchain-based services and ensure blockchain system security [6, 7]. Therefore, it is important to analyze the decision-making problem of whether miners participate in the mining according to the energy consumption and mining rewards. Game theory is used to analyze interactions among rational decision-makers.
Many studies have adopted game theory to analyze blockchain-related issues with PoW [8], such as decision-making problems in the mining considering the energy consumption [9, 10]. Evolutionary game theory has been used as a powerful mathematical tool for analyzing dynamical models of evolutionary selection [11]. Dynamical characteristics of the selection process are modeled by replicator dynamics. Control methods for the replicator dynamics have been studied in [12, 13, 14]. Evolutionary game models and replicator dynamics have also been used to analyze blockchain-related issues such as mining pool selection problems [15, 16] and attack scenarios [17].

We previously focused on the decision-making problem of whether miners participate in the mining according to the energy consumption and the mining rewards, and modeled it as a non-cooperative game. Through theoretical and numerical analysis, we showed properties of its Nash equilibria [18]. However, that study assumed that, once miners choose a strategy (i.e., participation in the mining or not), they do not change their strategies. Practically, miners may decide whether to participate in the mining dynamically, based on their currently earned mining rewards. It is important to analyze such a dynamical decision-making process.

In this paper, we propose a dynamical model of the decision-making problem for miners by applying an evolutionary game approach. We analyze the stability of its equilibrium points and show the existence of transcritical bifurcations and hysteresis phenomena with the coexistence of two asymptotically stable equilibrium points: one corresponds to the state where all miners participate in the mining, and the other to the state where the number of participating miners is minimal. The former equilibrium point is preferable for maintaining blockchain-based services. We propose a controller that determines the mining reward based on the number of currently participating miners so as to stabilize this equilibrium point, provided at least one miner participates at the initial time.

The remainder of this paper is organized as follows. In Section 2, we propose an evolutionary game-based dynamical model of the decision-making process. In Section 3, we analyze the stability of its equilibrium points. In Section 4, we design a state feedback controller to let all miners participate in the mining.

## 2 Dynamical model of decision-making

We assume that miners in a blockchain network are partitioned into two sets $\mathcal{M}$ and $\mathcal{N}$, where miners in $\mathcal{M}$ always participate in the mining and those in $\mathcal{N}$ have two strategies, participating in the mining (strategy $s_{k}=1$) and not participating in the mining (strategy $s_{k}=0$), where $k\in\mathcal{N}$. Note that $\mathcal{M}\cap\mathcal{N}=\emptyset$. Denoted by $m$ and $n$ are the cardinalities of $\mathcal{M}$ and $\mathcal{N}$, respectively (we assume $m\geq 1$ and $n\geq 1$). We define $x_{0}$ and $x_{1}$ as the ratios of miners in $\mathcal{N}$ that choose strategies $0$ and $1$, respectively. Note that

$\displaystyle x_{0}+x_{1}=1.$ (2.1)

Miners need to find a nonce such that the first $h$ bits of the hash of the block are all $0$. Then, $D=2^{h}$ is the difficulty parameter and $1/D$ is the probability that a miner creates a block with one hash calculation [19]. When miner $k\in\mathcal{M}\cup\mathcal{N}$ participates in the mining, he/she incurs a cost $c$ per unit operating time.
The average number $w_{k}=f_{k}(c)$ of hash queries calculated per unit operating time by miner $k$ depends on the cost $c$, and we assume that $f_{k}(c)$ is the same for all miners. In this paper, for simplicity, we assume $f_{k}(c)=vc\,(v>0)$ for any $k\in\mathcal{M}\cup\mathcal{N}$. The mining of blocks can be described as a Poisson process [20, 21]. That is, the block creation time is exponentially distributed [22]. The rate $\lambda_{k}$ of the Poisson process of miner $k\in\mathcal{M}\cup\mathcal{N}$ is given by $\lambda_{k}=w_{k}/D$ [21] (note that a combination of independent Poisson processes is still a Poisson process; thus, the rate of the Poisson process of all miners is written as $\sum_{i\in\mathcal{M}\cup\mathcal{N}}\lambda_{i}$). If miner $k$ chooses $s_{k}=1$, then the rate of the Poisson process is $\lambda_{k}=s_{k}f_{k}(c)/D=s_{k}c/d$, where we define $d=D/v$. Let $R$ be the mining reward. Based on the previous work [18], the expected reward $R_{k}$ and the expected cost $CS_{k}$ for the mining of miner $k$ are calculated as follows.

$\displaystyle R_{k}=\frac{\lambda_{k}}{\sum_{i\in\mathcal{M}\cup\mathcal{N}}\lambda_{i}}R=\frac{Rs_{k}}{m+nx_{1}},$ (2.2)

$\displaystyle CS_{k}=\frac{c\lambda_{k}}{(\sum_{i\in\mathcal{M}\cup\mathcal{N}}\lambda_{i})^{2}}=\frac{ds_{k}}{(m+nx_{1})^{2}}.$ (2.3)

We define the utility function $u_{i}(x_{0},x_{1})$ of miners that choose the strategy $i\in\{0,\ 1\}$ as

$\displaystyle u_{i}(x_{0},x_{1})=\begin{cases}0&\mbox{if}\;\;i=0,\\ \frac{1}{m+nx_{1}}\left(R-\frac{d}{m+nx_{1}}\right)&\mbox{if}\;\;i=1,\end{cases}$ (2.4)

which means that the utility of a miner who participates in the mining is the difference between the expected reward $R_{k}$ and the expected cost $CS_{k}$. Based on the principle of the evolutionary game [11], the dynamics of the ratio of miners that choose the strategy $i$ is given by

$\displaystyle\frac{\dot{x}_{i}}{x_{i}}=u_{i}(x_{0},x_{1})-\bar{u}(x_{0},x_{1})\;(i=0,1),$ (2.5)

where $\bar{u}(x_{0},x_{1})=\sum_{i=0}^{1}x_{i}u_{i}(x_{0},x_{1})$ is the average utility of all miners. According to (2.4), (2.5) is rewritten as

$\displaystyle\dot{x}_{1}=-\dot{x}_{0}=\frac{x_{1}(1-x_{1})}{m+nx_{1}}\left(R-\frac{d}{m+nx_{1}}\right)\eqqcolon\varphi_{R}(x_{1}).$ (2.6)

Thus, the dynamics of the decision-making of miners is described by the above first-order differential equation, and the reward $R$ plays an important role in the decision-making of the miners for the participation in the mining. In the following, we investigate stability and stabilization of equilibrium points of (2.6). For that purpose, the concept of a basin of attraction [23] is important. Let $\xi(t;x_{1}^{\mathrm{init}})$ be the solution of (2.6) that starts from an initial state $x_{1}^{\mathrm{init}}$ at time $t=0$. For a given asymptotically stable equilibrium point $x_{1}^{\prime}$ of (2.6), the basin of attraction is defined as the set of all initial states $x_{1}^{\mathrm{init}}$ such that $\xi(t;x_{1}^{\mathrm{init}})$ is defined for all $t\geq 0$ and $\lim_{t\to\infty}\xi(t;x_{1}^{\mathrm{init}})=x_{1}^{\prime}$.

## 3 Stability analysis

In this section, we investigate the stability of the equilibrium points $x_{1}=0,\ 1,\ x_{1}^{*}$ of (2.6), where

$\displaystyle x_{1}^{*}=\frac{1}{n}\left(\frac{d}{R}-m\right).$ (3.1)

When $1/(m+n)<R/d<1/m$, the equilibrium point $x_{1}^{*}$ satisfies $0<x_{1}^{*}<1$.
This equilibrium point is the state where the utility $u_{1}(1-x_{1}^{*},x_{1}^{*})$ is equal to $0$, that is, the utility for the strategy $0$ is equal to that for the strategy $1$. We investigate the local stability of the three equilibrium points. The derivative of $\varphi_{R}(x_{1})$ with respect to $x_{1}$ is

$\displaystyle\frac{\partial\varphi_{R}(x_{1})}{\partial x_{1}}=\left(-\frac{x_{1}}{m+nx_{1}}+\frac{1-x_{1}}{m+nx_{1}}-\frac{nx_{1}(1-x_{1})}{(m+nx_{1})^{2}}\right)\left(R-\frac{d}{m+nx_{1}}\right)+\frac{dnx_{1}(1-x_{1})}{(m+nx_{1})^{3}}.$ (3.2)

Thus, we obtain

$\displaystyle\left.\frac{\partial\varphi_{R}(x_{1})}{\partial x_{1}}\right|_{x_{1}=0}=\frac{1}{m}\left(R-\frac{d}{m}\right),$ (3.3)
$\displaystyle\left.\frac{\partial\varphi_{R}(x_{1})}{\partial x_{1}}\right|_{x_{1}=1}=-\frac{1}{m+n}\left(R-\frac{d}{m+n}\right),$ (3.4)
$\displaystyle\left.\frac{\partial\varphi_{R}(x_{1})}{\partial x_{1}}\right|_{x_{1}=x_{1}^{*}}=\frac{R^{3}}{nd^{2}}\left(\frac{d}{R}-m\right)\left((m+n)-\frac{d}{R}\right).$ (3.5)

Table 1: The relation between $R/d$ and the stability of equilibrium points.

Condition for $R/d$ | $x_{1}=0$ | $x_{1}=1$ | $x_{1}=x_{1}^{*}$
---|---|---|---
$R/d<1/(m+n)$ | S | U | S
$1/(m+n)<R/d<1/m$ | S | S | U
$R/d>1/m$ | U | S | S

Thus, we have their stability conditions as shown in Table 1, where S (resp. U) represents an asymptotically stable (resp. unstable) point.

Figure 1: The $m$–$R/d$ parameter plane where $n$ is fixed to $n=2$.

Fig. 1 shows the $m$–$R/d$ parameter plane with $n=2$, where the meaning of each region is as follows. In the region $(A)$, both $x_{1}=1$ and $x_{1}^{*}<0$ are asymptotically stable equilibrium points, and the basin of attraction of $x_{1}=1$ is $(0,\ \infty)$, that is, every solution of (2.6) starting in $(0,1]$ converges to $1$. In the region $(B)$, both $x_{1}=0$ and $x_{1}=1$ are asymptotically stable equilibrium points, and the basins of attraction of $x_{1}=0$ and $x_{1}=1$ are $(-\infty,\ x_{1}^{*})$ and $(x_{1}^{*},\ \infty)$, respectively, that is, every solution of (2.6) starting in $[0,\ x_{1}^{*})$ converges to $0$, and every solution of (2.6) starting in $(x_{1}^{*},\ 1]$ converges to $1$. In the region $(C)$, both $x_{1}=0$ and $x_{1}^{*}>1$ are asymptotically stable equilibrium points, and the basin of attraction of $x_{1}=0$ is $(-\infty,\ 1)$, that is, every solution of (2.6) starting in $[0,\ 1)$ converges to $0$.

Figure 2: The relation between $R/d$ and the stability of equilibrium points when $m=n=2$.

Shown in Fig. 2 is a bifurcation diagram with respect to the bifurcation parameter $R/d$, where $m=n=2$. The solid (resp. dashed) line represents an asymptotically stable (resp. unstable) equilibrium point. Two curves of equilibrium points pass through $(x_{1},R)=(1,d/(m+n))$ (resp. $(x_{1},R)=(0,d/m)$), one given by $x_{1}=x_{1}^{*}$, the other by $x_{1}=1$ (resp. $x_{1}=0$). Both curves exist on both sides of $R=d/(m+n)$ (resp. $R=d/m$). The stability along each curve exchanges on passing through $R=d/(m+n)$ (resp. $R=d/m$). Thus, an exchange of stability (known as a transcritical bifurcation [24]) is observed at $(x_{1},R)=(0,d/m)$ and $(1,d/(m+n))$. We show in Appendix A that the vector field (2.6) satisfies the conditions for a transcritical bifurcation given in [24]. Since the values of $x_{i}\,(i=0,1)$ satisfy $0\leq x_{i}\leq 1$, we observe jump phenomena owing to these transcritical bifurcations. Moreover, $R>d/m$ needs to be satisfied so that all miners in $\mathcal{N}$ participate in the mining.
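The stability regions of Table 1 and the corresponding basins of attraction can be checked by integrating (2.6) directly. The following minimal Python sketch (our illustration, not part of the original paper; it assumes SciPy is installed) uses the bistable parameter values of region $(B)$, which are also the values used in Fig. 3 below:

```python
from scipy.integrate import solve_ivp

# Parameter values in region (B), where 1/(m+n) < R/d < 1/m.
m, n, d, R = 2, 2, 100.0, 40.0
x1_star = (d / R - m) / n  # interior equilibrium (3.1); here x1* = 0.25

def phi_R(t, x1):
    """Right-hand side of the replicator dynamics (2.6)."""
    return x1 * (1.0 - x1) / (m + n * x1) * (R - d / (m + n * x1))

for x1_init in (0.1, 0.9):  # one initial state on each side of x1*
    sol = solve_ivp(phi_R, (0.0, 200.0), [x1_init], rtol=1e-8, atol=1e-10)
    print(f"x1(0) = {x1_init:.2f}  ->  x1(200) = {sol.y[0, -1]:.4f}")
# Expected: 0.10 -> ~0.0000 and 0.90 -> ~1.0000, matching the basins
# [0, x1*) and (x1*, 1] of the two stable equilibria in region (B).
```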
It is noted that, once the miners participate in the mining, they continue to participate until the reward $R$ falls to $d/(m+n)$. Thus, a hysteresis phenomenon of the equilibrium points is observed.

Figure 3: Trajectories of (2.6) when $m=n=2$, $d=100$, and $R=40$, from $x_{1}^{\mathrm{init}}=0.1$ (blue) and $x_{1}^{\mathrm{init}}=0.9$ (red).

Fig. 3 shows trajectories of (2.6) from the initial states $x_{1}^{\mathrm{init}}=0.1$ (blue) and $x_{1}^{\mathrm{init}}=0.9$ (red). When $d/(m+n)<R<d/m$, both $x_{1}=0$ and $x_{1}=1$ are asymptotically stable points whose basins of attraction are $(-\infty,\ x_{1}^{*})$ and $(x_{1}^{*},\ \infty)$, respectively. Thus, the number of miners who participate in the mining converges to $0$ if the initial ratio is less than $x_{1}^{*}$, because their utility is negative and they prefer non-participation.

## 4 Stabilization

The result in Section 3 implies that no miner in $\mathcal{N}$ participates in the mining in the steady state when the mining reward $R^{*}$ satisfies $R^{*}<d/(m+n)$. When the mining reward $R^{*}$ satisfies $d/(m+n)<R^{*}<d/m$, miners' behaviors depend on the initial state $x_{1}^{\mathrm{init}}$, i.e., no miner in $\mathcal{N}$ participates in the mining in the steady state when $x_{1}^{\mathrm{init}}<x_{1}^{*}$. We propose a state feedback controller that adjusts the reward based on the ratio $x_{1}$ so that all miners in $\mathcal{N}$ participate in the mining, i.e., so that every trajectory of $x_{1}$ with its initial state in $(0,\ 1]$ converges to $1$.

### 4.1 Case where $R^{*}<d/(m+n)$

First, we show that $x_{1}=1$ cannot be stabilized when $R^{*}<d/(m+n)$. We consider the following state feedback controller $R_{1}(x_{1})$ that adjusts the reward based on the ratio $x_{1}$.

$\displaystyle R=R_{1}(x_{1}),\;R_{1}(1)=R^{*}<\frac{d}{m+n}.$ (4.1)

Under the controller (4.1), the trajectory of $x_{1}$ is described by

$\displaystyle\dot{x}_{1}=\frac{x_{1}(1-x_{1})}{m+nx_{1}}\left(R_{1}(x_{1})-\frac{d}{m+nx_{1}}\right)\eqqcolon\psi_{R}(x_{1}).$ (4.2)

The derivative of $\psi_{R}(x_{1})$ with respect to $x_{1}$ is

$\displaystyle\frac{\partial\psi_{R}(x_{1})}{\partial x_{1}}=\left(-\frac{x_{1}}{m+nx_{1}}+\frac{1-x_{1}}{m+nx_{1}}-\frac{nx_{1}(1-x_{1})}{(m+nx_{1})^{2}}\right)\left(R_{1}(x_{1})-\frac{d}{m+nx_{1}}\right)+\frac{x_{1}(1-x_{1})}{m+nx_{1}}\left(\frac{\partial R_{1}(x_{1})}{\partial x_{1}}+\frac{dn}{(m+nx_{1})^{2}}\right).$ (4.3)

We obtain

$\displaystyle\left.\frac{\partial\psi_{R}(x_{1})}{\partial x_{1}}\right|_{x_{1}=1}=-\frac{1}{m+n}\left(R_{1}(1)-\frac{d}{m+n}\right)>0.$ (4.4)

Therefore, the unstable equilibrium point $x_{1}=1$ cannot be stabilized even if the feedback controller (4.1) is used.

### 4.2 Case where $d/(m+n)<R^{*}<d/m$

Next, we show that $x_{1}=1$ can be made an asymptotically stable equilibrium point whose basin of attraction is $(0,\ 1]$ with a state feedback controller. We introduce the following state feedback controller $R_{2}(x_{1})$ to adjust the reward based on the ratio $x_{1}$,

$\displaystyle R=R_{2}(x_{1})=R^{*}+\Delta R(x_{1}),\;\Delta R(1)=0,$ (4.5)

and let every trajectory of $x_{1}$ with its initial state in $(0,\ 1]$ converge to $1$.

#### 4.2.1 The condition of the feedback gain

We give $\bar{x}_{1}$ satisfying $x_{1}^{*}<\bar{x}_{1}\leq 1$ and $\varepsilon>0$.
For a given reward $R^{*}\in(d/(m+n),d/m)$, let $\Delta R(x_{1})$ be

$\displaystyle\Delta R(x_{1})=\begin{cases}K(\bar{x}_{1}-x_{1})&\mbox{if}\;\;x_{1}<x_{1}^{*}+\varepsilon,\\ 0&\mbox{otherwise},\end{cases}$ (4.6)

where $K>0$ is a feedback gain. We obtain a condition on the gain $K$ and $\varepsilon$ such that every trajectory of $x_{1}$ with its initial state in $(0,\ 1]$ converges to $1$, as stated in Proposition 1.

###### Proposition 1.

Assume $d/(m+n)<R^{*}<d/m$. Let $\zeta_{R}(x_{1})$ be

$\displaystyle\zeta_{R}(x_{1})\coloneqq-Knx_{1}^{2}+(Kn\bar{x}_{1}-Km+R^{*}n)x_{1}+(R^{*}m+Km\bar{x}_{1}-d),$ (4.7)

and let $\alpha,\beta\;(\alpha<\beta)$ be the real solutions of the quadratic equation $\zeta_{R}(x_{1})=0$. Then, every trajectory of $x_{1}$ with its initial state in $(0,\ 1]$ converges to $1$ if the gain $K$ and $\varepsilon$ satisfy

$\displaystyle K>\frac{d-R^{*}m}{m\bar{x}_{1}}\ (>0),$ (4.8)

$\displaystyle 0<\varepsilon\begin{cases}<\beta-x_{1}^{*}&{\rm if}\;\;\beta<1,\\ \leq 1-x_{1}^{*}&{\rm if}\;\;\beta\geq 1.\end{cases}$ (4.9)

###### Proof.

With the controller (4.5) and (4.6), the dynamics of $x_{1}$ ($x_{1}<x_{1}^{*}+\varepsilon$) is given by

$\displaystyle\dot{x}_{1}=\eta_{R}(x_{1}),$ (4.10)

$\displaystyle\eta_{R}(x_{1})\coloneqq\frac{x_{1}(1-x_{1})}{m+nx_{1}}\left(R^{*}+K(\bar{x}_{1}-x_{1})-\frac{d}{m+nx_{1}}\right).$ (4.11)

According to (4.7), $\eta_{R}(x_{1})$ can be rewritten as

$\displaystyle\eta_{R}(x_{1})=\frac{x_{1}(1-x_{1})\zeta_{R}(x_{1})}{(m+nx_{1})^{2}}.$ (4.12)

First, we prove that the quadratic equation $\zeta_{R}(x_{1})=0$ has two distinct real solutions. We have

$\displaystyle\zeta_{R}(x_{1}^{*})=K(\bar{x}_{1}-x_{1}^{*})(m+nx_{1}^{*})>0,$ (4.13)

which, since the leading coefficient of $\zeta_{R}$ is $-Kn<0$, implies that the quadratic equation $\zeta_{R}(x_{1})=0$ has two distinct real solutions $\alpha,\beta$ satisfying $\alpha<x_{1}^{*}<\beta$. Next, we prove $\alpha<0$ under (4.8). We obtain

$\displaystyle\zeta_{R}(0)=m\bar{x}_{1}K-(d-R^{*}m)>m\bar{x}_{1}\frac{d-R^{*}m}{m\bar{x}_{1}}-(d-R^{*}m)=0,$ (4.14)

from (4.7) and (4.8). Since $\zeta_{R}(x_{1})$ is a quadratic function that opens downward and $\zeta_{R}(0)>0$, the smaller solution $\alpha$ of $\zeta_{R}(x_{1})=0$ satisfies $\alpha<0$. Finally, we prove that the system controlled by (4.5) satisfies $\dot{x}_{1}>0$ for any $x_{1}\in(0,\ 1)$ under (4.8) and (4.9). When $\beta<1$, $\eta_{R}(x_{1})>0$ for any $x_{1}\in(0,\ \beta)$ from (4.12), so $\dot{x}_{1}>0$ for any $x_{1}\in(0,x_{1}^{*}+\varepsilon)$ from (4.9). For any $x_{1}\in[x_{1}^{*}+\varepsilon,1)$, the controller is switched off, and $\dot{x}_{1}=\varphi_{R^{*}}(x_{1})>0$ because $\Delta R(x_{1})=0$ and $x_{1}>x_{1}^{*}$. Thus,

$\displaystyle\dot{x}_{1}>0\;\mbox{for}\;\mbox{any}\;x_{1}\in(0,\;1).$ (4.15)

Similarly, it is shown by (4.9) that (4.15) also holds when $\beta\geq 1$. Therefore, every trajectory of $x_{1}$ with its initial state in $(0,\ 1]$ converges to $1$ under (4.8) and (4.9). ∎

It is noted that $\bar{x}_{1}<\beta$, since $\bar{x}_{1}>x_{1}^{*}$ implies $\zeta_{R}(\bar{x}_{1})=R^{*}(m+n\bar{x}_{1})-d>0$. So, (4.6) is continuous if $\varepsilon=\bar{x}_{1}-x_{1}^{*}$.

#### 4.2.2 Performance evaluation

In this section, we provide a numerical analysis of the controller. We consider the case where $m=n=2$, $d=100$, and $R^{*}=40$. Then, we have $x_{1}^{*}=0.25$ from (3.1). Let the initial state of $x_{1}$ be $x_{1}^{\mathrm{init}}=0.1$. We consider the following two cases, where $K$ and $\varepsilon$ satisfy (4.8) and (4.9); a simulation sketch for both cases is given below.

Case 1) $\bar{x}_{1}=0.26,\;\varepsilon=0.005,\;K=56.8125$.
Case 2) $\bar{x}_{1}=1,\;\varepsilon=0.75,\;K=10.1$.
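The two cases can be reproduced numerically. The following minimal Python sketch (our illustration, not the authors' implementation; it assumes NumPy and SciPy) integrates the controlled dynamics (4.10)–(4.11) with the piecewise controller (4.6) and reports when the state first exceeds $0.99$:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, n, d, R_star = 2, 2, 100.0, 40.0
x1_star = (d / R_star - m) / n  # = 0.25

def make_rhs(x1_bar, eps, K):
    """Controlled dynamics (4.5)-(4.6): R = R* + Delta_R(x1)."""
    def rhs(t, y):
        x1 = y[0]
        delta_R = K * (x1_bar - x1) if x1 < x1_star + eps else 0.0
        R = R_star + delta_R
        return [x1 * (1.0 - x1) / (m + n * x1) * (R - d / (m + n * x1))]
    return rhs

cases = {"Case 1": (0.26, 0.005, 56.8125), "Case 2": (1.0, 0.75, 10.1)}
for name, (x1_bar, eps, K) in cases.items():
    sol = solve_ivp(make_rhs(x1_bar, eps, K), (0.0, 100.0), [0.1],
                    max_step=0.01)  # small steps: the controller switches
    t_conv = sol.t[np.argmax(sol.y[0] > 0.99)]  # first time x1 exceeds 0.99
    print(f"{name}: x1 reaches 0.99 at t = {t_conv:.1f}")
# Case 2 (larger epsilon) drives x1 towards 1 faster, but keeps the reward
# above R* for longer -- the trade-off shown in Fig. 4.
```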
Figure 4: Trajectories of (a) the state $x_{1}$ and (b) the reward $R_{2}(x_{1})$ with a feedback controller satisfying Proposition 1, when $m=n=2$, $d=100$, $R^{*}=40$, $x_{1}^{*}=0.25$ from $x_{1}^{\mathrm{init}}=0.1$, where $\bar{x}_{1}=0.26,\varepsilon=0.005,K=56.8125$ (red) and $\bar{x}_{1}=1,\varepsilon=0.75,K=10.1$ (blue).

Fig. 4 shows trajectories of the state and the reward. The red and blue lines represent the trajectories of Cases 1) and 2), respectively. In Case 1), it takes a longer time than in Case 2) for the state $x_{1}$ to converge to $1$, but the reward $R_{2}(x_{1})$ returns to the original value $R^{*}$ quickly. Note that $R_{2}(x_{1})$ in Case 1) is not continuous because we switch the input $\Delta R(x_{1})$ to $0$ when $x_{1}=x_{1}^{*}+\varepsilon$ (see (4.6)). In Case 2), the state $x_{1}$ converges to $1$ quickly, but it takes a longer time than in Case 1) for the reward $R_{2}(x_{1})$ to return to its original value $R^{*}$. Thus, there is a trade-off in the choice of the design parameters $\bar{x}_{1}$ and $\varepsilon$.

## 5 Conclusion

We proposed a dynamical model of the decision-making of miners in the blockchain. The proposed model is described by a first-order differential equation; while simple, its theoretical analysis gives insight into the characteristics of the decision-making. We analyzed the stability of its equilibrium points, showed the occurrence of transcritical bifurcations, and observed a hysteresis phenomenon. We also proposed a feedback controller and showed that it can stabilize the state where all miners participate in the mining from any non-zero initial participation ratio of the miners. Our future work is to extend our model to the case where miners' computational performances are different from each other.

## Acknowledgements

This research was supported by JST ERATO JPMJER1603.

## Appendix A Transcritical bifurcation

We consider the following system.

$\displaystyle\dot{x}=f(x,\mu),\;\;x\in\mathbb{R},\;\;\mu\in\mathbb{R}.$ (A.1)

We assume that

$\displaystyle f(x,\mu)=xF(x,\mu),$ (A.2)

where $F:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ satisfies the following condition.

$\displaystyle F(x,\mu)\coloneqq\begin{cases}\frac{f(x,\mu)}{x}&x\neq 0,\\ \frac{\partial f(0,\mu)}{\partial x}&x=0.\end{cases}$ (A.3)

Then, it is shown in [24] that (A.1) undergoes a transcritical bifurcation at $(x,\mu)=(0,0)$ if the following three conditions hold.

(T1) $f(0,0)=0,\;\frac{\partial f(0,0)}{\partial x}=0$,
(T2) $\frac{\partial f(0,0)}{\partial\mu}=0$,
(T3) $\frac{\partial^{2}f(0,0)}{\partial x\partial\mu}\neq 0,\;\frac{\partial^{2}f(0,0)}{\partial x^{2}}\neq 0$.

Thus, we will show that (2.6) satisfies the above three conditions at $(x_{1},R)=(0,d/m)$ and $(1,d/(m+n))$.

### A.1 Case where $(x_{1},R)=(0,d/m)$

First, we consider the following coordinate transformation, by which $(x_{1},R)=(0,d/m)$ is transformed to $(x,\mu)=(0,0)$.
$\displaystyle\left(\begin{array}{c}x_{1}\\ R\end{array}\right)=\left(\begin{array}{c}x\\ \mu\end{array}\right)+\left(\begin{array}{c}0\\ \frac{d}{m}\end{array}\right).$ (A.10)

Then, we define

$\displaystyle f(x,\mu)\coloneqq\varphi_{\mu+\frac{d}{m}}(x)=\frac{x(1-x)}{m+nx}\left(\mu+\frac{d}{m}-\frac{d}{m+nx}\right)=xF(x,\mu),$ (A.11)

where the function $F$ is defined by

$\displaystyle F(x,\mu)\coloneqq\frac{1-x}{m+nx}\left(\mu+\frac{d}{m}-\frac{d}{m+nx}\right).$ (A.12)

Then, we have

$\displaystyle\frac{\partial f(x,\mu)}{\partial x}=F(x,\mu)+x\frac{\partial F(x,\mu)}{\partial x}=\left(-\frac{x}{m+nx}+\frac{1-x}{m+nx}-\frac{nx(1-x)}{(m+nx)^{2}}\right)\left(\mu+\frac{d}{m}-\frac{d}{m+nx}\right)+\frac{dnx(1-x)}{(m+nx)^{3}}.$ (A.13)

It is obvious that

$\displaystyle F(x,\mu)=\frac{f(x,\mu)}{x}$ (A.14)

when $x\neq 0$, and, from (A.13),

$\displaystyle F(0,\mu)=\frac{\mu}{m}=\frac{\partial f(0,\mu)}{\partial x}$ (A.15)

when $x=0$. Thus, the function $f$ defined by (A.11) satisfies (A.2) and (A.3). Next, we show that $f(x,\mu)$ satisfies the conditions (T1) – (T3). We obtain

$\displaystyle\frac{\partial f(x,\mu)}{\partial\mu}=\frac{x(1-x)}{m+nx},$ (A.16)

$\displaystyle\frac{\partial^{2}f(x,\mu)}{\partial x\partial\mu}=-\frac{x}{m+nx}+\frac{1-x}{m+nx}-\frac{nx(1-x)}{(m+nx)^{2}},$ (A.17)

$\displaystyle\frac{\partial^{2}f(x,\mu)}{\partial x^{2}}=\left(\frac{n^{2}x(1-x)}{(m+nx)^{2}}-\frac{n(1-x)}{m+nx}+\frac{nx}{m+nx}-1\right)\frac{2}{m+nx}\left(\mu+\frac{d}{m}-\frac{d}{m+nx}\right)+\frac{2dn}{(m+nx)^{3}}\left(\frac{-2nx(1-x)}{m+nx}-x+(1-x)\right).$ (A.18)

Thus, $f(x,\mu)$ satisfies the conditions (T1) – (T3) because

$\displaystyle f(0,0)=0,$ (A.19)
$\displaystyle\frac{\partial f(0,0)}{\partial x}=0,$ (A.20)
$\displaystyle\frac{\partial f(0,0)}{\partial\mu}=0,$ (A.21)
$\displaystyle\frac{\partial^{2}f(0,0)}{\partial x\partial\mu}=\frac{1}{m}\neq 0,$ (A.22)
$\displaystyle\frac{\partial^{2}f(0,0)}{\partial x^{2}}=\frac{2dn}{m^{3}}\neq 0.$ (A.23)

### A.2 Case where $(x_{1},R)=(1,d/(m+n))$

It is noted that the dynamics of $x_{0}$ is written as follows.

$\displaystyle\dot{x}_{0}=-\frac{x_{0}(1-x_{0})}{m+n(1-x_{0})}\left(R-\frac{d}{m+n(1-x_{0})}\right)=-\varphi_{R}(1-x_{0}),$ (A.24)

because $x_{0}$ and $x_{1}$ satisfy (2.1). Thus, it is sufficient to show that (A.24) undergoes a transcritical bifurcation at $(x_{0},R)=(0,d/(m+n))$. We consider the following coordinate transformation, by which $(x_{0},R)=(0,d/(m+n))$ is transformed to $(x,\mu)=(0,0)$.
$\displaystyle\left(\begin{array}{c}x_{0}\\ R\end{array}\right)=\left(\begin{array}{c}x\\ \mu\end{array}\right)+\left(\begin{array}{c}0\\ \frac{d}{m+n}\end{array}\right).$ (A.31)

Then, we define

$\displaystyle f(x,\mu)\coloneqq-\varphi_{\mu+\frac{d}{m+n}}(1-x)=-\frac{x(1-x)}{m+n(1-x)}\left(\mu+\frac{d}{m+n}-\frac{d}{m+n(1-x)}\right)=xF(x,\mu),$ (A.32)

where the function $F$ is defined by

$\displaystyle F(x,\mu)\coloneqq-\frac{1-x}{m+n(1-x)}\left(\mu+\frac{d}{m+n}-\frac{d}{m+n(1-x)}\right).$ (A.33)

Then, we have

$\displaystyle\frac{\partial f(x,\mu)}{\partial x}=-\left(\frac{-x+(1-x)}{m+n(1-x)}+\frac{nx(1-x)}{(m+n(1-x))^{2}}\right)\left(\mu+\frac{d}{m+n}-\frac{d}{m+n(1-x)}\right)+\frac{dnx(1-x)}{(m+n(1-x))^{3}}.$ (A.34)

It is obvious that

$\displaystyle F(x,\mu)=\frac{f(x,\mu)}{x}$ (A.35)

when $x\neq 0$, and, from (A.34),

$\displaystyle F(0,\mu)=-\frac{\mu}{m+n}=\frac{\partial f(0,\mu)}{\partial x}$ (A.36)

when $x=0$. Thus, the function $f$ defined by (A.32) satisfies (A.2) and (A.3). In the same way as in Appendix A.1, we obtain the partial derivatives of $f(x,\mu)$ and show that $f(x,\mu)$ defined by (A.32) satisfies the conditions (T1) – (T3) because

$\displaystyle f(0,0)=0,$ (A.37)
$\displaystyle\frac{\partial f(0,0)}{\partial x}=0,$ (A.38)
$\displaystyle\frac{\partial f(0,0)}{\partial\mu}=0,$ (A.39)
$\displaystyle\frac{\partial^{2}f(0,0)}{\partial x\partial\mu}=-\frac{1}{m+n}\neq 0,$ (A.40)
$\displaystyle\frac{\partial^{2}f(0,0)}{\partial x^{2}}=\frac{2dn}{(m+n)^{3}}\neq 0.$ (A.41)

Therefore, (2.6) undergoes transcritical bifurcations at $(x_{1},R)=(0,d/m)$ and $(1,d/(m+n))$.

## References

* [1] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” 2008. [Online]. Available: http://bitcoin.org/bitcoin.pdf
* [2] Q. Xia, E. B. Sifah, K. O. Asamoah, J. Gao, X. Du, and M. Guizani, “MeDShare: Trust-less medical data sharing among cloud service providers via blockchain,” _IEEE Access_, vol. 5, pp. 14757–14767, 2017.
* [3] A. Ouaddah, A. A. Elkalam, and A. A. Ouahman, “FairAccess: a new blockchain-based access control framework for the internet of things,” _Security and Communication Networks_, vol. 9, no. 18, pp. 5943–5964, 2016.
* [4] J. Truby, “Decarbonizing bitcoin: Law and policy choices for reducing the energy consumption of blockchain technologies and digital currencies,” _Energy Research & Social Science_, vol. 44, pp. 399–410, 2018.
* [5] “Cambridge bitcoin electricity consumption index,” (accessed on 1 March 2021). [Online]. Available: https://www.cbeci.org/
* [6] W. Cai, Z. Wang, J. B. Ernst, Z. Hong, C. Feng, and V. C. M. Leung, “Decentralized applications: The blockchain-empowered software system,” _IEEE Access_, vol. 6, pp. 53019–53033, 2018.
* [7] Y. Liu, Z. Fang, M. H. Cheung, W. Cai, and J. Huang, “A social welfare maximization mechanism for blockchain storage,” _arXiv preprint arXiv:2103.05866_, 2021.
* [8] Z. Liu, N. C. Luong, W. Wang, D. Niyato, P. Wang, Y. Liang, and D. I. Kim, “A survey on blockchain: A game theoretical perspective,” _IEEE Access_, vol. 7, pp. 47615–47643, 2019.
* [9] N. Dimitri, “Bitcoin mining as a contest,” _Ledger_, vol. 2, pp. 31–37, 2017.
* [10] A. Fiat, A. Karlin, E. Koutsoupias, and C.
Papadimitriou, “Energy equilibria in proof-of-work mining,” in _Proceedings of the 2019 ACM Conference on Economics and Computation_, 2019, pp. 489–502.
* [11] J. W. Weibull, _Evolutionary Game Theory_. MIT Press, 1997.
* [12] T. Kanazawa, H. Goto, and T. Ushio, “Replicator dynamics with dynamic payoff reallocation based on the government’s payoff,” _IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences_, vol. E91-A, no. 9, pp. 2411–2418, 2008.
* [13] T. Kanazawa, Y. Fukumoto, T. Ushio, and T. Misaka, “Replicator dynamics with Pigovian subsidy and capitation tax,” _Nonlinear Analysis, Theory, Methods and Applications_, vol. 71, no. 12, pp. e818–e826, 2009.
* [14] T. Morimoto, T. Kanazawa, and T. Ushio, “Subsidy-based control of heterogeneous multiagent systems modeled by replicator dynamics,” _IEEE Transactions on Automatic Control_, vol. 61, no. 10, pp. 3158–3163, 2016.
* [15] X. Liu, W. Wang, D. Niyato, N. Zhao, and P. Wang, “Evolutionary game for mining pool selection in blockchain networks,” _IEEE Wireless Communications Letters_, vol. 7, no. 5, pp. 760–763, 2018.
* [16] K. Fujita, Y. Zhang, M. Sasabe, and S. Kasahara, “Mining pool selection problem in the presence of block withholding attack,” in _Proceedings of 2020 IEEE International Conference on Blockchain_, 2020, pp. 321–326.
* [17] S. Kim and S. G. Hahn, “Mining pool manipulation in blockchain network over evolutionary block withholding attack,” _IEEE Access_, vol. 7, pp. 144230–144244, 2019.
* [18] K. Toda, N. Kuze, and T. Ushio, “Game-theoretic approach to a decision-making problem for blockchain mining,” _IEEE Control Systems Letters_, vol. 5, no. 5, pp. 1783–1788, 2021.
* [19] J. Debus, “Consensus methods in blockchain systems,” _FSBC Working Paper_, 2017. [Online]. Available: http://www.fs-blockchain.de/
* [20] N. Houy, “The bitcoin mining game,” _Ledger_, vol. 1, pp. 53–68, 2016.
* [21] D. Kraft, “Difficulty control for blockchain-based consensus systems,” _Peer-to-Peer Networking and Applications_, vol. 9, no. 2, pp. 397–413, 2016.
* [22] S. Kasahara and J. Kawahara, “Effect of bitcoin fee on transaction-confirmation process,” _Journal of Industrial & Management Optimization_, vol. 15, no. 1, pp. 365–386, 2019.
* [23] H. K. Khalil, _Nonlinear Systems_, 3rd ed. Prentice Hall, 2002.
* [24] S. Wiggins, _Introduction to Applied Nonlinear Dynamical Systems and Chaos_, 2nd ed. Springer, 2003.
# A derivation of variational message passing (VMP) for latent Dirichlet allocation (LDA)

Rebecca M.C. Taylor 1,2 Dirko Coetsee 1,2,3 Johan A. du Preez 1

1 Stellenbosch University, South Africa; 2 Praelexis, South Africa; 3 KU Leuven, Belgium

###### Abstract

Latent Dirichlet Allocation (LDA) is a probabilistic model used to uncover latent topics in a corpus of documents. Inference is often performed using variational Bayes (VB) algorithms, which optimise a lower bound to approximate the posterior distribution over the parameters. Deriving the variational update equations for new models requires considerable manual effort; variational message passing (VMP) has emerged as a "black-box" tool to expedite the process of variational inference. But applying VMP in practice still presents subtle challenges, and the existing literature does not contain the steps that are necessary to implement VMP for the standard smoothed LDA model, nor is available black-box probabilistic graphical modelling software able to perform the word-topic updates necessary to implement LDA. In this paper, we therefore present a detailed derivation of the VMP update equations for LDA. We see this as a first step to enabling other researchers to calculate the VMP updates for similar graphical models.

###### keywords: Latent Dirichlet Allocation, Variational, Graphical Model, Message Passing, VMP, derivation

## 1 Introduction

Latent Dirichlet Allocation (LDA) [1] is an effective and popular probabilistic document model with many applications [2]. These include banking and finance (clustering banking clients based on their transactions and identifying insurance fraud [3]); genomics (using multilocus genotype data to learn about population structure and assign individuals to populations [4], classifying gene expression in healthy and diseased tissues [5], and predicting the functional effects of genetic variation [6]); image processing (image clustering and scene recognition [7, 8]); and medicine (feature extraction for rare and emerging diseases [9] and medical data set clustering [10, 11]). While LDA can extract latent topics of any type from a wide range of inputs, it is most commonly used to extract latent semantic information from text. The scale at which LDA is applied has continued to grow with the increasing availability of computing resources, large volumes of data [12], and improved inference algorithms. LDA is usually represented as a graphical model, as shown in Figure 1.

Figure 1: Plate model for LDA as a Bayesian network. Each node in the graph represents a random variable and edges represent conditional dependencies. This graph represents a probability distribution, where $M$ documents contain $N$ words, and each word represents one of $K$ possible topics. Section 3.1 provides further details.

LDA has been extended and modified to create many new but similar graphical models such as Filtered-LDA [13], author topic models [14], relational topic models [15], dynamic topic models [16], and spatial LDA [17]. Exact inference is computationally intractable for many useful graphical models, including LDA and its many variants [1, 18, 19], [20, p461]. A range of approximate inference techniques is therefore used. Particle-based approaches such as Markov chain Monte Carlo (MCMC) [20, p462] have been used but are still computationally expensive [21].
Because larger data sources are now readily available, faster methods with accuracy comparable to these particle-based approaches have gained popularity [19]. These include variational Bayes (VB) [22, 23, 24] and expectation propagation (EP) [25, 26, 27]. Variational Bayes in particular is notable for achieving good run-time performance. By optimising a bound on the posterior distribution, it provides guarantees on the inference quality that are not always attainable with other approximate methods. VB is usually available only for exponential family distributions. In Section 3.2 we will therefore provide a short overview of the exponential family and some of the notation we use later on in this paper.

Constructing a new model that uses VB is labour-intensive since the modeller must derive each of the variational update equations manually. This process has been somewhat eased by the introduction of variational message passing (VMP), a formulation of VB that promises to be general enough to apply to a wide variety of graphical models while also serving as a "black-box" inference engine that automates the calculation of the update equations [19]. A general overview of VMP is provided in Section 4. The available VMP toolkits, however, do not cater for all possible conditional probability distributions, either because a specific distribution is not implemented yet or because it would be too costly in terms of computational or memory resources. The conditional Dirichlet-multinomial distribution in particular is necessary to implement smoothed LDA, as discussed in Section 5, but, to the best of our knowledge, is not found in any of the popular toolkits [28, 29]. As a first step towards incorporating this distribution, and as an aid to other researchers using similar models, we derive the full VMP update equations for LDA in Section 5. This is the first time, to the best of our knowledge, that these equations have been published.

## 2 Related work

A goal of probabilistic modelling is to separate the model from the inference algorithm used to answer queries about it. Concerning the modelling aspect, two prominent LDA models exist, a non-smoothed and a smoothed version [1]. The smoothed version has become the standard for topic modelling, and it is the only version we consider further here. Most related to our work is the inference aspect of probabilistic modelling, particularly the different inference techniques that have been proposed for LDA. As far as inference is concerned, there is generally a trade-off between inference quality, speed of execution, and guarantees such as whether convergence is assured. Below we mention some of the inference techniques and how they relate to VMP.

Collapsed Gibbs sampling (a type of MCMC technique) is often the inference technique of choice for LDA because it is theoretically exact in the limit. Although it provides high-quality inference results, it often requires a prolonged run-time for LDA. Variational Bayesian inference is not exact, but is usually faster, is guaranteed to converge, and provides some guarantees on the quality of the result. The smoothed version of LDA was first introduced using variational Bayesian inference [1]. An online version was later introduced in 2011 [30], and a stochastic variant in 2013 [31]. Both of these later methods are beneficial when there are larger amounts of data, but they do not improve LDA performance as much as other later methods [30, 31, 23, 32].
Structured stochastic variational inference, introduced in 2015 [33], further improved performance and scalability. Standard VB, however, is still a popular technique for LDA due to its simplicity and many Python implementations.

Variational message passing (VMP) [34, 35] is the message passing equivalent of the standard version of VB. It is a useful tool for constructing a variational inference solution for a large variety of conjugate exponential graphical models. A non-conjugate variant of VMP, namely non-conjugate VMP (NCVMP), is also available for certain other models [36], but it is not applicable to this work. An advantage of VMP is that it can speed up the process of deriving a variational solution to a new graphical model [19]. There are software toolkits that implement VMP for general graphical models [28, 29], but none, as far as we can tell, are able to do VMP for LDA-type models at the moment. For Infer.NET, arguably the most well-known VMP toolkit, the reason is that the current software implementation stores all intermediate messages as separate objects in memory, and the VMP messages for a typical Dirichlet-multinomial distribution used in LDA would take too much memory. Although there are no immediate plans to change the implementation to allow these updates [37], we believe that doing this could be fruitful future work. Unfortunately, VMP update equations are therefore sometimes derived by hand, but these results are usually not published [38, 12], in contrast to the current work.

## 3 Background

In this section we present the graphical model for LDA, and also introduce the exponential family.

### 3.1 The latent Dirichlet allocation (LDA) graphical model

Latent Dirichlet Allocation (LDA) is a hierarchical probabilistic model that can be represented by the directed graphical model shown in Figure 1 [1] (see Table 1 for details about the symbols). The graphical model shown in Figure 1 allows us to visually identify the conditional independence assumptions in the LDA model. Arrows in the graph indicate the direction of dependence. From Figure 1, we can see that the $n$'th word in document $m$ is $W_{m,n}$. The distribution over this word depends on the topic $Z_{m,n}$ present in the document, which selects the Dirichlet random vector $\bm{\phi}_{k}$ that describes the words present in each topic. Figures 3 and 4 show Dirichlet distributions of cardinality $K=3$ to illustrate the effect of $\bm{\alpha}$ on the distribution. Low values of $\alpha_{k}$ correspond to a low bias towards the corresponding $\theta_{k}$ parameter. These biases are also called pseudocounts.

Figure 2: Unrolled BN representation of LDA for a two document corpus with two words per document. A single branch is highlighted (where the document number is $m=1$ and the word number $n=2$). For the meanings of the symbols, refer to Table 1.

Table 1: Symbols used for the LDA model shown in Figure 2.
Symbol | Description
---|---
$M$ | Total number of documents
$m$ | Current document
$N$ | Number of words in current document
$n$ | Current word (in document)
$K$ | Total number of topics
$k$ | Current topic
$V$ | Total number of words in the vocabulary
$v$ | Current word (in vocabulary)
$\mathsf{v}$ | Observed word (in vocabulary)
$\bm{\theta}_{m}$ | Topic-document Dirichlet for document $m$
$Z_{m,n}$ | Topic-document categorical for word $n$ in document $m$
$W_{m,n}$ | Word-topic conditional categorical for word $n$ in document $m$
$\bm{\phi}_{k}$ | Word-topic Dirichlet for topic $k$

In LDA, the topic-document Dirichlet distributions can range from a cardinality as low as $K=3$ for a three-topic model up to very large values of $K$ for models containing hundreds of topics. The word-topic Dirichlet distributions have a much higher cardinality, typically in the thousands or hundreds of thousands, since it corresponds to the vocabulary size.

Figure 3: Dirichlet distributions of cardinality $3$, visualised in 3D. (a) Dirichlet with $\bm{\alpha}=\{1,1,1\}$. This is the non-informative Dirichlet. (b) Dirichlet with $\bm{\alpha}=\{1,1,5\}$. There is more mass on the corner of the Z-axis due to the higher pseudocount $\alpha_{3}$.

Figure 4: Dirichlet distributions of cardinality $3$, visualised in 3D. For both (a) and (b), all $3$ $\bm{\alpha}$'s are equal. (a) Dirichlet with $\bm{\alpha}=\{0.5,0.5,0.5\}$. There is more mass on the corners of all $3$ axes than in the middle of the hyperplane (sparse). (b) Dirichlet with $\bm{\alpha}=\{5,5,5\}$. There is less mass on the corners of all $3$ axes than in the middle of the hyperplane.

### 3.2 The exponential family (EF) of distributions

The standard VMP algorithm is limited to distributions in the exponential family. The exponential family is the only family of distributions with finite-sized sufficient statistics [39, 40, 41]. Many useful distributions, including the Dirichlet and categorical distributions, fall into this family, as do all the distributions involved in LDA. The probability distribution of a random vector $\bm{x}$ with parameters $\bm{\eta}$ can always be written in the following mathematically convenient form if it falls within the exponential family [34],

$\displaystyle p(\bm{x};\bm{\eta})=\frac{1}{Z(\bm{\eta})}h(\bm{x})\exp\left\{\bm{\eta}^{T}\bm{T}(\bm{x})\right\},$ (1)

where $\bm{\eta}$ is called the natural parameter vector and $\bm{T}(\bm{x})$ is the sufficient statistics vector, called so since, with a sufficiently large sample, the probability of $\bm{x}$ under $\bm{\eta}$ depends on $\bm{x}$ only through $\bm{T}(\bm{x})$. The partition function, $Z(\bm{\eta})$, normalises the distribution to unity volume, i.e.

$\displaystyle Z(\bm{\eta})=\int h(\bm{x})\exp\left\{\bm{\eta}^{T}\bm{T}(\bm{x})\right\}\text{d}\bm{x}.$ (2)

We can also formulate Equation 1 as,

$\displaystyle p(\bm{x};\bm{\eta})=h(\bm{x})\exp\left\{\bm{\eta}^{T}\bm{T}(\bm{x})-A(\bm{\eta})\right\},\quad\text{with }A(\bm{\eta})\triangleq\log Z(\bm{\eta}).$ (3)

$A(\bm{\eta})$ is known as the log-partition or cumulant function since it can be used to find the cumulants of a distribution.
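As a concrete illustration of this form, the following minimal sketch (our own, assuming NumPy and SciPy; the function names are ours) maps the Dirichlet distribution used throughout LDA onto Equation 3, with natural parameters $\alpha_{k}-1$, sufficient statistics $\ln\theta_{k}$, and log-partition $A(\bm{\eta})=\sum_{k}\ln\Gamma(\alpha_{k})-\ln\Gamma(\sum_{k}\alpha_{k})$; the final line anticipates the first-cumulant property derived next:

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_natural_params(alpha):
    """Natural parameters eta_k = alpha_k - 1 of a Dirichlet."""
    return np.asarray(alpha) - 1.0

def dirichlet_log_partition(alpha):
    """A(eta) = sum_k ln Gamma(alpha_k) - ln Gamma(sum_k alpha_k)."""
    alpha = np.asarray(alpha)
    return np.sum(gammaln(alpha)) - gammaln(np.sum(alpha))

alpha = np.array([1.0, 1.0, 5.0])        # the example of Figure 3(b)
print(dirichlet_natural_params(alpha))   # eta = alpha - 1
print(dirichlet_log_partition(alpha))    # A(eta)

# The gradient of A w.r.t. alpha_k is digamma(alpha_k) - digamma(sum_j alpha_j),
# which, by the first-cumulant property derived below, equals <ln theta_k>.
print(digamma(alpha) - digamma(alpha.sum()))
```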
Below we show the first cumulant (the mean) of exponential family distributions, which will be used later,

$\displaystyle\nabla_{\bm{\eta}}A(\bm{\eta})$ $\displaystyle=\nabla_{\bm{\eta}}\ln\left[\int h(\bm{x})\exp\left\{\bm{\eta}^{T}\bm{T}(\bm{x})\right\}\text{d}\bm{x}\right]$
$\displaystyle=\frac{1}{\int h(\bm{x})\exp\left\{\bm{\eta}^{T}\bm{T}(\bm{x})\right\}\text{d}\bm{x}}\nabla_{\bm{\eta}}\left[\int h(\bm{x})\exp\left\{\bm{\eta}^{T}\bm{T}(\bm{x})\right\}\text{d}\bm{x}\right]$
$\displaystyle=\int\frac{1}{Z(\bm{\eta})}h(\bm{x})\exp\left\{\bm{\eta}^{T}\bm{T}(\bm{x})\right\}\bm{T}(\bm{x})\text{d}\bm{x}$
$\displaystyle=\int p(\bm{x})\bm{T}(\bm{x})\text{d}\bm{x}$
$\displaystyle=\left<\bm{T}(\bm{x})\right>_{p(\bm{x})}$ (4)

This shows that we can find the first expected moment of a distribution in the exponential family by taking the derivative of its log-partition function. This will always be the same as finding the expected value of the sufficient statistics vector.

## 4 Variational message passing (VMP)

Variational Bayes (VB) is a framework for approximating the full posterior distribution over a model's parameters and latent variables in an iterative, Expectation Maximization (EM)-like manner [22], since the true distribution can often not be calculated efficiently for models of interest, such as LDA. Variational message passing (VMP) is a way to derive the VB update equations for a given model [34, 35]. A formulation of optimization in terms of local computations is required to translate VB into its message passing variant, VMP.

### 4.1 The generic VMP algorithm

Conjugate-exponential models, of which LDA is an example, are models where conjugacy exists between all parent-child relationships and where all distributions are in the exponential family. For these models we can perform variational inference by performing local computations and message passing. Based on the derivation in [34], these local computations depend only on the variables within the Markov blanket of a node. The nodes in the Markov blanket (shown in Figure 5) are nodes that are either parents, children, or co-parents of the node. The co-parents of a node are the parents of its children, excluding the node itself. For Bayesian networks, the joint distribution can be expressed in terms of the conditional distributions at each node $x_{i}$,

$\displaystyle p(\bm{x})=\prod_{i}p(x_{i}\mid\text{pa}_{x_{i}}),$ (5)

where $\text{pa}_{x_{i}}$ are the parents of node $x_{i}$ and $x_{i}$ the variables associated with node $x_{i}$ [42]. These variables can be either hidden, meaning that their values are unknown, or observed, meaning that we know their values at inference.

Figure 5: The Markov blanket of a node ($x_{j}$). The nodes in the Markov blanket are the shaded nodes and are defined by the set of parents (pa), children (ch) and co-parents (cp) of a node.

The variational update equation for a node $x_{j}$ depends only on expectations over variables appearing in the Markov blanket of that node [35]. In this section we review the general VMP message passing update equations and apply them to the example graphical model shown in Figure 6 to explain the broader principles involved in VMP.

Figure 6: The child and parent nodes in the example we will use to present the message passing equations. In the example, $\bm{x}$ is the parent node of $\bm{y}$, and $\bm{y}$ the child of $\bm{x}$; the node names correspond to the random vectors they represent.
In exponential family form, the distribution of the parent node $\bm{x}$ is written as,

$\displaystyle p(\bm{x};\bm{\eta})=\frac{1}{Z(\bm{\eta})}h(\bm{x})\exp\left\{\bm{\eta}^{T}\bm{T}(\bm{x})\right\},$ (6)
$\displaystyle=h(\bm{x})\exp\left\{\bm{\eta}^{T}\bm{T}(\bm{x})-A(\bm{\eta})\right\},\quad\text{with }A(\bm{\eta})\triangleq\ln Z(\bm{\eta}),$ (7)

where $\bm{\eta}$ are the natural parameters, $\bm{T}(\bm{x})$ the sufficient statistics vector, and $A(\bm{\eta})$ the log-partition function. If we limit ourselves to distributions in this family, the prior and posterior distributions have the same form [34]. During inference, we therefore only need to update the values of the parameters and do not have to change the functional form [34].

#### 4.1.1 Message to a child node

Continuing with the description of the graphical model in Figure 6, the parent to child node message for parent node $\bm{x}$ and child node $\bm{y}$ is the expectation of the sufficient statistics vector [35],

$\bm{\mu}^{\text{p2c}}_{\bm{x}\rightarrow\bm{y}}=\left<\bm{T}(\bm{x})\right>_{p(\bm{x})}.$ (8)

We can calculate this by using the derivative of the log-partition function as seen in Equation 4,

$\left<\bm{T}(\bm{x})\right>_{p(\bm{x})}=\nabla_{\bm{\eta}}A(\bm{\eta}).$ (9)

The parent to child message is therefore,

$\bm{\mu}^{\text{p2c}}_{\bm{x}\rightarrow\bm{y}}=\nabla_{\bm{\eta}}A(\bm{\eta}).$ (10)

This message, which contains the expected sufficient statistics of $\bm{x}$, now provides the new natural parameters of $\bm{y}$. We can therefore write the child node's distribution as $p(\bm{y}\mid\left<\bm{T}(\bm{x})\right>)$ after receiving a message from node $\bm{x}$.

#### 4.1.2 Message to a parent node

Because we limit ourselves to conjugate-exponential models, the exponential form of the child distribution can always be re-arranged to match that of the parent distribution. This is due to the multi-linear properties of conjugate-exponential models [34]. We define the re-arranged version of the sufficient statistics as $\bm{\varphi}(\bm{y})$. This version is in the correct functional form to send a child to parent message [34, 35]. A child to parent message can therefore be written as,

$\bm{\mu}^{\text{c2p}}_{\bm{y}\rightarrow\bm{x}}=\left<\bm{\varphi}(\bm{y})\right>_{p(\bm{y})}.$ (11)

Note that if any node, ${a}$, is observed, then the messages are as defined above, but with $\left<\bm{\varphi}(\bm{a})\right>$ replaced by $\bm{\varphi}(\bm{a})$. That is, if we know the true values, we use them. When a parent node has received all of its required messages, we can update its belief by finding its updated natural parameter vector $\bm{\eta}^{\prime}$. In the general case for a graphical model containing a set of nodes $\bm{x}=\{x_{1},x_{2},...,x_{U}\}$, the update for parent ${x}_{j}$ becomes,

$\bm{\eta}^{\prime}_{{x_{j}}}=\left\{\bm{\mu}_{{x}_{i}\rightarrow{x}_{j}}\right\}_{{x}_{i}\in\text{pa}_{{x}_{j}}}+\sum_{s\in\text{ch}_{{x}_{j}}}\bm{\mu}_{{x}_{s}\rightarrow{x}_{j}}.$ (12)

Updating the parent node $\bm{x}$ from the graphical model in Figure 6 will result in the update equation $\bm{\eta}^{\prime}_{\bm{x}}=\bm{\mu}_{\bm{\theta}\rightarrow\bm{x}}+\bm{\mu}_{\bm{y}\rightarrow\bm{x}}$. We now present the full VMP algorithm in Algorithm 1 as given by Winn [34] but using our notation defined above.
Algorithm 1 Variational Message Passing (VMP) [34]

Initialization: Initialize each factor distribution $q_{j}$ by initializing the corresponding moment vector $\left<\bm{T}_{j}(\bm{x}_{j})\right>$.

Iteration:
1. For each node $\bm{x}_{j}$ in turn:
   (a) Retrieve messages from all parent and child nodes, as defined in Equation 8 and Equation 11. This requires child nodes to retrieve messages from the co-parents of $\bm{x}_{j}$ (Figure 5).
   (b) Compute the updated natural parameter vector $\bm{\eta}^{\prime}_{j}$ using Equation 12.
   (c) Compute the updated moment vector $\left<\bm{T}_{j}(\bm{x}_{j})\right>$ given the new setting of the parameter vector.
2. Compute $\text{ELBO}(\bm{q})$ (optional).

Termination: If the increase in the bound is negligible or a specified number of iterations has been reached, stop. Otherwise, repeat from step 1.

## 5 The VMP algorithm for LDA

Here we describe the distribution at each node and also derive the child to parent and parent to child messages (where applicable) for the LDA graph. We keep the messages dependent only on the current round of message passing, which can be considered one epoch.

### 5.1 Topic-document Dirichlet nodes $\bm{\theta}_{m}$

In exponential family form we can write each topic-document Dirichlet as,

$\displaystyle\ln\text{Dir}(\bm{\theta}_{{m}};\bm{\alpha}_{{m}})=\left[\begin{array}{c}\alpha_{{m},1}-1\\ \alpha_{{m},2}-1\\ \vdots\\ \alpha_{{m},K}-1\end{array}\right]^{T}\left[\begin{array}{c}\ln\theta_{{m},1}\\ \ln\theta_{{m},2}\\ \vdots\\ \ln\theta_{m,K}\end{array}\right]-\ln\frac{\Gamma(\sum_{k}\alpha_{{m},k})}{\prod_{k}\Gamma(\alpha_{{m},k})}.$ (21)

For each $m$ we can identify the natural parameters as,

$\bm{\eta}_{\bm{\theta}_{{m}}}=\left[\begin{array}{c}\alpha_{m,1}-1\\ \alpha_{m,2}-1\\ \vdots\\ \alpha_{m,K}-1\end{array}\right],$ (22)

and the sufficient statistics as,

$\bm{T}(\bm{\theta}_{m})=\left[\begin{array}{c}\ln\theta_{m,1}\\ \ln\theta_{m,2}\\ \vdots\\ \ln\theta_{m,K}\end{array}\right].$ (23)

#### 5.1.1 Message to a child node $Z_{m,n}$

The parent to child node message (for parent node $\bm{\theta}_{m}$ and child node $Z_{m,n}$) is the expectation of the sufficient statistics vector (Equation 8),

$\bm{\mu}^{\text{p2c}}_{\bm{\theta}_{m}\rightarrow Z_{m,n}}=\left<\bm{T}(\bm{\theta}_{m})\right>_{p(\bm{\theta}_{m})}.$ (24)

Using Equation 9, we can calculate this expectation using the derivative of the log-partition function. It is shown in [34, p128] to be,

$\displaystyle\left<\ln(\theta_{k})\right>_{p(\theta_{k})}=\psi(\alpha_{k})-\psi(\sum_{j}\alpha_{j}),$ (25)

where $\psi$ is the digamma function. The parent to child message from each topic-document Dirichlet node is therefore,

$\bm{\mu}^{\text{p2c}}_{\bm{\theta}_{m}\rightarrow Z_{{m},n}}=\left[\begin{array}{c}\psi(\alpha_{{m},1})-\psi(\sum_{k}\alpha_{{m},k})\\ \vdots\\ \psi(\alpha_{{m},K})-\psi(\sum_{k}\alpha_{{m},k})\end{array}\right].$ (26)

We can now insert these expected sufficient statistics at the child node.
The natural parameter vector is then,

$\bm{\eta}^{\prime}_{Z_{{m},n}}=\left[\begin{array}{c}\psi(\alpha_{{m},1})-\psi(\sum_{k}\alpha_{{m},k})\\ \vdots\\ \psi(\alpha_{{m},K})-\psi(\sum_{k}\alpha_{{m},k})\end{array}\right]=\left[\begin{array}{c}\ln\theta^{{}^{\prime}}_{{m},1}\\ \vdots\\ \ln\theta^{{}^{\prime}}_{{m},K}\end{array}\right],$ (27)

with $\bm{\theta}^{\prime}$ denoting the updated topic proportions, and $\bm{\eta}^{\prime}_{Z_{{m},n}}$ the updated natural parameter vector. Because these are not inherently normalised, normalisation is required to represent them as true probability distributions. Note also the conjugacy between the Dirichlet and categorical distributions: this allows us to simply update the natural parameters without changing the form of the distribution [12, 34].

### 5.2 Topic-document categorical nodes $Z_{m,n}$

In exponential form we can represent the topic-document categorical distribution for a specific word in a specific document as,

$\displaystyle\ln\text{Cat}(Z_{{m},n}\mid\bm{\theta}_{{m}})=\left[\begin{array}{c}\ln\theta_{{m},1}\\ \ln\theta_{{m},2}\\ \vdots\\ \ln\theta_{m,K}\end{array}\right]^{T}\left[\begin{array}{c}\llbracket Z_{{m},n}=1\rrbracket\\ \llbracket Z_{{m},n}=2\rrbracket\\ \vdots\\ \llbracket Z_{{m},n}=K\rrbracket\end{array}\right]-\ln(\sum_{k}\theta_{m,k}),$ (36)
$\displaystyle\therefore A(\bm{\theta}_{{m}})=\ln(\sum_{k}\theta_{m,k}),$ (37)

by applying Equation 6. For each branch (as defined in Figure 2) we can identify the natural parameters as,

$\bm{\eta}_{Z_{{m},n}}=\left[\begin{array}{c}\ln\theta_{{m},1}\\ \ln\theta_{{m},2}\\ \vdots\\ \ln\theta_{m,K}\end{array}\right],$ (38)

and the sufficient statistics as,

$\bm{T}(Z_{{m},n})=\left[\begin{array}{c}\llbracket Z_{{m},n}=1\rrbracket\\ \llbracket Z_{{m},n}=2\rrbracket\\ \vdots\\ \llbracket Z_{{m},n}=K\rrbracket\end{array}\right].$ (39)

#### 5.2.1 Message to a parent node $\bm{\theta}_{m}$

Before deriving the message from child node $Z_{{m},{n}}$ to parent node $\bm{\theta}_{m}$, some discussion regarding the effect of the incoming message from node $W_{{m},{n}}$ on node $Z_{{m},{n}}$ is in order. Because this message to the topic-document node $Z_{{m},{n}}$ (from the respective word-topic node $W_{{m},{n}}$) is a child to parent message, this message is added to the natural parameter vector such that,

$\displaystyle\ln\bm{\tilde{\theta}}^{{}^{\prime}}_{{m}}=\ln\bm{\theta}_{{m}}+\ln\bm{p}_{{m},{n}}=\ln(\bm{\theta}_{{m}}\bm{p}_{{m},{n}}),$ (40)

with $\tilde{.}$ indicating that the factor is unnormalised. Re-normalising so that $\sum_{k}\theta^{{}^{\prime}}_{{m},k}=1$ gives,

$\displaystyle\ln\bm{{\theta}}^{{}^{\prime}}_{{m}}=\ln(\frac{\bm{\theta}_{{m}}\bm{p}_{{m},{n}}}{\sum_{k}\theta_{{m},k}{p}_{{m},{n},k}}),$ (41)

where $\{p_{{m},{n},1},...,p_{{m},{n},K}\}$ are the topic probabilities for a specific word in a specific document as given by the message from node ${W}_{{m},{n}}$. Once each topic-document node $Z_{{m},{n}}$ has received its message from the corresponding word-topic node $W_{m,n}$, the natural parameter vector at node $Z_{{m},{n}}$ (from Equation 36) will have been modified to be $\bm{\eta}^{\prime}_{Z_{{m},n}}=\ln\bm{\theta}^{\prime}_{{m}}$. In the case where no message has been received from node ${W}_{{m},{n}}$, then $\ln\bm{\theta}^{\prime}_{{m}}=\ln\bm{\theta}_{{m}}$.
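To make these two steps concrete, the following minimal sketch (our illustration, assuming NumPy and SciPy; the function names are ours) computes the digamma message of Equation 26 and then performs the log-space combination and renormalisation of Equations 40–41 with a hypothetical incoming word-topic message:

```python
import numpy as np
from scipy.special import digamma

def theta_to_z_message(alpha_m):
    """Parent-to-child message (26): expected log topic proportions
    <ln theta_{m,k}> = psi(alpha_{m,k}) - psi(sum_k alpha_{m,k})."""
    return digamma(alpha_m) - digamma(alpha_m.sum())

def combine_with_word_message(log_theta, log_p):
    """Updates (40)-(41): add the incoming word-topic message in log
    space, then renormalise so the result is a true distribution."""
    log_post = log_theta + log_p
    post = np.exp(log_post - log_post.max())  # subtract max for stability
    return post / post.sum()                  # normalised theta'_m

alpha_m = np.array([2.0, 1.0, 1.0])           # K = 3 topics, one document
log_theta = theta_to_z_message(alpha_m)       # message (26)
log_p = np.log(np.array([0.2, 0.5, 0.3]))     # hypothetical message from W_{m,n}
print(combine_with_word_message(log_theta, log_p))
```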
In LDA, however, we will only ever update the topic-document Dirichlet node $\bm{\theta}_{m}$ from the topic-document node $Z_{{m},{n}}$ after receiving a message from the word-topic side of the graph (except at initialisation).

The message from a topic-document node $Z_{{m},{n}}$ to a topic-document Dirichlet node $\bm{\theta}_{m}$ is also a child to parent message. To send a child to parent message we need to rearrange the exponential form of the child distribution to match the parent distribution (as presented in Section 4.1). We can rearrange Equation 36 in terms of $\bm{\theta}^{\prime}_{{m}}$ as follows,

$\displaystyle\ln\text{Cat}({Z}_{{m},{n}}\mid\bm{\theta}_{{m}})=\left[\begin{array}{c}\llbracket Z_{{m},{n}}=1\rrbracket\\ \llbracket Z_{{m},{n}}=2\rrbracket\\ \vdots\\ \llbracket Z_{{m},{n}}=K\rrbracket\end{array}\right]^{T}\left[\begin{array}{c}\ln\theta^{{}^{\prime}}_{{m},1}\\ \ln\theta^{{}^{\prime}}_{{m},2}\\ \vdots\\ \ln\theta^{{}^{\prime}}_{{m},K}\end{array}\right]-\ln(\sum_{k}\theta^{{}^{\prime}}_{m,k}).$ (50)

The message towards node $\bm{\theta}_{m}$ is therefore,

$\displaystyle\bm{\mu}^{\text{c2p}}_{{Z}_{{m},{n}}\rightarrow\bm{\theta}_{{m}}}=\left<\varphi({Z}_{{m},{n}})\right>_{p({Z}_{{m},{n}})}=\left<\left[\begin{array}{c}\llbracket Z_{{m},{n}}=1\rrbracket\\ \llbracket Z_{{m},{n}}=2\rrbracket\\ \vdots\\ \llbracket Z_{{m},{n}}=K\rrbracket\end{array}\right]\right>_{p({Z}_{{m},{n}})}=\left[\begin{array}{c}\theta^{{}^{\prime}}_{{m},1}\\ \theta^{{}^{\prime}}_{{m},2}\\ \vdots\\ \theta^{{}^{\prime}}_{{m},K}\end{array}\right]=\left[\begin{array}{c}\frac{\theta_{{m,1}}{p}_{{m},{n},1}}{\sum_{k}\theta_{{m},k}{p}_{{m},{n},k}}\\ \frac{\theta_{{m,2}}{p}_{{m},{n},2}}{\sum_{k}\theta_{{m},k}{p}_{{m},{n},k}}\\ \vdots\\ \frac{\theta_{{m,K}}{p}_{{m},{n},K}}{\sum_{k}\theta_{{m},k}{p}_{{m},{n},k}}\end{array}\right].$ (63)

For each topic-document node $\bm{\theta}_{m}$, for a single branch in the graph, and for a single topic ${k}$, we have the update,

$\alpha^{{}^{\prime}}_{{m},{k}}=\frac{\theta_{{m,k}}{p}_{{m},{n},k}}{\sum_{j}\theta_{{m},j}{p}_{{m},{n},j}}+\alpha^{\text{prior}}_{{m},{k}},$ (64)

with $\alpha^{\text{prior}}_{{m},{k}}$ representing the initial hyperparameter settings. Over all words in the document we therefore have,

$\alpha^{{}^{\prime}}_{{m},{k}}=\sum_{n}\frac{\theta_{{m,k}}{p}_{{m},{n},k}}{\sum_{j}\theta_{{m},j}{p}_{{m},{n},j}}+\alpha^{\text{prior}}_{{m},{k}},$ (65)

which can also be written as,

$\alpha^{{}^{\prime}}_{{m},{k}}=\sum_{n}\frac{\theta^{{}^{\prime}}_{{m,k}}}{\sum_{j}\theta^{{}^{\prime}}_{{m},j}}+\alpha^{\text{prior}}_{{m},{k}}.$ (66)

#### 5.2.2 Message to a child node $W_{m,n}$

The message from a topic-document node $Z_{m,n}$ to a word-topic node $W_{{m},n}$ is a parent to child message. Based on Equation 8, the message is,

$\bm{\mu}^{\text{p2c}}_{{Z}_{{m},n}\rightarrow{W}_{{m},n}}=\left<\bm{T}({Z}_{{m},n})\right>_{p({Z}_{{m},n})}.$ (67)

Equation 6 defines the log-partition function that can be used to calculate this moment using Equation 9. To do this we need to re-parameterise the natural parameter vector from Equation 36.
This is shown below for a single word in a single topic $k$,

$\displaystyle\eta_{k}\equiv\ln\theta^{\prime}_{k},\quad\therefore\theta^{{}^{\prime}}_{k}=e^{\eta_{k}},\quad\therefore A(\bm{\eta})=\ln(\sum_{k}e^{\eta_{k}}).$

From this we can calculate the expected sufficient statistics using Equation 4 for a specific topic ${k}$:

$\displaystyle<\llbracket Z_{{m},n}=k\rrbracket>_{p({Z}_{{m},n})}=\frac{\partial}{\partial\eta_{k}}A(\bm{\eta})=\frac{e^{\eta_{k}}}{\sum_{j}e^{\eta_{j}}}=\frac{\theta^{{}^{\prime}}_{{m},{k}}}{\sum_{j}\theta^{{}^{\prime}}_{{m},{j}}}=\theta^{*}_{{m},{{k}}},\quad\text{where the $\theta^{*}$'s are normalised}.$ (68)

Each $\theta^{*}_{{m},{k}}$ is a normalised topic proportion for a single topic. We can write the full parent to child message for a word within a topic as,

$\bm{\mu}^{p2c}_{{Z}_{m,n}\rightarrow{W}_{{m},n}}=\left[\begin{array}{c}\theta^{*}_{{m},1}\\ \vdots\\ \theta^{*}_{{m},K}\end{array}\right],$ (69)

with,

$\theta^{*}_{{m},k}=\frac{\theta^{{}^{\prime}}_{{m},{k}}}{\sum_{j}\theta^{{}^{\prime}}_{{m},{j}}}.$ (70)

These updated topic proportions are then used at the word-topic node ${W}_{{m},n}$.

### 5.3 Word-topic conditional categorical nodes $W_{m,n}$

Initially, the $n$th word of a document is described by $K$ word-topic distributions. We call this a conditional categorical distribution; for a topic ${k}$ this reduces to a single categorical distribution. For all $K$ we can write,

$\displaystyle\ln\text{Cat}({W}_{{m},n}\mid{Z}_{{m},n},\bm{\Phi})=\left[\begin{array}{c}\sum_{k}\llbracket Z_{{m},n}={k}\rrbracket\ln\phi_{k,1}\\ \sum_{k}\llbracket Z_{{m},n}={k}\rrbracket\ln\phi_{k,2}\\ \vdots\\ \sum_{k}\llbracket Z_{{m},n}={k}\rrbracket\ln\phi_{k,V}\end{array}\right]^{T}\left[\begin{array}{c}\llbracket W_{{m},n}=1\rrbracket\\ \llbracket W_{{m},n}=2\rrbracket\\ \vdots\\ \llbracket W_{{m},n}=V\rrbracket\end{array}\right]-\sum_{k}\llbracket Z_{{m},n}={k}\rrbracket\ln(\sum_{v}\phi_{k,v}),$ (79)

where the vocabulary over all words ranges from $1$ to $V$.

#### 5.3.1 Message to a categorical parent node $Z_{m,n}$

Each word-topic node ${W}_{{m},{n}}$ is a child of a topic-document node ${Z}_{m,n}$; we therefore need to send child to parent messages between each pair of nodes. We can rewrite Equation 79 in terms of ${Z}_{m,n}$ to give,

$\displaystyle\ln\text{Cat}({W}_{{m},{n}}\mid{Z}_{{m},{n}},\bm{\Phi})=\left[\begin{array}{c}\sum_{v}\llbracket W_{m,n}=v\rrbracket\ln\phi_{1,v}\\ \vdots\\ \sum_{v}\llbracket W_{m,n}=v\rrbracket\ln\phi_{k,v}\\ \vdots\\ \sum_{v}\llbracket W_{m,n}=v\rrbracket\ln\phi_{K,v}\end{array}\right]^{T}\left[\begin{array}{c}\llbracket{Z}_{m,n}=1\rrbracket\\ \vdots\\ \llbracket{Z}_{m,n}=k\rrbracket\\ \vdots\\ \llbracket{Z}_{m,n}=K\rrbracket\end{array}\right].$ (90)

After observing the word $\mathsf{v}$, Equation 90 reduces to a categorical form (this is always the case in standard LDA). Using Equation 68, we can then write the word-topic child to parent message as,

$\displaystyle\bm{\widetilde{\mu}}^{\text{c2p}}_{{W}_{m,n}\rightarrow{Z}_{m,n}}=\left[\begin{array}{c}\ln{\phi}_{1,\mathsf{v}}\\ \vdots\\ \ln{\phi}_{k,\mathsf{v}}\\ \vdots\\ \ln{\phi}_{K,\mathsf{v}}\end{array}\right],$ (96)

where $\mathsf{v}$ is the observed word, and the message is unnormalised.
### 5.3 Word-topic conditional categorical nodes $W_{m,n}$

Initially, the $n$th word of a document is described by $K$ word-topic distributions. We call this a conditional categorical distribution; for a topic ${k}$ this reduces to a single categorical distribution. For all $K$ we can write, $\displaystyle\ln\text{Cat}({W}_{{m},n}\mid{Z}_{{m},n},\bm{\Phi})$ $\displaystyle=\left[\begin{array}[]{c}\sum_{k}\llbracket Z_{{m},n}={k}\rrbracket\ln\phi_{k,1}\\\ \sum_{k}\llbracket Z_{{m},n}={k}\rrbracket\ln\phi_{k,2}\\\ \vdots\\\ \sum_{k}\llbracket Z_{{m},n}={k}\rrbracket\ln\phi_{k,V}\end{array}\right]^{T}\left[\begin{array}[]{c}\llbracket W_{{m},n}=1\rrbracket\\\ \llbracket W_{{m},n}=2\rrbracket\\\ \vdots\\\ \llbracket W_{{m},n}=V\rrbracket\end{array}\right]$ (79) $\displaystyle-\sum_{k}\llbracket Z_{{m},n}={k}\rrbracket\ln(\sum_{v}\phi_{k,v}),$ where the vocabulary over all words ranges from $1$ to $V$.

#### 5.3.1 Message to a categorical parent node $Z_{m,n}$

Each word-topic node ${W}_{{m},{n}}$ is a child of a topic-document node ${Z}_{m,n}$; we therefore need to send child to parent messages between each pair of nodes. We can rewrite Equation 79 in terms of ${Z}_{m,n}$ to give, $\displaystyle\ln\text{Cat}({W}_{{m},{n}}\mid{Z}_{{m},{n}},\bm{\Phi})$ $\displaystyle=\left[\begin{array}[]{c}\sum_{v}\llbracket W_{m,n}=v\rrbracket\ln\phi_{1,v}\\\ \vdots\\\ \sum_{v}\llbracket W_{m,n}=v\rrbracket\ln\phi_{k,v}\\\ \vdots\\\ \sum_{v}\llbracket W_{m,n}=v\rrbracket\ln\phi_{K,v}\end{array}\right]^{T}\left[\begin{array}[]{c}\llbracket{Z}_{m,n}=1\rrbracket\\\ \vdots\\\ \llbracket{Z}_{m,n}=k\rrbracket\\\ \vdots\\\ \llbracket{Z}_{m,n}=K\rrbracket\end{array}\right].$ (90) After observing the word $\mathsf{v}$, Equation 90 reduces to a categorical form (this is always the case in standard LDA). Using Equation 68, we can then write the word-topic child to parent message as, $\displaystyle\bm{\widetilde{\mu}}^{\text{c2p}}_{{W}_{m,n}\rightarrow{Z}_{m,n}}$ $\displaystyle=\left[\begin{array}[]{c}\ln{\phi}_{1,\mathsf{v}}\\\ \vdots\\\ \ln{\phi}_{k,\mathsf{v}}\\\ \vdots\\\ \ln{\phi}_{K,\mathsf{v}}\end{array}\right],$ (96) where $\mathsf{v}$ is the observed word, and the message is unnormalised. This is because we have taken a slice through the word-topic distributions for a specific word, which means that the result is not a true distribution. We normalise the message to obtain the topic proportions (for each word in each document), which gives, $\displaystyle\bm{{\mu}}^{\text{c2p}}_{{W}_{m,n}\rightarrow{Z}_{m,n}}$ $\displaystyle=\left[\begin{array}[]{c}\ln\phi^{*}_{1,\mathsf{v}}\\\ \vdots\\\ \ln\phi^{*}_{k,\mathsf{v}}\\\ \vdots\\\ \ln\phi^{*}_{K,\mathsf{v}}\end{array}\right],$ (102) with, $\phi^{*}_{k,\mathsf{v}}=\frac{\phi_{k,\mathsf{v}}}{\sum_{j}\phi_{j,\mathsf{v}}}.$ (103) To determine the updated document topic proportions we update the natural parameter vector by adding these topic weightings to the current document topic proportions, $\bm{\eta}^{\prime}_{{Z}_{m,n}}=\left[\begin{array}[]{c}\ln\theta_{{m},1}+\ln\phi^{*}_{1,\mathsf{v}}\\\ \vdots\\\ \ln\theta_{{m},K}+\ln\phi^{*}_{K,\mathsf{v}}\end{array}\right].$ (104)

#### 5.3.2 Message to a Dirichlet parent node $\bm{\phi}_{k}$

To send child to parent messages from a word-topic node ${W}_{{m},n}$ to each word-topic Dirichlet node $\bm{\phi}_{k}$, re-parameterisation is required. After re-parameterisation in terms of $\bm{\phi}_{{k}}$ we have, $\displaystyle\ln\text{Cat}({W}_{{m},n}\mid{Z}_{{m},n},\bm{\phi}_{{k}})$ $\displaystyle=\left[\begin{array}[]{c}\llbracket{Z}_{{m},n}={k}\rrbracket\llbracket W_{{m},n}=1\rrbracket\\\ \vdots\\\ \llbracket{Z}_{{m},n}={k}\rrbracket\llbracket W_{{m},n}=v\rrbracket\\\ \vdots\\\ \llbracket{Z}_{{m},n}={k}\rrbracket\llbracket W_{{m},n}=V\rrbracket\end{array}\right]^{T}\left[\begin{array}[]{c}\ln\phi_{k,1}\\\ \vdots\\\ \ln\phi_{k,v}\\\ \vdots\\\ \ln\phi_{k,V}\end{array}\right]$ (115) $\displaystyle+\text{ terms involving }\ln\bm{\phi}_{{Z}_{{m},n}\neq{k}},$ where $\phi_{k,\mathsf{v}}$ is the proportion of word $\mathsf{v}$ in topic ${k}$. The messages from one of these categorical beliefs to $\bm{\phi}_{k}$ can then be written as, $\displaystyle\bm{\mu}^{\text{c2p}}_{W_{{m},{n}}\rightarrow\bm{\phi}_{k}}$ $\displaystyle=\left<\varphi({W_{{m},{n}},Z_{{m},n}=k})\right>_{p(W_{{m},{n}},Z_{m,n}=k)}$ $\displaystyle=\left<\llbracket Z_{{m},n}=k\rrbracket\llbracket{W}_{{m},{n}}=v\rrbracket\right>_{p({W}_{{m},{n}},Z_{{m},n}=k)}$ $\displaystyle=\left<\llbracket Z_{{m},n}=k\rrbracket\right>_{p(Z_{{m},n}=k)},\text{ because $W_{{m},{n}}$ is observed}.$ Note that because ${W}_{m,n}$ is observed, the values in the vector for all entries except where ${W}_{{m},{n}}=\mathsf{v}$ are zero. To update $\bm{\phi}_{{k}}$, we simply add all the incoming messages to the respective $\bm{\beta}_{k}$ values. For a specific word in the vocabulary ${v}$ this would be, $\displaystyle\beta^{\prime}_{k,{v}}$ $\displaystyle=\sum_{m}\sum_{n:W_{m,n}={v}}\theta^{*}_{{m},k}+\beta^{\text{prior}}_{k,{v}},$ (117) with $\theta^{*}_{{m},k}$ denoting the normalised probability of topic $k$ for word $n$ of document $m$. We can see that the scaled topic proportions for word $\mathsf{v}$ are simply added to the respective word’s word-topic Dirichlet’s parameters.
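Putting the messages of this section together, the following is a minimal sketch of processing a single observed word: it normalises the sliced word-topic column (Equation 103), updates the natural parameters of $Z_{m,n}$ (Equation 104), and accumulates the resulting responsibilities into the observed word's Dirichlet column (Equation 117). All names are illustrative, and the prior is assumed to be contained in `beta` already.

```python
import numpy as np

# Sketch of Eqs. (102)-(104) and (117) for one observed word v in document m.
# phi: (K, V) word-topic proportions; log_theta_m: (K,) current ln theta_{m,k};
# beta: (K, V) word-topic Dirichlet parameters (prior already included).
def observe_word(log_theta_m, phi, beta, v):
    col = phi[:, v]
    phi_star = col / col.sum()            # Eq. (103): normalise over topics
    eta = log_theta_m + np.log(phi_star)  # Eq. (104): updated natural params
    resp = np.exp(eta - eta.max())
    resp /= resp.sum()                    # normalised topic responsibilities
    beta[:, v] += resp                    # Eq. (117): update word v's column
    return eta
```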
### 5.4 Word-topic Dirichlet nodes $\bm{\phi}_{k}$

The word-topic distribution factors are of Dirichlet form. For the entire graph, we have: $\bm{\Phi}=\\{\bm{\phi}_{1},...,\bm{\phi}_{K}\\}$. For each topic $k$ we write, $\displaystyle\ln\text{Dir}(\bm{\phi}_{{k}};\bm{\beta}_{{k}})$ $\displaystyle=\left[\begin{array}[]{c}\beta_{{k},1}-1\\\ \beta_{{k},2}-1\\\ \vdots\\\ \beta_{{k},V}-1\end{array}\right]^{T}\left[\begin{array}[]{c}\ln\phi_{{k},1}\\\ \ln\phi_{{k},2}\\\ \vdots\\\ \ln\phi_{{k},V}\end{array}\right]+\ln\frac{\Gamma(\sum_{v}\beta_{k,v})}{\prod_{v}\Gamma(\beta_{{k},v})}.$ (126)

#### 5.4.1 Message to a child node $W_{m,n}$

These messages are very similar to the ones on the topic-document side of the graph since they are also parent to child messages with each parent having a Dirichlet form. For each topic ${k}$ the messages sent to all word-topic nodes $W_{{m},{n}}$ (one for each word in each topic) will be identical, $\bm{\mu}^{\text{p2c}}_{\bm{\phi}_{{k}}\rightarrow W_{m,n}}=\left[\begin{array}[]{c}\psi(\beta_{{k},1})-\psi(\sum_{v}\beta_{{k},v})\\\ \vdots\\\ \psi(\beta_{{k},V})-\psi(\sum_{v}\beta_{{k},v})\end{array}\right].$ (127) The additional complexity comes from the fact that the child nodes $W_{{m},{n}}$ need to assimilate messages from $K$ Dirichlet distributions and not only from one, as in the topic-document side of the graph. We now perform an update similar to the one seen in Equation 27, except that we have $K$ messages added instead of only one. For each ${k}$ we have, $\bm{\eta}^{\prime}_{\bm{\phi}_{{k}}}=\left[\begin{array}[]{c}\psi(\beta_{k,1})-\psi(\sum_{v}\beta_{k,v})\\\ \vdots\\\ \psi(\beta_{{k},V})-\psi(\sum_{v}\beta_{{k},v})\end{array}\right]=\left[\begin{array}[]{c}\ln\phi^{\prime}_{k,1}\\\ \vdots\\\ \ln\phi^{\prime}_{k,V}\end{array}\right],$ (128) where $\phi^{\prime}$ denotes the updated values. We have now presented the VMP message updates for each node in the LDA graphical model. Using these messages, VMP for LDA can be implemented using a range of message passing schedules. In the next section we provide one such message passing schedule.

## 6 Message passing schedule

Because LDA has a simple, known structure per document, it is sensible to construct a fixed message passing schedule. This is not always the case for graphical models; in some cases one may instead base the schedule on message priority, using divergence measures to prioritise messages according to their expected impact [43, 44, 45, 46]. In Algorithm 2, we present our proposed VMP message passing schedule for LDA. It is based on the message passing schedule of the approximate loopy belief update (ALBU) VMP implementation in [46], which uses a form of belief propagation [47, 42, pp. 364-366]. There are, of course, many other variants that one could use.

Algorithm 2 Message passing schedule for LDA

For each epoch:
* For each document $\bm{m}$:
  * For each word $\bm{n}$ in document $\bm{m}$:
    - send messages from each node $\bm{\phi}_{k}$ to node $W_{m,n}$
    - observe word $W_{m,n}=\mathsf{v}$
    - send message from node $W_{m,n}$ to node ${Z_{m,n}}$
    - send message from node $Z_{m,n}$ to $\bm{\theta}_{m}$
  * For each word $\bm{n}$ in document $\bm{m}$:
    - send message from node $\bm{\theta}_{m}$ to node $Z_{m,n}$
    - send message from node $Z_{m,n}$ to node $W_{m,n}$
* For each word $n$ in each document $m$:
  - send messages from node ${W_{m,n}}$ to each $\bm{\phi}_{k}$

Based on this schedule, as well as the message passing equations provided in Section 4, VMP can be implemented for LDA.
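To connect the schedule to the message equations, the following is a rough Python outline of one epoch; `dirichlet_to_child` implements the digamma message of Equations 127 and 128 (and its topic-document analogue), while `corpus`, `alpha`, and `beta` are hypothetical stand-ins, so this is a sketch of the control flow rather than a complete VMP engine.

```python
import numpy as np
from scipy.special import digamma

# Digamma message of Eqs. (127)-(128): psi(param_v) - psi(sum_v param_v),
# read off directly as updated log proportions.
def dirichlet_to_child(params):
    return digamma(params) - digamma(params.sum())

# Rough outline of one epoch of Algorithm 2 (hypothetical names throughout).
def vmp_epoch(corpus, alpha, beta, alpha_prior):
    K = beta.shape[0]
    for m, doc in enumerate(corpus):                   # documents
        alpha[m] = alpha_prior.copy()
        for v in doc:                                  # observed word ids
            log_phi = np.stack([dirichlet_to_child(beta[k]) for k in range(K)])
            log_theta = dirichlet_to_child(alpha[m])   # topic-document message
            eta = log_theta + log_phi[:, v]            # Eqs. (102)-(104)
            resp = np.exp(eta - eta.max())
            resp /= resp.sum()                         # Eq. (68)
            alpha[m] += resp                           # Eqs. (64)-(66)
    # The word-topic updates (Eq. (117)) follow the same pattern and are
    # omitted here for brevity.
```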
## 7 Conclusion and future work

VMP, an elegant and tractable solution to inference problems, has not been presented in detail for the standard, smoothed LDA graphical model, which is surprising in view of its speed and ease of use. In this article, we provided an introduction to variational message passing (VMP), the message passing equivalent of VB. We presented the generic VMP algorithm and then applied VMP to the LDA graphical model. Finally, we proposed a message passing schedule for VMP for LDA. For future work, we recommend that VMP and VB be compared in terms of execution time for LDA, and that alternative message passing schedules be investigated to improve execution time and convergence rate. We also recommend that the VMP equations for other, similar graphical models be derived and published in a similar manner.

## References

* [1] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003.
* [2] Hamed Jelodar, Yongli Wang, Chi Yuan, Xia Feng, Xiahui Jiang, Yanchao Li, and Liang Zhao. Latent Dirichlet allocation (LDA) and topic modeling: models, applications, a survey. Multimedia Tools and Applications, 78(11):15169–15211, 2019.
* [3] Ling Liu and Zijiang Yang. Identifying fraudulent online transactions using data mining and statistical techniques. In 2012 7th International Conference on Computing and Convergence Technology (ICCCT), pages 321–324. IEEE, 2012.
* [4] Jonathan K Pritchard, Matthew Stephens, and Peter Donnelly. Inference of population structure using multilocus genotype data. Genetics, 155(2):945–959, 2000.
* [5] Hima Bindu Yalamanchili, Soon Jye Kho, and Michael L Raymer. Latent Dirichlet allocation for classification using gene expression data. In 2017 IEEE 17th International Conference on Bioinformatics and Bioengineering (BIBE), pages 39–44. IEEE, 2017.
* [6] Daniel Backenroth, Zihuai He, Krzysztof Kiryluk, Valentina Boeva, Lynn Pethukova, Ekta Khurana, Angela Christiano, Joseph D Buxbaum, and Iuliana Ionita-Laza. Fun-LDA: a latent Dirichlet allocation model for predicting tissue-specific functional effects of noncoding variation: methods and applications. The American Journal of Human Genetics, 102(5):920–942, 2018.
* [7] Li Fei-Fei and Pietro Perona. A Bayesian hierarchical model for learning natural scene categories. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), volume 2, pages 524–531. IEEE, 2005.
* [8] Liangliang Cao and Li Fei-Fei. Spatially coherent latent topic model for concurrent segmentation and classification of objects and scenes. In 2007 IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007.
* [9] Aakansha Gupta and Rahul Katarya. Pan-LDA: A latent Dirichlet allocation based novel feature extraction model for COVID-19 data using machine learning. Computers in Biology and Medicine, page 104920, 2021.
* [10] Reva Joshi, Ritu Prasad, Pradeep Mewada, and Praneet Saurabh. Modified LDA approach for cluster based gene classification using k-mean method. Procedia Computer Science, 171:2493–2500, 2020.
* [11] M Selvi, K Thangaramya, MS Saranya, K Kulothungan, S Ganapathy, and A Kannan. Classification of medical dataset along with topic modeling using LDA. In Nanoelectronics, Circuits and Communication Systems, pages 1–11. Springer, 2019.
* [12] Andrés R Masegosa, Ana M Martínez, Helge Langseth, Thomas D Nielsen, Antonio Salmerón, Darío Ramos-López, and Anders L Madsen.
Scaling up Bayesian variational inference using distributed computing clusters. International Journal of Approximate Reasoning, 88:435–451, 2017.
* [13] Fuad Alattar and Khaled Shaalan. Emerging research topic detection using filtered-LDA. AI, 2(4):578–599, 2021.
* [14] Mark Steyvers, Padhraic Smyth, Michal Rosen-Zvi, and Thomas Griffiths. Probabilistic author-topic models for information discovery. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 306–315, 2004.
* [15] Jonathan Chang and David Blei. Relational topic models for document networks. In Artificial Intelligence and Statistics, pages 81–88. PMLR, 2009.
* [16] David M Blei and John D Lafferty. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning, pages 113–120, 2006.
* [17] Xiaogang Wang and Eric Grimson. Spatial latent Dirichlet allocation. In NIPS, volume 20, pages 1577–1584, 2007.
* [18] David Blei, Andrew Ng, and Michael Jordan. Latent Dirichlet allocation. In T. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems, volume 14. MIT Press, 2002.
* [19] David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
* [20] Christopher M Bishop. Pattern recognition and machine learning. Springer, 2006.
* [21] Martin J Wainwright, Michael I Jordan, et al. Graphical models, exponential families, and variational inference. Foundations and Trends® in Machine Learning, 1(1–2):1–305, 2008.
* [22] Hagai Attias. A variational Bayesian framework for graphical models. In Advances in Neural Information Processing Systems, pages 209–215, 2000.
* [23] Arthur Asuncion, Max Welling, Padhraic Smyth, and Yee Whye Teh. On smoothing and inference for topic models. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 27–34. AUAI Press, 2009.
* [24] Michael Braun and Jon McAuliffe. Variational inference for large-scale models of discrete choice. Journal of the American Statistical Association, 105(489):324–335, 2010.
* [25] Thomas P Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 362–369. Morgan Kaufmann Publishers Inc., 2001.
* [26] Thomas Peter Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
* [27] Thomas Minka. Power EP. Dept. of Statistics, Carnegie Mellon University, Pittsburgh, PA, Tech. Rep., 2004.
* [28] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
* [29] T. Minka, J.M. Winn, J.P. Guiver, Y. Zaykov, D. Fabian, and J. Bronskill. Infer.NET 0.3, 2018. Microsoft Research Cambridge.
http://dotnet.github.io/infer.
* [30] Chong Wang, John Paisley, and David Blei. Online variational inference for the hierarchical Dirichlet process. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 752–760. JMLR Workshop and Conference Proceedings, 2011.
* [31] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(5), 2013.
* [32] Issei Sato and Hiroshi Nakagawa. Rethinking collapsed variational Bayes inference for LDA. arXiv preprint arXiv:1206.6435, 2012.
* [33] Matthew D Hoffman and David M Blei. Structured stochastic variational inference. In Artificial Intelligence and Statistics, pages 361–369, 2015.
* [34] John Michael Winn. Variational message passing and its applications. PhD thesis, University of Cambridge, 2004.
* [35] John Winn and Christopher M Bishop. Variational message passing. Journal of Machine Learning Research, 6(Apr):661–694, 2005.
* [36] David A Knowles and Tom Minka. Non-conjugate variational message passing for multinomial and binary regression. In Advances in Neural Information Processing Systems, pages 1701–1709, 2011.
* [37] Thomas P Minka. Private communication, April 2019.
* [38] Andrés R Masegosa, Ana M Martínez, Helge Langseth, Thomas D Nielsen, Antonio Salmerón, Darío Ramos-López, and Anders L Madsen. D-VMP: distributed variational message passing. In Conference on Probabilistic Graphical Models, pages 321–332. PMLR, 2016.
* [39] Nuha Zamzami and Nizar Bouguila. Sparse count data clustering using an exponential approximation to generalized Dirichlet multinomial distributions. IEEE Transactions on Neural Networks and Learning Systems, 2020.
* [40] Nuha Zamzami and Nizar Bouguila. High-dimensional count data clustering based on an exponential approximation to the multinomial beta-Liouville distribution. Information Sciences, 524:116–135, 2020.
* [41] Yijun Pan, Zeyu Zheng, and Dianzheng Fu. Bayesian-based anomaly detection in the industrial processes. IFAC-PapersOnLine, 53(2):11729–11734, 2020.
* [42] Daphne Koller, Nir Friedman, and Francis Bach. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
* [43] Daniek Brink. Using probabilistic graphical models to detect dynamic objects for mobile robots. 2016.
* [44] Everhard Johann Louw. A probabilistic graphical model approach to multiple object tracking. 2018.
* [45] Simon Streicher and Johan du Preez. Strengthening probabilistic graphical models: The purge-and-merge algorithm. IEEE Access, 2021.
* [46] Rebecca Taylor and Johan A du Preez. ALBU: An approximate loopy belief message passing algorithm for LDA to improve performance on small data sets. arXiv preprint arXiv:2110.00635, 2021.
* [47] Steffen L Lauritzen and David J Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society: Series B (Methodological), 50(2):157–194, 1988.
# Using Social Dynamics to Make Individual Predictions: Variational Inference with a Stochastic Kinetic Model

Zhen Xu, Wen Dong, and Sargur Srihari
Department of Computer Science and Engineering, University at Buffalo
<EMAIL_ADDRESS>

###### Abstract

Social dynamics is concerned primarily with interactions among individuals and the resulting group behaviors, modeling the temporal evolution of social systems via the interactions of individuals within these systems. In particular, the availability of large-scale data from social networks and sensor networks offers an unprecedented opportunity to predict state-changing events at the individual level. Examples of such events include disease transmission, opinion transition in elections, and rumor propagation. Unlike previous research focusing on the collective effects of social systems, this study makes efficient inferences at the individual level. In order to cope with dynamic interactions among a large number of individuals, we introduce the stochastic kinetic model to capture adaptive transition probabilities and propose an efficient variational inference algorithm the complexity of which grows _linearly_, rather than exponentially, with the number of individuals. To validate this method, we have performed epidemic-dynamics experiments on wireless sensor network data collected from more than ten thousand people over three years. The proposed algorithm was used to track disease transmission and predict the probability of infection for each individual. Our results demonstrate that this method is more efficient than sampling while nonetheless achieving high accuracy.

## 1 Introduction

The field of social dynamics is concerned primarily with interactions among individuals and the resulting group behaviors. Research in social dynamics models the temporal evolution of social systems via the interactions of the individuals within these systems [8]. For example, opinion dynamics can model the opinion state transitions of an entire population in an election scenario [3], and epidemic dynamics can predict disease outbreaks ahead of time [9]. While traditional social-dynamics models focus primarily on the macroscopic effects of social systems, often we instead wish to know the answers to more specific questions. Given the movement and behavior history of a subject with Ebola, can we tell how many people should be tested or quarantined? City-size quarantine is not necessary, but family-size quarantine is insufficient. We aim to develop a method to evaluate the paths of illness transmission and the risks of infection for _individuals_, so that limited medical resources can be most efficiently distributed. The rapid growth of both social networks and sensor networks offers an unprecedented opportunity to collect abundant data at the individual level. From these data we can extract temporal interactions among individuals, such as meeting or taking the same class. To take advantage of this opportunity, we model social dynamics from an individual perspective. Although such an approach has considerable potential, in practice it is difficult to model the dynamic interactions and handle the costly computations when a large number of individuals are involved. In this paper, we introduce an event-based model into social systems to characterize their temporal evolutions and make tractable inferences on the individual level.
Our research on the temporal evolutions of social systems is related to dynamic Bayesian networks and continuous time Bayesian networks [20, 12, 17]. Traditionally, a coupled hidden Markov model is used to capture the interactions of components in a system [2], but this model does not consider dynamic interactions. However, a stochastic kinetic model is capable of successfully describing the interactions of molecules (such as collisions) in chemical reactions [21, 11], and is widely used in many fields such as chemistry and cell biology [1, 10]. We introduce this model into social dynamics and use it to focus on individual behaviors. A challenge in capturing the interactions of individuals is that in social dynamics the state space grows exponentially with the number of individuals, which makes exact inference intractable. To resolve this we must apply approximate inference methods. One class of these involves sampling-based methods. Rao and Teh introduce a Gibbs sampler based on local updates [19], while Murphy and Russell introduce Rao-Blackwellized particle filtering for dynamic Bayesian networks [16]. However, sampling-based methods sometimes mix slowly and require a large number of samples/particles. To demonstrate this issue, we offer empirical comparisons with two major sampling methods in Section 4. An alternative class of approximations is based on variational inference. Opper and Sanguinetti apply the variational mean field approach to factor a Markov jump process [18], and Cohn and El-Hay further improve its efficiency by exploiting the structure of the target network [4]. A problem is that in an event-based model such as a stochastic kinetic model (SKM), the variational mean field is not applicable when a single event changes the states of two individuals simultaneously. Here, we use a general expectation propagation principle [13] to design our algorithm.

This paper makes three contributions: First, we introduce the discrete event model into social dynamics and make tractable inferences on both individual behaviors and collective effects. To this end, we apply the stochastic kinetic model to define adaptive transition probabilities that characterize the dynamic interaction patterns in social systems. Second, we design an efficient variational inference algorithm whose computational complexity grows linearly with the number of individuals. As a result, it scales very well in large social systems. Third, we conduct experiments on epidemic dynamics to demonstrate that our algorithm can track the transmission of epidemics and predict the probability of infection for each individual. Further, we demonstrate that the proposed method is more efficient than sampling while nonetheless achieving high accuracy.

The remainder of this paper is organized as follows. In Section 2, we briefly review the coupled hidden Markov model and the stochastic kinetic model. In Section 3, we propose applying a variational algorithm with the stochastic kinetic model to make tractable inferences in social dynamics. In Section 4, we detail empirical results from applying the proposed algorithm to our epidemic data along with the proximity data collected from sensor networks. Section 5 concludes.

## 2 Background

### 2.1 Coupled Hidden Markov Model

A coupled hidden Markov model (CHMM) captures the dynamics of a discrete time Markov process that joins a number of distinct hidden Markov models (HMMs), as shown in Figure 1(a).
$\mathbf{x}_{t}=(x_{t}^{(1)},\dots,x_{t}^{(M)})$ defines the hidden states of all HMMs at time $t$, and $x_{t}^{(m)}$ is the hidden state of HMM $m$ at time $t$. $\mathbf{y}_{t}=(y_{t}^{(1)},\dots,y_{t}^{(M)})$ are observations of all HMMs at time $t$, and $y_{t}^{(m)}$ is the observation of HMM $m$ at time $t$. $P(\mathbf{x}_{t}|\mathbf{x}_{t-1})$ are transition probabilities, and $P(\mathbf{y}_{t}|\mathbf{x}_{t})$ are emission probabilities for the CHMM. Given hidden states, all observations are independent. As such, $P(\mathbf{y}_{t}|\mathbf{x}_{t})=\prod_{m}P(y_{t}^{(m)}|x_{t}^{(m)})$, where $P(y_{t}^{(m)}|x_{t}^{(m)})$ is the emission probability for HMM $m$ at time $t$. The joint probability of the CHMM can be defined as follows: $P\left(\mathbf{x}_{1,\dots,T},\mathbf{y}_{1,\dots,T}\right)=\prod_{t=1}^{T}P(\mathbf{x}_{t}|\mathbf{x}_{t-1})P(\mathbf{y}_{t}|\mathbf{x}_{t}).$ (1) For a CHMM that contains $M$ HMMs with binary states, the state space is $2^{M}$, and the state transition kernel is a $2^{M}\times 2^{M}$ matrix. In order to make exact inferences, the classic forward-backward algorithm sweeps a forward/filtering pass to compute the forward statistics $\alpha_{t}(\mathbf{x}_{t})=P(\mathbf{x}_{t}|\mathbf{y}_{1,\dots,t})$ and a backward/smoothing pass to estimate the backward statistics $\beta_{t}(\mathbf{x}_{t})=\frac{P(\mathbf{y}_{t+1,\dots,T}|\mathbf{x}_{t})}{P(\mathbf{y}_{t+1,\dots,T}|\mathbf{y}_{1,\dots,t})}$. Then it can estimate the one-slice statistics $\gamma_{t}(\mathbf{x}_{t})=P(\mathbf{x}_{t}|\mathbf{y}_{1,\dots,T})=\alpha_{t}(\mathbf{x}_{t})\beta_{t}(\mathbf{x}_{t})$ and two-slice statistics $\xi_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t})=P(\mathbf{x}_{t-1},\mathbf{x}_{t}|\mathbf{y}_{1,\dots,T})=\frac{\alpha_{t-1}(\mathbf{x}_{t-1})P(\mathbf{x}_{t}|\mathbf{x}_{t-1})P(\mathbf{y}_{t}|\mathbf{x}_{t})\beta_{t}(\mathbf{x}_{t})}{P(\mathbf{y}_{t}|\mathbf{y}_{1,\dots,t-1})}$. Its complexity grows exponentially with the number of HMM chains. In order to make tractable inferences, certain factorizations and approximations must be applied. In the next section, we introduce a stochastic kinetic model to lower the dimensionality of transition probabilities.

Figure 1: Illustration of (a) Coupled Hidden Markov Model, (b) Stochastic Kinetic Model.

### 2.2 The Stochastic Kinetic Model

A stochastic kinetic model describes the temporal evolution of a chemical system with $M$ species $\mathcal{X}=\\{X_{1},X_{2},\cdots,X_{M}\\}$ driven by $V$ events (or chemical reactions) parameterized by rate constants $\mathbf{c}=(c_{1},\dots,c_{V})$. An event (chemical reaction) $k$ has the general form $\displaystyle r_{1}X_{1}+\cdots+r_{M}X_{M}\overset{c_{k}}{\longrightarrow}p_{1}X_{1}+\cdots+p_{M}X_{M}.$ The species on the left are called _reactants_, and $r_{m}$ is the number of $m$th reactant molecules consumed during the reaction. The species on the right are called _products_, and $p_{m}$ is the number of $m$th product molecules produced in the reaction. Species involved in the reaction ($r_{m}>0$) without consumption or production ($r_{m}=p_{m}$) are called _catalysts_. At any specific time $t$, the populations of the species are $\mathbf{x_{t}}=(x_{t}^{(1)},\dots,x_{t}^{(M)})$.
An event $k$ happens with rate $h_{k}(\mathbf{x_{t}},c_{k})$, determined by the rate constant and the current population state [21]: $\displaystyle h_{k}(\mathbf{x_{t}},c_{k})=$ $\displaystyle c_{k}g_{k}(\mathbf{x_{t}})=c_{k}\prod_{m=1}^{M}g_{k}^{(m)}(x_{t}^{(m)}).$ (2) The form of $g_{k}(\mathbf{x_{t}})$ depends on the reaction. In our case, we adopt the product form $\prod_{m=1}^{M}g_{k}^{(m)}(x_{t}^{(m)})$, which represents the total number of ways that reactant molecules can be selected to trigger event $k$ [21]. Event $k$ changes the populations by $\mathbf{\Delta_{k}}=\mathbf{x}_{t}-\mathbf{x}_{t-1}$. The probability that event $k$ will occur during time interval $(t,t+dt]$ is $h_{k}(\mathbf{x_{t}},c_{k})dt$. We assume at each discrete time step that no more than one event will occur. This assumption follows the linearization principle in the literature [17], and is valid when the discrete time step is small. We treat each discrete time step as a unit of time, so that $h_{k}(\mathbf{x_{t}},c_{k})$ represents the probability of an event. In epidemic modeling, for example, an infection event $v_{i}$ has the form $S+I\overset{c_{i}}{\longrightarrow}2I$, such that a susceptible individual ($S$) is infected by an infectious individual ($I$) with rate constant $c_{i}$. If there is only one susceptible individual (type $m=1$) and one infectious individual (type $m=2$) involved in this event, $h_{i}(\mathbf{x_{t}},c_{i})=c_{i}$, $\mathbf{\Delta_{i}}=[-1~{}~{}1]^{T}$ and $P(\mathbf{x}_{t}-\mathbf{x}_{t-1}=\mathbf{\Delta_{i}})=P(\mathbf{x}_{t}|\mathbf{x}_{t-1},v_{i})=c_{i}$. In a traditional hidden Markov model, the transition kernel is typically fixed. In comparison, SKM is better at capturing dynamic interactions in terms of the events with rates dependent on reactant populations, as shown in Eq.(2). ## 3 Variational Inference with the Stochastic Kinetic Model In this section, we define the likelihood of the entire sequence of hidden states and observations for an event-based model, and derive a variational inference algorithm and parameter-learning algorithm. ### 3.1 Likelihood for Event-based Model In social dynamics, we use a discrete time Markov model to describe the temporal evolutions of a set of individuals $x^{(1)},\dots,x^{(M)}$ according to a set of $V$ events. To cope with dynamic interactions, we introduce the SKM and express the state transition probabilities in terms of event probabilities, as shown in Figure 1(b). We assume at each discrete time step that no more than one event will occur. Let $v_{1},\dots,v_{T}$ be a sequence of events, $\mathbf{x_{1}},\dots,\mathbf{x_{T}}$ a sequence of hidden states, and $\mathbf{y_{1}},\dots,\mathbf{y_{T}}$ a set of observations. Similar to Eq.(1), the likelihood of the entire sequence is as follows: $\displaystyle P\left(\mathbf{x}_{1,\dots,T},\mathbf{y}_{1,\dots,T},v_{1,\dots,T}\right)=\prod_{t=1}^{T}P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})P(\mathbf{y}_{t}|\mathbf{x}_{t}),\mbox{ where }$ (3) $\displaystyle P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})=\begin{cases}c_{k}\cdot g_{k}\left(\mathbf{x}_{t-1}\right)\cdot\delta(\mathbf{x}_{t}-\mathbf{x}_{t-1}\equiv\mathbf{\Delta_{k}})&\mbox{if }v_{t}=k\\\ (1-\sum_{k}c_{k}g_{k}\left(\mathbf{x}_{t-1}\right))\cdot\delta(\mathbf{x}_{t}-\mathbf{x}_{t-1}\equiv\mathbf{0})&\mbox{if }v_{t}=\emptyset\end{cases}.$ $P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})$ is the event-based transition kernel. 
$\delta(\mathbf{x}_{t}-\mathbf{x}_{t-1}\equiv\mathbf{\Delta_{k}})$ is 1 if the previous state is $\mathbf{x}_{t-1}$ and the current state is $\mathbf{x}_{t}=\mathbf{x}_{t-1}+\mathbf{\Delta_{k}}$, and 0 otherwise. $\mathbf{\Delta_{k}}$ is the effect of event $v_{k}$. $\emptyset$ represents an auxiliary event, meaning that there is no event. Substituting the product form of $g_{k}$, the transition kernel can be written as follows: $\displaystyle P(\mathbf{x}_{t},v_{t}=k|\mathbf{x}_{t-1})=c_{k}\prod_{m}g_{k}^{(m)}(x_{t-1}^{(m)})\cdot\prod_{m}\delta(x_{t}^{(m)}-x_{t-1}^{(m)}\equiv\Delta_{k}^{(m)}),$ (4) $\displaystyle P(\mathbf{x}_{t},v_{t}=\emptyset|\mathbf{x}_{t-1})=(1-\sum_{k}c_{k}\prod_{m}g_{k}^{(m)}(x_{t-1}^{(m)}))\cdot\prod_{m}\delta(x_{t}^{(m)}-x_{t-1}^{(m)}\equiv 0),$ (5) where $\delta(x_{t}^{(m)}-x_{t-1}^{(m)}\equiv\Delta_{k}^{(m)})$ is 1 if the previous state of an individual $m$ is $x_{t-1}^{(m)}$ and the current state is $x_{t}^{(m)}=x_{t-1}^{(m)}+\Delta_{k}^{(m)}$, and 0 otherwise. ### 3.2 Variational Inference for Stochastic Kinetic Model As noted in Section 2.1, exact inference in social dynamics is intractable due to the formidable state space. However, we can approximate the posterior distribution $P(\mathbf{x}_{1,...,T},v_{1,...,T}|\mathbf{y}_{1,...,T})$ using an approximate distribution within the exponential family. The inference algorithm minimizes the KL divergence between these two distributions, which can be formulated as an optimization problem [13]: $\displaystyle\mbox{Minimize:}\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})\cdot\log\frac{\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})}{P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})P(\mathbf{y}_{t}|\mathbf{x}_{t})}$ (6) $\displaystyle\hskip 180.00027pt-\sum_{t,\mathbf{x}_{t}}\prod_{m}\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})\log\prod_{m}\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})$ $\displaystyle\mbox{Subject to: }\sum_{v_{t},\mathbf{x}_{t-1},\\{\mathbf{x}_{t}\backslash x_{t}^{(m)}\\}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})=\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})\mbox{, for all }t,m,x_{t}^{(m)},$ $\displaystyle\hphantom{\mbox{Subject to: }}\sum_{v_{t},\\{\mathbf{x}_{t-1}\backslash x_{t-1}^{(m)}\\},\mathbf{x}_{t}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})=\hat{\gamma}_{t-1}^{(m)}(x_{t-1}^{(m)})\mbox{, for all }t,m,x_{t-1}^{(m)},~{}$ $\displaystyle\hphantom{\mbox{Subject to: }}\sum_{x_{t}^{(m)}}\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})=1\mbox{, for all }t,m.$ The objective function is the Bethe free energy, composed of average energy and Bethe entropy approximation [22]. $\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})$ is the approximate two- slice statistics and $\hat{\gamma}^{(m)}(x_{t}^{(m)})$ is the approximate one- slice statistics for each individual $m$. They form the approximate distribution over which to minimize the Bethe free energy. The $\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}}$ is an abbreviation for summing over $t$, $\mathbf{x}_{t-1}$, $\mathbf{x}_{t}$, and $v_{t}$. $\sum_{\\{\mathbf{x}_{t}\backslash x_{t}^{(m)}\\}}$ is the sum over all individuals in $\mathbf{x_{t}}$ except $x_{t}^{(m)}$. We use similar abbreviations below. The first two sets of constraints are marginalization conditions, and the third is normalization conditions. 
To solve this constrained optimization problem, we first define the Lagrange function using Lagrange multipliers to weight the constraints, then take the partial derivatives with respect to $\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})$ and $\hat{\gamma}^{(m)}(x_{t}^{(m)})$. The dual problem is to find the approximate forward statistics $\hat{\alpha}_{t-1}^{(m)}(x_{t-1}^{(m)})$ and backward statistics $\hat{\beta}_{t}^{(m)}(x_{t}^{(m)})$ in order to maximize the pseudo-likelihood function. The duality is between minimizing the Bethe free energy and maximizing the pseudo-likelihood. The fixed-point solution for the primal problem is as follows (the derivations for the optimization problem and its solution are given in the Supplemental Material): $\displaystyle\hat{\xi}(x_{t-1}^{(m)},x_{t}^{(m)},v_{t})=\frac{1}{Z_{t}}\sum_{m^{\prime}\neq m,x_{t-1}^{(m^{\prime})},x_{t}^{(m^{\prime})}}{\scriptstyle P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})\cdot\prod_{m}\hat{\alpha}_{t-1}^{(m)}(x_{t-1}^{(m)})\cdot\prod_{m}P(y_{t}^{(m)}|x_{t}^{(m)})\cdot\prod_{m}\hat{\beta}_{t}^{(m)}(x_{t}^{(m)})}.$ (7) $\hat{\xi}(x_{t-1}^{(m)},x_{t}^{(m)},v_{t})$ is the two-slice statistics for an individual $m$, and $Z_{t}$ is the normalization constant. Given the factorized form of $P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})$ in Eqs. (4) and (5), everything in Eq. (7) can be written in a factorized form. After reformulating the terms relevant to individual $m$, $\hat{\xi}(x_{t-1}^{(m)},x_{t}^{(m)},v_{t})$ can be written compactly as follows: $\displaystyle\hat{\xi}_{t}(x_{t-1}^{(m)},x_{t}^{(m)},v_{t})=\frac{1}{Z_{t}}\hat{P}(x_{t}^{(m)},v_{t}|x_{t-1}^{(m)})\cdot\hat{\alpha}_{t-1}^{(m)}(x_{t-1}^{(m)})P(y_{t}^{(m)}|x_{t}^{(m)})\hat{\beta}_{t}^{(m)}(x_{t}^{(m)}),$ (8) where the marginalized transition kernel $\hat{P}(x_{t}^{(m)},v_{t}|x_{t-1}^{(m)})$ for the individual $m$ can be defined as: $\displaystyle\hat{P}(x_{t}^{(m)},v_{t}=k|x_{t-1}^{(m)})={\displaystyle c_{k}g_{k}^{(m)}(x_{t-1}^{(m)})\prod\limits_{m^{\prime}\neq m}\tilde{g}_{k,t-1}^{(m^{\prime})}\cdot\delta(x_{t}^{(m)}-x_{t-1}^{(m)}\equiv\Delta_{k}^{(m)})},$ (9) $\displaystyle\hat{P}(x_{t}^{(m)},v_{t}=\emptyset|x_{t-1}^{(m)})={\scriptstyle{\displaystyle\left(1-\sum\limits_{k}c_{k}g_{k}^{(m)}(x_{t-1}^{(m)})\prod\limits_{m^{\prime}\neq m}\hat{g}_{k,t-1}^{(m^{\prime})}\right)\delta(x_{t}^{(m)}-x_{t-1}^{(m)}\equiv 0),}}$ (10) $\displaystyle{\scriptstyle\tilde{g}_{k,t-1}^{(m^{\prime})}=\sum\limits_{x_{t}^{(m^{\prime})}-x_{t-1}^{(m^{\prime})}\equiv\Delta_{k}^{(m^{\prime})}}\alpha_{t-1}^{(m^{\prime})}(x_{t-1}^{(m^{\prime})})P(y_{t}^{(m^{\prime})}|x_{t}^{(m^{\prime})})\beta_{t}^{(m^{\prime})}(x_{t}^{(m^{\prime})})g_{k}^{(m^{\prime})}(x_{t-1}^{(m^{\prime})})\big{/}\sum\limits_{x_{t}^{(m^{\prime})}-x_{t-1}^{(m^{\prime})}\equiv 0}\alpha_{t-1}^{(m^{\prime})}(x_{t-1}^{(m^{\prime})})P(y_{t}^{(m^{\prime})}|x_{t}^{(m^{\prime})})\beta_{t}^{(m^{\prime})}(x_{t}^{(m^{\prime})})},$ $\displaystyle{\scriptstyle\hat{g}_{k,t-1}^{(m^{\prime})}=\sum\limits_{x_{t}^{(m^{\prime})}-x_{t-1}^{(m^{\prime})}\equiv 0}\alpha_{t-1}^{(m^{\prime})}(x_{t-1}^{(m^{\prime})})P(y_{t}^{(m^{\prime})}|x_{t}^{(m^{\prime})})\beta_{t}^{(m^{\prime})}(x_{t}^{(m^{\prime})})g_{k}^{(m^{\prime})}(x_{t-1}^{(m^{\prime})})\big{/}\sum\limits_{x_{t}^{(m^{\prime})}-x_{t-1}^{(m^{\prime})}\equiv 0}\alpha_{t-1}^{(m^{\prime})}(x_{t-1}^{(m^{\prime})})P(y_{t}^{(m^{\prime})}|x_{t}^{(m^{\prime})})\beta_{t}^{(m^{\prime})}(x_{t}^{(m^{\prime})})},$ In the above equations, we consider the mean field effect by summing over the current and previous states of all the other
individuals $m^{\prime}\neq m$. The marginalized transition kernel considers the probability of event $k$ on individual $m$ given the context of the temporal evolutions of the other individuals. Comparing Eqs. (9) and (10) with Eqs. (4) and (5), instead of multiplying $g_{k}^{(m^{\prime})}(x_{t-1}^{(m^{\prime})})$ for individual $m^{\prime}\neq m$, we use the expected value of $g_{k}^{(m^{\prime})}$ with respect to the marginal probability distribution of $x_{t-1}^{(m^{\prime})}$.

Complexity Analysis: In our inference algorithm, the most computation-intensive step is the marginalization in Eqs. (9)-(10). The complexity is $O(MS^{2})$, where $M$ is the number of individuals and $S$ is the number of states of a single individual. The complexity of the entire algorithm is therefore $O(MS^{2}TN)$, where $T$ is the number of time steps and $N$ is the number of iterations until convergence. As such, the complexity of our algorithm grows only linearly with the number of individuals; it offers excellent scalability when the number of tracked individuals becomes large.

### 3.3 Parameter Learning

In order to learn the rate constant $c_{k}$, we maximize the expected log likelihood. In a stochastic kinetic model, the probability of a sample path is given in Eq. (3). The expected log likelihood over the posterior probability conditioned on the observations $\mathbf{y}_{1},\dots,\mathbf{y}_{T}$ takes the following form: $\displaystyle\left<\log P\left(\mathbf{x}_{1,\dots,T},\mathbf{y}_{1,\dots,T},v_{1,\dots,T}\right)\right>=\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})\cdot\log(P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})P(\mathbf{y}_{t}|\mathbf{x}_{t})).$ $\hat{\xi}_{t}\left(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}\right)$ is the approximate two-slice statistics defined in Eq. (6). Maximizing this expected log likelihood by setting its partial derivatives with respect to the rate constants to 0 gives the maximum expected log likelihood estimate of these rate constants. $\displaystyle c_{k}=\frac{\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=k)}{\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=\emptyset)g_{k}(\mathbf{x}_{t-1})}\approx\frac{\sum_{t}\ \sum_{\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=k)}{\sum_{t}\ \prod_{m}\sum_{x_{t-1}^{(m)}}\hat{\gamma}_{t-1}^{(m)}(x_{t-1}^{(m)})g_{k}^{(m)}(x_{t-1}^{(m)})}.$ (11) As such, the rate constant for event $k$ is the expected number of times that this event has occurred divided by the total expected number of times this event could have occurred. To summarize, we provide the variational inference algorithm below.

Algorithm: Variational Inference with a Stochastic Kinetic Model

Given the observations $y_{t}^{(m)}$ for $t=1,\dots,T$ and $m=1,\dots,M$, find $x_{t}^{(m)}$, $v_{t}$ and rate constants $c_{k}$ for $k=1,\dots,V$.

Latent state inference. Iterate through the following forward and backward passes until convergence, where $\hat{P}(x_{t}^{(m)},v_{t}|x_{t-1}^{(m)})$ is given by Eqs. (9) and (10).

* • Forward pass. For $t=1,\dots,T$ and $m=1,\dots,M$, update $\hat{\alpha}_{t}^{(m)}(x_{t}^{(m)})$ according to $\displaystyle\hat{\alpha}_{t}^{(m)}(x_{t}^{(m)})\leftarrow\frac{1}{Z_{t}}\sum_{x_{t-1}^{(m)},v_{t}}\hat{\alpha}_{t-1}^{(m)}(x_{t-1}^{(m)})\hat{P}(x_{t}^{(m)},v_{t}|x_{t-1}^{(m)})P(y_{t}^{(m)}|x_{t}^{(m)}).$
* • Backward pass.
For $t=T,\dots,1$ and $m=1,\dots,M$, update $\hat{\beta}_{t-1}^{(m)}(x_{t-1}^{(m)})$ according to $\displaystyle\hat{\beta}_{t-1}^{(m)}(x_{t-1}^{(m)})\leftarrow\frac{1}{Z_{t}}\sum_{x_{t}^{(m)},v_{t}}\hat{\beta}_{t}^{(m)}(x_{t}^{(m)})\hat{P}(x_{t}^{(m)},v_{t}|x_{t-1}^{(m)})P(y_{t}^{(m)}|x_{t}^{(m)}).$

Parameter estimation. Iterate through the latent state inference (above) and the rate-constant estimates of $c_{k}$ according to Eq. (11), until convergence.

## 4 Experiments on Epidemic Applications

In this section, we evaluate the performance of the variational inference with a stochastic kinetic model (VISKM) algorithm on epidemic dynamics, with which we predict the transmission of diseases and the health status of each individual based on proximity data collected from sensor networks.

### 4.1 Epidemic Dynamics

In epidemic dynamics, $G_{t}=(\mathcal{M},E_{t})$ is a dynamic network, where each node $m\in\mathcal{M}$ is an individual in the network, and $E_{t}=\\{(m_{i},m_{j})\\}$ is a set of edges in $G_{t}$ representing that individuals $m_{i}$ and $m_{j}$ have interacted at a specific time $t$. There are two possible hidden states for each individual $m$ at time $t$, $x_{t}^{(m)}\in\\{0,1\\}$, where 0 indicates the susceptible state and 1 the infectious state. $y_{t}^{(m)}\in\\{0,1\\}$ represents the presence or absence of symptoms for individual $m$ at time $t$. $P(y_{t}^{(m)}|x_{t}^{(m)})$ represents the observation probability. We define three types of events in epidemic applications: (1) A previously infectious individual recovers and becomes susceptible again: $I\overset{c_{1}}{\longrightarrow}S$. (2) An infectious individual infects a susceptible individual in the network: $S+I\overset{c_{2}}{\longrightarrow}2I$. (3) A susceptible individual in the network is infected by an outside infectious individual: $S\overset{c_{3}}{\longrightarrow}I$. Based on these events, the transition kernel can be defined as follows: $\displaystyle P(x_{t}^{(m)}=0|x_{t-1}^{(m)}=1)=c_{1},~{}P(x_{t}^{(m)}=1|x_{t-1}^{(m)}=1)=1-c_{1},$ $\displaystyle P(x_{t}^{(m)}=0|x_{t-1}^{(m)}=0)=(1-c_{3})(1-c_{2})^{C_{m,t}},~{}P(x_{t}^{(m)}=1|x_{t-1}^{(m)}=0)=1-(1-c_{3})(1-c_{2})^{C_{m,t}},$ where $C_{m,t}=\sum_{m^{\prime}:(m^{\prime},m)\in E_{t}}\delta(x_{t}^{(m^{\prime})}\equiv 1)$ is the number of possible infectious sources for individual $m$ at time $t$. Intuitively, the probability of a susceptible individual becoming infected is 1 minus the probability that no infectious individuals (inside or outside the network) infected him. When the probability of infection is very small, we can approximate $P(x_{t}^{(m)}=1|x_{t-1}^{(m)}=0)\approx c_{3}+c_{2}\cdot C_{m,t}$.
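A minimal sketch of this transition kernel, matching the four equations above, is given below; `C` stands for the contact count $C_{m,t}$, and the function name is illustrative.

```python
# Sketch of the Section 4.1 transition kernel for one individual m.
# x_prev, x_next in {0, 1} (0 = susceptible, 1 = infectious); C = C_{m,t}.
def transition_prob(x_next, x_prev, C, c1, c2, c3):
    if x_prev == 1:                         # infectious: recovers w.p. c1
        return c1 if x_next == 0 else 1.0 - c1
    p_stay = (1.0 - c3) * (1.0 - c2) ** C   # no inside/outside source infects m
    return p_stay if x_next == 0 else 1.0 - p_stay
```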
### 4.2 Experimental Results

Data Explanation: We employ two data sets of epidemic dynamics. The real data set is collected from the Social Evolution experiment [5]. This study records “common cold” symptoms of 65 students living in a university residence hall from January 2009 to April 2009, tracking their locations and proximities using mobile phones. In addition, the students took periodic surveys regarding their health status and personal interactions. The synthetic data set is based on real mobility traces collected on the Dartmouth College campus from April 2001 to June 2004, containing the movement history of 13,888 individuals [15]. We synthesized disease transmission along this timeline using the popular susceptible-infectious-susceptible (SIS) epidemiology model [14], then applied VISKM to calibrate performance. We selected this data set because we want to demonstrate that our model works on data with a large number of people over a long period of time.

Evaluation Metrics and Baseline Algorithms: We select the receiver operating characteristic (ROC) curve as our performance metric because the discrimination thresholds of diseases vary. We first compare the accuracy and efficiency of VISKM with Gibbs sampling (Gibbs) and particle filtering (PF) on the Social Evolution data set [6, 7] (code and data are available at http://cse.buffalo.edu/~wendong/). Both Gibbs sampling and particle filtering iteratively sample the infectious and susceptible latent state sequences and the infection and recovery events conditioned on these state sequences. Gibbs-Prediction-10000 indicates 10,000 iterations of Gibbs sampling with 1000 burn-in iterations for the prediction task. PF-Smoothing-1000 similarly refers to 1000 iterations of particle filtering for the smoothing task. All experiments are performed on the same computer.

Individual State Inference: We infer the probabilities of a hidden infectious state for each individual at different times under different scenarios. There are three tasks:

1. _Prediction_: Given an individual’s past health and current interaction patterns, we predict the current infectious latent state. Figure 2(a) compares prediction performance among the different approximate inference methods.

2. _Smoothing_: Given an individual’s interaction patterns and past health with missing periods, we infer the infectious latent states during these missing periods. Figure 2(b) compares the performance of the three inference methods.

3. _Expansion_: Given the health records of a portion ($\sim 10\%$) of the population, we estimate the individual infectious states of the entire population before medically inspecting them. For example, given either a group of volunteers willing to report their symptoms or the symptom data of patients who came to hospitals, we determine the probabilities that the people near these individuals also became or will become infected. This information helps the government or aid agencies to efficiently distribute limited medical resources to those most in need. Figure 2(c) compares the performance of the different methods.

From the above three graphs, we can see that all three methods identify the infectious states accurately. However, VISKM outperforms Gibbs sampling and particle filtering in terms of area under the ROC curve for all three tasks. VISKM has an advantage in the smoothing task because the backward pass helps to infer the missing states using subsequent observations. In addition, the performance of Gibbs and PF improves as the number of samples/particles increases.

Figure 2: Experimental results. (a-c) show the prediction, smoothing, and expansion performance comparisons for Social Evolution data, while (d) shows performance of the three tasks for Dartmouth data. (e-f) represent the statistical inferences for both data sets.

Figure 2(d) shows the performance of the three tasks on the Dartmouth data set. We do not make the same comparison here because sampling takes too much time.
From the graph, we can see that VISKM accurately infers most of the infectious moments of individuals even in a large social system. In addition, the smoothing results are slightly better than the prediction results because we can leverage observations from both directions. The expansion case is _relatively_ poor, because we use only very limited information to derive the results; however, even in this case the ROC curve has good discriminating power to differentiate between infectious and susceptible individuals.

Collective Statistics Inference: After determining the individual results, we aggregate them to approximate the total number of infected individuals in the social system as time evolves. This offers a collective statistical summary of the spread of disease in one area as in traditional research, which typically scales the sample statistics with respect to the sample ratio. Figures 2(e) and (f) show that given $20\%$ of the Social Evolution data and $10\%$ of the Dartmouth data, VISKM estimates the collective statistics better than the other methods.

Efficiency and Scalability: Table 1 shows the running time of different algorithms for the Social Evolution data on the same computer. From the table, we can see that Gibbs sampling runs slightly longer than PF, but they are of the same order. However, VISKM requires much less computation time. In addition, the computation time of VISKM grows linearly with the number of individuals, which validates the complexity analysis in Section 3.2. Thus, it offers excellent scalability for large social systems. In comparison, the running times of Gibbs sampling and PF grow super-linearly with the number of individuals, and roughly linearly with the number of samples.

Summary: Our proposed VISKM achieves higher accuracy in terms of area under the ROC curve and collective statistics than Gibbs sampling or particle filtering (within 10,000 iterations). More importantly, VISKM is more efficient than sampling, requiring much less computation time. Additionally, the computation time of VISKM grows linearly with the number of individuals, demonstrating its excellent scalability for large social systems.

Table 1: Running time for different approximate inference algorithms. Gibbs_10000 refers to Gibbs sampling for 10,000 iterations, and PF_1000 to particle filtering for 1000 iterations. Other entries follow the same pattern. All times are measured in seconds.

| | VISKM | Gibbs_1000 | Gibbs_10000 | PF_1000 | PF_10000 |
|---|---|---|---|---|---|
| 60 People | 0.78 | 771 | 7820 | 601 | 6100 |
| 30 People | 0.39 | 255 | 2556 | 166 | 1888 |
| 15 People | 0.19 | 101 | 1003 | 122 | 1435 |

## 5 Conclusions

In this paper, we leverage sensor network and social network data to capture temporal evolution in social dynamics and infer individual behaviors. In order to define the adaptive transition kernel, we introduce a stochastic kinetic model that captures the dynamics of complex interactions. In addition, in order to make tractable inferences we propose a variational inference algorithm whose computational complexity grows linearly with the number of individuals. Large-scale experiments on epidemic dynamics demonstrate that our method effectively captures the evolution of social dynamics and accurately infers individual behaviors. More accurate collective effects can also be derived through the aggregated results. Potential applications for our algorithm include the dynamics of emotion, opinion, rumor, collaboration, and friendship.

## References

* [1] Adam Arkin, John Ross, and Harley H McAdams.
Stochastic kinetic analysis of developmental pathway bifurcation in phage $\lambda$-infected Escherichia coli cells. Genetics, 149(4):1633–1648, 1998.
* [2] Matthew Brand, Nuria Oliver, and Alex Pentland. Coupled hidden Markov models for complex action recognition. In Proc. of CVPR, pages 994–999, 1997.
* [3] Claudio Castellano, Santo Fortunato, and Vittorio Loreto. Statistical physics of social dynamics. Reviews of Modern Physics, 81(2):591, 2009.
* [4] Ido Cohn, Tal El-Hay, Nir Friedman, and Raz Kupferman. Mean field variational approximation for continuous-time Bayesian networks. The Journal of Machine Learning Research, 11:2745–2783, 2010.
* [5] Wen Dong, Bruno Lepri, and Alex Sandy Pentland. Modeling the co-evolution of behaviors and social relationships using mobile phone data. In Proc. of the 10th International Conference on Mobile and Ubiquitous Multimedia, pages 134–143. ACM, 2011.
* [6] Wen Dong, Alex Pentland, and Katherine A Heller. Graph-coupled HMMs for modeling the spread of infection. In Proc. of UAI, pages 227–236, 2012.
* [7] Arnaud Doucet and Adam M Johansen. A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of Nonlinear Filtering, 12(656-704):3, 2009.
* [8] Steven N Durlauf and H Peyton Young. Social dynamics, volume 4. MIT Press, 2004.
* [9] Stephen Eubank, Hasan Guclu, VS Anil Kumar, Madhav V Marathe, Aravind Srinivasan, Zoltan Toroczkai, and Nan Wang. Modelling disease outbreaks in realistic urban social networks. Nature, 429(6988):180–184, 2004.
* [10] Daniel T Gillespie. Stochastic simulation of chemical kinetics. Annu. Rev. Phys. Chem., 58:35–55, 2007.
* [11] Andrew Golightly and Darren J Wilkinson. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo. Interface Focus, 2011.
* [12] Creighton Heaukulani and Zoubin Ghahramani. Dynamic probabilistic models for latent feature propagation in social networks. In Proc. of ICML, pages 275–283, 2013.
* [13] Tom Heskes and Onno Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In Proc. of UAI, pages 216–223, 2002.
* [14] Matt J Keeling and Pejman Rohani. Modeling infectious diseases in humans and animals. Princeton University Press, 2008.
* [15] David Kotz, Tristan Henderson, Ilya Abyzov, and Jihwang Yeo. CRAWDAD data set dartmouth/campus (v. 2007-02-08). Downloaded from http://crawdad.org/dartmouth/campus/, 2007.
* [16] Kevin Murphy and Stuart Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Sequential Monte Carlo Methods in Practice, pages 499–515. Springer, 2001.
* [17] Uri Nodelman, Christian R Shelton, and Daphne Koller. Continuous time Bayesian networks. In Proc. of UAI, pages 378–387. Morgan Kaufmann Publishers Inc., 2002.
* [18] Manfred Opper and Guido Sanguinetti. Variational inference for Markov jump processes. In Proc. of NIPS, pages 1105–1112, 2008.
* [19] V. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. In Proc. of UAI, 2011.
* [20] Joshua W Robinson and Alexander J Hartemink. Learning non-stationary dynamic Bayesian networks. The Journal of Machine Learning Research, 11:3647–3680, 2010.
* [21] Darren J Wilkinson. Stochastic modeling for systems biology. CRC Press, 2011.
* [22] Jonathan S Yedidia, William T Freeman, and Yair Weiss. Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium, 8:236–239, 2003.
## 6 Appendix

### 6.1 Derivation of the optimization problem in Eq. (6)

Let $P(\mathbf{x}_{1,...,T},v_{1,...,T}|\mathbf{y}_{1,...,T})$ be the exact posterior. Our goal is to approximate this posterior by a distribution $Q(\mathbf{x}_{1,...,T},v_{1,...,T})$ in the exponential family that minimizes the KL divergence between these two distributions: $\displaystyle KL(Q(\mathbf{x}_{1,...,T},v_{1,...,T})|P(\mathbf{x}_{1,...,T},v_{1,...,T}|\mathbf{y}_{1,...,T}))$ $\displaystyle=$ $\displaystyle\sum_{\mathbf{x}_{1,...,T},v_{1,...,T}}Q(\mathbf{x}_{1,...,T},v_{1,...,T})\log[\frac{Q(\mathbf{x}_{1,...,T},v_{1,...,T})\cdot P(\mathbf{y}_{1,...,T})}{P(\mathbf{x}_{1,...,T},\mathbf{y}_{1,...,T},v_{1,...,T})}]$ $\displaystyle=$ $\displaystyle\sum_{\mathbf{x}_{1,...,T},v_{1,...,T}}Q(\mathbf{x}_{1,...,T},v_{1,...,T})\log Q(\mathbf{x}_{1,...,T},v_{1,...,T})$ $\displaystyle-\sum_{t=1}^{T}\sum_{\mathbf{x}_{1,...,T},v_{1,...,T}}Q(\mathbf{x}_{1,...,T},v_{1,...,T})\log P(\mathbf{x}_{t},\mathbf{y}_{t},v_{t}|\mathbf{x}_{t-1}).$ (12) In the first step, we apply the definition of conditional probability and KL-divergence. In the second, we omit $P(\mathbf{y}_{1,...,T})$ because it is a constant in this optimization problem. In addition, we decompose $P\left(\mathbf{x}_{1,\dots,T},\mathbf{y}_{1,\dots,T},v_{1,\dots,T}\right)=\prod_{t=1}^{T}P(\mathbf{x}_{t},\mathbf{y}_{t},v_{t}|\mathbf{x}_{t-1})$. We then define the approximate two-slice statistics $\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})$ and one-slice statistics $\hat{\gamma}(\mathbf{x}_{t})$. Both are in the exponential family. In this context, we have $M$ individuals in the system and the mean-field approximation can be written as $\hat{\gamma}(\mathbf{x}_{t})=\prod_{m=1}^{M}\hat{\gamma}^{(m)}(x_{t}^{(m)})$, where $\hat{\gamma}^{(m)}(x_{t}^{(m)})$ is the approximate one-slice statistics for individual $m$. Since $Q(\mathbf{x}_{1,...,T},v_{1,...,T})$ can be expressed as a product of two-slice statistics divided by a product of one-slice statistics, we have $\displaystyle Q(\mathbf{x}_{1,...,T},v_{1,...,T})=\frac{\prod_{t=1}^{T}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})}{\prod_{t=1}^{T-1}\hat{\gamma}(\mathbf{x}_{t})}=\frac{\prod_{t=1}^{T}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})}{\prod_{t=1}^{T-1}\prod_{m=1}^{M}\hat{\gamma}^{(m)}(x_{t}^{(m)})}.$ (13) If we substitute Eq. (13) into Eq.
(12), the objective function becomes the following: $\displaystyle\sum_{\mathbf{x}_{1,...,T},v_{1,...,T}}Q(\mathbf{x}_{1,...,T},v_{1,...,T})\log\frac{\prod_{t=1}^{T}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})}{\prod_{t=1}^{T-1}\prod_{m}\hat{\gamma}^{(m)}(x_{t}^{(m)})}$ $\displaystyle-\sum_{t=1}^{T}\sum_{\mathbf{x}_{1,...,T},v_{1,...,T}}Q(\mathbf{x}_{1,...,T},v_{1,...,T})\log P(\mathbf{x}_{t},\mathbf{y}_{t},v_{t}|\mathbf{x}_{t-1})$ $\displaystyle=\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})\log\frac{\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})}{P(\mathbf{x}_{t},\mathbf{y}_{t},v_{t}|\mathbf{x}_{t-1})}$ $\displaystyle-\sum_{t,\mathbf{x}_{t}}\prod_{m}\hat{\gamma}^{(m)}(x_{t}^{(m)})\log\prod_{m}\hat{\gamma}^{(m)}(x_{t}^{(m)}).$ (14) This objective function is subject to marginalization and normalization constraints: $\displaystyle\sum_{v_{t},\mathbf{x}_{t-1},\\{\mathbf{x}_{t}\backslash x_{t}^{(m)}\\}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})=\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})\mbox{, for all }t,m,x_{t}^{(m)},$ $\displaystyle\sum_{v_{t},\\{\mathbf{x}_{t-1}\backslash x_{t-1}^{(m)}\\},\mathbf{x}_{t}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})=\hat{\gamma}_{t-1}^{(m)}(x_{t-1}^{(m)})\mbox{, for all }t,m,x_{t-1}^{(m)},$ $\displaystyle\sum_{x_{t}^{(m)}}\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})=1\mbox{, for all }t,m.$ Here $\sum_{\\{\mathbf{x}_{t}\backslash x_{t}^{(m)}\\}}$ denotes the sum over all components of $\mathbf{x}_{t}$ except $x_{t}^{(m)}$.

### 6.2 Derivation of the inference algorithm from Eq. (8) to Eq. (10)

The optimization problem derived from Eq. (14), together with its constraints, is: $\displaystyle\sum\limits_{t,\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})\log\frac{\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})}{P(\mathbf{x}_{t},\mathbf{y}_{t},v_{t}|\mathbf{x}_{t-1})}-\sum\limits_{t,\mathbf{x}_{t}}\prod\limits_{m}\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})\log\prod\limits_{m}\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})$ (15) subject to: $\displaystyle\sum_{v_{t},\mathbf{x}_{t-1},\\{\mathbf{x}_{t}\backslash x_{t}^{(m)}\\}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})=\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})\mbox{, for all }t,m,x_{t}^{(m)},$ $\displaystyle\sum_{v_{t},\\{\mathbf{x}_{t-1}\backslash x_{t-1}^{(m)}\\},\mathbf{x}_{t}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})=\hat{\gamma}_{t-1}^{(m)}(x_{t-1}^{(m)})\mbox{, for all }t,m,x_{t-1}^{(m)},$ $\displaystyle\sum_{x_{t}^{(m)}}\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})=1\mbox{, for all }t,m.$ We solve this with the method of Lagrange multipliers, first forming the Lagrangian: $\displaystyle L$ $\displaystyle=\sum\limits_{t,\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})\log\frac{\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})}{P(\mathbf{x}_{t},\mathbf{y}_{t},v_{t}|\mathbf{x}_{t-1})}-\sum\limits_{t,\mathbf{x}_{t}}\prod\limits_{m}\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})\log\prod\limits_{m}\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})$ (16) $\displaystyle+\sum_{t,m,x_{t}^{(m)}}\lambda_{t}^{(m)}(x_{t}^{(m)})\left(\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})-\sum_{v_{t},\mathbf{x}_{t-1},\\{\mathbf{x}_{t}\backslash x_{t}^{(m)}\\}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})\right)$ $\displaystyle+\sum_{t,m,x_{t-1}^{(m)}}\mu_{t-1}^{(m)}(x_{t-1}^{(m)})\left(\hat{\gamma}_{t-1}^{(m)}(x_{t-1}^{(m)})-\sum_{v_{t},\\{\mathbf{x}_{t-1}\backslash x_{t-1}^{(m)}\\},\mathbf{x}_{t}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})\right)$ $\displaystyle+\sum_{t,m}\nu_{t}^{(m)}\left(\sum_{x_{t}^{(m)}}\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})-1\right).$ We then set the partial derivative of Eq. (16) with respect to $\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})$ to 0, which results in the following: $\displaystyle\frac{\partial L}{\partial\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})}=\log\frac{\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})}{P(\mathbf{x}_{t},\mathbf{y}_{t},v_{t}|\mathbf{x}_{t-1})}+1-\sum_{m}\lambda_{t}^{(m)}(x_{t}^{(m)})-\sum\limits_{m}\mu_{t-1}^{(m)}(x_{t-1}^{(m)})\stackrel{{\scriptstyle\mbox{set}}}{{=}}0$ $\displaystyle\Rightarrow\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})\propto\exp\left(\sum\limits_{m}\mu_{t-1}^{(m)}(x_{t-1}^{(m)})\right)P(\mathbf{x}_{t},\mathbf{y}_{t},v_{t}|\mathbf{x}_{t-1})\exp\left(\sum\limits_{m}\lambda_{t}^{(m)}(x_{t}^{(m)})\right).$ As such, we see that ${\hat{\alpha}_{t-1}^{(m)}(x_{t-1}^{(m)})=\exp(\mu_{t-1}^{(m)}(x_{t-1}^{(m)}))}$ is associated with the forward probabilities and ${\hat{\beta}_{t}^{(m)}(x_{t}^{(m)})=\exp(\lambda_{t}^{(m)}(x_{t}^{(m)}))}$ with the backward probabilities, with $\hat{\gamma}_{t}^{(m)}(x_{t}^{(m)})=\hat{\alpha}_{t}^{(m)}(x_{t}^{(m)})\hat{\beta}_{t}^{(m)}(x_{t}^{(m)})$. We can determine the two-slice statistics for an individual $m$ by marginalizing over the other individuals $m^{\prime}\neq m$: $\displaystyle\hat{\xi}(x_{t-1}^{(m)},x_{t}^{(m)},v_{t})=\sum_{m^{\prime}\neq m,x_{t-1}^{(m^{\prime})},x_{t}^{(m^{\prime})}}\hat{\xi}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})$ $\displaystyle\propto\sum_{m^{\prime}\neq m,x_{t-1}^{(m^{\prime})},x_{t}^{(m^{\prime})}}P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})\cdot\prod_{m}\hat{\alpha}_{t-1}^{(m)}(x_{t-1}^{(m)})\cdot\prod_{m}P(y_{t}^{(m)}|x_{t}^{(m)})\cdot\prod_{m}\hat{\beta}_{t}^{(m)}(x_{t}^{(m)}).$ The above is the same as Eq. (7).

### 6.3 Derivation of the parameter-learning algorithm

From Eq. (3), the log-likelihood of the complete sequence is: $\displaystyle\log P\left(\mathbf{x}_{1,\dots,T},\mathbf{y}_{1,\dots,T},v_{1,\dots,T}\right)=\sum_{t=1}^{T}\log P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})+\sum_{t=1}^{T}\log P(\mathbf{y}_{t}|\mathbf{x}_{t}),\mbox{ where }$ (17) $\displaystyle P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})=\begin{cases}c_{k}\cdot g_{k}\left(\mathbf{x}_{t-1}\right)\cdot\delta(\mathbf{x}_{t}-\mathbf{x}_{t-1}\equiv\mathbf{\Delta_{k}})&\mbox{if }v_{t}=k\\\ (1-\sum_{k}c_{k}g_{k}\left(\mathbf{x}_{t-1}\right))\cdot\delta(\mathbf{x}_{t}-\mathbf{x}_{t-1}\equiv\mathbf{0})&\mbox{if }v_{t}=\emptyset\end{cases}.$ The state-transition probabilities are thus written as the probabilities of a set of events.
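To make the transition model in Eq. (17) concrete, the following NumPy sketch evaluates $P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})$ for a toy model. The rate function `g`, the state changes `deltas`, and all numerical values are illustrative assumptions for this sketch, not quantities from the paper.

```python
# A minimal sketch of the transition model in Eq. (17); g, deltas, and all
# numbers are illustrative assumptions, not values from the paper.
import numpy as np

def transition_prob(x_next, x_prev, v, c, g, deltas):
    """P(x_t = x_next, v_t = v | x_{t-1} = x_prev) per Eq. (17).

    c      : length-K array of event probabilities c_k
    g      : callable g(k, x) giving the rate factor g_k(x)
    deltas : K x d array of state changes Delta_k
    v      : event index in {0, ..., K-1}, or None for the null event
    """
    if v is None:  # null event: the state must stay put
        stay = float(np.array_equal(x_next, x_prev))
        return (1.0 - sum(c[k] * g(k, x_prev) for k in range(len(c)))) * stay
    fired = float(np.array_equal(x_next - x_prev, deltas[v]))
    return c[v] * g(v, x_prev) * fired

# Toy example: two event kinds acting on a 2-dimensional count state.
deltas = np.array([[1, -1], [-1, 1]])
g = lambda k, x: x[1 - k] / 10.0   # assumed toy rate function
c = np.array([0.02, 0.03])
x = np.array([5, 5])
print(transition_prob(x + deltas[0], x, 0, c, g, deltas))  # event 0 fires
print(transition_prob(x, x, None, c, g, deltas))           # null event
```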
The expected log-likelihood over the posterior probability conditioned on the observations $\mathbf{y}_{1},\dots,\mathbf{y}_{T}$ takes the following form: $\displaystyle\mathbf{E}_{P(\mathbf{x}_{1,...,T},v_{1,...,T}|\mathbf{y}_{1,...,T})}\left(\log P\left(\mathbf{x}_{1,\dots,T},\mathbf{y}_{1,\dots,T},v_{1,\dots,T}\right)\right)$ (18) $\displaystyle=$ $\displaystyle\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t})\cdot\log\left(P(\mathbf{x}_{t},v_{t}|\mathbf{x}_{t-1})P(\mathbf{y}_{t}|\mathbf{x}_{t})\right)$ $\displaystyle=$ $\displaystyle\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=v)\cdot\log\left(P(\mathbf{x}_{t},v_{t}=v|\mathbf{x}_{t-1})P(\mathbf{y}_{t}|\mathbf{x}_{t})\right)$ $\displaystyle+$ $\displaystyle\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=\emptyset)\cdot\log\left(P(\mathbf{x}_{t},v_{t}=\emptyset|\mathbf{x}_{t-1})P(\mathbf{y}_{t}|\mathbf{x}_{t})\right)$ At a given time $t$, there are two possible cases: $v_{t}=v$, where $v\in\\{1,\cdots,V\\}$, and $v_{t}=\emptyset$. The derivatives of the transition log-probabilities with respect to $c_{k}$ are: $\displaystyle\frac{\partial\log P(\mathbf{x}_{t},v_{t}=k|\mathbf{x}_{t-1})}{\partial c_{k}}=\frac{1}{c_{k}}$ $\displaystyle\frac{\partial\log P(\mathbf{x}_{t},v_{t}=\emptyset|\mathbf{x}_{t-1})}{\partial c_{k}}=\frac{-g_{k}(\mathbf{x}_{t-1})}{1-\sum_{k}c_{k}g_{k}(\mathbf{x}_{t-1})}$ Note that we do not write out $\delta(\mathbf{x}_{t}-\mathbf{x}_{t-1}\equiv\mathbf{\Delta_{k}})$ and $\delta(\mathbf{x}_{t}-\mathbf{x}_{t-1}\equiv\mathbf{0})$ explicitly here, because when we compute the derivatives of the expected log-likelihood in Eq. (18), these terms are absorbed into $\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=k)$ and $\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=\emptyset)$. Next we take the derivative of the expected log-likelihood with respect to $c_{k}$: $\displaystyle\frac{\partial\,\mathbf{E}_{P(\mathbf{x}_{1,...,T},v_{1,...,T}|\mathbf{y}_{1,...,T})}\left(\log P\left(\mathbf{x}_{1,\dots,T},\mathbf{y}_{1,\dots,T},v_{1,\dots,T}\right)\right)}{\partial c_{k}}$ (19) $\displaystyle=$ $\displaystyle\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=k)\frac{1}{c_{k}}-\sum_{t,\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=\emptyset)\frac{g_{k}(\mathbf{x}_{t-1})}{1-\sum_{k}c_{k}g_{k}(\mathbf{x}_{t-1})}$ Because we assume that the auxiliary event dominates when the time step is small, we approximate $1-\sum_{k}c_{k}g_{k}(\mathbf{x}_{t})\approx 1$ and $\sum_{\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=\emptyset)\approx\hat{\gamma}_{t-1}(\mathbf{x}_{t-1})$.
After applying this approximation and setting the derivative to $0$, the result is as follows: $\displaystyle c_{k}$ $\displaystyle=\frac{\sum_{t}\ \sum_{\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=k)}{\sum_{t}\ \sum_{\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=\emptyset)g_{k}(\mathbf{x}_{t-1})}$ (20) $\displaystyle\approx\frac{\sum_{t}\ \sum_{\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=k)}{\sum_{t}\ \sum_{\mathbf{x}_{t-1}}\hat{\gamma}_{t-1}(\mathbf{x}_{t-1})g_{k}(\mathbf{x}_{t-1})}$ $\displaystyle=\frac{\sum_{t}\ \sum_{\mathbf{x}_{t-1},\mathbf{x}_{t}}\hat{\xi}_{t}(\mathbf{x}_{t-1},\mathbf{x}_{t},v_{t}=k)}{\sum_{t}\ \prod_{m}\sum_{x_{t-1}^{(m)}}\hat{\gamma}_{t-1}^{(m)}(x_{t-1}^{(m)})g_{k}^{(m)}(x_{t-1}^{(m)})}.$
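To illustrate the bookkeeping behind Eq. (20), here is a NumPy sketch of the update for a single $c_{k}$. The statistics $\hat{\xi}$ and $\hat{\gamma}$ are random placeholders with the right shapes, and the rate factor `g_k` is an assumed toy function; only the summation structure mirrors Eq. (20).

```python
# A sketch of the c_k update in Eq. (20); the statistics and the rate factor
# g_k are placeholder assumptions; only the summation structure is Eq. (20)'s.
import numpy as np

rng = np.random.default_rng(0)
T, M, S = 50, 3, 4
# xi_k[t]: sum over (x_{t-1}, x_t) of xi_t(x_{t-1}, x_t, v_t = k), i.e. the
# expected number of times event k fires at step t.
xi_k = rng.uniform(0, 0.1, size=T)
# gamma[t, m] is the one-slice marginal of individual m at time t.
gamma = rng.dirichlet(np.ones(S), size=(T, M))

def g_k(m, s):             # assumed per-individual rate factor g_k^(m)(s)
    return 0.1 * (s + 1)   # illustrative only

# Denominator: sum_t prod_m sum_s gamma_{t-1}^{(m)}(s) * g_k^{(m)}(s).
denom = 0.0
for t in range(1, T):
    per_individual = [
        sum(gamma[t - 1, m, s] * g_k(m, s) for s in range(S))
        for m in range(M)
    ]
    denom += np.prod(per_individual)

c_k = xi_k[1:].sum() / denom
print(f"updated c_k = {c_k:.4f}")
```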
# Complex Momentum for Optimization in Games

Jonathan Lorraine$\vphantom{e}{}^{1,2}$ David Acuna$\vphantom{e}{}^{1,2,3}$ Paul Vicol$\vphantom{e}{}^{1,2}$ David Duvenaud$\vphantom{e}{}^{1,2}$

University of Toronto$\vphantom{e}{}^{1}$ Vector Institute$\vphantom{e}{}^{2}$ NVIDIA$\vphantom{e}{}^{3}$

{lorraine, davidj, pvicol<EMAIL_ADDRESS>

###### Abstract

We generalize gradient descent with momentum for optimization in differentiable games to have complex-valued momentum. We give theoretical motivation for our method by proving convergence on bilinear zero-sum games for simultaneous and alternating updates. Our method gives real-valued parameter updates, making it a drop-in replacement for standard optimizers. We empirically demonstrate that complex-valued momentum can improve convergence in realistic adversarial games—like generative adversarial networks—by showing we can find better solutions with an almost identical computational cost. We also show a practical generalization to a complex-valued Adam variant, which we use to train BigGAN to better inception scores on CIFAR-10.

## 1 Introduction

Gradient-based optimization has been critical for the success of machine learning, updating a single set of parameters to minimize a single loss. A growing number of applications require learning in games, which generalize single-objective optimization. Common examples are GANs [1], actor-critic models [2], curriculum learning [3, 4, 5], hyperparameter optimization [6, 7, 8, 9], adversarial examples [10, 11], learning models [12, 13, 14], domain adversarial adaptation [15], neural architecture search [16, 17], and meta-learning [18, 19]. Games consist of multiple players, each with parameters and objectives. We often want solutions where no player gains from changing their strategy unilaterally, e.g., Nash equilibria [20] or Stackelberg equilibria [21]. Classical gradient-based learning often fails to find these equilibria due to rotational dynamics [22]. Numerous saddle point finding algorithms for zero-sum games have been proposed [23, 24]. Gidel et al. [25] generalize GD with momentum to games, showing we can use a negative momentum to converge if the eigenvalues of the Jacobian of the gradient vector field have a large imaginary part. We use the terminology in Gidel et al. [25] and say (purely) cooperative or adversarial games for games with (purely) real or imaginary eigenvalues. Setups like GANs are not purely adversarial, but rather have both _purely cooperative and adversarial eigenspaces_ – i.e., eigenspaces with purely real or imaginary eigenvalues. In cooperative eigenspaces, the players do not interfere with each other. We want solutions that converge with simultaneous and alternating updates in purely adversarial games – a setup where existing momentum methods fail. Also, we want solutions that are robust to different mixtures of adversarial and cooperative eigenspaces, because this depends on the game's eigendecomposition, which can be intractable. To solve this, we unify and generalize existing momentum methods [26, 25] to recurrently linked momentum – a setup with multiple recurrently linked momentum buffers with potentially negative coefficients, shown in Figure 2(c). We show that selecting two of these recurrently linked buffers with appropriate momentum coefficients can be interpreted as the real and imaginary parts of a single _complex buffer_ and complex momentum coefficient – see Figure 2(d).
This setup (a) allows us to converge in adversarial games with simultaneous updates, (b) only introduces one new optimizer parameter – the phase or $\arg$ of our momentum, (c) allows us to gain intuitions via complex analysis, (d) is trivial to implement in libraries supporting complex arithmetic, and (e) robustly converges for different eigenspace mixtures. Intuitively, our complex buffer stores historical gradient information, oscillating between adding and subtracting at a frequency dictated by the momentum coefficient. Classical momentum only adds gradients, and negative momentum alternates between adding and subtracting each iteration, while we oscillate at an arbitrary (fixed) frequency – see Figure 3.1. This reduces rotational dynamics during training by canceling out opposing updates.

### Contributions

* • We provide generalizations and variants of classical [27, 28, 29], negative [25, 30], and aggregated [26] momentum for learning in differentiable games.
* • We show our method converges on adversarial games – including bilinear zero-sum games and the Dirac-GAN – with simultaneous and alternating updates.
* • We illustrate robustness during optimization, converging faster and over a larger range of mixtures of cooperative and adversarial games than existing first-order methods.
* • We give a practical extension of our method to a complex-valued Adam [31] variant, which we use to train a BigGAN [32] on CIFAR-10, improving [32]'s inception scores.

Actual JAX implementation: changes in green:

```python
import jax.numpy as jnp

mass = .8 + .3j  # changed: complex momentum coefficient beta

def momentum(step_size, mass):
    ...
    def update(i, g, state):
        x, velocity = state
        velocity = mass * velocity + g
        x = x - jnp.real(step_size(i) * velocity)  # changed: real part of update
        return x, velocity
    ...
```

Figure 1: How to modify JAX's SGD with momentum here to use complex momentum. The only changes are in green. jnp.real gets the real part of step_size times the momentum buffer (called velocity here). We use a complex mass for our method, in this case $\beta=|\beta|\exp(i\arg(\beta))=0.9\exp(i\nicefrac{\pi}{8})\approx.8+.3i$.

## 2 Background

Appendix Table 2 summarizes our notation. Consider the optimization problem: $\boldsymbol{\theta}^{*}\vcentcolon=\textnormal{arg\,min}_{\boldsymbol{\theta}}\mathcal{L}(\boldsymbol{\theta})$ (1) We can find local minima of loss $\mathcal{L}$ using (stochastic) gradient descent with step size $\alpha$. We denote the loss gradient at parameters $\boldsymbol{\theta}^{j}$ by $\boldsymbol{g}^{j}\\!\\!\vcentcolon=\\!\\!\boldsymbol{g}(\boldsymbol{\theta}^{j})\\!\\!\vcentcolon=\\!\\!\smash{\left.\nabla_{\boldsymbol{\theta}}\mathcal{L}(\boldsymbol{\theta})\right|_{\boldsymbol{\theta}^{j}}}$. $\smash{\boldsymbol{\theta}^{j\\!+\\!1}=\boldsymbol{\theta}^{j}-\alpha\boldsymbol{g}^{j}}$ (SGD) Momentum can generalize SGD. For example, Polyak's Heavy Ball [27]: $\displaystyle\smash{\boldsymbol{\theta}^{j\\!+\\!1}=\boldsymbol{\theta}^{j}-\alpha\boldsymbol{g}^{j}+\beta(\boldsymbol{\theta}^{j}-\boldsymbol{\theta}^{j-1})}$ (2) This can be written equivalently with the momentum buffer $\smash{\boldsymbol{\mu}^{j}=\nicefrac{{(\boldsymbol{\theta}^{j}-\boldsymbol{\theta}^{j-1})}}{{\alpha}}}$: $\smash{\boldsymbol{\mu}^{j\\!+\\!1}=\beta\boldsymbol{\mu}^{j}-\boldsymbol{g}^{j},\quad\quad\boldsymbol{\theta}^{j\\!+\\!1}=\boldsymbol{\theta}^{j}+\alpha\boldsymbol{\mu}^{j\\!+\\!1}}$ (SGDm) We can also generalize SGDm to aggregated momentum [26], shown in Appendix Algorithm 3.
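As a quick numerical check of the equivalence between the heavy-ball form (2) and the buffer form (SGDm), the following sketch runs both on an assumed toy quadratic loss; the trajectories coincide because $\boldsymbol{\mu}^{j}=\nicefrac{(\boldsymbol{\theta}^{j}-\boldsymbol{\theta}^{j-1})}{\alpha}$.

```python
# Checking that the heavy-ball form (2) and the buffer form (SGDm) coincide
# on an assumed toy quadratic loss 0.5 * theta^2.
alpha, beta = 0.1, 0.9
grad = lambda th: th                # gradient of 0.5 * theta^2

# Heavy-ball form (2): theta' = theta - alpha*g + beta*(theta - theta_prev).
th_prev, th = 1.0, 1.0
for _ in range(50):
    th, th_prev = th - alpha * grad(th) + beta * (th - th_prev), th

# Buffer form (SGDm): mu' = beta*mu - g, theta' = theta + alpha*mu'.
w, mu = 1.0, 0.0
for _ in range(50):
    mu = beta * mu - grad(w)
    w = w + alpha * mu

print(th, w)  # identical up to floating-point error
```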
### 2.1 Game Formulations

Another class of problems is learning in games, which includes problems like generative adversarial networks (GANs) [1]. We focus on $2$-player games – with players denoted by $A$ and $B$ – where each player minimizes their loss $\mathcal{L}_{A},\mathcal{L}_{B}$ with their parameters $\boldsymbol{\theta}_{A}\in\mathbb{R}^{d_{A}}$, $\boldsymbol{\theta}_{B}\in\mathbb{R}^{d_{B}}$. Solutions to $2$-player games – which are assumed unique for simplicity – can be defined as: $\displaystyle\smash{\boldsymbol{\theta}_{\\!A}^{*}\\!\vcentcolon=\\!\textnormal{arg\,min}_{\boldsymbol{\theta}_{\\!A}}\\!\mathcal{L}_{\\!A}(\boldsymbol{\theta}_{\\!A},\\!\boldsymbol{\theta}_{\\!B}^{*}),\,\,\,\,\,\boldsymbol{\theta}_{\\!B}^{*}\\!\vcentcolon=\\!\textnormal{arg\,min}_{\boldsymbol{\theta}_{\\!B}}\\!\mathcal{L}_{\\!B}(\boldsymbol{\theta}_{\\!A}^{*},\\!\boldsymbol{\theta}_{\\!B})}$ (3) In deep learning, losses are non-convex with many parameters, so we often focus on finding local solutions. If we have a player ordering, then we have a Stackelberg game. For example, in GANs, the generator is the leader, and the discriminator is the follower. In hyperparameter optimization, the hyperparameters are the leader, and the network parameters are the follower. If $\boldsymbol{\theta}_{\\!B}^{*}(\boldsymbol{\theta}_{\\!A})$ denotes player $B$'s best-response function, then Stackelberg game solutions can be defined as: $\displaystyle\smash{\boldsymbol{\theta}_{\\!A}}^{*}\\!\vcentcolon=\textnormal{arg\,min}_{\boldsymbol{\theta}_{\\!A}}\\!\mathcal{L}_{\\!A}(\boldsymbol{\theta}_{\\!A},\\!\boldsymbol{\theta}_{\\!B}^{*}(\boldsymbol{\theta}_{\\!A})),\,\,\,\,\,\boldsymbol{\theta}_{\\!B}^{*}(\boldsymbol{\theta}_{\\!A})\\!\vcentcolon=\textnormal{arg\,min}_{\boldsymbol{\theta}_{\\!B}}\\!\mathcal{L}_{\\!B}(\boldsymbol{\theta}_{\\!A},\\!\boldsymbol{\theta}_{\\!B})$ (4) If $\mathcal{L}_{\\!A}$ and $\mathcal{L}_{\\!B}$ are differentiable in $\boldsymbol{\theta}_{\\!A}$ and $\boldsymbol{\theta}_{\\!B}$, we say the game is differentiable. We may be able to approximately find $\boldsymbol{\theta}_{\\!A}^{*}$ efficiently if we can do SGD on: $\mathcal{L}_{\\!A}^{*}(\boldsymbol{\theta}_{\\!A})\vcentcolon=\mathcal{L}_{\\!A}(\boldsymbol{\theta}_{\\!A},\boldsymbol{\theta}_{\\!B}^{*}(\boldsymbol{\theta}_{\\!A}))$ (5) Unfortunately, SGD would require computing $\nicefrac{{d\mathcal{L}_{\\!A}^{*}}}{{d\boldsymbol{\theta}_{\\!A}}}$, which often requires $\nicefrac{{d\boldsymbol{\theta}_{\\!B}^{*}}}{{d\boldsymbol{\theta}_{\\!A}}}$, but $\boldsymbol{\theta}_{\\!B}^{*}(\boldsymbol{\theta}_{\\!A})$ and its Jacobian are typically intractable.
A common optimization algorithm to analyze for finding solutions is simultaneous SGD (SimSGD) – sometimes called gradient descent ascent for zero-sum games – where $\boldsymbol{g}_{\\!A}^{j}\vcentcolon=\boldsymbol{g}_{\\!A}(\boldsymbol{\theta}_{\\!A}^{j},\boldsymbol{\theta}_{\\!B}^{j})$ and $\boldsymbol{g}_{\\!B}^{j}\vcentcolon=\boldsymbol{g}_{\\!B}(\boldsymbol{\theta}_{\\!A}^{j},\boldsymbol{\theta}_{\\!B}^{j})$ are estimators for $\left.\smash{\nabla_{\boldsymbol{\theta}_{\\!A}}}\mathcal{L}_{\\!A}\right|_{\boldsymbol{\theta}_{\\!A}^{j},\boldsymbol{\theta}_{\\!B}^{j}}$ and $\left.\smash{\nabla_{\boldsymbol{\theta}_{\\!B}}\mathcal{L}_{\\!B}}\right|_{\boldsymbol{\theta}_{\\!A}^{j},\boldsymbol{\theta}_{\\!B}^{j}}$: $\displaystyle\smash{\boldsymbol{\theta}_{\\!A}^{j\\!+\\!1}=\boldsymbol{\theta}_{\\!A}^{j}-\alpha\boldsymbol{g}_{\\!A}^{j},\quad\boldsymbol{\theta}_{\\!B}^{j\\!+\\!1}=\boldsymbol{\theta}_{\\!B}^{j}-\alpha\boldsymbol{g}_{\\!B}^{j}}$ (SimSGD) We simplify notation with the concatenated or joint-parameters $\smash{\boldsymbol{\omega}\\!\vcentcolon=\\![\boldsymbol{\theta}_{\\!A},\boldsymbol{\theta}_{\\!B}]\\!\in\\!\mathbb{R}^{d}}$ and the joint-gradient vector field $\hat{\boldsymbol{g}}:\mathbb{R}^{d}\to\mathbb{R}^{d}$, which at the $j^{th}$ iteration is the joint-gradient denoted: $\hat{\boldsymbol{g}}^{j}\vcentcolon=\hat{\boldsymbol{g}}(\boldsymbol{\omega}^{j})\vcentcolon=[\boldsymbol{g}_{\\!A}(\boldsymbol{\omega}^{j}),\boldsymbol{g}_{\\!B}(\boldsymbol{\omega}^{j})]=[\boldsymbol{g}_{\\!A}^{j},\boldsymbol{g}_{\\!B}^{j}]$ (6) We extend to $n$-player games by treating $\boldsymbol{\omega}$ and $\hat{\boldsymbol{g}}$ as concatenations of the players' parameters and loss gradients, allowing for a concise expression of the SimSGD update with momentum (SimSGDm): $\displaystyle\smash{\boldsymbol{\mu}^{j\\!+\\!1}=\beta\boldsymbol{\mu}^{j}-\hat{\boldsymbol{g}}^{j},\quad\quad\boldsymbol{\omega}^{j\\!+\\!1}=\boldsymbol{\omega}^{j}+\alpha\boldsymbol{\mu}^{j\\!+\\!1}}$ (SimSGDm) Gidel et al. [25] show classical momentum choices of $\beta\in[$0$,$1$)$ do not improve solution speed over SimSGD in some games, while negative momentum helps if the Jacobian of the joint-gradient vector field $\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}$ has complex eigenvalues. Thus, for purely adversarial games with imaginary eigenvalues, any non-negative momentum and step size will not converge. For cooperative games – i.e., minimization – $\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}$ has strictly real eigenvalues because it is a loss's Hessian, so classical momentum works well.
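To see why purely imaginary eigenvalues defeat these methods, the following sketch (our illustration, not from the paper) runs SimSGD on the bilinear game $\min_{x}\max_{y}xy$; each step rotates the iterate around the equilibrium and scales its norm by $\sqrt{1+\alpha^{2}}>1$, so the dynamics spiral outward.

```python
# SimSGD on the bilinear game min_x max_y x*y: a purely adversarial game
# whose joint-gradient Jacobian has purely imaginary eigenvalues.
import numpy as np

alpha = 0.1
w = np.array([1.0, 1.0])        # joint parameters [x, y]

def joint_grad(w):
    x, y = w
    return np.array([y, -x])    # [d/dx (x*y), d/dy (-x*y)]

for j in range(100):
    w = w - alpha * joint_grad(w)

# The norm grows by sqrt(1 + alpha^2) per step, so SimSGD spirals away
# from the equilibrium (0, 0) instead of converging.
print(np.linalg.norm(w))        # ~ 2.3 here, up from sqrt(2) ~ 1.41
```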
[Figure 2 diagrams: (a) Classical [27]; (b) Aggregated [26]; (c) Recurrently linked (new); (d) Complex (ours).]

Figure 2: We show computational diagrams for momentum variants simultaneously updating all players' parameters, which update the momentum buffers $\boldsymbol{\mu}$ at iteration $j+1$ with coefficient $\beta$ via $\boldsymbol{\mu}^{j+1}=\beta\boldsymbol{\mu}^{j}-\text{gradient}$. Our parameter update is a linear combination of the momentum buffers weighted by step sizes $\alpha$. _(a)_ Classical momentum [27, 29], with a single buffer and coefficient $\beta\in[0,1)$. _(b)_ Aggregated momentum [26], which adds multiple buffers with different coefficients. _(c)_ Recurrently linked momentum, which adds cross-buffer coefficients and updates the buffers with $\boldsymbol{\mu}_{(k)}^{j+1}=\sum_{l}\beta_{(l,k)}\boldsymbol{\mu}_{(l)}^{j}-\text{gradient}$. We allow $\beta_{(l,k)}$ to be negative, like negative momentum [25], for solutions with simultaneous updates in adversarial games. _(d)_ Complex momentum is a special case of recurrently linked momentum with two buffers and $\beta_{(1,1)}=\beta_{(2,2)}=\Re(\beta)$, $\beta_{(1,2)}=-\beta_{(2,1)}=\Im(\beta)$. Analyzing other recurrently linked momentum setups is an open problem.

### 2.2 Limitations of Existing Methods

Higher-order: Methods using higher-order gradients are often harder to parallelize across GPUs [33], get attracted to bad saddle points [34], require estimators for inverse Hessians [35, 36], are complicated to implement, have numerous optimizer parameters, and can be more expensive in iteration and memory cost [37, 36, 35, 38, 39, 40]. Instead, we focus on first-order methods.

First-order: Some first-order methods, such as extragradient [41], require a second, costly gradient evaluation per step. Similarly, methods alternating player updates are bottlenecked because the second player's gradient can only be evaluated after the first player's update. But many deep learning setups can parallelize computation of both players' gradients, making alternating updates effectively cost another gradient evaluation. We want a method which updates with the effective cost of one gradient evaluation. Also, simultaneous updates are a standard choice in some settings [15].

Robust convergence: We want our method to converge in purely adversarial games with simultaneous updates – a setup where existing momentum methods fail [25].
Furthermore, computing a game's eigendecomposition is often infeasibly expensive, so we want methods that robustly converge over different mixtures of adversarial and cooperative eigenspaces. We are particularly interested in eigenspace mixtures that are relevant during GAN training – see Figure 7 and Appendix Figure 9.

### 2.3 Coming up with our Method

Combining existing methods: Given the preceding limitations, we would like a robust first-order method using a single, simultaneous gradient evaluation. We looked at combining aggregated [26] with negative [25] momentum by allowing negative coefficients, because these methods are first-order and use a single gradient evaluation – see Figure 2(b). Also, aggregated momentum provides robustness during optimization by converging quickly on problems with a wide range of conditioning, while negative momentum works in adversarial setups. We hoped to combine their benefits, gaining robustness to different mixtures of adversarial and cooperative eigenspaces. However, with this setup we could not find solutions that converge with simultaneous updates in purely adversarial games.

Generalize to allow solutions: We generalized the setup to allow recurrent connections between momentum buffers, with potentially negative coefficients – see Figure 2(c) and Appendix Algorithm 4. There are optimizer parameters for which this converges with simultaneous updates in purely adversarial games, while being first-order with a single gradient evaluation – see Corollary 1. However, in general, this setup could introduce many optimizer parameters, have unintuitive behavior, and not be amenable to analysis. So, we choose a special case of this method to help solve these problems.

A simple solution: With two momentum buffers and correctly chosen recurrent weights, we can interpret our buffers as the real and imaginary parts of one complex buffer – see Figure 2(d). This method is (a) capable of converging in purely adversarial games with simultaneous updates – Corollary 1, (b) only introduces one new optimizer parameter – the phase of the momentum coefficient, (c) is tractable to analyze and build intuition for with Euler's formula – e.g., Eq. (8), (d) is trivial to implement in libraries supporting complex arithmetic – see Figure 1, and (e) can be robust to games with different mixtures of cooperative and adversarial eigenspaces – see Figure 5.

[Figure 3 panels: parameter trajectories (discriminator vs. generator, left); norm of joint-gradient $\|\hat{\boldsymbol{g}}\|$ and distance to optimum vs. iterations (right).]

Figure 3: Complex momentum helps correct rotational dynamics when training a Dirac-GAN [42]. _Left:_ Parameter trajectories with step size $\alpha\\!=\\!0.1$ and momentum $\beta\\!=\\!0.9\exp(i\nicefrac{{\pi}}{{8}})$. We include classical, real, positive momentum, which diverges for any step size. _Right:_ The distance from the optimum, which has a linear convergence rate matching our prediction with Theorem 1 and (14).

## 3 Complex Momentum

We describe our proposed method, where the momentum coefficient $\beta\in\mathbb{C}$, step size $\alpha\in\mathbb{R}$, momentum buffer $\boldsymbol{\mu}\in\mathbb{C}^{d}$, and player parameters $\boldsymbol{\omega}\in\mathbb{R}^{d}$.
The simultaneous (or Jacobi) update is: $\displaystyle\smash{\boldsymbol{\mu}^{j\\!+\\!1}=\beta\boldsymbol{\mu}^{j}-\hat{\boldsymbol{g}}^{j},\,\,\,\,\,\boldsymbol{\omega}^{j\\!+\\!1}=\boldsymbol{\omega}^{j}+\Re(\alpha\boldsymbol{\mu}^{j\\!+\\!1})}$ (SimCM) There are many ways to get a real-valued update from $\boldsymbol{\mu}\in\mathbb{C}^{d}$, but we only consider updates equivalent to classical momentum when $\beta\in\mathbb{R}$. Specifically, we simply update the parameters using the real component of the momentum, $\Re(\boldsymbol{\mu})$.

Algorithm 1 (SimCM) Momentum
1: $\beta,\alpha\in\mathbb{C},\boldsymbol{\mu}\in\mathbb{C}^{d},\boldsymbol{\omega}^{0}\in\mathbb{R}^{d}$
2: for $j=1\dots N$ do
3:  $\boldsymbol{\mu}^{j+1}=\beta\boldsymbol{\mu}^{j}-\hat{\boldsymbol{g}}^{j}$
4:  $\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}+\Re(\alpha\boldsymbol{\mu}^{j+1})$
5: return $\boldsymbol{\omega}^{N}$

We show the SimCM update in Algorithm 1 and visualize it in Figure 2(d). We also show the alternating (or Gauss-Seidel) update, which is common for GAN training: $\displaystyle\boldsymbol{\mu}^{j\\!+\\!1}_{A}\\!$ $\displaystyle=\\!\beta\boldsymbol{\mu}^{j}_{A}\\!-\\!\hat{\boldsymbol{g}}_{A}(\boldsymbol{\omega}^{j}\\!),\boldsymbol{\theta}_{\\!A}^{j\\!+\\!1}\\!=\\!\boldsymbol{\theta}_{\\!A}^{j}\\!+\\!\Re(\alpha\boldsymbol{\mu}^{j\\!+\\!1}_{A}\\!)$ (AltCM) $\displaystyle\boldsymbol{\mu}^{j\\!+\\!1}_{B}\\!$ $\displaystyle=\\!\beta\boldsymbol{\mu}^{j}_{B}\\!-\\!\hat{\boldsymbol{g}}_{B}(\boldsymbol{\theta}_{\\!A}^{j\\!+\\!1}\\!\\!,\boldsymbol{\theta}_{\\!B}^{j}\\!),\boldsymbol{\theta}_{\\!B}^{j\\!+\\!1}\\!=\\!\boldsymbol{\theta}_{\\!B}^{j}\\!+\\!\Re(\alpha\boldsymbol{\mu}^{j\\!+\\!1}_{B}\\!)$

#### Generalizing negative momentum:

Consider the negative momentum from Gidel et al. [25]: $\boldsymbol{\omega}^{j\\!+\\!1}=\boldsymbol{\omega}^{j}-\alpha\hat{\boldsymbol{g}}^{j}+\beta(\boldsymbol{\omega}^{j}-\boldsymbol{\omega}^{j-1})$. Expanding (SimCM) with $\boldsymbol{\mu}^{j}=\nicefrac{{(\boldsymbol{\omega}^{j}-\boldsymbol{\omega}^{j-1})}}{{\alpha}}$ for real momentum shows the negative momentum method of Gidel et al. [25] is a special case of our method: $\displaystyle\boldsymbol{\omega}^{j\\!+\\!1}=\boldsymbol{\omega}^{j}+\Re(\alpha(\beta\nicefrac{{(\boldsymbol{\omega}^{j}-\boldsymbol{\omega}^{j-1})}}{{\alpha}}-\hat{\boldsymbol{g}}^{j}))=\boldsymbol{\omega}^{j}-\alpha\hat{\boldsymbol{g}}^{j}+\beta(\boldsymbol{\omega}^{j}-\boldsymbol{\omega}^{j-1})$ (7)

### 3.1 Dynamics of Complex Momentum

For simplicity, we assume Numpy-style [43] component-wise broadcasting for operations like taking the real part $\Re(\boldsymbol{z})$ of a vector $\boldsymbol{z}=[z_{1},\dots,z_{n}]\in\mathbb{C}^{n}$, with proofs in the Appendix.
Expanding the buffer updates with the polar components of $\beta$ gives intuition for complex momentum: $\displaystyle\begin{split}\vphantom{A^{A^{A^{A}}}}\boldsymbol{\mu}^{j\\!+\\!1}\\!=\\!\beta\boldsymbol{\mu}^{j}-\hat{\boldsymbol{g}}^{j}&\iff\boldsymbol{\mu}^{j\\!+\\!1}\\!=\\!\beta(\beta(\cdots)-\hat{\boldsymbol{g}}^{j-1})-\hat{\boldsymbol{g}}^{j}\iff\boldsymbol{\mu}^{j\\!+\\!1}\\!=\\!\smash{-\sum_{k=0}^{k=j}\beta^{k}\hat{\boldsymbol{g}}^{j-k}}\iff\\\ \Re(\boldsymbol{\mu}^{j\\!+\\!1})\\!=\\!-&\sum_{k=0}^{k=j}|\beta|^{k}\cos(k\arg(\beta))\hat{\boldsymbol{g}}^{j-k},\,\,\,\,\,\Im(\boldsymbol{\mu}^{j\\!+\\!1})\\!=\\!-\sum_{k=0}^{k=j}|\beta|^{k}\sin(k\arg(\beta))\hat{\boldsymbol{g}}^{j-k}\end{split}$ (8) The final line follows from Euler's formula (26). From (8) we can see that $\beta$ controls the momentum buffer $\boldsymbol{\mu}$ by having $|\beta|$ dictate prior gradient decay rates, while $\arg(\beta)$ controls the oscillation frequency between adding and subtracting prior gradients, which we visualize in Figure 3.1.

[Figure 4(a) axes: dependence on gradient $\hat{\boldsymbol{g}}^{k}$ vs. iteration $k$.]

Figure 4(a): We show how the real part of our momentum buffer – which dictates the parameter update – at the $50^{th}$ iteration, $\Re(\boldsymbol{\mu}^{50})$, depends on past gradients $\hat{\boldsymbol{g}}^{k}$ for $k\\!=\\!1\dots 50$. The momentum magnitude is fixed to $|\beta|\\!=\\!0.9$ as in Figure 3. Euler's formula is used in (8) to find the coefficient of $\hat{\boldsymbol{g}}^{k}$ via $\Re(\boldsymbol{\mu}^{50})\\!=\\!-\sum_{k=0}^{k=50}|\beta|^{k}\cos(k\arg(\beta))\hat{\boldsymbol{g}}^{50-k}$. Complex momentum allows smooth changes in the buffer's dependence on past gradients.

[Figure 4(b) axes: momentum phase $\arg(\beta)$ vs. momentum magnitude $|\beta|$, colored by the number of steps to converge.]

Figure 4(b): How many steps simultaneous complex momentum takes on a Dirac-GAN to reach a set distance from the solution. We fix the step size $\alpha\\!=\\!0.1$ as in Figure 3, while varying the phase and magnitude of our momentum $\beta\\!=\\!|\beta|\exp(i\arg(\beta))$. There is a red star at the optimum, dashed red lines at real $\beta$, and a dashed magenta line for simultaneous gradient descent. There are no real-valued $\beta$ that converge for this – or any – $\alpha$ with simultaneous updates [25]. Appendix Figure 8 compares this with alternating updates (AltCM).
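The closed form in (8) is easy to verify numerically; the following sketch compares the recursive buffer update against $-\sum_{k}\beta^{k}\hat{\boldsymbol{g}}^{j-k}$ using random placeholder gradients.

```python
# Numerically verifying the closed form in (8) with placeholder "gradients".
import numpy as np

rng = np.random.default_rng(0)
beta = 0.9 * np.exp(1j * np.pi / 8)
grads = rng.normal(size=20)      # stand-ins for the g-hat sequence

mu = 0j
for g in grads:                  # recursive buffer update
    mu = beta * mu - g

j = len(grads) - 1
closed = -sum(beta ** k * grads[j - k] for k in range(j + 1))
print(np.isclose(mu, closed))    # True
```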
Expanding the parameter updates with the Cartesian components of $\alpha$ and $\beta$ is key for Theorem 1, which characterizes the convergence rate: $\displaystyle\begin{split}\smash{\boldsymbol{\mu}^{j\\!+\\!1}}\\!=\\!\smash{\beta\boldsymbol{\mu}^{j}-\hat{\boldsymbol{g}}^{j}\iff}\\\ \smash{\Re(\boldsymbol{\mu}^{j\\!+\\!1})}\\!=\\!\smash{\Re(\beta)\\!\Re(\boldsymbol{\mu}^{j})\\!-\\!\Im(\beta)\\!\Im(\boldsymbol{\mu}^{j})\\!-\\!\Re(\hat{\boldsymbol{g}}^{j})},\,\,\,&\Im(\boldsymbol{\mu}^{j\\!+\\!1})\\!=\\!\Im(\beta)\\!\Re(\boldsymbol{\mu}^{j})\\!+\\!\Re(\beta)\\!\Im(\boldsymbol{\mu}^{j})\end{split}$ (9) $\displaystyle\boldsymbol{\omega}^{j\\!+\\!1}\\!=\\!\boldsymbol{\omega}^{j}\\!+\\!\Re(\alpha\boldsymbol{\mu}^{j\\!+\\!1})\iff\boldsymbol{\omega}^{j\\!+\\!1}\\!=\\!\boldsymbol{\omega}^{j}\\!-\\!\alpha\hat{\boldsymbol{g}}^{j}\\!+\\!\Re(\alpha\beta)\\!\Re(\boldsymbol{\mu}^{j})\\!-\\!\Im(\alpha\beta)\\!\Im(\boldsymbol{\mu}^{j})$ (10) So, we can write the next iterate with a fixed-point operator: $\smash{[\Re(\boldsymbol{\mu}^{j\\!+\\!1}),\\!\Im(\boldsymbol{\mu}^{j\\!+\\!1}),\\!\boldsymbol{\omega}^{j\\!+\\!1}]\\!\\!=\\!\boldsymbol{F}_{\alpha,\beta}([\Re(\boldsymbol{\mu}^{j}),\\!\Im(\boldsymbol{\mu}^{j}),\\!\boldsymbol{\omega}^{j}]\\!)}$ (11) (9) and (10) allow us to write the Jacobian of $\boldsymbol{F}_{\alpha,\beta}$, which can be used to bound convergence rates near fixed points. We call it the Jacobian of the augmented dynamics of the buffer $\boldsymbol{\mu}$ and joint-parameters $\boldsymbol{\omega}$, and denote it: $\boldsymbol{R}\\!\vcentcolon=\\!\nabla_{[\boldsymbol{\mu},\boldsymbol{\omega}]}\boldsymbol{F}_{\alpha,\beta}=\begin{bmatrix}\Re(\beta)\boldsymbol{I}&-\Im(\beta)\boldsymbol{I}&-\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}\\\ \Im(\beta)\boldsymbol{I}&\Re(\beta)\boldsymbol{I}&0\\\ \Re(\alpha\beta)\boldsymbol{I}&-\Im(\alpha\beta)\boldsymbol{I}&\boldsymbol{I}\\!-\\!\alpha\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}\\\ \end{bmatrix}$ (12) So, for quadratic losses our parameters evolve via: $\smash{[\Re(\boldsymbol{\mu}^{j\\!+\\!1}),\\!\Im(\boldsymbol{\mu}^{j\\!+\\!1}),\\!\boldsymbol{\omega}^{j\\!+\\!1}]^{\top}\\!\\!=\\!\boldsymbol{R}\,[\Re(\boldsymbol{\mu}^{j}),\\!\Im(\boldsymbol{\mu}^{j}),\\!\boldsymbol{\omega}^{j}]^{\top}\\!}$ (13) We can bound convergence rates by looking at the spectrum of $\boldsymbol{R}$ with Theorem 1.

###### Theorem 1 (Consequence of Prop. 4.4.1 Bertsekas [44]).

Convergence rate of complex momentum: If the spectral radius $\rho(\boldsymbol{R})\\!=\\!\rho(\nabla_{[\boldsymbol{\mu},\boldsymbol{\omega}]}\boldsymbol{F}_{\alpha,\beta})\\!<\\!1$, then, for $[\boldsymbol{\mu},\boldsymbol{\omega}]$ in a neighborhood of $[\boldsymbol{\mu}^{*}\\!,\boldsymbol{\omega}^{*}]$, the distance of $[\boldsymbol{\mu}^{j}\\!,\boldsymbol{\omega}^{j}]$ to the stationary point $[\boldsymbol{\mu}^{*}\\!,\boldsymbol{\omega}^{*}]$ converges at a linear rate $\mathcal{O}((\rho(\boldsymbol{R})+\epsilon)^{j}),\forall\epsilon\\!>\\!0$. Here, linear convergence means $\lim_{j\to\infty}\\!\\!\nicefrac{{\|\boldsymbol{\omega}^{j\\!+\\!1}-\boldsymbol{\omega}^{*}\|}}{{\|\boldsymbol{\omega}^{j}-\boldsymbol{\omega}^{*}\|}}\\!\in\\!($0$,$1$)$, where $\boldsymbol{\omega}^{*}$ is a fixed point. We should select optimization parameters $\alpha,\beta$ so that the spectral radius of the augmented dynamics satisfies $\rho(\boldsymbol{R}(\alpha,\beta))\\!<\\!1$—with the dependence on $\alpha$ and $\beta$ now explicit.
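This criterion is easy to check numerically. The sketch below (our illustration) builds $\boldsymbol{R}$ from Eq. (12) for the bilinear game $\min_{x}\max_{y}xy$ and reports the spectral radius for a few momentum choices; the step size and coefficients are illustrative assumptions.

```python
# Building R from Eq. (12) and checking rho(R) < 1 for the bilinear game
# min_x max_y x*y; the alpha and beta values are illustrative choices.
import numpy as np

def augmented_R(J, alpha, beta):
    d = J.shape[0]
    I, Z = np.eye(d), np.zeros((d, d))
    ab = alpha * beta
    return np.block([
        [beta.real * I, -beta.imag * I, -J            ],
        [beta.imag * I,  beta.real * I,  Z            ],
        [ab.real   * I, -ab.imag   * I,  I - alpha * J],
    ])

def spectral_radius(M):
    return np.abs(np.linalg.eigvals(M)).max()

# Jacobian of the joint-gradient field for min_x max_y x*y: eigenvalues +-i.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

alpha = 0.1
for beta in [0.9, -0.9, 0.9 * np.exp(1j * np.pi / 8)]:
    rho = spectral_radius(augmented_R(J, alpha, complex(beta)))
    print(f"arg(beta)={np.angle(beta):+.2f}, |beta|={abs(beta):.2f}: "
          f"rho(R)={rho:.4f}", "(converges)" if rho < 1 else "(diverges)")
```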
We may want to express $\operatorname*{Sp}(\boldsymbol{R}(\alpha,\beta))$ in terms of the spectrum $\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$, as in Theorem 3 of Gidel et al. [25]: $\smash{\boldsymbol{f}(\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}),\alpha,\beta)\\!=\\!\operatorname*{Sp}(\boldsymbol{R}(\alpha,\beta))}$ (14) We provide a Mathematica command in Appendix A.2 for a cubic polynomial $p$ that characterizes $\boldsymbol{f}$, with coefficients that are functions of $\alpha,\beta$, and $\lambda\in\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$; its roots are eigenvalues of $\boldsymbol{R}$, and we use it in subsequent results. O'Donoghue and Candès [45] and Lucas et al. [26] note that in practice we do not know the condition number, eigenvalues – or the mixture of cooperative and adversarial eigenspaces – of a set of functions that we are optimizing, so we try to design algorithms which work over a large range. Sharing this motivation, we consider convergence behavior on games ranging from purely adversarial to cooperative. In Section 4.2, at every non-real $\beta$ we could select $\alpha$ and $|\beta|$ so Algorithm 1 converges. We define _almost-positive_ to mean $\arg(\beta)\\!=\\!\epsilon$ for small $\epsilon$, and show there are almost-positive $\beta$ which converge.

###### Corollary 1 (Convergence of Complex Momentum).

There exist $\alpha\in\mathbb{R},\beta\in\mathbb{C}$ so that Algorithm 1 converges for bilinear zero-sum games. Moreover, for small $\epsilon$ (we show it for $\epsilon=\frac{\pi}{16}$), if $\arg(\beta)=\epsilon$ (i.e., almost-positive) or $\arg(\beta)=\pi-\epsilon$ (i.e., almost-negative), then we can select $\alpha,|\beta|$ to converge.

Why show this? Our result complements Gidel et al. [25], who show that for all real $\alpha,\beta$, Algorithm 1 _does not_ converge in this setting. We include the proof for bilinear zero-sum games, but the result generalizes to some games that are purely adversarial near fixed points, like Dirac GANs [34]. The result's second part gives evidence that there is a sense in which the only $\beta$ that do not converge are real (with simultaneous updates on purely adversarial games). It also suggests a form of robustness: almost-positive $\beta$ can approach acceleration in cooperative eigenspaces while converging in adversarial eigenspaces, so almost-positive $\beta$ may be desirable when we have games with an uncertain or variable mixture of real and imaginary eigenvalues, like GANs. Sections 4.2, 4.3, and 4.4 investigate this further.

### 3.2 What about Acceleration?

With classical momentum, finding the step size $\alpha$ and momentum $\beta$ that optimize the convergence rate is tractable if $0\\!<\\!l\\!\leq\\!L$ and $\smash{\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})\\!\in\\![l,L]^{d}}$ [46] – i.e., we have an $l$-strongly convex and $L$-Lipschitz loss. The conditioning $\kappa\\!=\\!\nicefrac{{L}}{{l}}$ can characterize problem difficulty. Gradient descent with an appropriate $\alpha$ can achieve a convergence rate of $\smash{\frac{\kappa-1}{\kappa+1}}$, but using momentum with appropriate $(\alpha^{*}\\!,\beta^{*})$ can achieve an _accelerated_ rate of $\smash{\rho^{*}\\!=\\!\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}}$. However, there is no consensus on how to constrain $\smash{\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})}$ in games for tractable and useful results.
Candidate constraints include monotonic vector fields generalizing notions of convexity, or vector fields with bounded eigenvalue norms capturing a kind of sensitivity [47]. Figure 7 shows $\smash{\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})}$ for a GAN – we can attribute some eigenvectors to a single player's parameters. The discriminator can be responsible for the largest- and smallest-norm eigenvalues, suggesting we may benefit from varying $\alpha$ and $\beta$ for each player, as done in Section 4.4.

### 3.3 Implementing Complex Momentum

Complex momentum is trivial to implement with libraries supporting complex arithmetic like JAX [48] or Pytorch [49]. Given an SGD implementation, we often only need to change a few lines of code – see Figure 1. Also, (9) and (10) can easily be used to implement Algorithm 1 in a library without complex arithmetic. More sophisticated optimizers like Adam can trivially support complex optimizer parameters with real-valued updates, which we explore in Section 4.4.

### 3.4 Scope and Limitations

For some games, we need higher than first-order information to converge – e.g., pure-response games [7] – because the first-order information for a player is identically zero. So, momentum methods only using first-order information will not converge in general. However, we can combine methods with second-order information and momentum algorithms [7, 9]. Complex momentum's computational cost is almost identical to classical and negative momentum, except we now have a buffer with twice as many real parameters. We require one more optimization hyperparameter than classical momentum, for which we provide an initial guess in Section 4.5.

## 4 Experiments

We investigate complex momentum's performance in training GANs and in games with different mixtures of cooperative and adversarial eigenspaces, showing improvements over standard baselines. Code for experiments will be available on publication, with reproducibility details in Appendix C.

Overview: We start with a purely adversarial Dirac-GAN and zero-sum games, which have known solutions $\boldsymbol{\omega}^{*}\\!=\\!(\boldsymbol{\theta}_{\\!A}^{*},\boldsymbol{\theta}_{\\!B}^{*})$ and spectrums $\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$, so we can assess convergence rates. Next, we evaluate GANs generating $2$D distributions, because they are simple enough to train with plain, alternating SGD. Finally, we look at scaling to larger-scale GANs on images, which have brittle optimization and require optimizers like Adam. Complex momentum provides benefits in each setup. We only compare to first-order optimization methods, despite there being various second-order methods, due to the limitations discussed in Section 2.2.

### 4.1 Optimization in Purely Adversarial Games

Here, we consider optimizing the Dirac-GAN objective, which is surprisingly hard and where many classical optimization methods fail, because $\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$ is imaginary near solutions: $\smash{\min_{x}\max_{y}-\log(1+\exp(-xy))-\log(2)}$ (15) Figure 3 empirically verifies the convergence rates given by Theorem 1 with (14), by showing the optimization trajectories with simultaneous updates. Figure 3.1 investigates how the components of the momentum $\beta$ affect convergence rates with simultaneous updates and a fixed step size. The best $\beta$ was almost-positive (i.e., $\arg(\beta)\\!=\\!\epsilon$ for small $\epsilon$).
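A minimal sketch of this simultaneous-update experiment is below, using the step size and momentum from Figure 3; the initialization is an arbitrary assumption.

```python
# Algorithm 1 (SimCM) on the Dirac-GAN objective (15), with the alpha and
# beta from Figure 3; the starting point is an arbitrary choice.
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def joint_grad(w):
    x, y = w
    s = sigma(-x * y)
    # player x descends f(x, y); player y ascends f, i.e. descends -f
    return np.array([y * s, -x * s])

alpha, beta = 0.1, 0.9 * np.exp(1j * np.pi / 8)
w = np.array([1.0, 1.0])           # joint parameters, real-valued
mu = np.zeros(2, dtype=complex)    # complex momentum buffer

for j in range(500):
    mu = beta * mu - joint_grad(w)
    w = w + np.real(alpha * mu)

print(np.linalg.norm(w))  # distance to the equilibrium (0, 0) shrinks
```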
We repeat this experiment with alternating updates – standard in GAN training – in Appendix Figure 8. There, almost-positive momentum is best (but negative momentum also converges), and the benefit of alternating updates can depend on whether we can parallelize the players' gradient evaluations.

### 4.2 How Adversarialness Affects Convergence Rates

Here, we compare optimization with first-order methods for purely adversarial, cooperative, and mixed games. We use the following game, allowing us to easily interpolate between these regimes: $\smash{\min_{\boldsymbol{x}}\max_{\boldsymbol{y}}\boldsymbol{x}^{\top}(\boldsymbol{\gamma}\boldsymbol{A})\boldsymbol{y}+\boldsymbol{x}^{\top}((\boldsymbol{I}-\boldsymbol{\gamma})\boldsymbol{B}_{1})\boldsymbol{x}-\boldsymbol{y}^{\top}((\boldsymbol{I}-\boldsymbol{\gamma})\boldsymbol{B}_{2})\boldsymbol{y}}$ (16)

[Figure 5 axes: number of gradient evaluations to converge vs. max adversarialness $\gamma_{max}$.]

Figure 5: We compare first-order methods' convergence rates on the game in (16), with $\boldsymbol{A}\\!=\\!\boldsymbol{B}_{1}\\!=\\!\boldsymbol{B}_{2}$ diagonal and entries linearly spaced in $[\nicefrac{{1}}{{4}},4]$. We interpolate from purely cooperative to a mixture of purely cooperative and adversarial eigenspaces in $\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$ by making $\boldsymbol{\gamma}$ diagonal with $\gamma_{j}\\!\sim\\!U[0,\gamma_{max}]$, inducing the $j^{th}$ eigenvalue pair to have $\arg(\lambda_{j})\\!\approx\\!\pm\gamma_{j}\frac{\pi}{2}$. So, $\gamma_{max}$ controls the largest possible eigenvalue $\arg$, or _max adversarialness_. Every method generalizes gradient descent-ascent (GDA) by adding an optimizer parameter, tuned via grid search. Positive momentum and negative momentum do not converge if there are purely adversarial eigenspaces (i.e., $\gamma_{max}\\!=\\!1$). Almost-positive momentum $\arg(\beta)\\!=\\!\epsilon\\!>\\!0$, like $\nicefrac{{\pi}}{{8}}$, allows us to approach the acceleration of positive momentum if sufficiently cooperative (i.e., $\gamma_{max}\\!<\\!0.5$), while still converging if there are purely adversarial eigenspaces (i.e., $\gamma_{max}\\!=\\!1$). Tuning $\arg(\beta)$ with complex momentum performs competitively with extragradient (EG) and optimistic gradient (OG) for any adversarialness – e.g., $\arg(\beta)\\!=\\!\nicefrac{{\pi}}{{2}}$ does well if there are purely adversarial eigenspaces (i.e., $\gamma_{max}\\!=\\!1$).

If $\boldsymbol{\gamma}\\!=\\!\boldsymbol{I}$ the game is purely adversarial, while if $\boldsymbol{\gamma}\\!=\\!\boldsymbol{0}$ the game is purely cooperative. Figure 6 explores $\operatorname*{Sp}(\boldsymbol{R})$ in purely adversarial games for a range of $\alpha,\beta$, generalizing Figure 4 of Gidel et al. [25]. At every non-real $\beta$ – i.e., $\arg(\beta)\\!\neq\\!\pi$ or $0$ – we could select $\alpha,|\beta|$ that converge.

[Figure 6 panels: $\arg(\beta)\in\\{0,\frac{\pi}{4},\frac{\pi}{2},\frac{3\pi}{4},\pi\\}$; axes: $\Re(\operatorname*{Sp}(\boldsymbol{R}))$ vs. $\Im(\operatorname*{Sp}(\boldsymbol{R}))$, colored by $|\beta|$.]

Figure 6: The spectrum of the augmented learning dynamics $\boldsymbol{R}$ is shown, whose spectral radius is the convergence rate in Theorem 1. Each image is a different momentum phase $\arg(\beta)$ for a range of $\alpha,\\!|\beta|\\!\in\\![0,\\!1]$.
The opacity of an eigenvalue (eig) encodes the step size $\alpha$, and the color corresponds to the momentum magnitude $|\beta|$. A red unit circle shows where all eigs must lie to converge for a fixed $\alpha,\beta$. If the max eig norm is $<\\!1$, we draw a green circle whose radius is our convergence rate and a green star at the associated eig. Notably, at every non-real $\beta$ we can select $\alpha,\\!|\beta|$ for convergence. The eigs are symmetric over the $x$-axis, and eigs near $\Re(\lambda)\\!=\\!1$ dictate the convergence rate. Eigs near the center are due to state augmentation, have small magnitudes, and do not impact the convergence rate. Simultaneous gradient descent corresponds to the magenta values where $|\beta|\\!=\\!0$.

Figure 5 compares first-order algorithms as we interpolate from purely cooperative games (i.e., minimization) to mixtures of purely adversarial and cooperative eigenspaces, because this range of setups can occur during GAN training – see Figure 7. Our baselines are simultaneous SGD (or gradient descent-ascent (GDA)), extragradient (EG) [41], optimistic gradient (OG) [50, 51, 52], and momentum variants. We added extrapolation parameters for EG and OG so they are competitive with momentum – see Appendix Section C.3. We report how many gradient evaluations each method needs to reach a set solution distance; note that EG costs two evaluations per update. We optimize convergence rates for each game and method by grid search, as is common for optimization parameters in deep learning.

Takeaway: In the cooperative regime – i.e., $\gamma_{max}\\!<\\!.5$ or $\max_{\lambda\in\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})}|\arg(\lambda)|\\!<\\!\nicefrac{{\pi}}{{4}}$ – the best method is classical, positive momentum; otherwise, we benefit from a method for learning in games. If we have purely adversarial eigenspaces, then GDA, positive momentum, and negative momentum fail to converge, while EG, OG, and complex momentum can converge. In games like GANs, the eigendecomposition is infeasible to compute and changes during training – see Appendix Figure 9 – so we want an optimizer that converges robustly. Choosing any non-real momentum $\beta$ allows robust convergence for every eigenspace mixture. Moreover, almost-positive momentum $\beta$ allows us to approach acceleration when cooperative, while still converging if there are purely adversarial eigenspaces.

### 4.3 Training GANs on $2$D Distributions

Here, we investigate improving GAN training using alternating gradient descent updates with complex momentum. We look at alternating updates because they are standard in GAN training [1, 32, 53]. It is not clear how EG and OG generalize to alternating updates, so we use positive and negative momentum as our baselines. We train to generate a $2$D mixture of Gaussians, because more complicated distributions require more complicated optimizers than SGD. Figure 1 shows all changes necessary to use the JAX momentum optimizer for our updates, with full details in Appendix C.4. We evaluate the log-likelihood of GAN samples under the mixture as an imperfect proxy for distribution matching – a sketch of this metric follows below. Appendix Figure 10 shows heatmaps for tuning $\arg(\beta)$ and $|\beta|$ with select step sizes. Takeaway: The best momentum was found at the almost-positive $\beta\approx 0.7\exp(i\nicefrac{{\pi}}{{8}})$ with step size $\alpha\\!=\\!0.03$, and for each $\alpha$ we tested, a broad range of non-real $\beta$ outperformed any real $\beta$. This suggests we may often be able to improve GAN training with alternating updates and complex momentum.
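For reference, here is a sketch of that evaluation metric: the mean log-likelihood of samples under a $2$D Gaussian mixture. The mixture parameters and the "generated" samples below are placeholder assumptions, not our experimental setup.

```python
# Sketch of the Section 4.3 metric: average log-likelihood of samples under
# a 2-D mixture of Gaussians; mixture parameters here are placeholders.
import numpy as np

means = np.array([[x, y] for x in (-1, 0, 1) for y in (-1, 0, 1)], float)
var, weights = 0.05, np.full(9, 1.0 / 9)

def mixture_log_likelihood(samples):
    # log p(s) = logsumexp_k [log w_k + log N(s; mu_k, var * I)]
    d2 = ((samples[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    log_comp = np.log(weights) - d2 / (2 * var) - np.log(2 * np.pi * var)
    m = log_comp.max(axis=1, keepdims=True)
    return (m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))).mean()

fake_samples = np.random.default_rng(0).normal(size=(512, 2))
print(mixture_log_likelihood(fake_samples))
```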
### 4.4 Training BigGAN with a Complex Adam

Here, we investigate improving larger-scale GAN training with complex momentum. However, larger-scale GANs train with more complicated optimizers than gradient descent – like Adam [31] – and have notoriously brittle optimization. We look at training BigGAN [32] on CIFAR-10 [54], but were unable to succeed with optimizers other than the [32]-supplied setups, due to brittle optimization. So, we attempted to change the procedure minimally by taking the [32]-supplied code here, which was trained with Adam, and making the $\beta_{1}$ parameter – analogous to momentum – complex. The modified complex Adam is shown in Algorithm 2, where the momentum bias correction is removed to better match our theory. It is an open question how to best carry over the design of Adam (or other optimizers) to the complex setting. Training each BigGAN took $10$ hours on an NVIDIA T4 GPU, so Figure C.4 and Table 1 took about $1000$ and $600$ GPU hours, respectively. Figure C.4 shows a grid search over $\arg(\beta_{1})$ and $|\beta_{1}|$ for a BigGAN trained with Algorithm 2. We only changed $\beta_{1}$ for the discriminator's optimizer. Takeaway: The best momentum was at the almost-positive $\beta_{1}\approx 0.8\exp(i\nicefrac{{\pi}}{{8}})$, whose samples are in Appendix Figure C.4.

Algorithm 2 Complex Adam variant without momentum bias-correction
1: $\beta_{1}\in\mathbb{C},\beta_{2}\in[0,1)$
2: $\alpha\in\mathbb{R}^{+},\epsilon\in\mathbb{R}^{+}$
3: for $j=1\dots N$ do
4:  $\boldsymbol{\mu}^{j+1}=\beta_{1}\boldsymbol{\mu}^{j}-\boldsymbol{g}^{j}$
5:  $\boldsymbol{v}^{j+1}=\beta_{2}\boldsymbol{v}^{j}+(1-\beta_{2})(\boldsymbol{g}^{j})^{2}$
6:  $\hat{\boldsymbol{v}}^{j+1}=\frac{\boldsymbol{v}^{j+1}}{1-(\beta_{2})^{j}}$
7:  $\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}+\alpha\frac{\Re(\boldsymbol{\mu}^{j+1})}{\sqrt{\hat{\boldsymbol{v}}^{j+1}}+\epsilon}$
8: return $\boldsymbol{\omega}^{N}$

| Discriminator $\beta_{1}$ | Min IS | Max IS |
|---|---|---|
| $0$ – [32]'s default | $8.9$ | $9.1$ |
| $0.8\exp(i\nicefrac{\pi}{8})$ – ours | $8.96$ (+.06) | $9.25$ (+.15) |
| $0.8$ | $3.12$ (−5.78) | $9.05$ (−0.05) |

Table 1: We display the best inception scores (IS) found over $10$ seeds for training BigGAN on CIFAR-10 with various optimizer settings. We use the complex Adam variant outlined in Algorithm 2, where we only tuned $\beta_{1}$ for the discriminator. The best parameters found in Figure C.4 were $\beta_{1}=0.8\exp(i\nicefrac{\pi}{8})$, which improved the min and max IS over our runs of the BigGAN authors' baseline – the SoTA optimizer in this setting, to the best of our knowledge. We tested $\beta_{1}=0.8$ to see if the gain was solely from tuning $|\beta_{1}|$; it occasionally failed and decreased the best IS.

We tested the best momentum value over $10$ seeds against the author-provided baseline in Appendix Figure C.4, with the results summarized in Table 1.
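For concreteness, the following sketch implements Algorithm 2 as a standalone update rule and runs it on an assumed toy quadratic – not the BigGAN setup – to show the mechanics; the hyperparameters are illustrative.

```python
# Algorithm 2 (complex Adam without momentum bias-correction) on an assumed
# toy quadratic 0.5 * ||w||^2; hyperparameters are illustrative.
import numpy as np

def complex_adam_step(w, mu, v, g, j, alpha=0.01,
                      beta1=0.8 * np.exp(1j * np.pi / 8),
                      beta2=0.999, eps=1e-8):
    mu = beta1 * mu - g                      # complex first moment, no bias fix
    v = beta2 * v + (1 - beta2) * g ** 2     # real second moment
    v_hat = v / (1 - beta2 ** j)             # bias-corrected second moment
    w = w + alpha * np.real(mu) / (np.sqrt(v_hat) + eps)
    return w, mu, v

w = np.array([5.0, -3.0])
mu, v = np.zeros(2, dtype=complex), np.zeros(2)
for j in range(1, 3001):
    g = w                                    # gradient of 0.5 * ||w||^2
    w, mu, v = complex_adam_step(w, mu, v, g, j)
print(np.linalg.norm(w))  # settles near 0, up to Adam's alpha-scale dither
```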
[32] reported a single inception score (IS) on CIFAR-10 of $9.22$, but the best we could reproduce over the seeds with the provided PyTorch code and settings was $9.10$. Complex momentum improves the best IS found, with $9.25$ (+.15 over the author-provided code, +.03 over the reported value). We trained a real momentum $|\beta_{1}|\\!=\\!0.8$ to see if the improvement was solely from tuning the momentum magnitude. This occasionally failed to train and decreased the best IS over re-runs, showing we benefit from a non-zero $\arg(\beta_{1})$.

### 4.5 A Practical Initial Guess for Optimizer Parameter $\arg(\beta)$

Here, we propose a practical initial guess for our new hyperparameter $\arg(\beta)$. Corollary 1 shows we can use almost-real momentum coefficients (i.e., $\arg(\beta)$ close to $0$). Figure 5 shows almost-positive $\beta$ approach acceleration in cooperative eigenspaces, while converging in all eigenspaces. Figure 7 shows GANs can have both cooperative and adversarial eigenspaces. Figures 10 and C.4 do a grid search over $\arg(\beta)$ for GANs, finding that almost-positive $\arg(\beta)\approx\nicefrac{{\pi}}{{8}}$ works in both cases. Also, by changing $\arg(\beta)$ only from $0$ to a small $\epsilon$, we need only minimally change the other hyperparameters in our model, which is useful for adapting existing, brittle setups like GANs. Based on this, we propose an initial guess of $\arg(\beta)=\epsilon$ for a small $\epsilon>0$, where $\epsilon=\nicefrac{{\pi}}{{8}}$ worked in our GAN experiments.

[Figure 7 axes: phase of eigenvalue $\arg(\lambda)$ vs. log-magnitude of eigenvalue $\log(|\lambda|)$, for the spectrum of the Jacobian of the joint-gradient $\operatorname*{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}^{j})$ of a GAN; legend: does the eigenvector point at a player? (disc. / unsure / gen.)]

Figure 7: A log-polar coordinate visualization reveals structure in the spectrum for a GAN at the end of training on a $2$D mixture of Gaussians with a $1$-layer (disc)riminator and (gen)erator, so the joint-parameters $\boldsymbol{\omega}\\!\in\\!\mathbb{R}^{723}$. It is difficult to see structure by graphing the Cartesian (i.e., $\Re$ and $\Im$) parts of eigenvalues, because they span orders of magnitude while being positive and negative. Appendix Figure 9 shows the spectrum through training. There is a mixture of many cooperative (i.e., real, or $\arg(\lambda)\\!\approx\\!0,\pm\pi$) and some adversarial (i.e., imaginary, or $\arg(\lambda)\\!\approx\\!\pm\frac{\pi}{2}$) eigenvalues, so – contrary to what the name may suggest – generative adversarial networks are not purely adversarial. We may benefit from optimizers leveraging this structure, like complex momentum. Eigenvalues are colored if the associated eigenvector is mostly in one player's part of the joint-parameter space – see Appendix Figure 9 for details. Many eigenvectors lie mostly in the space of (or point at) a single player. The structure of the set of eigenvalues for the disc. (green) is different from that of the gen. (red), but further investigation of this is an open problem. Notably, this may motivate separate optimizer choices for each player, as in Section 4.4.

## 5 Related Work

Accelerated first-order methods: A broad body of work exists using momentum-type methods [27, 28, 55, 56], with a recent focus on deep learning [29, 57, 58, 59, 60].
But, these works focus on momentum for minimization as opposed to in games. Learning in games: Various works approximate response-gradients - some by differentiating through optimization [61, 34, 62]. Multiple works try to leverage game eigenstructure during optimization [63, 64, 65, 39, 66, 67]. First-order methods in games: In some games, we can get away with using only first-order methods – Zhang et al. [68, 69], Ibrahim et al. [70], Bailey et al. [71], Jin et al. [72], Azizian et al. [47], Nouiehed et al. [73], Zhang et al. [74] discuss when and how these methods work. Gidel et al. [25] is the closest work to ours, showing a negative momentum can help in some games. Zhang and Wang [30] note the suboptimality of negative momentum in a class of games. Azizian et al. [75], Domingo-Enrich et al. [76] investigate acceleration in some games. Bilinear zero-sum games: Zhang and Yu [77] study the convergence of gradient methods in bilinear zero-sum games. Their analysis extends Gidel et al. [25], showing that we can achieve faster convergence by having separate step sizes and momentum for each player or tuning the extragradient step size. Loizou et al. [78] provide convergence guarantees for games satisfying a _sufficiently bilinear_ condition. Learning in GANs: Various works try to make GAN training easier with methods leveraging the game structure [79, 80, 81, 53, 82]. Metz et al. [83] approximate the discriminator’s response function by differentiating through optimization. Mescheder et al. [34] find solutions by minimizing the norm of the players’ updates. Both of these methods and various others [84, 85, 86] require higher-order information. Daskalakis et al. [52], Gidel et al. [87], Chavdarova et al. [88] look at first-order methods. Mescheder et al. [42] explore problems for GAN training convergence and Berard et al. [22] show that GANs have significant rotations affecting learning. ## 6 Conclusion In this paper we provided a generalization of existing momentum methods for learning in differentiable games by allowing a complex-valued momentum with real-valued updates. We showed that our method robustly converges in games with a different range of mixtures of cooperative and adversarial eigenspaces than current first-order methods. We also presented a practical generalization of our method to the Adam optimizer, which we used to improve BigGAN training. More generally, we highlight and lay groundwork for investigating optimizers which work well with various mixtures of cooperative and competitive dynamics in games. ### Societal Impact Our main contribution in this work is methodological – specifically, a scalable algorithm for optimizing in games. Since our focus is on improving optimization methods, we do not expect there to be direct negative societal impacts from this contribution. ### Acknowledgements Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute. Paul Vicol was supported by an NSERC PGS-D Scholarship. We thank Guodong Zhang, Guojun Zhang, James Lucas, Romina Abachi, Jonah Phillion, Will Grathwohl, Jakob Foerster, Murat Erdogdu, Ken Jackson, and Ioannis Mitliagkis for feedback and helpful discussion. ## References * Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. 
In _Advances in Neural Information Processing Systems_ , pages 2672–2680, 2014. * Pfau and Vinyals [2016] David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. _arXiv preprint arXiv:1610.01945_ , 2016. * Baker et al. [2019] Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent tool use from multi-agent autocurricula. In _International Conference on Learning Representations_ , 2019. * Balduzzi et al. [2019] David Balduzzi, Marta Garnelo, Yoram Bachrach, Wojciech Czarnecki, Julien Perolat, Max Jaderberg, and Thore Graepel. Open-ended learning in symmetric zero-sum games. In _International Conference on Machine Learning_ , pages 434–443. PMLR, 2019. * Sukhbaatar et al. [2018] Sainbayar Sukhbaatar, Zeming Lin, Ilya Kostrikov, Gabriel Synnaeve, Arthur Szlam, and Rob Fergus. Intrinsic motivation and automatic curricula via asymmetric self-play. In _International Conference on Learning Representations_ , 2018. * Lorraine and Duvenaud [2018] Jonathan Lorraine and David Duvenaud. Stochastic hyperparameter optimization through hypernetworks. _arXiv preprint arXiv:1802.09419_ , 2018. * Lorraine et al. [2020] Jonathan Lorraine, Paul Vicol, and David Duvenaud. Optimizing millions of hyperparameters by implicit differentiation. In _International Conference on Artificial Intelligence and Statistics_ , pages 1540–1552. PMLR, 2020. * MacKay et al. [2019] Matthew MacKay, Paul Vicol, Jon Lorraine, David Duvenaud, and Roger Grosse. Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions. In _International Conference on Learning Representations (ICLR)_ , 2019. * Raghu et al. [2020] Aniruddh Raghu, Maithra Raghu, Simon Kornblith, David Duvenaud, and Geoffrey Hinton. Teaching with commentaries. _arXiv preprint arXiv:2011.03037_ , 2020. * Bose et al. [2020] Avishek Joey Bose, Gauthier Gidel, Hugo Berrard, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, and William L Hamilton. Adversarial example games. _arXiv preprint arXiv:2007.00720_ , 2020. * Yuan et al. [2019] Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. _IEEE Transactions on Neural Networks and Learning Systems_ , 30(9):2805–2824, 2019. * Rajeswaran et al. [2020] Aravind Rajeswaran, Igor Mordatch, and Vikash Kumar. A game theoretic framework for model based reinforcement learning. _arXiv preprint arXiv:2004.07804_ , 2020. * Abachi et al. [2020] Romina Abachi, Mohammad Ghavamzadeh, and Amir-massoud Farahmand. Policy-aware model learning for policy gradient methods. _arXiv preprint arXiv:2003.00030_ , 2020. * Bacon et al. [2019] Pierre-Luc Bacon, Florian Schäfer, Clement Gehring, Animashree Anandkumar, and Emma Brunskill. A Lagrangian method for inverse problems in reinforcement learning. _lis.csail.mit.edu/pubs_ , 2019. * Acuna et al. [2021] David Acuna, Guojun Zhang, Marc T Law, and Sanja Fidler. f-domain-adversarial learning: Theory and algorithms for unsupervised domain adaptation with neural networks, 2021. URL https://openreview.net/forum?id=WqXAKcwfZtI. * Grathwohl et al. [2018] Will Grathwohl, Elliot Creager, Seyed Kamyar Seyed Ghasemipour, and Richard Zemel. Gradient-based optimization of neural network architecture. 2018\. * Adam and Lorraine [2019] George Adam and Jonathan Lorraine. Understanding neural architecture search techniques. _arXiv preprint arXiv:1904.00438_ , 2019. * Ren et al. 
[2018] Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B Tenenbaum, Hugo Larochelle, and Richard S Zemel. Meta-learning for semi-supervised few-shot classification. _arXiv preprint arXiv:1803.00676_ , 2018. * Ren et al. [2020] Mengye Ren, Eleni Triantafillou, Kuan-Chieh Wang, James Lucas, Jake Snell, Xaq Pitkow, Andreas S Tolias, and Richard Zemel. Flexible few-shot learning with contextual similarity. _arXiv preprint arXiv:2012.05895_ , 2020. * Morgenstern and Von Neumann [1953] Oskar Morgenstern and John Von Neumann. _Theory of Games and Economic Behavior_. Princeton University Press, 1953. * Von Stackelberg [2010] Heinrich Von Stackelberg. _Market Structure and Equilibrium_. Springer Science & Business Media, 2010. * Berard et al. [2019] Hugo Berard, Gauthier Gidel, Amjad Almahairi, Pascal Vincent, and Simon Lacoste-Julien. A closer look at the optimization landscapes of generative adversarial networks. In _International Conference on Learning Representations_ , 2019. * Arrow et al. [1958] Kenneth Joseph Arrow, Hirofumi Azawa, Leonid Hurwicz, and Hirofumi Uzawa. _Studies in Linear and Non-Linear Programming_ , volume 2. Stanford University Press, 1958. * Freund and Schapire [1999] Yoav Freund and Robert E Schapire. Adaptive game playing using multiplicative weights. _Games and Economic Behavior_ , 29(1-2):79–103, 1999. * Gidel et al. [2019] Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Rémi Le Priol, Gabriel Huang, Simon Lacoste-Julien, and Ioannis Mitliagkas. Negative momentum for improved game dynamics. In _The 22nd International Conference on Artificial Intelligence and Statistics_ , pages 1802–1811. PMLR, 2019. * Lucas et al. [2018] James Lucas, Shengyang Sun, Richard Zemel, and Roger Grosse. Aggregated momentum: Stability through passive damping. In _International Conference on Learning Representations_ , 2018. * Polyak [1964] Boris T Polyak. Some methods of speeding up the convergence of iteration methods. _USSR Computational Mathematics and Mathematical Physics_ , 4(5):1–17, 1964. * Nesterov [1983] Yurii E Nesterov. A method for solving the convex programming problem with convergence rate o (1/k^ 2). In _Dokl. Akad. Nauk SSSR_ , volume 269, pages 543–547, 1983. * Sutskever et al. [2013] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In _International Conference on Machine Learning_ , pages 1139–1147, 2013. * Zhang and Wang [2020] Guodong Zhang and Yuanhao Wang. On the suboptimality of negative momentum for minimax optimization. _arXiv preprint arXiv:2008.07459_ , 2020. * Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014. * Brock et al. [2018] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. In _International Conference on Learning Representations_ , 2018. * Osawa et al. [2019] Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Rio Yokota, and Satoshi Matsuoka. Large-scale distributed second-order optimization using kronecker-factored approximate curvature for deep convolutional neural networks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 12359–12367, 2019. * Mescheder et al. [2017] Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. The numerics of GANs. In _Advances in Neural Information Processing Systems_ , pages 1825–1835, 2017. 
* Schäfer and Anandkumar [2019] Florian Schäfer and Anima Anandkumar. Competitive gradient descent. In _Advances in Neural Information Processing Systems_ , pages 7623–7633, 2019. * Wang et al. [2019] Yuanhao Wang, Guodong Zhang, and Jimmy Ba. On solving minimax optimization locally: A follow-the-ridge approach. In _International Conference on Learning Representations_ , 2019. * Hemmat et al. [2020] Reyhane Askari Hemmat, Amartya Mitra, Guillaume Lajoie, and Ioannis Mitliagkas. Lead: Least-action dynamics for min-max optimization. _arXiv preprint arXiv:2010.13846_ , 2020. * Schäfer et al. [2020] Florian Schäfer, Anima Anandkumar, and Houman Owhadi. Competitive mirror descent. _arXiv preprint arXiv:2006.10179_ , 2020. * Czarnecki et al. [2020] Wojciech Marian Czarnecki, Gauthier Gidel, Brendan Tracey, Karl Tuyls, Shayegan Omidshafiei, David Balduzzi, and Max Jaderberg. Real world games look like spinning tops. _arXiv preprint arXiv:2004.09468_ , 2020. * Zhang et al. [2020a] Guojun Zhang, Kaiwen Wu, Pascal Poupart, and Yaoliang Yu. Newton-type methods for minimax optimization. _arXiv preprint arXiv:2006.14592_ , 2020a. * Korpelevich [1976] GM Korpelevich. The extragradient method for finding saddle points and other problems. _Matecon_ , 12:747–756, 1976. * Mescheder et al. [2018] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? In _International Conference on Machine learning (ICML)_ , pages 3481–3490. PMLR, 2018. * Harris et al. [2020] Charles R Harris, K Jarrod Millman, Stéfan J van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J Smith, et al. Array programming with numpy. _Nature_ , 585(7825):357–362, 2020. * Bertsekas [2008] D Bertsekas. _Nonlinear Programming_. Athena Scientific, 2008. * O’donoghue and Candes [2015] Brendan O’donoghue and Emmanuel Candes. Adaptive restart for accelerated gradient schemes. _Foundations of computational mathematics_ , 15(3):715–732, 2015. * Goh [2017] Gabriel Goh. Why momentum really works. _Distill_ , 2(4):e6, 2017. * Azizian et al. [2020a] Waïss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. A tight and unified analysis of gradient-based methods for a whole spectrum of differentiable games. In _International Conference on Artificial Intelligence and Statistics_ , pages 2863–2873. PMLR, 2020a. * Bradbury et al. [2018] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, and Skye Wanderman-Milne. JAX: composable transformations of Python+NumPy programs, 2018\. URL http://github.com/google/jax. * Paszke et al. [2017] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. _Openreview_ , 2017. * Chiang et al. [2012] Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In _Conference on Learning Theory_ , pages 6–1. JMLR Workshop and Conference Proceedings, 2012. * Rakhlin and Sridharan [2013] Alexander Rakhlin and Karthik Sridharan. Optimization, learning, and games with predictable sequences. In _Proceedings of the 26th International Conference on Neural Information Processing Systems-Volume 2_ , pages 3066–3074, 2013. * Daskalakis et al. [2018] Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, and Haoyang Zeng. 
Training GANs with optimism. In _International Conference on Learning Representations (ICLR 2018)_, 2018. * Wu et al. [2019] Yan Wu, Jeff Donahue, David Balduzzi, Karen Simonyan, and Timothy Lillicrap. LOGAN: Latent optimisation for generative adversarial networks. _arXiv preprint arXiv:1912.00953_, 2019. * Krizhevsky [2009] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. * Nesterov [2013] Yurii Nesterov. _Introductory lectures on convex optimization: A basic course_, volume 87. Springer Science & Business Media, 2013. * Maddison et al. [2018] Chris J Maddison, Daniel Paulin, Yee Whye Teh, Brendan O’Donoghue, and Arnaud Doucet. Hamiltonian descent methods. _arXiv preprint arXiv:1809.05042_, 2018. * Zhang and Mitliagkas [2017] Jian Zhang and Ioannis Mitliagkas. YellowFin and the art of momentum tuning. _arXiv preprint arXiv:1706.03471_, 2017. * Choi et al. [2019] Dami Choi, Christopher J Shallue, Zachary Nado, Jaehoon Lee, Chris J Maddison, and George E Dahl. On empirical comparisons of optimizers for deep learning. _arXiv preprint arXiv:1910.05446_, 2019. * Zhang et al. [2019] Michael R Zhang, James Lucas, Geoffrey Hinton, and Jimmy Ba. Lookahead optimizer: k steps forward, 1 step back. _arXiv preprint arXiv:1907.08610_, 2019. * Chen et al. [2020] Ricky TQ Chen, Dami Choi, Lukas Balles, David Duvenaud, and Philipp Hennig. Self-tuning stochastic optimization with curvature-aware gradient filtering. _arXiv preprint arXiv:2011.04803_, 2020. * Foerster et al. [2018] Jakob Foerster, Richard Y Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and Igor Mordatch. Learning with opponent-learning awareness. In _International Conference on Autonomous Agents and MultiAgent Systems_, pages 122–130, 2018. * Maclaurin et al. [2015] Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In _International Conference on Machine Learning_, pages 2113–2122, 2015. * Letcher et al. [2019] Alistair Letcher, David Balduzzi, Sébastien Racaniere, James Martens, Jakob N Foerster, Karl Tuyls, and Thore Graepel. Differentiable game mechanics. _Journal of Machine Learning Research_, 20(84):1–40, 2019. * Nagarajan et al. [2020] Sai Ganesh Nagarajan, David Balduzzi, and Georgios Piliouras. From chaos to order: Symmetry and conservation laws in game dynamics. In _International Conference on Machine Learning_, pages 7186–7196. PMLR, 2020. * Omidshafiei et al. [2020] Shayegan Omidshafiei, Karl Tuyls, Wojciech M Czarnecki, Francisco C Santos, Mark Rowland, Jerome Connor, Daniel Hennes, Paul Muller, Julien Pérolat, Bart De Vylder, et al. Navigating the landscape of multiplayer games. _Nature Communications_, 11(1):1–17, 2020. * Gidel et al. [2020] Gauthier Gidel, David Balduzzi, Wojciech Marian Czarnecki, Marta Garnelo, and Yoram Bachrach. Minimax theorem for latent games or: How I learned to stop worrying about mixed-Nash and love neural nets. _arXiv preprint arXiv:2002.05820_, 2020. * Perolat et al. [2020] Julien Perolat, Remi Munos, Jean-Baptiste Lespiau, Shayegan Omidshafiei, Mark Rowland, Pedro Ortega, Neil Burch, Thomas Anthony, David Balduzzi, Bart De Vylder, et al. From Poincaré recurrence to convergence in imperfect information games: Finding equilibrium via regularization. _arXiv preprint arXiv:2002.08456_, 2020. * Zhang et al. [2021] Guodong Zhang, Yuanhao Wang, Laurent Lessard, and Roger Grosse.
Don’t fix what ain’t broke: Near-optimal local convergence of alternating gradient descent-ascent for minimax optimization. _arXiv preprint arXiv:2102.09468_ , 2021. * Zhang et al. [2020b] Guodong Zhang, Xuchao Bao, Laurent Lessard, and Roger Grosse. A unified analysis of first-order methods for smooth games via integral quadratic constraints. _arXiv preprint arXiv:2009.11359_ , 2020b. * Ibrahim et al. [2020] Adam Ibrahim, Waıss Azizian, Gauthier Gidel, and Ioannis Mitliagkas. Linear lower bounds and conditioning of differentiable games. In _International Conference on Machine Learning_ , pages 4583–4593. PMLR, 2020. * Bailey et al. [2020] James P Bailey, Gauthier Gidel, and Georgios Piliouras. Finite regret and cycles with fixed step-size via alternating gradient descent-ascent. In _Conference on Learning Theory_ , pages 391–407. PMLR, 2020. * Jin et al. [2020] Chi Jin, Praneeth Netrapalli, and Michael Jordan. What is local optimality in nonconvex-nonconcave minimax optimization? In _International Conference on Machine Learning_ , pages 4880–4889. PMLR, 2020. * Nouiehed et al. [2019] Maher Nouiehed, Maziar Sanjabi, Tianjian Huang, Jason D Lee, and Meisam Razaviyayn. Solving a class of non-convex min-max games using iterative first order methods. _Advances in Neural Information Processing Systems_ , 32:14934–14942, 2019. * Zhang et al. [2020c] Guojun Zhang, Pascal Poupart, and Yaoliang Yu. Optimality and stability in non-convex smooth games. _arXiv e-prints_ , pages arXiv–2002, 2020c. * Azizian et al. [2020b] Waïss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. Accelerating smooth games by manipulating spectral shapes. _arXiv preprint arXiv:2001.00602_ , 2020b. * Domingo-Enrich et al. [2020] Carles Domingo-Enrich, Fabian Pedregosa, and Damien Scieur. Average-case acceleration for bilinear games and normal matrices. _arXiv preprint arXiv:2010.02076_ , 2020. * Zhang and Yu [2019] Guojun Zhang and Yaoliang Yu. Convergence of gradient methods on bilinear zero-sum games. In _International Conference on Learning Representations_ , 2019. * Loizou et al. [2020] Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, and Ioannis Mitliagkas. Stochastic hamiltonian gradient methods for smooth games. In _International Conference on Machine Learning_ , pages 6370–6381. PMLR, 2020. * Liu et al. [2020] Mingrui Liu, Youssef Mroueh, Jerret Ross, Wei Zhang, Xiaodong Cui, Payel Das, and Tianbao Yang. Towards better understanding of adaptive gradient algorithms in generative adversarial nets. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=SJxIm0VtwH. * Peng et al. [2020] Wei Peng, Yu-Hong Dai, Hui Zhang, and Lizhi Cheng. Training gans with centripetal acceleration. _Optimization Methods and Software_ , 35(5):955–973, 2020. * Albuquerque et al. [2019] Isabela Albuquerque, João Monteiro, Thang Doan, Breandan Considine, Tiago Falk, and Ioannis Mitliagkas. Multi-objective training of generative adversarial networks with multiple discriminators. In _International Conference on Machine Learning_ , pages 202–211. PMLR, 2019. * Hsieh et al. [2019] Ya-Ping Hsieh, Chen Liu, and Volkan Cevher. Finding mixed nash equilibria of generative adversarial networks. In _International Conference on Machine Learning_ , pages 2810–2819. PMLR, 2019. * Metz et al. [2016] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. 
_arXiv preprint arXiv:1611.02163_, 2016. * Qin et al. [2020] Chongli Qin, Yan Wu, Jost Tobias Springenberg, Andrew Brock, Jeff Donahue, Timothy P Lillicrap, and Pushmeet Kohli. Training generative adversarial networks by solving ordinary differential equations. _arXiv preprint arXiv:2010.15040_, 2020. * Schäfer et al. [2019] Florian Schäfer, Hongkai Zheng, and Anima Anandkumar. Implicit competitive regularization in GANs. _arXiv preprint arXiv:1910.05852_, 2019. * Jolicoeur-Martineau and Mitliagkas [2019] Alexia Jolicoeur-Martineau and Ioannis Mitliagkas. Connections between support vector machines, Wasserstein distance and gradient-penalty GANs. _arXiv preprint arXiv:1910.06922_, 2019. * Gidel et al. [2018] Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent, and Simon Lacoste-Julien. A variational inequality perspective on generative adversarial networks. In _International Conference on Learning Representations_, 2018. * Chavdarova et al. [2019] Tatjana Chavdarova, Gauthier Gidel, Francois Fleuret, and Simon Lacoste-Julien. Reducing noise in GAN training with variance reduced extragradient. In _Proceedings of the International Conference on Neural Information Processing Systems_, 2019. * Salimans et al. [2016] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In _Advances in Neural Information Processing Systems_, pages 2234–2242, 2016. * Foucart [2012] Simon Foucart. Matrix norm and spectral radius. https://www.math.drexel.edu/~foucart/TeachingFiles/F12/M504Lect6.pdf, 2012. Accessed: 2020-05-21. * Boyd et al. [2004] Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. _Convex Optimization_. Cambridge University Press, 2004. * Hahnloser et al. [2000] Richard HR Hahnloser, Rahul Sarpeshkar, Misha A Mahowald, Rodney J Douglas, and H Sebastian Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. _Nature_, 405(6789):947–951, 2000.

Appendix: Complex Momentum for Optimization in Games

Table 2: Notation

SGD | Stochastic Gradient Descent
---|---
CM | Complex Momentum
SGDm, SimSGDm, … | …with momentum
SimSGD, SimCM | Simultaneous …
AltSGD, AltCM | Alternating …
GAN | Generative Adversarial Network [1]
EG | Extragradient [41]
OG | Optimistic Gradient [52]
IS | Inception Score [89]
$\vcentcolon=$ | Defined to be equal to
$x,y,z,\dots\in\mathbb{C}$ | Scalars
$\boldsymbol{x},\boldsymbol{y},\boldsymbol{z},\dots\in\mathbb{C}^{n}$ | Vectors
$\boldsymbol{X},\boldsymbol{Y},\boldsymbol{Z},\dots\in\mathbb{C}^{n\times n}$ | Matrices
$\boldsymbol{X}^{\top}$ | The transpose of matrix $\boldsymbol{X}$
$\boldsymbol{I}$ | The identity matrix
$\Re(z),\Im(z)$ | The real or imaginary component of $z\in\mathbb{C}$
$i$ | The imaginary unit; $z\in\mathbb{C}\implies z=\Re(z)+i\Im(z)$
$\bar{z}$ | The complex conjugate of $z\in\mathbb{C}$
$|z|\vcentcolon=\sqrt{z\bar{z}}$ | The magnitude or modulus of $z\in\mathbb{C}$
$\arg(z)$ | The argument or phase of $z\in\mathbb{C}\implies z=|z|\exp(i\arg(z))$
$z\in\mathbb{C}$ is _almost-positive_ | $\arg(z)=\epsilon$ for small $\epsilon$
$A,B$ | Symbols for the outer/inner players, respectively
$d_{A},d_{B}\in\mathbb{N}$ | The number of weights for the outer/inner players
$\boldsymbol{\theta}$ | A symbol for the parameters or weights of a player
$\boldsymbol{\theta}_{A}\in\mathbb{R}^{d_{A}},\boldsymbol{\theta}_{B}\in\mathbb{R}^{d_{B}}$ | The outer/inner parameters or weights
$\mathcal{L}:\mathbb{R}^{n}\to\mathbb{R}$ | A symbol for a loss
$\mathcal{L}_{A}(\boldsymbol{\theta}_{A},\boldsymbol{\theta}_{B}),\mathcal{L}_{B}(\boldsymbol{\theta}_{A},\boldsymbol{\theta}_{B})$ | The outer/inner losses – $\mathbb{R}^{d_{A}+d_{B}}\mapsto\mathbb{R}$
$\boldsymbol{g}_{A}(\boldsymbol{\theta}_{A},\boldsymbol{\theta}_{B}),\boldsymbol{g}_{B}(\boldsymbol{\theta}_{A},\boldsymbol{\theta}_{B})$ | Gradients of the outer/inner losses w.r.t. their weights, in $\mathbb{R}^{d_{A}/d_{B}}$
$\boldsymbol{\theta}_{B}^{*}(\boldsymbol{\theta}_{A})\vcentcolon=\operatorname{arg\,min}_{\boldsymbol{\theta}_{B}}\mathcal{L}_{B}(\boldsymbol{\theta}_{A},\boldsymbol{\theta}_{B})$ | The best-response of the inner player to the outer player
$\mathcal{L}_{A}^{*}(\boldsymbol{\theta}_{A})\vcentcolon=\mathcal{L}_{A}(\boldsymbol{\theta}_{A},\boldsymbol{\theta}_{B}^{*}(\boldsymbol{\theta}_{A}))$ | The outer loss with a best-responding inner player
$\boldsymbol{\theta}_{A}^{*}\vcentcolon=\operatorname{arg\,min}_{\boldsymbol{\theta}_{A}}\mathcal{L}_{A}^{*}(\boldsymbol{\theta}_{A})$ | Outer optimal weights with a best-responding inner player
$d\vcentcolon=d_{A}+d_{B}$ | The combined number of weights for both players
$\boldsymbol{\omega}\vcentcolon=[\boldsymbol{\theta}_{A},\boldsymbol{\theta}_{B}]\in\mathbb{R}^{d}$ | A concatenation of the outer/inner weights
$\hat{\boldsymbol{g}}(\boldsymbol{\omega})\vcentcolon=[\boldsymbol{g}_{A}(\boldsymbol{\omega}),\boldsymbol{g}_{B}(\boldsymbol{\omega})]\in\mathbb{R}^{d}$ | A concatenation of the outer/inner gradients
$\boldsymbol{\omega}^{0}=[\boldsymbol{\theta}_{A}^{0},\boldsymbol{\theta}_{B}^{0}]\in\mathbb{R}^{d}$ | The initial parameter values
$j$ | An iteration number
$\hat{\boldsymbol{g}}^{j}\vcentcolon=\hat{\boldsymbol{g}}(\boldsymbol{\omega}^{j})\in\mathbb{R}^{d}$ | The joint-gradient vector field at weights $\boldsymbol{\omega}^{j}$
$\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}^{j}\vcentcolon=\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}|_{\boldsymbol{\omega}^{j}}\in\mathbb{R}^{d\times d}$ | The Jacobian of the joint-gradient $\hat{\boldsymbol{g}}$ at weights $\boldsymbol{\omega}^{j}$
$\alpha\in\mathbb{C}$ | The step size or learning rate
$\beta\in\mathbb{C}$ | The momentum coefficient
$\beta_{1}\in\mathbb{C}$ | The first momentum parameter for Adam
$\boldsymbol{\mu}\in\mathbb{C}^{d}$ | The momentum buffer
$\lambda\in\mathbb{C}$ | Notation for an arbitrary eigenvalue
$\operatorname{Sp}(\boldsymbol{M})\in\mathbb{C}^{n}$ | The spectrum – or set of eigenvalues – of $\boldsymbol{M}\in\mathbb{R}^{n\times n}$
_Purely adversarial/cooperative_ game | $\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$ is purely imaginary/real
$\rho(\boldsymbol{M})\vcentcolon=\max_{z\in\operatorname{Sp}(\boldsymbol{M})}|z|$ | The spectral radius in $\mathbb{R}^{+}$ of $\boldsymbol{M}\in\mathbb{R}^{n\times n}$
$\boldsymbol{F}_{\alpha,\beta}([\boldsymbol{\mu},\boldsymbol{\omega}])$ | Fixed point op. for CM, or augmented learning dynamics
$\boldsymbol{R}\vcentcolon=\nabla_{[\boldsymbol{\mu},\boldsymbol{\omega}]}\boldsymbol{F}_{\alpha,\beta}\in\mathbb{R}^{3d\times 3d}$ | Jacobian of the augmented learning dynamics in Corollary 1
$\alpha^{*},\beta^{*}\vcentcolon=\operatorname{arg\,min}_{\alpha,\beta}\rho(\boldsymbol{R}(\alpha,\beta))$ | The optimal step size and momentum coefficient
$\rho^{*}\vcentcolon=\rho(\boldsymbol{R}(\alpha^{*},\beta^{*}))$ | The optimal spectral radius or convergence rate
$\kappa\vcentcolon=\frac{\max\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\boldsymbol{g})}{\min\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\boldsymbol{g})}$ | Condition number, for convex single-objective optimization
$\sigma_{\min}^{2}(\boldsymbol{M})\vcentcolon=\min\operatorname{Sp}(\boldsymbol{M}^{\top}\boldsymbol{M})$ | The minimum singular value of a matrix $\boldsymbol{M}$

## Appendix A Supporting Results

First, some basic results about complex numbers that are used:

$z=\Re(z)+i\Im(z)=|z|\exp(i\arg(z))$ (17)

$\bar{z}=\Re(z)-i\Im(z)=|z|\exp(-i\arg(z))$ (18)

$\exp(iz)+\exp(-iz)=2\cos(z)$ (19)

$\overline{z_{1}z_{2}}=\bar{z}_{1}\bar{z}_{2}$ (20)

$\tfrac{1}{2}(z+\bar{z})=\Re(z)$ (21)

$\Re(z_{1}z_{2})=\Re(z_{1})\Re(z_{2})-\Im(z_{1})\Im(z_{2})$ (22)

$z_{1}+z_{2}=(\Re(z_{1})+\Re(z_{2}))+i(\Im(z_{1})+\Im(z_{2}))$ (23)

$z_{1}z_{2}=(\Re(z_{1})\Re(z_{2})-\Im(z_{1})\Im(z_{2}))+i(\Im(z_{1})\Re(z_{2})+\Re(z_{1})\Im(z_{2}))$ (24)

$z_{1}z_{2}=|z_{1}||z_{2}|\exp(i(\arg(z_{1})+\arg(z_{2})))$ (25)

$z^{k}=|z|^{k}\exp(ik\arg(z))=|z|^{k}(\cos(k\arg(z))+i\sin(k\arg(z)))$ (26)

This Lemma shows how we expand the complex-valued momentum buffer $\boldsymbol{\mu}$ into its Cartesian components as in (9).

###### Lemma 1.

$\boldsymbol{\mu}^{j+1}=\beta\boldsymbol{\mu}^{j}-\hat{\boldsymbol{g}}^{j}\iff\Re(\boldsymbol{\mu}^{j+1})=\Re(\beta)\Re(\boldsymbol{\mu}^{j})-\Im(\beta)\Im(\boldsymbol{\mu}^{j})-\Re(\hat{\boldsymbol{g}}^{j}),\quad\Im(\boldsymbol{\mu}^{j+1})=\Im(\beta)\Re(\boldsymbol{\mu}^{j})+\Re(\beta)\Im(\boldsymbol{\mu}^{j})-\Im(\hat{\boldsymbol{g}}^{j})$

###### Proof.
$\boldsymbol{\mu}^{j+1}=\beta\boldsymbol{\mu}^{j}-\hat{\boldsymbol{g}}^{j}$

$\iff\boldsymbol{\mu}^{j+1}=\left(\Re(\beta)+i\Im(\beta)\right)\left(\Re(\boldsymbol{\mu}^{j})+i\Im(\boldsymbol{\mu}^{j})\right)-\left(\Re(\hat{\boldsymbol{g}}^{j})+i\Im(\hat{\boldsymbol{g}}^{j})\right)$

$\iff\boldsymbol{\mu}^{j+1}=\left(\Re(\beta)\Re(\boldsymbol{\mu}^{j})-\Im(\beta)\Im(\boldsymbol{\mu}^{j})-\Re(\hat{\boldsymbol{g}}^{j})\right)+i\left(\Im(\beta)\Re(\boldsymbol{\mu}^{j})+\Re(\beta)\Im(\boldsymbol{\mu}^{j})-\Im(\hat{\boldsymbol{g}}^{j})\right)$

$\iff\Re(\boldsymbol{\mu}^{j+1})=\Re(\beta)\Re(\boldsymbol{\mu}^{j})-\Im(\beta)\Im(\boldsymbol{\mu}^{j})-\Re(\hat{\boldsymbol{g}}^{j}),\quad\Im(\boldsymbol{\mu}^{j+1})=\Im(\beta)\Re(\boldsymbol{\mu}^{j})+\Re(\beta)\Im(\boldsymbol{\mu}^{j})-\Im(\hat{\boldsymbol{g}}^{j})$ ∎

We further assume $\Im(\hat{\boldsymbol{g}}^{j})=0$ – i.e., our gradients are real-valued. This Lemma shows how we can decompose the joint-parameters $\boldsymbol{\omega}$ at the next iterate as a linear combination of the joint-parameters, joint-gradient, and Cartesian components of the momentum buffer at the current iterate, as in (10).

###### Lemma 2.

$\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}+\Re(\alpha\boldsymbol{\mu}^{j+1})\iff\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}-\Re(\alpha)\hat{\boldsymbol{g}}^{j}+\Re(\alpha\beta)\Re(\boldsymbol{\mu}^{j})-\Im(\alpha\beta)\Im(\boldsymbol{\mu}^{j})$

###### Proof.

$\Re(\alpha\boldsymbol{\mu}^{j+1})=\Re(\alpha)\Re(\boldsymbol{\mu}^{j+1})-\Im(\alpha)\Im(\boldsymbol{\mu}^{j+1})$

$=\Re(\alpha)\left(\Re(\beta)\Re(\boldsymbol{\mu}^{j})-\Im(\beta)\Im(\boldsymbol{\mu}^{j})-\hat{\boldsymbol{g}}^{j}\right)-\Im(\alpha)\left(\Im(\beta)\Re(\boldsymbol{\mu}^{j})+\Re(\beta)\Im(\boldsymbol{\mu}^{j})\right)$

$=-\Re(\alpha)\hat{\boldsymbol{g}}^{j}+\left(\Re(\alpha)\Re(\beta)-\Im(\alpha)\Im(\beta)\right)\Re(\boldsymbol{\mu}^{j})-\left(\Re(\alpha)\Im(\beta)+\Im(\alpha)\Re(\beta)\right)\Im(\boldsymbol{\mu}^{j})$

$=-\Re(\alpha)\hat{\boldsymbol{g}}^{j}+\Re(\alpha\beta)\Re(\boldsymbol{\mu}^{j})-\Im(\alpha\beta)\Im(\boldsymbol{\mu}^{j})$

Thus, $\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}+\Re(\alpha\boldsymbol{\mu}^{j+1})\iff\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}-\Re(\alpha)\hat{\boldsymbol{g}}^{j}+\Re(\alpha\beta)\Re(\boldsymbol{\mu}^{j})-\Im(\alpha\beta)\Im(\boldsymbol{\mu}^{j})$. ∎
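The decomposition in Lemmas 1 and 2 is easy to sanity-check numerically. The following sketch is our own illustration (parameter values are arbitrary); it verifies that the real-arithmetic update of Lemma 2 agrees with the complex one.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.1 + 0.0j, 0.9 * np.exp(1j * np.pi / 8)
mu = rng.normal(size=5) + 1j * rng.normal(size=5)  # complex momentum buffer
w, g = rng.normal(size=5), rng.normal(size=5)      # real weights and gradient

# Complex form: mu' = beta*mu - g ; w' = w + Re(alpha*mu')
mu_next = beta * mu - g
w_complex = w + np.real(alpha * mu_next)

# Real form from Lemma 2:
# w' = w - Re(alpha)*g + Re(alpha*beta)*Re(mu) - Im(alpha*beta)*Im(mu)
w_real = (w - np.real(alpha) * g
          + np.real(alpha * beta) * np.real(mu)
          - np.imag(alpha * beta) * np.imag(mu))

assert np.allclose(w_complex, w_real)  # both updates coincide
```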
### A.1 Theorem 1 Proof Sketch

See Theorem 1 for the statement.

###### Proof.

We reproduce the proof for the simpler case of quadratic games, which is a simple case of Polyak [27]’s well-known technique for analyzing the convergence of iterative methods. Bertsekas [44] generalizes this result from quadratic games to the case where we are sufficiently close to any stationary point. For quadratic games, we have $\hat{\boldsymbol{g}}^{j}=\left(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}\right)^{\top}\boldsymbol{\omega}^{j}$. By Lemma 1 and Lemma 2 we have:

$\begin{pmatrix}\Re(\boldsymbol{\mu}^{j+1})\\ \Im(\boldsymbol{\mu}^{j+1})\\ \boldsymbol{\omega}^{j+1}\end{pmatrix}=\boldsymbol{R}\begin{pmatrix}\Re(\boldsymbol{\mu}^{j})\\ \Im(\boldsymbol{\mu}^{j})\\ \boldsymbol{\omega}^{j}\end{pmatrix}$ (27)

Telescoping the recurrence gives the $j^{th}$ augmented parameters:

$\begin{pmatrix}\Re(\boldsymbol{\mu}^{j})\\ \Im(\boldsymbol{\mu}^{j})\\ \boldsymbol{\omega}^{j}\end{pmatrix}=\boldsymbol{R}^{j}\begin{pmatrix}\Re(\boldsymbol{\mu}^{0})\\ \Im(\boldsymbol{\mu}^{0})\\ \boldsymbol{\omega}^{0}\end{pmatrix}$ (28)

We compare $\boldsymbol{\mu}^{j}$ with its limit $\boldsymbol{\mu}^{*}$, which exists if $\boldsymbol{R}$ is contractive, and we do the same for $\boldsymbol{\omega}$. Because the stacked fixed point is invariant under $\boldsymbol{R}$ – and hence under $\boldsymbol{R}^{j}$ – we can subtract it from both sides:

$\begin{pmatrix}\Re(\boldsymbol{\mu}^{j})-\Re(\boldsymbol{\mu}^{*})\\ \Im(\boldsymbol{\mu}^{j})-\Im(\boldsymbol{\mu}^{*})\\ \boldsymbol{\omega}^{j}-\boldsymbol{\omega}^{*}\end{pmatrix}=\boldsymbol{R}^{j}\begin{pmatrix}\Re(\boldsymbol{\mu}^{0})-\Re(\boldsymbol{\mu}^{*})\\ \Im(\boldsymbol{\mu}^{0})-\Im(\boldsymbol{\mu}^{*})\\ \boldsymbol{\omega}^{0}-\boldsymbol{\omega}^{*}\end{pmatrix}$ (29)

Taking norms:

$\left\|\begin{pmatrix}\Re(\boldsymbol{\mu}^{j})-\Re(\boldsymbol{\mu}^{*})\\ \Im(\boldsymbol{\mu}^{j})-\Im(\boldsymbol{\mu}^{*})\\ \boldsymbol{\omega}^{j}-\boldsymbol{\omega}^{*}\end{pmatrix}\right\|_{2}=\left\|\boldsymbol{R}^{j}\begin{pmatrix}\Re(\boldsymbol{\mu}^{0})-\Re(\boldsymbol{\mu}^{*})\\ \Im(\boldsymbol{\mu}^{0})-\Im(\boldsymbol{\mu}^{*})\\ \boldsymbol{\omega}^{0}-\boldsymbol{\omega}^{*}\end{pmatrix}\right\|_{2}$ (30)

$\implies\left\|\begin{pmatrix}\Re(\boldsymbol{\mu}^{j})-\Re(\boldsymbol{\mu}^{*})\\ \Im(\boldsymbol{\mu}^{j})-\Im(\boldsymbol{\mu}^{*})\\ \boldsymbol{\omega}^{j}-\boldsymbol{\omega}^{*}\end{pmatrix}\right\|_{2}\leq\left\|\boldsymbol{R}^{j}\right\|_{2}\left\|\begin{pmatrix}\Re(\boldsymbol{\mu}^{0})-\Re(\boldsymbol{\mu}^{*})\\ \Im(\boldsymbol{\mu}^{0})-\Im(\boldsymbol{\mu}^{*})\\ \boldsymbol{\omega}^{0}-\boldsymbol{\omega}^{*}\end{pmatrix}\right\|_{2}$ (31)

By Lemma 11 of Foucart [90], for every $\epsilon>0$ there exists a matrix norm $\|\cdot\|$ such that:

$\|\boldsymbol{R}^{j}\|\leq\left(\rho\left(\boldsymbol{R}\right)+\epsilon\right)^{j}$ (32)

We also have equivalence of norms in finite-dimensional spaces.
So, for all norms $\|\cdot\|$ there exist $C\geq B>0$ such that:

$B\|\boldsymbol{R}^{j}\|\leq\|\boldsymbol{R}^{j}\|_{2}\leq C\|\boldsymbol{R}^{j}\|$ (33)

Combining (32) and (33), we have:

$\left\|\begin{pmatrix}\Re(\boldsymbol{\mu}^{j})-\Re(\boldsymbol{\mu}^{*})\\ \Im(\boldsymbol{\mu}^{j})-\Im(\boldsymbol{\mu}^{*})\\ \boldsymbol{\omega}^{j}-\boldsymbol{\omega}^{*}\end{pmatrix}\right\|_{2}\leq C\left(\rho\left(\boldsymbol{R}\right)+\epsilon\right)^{j}\left\|\begin{pmatrix}\Re(\boldsymbol{\mu}^{0})-\Re(\boldsymbol{\mu}^{*})\\ \Im(\boldsymbol{\mu}^{0})-\Im(\boldsymbol{\mu}^{*})\\ \boldsymbol{\omega}^{0}-\boldsymbol{\omega}^{*}\end{pmatrix}\right\|_{2}$ (34)

So, we have:

$\left\|\begin{pmatrix}\Re(\boldsymbol{\mu}^{j})-\Re(\boldsymbol{\mu}^{*})\\ \Im(\boldsymbol{\mu}^{j})-\Im(\boldsymbol{\mu}^{*})\\ \boldsymbol{\omega}^{j}-\boldsymbol{\omega}^{*}\end{pmatrix}\right\|_{2}=\mathcal{O}((\rho(\boldsymbol{R})+\epsilon)^{j})$ (35)

Thus, we converge linearly with a rate of $\mathcal{O}(\rho(\boldsymbol{R})+\epsilon)$. ∎

### A.2 Characterizing the Augmented Dynamics Eigenvalues

Here, we present polynomials whose roots are the eigenvalues $\operatorname{Sp}(\boldsymbol{R})$ of the Jacobian of our augmented dynamics, given the eigenvalues $\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$ of the Jacobian of the joint-gradient vector field. We use a decomposition similar to that of Gidel et al. [25]. We can expand $\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}=PTP^{-1}$, where $T$ is an upper-triangular matrix whose diagonal entries $\lambda_{i}$ are the eigenvalues of $\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}$:

$T=\begin{bmatrix}\lambda_{1}&*&\dots&*\\ 0&\ddots&&\vdots\\ \vdots&&\ddots&*\\ 0&\dots&0&\lambda_{d}\end{bmatrix}$ (36)

We then break the dynamics up into components for each eigenvalue, giving us submatrices $\boldsymbol{R}_{k}\in\mathbb{C}^{3\times 3}$:

$\boldsymbol{R}_{k}\vcentcolon=\begin{bmatrix}\Re(\beta)&-\Im(\beta)&-\lambda_{k}\\ \Im(\beta)&\Re(\beta)&0\\ \Re(\alpha\beta)&-\Im(\alpha\beta)&1-\Re(\alpha)\lambda_{k}\end{bmatrix}$ (37)

We can get the characteristic polynomial of $\boldsymbol{R}_{k}$ with the following Mathematica command, where we substitute the symbols $r+iu=\lambda_{k}$, $a=\Re(\beta)$, $b=\Im(\beta)$, $c=\Re(\alpha)$, and $d=\Im(\alpha)$:

CharacteristicPolynomial[{{a, -b, -(r + u I)}, {b, a, 0}, {a c - b d, -(b c + a d), 1 - c (r + u I)}}, x]

The command gives us the polynomial associated with eigenvalue $\lambda_{k}=r+iu$:

$p_{k}(x)=-a^{2}x+a^{2}+acrx+iacux+2ax^{2}-2ax-b^{2}x+b^{2}+bdrx+ibdux-crx^{2}-icux^{2}-x^{3}+x^{2}$ (38)

Consider the case where $\lambda_{k}$ is imaginary – i.e., $r=0$ – which is true in all purely adversarial and bilinear zero-sum games. Then (38) simplifies to:

$p_{k}(x)=-a^{2}x+a^{2}+iacux+2ax^{2}-2ax-b^{2}x+b^{2}+ibdux-icux^{2}-x^{3}+x^{2}$ (39)

Our complex $\lambda_{k}$ come in conjugate pairs, where $\lambda_{k}=u_{k}i$ and $\bar{\lambda}_{k}=-u_{k}i$. (39) has the same roots for $\lambda_{k}$ and $\bar{\lambda}_{k}$, which can be verified by writing the roots with the cubic formula. This corresponds to spiraling around the solution in either a clockwise or a counterclockwise direction. Thus, we restrict to analyzing the $\lambda_{k}$ with positive $u_{k}$, without loss of generality.
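Before simplifying further, note that each block $\boldsymbol{R}_{k}$ can also be analyzed numerically instead of via the cubic formula. The following sketch is our own illustration (the parameter values are arbitrary placeholders): it builds $\boldsymbol{R}_{k}$ from (37) for an imaginary eigenvalue and computes its spectral radius directly.

```python
import numpy as np

def augmented_block(alpha, beta, lam):
    """R_k from (37): the 3x3 block of the augmented-dynamics Jacobian
    associated with one eigenvalue lam of the joint-gradient Jacobian."""
    return np.array([
        [np.real(beta), -np.imag(beta), -lam],
        [np.imag(beta),  np.real(beta),  0.0],
        [np.real(alpha * beta), -np.imag(alpha * beta),
         1 - np.real(alpha) * lam],
    ], dtype=complex)

# A purely adversarial eigenspace: lam = i*u with u > 0.
R_k = augmented_block(alpha=0.1, beta=0.9 * np.exp(1j * np.pi / 8), lam=1j)
rho = max(abs(np.linalg.eigvals(R_k)))
print(f"spectral radius = {rho:.4f}  (< 1 means this eigenspace converges)")
```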
If we make the step size $\alpha$ real – i.e., $d=0$ – then (39) simplifies to:

$p_{k}(x)=x(-a^{2}+iacu-2a-b^{2})+a^{2}+x^{2}(2a-icu+1)+b^{2}-x^{3}$ (40)

Using a heuristic from single-objective optimization, we consider making the step size proportional to the inverse of the magnitude of eigenvalue $k$ – i.e., $\alpha_{k}=\frac{\alpha^{\prime}}{|\lambda_{k}|}=\frac{\alpha^{\prime}}{u_{k}}$. With this, (40) simplifies to:

$p_{k}(x)=x(-a^{2}+ia\alpha^{\prime}-2a-b^{2})+a^{2}+x^{2}(2a-i\alpha^{\prime}+1)+b^{2}-x^{3}$ (41)

Notably, (41) no longer depends on the components of the imaginary eigenvalue $\lambda_{k}=r+iu=0+iu$, because we selected a step size proportional to the eigenvalue’s inverse magnitude. We can simplify further with $a^{2}+b^{2}=|\beta|^{2}$:

$p_{k}(x)=x(\Re(\beta)(i\alpha^{\prime}-2)-|\beta|^{2})+x^{2}(2\Re(\beta)-i\alpha^{\prime}+1)+|\beta|^{2}-x^{3}$ (42)

We can expand this in polar form for $\beta$ by noting $\Re(\beta)=|\beta|\cos(\arg(\beta))$:

$p_{k}(x)=x(|\beta|\cos(\arg(\beta))(i\alpha^{\prime}-2)-|\beta|^{2})+x^{2}(2|\beta|\cos(\arg(\beta))-i\alpha^{\prime}+1)+|\beta|^{2}-x^{3}$ (43)

We can simplify further by considering an imaginary $\beta$ – i.e., $\Re(\beta)=0$, or $\cos(\arg(\beta))=0$:

$p_{k}(x)=|\beta|^{2}-x|\beta|^{2}-x^{2}(i\alpha^{\prime}-1)-x^{3}$ (44)

The roots of these polynomials can be evaluated numerically, or symbolically by plugging in $\beta$, $\alpha$, and $\lambda_{k}$ and using the cubic formula. This section can easily be modified for the eigenvalues of the augmented dynamics of variants of complex momentum, by defining the appropriate $\boldsymbol{R}$ and modifying the Mathematica command to get the characteristic polynomial for each component, which can be evaluated with known formulas if it is of sufficiently low degree.

### A.3 Convergence Bounds

See Corollary 1 for the statement.

###### Proof.

Note that Theorem 1 bounds the convergence rate of Algorithm 1 by the spectral radius $\rho(\boldsymbol{R})$. Also, (40) gives a formula for the $3$ eigenvalues contributed to $\operatorname{Sp}(\boldsymbol{R})$ by $\alpha$, $\beta$, and an eigenvalue $\lambda\in\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$: it outputs a cubic polynomial whose roots are eigenvalues of $\boldsymbol{R}$, and these roots can be evaluated with the cubic formula. We denote the $k^{th}$ eigenspace of $\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$ with eigenvalue $\lambda_{k}=ic_{k}$, with $|c_{1}|\leq\dots\leq|c_{n}|$, because bilinear zero-sum games have purely imaginary eigenvalues due to $\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}$ being antisymmetric. Eigenvalues come in conjugate pairs, where $\bar{\lambda}_{k}=-ic_{k}$. If we select momentum coefficient $\beta=|\beta|\exp(i\arg(\beta))$ and step size $\alpha_{k}=\frac{\alpha_{k}^{\prime}}{|c_{k}|}$, and use the fact that the $\lambda\in\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$ are imaginary, then – as shown in Appendix Section A.2 – (40) simplifies to:

$p_{k}(x)=x(|\beta|\cos(\arg(\beta))(i\alpha_{k}^{\prime}-2)-|\beta|^{2})+x^{2}(2|\beta|\cos(\arg(\beta))-i\alpha_{k}^{\prime}+1)+|\beta|^{2}-x^{3}$ (45)

So, with these parameter selections, the convergence rate of Algorithm 1 in the $k^{th}$ eigenspace is bounded by the largest root of (45). First, consider $\arg(\beta)=\pi-\epsilon$, where $\epsilon=\frac{\pi}{16}$.
We select $\alpha_{k}^{\prime}=0.75$ (equivalently, $\alpha_{k}=\frac{0.75}{|c_{k}|}$) and $|\beta|=0.986$ via grid search. Using the cubic formula on the associated $p(x)$ from (45), the maximum-magnitude root has size $\approx 0.9998<1$, so this selection converges in the $k^{th}$ eigenspace. So, selecting:

$\hat{\alpha}\leq\min_{k}\alpha_{k}=\min_{k}\frac{0.75}{c_{k}}=\frac{0.75}{\max_{k}c_{k}}=\frac{0.75}{\|\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}\|_{2}}$ (46)–(49)

with $\beta=0.986\exp(i(\pi-\epsilon))$ will converge in each eigenspace.

Now, consider $\arg(\beta)=\epsilon=\frac{\pi}{16}$ with $\alpha_{k}^{\prime}=0.025$ and $|\beta|=0.9$. Using the cubic formula on the associated $p(x)$ from (45), the maximum-magnitude root has size $\approx 0.973<1$, so this selection converges in the $k^{th}$ eigenspace. So, selecting:

$\hat{\alpha}\leq\min_{k}\alpha_{k}=\min_{k}\frac{0.025}{c_{k}}=\frac{0.025}{\max_{k}c_{k}}=\frac{0.025}{\|\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}\|_{2}}$ (50)–(53)

with $\beta=0.9\exp(i\epsilon)$ will converge in each eigenspace.

Thus, for either choice of $\arg(\beta)$ we can select $\hat{\alpha}$ and $|\beta|$ that converge in every eigenspace, and hence converge overall. ∎

In the preceding proof, our prescribed selection of $\hat{\alpha}$ depends on knowing the largest-norm eigenvalue of $\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$, because our selections satisfy $\hat{\alpha}\propto\frac{1}{\|\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}\|_{2}}$. We may not have access to the largest-norm eigenvalue of $\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$ in practice. Nonetheless, this shows that a convergent parameter selection exists, even if it may be difficult to find. Often, in convex optimization, we describe choices of $\alpha,\beta$ in terms of the largest- and smallest-norm eigenvalues of $\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$ (i.e., the Hessian of the loss) [91].

## Appendix B Algorithms

Here, we include additional algorithms which may be of use to some readers. Algorithm 3 shows aggregated momentum [26]. Algorithm 4 shows the recurrently linked momentum that generalizes and unifies aggregated momentum with negative momentum [25]. Algorithm 5 shows our algorithm with alternating updates, which we use for training GANs. Algorithm 6 shows our method with all real-valued objects, for implementing complex momentum in a library that does not support complex arithmetic.
Algorithm 3 Aggregated Momentum

1: Select number of buffers $K\in\mathbb{N}$
2: Select $\beta_{(k)}\in[0,1)$ for $k=1\dots K$
3: Select $\alpha_{(k)}\in\mathbb{R}^{+}$ for $k=1\dots K$
4: Initialize $\boldsymbol{\mu}_{(k)}^{0}$ for $k=1\dots K$
5: for $j=1\dots N$ do
6:  for $k=1\dots K$ do
7:   $\boldsymbol{\mu}_{(k)}^{j+1}=\beta_{(k)}\boldsymbol{\mu}_{(k)}^{j}-\hat{\boldsymbol{g}}^{j}$
8:  $\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}+\sum_{k=1}^{K}\alpha_{(k)}\boldsymbol{\mu}_{(k)}^{j+1}$
9: return $\boldsymbol{\omega}_{N}$

Algorithm 4 Recurrently Linked Momentum

1: Select number of buffers $K\in\mathbb{N}$
2: Select $\beta_{(l,k)}\in\mathbb{R}$ for $l=1\dots K$ and $k=1\dots K$
3: Select $\alpha_{(k)}\in\mathbb{R}^{+}$ for $k=1\dots K$
4: Initialize $\boldsymbol{\mu}_{(k)}^{0}$ for $k=1\dots K$
5: for $j=1\dots N$ do
6:  for $k=1\dots K$ do
7:   $\boldsymbol{\mu}_{(k)}^{j+1}=\sum_{l}\beta_{(l,k)}\boldsymbol{\mu}_{(l)}^{j}-\hat{\boldsymbol{g}}^{j}$
8:  $\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}+\sum_{k=1}^{K}\alpha_{(k)}\boldsymbol{\mu}_{(k)}^{j+1}$
9: return $\boldsymbol{\omega}_{N}$

Algorithm 5 (AltCM) Alternating Complex Momentum

1: Select $\beta\in\mathbb{C},\alpha\in\mathbb{R}^{+}$
2: Initialize $\boldsymbol{\mu}_{A}^{0},\boldsymbol{\mu}_{B}^{0}$
3: for $j=1\dots N$ do
4:  $\boldsymbol{\mu}_{A}^{j+1}=\beta\boldsymbol{\mu}_{A}^{j}-\boldsymbol{g}_{A}^{j}$
5:  $\boldsymbol{\theta}_{A}^{j+1}=\boldsymbol{\theta}_{A}^{j}+\Re(\alpha\boldsymbol{\mu}_{A}^{j+1})$
6:  $\boldsymbol{\mu}_{B}^{j+1}=\beta\boldsymbol{\mu}_{B}^{j}-\boldsymbol{g}_{B}(\boldsymbol{\theta}_{A}^{j+1},\boldsymbol{\theta}_{B}^{j})$
7:  $\boldsymbol{\theta}_{B}^{j+1}=\boldsymbol{\theta}_{B}^{j}+\Re(\alpha\boldsymbol{\mu}_{B}^{j+1})$
8: return $\boldsymbol{\omega}_{N}$

Algorithm 6 (SimCM) Complex Momentum – $\mathbb{R}$-valued

1: Select $\Re(\beta),\Im(\beta),\Re(\alpha),\Im(\alpha)\in\mathbb{R}$
2: Initialize $\Re(\boldsymbol{\mu})^{0},\Im(\boldsymbol{\mu})^{0}$
3: for $j=1\dots N$ do
4:  $\Re(\boldsymbol{\mu}^{j+1})=\Re(\beta)\Re(\boldsymbol{\mu}^{j})-\Im(\beta)\Im(\boldsymbol{\mu}^{j})-\hat{\boldsymbol{g}}^{j}$
5:  $\Im(\boldsymbol{\mu}^{j+1})=\Re(\beta)\Im(\boldsymbol{\mu}^{j})+\Im(\beta)\Re(\boldsymbol{\mu}^{j})$
6:  $\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}-\Re(\alpha)\hat{\boldsymbol{g}}^{j}+\Re(\alpha\beta)\Re(\boldsymbol{\mu}^{j})-\Im(\alpha\beta)\Im(\boldsymbol{\mu}^{j})$
7: return $\boldsymbol{\omega}_{N}$

### B.1 Complex Momentum in PyTorch

Our method can easily be implemented in PyTorch 1.6+ by using complex tensors. The only necessary change to the SGD-with-momentum optimizer is extracting the real component from the momentum buffer, as with JAX. In older versions of PyTorch, we can use a tensor to represent the momentum buffer $\boldsymbol{\mu}$, step size $\alpha$, and momentum coefficient $\beta$. Specifically, we represent the real and imaginary components of each complex number independently. Then, we redefine the operations `__add__` and `__mul__` to satisfy the rules of complex arithmetic – i.e., equations (23) and (24).
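As an illustration of the complex-tensor route described above, here is a minimal sketch of an SGD-with-complex-momentum step. This is our own code, not the authors’ release; the function name and default hyperparameters are placeholders. Only the final `.real` distinguishes it from ordinary momentum SGD.

```python
import cmath
import torch

@torch.no_grad()
def complex_momentum_step(params, grads, buffers, alpha=0.1,
                          beta=0.9 * cmath.exp(1j * cmath.pi / 8)):
    """One SimCM step using PyTorch complex tensors (PyTorch 1.6+).
    params, grads: lists of real tensors; buffers: complex tensors."""
    for w, g, mu in zip(params, grads, buffers):
        mu.mul_(beta).sub_(g.to(mu.dtype))  # mu <- beta*mu - g
        w.add_(alpha * mu.real)             # w  <- w + Re(alpha*mu), real alpha
    return params, buffers

# Buffers would be initialized as complex zeros, e.g.:
# buffers = [torch.zeros_like(w, dtype=torch.complex64) for w in params]
```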
## Appendix C Experiments

### C.1 Computing Infrastructure and Runtime

For the purely adversarial experiments in Sections 4.1 and 4.2, we do our computing on CPU. Training each $2$D GAN in Section 4.3 takes $2$ hours, and we can train $10$ simultaneously on an NVIDIA T4 GPU. Training each CIFAR GAN in Section 4.4 takes $10$ hours, and we can only train $1$ model per NVIDIA T4 GPU.

### C.2 Optimization in Purely Adversarial Games

We include the alternating-update version of Figure 3.1 in Figure 8, which allows us to contrast simultaneous and alternating updates. With alternating updates on a Dirac-GAN with $\alpha=0.1$, the best value for the momentum coefficient $\beta$ was complex, though we could converge with real, negative momentum. Simultaneous updates are a competitive choice relative to alternating updates only when alternating updates cost two gradient evaluations per step, which is common in deep learning setups.

### C.3 How Adversarialness Affects Convergence Rates

We include the extragradient (EG) update with extrapolation parameter $\alpha^{\prime}$ and step size $\alpha$:

$\boldsymbol{\omega}^{j+\frac{1}{2}}=\boldsymbol{\omega}^{j}-\alpha^{\prime}\hat{\boldsymbol{g}}^{j},\qquad\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}-\alpha\hat{\boldsymbol{g}}^{j+\frac{1}{2}}$ (EG)

and the optimistic gradient (OG) update with extrapolation parameter $\alpha^{\prime}$ and step size $\alpha$:

$\boldsymbol{\omega}^{j+1}=\boldsymbol{\omega}^{j}-2\alpha\hat{\boldsymbol{g}}^{j}+\alpha^{\prime}\hat{\boldsymbol{g}}^{j-1}$ (OG)

Often, EG and OG are used with $\alpha=\alpha^{\prime}$; however, we found that this constraint crippled these methods in cooperative games (i.e., minimization). As such, we tuned the extrapolation parameter $\alpha^{\prime}$ separately from the step size $\alpha$, so EG and OG were competitive baselines – see the sketch at the end of this subsection.

We include Figure 9, which investigates a GAN’s spectrum throughout training and elaborates on the information shown in Figure 7. It shows that there are many real and many imaginary eigenvalues, so GAN training is neither purely cooperative nor purely adversarial. Also, the structure of the set of eigenvalues for the discriminator is different from that of the generator, which may motivate separate optimizer choices. The structure between the players persists through training, but the eigenvalues grow in magnitude and spread out their phases. This indicates that how adversarial the game is can change during training.
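For concreteness, here is a NumPy sketch of the (EG) and (OG) updates with decoupled $\alpha$ and $\alpha^{\prime}$, as tuned in this subsection. This is our own illustration; `grad` stands for an oracle returning the joint-gradient $\hat{\boldsymbol{g}}$ at a point.

```python
import numpy as np

def extragradient_step(w, grad, alpha, alpha_prime):
    """(EG): extrapolate with alpha', then step from the original point."""
    w_half = w - alpha_prime * grad(w)
    return w - alpha * grad(w_half)

def optimistic_gradient_step(w, g_prev, grad, alpha, alpha_prime):
    """(OG): w <- w - 2*alpha*g^j + alpha'*g^{j-1}; returns new w and g^j."""
    g = grad(w)
    return w - 2 * alpha * g + alpha_prime * g_prev, g
```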
[Figure 8 panels: momentum phase $\arg(\beta)$ vs. momentum magnitude $|\beta|$ – left: SimCM, where # gradient evaluations = # steps; middle: AltCM, # gradient evaluations to converge; right: AltCM, # steps to converge.]

Figure 8: We show how many steps and gradient evaluations simultaneous and alternating complex momentum take on a Dirac-GAN to reach a set solution distance. We fix the step size $\alpha=0.1$ as in Figure 3, while varying the phase and magnitude of our momentum $\beta=|\beta|\exp(i\arg(\beta))$. There is a red star at the optima, dashed red lines at real $\beta$, and a dashed magenta line for simultaneous or alternating gradient descent. We only display color for convergent setups. _Left:_ Simultaneous complex momentum (SimCM). This is the same as Figure 3.1, repeated to contrast with alternating updates. There are no real-valued $\beta$ that converge for this – or any – $\alpha$ with simultaneous updates [25]. Simultaneous updates can parallelize the gradient computation for all players at each step, thus costing only one gradient evaluation per step for many deep learning setups. The best rate of convergence per step and per gradient evaluation is $\approx 0.955$. _Middle:_ Alternating complex momentum (AltCM), where we show how many gradient evaluations – as opposed to steps – it takes to reach a set solution distance. Alternating updates are bottlenecked by waiting for the first player’s update in order to compute the second player’s update, effectively costing two gradient evaluations per step for many deep learning setups. Negative momentum can converge here, as shown by Gidel et al. [25], but the best momentum is still complex. Also, alternating updates can make convergence less sensitive to the choice of momentum phase $\arg(\beta)$. The best rate of convergence per gradient evaluation is $\approx 0.965$. _Right:_ AltCM, where we show how many steps it takes to reach a set solution distance. The best rate of convergence per step is $\approx 0.931$. Takeaway: If we can parallelize the computation of both players’ gradients, we can benefit from SimCM; if we cannot, then AltCM can converge more quickly and for a broader set of optimizer parameters. In either case, the best solution uses a complex momentum $\beta$ for this $\alpha$.

[Figure 9 panels: (top left) the Jacobian of the joint-gradient $\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}^{j}$ for a GAN, shown as $\log(|a_{lk}|+\epsilon)$ over joint-gradient index $l$ and joint-parameter index $k$; (top right) absolute eigenvector components for eigenvalues $\lambda\in\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}})$, where the first $337$ indices are D’s parameters; (bottom) log-polar graphs of $\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}^{j})$ at the start and end of training, colored by whether the eigenvector points at the (disc)riminator, the (gen)erator, or is unsure.]

Figure 9: These plots investigate the spectrum of the Jacobian of the joint-gradient for the GAN in Figure 7 through training. The spectrum is key for bounding convergence rates in learning algorithms. _Top left:_ The Jacobian $\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}$ for a GAN on a $2$D mixture of Gaussians with a two-layer, fully-connected, $16$-hidden-unit discriminator (D) and generator (G) at the end of training. In the concatenated parameters $\boldsymbol{\omega}\in\mathbb{R}^{723}$, the first $337$ are for D, while the last $386$ are for G. We display the $\log$ of the absolute value of each component plus $\epsilon=10^{-10}$. The upper-left and lower-right quadrants are the Hessians of D’s and G’s losses respectively. _Top right:_ We visualize two randomly sampled eigenvectors of $\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}$. The first part of the parameters is for the discriminator, while the second part is for the generator. Given an eigenvalue with eigenvector $\mathbf{v}$, we roughly attribute eigenvectors to players by calculating how much of $\mathbf{v}$ lies in D’s parameter space with $\frac{\|\mathbf{v}_{1:|\textnormal{D}|}\|_{1}}{\|\mathbf{v}\|_{1}}=\frac{\|\mathbf{v}_{1:337}\|_{1}}{\|\mathbf{v}\|_{1}}$.
If this ratio is near $1$ (or $0$), we say _the eigenvector mostly points at D (or G)_. The blue eigenvector mostly points at G, while the orange eigenvector is unclear. Finding useful ways to attribute eigenvalues to players is an open problem. _Bottom:_ The spectrum of the Jacobian of the joint-gradient $\operatorname{Sp}(\nabla_{\boldsymbol{\omega}}\hat{\boldsymbol{g}}^{j})$ is shown in log-polar coordinates, because it is difficult to see structure when graphing in Cartesian (i.e., $\Re$ and $\Im$) coordinates, due to the eigenvalues spanning orders of magnitude while being positive and negative. The end of training is when we stop making progress on the log-likelihood. We have imaginary eigenvalues at $\arg(\lambda)=\pm\pi/2$, positive eigenvalues at $\arg(\lambda)=0$, and negative eigenvalues at $\arg(\lambda)=\pm\pi$. _Takeaway:_ There is a banded structure in the coloring of the eigenvalues that persists through training. We may want different optimizer parameters for the discriminator and generator, due to the asymmetry in their associated eigenvalues. Also, the magnitude of the eigenvalues grows during training, and the $\arg$s spread out, indicating that the game’s eigenstructure can change near solutions.

### C.4 Training GANs on $2$D Distributions

For $2$D distributions, the data is generated by sampling from a mixture of $8$ Gaussian distributions distributed uniformly around the unit circle. For the GAN, we use a fully-connected network with $4$ hidden ReLU [92] layers of $256$ hidden units each. We chose this architecture to match Gidel et al. [25]. Our noise source for the generator is a $4$D Gaussian. We trained the models for $100\,000$ iterations. The performance of the optimizer settings is evaluated by computing the negative log-likelihood (NLL) of a batch of $100\,000$ generated $2$D samples.
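A minimal PyTorch sketch of the architecture just described (our own reading of the setup: the layer sizes follow the text, while the input/output dimensions of the $2$D task are our assumptions):

```python
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=4):
    """Fully-connected net with `depth` hidden ReLU layers of `hidden` units."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

generator = mlp(in_dim=4, out_dim=2)      # 4D Gaussian noise -> 2D sample
discriminator = mlp(in_dim=2, out_dim=1)  # 2D sample -> logit
```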
[Figure 12(b) graphic: inception score (IS) versus training iteration.]

Figure 12(b): We compare the best optimization parameters from the grid search in Appendix C.4 for our complex Adam variant (i.e., Algorithm 2), shown in green, with the author-provided values, shown in red, for the CIFAR-10 BigGAN over $10$ seeds. A star is displayed at the best IS over all runs, a cross is displayed at the worst IS over all runs, while a circle is shown at the best IS for each run. Dashed lines are shown at the max/min IS over all runs at each iteration, low-alpha lines are shown for each run's IS, while solid lines are shown for the average IS over all seeds at each iteration. The results are summarized in Table 1.
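For reference, a minimal sketch of the simultaneous complex-momentum (SimCM) update that these experiments tune. This is our own paraphrase for illustration, not the authors' exact Algorithm 2: the sign conventions, buffer initialization, and the `grad_D`/`grad_G` callables are all assumptions.

```python
import numpy as np

def sim_cm_step(w_D, w_G, mu_D, mu_G, grad_D, grad_G,
                alpha=0.003, beta=0.9 * np.exp(1j * np.pi / 8)):
    """One simultaneous complex-momentum step for a two-player game (a sketch).

    w_D, w_G   : real parameter vectors of the two players
    mu_D, mu_G : complex momentum buffers (initialized to zeros)
    beta       : complex momentum; arg(beta) ~ pi/8 was the best phase in Figure 10
    """
    # Both players' gradients can be computed in parallel (the SimCM advantage).
    g_D, g_G = grad_D(w_D, w_G), grad_G(w_D, w_G)
    mu_D = beta * mu_D - g_D      # complex momentum accumulation
    mu_G = beta * mu_G - g_G
    # Parameters stay real: only the real part of the momentum moves them.
    w_D = w_D + alpha * np.real(mu_D)
    w_G = w_G + alpha * np.real(mu_G)
    return w_D, w_G, mu_D, mu_G
```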
# An Improved and More Accurate Expression for a PDF Related to Eigenvalue-Based Spectrum Sensing

Fuhui Zhou, _Member, IEEE_, and Norman C. Beaulieu, _Fellow, IEEE_

Fuhui Zhou is with the School of Information Engineering, Nanchang University, P. R. China, 330031 (e-mail: [email protected]). Norman C. Beaulieu is with Beijing University of Posts and Telecommunications and the Beijing Key Laboratory of Network System Architecture and Convergence, Beijing, China 100876 (e-mail: [email protected]). The research was supported by the National Natural Science Foundation of China (61701214), the Young Natural Science Foundation of Jiangxi Province (20171BAB212002), and the China Postdoctoral Science Foundation (2017M610400).

###### Abstract

Cooperative spectrum sensing based on the limiting eigenvalue ratio of the covariance matrix offers superior detection performance and overcomes the noise uncertainty problem. While an exact expression exists, it is complex, and multiple useful approximate expressions have been published in the literature. An improved, more accurate, integral solution for the probability density function of the ratio is derived using order statistical analysis to remove the simplifying, but incorrect, independence assumption. Thereby, the letter makes an advance in the rigorous theory of eigenvalue-based spectrum sensing.

###### Index Terms: Cooperative spectrum sensing, eigenvalue ratio analysis, order statistics, probability distribution.

## I Introduction

Eigenvalue-based detection schemes, which use the eigenvalues of the covariance matrix to construct a test statistic, are considered to be among the most effective methods to test for the presence of a primary user (PU) signal in cognitive radio systems [1]. The maximum-to-minimum eigenvalue (MME) detector is based on the ratio of the largest eigenvalue to the smallest eigenvalue of the covariance matrix. However, theoretical results for eigenvalue ratio schemes usually depend on asymptotic assumptions, since the distribution of the ratio of two extreme eigenvalues is difficult to compute [1]-[5]. The probability of false alarm (PFA) is the probability that the PU is absent but is detected to be present. The PFA is one of the most important performance metrics in spectrum sensing for cognitive radio. Accurate determination of the PFA improves the accuracy of the decision threshold of a detector, and the efficiency of spectrum utilization. Note that the derivation of the PFA depends on the probability density function (PDF) of the test statistic of the detector under the hypothesis that the PU is absent. However, this PDF is not known exactly in a tractable form suitable for further theoretical analysis, and the derivation of a popular approximation necessarily employs assumptions that are not rigorously correct mathematically. This letter makes a contribution to the theory of eigenvalue-based spectrum sensing in two ways: by removing an invalid assumption, and by deriving a more accurate, mathematically tractable approximation to this important PDF. The distribution of the ratio of the two extreme eigenvalues for the MME scheme is commonly approximated by the Tracy-Widom distribution based on the Tracy-Widom law [1], [2]. However, this approximation is based on the unrealistic assumption that the number of received signal samples as well as the number of cooperating secondary users are infinite.
It has been shown that this approximation is poor when the number of signal samples is small [5], [6]. An improved approximation to the PDF of the ratio of the two extreme eigenvalues is derived in [5] and [6] under the assumption that the two extreme eigenvalues are independent and Gaussian distributed. In [7] and [8], an exact PDF for the ratio of the two extreme eigenvalues has been derived. The derived exact expression for the PDF is quite complex, and other authors have chosen to use the approximation over the exact solution in [5], [6], [9]-[11]. In this paper, an improved PDF approximation for the ratio of the two extreme eigenvalues is derived by using order statistics theory and the Gaussian distribution assumption. It is shown that the derived PDF is more accurate than the commonly employed PDFs given in [2] and [5], and is simpler than the exact PDF given in [7].

The rest of this paper is organized as follows. Section II presents the system model and MME spectrum sensing. An improved solution for the PDF of the ratio of two extreme eigenvalues is presented in Section III. Section IV presents simulation results. The paper concludes with Section V.

## II System Model and MME Spectrum Sensing

Figure 1: The system model.

As shown in Fig. 1, a cognitive radio network with $M$ secondary users (SUs) that cooperatively detect one PU is considered. During the sensing time, each SU collects $N$ samples of the received signal, denoted by ${{x_{i}}\left(n\right)}$, where $n=1,2,\cdots,N$ and $i=1,2,\cdots,M$. Note that all the SUs receive the signal at the same time. To achieve synchronous sampling, each SU derives its center frequency from the local oscillator and uses the same digital clock [1], [2], [5], [6]. This system model for cooperative spectrum sensing has been widely applied in those works. The samples are then transmitted to the fusion center. The aim of cooperative spectrum sensing is to construct a test statistic and make a decision between the two hypotheses $\left({H_{0}}\ \textrm{and}\ {H_{1}}\right)$ based on the collected samples, where $H_{0}$ denotes the absence of the PU and $H_{1}$ represents the presence of the PU. Thus, the samples from each SU under the two hypotheses are given as

$\displaystyle{{H_{0}}:}\ {{x_{i}}\left(n\right)={w_{i}}\left(n\right)}$ (1a)

$\displaystyle{{H_{1}}:}\ {{x_{i}}\left(n\right)={h_{i}}\left(n\right)\sqrt{P_{s}}\,{s}\left(n\right)+{w_{i}}\left(n\right)}$ (1b)

where ${w_{i}}\left(n\right)\sim{\mathcal{CN}}\left({0,\sigma_{w}^{2}}\right)$ is complex Gaussian noise and $\mathcal{CN}\left({0,\sigma_{w}^{2}}\right)$ denotes the complex Gaussian distribution with mean zero and variance $\sigma_{w}^{2}$; ${s}\left(n\right)$ is the primary user signal and ${h_{i}}\left(n\right)$ are the channel coefficients; $P_{s}$ is the transmit power of the primary user. The distribution of the PU signal is unknown and independent of the noise. Based on the collected samples from the $M$ SUs, a data matrix is defined as $\mathbf{X}=\left[{X_{1}^{T},X_{2}^{T},\cdots,X_{M}^{T}}\right]^{T}$, where ${X_{m}}=\left[{{x_{m}}\left(1\right)},\ {{x_{m}}\left(2\right)},\ \cdots,\ {{x_{m}}\left(N\right)}\right]$ with $m=1,2,\cdots,M$. The sample covariance matrix is defined as ${\mathbf{R}_{x}}=\left({1/N}\right)\mathbf{X}{\mathbf{X}^{H}}$, where ${\left(\cdot\right)^{H}}$ represents the Hermitian transpose operator. Let ${\lambda_{1}}\geq{\lambda_{2}}\geq\cdots\geq{\lambda_{M}}$ denote the ordered eigenvalues of ${\mathbf{R}_{x}}$.
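As a concrete illustration of this construction, a minimal numpy sketch of forming $\mathbf{R}_{x}$ and its extreme eigenvalues from noise-only ($H_{0}$) samples; the variable names are ours, and the complex-noise model follows eq. (1a):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 10, 50   # number of SUs and samples per SU, as in Sect. IV

# Under H0, each row of X holds one SU's N complex Gaussian noise samples.
X = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

R_x = (X @ X.conj().T) / N            # sample covariance matrix, M x M
eigvals = np.linalg.eigvalsh(R_x)     # real eigenvalues of Hermitian R_x, ascending
lam_1, lam_M = eigvals[-1], eigvals[0]
T = lam_1 / lam_M                     # the MME statistic compared with gamma_xi below
print(f"lambda_1 = {lam_1:.3f}, lambda_M = {lam_M:.3f}, T = {T:.3f}")
```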
The test statistic for the MME spectrum sensing scheme was formulated in [5]. It is denoted by $T_{\xi}$ and given as

$\displaystyle T_{\xi}=\frac{\lambda_{1}}{\lambda_{M}}\ \underset{H_{0}}{\overset{H_{1}}{\gtrless}}\ \gamma_{\xi}$ (2)

where ${\gamma_{\xi}}$ is the decision threshold of the MME spectrum sensing scheme.

## III Improved Solution for The PDF of The Ratio of Two Extreme Eigenvalues Under Hypothesis $H_{0}$

In [1], [2], the PDF of the ratio of the two limiting eigenvalues under the hypothesis $H_{0}$ is approximated by the PDF of the Tracy-Widom distribution. This approximation is poor when the number of samples is small or moderate. In [5], the PDF of the ratio is derived based on the assumption that the two limiting eigenvalues are independent normal random variables. The assumption that the two extreme eigenvalues are independent is not correct and is removed in this paper. In [5], when only Gaussian noise is present, the largest and smallest eigenvalues of the covariance matrix are assumed to be normal random variables, namely

$\displaystyle{\lambda_{1}}\sim{\mathcal{N}}\left({{u_{{\lambda_{1}}}},\sigma_{{\lambda_{1}}}^{2}}\right)$ (3a)

$\displaystyle{\lambda_{M}}\sim{\mathcal{N}}\left({{u_{{\lambda_{M}}}},\sigma_{{\lambda_{M}}}^{2}}\right)$ (3b)

where $u_{{\lambda_{1}}}$ and $u_{{\lambda_{M}}}$ are the means of the largest and smallest eigenvalues, respectively, and $\sigma_{{\lambda_{1}}}^{2}$ and $\sigma_{{\lambda_{M}}}^{2}$ are the corresponding variances. According to [5], the means and variances of the largest and smallest eigenvalues are given as

$\displaystyle{u_{{\lambda_{\rm K}}}}=\mathbb{E}\left({{\lambda_{\rm K}}}\right)$ (4a)

$\displaystyle\sigma_{{\lambda_{\rm K}}}^{2}=\mathbb{E}\left({\lambda_{\rm K}^{2}}\right)-u_{{\lambda_{\rm K}}}^{2}$ (4b)

$\displaystyle\mathbb{E}\left({\lambda_{1}^{p}}\right)=C_{0}^{-1}{\beta_{{\lambda_{1}}}}\left(p\right)$ (4c)

$\displaystyle\mathbb{E}\left({\lambda_{M}^{p}}\right)=C_{0}^{-1}{\beta_{{\lambda_{M}}}}\left(p\right)$ (4d)

where ${C_{0}}=\prod_{i=1}^{M}\left({N-i}\right)!\prod_{j=1}^{M}\left({M-j}\right)!$; $\mathbb{E}[\cdot]$ denotes the expectation operator; ${\rm K}\in\left\{1,M\right\}$; ${\beta_{{\lambda_{1}}}}\left(p\right)$ and ${\beta_{{\lambda_{M}}}}\left(p\right)$ are given by eq. (5) at the top of the next page. In eq. (5), ${\rm sgn}\left(\cdot\right)$ denotes the signum function; $\alpha_{m}$ is the $m$-th element of $\alpha$, and $\alpha$ is a permutation of $\left\{1,2,\cdots,M-1\right\}$; ${p_{i,j}}=p+N-M+i+j$; $\sum{l_{1}^{M-1}}=\sum_{i=1}^{M-1}{l_{i}}$; $l_{1}^{M-1}!=\prod_{i=1}^{M-1}{l_{i}!}$; $\sum\nolimits_{{l_{1\sim M-1}}}^{{L_{1\sim M-1}}}=\sum\nolimits_{{l_{1}}=0}^{{L_{1}}}\sum\nolimits_{{l_{2}}=0}^{{L_{2}}}\cdots\sum\nolimits_{{l_{M-1}}=0}^{{L_{M-1}}}$; $\mathcal{S}$ is any subset of the set $\left\{l_{1},l_{2},\cdots,l_{M-1}\right\}$, where $l_{m}$ runs from $0$ to ${L_{{\alpha_{m}},m}}-1$; and ${\left|{\mathcal{S}}\right|}$ represents the cardinality of the subset $\mathcal{S}$.
$\displaystyle{\beta_{{\lambda_{M}}}}\left(p\right)=\sum\limits_{i,j}^{M}{{{\left({-1}\right)}^{i+j}}}\sum\limits_{\alpha}{{\mathop{\rm sgn}}\left(\alpha\right)\prod\limits_{m=1}^{M-1}{\Gamma\left({{L_{{\alpha_{m}},m}}}\right)}}\left({\sum\nolimits_{{l_{1\sim M-1}}}^{{L_{1\sim M-1}}}{\frac{{\Gamma\left({\sum{l_{1}^{M-1}}+{p_{i,j}}-1}\right)}}{{l_{1}^{M-1}!\,{M^{\sum{l_{1}^{M-1}}+{p_{i,j}}-1}}}}}}\right)$ (5a)

$\displaystyle{\beta_{{\lambda_{1}}}}\left(p\right)=\sum\limits_{i,j}^{M}{{{\left({-1}\right)}^{i+j}}}\sum\limits_{\alpha}{{\mathop{\rm sgn}}\left(\alpha\right)\prod\limits_{m=1}^{M-1}{\Gamma\left({{L_{{\alpha_{m}},m}}}\right)}}\left({\sum\limits_{\mathcal{S}}{{{\left({-1}\right)}^{\left|{\mathcal{S}}\right|}}}\frac{{\Gamma\left({\sum\mathcal{S}+{p_{i,j}}-1}\right)}}{{\prod{l_{1}^{M-1}!\,{M^{\sum{l_{1}^{M-1}}+{p_{i,j}}-1}}}}}}\right)$ (5b)

$\displaystyle{L_{{\alpha_{m}},m}}=\begin{cases}N-M+m+\alpha_{m}-1&\text{if}\ \alpha_{m}<i\ \text{and}\ m<j\\ N-M+m+\alpha_{m}+1&\text{if}\ \alpha_{m}\geq i\ \text{and}\ m\geq j\\ N-M+m+\alpha_{m}&\text{otherwise}\end{cases}$ (5c)

Since the largest and smallest eigenvalues are order statistics, the joint PDF, $f_{r,s}\left({x,y}\right)$, of ${X_{r}}$ and ${X_{s}}$, $1\leq r<s\leq M$, ${X_{r}}\leq{X_{s}}$, for $x\leq y$, is given by [12]

$\displaystyle{f_{r,s}}\left({x,y}\right)=\frac{M!\left[{F_{r}}\left(x\right)\right]^{r-1}{f_{r}}\left(x\right){f_{s}}\left(y\right)}{\left({r-1}\right)!\left({s-r-1}\right)!\left({M-s}\right)!}\,{\left[{{F_{s}}\left(y\right)-{F_{r}}\left(x\right)}\right]^{s-r-1}}{\left[{1-{F_{s}}\left(y\right)}\right]^{M-s}}$ (6)

where ${F_{r}}\left(x\right)$ and ${F_{s}}\left(y\right)$ are the cumulative marginal distribution functions (CMDFs) of ${X_{r}}$ and ${X_{s}}$, respectively, and ${f_{r}}\left(x\right)$ and ${f_{s}}\left(y\right)$ denote the marginal PDFs of ${X_{r}}$ and ${X_{s}}$. Thus, the joint PDF of the largest and smallest eigenvalues, ${\widetilde{f}_{{\lambda_{1}},{\lambda_{M}}}}\left({x,y}\right)$, is given by

$\displaystyle{\widetilde{f}_{{\lambda_{1}},{\lambda_{M}}}}\left({x,y}\right)=M\left({M-1}\right){f_{{\lambda_{1}}}}\left(x\right){f_{{\lambda_{M}}}}\left(y\right){\left[{{F_{{\lambda_{1}}}}\left(x\right)-{F_{{\lambda_{M}}}}\left(y\right)}\right]^{M-2}}$ (7)

where $F_{{\lambda_{1}}}\left(x\right)$ and $F_{{\lambda_{M}}}\left(y\right)$ are the cumulative distribution functions (CDFs) of the two extreme eigenvalues, $f_{{\lambda_{1}}}\left(x\right)$ and $f_{{\lambda_{M}}}\left(y\right)$ are their corresponding marginal PDFs, and $M$ is the number of secondary users. Therefore, the improved PDF of the ratio of the two extreme eigenvalues, ${\widetilde{f}_{Z}}\left(z\right)$, is derived as

$\displaystyle{\widetilde{f}_{Z}}\left(z\right)=\int_{-\infty}^{\infty}{{{\widetilde{f}}_{{\lambda_{1}},{\lambda_{M}}}}\left({yz,y}\right)}\left|y\right|dy$ (8)

where $\left|\cdot\right|$ is the magnitude operator. After substituting eq. (7) into eq.
(8) and performing some algebraic manipulations, the improved PDF, ${\widetilde{f}_{Z}}\left(z\right)$, is given by

$\displaystyle{\widetilde{f}_{Z}}\left(z\right)=M\left({M-1}\right)\sum\limits_{i=0}^{M-2}\binom{M-2}{i}{\left({-1}\right)^{M-2-i}}\int_{-\infty}^{\infty}\left[F_{{\lambda_{1}}}\left({yz}\right)\right]^{i}\left[F_{{\lambda_{M}}}\left(y\right)\right]^{M-2-i}{f_{{\lambda_{1}}}}\left({yz}\right){f_{{\lambda_{M}}}}\left(y\right)\left|y\right|dy.$ (9)

Therefore, the improved PDF of the ratio of the extreme eigenvalues, based on the assumption that the two extreme eigenvalues follow normal distributions [5], is

$\displaystyle{\widetilde{f}_{Z}}\left(z\right)=M\left({M-1}\right)\sum\limits_{i=0}^{M-2}\binom{M-2}{i}{\left({-1}\right)^{M-2-i}}\int_{-\infty}^{\infty}\Bigg\{\left[{\Phi}\left({\frac{{yz-{u_{1}}}}{{{\sigma_{1}}}}}\right)\right]^{i}\left[{\Phi}\left({\frac{{y-{u_{2}}}}{{{\sigma_{2}}}}}\right)\right]^{M-2-i}\frac{{\left|y\right|\,e^{-\left[{\frac{{\left({yz-{u_{1}}}\right)}^{2}}{2\sigma_{1}^{2}}+\frac{{\left({y-{u_{2}}}\right)}^{2}}{2\sigma_{2}^{2}}}\right]}}}{{2\pi{\sigma_{1}}{\sigma_{2}}}}\Bigg\}dy$ (10)

where $\Phi\left(x\right)$ is the CDF of the standard normal distribution, and $u_{1}=u_{\lambda_{1}}$, $\sigma_{1}=\sigma_{\lambda_{1}}$, $u_{2}=u_{\lambda_{M}}$, $\sigma_{2}=\sigma_{\lambda_{M}}$. Although the solution for the ratio of the two limiting eigenvalues given in eq. (10) is in integral form, the integral is well behaved, having a strictly positive integrand, and it can be evaluated readily using standard numerical computation or commonly available mathematical software. The proposed solution for the PDF of the ratio of the two limiting eigenvalues is simpler than the solution given in [8, eq. (7)]. It is seen from eq. (10) here and eq. (7) in [8] that the complexity of these two expressions mainly depends on the multiple integrations; in both cases double integrations are required. In [8], there are ${\rm O}\left({{N^{2}}}\right)$ additions due to the permutation operation and ${\rm O}\left({{NM^{4}}}\right)$ multiplications in the integrals, where ${\rm O}$ is the big-${\rm O}$ notation [13]. In eq. (10), $M-1$ additions and ${\rm O}\left({{M}}\right)$ multiplications in the integrals are required. In the simulations, a comparison of the required computation times for these two expressions is given to further clarify the superiority of our proposed PDF in terms of complexity.

## IV Simulation Evaluations And Discussion

In this section, simulation results are given to contrast the proposed simple form for the PDF of the ratio of the two limiting eigenvalues with the expressions for the PDF of that ratio given in [2], [5] and [7]. We also present some example results that compare the accuracies of the new and previous theoretical approximations for the PDF of the ratio of the eigenvalues to the exact PDF obtained by simulation. The noises are independent identically distributed real Gaussian noises with mean zero and unit variance. All the simulation results are obtained using $10^{6}$ Monte Carlo trials. The number of samples and the number of secondary users are set as $N=50$ and $M=10$, or $N=50$ and $M=20$, respectively.
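A minimal sketch of how eq. (10) can be evaluated numerically. The moments must first be obtained from eqs. (4)-(5); the values below are placeholders for illustration only, not the true moments:

```python
import numpy as np
from math import comb
from scipy.stats import norm
from scipy.integrate import quad

def f_Z(z, u1, s1, u2, s2, M):
    """Improved PDF of lambda_1/lambda_M, eq. (10), by numerical quadrature."""
    def integrand(y):
        F1 = norm.cdf((y * z - u1) / s1)       # CDF of the largest eigenvalue
        F2 = norm.cdf((y - u2) / s2)           # CDF of the smallest eigenvalue
        f1 = norm.pdf((y * z - u1) / s1) / s1  # marginal Gaussian PDFs
        f2 = norm.pdf((y - u2) / s2) / s2
        s = sum(comb(M - 2, i) * (-1) ** (M - 2 - i) * F1 ** i * F2 ** (M - 2 - i)
                for i in range(M - 1))         # binomial expansion of [F1 - F2]^(M-2)
        return M * (M - 1) * s * f1 * f2 * abs(y)

    # The Gaussian factor in y confines the mass to a finite window around u2.
    val, _ = quad(integrand, u2 - 8 * s2, u2 + 8 * s2)
    return val

# Placeholder moments, for illustration only:
print(f_Z(3.0, u1=3.3, s1=0.4, u2=0.45, s2=0.05, M=10))
```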
Figure 2: Comparison of the improved PDF approximation for the ratio of the two limiting eigenvalues of the covariance matrix with the known approximate PDFs for $N=50$ and $M=10$.

Fig. 2 shows the commonly employed approximations to the PDF of the ratio of the two limiting eigenvalues of the covariance matrix obtained using existing methods, together with our proposed approximation, which is obtained without the assumption that the two eigenvalues are independent. In Fig. 2, the empirical PDF curve is the empirical PDF of the ratio of the two limiting eigenvalues, while the TW PDF is the PDF approximation obtained using the Tracy-Widom distribution of order $2$ [2]. The PDF curve labeled eq. (30) [5] is the approximate solution given in [5, eq. (30)]. The PDF curve labeled PDF [7] is the exact PDF given in [7]. It is observed that the PDF of the ratio test statistic obtained using the new solution matches the empirical PDF better than the PDFs obtained from the other two methods. It is also seen that the PDF given by the exact solution in [7] matches the empirical PDF well. This is consistent with the results obtained in [7]. The PDF from [5, eq. (30)] and the PDF used to approximate the Tracy-Widom PDF in [2] are both approximate PDFs, and both are inferior to the new approximation. The Tracy-Widom PDF of order $2$ is not a good approximation to the precise PDF obtained by simulation. The reason is that the Tracy-Widom PDF of order $2$ for the ratio of the two limiting eigenvalues is valid when $\lim_{N\to\infty}\frac{M}{N}=c$, where $c$ is a constant. However, in the simulation, $N$ and $M$ are set to $50$ and $10$, respectively, and these values do not satisfy the limiting condition. It is seen that the new solution provides very high accuracy. The reason is that our new solution is derived without the independence assumption; the only remaining source of discrepancy is the Gaussian distribution assumption, which causes only small discrepancies when the number of samples is even moderately large. These results show that the major source of error in the previous approximations is the independence assumption, and not the Gaussian assumption.

Figure 3: Comparison of the improved PDF approximation for the ratio of the two limiting eigenvalues of the covariance matrix with the known approximate PDFs, for $N=50$ and $M=10$ or $N=50$ and $M=20$.

Fig. 3 shows the comparison of the improved PDF approximation for the ratio of the two limiting eigenvalues of the covariance matrix with the known approximate PDF given by eq. (30) of [5] for different $M$. It is seen that the accuracy of both our proposed solution and the form given in [5] increases with $M$. The reason is that the accuracy of the mean and variance of the two limiting eigenvalues increases with $M$.

TABLE I: Comparison of the required computation times (s)

| Schemes \ $(N,M)$ | $(50,5)$ | $(100,5)$ | $(100,10)$ | $(100,20)$ |
|---|---|---|---|---|
| Eq. (7) in [8] | 10.248 | 14.835 | 27.482 | 43.498 |
| Eq. (10) | 6.529 | 6.572 | 10.593 | 22.179 |

In order to compare the complexity of our proposed PDF form with that of the form given by eq. (7) in [8], the computation times for different parameters $(N,M)$ are given in Table I. The results are obtained using a computer with a 64-bit Intel(R) Core(TM) i7-4790 CPU and $8$ GB RAM. It is seen from Table I that the required time for calculating the PDF given by eq. (7) in [8] is larger than that for our proposed PDF.
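For completeness, a short sketch of how the empirical PDF used in these comparisons can be reproduced; it is a scaled-down version of the $10^{6}$-trial setup of Sect. IV, with real Gaussian noise as stated there:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, trials = 10, 50, 10_000   # increase trials to 10**6 to match the paper

ratios = np.empty(trials)
for t in range(trials):
    X = rng.standard_normal((M, N))           # real Gaussian noise under H0
    eig = np.linalg.eigvalsh((X @ X.T) / N)   # eigenvalues of the sample covariance
    ratios[t] = eig[-1] / eig[0]              # lambda_1 / lambda_M

pdf, edges = np.histogram(ratios, bins=200, density=True)  # empirical PDF of T
```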
This indicates that the complexity of our proposed expression is lower than that of the expression presented in [8]. It further verifies that our proposed PDF is simpler than that proposed in [8].

## V Conclusion

A new approximation for the PDF of the ratio of the two limiting eigenvalues of the covariance matrix in eigenvalue-based spectrum sensing was derived based on order statistics analysis. The new approximate solution is the most accurate approximation known, and its derivation does not rely on the invalid independence assumption used to derive a popular previous approximation. The precise new approximation was used to show that the major source of error in the previous approximation is the independence assumption and not the Gaussian approximation. The relative poorness of the Tracy-Widom approximation was clarified and explained.

## References

* [1] Y. Zeng and Y. C. Liang, "Eigenvalue-based spectrum sensing algorithms for cognitive radio," _IEEE Trans. Commun._, vol. 57, no. 6, pp. 1784-1793, Jun. 2009.
* [2] F. Penna, R. Garello, and M. A. Spirito, "Cooperative spectrum sensing based on the limiting eigenvalue ratio distribution in Wishart matrices," _IEEE Commun. Lett._, vol. 13, no. 7, pp. 507-509, Jul. 2009.
* [3] A. Kortun, T. Ratnarajah, M. Sellathurai, C. Zhong, and C. B. Papadias, "On the performance of eigenvalue-based cooperative spectrum sensing for cognitive radio," _IEEE J. Sel. Topics Signal Process._, vol. 5, no. 2, pp. 49-55, Feb. 2011.
* [4] P. Zhang and R. C. Qiu, "GLRT-based spectrum sensing with blindly learned feature under rank-1 assumption," _IEEE Trans. Commun._, vol. 61, no. 1, pp. 87-96, Jan. 2013.
* [5] A. L. Rao and M. S. Alouini, "Generalized mean detector for collaborative spectrum sensing," _IEEE Trans. Commun._, vol. 12, no. 3, pp. 963-974, Mar. 2013.
* [6] F. F. Gao, C. Qian, H. Qian, and T. Zhao, "Sensing and recognition for multiple-primary-power level scenario with noise uncertainty," _IEEE Trans. Veh. Technol._, vol. 66, no. 3, pp. 2289-2300, Mar. 2017.
* [7] M. Matthaiou, M. R. McKay, P. J. Smith, and J. A. Nossek, "On the condition number distribution of complex Wishart matrices," _IEEE Trans. Commun._, vol. 58, no. 6, pp. 1705-1717, Jun. 2010.
* [8] F. Penna, R. Garello, D. Figlioli, and M. A. Spirito, "Exact non-asymptotic threshold for eigenvalue-based spectrum sensing," in _Proc. IEEE Int. Conf. Cognit. Radio Oriented Wireless Netw. Commun. (CROWNCOM)_, Hannover, Germany, Jun. 2009.
* [9] F. Zhou, N. C. Beaulieu, Z. Li, and J. Si, "Feasibility of maximum eigenvalue cooperative spectrum sensing based on Cholesky factorisation," _IET Commun._, vol. 10, no. 2, pp. 199-206, Feb. 2016.
* [10] C. Miyanaga, Y. Blostein, S. S. D. Kuriki, and X. Shi, "MIMO zero-forcing detection analysis for correlated and estimated Rician fading," _IEEE Trans. Veh. Technol._, vol. 61, no. 7, pp. 3087-3099, 2012.
* [11] S. K. Sharma, S. Chatzinotas, and B. Ottersten, "Eigenvalue-based sensing and SNR estimation for cognitive radio in presence of noise correlation," _IEEE Trans. Veh. Technol._, vol. 62, no. 8, pp. 3671-3684, 2013.
* [12] H. A. David and H. N. Nagaraja, _Order Statistics_, 3rd ed. Hoboken, NJ: Wiley, 2003.
* [13] G. H. Golub and C. F. Van Loan, _Matrix Computations_, 3rd ed. Baltimore, MD: Johns Hopkins University Press, 1996.
# The Intergalactic medium transmission towards z$\gtrsim$4 galaxies with VANDELS and the impact of dust attenuation

(Based on observations made with ESO Telescopes at the La Silla or Paranal Observatories under programme ID(s) 194.A-2003.)

R. Thomas, L. Pentericci, O. Le Fèvre, G. Zamorani, D. Schaerer, R. Amorin, M. Castellano, A. C. Carnall, S. Cristiani, F. Cullen, S. L. Finkelstein, F. Fontanot, L. Guaita, P. Hibon, N. Hathi, J. P. U. Fynbo, Y. Khusanova, A. M. Koekemoer, D. McLeod, R. J. McLure, F. Marchi, L. Pozzetti, A. Saxena, M. Talia, M. Bolzonella

Affiliations:
1. European Southern Observatory, Av. Alonso de Córdova 3107, Vitacura, Santiago, Chile
2. INAF, Osservatorio Astronomico di Roma, via Frascati 33, I-00078 Monteporzio Catone, Italy
3. Aix Marseille Université, CNRS, LAM (Laboratoire d'Astrophysique de Marseille) UMR 7326, 13388, Marseille, France
4. INAF - Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, via Gobetti 93/3, I-40129, Bologna, Italy
5. Observatoire de Genève, Université de Genève, 51 Ch. des Maillettes, 1290 Versoix, Switzerland
6. Instituto de Investigación Multidisciplinar en Ciencia y Tecnología, Universidad de La Serena, Raúl Bitrán 1305, La Serena, Chile
7. Departamento de Física y Astronomía, Universidad de La Serena, Norte, Av. Juan Cisternas 1200, La Serena, Chile
8. SUPA, Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK
9. INAF - Astronomical Observatory of Trieste, via G.B. Tiepolo 11, 34143 Trieste, Italy
10. Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA
11. Instituto de Astrofísica, Universidad Católica de Chile, Vicuña Mackenna 4860, Santiago, Chile
12. Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
13. The Cosmic Dawn Center, Niels Bohr Institute, Copenhagen University, Juliane Maries Vej 30, DK-2100 Copenhagen, Denmark
14. University of Bologna, Department of Physics and Astronomy (DIFA), Via Gobetti 93/2, I-40129, Bologna, Italy

###### Abstract

Aims. Our aim is to estimate the intergalactic medium transmission towards UV-selected star-forming galaxies at redshift 4 and above, and to study the effect of dust attenuation on these measurements.

Methods. The ultraviolet spectrum of high-redshift galaxies is a combination of their intrinsic emission and the effect of inter-galactic medium (IGM) absorption along their line of sight. Using data from the unprecedentedly deep spectroscopy of the VANDELS ESO public survey carried out with the VIMOS instrument, we compute both the dust extinction and the mean transmission of the IGM, as well as its scatter, from a set of 281 galaxies at z>3.87. Because of a degeneracy between the dust content of a galaxy and the IGM, we first estimate the stellar dust extinction parameter E(B-V) and study the result as a function of the dust prescription. Using these measurements as a constraint for the spectral fit, we estimate the IGM transmission Tr(Ly$\alpha$). Both photometric and spectroscopic SED fitting are done using the SPectroscopy And photometRy fiTting tool for Astronomical aNalysis (SPARTAN), which is able to fit the spectral continuum of the galaxies as well as photometric data.

Results. Using the classical Calzetti attenuation law we find that E(B-V) goes from 0.11 at z=3.99 to 0.08 at z=5.15. These results are in very good agreement with previous measurements from the literature. We estimate the IGM transmission and find that it decreases with increasing redshift, from Tr(Ly$\alpha$)=0.53 at z=3.99 to 0.28 at z=5.15. We also find a large standard deviation around the average transmission, of more than 0.1 at every redshift. Our results are in very good agreement with both previous measurements from AGN studies and theoretical models.
###### Key Words.: Extragalactic astronomy – Spectroscopy – High redshift – Intergalactic medium

## 1 Introduction

The observation of distant galaxies necessarily includes the effect of the inter-galactic medium (IGM) along the line of sight (LOS), and its associated extinction. The light coming from those sources travels through clouds lying along the line of sight. As the redshift of the source increases, the clouds along the LOS can be so numerous that all the light below the Lyman $\alpha$ line (at 1216Å, hereafter Ly$\alpha$) can be absorbed. Numerous authors have studied this phenomenon and it is thought to be a natural result of the hierarchical formation of structure (e.g. Cen et al. 1994). More than two decades ago, shortly after a work on the effect of the intergalactic medium on galaxy emission by Yoshii & Peterson (1994), Madau (1995) (hereafter M95) simulated the average IGM transmission as a function of redshift and found that it strongly decreases with increasing redshift. Moreover, the IGM leads to a very specific stair-like pattern where each step corresponds to a line of the Lyman series of the hydrogen atom. In addition, a large scatter was expected; for instance, the transmission at z = 3.5 was estimated to range from 20% to 70% around an average of 40% (M95). A decade later, Meiksin (2006) (hereafter M06) updated this model, producing a new IGM prescription using the $\Lambda$-CDM model of Meiksin & White (2004). It was found that the IGM transmission is higher than that of M95, mainly because of differences in the estimates of the contributions of resonant absorption. More recently, Inoue et al. (2014) developed a new model of transmission. Their model predicts a weaker absorption in the range z=3-5 than the M95 models, while the absorption becomes stronger at z>6. For years, the average transmission (noted Tr(Ly$\alpha$)) has been estimated from Ly$\alpha$ forest measurements on the LOS of QSOs. It is often expressed through the HI effective optical depth $\tau_{eff}$, with Tr(Ly$\alpha$) = $\exp(-\tau_{eff})$, and its measurements are used to constrain the intensity of the ionizing background (Haardt & Madau 1996; Rauch et al. 1997; Bolton et al. 2005) and to investigate the sources responsible for that background. Surprisingly, only a few reports have been published on the observed dispersion in Tr(Ly$\alpha$) as a function of redshift. Faucher-Giguère et al. (2008b) used 86 high-resolution quasar spectra with a high signal-to-noise ratio to provide reference measurements of the dispersion in Tr(Ly$\alpha$) over 2.2 < z < 4.6. Until a few years ago, no observational study had been made of the evolution of the IGM transmission from galaxy samples, mainly because of the lack of large spectroscopic samples with high signal-to-noise ratios at high redshift that would probe a wavelength range significantly bluer than Ly$\alpha$. Hence, the comparison of the IGM transmission towards extended galaxies with that towards point-like QSOs had not yet been performed. In a recent paper (Thomas et al. 2017a) we were able to compute for the first time the IGM transmission towards a set of more than 2000 galaxies (with $\sim$120 of them at z>4) provided by the VIMOS Ultra Deep Survey (VUDS; Le Fèvre et al. 2015).
This study allowed us to show that (i) the IGM transmission towards galaxies is a measurable parameter; (ii) the IGM transmission at $z<4$ is in very good agreement with the one computed towards QSOs, in terms of both the absolute measurements and the scatter around the mean values; (iii) at $z>4$ there might be a departure of the observational data from the theoretical prediction. This observed difference was interpreted as a signature of a degeneracy between the dust and IGM models. In this paper we perform a study of 281 galaxies at z>4 from the very deep VANDELS survey (McLure et al., 2018b; Pentericci et al., 2018) to compute the IGM properties. We therefore have more than twice the number of galaxies we had for the VUDS sample, and with much deeper observations (ranging from 20 to 80 hours, instead of 14h). We also focus on the impact of different dust attenuation prescriptions on the IGM measurements. We describe the VANDELS galaxy sample and selection in Sect. 2. The fitting method with the SPARTAN tool and the range of IGM templates used in the spectral fitting are described in Sect. 3, along with the definition of the Ly$\alpha$ transmission we use in this paper. The estimation of the dust extinction and of the IGM transmission are described in Sect. 4 and 5, respectively. We look at stacked spectra of different populations in Sect. 6. Finally, we discuss the robustness of our results in Sect. 7. All magnitudes are given in the AB system (Oke & Gunn, 1983) and we use a cosmology with $\Omega_{M}$ = 0.3, $\Omega_{\Lambda}$ = 0.7 and h = 0.7.

## 2 Data and sample selection

Our study is based on galaxies from the VANDELS survey. The data sample selection is described in McLure et al. (2018b), while the data reduction and the redshift measurements and validation are described in Pentericci et al. (2018). We briefly present an overview of the survey in this section. VANDELS is a public spectroscopic survey carried out with the VIMOS instrument (Le Fèvre et al., 2003) located at the Nasmyth focus of the Unit Telescope 3 (Melipal) of the Very Large Telescope (VLT). It made use of the medium-resolution grism spanning a wavelength window from 4800 to 10000Å with a spectral resolution of R=580. It targeted $\sim$2100 objects in a wide redshift range (1.0<z<7). Targets were selected in the two widely observed UDS and CDFS fields, covering a total area of 0.2 deg². The primary target selection was performed using the photometric-redshift technique. The reduction of the raw data was carried out using the EASYLIFE package (Garilli et al., 2012) and all redshifts were estimated using the EZ software (Garilli et al., 2010). A redshift flag has been assigned to each redshift measurement. This flag corresponds to the probability of the redshift being correct. The quality scheme is composed of six values. Flags 2, 3, 4 and 9 (for objects with a single emission line) are the most reliable flags, with probabilities of being correct of 75%, 95%, 100% and 80%, respectively. A quality flag of 1 indicates a 50% probability of being correct, while a quality flag of 0 indicates that no redshift could be assigned. At the time of writing, the internal VANDELS database provides 1527 unique sources (with more than 1300 available from the DR2). It gives access to 1-dimensional and 2-dimensional spectra. Photometric data are available for each of the VANDELS galaxies from different ground-based or space-based observatories.
Both fields are partially covered by optical and infrared photometric observations coming from the CANDELS survey with the ACS, WFC3/IR and SPITZER/IRAC instruments (Galametz et al., 2013; Guo et al., 2013). Ground-based data are also available, with optical bands from the Subaru/Suprime-Cam instrument (Furusawa et al., 2008; Cardamone et al., 2010; Furusawa et al., 2016; Sobral et al., 2012), near-infrared bands from the VIRCAM instrument at the VLT (Jarvis et al., 2013) and near-infrared bands from the WIRCam camera of the CFHT (Hsieh et al., 2012). We refer the reader to McLure et al. (2018b) for further details.

The aim of this paper is to study the IGM towards high-redshift galaxies. As presented in Sect. 1, the IGM signature in the spectra of distant galaxies is a stair-like pattern below the Ly$\alpha$ line. The Ly$\alpha$ transmission that we want to estimate is computed between the Ly$\alpha$ position, at 1216Å, and the Ly$\beta$ position, at 1025Å. Therefore we must be able to observe this wavelength domain for our analysis. As the reduction process is sometimes not very efficient at extracting the edges of the spectra, we take a lower limit for our observed window at 5000Å (instead of the nominal 4800Å limit of the medium-resolution grism of VIMOS). This leads to a minimum redshift of z = 3.87. We do not impose, a priori, any threshold on the signal-to-noise ratio (SNR) nor on the redshift flag for our working sample, but we show the distribution of SNR per spectral pixel, measured with the recipe from Stoehr et al. (2008) (the dispersion of VANDELS spectra is $\sim$2.55Å/pix), along with the distribution of apparent magnitude in the i-band, in Fig. 1. This leads to a selected sample of 281 galaxies. In our sample 25 galaxies have a redshift flag of 1, 69 have a redshift flag of 2 or 9, and 185 have a redshift flag of 3 or 4. Therefore 2/3 of our selected sample has an assigned redshift with a probability of being correct higher than 95%. The stability of our results with respect to the choice of redshift flag is discussed in Sect. 7.

Figure 1: Observed properties of our selected sample of galaxies and comparison to the globally released VANDELS data. Left: Apparent magnitude (SExtractor MAG_AUTO) of our data in the i-band. Right: Signal-to-noise measurements; at 1070-1170Å rest-frame in red and at a fixed observed wavelength (6000-7400Å) in purple. In the former case the median SNR is 0.97.

## 3 Method

### 3.1 The SPARTAN tool

To estimate the IGM transmission towards our galaxies we use the SPARTAN tool, which is able to fit both photometric and spectroscopic data. In this paper we use the capability of SPARTAN to fit spectroscopic and photometric datasets separately. This single-data-type fitting follows the same recipe as other codes used in the literature (e.g. Salim et al. 2007, Thomas et al. 2017b). For a given object and a single template, the $\chi^{2}$ and the associated probability are estimated with:

$\chi^{2}=\sum_{i=1}^{N}\frac{(F_{obs,i}-A_{i}F_{syn,i})^{2}}{\sigma_{i}^{2}}\ ;\qquad P=\exp\left[-\frac{1}{2}(\chi^{2}-\chi^{2}_{min})\right]$ (1)

where N, F$_{obs,i}$, F$_{syn,i}$, $\sigma_{i}$, A$_{i}$ and $\chi^{2}_{min}$ stand for the number of observed data points, the flux of the data point itself, the synthetic template value at the same wavelength, the observed error associated with F$_{obs,i}$, the normalization factor applied to the template, and the minimum $\chi^{2}$ of the template library, respectively. The latter is used to set the maximum of the probability distribution to unity.
From the properties of the exponential function this is only a normalization factor and changes neither the values of the parameter estimates nor their errors. The set of probability values (second part of Eq. 1) is then used to create the probability distribution function (PDF, whose integral is normalized to unity) for each parameter to be estimated. From the PDF we create the cumulative distribution function (CDF); the measured value of the parameter is taken where CDF(X)=0.5, and the errors on this measurement correspond to the values of the parameter for which CDF=0.05 and 0.95.

The photometric fitting process is performed as follows. The set of synthetic templates is redshifted to the redshift of the fitted galaxy and then normalized in one pre-defined band; for the photometric fitting performed in this paper this normalization is applied in the i-band. Once this normalization is done, SPARTAN convolves the normalized template with all the photometric bandpasses available for the observed galaxy. Finally, the relations in Eq. 1 are applied to estimate the physical parameters of the observed galaxy and their associated errors.

When dealing with spectroscopic data, the general principle of the fitting process is similar. Nevertheless, this type of data allows for a different normalization method. SPARTAN has to normalize the redshifted template to the observed spectrum. As for the photometry, we can consider a photometric filter and estimate the magnitude in that filter from the spectrum itself; this magnitude then serves as a normalization for all the templates. This approach, widely used in the literature with photometric datasets, relies on a normalization that is always done in a given photometric band (e.g., the i-band). As a result, each galaxy is normalized to the template in a different rest-frame region, and the galaxies are not all treated in the same manner. Spectroscopy opens up a new, redshift-dependent method of normalization. This method uses an emission-line-free region available in the spectrum. In the UV spectrum of distant galaxies, a region free from strong spectral lines lies between 1070 and 1170 Å (rest-frame). When fitting a UV spectrum at z=4.5, this spectral region is shifted to 5885-6435Å. SPARTAN computes a spectro-photometric point in this region, directly in the template and in the data, using a box filter of the size of this region. This box magnitude is then used to normalize the template to the observed spectrum. At higher redshift, e.g. z=5.0, this spectral region falls at redder wavelengths (6420-7020Å) and this observed-frame region is likewise used to perform the normalization. This method of normalization has the merit of being consistent from one object to another. Moreover, as it is applied in an emission-line-free region, it relies less on the emission-line physics of the templates. In this paper we use the latter, redshift-dependent normalization method and we perform the normalization in the region 1070-1170Å (rest-frame). This choice is supported by the following: (i) once redshifted, this region provides a wide window for SNR estimation ($\sim$500Å at z=3.87 and $\sim$750Å at z=6.5); (ii) it is free of strong emission lines; (iii) considering the VIMOS wavelength window, it is one of the only wide-enough wavelength ranges available across the redshift range we consider.
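As an illustration of the fitting logic just described, a minimal sketch of the $\chi^{2}$-to-CDF machinery of Eq. 1. The arrays are hypothetical placeholders (one template per parameter value, on a sorted grid), not SPARTAN's actual internals:

```python
import numpy as np

def estimate_parameter(F_obs, sigma, F_syn_grid, A, param_grid):
    """chi^2 -> probability -> PDF -> CDF estimate of one parameter (Eq. 1).

    F_obs      : observed fluxes, shape (N,)
    sigma      : observed errors, shape (N,)
    F_syn_grid : one template per row, shape (T, N)
    A          : normalization factor of each template, shape (T,)
    param_grid : parameter value attached to each template, shape (T,), increasing
    """
    chi2 = np.sum((F_obs - A[:, None] * F_syn_grid) ** 2 / sigma ** 2, axis=1)
    P = np.exp(-0.5 * (chi2 - chi2.min()))   # peak of the distribution set to unity

    pdf = P / np.trapz(P, param_grid)        # PDF whose integral is normalized to unity
    cdf = np.concatenate(
        [[0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(param_grid))])

    best = np.interp(0.5, cdf, param_grid)   # measured value: CDF = 0.5
    lo = np.interp(0.05, cdf, param_grid)    # error bounds: CDF = 0.05 and 0.95
    hi = np.interp(0.95, cdf, param_grid)
    return best, (lo, hi)
```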
### 3.2 IGM models and Tr(Ly$\alpha$) definition

To estimate the IGM transmission we must be able to fit it. For years, the IGM transmission was fixed to a single value at a given redshift, most often using the M95 model, which provides a single transmission curve at each redshift. It was therefore assumed that, at a given redshift, the lines of sight of objects observed at different positions in the sky are populated by hydrogen clouds with the same properties. In M95, the author provides an estimate of the 1$\sigma$ dispersion and, as mentioned in Sect. 1, it can vary from 20% to 70% at z=3.5. Additionally, it was shown that allowing for this dispersion around the mean IGM could produce better photometric redshifts (Furusawa et al., 2000). We therefore proposed in our previous paper to use a set of empirical models that can reproduce this dispersion in the IGM transmission (Thomas et al. 2017a, hereafter T17). We summarize here how these templates were constructed. To test different lines of sight during the SED fitting we constructed 6 additional templates around the mean of M06. These additional models were built considering the $\pm 1\sigma$ variation of the M95 IGM models (see Fig. 3a of the M95 paper at $z=3.5$), which we propagated to all redshifts. Finally, to explore more possibilities, we created from this $\pm 1\sigma$ variation the $\pm 0.5\sigma$ and $\pm 1.5\sigma$ curves as well. As a result, the IGM can be chosen from a set of 7 discrete values at any redshift, which allows us to use the IGM as a free parameter in our fitting procedure and to explore a larger range of IGM transmissions. At z=3.0 the transmission ranges from 20% to 100%, while at z=5.0 it ranges from 5% to 50%. As an example, we show in Fig. 2 the set of transmission curves at z=4.0.

Figure 2: Example of IGM transmission curves at z=4.0. The red curve is from the M06 prescription, while the black curves represent the augmented prescription from Thomas et al. (2017a). The latter allows, at this redshift, possible transmissions from $\sim$15% to $\sim$90% to be spanned. The grey area shows where we compute the Ly$\alpha$ transmission.

In this paper we aim at computing the Ly$\alpha$ transmission, Tr(Ly$\alpha$), which is defined as the mean transmission between 1070Å and 1170Å computed on the transmission curve itself, shown by the grey area in Fig. 2. In the case of SPARTAN we use the PDF of Tr(Ly$\alpha$) to estimate the value of the parameter, as described in Sect. 3.1. Finally, we emphasize that the IGM models we use here, while based on simulations, are also empirical (through the additional curves we use). More recent models include more components in the simulations, such as the CGM (Steidel et al., 2018; Kakiichi & Dijkstra, 2018); we will compare these different prescriptions in a future paper. It is also worth noting that the general shape of the curves is the same from one model to another, while it can vary from one line of sight to another depending on the presence of absorbers. In T17 we compared the results of the fit using the templates presented here and real Lyman $\alpha$ forest simulations (Bautista et al., 2015), and found very good agreement in the resulting measurements of the Ly$\alpha$ transmission.
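To make the definition concrete, a minimal sketch of measuring Tr(Ly$\alpha$) from a transmission curve; the curve used here is a hypothetical placeholder, not one of the actual templates:

```python
import numpy as np

def tr_lyalpha(wave_rest, transmission):
    """Mean IGM transmission between 1070 and 1170 A (rest-frame),
    computed on the transmission curve itself (grey area of Fig. 2)."""
    window = (wave_rest >= 1070.0) & (wave_rest <= 1170.0)
    return transmission[window].mean()

# Hypothetical transmission curve sampled every Angstrom:
wave = np.arange(900.0, 1250.0, 1.0)
curve = np.clip(1.0 - 0.5 * np.exp(-((wave - 1100.0) / 150.0) ** 2), 0.0, 1.0)
print(tr_lyalpha(wave, curve))
```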
## 4 Dust content of z>4 galaxies

Figure 3: Dust evolution from our 281 galaxies. Top Panel: The four different dust prescriptions used to estimate the dust extinction in our sample. In red we show the prescription from Calzetti et al. (2000), in black the prescription from Prevot et al. (1984) for the SMC, in violet the prescription from Fitzpatrick (1986) for the LMC and in green the Milky Way prescription by Allen (1976). Bottom Panel: Evolution with redshift of the dust attenuation in our selected sample of 281 galaxies from the photometric fitting, for the four dust prescriptions shown in the Top Panel. Measurements report the mean and median absolute deviation for both the redshift and the E(B-V) values. We compare our results with previous measurements found in the literature at similar redshifts. The empty black diamonds are estimations from Bouwens et al. (2009), black triangles from Bouwens et al. (2007) and light blue crosses from Ouchi et al. (2004). The dashed black line shows a fit from Hayes et al. (2011). Note: all the VANDELS estimations are at the same redshifts; the violet ones have been slightly shifted for clarity.

In Thomas et al. (2017a) we identified a potentially strong degeneracy between the estimates of the dust content of the galaxy and the IGM transmission prescription. This degeneracy is more prominent at z>4. In other words, the same data can be fitted with high values of both dust extinction and IGM transmission, or with lower values for both parameters. This is due to the small wavelength range available for fitting from UV spectra, which is not able to constrain the dust content of the galaxy. In order to address this problem in the present work, we measure the IGM transmission in a two-step process. First, we estimate the dust content of each galaxy in our sample using the photometric data presented in Sect. 2. We fit the SED over a broader wavelength range than the spectra, including NIR data, which provides robust constraints on the dust extinction. Then, we estimate Tr(Ly$\alpha$), keeping the E(B-V) value fixed to the one measured during the photometric-fitting process. For this two-step fitting process we use the following parameter space. We use Bruzual & Charlot (2003) models with a Chabrier (2003) initial mass function. The stellar-phase metallicity ranges from sub-solar (0.2$Z_{\odot}$ and 0.4$Z_{\odot}$) to solar (1.0$Z_{\odot}$). We assume an exponentially delayed star formation history, with a timescale parameter, $\tau$, ranging from 0.1 Gyr to 1.0 Gyr. The ages range from 1 Myr to 3 Gyr in 24 steps. It is worth noting that this age range is further limited by the age of the Universe at the redshift considered during the fit. The emission lines are added to the template following Schaerer & de Barros (2009), adding nebular continuum and emission using the conversion from ionizing photons into H$\beta$ luminosity. Other emission lines are then added using line ratios from Anders & Fritze-v. Alvensleben (2003). The goal of this first step is to estimate the dust extinction. Therefore, we use in this section four different dust extinction curves (http://webast.ast.obs-mip.fr/hyperz/hyperz_manual1/node10.html): the classical starburst galaxy prescription from Calzetti et al. (2000) (hereafter SB) with an extrapolation, the Small Magellanic Cloud prescription (Prevot et al. 1984, hereafter SMC), the prescription for the Large Magellanic Cloud (Fitzpatrick 1986, hereafter LMC) and finally the prescription for the Milky Way (Allen 1976, hereafter MW). All the curves are presented in Fig. 3 (top panel). For the photometric fitting, the E(B-V) parameter can vary from 0.0 to 0.39 (in 0.03 steps). Finally, the IGM prescription uses the models developed in Thomas et al. (2017a) based on the M06 models (see previous section). The redshift used for this fitting is the spectroscopic redshift, $z_{spec}$.
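To make the role of E(B-V) concrete, here is a minimal sketch of how a template would be reddened with the SB curve (Calzetti et al. 2000); the function names are ours and SPARTAN's internal implementation may differ:

```python
import numpy as np

def calzetti_k(wave_um):
    """Calzetti et al. (2000) starburst curve k(lambda); wavelength in microns.
    Nominally valid over 0.12-2.2 um; shorter wavelengths require extrapolation."""
    w = np.asarray(wave_um, dtype=float)
    return np.where(
        w < 0.63,
        2.659 * (-2.156 + 1.509 / w - 0.198 / w**2 + 0.011 / w**3) + 4.05,
        2.659 * (-1.857 + 1.040 / w) + 4.05,
    )

def redden(wave_aa, flux, ebv):
    """Apply F_att = F * 10^(-0.4 k(lambda) E(B-V)) to a rest-frame template."""
    return flux * 10.0 ** (-0.4 * calzetti_k(wave_aa / 1e4) * ebv)
```

Each trial E(B-V) on the 0.0-0.39 grid would correspond to one such reddening of the template before it is redshifted and compared with the data.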
Finally, it is worth noting that during this fitting process the IGM transmission is also estimated. These IGM measurements from the pure photometric fitting are discussed in Sect. 7.

Table 1: Measurements of E(B-V) from the fit of the photometry only, using four different dust prescriptions. Each value represents the mean in each redshift bin and the error is the median absolute deviation.

<z> | MW | LMC | SMC | SB
---|---|---|---|---
3.99 | 0.14$\pm$0.06 | 0.11$\pm$0.06 | 0.04$\pm$0.03 | 0.11$\pm$0.06
4.23 | 0.11$\pm$0.06 | 0.09$\pm$0.06 | 0.03$\pm$0.03 | 0.10$\pm$0.05
4.59 | 0.12$\pm$0.06 | 0.08$\pm$0.03 | 0.04$\pm$0.03 | 0.11$\pm$0.06
5.15 | 0.08$\pm$0.06 | 0.05$\pm$0.03 | 0.03$\pm$0.03 | 0.08$\pm$0.06

The dust extinction measurements can be seen in Fig. 3 (bottom panel), which shows the evolution of the dust attenuation with redshift for our 281 selected galaxies; in Tab. 1 we report the measurements. Using the SB prescription we report mean values of E(B-V)=0.11; 0.10; 0.10 and 0.08 at z=3.99; 4.23; 4.59 and 5.15, respectively. Using the LMC curve the measurements are very similar and we obtain 0.11, 0.09, 0.08 and 0.05 at the same redshifts. The SMC prescription leads to a much weaker extinction at any redshift, with 0.04, 0.03, 0.03 and 0.02, while the Milky Way extinction (MW) leads to slightly higher values of E(B-V), with 0.14, 0.11, 0.12 and 0.08. This is easily explained by the fact that, for a fixed value of E(B-V), the SMC curve leads to a much stronger extinction when studying UV rest-frame galaxies (below 2000 Å), while the MW curve gives lower extinctions. The measurements obtained using SB, LMC and MW are in good agreement with previous estimations at similar redshifts. Studying dropout-selected galaxies, Bouwens et al. (2009) report an E(B-V) measurement of 0.14 at z$\sim$3.8 and E(B-V)=0.095 at z=5.0. At similar redshifts, Ouchi et al. (2004) used Lyman break galaxies between 3.5<z<5.2 and measured E(B-V)=0.075 at z=4.7. It is worth mentioning that SB-like laws have been supported by other studies in the literature (e.g., McLure et al. 2018a). On the contrary, the measurements using SMC are in strong disagreement with previous measurements from the literature, as reported in Scoville et al. (2015) and Fudamoto et al. (2017). In the rest of the paper we will use both the SB and MW models (LMC being very close to SB) and see how this choice influences the measurements of the IGM.

## 5 Tr(Ly$\alpha$) towards z>4 galaxies

Figure 4: Examples of fits of VANDELS galaxies at redshift $z>4$. In each plot the spectrum is shown by the black line while the best fit, produced by SPARTAN, is given in red. We indicate the position of the Ly$\alpha$ line and the redshift for each spectrum.

In the last section we measured the dust content of our galaxies. We now move to the estimation of the IGM transmission with the spectral fit. In order to estimate it, we constrain the spectral fit of our VANDELS spectroscopic data by fixing the E(B-V) to the value measured during the fit of the photometric data. We consider the individual E(B-V) value of each of our galaxies and do not use the average values presented in Fig. 3. The other parameters, such as age or metallicity, are still free to vary, and the parameter ranges correspond to those of the fit of Sect. 4. Examples of spectral fits of VANDELS galaxies at various redshifts are presented in Fig. 4; they show that SPARTAN reproduces very well the UV continuum of our galaxies at all wavelengths.
It is worth mentioning that the Ly$\alpha$ line is poorly reproduced. As presented in the previous section, the emission lines are added using line ratios and are therefore not fitted individually. Our results remain unchanged if we mask out the line during the fit.

Table 2: Measurements of Tr(Ly$\alpha$) from our study. We report in this table the redshift, Tr(Ly$\alpha$), the standard deviation, the standard error, Tr(Ly$\alpha$) for redshift flags 2, 3 and 4 only (with the number of galaxies with these flags in parenthesis) and the measurements of the same quantity from the pure photometric fitting.

Redshift | Ngal | Tr(Ly$\alpha$) | Std deviation | Standard error | Flag 2,3&4 | Photometry
---|---|---|---|---|---|---
3.99 | 81 | 0.55 | 0.14 | 0.015 | 0.54 (58) | 0.41
4.23 | 74 | 0.49 | 0.14 | 0.016 | 0.48 (50) | 0.24
4.59 | 72 | 0.42 | 0.13 | 0.015 | 0.41 (54) | 0.34
5.15 | 54 | 0.29 | 0.11 | 0.015 | 0.30 (23) | 0.26

Results on the measurement of Tr(Ly$\alpha$) are presented in Fig. 5, where we show the distributions of the Lyman $\alpha$ transmission in four redshift bins: 3.85<z<4.1, 4.1<z<4.4, 4.4<z<4.8 and z>4.8. We also display the evolution of this quantity with redshift as compared with previous measurements in the literature (we use uneven binning to ensure a maximum number of galaxies in each bin). Table 2 provides the measurements for each bin.

Figure 5: Lyman $\alpha$ transmission (Tr(Ly$\alpha$)) as a function of redshift. The four top small plots show the distributions of the transmission in the four redshift bins 3.85<z<4.1, 4.1<z<4.4, 4.4<z<4.8 and z>4.8 for each dust prescription. In each of these plots we indicate the number of galaxies entering the distribution. The bottom plot displays the evolution of the transmission with redshift. Our measurements are indicated in red for SB, green for MW and violet for LMC. We show the measurements from QSOs in blue from Becker et al. (2013), from the VIMOS Ultra Deep Survey (VUDS; Thomas et al. 2017a, in black) and the theoretical prediction from Meiksin (2006), represented with the black dashed line.

We estimated the Ly$\alpha$ transmission in these four redshift bins. Using the SB dust attenuation, this quantity goes from Tr(Ly$\alpha$)=0.55 at z=3.99, with a large standard deviation of 0.14, to Tr(Ly$\alpha$)=0.29 at z=5.15, with a standard deviation of 0.11. The standard error of the mean is small and is below 0.02 at any redshift. The measurements using the MW dust curve are very similar. These measurements are comparable to measurements done with QSOs at similar redshifts. Becker et al. (2013) measured Tr(Ly$\alpha$)=0.59 at z=3.70 and Tr(Ly$\alpha$)$\sim$0.35 at z$\sim$4.8. This shows that even at high redshift we are able to reproduce equivalent measurements with galaxies. Comparing our results to theoretical predictions, we find good agreement with the M06 model, which predicts Tr(Ly$\alpha$)=0.39 at z=4.6 and Tr(Ly$\alpha$)=0.25 at z=5.15. We note that our measurements are in partial disagreement with our previous measurements at z>4 from the VUDS galaxies. At z=4.23, the difference from our previous measurement is more than 10%, and it reaches 20% at z>4.5. Nevertheless, as reported in Thomas et al. (2017a), these high values of Tr(Ly$\alpha$) could be corrected by limiting the E(B-V) to low values. Thus, the method we employ in the present paper, with an estimation of the E(B-V) value before the spectral fit, seems to correct for this degeneracy.
More importantly, we report a large standard deviation of Tr(Ly$\alpha$) for all our measured points. It goes from 0.14 at z=3.99 to 0.11 at z=5.15. This is in good agreement with our previous study and confirms that the IGM should be treated as a free parameter during the fitting process. Surprisingly, we find that the measurements using the LMC prescription are above the other ones, with a difference that peaks at +0.08 at z=4.23, while the highest redshift point is in good agreement with the other dust solutions. We investigate these differences in the next section.

## 6 Averaged spectra

Finally, we have a look at averaged spectra of the population of our selected galaxies. We build two stacked spectra based on the IGM transmission measured in our VANDELS galaxies: one where we select all the galaxies with a transmission higher than the mean curve given by M06, and one where all the galaxies have a transmission lower than the mean curve. Each stacked spectrum is constructed using the specstack program (https://specstack.readthedocs.io/en/latest/; Thomas, 2019b), which works as follows (a sketch is given at the end of this section). For a given stack we de-redshift all the individual spectra and normalize them in a region red-ward of the Ly$\alpha$ line free of emission or absorption lines, in this case between the SiII ($\lambda 1260$) and OI ($\lambda 1303$) absorption lines. Then we re-grid all the spectra onto a common wavelength grid. Finally, at a given wavelength, we compute the mean of all the fluxes using a sigma-clipping method (at 3$\sigma$).

Figure 6: Averaged spectra. We display two stacked spectra. In blue we show the average of all the spectra (79) with IGM transmission lower than the mean; the mean redshift of these spectra is $\sim$4.36. In red we show the average of all the spectra (101) with IGM transmission higher than the mean, with a mean redshift of $\sim$4.60. The stacked spectra have been made with the specstack program (Thomas, 2019b).

The averaged spectra are presented in Fig. 6 (using the fits made with SB). This figure shows that below Ly$\alpha$ at 1215$\mathrm{\AA}$ there is a non-negligible variation in flux. The low-transmission stacked spectrum is on average $\sim$30% dimmer than the stacked spectrum with high IGM transmission. This means that at z$>$4 the dispersion of the IGM is very large, and that the IGM transmission should be treated as a free parameter when studying galaxies at such high redshifts. It is also worth mentioning that the average redshift of the two stacks is slightly different. For the stack with IGM below the mean, the average redshift is $z\sim 4.36$, while it is $z\sim 4.60$ for the galaxies with an IGM higher than the mean. Consequently, the difference might be even higher than what we measure here. Finally, the figure shows that the spectra beyond the Ly$\alpha$ line are very similar. A few absorption lines are stronger in the case of high IGM transmission (e.g., OI and SiIV) but others present similar strengths (e.g., SiII and CII); it is therefore delicate to draw conclusions on this aspect.
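A minimal sketch of the stacking steps described above (our illustration; the exact normalization bounds inside the SiII-OI window and specstack's internals may differ):

```python
import numpy as np

def stack(spectra, zs, grid, norm_win=(1270.0, 1290.0), clip=3.0):
    """De-redshift, normalize in a line-free window, regrid, sigma-clipped mean."""
    rebinned = []
    for (wave, flux), z in zip(spectra, zs):
        wrest = wave / (1.0 + z)                              # de-redshift
        win = (wrest > norm_win[0]) & (wrest < norm_win[1])
        rebinned.append(np.interp(grid, wrest, flux / np.median(flux[win])))
    cube = np.vstack(rebinned)
    mu, sd = cube.mean(axis=0), cube.std(axis=0)
    cube = np.where(np.abs(cube - mu) > clip * sd, np.nan, cube)  # 3-sigma clip
    return np.nanmean(cube, axis=0)
```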
## 7 Discussion

### 7.1 Flag system and flux calibration

As mentioned in Sect. 2, we did not take into account the flag system when selecting our galaxies. However, we checked whether our results are affected by the presence of lower-quality redshift measurements that could potentially indicate the presence of low-redshift interlopers. We removed the redshift flags 1 and 9; the results are reported in Table 2 and displayed in Fig. 7. The only notable difference is the last point, which moves to a slightly lower redshift, z=4.98 instead of z=5.15, indicating that the highest measured redshifts have a lower quality than the lower-redshift sample. This is because the redshift is often measured from the presence of the Ly$\alpha$ line alone, which leads to a redshift quality flag of 9. For the other points, the change in Tr(Ly$\alpha$) is less than 0.01, which represents a difference of less than 2%. We conclude that including the lower-quality redshift flags has almost no impact on the global result of our study.

As reported in Pentericci et al. (2018), the very bluest part of the VANDELS spectra suffers from a systematic mismatch with the broad-band photometry available for the sources. The underlying cause is still under investigation but for the moment (and at the time of DR1 and DR2) an empirically derived correction to the spectra has been implemented. This effect could in principle be relevant for the objects belonging to the first redshift bin. For this reason we repeated the same measurements in the first two bins using uncorrected spectra and measured an IGM transmission of 0.55 at z=3.99 with a standard deviation of 0.16, and 0.51 at z=4.23 with a standard deviation of 0.16 as well. This shows that the effect of the spectral correction is negligible (lower than 5%).

Figure 7: Intergalactic medium transmission Tr(Ly$\alpha$) as a function of redshift from different estimations. In red we show our final results, as presented in Fig. 5; in blue we show the evolution of Tr(Ly$\alpha$) for galaxies with a redshift flag of 3 or 4 only; and in black we show the results from the photometric fit only.

### 7.2 Discussion on the method

Another measurement we performed is that of the IGM transmission from the first-pass photometric fitting of Sect. 4 (with a free dust extinction parameter). Results are reported in Fig. 7 and Tab. 2. As expected, these measurements are in strong disagreement with the results from the spectral fit. The difference is on average -0.11, towards low Tr(Ly$\alpha$) values, and it can reach up to 0.24 at z=4.23. This shows that the use of photometric data alone to constrain the IGM transmission is not efficient. Photometric data points are less numerous and we have access to fewer bands to constrain the fit. Indeed, photometry provides us with a data point every 500 or 1000 Å. Spectroscopy, on the contrary, brings much stronger constraints, with one data point every $\sim$4 Å.

We finally test the difference in the dust extinction estimation (E(B-V), using SB) and in Tr(Ly$\alpha$) if we do not fix the dust extinction during the spectral fit. Leaving it free, the dust extinction that we measure increases with respect to the photometric fitting of Sect. 4. The measurements of E(B-V) give 0.12, 0.13, 0.15 and 0.12 at z=3.99, 4.23, 4.59 and z=5.15. While still within the dispersion, this corresponds to a change of 10% at z=4.05 and of more than 50% at z=5.15. When looking at Tr(Ly$\alpha$), the measurements are slightly different from the main results of our paper. The first point at z=3.99 remains the same, while the second and third measurements are higher when leaving the dust free, with Tr(Ly$\alpha$)=0.51 at z=4.23 and Tr(Ly$\alpha$)=0.44 at z=4.59. This corresponds to differences of 0.02-0.03. The strongest difference is for the last point, where $\Delta$Tr(Ly$\alpha$)=0.05. This behaviour is expected: if the dust content goes toward higher E(B-V) values (i.e.
more extinction), the IGM transmission must compensate for this extinction by going toward higher Tr(Ly$\alpha$) values (higher transmission). This behaviour was already noted in our previous study.

### 7.3 IGM template resampling

In the work presented previously we used 7 IGM transmission curves at any redshift. We now want to know whether the sampling of our prescription has an influence on the final measurements. To this aim, we created a new IGM prescription composed not of 7 possible transmissions but of 31 transmission templates. We keep the same range but add intermediate curves, at multiples of 0.1$\sigma$ from 0.1 to 1.5$\sigma$. The transmission curves at z=4.0 can be seen in Fig. 8 (top).

Figure 8: Top: Intergalactic medium transmission at z=4 in the case where we finely sample the IGM templates and consider 31 transmission curves instead of 7. The blue curve shows the average M06 prescription and the grey region locates the region where we measure Tr(Ly$\alpha$). Bottom: Comparison of the Tr(Ly$\alpha$) in the case where we use 7 curves or 31 curves.

Using this finely-sampled prescription, we recompute the dust extinction and the IGM transmission using all three dust prescriptions. The results are displayed in Fig. 8, where we compare the results of the fit with the 7-curve prescription and those obtained with the 31-curve version of the IGM prescription. This comparison shows that the difference is minimal. Using the SB dust attenuation, the choice of prescription has no effect and the difference is less than 0.1%. For the two other prescriptions, the main difference is for the point at the lowest redshift, where the difference reaches 4% for the LMC and 5% for the MW prescription. We can conclude that the 7-curve prescription is detailed enough, and adding more curves does not substantially change the results.
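As an illustration, if the $\pm k\sigma$ curves are assumed to scale linearly with $k$ between the mean curve and its $+1\sigma$ envelope (an assumption of ours; T17 describe the actual construction), the 31-curve set could be generated as:

```python
import numpy as np

def resample_igm(t_mean, t_1sigma, n_per_sigma=10):
    """Build transmission curves at multiples of 0.1 sigma, from -1.5 to +1.5."""
    delta = t_1sigma - t_mean                 # +1 sigma deviation per pixel
    ks = np.arange(-15, 16) / n_per_sigma     # -1.5 ... +1.5 in 0.1 steps (31 curves)
    return [np.clip(t_mean + k * delta, 0.0, 1.0) for k in ks]
```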
## 8 Conclusion

This paper reports a study of the intergalactic medium transmission Tr(Ly$\alpha$) at z>4. We measured the IGM transmission from the spectra of 281 galaxies coming from the VANDELS public survey carried out with the VIMOS instrument at the VLT. Galaxies have been observed for up to $\sim$80h, thus providing unprecedented spectral depth. Using a previously published IGM transmission prescription for template-fitting studies, we used the SPARTAN fitting tool to compute the IGM transmission. We summarize our results below:

* • In order to tackle the dust-IGM degeneracy discovered in a previous study, we first estimated the dust content of our galaxies with a pure photometric fitting technique. We estimated the mean E(B-V) at z>4 and found that it ranges from 0.11 at z=3.99 to 0.08 at z=5.15. These measurements are similar to previous measurements reported in the literature.
* • Using the individual measurements of E(B-V) as constraints, we used the SPARTAN software to perform the spectral fitting of our galaxies. From this fitting we extract the values of Tr(Ly$\alpha$) at various redshifts. It decreases from Tr(Ly$\alpha$)=0.53 at z=3.99 to Tr(Ly$\alpha$)=0.28 at z=5.15. These results match very well the measurements from QSO studies and the theoretical predictions. This reinforces the fact that high-redshift galaxies can be used to estimate the IGM transmission.
* • Even more importantly, the $1\sigma$ scatter of Tr(Ly$\alpha$) is large at any redshift. It is higher than 0.1 and equivalent to the standard deviation reported from QSO data.
* • As expected, we find that the IGM transmission measurements are sensitive to the choice of dust attenuation prescription.
* • We test whether our results are sensitive to the redshift flag system in place in VANDELS and find that the differences are minimal.
* • Due to a lack of observational constraints, the measurements coming from a pure photometric fitting are not able to reproduce the results from the spectral fitting and from the literature.
* • Finally, we compute the IGM transmission leaving the dust extinction as a free parameter and confirm the presence of the dust/IGM degeneracy. In that case, the dust extinction goes toward higher values that are in tension with measurements from the literature. This is then compensated by a higher IGM transmission.

Finally, it is worth reminding that there are multiple IGM models in the literature and they should be tested against real data. High-redshift data samples are getting large enough to make statistically significant tests of these models. This will be studied in a paper in preparation.

###### Acknowledgements.

The authors wish to thank the referee, who provided us with very insightful comments that greatly improved the paper.

## References

* Allen (1976) Allen, C. W. 1976, Astrophysical Quantities * Anders & Fritze-v. Alvensleben (2003) Anders, P. & Fritze-v. Alvensleben, U. 2003, A&A, 401, 1063 * Bautista et al. (2015) Bautista, J. E., Bailey, S., Font-Ribera, A., et al. 2015, J. Cosmology Astropart. Phys., 2015, 060 * Becker et al. (2013) Becker, G. D., Hewett, P. C., Worseck, G., & Prochaska, J. X. 2013, MNRAS, 430, 2067 * Bouwens et al. (2009) Bouwens, R. J., Illingworth, G. D., Franx, M., et al. 2009, ApJ, 705, 936 * Bouwens et al. (2007) Bouwens, R. J., Illingworth, G. D., Franx, M., & Ford, H. 2007, ApJ, 670, 928 * Bruzual & Charlot (2003) Bruzual, G. & Charlot, S. 2003, MNRAS, 344, 1000 * Calzetti et al. (2000) Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, ApJ, 533, 682 * Cardamone et al. (2010) Cardamone, C. N., van Dokkum, P. G., Urry, C. M., et al. 2010, The Astrophysical Journal Supplement Series, 189, 270 * Cen et al. (1994) Cen, R., Miralda-Escudé, J., Ostriker, J. P., & Rauch, M. 1994, ApJ, 437, L9 * Chabrier (2003) Chabrier, G. 2003, PASP, 115, 763 * Fitzpatrick (1986) Fitzpatrick, E. L. 1986, AJ, 92, 1068 * Fudamoto et al. (2017) Fudamoto, Y., Oesch, P. A., Schinnerer, E., et al. 2017, MNRAS, 472, 483 * Furusawa et al. (2016) Furusawa, H., Kashikawa, N., Kobayashi, M. A. R., et al. 2016, ApJ, 822, 46 * Furusawa et al. (2008) Furusawa, H., Kosugi, G., Akiyama, M., et al. 2008, The Astrophysical Journal Supplement Series, 176, 1 * Furusawa et al. (2000) Furusawa, H., Shimasaku, K., Doi, M., & Okamura, S. 2000, ApJ, 534, 624 * Galametz et al. (2013) Galametz, A., Grazian, A., Fontana, A., et al. 2013, The Astrophysical Journal Supplement Series, 206, 10 * Garilli et al. (2010) Garilli, B., Fumana, M., Franzetti, P., et al. 2010, PASP, 122, 827 * Garilli et al. (2012) Garilli, B., Paioro, L., Scodeggio, M., et al. 2012, PASP, 124, 1232 * Guo et al. (2013) Guo, Y., Ferguson, H. C., Giavalisco, M., et al. 2013, The Astrophysical Journal Supplement Series, 207, 24 * Hayes et al. (2011) Hayes, M., Schaerer, D., Östlin, G., et al. 2011, ApJ, 730, 8 * Hsieh et al. (2012) Hsieh, B.-C., Wang, W.-H., Hsieh, C.-C., et al. 2012, The Astrophysical Journal Supplement Series, 203, 23 * Inoue et al. (2014) Inoue, A. K., Shimizu, I., Iwata, I., & Tanaka, M. 2014, MNRAS, 442, 1805 * Jarvis et al. (2013) Jarvis, M. J., Bonfield, D. G., Bruce, V. A., et al. 2013, MNRAS, 428, 1281 * Kakiichi & Dijkstra (2018) Kakiichi, K. & Dijkstra, M.
2018, MNRAS, 480, 5140 * Le Fèvre et al. (2003) Le Fèvre, O., Saisse, M., Mancini, D., et al. 2003, in Proc. SPIE, Vol. 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes, ed. M. Iye & A. F. M. Moorwood, 1670–1681 * Le Fèvre et al. (2015) Le Fèvre, O., Tasca, L. A. M., Cassata, P., et al. 2015, A&A, 576, A79 * Madau (1995) Madau, P. 1995, ApJ, 441, 18 * McLure et al. (2018a) McLure, R. J., Dunlop, J. S., Cullen, F., et al. 2018a, MNRAS, 476, 3991 * McLure et al. (2018b) McLure, R. J., Pentericci, L., Cimatti, A., et al. 2018b, MNRAS, 479, 25 * Meiksin (2006) Meiksin, A. 2006, MNRAS, 365, 807 * Meiksin & White (2004) Meiksin, A. & White, M. 2004, MNRAS, 350, 1107 * Oke & Gunn (1983) Oke, J. B. & Gunn, J. E. 1983, ApJ, 266, 713 * Ouchi et al. (2004) Ouchi, M., Shimasaku, K., Okamura, S., et al. 2004, ApJ, 611, 660 * Pentericci et al. (2018) Pentericci, L., McLure, R. J., Garilli, B., et al. 2018, A&A, 616, A174 * Prevot et al. (1984) Prevot, M. L., Lequeux, J., Maurice, E., Prevot, L., & Rocca-Volmerange, B. 1984, A&A, 132, 389 * Salim et al. (2007) Salim, S., Rich, R. M., Charlot, S., et al. 2007, The Astrophysical Journal Supplement Series, 173, 267 * Schaerer & de Barros (2009) Schaerer, D. & de Barros, S. 2009, A&A, 502, 423 * Scoville et al. (2015) Scoville, N., Faisst, A., Capak, P., et al. 2015, ApJ, 800, 108 * Sobral et al. (2012) Sobral, D., Best, P. N., Matsuda, Y., et al. 2012, MNRAS, 420, 1926 * Steidel et al. (2018) Steidel, C. C., Bogosavljević, M., Shapley, A. E., et al. 2018, ApJ, 869, 123 * Stoehr et al. (2008) Stoehr, F., White, R., Smith, M., et al. 2008, Astronomical Society of the Pacific Conference Series, Vol. 394, DER_SNR: A Simple & General Spectroscopic Signal-to-Noise Measurement Algorithm, ed. R. W. Argyle, P. S. Bunclark, & J. R. Lewis, 505 * Thomas (2019a) Thomas, R. 2019a, Astrophysics Source Code Library, [record ascl:1901.007] * Thomas (2019b) Thomas, R. 2019b, Astrophysics Source Code Library, [record ascl:1904.018] * Thomas (2019a) Thomas, R. 2019a, CatMatch v1.3, 10.5281/zenodo.2626564 * Thomas (2019b) Thomas, R. 2019b, catscii v1.2, 10.5281/zenodo.2587874 * Thomas (2019) Thomas, R. 2019, The Journal of Open Source Software, 4, 1259 * Thomas et al. (2017a) Thomas, R., Le Fèvre, O., Le Brun, V., et al. 2017a, A&A, 597, A88 * Thomas et al. (2017b) Thomas, R., Le Fèvre, O., Scodeggio, M., et al. 2017b, A&A, 602, A35 * Yoshii & Peterson (1994) Yoshii, Y. & Peterson, B. A. 1994, ApJ, 436, 551

## Appendix A Reproducibility

Reproducibility has become a crucial aspect of modern research with the use of software and code. Sharing code and methods in papers is as important as sharing results. In this appendix we address this aspect. Table 3 lists the availability of the data-related and technique-related aspects of our work. Each point is detailed below.

Table 3: Summary of the reproducibility of this work

 | Public | Partial | Private
---|---|---|---
VANDELS Data | $\surd$ | $\chi$ | $\chi$
SPARTAN-tool | $\surd$ | $\chi$ | $\chi$
Spectral measurements | $\surd$ | $\chi$ | $\chi$
Results | $\surd$ | $\chi$ | $\chi$
Plotting tool: Photon | $\surd$ | $\chi$ | $\chi$
fits file library: dfitspy | $\surd$ | $\chi$ | $\chi$

* • As presented in Sec. 2, the VANDELS survey is a public spectroscopic survey. As such, all the data are publicly and freely available from the ESO archive facility (http://archive.eso.org/cms.html).
* • The SPARTAN tool is available on GitHub and comes with all the inputs needed to run the code. The version released at this moment allows separate fits of the photometry and of the spectroscopy, as used in this paper. The version used in this paper is version 0.4.4 (https://astrom-tom.github.io/SPARTAN/build/html/index.html). The final version will be presented in a paper in preparation (Thomas et al., in prep).
* • In addition, the main Python packages used during this work are public: the catalogue query module catscii (v1.2, Thomas 2019b), the catalogue matching algorithm catmatch (v1.3, Thomas 2019a), our fits display library dfitspy (v19.3.4, Thomas 2019), the spectrum stacking program specstack (v19.4, Thomas 2019b) and our plotting tool, Photon (v0.3.2, Thomas 2019a). They are all available in the main Python Package Index repository (PyPI).
# Series expansions for the Riemann zeta function

Alexey Kuznetsov, Department of Mathematics and Statistics, York University, Toronto, Ontario, M3J1P3, Canada. <EMAIL_ADDRESS>

###### Abstract.

We prove a general result on representing the Riemann zeta function as a convergent infinite series in a complex vertical strip containing the critical line. This result can be used to re-derive known expansions as well as to discover new series representations of the Riemann zeta function in terms of the incomplete gamma functions, generalized hypergeometric functions and Meijer $G$-functions.

Research supported by the Natural Sciences and Engineering Research Council of Canada. _To the memory of Richard Paris_

Keywords: Riemann zeta function, series expansion, Mellin transform, incomplete gamma function, generalized hypergeometric function, Meijer $G$-function

2020 Mathematics Subject Classification: 11M06, 41A58

## 1. Introduction

The asymptotics of the Riemann zeta function on the critical line was a topic that greatly interested Richard Paris. He wrote the first paper [8] on this subject in 1994. This was followed by a series of papers [9, 10, 13, 14] in the 1990s, two of them co-authored with his PhD student Shuang Cang. Richard returned to this topic in 2009 [11] and, very recently, in 2022 [12]. To explain the motivation for his work on the Riemann zeta function, we need to start with the Riemann–Siegel formula (see [15][Section 4.17]): (1) $\displaystyle Z(t):=e^{{\textnormal{i}}\theta(t)}\zeta\big{(}\tfrac{1}{2}+{\textnormal{i}}t\big{)}$ $\displaystyle=2\sum\limits_{n=1}^{N_{t}}\frac{\cos(\theta(t)-t\ln(n))}{\sqrt{n}}$ $\displaystyle+(-1)^{N_{t}-1}\Big{(}\frac{2\pi}{t}\Big{)}^{\frac{1}{4}}\frac{\cos(\pi\tau^{2}/2+3\pi/8)}{\cos(\pi\tau)}+O(t^{-3/4}),\;\;\;t\to+\infty.$ Here $\zeta(s)$ is the Riemann zeta function, $\theta$ is defined via $e^{{\textnormal{i}}\theta(t)}:=\Big{[}\frac{\Gamma(1/4+{\textnormal{i}}t/2)}{\Gamma(1/4-{\textnormal{i}}t/2)}\Big{]}^{1/2}\pi^{-{\textnormal{i}}t/2},$ and $N_{t}:=\lfloor\sqrt{t/(2\pi)}\rfloor$ and $\tau:=1+2(N_{t}-\sqrt{t/(2\pi)})$. The function $Z(t)$ is even, it is real for real values of $t$ and satisfies $|Z(t)|=|\zeta(1/2+{\textnormal{i}}t)|$, which makes it a convenient tool for detecting zeros of $\zeta(s)$ on the critical line $\textnormal{Re}(s)=1/2$. The sum in (1) is the leading term in the asymptotic expansion of $Z(t)$ (it is often called “the main sum”). We also included in (1) the first-order correction term, which has asymptotic order $O(t^{-1/4})$. The higher-order correction terms can also be given explicitly, see [15][Theorem 4.16]. In applications we are usually interested in computing $Z(t)$ for very large $t$, where most of the computational effort is spent in evaluating the main sum in (1). See [2] for results of such computations for $t$ as large as $10^{34}$. For such extremely large values of $t$ the error term $O(t^{-3/4})$ is already very small. Gabcke in his PhD thesis [3] proved that this error term is in fact less than $0.127t^{-3/4}$ for all $t\geq 200$, and he also provided explicit bounds for the higher-order error terms. An important feature of the Riemann–Siegel formula (1) is that the main sum, which is the leading asymptotic term, is a discontinuous function of $t$ (since $N_{t}$ jumps by one). The discontinuity in the main sum in (1) carries over to a discontinuity in the asymptotic correction term.
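As a quick numerical illustration (ours, not from the papers cited above), the main sum and the first correction term in (1) can be compared against mpmath's built-in implementations of $\theta(t)$ and $Z(t)$:

```python
from mpmath import mp, mpf, pi, sqrt, log, cos, floor, siegeltheta, siegelz

mp.dps = 20
t = mpf(5000)
N = int(floor(sqrt(t / (2 * pi))))                 # N_t
th = siegeltheta(t)                                # theta(t)
main = 2 * sum(cos(th - t * log(n)) / sqrt(n) for n in range(1, N + 1))
tau = 1 + 2 * (N - sqrt(t / (2 * pi)))
corr = (-1)**(N - 1) * (2 * pi / t)**mpf('0.25') \
       * cos(pi * tau**2 / 2 + 3 * pi / 8) / cos(pi * tau)
print(main + corr, siegelz(t))                     # agree up to O(t^{-3/4})
```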
One would hope that finding asymptotic approximations to $Z(t)$ where the leading term is a smooth function of $t$ will result in smaller correction terms over a wider range of values of $t$. Berry and Keating [1] in 1992 were the first to develop a “smoothed” version of the Riemann–Siegel formula. They derived an asymptotic approximation to $Z(t)$, where the leading term is given by a convergent series (2) $Z_{0}(t,K)=2\textnormal{Re}\sum\limits_{n\geq 1}\frac{e^{{\textnormal{i}}(\theta(t)-t\ln(n))}}{\sqrt{n}}\times\frac{1}{2}{\textnormal{erfc}}\Big{(}\frac{\xi(n,t)}{q(K,t)}\sqrt{\frac{t}{2}}\Big{)}.$ In the above formula $\xi(n,t):=\ln(n)-\theta^{\prime}(t)$, $q^{2}(K,t):=K^{2}-{\textnormal{i}}t\theta^{\prime\prime}(t)$ and ${\textnormal{erfc}}(x)$ is the complementary error function. The parameter $K>0$ in (2) can be chosen freely. For large $t$ we have $\xi(n,t)\approx\ln(n/N_{t})$ and $q^{2}(K,t)\approx K^{2}-{\textnormal{i}}/2$, thus $Z_{0}(t,K)$ resembles the main sum in (1) but with the complementary error function smoothing the sharp cut-off at $n=N_{t}$. As Berry and Keating show in [1], in the infinite series in (2) only the terms with $n\leq N_{t}+AK$ are important (the terms with larger $n$ are much smaller). This clarifies the meaning of the parameter $K$: it is proportional to the number of terms by which the truncation in the Riemann–Siegel formula has been smoothed. Berry and Keating also show that the leading approximation term $Z_{0}(t,K)$ contains the first correction term shown in (1) and that numerically $Z_{0}(t,K)$ gives a better approximation to $Z(t)$ for large $t$ when compared with the main sum in (1).

Following the paper by Berry and Keating, Richard Paris (together with his PhD student Shuang Cang) developed several other asymptotic approximations to $Z(t)$ and investigated their properties. In [8] an asymptotic formula for $Z(t)$ was derived by applying the Poisson summation to the tail of the Dirichlet series of $\zeta(s)$ for $\textnormal{Re}(s)>1$. In [13] Paris and Cang developed another approximation to $Z(t)$, starting with the following formula (3) $Z(t)=2\textnormal{Re}\;e^{{\textnormal{i}}\theta(t)}\Big{[}\sum\limits_{n\geq 1}n^{-s}Q(s/2,\pi{\textnormal{i}}n^{2})-\frac{\pi^{s/2}e^{\pi{\textnormal{i}}s/4}}{s\Gamma(s/2)}\Big{]},\;\;\;s=1/2+{\textnormal{i}}t,$ where $Q(s,x):=\Gamma(s,x)/\Gamma(s)$ is the normalized incomplete gamma function. Using the uniform asymptotic results for $Q(s,x)$, they found that this new approximation was more accurate than the Riemann–Siegel formula, and that it required little additional computational effort. Two generalizations of this approximation were studied in [9, 11, 14].

Our goal in this paper is to state and prove a general result (Theorem 1 below), which allows one to derive convergent series representations for the Riemann zeta function in a vertical strip containing the critical line. This result and its proof were inspired by the method used by Berry and Keating in [1]. We show that formula (2) is a special case of our general theorem. We also derive several new series representations for the Riemann zeta function, given in terms of the incomplete gamma functions, generalized hypergeometric functions and Meijer $G$-functions.

## 2. Results

For $a\in{\mathbb{R}}$ and $b<\pi/2$ we denote (4) ${\mathcal{D}}_{a,b}:=\Big{\\{}(s,x)\in{\mathbb{C}}^{2}\;:\;\textnormal{Re}(s)>-a,\;x\neq 0,\;|{\textnormal{arg}}(x)|<\frac{\pi}{2}-b\Big{\\}}.$ The following theorem is our main result.

###### Theorem 1.
Let $g(z)$ be an odd function that is analytic everywhere in the vertical strip $|\textnormal{Re}(z)|<a$ for some $a>1/2$, except for a simple pole at $z=0$ with residue equal to one. We also assume that for some $C>0$ and $b<\frac{\pi}{2}$ we have (5) $|g(z)|<Ce^{b|\textnormal{Im}(z)|/2}\;\;{\textnormal{ for all $z$ with }}\;|\textnormal{Re}(z)|<a\;{\textnormal{ and }}\;|\textnormal{Im}(z)|>1.$ Then the following statements are true:

* (i) There exists a function $h(s,x)$, analytic in the domain ${\mathcal{D}}_{a,b}$ and having Mellin transform (6) $\int_{0}^{\infty}h(s,x)x^{w-1}{\textnormal{d}}x=\Gamma(w)g(2w-s),$ where $\frac{1}{2}\max(0,\textnormal{Re}(s))<\textnormal{Re}(w)<\frac{1}{2}(a+\textnormal{Re}(s))$.
* (ii) For any $\tau\in{\mathbb{C}}\setminus\\{0\\}$ with $|{\textnormal{arg}}(\tau)|<\frac{\pi}{2}-b$ and $s$ in the vertical strip $1-a<\textnormal{Re}(s)<a$ we have (7) $\displaystyle\pi^{-s/2}\Gamma(s/2)\zeta(s)$ $\displaystyle=\tau^{s/2}\Big{[}-g(s)+2\sum\limits_{n\geq 1}h(s,\pi\tau n^{2})\Big{]}$ $\displaystyle+\tau^{(s-1)/2}\Big{[}-g(1-s)+2\sum\limits_{n\geq 1}h(1-s,\pi\tau^{-1}n^{2})\Big{]}.$ Both infinite series in (7) converge absolutely and uniformly in the strip $1-a<\textnormal{Re}(s)<a$.

###### Proof.

For $s$ with $\textnormal{Re}(s)>-a$ we denote by $I_{s}$ the interval $(\frac{1}{2}\max(0,\textnormal{Re}(s)),\frac{1}{2}(a+\textnormal{Re}(s)))\subset{\mathbb{R}}$. For $(s,x)\in{\mathcal{D}}_{a,b}$ and $\gamma\in I_{s}$ we define (8) $h_{\gamma}(s,x):=\frac{1}{2\pi{\textnormal{i}}}\int_{\gamma-{\textnormal{i}}\infty}^{\gamma+{\textnormal{i}}\infty}\Gamma(w)g(2w-s)x^{-w}{\textnormal{d}}w.$ First, let us prove that the above integral converges, so that $h_{\gamma}(s,x)$ is indeed well defined. The condition $\gamma\in I_{s}$ implies $\gamma>0$ and $2\gamma-\textnormal{Re}(s)\in(0,a)$. This fact combined with our assumption that $g(z)$ is analytic in the strip $0<\textnormal{Re}(z)<a$ shows that the integrand $w\mapsto\Gamma(w)g(2w-s)x^{-w}$ has no singularities on the line of integration $w\in\gamma+{\textnormal{i}}{\mathbb{R}}$. From formula 8.328.1 in [4] we have the following asymptotic result for the gamma function: for $z=x+{\textnormal{i}}y$ (9) $|\Gamma(z)|\approx\sqrt{2\pi}|y|^{x-1/2}e^{-\pi|y|/2},\;\;\;y\to\infty.$ This result and our assumption (5) imply that there exists a constant $C_{1}=C_{1}(\gamma,s)>0$ such that for $w=\gamma+{\textnormal{i}}y$ we have $|\Gamma(w)g(2w-s)|<C_{1}e^{-(\pi/2-b)|y|}(1+|y|)^{\gamma-1/2}$ for all $y\in{\mathbb{R}}$. From this bound and the identity $|x^{-w}|=|x|^{-\gamma}e^{\arg(x)y},\;\;\;w=\gamma+{\textnormal{i}}y,$ we conclude that the integrand in (8) is bounded by (10) $|\Gamma(w)g(2w-s)x^{-w}|\leq C_{1}|x|^{-\gamma}e^{-(\pi/2-b)|y|+\arg(x)y}(1+|y|)^{\gamma-1/2},$ and this (as a function of $y\in{\mathbb{R}}$) is integrable as long as $(s,x)\in{\mathcal{D}}_{a,b}$. Thus $h_{\gamma}(s,x)$ is well-defined for $(s,x)\in{\mathcal{D}}_{a,b}$ and $\gamma\in I_{s}$. The estimate (10) also implies the following result, which we will need later: for any small $\epsilon>0$, $s$ in the half-plane $\textnormal{Re}(s)>-a$ and $\gamma\in I_{s}$, there exists $C_{2}=C_{2}(\epsilon,s,\gamma)$ such that $|h_{\gamma}(s,x)|<C_{2}|x|^{-\gamma}$ for all $x$ with $|\arg(x)|<\pi/2-b-\epsilon$. Next, let us fix $(s,x)\in{\mathcal{D}}_{a,b}$. The estimate (10) shows that the integrand decays exponentially fast as $\textnormal{Im}(w)\to\infty$, uniformly in the vertical strip $\textnormal{Re}(w)\in I_{s}$.
Thus, by shifting the contour of integration in this strip, we see that the function $\gamma\mapsto h_{\gamma}(s,x)$ is constant in the interval $I_{s}$ and the functions $\\{h_{\gamma}(s,x)\\}_{\gamma\in I_{s}}$ are analytic continuations of a single function $h(s,x)$. For any fixed $\gamma>0$, the formula (8) defines this function $h(s,x)$ in a domain $\Big{\\{}(s,x)\in{\mathbb{C}}^{2}\;:\;2\gamma-a<\textnormal{Re}(s)<2\gamma,\;x\neq 0,\;|{\textnormal{arg}}(x)|<\frac{\pi}{2}-b\Big{\\}}\subset{\mathcal{D}}_{a,b}.$ This is true since the condition $\gamma\in I_{s}$ is equivalent to $2\gamma-a<\textnormal{Re}(s)<2\gamma$. As $\gamma$ ranges through all positive numbers, the union of these smaller domains gives us the entire domain ${\mathcal{D}}_{a,b}$, therefore the function $h(s,x)$ is analytic in ${\mathcal{D}}_{a,b}$. As we have established above, for any fixed $s$ with $\textnormal{Re}(s)>-a$ and any $\gamma\in I_{s}$, there exists $C_{2}=C_{2}(s,\gamma)$ such that $|h(s,x)|<C_{2}|x|^{-\gamma}$ for all $x\in(0,\infty)$. This implies that the integral in the left-hand side of (6) converges for all $w$ in the vertical strip $\frac{1}{2}\max(0,\textnormal{Re}(s))<\textnormal{Re}(w)<\frac{1}{2}(a+\textnormal{Re}(s))$, and now identity (6) follows by the Mellin inversion formula. This ends the proof of part (i).

Next, we denote $G(s)=\pi^{-s/2}\Gamma(s/2)\zeta(s)$. It is well known [15] that $G$ is analytic in ${\mathbb{C}}$, except for two simple poles at $s=0$ and $s=1$ with the corresponding residues $-1$ and $1$, and that $G$ satisfies the functional equation $G(s)=G(1-s)$. It is also known that as $\textnormal{Im}(s)\to\infty$ in any vertical strip $|\textnormal{Re}(s-1/2)|<A$ the function $|\zeta(s)|$ is bounded by $|s|^{B}$ for some $B=B(A)>0$ (in fact, one can take $B=1/6+A$, see [15][Chapter V]). Combining this estimate with (9) we conclude that for any $A>0$ there exists $B_{2}=B_{2}(A)>0$ such that (11) $|G(s)|=O\Big{(}|s|^{B_{2}}e^{-\frac{\pi}{4}|\textnormal{Im}(s)|}\Big{)},\;\;\;\textnormal{Im}(s)\to\infty,$ uniformly in the strip $|\textnormal{Re}(s)-1/2|<A$. Let us fix $s\notin\\{0,1\\}$ and $\tau\in{\mathbb{C}}\setminus\\{0\\}$, satisfying $1-a<\textnormal{Re}(s)<a$ and $|\arg(\tau)|<\pi/2-b$, and take any $u>|\textnormal{Im}(s)|$ and $c>0$ such that $\max(\textnormal{Re}(s),1-\textnormal{Re}(s))<c<a$. We define a contour, traversed counter-clockwise $R_{c,u}:=(c-{\textnormal{i}}u,c+{\textnormal{i}}u]\cup(c+{\textnormal{i}}u,-c+{\textnormal{i}}u]\cup(-c+{\textnormal{i}}u,-c-{\textnormal{i}}u]\cup(-c-{\textnormal{i}}u,c-{\textnormal{i}}u].$ It is clear that $R_{c,u}$ is the boundary of a rectangle with vertices $\pm c\pm{\textnormal{i}}u$. Now we consider the following integral (12) $F(s):=\frac{1}{2\pi{\textnormal{i}}}\int_{R_{c,u}}G(s+z)g(z)\tau^{-\frac{z}{2}}{\textnormal{d}}z.$ By construction, the contour $R_{c,u}$ contains the three points $0$, $-s$ and $1-s$, which are the simple poles of the integrand. Applying Cauchy’s Theorem we obtain (13) $F(s)=G(s)-\tau^{s/2}g(-s)+\tau^{(s-1)/2}g(1-s).$ Using our assumptions on $\arg(\tau)$ and $g(z)$ (see (5)) and the upper bound (11) we conclude that the function $z\mapsto G(s+z)g(z)\tau^{-\frac{z}{2}}$ converges to zero exponentially fast as $\textnormal{Im}(z)\to\pm\infty$, uniformly in the strip $|\textnormal{Re}(z)|\leq c$.
Thus we can take the limit in (12) and (13) as $u\to+\infty$, and the integrals over the upper and lower sides of the rectangle $R_{c,u}$ will converge to zero, giving us the following result: (14) $\displaystyle G(s)$ $\displaystyle=-\tau^{s/2}g(s)-\tau^{(s-1)/2}g(1-s)$ $\displaystyle+\frac{1}{2\pi{\textnormal{i}}}\int_{c-{\textnormal{i}}\infty}^{c+{\textnormal{i}}\infty}G(s+z)g(z)\tau^{-z/2}{\textnormal{d}}z+\frac{1}{2\pi{\textnormal{i}}}\int_{-c+{\textnormal{i}}\infty}^{-c-{\textnormal{i}}\infty}G(s+z)g(z)\tau^{-z/2}{\textnormal{d}}z.$ When deriving the above formula we also used the fact that $g(-s)=-g(s)$. Since $G(s+z)=G(1-s-z)$, by changing the variable of integration $z\mapsto-z$ in the second integral in the right-hand side of (14) we obtain (15) $G(s)=-\tau^{s/2}g(s)-\tau^{(s-1)/2}g(1-s)+H(s,\tau)+H(1-s,\tau^{-1}),$ where we denoted $H(s,\tau):=\frac{1}{2\pi{\textnormal{i}}}\int_{c-{\textnormal{i}}\infty}^{c+{\textnormal{i}}\infty}G(s+z)g(z)\tau^{-z/2}{\textnormal{d}}z.$ The condition $c>1-\textnormal{Re}(s)$ (that we imposed above) allows us to expand $\zeta(s+z)=\sum\limits_{n\geq 1}n^{-s-z}$ as an absolutely convergent series (this is valid for all $z$ on the vertical line $\textnormal{Re}(z)=c$). Applying Fubini’s theorem we have (16) $\displaystyle H(s,\tau)$ $\displaystyle=\sum\limits_{n\geq 1}\frac{1}{2\pi{\textnormal{i}}}\int_{c-{\textnormal{i}}\infty}^{c+{\textnormal{i}}\infty}\pi^{-(s+z)/2}\Gamma((s+z)/2)g(z)n^{-s-z}\tau^{-z/2}{\textnormal{d}}z$ $\displaystyle=2\tau^{s/2}\sum\limits_{n\geq 1}\frac{1}{2\pi{\textnormal{i}}}\int_{c_{1}-{\textnormal{i}}\infty}^{c_{1}+{\textnormal{i}}\infty}\Gamma(w)g(2w-s)(\pi\tau n^{2})^{-w}{\textnormal{d}}w=2\tau^{s/2}\sum\limits_{n\geq 1}h(s,\pi\tau n^{2}).$ In the second step in the above computation we changed the variable of integration $z\mapsto w=(s+z)/2$ and denoted $c_{1}:=(c+\textnormal{Re}(s))/2$. Note that $\max(\textnormal{Re}(s),1/2)<c_{1}<(a+\textnormal{Re}(s))/2$, which implies $c_{1}\in I_{s}$ and justifies the last step in (16). Formulas (15) and (16) give us the desired result (7). $\square$

As we show next, under a stronger assumption on the function $g(z)$, the function $h(s,x)$ can be given as an integral of the incomplete gamma function.

###### Proposition 1.

Assume that the function $g$ satisfies the assumptions of Theorem 1 with some $b<0$. Then there exists a function $f:(0,\infty)\mapsto{\mathbb{C}}$, defined via the Mellin transform (17) $\int_{0}^{\infty}f(x)x^{w-1}{\textnormal{d}}x=wg(2w),\;\;\;-a/2<\textnormal{Re}(w)<a/2,$ which is analytic in the sector $|\arg(x)|<|b|$ and satisfies the identity $f(x)=f(1/x)$ in this sector. We have an integral representation (18) $h(s,x)=x^{-s/2}\int_{0}^{\infty}\Gamma(s/2,v)f(x/v)v^{-1}{\textnormal{d}}v,\;\;\;\textnormal{Re}(s)>-a,\;|\arg(x)|<|b|.$

###### Proof.

For $x>0$ we define (19) $f(x)=\frac{1}{2\pi{\textnormal{i}}}\int_{\gamma-{\textnormal{i}}\infty}^{\gamma+{\textnormal{i}}\infty}wg(2w)x^{-w}{\textnormal{d}}w,$ where $|\gamma|<a/2$. Since we assumed that $b<0$, formula (5) tells us that $|g(2w)|=O\big{(}e^{-|b|\,|\textnormal{Im}(w)|}\big{)}$ as $w\to\infty$ in the strip $|\textnormal{Re}(w)|<a/2$. Thus the integral in (19) converges and $f(x)$ is well defined for $x>0$. Due to the exponential decay of $g(2w)$, the function $f(x)$ is analytic in the sector $|\arg(x)|<|b|$.
The fact that the function $wg(2w)$ is even implies that $f(x)=f(1/x)$ for all $x$ in the sector $|\arg(x)|<|b|$, and the fact that $wg(2w)$ is analytic in the strip $|\textnormal{Re}(w)|<a/2$ implies that for every $\epsilon>0$ (20) $f(x)=\begin{cases}O(x^{a/2-\epsilon}),\;\;\;&x\to 0,\\\ O(x^{-a/2+\epsilon}),\;\;\;&x\to\infty,\end{cases}$ and this asymptotics holds in the sector $|\arg(x)|<|b|$. Now, according to (6), the function $x\mapsto h(s,x)$ has Mellin transform $\Phi_{1}(w)\Phi_{2}(w)$, where we denoted $\Phi_{1}(w):=\frac{\Gamma(w)}{w-s/2},\;\;\;\Phi_{2}(w):=(w-s/2)g(2w-s).$ Formula (6.455.1) from [4] implies (21) $\int_{0}^{\infty}x^{-\nu}\Gamma(\nu,x)x^{z-1}{\textnormal{d}}x=\frac{\Gamma(z)}{z-\nu},\;\;\;\textnormal{Re}(z)>0,\textnormal{Re}(z-\nu)>0.$ Therefore, the function $\Phi_{1}(w)$ is the Mellin transform of the function $x\mapsto x^{-s/2}\Gamma(s/2,x)$, and this holds for $\textnormal{Re}(w)>\max(0,\textnormal{Re}(s)/2)$. The function $\Phi_{2}(w)$ is the Mellin transform of $x^{-s/2}f(x)$, for $|\textnormal{Re}(w-s/2)|<a/2$. The reader may check that the half-plane $\textnormal{Re}(w)>\max(0,\textnormal{Re}(s)/2)$ intersects the vertical strip $|\textnormal{Re}(w-s/2)|<a/2$ as long as $\textnormal{Re}(s)>-a$. Thus we can use the Mellin convolution identity and conclude that for $x>0$ and $\textnormal{Re}(s)>-a$ $h(s,x)=\int_{0}^{\infty}t^{-s/2-1}(x/t)^{-s/2}\Gamma(s/2,x/t)f(t){\textnormal{d}}t,$ and from this formula we obtain the desired result (18) by the change of variables $t=x/v$. Thus we have established formula (18) for $\textnormal{Re}(s)>-a$ and $x>0$, and we can extend it to the sector $|\arg(x)|<|b|$ using the analyticity and asymptotic properties of $f(x)$ that we established above. $\square$

Next we present five applications of the above results. Theorem 1 is intended to be used in the following way. We start with a function $g$ satisfying the necessary assumptions and then we try to identify a function $h(s,x)$ via its Mellin transform (6). Sometimes we can identify $h(s,x)$ by finding the desired Mellin transform pair in a table of integral transforms, such as [4] or [7]. This is the method we use in Examples 1, 3 and 5 below. When the desired Mellin transform pair cannot be found, we may attempt to find an integral or a series representation for $h(s,x)$ – this is the approach we follow in Examples 2 and 4.

### Example 1

Take $g(z)=1/z$. This function satisfies the conditions of Theorem 1 with $b=0$ and $a=+\infty$. Comparing formula (21) with the Mellin transform equation (6) that defines $h(s,x)$, we find $h(s,x)=\frac{1}{2}x^{-s/2}\Gamma(s/2,x),$ and applying Theorem 1 we obtain the following result, valid for $s\in{\mathbb{C}}$ and $|\textnormal{arg}(\tau)|<\pi/2$: (22) $\displaystyle\pi^{-s/2}\Gamma(s/2)\zeta(s)$ $\displaystyle=-\frac{\tau^{s/2}}{s}+\pi^{-s/2}\sum\limits_{n\geq 1}n^{-s}\Gamma(s/2,\pi\tau n^{2})$ $\displaystyle+\frac{\tau^{(s-1)/2}}{s-1}+\pi^{(s-1)/2}\sum\limits_{n\geq 1}n^{s-1}\Gamma((1-s)/2,\pi\tau^{-1}n^{2}).$ The above formula is well known. A special case of this formula with $\tau=1$ was used by Riemann to prove the functional equation for $\zeta(s)$ (see [15][Section 2.6]). In the limit as $\tau\to{\textnormal{i}}$ this formula gives us (3).
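Since the incomplete gamma functions in (22) decay like $e^{-\pi\tau n^{2}}$, the series converge very rapidly and the identity is easy to test numerically; a minimal sketch with mpmath (the sample values of $s$ and $\tau$ are our choices):

```python
from mpmath import mp, mpf, mpc, pi, gamma, gammainc, zeta

mp.dps = 30
s, tau, N = mpc('0.6', '5.0'), mpf(1), 20          # sample point and truncation

lhs = pi**(-s/2) * gamma(s/2) * zeta(s)
rhs = (-tau**(s/2) / s
       + pi**(-s/2) * sum(n**(-s) * gammainc(s/2, pi*tau*n**2)
                          for n in range(1, N))
       + tau**((s-1)/2) / (s - 1)
       + pi**((s-1)/2) * sum(n**(s-1) * gammainc((1-s)/2, pi/tau*n**2)
                             for n in range(1, N)))
print(abs(lhs - rhs))                              # negligible, ~1e-30
```

Here `gammainc(z, a)` is mpmath's upper incomplete gamma function $\Gamma(z,a)$.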
### Example 2

Now we take $g(z)=e^{\alpha z^{2}}/z$ with $\alpha>0$. The conditions of Theorem 1 are satisfied with $a=+\infty$ and any $b<0$. Berry and Keating [1] used this function when deriving the asymptotic term $Z_{0}(t,K)$ in (2), though their method was slightly different. The difference lies in formula (12), where we used the function $G(s)=\pi^{-s/2}\Gamma(s/2)\zeta(s)$ and Berry and Keating used $Z({\textnormal{i}}(s-1/2))$. For this choice of $g(z)$ it seems that there are no explicit representations for $h(s,x)$; however, Proposition 1 provides a simple integral representation. Performing the change of variables $x=e^{y}$ and using the Gaussian integral, one can easily check that the function $f$ defined by the Mellin transform identity (19) is given by $f(x)=\frac{1}{8\sqrt{\pi\alpha}}\exp\Big{(}-\frac{1}{16\alpha}\ln(x)^{2}\Big{)},\;\;\;x>0.$ Note that the function $2f(x)/x$ is the probability density function of a random variable $\xi$ having a lognormal distribution with parameters $\mu=0$ and $\sigma^{2}=8\alpha$. Therefore, formula (18) gives us the following expression $h(s,x)=\frac{1}{2}x^{-s/2}{\mathbb{E}}\big{[}\Gamma\big{(}s/2,xe^{\sqrt{8\alpha}X}\big{)}\big{]},$ where $X$ is a standard normal random variable.

### Example 3

For our third example we choose $g(z)=\frac{\pi/2}{\sin(\pi z/2)}.$ This function satisfies the conditions of Theorem 1 with $b=-\pi$ and $a=2$. Formula 5.26 from [7] states that $\int_{0}^{\infty}e^{x}\Gamma(\nu,x)x^{z-1}{\textnormal{d}}x=\frac{\pi\Gamma(z)}{\Gamma(1-\nu)\sin(\pi(\nu+z))},\;\;\;0<\textnormal{Re}(z)<1-\textnormal{Re}(\nu).$ From the above formula and the Mellin transform identity (6) we find $h(s,x)=\frac{1}{2}\Gamma(1+s/2)e^{x}\Gamma(-s/2,x).$ We recall that the normalized incomplete gamma function is $Q(s,x):=\Gamma(s,x)/\Gamma(s)$. Applying the reflection formula for the gamma function we can write $h(s,x)=-\frac{\pi/2}{\sin(\pi s/2)}e^{x}Q(-s/2,x).$ Now Theorem 1 gives us the following result, which seems to be new:

###### Corollary 1.

For $-1<\textnormal{Re}(s)<2$ and $|\textnormal{arg}(\tau)|<3\pi/2$ $\displaystyle\pi^{-s/2}\Gamma(s/2)\zeta(s)$ $\displaystyle=-\frac{\pi\tau^{s/2}}{\sin(\pi s/2)}\Big{[}\frac{1}{2}+\sum\limits_{n\geq 1}e^{\pi\tau n^{2}}Q(-s/2,\pi\tau n^{2})\Big{]}$ $\displaystyle-\frac{\pi\tau^{(s-1)/2}}{\cos(\pi s/2)}\Big{[}\frac{1}{2}+\sum\limits_{n\geq 1}e^{\pi\tau^{-1}n^{2}}Q((s-1)/2,\pi\tau^{-1}n^{2})\Big{]}.$

### Example 4

Example 3 can be extended in the following way. Let $r=p/q$ be a positive rational number and define $g(z)=\frac{\pi r}{\sin(\pi rz)}.$ Now the conditions of Theorem 1 are satisfied with $b=-2\pi r$ and $a=1/r$. We claim that in this case $h(s,x)$ can be computed as a sum of two infinite power series: (23) $\displaystyle h(s,x)$ $\displaystyle=-\pi r\sum\limits_{n\geq 0}\frac{(-1)^{n}}{\sin(\pi r(2n+s))}\times\frac{x^{n}}{n!}$ $\displaystyle+\frac{1}{2}\pi x^{-s/2}\sum\limits_{m\geq 0}\frac{(-1)^{m}}{\sin(\pi(s-m/r)/2)}\times\frac{x^{m/(2r)}}{\Gamma(1-s/2+m/(2r))}.$ We will sketch the proof of formula (23) (the main ideas and details are very similar to the proof of Corollary 3.16 in [5]). To establish formula (23), we start by writing $h(s,x)$ in the form (8): (24) $h(s,x)=\frac{r}{2{\textnormal{i}}}\int_{\gamma-{\textnormal{i}}\infty}^{\gamma+{\textnormal{i}}\infty}\Phi(w){\textnormal{d}}w,\;\;\;{\textnormal{ where }}\;\Phi(w)=\frac{\Gamma(w)x^{-w}}{\sin(\pi r(2w-s))}.$ We take $s$ to be a fixed number in the vertical strip $-1/(2p)<\textnormal{Re}(s)<0$ and $\gamma$ any number in the interval $(0,1/(4p))$.
The reader may want to recall the definition of the interval $I_{s}$ in the proof of Theorem 1 to see that this choice of $\gamma$ is legitimate. The gamma function in the integrand in (24) has simple poles at the points $-n$, $n=0,1,2,\dots$ and the function $\sin(\pi r(2w-s))^{-1}$ has simple poles at the points $(1/2)(s-m/r)$, $m\in{\mathbb{Z}}$. Due to our restriction on $s$, these two sets do not intersect, thus all the poles of the function $\Phi(w)$ are simple. The periodic structure of the poles of $\Phi$ implies that we can find a sequence $\\{\gamma_{k}\\}_{k\geq 1}$ decreasing to $-\infty$ such that each point $\gamma_{k}$ satisfies $\gamma_{k}<\gamma$ and has distance at least $1/(8p)$ from each pole of $\Phi$. Now we perform the standard operation of shifting the contour of integration in (24) from $\gamma+{\textnormal{i}}{\mathbb{R}}$ to $\gamma_{k}+{\textnormal{i}}{\mathbb{R}}$: (25) $\displaystyle h(s,x)$ $\displaystyle=\pi r\sum{\textnormal{Res}}\big{(}\Phi(w),w=-n\big{)}$ $\displaystyle+\pi r\sum{\textnormal{Res}}\Big{(}\Phi(w),w=\frac{1}{2}\Big{(}s-\frac{m}{r}\Big{)}\Big{)}+\frac{r}{2{\textnormal{i}}}\int_{\gamma_{k}-{\textnormal{i}}\infty}^{\gamma_{k}+{\textnormal{i}}\infty}\Phi(w){\textnormal{d}}w,$ where the summation in the first sum is over all $n$ such that $\gamma_{k}\leq-n\leq 0$ and the summation in the second sum is over all $m$ such that $\gamma_{k}\leq(s-m/r)/2\leq s/2$. The residues in (25) give us the coefficients in the two infinite series in (23). The integral in the right-hand side of (25) converges to zero as $k\to+\infty$. This is true since the function $x^{-w}\Gamma(w)$ restricted to the vertical line $\textnormal{Re}(w)=\gamma_{k}$ converges to zero uniformly in $\textnormal{Im}(w)$ as $k\to+\infty$, and the term $1/\sin(\pi r(2w-s))$ is bounded on the line $\textnormal{Re}(w)=\gamma_{k}$. Here our choice of $\gamma_{k}$ is crucial – we need to ensure that the line of integration $w\in\gamma_{k}+{\textnormal{i}}{\mathbb{R}}$ stays away from the poles of the integrand. To summarize, in the limit as $k\to+\infty$, formula (25) gives us the desired result (23). The infinite series representation (23) is in fact valid for almost all irrational $r$. This can be established in the same way as Corollary 3.16 in [5]. However, for rational $r=p/q$ the convergence of both series in (23) is obvious, and for small $p$ and $q$ these series can be simplified further. We already saw that in the case $r=1/2$ (our Example 3) we could express $h(s,x)$ in terms of a normalized incomplete gamma function. Let us now consider the case $r=1/4$. By separating the terms with $n$ even from the terms with $n$ odd in the first sum in (23) and applying the reflection formula for the gamma function in the second sum, we obtain (26) $\displaystyle h(s,x)$ $\displaystyle=-\frac{\pi}{4}\frac{\cos(x)}{\sin(\pi s/4)}+\frac{\pi}{4}\frac{\sin(x)}{\cos(\pi s/4)}$ $\displaystyle+\frac{1}{2}\Gamma(s/2)x^{-s/2}{}_{1}F_{2}(1;1/2-s/4,1-s/4;-x^{2}/4).$
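As a sanity check (ours), the closed form (26) can be compared with a direct numerical evaluation of the contour integral (24) for $r=1/4$; the sample values $s=-0.1$, $\gamma=0.2$ and $x=1.5$ are consistent with the restrictions used in the derivation:

```python
from mpmath import mp, mpf, mpc, pi, gamma, sin, cos, quad, inf, hyp1f2

mp.dps = 25
r, s, x, gam = mpf(1)/4, mpf('-0.1'), mpf('1.5'), mpf('0.2')

Phi = lambda w: gamma(w) * x**(-w) / sin(pi*r*(2*w - s))
# (24): h = (r/2i) * integral over gamma + iR; with w = gam + it this is (r/2) * int dt
h_integral = (r/2) * quad(lambda t: Phi(gam + mpc(0, 1)*t), [-inf, inf])

# (26): closed form for r = 1/4
h_closed = (-pi/4 * cos(x)/sin(pi*s/4) + pi/4 * sin(x)/cos(pi*s/4)
            + gamma(s/2)/2 * x**(-s/2) * hyp1f2(1, mpf(1)/2 - s/4, 1 - s/4, -x**2/4))
print(h_integral.real, h_closed)                   # the two values agree
```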
### Example 5

There is another form of $g(z)$ that can lead to somewhat explicit expressions for $h(s,x)$. We take (27) $g(z)=\frac{A}{z}\times\frac{\prod\limits_{j=1}^{n}\Gamma(\alpha_{j}+z/2)\Gamma(\alpha_{j}-z/2)}{\prod\limits_{j=1}^{m}\Gamma(\beta_{j}+z/2)\Gamma(\beta_{j}-z/2)}$ where the normalizing constant $A$ is given by $A=\prod\limits_{j=1}^{n}\Gamma(\alpha_{j})^{-2}\prod\limits_{j=1}^{m}\Gamma(\beta_{j})^{2}.$ If $n\geq m$, the constants $\beta_{j}$ have positive real part and the $\alpha_{j}$ satisfy $\textnormal{Re}(\alpha_{j})>1/4$, then $g(z)$ given in (27) satisfies the assumptions in Theorem 1 with $a=2\times\min\\{\textnormal{Re}(\alpha_{j})\;:\;1\leq j\leq n\\}\;\;{\textnormal{ and }}\;\;b=\pi(m-n).$ For the function $g$ given by (27), the function $h(s,x)$ defined via (8) can be expressed in terms of the Meijer $G$-function (see [6]) (28) $h(s,x)=A\times G_{p,q}^{n+2,n}\Big{(}\begin{matrix}{\bf a}-s/2\\\ 0,{\bf b}-s/2\end{matrix}\Big{|}x\Big{)},$ where $p=m+n+1$, $q=m+n+2$ and ${\bf a}:=[1-\alpha_{1},\dots,1-\alpha_{n},\beta_{1},\dots,\beta_{m},0],\;\;\;{\bf b}:=[1,\alpha_{1},\dots,\alpha_{n},1-\beta_{1},\dots,1-\beta_{m}].$ Note that our Example 3, where $h(s,x)$ is given in terms of the incomplete gamma function, can be obtained from (27) by setting $n=\alpha_{1}=1$ and $m=0$. It is possible that other choices of parameters may also lead to simple expressions for $h(s,x)$; however, we did not pursue this further.

## References

* [1] M. V. Berry and J. P. Keating. A new asymptotic representation for $\zeta(\frac{1}{2}+it)$ and quantum spectral determinants. Proc. R. Soc. Lond., 437:151–173, 1992. http://doi.org/10.1098/rspa.1992.0053. * [2] J. W. Bober and G. A. Hiary. New computations of the Riemann zeta function on the critical line. Experimental Mathematics, 27(2):125–137, 2018. https://doi.org/10.1080/10586458.2016.1233083. * [3] W. Gabcke. Neue Herleitung und explizite Restabschätzung der Riemann-Siegel-Formel. PhD thesis, Georg-August-Universität zu Göttingen, 1979. http://dx.doi.org/10.53846/goediss-5113. * [4] I. S. Gradshteyn and I. M. Ryzhik. Table of integrals, series, and products. Elsevier/Academic Press, Amsterdam, seventh edition, 2007. * [5] A. Kuznetsov, A. Kyprianou, J. C. Pardo, and A. Watson. The hitting time of zero for a stable process. Electronic Journal of Probability, 19:1–26, 2014. https://doi.org/10.1214/EJP.v19-2647. * [6] A. M. Mathai and R. K. Saxena. Generalized Hypergeometric Functions with Applications in Statistics and Physical Sciences. Lecture Notes in Mathematics. Springer Berlin, Heidelberg, 1973. https://doi.org/10.1007/BFb0060468. * [7] F. Oberhettinger. Tables of Mellin transforms. Springer-Verlag, 1974. * [8] R. B. Paris. An asymptotic representation for the Riemann zeta function on the critical line. Proc. R. Soc. London, 446:565–587, 1994. https://doi.org/10.1098/rspa.1994.0121. * [9] R. B. Paris. A generalisation of Lavrik’s expansion for the Riemann zeta function. Technical Report MACS 94:01, University of Abertay Dundee, 1994. * [10] R. B. Paris. New asymptotic formulas for the Riemann zeta function on the critical line. Special Functions, proceedings of the international workshop, editors C. Dunkl, M. Ismail and R. Wong, pages 247–261, 2000. https://doi.org/10.1142/9789812792303_0020. * [11] R. B. Paris. A generalisation of an expansion for the Riemann zeta function involving incomplete gamma functions. Applied Mathematical Sciences, 3(60):2973–2984, 2009. * [12] R. B. Paris. An asymptotic approximation for the Riemann zeta function revisited. Preprint, 2022. https://arxiv.org/abs/2203.07863. * [13] R. B. Paris and S. Cang.
An asymptotic representation for $\zeta(\frac{1}{2}+it)$. Methods and Applications of Analysis, 4(4):449–470, 1997. * [14] R. B. Paris and S. Cang. An exponentially-smoothed Gram-type formula for the Riemann zeta function. Methods and Applications of Analysis, 4(3):326–338, 1997. * [15] E. C. Titchmarsh. The theory of the Riemann zeta-function. Oxford University Press, second edition, 1987.
# Targeted and Troublesome: Tracking and Advertising on Children’s Websites

WARNING: Contains potentially NSFW images

Zahra Moti (Radboud University), Asuman Senol (imec-COSIC, KU Leuven), Hamid Bostani (Radboud University), Frederik Zuiderveen Borgesius (Radboud University), Veelasha Moonsamy (Ruhr University Bochum), Arunesh Mathur (Independent Researcher), Gunes Acar (Radboud University)

###### Abstract

On the modern web, trackers and advertisers frequently construct and monetize users’ detailed behavioral profiles without consent. Despite various studies on web tracking mechanisms and advertisements, there has been no rigorous study focusing on websites targeted at children. To address this gap, we present a measurement of tracking and (targeted) advertising on websites directed at children. Motivated by the lack of a comprehensive list of child-directed (i.e., targeted at children) websites, we first build a multilingual classifier based on web page titles and descriptions. Applying this classifier to over two million pages from the Common Crawl dataset, we compile a list of two thousand child-directed websites. Crawling these sites from five vantage points, we measure the prevalence of trackers, fingerprinting scripts, and advertisements. Our crawler detects ads displayed on child-directed websites and determines if ad targeting is enabled by scraping ad disclosure pages whenever available. Our results show that around 90% of child-directed websites embed one or more trackers, and about 27% contain targeted advertisements—a practice that should require verifiable parental consent. Next, we identify improper ads on child-directed websites by developing an ML pipeline that processes both images and text extracted from ads. The pipeline allows us to run semantic similarity queries for arbitrary search terms, revealing ads that promote services related to dating, weight loss, and mental health, as well as ads for sex toys and flirting chat services. Some of these ads feature repulsive, sexually explicit, and highly inappropriate imagery. In summary, our findings indicate a trend of non-compliance with privacy regulations and troubling ad safety practices among many advertisers and child-directed websites. To ensure the protection of children and create a safer online environment, regulators and stakeholders must adopt and enforce more stringent measures.

Keywords – online tracking, advertising, children, privacy

## 1 Introduction

The proliferation of online tracking for analytics, behavioral advertising, and marketing has resulted in over a decade’s worth of research into this ecosystem. Prior research has shown not only that online tracking is rampant on the web [1], but also that trackers use increasingly invasive tracking mechanisms—e.g., third-party cookies, tracking pixels, evercookies, and browser fingerprinting [2, 3, 4, 1]—to relentlessly build detailed profiles of users across the web, without any consent, for targeted advertising. Such privacy concerns aside, online advertising has been shown to be problematic in other ways. Ads and ad networks are a vector for distributing ransomware, malicious programs, and cryptojackers—posing a serious security threat to users [5, 6, 7, 8, 9, 10, 11, 12, 13]. Ad networks also suffer from click fraud, the cost of which is estimated to reach $100 billion in 2023 [14, 15]. Finally, online ads often contain clickbait, untrustworthy, or distasteful content that peddles software downloads, listicles, and health supplements—all of which users find problematic to their online experience [16].
While online tracking and targeted advertising pose a threat to users of all ages, children especially bear an acute cost. Children may not fully understand the consequences of online tracking and of revealing their personal data online [17, 18], but they wield immense “pester power” to influence their parents’ purchase decisions [19]. Thus, children are an attractive target audience for advertisers and marketers alike [19, 20], as they are more vulnerable to persuasive advertising [21, 22, 23] and susceptible to harmful content [24, 25]. Despite the aforementioned evidence that suggests a differential impact on children, there is little empirical research on online tracking and advertising practices on children’s websites.

The lack of a comprehensive and updated list of websites directed at children poses a major challenge for studying children’s websites. Previous large-scale internet measurement studies have relied on popular website lists such as Tranco [26] and Alexa [27] (before it shut down in 2021 [28]), but these lists may not specify website categories, and even when they do, the website categories may not be reliable and comprehensive [29, 30]. As a result, prior work [31, 23] has only examined online tracking on at most a hundred children’s websites and has been restricted in scope and methods—lacking a comprehensive investigation of both online tracking and advertising. To overcome this limitation, we built our own repository of child-directed websites. We trained a text-based classifier that detects children’s websites using HTML metadata fields such as <title> and <description>. The classifier is based on a pre-trained multilingual model that we fine-tuned for our binary classification task. Applying the classifier to the Common Crawl dataset [32], we compiled a list of 2K manually verified child-directed websites.

Figure 1: A sample of improper ads found on child-directed websites in our crawls.

To study online tracking, ad targeting, and problematic ad practices, we crawl our list of 2K child-directed websites—varying the location (five vantage points) and form factor (desktop & mobile). Starting with ad targeting, we study the extent to which ads that appear on children’s websites are targeted—a practice that has come under increasing scrutiny both in the EU and the US [33, 34, 35]. We then present an exploratory analysis of ads from categories deemed problematic for children, such as dating, weight loss, and mental health, as well as ads that contain racy content. Next, we turn to online tracking, which is a necessary ingredient of behavioral advertising. We study the ecosystem of trackers and specifically quantify the prevalence of trackers, cookies, and the use of browser fingerprinting techniques such as Canvas, Canvas Font, AudioContext fingerprinting, and WebRTC local IP discovery [1].

Our work is especially pertinent in light of impending regulatory changes. In the US, there have been calls [35] to update the Children’s Online Privacy Protection Act (COPPA) [36] in order to prohibit “internet companies from collecting personal information from users who are 13 to 16 years old without their consent” and to “ban targeted advertising to children and teens.” The current US President Joe Biden has called for a ban on collecting data on and serving targeted ads to children [34]; whereas in the EU, the upcoming Digital Services Act (DSA) will specifically prohibit ads targeted at children [33].
Our research seeks to offer empirical evidence on advertising and tracking practices employed on children’s websites by making the following contributions:

* • Using a lightweight classifier based on web page metadata fields, we build a repository of child-directed websites and crawl them to measure tracking and advertising practices using multiple vantage points and form factors (desktop & mobile).
* • We measure targeted ads using two ad vendors’ (Google and Criteo) ad disclosure statements, and find that targeting is enabled for 73% of the ads we could measure.
* • Using text and images extracted from the ads, we detect racy ads, as well as ads about weight loss, dating, and mental health, using semantic similarity search based on a lightweight, multilingual language model. While this content analysis is exploratory, our method enables human-in-the-loop investigations with arbitrary queries, and it paves the way for the automatic content analysis of ads.
* • We also find ads linking to malicious content, improper ads for sex toys and dating services, and ads containing sexually suggestive images (Figure 1).

All the data and software from our study will be made available to researchers. (We share the list of identified child-directed websites and a sample of advertisement disclosures at https://github.com/targeted-and-troublesome/.)

## 2 Related Work

### 2.1 Web tracking measurements

Over the past decade, several web privacy measurements have shown the scale and complexity of online tracking [37, 1, 38, 39, 40]. Research on stateful tracking has examined how unique tracking identifiers are stored on the client side [41] using cookies [42, 43], localStorage [2], cache (ETags) [2], or other client-side storage mechanisms. On the other hand, research on stateless tracking has examined the use of fingerprinting, a mechanism that exploits differences in browsers and devices to obtain a likely unique identifier [44]. Past research has shown that there are various fingerprinting vectors, including fonts, clock skew, GPUs, audio hardware, installed writing scripts, and browser extensions, among others [45, 46, 47, 1, 48, 49, 50]. Research on defenses has contributed methods to detect fingerprinting, tracking, and advertising [51, 4, 1, 38, 52, 53]. Our study borrows heuristics from prior work [1, 38] to detect fingerprinting scripts, and we use existing filter lists to identify trackers and advertisers.

### 2.2 Tracking & ads on children’s media

Motivated by the challenges posed by ads to children, Cai and Zhao [23] manually labeled ads displayed on 117 children’s websites. They found that 68% of the websites featured ads, and less than half complied with COPPA. The authors also argued that children are unlikely to distinguish many ads from the website’s original content. Vlajic et al. [31] investigated online tracking on twenty websites from Alexa’s “Kids and Teens” category [27] from two vantage points (EU & US). The authors manually analyzed the HTTP headers and quantified hidden images (i.e., likely tracking pixels) loaded from ads and analytics domains. Compared to this past work, we study orders of magnitude more websites, follow more rigorous tracking measurement methods, and compare results across different vantage points. Additionally, we automatically detect targeted ads using ad disclosure pages and present an exploratory analysis of the content of ads that appear on children’s websites. Focusing on mobile platforms, Reyes et al.
[54] dynamically analyzed around 6,000 free children’s Android apps and found that most apps violate COPPA due to their use of third-party SDKs.

### 2.3 Improper and malicious ads

A recent line of research has investigated the content of online ads. Zeng et al. [16] conducted a survey with 1,000 participants to determine the type of advertising content (e.g., chumboxes, clickbait, political, and low-quality content) that makes people dislike ads. In [55], the same authors also studied problematic ads on news and misinformation websites, where they found problematic ads served by native ad platforms. In a study leading up to the 2020 US elections, Zeng et al. [56] found that ads for misleading political polls that aim to hoover up email addresses are widely used in online political advertising. Subramani et al. [7] studied the role of web push notifications in ad delivery, especially malicious ads. Through a large-scale study of malicious ads, Zarras et al. [5] showed that some ad exchanges are more prone to serving malicious ads due to inadequate detection. Akgul et al. [57] examined influencer VPN ads on YouTube and found advertisements disseminating misleading claims about online safety. Ali et al. [58] measured how the distribution of potentially harmful ads on Facebook varies across users. Venkatadri et al. [59] used Facebook’s advertiser interface to study how Facebook obtains personal identifiers used in advertising. In a concurrent work, Zhao et al. [60] analyzed mobile ads aimed at children, uncovering inappropriate advertisements served by certified mobile ad SDKs. Medjkoune et al. [61] showed that advertisers can target ads to children by placing their ads in children-focused videos—bypassing YouTube’s age restrictions.

### 2.4 Ad transparency

In response to concerns about targeted advertising, ad networks and platforms have offered ad transparency interfaces that allow users to ascertain when and how they are being targeted. Andreou et al. [62] investigated Facebook’s ad explanations and found that they are often incomplete or misleading. Researchers have also argued that ad networks should provide users with interpretable explanations and increase the visibility of disclosure mechanisms [63]. Bin Musa and Nithyanand [64] developed ATOM, a technique for determining data sharing between online trackers and advertisers. They used simulated personas to crawl websites, collect ad images, and conduct statistical analyses to identify correlations between tracker presence and advertiser behavior. Liu et al. [65] developed a framework called AdReveal to investigate different mechanisms used in online targeted advertising. Vallina et al. [66] used statements found in Google’s ad disclosure pages in their crowdsourced measurement of online behavioral advertisements. To detect stealthy ads that aim to bypass adblockers, Storey et al. [67] developed an extension that detects the AdChoices icon using perceptual hashing. While we considered applying Storey et al.’s method, we found URL-based detection of ad disclosure links (§4.6) to be more reliable and efficient.

### 2.5 Website categorization

The majority of studies on web categorization have focused on text-based classifiers because most web content and metadata are text-based [68, 69]. Various studies used machine learning models such as BERT and recurrent neural networks to learn contextual representations and features of web pages using meta tags and body content [70, 69, 68].
Other researchers proposed image-based web page classification techniques using pre-trained convolutional neural networks and Google image search results [71, 72]. In our work, we built a lightweight classifier by fine-tuning an existing distilled language model on text-based website metadata to detect child-directed websites.

## 3 Building a list of child-directed websites

Figure 2: Pipeline for building a list of child-directed websites.

It is estimated that there are more than one billion websites on the Internet [73], but only a small fraction are targeted at children. A central challenge, therefore, is identifying the websites that contain content directed at children. We initially searched for and found three curated lists of children’s websites: the kidSAFE Seal Program [74], CommonSense (filtered for children below the age of 13) [75], and a list compiled by Kaspersky [76]. Unfortunately, these lists contained only a total of 355 websites, some of which were no longer online.

To expand our limited list, we experimented with web categorization services such as McAfee, WebShrinker, and SimilarWeb, but decided to use VirusTotal’s (academic) API because the other services were either not freely available or did not let us query in bulk. VirusTotal aggregates category labels from third-party scanners and categorization services, including BitDefender and TrendMicro [77]. We used the VirusTotal API to retrieve web category data for the top one million websites from the Chrome User Experience Report (CrUX) list from May 2022 [78]. We observed VirusTotal’s rate limits (20K requests per day per academic license) during the process, which took roughly four weeks. By searching for the substrings “kid” and “child” in the returned category labels and removing false positives (such as “Child abuse”), we obtained 1,264 websites categorized as related to children. However, our manual verification of these websites following the criteria presented in Appendix A revealed that 68.6% of them were false positives, yielding only 396 child-directed websites. Note that the low accuracy and inconsistency of domain classification/categorization services align with findings from prior work [30]. Combining our initial 355 websites with our verified list of 396 websites and removing all inaccessible (5) and duplicate (164) websites, we obtained a total of 582 child-directed websites.

Motivated by the lack of accurate, up-to-date, and comprehensive sources of child-directed websites, we built a classifier to detect child-directed websites using the list of 582 websites as labeled data. Figure 2 illustrates the training and fine-tuning process. We define “child-directed websites” as websites that are primarily intended for use by children and contain content, activities, or other features that are likely to be appealing to children (see Appendix A for our criteria). Note that our labeling criteria do not fully overlap with COPPA’s definition [36]; for example, we do not require that sites have actual knowledge of collecting children’s data. Thus, we do not claim to measure compliance with COPPA or other relevant laws. In our study, children refer to individuals under 13, aligning with US and EU regulations (§6.1).
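As an illustration of the category-lookup step described above, the following minimal sketch uses VirusTotal’s public v3 domains endpoint (the x-apikey header and the attributes.categories field are part of that API). The API key and the example domain are placeholders, and the crude sleep stands in for proper quota bookkeeping.

```python
# Minimal sketch of the VirusTotal category lookup described above (v3 API).
# Simplified: no persistence, retries, or daily-quota accounting.
import time
import requests

API_KEY = "YOUR_VT_API_KEY"                      # placeholder
VT_DOMAIN_URL = "https://www.virustotal.com/api/v3/domains/{}"

def vt_categories(domain: str) -> dict:
    """Return the scanner -> category-label map for a domain."""
    resp = requests.get(VT_DOMAIN_URL.format(domain),
                        headers={"x-apikey": API_KEY}, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["attributes"].get("categories", {})

def looks_child_related(categories: dict) -> bool:
    # Substring search used above, excluding obvious false positives
    # such as "Child abuse" labels.
    labels = " | ".join(categories.values()).lower()
    return ("kid" in labels or "child" in labels) and "child abuse" not in labels

for domain in ["example-kids-site.com"]:         # hypothetical input
    print(domain, looks_child_related(vt_categories(domain)))
    time.sleep(4)                                # crude rate-limit pacing
```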
### 3.1 Labeled data for ML classifier

Many web page classification methods use the entire text of the page [70] and its images [72], which can be resource-intensive and time-consuming. Alternatively, researchers have explored web page classification based on metadata fields such as <title>, <description>, and <keywords>, which tend to be shorter and have been shown to correlate strongly with the topic of web pages [68]. We followed the latter approach for its computational efficiency and reasonable accuracy. Our preliminary analysis of over 500K web pages from the most popular one million websites in the Common Crawl dataset [32] showed that more than 97% of the websites have a title, 63% of the websites include a description, and 24% contain a keywords meta tag. Based on these availability statistics, we used the titles and descriptions for classification, leaving out the keywords. To extract the titles and descriptions, we used the following HTML tags: title, description, og:[title|description], and twitter:[title|description]. Applying this method to the WAT metadata files from the June-July 2022 Common Crawl snapshot [32], we extracted the titles and descriptions, limiting ourselves to the top million websites in the Tranco [26] or the CrUX [78] list. We further refined our data by keeping a single page with the shortest URL from each hostname, which is more likely to be the home page. This resulted in metadata from 2.28 million pages, which included pages from the subdomains of the top million Tranco and CrUX websites. We also extracted the same title and metadata information from the 582 known child-directed websites using a simple script based on Playwright [79]. In both instances, when the page had more than one description or title available, we picked the longest one. After completing the data collection process, we constructed a training set for our classifier. For negative samples, we randomly selected 2,500 of the 2.28 million pages and manually checked them to remove children’s websites. Our positive samples consisted of 576 title-description pairs after filtering out websites with titles shorter than ten characters.

### 3.2 Building the ML classifier

Our training data contained a limited number of labeled samples, and our input consisted of text-based meta fields, potentially in multiple languages. This made naive classifiers such as bag-of-words and TF-IDF less suitable for our task. Instead, we employed a pre-trained, multilingual language model. Pre-trained models have proven to be adequate for general text classification tasks, but they need to be fine-tuned for more specific downstream tasks [70]. In particular, we decided to use the Paraphrase-Multilingual-MPNet-base-v2 (PM-MPNet-v2) model from the SentenceTransformers [80, 81] library, which is a pre-trained multilingual and distilled model based on the MPNet method [82]. The distillation process [80, 83] involves training a smaller model (student) to mimic the behavior of a larger model (teacher). In particular, PM-MPNet-v2 is fine-tuned with a large number of paraphrased sentences in more than 50 languages [80]. However, PM-MPNet-v2 cannot be directly used for text classification since it only produces embeddings that are useful for semantic similarity-related tasks. Thus, we used HuggingFace’s Trainer API [84] and the AutoModelForSequenceClassification class [85] to fine-tune the model and add a binary classification layer on top of PM-MPNet-v2’s embedding outputs. As input to the classifier, we used the concatenation of the title and description, since this combination gave the best accuracy compared to using the title or description alone. A minimal sketch of this setup is shown below.
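In the following sketch, the checkpoint name and hyperparameters are taken from the text (batch size 12, 2 epochs, learning rate 4.2e-05, and a threshold maximizing $F_{\beta=0.5}$, described next), while the toy dataset and helper names are placeholders for illustration only.

```python
# Sketch: binary classification head on top of PM-MPNet-v2, plus
# "Classify-Verify"-style threshold selection. Toy data for illustration.
import numpy as np
from datasets import Dataset
from sklearn.metrics import fbeta_score
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

CKPT = "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT, num_labels=2)

# Input: concatenated title + description; label 1 = child-directed.
texts = ["Fun coloring pages and games for kids | Free printables",
         "Enterprise B2B analytics platform for revenue teams"]
labels = [1, 0]
train = Dataset.from_dict({"text": texts, "label": labels}).map(
    lambda b: tokenizer(b["text"], truncation=True, padding="max_length",
                        max_length=128), batched=True)

args = TrainingArguments(output_dir="kids-clf", per_device_train_batch_size=12,
                         num_train_epochs=2, learning_rate=4.2e-5)
Trainer(model=model, args=args, train_dataset=train).train()

def pick_threshold(pos_probs: np.ndarray, y_true: np.ndarray, beta=0.5) -> float:
    """Accept a positive prediction only above t; choose t maximizing F_beta."""
    grid = np.arange(0.50, 1.00, 0.005)
    scores = [fbeta_score(y_true, (pos_probs >= t).astype(int), beta=beta)
              for t in grid]
    return float(grid[int(np.argmax(scores))])
```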
In particular, we fine-tuned the model to detect child-directed websites using the training set explained in §3.1. We used the HuggingFace Transformers library [86] and Ray Tune’s Population Based Training (PBT) algorithm [87, 88] to find the best-performing hyperparameters (batch size=12, epochs=2, and learning rate=4.2e-05). The fine-tuning process took roughly five minutes on a consumer-grade GPU (GeForce RTX 3080 Ti). To reduce false positives, we employed the modified “Classify-Verify” technique [89], which involves setting an acceptance threshold $t$ and accepting a prediction only if it is above $t$. Following Juarez et al. [90], we chose the threshold that maximizes $F_{\beta=0.5}$, which gives more weight to precision to reduce false positives. $F_{\beta}$ is a weighted harmonic mean of precision and recall, adjustable to emphasize either metric according to the classification task’s needs [91]. A grid search of different threshold values shows that the maximum $F_{\beta=0.5}$ is achieved when $t=0.93$, which reduces the false positives by $50\%$. Ultimately, our classifier achieved a precision of 86% and a recall of 70% using 10-fold cross-validation, as detailed in Table I.

TABLE I: Classification results before and after applying the threshold (with 10-fold cross-validation).

| | Precision | Recall | F-beta | TP | FP |
|---|---|---|---|---|---|
| Without threshold | 0.79 | 0.81 | 0.79 | 47 | 12 |
| With threshold | 0.86 | 0.70 | 0.82 | 40 | 6 |

### 3.3 The list of 2K children’s websites

Using the fine-tuned classifier, we calculated the label and probability score for 2.28M web pages from Common Crawl, excluding websites used in the training process. This process took roughly 5 hours. Our classifier identified 53,092 web pages as children’s websites. Due to time constraints, we focused on manually verifying the top 2,500 websites sorted by classifier probability, starting with the websites that are most likely to be child-directed. An evaluation of our classifier and the details of our manual verification process can be found in Appendix A.1. Our final list contained 2,004 websites in 48 distinct languages after eliminating false positives and deduplicating websites by their registrable domain (TLD+1). English was the most prevalent language, accounting for 63% of all websites. The prevalence of other prominent languages, including Russian, Spanish, French, German, and Portuguese, ranged between 3% and 6%. The list included 582 websites from the training data and 1,422 websites identified by the classifier.

Website ranks: 1,422 of the 2,004 websites were ranked in the top one million of the Tranco list (median rank 304K). While over a quarter of the websites are in the top 200K ranks, websites from all popularity levels are captured in our list. 404 of the 582 websites that are not ranked by Tranco were ranked in the top one million by the CrUX list. Only 163 (8%) websites were not ranked in the top one million by either CrUX or Tranco.

DNS0 Kids filter check: DNS0 Kids [92] is a domain name resolver that detects and filters out content that is not suitable for children, such as adult, dating, and copyright-infringing content. To find out the status of the websites in our list, we compared DNS0 Kids with Cloudflare’s DNS resolver. If a website could be resolved by Cloudflare but not by DNS0 Kids, we treated it as blocked (a minimal sketch of this check follows).
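A minimal sketch of this resolver comparison using dnspython is shown below. The DNS0 Kids resolver address is an assumption based on DNS0’s public documentation and should be verified; Cloudflare’s 1.1.1.1 is well known.

```python
# Sketch of the DNS0 Kids filter check: treat a domain as blocked if
# Cloudflare resolves it but the DNS0 Kids resolver does not.
import dns.resolver

CLOUDFLARE = "1.1.1.1"
DNS0_KIDS = "193.110.81.1"   # assumed DNS0 Kids address; verify before use

def resolves(domain: str, nameserver: str) -> bool:
    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [nameserver]
    try:
        res.resolve(domain, "A", lifetime=5)
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
            dns.resolver.NoNameservers, dns.resolver.LifetimeTimeout):
        return False

def blocked_by_dns0_kids(domain: str) -> bool:
    return resolves(domain, CLOUDFLARE) and not resolves(domain, DNS0_KIDS)
```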
We found that only ten (0.5%) of the 2,004 websites in our list were blocked by DNS0. Reviewing these ten websites, we found six of them to contain pirated videos, including cartoons. The remaining four websites contained activities for children, but it was not obvious to us why they were blocked.

## 4 Web Tracking and Advertising Measurements

To assess the prevalence of trackers, fingerprinting scripts, and (targeted) advertisements on child-directed websites, we extended Tracker Radar Collector (TRC) [93]. TRC is a Puppeteer-based [94] web crawler, which consists of modules called collectors that record different types of data during a crawl, such as HTTP requests/responses, cookies, screenshots, and JavaScript API calls. New collector modules can easily be added to collect the data necessary to perform different web measurements such as ours. Specifically, we added the following collectors to TRC:

* • FingerprintCollector (§4.1): detects fingerprinting-related function calls and property accesses
* • LinkCollector (§4.3): extracts inner page links
* • VideoCollector (§4.5): captures the crawl video
* • AdCollector (§4.6): detects ads and scrapes ad disclosures

We also used the existing TRC collectors, including RequestCollector to capture request/response details and detect tracking-related requests (§4.2), TargetCollector to detect newly-opened tabs in §4.6, CookieCollector to analyze cookies, and finally CMPCollector (§4.4) to interact with the consent dialogs and consent management platforms (CMPs). We used TRC’s anti-bot measures [93], which thwart bot detection to a certain extent by overwriting artifacts typically probed by anti-bot scripts (e.g., navigator.plugins, Notification.permission) [95]. For the mobile crawls, we emulated a mobile browser using TRC’s built-in features to spoof viewport dimensions, touch support, and the user-agent string.

### 4.1 Identifying fingerprinting attempts

Identifying fingerprinting scripts can be challenging due to obfuscation and potential false positives. For example, scripts may use the Canvas API either for drawing images or for fingerprinting the user’s browser [47]. We draw on well-established methods to distinguish between fingerprinting and benign use of fingerprinting vectors [38, 1]. Specifically, we focused on Canvas, WebRTC, Canvas Font, and AudioContext fingerprinting, and detected them using the heuristics presented by Iqbal et al. [38]. To detect fingerprinting attempts, we modified the getter and setter methods of several Web APIs, such as CanvasRenderingContext2D.fillText and HTMLCanvasElement.toDataURL, to intercept potentially fingerprinting-related function calls and property accesses. Although TRC can intercept JavaScript API calls, we implemented a separate collector (FingerprintCollector) to avoid a known issue that prevented TRC from intercepting early function calls [96]. FingerprintCollector simply injects the instrumentation script into each page and its descendant frames as soon as they are created. We verified that our collector captures calls missed by TRC on both our custom-developed fingerprinting test pages and external demo pages such as BrowserLeaks [97].

### 4.2 Identifying tracking-related requests

To identify tracking-related requests, we utilized the uBlock Origin Core [98] npm package, which mimics the tracking protection of the widely-used uBlock Origin extension [99]. We used uBlock Origin’s default filter lists, which include EasyList and EasyPrivacy, among others [100].
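Our crawler relies on the (JavaScript) uBlock Origin Core package; purely as an illustration in Python, the adblockparser library applies the same EasyList/EasyPrivacy filter syntax to request URLs. The filter-list file path below is a placeholder, and matching fidelity differs from uBlock Origin’s engine.

```python
# Illustrative Python analogue of filter-list matching (not the uBlock
# Origin Core package the crawler actually uses).
from adblockparser import AdblockRules

with open("easylist.txt") as f:      # filter list downloaded separately
    rules = AdblockRules(f.read().splitlines())

url = "https://securepubads.g.doubleclick.net/tag/js/gpt.js"
print(rules.should_block(url, {"script": True, "third-party": True}))
```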
To accurately detect tracking-related requests, we provided uBlock Origin Core with each request’s resource type (e.g., image or script) and the page and request URLs derived from the HTTP request/response details recorded by the crawler. Then, we mapped the tracker domains to their owner entities (i.e., organizations/companies) using DuckDuckGo’s entity map [101]. Using entities to quantify tracker prevalence reduces overcounting, as multiple domains can be owned by the same business (e.g., googleanalytics.com and doubleclick.net are both owned by Google).

### 4.3 Discovering inner pages

We refrained from focusing only on homepages, as prior work found that websites’ inner pages tend to contain more trackers and cookies [102, 103]. Thus, we also gathered five inner links from each of the 2,004 websites by conducting four separate link-collection crawls (desktop and mobile crawls from Frankfurt and NYC). We preferred to crawl sites from two vantage points to reduce the time and effort required for the link-collection process. We excluded external domain links and documents such as PDFs and images, and we preferred links near the viewport’s center to avoid unrelated links from footers or less visible page areas. After gathering these inner links, we merged them with the homepage URLs to form the final set.

### 4.4 Interacting with consent dialogs

Since the GDPR came into effect, websites typically show consent dialogs when viewed from the EU, and to some extent even from the US [104]. Ignoring these dialogs may lead to undermeasurement of tracking and advertising practices. We decided to provide affirmative consent to all data processing request options (“accept all”) in our crawls to measure the full extent of advertisements and tracking a child could experience. To handle consent dialogs in an accurate and automated manner, we used DuckDuckGo’s autoconsent library [105], which comes bundled with TRC [106]. Autoconsent incorporates rules from Consent-O-Matic [107, 108], allowing programmatic interactions with the CMPs.

### 4.5 Video screen captures

To detect ads and scrape their disclosures, our crawler performed a series of interactions with the page, including dismissing popup dialogs, interacting with CMPs, and clicking on visible ad disclosure links (§4.6). To monitor these interactions, we added a video capture functionality to the crawler (VideoCollector). We used videos of the crawler’s interactions to troubleshoot potential issues with the crawl process, as well as to manually label animated ads and other crawl artifacts.

### 4.6 Identifying ads and ad targeting criteria

The AdCollector performed three main functions: 1) detecting ads, 2) scraping ads—including their screenshots, links, iframes, scripts, videos, and background images, and 3) detecting and scraping ad disclosure pages to determine whether an ad is targeted or not.

Detecting ads: To detect ads, we built on Zeng et al.’s [16] approach of using EasyList’s rules [109]. EasyList rules are commonly employed by popular adblockers to block or hide ads. For each detected ad element, the crawler recorded a set of attributes, including its position on the page, its dimensions, class, ID, and name, in addition to the complete HTML source and a screenshot. If the ad element contained any child elements, which was mostly true, the crawler recursively recorded their details, including all links, images, video, script, and iframe elements.
Small elements ($<30px$ in either dimension) and elements lacking any link, image, background image, or video were excluded. The crawler not only took screenshots of each ad but also downloaded image and video elements that were descendants of the ad element. These media were utilized in the ML-based ad content analysis pipeline (§4.6) alongside the ad screenshots. The crawler sent a single HTTP request during the page visit, with the appropriate HTTP headers—such as the HTTP Referer [sic] set to the current address-bar URL—when downloading these files. Finally, the crawler saved data-URL images found within the ad’s subtree. Bin Musa and Nithyanand [64] also utilized EasyList’s rules for ad identification in their study on tracker-advertiser relationships, but their implementation differs from ours. While they focus on detecting image-containing HTTP responses using the EasyList filter set, we query the DOM to detect ad elements, such as div elements, and their relevant descendant elements, such as images, iframes, links (a), and videos. Operating at the DOM level also allows us to detect and scrape ad disclosure pages to detect targeted ads.

To verify how accurately our crawler detects ads, we performed a sanity check on a random sample of 105 ads (15 ads from each crawl). The crawler correctly detected ads in 85% of cases, misidentified non-ads in 7.5%, and captured blank or empty ads in 7.5%. Some ad screenshots also included multiple ads (2.8%) or only part of an ad (4.5%). However, the overall accuracy and quality of our ads appear to be higher than in prior work by Zeng et al. [55], which reported 34% unrendered (blank/unreadable) ads. We attribute this difference in data quality to two potential reasons. First, we use a more realistic crawler equipped with anti-bot measures; and second, unlike Zeng et al., we opted not to click the ads—which may trigger more stringent anti-bot, anti-fraud protections that prevent the delivery or rendering of the ads. Further, to evaluate whether our EasyList-based detector missed any ads, we manually reviewed 50 random pages where no ads were detected by the crawler. Our review did not reveal any false negatives, suggesting that our ad detection was robust. We also verified the accuracy of the ad images separately downloaded by the crawler, finding all of them to be present in the ads shown on the page.

Determining targeting criteria: To measure the prevalence of targeted advertisements at scale, we automated the process of scraping ad disclosure (e.g., “Why this ad”) pages. While the content of ad disclosure pages may vary by ad platform, they generally explain in broad terms why a specific advertisement was shown to a user. The reasons may include, for instance, “Google’s estimation of your interests” or “Websites you’ve visited.” The disclosure pages may also contain information about the website and the advertiser, and whether ad targeting is turned off for the website or a specific ad. Two example disclosure pages, for a targeted and a non-targeted ad, are shown in Figure 3.

Figure 3: Google’s ad disclosure pages indicating whether an ad is targeted or not. The top figure (a) belongs to a targeted ad (indicated by “Google’s estimation of your interests”), while the bottom one (b) is for a non-targeted ad (indicated by “Ad personalization is turned off”).

Ad disclosure pages are reachable by clicking the AdChoices icon and the “Why this ad” button for Google ads [110] and other ad providers.
Initially, we attempted to detect the ad disclosure links using fuzzy image matching based on the AdChoices icon. However, we found that the icon’s shape and visibility vary substantially across different ad vendors, and sometimes the icon can be hidden, making it unclickable. We therefore chose to identify ad disclosure links through their URLs, focusing on a fixed set of providers we could detect reliably and deterministically. By analyzing ad disclosure pages in our pilot crawls, we compiled a list of hostnames (namely, adssettings.google.com, privacy.us.criteo.com, and privacy.eu.criteo.com) that appear in the ad disclosure links and explain whether an ad is targeted or not. We limited our investigation to ad disclosure pages from these two providers because other providers we encountered in our pilot crawls did not offer useful information about the targeting criteria of the ads. Once the crawler detects and clicks on the AdChoices link, the ad disclosure page opens in a new tab. We intercepted this new tab and stored its URL, screenshot, and text contents (via document.innerText) for analysis. The scraped text contents were then used to detect whether ad targeting is enabled or not. Specifically, we searched the ad disclosure texts for specific disclosure statements indicating whether and how an ad was targeted. The disclosure statements include, for instance, “Google’s estimation of your interests” (targeted), “Websites you’ve visited” (targeted), and “Ad personalization is turned off” (non-targeted). If one or more statements indicating targeted ads occur in an ad disclosure text, we label the ad as targeted. Otherwise, we label the ad as non-targeted. Note that we also count behavioral and retargeted ads in the targeted category. We compiled a list of 18 statements (Appendix A.2) incrementally, using over 40K ad disclosure texts extracted during the crawls. We ensured that all ad disclosures contain at least one of these statements, to make sure our analysis is exhaustive. (A minimal sketch of this matching logic is given after the next paragraph.)

Interacting with the page and ads: Upon loading a page, our crawler waited for 2.5 seconds and dismissed any popup dialogs using heuristics from prior work [29]. We dismissed these dialogs to prevent them from blocking our crawler’s interactions with the webpage. The crawler then waited for another second before scrolling through the page in 10 steps, taking strides of about 500–600 pixels each, interlaced with random delays of 500–600 milliseconds. Finally, after waiting for another second, it scrolled back to the beginning of the page using the same scrolling behavior. We engineered this up-and-down scrolling behavior to allow the webpage to load any ad slots that are lazily loaded as the user scrolls below the fold. The crawler then identified all ads on the page. It set the border color of each ad to red to visually mark the ads for manual review. The crawler then took a screenshot of the entire page and scraped each ad in a top-down fashion. To ensure that an advertisement was fully visible, it scrolled down to each ad before taking its screenshot. Finally, the crawler detected ad disclosure links and clicked each one individually to capture all ad disclosure texts and screenshots. We limited the number of scraped ads per page visit to ten, which limits over-representation by a few websites with many ads.
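Returning to the disclosure handling above, the following is a minimal sketch of the URL-based link detection and statement matching. The two statement lists contain only the examples quoted in the text; the full list of 18 statements is in Appendix A.2.

```python
# Sketch of disclosure-link detection and disclosure-text labeling.
from urllib.parse import urlparse

DISCLOSURE_HOSTS = {"adssettings.google.com",
                    "privacy.us.criteo.com", "privacy.eu.criteo.com"}
TARGETED = ["Google's estimation of your interests", "Websites you've visited"]
NON_TARGETED = ["Ad personalization is turned off"]

def is_disclosure_link(url: str) -> bool:
    return urlparse(url).hostname in DISCLOSURE_HOSTS

def label_disclosure(text: str) -> str:
    if any(s in text for s in TARGETED):
        return "targeted"
    if any(s in text for s in NON_TARGETED):
        return "non-targeted"
    return "unknown"
```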
Analyzing advertisement content: We identified and measured four kinds of ads in our corpus: weight loss ads, mental health ads, dating service ads, and ads that contain clickbait racy content. While our dataset of ads can be used to perform a fuller content analysis, we focused on these four categories since prior work [111, 112] and regulatory reports [113] have argued that these can be especially harmful to children. In fact, many ad networks’ moderation policies [114, 115] explicitly restrict these categories of ads from appearing on children’s websites. We note that the categories we focused on are not exhaustive, and our ad analysis is exploratory, serving as a preliminary investigation into this critical problem. An overview of the ad content analysis pipeline is shown in Figure 4.

To identify ads containing clickbait racy content, we employed the Google Cloud Vision API’s SafeSearch Detection [116], a service that uses deep learning to analyze images and identify potentially unsafe content. It evaluates images against categories such as adult, violent, racy, and medical content, and returns likelihood scores for each category, ranging from ‘VERY_UNLIKELY’ to ‘VERY_LIKELY’. Upon manually evaluating the output generated by the algorithm, we focused on the ‘racy’ category with a likelihood of ‘VERY_LIKELY’. We also tested Microsoft’s Adult Content Detection [117], part of Azure Cognitive Services, to identify racy images; however, since it produced more false positives than the Google Cloud Vision API, we chose the latter for our study.

We used the Google Cloud Vision API to extract text from ad images, following a similar approach to Bin Musa and Nithyanand [64]. The text in each image was extracted using the Optical Character Recognition (OCR) feature of the API, specifically by employing the fullTextAnnotation attribute of the API response. This allowed us to extract text data at different levels, such as page, paragraph, and word. We opted to use the paragraph level since it gives the best separation in ads promoting multiple unrelated products. Despite their name, paragraphs returned by the API were relatively short and akin to sentences (21 characters, on average). We then employed semantic similarity to identify the ad texts (paragraphs) most similar to a given search query, which in our case were “weight loss”, “mental health”, and “dating”. This approach is versatile and can be used to retrieve ads related to any arbitrary words or phrases. To compute the embeddings of the queries and ad paragraphs, we used the “paraphrase-multilingual-mpnet-base-v2” model, the distilled multilingual model we used to classify web pages (§3.2). To find the most similar results, we calculated the cosine similarity between the embeddings of the search query and each ad paragraph and sorted them accordingly. Next, we manually reviewed the 100 most similar distinct paragraphs and their associated images, including ad screenshots and background ad images, to identify those that pertained to the three categories of interest. We also experimented with BERTopic [118] to create topic models and searched for clusters similar to our chosen categories. While this resulted in well-grouped texts, it required manual verification of numerous (several thousand) clusters. Sorting based on semantic similarity proved to be faster, more flexible, and easier to implement and evaluate, making it the preferred approach for manual reviewing.

Figure 4: Overview of the ad content analysis pipeline.
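Condensing the pipeline of Figure 4 into code, the sketch below chains SafeSearch raciness scoring, paragraph-level OCR, and semantic-similarity ranking. The client calls follow the google-cloud-vision and sentence-transformers APIs, but the composition is a simplified illustration rather than our exact implementation.

```python
# Sketch of the ad content analysis pipeline (Figure 4): SafeSearch 'racy'
# filtering, paragraph-level OCR, and cosine-similarity ranking of OCR text.
from google.cloud import vision
from sentence_transformers import SentenceTransformer, util

client = vision.ImageAnnotatorClient()
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
QUERIES = ["weight loss", "mental health", "dating"]

def analyze_ad(image_path: str):
    image = vision.Image(content=open(image_path, "rb").read())
    # 1) SafeSearch: keep only images rated VERY_LIKELY 'racy'.
    racy = client.safe_search_detection(image=image).safe_search_annotation.racy
    very_likely_racy = racy == vision.Likelihood.VERY_LIKELY
    # 2) OCR at paragraph granularity via fullTextAnnotation.
    doc = client.document_text_detection(image=image).full_text_annotation
    paragraphs = ["".join(sym.text for word in para.words for sym in word.symbols)
                  for page in doc.pages for block in page.blocks
                  for para in block.paragraphs]
    # 3) For each query, find the most similar OCR paragraph.
    best = {}
    if paragraphs:
        sims = util.cos_sim(model.encode(QUERIES, convert_to_tensor=True),
                            model.encode(paragraphs, convert_to_tensor=True))
        best = {q: paragraphs[int(sims[i].argmax())] for i, q in enumerate(QUERIES)}
    return very_likely_racy, best
```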
### 4.7 List of crawls

The main dataset used in our study consists of seven crawls (Table II), all of which were run in April 2023 using cloud-based servers on DigitalOcean. The crawls include five desktop and two mobile crawls, from five and two vantage points, respectively. We limited the mobile browser crawls to two vantage points because we do not focus on a mobile-desktop comparison, which we leave to future work. The vantage points used for the crawls were Frankfurt, Amsterdam, London, San Francisco, and New York City (NYC), chosen to capture ads and tracking from different jurisdictions. Crawls were run in parallel using separate servers with moderate resources (8 vCPU cores, 16GB RAM). Each crawl took between 13 and 32 hours to complete. A clean browser profile was used for each page visit to prevent ads being targeted based on our own browsing history. During each crawl, we visited both landing and inner pages, following the process described in Section 4.3. When applicable, we accepted all personal data processing options on consent (cookie) dialogs. The order of visited pages was randomized within each vantage point. Note that the San Francisco crawl used inner links extracted from the NYC crawl, while the London and Amsterdam crawls used inner links from the Frankfurt crawl. While this constraint did not appear to impact the success rate of visits across these vantage points, future research could explore identifying inner pages during the crawling process.

## 5 Measurement Results

Table II summarizes the overall statistics for the measurement crawls. A total of 71,567 pages were loaded successfully across all crawls. The success rate of our crawler was over 93%, according to the successful-visit criteria we developed and applied (Appendix A.3). For simplicity, certain comparative results presented below are based on the desktop crawls from NYC and Frankfurt, representing one location each in the US and the EU.

TABLE II: Crawl statistics based on different vantage points. *: Sum/average over all sites visited across all crawls.

| Form factor | Vantage point | Successfully loaded pages | Successful crawling rate |
|---|---|---|---|
| Desktop | NYC | 10,310 | 95% |
| Desktop | SF | 10,301 | 95% |
| Desktop | LON | 10,270 | 95% |
| Desktop | FRA | 10,221 | 95% |
| Desktop | AMS | 10,014 | 93% |
| Mobile | NYC | 10,168 | 94% |
| Mobile | FRA | 10,283 | 96% |
| Sum/Avg.* | | 71,567 | 95% |

### 5.1 Ad targeting and content analysis

Our crawler scraped 70,303 ads from 804 of the 2,004 distinct websites across seven crawls. An average of 36% of the pages contained one or more ads, and we detected targeted ads on 27% of the pages we crawled. The crawler scraped 10,839 and 9,447 ads on average in the crawls from the US and Europe, respectively.

#### 5.1.1 Over 70% of ads with disclosures are targeted in nature

Our crawler captured a total of 40,281 ad disclosure pages, which we used to determine the advertiser’s identity and whether ad targeting is enabled or not. There are fewer disclosure pages than ads due to ads without disclosure links and failures in detecting or opening those links. In fact, we only consider ad disclosures from two ad providers, Google (97.8%) and Criteo (2.2%), since the ad disclosure pages of other providers did not reveal the targeted status of the ad or the advertiser’s identity. Limiting our analysis to the 40,281 ads with disclosure pages, we found that targeting was enabled for 73% of them. Comparing across different privacy jurisdictions, we find that 68% of the ads on average were targeted in the EU crawls, compared to 76% in the UK and the US crawls.
Comparing the crawls from the two US cities (SF & NYC), we find that 67% of the ads were targeted in the SF desktop crawl, compared to 79% and 82% in the NYC-based desktop and mobile crawls, respectively. Although these variations might be attributed to stricter privacy regulations like the CCPA and GDPR, our available data and methods do not permit us to make this attribution. Comparing the Tranco ranks of the 689 websites that contain at least one targeted ad to the 59 websites that only contain non-targeted ads, we find a tendency for popular websites to disable ad targeting (Figure 5). Sites with targeted ads had a median rank of $\sim 340K$, while those with only non-targeted ads had a median rank of $\sim 128K$. Note that we only include the 40,281 ads for which we could determine the targeted status in this analysis.

Figure 5: Tranco rank (x-axis) distribution of sites that use targeted vs. non-targeted ads. More popular (lower-ranked) websites appear to be more prone to disabling ad targeting.

TABLE III: Number of visits and scraped ads, along with percentages of ads/targeted ads per crawl. *: The percentage of targeted ads is based only on ads with disclosures. In the rightmost two columns, we include a site if we scraped at least one ad/targeted ad from one of its pages.

| Form factor | Vantage point | # ads | % sites with ads | % sites with targeted ads | % targeted ads* |
|---|---|---|---|---|---|
| Desktop | NYC | 11,288 | 38% | 30% | 79% |
| Desktop | SF | 10,950 | 38% | 28% | 67% |
| Desktop | LON | 9,702 | 36% | 27% | 76% |
| Desktop | FRA | 9,700 | 36% | 26% | 68% |
| Desktop | AMS | 9,250 | 35% | 26% | 67% |
| Mobile | NYC | 10,278 | 36% | 29% | 82% |
| Mobile | FRA | 9,135 | 33% | 26% | 70% |
| Sum/Avg. | | 70,303 | 36% | 27% | 73% |

#### 5.1.2 Ads can be targeted from anywhere

The “About the advertiser” section in Google’s ad disclosures shows the name and location (country) of the advertiser. This information is only available in 70% of the ad disclosures in our dataset. Extracting these fields from the ad disclosure texts, we identified 1,685 distinct advertisers from 81 different countries. The advertisers with the most ads in our data are displayed in Table IV. We note that due to the transient, targeted, and localized nature of ad campaigns, the list in Table IV may not represent the most common advertisers on child-directed websites in general. Further, in certain cases (e.g., Gloworld LLC and Marketism), an advertising or marketing agency is listed on the ad disclosure page instead of the company offering the advertised products or services. The top ten advertisers are located in seven different countries and three continents. We observed that many of these advertisers are located far from our crawl vantage points, indicating that children visiting websites in our list can be targeted with ads from anywhere in the world. Reviewing a sample of 100 ads from each advertiser, we marked their predominant ad theme in the rightmost column. Five of the ten advertisers display ads for search results about various products—such as depression tests, belly fat reduction, senior meals, and electronic payments—on lesser-known search engines such as IngoSearch [119]. Ads from BetterMe [120], a “behavioral healthcare app” with more than 100M installations, featured plans for weight loss, muscle gain, and intermittent fasting (e.g., Figure 1 ⓗ). (We note that BetterMe’s data sharing practices with third parties were investigated by Privacy International, but the company reportedly took corrective action [121].)
Brain Metrics Initiative displays ads for IQ tests, an example of which is given in Figure 1 ⓒ. Alibaba Hong Kong, on the other hand, displays ads featuring racy and disturbing images of products sold on alibaba.com. For instance, the ad on the top left (ⓐ) in Figure 1 features images that recur in Alibaba ads: a naked baby model (leftmost), rabbit meat (rightmost), and a semi-transparent underwear ad in the middle. We investigate similar racy clickbait ads and other improper ads in the following subsection.

TABLE IV: Top ten advertisers by the number of ads across all crawls.

| Advertiser | Location | # ads | % targeted | Type of ads |
|---|---|---|---|---|
| Vinden.nl B.V. | Netherlands | 4,707 | 86% | Search results |
| EXPLORADS | Cyprus | 3,265 | 73% | Search results |
| All Response | UK | 2,453 | 68% | Search results |
| Gloworld LLC | USA | 2,365 | 55% | Online learning |
| Amomama M. | Cyprus | 921 | 72% | Workout muscle gain, weight loss |
| Media Quest | UAE | 910 | 79% | Search results |
| Brain Metrics I. | Cyprus | 814 | 50% | IQ tests |
| BetterMe | Cyprus | 731 | 85% | Weight loss |
| Marketism | Israel | 645 | 49% | Search results |
| Alibaba.com HK | Hong Kong | 541 | 86% | Products sold on Alibaba.com |

#### 5.1.3 Improper ads on child-directed sites

TABLE V: Number of improper ads identified for each crawl.

| Form factor | Vantage point | Dating | Mental health | Weight loss | Racy | Somewhat racy | Total |
|---|---|---|---|---|---|---|---|
| Desktop | NYC | 4 | 21 | 16 | 21 | 26 | 88 |
| Desktop | SF | 7 | 9 | 15 | 6 | 25 | 62 |
| Desktop | LON | 10 | 17 | 48 | 12 | 31 | 118 |
| Desktop | FRA | 1 | 0 | 48 | 19 | 25 | 93 |
| Desktop | AMS | 8 | 4 | 82 | 10 | 33 | 137 |
| Mobile | NYC | 22 | 25 | 113 | 98 | 17 | 275 |
| Mobile | FRA | 18 | 5 | 190 | 11 | 6 | 230 |
| Total | | 70 | 81 | 512 | 177 | 163 | 1,003 |

In total, our crawler collected 199,935 screenshots and images from the 70,303 scraped ads. After deduplicating the images, we queried the Cloud Vision API to obtain the category labels and OCR texts of the resulting 98,264 distinct images. We manually reviewed 741 images classified as ‘VERY_LIKELY’ racy by the API. Separately, we reviewed 1,136 ad images whose OCR text was semantically most similar to our search terms (mental health, dating, and weight loss). Due to study limitations, we only examined the ads related to the top 100 distinct texts for each term. Since each distinct text may appear in multiple ads in different forms, we labeled the images separately and used the videos captured by the crawler when an ad was animated or its screenshot was obscured. Table V shows the number of improper ads identified in each crawl, amounting to 1,003 across 311 distinct websites. A notable finding is the higher prevalence of such ads on mobile devices compared to desktops in general.

Racy images. We found 177 racy ads and 163 somewhat racy ads, the latter considered edge cases due to their potential inappropriateness for child-directed websites. These ads were identified across 80 distinct websites, mostly ranked within the top one million according to the Tranco list, with a median rank of 426K. Figure 1 ⓐ, ⓖ, and ⓚ are examples of some of these ads. Notably, the majority of the racy ads were encountered in the mobile crawls, especially the NYC crawl (98/177). Out of the 177 racy ads, only 38 had disclosure pages. Among these, targeting was enabled for 35 ads.

Mental health. By manually labeling 236 ad images, we identified 81 ads offering mental health services on 48 distinct websites.
Examples of ads in this category contained “take a depression test” (Figure 1 ⓕ), “online psychiatrists,” “how to get over depression,” and a “mental health chatbot which helps people with depression.” We excluded false positives that were not mental health service offerings, such as ads for “mental health counselor salaries,” “online psychology courses,” and “psychology books.”

Dating. Manually labeling 231 ad images, we identified 70 dating platform ads on 48 distinct websites, most of which targeted mobile users. The ads promoted dating platforms such as “dating.com” and “Live Me,” a live streaming app with ads featuring suggestive imagery (Figure 1, ⓙ, ⓚ). Another ad, for DateMyAge.com, featured a call to “[m]eet your mature singles” (ⓔ). False positives removed during the manual labeling of this category included ads for customer relationship tools, romantic holiday tours, and online appointment services.

Weight loss. We identified 512 weight-loss-related ads (plans, apps, products) on 170 distinct websites by labeling 669 ad images. Notably, there was a higher number of weight loss ads on mobile devices, indicating campaigns targeting mobile users. Examples of text featured in these ads included “intermittent fasting for weight loss,” “keto weight loss plan,” and “eating plan to lose weight” (Figure 1 ⓗ).

In Figure 1, we provide additional examples of advertisements that are likely not suitable for children. Examples of these included an ad for a test called “Am I Gay Test” ⓓ, an ad for a sex toy ⓘ, an ad for a sex toy shop ⓑ (reportedly Germany’s largest online adult retailer [122]) featuring an image of ice cream that could be appealing to children, and ads featuring clickbait and sexually suggestive images. The ads were found on websites related to K-12 e-learning, kids’ games, coloring books, and worksheets, among others. The ads in Figure 1 do not necessarily fit our four investigated categories but showcase the diversity of improper ads.

Malicious ad links. Finally, we present an exploratory analysis of whether ads on child-directed websites link to malicious pages. We submitted a sample of links extracted from the ad elements to the VirusTotal API in August 2023. Specifically, we removed links with duplicate hostnames, and for Google ads, we extracted a direct link to the ad landing page using the ‘adurl’ parameter [123]. While the overwhelming majority of the links were classified as benign, 149 of the nearly 3,940 scanned links were flagged as malicious or phishing by at least one scan engine. Notably, the word “taboola” was mentioned in 78 of the 149 detected links as a URL parameter that seems to indicate the ad network (network=taboola).
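A minimal sketch of this link check against VirusTotal’s v3 URL endpoint is shown below. The base64url identifier scheme is part of the public API; the key is a placeholder, and the call assumes the URL has already been analyzed at least once (otherwise it must first be submitted for scanning).

```python
# Sketch of the malicious-link check via VirusTotal's v3 URL endpoint.
import base64
import requests

API_KEY = "YOUR_VT_API_KEY"   # placeholder

def vt_detections(url: str) -> int:
    """Number of engines flagging the URL as malicious or suspicious."""
    url_id = base64.urlsafe_b64encode(url.encode()).decode().strip("=")
    resp = requests.get(f"https://www.virustotal.com/api/v3/urls/{url_id}",
                        headers={"x-apikey": API_KEY}, timeout=30)
    resp.raise_for_status()  # a 404 here means the URL was never scanned
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0) + stats.get("suspicious", 0)
```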
### 5.2 Tracking and fingerprinting analysis

TABLE VI: Average number of third-party and tracker domains, and the prevalence of tracking and fingerprinting on child-directed websites from five vantage points.

| Form factor | Vantage point | 3rd-party domains | Tracker domains | Tracker entities | % sites with 3rd parties | % sites with trackers | % sites with FP |
|---|---|---|---|---|---|---|---|
| Desktop | NYC | 31.6 | 23.4 | 20.0 | 95% | 90% | 9% |
| Desktop | SF | 29.3 | 21.3 | 17.8 | 95% | 91% | 9% |
| Desktop | LON | 21.3 | 14.3 | 10.6 | 96% | 91% | 7% |
| Desktop | FRA | 23.2 | 15.6 | 11.7 | 95% | 90% | 10% |
| Desktop | AMS | 21.4 | 14.3 | 10.6 | 93% | 89% | 7% |
| Mobile | NYC | 29.8 | 21.8 | 18.4 | 95% | 91% | 9% |
| Mobile | FRA | 22.6 | 15.2 | 11.5 | 95% | 90% | 11% |

Table VI shows the prevalence of third-party trackers detected across the different crawls. We find that around 90% of the websites have at least one tracker domain, and over 93% embed at least one third-party domain.

TABLE VII: Prevalence of tracker entities in terms of the number of distinct websites in the Frankfurt and NYC desktop crawls.

| Entity (FRA) | # Sites | Entity (NYC) | # Sites |
|---|---|---|---|
| Google | 1,702 | Google | 1,718 |
| Facebook | 458 | Microsoft | 549 |
| Index Exchange | 424 | Adobe | 543 |
| Xandr | 416 | Xandr | 516 |
| Adform | 412 | The Trade Desk | 501 |
| The Trade Desk | 390 | Index Exchange | 495 |
| OpenX | 378 | IPONWEB | 467 |
| Adobe | 366 | Facebook | 456 |
| Quantcast | 361 | Magnite | 446 |
| PubMatic | 359 | OpenX | 426 |

Third-party trackers. The average number of tracker domains per site differs significantly across vantage points, e.g., 15.6 in the Frankfurt crawl versus 23.4 in the NYC crawl, while the medians are 15 and 16, respectively. The difference in averages can be attributed to websites with a high number of trackers in the NYC crawl. This explanation is in line with the results displayed in Table VIII, which shows the top five websites with the most trackers in the Frankfurt and NYC crawls. Most of these websites are among the top one million, which means they likely receive substantial traffic. Notably, all of these sites displayed ads that were targeted. The numbers shown in the table (trackers, requests, and cookies) reflect averages across the visited web pages. In the NYC crawl, visiting mathfunworksheets.com triggered a total of 1,547 requests involving 161 unique third-party tracker entities (i.e., organizations/companies). Another website, woojr.com, was found to contain 148 distinct third-party tracker entities when visited from NYC. This website includes resources for children’s activities and educational materials, including printable worksheets and fun activity pages. When visited from Frankfurt, www.wowescape.com, a website offering various games for children and teenagers, triggered requests to 95 distinct third-party tracker entities.

Most prevalent trackers. Table VII shows the most prevalent tracker entities in the Frankfurt and NYC desktop crawls. We found tracking-related requests to Google domains, including its analytics, advertising, and tag management scripts, on $\sim$84% of the 2,004 child-directed websites in both crawls. Facebook is the second most prevalent entity in the Frankfurt crawl, mostly due to Facebook Pixel (on 427 websites), which facilitates ad retargeting and conversion measurement, among other functions [124]. Largely thanks to the LinkedIn Insight Tag (px.ads.linkedin.com, 466 websites), Microsoft is the second most prevalent entity in the NYC crawl. The LinkedIn Insight Tag serves multiple purposes, including retargeting, conversion measurement, and providing demographic insights about website visitors [125].

Regional differences. To explore the differences in tracker entities across vantage points, we compared the tracker entities from the Frankfurt and NYC desktop crawls. Despite a considerable overlap among the detected tracker entities (Jaccard index = 0.85), we also identified variations. Specifically, our investigation unveiled 47 tracker entities exclusive to the Frankfurt crawl and 118 tracker entities that were only found in the NYC crawl. For instance, tracking-related requests to advanced STORE [126] (ad4m.at & ad4mat.net, 236 websites) appear exclusively in the crawl from Frankfurt, whereas Throtle, a company that provides an identity graph to marketers and advertisers, only appears on 171 websites in the NYC crawl [127].
Furthermore, we find that the majority of the websites in both the Frankfurt and NYC crawls (∼70% and ∼72%, respectively) contain third-party trackers that set at least one cookie with the SameSite=None attribute and a lifespan of over three months. Primarily through the doubleclick.net domain, Google set such cookies on over 51% of the websites. While identifying the individual purposes of these cookies is out of scope, this combination of cookie attributes (especially setting SameSite=None) makes it possible to track users across websites; a minimal sketch of this check appears at the end of this subsection.

Sites with and without ads. As part of our investigation, we conducted an additional analysis to compare how the number of third parties and trackers changes between websites with and without ads. Figure 6 shows that websites with ads tend to contain two to four times more third-party and tracker domains and entities than websites without ads.

Figure 6: Comparison of the average number of third-party and tracker domains/entities on websites containing ads vs. not containing ads.

TABLE VIII: Websites with the most distinct tracker entities. The table shows each website’s number of distinct third-party tracker entities, requests, and cookies, along with its Tranco rank.

| Loc. | Website | # Trackers | # Requests | # Cookies | Rank |
|---|---|---|---|---|---|
| NYC | mathfunworksheets.com | 161 | 1,547 | 395 | 669K |
| NYC | woojr.com | 148 | 2,181 | 391 | 83K |
| NYC | innerchildfun.com | 139 | 1,235 | 336 | 308K |
| NYC | kidzfeed.com | 138 | 1,050 | 272 | 797K |
| NYC | thecolor.com | 138 | 1,068 | 260 | 192K |
| FRA | wowescape.com | 95 | 392 | 55 | 258K |
| FRA | webgames.io | 94 | 564 | 92 | 155K |
| FRA | coloriages-pour-enfants.net | 90 | 401 | 66 | 919K |
| FRA | theschoolrun.com | 87 | 417 | 91 | 760K |
| FRA | testsworld.net | 86 | 478 | 138 | - |

Browser fingerprinting. We now discuss our findings on fingerprinting scripts on child-directed websites. Table VI shows that we detect fingerprinting scripts on 176 (9%) and 218 (10%) websites in the Frankfurt and NYC crawls, respectively. The overall prevalence of fingerprinting aligns with the recent research by Iqbal et al., which finds fingerprinting on 10.18% of the top-100K websites [38]. One of the most prevalent fingerprinters in both crawls is an online payment company (Stripe; 66 and 67 sites in the Frankfurt and NYC crawls, respectively). According to their help pages [128], Stripe primarily employs fingerprinting for fraud prevention purposes. Webgains (82 sites in the Frankfurt crawl), an affiliate marketing company, also mentions fingerprinting in their Data Processing Agreement with Merchants [129], but without specifying its purpose. The most commonly used fingerprinting method is canvas fingerprinting, present on about 172 sites in the Frankfurt crawl and about 208 sites in the NYC crawl.

We found one or more trackers to be present on more than 90% of mobile websites (Table VI), which is similar to our finding for the desktop websites. The NYC and Frankfurt crawls differ slightly in the number of ads: we scraped 10,278 ads in the NYC crawl and 9,135 in the Frankfurt crawl, the lowest count across all crawls. Slightly more websites in the NYC mobile crawl have targeted ads than in the Frankfurt mobile crawl (29% vs. 26%), and the NYC mobile crawl has the highest proportion of targeted ads (82%) across all crawls. We also discovered that improper ads, particularly racy and weight loss ads, were more prevalent on mobile devices compared to desktops.
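As a concrete illustration of the long-lived cross-site cookie criterion used earlier in this section, the check can be expressed in a few lines; the cookie field names below are illustrative of what a DevTools-based crawler records, not a specific library’s API:

```python
from datetime import timedelta

THREE_MONTHS = timedelta(days=90)


def enables_cross_site_tracking(cookie: dict) -> bool:
    """Flag cookies with SameSite=None and a lifespan of over three months,
    the attribute combination that permits cross-site tracking."""
    same_site = (cookie.get("sameSite") or "").lower()
    lifespan = timedelta(
        seconds=cookie.get("expires", 0) - cookie.get("created", 0)
    )
    return same_site == "none" and lifespan > THREE_MONTHS
```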
## 6 Discussion

Our research paints a troubling picture of tracking and inappropriate advertising practices on child-directed websites. Advertisements featuring sexually suggestive imagery and ads about weight loss, dating, and mental health may pose potential risks to children’s emotional and psychological welfare. We discuss the legal implications, ethical considerations, and limitations of our study below.

### 6.1 Legal implications

In this section, we discuss what the law says about the tracking and advertising practices uncovered in our research. We focus on the EU General Data Protection Regulation (GDPR) and the US Children’s Online Privacy Protection Act (COPPA). (We do not analyze whether specific companies breach the law; such an analysis would have to examine each case separately, considering all of its circumstances. Rather, we discuss legal requirements in general terms.)

The GDPR and the ePrivacy Directive. Under the GDPR, companies are only allowed to process personal data if they have a ‘legal basis’ for such processing. The GDPR provides six possible legal bases (Article 6 GDPR). However, generally, the data subject’s consent is the only possible legal basis for online tracking and behavioral (targeted) advertising [130]. Moreover, the ePrivacy Directive [131] requires, in short, that companies ask the internet user for consent before they use tracking cookies or similar tracking technologies (Article 5(3)). The GDPR’s requirements for valid consent are strict. Consent is only valid if it is truly voluntary (‘freely given’), ‘specific’, and ‘informed’. The data controllers (the website owner and the companies involved in tracking and targeted advertising) must ‘be able to demonstrate that the data subject has consented to processing of his or her personal data’ (Article 7(1) GDPR). The GDPR’s requirements for valid consent also apply to consent (for cookies, etc.) as prescribed by the ePrivacy Directive. The GDPR has specific rules for consent by children. Roughly summarized, children cannot give valid consent; the parent should give consent instead (Article 8 GDPR). EU member states have set different minimum consent ages, ranging from 13 to 16 years [132]. Hence, only parental consent can legitimize tracking on a children’s website. Observe that simply clicking a consent dialog (as done by our crawler) does not constitute parental consent under the GDPR. Even in low-risk cases, verification of parental responsibility via email may be necessary [133].

The EU Digital Services Act. The rules for tracking and targeting children will become stricter in the EU. From 17 February 2024 on, the EU Digital Services Act (DSA) [134] applies. Article 28 says, roughly summarized, that online platforms must not use behavioral advertising ‘when they are aware with reasonable certainty that the recipient of the service is a minor’ [134]. This prohibition cannot be overridden with the consent of the child or the parent. The DSA also requires “very large online platforms” [135] (those with more than 45 million users in the EU) to publish the advertisements they presented to users in an online repository, together with information about, for instance, the targeting criteria (Articles 33 and 39 DSA). The methods that we used in this paper could be used to check the completeness and accuracy of the data published in those repositories.

COPPA. COPPA regulates companies offering a website or online service directed to children under the age of 13.
Specifically, COPPA applies to companies using children’s ‘personal information,’ which includes ‘persistent identifiers such as cookies and device fingerprints’ (COPPA §312.2) [136]. The website owner is responsible for data collection by third parties through its site. Such third parties must also comply with COPPA. Companies based outside the US must also comply with COPPA if their services are directed to children in the US [136]. Our results showed that 27% of the child-directed websites use targeted advertising. Under COPPA, data collection for targeted advertising on these websites is only allowed after obtaining parents’ Verifiable Parental Consent (VPC). VPC entails stringent verification methods, such as credit card verification, face recognition, or government ID checks [137]. This makes VPC much more complex than simply clicking an accept button on a dialog. We note that our crawler cannot give VPC.

### 6.2 Research Ethics

Our crawler visited over 166K pages, and it triggered many ad impressions that could otherwise have been viewed by a real visitor (likely a child). Given the huge scale of the digital ad market (projected to reach US$700bn in 2023 [138]), we believe these ad impressions are a negligible cost for raising the transparency around tracking and ads targeted at children. Furthermore, we took several measures to limit our footprint on the crawled websites. For instance, we only crawled five inner pages from each site in a crawl, and we randomly shuffled the target URLs to avoid concurrently visiting the inner pages of a website. We also took appropriate measures to ensure that no harm was done to collaborators involved in the project, especially when dealing with explicitly graphic images.

Disclosures and outreach. In July 2023, we reached out to five companies that we found to serve racy ads. One company invited us to a call and explained the likely reason for showing inappropriate ads (websites failing to label themselves as child-directed, and a false negative in the ad company’s automated child-directed site detection tool). They added the sites in question to the child-directed list and pledged further investigation. Another thanked us and began an internal review. The third redirected our query to the relevant department. Moreover, we disclosed 34 racy ads to Google by manually visiting the ad disclosure URLs of each racy (Google) ad and using the ‘Report this ad’ button. Note that, to identify the ad vendors involved in serving an ad, we used a combination of the ad images and the src/href attributes of the ad’s descendant iframe, image, and link elements (§4.6); a sketch of this attribute-collection step is given at the end of this subsection. In addition, we shared our preliminary results with a European data protection agency (DPA) and a consumer protection agency. Both showed interest; the DPA asked if there were any websites from their country containing improper ads, and the consumer protection agency invited us to present our work. We also shared our findings with civil society and industry organizations, including the 5Rights Foundation [139]. We plan to further share our study with regulators and other relevant stakeholders. Finally, while using the VirusTotal API, we found three porn websites miscategorized as kids-related and reported them to the respective third-party categorization service. Although no response was received, the categories were later rectified.
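To make the vendor-identification step mentioned above concrete, here is a minimal sketch assuming BeautifulSoup over an ad element’s serialized HTML; the function name is illustrative, and the same attributes can equally be read from the live DOM:

```python
from urllib.parse import urlparse

from bs4 import BeautifulSoup


def candidate_vendor_hosts(ad_html: str) -> set:
    """Collect hostnames from the src/href attributes of an ad element's
    descendant iframe, img, and a elements, as hints to the serving vendor."""
    soup = BeautifulSoup(ad_html, "html.parser")
    hosts = set()
    for element in soup.find_all(["iframe", "img", "a"]):
        url = element.get("src") or element.get("href")
        host = urlparse(url).hostname if url else None
        if host:
            hosts.add(host)
    return hosts
```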
### 6.3 Limitations

Our study predominantly covers websites targeting younger children, as we define ‘children’ as under 13, aligning with both US and EU regulations. While our classifier detected child-directed sites in 48 languages, it may be biased towards English content, which is overrepresented in the training data. The classifier may prefer sites with descriptive titles and content, or may carry biases related to website age, design, or accessibility. Moreover, the classifier favors precision over recall to reduce the manual labeling workload.

Our research is the first attempt to build a large list of child-directed websites. Classifying websites as child-directed or not is challenging, primarily due to the existence of gray areas that complicate the labeling process. For instance, many websites offer content that appeals to both children and adults. While we tried to exclude websites that can only be used by teachers or parents, certain websites included pages such as online exercises, videos, or games that can be consumed by children. To validate our list, we performed further analysis on two subsets of websites: two senior researchers relabeled a random 100-website sample, and one of the senior researchers relabeled a random subset of 100 websites with targeted (50) and inappropriate (50) ads. In both cases, fewer than 9% of the websites were related to children but mainly catered to teachers or parents (8.6% and 8.3%, respectively). While this impurity could be avoided with more conservative labeling, our analysis of these 200 websites strongly suggests that such cases do not skew our primary findings.

While we found fewer targeted ads in the EU than in the US, we cannot directly attribute this to differences in privacy regulation or another specific factor. Failure to detect and interact with consent dialogs may be a confounding factor, among others. When detecting targeted ads, we only used ad disclosure pages from two providers (Google and Criteo) due to the unavailability of useful ad disclosures from other vendors. Thus, our targeted ad detection method depends on the accuracy and completeness of Google’s and Criteo’s ad disclosures.

Websites may treat cloud-based IP addresses or automated browsers differently [140, 141, 39]. To curb such effects, we used the anti-bot-detection features of TRC [93]. Reviewing the screenshots captured during the visits, we observed very few blocked visits. We conducted four sets of inner link collection crawls: two from NYC and two from Frankfurt, encompassing both desktop and mobile crawls. This constraint does not appear to impact the success rate of visits across vantage points; nonetheless, future research could explore identifying inner pages during the crawling process itself.

Since we use a fresh profile for each page visit, we may not capture retargeted or other personalized ads that are only shown to users with a behavioral profile. Future work could extend our method to incorporate personas and warm-up crawls to study such ads. Overall, we do not claim that our findings are representative of tracking and advertising practices on child-directed websites. Our focus in this study is not on how ads are targeted, but simply on whether targeting is enabled or not.

## 7 Conclusion

We presented an empirical study of online tracking and advertisements on over 2,000 child-directed websites. Building a lightweight and versatile ML pipeline to analyze ad content, we identified hundreds of cases of improper ads, including weight loss and mental health ads, and ads featuring dating services and racy, sexually suggestive imagery.
Our study reveals several notable trends: websites featuring advertisements tend to contain two to four times more trackers, mobile websites exhibit a greater prevalence of inappropriate ads, and popular websites are less likely to deploy targeted advertisements. Our findings provide concrete evidence of troublesome practices that are likely illegal, unethical, or simply careless. We call for more research, regulation, and enforcement to limit the ongoing violation of children’s privacy and well-being.

## 8 Acknowledgments

Asuman Senol was funded by the Cyber-Defence (CYD) Campus of armasuisse Science and Technology. Veelasha Moonsamy was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC 2092 CASA - 390781972.

## References

* [1] S. Englehardt and A. Narayanan, “Online tracking: A 1-million-site measurement and analysis,” in _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_ , 2016, pp. 1388–1401.
* [2] M. D. Ayenson, D. J. Wambach, A. Soltani, N. Good, and C. J. Hoofnagle, “Flash Cookies and Privacy II: Now with HTML5 and ETag Respawning,” 2011.
* [3] I. Fouad, N. Bielova, A. Legout, and N. Sarafijanovic-Djukic, “Missed by filter lists: Detecting unknown third-party trackers with invisible pixels,” _Proceedings on Privacy Enhancing Technologies_ , vol. 2020, no. 2, pp. 499–518, 2020.
* [4] G. Acar, C. Eubank, S. Englehardt, M. Juarez, A. Narayanan, and C. Diaz, “The web never forgets: Persistent tracking mechanisms in the wild,” in _Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security_ , 2014, pp. 674–689.
* [5] A. Zarras, A. Kapravelos, G. Stringhini, T. Holz, C. Kruegel, and G. Vigna, “The dark alleys of madison avenue: Understanding malicious advertisements,” in _Proceedings of the Conference on Internet Measurement Conference_ , 2014, pp. 373–380.
* [6] Z. Li, K. Zhang, Y. Xie, F. Yu, and X. Wang, “Knowing your enemy: understanding and detecting malicious web advertising,” in _Proceedings of the ACM conference on Computer and communications security_ , 2012, pp. 674–686.
* [7] K. Subramani, X. Yuan, O. Setayeshfar, P. Vadrevu, K. H. Lee, and R. Perdisci, “When push comes to ads: Measuring the rise of (malicious) push advertising,” in _Proceedings of the ACM Internet Measurement Conference_ , 2020, pp. 724–737.
* [8] I. Ilascu, “Hackers push malware via Google search ads for VLC, 7-Zip, CCleaner,” _BleepingComputer_ , Jan. 2023, https://www.bleepingcomputer.com/news/security/hackers-push-malware-via-google-search-ads-for-vlc-7-zip-ccleaner.
* [9] L. Abrams, “Ransomware access brokers use Google ads to breach your network,” _BleepingComputer_ , Jan. 2023, https://www.bleepingcomputer.com/news/security/ransomware-access-brokers-use-google-ads-to-breach-your-network.
* [10] “MalVirt | .NET Virtualization Thrives in Malvertising Attacks,” https://www.sentinelone.com/labs/malvirt-net-virtualization-thrives-in-malvertising-attacks, 2023, [Accessed 28 Feb. 2023].
* [11] “Now even YouTube serves ads with CPU-draining cryptocurrency miners,” https://arstechnica.com/information-technology/2018/01/now-even-youtube-serves-ads-with-cpu-draining-cryptocurrency-miners/, 2018, [Accessed 28 Feb. 2023].
* [12] “Rogue GIMP Google Ad Pushed Info-Stealer Malware Through Website Replica,” https://www.bitdefender.com/blog/hotforsecurity/rogue-gimp-google-ad-pushed-info-stealer-malware-through-website-replica/, 2022, [Accessed 28 Feb. 2023].
* [13] E. Tekiner, A. Acar, A. S. Uluagac, E. Kirda, and A. A. Selcuk, “SoK: cryptojacking malware,” in _2021 IEEE European Symposium on Security and Privacy (EuroS&P)_ , pp. 120–139.
* [14] K. C. Wilbur and Y. Zhu, “Click fraud,” _Marketing Science_ , vol. 28, no. 2, pp. 293–308, 2009.
* [15] “Estimated cost of digital ad fraud worldwide from 2018 to 2023,” https://www.statista.com/statistics/677466/digital-ad-fraud-cost/, 2023, [Accessed 28 Feb. 2023].
* [16] E. Zeng, T. Kohno, and F. Roesner, “What makes a ‘bad’ ad? User perceptions of problematic online advertising,” in _Proceedings of the CHI Conference on Human Factors in Computing Systems_ , 2021, pp. 1–24.
* [17] J. Zhao, G. Wang, C. Dally, P. Slovak, J. Edbrooke-Childs, M. Van Kleek, and N. Shadbolt, “‘I Make up a Silly Name’: Understanding Children’s Perception of Privacy Risks Online,” in _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ , New York, NY, USA, 2019, pp. 1–13.
* [18] P. Kumar, S. M. Naik, U. R. Devkar, M. Chetty, T. L. Clegg, and J. Vitak, “‘No Telling Passcodes Out Because They’re Private’: Understanding Children’s Mental Models of Privacy and Security Online,” _Proc. ACM Hum.-Comput. Interact._ , vol. 1, no. CSCW, Dec. 2017.
* [19] M.-A. Lawlor and A. Prothero, “Pester power–a battle of wills between children and their parents,” _Journal of Marketing Management_ , vol. 27, no. 5-6, pp. 561–581, 2011.
* [20] D. Kunkel, B. L. Wilcox, J. Cantor, E. Palmer, S. Linn, and P. Dowrick, “Report of the APA task force on advertising and children,” _Washington, DC: American Psychological Association_ , vol. 30, p. 60, 2004.
* [21] D. R. John, “Consumer socialization of children: A retrospective look at twenty-five years of research,” _Journal of consumer research_ , vol. 26, no. 3, pp. 183–213, 1999.
* [22] M. Buijzen, “Reducing children’s susceptibility to commercials: Mechanisms of factual and evaluative advertising interventions,” _Media Psychology_ , vol. 9, no. 2, pp. 411–430, 2007.
* [23] X. Cai and X. Zhao, “Online advertising on popular children’s websites: Structural features and privacy issues,” _Computers in Human Behavior_ , vol. 29, no. 4, pp. 1510–1518, 2013.
* [24] E. Rozendaal, M. Buijzen, and P. Valkenburg, “Comparing children’s and adults’ cognitive advertising competences in the Netherlands,” _Journal of Children and Media_ , vol. 4, no. 1, pp. 77–89, 2010.
* [25] M. Ali, M. Blades, C. Oates, and F. Blumberg, “Young children’s ability to recognize advertisements in web page designs,” _British Journal of Developmental Psychology_ , vol. 27, no. 1, pp. 71–83, 2009.
* [26] V. Le Pochat, T. Van Goethem, S. Tajalizadehkhoob, M. Korczyński, and W. Joosen, “Tranco: A Research-Oriented Top Sites Ranking Hardened Against Manipulation,” in _Proceedings of the 26th Annual Network and Distributed System Security Symposium_ , 2019.
* [27] “Alexa.com (from the Wayback Machine Archive),” https://web.archive.org/web/20210327233130/https://www.alexa.com, 2023, [Accessed 28 Feb. 2023].
* [28] “We retired Alexa.com on May 1, 2022,” https://web.archive.org/web/20221020130106/https://support.alexa.com/hc/en-us/articles/4410503838999, 2023, [Accessed 28 Feb. 2023].
* [29] A. Mathur, G. Acar, M. J. Friedman, E. Lucherini, J. Mayer, M. Chetty, and A. Narayanan, “Dark patterns at scale: Findings from a crawl of 11k shopping websites,” _Proceedings of the ACM on Human-Computer Interaction_ , vol. 3, no. CSCW, pp. 1–32, 2019.
* [30] P. Vallina, V. Le Pochat, Á. Feal, M. Paraschiv, J. Gamba, T. Burke, O. Hohlfeld, J. Tapiador, and N. Vallina-Rodriguez, “Mis-shapes, Mistakes, Misfits: An Analysis of Domain Classification Services,” in _Internet Measurement Conference_ , ser. IMC 2020, pp. 598–618.
* [31] N. Vlajic, M. El Masri, G. M. Riva, M. Barry, and D. Doran, “Online tracking of kids and teens by means of invisible images: COPPA vs. GDPR,” in _Proceedings of the 2nd International Workshop on Multimedia Privacy and Security_ , 2018, pp. 96–103.
* [32] “June/July 2022 crawl archive now available – Common Crawl,” https://commoncrawl.org/2022/07/june-july-2022-crawl-archive-now-available, 2023, [Accessed 28 Feb. 2023].
* [33] “Digital Services Package: Commission welcomes the adoption by the European Parliament of the EU’s new rulebook for digital services,” https://ec.europa.eu/commission/presscorner/detail/en/IP_22_4313, 2022, [Accessed 28 Feb. 2023].
* [34] “State of the Union Address,” https://www.whitehouse.gov/state-of-the-union-2023/, 2023, [Accessed 28 Feb. 2023].
* [35] “Senators Markey and Cassidy Reintroduce COPPA 2.0, Bipartisan Legislation to Protect Online Privacy of Children and Teens,” https://www.markey.senate.gov/news/press-releases/senators-markey-and-cassidy-reintroduce-coppa-20-bipartisan-legislation-to-protect-online-privacy-of-children-and-teens, 2023, [Accessed 28 Feb. 2023].
* [36] FTC, “Children’s Online Privacy Protection Rule (“COPPA”),” https://www.ftc.gov/node/60175, 2023, [Accessed 28 Feb. 2023].
* [37] F. Roesner, T. Kohno, and D. Wetherall, “Detecting and defending against third-party tracking on the web,” in _Presented as part of the 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI 12)_ , 2012, pp. 155–168.
* [38] U. Iqbal, S. Englehardt, and Z. Shafiq, “Fingerprinting the fingerprinters: Learning to detect browser fingerprinting behaviors,” in _2021 IEEE Symposium on Security and Privacy (SP)_. IEEE, 2021, pp. 1143–1161.
* [39] D. Cassel, S.-C. Lin, A. Buraggina, W. Wang, A. Zhang, L. Bauer, H.-C. Hsiao, L. Jia, and T. Libert, “OmniCrawl: Comprehensive Measurement of Web Tracking With Real Desktop and Mobile Browsers,” _Proc. Priv. Enhancing Technol._ , vol. 2022, no. 1, pp. 227–252, 2022.
* [40] S. Dambra, I. Sanchez-Rola, L. Bilge, and D. Balzarotti, “When Sally Met Trackers: Web Tracking From the Users’ Perspective,” in _31st USENIX Security Symposium (USENIX Security 22)_ , 2022, pp. 2189–2206.
* [41] I. Sanchez-Rola, X. Ugarte-Pedrero, I. Santos, and P. G. Bringas, “The web is watching you: A comprehensive review of web-tracking techniques and countermeasures,” _Logic Journal of the IGPL_ , vol. 25, no. 1, pp. 18–29, 2017.
* [42] D. Kristol and L. Montulli, “RFC2965 - HTTP state management mechanism,” Tech. Rep., 2000.
* [43] Q. Chen, P. Ilia, M. Polychronakis, and A. Kapravelos, “Cookie swap party: Abusing first-party cookies for web tracking,” in _Proceedings of the Web Conference_ , 2021, pp. 2117–2129.
* [44] J. R. Mayer and J. C. Mitchell, “Third-party web tracking: Policy and technology,” in _2012 IEEE symposium on security and privacy_. IEEE, 2012, pp. 413–427.
* [45] P. Eckersley, “How unique is your web browser?” in _Privacy Enhancing Technologies: 10th International Symposium, Berlin, Germany, July 21-23, 2010. Proceedings 10_. Springer, 2010, pp. 1–18.
* [46] T. Kohno, A. Broido, and K. C. Claffy, “Remote physical device fingerprinting,” _IEEE Transactions on Dependable and Secure Computing_ , vol. 2, no. 2, pp. 93–108, 2005.
* [47] K. Mowery and H. Shacham, “Pixel perfect: Fingerprinting canvas in HTML5,” _Proceedings of W2SP_ , 2012.
* [48] Y. Cao, S. Li, and E. Wijmans, “(Cross-) browser fingerprinting via os and hardware level features,” in _Proceedings Network and Distributed System Security Symposium_. Internet Society, 2017.
* [49] T. Laor, N. Mehanna, A. Durey, V. Dyadyuk, P. Laperdrix, C. Maurice, Y. Oren, R. Rouvoy, W. Rudametkin, and Y. Yarom, “DRAWNAPART: A Device Identification Technique based on Remote GPU Fingerprinting,” in _Network and Distributed System Security Symposium (NDSS)_ , 2022.
* [50] P. Laperdrix, O. Starov, Q. Chen, A. Kapravelos, and N. Nikiforakis, “Fingerprinting in Style: Detecting Browser Extensions via Injected Style Sheets,” in _USENIX Security Symposium_ , 2021, pp. 2507–2524.
* [51] A. Sjösten, D. Hedin, and A. Sabelfeld, “Essentialfp: Exposing the essence of browser fingerprinting,” in _IEEE European Symposium on Security and Privacy Workshops (EuroS&PW)_ , 2021, pp. 32–48.
* [52] U. Iqbal, P. Snyder, S. Zhu, B. Livshits, Z. Qian, and Z. Shafiq, “Adgraph: A graph-based approach to ad and tracker blocking,” in _IEEE Symposium on Security and Privacy (SP)_ , 2020, pp. 763–776.
* [53] S. Siby, U. Iqbal, S. Englehardt, Z. Shafiq, and C. Troncoso, “WebGraph: Capturing advertising and tracking information flows for robust blocking,” in _31st USENIX Security Symposium (USENIX Security 22)_ , 2022, pp. 2875–2892.
* [54] I. Reyes, P. Wijesekera, J. Reardon, A. Elazari Bar On, A. Razaghpanah, N. Vallina-Rodriguez, S. Egelman _et al._ , ““Won’t somebody think of the children?” examining COPPA compliance at scale,” in _The 18th Privacy Enhancing Technologies Symposium_ , 2018.
* [55] E. Zeng, T. Kohno, and F. Roesner, “Bad news: Clickbait and deceptive ads on news and misinformation websites,” in _Workshop on Technology and Consumer Protection_ , 2020, pp. 1–11.
* [56] E. Zeng, M. Wei, T. Gregersen, T. Kohno, and F. Roesner, “Polls, clickbait, and commemorative $2 bills: problematic political advertising on news and media websites around the 2020 US elections,” in _Proceedings of the 21st ACM Internet Measurement Conference_ , 2021, pp. 507–525.
* [57] O. Akgul, R. Roberts, M. Namara, D. Levin, and M. L. Mazurek, “Investigating Influencer VPN Ads on YouTube,” in _IEEE Symposium on Security and Privacy (SP)_ , 2022, pp. 876–892.
* [58] M. Ali, A. Goetzen, A. Mislove, E. Redmiles, and P. Sapiezynski, “All things unequal: Measuring disparity of potentially harmful ads on Facebook,” in _6th Workshop on Technology and Consumer Protection_ , 2022.
* [59] G. Venkatadri, E. Lucherini, P. Sapiezynski, and A. Mislove, “Investigating sources of PII used in Facebook’s targeted advertising,” _Proc. Priv. Enhancing Technol._ , vol. 2019, no. 1, pp. 227–244, 2019.
* [60] Y. Zhao, T. Liu, H. Wang, Y. Liu, J. Grundy, and L. Li, “Are mobile advertisements in compliance with app’s age group?” in _Proceedings of the ACM Web Conference 2023_ , 2023, pp. 3132–3141.
* [61] T. Medjkoune, O. Goga, and J. Senechal, “Marketing to children through online targeted advertising: Targeting mechanisms and legal aspects,” in _Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security_ , ser. CCS ’23. New York, NY, USA: Association for Computing Machinery, 2023, pp. 180–194. [Online]. Available: https://doi.org/10.1145/3576915.3623172
* [62] A. Andreou, G. Venkatadri, O. Goga, K. P. Gummadi, P. Loiseau, and A. Mislove, “Investigating ad transparency mechanisms in social media: A case study of Facebook’s explanations,” in _NDSS - Network and Distributed System Security Symposium_ , 2018, pp. 1–15.
* [63] M. Eslami, S. R. Krishna Kumaran, C. Sandvig, and K. Karahalios, “Communicating algorithmic process in online behavioral advertising,” in _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ , 2018, pp. 1–13.
* [64] M. B. Musa and R. Nithyanand, “Atom: ad-network tomography,” _Proceedings on Privacy Enhancing Technologies_ , vol. 4, pp. 295–313, 2022.
* [65] B. Liu, A. Sheth, U. Weinsberg, J. Chandrashekar, and R. Govindan, “AdReveal: Improving transparency into online targeted advertising,” in _Proceedings of the Twelfth ACM Workshop on Hot Topics in Networks_ , 2013, pp. 1–7.
* [66] P. Vallina _et al._ , “Advanced Methods to Audit Online Web Services,” Ph.D. dissertation, Universidad Carlos III de Madrid, Spain, 2023.
* [67] G. Storey, D. Reisman, J. Mayer, and A. Narayanan, “The future of ad blocking: An analytical framework and new techniques,” _arXiv preprint arXiv:1705.08568_ , 2017.
* [68] X. Song, Y. Zhu, X. Zeng, and X. Chen, “Hierarchical contaminated web page classification based on meta tag denoising disposal,” _Security and Communication Networks_ , vol. 2021, 2021.
* [69] Q. Zhao, W. Yang, and R. Hua, “Design and research of composite web page classification network based on deep learning,” in _IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)_ , 2019, pp. 1531–1535.
* [70] A. Gupta and R. Bhatia, “Ensemble approach for web page classification,” _Multimedia Tools and Applications_ , vol. 80, no. 16, pp. 25219–25240, 2021.
* [71] F. Aydos, A. M. Özbayoğlu, Y. Şirin, and M. F. Demirci, “Web page classification with Google Image Search results,” _arXiv preprint arXiv:2006.00226_ , 2020.
* [72] D. López-Sánchez, A. G. Arrieta, and J. M. Corchado, “Visual content-based web page categorization with deep transfer learning and metric learning,” _Neurocomputing_ , vol. 338, pp. 418–431, 2019.
* [73] “Infographic: How Many Websites Are There?” https://www.statista.com/chart/19058/number-of-websites-online, 2023, [Accessed 28 Feb. 2023].
* [74] “Member list,” https://www.kidsafeseal.com/certifiedproducts.html, 2023, [Accessed 28 Feb. 2023].
* [75] “Best Websites for Kids,” https://www.commonsensemedia.org/website-lists, 2023, [Accessed 28 Feb. 2023].
* [76] “The list of kids’ websites,” https://kids.kaspersky.com/kids-website-list, 2023, [Accessed 28 Feb. 2023].
* [77] “Contributors,” https://support.virustotal.com/hc/en-us/articles/115002146809-Contributors, 2023, [Accessed 28 Feb. 2023].
* [78] “Adding Rank Magnitude to the CrUX Report in BigQuery,” https://developers.google.com/web/updates/2021/03/crux-rank-magnitude, 2021, [Accessed 28 Feb. 2023].
* [79] “Fast and reliable end-to-end testing for modern web apps | Playwright,” https://playwright.dev, 2023, [Accessed 28 Feb. 2023].
* [80] N. Reimers and I. Gurevych, “Sentence Embeddings using Siamese BERT-Networks,” _arXiv preprint arXiv:1908.10084_ , 2019.
* [81] “sentence-transformers/all-mpnet-base-v2 · Hugging Face,” https://huggingface.co/sentence-transformers/all-mpnet-base-v2, 2023, [Accessed 28 Feb. 2023].
* [82] K. Song, X. Tan, T. Qin, J. Lu, and T.-Y. Liu, “Mpnet: Masked and permuted pre-training for language understanding,” _Advances in Neural Information Processing Systems_ , vol. 33, pp. 16857–16867, 2020.
* [83] N. Reimers and I. Gurevych, “Making monolingual sentence embeddings multilingual using knowledge distillation,” _arXiv preprint arXiv:2004.09813_ , 2020.
* [84] Hugging Face, “trainer · Hugging Face,” https://huggingface.co/docs/transformers/main_classes/trainer, 2023, [Accessed 28 Feb. 2023].
* [85] ——, “AutoModels,” https://huggingface.co/transformers/v3.0.2/model_doc/auto.html, 2023, [Accessed 28 Feb. 2023].
* [86] Ray Project (Anyscale), “Hyperparameter Search with Transformers and Ray Tune,” https://huggingface.co/blog/ray-tune, 2023, [Accessed 28 Feb. 2023].
* [87] The Ray Team, “Tune: Scalable Hyperparameter Tuning — Ray 2.2.0,” https://docs.ray.io/en/latest/tune/, 2023, [Accessed 28 Feb. 2023].
* [88] M. Jaderberg, V. Dalibard, S. Osindero, W. M. Czarnecki, J. Donahue, A. Razavi, O. Vinyals, T. Green, I. Dunning, K. Simonyan _et al._ , “Population based training of neural networks,” _arXiv preprint arXiv:1711.09846_ , 2017.
* [89] A. Stolerman, R. Overdorf, S. Afroz, and R. Greenstadt, “Classify, but verify: Breaking the closed-world assumption in stylometric authorship attribution,” in _IFIP Working Group_ , vol. 11, 2013, p. 64.
* [90] M. Juarez, S. Afroz, G. Acar, C. Diaz, and R. Greenstadt, “A critical evaluation of website fingerprinting attacks,” in _Proceedings of the ACM SIGSAC Conference on Computer and Communications Security_ , 2014, pp. 263–274.
* [91] scikit-learn developers, “Scikit-learn: Machine learning in Python,” https://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html, 2011–, version 1.0.2 [Accessed 28 Feb. 2023].
* [92] “DNS0 Kids - A childproof version of the Internet,” https://www.dns0.eu/kids, 2023, [Accessed 4 May 2023].
* [93] “Tracker Radar Collector,” https://github.com/duckduckgo/tracker-radar-collector, 2023, [Accessed 28 Feb. 2023].
* [94] puppeteer, “Puppeteer,” https://github.com/puppeteer/puppeteer, 2023, [Accessed 28 Feb. 2023].
* [95] DuckDuckGo, “notABot.js,” https://github.com/duckduckgo/tracker-radar-collector/blob/main/helpers/notABot.js, 2023, [Accessed 3 Aug. 2023].
* [96] “Early browser API accesses and function calls are missed,” https://github.com/duckduckgo/tracker-radar-collector/issues/77, 2023, [Accessed 2 Aug. 2023].
* [97] “Browserleaks - Check your browser for privacy leaks,” https://browserleaks.com, 2023, [Accessed 3 Aug. 2023].
* [98] “@gorhill/ubo-core,” https://www.npmjs.com/package/@gorhill/ubo-core, 2023, [Accessed 28 Feb. 2023].
* [99] R. Hill and N. Rolls, “uBlock Origin,” https://ublockorigin.com/, [Accessed 28 Feb. 2023].
* [100] ——, “uBlock,” https://github.com/gorhill/uBlock/blob/491bc87e94a503a17fd11cdee35c1f1b6fea24be/platform/mv3/make-rulesets.js#L1285-L1296, 2023, [Accessed 28 Feb. 2023].
* [101] “entity_map.json - DuckDuckGo Tracker Radar,” https://github.com/duckduckgo/tracker-radar/blob/main/build-data/generated/entity_map.json, [Accessed 28 Feb. 2023].
* [102] T. Urban, M. Degeling, T. Holz, and N. Pohlmann, “Beyond the front page: Measuring third party dynamics in the field,” in _Proceedings of The Web Conference 2020_ , 2020, pp. 1275–1286.
* [103] W. Aqeel, B. Chandrasekaran, A. Feldmann, and B. M. Maggs, “On Landing and Internal Web Pages: The Strange Case of Jekyll and Hyde in Web Performance Measurement,” in _Proceedings of the ACM Internet Measurement Conference_ , 2020, pp. 680–695.
* [104] A. Rasaii, S. Singh, D. Gosain, and O. Gasser, “Exploring the cookieverse: A multi-perspective analysis of web cookies,” in _International Conference on Passive and Active Network Measurement_. Springer, 2023, pp. 623–651.
* [105] DuckDuckGo, “autoconsent,” https://github.com/duckduckgo/autoconsent, 2023, [Accessed 28 Feb. 2023].
* [106] “Web Tracking Protections — DuckDuckGo Help Pages,” https://help.duckduckgo.com/duckduckgo-help-pages/privacy/web-tracking-protections/#cookie-pop-up-management, 2023, [Accessed 3 May 2023].
* [107] R. Bagge, C. Matte, É. Daspet, K. Emanuel, S. Macbeth, and S. Roeland, “Consent-O-Matic,” https://github.com/cavi-au/Consent-O-Matic/, 2019, [Accessed 28 Feb. 2023].
* [108] M. Nouwens, I. Liccardi, M. Veale, D. Karger, and L. Kagal, “Dark Patterns after the GDPR: Scraping Consent Pop-Ups and Demonstrating Their Influence,” in _CHI Conference on Human Factors in Computing Systems_ , 2020, pp. 1–13.
* [109] The EasyList authors, https://easylist.to/easylist/easylist.txt, 2023, [Accessed 28 Feb. 2023].
* [110] “Transparency and ad disclosures - Google Ads Help,” https://support.google.com/google-ads/answer/9729263?hl=en, 2023, [Accessed 28 Feb. 2023].
* [111] L. Gak, S. Olojo, and N. Salehi, “The Distressing Ads That Persist: Uncovering The Harms of Targeted Weight-Loss Ads Among Users with Histories of Disordered Eating,” _Proc. ACM Hum.-Comput. Interact._ , vol. 6, no. CSCW2, Nov. 2022.
* [112] A. J. Campbell, “Rethinking children’s advertising policies for the digital age,” _Loy. Consumer L. Rev._ , vol. 29, p. 1, 2016.
* [113] “Age-restricted ads online,” https://www.asa.org.uk/static/44dc1935-0766-4378-91171e6954ae560a/Age-restricted-ads-online-targeting-guidance.pdf, 2023, [Accessed 28 Feb. 2023].
* [114] “Ads & Made for Kids content,” https://support.google.com/adspolicy/answer/9683742?hl=en, 2023, [Accessed 1 Mar. 2023].
* [115] “Restricted Content, Products, and Services,” https://help.taboola.com/hc/en-us/articles/115000220793-Restricted-Content-Products-and-Services, 2023, [Accessed 1 Mar. 2023].
* [116] Google Inc., “Google Cloud Vision API’s SafeSearch Detection,” https://cloud.google.com/vision/docs/detecting-safe-search, 2021.
* [117] Microsoft, “Content Moderator - Image Moderation,” https://docs.microsoft.com/en-us/azure/cognitive-services/content-moderator/image-moderation-api, 2021.
* [118] M. Grootendorst, “Bertopic: Neural topic modeling with a class-based tf-idf procedure,” _arXiv preprint arXiv:2203.05794_ , 2022.
* [119] “IngoSearch,” https://ingosearch.com, 2023, [Accessed 4 May 2023].
* [120] “BetterMe,” https://betterme.world, 2023, [Accessed 4 May 2023].
* [121] “An unhealthy diet of targeted ads: an investigation into how the diet industry exploits our data,” https://privacyinternational.org/long-read/4603/unhealthy-diet-targeted-ads-investigation-how-diet-industry-exploits-our-data, 2021, [Accessed 5 May 2023].
* [122] “Eis.de Is Now Germany’s Biggest Adult Retailer,” https://www.venus-adult-news.com/en/web-tech/e-commerce/eis-de-is-now-germanys-biggest-adult-retailer, 2023, [Accessed 3 Aug. 2023].
* [123] S. Bell, “How URL Tracking Systems are Abused for Phishing,” _Security Boulevard_ , 2020, https://securityboulevard.com/2020/10/how-url-tracking-systems-are-abused-for-phishing.
* [124] “Pixel for Marketing API - Meta Pixel - Documentation - Meta for Developers,” https://developers.facebook.com/docs/meta-pixel/implementation/marketing-api, Aug. 2023, [Accessed 2 Aug. 2023].
* [125] “LinkedIn Insight Tag | LinkedIn Marketing Solutions,” https://business.linkedin.com/marketing-solutions/insight-tag, Aug. 2023, [Accessed 3 Aug. 2023].
* [126] “Advanced Store,” https://www.advanced-store.com/en/, [Accessed 24 Feb. 2023].
* [127] “Solutions,” https://throtle.io/solutions, 2021, [Accessed 31 Jul. 2023].
* [128] “Stripe | Payment Processing Platform for the Internet,” https://stripe.com/en-nl, 2023, [Accessed 28 Feb. 2023].
* [129] “Join the Smart Affiliate Marketing Network | WEBGAINS,” https://www.webgains.com/public/en, 2023, [Accessed 28 Feb. 2023].
* [130] European Data Protection Board, “Binding Decision 4/2022 on the dispute submitted by the Irish SA on Meta Platforms Ireland Limited and its Instagram service (Art. 65 GDPR),” https://edpb.europa.eu/our-work-tools/our-documents/binding-decision-board-art-65/binding-decision-42022-dispute-submitted_en, 2022, [Accessed 28 Feb. 2023].
* [131] ePrivacy Directive, “Directive 2002/58/EC of the European Parliament and of the Council,” http://data.europa.eu/eli/dir/2002/58/2009-12-19, 2009, [Accessed 28 Feb. 2023].
* [132] I. Milkaite and E. Lievens, “Status quo regarding the child’s article 8 GDPR age of consent for data processing across the EU,” _BIK PORTAL_ , no. 20/12/2019, 2019.
* [133] European Data Protection Board, “Guidelines 05/2020 on consent under Regulation 2016/679,” https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf, 2020, [Accessed 28 Feb. 2023].
* [134] “Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act),” http://data.europa.eu/eli/reg/2022/2065/oj, 2022, [Accessed 28 Feb. 2023].
* [135] European Commission, “Digital Services Act: Commission designates first set of Very Large Online Platforms and Search Engines,” https://ec.europa.eu/commission/presscorner/detail/en/IP_23_2413, 2023.
* [136] FTC, “Complying with COPPA: Frequently Asked Questions,” https://www.ftc.gov/node/60394, 2020, [Accessed 28 Feb. 2023].
* [137] ——, “Fortnite Video Game Maker Epic Games to Pay More Than Half a Billion Dollars over FTC Allegations of Privacy Violations and Unwanted Charges,” https://www.ftc.gov/node/80135, 2022, [Accessed 28 Feb. 2023].
* [138] “Digital Advertising - Worldwide | Statista Market Forecast,” https://www.statista.com/outlook/dmo/digital-advertising/worldwide, 2023, [Accessed 28 Feb. 2023].
* [139] “5Rights,” https://5rightsfoundation.com, Nov. 2023, [Accessed 29 Nov. 2023].
* [140] D. Zeber, S. Bird, C. Oliveira, W. Rudametkin, I. Segall, F. Wollsén, and M. Lopatka, “The representativeness of automated web crawls as a surrogate for human browsing,” in _Proceedings of The Web Conference 2020_ , 2020, pp. 167–178.
* [141] J. Jueckstock, S. Sarker, P. Snyder, A. Beggs, P. Papadopoulos, M. Varvello, B. Livshits, and A. Kapravelos, “Towards realistic and reproducible web crawl measurements,” in _Proceedings of the Web Conference 2021_ , 2021, pp. 80–91.
* [142] M. L. McHugh, “Interrater reliability: the kappa statistic,” _Biochemia Medica_ , vol. 22, no. 3, pp. 276–282, 2012.

## Appendix A Criteria for labeling child-directed websites

To identify a child-directed website, we manually visit and review its design, content, and policies, including necessary translations to English. A site is labeled as child-directed if any of the following conditions are met:

* • Does the website include content, activities, or games that can be used by children?
* • Does the website promote products (e.g., apps, sites, books, videos, workshops, animations, etc.) designed for and usable by children online?
* • Does the website include content or promote products whose end users are children, but children’s parents must first subscribe or register?

A site is not child-directed if one of the following is true:

* • The website redirects to another page that is not child-directed.
* • The website features children-related products intended for adult use, such as by parents or teachers.
* • The website is generally appealing to adults (e.g., news or academic websites).

### A.1 Manual Verification of the Classifier Output

Two researchers manually labeled a total of 2,500 websites detected as child-directed by the classifier. Initially, both researchers jointly labeled a 50-website sample, reaching agreement on 45 of these decisions (Cohen’s kappa = 0.79 [142]). The remaining websites were divided into two equal batches and labeled individually by the two researchers. This process took approximately one person-week to complete. We followed the criteria for identifying child-directed websites (Appendix A) and considered four potential labels for each website: child-directed, child-related, non-children, and unknown. The majority of the websites (64%) were labeled as child-directed, while 23% were identified as child-related, indicating they are relevant to children but primarily intended for parents or teachers. About 4.5% of the websites were inaccurately classified as non-children’s sites, such as entertainment bands, academic forums, and movie streaming services, mostly due to vague or brief titles and descriptions. In some cases, discerning whether websites were targeted at children, parents, or teachers was challenging, leading to 5.5% being labeled as ‘unknown’ due to uncertainty about their target audience. Of the misclassified websites, we found that four were adult entertainment websites (0.16%) that had very short metadata fields mentioning words such as “teens”, “cartoon”, and “animations”, which likely caused the misclassification.
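The inter-rater agreement reported above can be computed directly from the two label sequences; a sketch with toy labels (the real sample had 50 websites and the four label classes listed above):

```python
from sklearn.metrics import cohen_kappa_score

# One label per website from each researcher (toy stand-ins).
researcher_a = ["child-directed", "child-directed", "child-related",
                "non-children", "child-directed", "unknown"]
researcher_b = ["child-directed", "child-related", "child-related",
                "non-children", "child-directed", "unknown"]

print(f"Cohen's kappa: {cohen_kappa_score(researcher_a, researcher_b):.2f}")
```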
### A.2 Ad Transparency Statements

The following ad transparency statements are used to classify advertisements as targeted or non-targeted. Note that the targeted categories also include retargeting and behavioral ads. The statements are compiled from Google’s and Criteo’s ad disclosure interfaces, reached via the AdChoices icon. When searching for the statements, we use exact, case-insensitive matching.

Targeted:

* • Google’s estimation of your interests
* • Websites you’ve visited
* • Your similarity to groups of people the advertiser is trying to reach
* • Your activity on Google on this device
* • According to your activity on this device
* • You have enabled ad personalization
* • Information collected by the publisher. The publisher partners with Google to show ads
* • Google’s estimation of the languages you know, based on your activity on this device
* • Your visit to the advertiser’s website or app
* • The advertiser’s interest in reaching new customers who haven’t bought something from them before

Non-Targeted:

* • Ad personalization is turned off
* • You have turned off ad personalization
* • Ad personalization is off
* • The time of day or your general location
* • Google shows ads based on general factors like the time of day and the info on a page, our policies, and your ad personalization settings
* • The information on the website you were viewing
* • General factors about the placement of the ad

### A.3 Detecting failed or errored visits

We classified a visit as failed under any of the following conditions: if the first request elicited a 4XX or 5XX error; if the size of the first non-3XX response (the root document) was less than 512 bytes, following Le Pochat et al. [26]; or if there was no successful (200 OK) response.
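These criteria translate directly into a small predicate over a visit’s ordered HTTP responses; a minimal sketch (the response-record structure is illustrative):

```python
def visit_failed(responses) -> bool:
    """Apply the failed-visit criteria of Appendix A.3 to an ordered list
    of (status_code, body_size_bytes) tuples recorded during a visit."""
    if not responses:
        return True
    first_status, _ = responses[0]
    if 400 <= first_status < 600:    # first request elicited a 4XX/5XX error
        return True
    for status, size in responses:
        if not 300 <= status < 400:  # first non-3XX response = root document
            if size < 512:           # suspiciously small root document
                return True
            break
    # Failed if there was no successful (200 OK) response at all.
    return not any(status == 200 for status, _ in responses)
```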
# L2 PROFICIENCY ASSESSMENT USING SELF-SUPERVISED SPEECH REPRESENTATIONS

###### Abstract

There has been a growing demand for automated spoken language assessment systems in recent years. A standard pipeline for this process is to start with a speech recognition system and derive features, either hand-crafted or based on deep learning, that exploit the transcription and audio. Though these approaches can yield high-performance systems, they require speech recognition systems that can be used for L2 speakers, and preferably tuned to the specific form of test being deployed. Recently a self-supervised speech representation-based scheme, requiring no speech recognition, was proposed. This work extends the initial analysis conducted on this approach to a large-scale proficiency test, Linguaskill, which comprises multiple parts, each designed to assess different attributes of a candidate’s speaking proficiency. The performance of the self-supervised, wav2vec 2.0, system is compared to a high-performance hand-crafted assessment system and a BERT-based text system, both of which use speech transcriptions. Though the wav2vec 2.0-based system is found to be sensitive to the nature of the response, it can be configured to yield comparable performance to systems requiring a speech transcription, and yields gains when appropriately combined with standard approaches.

Index Terms— automatic assessment of spoken language proficiency, computer-assisted language learning

## 1 Introduction

In recent years, the growing number of learners of English as a second language (L2) on a global scale has created an increasing demand for automated spoken language assessment systems, for applications in Computer-Assisted Language Learning (CALL) in private practice and classroom situations and to certify language exams. One of the compelling reasons for automatic assessment is the need to evaluate and provide feedback to increasing numbers of learners and return results in a timely manner. Secondly, compared to human graders, not only can automatic systems ensure greater speed, but they can do it at a lower cost, since the recruitment and training of new human experts is expensive and can offer only a small increase in performance. Finally, the use of automatic assessment methods can enhance the consistency, reliability, and objectivity of scoring, since machines are not susceptible to rater effects and - more simply - to tiredness [1].

Automated systems for assessing L2 speaking proficiency typically receive sequential data from a learner as input to predict their grade or level with respect to overall proficiency or specific facets of proficiency. This input data may consist - as needed - of phones, recognised words, acoustic features, or other information derived directly from audio or from automatic speech recognition (ASR) transcriptions. In most cases, sets of hand-crafted features related to specific aspects of proficiency, such as fluency [2], pronunciation [3], prosody [4] and text complexity [5], are extracted and fed into graders to predict analytic grades related to those specific aspects. An alternative but similar approach consists of concatenating multiple hand-crafted features related to multiple aspects of proficiency in order to obtain overall feature sets, which are subsequently projected through graders to predict holistic grades, as shown in [6, 7, 8, 9].
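As a toy illustration of this feature-concatenation approach (with random stand-in features; the cited systems use purpose-built features and graders):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Stand-in per-aspect feature blocks for 100 learners.
fluency = rng.random((100, 4))    # e.g. speaking rate, pause statistics
pron    = rng.random((100, 3))    # e.g. phone-level goodness scores
prosody = rng.random((100, 2))    # e.g. F0 mean/variance
grades  = rng.uniform(1, 6, 100)  # holistic grades on a 1-6 scale

# Concatenate the aspect features into one overall feature set and
# project it through a grader to predict holistic grades.
X = np.hstack([fluency, pron, prosody])
grader = GradientBoostingRegressor().fit(X, grades)
print(grader.predict(X[:3]))
```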
The efficacy of hand-crafted features for scoring either overall proficiency or individual aspects heavily relies on their specific underlying assumptions, and salient information about proficiency may risk being discarded. For holistic grading, this limitation has been addressed by substituting hand-crafted features with automatically derived features, either through end-to-end systems [10] or in multiple stages [11, 12]. Other studies have investigated the use of graders that are trained on holistic scores but are defined with both their inputs and structure adapted to target particular aspects of proficiency, such as pronunciation [13], rhythm [14] and text [15, 16]. In these cases, a possible limitation might be the absence of information concerning facets of proficiency that are not present in the input data fed to the grader, although we have shown in a previous study that it is possible to combine multiple graders targeting different aspects of proficiency [17]. Such a limitation is particularly evident in systems using ASR transcriptions, since a) they will contain errors and so will not provide faithful verbatim transcriptions of a learner’s performance, thus failing to render its content appropriately; and b) transcriptions do not contain any information relating to the message realisation, such as fluency, pronunciation, intonation, rhythm, or prosody. (Some information about pronunciation can be obtained from the performance of the ASR system and associated confidence scores, whereas hesitation markers, truncated words, and other disfluencies, if available from the ASR transcript, might be considered proxies of fluency.) Nevertheless, transcriptions represent a valuable resource for highly specific tasks in CALL applications, e.g., for spoken grammatical error correction and feedback, as we have shown in [18].

In this paper, in order to tackle these issues and address these limitations, we propose an approach based on self-supervised speech representations using wav2vec 2.0 [19]. Recent studies have shown that self-supervised learning (SSL) is an effective approach in various downstream tasks of speech processing applications, such as ASR, keyword spotting, emotion recognition, intent classification, speaker identification, and speaker diarisation [19, 20]. In these studies, contextual representations were applied by means of pre-trained models. In particular, it has been demonstrated that these models are able to capture a vast range of speech-related features and linguistic information, such as audio, fluency, suprasegmental pronunciation, and even semantic and syntactic text-based features for L1, L2, read and spontaneous speech [21]. In the field of CALL, SSL has been investigated for mispronunciation detection and diagnosis [22, 23, 24] and automatic pronunciation assessment [25]. To the best of our knowledge, our previous work [26] was the first to propose the use of SSL for holistic and analytic proficiency assessment. Nevertheless, this study had two limitations: a) the relatively small amount of data used in the experiments, and b) the comparison with a BERT-based [27] baseline only, which, although fed with manual transcriptions containing hesitations, false starts, and truncated words, did not consider purely acoustic features, thus potentially missing strictly speech-related aspects of proficiency.
To address these limitations, in the present contribution, we conduct proficiency assessment experiments on a large amount of L2 learner data, comparing the performance of wav2vec 2.0 to two types of grader: a BERT-based grader and a standard grader fed with a set of hand-crafted features covering different facets of proficiency (see Figure 1). In addition to this, we test the effectiveness of wav2vec 2.0 on a multi-part examination, predicting both the overall grades and the individual grades of each part of the exam. Furthermore, we investigate various combinations of the wav2vec2-based, the BERT-based, and the standard graders. In Section 2, we describe the data used in our experiments. Section 3 presents the model architectures and the combinations considered in our study. Section 4 illustrates the results of our experiments. Finally, in Section 5, we discuss and analyse the results.

Fig. 1: The three systems considered in this study: a) standard grader, b) BERT-based grader, and c) wav2vec2-based grader.

## 2 DATA

The data used in our study are obtained from candidate responses to the spoken parts of the Linguaskill examinations for L2 learners of English, provided by Cambridge English Language Assessment [28]. The Linguaskill speaking exam is divided into five parts. In Part 1, the candidates answer eight questions about themselves, of which the first two are not marked. The answers last about 10 or 20 seconds. Part 2 consists of a reading aloud activity including eight sentences of 10 seconds each. Part 3 and Part 4 test the candidates’ ability to deliver a long turn, speaking for up to one minute. In the former, the candidates talk about a given topic, whereas in the latter they describe one or more graphics, such as charts, diagrams, or information sheets. Finally, in Part 5, test-takers give their opinions in the form of responses of about 20 seconds to five questions related to a given topic. Since each part contributes 20% to the speaking exam, the overall grade is calculated as the average of the grades assigned to the five parts. Each speaker is graded on a scale from 1 to 6 based on the proficiency scales of the Common European Framework of Reference (CEFR) for languages [29], i.e., A1, A2, B1, B2, C1, and C2.

Non-overlapping datasets of 31,475 and 1,033 speakers are used as the training and development/calibration sets, respectively. For evaluation, we consider two test sets, LinGen and LinBus, of 1,049 and 712 speakers, respectively. LinGen includes learners’ responses to questions on General English, while LinBus contains answers to questions on Business English. Each test set features around 30 L1s and is balanced for gender and proficiency level.

The text transcriptions for training and test used by the standard and BERT-based graders are generated using a non-native English ASR system. A Kaldi-based system with a TDNN-F acoustic model and a trigram language model, similar to that in [30], is used, with an average WER of $\sim$20%.

## 3 MODEL ARCHITECTURES

Wav2vec2-based graders: in wav2vec 2.0, speech audio is encoded through a multilayer convolutional neural network (CNN). After encoding, masking is applied to spans of the resulting latent representations. Subsequently, these are fed into a transformer to build contextualised representations. Gumbel softmax is used to calculate the contrastive loss on which the model is trained, and speech representations are learned from this training.
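A minimal sketch of how such contextualised representations can be pooled and regressed onto a grade, assuming the HuggingFace transformers API and the generic base checkpoint for illustration (the head mirrors the Part 1/Part 5 configuration detailed next; the ReLU activation is our illustrative choice):

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")

# Freeze the CNN feature extractor; only the transformer is fine-tuned.
for p in model.feature_extractor.parameters():
    p.requires_grad = False

waveform = torch.randn(16000 * 5).numpy()  # stand-in for 5 s of 16 kHz audio
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")

hidden = model(**inputs).last_hidden_state  # (batch, frames, 768)
pooled = hidden.mean(dim=1)                 # mean pooling over time: 3D -> 2D

# Regression head in the spirit of the Part 1/5 grader, trained with MSE.
head = torch.nn.Sequential(
    torch.nn.Linear(768, 768),
    torch.nn.ReLU(),
    torch.nn.Dropout(0.1),
    torch.nn.Linear(768, 1),
)
grade = head(pooled)  # scalar proficiency score (untrained here)
```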
For our experiments, we initialised the model configuration and processor from a version provided by HuggingFace [31] (huggingface.co/patrickvonplaten/wav2vec2-base). After the learners’ answers are fed into the model, wav2vec 2.0 provides contextualised representations. To handle representations of various audio lengths, we employed a mean pooling method to collapse the 3D representations into 2D representations, which are finally passed through a regression head [26]. Since we trained a grader for each part of the exam, after trying different architectures, for Part 1 and Part 5 we used a regression head composed of a layer of 768 units, a Dropout layer, and the output layer, whereas, for Part 2, Part 3 and Part 4, we used a deeper architecture, consisting of a stack of three layers of 768 units, a Dropout layer, a layer of 128 units, and, finally, the output layer. The graders use mean squared error (MSE) as the loss function. Training uses the AdamW optimiser [32], and the hyperparameters vary for each part. For Part 1, we used batch size 16, gradient accumulation step 2, dropout rate 0.1, and learning rate 5e-5, and we trained the grader for 2 epochs. For Part 2, we used batch size 16, gradient accumulation step 2, dropout rate 0.5, and learning rate 1e-6, and we trained the grader for 3 epochs. For Part 3 and Part 4, the hyperparameters are the same: we set the batch size to 8, gradient accumulation step to 4, dropout rate to 0.5, and learning rate to 1e-5, and we trained the graders for 2 epochs. Finally, the grader for Part 5 has batch size set to 8, gradient accumulation step to 2, dropout rate to 0.1, and learning rate to 5e-5, and we trained it for 1 epoch.

As noted above, the first part of wav2vec 2.0 consists of a stack of CNN layers that are used to obtain acoustically meaningful - but contextually independent - features from the raw speech audio signal. This part of the model has already been sufficiently trained during the pre-training stage and does not need fine-tuning. Therefore, for our experiments we froze the parameters of the feature extractor.

BERT-based graders: for comparison, we use the text grader presented in [16], which consists of an LSTM with attention over its hidden representation. The inputs are word embeddings obtained by passing the words of each utterance through a trained BERT language model [27].

Standard graders: we also compare our SSL approach to a standard grader, a Deep Density Network (DDN) trained on a set of hand-crafted features designed to cover all the different aspects of proficiency, as described in [33]. These features include grade-dependent language model and word-level statistics; statistics of phone duration; statistics to capture rhythm; fluency metrics; and fundamental frequency statistics to represent intonation. More detailed information about the features employed can be found in [8].

For all graders, the predictions are the result of ensembles. Further information about the ensemble approach can be found in [34]. Systems are calibrated and, in a final set of experiments, combined using a linear combination:

${\hat{y}}^{(n)}=\beta_{0}+\sum_{p\in{\cal P}}\beta_{p}{\hat{y}}^{(n)}_{p}$ (1)

where ${\cal P}$ is the set of parts to combine, which may come from multiple systems, and $\beta_{p}$ are the coefficients associated with the parts.
For the baseline submission performance, the values of $\beta_{p}$, $p>0$, are all constrained to be equal to 0.2, yielding the simple averaging consistent with the combination of operational examiner scores. Ordinary Least Squares (OLS) estimation on the development/calibration set is used to find the values of $\beta_{p}$ when unequal weighting is used. For the evaluation of the grading systems at the per-part level, we use root-mean-square error (RMSE), whilst further comparisons also include Pearson’s correlation coefficient (PCC), Spearman’s rank correlation coefficient (SRC), and the percentage of the predicted scores that lie within 0.5 (i.e., within half a grade) ($\%\leq{}0.5$) and within 1.0 (i.e., within one grade) ($\%\leq{}1.0$) of the actual score.

## 4 EXPERIMENTAL RESULTS AND ANALYSIS

Part-Level Performance: we start our series of experiments with grading each of the five parts of the exam. For this part of our analysis we only consider LinGen. Table 1 reports the results of the three grading systems in terms of RMSE.

Model | P1 | P2 | P3 | P4 | P5
---|---|---|---|---|---
std (${\tt s_{d}}$) | 0.625 | 0.662 | 0.671 | 0.686 | 0.633
BERT (${\tt b_{t}}$) | 0.628 | 0.683 | 0.681 | 0.694 | 0.629
wav2vec2 (${\tt w_{v}}$) | 0.601 | 0.827 | 0.845 | 0.845 | 0.674

Table 1: RMSE results on the five parts of the LinGen exam.

We can see that the wav2vec 2.0 performance varies across the parts, with RMSE close to or better than the other two graders on Parts 1 and 5 and lower performance on Parts 2, 3 and 4. This appears to be due to the nature of the responses required for different parts. Parts 1 and 5 consist of several short spontaneous answers, whereas Parts 3 and 4 are also composed of spontaneous speech but of a single, longer and more complex response in each case. The lower wav2vec 2.0 performance may be due to our use of a mean pooling method, which may be giving too compressed a representation for longer utterances. Part 2, by contrast, is similar in length to Part 1 but consists of read speech responses. This part mainly targets pronunciation and fluency skills at the expense of content-related aspects of proficiency, so the wav2vec 2.0 system might have been expected to do well. Its higher RMSE might be due to the absence of information related to the reference text read by the test-takers, which is present in the other two grading systems. It is noticeable that the standard grader, which covers all aspects of a candidate’s speech, performs the best, with the BERT grader, which cannot measure pronunciation or prosody, slightly behind.

Submission-Level Performance: the second part of our experiments is focused on the overall grades of the Linguaskill exam, i.e., the average of the grades assigned to the five parts. Table 2 shows the results of the three grading systems both on LinGen and LinBus.

LinGen:
Model | PCC | SRC | RMSE | %$\leq$0.5 | %$\leq$1.0
---|---|---|---|---|---
${\tt s_{d}}$ | 0.932 | 0.937 | 0.383 | 81.5 | 98.6
${\tt b_{t}}$ | 0.929 | 0.934 | 0.395 | 80.3 | 98.5
${\tt w_{v}}$ | 0.908 | 0.931 | 0.455 | 73.3 | 97.3

LinBus:
Model | PCC | SRC | RMSE | %$\leq$0.5 | %$\leq$1.0
---|---|---|---|---|---
${\tt s_{d}}$ | 0.911 | 0.918 | 0.416 | 76.5 | 98.3
${\tt b_{t}}$ | 0.920 | 0.925 | 0.398 | 80.1 | 99.2
${\tt w_{v}}$ | 0.893 | 0.911 | 0.446 | 72.1 | 97.9

Table 2: Submission-level performance on LinGen and LinBus.

They all achieve good results across all metrics, with the standard grader and the BERT-based grader performing moderately better than the wav2vec2-based grading system.
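The calibration and scoring pipeline underlying these tables can be sketched as follows; this is a rough NumPy/SciPy illustration with made-up variable names, not the authors' implementation.

```python
# Rough sketch of calibration and evaluation: per-part predictions are
# combined with Eq. (1), where the beta coefficients are either fixed to 0.2
# (baseline averaging) or estimated by OLS on the calibration set.
import numpy as np
from scipy import stats

def fit_beta_ols(parts_dev: np.ndarray, y_dev: np.ndarray) -> np.ndarray:
    """OLS estimate of (beta_0, beta_1, ..., beta_P).
    parts_dev: (N, P) per-part predictions; y_dev: (N,) reference grades."""
    X = np.hstack([np.ones((len(y_dev), 1)), parts_dev])
    beta, *_ = np.linalg.lstsq(X, y_dev, rcond=None)
    return beta

def combine(parts: np.ndarray, beta: np.ndarray) -> np.ndarray:
    return beta[0] + parts @ beta[1:]  # Eq. (1)

def score(y_hat: np.ndarray, y_ref: np.ndarray) -> dict:
    err = y_hat - y_ref
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "PCC": float(stats.pearsonr(y_hat, y_ref)[0]),
        "SRC": float(stats.spearmanr(y_hat, y_ref)[0]),
        "%<=0.5": float(np.mean(np.abs(err) <= 0.5) * 100),
        "%<=1.0": float(np.mean(np.abs(err) <= 1.0) * 100),
    }
```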
As regards the BERT-based grader specifically, we had already observed analogous trends in [16] and [17]. This is quite significant, since it highlights the importance of content-related aspects of speaking proficiency, which is far from being a mere surrogate of fluency and pronunciation. Furthermore, it appears that the standard grading system outperforms the other two graders on LinGen, but performs slightly worse than the BERT-based grader on LinBus. This might be ascribable to the different language models, since the former test set contains questions on General English, whereas the latter includes questions on Business English, which is typically more specific and complex. Figure 2 shows that the wav2vec2-based grader can discriminate between lower levels of proficiency, but it is clearly not able to distinguish between the highest levels, as its maximum prediction is 4.6, i.e., between grades B2 and C1 (the plots for LinBus show a similar trend). Our hypothesis is that higher-level assessment tends to be more dependent on message construction (what is said) than on message realisation (how it is said), and wav2vec 2.0 does not have actual knowledge of words.

Combinations: as a preliminary step, we investigated combinations of the grading systems by calculating their simple average and by using a multiple linear regression model fit with the submission-level predictions, but they did not provide significant gains. Therefore, we investigated the application of a multiple linear regression model using the per-part predictions as predictors, for each individual grading system and for four combinations of them. The results on LinGen and LinBus are reported in Table 4. The combinations show performances that are aligned with or better than the individual models across all metrics, with the combination including all three grading systems achieving the best results. This combination also overcomes the wav2vec2-based grader's difficulty in scoring higher levels, as can be seen in Figure 3a. Table 3 reports the $\beta$ coefficients of the individual models described in Table 4 and of the combination of all three. In the standard grader, as well as in the BERT-based grader, Parts 1, 2 and 5 affect the linear model most, whereas in the wav2vec2-based grader Parts 1 and 5 are the most influential. In their combination, it appears that the highest $\beta$ coefficients correspond to Parts 1 and 5 of the wav2vec2-based grader and Part 2 of the BERT-based and the standard graders. These values seem to confirm the RMSE results shown in Table 1.

Fig. 2: Reference vs predicted scores for the standard (a) and wav2vec2-based (b) graders on LinGen.
Fig. 3: Reference vs predicted scores for the combined system (${\tt s_{d}}$$\otimes$${\tt b_{t}}$$\otimes$${\tt w_{v}}$, per-part combination).

Model | P1 | P2 | P3 | P4 | P5 | $\beta_{0}$
---|---|---|---|---|---|---
${\tt s^{\otimes}_{d}}$ | 0.23 | 0.25 | 0.14 | 0.15 | 0.22 | -0.11
${\tt b^{\otimes}_{t}}$ | 0.20 | 0.26 | 0.13 | 0.17 | 0.23 | -0.13
${\tt w^{\otimes}_{v}}$ | 0.29 | 0.05 | 0.01 | 0.01 | 0.45 | 0.76
${\tt s_{d}}$$\otimes$${\tt b_{t}}$$\otimes$${\tt w_{v}}$: ${\tt s_{d}}$ | -0.01 | 0.12 | 0.06 | 0.01 | -0.04 | 0.20
${\tt s_{d}}$$\otimes$${\tt b_{t}}$$\otimes$${\tt w_{v}}$: ${\tt b_{t}}$ | 0.06 | 0.16 | 0.05 | 0.09 | 0.09 |
${\tt s_{d}}$$\otimes$${\tt b_{t}}$$\otimes$${\tt w_{v}}$: ${\tt w_{v}}$ | 0.20 | -0.08 | -0.02 | 0.02 | 0.20 |

Table 3: $\beta$ coefficients of the per-part linear regression model for the standard (${\tt s^{\otimes}_{d}}$), BERT (${\tt b^{\otimes}_{t}}$), wav2vec2 (${\tt w^{\otimes}_{v}}$), and combination (${\tt s_{d}}$$\otimes$${\tt b_{t}}$$\otimes$${\tt w_{v}}$) systems, estimated on the calibration data; the combination shares a single $\beta_{0}$.

LinGen:
Model | PCC | SRC | RMSE | %$\leq$0.5 | %$\leq$1.0
---|---|---|---|---|---
${\tt s^{\otimes}_{d}}$ | 0.932 | 0.937 | 0.382 | 82.3 | 98.7
${\tt b^{\otimes}_{t}}$ | 0.930 | 0.935 | 0.393 | 80.3 | 98.6
${\tt w^{\otimes}_{v}}$ | 0.933 | 0.937 | 0.393 | 79.7 | 99.0
${\tt s_{d}}$$\otimes$${\tt w_{v}}$ | 0.941 | 0.945 | 0.363 | 84.5 | 99.3
${\tt s_{d}}$$\otimes$${\tt b_{t}}$ | 0.936 | 0.940 | 0.373 | 81.9 | 98.8
${\tt b_{t}}$$\otimes$${\tt w_{v}}$ | 0.943 | 0.947 | 0.359 | 84.3 | 99.2
${\tt s_{d}}$$\otimes$${\tt b_{t}}$$\otimes$${\tt w_{v}}$ | 0.943 | 0.947 | 0.356 | 85.0 | 99.1

LinBus:
Model | PCC | SRC | RMSE | %$\leq$0.5 | %$\leq$1.0
---|---|---|---|---|---
${\tt s^{\otimes}_{d}}$ | 0.912 | 0.920 | 0.415 | 77.0 | 99.0
${\tt b^{\otimes}_{t}}$ | 0.920 | 0.924 | 0.400 | 80.1 | 99.0
${\tt w^{\otimes}_{v}}$ | 0.916 | 0.919 | 0.394 | 79.1 | 99.0
${\tt s_{d}}$$\otimes$${\tt w_{v}}$ | 0.925 | 0.928 | 0.378 | 82.0 | 99.4
${\tt s_{d}}$$\otimes$${\tt b_{t}}$ | 0.925 | 0.929 | 0.391 | 80.8 | 99.4
${\tt b_{t}}$$\otimes$${\tt w_{v}}$ | 0.930 | 0.932 | 0.368 | 82.7 | 99.3
${\tt s_{d}}$$\otimes$${\tt b_{t}}$$\otimes$${\tt w_{v}}$ | 0.931 | 0.933 | 0.366 | 82.5 | 99.4

Table 4: Results on overall grades on LinGen and LinBus using per-part linear regression estimated on the calibration data.

## 5 CONCLUSIONS

In this study, we have extended our recently proposed approach to proficiency assessment based on a wav2vec2 grader, applying it to a large quantity of L2 learner data. First, we compared its performance on the five parts of the Linguaskill exam to a high-performing standard grader fed with hand-crafted features and a BERT-based grader. We found that our proposed approach is sensitive to the nature of the responses in each part, with good performance on parts consisting of short spontaneous answers. Secondly, we found that the three grading systems have comparable performance on overall grades, with the wav2vec2-based grader showing some difficulties in assessing higher levels. Finally, we combined the standard, BERT-based, and wav2vec2-based graders by means of various linear combinations and found consistent improvements. A concern with the wav2vec2- and BERT-based graders is that they are not fully valid alone, since neither considers all aspects of the assessment construct, and their outputs are not interpretable enough to provide feedback to a learner. As well as boosting the assessment performance, combination with the standard hand-crafted feature grader removes these concerns.

## References
* [1] A. Van Moere and R. Downey, “Technology and artificial intelligence in language assessment,” in Handbook of Second Language Assessment, D. Tsagari and J. Banerjee, Eds., pp. 341–357. DeGruyter Mouton, Boston, 2017.
* [2] H. Strik and C. Cucchiarini, “Automatic assessment of second language learners’ fluency,” in Proc. International Congress of Phonetic Sciences (ICPhS) 1999, 1999.
* [3] L. Chen, K. Evanini, and X. Sun, “Assessment of non-native speech using vowel space characteristics,” in Proc. 2010 IEEE Spoken Language Technology Workshop, 2010, pp. 139–144.
* [4] E. Coutinho, F. Hönig, Y. Zhang, S. Hantke, A. Batliner, E. Nöth, and B. Schuller, “Assessing the prosody of non-native speakers of English: Measures and feature sets,” in Proc. 10th International Conference on Language Resources and Evaluation (LREC’16), 2016.
* [5] S. Bhat and S. Yoon, “Automatic assessment of syntactic complexity for spontaneous speech scoring,” Speech Communication, vol. 67, pp. 42–57, 2015.
* [6] P. Müller, F. De Wet, C. Van Der Walt, and T. Niesler, “Automatically assessing the oral proficiency of proficient L2 speakers,” in Proc. Workshop on Speech and Language Technology for Education (SLaTE), 2009, pp. 29–32.
* [7] S. Crossley and D. McNamara, “Applications of text analysis tools for spoken response grading,” Language Learning & Technology, vol. 17, no. 2, pp. 171–192, 2013.
* [8] Y. Wang, M.J.F. Gales, K.M. Knill, K. Kyriakopoulos, A. Malinin, R.C. van Dalen, and M. Rashid, “Towards automatic assessment of spontaneous spoken English,” Speech Communication, vol. 104, pp. 47–56, 2018.
* [9] Z. Liu, G. Xu, T. Liu, W. Fu, Y. Qi, W. Ding, Y. Song, C. Guo, C. Kong, S. Yang, et al., “Dolphin: a spoken language proficiency assessment system for elementary education,” in Proc. The Web Conference 2020, 2020, pp. 2641–2647.
* [10] L. Chen, J. Tao, S. Ghaffarzadegan, and Y. Qian, “End-to-end neural network based automated speech scoring,” in Proc. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 6234–6238.
* [11] S. Cheng, Z. Liu, L. Li, Z. Tang, D. Wang, and T.F. Zheng, “ASR-free pronunciation assessment,” in Proc. Interspeech 2020, 2020, pp. 3047–3051.
* [12] K. Takai, P. Heracleous, K. Yasuda, and A. Yoneyama, “Deep learning-based automatic pronunciation assessment for second language learners,” in Proc. International Conference on Human-Computer Interaction, 2020, pp. 338–342.
* [13] K. Kyriakopoulos, K.M. Knill, and M.J.F. Gales, “A deep learning approach to assessing non-native pronunciation of English using phone distances,” in Proc. Interspeech 2018, 2018, pp. 1626–1630.
* [14] K. Kyriakopoulos, K.M. Knill, and M.J.F. Gales, “A deep learning approach to automatic characterisation of rhythm in non-native English speech,” in Proc. Interspeech 2019, 2019, pp. 1836–1840.
* [15] X. Wang, K. Evanini, Y. Qian, and M. Mulholland, “Automated scoring of spontaneous speech from young learners of English using transformers,” in Proc. 2021 IEEE Spoken Language Technology Workshop (SLT), 2021, pp. 705–712.
* [16] V. Raina, M.J.F. Gales, and K.M. Knill, “Universal adversarial attacks on spoken language assessment systems,” in Proc. Interspeech 2020, 2020, pp. 3855–3859.
* [17] S. Bannò, B. Balusu, M.J.F. Gales, K.M. Knill, and K. Kyriakopoulos, “View-specific assessment of L2 spoken English,” in Proc. Interspeech 2022, 2022, pp. 4471–4475.
* [18] Y. Lu, S. Bannò, and M.J.F. Gales, “On assessing and developing spoken ‘grammatical error correction’ systems,” in Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), 2022, pp. 51–60.
* [19] A. Baevski, H. Zhou, A. Mohamed, and M. Auli, “wav2vec 2.0: A framework for self-supervised learning of speech representations,” in NeurIPS 2020, 2020, pp. 1–12.
* [20] S.-W. Yang, P.-H. Chi, Y.-S. Chuang, C.-I. J. Lai, K. Lakhotia, Y.Y. Lin, A.T. Liu, J. Shi, X. Chang, G.-T. Lin, T.-H. Huang, W.-C. Tseng, K.-T. Lee, D.-R. Liu, Z. Huang, S. Dong, S.-W. Li, S. Watanabe, A. Mohamed, and H.-Y. Lee, “SUPERB: Speech Processing Universal PERformance Benchmark,” in Proc. Interspeech 2021, 2021, pp. 1194–1198.
* [21] J. Shah, Y.K. Singla, C. Chen, and R. Ratn Shah, “What all do audio transformer models hear? Probing Acoustic Representations for Language Delivery and its Structure,” arXiv e-prints, p. arXiv:2101.00387, 2021.
* [22] M. Wu, K. Li, W.-K. Leung, and H. Meng, “Transformer Based End-to-End Mispronunciation Detection and Diagnosis,” in Proc. Interspeech 2021, 2021, pp. 3954–3958.
* [23] X. Xu, Y. Kang, S. Cao, B. Lin, and L. Ma, “Explore wav2vec 2.0 for Mispronunciation Detection,” in Proc. Interspeech 2021, 2021, pp. 4428–4432.
* [24] L. Peng, K. Fu, B. Lin, D. Ke, and J. Zhan, “A Study on Fine-Tuning wav2vec2.0 Model for the Task of Mispronunciation Detection and Diagnosis,” in Proc. Interspeech 2021, 2021, pp. 4448–4452.
* [25] E. Kim, J.-J. Jeon, H. Seo, and H. Kim, “Automatic pronunciation assessment using self-supervised speech representation learning,” in Proc. Interspeech 2022, 2022, pp. 1411–1415.
* [26] S. Bannò and M. Matassoni, “Proficiency assessment of L2 spoken English using wav2vec 2.0,” arXiv e-prints, p. arXiv:2210.13168, 2022.
* [27] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” arXiv e-prints, p. arXiv:1810.04805, 2018.
* [28] K. Ludlow, Official Quick Guide to Linguaskill, Cambridge University Press, Cambridge, 2020.
* [29] Council of Europe, Common European Framework of Reference for Languages: Learning, Teaching, Assessment, Cambridge University Press, Cambridge, 2001.
* [30] Y. Lu, M.J.F. Gales, K.M. Knill, P.P. Manakul, L. Wang, and Y. Wang, “Impact of ASR performance on spoken grammatical error detection,” in Proc. Interspeech 2019, 2019, pp. 1876–1880.
* [31] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao, S. Gugger, M. Drame, Q. Lhoest, and A. Rush, “Transformers: State-of-the-art natural language processing,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2020, pp. 38–45.
* [32] I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” in ICLR 2019, 2019.
* [33] A. Malinin, A. Ragni, K.M. Knill, and M.J.F. Gales, “Incorporating uncertainty into deep learning for spoken language assessment,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017, pp. 45–50.
* [34] X. Wu, K.M. Knill, M.J.F. Gales, and A. Malinin, “Ensemble approaches for uncertainty in spoken language assessment,” in Proc. Interspeech 2020, 2020, pp. 3860–3864.
# Non-abelian extensions and Wells exact sequences of Lie-Yamaguti algebras

Qinxiu Sun Department of Mathematics, Zhejiang University of Science and Technology, Hangzhou, 310023<EMAIL_ADDRESS>and Zhen Li Department of Mathematics, Zhejiang University of Science and Technology, Hangzhou, 310023 <EMAIL_ADDRESS>

###### Abstract.

The goal of the present paper is to investigate non-abelian extensions of Lie-Yamaguti algebras and to explore the extensibility of a pair of automorphisms with respect to a non-abelian extension of Lie-Yamaguti algebras. First, we study non-abelian extensions of Lie-Yamaguti algebras and classify them in terms of non-abelian cohomology groups. Next, we characterize the non-abelian extensions in terms of Maurer-Cartan elements. Moreover, we discuss equivalent conditions for the extensibility of a pair of automorphisms with respect to a non-abelian extension of Lie-Yamaguti algebras, and derive the fundamental exact sequences of Wells in the context of Lie-Yamaguti algebras. Finally, we discuss the previous results in the case of abelian extensions of Lie-Yamaguti algebras.

###### Key words and phrases: Lie-Yamaguti algebra, non-abelian extension, Maurer-Cartan element, extensibility, Wells exact sequence

###### 2010 Mathematics Subject Classification: 17A30, 17A36, 17A40, 17B99

###### Contents

1. Introduction
2. Preliminaries on Lie-Yamaguti algebras
3. Non-abelian extensions and non-abelian (2,3)-cocycles of Lie-Yamaguti algebras
4. Non-abelian extensions in terms of Maurer-Cartan elements
5. Extensibility of a pair of Lie-Yamaguti algebra automorphisms
6. Wells exact sequences for Lie-Yamaguti algebras
7. Particular case: abelian extensions of Lie-Yamaguti algebras

## 1. Introduction

Lie-Yamaguti algebras, a common generalization of Lie algebras and Lie triple systems, first appeared in Nomizu's work on invariant affine connections on homogeneous spaces in the 1950s [23]. In the 1960s, Yamaguti introduced the algebraic structure and named it a general Lie triple system or a Lie triple algebra [34, 33]. Later, Yamaguti investigated the cohomology theory of these objects in [34]. In the study of Courant algebroids, Kinyon and Weinstein renamed general Lie triple systems as Lie–Yamaguti algebras [19]. Since then, Lie-Yamaguti algebras have attracted much attention and have been widely explored. For example, the irreducible modules of Lie–Yamaguti algebras were considered in [3, 2], and deformations and extensions of Lie-Yamaguti algebras were investigated in [20, 35]. Relative Rota-Baxter operators on Lie-Yamaguti algebras and their cohomologies were considered in [25, 31, 32]. Extensions are useful mathematical objects for understanding the underlying structures. The non-abelian extension is a relatively general one among the various kinds of extensions (e.g. central extensions, abelian extensions, non-abelian extensions, etc.). Non-abelian extensions were first developed by Eilenberg and Mac Lane [10], which led to the low-dimensional non-abelian cohomology group. Since then, numerous works have been devoted to non-abelian extensions of various kinds of algebras, such as Lie (super)algebras, Leibniz algebras, Lie 2-algebras, Lie-Yamaguti algebras, associative conformal algebras, Rota-Baxter groups, Rota-Baxter Lie algebras and Rota-Baxter Leibniz algebras; see [5, 7, 11, 13, 14, 17, 21, 22, 16] and their references. The abelian extensions of Lie-Yamaguti algebras were considered in [13], but little is known about their non-abelian extensions.
This is the first motivation for writing this paper. Another interesting study related to extensions of algebraic structures is the extensibility or inducibility of a pair of automorphisms: when is a pair of automorphisms inducible? This problem was first considered by Wells [29] for abstract groups and further studied in [18, 24]. Since then, several authors have studied this subject further; see [16, 13, 14, 15, 22] and references therein. The extensibility problem of a pair of derivations on abelian extensions was investigated in [6, 30]. Recently, the extensibility problem of a pair of derivations and automorphisms was extended to the context of abelian extensions of Lie coalgebras [9]. As byproducts, the Wells short exact sequences were obtained for various kinds of algebras [7, 8, 13, 14, 15, 18, 22, 16], connecting the relative automorphism groups and the non-abelian second cohomology groups. Inspired by these results, we investigate the extensibility of a pair of automorphisms with respect to a non-abelian extension of Lie-Yamaguti algebras. This is another motivation for writing the present paper. Moreover, we give necessary and sufficient conditions for a pair of automorphisms to be extensible, and derive the analogue of the Wells short exact sequences in the context of non-abelian extensions of Lie-Yamaguti algebras.

The paper is organized as follows. In Section 2, we recall the definition of Lie-Yamaguti algebras and their representations. We also recall some basic facts about the cohomology groups of Lie-Yamaguti algebras. In Section 3, we investigate non-abelian extensions and classify them using non-abelian cohomology groups. In Section 4, we characterize equivalent non-abelian extensions using Maurer-Cartan elements. In Section 5, we study the extensibility problem of a pair of automorphisms with respect to a non-abelian extension of Lie-Yamaguti algebras. In Section 6, we derive Wells short exact sequences in the context of non-abelian extensions of Lie-Yamaguti algebras. Finally, we discuss the previous results in the case of abelian extensions of Lie-Yamaguti algebras.

Throughout the paper, let $k$ be a field. Unless otherwise specified, all vector spaces and algebras are over $k$.

## 2. Preliminaries on Lie-Yamaguti algebras

We recall the notions of Lie-Yamaguti algebras, their representations and their cohomology theory. For details see [34, 33].

###### Definition 2.1.

A Lie-Yamaguti algebra is a vector space $\mathfrak{g}$ with a bilinear map $[\cdot,\cdot]:\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{g}$ and a trilinear map $\\{\ ,\ ,\ \\}:\mathfrak{g}\otimes\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{g}$, satisfying

(1) $[x_{1},x_{2}]+[x_{2},x_{1}]=0,~{}~{}\\{x_{1},x_{2},x_{3}\\}+\\{x_{2},x_{1},x_{3}\\}=0,$

(2) $[[x_{1},x_{2}],x_{3}]+[[x_{2},x_{3}],x_{1}]+[[x_{3},x_{1}],x_{2}]+\\{x_{1},x_{2},x_{3}\\}+\\{x_{2},x_{3},x_{1}\\}+\\{x_{3},x_{1},x_{2}\\}=0,$

(3) $\\{[x_{1},x_{2}],x_{3},y_{1}\\}+\\{[x_{2},x_{3}],x_{1},y_{1}\\}+\\{[x_{3},x_{1}],x_{2},y_{1}\\}=0,$

(4) $\\{x_{1},x_{2},[y_{1},y_{2}]\\}=[\\{x_{1},x_{2},y_{1}\\},y_{2}]+[y_{1},\\{x_{1},x_{2},y_{2}\\}],$

(5) $\\{x_{1},x_{2},\\{y_{1},y_{2},y_{3}\\}\\}=\\{\\{x_{1},x_{2},y_{1}\\},y_{2},y_{3}\\}+\\{y_{1},\\{x_{1},x_{2},y_{2}\\},y_{3}\\}+\\{y_{1},y_{2},\\{x_{1},x_{2},y_{3}\\}\\},$

for all $x_{1},x_{2},x_{3},y_{1},y_{2},y_{3}\in\mathfrak{g}$. Denote it by $(\mathfrak{g},[\ ,\ ],\\{\ ,\ ,\ \\})$ or simply by $\mathfrak{g}$.
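As a standard illustration of Definition 2.1, added here for the reader's convenience (it is not part of the original sources), every Lie algebra carries a Lie-Yamaguti structure:

```latex
% Standard example (our addition): every Lie algebra is a Lie-Yamaguti algebra.
\begin{example}
Let $(\mathfrak{g},[\cdot,\cdot])$ be a Lie algebra and set
$\{x,y,z\}:=[[x,y],z]$. Then $(\mathfrak{g},[\cdot,\cdot],\{\,,\,,\,\})$ is a
Lie-Yamaguti algebra. Indeed, Eq.~(2) reduces to twice the Jacobi identity;
Eq.~(3) reads
$[\,[[x_{1},x_{2}],x_{3}]+[[x_{2},x_{3}],x_{1}]+[[x_{3},x_{1}],x_{2}],\,y_{1}]=0$;
Eq.~(4) is the Jacobi identity applied to $[x_{1},x_{2}],y_{1},y_{2}$; and
Eq.~(5) expresses that $\mathrm{ad}_{[x_{1},x_{2}]}$ acts as a derivation of
the ternary bracket. Two degenerate cases recover the classical notions:
$\{\cdot,\cdot,\cdot\}\equiv 0$ gives Lie algebras, while
$[\cdot,\cdot]\equiv 0$ gives Lie triple systems.
\end{example}
```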
A subspace $L$ of $\mathfrak{g}$ is an ideal of $\mathfrak{g}$ if $[L,\mathfrak{g}]\subseteq L,\\{L,\mathfrak{g},\mathfrak{g}\\}\subseteq L$ and $\\{\mathfrak{g},\mathfrak{g},L\\}\subseteq L$. An ideal $L$ of $\mathfrak{g}$ is said to be an abelian ideal of $\mathfrak{g}$ if $[L,L]=0$ and $\\{L,L,\mathfrak{g}\\}=\\{\mathfrak{g},L,L\\}=\\{L,\mathfrak{g},L\\}=0$.

###### Definition 2.2.

A representation of a Lie-Yamaguti algebra $\mathfrak{g}$ consists of a vector space $V$ together with a linear map $\mu:\mathfrak{g}\longrightarrow{gl}(V)$ and bilinear maps $\theta,D:\mathfrak{g}\wedge\mathfrak{g}\longrightarrow{gl}(V)$ satisfying

(6) $\theta([x_{1},x_{2}],x_{3})=\theta(x_{1},x_{3})\mu(x_{2})-\theta(x_{2},x_{3})\mu(x_{1}),$

(7) $[D(x_{1},x_{2}),\mu(y_{1})]=\mu(\\{x_{1},x_{2},y_{1}\\}),$

(8) $\theta(x_{1},[y_{1},y_{2}])=\mu(y_{1})\theta(x_{1},y_{2})-\mu(y_{2})\theta(x_{1},y_{1}),$

(9) $[D(x_{1},x_{2}),\theta(y_{1},y_{2})]=\theta(\\{x_{1},x_{2},y_{1}\\},y_{2})+\theta(y_{1},\\{x_{1},x_{2},y_{2}\\}),$

(10) $\theta(x_{1},\\{y_{1},y_{2},y_{3}\\})=\theta(y_{2},y_{3})\theta(x_{1},y_{1})-\theta(y_{1},y_{3})\theta(x_{1},y_{2})+D(y_{1},y_{2})\theta(x_{1},y_{3}),$

(11) $D(x_{1},x_{2})-\theta(x_{2},x_{1})+\theta(x_{1},x_{2})+\mu([x_{1},x_{2}])-[\mu(x_{1}),\mu(x_{2})]=0,$

for all $x_{1},x_{2},x_{3},y_{1},y_{2},y_{3}\in\mathfrak{g}$. Denote the representation of $\mathfrak{g}$ by $(V,\mu,\theta,D)$ or simply by $V$.

When $(V,\mu,\theta,D)$ is a representation of $\mathfrak{g}$, a direct computation shows that the following conditions are also satisfied:

(12) $D([x_{1},x_{2}],x_{3})+D([x_{2},x_{3}],x_{1})+D([x_{3},x_{1}],x_{2})=0,$

(13) $D(\\{x_{1},x_{2},x_{3}\\},x_{4})+D(x_{3},\\{x_{1},x_{2},x_{4}\\})=[D(x_{1},x_{2}),D(x_{3},x_{4})],$

(14) $\theta(\\{y_{1},y_{2},y_{3}\\},x_{1})=\theta(y_{1},x_{1})\theta(y_{3},y_{2})-\theta(y_{2},x_{1})\theta(y_{3},y_{1})-\theta(y_{3},x_{1})D(y_{1},y_{2}).$

###### Proposition 2.3.

Let $(\mathfrak{g},[\ ,\ ]_{\mathfrak{g}},\\{\ ,\ ,\ \\}_{\mathfrak{g}})$ be a Lie-Yamaguti algebra and $V$ a vector space. Assume that $\mu:\mathfrak{g}\longrightarrow{gl}(V)$ is a linear map and $\theta,D:\mathfrak{g}\wedge\mathfrak{g}\longrightarrow{gl}(V)$ are bilinear maps. Then $(V,\mu,\theta,D)$ is a representation of $\mathfrak{g}$ if and only if $(\mathfrak{g}\oplus V,[\ ,\ ],\\{\ ,\ ,\ \\})$ is a Lie-Yamaguti algebra, where $\\{x+u,y+v,z+w\\}=\\{x,y,z\\}_{\mathfrak{g}}+\theta(y,z)u-\theta(x,z)v+D(x,y)w,$ and $[x+u,y+v]=[x,y]_{\mathfrak{g}}+\mu(x)v-\mu(y)u,$ for all $x,y,z\in\mathfrak{g},u,v,w\in V$. The Lie-Yamaguti algebra $(\mathfrak{g}\oplus V,[\ ,\ ],\\{\ ,\ ,\ \\})$ is called the semidirect product Lie-Yamaguti algebra. Denote it simply by $\mathfrak{g}\ltimes V$.

###### Example 2.4.

Let $\mathfrak{g}$ be a Lie-Yamaguti algebra. Define ${ad}:\mathfrak{g}\longrightarrow{gl}(\mathfrak{g}),~{}R,L:\mathfrak{g}\wedge\mathfrak{g}\longrightarrow{gl}(\mathfrak{g})$ respectively by ${ad}(x)(y)=[x,y],R(x,y)(z)=\\{z,x,y\\}$ and $L(x,y)(z)=\\{x,y,z\\}$. Then $(\mathfrak{g},{ad},R,L)$ is a representation of $\mathfrak{g}$, which is called the adjoint representation.

Next, we recall the cohomology of Lie-Yamaguti algebras following [33]. Let $\mathfrak{g}$ be a Lie-Yamaguti algebra and $(V,\mu,\theta,D)$ be its representation. The cochain complex is given as follows:

* • Set $C^{1}(\mathfrak{g},V)=\mathrm{Hom}(\mathfrak{g},V)$ and let $C^{0}(\mathfrak{g},V)$ be the subspace spanned by the diagonal elements $(f,f)\in C^{1}(\mathfrak{g},V)\times C^{1}(\mathfrak{g},V)$.
* • (For $n\geq 2$) Let $C^{n}(\mathfrak{g},V)\subseteq\mathrm{Hom}(\otimes^{n}\mathfrak{g},V)$ be the space of all $n$-linear maps $f$ satisfying $f(x_{1},\cdot\cdot\cdot,x_{2i-1},x_{2i},\cdot\cdot\cdot,x_{n})=0,~{}~{}\hbox{if}~{}x_{2i-1}=x_{2i},~{}\forall~{}i=1,2,\cdot\cdot\cdot,[\frac{n}{2}].$

Then, for all $n\geq 1$, put the $(2n,2n+1)$-cochain groups $C^{(2n,2n+1)}(\mathfrak{g},V)=C^{2n}(\mathfrak{g},V)\times C^{2n+1}(\mathfrak{g},V)$ and $C^{(3,4)}(\mathfrak{g},V)=C^{3}(\mathfrak{g},V)\times C^{4}(\mathfrak{g},V).$ Define the coboundary operator $\delta=(\delta_{I},\delta_{II})$ in the following cochain complex of $\mathfrak{g}$ with coefficients in $V$:

$C^{0}(\mathfrak{g},V)\stackrel{\delta}{\longrightarrow}C^{(2,3)}(\mathfrak{g},V)\stackrel{\delta}{\longrightarrow}C^{(4,5)}(\mathfrak{g},V)\stackrel{\delta}{\longrightarrow}C^{(6,7)}(\mathfrak{g},V)\stackrel{\delta}{\longrightarrow}\cdot\cdot\cdot,$ together with $\delta^{*}:C^{(2,3)}(\mathfrak{g},V)\longrightarrow C^{(3,4)}(\mathfrak{g},V)$.

For $n\geq 1$, the coboundary operator $\delta=(\delta_{I},\delta_{II}):C^{(2n,2n+1)}(\mathfrak{g},V)\longrightarrow C^{(2n+2,2n+3)}(\mathfrak{g},V)$ is defined by $\displaystyle(\delta_{I}(f,g))(x_{1},\cdot\cdot\cdot,x_{2n+2})$ $\displaystyle=$ $\displaystyle\mu(x_{2n+1})g(x_{1},\cdot\cdot\cdot,x_{2n},x_{2n+2})-\mu(x_{2n+2})g(x_{1},\cdot\cdot\cdot,x_{2n+1})-g(x_{1},\cdot\cdot\cdot,x_{2n},[x_{2n+1},x_{2n+2}])$ $\displaystyle+\sum_{k=1}^{n}(-1)^{n+k+1}D(x_{2k-1},x_{2k})f(x_{1},\cdot\cdot\cdot,x_{2k-2},x_{2k+1},\cdot\cdot\cdot,x_{2n+2})$ $\displaystyle+\sum_{k=1}^{n}\sum_{j=2k+1}^{2n+2}(-1)^{n+k}f(x_{1},\cdot\cdot\cdot,x_{2k-2},x_{2k+1},\cdot\cdot\cdot,\\{x_{2k-1},x_{2k},x_{j}\\},\cdot\cdot\cdot,x_{2n+2})$ and $\displaystyle(\delta_{II}(f,g))(x_{1},\cdot\cdot\cdot,x_{2n+3})$ $\displaystyle=$ $\displaystyle\theta(x_{2n+2},x_{2n+3})g(x_{1},\cdot\cdot\cdot,x_{2n+1})-\theta(x_{2n+1},x_{2n+3})g(x_{1},\cdot\cdot\cdot,x_{2n},x_{2n+2})$ $\displaystyle+\sum_{k=1}^{n+1}(-1)^{n+k+1}D(x_{2k-1},x_{2k})g(x_{1},\cdot\cdot\cdot,x_{2k-2},x_{2k+1},\cdot\cdot\cdot,x_{2n+3})$ $\displaystyle+\sum_{k=1}^{n+1}\sum_{j=2k+1}^{2n+3}(-1)^{n+k}g(x_{1},\cdot\cdot\cdot,x_{2k-2},x_{2k+1},\cdot\cdot\cdot,\\{x_{2k-1},x_{2k},x_{j}\\},\cdot\cdot\cdot,x_{2n+3})$ for any pair $(f,g)\in C^{(2n,2n+1)}(\mathfrak{g},V)$ and $x_{1},\cdot\cdot\cdot,x_{2n+3}\in\mathfrak{g}$. When $n=0$, the coboundary operator $\delta=(\delta_{I},\delta_{II}):C^{0}(\mathfrak{g},V)\longrightarrow C^{2}(\mathfrak{g},V)\times C^{3}(\mathfrak{g},V)$ is defined as follows: $(\delta_{I}f)(x_{1},x_{2})=\mu(x_{1})f(x_{2})-\mu(x_{2})f(x_{1})-f([x_{1},x_{2}]),$ $(\delta_{II}f)(x_{1},x_{2},x_{3})=\theta(x_{2},x_{3})f(x_{1})-\theta(x_{1},x_{3})f(x_{2})+D(x_{1},x_{2})f(x_{3})-f(\\{x_{1},x_{2},x_{3}\\})$ for any $f\in C^{1}(\mathfrak{g},V)$ and $x_{1},x_{2},x_{3}\in\mathfrak{g}$.
Define $\delta^{*}=(\delta_{I}^{*},\delta_{II}^{*}):C^{(2,3)}(\mathfrak{g},V)\longrightarrow C^{(3,4)}(\mathfrak{g},V)$ by $\displaystyle\delta_{I}^{*}(f,g)(x,y,z)=$ $\displaystyle-\mu(x)f(y,z)-\mu(y)f(z,x)-\mu(z)f(x,y)+f([x,y],z)+f([y,z],x)$ $\displaystyle+f([z,x],y)+g(x,y,z)+g(y,z,x)+g(z,x,y),$ $\displaystyle\delta_{II}^{*}(f,g)(x,y,z,w)=$ $\displaystyle\theta(x,w)f(y,z)+\theta(y,w)f(z,x)+\theta(z,w)f(x,y)+g([x,y],z,w)$ $\displaystyle+g([y,z],x,w)+g([z,x],y,w),$ for all $(f,g)\in C^{(2,3)}(\mathfrak{g},V)$ and $x,y,z,w\in\mathfrak{g}$.

Denote the sets of $(2n,2n+1)$-cocycles and $(2n,2n+1)$-coboundaries by $Z^{(2n,2n+1)}(\mathfrak{g},V)$ and $B^{(2n,2n+1)}(\mathfrak{g},V)$, respectively. In particular, $Z^{(2,3)}(\mathfrak{g},V)=\\{(f,g)\in C^{(2,3)}(\mathfrak{g},V)|\delta(f,g)=0,\delta^{*}(f,g)=0\\},$ $B^{(2,3)}(\mathfrak{g},V)=\\{(\delta_{I}(f),\delta_{II}(f))|f\in C^{1}(\mathfrak{g},V)\\}.$ Define $H^{1}(\mathfrak{g},V)=\\{f\in C^{1}(\mathfrak{g},V)|\delta_{I}(f)=0,\delta_{II}(f)=0\\}$, called the first cohomology group of $\mathfrak{g}$ with coefficients in the representation $V$, and define $H^{(2n,2n+1)}(\mathfrak{g},V)=Z^{(2n,2n+1)}(\mathfrak{g},V)/B^{(2n,2n+1)}(\mathfrak{g},V),~{}~{}n\geq 1,$ called the $(2n,2n+1)$-cohomology group of $\mathfrak{g}$ with coefficients in the representation $V$.

## 3. Non-abelian extensions and non-abelian (2,3)-cocycles of Lie-Yamaguti algebras

In this section, we study non-abelian extensions and non-abelian (2,3)-cocycles of Lie-Yamaguti algebras.

###### Definition 3.1.

Let $\mathfrak{g}$ and $\mathfrak{h}$ be two Lie-Yamaguti algebras. A non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ is a Lie-Yamaguti algebra $\hat{\mathfrak{g}}$ which fits into a short exact sequence of Lie-Yamaguti algebras $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0.$ When $\mathfrak{h}$ is an abelian ideal of $\hat{\mathfrak{g}}$, the extension $\mathcal{E}$ is called an abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$. Denote an extension as above simply by $\hat{\mathfrak{g}}$ or $\mathcal{E}$. A section of $p$ is a linear map $s:\mathfrak{g}\longrightarrow\hat{\mathfrak{g}}$ such that $ps=I_{\mathfrak{g}}$.

###### Definition 3.2.

Let $\hat{\mathfrak{g}}_{1}$ and $\hat{\mathfrak{g}}_{2}$ be two non-abelian extensions of $\mathfrak{g}$ by $\mathfrak{h}$.
They are said to be equivalent if there is a homomorphism of Lie-Yamaguti algebras $f:\hat{\mathfrak{g}}_{1}\longrightarrow\hat{\mathfrak{g}}_{2}$ such that the following diagram commutes:

(15) $\begin{CD}0@>>>\mathfrak{h}@>{i_{1}}>>\hat{\mathfrak{g}}_{1}@>{p_{1}}>>\mathfrak{g}@>>>0\\ @.@|@VV{f}V@|@.\\ 0@>>>\mathfrak{h}@>{i_{2}}>>\hat{\mathfrak{g}}_{2}@>{p_{2}}>>\mathfrak{g}@>>>0.\end{CD}$

Denote by $\mathcal{E}_{nab}(\mathfrak{g},\mathfrak{h})$ the set of all non-abelian extensions of $\mathfrak{g}$ by $\mathfrak{h}$. Next, we define a non-abelian cohomology group and show that the non-abelian extensions are classified by the non-abelian cohomology groups.

###### Definition 3.3.

Let $\mathfrak{g}$ and $\mathfrak{h}$ be two Lie-Yamaguti algebras. A non-abelian (2,3)-cocycle on $\mathfrak{g}$ with values in $\mathfrak{h}$ is a septuple $(\chi,\omega,\mu,\theta,D,\rho,T)$ of maps such that $\omega:\mathfrak{g}\otimes\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{h}$ is trilinear, $\chi:\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{h},~{}\theta,D:\mathfrak{g}\wedge\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h})$ are bilinear and $\mu:\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h}),~{}\rho,T:\mathfrak{g}\longrightarrow\mathrm{Hom}(\mathfrak{h}\wedge\mathfrak{h},\mathfrak{h})$ are linear, and the following five parts of identities are satisfied for all $x_{i},y_{i},x,y,z,w\in\mathfrak{g}~{}(i=1,2,3),a,b,c,d\in\mathfrak{h}$,

* • Those resembling Eq. (1): (16) $\chi(x,y)+\chi(y,x)=0,~{}~{}\omega(x,y,z)+\omega(y,x,z)=0,$ (17) $D(x,y)a+D(y,x)a=0,~{}T(x)(a,b)+T(x)(b,a)=0.$

* • Those resembling Eq. (2): $\displaystyle\chi([x,y]_{\mathfrak{g}},z)-\mu(z)\chi(x,y)+\omega(x,y,z)+\chi([y,z]_{\mathfrak{g}},x)-\mu(x)\chi(y,z)+\omega(y,z,x)$ (18) $\displaystyle+\chi([z,x]_{\mathfrak{g}},y)-\mu(y)\chi(z,x)+\omega(z,x,y)=0,$ (19) $\mu([x,y]_{\mathfrak{g}})a+[\chi(x,y),a]_{\mathfrak{h}}+D(x,y)a-\mu(x)\mu(y)a-\theta(y,x)a+\mu(y)\mu(x)a+\theta(x,y)a=0,$ (20) $[\mu(x)a,b]_{\mathfrak{h}}+\rho(x)(a,b)-\mu(x)[a,b]_{\mathfrak{h}}+T(x)(a,b)-[\mu(x)b,a]_{\mathfrak{h}}-\rho(x)(b,a)=0.$

* • Those resembling Eq.
(3): (21) $\theta(z,w)\chi(x,y)+\theta(x,w)\chi(y,z)+\theta(y,w)\chi(z,x)+\omega([x,y]_{\mathfrak{g}},z,w)+\omega([y,z]_{\mathfrak{g}},x,w)+\omega([z,x]_{\mathfrak{g}},y,w)=0,$ $\displaystyle D([x,y]_{\mathfrak{g}},z)a-\rho(z)(\chi(x,y),a)+D([y,z]_{\mathfrak{g}},x)a-\rho(x)(\chi(y,z),a)$ (22) $\displaystyle+D([z,x]_{\mathfrak{g}},y)a-\rho(y)(\chi(z,x),a)=0,$ (23) $\theta([x,y]_{\mathfrak{g}},z)a-T(z)(\chi(x,y),a)-\theta(x,z)\mu(y)a+\theta(y,z)\mu(x)a=0,$ (24) $\rho([x,y]_{\mathfrak{g}})(a,b)+\\{\chi(x,y),a,b\\}_{\mathfrak{h}}-\rho(x)(\mu(y)a,b)+\rho(y)(\mu(x)a,b)=0,$ (25) $T(y)(\mu(x)a,b)+\theta(x,y)[a,b]_{\mathfrak{h}}-T(y)(\mu(x)b,a)=0,$ (26) $\\{\mu(x)a,b,c\\}_{\mathfrak{h}}-\rho(x)([a,b]_{\mathfrak{h}},c)-\\{\mu(x)b,a,c\\}_{\mathfrak{h}}=0,$ (27) $T(x)([a,b]_{\mathfrak{h}},c)+T(x)([b,c]_{\mathfrak{h}},a)+T(x)([c,a]_{\mathfrak{h}},b)=0.$ * • Those resembling Eq. (4): (28) $D(x,y)\chi(z,w)=\chi(\\{x,y,z\\}_{\mathfrak{g}},w)-\mu(w)\omega(x,y,z)+\mu(z)\omega(x,y,w)+\chi(z,\\{x,y,w\\}_{\mathfrak{g}}),$ (29) $D(x,y)\mu(z)a=\mu(\\{x,y,z\\}_{\mathfrak{g}})a+[\omega(x,y,z),a]_{\mathfrak{h}}+\mu(z)D(x,y)a,$ (30) $\theta(x,[y,z]_{\mathfrak{g}})a-\rho(x)(a,\chi(y,z))=\mu(y)\theta(x,z)a-\mu(z)\theta(x,y)a,$ (31) $D(x,y)[a,b]_{\mathfrak{h}}=[D(x,y)a,b]_{\mathfrak{h}}+[a,D(x,y)b]_{\mathfrak{h}},$ (32) $\rho(x)(a,\mu(y)b)+[\theta(x,y)a,b]_{\mathfrak{h}}-\mu(y)\rho(x)(a,b)=0,$ (33) $T([x,y]_{\mathfrak{g}})(a,b)+\\{a,b,\chi(x,y)\\}_{\mathfrak{h}}-\mu(x)T(y)(a,b)+\mu(y)T(x)(a,b)=0,$ (34) $\\{a,b,\mu(x)c\\}_{\mathfrak{h}}=\mu(x)\\{a,b,c\\}_{\mathfrak{h}}-[c,T(x)(a,b)]_{\mathfrak{h}},$ (35) $\rho(x)(a,[b,c]_{\mathfrak{h}})=[\rho(x)(a,b),c]_{\mathfrak{h}}+[b,\rho(x)(a,c)]_{\mathfrak{h}}.$ * • Those resembling Eq. (5): $\displaystyle D(x_{1},x_{2})\omega(y_{1},y_{2},y_{3})+\omega(x_{1},x_{2},\\{y_{1},y_{2},y_{3}\\}_{\mathfrak{g}})=\omega(\\{x_{1},x_{2},y_{1}\\}_{\mathfrak{g}},y_{2},y_{3})$ $\displaystyle+\theta(y_{2},y_{3})\omega(x_{1},x_{2},y_{1})+\omega(y_{1},\\{x_{1},x_{2},y_{2}\\}_{\mathfrak{g}},y_{3})-\theta(y_{1},y_{3})\omega(x_{1},x_{2},y_{2})$ (36) $\displaystyle+\omega(y_{1},y_{2},\\{x_{1},x_{2},y_{3}\\}_{\mathfrak{g}})+D(y_{1},y_{2})\omega(x_{1},x_{2},y_{3}),$ $\displaystyle D(x,y)\theta(z,w)a-\theta(z,w)D(x,y)a$ (37) $\displaystyle=$ $\displaystyle\theta(\\{x,y,z\\}_{\mathfrak{g}},w)a+\theta(z,\\{x,y,w\\}_{\mathfrak{g}})a-T(w)(\omega(x,y,z),a)-\rho(z)(a,\omega(x,y,w)),$ (38) $\theta(x,\\{y,z,w\\}_{\mathfrak{g}})a-\rho(x)(a,\omega(y,z,w))=\theta(z,w)\theta(x,y)a-\theta(y,w)\theta(x,z)a+D(y,z)\theta(x,w)a,$ $\displaystyle D(x,y)D(z,w)a-D(z,w)D(x,y)a$ (39) $\displaystyle=$ $\displaystyle D(\\{x,y,z\\}_{\mathfrak{g}},w)a+D(z,\\{x,y,w\\}_{\mathfrak{g}})a-\rho(w)(\omega(x,y,z),a)+\rho(z)(\omega(x,y,w),a),$ (40) $\displaystyle(D(x,y)\rho(z)-\rho(\\{x,y,z\\}_{\mathfrak{g}}))(a,b)$ $\displaystyle=\rho(z)((D(x,y)a,b)+(a,D(x,y)b))+\\{\omega(x,y,z),a,b\\}_{\mathfrak{h}},$ (41) $D(y,z)(\rho(x)(a,b))=\rho(x)(a,D(y,z)b)-\rho(z)(\theta(x,y)a,b)+\rho(y)(\theta(x,z)a,b),$ (42) $\theta(y,z)(\rho(x)(a,b))=\rho(x)(a,\theta(y,z)b)-T(z)(\theta(x,y)a,b)-\rho(y)(b,\theta(x,z)a),$ (43) $\displaystyle(D(x,y)T(z)-T(\\{x,y,z\\}_{\mathfrak{g}}))(a,b)=T(z)((D(x,y)a,b)+(a,D(x,y)b))+\\{a,b,\omega(x,y,z)\\}_{\mathfrak{h}},$ (44) $\displaystyle(T(\\{x,y,z\\}_{\mathfrak{g}})-D(x,y)T(z))(a,b)+\\{a,b,\omega(x,y,z)\\}_{\mathfrak{h}}=(\theta(y,z)T(x)-\theta(x,z)T(y))(a,b),$ (45) $D(x,y)(\\{a,b,c\\}_{\mathfrak{h}})=\\{D(x,y)a,b,c\\}_{\mathfrak{h}}+\\{a,D(x,y)b,c\\}_{\mathfrak{h}}+\\{a,b,D(x,y)c\\}_{\mathfrak{h}},$ (46) $\rho(x)(a,\rho(y)(b,c))=\rho(y)(\rho(x)(a,b),c)+\rho(y)(b,\rho(x)(a,c))-\\{\theta(x,y)a,b,c\\}_{\mathfrak{h}},$ (47)
$\\{a,b,\theta(x,y)c\\}_{\mathfrak{h}}=\theta(x,y)\\{a,b,c\\}_{\mathfrak{h}}-T(y)(T(x)(a,b),c)-\rho(x)(c,T(y)(a,b)),$ (48) $\rho(x)(a,T(y)(b,c))=T(y)(\rho(x)(a,b),c)+T(y)(b,\rho(x)(a,c))-\\{b,c,\theta(x,y)a\\}_{\mathfrak{h}},$ (49) $\\{a,b,D(x,y)c\\}_{\mathfrak{h}}=D(x,y)\\{a,b,c\\}_{\mathfrak{h}}-\rho(y)(T(x)(a,b),c)+\rho(x)(T(y)(a,b),c),$ (50) $\\{a,b,\rho(x)(c,d)\\}_{\mathfrak{h}}=\rho(x)(\\{a,b,c\\}_{\mathfrak{h}},d)-\\{c,T(x)(a,b),d\\}_{\mathfrak{h}}+\rho(x)(c,\\{a,b,d\\}_{\mathfrak{h}}),$ (51) $\rho(x)(a,\\{b,c,d\\}_{\mathfrak{h}})=\\{\rho(x)(a,b),c,d\\}_{\mathfrak{h}}+\\{b,\rho(x)(a,c),d\\}_{\mathfrak{h}}+\\{b,c,\rho(x)(a,d)\\}_{\mathfrak{h}}.$ (52) $\\{a,b,T(x)(c,d)\\}_{\mathfrak{h}}=T(x)(\\{a,b,c\\}_{\mathfrak{h}},d)+T(x)(c,\\{a,b,d\\}_{\mathfrak{h}})+\\{c,d,T(x)(a,b)\\}_{\mathfrak{h}}.$

###### Definition 3.4.

Let $(\chi_{1},\omega_{1},\mu_{1},\theta_{1},D_{1},\rho_{1},T_{1})$ and $(\chi_{2},\omega_{2},\mu_{2},\theta_{2},D_{2},\rho_{2},T_{2})$ be two non-abelian (2,3)-cocycles on $\mathfrak{g}$ with values in $\mathfrak{h}$. They are said to be equivalent if there exists a linear map $\varphi:\mathfrak{g}\longrightarrow\mathfrak{h}$ such that for all $x,y,z\in\mathfrak{g}$ and $a,b\in\mathfrak{h}$, the following equalities hold: (53) $\chi_{1}(x,y)-\chi_{2}(x,y)=[\varphi(x),\varphi(y)]_{\mathfrak{h}}+\varphi[x,y]_{\mathfrak{g}}-\mu_{2}(x)\varphi(y)+\mu_{2}(y)\varphi(x),$ $\displaystyle\omega_{1}(x,y,z)-\omega_{2}(x,y,z)=\theta_{2}(x,z)\varphi(y)-D_{2}(x,y)\varphi(z)+\rho_{2}(x)(\varphi(y),\varphi(z))-\theta_{2}(y,z)\varphi(x)$ (54) $\displaystyle+T_{2}(z)(\varphi(x),\varphi(y))-\rho_{2}(y)(\varphi(x),\varphi(z))-\\{\varphi(x),\varphi(y),\varphi(z)\\}_{\mathfrak{h}}+\varphi\\{x,y,z\\}_{\mathfrak{g}},$ (55) $\mu_{1}(x)a-\mu_{2}(x)a=[a,\varphi(x)]_{\mathfrak{h}},$ (56) $\theta_{1}(x,y)a-\theta_{2}(x,y)a=\rho_{2}(x)(a,\varphi(y))-T_{2}(y)(a,\varphi(x))+\\{a,\varphi(x),\varphi(y)\\}_{\mathfrak{h}},$ (57) $D_{1}(x,y)a-D_{2}(x,y)a=\rho_{2}(y)(\varphi(x),a)-\rho_{2}(x)(\varphi(y),a)+\\{\varphi(x),\varphi(y),a\\}_{\mathfrak{h}},$ (58) $\rho_{1}(x)(a,b)-\rho_{2}(x)(a,b)=\\{a,\varphi(x),b\\}_{\mathfrak{h}},~{}~{}T_{1}(x)(a,b)-T_{2}(x)(a,b)=\\{b,a,\varphi(x)\\}_{\mathfrak{h}}.$

For convenience, we abbreviate a non-abelian (2,3)-cocycle $(\chi,\omega,\mu,\theta,D,\rho,T)$ as $(\chi,\omega)$, and denote the equivalence class of a non-abelian (2,3)-cocycle $(\chi,\omega,\mu,\theta,D,\rho,T)$ simply by $[(\chi,\omega)]$. Furthermore, we denote the set of equivalence classes of non-abelian (2,3)-cocycles by $H_{nab}^{(2,3)}(\mathfrak{g},\mathfrak{h})$. Using the above notations, we define multilinear maps $[\ ,\ ]_{\chi}$ and $\\{\ ,\ ,\ \\}_{\omega}$ on $\mathfrak{g}\oplus\mathfrak{h}$ by (59) $\displaystyle[x+a,y+b]_{\chi}$ $\displaystyle=[x,y]_{\mathfrak{g}}+\chi(x,y)+\mu(x)b-\mu(y)a+[a,b]_{\mathfrak{h}},$ $\displaystyle\\{x+a,y+b,z+c\\}_{\omega}=$ $\displaystyle\\{x,y,z\\}_{\mathfrak{g}}+\omega(x,y,z)+D(x,y)c+\theta(y,z)a-\theta(x,z)b$ (60) $\displaystyle+T(z)(a,b)+\rho(x)(b,c)-\rho(y)(a,c)+\\{a,b,c\\}_{\mathfrak{h}}$ for all $x,y,z\in\mathfrak{g}$ and $a,b,c\in\mathfrak{h}$.

###### Proposition 3.5.

With the above notions, $(\mathfrak{g}\oplus\mathfrak{h},[\ ,\ ]_{\chi},\\{\ ,\ ,\ \\}_{\omega})$ is a Lie-Yamaguti algebra if and only if the septuple $(\chi,\omega,\mu,\theta,D,\rho,T)$ is a non-abelian (2,3)-cocycle. Denote this Lie-Yamaguti algebra $(\mathfrak{g}\oplus\mathfrak{h},[\ ,\ ]_{\chi},\\{\ ,\ ,\ \\}_{\omega})$ simply by $\mathfrak{g}\oplus_{(\chi,\omega)}\mathfrak{h}$.

###### Proof.
$(\mathfrak{g}\oplus\mathfrak{h},[\ ,\ ]_{\chi},\\{\ ,\ ,\ \\}_{\omega})$ is a Lie-Yamaguti algebra if and only if Eqs. (1)-(5) hold for $[\ ,\ ]_{\chi}$ and $\\{\ ,\ ,\ \\}_{\omega}$. It is easy to check that Eq. (1) holds for $[\ ,\ ]_{\chi},\\{\ ,\ ,\ \\}_{\omega}$ if and only if (16)-(17) hold. In the following, we always assume that $x,y,z,w\in\mathfrak{g}$ and $a,b,c,d\in\mathfrak{h}$.

For Eq. (2), we distinguish the following cases: for all $x_{1},x_{2},x_{3}\in\mathfrak{g}\oplus\mathfrak{h}$,
$(I)$ when all three elements $x_{1},x_{2},x_{3}$ belong to $\mathfrak{g}$, (2) holds if and only if (18) holds;
$(II)$ when $x_{1},x_{2},x_{3}$ equal:
$(i)$ $x,y,a$ or $a,x,y$ or $x,a,y$ respectively, (2) holds if and only if (19) holds;
$(ii)$ $a,b,x$ or $a,x,b$ or $x,a,b$ respectively, (2) holds if and only if (20) holds.

For Eq. (3), we distinguish the following cases: for all $x_{1},x_{2},x_{3},y_{1}\in\mathfrak{g}\oplus\mathfrak{h}$,
$(I)$ when all four elements $x_{1},x_{2},x_{3},y_{1}$ belong to $\mathfrak{g}$, (3) holds if and only if (21) holds;
$(II)$ when $x_{1},x_{2},x_{3},y_{1}$ equal:
$(i)$ $x,y,z,a$, (3) holds if and only if (22) holds;
$(ii)$ $x,y,a,z$ or $x,a,y,z$ or $a,x,y,z$ respectively, (3) holds if and only if (23) holds;
$(iii)$ $x,y,a,b$ or $x,a,y,b$ or $a,x,y,b$ respectively, (3) holds if and only if (24) holds;
$(iv)$ $x,a,b,y$ or $a,x,b,y$ or $a,b,x,y$ respectively, (3) holds if and only if (25) holds;
$(v)$ $x,a,b,c$ or $a,x,b,c$ or $a,b,x,c$ respectively, (3) holds if and only if (26) holds;
$(vi)$ $a,b,c,x$, (3) holds if and only if (27) holds.

For Eq. (4), we distinguish the following cases: for all $x_{1},x_{2},y_{1},y_{2}\in\mathfrak{g}\oplus\mathfrak{h}$,
$(I)$ when all four elements $x_{1},x_{2},y_{1},y_{2}$ belong to $\mathfrak{g}$, (4) holds if and only if (28) holds;
$(II)$ when $x_{1},x_{2},y_{1},y_{2}$ equal:
$(i)$ $x,y,z,a$ or $x,y,a,z$, (4) holds if and only if (29) holds;
$(ii)$ $x,a,y,z$ or $a,x,y,z$ respectively, (4) holds if and only if (30) holds;
$(iii)$ $x,y,a,b$, (4) holds if and only if (31) holds;
$(iv)$ $x,a,y,b$ or $a,x,y,b$ or $x,a,b,y$ or $a,x,b,y$ respectively, (4) holds if and only if (32) holds;
$(v)$ $a,b,x,y$, (4) holds if and only if (33) holds;
$(vi)$ $a,b,c,x$ or $a,b,x,c$ respectively, (4) holds if and only if (34) holds;
$(vii)$ $a,x,b,c$ or $x,a,b,c$ respectively, (4) holds if and only if (35) holds.

For Eq. (5), we distinguish the following cases: for all $x_{1},x_{2},y_{1},y_{2},y_{3}\in\mathfrak{g}\oplus\mathfrak{h}$,
$(I)$ when all five elements $x_{1},x_{2},y_{1},y_{2},y_{3}$ belong to $\mathfrak{g}$, (5) holds if and only if (36) holds;
$(II)$ when $x_{1},x_{2},y_{1},y_{2},y_{3}$ equal:
$(i)$ $x,y,z,w,a$, (5) holds if and only if (39) holds;
$(ii)$ $x,y,z,a,w$ or $x,y,a,z,w$ respectively, (5) holds if and only if (37) holds;
$(iii)$ $x,a,y,z,w$ or $a,x,y,z,w$ respectively, (5) holds if and only if (38) holds;
$(iv)$ $x,y,z,a,b$ or $x,y,a,z,b$, (5) holds if and only if (40) holds;
$(v)$ $x,a,y,z,b$ or $a,x,y,z,b$ respectively, (5) holds if and only if (41) holds;
$(vi)$ $x,y,a,b,z$, (5) holds if and only if (43) holds;
$(vii)$ $x,a,y,b,z$ or $a,x,y,b,z$ or $x,a,b,y,z$ or $a,x,b,y,z$ respectively, (5) holds if and only if (42) holds;
$(viii)$ $a,b,x,y,z$, (5) holds if and only if (44) holds;
$(ix)$ $x,y,a,b,c$, (5) holds if and only if (45) holds;
$(x)$ $x,a,y,b,c$ or $a,x,y,b,c$ or $a,x,b,y,c$ or $x,a,b,y,c$ respectively, (5) holds if and only if (46) holds;
$(xi)$ $x,a,b,c,y$ or $a,x,b,c,y$ respectively, (5) holds if and only if (48) holds;
$(xii)$ $a,b,x,c,y$ or $a,b,c,x,y$ respectively, (5) holds if and only if (47) holds;
$(xiii)$ $a,b,x,y,c$, (5) holds if and only if (49) holds;
$(xiv)$ $a,b,c,d,x$, (5) holds if and only if (52) holds;
$(xv)$ $a,b,c,x,d$ or $a,b,x,c,d$ respectively, (5) holds if and only if (50) holds;
$(xvi)$ $a,x,b,c,d$ or $x,a,b,c,d$ respectively, (5) holds if and only if (51) holds.

This completes the proof. ∎

Let $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$. Define $\chi_{s}:\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{h},~{}\omega_{s}:\mathfrak{g}\otimes\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{h},~{}\mu_{s}:\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h}),~{}\theta_{s},D_{s}:\mathfrak{g}\wedge\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h}),~{}\rho_{s},T_{s}:\mathfrak{g}\longrightarrow\mathrm{Hom}(\mathfrak{h}\wedge\mathfrak{h},\mathfrak{h})$ respectively by (61) $\chi_{s}(x,y)=[s(x),s(y)]_{\hat{\mathfrak{g}}}-s[x,y]_{\mathfrak{g}},~{}~{}~{}~{}\mu_{s}(x)a=[s(x),a]_{\hat{\mathfrak{g}}},$ (62) $\omega_{s}(x,y,z)=\\{s(x),s(y),s(z)\\}_{\hat{\mathfrak{g}}}-s\\{x,y,z\\}_{\mathfrak{g}},$ (63) $\theta_{s}(x,y)a=\\{a,s(x),s(y)\\}_{\hat{\mathfrak{g}}},~{}~{}~{}~{}\rho_{s}(x)(a,b)=\\{s(x),a,b\\}_{\hat{\mathfrak{g}}},$ (64) $D_{s}(x,y)a=\\{s(x),s(y),a\\}_{\hat{\mathfrak{g}}},~{}~{}~{}~{}T_{s}(x)(a,b)=\\{a,b,s(x)\\}_{\hat{\mathfrak{g}}}$ for any $x,y,z\in\mathfrak{g},a,b\in\mathfrak{h}$. (The defining formula for $\mu_{s}$, omitted in the source, is supplied here; it is the one consistent with Eq. (55).) By direct computations, we have

###### Proposition 3.6.

With the above notions, $(\chi_{s},\omega_{s},\mu_{s},\theta_{s},D_{s},\rho_{s},T_{s})$ is a non-abelian (2,3)-cocycle on $\mathfrak{g}$ with values in $\mathfrak{h}$. We call it the non-abelian (2,3)-cocycle corresponding to the extension $\mathcal{E}$ induced by $s$.

Naturally, $(\mathfrak{g}\oplus\mathfrak{h},[\ ,\ ]_{\chi_{s}},\\{\ ,\ ,\ \\}_{\omega_{s}})$ is a Lie-Yamaguti algebra. Denote this Lie-Yamaguti algebra simply by $\mathfrak{g}\oplus_{(\chi_{s},\omega_{s})}\mathfrak{h}$. In the following, we denote $(\chi_{s},\omega_{s},\mu_{s},\theta_{s},D_{s},\rho_{s},T_{s})$ by $(\chi,\omega,\mu,\theta,D,\rho,T)$ when no confusion arises.

###### Lemma 3.7.

Let $(\chi_{i},\omega_{i},\mu_{i},\theta_{i},D_{i},\rho_{i},T_{i})$ be the non-abelian (2,3)-cocycle corresponding to the extension $\mathcal{E}$ induced by the section $s_{i}$ $(i=1,2)$. Then $(\chi_{1},\omega_{1},\mu_{1},\theta_{1},D_{1},\rho_{1},T_{1})$ and $(\chi_{2},\omega_{2},\mu_{2},\theta_{2},D_{2},\rho_{2},T_{2})$ are equivalent. That is, the equivalence class of the non-abelian (2,3)-cocycle corresponding to a non-abelian extension is independent of the choice of section.

###### Proof.

Let $\hat{\mathfrak{g}}$ be a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$. Assume that $s_{1}$ and $s_{2}$ are two sections of $p$, and that $(\chi_{1},\omega_{1},\mu_{1},\theta_{1},D_{1},\rho_{1},T_{1})$ and $(\chi_{2},\omega_{2},\mu_{2},\theta_{2},D_{2},\rho_{2},T_{2})$ are the corresponding non-abelian (2,3)-cocycles. Define a linear map $\varphi:\mathfrak{g}\longrightarrow\mathfrak{h}$ by $\varphi(x)=s_{2}(x)-s_{1}(x)$.
Since $p\varphi(x)=ps_{2}(x)-ps_{1}(x)=0$, $\varphi$ indeed takes values in $\mathfrak{h}$, so it is well defined. Thanks to Eqs. (62)-(64), we get $\displaystyle\omega_{1}(x,y,z)=\\{s_{1}(x),s_{1}(y),s_{1}(z)\\}_{\hat{\mathfrak{g}}}-s_{1}\\{x,y,z\\}_{\mathfrak{g}}$ $\displaystyle=$ $\displaystyle\\{s_{2}(x)-\varphi(x),s_{2}(y)-\varphi(y),s_{2}(z)-\varphi(z)\\}_{\hat{\mathfrak{g}}}-(s_{2}\\{x,y,z\\}_{\mathfrak{g}}-\varphi\\{x,y,z\\}_{\mathfrak{g}})$ $\displaystyle=$ $\displaystyle\\{s_{2}(x),s_{2}(y),s_{2}(z)\\}_{\hat{\mathfrak{g}}}-\\{s_{2}(x),\varphi(y),s_{2}(z)\\}_{\hat{\mathfrak{g}}}-\\{s_{2}(x),s_{2}(y),\varphi(z)\\}_{\hat{\mathfrak{g}}}+\\{s_{2}(x),\varphi(y),\varphi(z)\\}_{\hat{\mathfrak{g}}}$ $\displaystyle-\\{\varphi(x),s_{2}(y),s_{2}(z)\\}_{\hat{\mathfrak{g}}}+\\{\varphi(x),\varphi(y),s_{2}(z)\\}_{\hat{\mathfrak{g}}}+\\{\varphi(x),s_{2}(y),\varphi(z)\\}_{\hat{\mathfrak{g}}}-\\{\varphi(x),\varphi(y),\varphi(z)\\}_{\hat{\mathfrak{g}}}$ $\displaystyle- s_{2}\\{x,y,z\\}_{\mathfrak{g}}+\varphi\\{x,y,z\\}_{\mathfrak{g}}$ $\displaystyle=$ $\displaystyle\omega_{2}(x,y,z)+\theta_{2}(x,z)\varphi(y)-D_{2}(x,y)\varphi(z)+\rho_{2}(x)(\varphi(y),\varphi(z))-\theta_{2}(y,z)\varphi(x)+T_{2}(z)(\varphi(x),\varphi(y))$ $\displaystyle-\rho_{2}(y)(\varphi(x),\varphi(z))-\\{\varphi(x),\varphi(y),\varphi(z)\\}_{\mathfrak{h}}+\varphi\\{x,y,z\\}_{\mathfrak{g}},$ which yields that Eq. (54) holds. Similarly, Eqs. (53) and (55)-(58) hold. This finishes the proof. ∎

According to Proposition 3.5 and Proposition 3.6, given a non-abelian extension $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$, we have a non-abelian (2,3)-cocycle $(\chi_{s},\omega_{s},\mu_{s},\theta_{s},D_{s},\rho_{s},T_{s})$ and a Lie-Yamaguti algebra $\mathfrak{g}\oplus_{(\chi_{s},\omega_{s})}\mathfrak{h}$. It follows that $\mathcal{E}_{(\chi_{s},\omega_{s})}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\mathfrak{g}\oplus_{(\chi_{s},\omega_{s})}\mathfrak{h}\stackrel{{\scriptstyle\pi}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ is a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$.
Since any element $\hat{w}\in\hat{\mathfrak{g}}$ can be written as $\hat{w}=a+s(x)$ with $a\in\mathfrak{h},x\in\mathfrak{g}$, we can define a linear map $f:\hat{\mathfrak{g}}\longrightarrow\mathfrak{g}\oplus_{(\chi_{s},\omega_{s})}\mathfrak{h},~{}f(\hat{w})=f(a+s(x))=x+a.$ It is easy to check that $f$ is an isomorphism of Lie-Yamaguti algebras making the following diagram, whose rows are $\mathcal{E}$ and $\mathcal{E}_{(\chi_{s},\omega_{s})}$, commute:

$\begin{CD}0@>>>\mathfrak{h}@>{i}>>\hat{\mathfrak{g}}@>{p}>>\mathfrak{g}@>>>0\\ @.@|@VV{f}V@|@.\\ 0@>>>\mathfrak{h}@>{i}>>\mathfrak{g}\oplus_{(\chi_{s},\omega_{s})}\mathfrak{h}@>{\pi}>>\mathfrak{g}@>>>0.\end{CD}$

This indicates that the non-abelian extensions $\mathcal{E}$ and $\mathcal{E}_{(\chi_{s},\omega_{s})}$ of $\mathfrak{g}$ by $\mathfrak{h}$ are equivalent. On the other hand, if $(\chi,\omega,\mu,\theta,D,\rho,T)$ is a non-abelian (2,3)-cocycle on $\mathfrak{g}$ with values in $\mathfrak{h}$, there is a Lie-Yamaguti algebra $\mathfrak{g}\oplus_{(\chi,\omega)}\mathfrak{h}$, which yields the following non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$: $\mathcal{E}_{(\chi,\omega)}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\mathfrak{g}\oplus_{(\chi,\omega)}\mathfrak{h}\stackrel{{\scriptstyle\pi}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0,$ where $i$ is the inclusion and $\pi$ is the projection. In the following, we focus on the relationship between non-abelian (2,3)-cocycles and extensions.

###### Proposition 3.8.

Let $\mathfrak{g}$ and $\mathfrak{h}$ be two Lie-Yamaguti algebras. Then the equivalence classes of non-abelian extensions of $\mathfrak{g}$ by $\mathfrak{h}$ are classified by the non-abelian cohomology group, that is, $\mathcal{E}_{nab}(\mathfrak{g},\mathfrak{h})\simeq H_{nab}^{(2,3)}(\mathfrak{g},\mathfrak{h})$.

###### Proof.

Define a map $\Theta:\mathcal{E}_{nab}(\mathfrak{g},\mathfrak{h})\rightarrow H_{nab}^{(2,3)}(\mathfrak{g},\mathfrak{h})$ which sends the equivalence class of a non-abelian extension to the equivalence class of its corresponding non-abelian (2,3)-cocycles. First, we check that $\Theta$ is well-defined. Assume that $\mathcal{E}_{1}$ and $\mathcal{E}_{2}$ are two equivalent non-abelian extensions of $\mathfrak{g}$ by $\mathfrak{h}$ via the map $f$, that is, the commutative diagram (15) holds. Let $s_{1}:\mathfrak{g}\rightarrow\hat{\mathfrak{g}}_{1}$ be a section of $p_{1}$. Then $p_{2}fs_{1}=p_{1}s_{1}=I_{\mathfrak{g}}$, so $s_{2}=fs_{1}$ is a section of $p_{2}$. Let $(\chi_{1},\omega_{1},\mu_{1},\theta_{1},D_{1},\rho_{1},T_{1})$ and $(\chi_{2},\omega_{2},\mu_{2},\theta_{2},D_{2},\rho_{2},T_{2})$ be the two non-abelian (2,3)-cocycles induced by the sections $s_{1},s_{2}$ respectively.
Then, since $f|_{\mathfrak{h}}=I_{\mathfrak{h}}$, we have $\displaystyle\theta_{1}(x,y)a$ $\displaystyle=$ $\displaystyle f(\theta_{1}(x,y)a)=f(\\{a,s_{1}(x),s_{1}(y)\\}_{\hat{\mathfrak{g}}_{1}})$ $\displaystyle=$ $\displaystyle\\{f(a),fs_{1}(x),fs_{1}(y)\\}_{\hat{\mathfrak{g}}_{2}}$ $\displaystyle=$ $\displaystyle\\{a,s_{2}(x),s_{2}(y)\\}_{\hat{\mathfrak{g}}_{2}}$ $\displaystyle=$ $\displaystyle\theta_{2}(x,y)a.$ By the same token, we have $\mu_{1}(x)a=\mu_{2}(x)a,D_{1}(x,y)a=D_{2}(x,y)a,\chi_{1}(x,y)=\chi_{2}(x,y),\omega_{1}(x,y,z)=\omega_{2}(x,y,z),$ $\rho_{1}(x)(a,b)=\rho_{2}(x)(a,b),T_{1}(x)(a,b)=T_{2}(x)(a,b).$ Thus, $(\chi_{1},\omega_{1},\mu_{1},\theta_{1},D_{1},\rho_{1},T_{1})=(\chi_{2},\omega_{2},\mu_{2},\theta_{2},D_{2},\rho_{2},T_{2})$, which, combined with Lemma 3.7, means that $\Theta$ is well-defined.

Next, we verify that $\Theta$ is injective. Indeed, suppose that $\Theta([\mathcal{E}_{1}])=[(\chi_{1},\omega_{1})]$ and $\Theta([\mathcal{E}_{2}])=[(\chi_{2},\omega_{2})]$. If the equivalence classes $[(\chi_{1},\omega_{1})]=[(\chi_{2},\omega_{2})]$, we obtain that the non-abelian (2,3)-cocycles $(\chi_{1},\omega_{1},\mu_{1},\theta_{1},D_{1},\rho_{1},T_{1})$ and $(\chi_{2},\omega_{2},\mu_{2},\theta_{2},D_{2},\rho_{2},T_{2})$ are equivalent via a linear map $\varphi:\mathfrak{g}\longrightarrow\mathfrak{h}$ satisfying Eqs. (53)-(58). Define a linear map $f:\mathfrak{g}\oplus_{(\chi_{1},\omega_{1})}\mathfrak{h}\longrightarrow\mathfrak{g}\oplus_{(\chi_{2},\omega_{2})}\mathfrak{h}$ by $f(x+a)=x-\varphi(x)+a,~{}~{}\forall~{}x\in\mathfrak{g},a\in\mathfrak{h}.$ According to Eq. (60), for all $x,y,z\in\mathfrak{g},a,b,c\in\mathfrak{h}$, we get $\displaystyle f(\\{x+a,y+b,z+c\\}_{\omega_{1}})$ $\displaystyle=$ $\displaystyle f(\\{x,y,z\\}_{\mathfrak{g}}+\omega_{1}(x,y,z)+D_{1}(x,y)c+\theta_{1}(y,z)a-\theta_{1}(x,z)b$ $\displaystyle+T_{1}(z)(a,b)+\rho_{1}(x)(b,c)-\rho_{1}(y)(a,c)+\\{a,b,c\\}_{\mathfrak{h}})$ $\displaystyle=$ $\displaystyle\\{x,y,z\\}_{\mathfrak{g}}-\varphi(\\{x,y,z\\}_{\mathfrak{g}})+\omega_{1}(x,y,z)+D_{1}(x,y)c+\theta_{1}(y,z)a-\theta_{1}(x,z)b$ $\displaystyle+T_{1}(z)(a,b)+\rho_{1}(x)(b,c)-\rho_{1}(y)(a,c)+\\{a,b,c\\}_{\mathfrak{h}},$ and $\displaystyle\\{f(x+a),f(y+b),f(z+c)\\}_{\omega_{2}}$ $\displaystyle=$ $\displaystyle\\{x-\varphi(x)+a,y-\varphi(y)+b,z-\varphi(z)+c\\}_{\omega_{2}}$ $\displaystyle=$ $\displaystyle\\{x,y,z\\}_{\mathfrak{g}}+\omega_{2}(x,y,z)+D_{2}(x,y)(c-\varphi(z))+\theta_{2}(y,z)(a-\varphi(x))-\theta_{2}(x,z)(b-\varphi(y))$ $\displaystyle+T_{2}(z)(a-\varphi(x),b-\varphi(y))+\rho_{2}(x)(b-\varphi(y),c-\varphi(z))-\rho_{2}(y)(a-\varphi(x),c-\varphi(z))$ $\displaystyle+\\{a-\varphi(x),b-\varphi(y),c-\varphi(z)\\}_{\mathfrak{h}}$ $\displaystyle=$ $\displaystyle\\{x,y,z\\}_{\mathfrak{g}}+\omega_{2}(x,y,z)+D_{2}(x,y)(c-\varphi(z))+\theta_{2}(y,z)(a-\varphi(x))-\theta_{2}(x,z)(b-\varphi(y))$ $\displaystyle+T_{2}(z)(\varphi(x),\varphi(y))-T_{2}(z)(a,\varphi(y))-T_{2}(z)(\varphi(x),b)+T_{2}(z)(a,b)$ $\displaystyle+\rho_{2}(x)(\varphi(y),\varphi(z))-\rho_{2}(x)(\varphi(y),c)-\rho_{2}(x)(b,\varphi(z))+\rho_{2}(x)(b,c)-\rho_{2}(y)(\varphi(x),\varphi(z))$ $\displaystyle+\rho_{2}(y)(\varphi(x),c)+\rho_{2}(y)(a,\varphi(z))-\rho_{2}(y)(a,c)-\\{\varphi(x),\varphi(y),\varphi(z)\\}_{\mathfrak{h}}+\\{\varphi(x),\varphi(y),c\\}_{\mathfrak{h}}$ $\displaystyle+\\{\varphi(x),b,\varphi(z)\\}_{\mathfrak{h}}-\\{\varphi(x),b,c\\}_{\mathfrak{h}}+\\{a,\varphi(y),\varphi(z)\\}_{\mathfrak{h}}-\\{a,b,\varphi(z)\\}_{\mathfrak{h}}-\\{a,\varphi(y),c\\}_{\mathfrak{h}}+\\{a,b,c\\}_{\mathfrak{h}}.$ In view of Eqs.
(53)-(58), we have $f(\\{x+a,y+b,z+c\\}_{\omega_{1}})=\\{f(x+a),f(y+b),f(z+c)\\}_{\omega_{2}}$. Similarly, $f([x+a,y+b]_{\chi_{1}})=[f(x+a),f(y+b)]_{\chi_{2}}$. Hence, $f$ is a homomorphism of Lie-Yamaguti algebras. Clearly, the following diagram commutes:

(65) $\begin{array}{cccccccccc}\mathcal{E}_{(\chi_{1},\omega_{1})}: & 0 & \longrightarrow & \mathfrak{h} & \stackrel{i}{\longrightarrow} & \mathfrak{g}\oplus_{(\chi_{1},\omega_{1})}\mathfrak{h} & \stackrel{\pi}{\longrightarrow} & \mathfrak{g} & \longrightarrow & 0\\ & & & \Big\| & & \Big\downarrow{\scriptstyle f} & & \Big\| & & \\ \mathcal{E}_{(\chi_{2},\omega_{2})}: & 0 & \longrightarrow & \mathfrak{h} & \stackrel{i}{\longrightarrow} & \mathfrak{g}\oplus_{(\chi_{2},\omega_{2})}\mathfrak{h} & \stackrel{\pi}{\longrightarrow} & \mathfrak{g} & \longrightarrow & 0.\end{array}$

Thus $\mathcal{E}_{(\chi_{1},\omega_{1})}$ and $\mathcal{E}_{(\chi_{2},\omega_{2})}$ are equivalent non-abelian extensions of $\mathfrak{g}$ by $\mathfrak{h}$, that is, $[\mathcal{E}_{(\chi_{1},\omega_{1})}]=[\mathcal{E}_{(\chi_{2},\omega_{2})}]$. Since each $\mathcal{E}_{i}$ is equivalent to $\mathcal{E}_{(\chi_{i},\omega_{i})}$ ($i=1,2$), it follows that $[\mathcal{E}_{1}]=[\mathcal{E}_{2}]$. Thus, $\Theta$ is injective. Finally, we claim that $\Theta$ is surjective. For any equivalence class of non-abelian (2,3)-cocycles $[(\chi,\omega)]$, by Proposition 3.5, there is a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$: $\mathcal{E}_{(\chi,\omega)}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\mathfrak{g}\oplus_{(\chi,\omega)}\mathfrak{h}\stackrel{{\scriptstyle\pi}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0.$ Therefore, $\Theta([\mathcal{E}_{(\chi,\omega)}])=[(\chi,\omega)]$, so that $\Theta$ is surjective. In conclusion, $\Theta$ is bijective. This finishes the proof. ∎

## 4\. Non-abelian extensions in terms of Maurer-Cartan elements

In this section, we classify the non-abelian extensions using Maurer-Cartan elements. We start by recalling Maurer-Cartan elements, following [12]. Let $(L=\oplus_{i}L_{i},[\ ,\ ],d)$ be a differential graded Lie algebra. The set $\mathrm{MC}(L)$ of Maurer-Cartan elements of $(L,[\ ,\ ],d)$ is defined by $\mathrm{MC}(L)=\\{\eta\in L_{1}|d\eta+\frac{1}{2}[\eta,\eta]=0\\}.$ Moreover, $\eta_{0},\eta_{1}\in\mathrm{MC}(L)$ are called gauge equivalent if there exists an element $\varphi\in L_{0}$ such that $\eta_{1}=e^{ad_{\varphi}}\eta_{0}-\frac{e^{ad_{\varphi}}-1}{ad_{\varphi}}d\varphi.$
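As an aside, both the Maurer-Cartan condition and the gauge action are easy to experiment with numerically. The sketch below assumes a hypothetical finite-dimensional toy model: $L_{1}=\mathbb{R}^{n}$, $L_{2}=\mathbb{R}^{m}$, the differential is a matrix, the (graded-symmetric) degree-1 bracket is encoded by a tensor $B$, and $ad_{\varphi}$ is a nilpotent matrix acting on $L_{1}$, so the exponential series truncates (exactly as it does in the proof of Proposition 4.6 below).

```python
import numpy as np
from math import factorial

# Hypothetical toy data: d is an (m x n) matrix L_1 -> L_2, and the
# degree-1 bracket is [eta, eta]_j = sum_{k,l} B[j,k,l] eta_k eta_l.

def is_maurer_cartan(eta, d, B, tol=1e-12):
    """Check the Maurer-Cartan equation d(eta) + (1/2)[eta, eta] = 0."""
    curvature = d @ eta + 0.5 * np.einsum("jkl,k,l->j", B, eta, eta)
    return np.linalg.norm(curvature) < tol

def gauge(eta0, d_phi, ad_phi, order=3):
    """eta_1 = e^{ad_phi} eta_0 - ((e^{ad_phi} - 1)/ad_phi) d_phi,
    using (e^A - 1)/A = sum_k A^k / (k+1)!; this truncation is exact
    once ad_phi^k = 0 for k > order."""
    exp_part = sum(np.linalg.matrix_power(ad_phi, k) @ eta0 / factorial(k)
                   for k in range(order + 1))
    ser_part = sum(np.linalg.matrix_power(ad_phi, k) @ d_phi / factorial(k + 1)
                   for k in range(order + 1))
    return exp_part - ser_part
```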
Let $\mathfrak{g}$ be a vector space. Denote by $\mathbb{C}(\mathfrak{g},\mathfrak{g})=\mathrm{Hom}(\wedge^{2}\mathfrak{g}\otimes\mathfrak{g},\mathfrak{g})\times\mathrm{Hom}(\wedge^{2}\mathfrak{g}\otimes\wedge^{2}\mathfrak{g},\mathfrak{g})$ and (66) $\mathcal{C}^{n}(\mathfrak{g},\mathfrak{g})=\left\\{\begin{aligned} &\mathrm{Hom}(\mathfrak{g},\mathfrak{g}),&n=0,\\\ &\mathrm{Hom}(\underbrace{\wedge^{2}\mathfrak{g}\otimes\cdot\cdot\cdot\otimes\wedge^{2}\mathfrak{g}}_{n},\mathfrak{g})\times\mathrm{Hom}(\underbrace{\wedge^{2}\mathfrak{g}\otimes\cdot\cdot\cdot\otimes\wedge^{2}\mathfrak{g}}_{n}\otimes\mathfrak{g},\mathfrak{g}),&n\geq 1.\end{aligned}\right.$ Then $\mathcal{L}^{*}(\mathfrak{g},\mathfrak{g})=\mathcal{C}^{*}(\mathfrak{g},\mathfrak{g})\oplus\mathbb{C}(\mathfrak{g},\mathfrak{g})=\oplus_{n\geq 0}\mathcal{C}^{n}(\mathfrak{g},\mathfrak{g})\oplus\mathbb{C}(\mathfrak{g},\mathfrak{g})$, where the degree of elements in $\mathcal{C}^{n}(\mathfrak{g},\mathfrak{g})$ is $n$ and the degree of elements in $\mathbb{C}(\mathfrak{g},\mathfrak{g})$ is $1$; here a multilinear map $f\in\mathrm{Hom}(\otimes^{2n}\mathfrak{g},\mathfrak{g})$ or $f\in\mathrm{Hom}(\otimes^{2n+1}\mathfrak{g},\mathfrak{g})$ is regarded as an element of $\mathcal{C}^{n}(\mathfrak{g},\mathfrak{g})$ precisely when it satisfies (67) $\displaystyle f(x_{1},\cdot\cdot\cdot,x_{2i-1},x_{2i},\cdot\cdot\cdot)=0,~{}~{}\hbox{if}~{}x_{2i-1}=x_{2i},~{}\forall~{}i=1,2,\cdot\cdot\cdot,n.$ For all $P=(P_{I},P_{II})\in\mathcal{C}^{p}(\mathfrak{g},\mathfrak{g}),Q=(Q_{I},Q_{II})\in\mathcal{C}^{q}(\mathfrak{g},\mathfrak{g})~{}(p,q\geq 1)$, denote by $P\circ Q=((P\circ Q)_{I},(P\circ Q)_{II})\in\mathcal{C}^{p+q}(\mathfrak{g},\mathfrak{g})$ the composition defined in [31]. In detail, writing $X_{k}=x_{k}\wedge y_{k}$, $\displaystyle(P\circ Q)_{I}(X_{1},\cdot\cdot\cdot,X_{p+q})$ $\displaystyle=$ $\displaystyle\sum_{\begin{subarray}{c}\sigma\in sh(p,q),\\\ \sigma(p+q)=p+q\end{subarray}}(-1)^{pq}sgn(\sigma)P_{II}(X_{\sigma(1)},\cdot\cdot\cdot,X_{\sigma(p)},Q_{I}(X_{\sigma(p+1)},\cdot\cdot\cdot,X_{\sigma(p+q)}))$ $\displaystyle+\sum_{\begin{subarray}{c}k=1,\\\ \sigma\in sh(k-1,q)\end{subarray}}^{p}(-1)^{q(k-1)}sgn(\sigma)P_{I}(X_{\sigma(1)},\cdot\cdot\cdot,X_{\sigma(k-1)},x_{q+k}\wedge Q_{II}(X_{\sigma(k)},\cdot\cdot\cdot,X_{\sigma(k+q-1)},y_{k+q}),X_{k+q+1},\cdot\cdot\cdot,X_{p+q})$ $\displaystyle+\sum_{\begin{subarray}{c}k=1,\\\ \sigma\in sh(k-1,q)\end{subarray}}^{p}(-1)^{q(k-1)}sgn(\sigma)P_{I}(X_{\sigma(1)},\cdot\cdot\cdot,X_{\sigma(k-1)},Q_{II}(X_{\sigma(k)},\cdot\cdot\cdot,X_{\sigma(k+q-1)},x_{k+q})\wedge y_{q+k},X_{k+q+1},\cdot\cdot\cdot,X_{p+q}),$ and $\displaystyle(P\circ Q)_{II}(X_{1},\cdot\cdot\cdot,X_{p+q},z)$ $\displaystyle=$ $\displaystyle\sum_{\sigma\in sh(p,q)}(-1)^{pq}sgn(\sigma)P_{II}(X_{\sigma(1)},\cdot\cdot\cdot,X_{\sigma(p)},Q_{II}(X_{\sigma(p+1)},\cdot\cdot\cdot,X_{\sigma(p+q)},z))$ $\displaystyle+\sum_{\begin{subarray}{c}k=1,\\\ \sigma\in sh(k-1,q)\end{subarray}}^{p}(-1)^{q(k-1)}sgn(\sigma)P_{II}(X_{\sigma(1)},\cdot\cdot\cdot,X_{\sigma(k-1)},x_{q+k}\wedge Q_{II}(X_{\sigma(k)},\cdot\cdot\cdot,X_{\sigma(k+q-1)},y_{k+q}),X_{k+q+1},\cdot\cdot\cdot,X_{p+q},z)$ $\displaystyle+\sum_{\begin{subarray}{c}k=1,\\\ \sigma\in sh(k-1,q)\end{subarray}}^{p}(-1)^{q(k-1)}sgn(\sigma)P_{II}(X_{\sigma(1)},\cdot\cdot\cdot,X_{\sigma(k-1)},Q_{II}(X_{\sigma(k)},\cdot\cdot\cdot,X_{\sigma(k+q-1)},x_{k+q})\wedge y_{q+k},X_{k+q+1},\cdot\cdot\cdot,X_{p+q},z).$ In particular, for $f\in\mathcal{C}^{0}(\mathfrak{g},\mathfrak{g})=\mathrm{Hom}(\mathfrak{g},\mathfrak{g})$ and $P=(P_{I},P_{II})\in\mathcal{C}^{p}(\mathfrak{g},\mathfrak{g})$, define
$\displaystyle(P\circ f)_{I}(X_{1},\cdot\cdot\cdot,X_{p})=$ $\displaystyle\sum_{k=1}^{p}P_{I}(X_{1},\cdot\cdot\cdot,X_{k-1},x_{k}\wedge f(y_{k}),X_{k+1},\cdot\cdot\cdot,X_{p})$ (68) $\displaystyle+\sum_{k=1}^{p}P_{I}(X_{1},\cdot\cdot\cdot,X_{k-1},f(x_{k})\wedge y_{k},X_{k+1},\cdot\cdot\cdot,X_{p}),$ (69) $(f\circ P)_{I}(X_{1},\cdot\cdot\cdot,X_{p})=f(P_{I}(X_{1},\cdot\cdot\cdot,X_{p})),$ $\displaystyle(P\circ f)_{II}(X_{1},\cdot\cdot\cdot,X_{p},z)=$ $\displaystyle\sum_{k=1}^{p}P_{II}(X_{1},\cdot\cdot\cdot,X_{k-1},x_{k}\wedge f(y_{k}),X_{k+1},\cdot\cdot\cdot,X_{p},z)$ (70) $\displaystyle+\sum_{k=1}^{p}P_{II}(X_{1},\cdot\cdot\cdot,X_{k-1},f(x_{k})\wedge y_{k},X_{k+1},\cdot\cdot\cdot,X_{p},z)+P_{II}(X_{1},\cdot\cdot\cdot,X_{p},f(z)),$ (71) $(f\circ P)_{II}(X_{1},\cdot\cdot\cdot,X_{p},z)=f(P_{II}(X_{1},\cdot\cdot\cdot,X_{p},z)).$ In order to characterise Lie-Yamaguti algebra structures in terms of Maurer-Cartan elements of a graded Lie algebra, we define $P\bullet Q$ as follows: for all $P=(P_{I},P_{II})\in\mathcal{C}^{p}(\mathfrak{g},\mathfrak{g}),~{}Q=(Q_{I},Q_{II})\in\mathcal{C}^{q}(\mathfrak{g},\mathfrak{g})$, (72) $P\bullet Q=\left\\{\begin{aligned} &((P\bullet Q)_{I},(P\bullet Q)_{II}),&p=q=1,\\\ &0,&\hbox{otherwise},\end{aligned}\right.$ where $\displaystyle(P\bullet Q)_{I}(x_{1},x_{2},x_{3})$ $\displaystyle=$ $\displaystyle\frac{1}{2}\sum_{\sigma\in S_{3}}sgn(\sigma)P_{I}(Q_{I}(x_{\sigma(1)},x_{\sigma(2)}),x_{\sigma(3)})+\frac{1}{2}\sum_{\sigma\in S_{3}}sgn(\sigma)Q_{II}(x_{\sigma(1)},x_{\sigma(2)},x_{\sigma(3)}),$ $\displaystyle(P\bullet Q)_{II}(x_{1},x_{2},x_{3},x_{4})=\frac{1}{2}\sum_{\sigma\in S_{3}}sgn(\sigma)P_{II}(Q_{I}(x_{\sigma(1)},x_{\sigma(2)}),x_{\sigma(3)},x_{4}).$ Let $\mathfrak{g}$ and $V$ be vector spaces. For any $(\chi,\omega),(\chi^{\prime},\omega^{\prime})\in\mathcal{C}^{1}(\mathfrak{g},\mathfrak{g})$, put $\Pi=(\chi,\omega),~{}\Pi^{\prime}=(\chi^{\prime},\omega^{\prime})$ and $\Pi+\Pi^{\prime}=(\chi+\chi^{\prime},\omega+\omega^{\prime})$.

###### Proposition 4.1.

With the above notations, $(\mathcal{L}^{*}(\mathfrak{g},\mathfrak{g}),[\ ,\ ]_{LY})$ is a graded Lie algebra, where (73) $[P,Q]_{LY}=\left\\{\begin{aligned} &P\bullet Q+Q\bullet P+P\circ Q+Q\circ P,&p=q=1,\\\ &P\circ Q-(-1)^{pq}Q\circ P,&otherwise,\end{aligned}\right.$ for all $P\in\mathcal{C}^{p}(\mathfrak{g},\mathfrak{g}),Q\in\mathcal{C}^{q}(\mathfrak{g},\mathfrak{g}).$ Furthermore, $\Pi=(\chi,\omega)\in\mathcal{C}^{1}(\mathfrak{g},\mathfrak{g})$ defines a Lie-Yamaguti algebra structure on $\mathfrak{g}$ if and only if $[\Pi,\Pi]_{LY}=0$, that is, $\Pi$ is a Maurer-Cartan element of the graded Lie algebra $(\mathcal{L}^{*}(\mathfrak{g},\mathfrak{g}),[\ ,\ ]_{LY})$. We write $[P,Q]_{LY}=([P,Q]_{I},[P,Q]_{II}).$

###### Proof.

Following the same procedure as in the proof of Proposition 4.1 of [31], we can check that Eq. (67) holds. Clearly, Eq. (67) implies that Eq. (1) holds.
For all $\Pi=(\chi,\omega)\in\mathcal{C}^{1}(\mathfrak{g},\mathfrak{g})$, $[\Pi,\Pi]_{LY}=2\Pi\circ\Pi+2\Pi\bullet\Pi$ and for all $x_{1},x_{2},x_{3},x_{4}\in\mathfrak{g}$, we have $\displaystyle(\Pi\bullet\Pi)_{I}(x_{1},x_{2},x_{3})=$ $\displaystyle\chi(\chi(x_{1},x_{2}),x_{3})+\chi(\chi(x_{2},x_{3}),x_{1})+\chi(\chi(x_{3},x_{1}),x_{2})$ $\displaystyle+\omega(x_{1},x_{2},x_{3})+\omega(x_{2},x_{3},x_{1})+\omega(x_{3},x_{1},x_{2}),$ and $\displaystyle(\Pi\bullet\Pi)_{II}(x_{1},x_{2},x_{3},x_{4})$ $\displaystyle=$ $\displaystyle\omega(\chi(x_{1},x_{2}),x_{3},x_{4})+\omega(\chi(x_{2},x_{3}),x_{1},x_{4})+\omega(\chi(x_{3},x_{1}),x_{2},x_{4}).$ Combining this with Theorem 3.1 of [31], $\Pi=(\chi,\omega)\in\mathcal{C}^{1}(\mathfrak{g},\mathfrak{g})$ defines a Lie-Yamaguti algebra structure on $\mathfrak{g}$ if and only if $\Pi$ is a Maurer-Cartan element of the graded Lie algebra $(\mathcal{L}^{*}(\mathfrak{g},\mathfrak{g}),[\ ,\ ]_{LY})$. ∎

By Proposition 4.1, we can rewrite Theorem 3.3 of [31] as follows:

###### Theorem 4.2.

Let $(\mathfrak{g},\chi_{\mathfrak{g}},\omega_{\mathfrak{g}})$ be a Lie-Yamaguti algebra. Then $(\mathcal{L}^{*}(\mathfrak{g},\mathfrak{g}),[\ ,\ ]_{LY},d_{\Pi})$ is a differential graded Lie algebra, where $d_{\Pi}$ with $\Pi=(\chi_{\mathfrak{g}},\omega_{\mathfrak{g}})$ is given by (74) $d_{\Pi}(\nu)=[\Pi,\nu]_{LY},~{}~{}\forall~{}\nu\in\mathcal{L}^{*}(\mathfrak{g},\mathfrak{g}).$ Moreover, $\Pi+\Pi^{\prime}$ with $\Pi^{\prime}\in\mathcal{C}^{1}(\mathfrak{g},\mathfrak{g})$ defines a Lie-Yamaguti algebra structure on $\mathfrak{g}$ if and only if $\Pi^{\prime}$ is a Maurer-Cartan element of the differential graded Lie algebra $(\mathcal{L}^{*}(\mathfrak{g},\mathfrak{g}),[\ ,\ ]_{LY},d_{\Pi})$.

Denote $\bar{\mu}(x+a,y+b)=\mu(x)b-\mu(y)a$ and $\bar{\theta}(x+a,y+b,z+c)=D(x,y)c+\theta(y,z)a-\theta(x,z)b$ for all $x,y,z\in\mathfrak{g}$ and $a,b,c\in V$.

###### Proposition 4.3.

With the above notations, $(V,\mu,\theta,D)$ is a representation of the Lie-Yamaguti algebra $(\mathfrak{g},\chi_{\mathfrak{g}},\omega_{\mathfrak{g}})$ if and only if $\bar{\Pi}=(\bar{\mu},\bar{\theta})\in\mathcal{L}^{*}(\mathfrak{g}\ltimes V,\mathfrak{g}\ltimes V)$ is a Maurer-Cartan element of the differential graded Lie algebra $(\mathcal{L}^{*}(\mathfrak{g}\ltimes V,\mathfrak{g}\ltimes V),[\ ,\ ]_{LY},d_{\Pi})$, where $\Pi=(\chi_{\mathfrak{g}},\omega_{\mathfrak{g}})$ is extended by zero to $\mathfrak{g}\oplus V$.

###### Proof.

It follows from Theorem 4.2. ∎

Let $(\mathfrak{g},\chi_{\mathfrak{g}},\omega_{\mathfrak{g}})$ and $(\mathfrak{h},\chi_{\mathfrak{h}},\omega_{\mathfrak{h}})$ be two Lie-Yamaguti algebras. Then $(\mathfrak{g}\oplus\mathfrak{h},\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})$ is a Lie-Yamaguti algebra, where $\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}$ are defined by $\chi_{\mathfrak{g}\oplus\mathfrak{h}}(x+a,y+b)=\chi_{\mathfrak{g}}(x,y)+\chi_{\mathfrak{h}}(a,b),~{}~{}\omega_{\mathfrak{g}\oplus\mathfrak{h}}(x+a,y+b,z+c)=\omega_{\mathfrak{g}}(x,y,z)+\omega_{\mathfrak{h}}(a,b,c)$ for all $x,y,z\in\mathfrak{g},a,b,c\in\mathfrak{h}$. In view of Theorem 4.2, $(\mathcal{L}^{*}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{g}\oplus\mathfrak{h}),[\ ,\ ]_{LY},d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})})$ is a differential graded Lie algebra.
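For concreteness, the direct-sum brackets just defined admit a one-line implementation each; in the sketch below, elements of $\mathfrak{g}\oplus\mathfrak{h}$ are modelled as pairs and the component brackets are hypothetical user-supplied callables.

```python
# Sketch of the direct-sum Lie-Yamaguti brackets chi_{g⊕h}, omega_{g⊕h}:
# both simply act componentwise on pairs (x, a).

def chi_sum(w1, w2, chi_g, chi_h):
    """chi_{g⊕h}(x+a, y+b) = chi_g(x, y) + chi_h(a, b)."""
    (x, a), (y, b) = w1, w2
    return (chi_g(x, y), chi_h(a, b))

def omega_sum(w1, w2, w3, omega_g, omega_h):
    """omega_{g⊕h}(x+a, y+b, z+c) = omega_g(x, y, z) + omega_h(a, b, c)."""
    (x, a), (y, b), (z, c) = w1, w2, w3
    return (omega_g(x, y, z), omega_h(a, b, c))
```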
Define $\mathcal{C}_{>}^{n}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})\subset\mathcal{C}^{n}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})$ and $\mathbb{C}_{>}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})\subset\mathbb{C}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})$ respectively by $\mathcal{C}^{n}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})=\mathcal{C}_{>}^{n}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})\oplus\mathcal{C}^{n}(\mathfrak{h},\mathfrak{h}),~{}\mathbb{C}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})=\mathbb{C}_{>}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})\oplus\mathbb{C}(\mathfrak{h},\mathfrak{h}).$ Denote by $\mathcal{C}_{>}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})=\oplus_{n}\mathcal{C}_{>}^{n}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})$ and $\mathcal{L}_{>}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})=\mathcal{C}_{>}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})\oplus\mathbb{C}_{>}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h})$. Similarly to the case of $3$-Lie algebras [27], we have

###### Proposition 4.4.

With the above notations, $(\mathcal{L}_{>}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h}),[\ ,\ ]_{LY},d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})})$ is a differential graded Lie subalgebra of $(\mathcal{L}^{*}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{g}\oplus\mathfrak{h}),[\ ,\ ]_{LY},d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})})$.

###### Proposition 4.5.

The following conditions are equivalent:

1. $(i)$ $(\mathfrak{g}\oplus\mathfrak{h},[\ ,\ ]_{\chi},\\{\ ,\ ,\ \\}_{\omega})$ is a Lie-Yamaguti algebra, which is a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$.

2. $(ii)$ $\Pi=(\bar{\chi},\bar{\omega})$ is a Maurer-Cartan element of the differential graded Lie algebra $(\mathcal{L}_{>}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h}),[\ ,\ ]_{LY},d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})})$, where $\bar{\chi}(x+a,y+b)=\chi(x,y)+\mu(x)b-\mu(y)a,$ $\displaystyle\bar{\omega}(x+a,y+b,z+c)=$ $\displaystyle\omega(x,y,z)+D(x,y)c+\theta(y,z)a-\theta(x,z)b$ $\displaystyle+T(z)(a,b)+\rho(x)(b,c)-\rho(y)(a,c),$ for all $x,y,z\in\mathfrak{g},a,b,c\in\mathfrak{h}$.

###### Proof.
In view of the definition of Maurer-Cartan element, $\Pi=(\bar{\chi},\bar{\omega})$ is a Maurer-Cartan element of the differential graded Lie algebra $(\mathcal{L}_{>}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h}),[\ ,\ ]_{LY},d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})})$ if and only if $d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}\Pi+\frac{1}{2}[\Pi,\Pi]_{LY}=0,$ that is, (75) $[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),(\bar{\chi},\bar{\omega})]_{LY}+\frac{1}{2}[(\bar{\chi},\bar{\omega}),(\bar{\chi},\bar{\omega})]_{LY}=0.$ On the other hand, by Proposition 4.1, we know that $(\mathfrak{g}\oplus\mathfrak{h},[\ ,\ ]_{\chi},\\{\ ,\ ,\ \\}_{\omega})$ is a Lie-Yamaguti algebra if and only if (76) $[(\chi_{\mathfrak{g}\oplus\mathfrak{h}}+\bar{\chi},\omega_{\mathfrak{g}\oplus\mathfrak{h}}+\bar{\omega}),(\chi_{\mathfrak{g}\oplus\mathfrak{h}}+\bar{\chi},\omega_{\mathfrak{g}\oplus\mathfrak{h}}+\bar{\omega})]_{LY}=0.$ Since $(\mathfrak{g}\oplus\mathfrak{h},\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})$ is a Lie-Yamaguti algebra, by computations, we have $\displaystyle[(\chi_{\mathfrak{g}\oplus\mathfrak{h}}+\bar{\chi},\omega_{\mathfrak{g}\oplus\mathfrak{h}}+\bar{\omega}),(\chi_{\mathfrak{g}\oplus\mathfrak{h}}+\bar{\chi},\omega_{\mathfrak{g}\oplus\mathfrak{h}}+\bar{\omega})]_{LY}$ $\displaystyle=$ $\displaystyle[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})]_{LY}+[(\bar{\chi},\bar{\omega}),(\bar{\chi},\bar{\omega})]_{LY}+2[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),(\bar{\chi},\bar{\omega})]_{LY}$ $\displaystyle=$ $\displaystyle[(\bar{\chi},\bar{\omega}),(\bar{\chi},\bar{\omega})]_{LY}+2[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),(\bar{\chi},\bar{\omega})]_{LY},$ which yields that Eq. (75) holds if and only if Eq. (76) holds. This completes the proof. ∎ ###### Proposition 4.6. Two non-abelian extensions $(\mathfrak{g}\oplus\mathfrak{h},[\ ,\ ]_{\chi_{0}},\\{\ ,\ ,\ \\}_{\omega_{0}})$ and $(\mathfrak{g}\oplus\mathfrak{h},[\ ,\ ]_{\chi},\\{\ ,\ ,\ \\}_{\omega})$ are equivalent if and only if the Maurer-Cartan elements $\Pi_{0}=(\bar{\chi}_{0},\bar{\omega}_{0})$ and $\Pi=(\bar{\chi},\bar{\omega})$ are gauge equivalent. ###### Proof. Let $\Pi_{0},\Pi$ be two Maurer-Cartan elements of the differential graded Lie algebra $(\mathcal{L}_{>}(\mathfrak{g}\oplus\mathfrak{h},\mathfrak{h}),[\ ,\ ]_{LY},d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})})$. 
$\Pi_{0}$ and $\Pi$ are gauge equivalent if and only if there is a linear map $\varphi\in\mathrm{Hom}(\mathfrak{g},\mathfrak{h})$ such that $\displaystyle\Pi_{0}=$ $\displaystyle e^{ad_{\varphi}}\Pi-\frac{e^{ad_{\varphi}}-1}{ad_{\varphi}}d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}\varphi$ $\displaystyle=$ $\displaystyle(id+ad_{\varphi}+\frac{1}{2!}ad_{\varphi}^{2}+\cdot\cdot\cdot+\frac{1}{n!}ad_{\varphi}^{n}+\cdot\cdot\cdot)\Pi$ $\displaystyle-(id+\frac{1}{2!}ad_{\varphi}+\frac{1}{3!}ad_{\varphi}^{2}+\cdot\cdot\cdot+\frac{1}{n!}ad_{\varphi}^{n-1}+\cdot\cdot\cdot)d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}\varphi.$ In the following, we write $ad_{\varphi}\Pi=[\varphi,\Pi]_{LY}=([\varphi,\Pi]_{I},[\varphi,\Pi]_{II}),~{}~{}ad_{\varphi}^{2}\Pi=[\varphi,[\varphi,\Pi]_{LY}]_{LY}=([\varphi,[\varphi,\Pi]_{LY}]_{I},[\varphi,[\varphi,\Pi]_{LY}]_{II}),$ $d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}\varphi=[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{LY}=([(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{I},[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{II}),$ and $[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{LY}=([\varphi,[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{LY}]_{I},[\varphi,[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{LY}]_{II}).$ Using Eqs. (68)-(71), for all $w_{i}=x_{i}+a_{i}\in\mathfrak{g}\oplus\mathfrak{h}~{}(i=1,2,3)$, we get $\displaystyle[\varphi,\Pi]_{I}(w_{1},w_{2})=$ $\displaystyle\varphi\bar{\chi}(w_{1},w_{2})-\bar{\chi}(w_{1},\varphi(x_{2}))-\bar{\chi}(\varphi(x_{1}),w_{2})$ $\displaystyle=$ $\displaystyle-\mu(x_{1})\varphi(x_{2})+\mu(x_{2})\varphi(x_{1}),$ $\displaystyle[\varphi,\Pi]_{II}(w_{1},w_{2},w_{3})=$ $\displaystyle\varphi\bar{\omega}(w_{1},w_{2},w_{3})-\bar{\omega}(\varphi(x_{1}),w_{2},w_{3})-\bar{\omega}(w_{1},w_{2},\varphi(x_{3}))-\bar{\omega}(w_{1},\varphi(x_{2}),w_{3})$ $\displaystyle=$ $\displaystyle\theta(x_{1},x_{3})\varphi(x_{2})-T(x_{3})(a_{1},\varphi(x_{2}))-\rho(x_{1})(\varphi(x_{2}),a_{3})$ $\displaystyle-\theta(x_{2},x_{3})\varphi(x_{1})-T(x_{3})(\varphi(x_{1}),a_{2})+\rho(x_{2})(\varphi(x_{1}),a_{3})$ $\displaystyle+\rho(x_{2})(a_{1},\varphi(x_{3}))-D(x_{1},x_{2})\varphi(x_{3})-\rho(x_{1})(a_{2},\varphi(x_{3})),$ $[\varphi,[\varphi,\Pi]_{LY}]_{I}(w_{1},w_{2})=\varphi[\varphi,\Pi]_{I}(w_{1},w_{2})-[\varphi,\Pi]_{I}(w_{1},\varphi(x_{2}))-[\varphi,\Pi]_{I}(\varphi(x_{1}),w_{2})=0,$ $\displaystyle[\varphi,[\varphi,\Pi]_{LY}]_{II}(w_{1},w_{2},w_{3})=$ $\displaystyle\varphi[\varphi,\Pi]_{II}(w_{1},w_{2},w_{3})-[\varphi,\Pi]_{II}(w_{1},\varphi(x_{2}),w_{3})$ $\displaystyle-[\varphi,\Pi]_{II}(\varphi(x_{1}),w_{2},w_{3})-[\varphi,\Pi]_{II}(w_{1},w_{2},\varphi(x_{3}))$ $\displaystyle=$ $\displaystyle 2T(x_{3})(\varphi(x_{1}),\varphi(x_{2}))+2\rho(x_{1})(\varphi(x_{2}),\varphi(x_{3}))-2\rho(x_{2})(\varphi(x_{1}),\varphi(x_{3})),$ $\displaystyle[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{I}(w_{1},w_{2})=$ $\displaystyle\chi_{\mathfrak{g}\oplus\mathfrak{h}}(w_{1},\varphi(x_{2}))+\chi_{\mathfrak{g}\oplus\mathfrak{h}}(\varphi(x_{1}),w_{2})-\varphi\chi_{\mathfrak{g}\oplus\mathfrak{h}}(w_{1},w_{2})$ $\displaystyle=$
$\displaystyle[a_{1},\varphi(x_{2})]_{\mathfrak{h}}+[\varphi(x_{1}),a_{2}]_{\mathfrak{h}}-\varphi([x_{1},x_{2}]_{\mathfrak{g}}),$ $\displaystyle[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{II}(w_{1},w_{2},w_{3})$ $\displaystyle=$ $\displaystyle\omega_{\mathfrak{g}\oplus\mathfrak{h}}(w_{1},\varphi(x_{2}),w_{3})+\omega_{\mathfrak{g}\oplus\mathfrak{h}}(\varphi(x_{1}),w_{2},w_{3})+\omega_{\mathfrak{g}\oplus\mathfrak{h}}(w_{1},w_{2},\varphi(x_{3}))-\varphi\omega_{\mathfrak{g}\oplus\mathfrak{h}}(w_{1},w_{2},w_{3})$ $\displaystyle=$ $\displaystyle\\{a_{1},\varphi(x_{2}),a_{3}\\}_{\mathfrak{h}}+\\{\varphi(x_{1}),a_{2},a_{3}\\}_{\mathfrak{h}}+\\{a_{1},a_{2},\varphi(x_{3})\\}_{\mathfrak{h}}-\varphi(\\{x_{1},x_{2},x_{3}\\}_{\mathfrak{g}}),$ $\displaystyle[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{I}(w_{1},w_{2})$ $\displaystyle=$ $\displaystyle\varphi[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{I}(w_{1},w_{2})-[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{I}(w_{1},\varphi(x_{2}))-[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{I}(\varphi(x_{1}),w_{2})$ $\displaystyle=$ $\displaystyle-2[\varphi(x_{1}),\varphi(x_{2})]_{\mathfrak{h}},$ $\displaystyle[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{II}(w_{1},w_{2},w_{3})$ $\displaystyle=$ $\displaystyle\varphi[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{II}(w_{1},w_{2},w_{3})-[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{II}(w_{1},\varphi(x_{2}),w_{3})$ $\displaystyle-[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{II}(\varphi(x_{1}),w_{2},w_{3})-[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{II}(w_{1},w_{2},\varphi(x_{3}))$ $\displaystyle=$ $\displaystyle-\\{\varphi(x_{1}),\varphi(x_{2}),a_{3}\\}_{\mathfrak{h}}-\\{a_{1},\varphi(x_{2}),\varphi(x_{3})\\}_{\mathfrak{h}}-\\{\varphi(x_{1}),\varphi(x_{2}),a_{3}\\}_{\mathfrak{h}}$ $\displaystyle-\\{\varphi(x_{1}),a_{2},\varphi(x_{3})\\}_{\mathfrak{h}}-\\{a_{1},\varphi(x_{2}),\varphi(x_{3})\\}_{\mathfrak{h}}-\\{\varphi(x_{1}),a_{2},\varphi(x_{3})\\}_{\mathfrak{h}}$ $\displaystyle=$ $\displaystyle-2\\{\varphi(x_{1}),\varphi(x_{2}),a_{3}\\}_{\mathfrak{h}}-2\\{a_{1},\varphi(x_{2}),\varphi(x_{3})\\}_{\mathfrak{h}}-2\\{\varphi(x_{1}),a_{2},\varphi(x_{3})\\}_{\mathfrak{h}},$ $\displaystyle[\varphi,[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{LY}]_{I}(w_{1},w_{2})$ $\displaystyle=$ $\displaystyle\varphi[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{I}(w_{1},w_{2})-[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{I}(x_{1}+a_{1},\varphi(x_{2}))-[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{I}(\varphi(x_{1}),w_{2})$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle[\varphi,[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{LY}]_{II}(w_{1},w_{2},w_{3})$ $\displaystyle=$ 
$\displaystyle\varphi[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{II}(w_{1},w_{2},w_{3})-[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{II}(\varphi(x_{1}),w_{2},w_{3})$ $\displaystyle-[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{II}(w_{1},\varphi(x_{2}),w_{3})-[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}(\varphi)]_{II}(w_{1},w_{2},\varphi(x_{3}))$ $\displaystyle=$ $\displaystyle 6\\{\varphi(x_{1}),\varphi(x_{2}),\varphi(x_{3})\\}_{\mathfrak{h}},$ and $ad_{\varphi}^{n}\Pi=0,~{}~{}ad_{\varphi}^{n}(d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}\varphi)=0,~{}~{}\forall~{}~{}n\geq 3.$ Thus, $\Pi$ and $\Pi_{0}$ are gauge equivalent Maurer-Cartan elements if and only if (77) $\chi_{0}=\chi+[\varphi,\Pi]_{I}-[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{I}-\frac{1}{2!}[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}\varphi]_{I},$ $\displaystyle\omega_{0}=$ $\displaystyle\omega+[\varphi,\Pi]_{II}+\frac{1}{2!}[\varphi,[\varphi,\Pi]_{LY}]_{II}-[(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}}),\varphi]_{II}$ (78) $\displaystyle-\frac{1}{2!}[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}\varphi]_{II}-\frac{1}{3!}[\varphi,[\varphi,d_{(\chi_{\mathfrak{g}\oplus\mathfrak{h}},\omega_{\mathfrak{g}\oplus\mathfrak{h}})}\varphi]_{LY}]_{II}.$ Therefore, Eqs. (77) and (78) hold if and only if Eqs. (53)-(58) hold. This finishes the proof. ∎

## 5\. Extensibility of a pair of Lie-Yamaguti algebra automorphisms

In this section, we study the extensibility of pairs of Lie-Yamaguti algebra automorphisms and characterise it by equivalent conditions. Let $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$. Denote $\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})=\\{\gamma\in\mathrm{Aut}(\hat{\mathfrak{g}})\mid\gamma(\mathfrak{h})=\mathfrak{h}\\}.$
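In a concrete finite-dimensional setting one can test membership in $\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})$ mechanically. The sketch below assumes the (hypothetical) identification $\hat{\mathfrak{g}}\cong\mathfrak{g}\oplus\mathfrak{h}$ with coordinates ordered as ($\mathfrak{g}$-part, $\mathfrak{h}$-part), represents a linear map $\gamma$ by its block matrix, checks $\gamma(\mathfrak{h})=\mathfrak{h}$, and extracts the induced pair $(p\gamma s,\gamma|_{\mathfrak{h}})$ for the obvious splitting $s(x)=(x,0)$; whether $\gamma$ preserves the brackets must of course be tested separately.

```python
import numpy as np

def preserves_h(gamma, n, m, tol=1e-12):
    """gamma: (n+m)x(n+m) matrix; h = the last m coordinates.
    gamma(h) = h iff the g-block of gamma|_h vanishes and gamma|_h
    is invertible."""
    maps_into_h = np.linalg.norm(gamma[:n, n:]) < tol
    onto_h = abs(np.linalg.det(gamma[n:, n:])) > tol
    return maps_into_h and onto_h

def induced_pair(gamma, n):
    """Return (alpha, beta) = (p ∘ gamma ∘ s, gamma|_h) as matrices,
    the pair that appears in the Wells-type analysis of Section 6."""
    return gamma[:n, :n], gamma[n:, n:]
```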
###### Definition 5.1.

A pair of automorphisms $(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})$ is said to be extensible with respect to a non-abelian extension $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ if there is an automorphism $\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})$ such that $i\beta=\gamma i,~{}p\gamma=\alpha p$, that is, the following diagram commutes:

$\begin{array}{ccccccccc}0 & \longrightarrow & \mathfrak{h} & \stackrel{i}{\longrightarrow} & \hat{\mathfrak{g}} & \stackrel{p}{\longrightarrow} & \mathfrak{g} & \longrightarrow & 0\\ & & \Big\downarrow{\scriptstyle\beta} & & \Big\downarrow{\scriptstyle\gamma} & & \Big\downarrow{\scriptstyle\alpha} & & \\ 0 & \longrightarrow & \mathfrak{h} & \stackrel{i}{\longrightarrow} & \hat{\mathfrak{g}} & \stackrel{p}{\longrightarrow} & \mathfrak{g} & \longrightarrow & 0.\end{array}$

It is natural to ask: when is a pair of automorphisms $(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})$ extensible? We discuss this problem in the following.

###### Theorem 5.2.

Let $0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$ and $(\chi,\omega,\mu,\theta,D,\rho,T)$ the corresponding non-abelian (2,3)-cocycle induced by $s$.
A pair $(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})$ is extensible if and only if there is a linear map $\varphi:\mathfrak{g}\longrightarrow\mathfrak{h}$ satisfying the following conditions: $\displaystyle\beta\omega(x,y,z)-\omega(\alpha(x),\alpha(y),\alpha(z))=T(\alpha(z))(\varphi(x),\varphi(y))-\rho(\alpha(y))(\varphi(x),\varphi(z))-\theta(\alpha(y),\alpha(z))\varphi(x)$ (79) $\displaystyle+$ $\displaystyle\rho(\alpha(x))(\varphi(y),\varphi(z))+\theta(\alpha(x),\alpha(z))\varphi(y)-D(\alpha(x),\alpha(y))\varphi(z)+\varphi(\\{x,y,z\\}_{\mathfrak{g}})-\\{\varphi(x),\varphi(y),\varphi(z)\\}_{\mathfrak{h}},$ (80) $\beta\chi(x,y)-\chi(\alpha(x),\alpha(y))=[\varphi(x),\varphi(y)]_{\mathfrak{h}}+\varphi([x,y]_{\mathfrak{g}})-\mu(\alpha(x))\varphi(y)+\mu(\alpha(y))\varphi(x),$ (81) $\beta(\theta(x,y)a)-\theta(\alpha(x),\alpha(y))\beta(a)=\\{\beta(a),\varphi(x),\varphi(y)\\}_{\mathfrak{h}}-T(\alpha(y))(\beta(a),\varphi(x))+\rho(\alpha(x))(\beta(a),\varphi(y)),$ (82) $\beta D(x,y)a-D(\alpha(x),\alpha(y))\beta(a)=\\{\varphi(x),\varphi(y),\beta(a)\\}_{\mathfrak{h}}-\rho(\alpha(x))(\varphi(y),\beta(a))+\rho(\alpha(y))(\varphi(x),\beta(a)),$ (83) $\beta(\rho(x)(a,b))-\rho(\alpha(x))(\beta(a),\beta(b))=\\{\beta(a),\varphi(x),\beta(b)\\}_{\mathfrak{h}},$ (84) $\beta T(x)(a,b)-T(\alpha(x))(\beta(a),\beta(b))=\\{\beta(b),\beta(a),\varphi(x)\\}_{\mathfrak{h}},$ (85) $\beta\mu(x)a-\mu(\alpha(x))\beta(a)=[\beta(a),\varphi(x)]_{\mathfrak{h}},$ for all $x,y,z\in\mathfrak{g}$ and $a\in\mathfrak{h}$.

###### Proof.

Assume that $(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})$ is extensible, that is, there is an automorphism $\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})$ such that $\gamma i=i\beta$ and $p\gamma=\alpha p$. Since $s$ is a section of $p$, for all $x\in\mathfrak{g}$, $p(s\alpha-\gamma s)(x)=\alpha(x)-\alpha(x)=0,$ which implies that $(s\alpha-\gamma s)(x)\in\mathrm{ker}p=\mathfrak{h}$. So we can define a linear map $\varphi:\mathfrak{g}\longrightarrow\mathfrak{h}$ by $\varphi(x)=(s\alpha-\gamma s)(x),~{}~{}\forall~{}x\in\mathfrak{g}.$ Using Eqs. (63) and (64) together with $\gamma i=i\beta$, for $x,y\in\mathfrak{g},a\in\mathfrak{h}$, we get $\displaystyle\beta(\theta(x,y)a)-\theta(\alpha(x),\alpha(y))\beta(a)$ $\displaystyle=$ $\displaystyle\gamma(\\{a,s(x),s(y)\\}_{\hat{\mathfrak{g}}})-\\{\beta(a),s\alpha(x),s\alpha(y)\\}_{\hat{\mathfrak{g}}}$ $\displaystyle=$ $\displaystyle\\{\beta(a),\gamma s(x),\gamma s(y)\\}_{\hat{\mathfrak{g}}}-\\{\beta(a),s\alpha(x),s\alpha(y)\\}_{\hat{\mathfrak{g}}}$ $\displaystyle=$ $\displaystyle\\{\beta(a),\gamma s(x)-s\alpha(x),\gamma s(y)\\}_{\hat{\mathfrak{g}}}+\\{\beta(a),s\alpha(x),\gamma s(y)\\}_{\hat{\mathfrak{g}}}-\\{\beta(a),s\alpha(x),s\alpha(y)\\}_{\hat{\mathfrak{g}}}$ $\displaystyle=$ $\displaystyle\\{\beta(a),-\varphi(x),\gamma s(y)\\}_{\hat{\mathfrak{g}}}+\\{\beta(a),s\alpha(x),-\varphi(y)\\}_{\hat{\mathfrak{g}}}$ $\displaystyle=$ $\displaystyle-\\{\beta(a),\varphi(x),\gamma s(y)-s\alpha(y)\\}_{\hat{\mathfrak{g}}}-\\{\beta(a),\varphi(x),s\alpha(y)\\}_{\hat{\mathfrak{g}}}-\\{\beta(a),s\alpha(x),\varphi(y)\\}_{\hat{\mathfrak{g}}}$ $\displaystyle=$ $\displaystyle\\{\beta(a),\varphi(x),\varphi(y)\\}_{\hat{\mathfrak{g}}}-\\{\beta(a),\varphi(x),s\alpha(y)\\}_{\hat{\mathfrak{g}}}+\\{s\alpha(x),\beta(a),\varphi(y)\\}_{\hat{\mathfrak{g}}}$ $\displaystyle=$ $\displaystyle\\{\beta(a),\varphi(x),\varphi(y)\\}_{\mathfrak{h}}-T(\alpha(y))(\beta(a),\varphi(x))+\rho(\alpha(x))(\beta(a),\varphi(y)),$ which indicates that Eq. (81) holds.
By the same procedure, we can prove that Eqs. (79), (80) and (82)-(85) hold. Conversely, suppose that $(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})$ and there is a linear map $\varphi:\mathfrak{g}\longrightarrow\mathfrak{h}$ satisfying Eqs. (79)-(85). Since $s$ is a section of $p$, every $\hat{w}\in\hat{\mathfrak{g}}$ can be written as $\hat{w}=a+s(x)$ for some $a\in\mathfrak{h},x\in\mathfrak{g}.$ Define a linear map $\gamma:\hat{\mathfrak{g}}\longrightarrow\hat{\mathfrak{g}}$ by $\gamma(\hat{w})=\gamma(a+s(x))=\beta(a)-\varphi(x)+s\alpha(x).$ It is easy to check that $i\beta=\gamma i,~{}p\gamma=\alpha p$ and $\gamma(\mathfrak{h})=\mathfrak{h}$. In the sequel, we first prove that $\gamma$ is bijective. Indeed, if $\gamma(\hat{w})=0,$ we have $s\alpha(x)=0$ and $\beta(a)-\varphi(x)=0$. Since $s$ and $\alpha$ are injective, we get $x=0$, and then $\beta(a)=\varphi(0)=0$, so $a=0$ by the injectivity of $\beta$. Thus, $\hat{w}=a+s(x)=0$, that is, $\gamma$ is injective. For any $\hat{w}=a+s(x)\in\hat{\mathfrak{g}}$, $\gamma(\beta^{-1}(a)+\beta^{-1}\varphi\alpha^{-1}(x)+s\alpha^{-1}(x))=a+s(x)=\hat{w},$ which yields that $\gamma$ is surjective. In conclusion, $\gamma$ is bijective. Secondly, we show that $\gamma$ is a homomorphism of the Lie-Yamaguti algebra $\hat{\mathfrak{g}}$. In fact, for all $\hat{w}_{i}=a_{i}+s(x_{i})\in\hat{\mathfrak{g}}~{}(i=1,2,3)$, $\displaystyle\\{\gamma(\hat{w}_{1}),\gamma(\hat{w}_{2}),\gamma(\hat{w}_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle=$ $\displaystyle\\{\beta(a_{1})-\varphi(x_{1})+s\alpha(x_{1}),\beta(a_{2})-\varphi(x_{2})+s\alpha(x_{2}),\beta(a_{3})-\varphi(x_{3})+s\alpha(x_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle=$ $\displaystyle\\{\beta(a_{1}),\beta(a_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}-\\{\beta(a_{1}),\beta(a_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}+\\{\beta(a_{1}),\beta(a_{2}),s\alpha(x_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle-\\{\beta(a_{1}),\varphi(x_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}+\\{\beta(a_{1}),\varphi(x_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}-\\{\beta(a_{1}),\varphi(x_{2}),s\alpha(x_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle+\\{\beta(a_{1}),s\alpha(x_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}-\\{\beta(a_{1}),s\alpha(x_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}+\\{\beta(a_{1}),s\alpha(x_{2}),s\alpha(x_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle-\\{\varphi(x_{1}),\beta(a_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}+\\{\varphi(x_{1}),\beta(a_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}-\\{\varphi(x_{1}),\beta(a_{2}),s\alpha(x_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle+\\{\varphi(x_{1}),\varphi(x_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}-\\{\varphi(x_{1}),\varphi(x_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}+\\{\varphi(x_{1}),\varphi(x_{2}),s\alpha(x_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle-\\{\varphi(x_{1}),s\alpha(x_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}+\\{\varphi(x_{1}),s\alpha(x_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}-\\{\varphi(x_{1}),s\alpha(x_{2}),s\alpha(x_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle+\\{s\alpha(x_{1}),\beta(a_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}-\\{s\alpha(x_{1}),\beta(a_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}+\\{s\alpha(x_{1}),\beta(a_{2}),s\alpha(x_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle-\\{s\alpha(x_{1}),\varphi(x_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}+\\{s\alpha(x_{1}),\varphi(x_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}-\\{s\alpha(x_{1}),\varphi(x_{2}),s\alpha(x_{3})\\}_{\hat{\mathfrak{g}}}$
$\displaystyle+\\{s\alpha(x_{1}),s\alpha(x_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}-\\{s\alpha(x_{1}),s\alpha(x_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}+\\{s\alpha(x_{1}),s\alpha(x_{2}),s\alpha(x_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle=$ $\displaystyle\\{\beta(a_{1}),\beta(a_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}-\\{\beta(a_{1}),\beta(a_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}+T(\alpha(x_{3}))(\beta(a_{1}),\beta(a_{2}))$ $\displaystyle-\\{\beta(a_{1}),\varphi(x_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}+\\{\beta(a_{1}),\varphi(x_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}-T(\alpha(x_{3}))(\beta(a_{1}),\varphi(x_{2}))$ $\displaystyle-\rho(\alpha(x_{2}))(\beta(a_{1}),\beta(a_{3}))+\rho(\alpha(x_{2}))(\beta(a_{1}),\varphi(x_{3}))+\theta(\alpha(x_{2}),\alpha(x_{3}))\beta(a_{1})$ $\displaystyle-\\{\varphi(x_{1}),\beta(a_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}+\\{\varphi(x_{1}),\beta(a_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}-T(\alpha(x_{3}))(\varphi(x_{1}),\beta(a_{2}))$ $\displaystyle+\\{\varphi(x_{1}),\varphi(x_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}-\\{\varphi(x_{1}),\varphi(x_{2}),\varphi(x_{3})\\}_{\hat{\mathfrak{g}}}+T(\alpha(x_{3}))(\varphi(x_{1}),\varphi(x_{2}))$ $\displaystyle+\rho(\alpha(x_{2}))(\varphi(x_{1}),\beta(a_{3}))-\rho(\alpha(x_{2}))(\varphi(x_{1}),\varphi(x_{3}))-\theta(\alpha(x_{2}),\alpha(x_{3}))\varphi(x_{1})$ $\displaystyle+\rho(\alpha(x_{1}))(\beta(a_{2}),\beta(a_{3}))-\rho(\alpha(x_{1}))(\beta(a_{2}),\varphi(x_{3}))-\theta(\alpha(x_{1}),\alpha(x_{3}))\beta(a_{2})$ $\displaystyle-\rho(\alpha(x_{1}))(\varphi(x_{2}),\beta(a_{3}))+\rho(\alpha(x_{1}))(\varphi(x_{2}),\varphi(x_{3}))+\theta(\alpha(x_{1}),\alpha(x_{3}))\varphi(x_{2})$ $\displaystyle+D(\alpha(x_{1}),\alpha(x_{2}))\beta(a_{3})-D(\alpha(x_{1}),\alpha(x_{2}))\varphi(x_{3})+\omega(\alpha(x_{1}),\alpha(x_{2}),\alpha(x_{3}))+s\alpha\\{x_{1},x_{2},x_{3}\\}_{\mathfrak{g}}$ and $\displaystyle\gamma(\\{\hat{w}_{1},\hat{w}_{2},\hat{w}_{3}\\}_{\hat{\mathfrak{g}}})$ $\displaystyle=$ $\displaystyle\gamma(\\{a_{1},a_{2},a_{3}\\}_{\hat{\mathfrak{g}}}+\\{a_{1},a_{2},s(x_{3})\\}_{\hat{\mathfrak{g}}}+\\{a_{1},s(x_{2}),a_{3}\\}_{\hat{\mathfrak{g}}}+\\{a_{1},s(x_{2}),s(x_{3})\\}_{\hat{\mathfrak{g}}}$ $\displaystyle+\\{s(x_{1}),a_{2},s(x_{3})\\}_{\hat{\mathfrak{g}}}+\\{s(x_{1}),s(x_{2}),a_{3}\\}_{\hat{\mathfrak{g}}}+\\{s(x_{1}),a_{2},a_{3}\\}_{\hat{\mathfrak{g}}}+\omega(x_{1},x_{2},x_{3})+s\\{x_{1},x_{2},x_{3}\\}_{\mathfrak{g}})$ $\displaystyle=$ $\displaystyle\\{\beta(a_{1}),\beta(a_{2}),\beta(a_{3})\\}_{\hat{\mathfrak{g}}}+\beta(T(x_{3})(a_{1},a_{2})-\rho(x_{2})(a_{1},a_{3})+\theta(x_{2},x_{3})a_{1}-\theta(x_{1},x_{3})a_{2}$ $\displaystyle+D(x_{1},x_{2})a_{3}+\rho(x_{1})(a_{2},a_{3}))+\beta\omega(x_{1},x_{2},x_{3})$ $\displaystyle+s\alpha\\{x_{1},x_{2},x_{3}\\}_{\mathfrak{g}}-\varphi(\\{x_{1},x_{2},x_{3}\\}_{\mathfrak{g}}).$ Thanks to Eqs. (79)-(84), we have $\gamma(\\{\hat{w}_{1},\hat{w}_{2},\hat{w}_{3}\\}_{\hat{\mathfrak{g}}})=\\{\gamma(\hat{w}_{1}),\gamma(\hat{w}_{2}),\gamma(\hat{w}_{3})\\}_{\hat{\mathfrak{g}}}.$ By the same token, $\gamma([\hat{w}_{1},\hat{w}_{2}]_{\hat{\mathfrak{g}}})=[\gamma(\hat{w}_{1}),\gamma(\hat{w}_{2})]_{\hat{\mathfrak{g}}}.$ Hence, $\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})$. This completes the proof. ∎
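Since each of the conditions (79)-(85) is linear in every argument separately, in a finite-dimensional situation they can be verified on basis elements. The sketch below spells out Eqs. (80) and (85); the remaining conditions are handled in the same way, and every callable (the cocycle data, $\alpha$, $\beta$, $\varphi$ and the brackets) is a hypothetical user-supplied map on coordinate vectors.

```python
import numpy as np

def close(u, v, tol=1e-12):
    return np.linalg.norm(u - v) < tol

def check_eq_85(basis_g, basis_h, mu, alpha, beta, varphi, bracket_h):
    """Eq. (85): beta(mu(x)a) - mu(alpha(x))beta(a) = [beta(a), varphi(x)]_h."""
    return all(close(beta(mu(x, a)) - mu(alpha(x), beta(a)),
                     bracket_h(beta(a), varphi(x)))
               for x in basis_g for a in basis_h)

def check_eq_80(basis_g, chi, mu, alpha, beta, varphi, bracket_g, bracket_h):
    """Eq. (80): beta(chi(x,y)) - chi(alpha(x),alpha(y))
       = [varphi(x),varphi(y)]_h + varphi([x,y]_g)
         - mu(alpha(x))varphi(y) + mu(alpha(y))varphi(x)."""
    return all(close(beta(chi(x, y)) - chi(alpha(x), alpha(y)),
                     bracket_h(varphi(x), varphi(y)) + varphi(bracket_g(x, y))
                     - mu(alpha(x), varphi(y)) + mu(alpha(y), varphi(x)))
               for x in basis_g for y in basis_g)
```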
Let $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$ and let $(\chi,\omega,\mu,\theta,D,\rho,T)$ be the corresponding non-abelian (2,3)-cocycle induced by $s$. For all $(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})$, define maps $\chi_{(\alpha,\beta)}:\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{h},~{}\omega_{(\alpha,\beta)}:\mathfrak{g}\otimes\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{h},~{}\mu_{(\alpha,\beta)}:\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h}),~{}\theta_{(\alpha,\beta)},D_{(\alpha,\beta)}:\mathfrak{g}\wedge\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{h}),~{}\rho_{(\alpha,\beta)},T_{(\alpha,\beta)}:\mathfrak{g}\longrightarrow\mathrm{Hom}(\mathfrak{h}\wedge\mathfrak{h},\mathfrak{h})$ respectively by (86) $\omega_{(\alpha,\beta)}(x,y,z)=\beta\omega(\alpha^{-1}(x),\alpha^{-1}(y),\alpha^{-1}(z)),~{}~{}\chi_{(\alpha,\beta)}(x,y)=\beta\chi(\alpha^{-1}(x),\alpha^{-1}(y)),$ (87) $\theta_{(\alpha,\beta)}(x,y)a=\beta(\theta(\alpha^{-1}(x),\alpha^{-1}(y))\beta^{-1}(a)),~{}~{}D_{(\alpha,\beta)}(x,y)a=\beta D(\alpha^{-1}(x),\alpha^{-1}(y))\beta^{-1}(a),$ (88) $\rho_{(\alpha,\beta)}(x)(a,b)=\beta\rho(\alpha^{-1}(x))(\beta^{-1}(a),\beta^{-1}(b)),~{}~{}T_{(\alpha,\beta)}(x)(a,b)=\beta T(\alpha^{-1}(x))(\beta^{-1}(a),\beta^{-1}(b)),$ (89) $\mu_{(\alpha,\beta)}(x)a=\beta\mu(\alpha^{-1}(x))\beta^{-1}(a),$ for all $x,y,z\in\mathfrak{g},a,b\in\mathfrak{h}.$

###### Proposition 5.3.

With the above notations, $(\chi_{(\alpha,\beta)},\omega_{(\alpha,\beta)},\mu_{(\alpha,\beta)},\theta_{(\alpha,\beta)},D_{(\alpha,\beta)},\rho_{(\alpha,\beta)},T_{(\alpha,\beta)})$ is a non-abelian (2,3)-cocycle.

###### Proof.
By Eqs. (86) and (87) together with the (2,3)-cocycle identities of Section 3.3, for all $x_{1},x_{2},y_{1},y_{2},y_{3}\in\mathfrak{g}$, we get $\displaystyle D_{(\alpha,\beta)}(x_{1},x_{2})\omega_{(\alpha,\beta)}(y_{1},y_{2},y_{3})+\omega_{(\alpha,\beta)}(x_{1},x_{2},\\{y_{1},y_{2},y_{3}\\}_{\mathfrak{g}})$ $\displaystyle=$ $\displaystyle\beta D(\alpha^{-1}(x_{1}),\alpha^{-1}(x_{2}))\omega(\alpha^{-1}(y_{1}),\alpha^{-1}(y_{2}),\alpha^{-1}(y_{3}))$ $\displaystyle+\beta\omega(\alpha^{-1}(x_{1}),\alpha^{-1}(x_{2}),\alpha^{-1}(\\{y_{1},y_{2},y_{3}\\}_{\mathfrak{g}}))$ $\displaystyle=$ $\displaystyle\beta\omega(\\{\alpha^{-1}(x_{1}),\alpha^{-1}(x_{2}),\alpha^{-1}(y_{1})\\}_{\mathfrak{g}},\alpha^{-1}(y_{2}),\alpha^{-1}(y_{3}))$ $\displaystyle+\beta\theta(\alpha^{-1}(y_{2}),\alpha^{-1}(y_{3}))\omega(\alpha^{-1}(x_{1}),\alpha^{-1}(x_{2}),\alpha^{-1}(y_{1}))$ $\displaystyle+\beta\omega(\alpha^{-1}(y_{1}),\\{\alpha^{-1}(x_{1}),\alpha^{-1}(x_{2}),\alpha^{-1}(y_{2})\\}_{\mathfrak{g}},\alpha^{-1}(y_{3}))$ $\displaystyle-\beta\theta(\alpha^{-1}(y_{1}),\alpha^{-1}(y_{3}))\omega(\alpha^{-1}(x_{1}),\alpha^{-1}(x_{2}),\alpha^{-1}(y_{2}))$ $\displaystyle+\beta\omega(\alpha^{-1}(y_{1}),\alpha^{-1}(y_{2}),\\{\alpha^{-1}(x_{1}),\alpha^{-1}(x_{2}),\alpha^{-1}(y_{3})\\}_{\mathfrak{g}})$ $\displaystyle+\beta D(\alpha^{-1}(y_{1}),\alpha^{-1}(y_{2}))\omega(\alpha^{-1}(x_{1}),\alpha^{-1}(x_{2}),\alpha^{-1}(y_{3}))$ $\displaystyle=$ $\displaystyle\omega_{(\alpha,\beta)}(\\{x_{1},x_{2},y_{1}\\}_{\mathfrak{g}},y_{2},y_{3})+\theta_{(\alpha,\beta)}(y_{2},y_{3})\omega_{(\alpha,\beta)}(x_{1},x_{2},y_{1})+\omega_{(\alpha,\beta)}(y_{1},\\{x_{1},x_{2},y_{2}\\}_{\mathfrak{g}},y_{3})$ $\displaystyle-\theta_{(\alpha,\beta)}(y_{1},y_{3})\omega_{(\alpha,\beta)}(x_{1},x_{2},y_{2})+\omega_{(\alpha,\beta)}(y_{1},y_{2},\\{x_{1},x_{2},y_{3}\\}_{\mathfrak{g}})+D_{(\alpha,\beta)}(y_{1},y_{2})\omega_{(\alpha,\beta)}(x_{1},x_{2},y_{3}),$ which implies that the corresponding cocycle identity holds for $(\omega_{(\alpha,\beta)},\theta_{(\alpha,\beta)},D_{(\alpha,\beta)})$. Similarly, we can check that the remaining cocycle identities of Section 3.3, up to Eq. (52), hold. This completes the proof. ∎

###### Theorem 5.4.

Let $0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be a non-abelian extension of a Lie-Yamaguti algebra $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$ and $(\chi,\omega,\mu,\theta,D,\rho,T)$ be the corresponding non-abelian (2,3)-cocycle induced by $s$. A pair $(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})$ is extensible if and only if the non-abelian (2,3)-cocycles $(\chi,\omega,\mu,\theta,D,\rho,T)$ and $(\chi_{(\alpha,\beta)},\omega_{(\alpha,\beta)},\mu_{(\alpha,\beta)},\theta_{(\alpha,\beta)},D_{(\alpha,\beta)},\rho_{(\alpha,\beta)},T_{(\alpha,\beta)})$ are equivalent.

###### Proof.

Suppose $(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})$ is extensible; by Theorem 5.2, there is a linear map $\varphi:\mathfrak{g}\longrightarrow\mathfrak{h}$ satisfying Eqs. (79)-(85). For all $x,y\in\mathfrak{g},a\in\mathfrak{h}$, there exist $x_{0},y_{0}\in\mathfrak{g},a_{0}\in\mathfrak{h}$ such that $x=\alpha(x_{0}),y=\alpha(y_{0}),a=\beta(a_{0})$.
Thus, by Eqs. (81), (87) and (88), we have $\displaystyle\theta_{(\alpha,\beta)}(x,y)a-\theta(x,y)a$ $\displaystyle=$ $\displaystyle\beta(\theta(\alpha^{-1}(x),\alpha^{-1}(y))\beta^{-1}(a))-\theta(x,y)a$ $\displaystyle=$ $\displaystyle\beta(\theta(x_{0},y_{0})a_{0})-\theta(\alpha(x_{0}),\alpha(y_{0}))\beta(a_{0})$ $\displaystyle=$ $\displaystyle\\{\beta(a_{0}),\varphi(x_{0}),\varphi(y_{0})\\}_{\mathfrak{h}}-T(\alpha(y_{0}))(\beta(a_{0}),\varphi(x_{0}))+\rho(\alpha(x_{0}))(\beta(a_{0}),\varphi(y_{0}))$ $\displaystyle=$ $\displaystyle\\{a,\varphi\alpha^{-1}(x),\varphi\alpha^{-1}(y)\\}_{\mathfrak{h}}-T(y)(a,\varphi\alpha^{-1}(x))+\rho(x)(a,\varphi\alpha^{-1}(y)),$ which indicates that Eq. (56) holds. Analogously, Eqs. (53)-(55) and (57)-(58) hold. Thus, $(\chi,\omega,\mu,\theta,D,\rho,T)$ and $(\chi_{(\alpha,\beta)},\omega_{(\alpha,\beta)},\mu_{(\alpha,\beta)},\theta_{(\alpha,\beta)},D_{(\alpha,\beta)},\rho_{(\alpha,\beta)},T_{(\alpha,\beta)})$ are equivalent via the linear map $\varphi\alpha^{-1}:\mathfrak{g}\longrightarrow\mathfrak{h}$. The converse part can be obtained analogously. ∎

## 6\. Wells exact sequences for Lie-Yamaguti algebras

In this section, we consider the Wells map associated with non-abelian extensions of Lie-Yamaguti algebras, and we interpret the results obtained in Section 5 in terms of this map. Let $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$. Then there is a linear map $t:\hat{\mathfrak{g}}\longrightarrow\mathfrak{h}$ such that (90) $it+sp=I_{\hat{\mathfrak{g}}}.$ Assume that $(\chi,\omega,\mu,\theta,D,\rho,T)$ is the corresponding non-abelian (2,3)-cocycle induced by $s$. Define a map $W:\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})\longrightarrow H^{(2,3)}_{nab}(\mathfrak{g},\mathfrak{h})$ by (91) $W(\alpha,\beta)=[(\chi_{(\alpha,\beta)},\omega_{(\alpha,\beta)},\mu_{(\alpha,\beta)},\theta_{(\alpha,\beta)},D_{(\alpha,\beta)},\rho_{(\alpha,\beta)},T_{(\alpha,\beta)})-(\chi,\omega,\mu,\theta,D,\rho,T)].$ The map $W$ is called the Wells map.

###### Proposition 6.1.

The Wells map $W$ does not depend on the choice of sections.

###### Proof.

For all $x,y,z\in\mathfrak{g}$, there are elements $x_{0},y_{0},z_{0}\in\mathfrak{g}$ such that $x=\alpha(x_{0}),y=\alpha(y_{0}),z=\alpha(z_{0})$. Assume that $(\chi^{\prime},\omega^{\prime},\mu^{\prime},\theta^{\prime},D^{{}^{\prime}},\rho^{{}^{\prime}},T^{{}^{\prime}})$ is another non-abelian (2,3)-cocycle corresponding to the non-abelian extension $\mathcal{E}$, induced by another section. By Lemma 3.7, we know that $(\chi^{\prime},\omega^{\prime},\mu^{\prime},\theta^{\prime},D^{{}^{\prime}},\rho^{{}^{\prime}},T^{{}^{\prime}})$ and $(\chi,\omega,\mu,\theta,D,\rho,T)$ are equivalent non-abelian (2,3)-cocycles via a linear map $\varphi:\mathfrak{g}\longrightarrow\mathfrak{h}$.
According to the equivalence conditions (53)-(58) and Eqs. (86)-(88), set $\psi=\beta\varphi\alpha^{-1}$; it follows that $\displaystyle\omega^{{}^{\prime}}_{(\alpha,\beta)}(x,y,z)-\omega_{(\alpha,\beta)}(x,y,z)$ $\displaystyle=$ $\displaystyle\beta\omega^{{}^{\prime}}(\alpha^{-1}(x),\alpha^{-1}(y),\alpha^{-1}(z))-\beta\omega(\alpha^{-1}(x),\alpha^{-1}(y),\alpha^{-1}(z))$ $\displaystyle=$ $\displaystyle\beta\omega^{{}^{\prime}}(x_{0},y_{0},z_{0})-\beta\omega(x_{0},y_{0},z_{0})$ $\displaystyle=$ $\displaystyle\beta\Big{(}\theta(x_{0},z_{0})\varphi(y_{0})-D(x_{0},y_{0})\varphi(z_{0})+\rho(x_{0})(\varphi(y_{0}),\varphi(z_{0}))-\theta(y_{0},z_{0})\varphi(x_{0})$ $\displaystyle+T(z_{0})(\varphi(x_{0}),\varphi(y_{0}))-\rho(y_{0})(\varphi(x_{0}),\varphi(z_{0}))-\\{\varphi(x_{0}),\varphi(y_{0}),\varphi(z_{0})\\}_{\mathfrak{h}}+\varphi\\{x_{0},y_{0},z_{0}\\}_{\mathfrak{g}}\Big{)}$ $\displaystyle=$ $\displaystyle\beta\Big{(}\theta(\alpha^{-1}(x),\alpha^{-1}(z))\beta^{-1}\psi(y)-D(\alpha^{-1}(x),\alpha^{-1}(y))\beta^{-1}\psi(z)+\rho(\alpha^{-1}(x))(\beta^{-1}\psi(y),\beta^{-1}\psi(z))$ $\displaystyle-\theta(\alpha^{-1}(y),\alpha^{-1}(z))\beta^{-1}\psi(x)+T(\alpha^{-1}(z))(\beta^{-1}\psi(x),\beta^{-1}\psi(y))-\rho(\alpha^{-1}(y))(\beta^{-1}\psi(x),\beta^{-1}\psi(z))\Big{)}$ $\displaystyle-\\{\psi(x),\psi(y),\psi(z)\\}_{\mathfrak{h}}+\psi(\\{x,y,z\\}_{\mathfrak{g}})$ $\displaystyle=$ $\displaystyle\theta_{(\alpha,\beta)}(x,z)\psi(y)-D_{(\alpha,\beta)}(x,y)\psi(z)+\rho_{(\alpha,\beta)}(x)(\psi(y),\psi(z))-\theta_{(\alpha,\beta)}(y,z)\psi(x)$ $\displaystyle+T_{(\alpha,\beta)}(z)(\psi(x),\psi(y))-\rho_{(\alpha,\beta)}(y)(\psi(x),\psi(z))-\\{\psi(x),\psi(y),\psi(z)\\}_{\mathfrak{h}}+\psi(\\{x,y,z\\}_{\mathfrak{g}}).$ By the same token, $\displaystyle\chi^{{}^{\prime}}_{(\alpha,\beta)}(x,y)-\chi_{(\alpha,\beta)}(x,y)=[\psi(x),\psi(y)]_{\mathfrak{h}}+\psi([x,y]_{\mathfrak{g}})-\mu_{(\alpha,\beta)}(x)\psi(y)+\mu_{(\alpha,\beta)}(y)\psi(x),$ $\displaystyle\theta^{{}^{\prime}}_{(\alpha,\beta)}(x,y)a-\theta_{(\alpha,\beta)}(x,y)a=\rho_{(\alpha,\beta)}(x)(a,\psi(y))-T_{(\alpha,\beta)}(y)(a,\psi(x))+\\{a,\psi(x),\psi(y)\\}_{\mathfrak{h}},$ $\displaystyle D^{{}^{\prime}}_{(\alpha,\beta)}(x,y)a-D_{(\alpha,\beta)}(x,y)a=\rho_{(\alpha,\beta)}(y)(\psi(x),a)-\rho_{(\alpha,\beta)}(x)(\psi(y),a)+\\{\psi(x),\psi(y),a\\}_{\mathfrak{h}},$ $\displaystyle\mu^{{}^{\prime}}_{(\alpha,\beta)}(x)a-\mu_{(\alpha,\beta)}(x)a=[a,\psi(x)]_{\mathfrak{h}},~{}~{}~{}~{}\rho^{{}^{\prime}}_{(\alpha,\beta)}(x)(a,b)-\rho_{(\alpha,\beta)}(x)(a,b)=\\{a,\psi(x),b\\}_{\mathfrak{h}},$ $\displaystyle T^{{}^{\prime}}_{(\alpha,\beta)}(x)(a,b)-T_{(\alpha,\beta)}(x)(a,b)=\\{b,a,\psi(x)\\}_{\mathfrak{h}}.$ So, $(\chi^{\prime}_{(\alpha,\beta)},\omega^{\prime}_{(\alpha,\beta)},\mu^{\prime}_{(\alpha,\beta)},\theta^{\prime}_{(\alpha,\beta)},D^{\prime}_{(\alpha,\beta)},\rho^{\prime}_{(\alpha,\beta)},T^{\prime}_{(\alpha,\beta)})$ and $(\chi_{(\alpha,\beta)},\omega_{(\alpha,\beta)},\mu_{(\alpha,\beta)},\theta_{(\alpha,\beta)},D_{(\alpha,\beta)},\rho_{(\alpha,\beta)},T_{(\alpha,\beta)})$ are equivalent non-abelian (2,3)-cocycles via the linear map $\psi=\beta\varphi\alpha^{-1}$.
Combining this with Lemma 3.7, we know that $\displaystyle(\chi^{\prime}_{(\alpha,\beta)},\omega^{\prime}_{(\alpha,\beta)},\mu^{\prime}_{(\alpha,\beta)},\theta^{\prime}_{(\alpha,\beta)},D^{\prime}_{(\alpha,\beta)},\rho^{\prime}_{(\alpha,\beta)},T^{\prime}_{(\alpha,\beta)})-(\chi^{\prime},\omega^{\prime},\mu^{\prime},\theta^{\prime},D^{{}^{\prime}},\rho^{{}^{\prime}},T^{{}^{\prime}})$ and $\displaystyle(\chi_{(\alpha,\beta)},\omega_{(\alpha,\beta)},\mu_{(\alpha,\beta)},\theta_{(\alpha,\beta)},D_{(\alpha,\beta)},\rho_{(\alpha,\beta)},T_{(\alpha,\beta)})-(\chi,\omega,\mu,\theta,D,\rho,T)$ are equivalent via the linear map $\beta\varphi\alpha^{-1}-\varphi$, so that they define the same class in $H^{(2,3)}_{nab}(\mathfrak{g},\mathfrak{h})$. ∎

###### Proposition 6.2.

Let $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$. Define a map (92) $K:\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})\longrightarrow\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h}),~{}~{}K(\gamma)=(p\gamma s,\gamma|_{\mathfrak{h}}),~{}\forall~{}\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}}).$ Then $K$ is a homomorphism of groups.

###### Proof.

One can proceed exactly as in the case of Lie algebras; see [1]. ∎

###### Theorem 6.3.

Assume that $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ is a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$. Then there is an exact sequence: $1\longrightarrow\mathrm{Aut}_{\mathfrak{h}}^{\mathfrak{g}}(\hat{\mathfrak{g}})\stackrel{{\scriptstyle H}}{{\longrightarrow}}\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})\stackrel{{\scriptstyle K}}{{\longrightarrow}}\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})\stackrel{{\scriptstyle W}}{{\longrightarrow}}H^{(2,3)}_{nab}(\mathfrak{g},\mathfrak{h}),$ where $\mathrm{Aut}_{\mathfrak{h}}^{\mathfrak{g}}(\hat{\mathfrak{g}})=\\{\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})|K(\gamma)=(I_{\mathfrak{g}},I_{\mathfrak{h}})\\}$.

###### Proof.

Obviously, $\mathrm{Ker}K=\mathrm{Im}H$ and $H$ is injective. We only need to prove that $\mathrm{Ker}W=\mathrm{Im}K$. By Theorem 5.4, for all $(\alpha,\beta)\in\mathrm{Ker}W$, we get that $(\alpha,\beta)$ is extensible with respect to the non-abelian extension $\mathcal{E}$, that is, there is a $\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})$ such that $i\beta=\gamma i,~{}p\gamma=\alpha p$, from which $\alpha=\alpha ps=p\gamma s,~{}\beta=\gamma|_{\mathfrak{h}}.$ Thus, $(\alpha,\beta)\in\mathrm{Im}K$. On the other hand, for any $(\alpha,\beta)\in\mathrm{Im}K$, there is an automorphism $\gamma\in\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})$ such that Eq. (92) holds, i.e. $(\alpha,\beta)=(p\gamma s,\gamma|_{\mathfrak{h}})$. Combining Eq. (90) and $\mathrm{Im}i=\mathrm{Ker}p$, we obtain $\alpha p=p\gamma sp=p\gamma(I_{\hat{\mathfrak{g}}}-it)=p\gamma$ and $i\beta=\gamma i$. Hence, $(\alpha,\beta)$ is extensible with respect to the non-abelian extension $\mathcal{E}$. According to Theorem 5.4, $(\alpha,\beta)\in\mathrm{Ker}W$. In conclusion, $\mathrm{Ker}W=\mathrm{Im}K$. ∎
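Computationally, the Wells obstruction is assembled from the transported data (86)-(89). The sketch below, with all maps as hypothetical user-supplied callables (including the inverses of $\alpha$ and $\beta$), produces the conjugated cocycle maps; $W(\alpha,\beta)$ is then the class of the componentwise difference between the transported and the original cocycle.

```python
# Sketch: conjugating a non-abelian (2,3)-cocycle by a pair (alpha, beta)
# as in Eqs. (86)-(89).  alpha_inv, beta, beta_inv and the entries of the
# dictionary 'cocycle' are hypothetical callables.

def transport(cocycle, alpha_inv, beta, beta_inv):
    omega, chi = cocycle["omega"], cocycle["chi"]
    theta, mu = cocycle["theta"], cocycle["mu"]
    return {
        "omega": lambda x, y, z: beta(omega(alpha_inv(x), alpha_inv(y),
                                            alpha_inv(z))),        # (86)
        "chi":   lambda x, y: beta(chi(alpha_inv(x), alpha_inv(y))),  # (86)
        "theta": lambda x, y, a: beta(theta(alpha_inv(x), alpha_inv(y),
                                            beta_inv(a))),         # (87)
        "mu":    lambda x, a: beta(mu(alpha_inv(x), beta_inv(a))),  # (89)
        # D, rho and T are conjugated in exactly the same way, cf. (87)-(88).
    }
```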
Let $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$. Suppose that $(\chi,\omega,\mu,\theta,D,\rho,T)$ is the non-abelian (2,3)-cocycle induced by the section $s$. Denote (93) $\displaystyle Z_{nab}^{1}(\mathfrak{g},\mathfrak{h})=$ $\displaystyle\left\\{\varphi:\mathfrak{g}\rightarrow\mathfrak{h}\left|\begin{aligned} &[a,\varphi(x)]_{\mathfrak{h}}=\\{a,b,\varphi(x)\\}_{\mathfrak{h}}=\\{\varphi(x),a,b\\}_{\mathfrak{h}}=0,\\\ &\\{\varphi(x),\varphi(y),a\\}_{\mathfrak{h}}-\rho(x)(\varphi(y),a)+\rho(y)(\varphi(x),a)=0,\\\ &\\{a,\varphi(x),\varphi(y)\\}_{\mathfrak{h}}-T(y)(a,\varphi(x))+\rho(x)(a,\varphi(y))=0,\\\ &\mu(x)\varphi(y)-\mu(y)\varphi(x)=\varphi([x,y]_{\mathfrak{g}})+[\varphi(x),\varphi(y)]_{\mathfrak{h}},~{}\\\ &T(z)(\varphi(x),\varphi(y))-\rho(y)(\varphi(x),\varphi(z))-\theta(y,z)\varphi(x)\\\ &+\rho(x)(\varphi(y),\varphi(z))+\theta(x,z)\varphi(y)-D(x,y)\varphi(z)\\\ &=\\{\varphi(x),\varphi(y),\varphi(z)\\}_{\mathfrak{h}}-\varphi(\\{x,y,z\\}_{\mathfrak{g}}),~{}\forall~{}x,y,z\in{\mathfrak{g}},a,b\in{\mathfrak{h}}\end{aligned}\right.\right\\}.$ It is easy to check that $Z_{nab}^{1}(\mathfrak{g},\mathfrak{h})$ is an abelian group; its elements are called non-abelian 1-cocycles on $\mathfrak{g}$ with values in $\mathfrak{h}$.

###### Proposition 6.4.

With the above notations, we have

1. $(i)$ The map $S:\mathrm{Ker}K\longrightarrow Z_{nab}^{1}(\mathfrak{g},\mathfrak{h})$ defined by (94) $S(\gamma)(x)=\varphi_{\gamma}(x)=s(x)-\gamma s(x),~{}\forall~{}~{}\gamma\in\mathrm{Ker}K,~{}x\in\mathfrak{g}$ is a homomorphism of groups.

2. $(ii)$ $S$ is an isomorphism, that is, $\mathrm{Ker}K\simeq Z_{nab}^{1}(\mathfrak{g},\mathfrak{h})$.

###### Proof.

1. $(i)$ By Eqs. (62)-(64), (92) and (94), for all $x,y,z\in\mathfrak{g}$, we have $\displaystyle\\{\varphi_{\gamma}(x),\varphi_{\gamma}(y),\varphi_{\gamma}(z)\\}_{\mathfrak{h}}-T(z)(\varphi_{\gamma}(x),\varphi_{\gamma}(y))+\rho(y)(\varphi_{\gamma}(x),\varphi_{\gamma}(z))+\theta(y,z)\varphi_{\gamma}(x)$ $\displaystyle-\rho(x)(\varphi_{\gamma}(y),\varphi_{\gamma}(z))-\theta(x,z)\varphi_{\gamma}(y)+D(x,y)\varphi_{\gamma}(z)-\varphi_{\gamma}(\\{x,y,z\\}_{\mathfrak{g}})$ $\displaystyle=$ $\displaystyle\\{s(x)-\gamma s(x),s(y)-\gamma s(y),s(z)-\gamma s(z)\\}_{\hat{\mathfrak{g}}}-\\{s(x)-\gamma s(x),s(y)-\gamma s(y),s(z)\\}_{\hat{\mathfrak{g}}}$ $\displaystyle+\\{s(y),s(x)-\gamma s(x),s(z)-\gamma s(z)\\}_{\hat{\mathfrak{g}}}+\\{s(x)-\gamma s(x),s(y),s(z)\\}_{\hat{\mathfrak{g}}}-\\{s(x),s(y)-\gamma s(y),s(z)-\gamma s(z)\\}_{\hat{\mathfrak{g}}}$ $\displaystyle-\\{s(y)-\gamma s(y),s(x),s(z)\\}_{\hat{\mathfrak{g}}}+\\{s(x),s(y),s(z)-\gamma s(z)\\}_{\hat{\mathfrak{g}}}+\gamma s(\\{x,y,z\\}_{\mathfrak{g}})-s(\\{x,y,z\\}_{\mathfrak{g}})$ $\displaystyle=$ $\displaystyle\gamma s(\\{x,y,z\\}_{\mathfrak{g}})-\\{\gamma s(x),\gamma s(y),\gamma s(z)\\}_{\hat{\mathfrak{g}}}+\\{s(x),s(y),s(z)\\}_{\hat{\mathfrak{g}}}-s(\\{x,y,z\\}_{\mathfrak{g}})$ $\displaystyle=$ $\displaystyle\omega(x,y,z)-\gamma\omega(x,y,z)$ $\displaystyle=$ $\displaystyle 0,$ since $\gamma|_{\mathfrak{h}}=I_{\mathfrak{h}}$. Analogously, we can check that $\varphi_{\gamma}$ satisfies the other identities in $Z_{nab}^{1}(\mathfrak{g},\mathfrak{h})$. Thus, $S$ is well-defined. For any $\gamma_{1},\gamma_{2}\in\mathrm{Ker}K$ and $x\in\mathfrak{g}$, suppose $S(\gamma_{1})=\varphi_{\gamma_{1}}$ and $S(\gamma_{2})=\varphi_{\gamma_{2}}$.
By Eqs. (92) and (94), we have $\displaystyle S(\gamma_{1}\gamma_{2})(x)$ $\displaystyle=s(x)-\gamma_{1}\gamma_{2}s(x)$ $\displaystyle=s(x)-\gamma_{1}(s(x)-\varphi_{\gamma_{2}}(x))$ $\displaystyle=s(x)-\gamma_{1}s(x)+\gamma_{1}\varphi_{\gamma_{2}}(x)$ $\displaystyle=\varphi_{\gamma_{1}}(x)+\varphi_{\gamma_{2}}(x),$ where the last equality uses that $\varphi_{\gamma_{2}}(x)\in\mathfrak{h}$ and $\gamma_{1}|_{\mathfrak{h}}=I_{\mathfrak{h}}$. This means that $S(\gamma_{1}\gamma_{2})=S(\gamma_{1})+S(\gamma_{2})$, that is, $S$ is a homomorphism of groups. 2. $(ii)$ For all $\gamma\in\mathrm{Ker}K$, we have $K(\gamma)=(p\gamma s,\gamma|_{\mathfrak{h}})=(I_{\mathfrak{g}},I_{\mathfrak{h}})$. If $S(\gamma)=\varphi_{\gamma}=0$, then $\varphi_{\gamma}(x)=s(x)-\gamma s(x)=0$ for all $x\in\mathfrak{g}$; since $\gamma$ also fixes $\mathfrak{h}$ pointwise and $\hat{\mathfrak{g}}=\mathfrak{h}\oplus s(\mathfrak{g})$ as vector spaces, it follows that $\gamma=I_{\hat{\mathfrak{g}}}$, which shows that $S$ is injective. Secondly, we prove that $S$ is surjective. Since $s$ is a section of $p$, every $\hat{x}\in\hat{\mathfrak{g}}$ can be written as $a+s(x)$ for some $a\in\mathfrak{h},x\in\mathfrak{g}$. For any $\varphi\in Z_{nab}^{1}(\mathfrak{g},\mathfrak{h})$, define a linear map $\gamma:\hat{\mathfrak{g}}\rightarrow\hat{\mathfrak{g}}$ by (95) $\gamma(\hat{x})=\gamma(a+s(x))=s(x)-\varphi(x)+a,~{}\forall~{}\hat{x}\in\hat{\mathfrak{g}}.$ It is immediate that $(p\gamma s,\gamma|_{\mathfrak{h}})=(I_{\mathfrak{g}},I_{\mathfrak{h}})$: indeed, $p\gamma s(x)=p(s(x)-\varphi(x))=x$ because $\varphi(x)\in\mathfrak{h}=\mathrm{Ker}\,p$, and $\gamma(a)=a$ for all $a\in\mathfrak{h}$. It remains to verify that $\gamma$ is an automorphism of the Lie-Yamaguti algebra $\hat{\mathfrak{g}}$, which follows by the same procedure as in the proof of the converse part of Theorem 5.2. Hence $\gamma\in\mathrm{Ker}K$, and $S$ is surjective. Altogether, $S$ is bijective, so $\mathrm{Ker}K\simeq Z_{nab}^{1}(\mathfrak{g},\mathfrak{h})$. ∎ Combining Theorem 6.3 and Proposition 6.4, we have ###### Theorem 6.5. Let $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be a non-abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$. There is an exact sequence: $0\longrightarrow Z_{nab}^{1}(\mathfrak{g},\mathfrak{h})\stackrel{{\scriptstyle i}}{{\longrightarrow}}\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})\stackrel{{\scriptstyle K}}{{\longrightarrow}}\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})\stackrel{{\scriptstyle W}}{{\longrightarrow}}H^{(2,3)}_{nab}(\mathfrak{g},\mathfrak{h}).$ ## 7\. Particular case: abelian extensions of Lie-Yamaguti algebras In this section, we specialize the results of the previous section to the case of abelian extensions. We fix the abelian extension $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$. Assume that $(\chi,\omega)$ is a (2,3)-cocycle corresponding to $\mathcal{E}$. For an abelian extension $\mathcal{E}$, the maps $\rho,T$ defined by (64) vanish. Then the quadruple $(\mathfrak{h},\mu,\theta,D)$ given by Eq. (63) is a representation of $\mathfrak{g}$ [35]. Moreover, ###### Theorem 7.1 ([35]). 1. $(i)$ The triple $(\mathfrak{g}\oplus\mathfrak{h},[\ ,\ ]_{\chi},\\{\ ,\ ,\ \\}_{\omega})$ is a Lie-Yamaguti algebra if and only if $(\chi,\omega)$ is a (2,3)-cocycle of $\mathfrak{g}$ with coefficients in the representation $(\mathfrak{h},\mu,\theta,D)$. 2. $(ii)$ Abelian extensions of $\mathfrak{g}$ by $\mathfrak{h}$ are classified by the cohomology group $H^{(2,3)}(\mathfrak{g},\mathfrak{h})$ of $\mathfrak{g}$ with coefficients in $(\mathfrak{h},\mu,\theta,D)$. ###### Theorem 7.2.
Let $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be an abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$. Assume that $(\chi,\omega)$ is a (2,3)-cocycle and $(\mathfrak{h},\mu,\theta,D)$ is a representation of $\mathfrak{g}$ associated to $\mathcal{E}$. A pair $(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})$ is extensible with respect to the abelian extension $\mathcal{E}$ if and only if there is a linear map $\varphi:\mathfrak{g}\longrightarrow\mathfrak{h}$ satisfying the following conditions: $\displaystyle\beta\omega(x,y,z)-\omega(\alpha(x),\alpha(y),\alpha(z))=$ $\displaystyle\theta(\alpha(x),\alpha(z))\varphi(y)-\theta(\alpha(y),\alpha(z))\varphi(x)$ (96) $\displaystyle-D(\alpha(x),\alpha(y))\varphi(z)+\varphi(\\{x,y,z\\}_{\mathfrak{g}}),$ (97) $\beta\chi(x,y)-\chi(\alpha(x),\alpha(y))=\mu(\alpha(y))\varphi(x)-\mu(\alpha(x))\varphi(y)+\varphi([x,y]_{\mathfrak{g}}),$ (98) $\beta(\theta(x,y)a)=\theta(\alpha(x),\alpha(y))\beta(a),~{}~{}\beta\mu(x)a=\mu(\alpha(x))\beta(a).$ ###### Proof. This follows directly from Theorem 5.2. ∎ By Eqs. (10) and (98), we obtain $\beta D(x,y)a=D(\alpha(x),\alpha(y))\beta(a).$ For a general $(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})$, $(\chi_{(\alpha,\beta)},\omega_{(\alpha,\beta)})$ need not be a (2,3)-cocycle. Indeed, $(\chi_{(\alpha,\beta)},\omega_{(\alpha,\beta)})$ is a (2,3)-cocycle if Eq. (98) holds. Thus, it is natural to introduce the space of compatible pairs of automorphisms: $\displaystyle C_{(\mu,\theta)}=$ $\displaystyle\left\\{(\alpha,\beta)\in\mathrm{Aut}(\mathfrak{g})\times\mathrm{Aut}(\mathfrak{h})\left|\begin{aligned} &\beta(\theta(x,y)a)=\theta(\alpha(x),\alpha(y))\beta(a),\\\ &\beta\mu(x)a=\mu(\alpha(x))\beta(a),~{}\forall~{}x,y\in{\mathfrak{g}},a\in{\mathfrak{h}}\end{aligned}\right.\right\\}.$ More details on the space of compatible pairs of automorphisms can be found in [13]. By analogy with Theorem 5.4, we get ###### Theorem 7.3 ([13]). Let $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be an abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$ with a section $s$ of $p$, and let $(\chi,\omega)$ be a (2,3)-cocycle associated to $\mathcal{E}$. A pair $(\alpha,\beta)\in C_{(\mu,\theta)}$ is extensible with respect to the abelian extension $\mathcal{E}$ if and only if $(\chi,\omega)$ and $(\chi_{(\alpha,\beta)},\omega_{(\alpha,\beta)})$ are in the same cohomological class. In the case of abelian extensions, $Z^{1}_{nab}(\mathfrak{g},\mathfrak{h})$ defined by (93) reduces to ${H}^{1}(\mathfrak{g},\mathfrak{h})$ given in Section 2. In the light of Theorem 6.5 and Theorem 7.3, we have the following exact sequence: ###### Theorem 7.4 ([13]). Let $\mathcal{E}:0\longrightarrow\mathfrak{h}\stackrel{{\scriptstyle i}}{{\longrightarrow}}\hat{\mathfrak{g}}\stackrel{{\scriptstyle p}}{{\longrightarrow}}\mathfrak{g}\longrightarrow 0$ be an abelian extension of $\mathfrak{g}$ by $\mathfrak{h}$.
There is an exact sequence: $0\longrightarrow H^{1}(\mathfrak{g},\mathfrak{h})\stackrel{{\scriptstyle i}}{{\longrightarrow}}\mathrm{Aut}_{\mathfrak{h}}(\hat{\mathfrak{g}})\stackrel{{\scriptstyle K}}{{\longrightarrow}}C_{(\mu,\theta)}\stackrel{{\scriptstyle W}}{{\longrightarrow}}H^{(2,3)}(\mathfrak{g},\mathfrak{h}).$ Acknowledgments. This work was supported by the National Natural Science Foundation of China (11871421), the Natural Science Foundation of Zhejiang Province of China (LY19A010001), and the Science and Technology Planning Project of Zhejiang Province (2022C01118). Statements and Declarations. All datasets underlying the conclusions of the paper are available to readers. No conflict of interest exists in the submission of this manuscript. ## References * [1] V. G. Bardakov, M. Singh, Extensions and automorphisms of Lie algebras, J. Algebra Appl. 16 (2017), 1750162. * [2] P. Benito, A. Elduque, F. M. Herce, Irreducible Lie-Yamaguti algebras, J. Pure Appl. Algebra 213 (2009), 795–808. * [3] P. Benito, A. Elduque, F. M. Herce, Irreducible Lie-Yamaguti algebras of generic type, J. Pure Appl. Algebra 215 (2011), 108–130. * [4] J. M. Casas, E. Khmaladze, M. Ladra, Low-dimensional non-abelian Leibniz cohomology, Forum Math. 25 (3) (2013), 443–469. * [5] S. Chen, Y. Sheng, Z. Zheng, Non-abelian extensions of Lie 2-algebras, Sci. China Math. 55 (8) (2012), 1655–1668. * [6] A. Das, A. Mandal, Extensions, deformations and categorifications of AssDer pairs, arXiv:2002.11415. * [7] A. Das, N. Rathee, Extensions and automorphisms of Rota-Baxter groups, J. Algebra 636 (2023), 626–665. * [8] L. Du, Y. Tan, Wells sequences for abelian extensions of Lie coalgebras, J. Algebra Appl. 20 (8) (2021), 2150149. * [9] L. Du, Y. Tan, Coderivations, abelian extensions and cohomology of Lie coalgebras, Commun. Algebra 49 (10) (2021), 4519–4542. * [10] S. Eilenberg, S. MacLane, Cohomology theory in abstract groups. II. Group extensions with non-abelian kernel, Ann. Math. 48 (1947), 326–341. * [11] Y. Frégier, Non-abelian cohomology of extensions of Lie algebras as Deligne groupoid, J. Algebra 398 (2014), 243–257. * [12] W. M. Goldman, J. J. Millson, The deformation theory of representations of fundamental groups of compact Kähler manifolds, Publ. Math. IHES 67 (1988), 43–96. * [13] S. Goswami, S. K. Mishra, G. Mukherjee, Automorphisms of extensions of Lie-Yamaguti algebras and inducibility problem, J. Algebra 641 (2024), 268–306. * [14] Y. Guo, B. Hou, Crossed modules and non-abelian extensions of Rota-Baxter Leibniz algebras, J. Geom. Phys. 191 (2023), 104906. * [15] S. K. Hazra, A. Habib, Wells exact sequence and automorphisms of extensions of Lie superalgebras, J. Lie Theory 30 (2020), 179–199. * [16] B. Hou, J. Zhao, Crossed modules, non-abelian extensions of associative conformal algebras and Wells exact sequences, arXiv:2211.10842v1. * [17] N. Inassaridze, E. Khmaladze, M. Ladra, Non-abelian cohomology and extensions of Lie algebras, J. Lie Theory 18 (2008), 413–432. * [18] P. Jin, H. Liu, The Wells exact sequence for the automorphism group of a group extension, J. Algebra 324 (2010), 1219–1228. * [19] M. K. Kinyon, A. Weinstein, Leibniz algebras, Courant algebroids and multiplications on reductive homogeneous spaces, Amer. J. Math. 123 (2001), 525–550. * [20] J. Lin, L. Chen, Y. Ma, On the deformation of Lie-Yamaguti algebras, Acta Math. Sin. (Engl. Ser.) 31 (6) (2015), 938–946. * [21] J. Liu, Y. Sheng, Q. Wang, On non-abelian extensions of Leibniz algebras, Commun. Algebra 46 (2) (2018), 574–587. * [22] S. K. Mishra, A. Das, S. K. Hazra, Non-abelian extensions of Rota-Baxter Lie algebras and inducibility of automorphisms, Linear Algebra Appl. 669 (2023), 147–174. * [23] K. Nomizu, Invariant affine connections on homogeneous spaces, Amer. J. Math. 76 (1954), 33–65. * [24] I. B. S. Passi, M. Singh, M. K. Yadav, Automorphisms of abelian group extensions, J. Algebra 324 (2010), 820–830. * [25] Y. Sheng, J. Zhao, Relative Rota-Baxter operators and symplectic structures on Lie-Yamaguti algebras, Commun. Algebra 50 (9) (2022), 4056–4073. * [26] Y. Sheng, J. Zhao, Y. Zhou, Nijenhuis operators, product structures and complex structures on Lie-Yamaguti algebras, J. Algebra Appl. (2021), 2150146. * [27] L. Song, A. Makhlouf, R. Tang, On non-abelian extensions of 3-Lie algebras, Commun. Theor. Phys. 69 (2018), 347–356. * [28] N. Takahashi, Modules over quadratic spaces and representations of Lie-Yamaguti algebras, J. Lie Theory 31 (4) (2021), 897–932. * [29] C. Wells, Automorphisms of group extensions, Trans. Amer. Math. Soc. 155 (1971), 189–194. * [30] R. Tang, Y. Frégier, Y. Sheng, Cohomologies of a Lie algebra with a derivation and applications, J. Algebra 534 (2) (2019), 65–99. * [31] J. Zhao, Y. Qiao, Maurer-Cartan characterizations, $L_{\infty}$-algebras, and cohomology of relative Rota-Baxter operators on Lie-Yamaguti algebras, arXiv:2310.05360v1. * [32] J. Zhao, Y. Qiao, Cohomology and deformations of relative Rota-Baxter operators on Lie-Yamaguti algebras, arXiv:2204.04872. * [33] K. Yamaguti, On the cohomology groups of general Lie triple system, Kumamoto J. Sci. Ser. A 8 (1969), 135–146. * [34] K. Yamaguti, On the Lie triple system and its generalization, J. Sci. Hiroshima Univ. Ser. A 21 (1957/58), 155–160. * [35] T. Zhang, J. Li, Deformations and extensions of Lie-Yamaguti algebras, Linear Multilinear Algebra 63 (11) (2015), 2212–2231.
# Synthetic photometry of OB star clusters with stochastically sampled IMFs: analysis of models and HST observations. Rogelio Orozco-Duarte1, Aida Wofford1, Alba Vidal-García2,3, Gustavo Bruzual4, Stephane Charlot4, Mark R. Krumholz5,6, Stephen Hannon7,8, Janice Lee9,7, Timothy Wofford10, Michele Fumagalli11, Daniel Dale12, Matteo Messa13,14, Eva K. Grebel15, Linda Smith16, Kathryn Grasha5, David Cook7 1Universidad Nacional Autónoma de México, Instituto de Astronomía, AP 106, Ensenada 22800, BC, México 2Sorbonne Université, UPMC-CNRS, UMR7095, Institut d’Astrophysique de Paris, F-75014 Paris, France 3LPENS, Ecole Normale Supérieure, Université PSL, CNRS, Sorbonne Université, Université Paris-Diderot, Paris, France 4Instituto de Radioastronomía y Astrofísica, UNAM, Campus Morelia, Michoacán, México, C.P. 58089, México 5Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia 6ARC Centre of Excellence for Astronomy in Three Dimensions (ASTRO-3D), Canberra, ACT 2611, Australia 7Caltech-IPAC, 1200 E. California Blvd. Pasadena, CA 91125, USA 8Department of Physics & Astronomy, University of California, Riverside, CA, USA 9Gemini Observatory/NSF’s NOIRLab, 950 N. Cherry Avenue, Tucson, AZ, 85719, USA 10Facultad de Ciencias, Universidad Autónoma de Baja California, AP 1880, Ensenada 22800, BC, México 11Dipartimento di Fisica G. Occhialini, Università degli Studi di Milano Bicocca, Piazza della Scienza 3, 20126 Milano, Italy 12Department of Physics & Astronomy, University of Wyoming, Laramie WY 13Observatoire de Genève, Université de Genève, Chemin Pegasi 51, Versoix CH-1290, Switzerland 14Department of Astronomy, Oscar Klein Centre, Stockholm University, AlbaNova, Stockholm SE-106 91, Sweden 15Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Mönchhofstraße 12-14, 69120 Heidelberg, Germany 16European Space Agency (ESA), ESA Office, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA E-mail<EMAIL_ADDRESS> (Accepted XXX. Received YYY; in original form ZZZ) ###### Abstract We present a pilot library of synthetic NUV, U, B, V, and I photometry of star clusters with stochastically sampled IMFs and ionized gas, for initial masses $M_{i}=10^{3}$, $10^{4}$, and $10^{5}$ $M_{\odot}$; ages $t=1$, 3, 4, and 8 Myr; metallicities $Z=0.014$ and $Z=0.002$; and log(US) = -2 and -3. We compare the library with predictions from deterministic models and observations of isolated low-mass ($<10^{4}$ $M_{\odot}$) star clusters with co-spatial compact H ii regions. The clusters are located in NGC 7793, one of the nearest galaxies observed as part of the HST LEGUS and H$\alpha$-LEGUS surveys. 1) For model magnitudes that only account for the stars: a) the residual |deterministic mag - median stochastic mag| can be $\geq 0.5$ mag, even for $M_{i}=10^{5}$ $M_{\odot}$; and b) the largest spread in stochastic magnitudes occurs when Wolf-Rayet stars are present. 2) For $M_{i}=10^{5}$ $M_{\odot}$: a) the median stochastic mag with gas can be $>$1.0 mag more luminous than the median stochastic magnitude without gas; and b) nebular emission lines can contribute $>50\%$ and $>30\%$ of the total emission in the V and I bands, respectively. 3) Age-dating OB-star clusters via deterministic tracks in the U-B vs.
V-I plane is highly uncertain at $Z=0.014$ for $M_{i}\sim 10^{3}$ $M_{\odot}$ and $Z=0.002$ for $M_{i}\sim 10^{3}-10^{5}$ $M_{\odot}$. 4) For low-mass clusters, the V-band extinction derived with stochastic models significantly depends on the value of log(US). 5) The youngest clusters tend to have higher extinction. 6) The majority of clusters have multi-peaked age PDFs. 7) Finally, we discuss the importance of characterising the true variance in the number of stars per mass bin in nature. ###### keywords: stars: luminosity function, mass function – galaxies: stellar content – galaxies: ISM – (ISM:) HII regions – methods: statistical ## 1 Introduction Star clusters and cluster mass function. Star clusters are groupings of stars that are born from the same molecular cloud and are gravitationally bound. They can contain anywhere from fewer than a hundred members to millions of stars. Observations of star clusters with $<10^{4}\,$$M_{\odot}$ exist for the Milky Way, M31, NGC 4214, and other comparatively nearby galaxies. In particular, the PHAT survey (Panchromatic Hubble Andromeda Treasury, PI Dalcanton) covered approximately 1/3 of M31’s star-forming disk from the near ultraviolet (NUV) to the near infrared (NIR) at the high spatial resolution of the Hubble Space Telescope (HST), and provided a robust distance measurement and high quality data for M31. Studies of the above three galaxies show that the cluster mass function (CMF) in the range $<10^{4}\,$$M_{\odot}$ appears to be similar to the distribution at higher masses, down to the sensitivity limit, i.e., it is consistent with $dN/dM\sim M^{-2}$, where $N$ is the number of clusters and $M$ is the mass of the cluster (Johnson et al., 2017; Krumholz et al., 2019). The shape of the CMF has been confirmed by the studies of Adamo et al. (2017), Messa et al. (2018), and Cook et al. (2019), which target the galaxies NGC 628, M51, and the dwarf galaxies from HST’s Legacy Extra Galactic Ultraviolet Survey (LEGUS, Calzetti et al. 2015), respectively. In their study of 25 LEGUS galaxies, Hannon et al. (2019, hereafter H19) do not measure the CMF but find more clusters in the interval $10^{3}-10^{4}\,$$M_{\odot}$ than with $>10^{4}\,$$M_{\odot}$. Importance of low-mass star clusters. Observations of the Small Magellanic Cloud analysed by Lamb et al. (2010) seem to suggest that a distribution of the type $dN/dM\sim M^{-2}$ describes systems with masses as low as $10\,$$M_{\odot}$. This means that the mass range between 10 and 1000 $M_{\odot}$ contains the same total mass in stars as the range between 1000 and $10^{5}\,$$M_{\odot}$. Thus, one cannot obtain a complete understanding of stellar populations without studying low-mass ($<10^{4}\,$$M_{\odot}$) star clusters. In addition, studying a statistically significant number of low-mass clusters in a diversity of environments is necessary in order to understand how cluster properties depend on their environment. The IMF of star clusters. The stellar initial mass function (IMF) describes the stellar mass distribution of the cluster at birth, when the cluster is still embedded. It is a main ingredient of population synthesis models (e.g. Bruzual & Charlot 2003; Leitherer et al. 1999, 2014; Eldridge et al.
2017), which aim to predict the radiative, mechanical, and chemical feedback of stellar populations, and are used as input to cosmological simulations (e.g., Hirschmann et al. 2019). The IMF has a universal form for conditions as they are found at the present time throughout galaxies of the Local Group (Salpeter, 1955; Chabrier, 2003; Kroupa, 2012), and it is stochastically sampled (Cerviño & Luridiana, 2003; Fouesneau & Lançon, 2010; Fumagalli et al., 2011; Krumholz et al., 2015a). For star clusters with total initial masses ($M_{i}$) of $10^{3}$, $10^{4}$, $10^{5}$, and $10^{6}\,$$M_{\odot}$, and a stochastically-sampled IMF, Figure 1 shows the mean and variation in the number of stars in each mass bin. For generating the figure, we assume a Chabrier (2003) IMF shape and stop adding stars once the total mass exceeds $M_{i}$. Since most stars have a low mass, the final total mass is rarely much different from $M_{i}$. The stars are collected in 49 bins with boundaries evenly spaced on a logarithmic scale from 0.1 to 100 $M_{\odot}$. The figure shows the following. 1) The scatter relative to the mean increases as the mass of the star increases, i.e., as the probability of a star belonging to the bin decreases. 2) In addition, as the value of $M_{i}$ decreases, the standard deviation decreases while the relative standard deviation increases. These two results (1 and 2) arise because the number of stars in each bin follows an approximately binomial distribution with a probability given by the IMF (for a fixed number of stars, the collection of bin counts would follow a multinomial distribution; fixing the total mass removes some of the independence, so the binomial description is only an approximation). For example, if the IMF says that the probability in bin "j" is $P_{j}$ and we generate one realization of the IMF with $N$ stars, then we expect to find $n_{j}=NP_{j}$ stars in that bin with a standard deviation of $\sigma_{j}=\sqrt{NP_{j}(1-P_{j})}$ stars. For instance, for $N=2000$ stars and $P_{j}=0.01$, one expects $n_{j}=20$ stars with $\sigma_{j}\approx 4.5$, i.e., a relative scatter of $\sim 22$ per cent. The relative standard deviation $\sigma_{j}/n_{j}$ decreases as $N$ increases, i.e., as $M_{i}$ increases. Figure 1: Mean number of stars in each mass bin resulting from stochastically sampling the Chabrier (2003) IMF (his equation 1) 1000 times (filled circles), and standard deviation around the mean (error bars). We show this for clusters with initial masses of $10^{3}$, $10^{4}$, $10^{5}$, and $10^{6}$ M⊙, as indicated by the legend. The horizontal dashed line corresponds to a number of stars equal to 1. The need for improved models. Beyond the Local Group of galaxies, star clusters can only be resolved into individual stars in comparatively nearby galaxies, and one requires model spectro-photometry generated with population synthesis codes to infer the extinction, mass, and age of a star cluster from its integrated light. Although, in general, population synthesis codes do not account for the stochastic sampling of the IMF, SLUG (da Silva et al., 2012) does. The latter code has been used to study broad-band HST NUV to NIR observations of large samples of star clusters. Krumholz et al. (2015a) find that the stochastic SLUG models are generally a better fit to such observations than the deterministic Yggdrasil models of Zackrisson et al. (2011), but that the overall properties of the star clusters recovered by both codes are qualitatively similar. Ashworth et al. (2017) find that including H$\alpha$ photometry in the SLUG stochastic models significantly improves the age determination of young clusters.
However, in the latter two works, the stochastic models do not include the effect of the stochastic variation in the shape of the ionizing continuum on the nebular emission. As we show in this work, properly accounting for the emission of the ionized gas is important when fitting models to observations that include the light from OB-star clusters and nebular emission, as is the case for compact H ii regions, where the ionized gas is co-spatial with the stars. This work. We present a pilot library of synthetic photometry of young ($1-8$ Myr) star clusters that accounts for the stochastic sampling of the IMF and where the contribution of the ionized gas is fully modelled. The library includes the HST F275W (UV), F336W (U), F438W (B), F555W (V), and F814W (I) broad band filters and is based on the model GALAXEV-C spectra that are presented in Vidal-García et al. (in prep.). Our specific objectives are: i) for clusters with different initial masses, ages, and metallicities, and for different values of the ionization parameter, to determine the spread in predicted magnitudes due to a) the stochastic sampling of the IMF and b) the inclusion of the ionized gas; ii) to quantify the relative contributions of the stellar continuum, nebular continuum, and emission lines to the total emission in the photometric bands; iii) to compare the location of the stochastic models relative to the deterministic predictions in the U - B versus V - I colour - colour magnitude diagram (this diagram is used for age-dating clusters); and iv) for a sample of observed star clusters with a) low masses according to deterministic models and b) compact H ii regions, to obtain the probability distribution functions (PDFs) of the extinction, mass, and age corresponding to the independent stochastic GALAXEV-C and SLUG models. We use cluster observations from the HST LEGUS (Calzetti et al., 2015) and H$\alpha$-LEGUS (PID 13773, PI Chandar) surveys. The clusters are located in NGC 7793, which is one of the nearest galaxies in these surveys. In Section 2, we present our pilot library; in Section 3, we analyse the predictions of our pilot library; in Section 4, we present the observations; in Section 5, we describe BAYESPHOT (Krumholz et al., 2015b), which is the photometric interpretation tool that we use in this paper; in Section 6, we present and discuss the derived cluster properties; in Appendix A, we discuss what happens if we change the number of realizations of the IMF; in Appendices B and C, we present complementary colour-colour magnitude diagrams and PDF plots; finally, in Section 7, we summarise and conclude. ## 2 Models In this Section, we present a pilot library of synthetic HST-equivalent UV, U, B, V and I photometry, which accounts for the stochastic sampling of the IMF and the contribution of the ionized gas. The specific bands for which photometry was computed are: WFC3/UVIS F275W, F336W, and F438W; and ACS/WFC F555W and F814W, where WFC3 is the Wide Field Camera 3, ACS is the Advanced Camera for Surveys, and UVIS and WFC (Wide Field Channel) are the channels of the instruments. This is the filter set that was used for observing the west field of galaxy NGC 7793, whose star clusters are analysed in Section 5.
Our pilot library covers sufficient parameter space to i) determine if there is a significant difference between deterministic models and the median of the stochastic models; ii) quantify the impact of the nebular emission in the filters; and iii) show the limitations of colour-colour diagrams for age-dating low-mass star clusters. ### 2.1 Stars The star clusters are assumed to be simple stellar populations (SSPs), i.e., populations where all stars are born simultaneously from the same molecular cloud. We compute models for four ages (1, 3, 4, and 8 Myr); three initial cluster masses ($10^{3}$, $10^{4}$, and $10^{5}$ $M_{\odot}$); and two metallicities ($Z=0.014$ and $Z=0.002$). The highest metallicity, $Z=0.014$, is the solar reference value of Asplund et al. (2009) and the closest available value to the metallicity of the star clusters (see Pilyugin et al. 2014 and Section 4). However, we also compute models at $Z=0.002$ for comparison with the higher-metallicity models. The IMF of the star clusters is stochastically sampled and assumes a Chabrier (2003) form and a mass range for the individual stars of 0.1 to 100 $M_{\odot}$. When populating the IMF, we keep adding stars until we reach at least 0.05 $M_{\odot}$ above the required mass of the cluster. Since most of the stars are low mass, our cluster masses are typically within 0.1 $M_{\odot}$ of the target mass. For each combination of the above parameters, we generated 220 realizations. This number is set by GALAXEV (Plat et al., 2019), which is the code that we use to model the stellar population spectra. In its deterministic version, 220 corresponds to the maximum number of time steps and spectra that are generated in one run. In its stochastic version, the time steps are replaced by IMF realizations. In Appendix A, we show that using a larger library of 1000 realizations does not significantly alter the mean or the standard deviation of the predicted photometry, nor the main conclusions of this work. We compute deterministic models and corresponding stochastic models. In the deterministic models, the shape of the spectrum remains unchanged and the luminosity is simply scaled in proportion to the value of $M_{i}$, which effectively assumes that the IMF is fully sampled. The models were computed with the latest version of the population synthesis code GALAXEV (Plat et al. 2019; Charlot & Bruzual, in prep.), which only includes the contribution from the stars. The models assume that the stars evolve as single stars with zero rotation. These models are useful for comparing with published results based on the same assumptions, and with more comprehensive models that account for the evolution of massive stars in close binary systems, in which the stars exchange mass and rotate. ### 2.2 Ionized gas In order to compute the contribution of the gas ionized by the massive stars, we use the above stellar models as input to photoionization models generated with CLOUDY (Ferland et al., 2017), following the approach presented in Vidal-García et al. (2017), i.e., we use spherical geometry, a covering factor of 1, and a filling factor of 1. The H ii regions are ionisation-bounded. The code used for this purpose is GALAXEV-C (C for CLOUDY; Vidal-García et al., in prep.). For the pilot library, we compute models for the following parameters: hydrogen number density, n(H)=100 cm${}^{-3}$; metallicities, $Z=0.002$ and $Z=0.014$; ionisation parameters at the Strömgren radius (as defined in Gutkin et al. 2016), log(US) = -2 and -3; and C/O = (C/O)⊙.
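To make the sampling scheme of Section 2.1 (and the bin statistics behind Figure 1) concrete, the following minimal Python sketch draws cluster realizations from a simplified Chabrier-like IMF and compares the per-bin scatter with the binomial expectation $\sigma_{j}=\sqrt{NP_{j}(1-P_{j})}$. The lognormal parameters and the high-mass slope below are commonly quoted values, not necessarily those used internally by GALAXEV, and the bin grid mimics the 49 logarithmic bins of Figure 1.

```python
import numpy as np

rng = np.random.default_rng(42)

# Tabulate a simplified Chabrier (2003)-like IMF, dN/dm, on a fine grid:
# lognormal below 1 Msun, Salpeter-like power law above, matched at 1 Msun.
m = np.logspace(np.log10(0.1), np.log10(100.0), 4000)
lognormal = np.exp(-(np.log10(m) - np.log10(0.08))**2 / (2 * 0.69**2)) / m
powerlaw = m**-2.35
k = np.searchsorted(m, 1.0)
powerlaw *= lognormal[k] / powerlaw[k]            # continuity at 1 Msun
dndm = np.where(m < 1.0, lognormal, powerlaw)
cdf = np.cumsum(dndm * np.gradient(m))
cdf /= cdf[-1]

def draw_cluster(m_target):
    """Stop-when-exceeded sampling: add stars until the total reaches m_target."""
    masses, total = [], 0.0
    while total < m_target:
        star = np.interp(rng.random(), cdf, m)    # inverse-CDF draw
        masses.append(star)
        total += star
    return np.array(masses)

# 220 realizations of a 10^3 Msun cluster; counts in 49 log-spaced bins.
bins = np.logspace(np.log10(0.1), np.log10(100.0), 50)
counts = np.array([np.histogram(draw_cluster(1e3), bins=bins)[0]
                   for _ in range(220)])
mean_n = counts.mean(axis=0)
sigma_n = counts.std(axis=0)
N = counts.sum(axis=1).mean()                     # mean number of stars drawn
P = mean_n / N
binom = np.sqrt(N * P * (1 - P))                  # binomial expectation
print(np.c_[mean_n, sigma_n, binom][-8:])         # highest-mass bins
```

With this setup, the printed high-mass bins show mean counts below one star and measured scatter close to the binomial estimate, the regime where stochasticity dominates the integrated photometry.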
### 2.3 Extinction due to dust In order to account for the effect of dust mixed with the ionized gas, we include dust grains in CLOUDY and, for consistency, deplete the refractory elements in the ionized gas by using a dust-to-metal ratio of $\xi_{d}$=0.3. In order to account for dust in the intervening neutral medium when deriving the cluster properties, we apply an extinction law to the CLOUDY output. In the process of finding the extinction in the V-band, $A_{\rm V}$, of observed star clusters, we try $A_{\rm V}$ values in the range $0-3$ mag, in steps of 0.01 mag. ## 3 Analysis of Models For all combinations of photometric band (NUV, U, B, V, and I), initial mass ($10^{3}$, $10^{4}$, and $10^{5}\,$$M_{\odot}$), metallicity (Z=0.002 and 0.014), and log(US) value (-2 and -3), Figure 2 shows the deterministic (small squares) and stochastic (violin plots) magnitude predictions. The photometric bands are those used to observe field NGC 7793-W (see Section 4). The left side of each violin corresponds to stars only (GALAXEV output), while the right side corresponds to stars + ionized gas + dust mixed with the ionized gas (GALAXEV-C output). ### 3.1 Models with just stars In this subsection, we analyse the behaviour of the models that account for the stochastic sampling of the IMF and the contribution of the stars alone (left violin halves of Figure 2). Figure 2: How accounting for the stochastic sampling of the IMF and including the contributions of the ionized gas + dust mixed with the ionized gas affects the magnitude predictions. Each violin diagram includes 220 realizations of the IMF. The left half of the violin represents the models with just stars and the right half the models that account for stars + ionized gas + dust. We show models for: five LEGUS bands (given by the column titles); initial cluster masses of $M_{i}=10^{3}$, $10^{4}$, and $10^{5}$ $M_{\odot}$ (bottom-blue, middle-red, and top-magenta violins, respectively, as given by the legend); ages of $t=1$, 3, 4, and 8 Myr (given by the x-axis label); and all combinations of metallicity (Z=0.002 or 0.014) and ionization parameter (log(US)=-2 or -3, as given by the row titles). The shape of the half violin indicates the frequency of models at a given magnitude. The solid horizontal lines within the half violins give the median of the stochastic models, while the 25th and 75th percentiles are given by the dashed lines. The squares on each side of the violin give the deterministic magnitude without gas (square on the left) and with gas (square on the right) at each value of $M_{i}$. We number the panels from 1 to 20 to facilitate the discussion in Table 4. Mass effect. As the initial cluster mass, $M_{i}$, increases from bottom to top in each panel, the spread in magnitudes decreases. This behaviour is observed in all LEGUS bands and at all ages and metallicities. This is because massive stars are significantly more luminous than lower mass stars. Thus, the presence or absence of massive stars in the stellar population severely impacts its integrated luminosity. Age effect. In general, the spread in magnitudes is larger at 3 and 4 Myr than at 1 and 8 Myr, especially for $M_{i}=10^{3}$ $M_{\odot}$. This is due to the presence of classical Wolf-Rayet stars at 3 and 4 Myr. These stars, which are in a phase of helium burning and have lost their hydrogen envelope, are the evolved descendants of the most massive stars ($\geq 25\,M_{\odot}$, depending on metallicity, Massey et al. 2000).
Their presence or absence in the population significantly impacts the integrated luminosity. Metallicity effect. Stars of different chemical compositions evolve on different time-scales. For $M_{i}=10^{5}$ $M_{\odot}$, and each broad band and age, Table 1 gives the difference between the median magnitude of the stochastic models at $Z=0.014$ and $Z=0.002$. At $t=1$ Myr, the $Z=0.014$ models are more luminous in all bands, whereas at $t=8$ Myr, the $Z=0.002$ models are more luminous in all bands except F814W. Age | F275W | F336W | F438W | F555W | F814W ---|---|---|---|---|--- (Myr) | M(Z=0.002) - M(Z=0.014) 1 | 0.38 | 0.40 | 0.40 | 0.40 | 0.39 3 | 0.42 | 0.37 | 0.11 | -0.01 | -0.30 4 | 0.28 | 0.00 | -0.77 | -1.19 | -1.80 8 | -0.15 | -0.23 | -0.28 | -0.02 | 1.29 Table 1: For the GALAXEV models with $M_{i}=10^{5}$ M⊙, difference in median stochastic absolute magnitude, M(Z=0.002) - M(Z=0.014). ### 3.2 Models with stars, gas, and dust In this subsection, we analyse the behaviour of models that account for the stochastic sampling of the IMF and the added contributions of the ionized gas and dust mixed with the ionized gas (right violin halves of Figure 2). Gas effect. Adding the ionized gas significantly increases the luminosity for certain combinations of age, Z, and log(US). This can be seen in Table 2, which for models with $M_{i}=10^{5}$ $M_{\odot}$ shows the difference in median magnitude with and without gas. The effect of adding the gas is largest for the F555W (V) and F814W (I) bands, where at $t=1$ Myr, the difference is $>1$ mag for both values of $Z$ and log(US). At $t=1$ Myr, significant differences also occur in other bands. At 4 Myr, for $Z=0.014$ and both values of log(US), the difference is $\sim 0.6$ mag in the F555W band. However, as expected, at 8 Myr, when the ionising flux from the most massive stars is greatly diminished, including the gas does not significantly change the synthetic magnitudes. The contributions of the nebular continuum and emission lines to the V and I bands are discussed next. Age | F275W | F336W | F438W | F555W | F814W ---|---|---|---|---|--- (Myr) | M(stars) - M(stars+gas) $Z=0.002$, log(US)=-2 1 | 0.68 | 1.00 | 1.02 | 2.61 | 1.69 3 | 0.33 | 0.50 | 0.37 | 1.26 | 0.57 4 | 0.22 | 0.29 | 0.09 | 0.35 | 0.06 8 | -0.02 | 0.03 | 0.014 | 0.08 | 0.12 $Z=0.002$, log(US)=-3 1 | 0.88 | 1.13 | 1.11 | 2.10 | 1.83 3 | 0.48 | 0.62 | 0.45 | 0.90 | 0.65 4 | 0.35 | 0.38 | 0.15 | 0.22 | 0.09 8 | 0.06 | 0.09 | 0.06 | 0.10 | 0.14 $Z=0.014$, log(US)=-2 1 | -0.03 | 0.35 | 0.28 | 1.70 | 1.23 3 | -0.27 | -0.06 | -0.07 | 0.22 | 0.39 4 | -0.27 | -0.08 | -0.09 | 0.58 | 0.18 8 | -0.38 | -0.29 | -0.24 | -0.19 | -0.13 $Z=0.014$, log(US)=-3 1 | 0.51 | 0.86 | 0.76 | 1.53 | 1.77 3 | 0.11 | 0.31 | 0.23 | 0.37 | 0.63 4 | 0.15 | 0.29 | 0.21 | 0.57 | 0.47 8 | -0.06 | -0.03 | -0.04 | -0.03 | -0.03 Table 2: For the GALAXEV and GALAXEV-C models with $M_{i}=10^{5}$ M⊙, difference in median stochastic absolute magnitude, M(stars) - M(stars+gas). ### 3.3 Contributions of the stellar continuum, nebular continuum, and emission lines to the total emission in each band When gas is present in the models, the total emission includes the contributions from the nebular continuum and the emission lines. The importance of including nebular emission lines in spectral synthesis models has been shown by Bruzual & Charlot (2003), Zackrisson et al. (2011), and Schaerer & de Barros (2009).
For $M_{i}=10^{5}$ $M_{\odot}$, $Z=0.014$, and log(US)=-3, Figure 3 shows the strongest spectral features that contribute to the LEGUS bands used to image two overlapping fields of galaxy NGC 7793. NGC 7793-E was imaged with WFC3/UVIS in the HST-equivalent NUV, U, B, V, and I bands, while NGC 7793-W used ACS/WFC for the V and I bands. The strongest emission lines in each filter are: Mg II $\lambda$2798 (F275W); H$\delta$ $\lambda$4102, Ar I $\lambda$4300, and H$\gamma$ $\lambda$4341 (F438W); [O iii] $\lambda\lambda$4959, 5007, H$\beta$ $\lambda 4861$, and/or H$\alpha$ $\lambda 6563$ \+ [N ii] $\lambda\lambda$6548, 6584 (F555W, depending on the instrument/channel combination); and [S iii] $\lambda$9069 and $\lambda$9532 (F814W). Note that WFC3/UVIS F555W contains H$\alpha$ $\lambda 6563$ \+ [N ii] $\lambda\lambda$6548, 6584 because it cuts off at 7000 Å, and even though the throughput at 6563 Å is low (0.05 compared to 0.28 at peak), the strength of H$\alpha$ can dominate depending on the strength of [O iii]. ACS/WFC F555W is different and cuts off at a shorter wavelength. The contribution of the nebular continuum is strongest in the F275W and F336W bands. Note the presence of a strong Balmer break around $\sim$3800 Å. Such Balmer breaks have been observed; see, for example, Guseva et al. (2006). For $M_{i}=10^{5}$ $M_{\odot}$ and the full range of parameters of our pilot library, Figure 4 shows the contributions to the total luminosity in the V and I bands of the stellar continuum, nebular continuum, and strongest emission lines. The 220 realizations of the IMF are included. We select the V and I bands because they have contributions from the emission lines alone of $\geq 30$%. The Figure shows that the stellar continuum dominates at 8 Myr. This is because the ionizing flux from the massive stars is not significant at this age. The stellar continuum is also dominant in some cases at ages of 3 and 4 Myr (panels 5 to 12). The Figure also shows that the nebular continuum dominates in one case (panel 2). Finally, the Figure shows that the nebular emission lines can contribute $>50\%$ of the total emission in the V-band (panels 1, 3, 5 and 11) and $>30\%$ in the I-band (panel 4). The ranges corresponding to the y-axis of Figure 4 are given in Table 3. The Table and Figure illustrate the importance of accounting for the contribution of the ionized gas in the LEGUS bands. Figure 3: Top-panel–. Throughputs of the LEGUS filters + WFC3/UVIS (filled curves) or ACS/WFC (dashed curves) instrument/channel. Middle-panel–. The strongest spectral features that contribute to the different LEGUS bands according to models that include the contributions of the stars + ionized gas + dust mixed with the ionized gas. The stellar spectra used as input correspond to a cluster of initial mass = $10^{5}$ M⊙, $Z=0.014$, and age = 1 Myr. The gas parameters are: $Z=0.014$ and log(US)=-3. We use FWHM=130 km s${}^{-1}$ for the width of the emission lines, which is the typical value obtained from VLT MUSE observations of H ii regions in the galaxy under study (NGC 7793, Wofford et al. 2020). Bottom-panel–. Enlargement of the middle panel to show the continua from the stars alone (blue curve) and stars + ionized gas + dust in the ionized gas (black curve).
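The band fractions discussed in this section follow from integrating each spectral component against the filter throughput. The sketch below illustrates the basic operation under a photon-counting convention (integrand $F_{\lambda}\,T(\lambda)\,\lambda$); the flat continua, the Gaussian [O iii]-like line, and the box-shaped "V" filter are toy placeholders standing in for the GALAXEV-C spectra and the real WFC3/ACS throughput curves.

```python
import numpy as np

def band_fractions(wave, components, throughput):
    """Fractional contribution of each spectral component to the in-band flux.

    wave        : wavelength grid [A]
    components  : dict of name -> F_lambda on `wave` (stellar continuum,
                  nebular continuum, emission lines, ...)
    throughput  : filter transmission T(lambda) on `wave`
    Photon-counting convention: integrand F_lambda * T(lambda) * lambda.
    """
    fluxes = {name: np.trapz(f * throughput * wave, wave)
              for name, f in components.items()}
    total = sum(fluxes.values())
    return {name: f / total for name, f in fluxes.items()}

# Toy example: flat continua plus one Gaussian line in a box filter.
wave = np.linspace(4000.0, 7000.0, 3001)
T = ((wave > 4700) & (wave < 6200)).astype(float)       # crude "V" box
stellar = np.full_like(wave, 1.0e-16)                   # erg/s/cm2/A
nebular = np.full_like(wave, 2.0e-17)
# Gaussian line with integrated flux 5e-14 erg/s/cm2 and sigma = 3 A.
lines = (5.0e-14 / (3.0 * np.sqrt(2.0 * np.pi))
         * np.exp(-0.5 * ((wave - 5007.0) / 3.0)**2))
print(band_fractions(wave, {"stellar": stellar, "nebular": nebular,
                            "lines": lines}, T))
```

In this toy configuration a single strong line already contributes of order 20 per cent of the in-band flux, which is the effect Table 3 quantifies for the actual model spectra.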
Figure 4: For the GALAXEV and GALAXEV-C models with $M_{i}=10^{5}$ M⊙, contributions to the total luminosities in the V and I bands of the stellar continuum (SteCon, magenta background), nebular continuum (NebCon, white background), and strongest emission lines (EmLin, blue background). The cluster age is given by the y-axis label the log(US) value in the legend, and the metallicity at the top. The horizontal lines indicate contributions of 30 and 50% to the total luminosity in the band. The panels are numbered from 1 to 16. | F555W | F814W ---|---|--- Age | SteCon | NebCon | EmLin | SteCon | NebCon | EmLin | Z=0.002, log(US)=-2 1 | $0.08-0.10$ | $0.10-0.10$ | $0.80-0.82$ | $0.20-0.25$ | $0.59-0.63$ | $0.16-0.17$ 3 | $0.17-0.45$ | $0.05-0.10$ | $0.49-0.74$ | $0.34-0.80$ | $0.15-0.51$ | $0.05-0.15$ 4 | $0.53-0.81$ | $0.00-0.05$ | $0.19-0.43$ | $0.85-0.98$ | $0.01-0.11$ | $0.01-0.04$ 8 | $0.89-0.93$ | $0.00-0.01$ | $0.06-0.10$ | $0.88-0.94$ | $0.04-0.09$ | $0.02-0.03$ | Z=0.002, log(US)=-3 1 | $0.12-0.16$ | $0.18-0.19$ | $0.66-0.69$ | $0.19-0.23$ | $0.61-0.65$ | $0.15-0.16$ 3 | $0.26-0.59$ | $0.10-0.18$ | $0.31-0.56$ | $0.32-0.78$ | $0.18-0.54$ | $0.05-0.13$ 4 | $0.66-0.87$ | $0.03-0.09$ | $0.10-0.25$ | $0.83-0.95$ | $0.04-0.14$ | $0.01-0.03$ 8 | $0.89-0.94$ | $0.02-0.04$ | $0.04-0.07$ | $0.86-0.92$ | $0.06-0.12$ | $0.01-0.02$ | Z=0.014, log(US)=-2 1 | $0.18-0.23$ | $0.03-0.04$ | $0.74-0.78$ | $0.32-0.38$ | $0.29-0.32$ | $0.33-0.36$ 3 | $0.67-0.77$ | $0.04-0.09$ | $0.14-0.28$ | $0.68-0.90$ | $0.01-0.13$ | $0.10-0.19$ 4 | $0.36-0.65$ | $0.02-0.08$ | $0.27-0.62$ | $0.69-0.93$ | $0.00-0.14$ | $0.05-0.18$ 8 | $0.84-0.85$ | $0.15-0.15$ | $0.01-0.01$ | $0.89-0.89$ | $0.11-0.11$ | $0.00-0.00$ | Z=0.014, log(US)=-3 1 | $0.22-0.27$ | $0.14-0.14$ | $0.59-0.64$ | $0.21-0.25$ | $0.40-0.42$ | $0.35-0.37$ 3 | $0.66-0.81$ | $0.05-0.11$ | $0.14-0.24$ | $0.53-0.75$ | $0.15-0.29$ | $0.10-0.18$ 4 | $0.42-0.73$ | $0.04-0.07$ | $0.23-0.50$ | $0.50-0.81$ | $0.11-0.27$ | $0.08-0.22$ 8 | $0.94-0.95$ | $0.04-0.04$ | $0.01-0.02$ | $0.96-0.97$ | $0.03-0.04$ | $0.00-0.00$ Table 3: For the GALAXEV and GALAXEV-C models with $M_{i}=10^{5}$ M⊙, ranges of the contributions to the total luminosities in the V and I bands of the stellar continuum (SteCon), nebular continuum (NebCon), and the strongest emission lines in the band (EmLin, see Figure 4). ### 3.4 Stochastic versus deterministic models. In Figure 2, the filled squares represent the deterministic magnitudes for the cases with just stars (left square) and stars + gas + dust (right square) at each value of $M_{i}$. How the deterministic magnitude compares to the median of the stochastic models depends on whether the models include just stars or stars + gas + dust, and on the combination of $M_{i}$, age, $Z$, log(US), and photometric band. Differences of less than 0.5 mag between the median and deterministic magnitude can be seen in the Figure for combinations of the parameters in the following cases: i) {stars} or {stars + gas + dust}, $M_{i}/M_{\odot}=10^{4}$ or $10^{5}$, $Z=0.014$, and log(U${}_{\rm S}=-3$); and ii) {stars}, $M_{i}/M_{\odot}=10^{3}$, $t=1$ Myr, both values of $Z$, and both values of log(US). Examples of cases where $|$Det - <Sto>$|$> 0.5 mag are provided in Table 4. Note that $|$Det - <Sto>$|$ can be > 0.5 mag for $M_{i}/M_{\odot}$ as high as $10^{5}$, and that although the deterministic magnitude tends to be more luminous, this is not systematically the case. Gas? 
| Mi | Age | Panels | Most ---|---|---|---|--- | (M⊙) | (Myr) | | Luminous No | 1E3 - 1E5 | 1 | none | - No | 1E3 - 1E5 | 3 | 5 & 10 | Det No | 1E4 | 4 | 15 & 20 | Det No | 1E3 | 4 | 4, 5, 9 & 10 | Det No | 1E3 | 8 | 15 & 20 | Det Yes | 1E3 | 1, 3, 4 | 12 | <Sto> Yes | 1E3 | 8 | 4 & 5 | Det Yes | 1E4 | 3 | 5 & 10 | Det Table 4: Examples of cases where the absolute value of the residual (deterministic mag - median stochastic mag) is > 0.5 mag. We consider the GALAXEV and GALAXEV-C models. Column 1 indicates if gas is included in the models. Columns 2 and 3 give the initial mass and age of the cluster, respectively (or their ranges). Column 4 gives the IDs of the panels in Figure 2 where examples can be found. Column 5 says which of the two magnitudes is the more luminous (Det=deterministic, <Sto>=median stochastic mag). In summary, for star-only broad-band magnitudes, the absolute value of the residual (deterministic prediction - median of stochastic models) can be $\geq 0.5$ mag, even for $M_{i}=10^{5}$ $M_{\odot}$. ### 3.5 Models in the U - B versus V - I diagram We now analyse the positions of the models in the U - B versus V - I diagram, which has been used to age-date star clusters by Chandar et al. (2004, 2016). Although Chandar et al. (2016) conclude that using UBVIH$\alpha$ photometry yields better agreement between photometric and spectroscopic ages, determining accurate H$\alpha$ photometry is very difficult in practice, particularly when star clusters are not isolated and the H$\alpha$ morphology is complex, as we discuss in Section 6.4. We note that other diagrams, such as (U - B) versus (B - V), which is not discussed here, have also been used to age-date star clusters (e.g., Bica et al. 1991). Let D0 and S0 be the deterministic and stochastic GALAXEV models, respectively. Similarly, let (D2, S2) and (D3, S3) be the GALAXEV-C pairs for log(US)=-2 and -3, respectively. For $Z=0.014$, which is the closest metallicity to the observations (see Section 4), Figures 5 \- 7 show comparisons of the D0 and S0, D3 and S3, and D2 and S2 predictions in the U - B vs. V - I diagram, respectively. Appendix Figures 14 \- 16 are similar to Figures 5 \- 7 but for $Z=0.002$. Let us start by discussing Figure 5, which corresponds to the $Z=0.014$ models that only account for the stars. In this and similar figures, the age and initial mass of the cluster are given by the y-axis label and the column title, respectively. In Figure 5, the clouds of magenta filled symbols are the stochastic S0 models and the magenta curves are the corresponding deterministic track. The comparison between the S0 and D0 predictions yields the following results, which are organised in order of increasing cluster age and decreasing cluster mass. 1 Myr (top row of panels).– For $M_{i}/M_{\odot}=10^{5}$, the S0 models are tightly grouped on top of the 1 Myr D0 point that is located at the intersection of the vertical and horizontal dashed lines. As mass decreases, the spread of the S0 models increases. In particular, for $M_{i}/M_{\odot}=10^{3}$, the spread is mostly in the vertical direction and towards the red. 3 Myr (second row of panels from the top).– For $M_{i}/M_{\odot}=10^{5}$, the S0 models break into three clouds, one tightly grouped on top of the 3 Myr D0 point, one bluer and closer to the 1 Myr D0 point, and the last one redder and closer to the 4 Myr D0 point. For $M_{i}/M_{\odot}=10^{4}$, a small fraction of the S0 models are located far away from the 3 Myr D0 point, towards the red, and approach the 100 Myr D0 point.
Finally, for $M_{i}/M_{\odot}=10^{4}$ and $M_{i}/M_{\odot}=10^{3}$, most of the S0 models are bluer than the 3 Myr D0 point. 4 Myr (third row of panels from the top).– For $M_{i}/M_{\odot}=10^{5}$, the spread in S0 models is larger than at 1 Myr, and it is both towards the red and the blue. For $M_{i}/M_{\odot}=10^{4}$, a larger but still small fraction of S0 models is found significantly offset to the red. For $M_{i}/M_{\odot}=10^{3}$, a small fraction of S0 models is significantly redder than the 4 Myr D0 point and is located between the 100 Myr and 1 Gyr markers along the deterministic track (however, the main S0 cloud is spread closer to the 1-4 Myr D0 predictions). 8 Myr (bottom row of panels).– For $M_{i}/M_{\odot}=10^{5}$, the spread is larger than at 1 Myr, and it is both towards the red and the blue, similar to the behaviours at 3 and 4 Myr, but the spread is mostly horizontal. For $M_{i}/M_{\odot}=10^{4}$, the S0 models are spread almost horizontally towards the blue and the red, and a few S0 models are significantly bluer than the D0 prediction. For $M_{i}/M_{\odot}=10^{3}$, the S0 models are divided into two clouds, one bluer in V - I and close to the 3 - 4 Myr D0 predictions, and another starting at the D0 point and spreading towards significantly redder colours. Figure 5: U-B vs. V-I diagrams with $Z=0.014$ star-only GALAXEV deterministic tracks (magenta solid curves) and stochastic models (magenta small filled symbols) overlaid. Ages along the D0 track are marked with orange-filled triangles and labelled using M=Myr and G=Gyr. The dashed vertical and horizontal lines mark the position of the deterministic prediction at the age given by the y-axis label. For clarity, for the stochastic models, each panel shows a different combination of cluster initial mass (given by the column title) and age (given by the y-axis label). Let us now discuss Figures 6 and 7, i.e., the $Z=0.014$ models with gas. The S2 and S3 models yield some of the same trends that are observed in the star-only case but also some additional behaviours. At 1 Myr, the S3 and S2 models are bluer in V - I relative to their respective deterministic predictions. At 3 and 4 Myr, for $M_{i}/M_{\odot}>10^{5}$, a significant fraction of the S2 and S3 models are offset towards bluer U - B relative to the corresponding deterministic models. Finally, a major conclusion from the figures is that the $Z=0.014$ D2 and D3 tracks are not useful for age-dating clusters with $M_{i}/M_{\odot}\sim 10^{3}$. This is because at 8 Myr, for this mass, the number of stochastic models is similar in the $\sim$4 Myr cloud and the 8 Myr cloud. Figure 6: Similar to Figure 5 but for log(US)=-3 deterministic (dark blue curves) and stochastic (dark blue symbols) models that include the contributions of the ionized gas + dust mixed with the ionized gas. Figure 7: Similar to Figure 6 but for log(US)=-2. We proceed with the analysis of the $Z=0.002$ models in the U - B vs. V - I plane. By looking at the star-only tracks of Figure 14, one can see that the 4 Myr D0 marker is almost coincident with the 100 Myr marker and that the 8 Myr marker is between the 1 and 3 Myr markers. When gas is included (Figures 15 and 16), the 3 and 8 Myr deterministic markers are close to each other, and they are almost coincident for log(US)=-3. On the other hand, the 4 and 100 Myr deterministic markers become more separated than in the star-only case. In conclusion, age-dating star clusters with $Z=0.002$ using their positions in the U - B vs.
V - I diagram along the GALAXEV-C deterministic tracks is highly uncertain at any mass. ## 4 Sample and Observations In this work, we use HST images of two partially overlapping fields of galaxy NGC 7793, NGC 7793-E and NGC 7793-W. The images were obtained as part of the LEGUS and H$\alpha$ LEGUS programs. We fit models to the LEGUS broad-band photometry and investigate how close the H$\alpha$ equivalent widths of the clusters, which were measured by Hannon et al. from the H$\alpha$-LEGUS images, are to the values predicted by our best-fitting models. In this section, we describe the observations, the galaxy, and how we selected the sample of star clusters. ### 4.1 HST LEGUS and H$\alpha$ LEGUS LEGUS (Calzetti et al., 2015, PID 13364) is a Cycle 21 HST treasury program that obtained high spatial resolution ($\sim 0.07"$) images of portions of 50 nearby ($\leq 16$ Mpc) galaxies, using the UVIS channel of the Wide Field Camera 3 (WFC3) and the broad band filters F275W (2704 Å), F336W (3355 Å), F438W (4325 Å), F555W (5308 Å), and F814W (8024 Å), which roughly correspond to the photometric bands NUV, U, B, V, and I, respectively. The survey includes galaxies of different morphological types and spans a factor of $\sim 10^{3}$ in both star formation rate (SFR) and specific star formation rate (sSFR), $\sim 10^{4}$ in stellar mass ($\sim 10^{7}-10^{11}\,\mathrm{M_{\odot}}$), and $\sim 10^{2}$ in oxygen abundance ($12+\mathrm{log\,O/H}=7.2-9.2$). Some of the targets in the survey have high quality archival images in bandpasses similar to those required by LEGUS, most of them from the Wide Field Channel of HST’s Advanced Camera for Surveys (ACS), and fewer of them from ACS’s High Resolution Channel (HRC). For the latter targets, LEGUS completed the five-band coverage. The choice of filters was dictated by the desire to distinguish young massive bright stars from faint star clusters, to derive accurate star formation histories for the stars in the field from their CMDs, and to obtain extinction-corrected estimates of age and mass for the star clusters. Star and star-cluster catalogues have been released for the LEGUS sample and are described in Sabbi et al. (2018) and Adamo et al. (2017) (hereafter A17), respectively. H$\alpha$ LEGUS (PI Chandar, PID 13773) is a Cycle 22 HST program that obtained narrow-band H$\alpha$ (F657N) and medium-band continuum (F547M) images for the 25 LEGUS galaxies with the highest star formation rates, using the WFC3. The corresponding H$\alpha$ observations reveal thousands of previously undetected H ii regions, including those ionized by stellar clusters and "single" massive stars. We note that the LEGUS data do not have the spatial resolution to visually resolve massive stars in close binary systems. ### 4.2 NGC 7793 We used the observations of NGC 7793 obtained by LEGUS and H$\alpha$ LEGUS, which are summarised in Table 2 of Wofford et al. (2020). NGC 7793 is a Southern SAd flocculent spiral galaxy that is part of the Sculptor group and is located at a Cepheid distance of 3.44 Mpc (Pietrzyński et al., 2010).
It has a small bulge and a spiral filamentary morphology, and the following additional properties: an inclination of 47°; a colour excess due to the Galaxy of E(B - V) = 0.017 mag (Schlafly & Finkbeiner, 2011); a stellar mass, determined from the extinction-corrected B-band luminosity and colour information, of M∗ = $3\times 10^{9}$ M⊙ (Bothwell et al., 2009); and, lastly, a galaxy-wide star formation rate, calculated from dust-attenuation-corrected GALEX far-UV emission adopting a distance of 3.44 Mpc, of SFR = 0.52 M⊙ yr${}^{-1}$ (Lee et al., 2009). According to Pilyugin et al. (2014), the ionized-gas oxygen abundance at the centre of the galaxy is 12 + log(O/H)=$8.50\pm 0.02$, and the O/H gradient is $-0.066\pm 0.0104$ dex kpc${}^{-1}$. ### 4.3 Sample of star clusters We select a sample of 17 isolated, low-mass ($<10^{4}$ $M_{\odot}$), young ($<10$ Myr) star clusters from the catalogue of clusters with compact H$\alpha$ morphologies of H19. H19 use the LEGUS datasets, which are aligned to the F438W image, and the LEGUS photometry, which uses an aperture with a radius of 5 pixels or 0.2” (3 pc), selected based on a curve-of-growth analysis. For H$\alpha$, H19 extracted their own photometry using apertures with different radii for each cluster, also based on a curve-of-growth analysis (see H21, in prep., for more details). The masses and ages used for the selection come from deterministic models, i.e., models where the luminosity is scaled in proportion to the initial cluster mass. Figure 8 shows the location of the star clusters within the galaxy. The figure shows that the star clusters are located within a radius of $\sim 3$ arcmin from the centre of the galaxy. Adopting Z${}_{\odot}=0.014$ as the reference solar metallicity (Asplund et al., 2009) and using O/H as a gauge of metallicity, we find that the metallicity range of the clusters in our sample is Z=0.006 to 0.009. Thus, their metallicity is close to Z⊙. Table 5 lists the J2000 coordinates and apparent Vega magnitudes of the star clusters in the LEGUS and H$\alpha$ LEGUS bands. Note that clusters 93 and 383, and 417 and 1252, have different IDs but very similar coordinates and photometry. This is because the clusters are the same, but their measurements come from field NGC 7793E in one case and NGC 7793W in the other. The clusters are in the overlapping region between these two fields. This constitutes a good check on the repeatability of the cluster-finding process. When comparing models to observations, we keep the repeated clusters in order to check if small differences in the photometry due to the different pointings affect the derived properties of the clusters. Figure 9 shows postage stamps of the clusters in our sample with the 5 pixel aperture overlaid. Figure 8: Composite RGB images of the partially-overlapping fields NGC 7793E (left panel) and NGC 7793W (right panel), where red = continuum-subtracted H$\alpha$, green = F555W, and blue = F438W. We use white circles to indicate the positions of the star clusters in our sample and give their LEGUS ID. In both images, the centre of the galaxy appears as a cyan knot. We overlay cyan circles centered on this knot, of radii equal to 0, 1, 2, and 3 arcmin. The value of 12+log(O/H) in the ionized gas at each of these radii is shown with cyan characters. At 3.44 Mpc (Pietrzyński et al., 2010), 1 arcmin is $\sim$ 1 kpc. Note that clusters 93 and 383, and 417 and 1252 have similar coordinates and photometry but have different IDs (see Section 4.3 for a discussion of these clusters).
Also note that cluster 1576 appears in both fields and is at the edge of the H$\alpha$ image for field NGC 7793W.

Figure 9: Postage stamps of the star clusters in our sample using the same colour scheme as in Figure 8. The images are 12" (200 pc) on a side. The small white circle represents the aperture used for obtaining the photometry in the five LEGUS broad-bands. Star clusters from fields NGC 7793-E and NGC 7793-W are shown in the top two and bottom two rows, respectively. Note that clusters 93 and 383, and 417 and 1252 have similar coordinates and photometry but have different IDs (see Section 4.3 for a discussion of these clusters). Also note that cluster 1576 appears in both fields and is at the edge of the H$\alpha$ image in field NGC 7793W.

ID | RA (J2000) | Dec (J2000) | F275W (mag) | F336W (mag) | F438W (mag) | F547M (mag) | F555W (mag) | F657N (mag) | F814W (mag) | Rad (pix)
---|---|---|---|---|---|---|---|---|---|---
0011-E | 23:57:54.2424 | -32:33:59.148 | 20.97$\pm$0.09 | 20.81$\pm$0.09 | 22.16$\pm$0.08 | 20.70 | 21.07$\pm$0.07 | 17.39 | 20.71$\pm$0.06 | 30
0090-E | 23:57:48.3768 | -32:34:33.672 | 18.80$\pm$0.09 | 19.32$\pm$0.08 | 20.82$\pm$0.07 | 21.06 | 20.99$\pm$0.05 | 19.80 | 21.06$\pm$0.05 | 10
0093-E* | 23:57:45.6384 | -32:34:33.888 | 19.03$\pm$0.09 | 19.50$\pm$0.08 | 20.92$\pm$0.07 | 20.96 | 20.96$\pm$0.05 | 17.70 | 20.97$\pm$0.06 | 30
0383-W* | 23:57:45.6384 | -32:34:33.852 | 18.95$\pm$0.07 | 19.40$\pm$0.07 | 20.88$\pm$0.06 | 21.06 | 20.97$\pm$0.05 | 17.62 | 21.06$\pm$0.07 | 40
0417-E* | 23:57:49.0584 | -32:35:23.028 | 19.70$\pm$0.09 | 19.88$\pm$0.08 | 21.38$\pm$0.07 | 20.59 | 21.01$\pm$0.05 | 18.81 | 20.59$\pm$0.07 | 10
0451-W | 23:57:35.1792 | -32:34:38.820 | 22.44$\pm$0.14 | 21.88$\pm$0.09 | 23.21$\pm$0.09 | 21.81 | 21.95$\pm$0.06 | 19.71 | 21.81$\pm$0.07 | 10
0531-E | 23:57:46.2240 | -32:35:33.036 | 18.86$\pm$0.09 | 19.30$\pm$0.08 | 20.76$\pm$0.07 | 20.75 | 20.86$\pm$0.05 | 17.26 | 20.76$\pm$0.06 | 50
0534-E | 23:57:47.1456 | -32:35:33.144 | 19.81$\pm$0.09 | 20.17$\pm$0.08 | 21.56$\pm$0.07 | 20.97 | 21.55$\pm$0.05 | 17.84 | 20.98$\pm$0.06 | 40
0589-E | 23:57:56.1552 | -32:35:39.804 | 19.77$\pm$0.09 | 19.90$\pm$0.08 | 21.16$\pm$0.07 | 20.39 | 20.93$\pm$0.06 | 17.49 | 20.39$\pm$0.05 | 30
0816-E | 23:57:48.2376 | -32:36:14.796 | 17.33$\pm$0.09 | 17.69$\pm$0.08 | 19.16$\pm$0.07 | 18.94 | 18.96$\pm$0.05 | 16.58 | 18.94$\pm$0.05 | 50
0894-E | 23:57:51.2400 | -32:36:48.456 | 20.57$\pm$0.09 | 20.54$\pm$0.08 | 21.93$\pm$0.07 | 20.64 | 20.81$\pm$0.05 | 16.96 | 20.65$\pm$0.05 | 30
1252-W* | 23:57:49.0656 | -32:35:23.028 | 19.69$\pm$0.07 | 19.91$\pm$0.07 | 21.43$\pm$0.07 | 20.70 | 20.96$\pm$0.05 | 18.83 | 20.71$\pm$0.07 | 10
1381-W | 23:57:40.4448 | -32:35:27.888 | 21.77$\pm$0.09 | 21.79$\pm$0.08 | 23.27$\pm$0.09 | 22.49 | 22.02$\pm$0.06 | 20.15 | 22.50$\pm$0.07 | 10
1564-W | 23:57:41.4264 | -32:35:34.368 | 20.10$\pm$0.07 | 20.18$\pm$0.07 | 21.53$\pm$0.07 | 20.42 | 20.80$\pm$0.05 | 18.84 | 20.43$\pm$0.06 | 10
1576-W | 23:57:48.7728 | -32:35:34.656 | 18.73$\pm$0.07 | 19.24$\pm$0.06 | 20.80$\pm$0.06 | 20.74 | 20.73$\pm$0.05 | 18.85 | 20.75$\pm$0.07 | 10
1949-W | 23:57:40.0344 | -32:35:47.436 | 18.93$\pm$0.07 | 19.38$\pm$0.06 | 20.97$\pm$0.06 | 20.96 | 20.71$\pm$0.05 | 19.22 | 20.96$\pm$0.06 | 10
2449-W | 23:57:38.6064 | -32:36:12.312 | 19.67$\pm$0.07 | 19.84$\pm$0.07 | 21.19$\pm$0.07 | 20.47 | 20.89$\pm$0.06 | 17.08 | 20.47$\pm$0.06 | 50
2732-W | 23:57:40.3200 | -32:36:41.328 | 19.32$\pm$0.07 | 19.61$\pm$0.07 | 21.06$\pm$0.06 | 20.68 | 20.59$\pm$0.06 | 18.80 | 20.69$\pm$0.06 | 10
2740-W | 23:57:40.1136 | -32:36:43.452 | 20.39$\pm$0.08 | 20.78$\pm$0.08 | 22.10$\pm$0.07 | 21.77 | 21.83$\pm$0.06 | 19.15 | 21.77$\pm$0.06 | 10

Table 5: Column (1): ID of star cluster and field (NGC 7793-E or -W). Columns (2)-(3): Right Ascension and Declination. Columns (4)-(10): Apparent magnitudes from A17 (LEGUS bands) and H19 (H$\alpha$-LEGUS bands), based on PSF photometry. We use Vega and AB magnitudes for the LEGUS and H$\alpha$-LEGUS bands, respectively. The photometry is corrected for foreground extinction as explained in the text. Column (11): Radius in pixels used for the H$\alpha$ photometry. Note that clusters 93 and 383, and 417 and 1252, which are marked with an asterisk in the first column, have similar coordinates and photometry but have different IDs (see Section 4.3 for a discussion of these clusters).

Figure 10 combines Figures 5, 6, and 7 into one. This helps to see where the $Z=0.014$ S0, S2, and S3 models fall relative to the corresponding D0, D2, and D3 predictions. The corresponding figure for $Z=0.002$ is Figure 17. In addition, the panels in the right column of Figure 10 include the LEGUS observations. The red error bars represent the observations corrected for reddening due to dust in the MW (using Av=0.053 mag) and uncorrected for intrinsic reddening, while the black error bars are the observations also corrected for dust in NGC 7793, using the Av values of column 5 in Table 8. The black error bars include the propagation of the uncertainties in the intrinsic Av values that are given in column 5 of Table 8. For both the foreground and intrinsic reddening corrections, we use the MW-extinction law of Cardelli et al. (1989) for R(V)=3.1. As expected, after the full correction for reddening, the observations move towards bluer colours and the size of the error bars increases. Note that none of our observations are found near the location of the “outlier” stochastic models, which are the few models located far from their corresponding deterministic prediction. This is expected since our observed sample is small and, according to the stochastic models, “outlier” clusters have a low probability of being created in nature and of being observed. Several works of the LEGUS collaboration use the deterministic Yggdrasil tracks of Zackrisson et al. (2011). The bottom-left panel of Figure 10 shows the Yggdrasil track corresponding to log(U)=-3 and $Z=0.020$ (dashed-black curve with ages marked using black-filled triangles). $Z=0.020$ is the closest available track to $Z=0.014$. The Yggdrasil track for log(U)=-3 and $Z=0.004$ is shown in Figure 17. Note that along the $Z=0.004$ Yggdrasil track, the 8 Myr marker is redder in U - B than the 4 Myr marker, contrary to what happens in the GALAXEV-C $Z=0.002$ track.

Figure 10: Combination of Figures 5-7, as indicated by the legend on the top-left panel. Some ages along the log(Us)=-3 (dark-blue) track are marked with orange-filled triangles and labelled using M=Myr and G=Gyr. The bottom-left panel shows the Yggdrasil track corresponding to log(U)=-3 and $Z=0.020$ (dashed-black curve with black-filled triangles). The last column of panels includes: i) LEGUS observations corrected for reddening due to dust in the MW, shown both uncorrected (red error bars) and corrected (black error bars) for dust in NGC 7793, as given by the legend in the top-right panel; and ii) a reddening vector corresponding to an extinction of AV=1 mag.
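For reference, the metallicity range quoted in Section 4.3 can be reproduced from the Pilyugin et al. (2014) abundances with a short script. The solar calibration 12 + log(O/H)⊙ = 8.69 of Asplund et al. (2009) and the linear scaling of Z with O/H are our assumptions here, since the exact conversion is not spelled out above:

```python
import numpy as np

# Central abundance and radial gradient from Pilyugin et al. (2014).
logOH_center = 8.50           # 12 + log(O/H) at the galaxy centre
gradient = -0.066             # dex / kpc
r_kpc = np.array([0.0, 3.0])  # clusters lie within ~3 arcmin, i.e. ~3 kpc

# Assumed solar calibration (Asplund et al. 2009): 12 + log(O/H) = 8.69,
# Z_sun = 0.014; Z is taken to scale linearly with O/H.
logOH = logOH_center + gradient * r_kpc
Z = 0.014 * 10.0 ** (logOH - 8.69)

print(Z.round(3))  # ~ [0.009, 0.006], matching the quoted range Z = 0.006-0.009
```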
## 5 Method for interpreting the observations

In this section, we use our models in order to derive the dust extinctions, masses, and ages of the star clusters in our sample. The derived properties are the physical parameters of the models that best fit the observations. In order to find the model that best fits the observations, one can use $\chi^{2}$ minimization (Popescu, 2010); construct probability maps in the parameter space to explore the nodes of the grid and then select the more probable solutions (Fouesneau & Lançon, 2010); or use a Bayesian inference method (Krumholz et al., 2015a; Wofford et al., 2016; Fouesneau & Lancon, 2010). We use the last of these methods, which is explained below. In order to find the probability distribution function of the physical properties, we use the method of conditional regression, coupled with a kernel density estimation, which is presented in Krumholz et al. (2015a). In summary, if we let $p(x|y_{obs};\sigma_{y})$ be the probability distribution of the physical parameters, $x$, given $N$ photometric observations, $y_{obs}$, with error $\sigma_{y}$, then the probability distribution of the physical properties, given a set of photometric observations, can be written as:

$p(x|y_{obs};\sigma_{y})\equiv\sum_{i}\omega_{i}\,G\left((x-x_{i},\,y_{obs}-y_{i}),\,h^{\prime}\right),$ (1)

where $h^{\prime}$ is a new bandwidth that depends on both the bandwidth of the physical properties of the models, $h_{x}$, and that of the photometric properties, $h_{y}$. This also yields an expression for the marginal probability distribution of each physical parameter, which we will call $x_{1}$:

$p(x_{1}|y_{obs})\propto\sum_{i}\omega_{i}\,G(x_{1}-x_{1,i},\,h_{1})\,G\left(y_{obs}-y_{i},\,\sqrt{\sigma_{y}^{2}+h_{y}^{2}}\right).$ (2)

This procedure can be followed for each of the physical parameters. In order to derive the physical properties of the clusters in our sample, we adapted our pilot library, based on the models presented in Vidal-García et al. (in prep.), to the tool BAYESPHOT, which uses Bayesian inference to estimate joint and marginal PDFs following the approach just described. In addition to the synthetic photometry, BAYESPHOT requires additional output from population synthesis, such as the time step, the birth mass of the SSP, the current mass of all stars in the cluster (accounting for the effects of mass loss and supernovae), the number of living stars in the cluster at the present time, and the visual extinction, to name a few. In this process, we used the python module SLUGPY, which is presented in Krumholz et al. (2015a). This module is a series of functions for handling spectro-photometric data generated with the population synthesis code SLUG. Since NGC 7793 is a spiral galaxy and the clusters have solar metallicity, in order to obtain the V-band extinction due to dust mixed with the neutral gas, we adopted the Milky Way law of Mathis (1990), with $R(V)=3.1$.

## 6 Results

In this section, we test our models using the observations presented in Section 4.

### 6.1 S3 models versus LEGUS photometry

In order to illustrate how well one can fit the LEGUS observations with our models, which are only available for three values of the initial mass, we use the S3 models, which have Z = 0.014 and log(US) = -3. For each cluster in our sample, Figure 11 shows the comparison of the best-fitting S3 models (black-dashed curves) and the observations. In order to find the best-fitting model, we use equation (29) from Krumholz et al. (2015a); a schematic sketch of the underlying kernel-density estimator is given below.
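To make the estimator of equations (1) and (2) concrete, here is a minimal sketch that evaluates the marginal posterior of a single physical parameter from a library of models. The uniform weights, diagonal Gaussian kernel, and single photometric bandwidth are simplifying assumptions rather than the actual BAYESPHOT implementation (Krumholz et al., 2015a):

```python
import numpy as np

def gauss(d2, h2):
    """Isotropic Gaussian kernel given squared distance d2 and bandwidth^2 h2."""
    return np.exp(-0.5 * d2 / h2)

def marginal_posterior(x1_grid, x1_lib, y_lib, y_obs, sigma_y, h1, hy, w=None):
    """Equation (2): p(x1|y_obs) ~ sum_i w_i G(x1 - x1_i, h1)
                                          G(y_obs - y_i, sqrt(sigma_y^2 + hy^2)).

    x1_lib : (N,)  physical parameter of each library model
    y_lib  : (N,B) photometry of each library model (B bands)
    """
    if w is None:
        w = np.ones(len(x1_lib))
    # Photometric kernel, broadened by the per-band observational errors.
    d2_y = np.sum((y_obs - y_lib) ** 2 / (sigma_y ** 2 + hy ** 2), axis=1)
    ky = np.exp(-0.5 * d2_y)
    # Physical-parameter kernel, evaluated on a grid of x1 values.
    kx = gauss((x1_grid[:, None] - x1_lib[None, :]) ** 2, h1 ** 2)
    p = kx @ (w * ky)
    return p / np.trapz(p, x1_grid)  # normalised marginal PDF
```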
Figure 11 shows the observations (apparent Vega magnitudes) with and without a correction for dust intrinsic to NGC 7793. The red error bars are the observations corrected for reddening due to dust in the Milky Way (using Av=0.053 mag), while the blue error bars include the additional correction for dust in NGC 7793 (using the median V-band extinctions of column 5 in Table 8). For both corrections we use the Milky Way extinction law of Cardelli et al. (1989) for R(V)=3.1 (a schematic sketch of this correction follows Figure 11). Since in the figure the models are uncorrected for reddening in the neutral gas, they should be compared to the red error bars. In Figure 11, the panels are arranged in order of increasing AV value. This is why in the right-side panels there is a larger offset between the red and blue curves. As can be seen in the figure, the F275W magnitude is more affected by the reddening correction than the F814W magnitude, which is in agreement with the shape of the Milky Way extinction law. Note that there are differences in the $A_{\rm V}$ values of 0.08 and 0.61 mag between the clusters that are repeated and discussed above, i.e., 93 and 383; and 417 and 1252, respectively. For each cluster, Table 6 gives the residual (observation - best-fitting model) in each LEGUS band. For the clusters in our sample, the median residual in each LEGUS band is within the observational error reported in Table 5. The observations of cluster 1381-W are not well reproduced in the bands redder than U. This is likely because it is the least massive cluster in our sample according to the K15 stochastic models presented in Table 9. Its mass is 219 $M_{\odot}$, while the minimum mass in our models is M=$10^{3}$ $M_{\odot}$. Thanks to the fine sampling in V-band extinction values, we find models that fit the observations reasonably well in spite of the coarse sampling in cluster mass and age of our pilot library.

Figure 11: Observed apparent Vega magnitudes (red error bars, corrected for reddening in the Milky Way) versus best-fitting model with $Z=0.014$ and log($U_{\rm S}$)$=-3$ (black dashed lines). For comparison, we also show the observations corrected for reddening in the Milky Way and NGC 7793 (blue error bars). The panels are arranged in order of increasing AV value, which is why for the panels on the right, a larger offset between the red and blue curves can be observed. Note how the bluest magnitude (F275W) is more affected by the reddening correction compared to the reddest magnitude (F814W), which is expected from the shape of the extinction law. As discussed in Section 4.3, cluster 93 is a copy of 383, and cluster 417 is a copy of 1252. We mark the IDs of these clusters with asterisks. The clusters have different AV values due to their slightly different observed magnitudes (see Table 5).
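For reference, the reddening corrections described above can be reproduced in a few lines. The use of the community `extinction` package (which implements the Cardelli et al. 1989 law via its `ccm89` function) and the approximate pivot wavelengths are our assumptions, since the text only specifies the law and R(V)=3.1:

```python
import numpy as np
import extinction  # community package implementing Cardelli et al. (1989)

# Approximate pivot wavelengths (Angstrom) of the LEGUS broad bands,
# illustrative values only (exact pivots come from the instrument handbooks).
waves = np.array([2710.0, 3355.0, 4325.0, 5308.0, 8024.0])  # F275W ... F814W

def deredden(mags, a_v, r_v=3.1):
    """Remove A(lambda) from observed magnitudes using the CCM89 law."""
    a_lambda = extinction.ccm89(waves, a_v, r_v)
    return mags - a_lambda

m_obs = np.array([20.97, 20.81, 22.16, 21.07, 20.71])  # cluster 0011-E (Table 5)
m_mw = deredden(m_obs, a_v=0.053)   # foreground (Milky Way) correction -> red bars
m_int = deredden(m_mw, a_v=1.32)    # intrinsic correction, S3 AV of Table 8 -> blue bars
```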
Residual of observed - model magnitude:

ID | F275W | F336W | F438W | F555W | F814W
---|---|---|---|---|---
(1) | (2) | (3) | (4) | (5) | (6)
0011-E | -0.05 | 0.07 | 0.08 | -0.14 | 0.07
0090-E | -0.06 | -0.02 | 0.02 | 0.07 | -0.04
0093-E | -0.04 | 0.01 | 0.04 | 0.02 | -0.07
0383-W | 0.01 | -0.00 | 0.03 | 0.03 | -0.07
0417-E | -0.09 | 0.05 | 0.00 | 0.04 | -0.04
0451-W | 0.26 | 0.05 | 0.18 | -0.32 | 0.29
0531-E | -0.07 | 0.09 | -0.04 | 0.03 | -0.01
0534-E | -0.18 | 0.08 | 0.02 | 0.13 | -0.13
0589-E | -0.12 | 0.09 | 0.01 | 0.01 | -0.02
0816-E | -0.05 | 0.04 | 0.08 | -0.08 | 0.01
0894-E | -0.04 | 0.13 | 0.12 | -0.17 | 0.12
1252-W | -0.09 | 0.05 | 0.02 | 0.01 | -0.03
1381-W | 0.08 | 0.08 | 0.52 | -0.51 | 0.31
1564-W | -0.07 | 0.11 | -0.02 | -0.03 | -0.05
1576-W | -0.11 | 0.05 | -0.02 | -0.05 | -0.01
1949-W | -0.14 | 0.01 | -0.04 | -0.11 | 0.14
2449-W | -0.05 | 0.11 | -0.04 | -0.04 | -0.03
2732-W | 0.05 | 0.11 | -0.05 | -0.13 | -0.01
2740-W | -0.09 | 0.05 | 0.15 | -0.06 | -0.02
RANGE | -0.18 - 0.26 | -0.02 - 0.13 | -0.05 - 0.52 | -0.51 - 0.13 | -0.13 - 0.31
MEDIAN | -0.04 | 0.06 | 0.06 | -0.07 | 0.02

Table 6: Residuals in each photometric band. Column (1) shows the observed cluster ID. Columns (2) - (6) show the difference between the observed magnitudes of the clusters in NGC 7793 and the best-fitting models corresponding to $Z=0.014$, log(Us) = -3, and cluster mass = $10^{3}$ $M_{\odot}$. For each LEGUS filter, the last two rows give the range and median of the values in the column.

### 6.2 Equivalent width of H$\alpha$

The F657N filter includes the H$\alpha$ and [N ii] lines. We compare the equivalent width of the combined H$\alpha$ + [N ii] emission from the best-fitting S2 and S3 models with $Z=0.014$ against the value measured by Hannon et al. (in prep., hereafter H21) using the H$\alpha$-LEGUS observations. In order to capture all of the H$\alpha$ emission due to ionisation by the cluster, the size of the aperture used by H21 to obtain EW(H$\alpha$ + [N ii]) was selected based on the curve of growth of each cluster. The radius of the aperture is provided in the last column of Table 5. We present the observed and best-fit model EWs in Table 7. We find that for two clusters (816-E and 1564-W) the S2 models are within $100$ Å of the observed value, while the S3 models are within $100$ Å of the observed value in only one case (531-E). Note that the observed EW(H$\alpha$ + [N ii]) value is highly uncertain. We also find that for the S2 models, the mean of EW(H$\alpha$ + [N ii]) is a factor of $\sim 4$ lower than the mean of the observations, and that the mean of the S3 models is closer to the mean of the observations. Finally, we find that, in general, the youngest clusters have the largest model value of EW(H$\alpha$ + [N ii]) (see Table 10).
ID | EW(H$\alpha$) H21 (Å) | EW(H$\alpha$) S2 (Å) | EW(H$\alpha$) S3 (Å) | $\Delta$EW (3)-(2) (Å) | $\Delta$EW (4)-(2) (Å)
---|---|---|---|---|---
(1) | (2) | (3) | (4) | (5) | (6)
0011-E | 3038 | 1919 | 2097 | -1119 | -841
0090-E | 1489 | 171 | 235 | -1318 | -1254
0093-E | 1232 | 128 | 1987 | -1104 | 755
0383-W | 1043 | 128 | 325 | -915 | -718
0417-E | 1640 | 102 | 2365 | -1538 | 725
0451-W | 3288 | 102 | 2299 | -3186 | -989
0531-E | 364 | 54 | 293 | -310 | -71
0534-E | 444 | 143 | 2080 | -301 | 1636
0589-E | 2423 | 41 | 2080 | -2382 | -415
0816-E | 1988 | 1919 | 803 | -69 | -1185
0894-E | 2043 | 1919 | 1913 | -124 | -130
1252-W | 1823 | 41 | 3121 | -1782 | 1298
1381-W | 789 | 63 | 14 | -726 | -775
1564-W | 1197 | 1290 | 3852 | 93 | 2655
1576-W | 2472 | 91 | 2177 | -2381 | -295
1949-W | 1053 | 41 | 2048 | -1012 | 995
2449-W | 358 | 80 | 3187 | -278 | 2829
2732-W | 796 | 41 | 577 | -755 | -219
2740-W | 826 | 153 | 325 | -673 | -501
RANGE | 358 - 3288 | 41 - 1919 | 14 - 3852 | -3186 - 93 | -1254 - 2829
MEAN | 1489 | 387 | 1674 | -1046 | -184

Table 7: Observed versus predicted H$\alpha$ equivalent widths. Column (1) - Cluster ID. Column (2) - Observed EW(H$\alpha$) from H21. Columns (3) and (4) - Value from the best-fitting S2 and S3 models, respectively ($Z=0.014$). Columns (5) and (6) - Differences between the values in the columns indicated in the header of the table.

### 6.3 Physical properties of the star clusters

We use the Bayesian inference tool BAYESPHOT and the formalism presented in Section 5 in order to determine the extinction, mass, and age of the star clusters in our sample.

V-band extinction. Table 8 gives V-band extinctions (AV) from the literature and our work. Column (2) gives the value from A17, which is derived via deterministic models and $\chi^{2}$ minimization. Column (3) gives the median value that is derived with BAYESPHOT and the Z=0.020 stochastic models of Krumholz et al. (2015a, hereafter K15). Columns (4) and (5) give the median values that are derived with BAYESPHOT and the Z=0.014 S2 and S3 GALAXEV-C models, respectively. We find that the S2 models yield lower extinctions than the S3 models. Thus, the extinction depends on the log(US) value of the models. We also find good general agreement between the A17, K15, and S3 extinctions (within the error bars), and that the A17 extinctions tend to be the largest.

Mass. Table 9 gives masses from the literature and our work. Although the models in our pilot library use a coarse grid of cluster masses, we find that the observed clusters are low mass, in agreement with previous results. The mean cluster mass is $10^{3}$ $M_{\odot}$ using the A17, K15, S2, and S3 models. Thus, the value of log(US) does not affect the estimated mass value.

Age. Table 10 gives ages from the literature and our work. We find that the S2 models yield an older mean cluster age relative to the S3 models and that the A17 models yield the youngest mean age (2 Myr). We also find that the oldest/youngest clusters using K15 are the oldest/youngest clusters from S3 as well. Finally, according to the S3 models, four clusters in our sample are 1 Myr old. This is an age when nebular emission lines contribute significantly to the V-band luminosity and the nebular continuum to the I-band luminosity.
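The medians and asymmetric uncertainties quoted in Tables 8-10, and the 16th/50th/84th percentile lines drawn in Figure 12, are summaries of the marginal PDFs. A minimal sketch of this summary step, assuming the PDF is tabulated on a regular grid, is:

```python
import numpy as np

def summarize_pdf(grid, pdf, qs=(0.16, 0.50, 0.84)):
    """Return the 16th/50th/84th percentile values of a tabulated 1D PDF."""
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(qs, cdf, grid)

# Example: median and +/- bounds of a toy single-peaked extinction PDF.
av_grid = np.linspace(0.0, 3.0, 301)
pdf = np.exp(-0.5 * ((av_grid - 1.32) / 0.15) ** 2)
lo, med, hi = summarize_pdf(av_grid, pdf)
print(f"AV = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})")
```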
Av (columns 2-5) and $\Delta$Av (columns 6-8), in mag:

ID | A17 | K15 | S2 | S3 | (3)-(2) | (3)-(5) | (5)-(4)
---|---|---|---|---|---|---|---
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8)
0011-E | 1.89 ${}^{+1.80}_{-1.98}$ | 1.25 ${}^{+0.24}_{-0.19}$ | 0.54 ${}^{+0.10}_{-0.09}$ | 1.32 ${}^{+0.16}_{-0.14}$ | -0.64 | -0.07 | 0.78
0090-E | 0.12 ${}^{+0.00}_{-0.19}$ | 0.17 ${}^{+0.11}_{-0.12}$ | 0.07 ${}^{+0.07}_{-0.05}$ | 0.12 ${}^{+0.14}_{-0.07}$ | 0.05 | 0.05 | 0.05
0093-E | 0.37 ${}^{+0.00}_{-0.46}$ | 0.31 ${}^{+0.16}_{-0.17}$ | 0.09 ${}^{+0.10}_{-0.07}$ | 0.24 ${}^{+0.16}_{-0.15}$ | -0.06 | 0.07 | 0.15
0383-W | 0.25 ${}^{+0.12}_{-0.31}$ | 0.21 ${}^{+0.14}_{-0.14}$ | 0.07 ${}^{+0.09}_{-0.05}$ | 0.16 ${}^{+0.15}_{-0.11}$ | -0.04 | 0.05 | 0.09
0417-E | 1.05 ${}^{+0.93}_{-1.15}$ | 0.83 ${}^{+0.23}_{-0.26}$ | 0.28 ${}^{+0.10}_{-0.12}$ | 0.73 ${}^{+0.19}_{-0.19}$ | -0.22 | 0.10 | 0.45
0451-W | 2.08 ${}^{+1.98}_{-2.17}$ | 1.39 ${}^{+0.26}_{-0.26}$ | 0.64 ${}^{+0.11}_{-0.12}$ | 1.72 ${}^{+0.28}_{-0.17}$ | -0.69 | -0.33 | 1.08
0531-E | 0.37 ${}^{+0.22}_{-0.46}$ | 0.33 ${}^{+0.17}_{-0.19}$ | 0.09 ${}^{+0.10}_{-0.07}$ | 0.19 ${}^{+0.16}_{-0.12}$ | -0.04 | 0.14 | 0.10
0534-E | 0.81 ${}^{+0.00}_{-0.90}$ | 0.71 ${}^{+0.21}_{-0.21}$ | 0.21 ${}^{+0.12}_{-0.09}$ | 0.61 ${}^{+0.17}_{-0.16}$ | -0.10 | 0.10 | 0.40
0589-E | 1.21 ${}^{+1.12}_{-1.36}$ | 0.99 ${}^{+0.24}_{-0.64}$ | 0.31 ${}^{+0.09}_{-0.12}$ | 0.97 ${}^{+0.23}_{-0.22}$ | -0.22 | 0.02 | 0.66
0816-E | 0.62 ${}^{+0.50}_{-0.71}$ | 0.38 ${}^{+0.19}_{-0.21}$ | 0.12 ${}^{+0.09}_{-0.07}$ | 0.35 ${}^{+0.17}_{-0.19}$ | -0.24 | 0.03 | 0.23
0894-E | 1.71 ${}^{+1.61}_{-1.86}$ | 1.02 ${}^{+0.18}_{-0.19}$ | 0.47 ${}^{+0.12}_{-0.09}$ | 1.11 ${}^{+0.16}_{-0.14}$ | -0.69 | -0.09 | 0.64
1381-W | 1.21 ${}^{+1.15}_{-1.30}$ | 0.33 ${}^{+0.40}_{-0.24}$ | 0.49 ${}^{+1.58}_{-0.18}$ | 1.34 ${}^{+0.21}_{-0.21}$ | -0.88 | -1.01 | 0.85
1252-W | 0.96 ${}^{+0.87}_{-1.05}$ | 0.66 ${}^{+0.28}_{-0.23}$ | 0.26 ${}^{+0.12}_{-0.10}$ | 0.64 ${}^{+0.21}_{-0.17}$ | -0.30 | 0.02 | 0.38
1564-W | 1.46 ${}^{+1.40}_{-1.61}$ | 1.04 ${}^{+0.21}_{-0.24}$ | 0.42 ${}^{+0.10}_{-0.09}$ | 0.94 ${}^{+0.24}_{-0.16}$ | -0.42 | 0.10 | 0.52
1576-W | 0.34 ${}^{+0.19}_{-0.40}$ | 0.24 ${}^{+0.16}_{-0.15}$ | 0.09 ${}^{+0.07}_{-0.07}$ | 0.16 ${}^{+0.15}_{-0.11}$ | -0.10 | 0.08 | 0.07
1949-W | 0.37 ${}^{+0.03}_{-0.43}$ | 0.24 ${}^{+0.16}_{-0.15}$ | 0.09 ${}^{+0.10}_{-0.07}$ | 0.19 ${}^{+0.16}_{-0.12}$ | -0.13 | 0.05 | 0.10
2449-W | 1.08 ${}^{+0.99}_{-1.24}$ | 0.94 ${}^{+0.19}_{-0.25}$ | 0.28 ${}^{+0.10}_{-0.12}$ | 0.80 ${}^{+0.24}_{-0.19}$ | -0.14 | 0.14 | 0.52
2732-W | 0.74 ${}^{+0.46}_{-0.90}$ | 0.54 ${}^{+0.26}_{-0.28}$ | 0.21 ${}^{+0.10}_{-0.12}$ | 0.42 ${}^{+0.29}_{-0.18}$ | -0.20 | 0.12 | 0.21
2740-W | 0.50 ${}^{+0.00}_{-0.78}$ | 0.66 ${}^{+0.21}_{-0.21}$ | 0.19 ${}^{+0.12}_{-0.10}$ | 0.71 ${}^{+0.16}_{-0.17}$ | 0.16 | -0.05 | 0.52
RANGE | 0.1 - 2.1 | 0.2 - 1.4 | 0.1 - 0.6 | 0.1 - 1.7 | -0.9 - 0.2 | -1.0 - 0.1 | 0.1 - 1.1
MEAN | 0.9 | 0.6 | 0.3 | 0.7 | -0.3 | -0.0 | 0.4

Table 8: V-band extinctions of the star clusters derived with different models, all of which include gas. The differences between the values given by the different models are also given. Column (1) - Cluster ID. Column (2) - Deterministic models with Z=0.020 of A17. Column (3) - Stochastic models with Z=0.020 of Krumholz et al. (2015a). Columns (4) and (5) - Stochastic models with Z=0.014 and log(US)=-2 and log(US)=-3, respectively, presented in this work. Columns (6) to (8) - Differences between values in the indicated columns.
log(M / M⊙) (columns 2-5) and $\Delta$log(M / M⊙) (column 6):

ID | A17 | K15 | S2 | S3 | (3)-(2)
---|---|---|---|---|---
(1) | (2) | (3) | (4) | (5) | (6)
0011-E | 3.43${}^{+3.47}_{-3.39}$ | 2.59${}^{+0.40}_{-0.35}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | -0.44
0090-E | 2.77${}^{+2.86}_{-2.63}$ | 2.89${}^{+0.35}_{-0.55}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.47
0093-E | 2.88${}^{+2.95}_{-2.72}$ | 2.94${}^{+0.40}_{-0.55}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.46
0383-W | 2.71${}^{+2.85}_{-2.68}$ | 2.94${}^{+0.35}_{-0.55}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.58
0417-E | 3.25${}^{+3.30}_{-3.19}$ | 2.69${}^{+0.40}_{-0.40}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | -0.16
0451-W | 2.96${}^{+3.00}_{-2.92}$ | 2.54${}^{+0.45}_{-0.35}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.03
0531-E | 2.94${}^{+3.04}_{-2.82}$ | 2.89${}^{+0.45}_{-0.55}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.40
0534-E | 2.99${}^{+3.03}_{-2.90}$ | 2.64${}^{+0.40}_{-0.40}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.05
0589-E | 3.37${}^{+3.41}_{-3.28}$ | 3.04${}^{+0.50}_{-0.60}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.17
0816-E | 3.65${}^{+3.83}_{-3.62}$ | 3.49${}^{+0.25}_{-0.25}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.09
0894-E | 3.43${}^{+3.46}_{-3.25}$ | 2.54${}^{+0.40}_{-0.35}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | -0.49
1252-W | 3.19${}^{+3.24}_{-3.14}$ | 2.69${}^{+0.35}_{-0.40}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | -0.15
1381-W | 2.48${}^{+2.52}_{-2.45}$ | 2.34${}^{+0.30}_{-0.20}$ | 3.02${}^{+1.27}_{-0.02}$ | 3.0${}^{+0.16}_{-0.00}$ | 0.16
1564-W | 3.43${}^{+3.47}_{-3.28}$ | 2.74${}^{+0.40}_{-0.40}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | -0.29
1576-W | 2.95${}^{+3.06}_{-2.81}$ | 2.79${}^{+0.40}_{-0.50}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.24
1949-W | 2.81${}^{+2.91}_{-2.76}$ | 2.79${}^{+0.35}_{-0.45}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.33
2449-W | 3.31${}^{+3.34}_{-3.21}$ | 2.84${}^{+0.45}_{-0.50}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | -0.02
2732-W | 3.05${}^{+3.13}_{-3.00}$ | 2.89${}^{+0.35}_{-0.45}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.19
2740-W | 2.58${}^{+2.62}_{-2.26}$ | 2.69${}^{+0.40}_{-0.40}$ | 3.0${}^{+0.02}_{-0.00}$ | 3.0${}^{+0.02}_{-0.00}$ | 0.51
RANGE | 2.48 - 3.65 | 2.34 - 3.49 | 3.00 - 3.00 | 3.00 - 3.00 | -0.49 - 0.58
MEAN | 3.06 | 2.79 | 3.00 | 3.00 | 0.11

Table 9: Total stellar masses of the star clusters derived with different models, all of which include gas. The difference between values given by the different models is also given. Column (1) - Cluster ID. Column (2) - Deterministic models with Z=0.020 of A17. Column (3) - Stochastic models with Z=0.020 of Krumholz et al. (2015a). Columns (4) and (5) - Stochastic models with (Z=0.014, log(US)=-2) and (Z=0.014, log(US)=-3) from this work, respectively. Column (6) - Differences between values in the indicated columns.
t / Myr (columns 2-5) and $\Delta$t / Myr (columns 6-8):

ID | A17 | K15 | S2 | S3 | (3)-(2) | (3)-(5) | (5)-(4)
---|---|---|---|---|---|---|---
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8)
0011-E | 1.0${}^{+1.0}_{-1.0}$ | 0.7${}^{+1.9}_{-0.5}$ | 3.0${}^{+0.1}_{-0.0}$ | 1.0${}^{+0.0}_{-0.0}$ | -0.3 | -0.4 | -2.0
0090-E | 2.0${}^{+3.0}_{-1.0}$ | 5.6${}^{+1.6}_{-0.6}$ | 7.8${}^{+0.1}_{-0.1}$ | 7.8${}^{+0.1}_{-3.8}$ | 3.7 | -2.2 | 0.0
0093-E | 2.0${}^{+5.0}_{-1.0}$ | 5.1${}^{+1.9}_{-0.5}$ | 7.8${}^{+0.1}_{-0.1}$ | 4.0${}^{+3.8}_{-0.2}$ | 3.2 | 1.1 | -3.8
0383-W | 3.0${}^{+4.0}_{-2.0}$ | 5.6${}^{+1.8}_{-0.5}$ | 7.8${}^{+0.1}_{-0.1}$ | 7.6${}^{+0.4}_{-3.6}$ | 2.6 | -1.9 | -0.3
0417-E | 1.0${}^{+1.0}_{-1.0}$ | 2.0${}^{+1.5}_{-0.6}$ | 7.8${}^{+0.1}_{-0.1}$ | 3.0${}^{+0.9}_{-0.1}$ | 1.0 | -1.0 | -4.8
0451-W | 4.0${}^{+4.0}_{-4.0}$ | 0.7${}^{+7.1}_{-0.3}$ | 7.8${}^{+0.1}_{-0.1}$ | 1.0${}^{+3.0}_{-0.0}$ | -3.3 | -0.4 | -6.8
0531-E | 2.0${}^{+3.0}_{-1.0}$ | 4.3${}^{+1.9}_{-0.5}$ | 7.8${}^{+0.1}_{-0.1}$ | 3.9${}^{+0.1}_{-0.2}$ | 2.3 | 0.3 | -3.9
0534-E | 1.0${}^{+15.0}_{-1.0}$ | 2.2${}^{+1.6}_{-0.8}$ | 7.8${}^{+0.1}_{-0.1}$ | 3.9${}^{+0.1}_{-1.0}$ | 1.2 | -1.7 | -3.9
0589-E | 1.0${}^{+2.0}_{-1.0}$ | 2.7${}^{+8.5}_{-0.6}$ | 7.8${}^{+0.1}_{-0.1}$ | 3.0${}^{+1.0}_{-0.1}$ | 1.7 | -0.4 | -4.8
0816-E | 3.0${}^{+3.0}_{-1.0}$ | 2.5${}^{+1.6}_{-0.6}$ | 3.0${}^{+0.8}_{-0.1}$ | 3.0${}^{+0.1}_{-0.1}$ | -0.6 | -0.6 | 0.0
0894-E | 1.0${}^{+3.0}_{-1.0}$ | 0.5${}^{+1.9}_{-0.5}$ | 3.0${}^{+0.1}_{-0.1}$ | 1.0${}^{+0.0}_{-0.0}$ | -0.5 | -0.5 | -2.0
1252-W | 1.0${}^{+1.0}_{-1.0}$ | 1.7${}^{+1.8}_{-0.6}$ | 7.8${}^{+0.1}_{-0.1}$ | 3.0${}^{+0.9}_{-2.0}$ | 0.7 | -1.4 | -4.8
1381-W | 4.0${}^{+4.0}_{-4.0}$ | 4.3${}^{+1.3}_{-0.8}$ | 7.8${}^{+0.1}_{-5.1}$ | 7.8${}^{+0.1}_{-0.3}$ | 0.3 | -3.5 | 0.0
1564-W | 1.0${}^{+3.0}_{-1.0}$ | 1.3${}^{+2.1}_{-0.6}$ | 3.0${}^{+0.1}_{-0.0}$ | 1.0${}^{+2.0}_{-0.0}$ | 0.3 | 0.2 | -2.0
1576-W | 2.0${}^{+3.0}_{-1.0}$ | 3.0${}^{+1.9}_{-0.6}$ | 7.8${}^{+0.1}_{-0.1}$ | 3.9${}^{+0.1}_{-0.9}$ | 0.9 | -1.0 | -3.9
1949-W | 3.0${}^{+5.0}_{-2.0}$ | 3.5${}^{+1.6}_{-0.6}$ | 7.8${}^{+0.1}_{-0.1}$ | 4.0${}^{+3.8}_{-1.0}$ | 0.5 | -0.5 | -3.8
2449-W | 1.0${}^{+2.0}_{-1.0}$ | 2.2${}^{+1.8}_{-0.7}$ | 7.8${}^{+0.1}_{-0.1}$ | 3.0${}^{+1.0}_{-0.1}$ | 1.2 | -0.8 | -4.8
2732-W | 4.0${}^{+5.0}_{-2.0}$ | 2.2${}^{+1.9}_{-0.4}$ | 7.8${}^{+0.1}_{-0.1}$ | 3.0${}^{+1.0}_{-2.0}$ | -1.8 | -0.8 | -4.8
2740-W | 5.0${}^{+6.0}_{-4.0}$ | 3.9${}^{+2.3}_{-0.4}$ | 7.8${}^{+0.1}_{-0.1}$ | 7.8${}^{+0.1}_{-3.9}$ | -1.1 | -3.9 | 0.0
RANGE | 1 - 5 | 0.5 - 5.6 | 2.9 - 7.8 | 1.0 - 7.8 | -3.3 - 3.6 | -3.9 - 1.0 | -6.8 - 0.0
MEAN | 2.2 | 2.8 | 6.79 | 3.8 | 0.6 | -1.0 | -3.0

Table 10: Ages of the star clusters derived with different models, all of which include gas. The difference between values given by the different models is also given. Column (1) - Cluster ID. Column (2) - Deterministic models with Z=0.020 (A17). Column (3) - Stochastic models with Z=0.020 of Krumholz et al. (2015a). Columns (4) and (5) - Stochastic models with Z=0.014 and log(US)=-2 and log(US)=-3, respectively, presented in this work. Columns (6) to (8) - Differences between values in the indicated columns.

### 6.4 Well versus poorly constrained solutions

For clusters 816-E (top row) and 2732-W (bottom row), Figure 12 shows the extinction PDFs corresponding to the Z = 0.014 / S3 / GALAXEV-C models (left column) and the Z=0.020 / log(Us)=-3 / SLUG models (middle column). For 2732-W, note that GALAXEV-C and SLUG yield single- and multiple-peaked extinction PDFs, respectively.
The fact that one PDF is single-peaked and the other is not is attributed to differences between the GALAXEV-C and SLUG libraries. In particular, as explained in the introduction, the SLUG models do not include the effect of the stochastic variation in the shape of the ionizing continuum on the nebular emission. The right column of Figure 12 shows the SLUG age PDFs corresponding to the above two clusters. We only show the SLUG results because the pilot GALAXEV-C age grid is very coarse. The age PDF is single-peaked for cluster 816-E and multi-peaked for cluster 2732-W. A discussion of the PDFs for the whole sample of clusters is provided in Appendix C.

Figure 12: Left column: The Z=0.014/log(Us)=-3/GALAXEV-C V-band extinction PDFs of clusters 816-E (top panel) and 2732-W (bottom panel). The blue, red, and green lines give the 16th, 50th, and 84th percentiles, respectively. Middle column: Similar to the left column, but showing the Z=0.020/log(Us)=-3/SLUG V-band extinction PDFs. Right column: Z=0.020/log(Us)=-3/SLUG age PDFs.

## 7 Summary and Conclusions

1. We present a pilot GALAXEV-C library of synthetic HST-equivalent NUV, U, B, V, and I photometry of star clusters that accounts for the stochastic sampling of the stellar IMF and the contribution of the ionized gas and dust mixed with the ionized gas (Section 2). The library uses the spectra that are presented in Vidal-García et al. (in prep.) and includes models for clusters with initial masses, $M_{i}=10^{3}$, $10^{4}$, and $10^{5}$ $M_{\odot}$; ages, $t=1$, 3, 4, and 8 Myr; metallicities, $Z=0.002$ and $Z=0.014$ (solar); and ionisation parameters, log(U${}_{\rm S})=-2$ (S2 models) and -3 (S3 models). We compare the stochastic models to corresponding deterministic models (Section 3.4), and to HST LEGUS and H$\alpha$-LEGUS observations of star clusters in the galaxy NGC 7793 that are isolated, have compact H$\alpha$ morphologies, $Z\sim 0.014$ (Figure 8), and deterministic masses and ages of $<10^{4}$ $M_{\odot}$ and $\leq 10$ Myr, respectively. We determine the V-band extinctions, masses, and ages of these clusters using the stochastic models (Section 5). We compare the cluster properties derived with the deterministic models published in A17 with those derived with the independent GALAXEV-C and SLUG (K15) stochastic models (Section 6.3).
2. For GALAXEV magnitudes that only account for the stars: a) the absolute value of the residual, deterministic mag - median stochastic mag, can be $\geq 0.5$ mag, even for $M_{i}=10^{5}$ $M_{\odot}$ (Table 4); and b) the largest spread of the stochastic models occurs at 3 and 4 Myr, when Wolf-Rayet stars are present (Figure 2).
3. For $M_{i}=10^{5}$ $M_{\odot}$: a) the median stochastic mag with gas can be $>$1.0 mag more luminous than the median stochastic mag without gas (Table 2); and b) the nebular emission lines can contribute $>50\%$ and $>30\%$ of the total emission in the V and I bands, respectively (Figure 4).
4. Regarding age-dating OB clusters via deterministic tracks in the U - B vs. V - I diagram, we find that this method leads to highly uncertain ages at $Z=0.014$ for $M_{i}\sim 10^{3}$ $M_{\odot}$ (Figures 5-7) and at $Z=0.002$ for all masses in the stochastic library (Figures 14-16).
5. Also regarding the U - B vs. V - I diagram, we find that at $Z=0.014$, a small fraction of models with $M_{i}\sim 10^{3}$ and $M_{i}\sim 10^{4}$ $M_{\odot}$ are located far from their corresponding deterministic predictions, and none of our observations are found near these outlier models.
This is expected, given that our observational sample is small and, according to the stochastic models, the corresponding outlier clusters have a low probability of being created.
6. Regarding the SED fitting, we find good agreement between the best-fitting S3 model and the observations (Figure 11, Table 6).
7. We derive the V-band extinctions, masses, and ages of the star clusters in our sample using two independent libraries of stochastic models with gas, the K15 library and our pilot library. We compare the results with those of A17, which are based on deterministic models (Tables 8 to 10).
8. Regarding the extinction, we find that the GALAXEV-C AV value is systematically lower for log(US)=-2 than for log(US)=-3. We also find that the A17, K15, and S3 extinctions are in general agreement (within the errors), and that the A17 extinctions tend to be the largest. Finally, we find that for a given cluster, the extinction PDF can be single-peaked for GALAXEV-C and multi-peaked for SLUG and vice versa, which is attributed to differences in the stochastic libraries.
9. Regarding the masses, we find that the observed clusters are low mass, in agreement with the deterministic predictions (Table 9).
10. Regarding the ages, we find that the S2 models tend to yield older ages relative to the S3 models, and that the A17 models yield the youngest mean age (2 Myr; Table 10). We also find that in several cases, the age PDF presents multiple peaks.
11. Regarding models versus nature, we recall that for a multinomial distribution, the standard deviations of the different mass bins are not independent from each other, whereas we have no reason to believe that the same correlation between the standard deviations holds in nature. This is why it is important to observationally characterize the true variance in nature.
12. An extension of the pilot library to other metallicities and ages is near completion and can be made available upon request to AVG, who is a co-author of this work.

## Appendix A Is 220 realizations enough?

The left panel of Figure 13 is similar to Figure 1 but includes only the number of realizations that are used in this work. These figures show that reducing the number of realizations from 1000 to 220 has no significant effect on the estimated mean and standard deviation of the number of stars in each mass bin. Increasing the number of realizations only changes the uncertainty of the standard deviation estimates.

Figure 13: Left panel: Same as Figure 1 but for 220 realizations. Right panel: Expected mean and standard deviations for a number of stars that is fixed such that the expected mass is $M_{i}$.

If we fix the total number of stars rather than the total mass, then the distribution of bin counts follows a multinomial distribution with the desired expected total mass, as shown in the right panel of Figure 13. We can use the known statistics of this multinomial distribution to approximate the standard deviations for the case in which the total mass is strictly fixed (compare the standard deviations in the left and right panels). The standard deviations estimated from the stochastically-generated distributions (left panel) are somewhat larger, due to the variable total number of stars. An in-depth discussion of mass-limited sampling versus other sampling procedures can be found in Cerviño et al. (2013).
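The multinomial statistics invoked above can be checked numerically with a minimal sketch; the bin probabilities and the fixed number of stars below are illustrative placeholders rather than the values used for Figure 13:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative IMF bin probabilities p_k (summing to 1) and an expected star
# count chosen so that the expected total mass equals M_i (placeholder values).
p = np.array([0.55, 0.25, 0.12, 0.05, 0.02, 0.01])  # low- to high-mass bins
n_stars = 1800                                      # fixed total number of stars

# Analytic multinomial statistics: mean n*p_k, std sqrt(n*p_k*(1-p_k)).
mean_k = n_stars * p
std_k = np.sqrt(n_stars * p * (1.0 - p))

# Monte Carlo check with 220 realizations, as used in this work.
counts = rng.multinomial(n_stars, p, size=220)
print(np.allclose(counts.mean(axis=0), mean_k, rtol=0.1))
print(counts.std(axis=0), std_k)
```

Note also that for a multinomial distribution the bin counts are anticorrelated, Cov$(N_{j},N_{k})=-np_{j}p_{k}$ for $j\neq k$, which is the correlation between mass bins referred to in conclusion 11.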
The importance of computing 220 realizations of the IMF is thus not to characterize the spread in observables predicted by the stochastic models (because, in principle, that can be calculated analytically), but to fill in the space between random realizations in diagrams such as the colour-colour diagram, which is useful for comparison to the observations.

## Appendix B Additional colour-colour diagrams.

In Figures 5-7 and Figure 10, we compare Z=0.014 deterministic and stochastic predictions in the U - B vs. V - I diagram for the cases: only-stars, log(Us)=-3, log(Us)=-2, and the three previous cases combined, respectively. Figures 14-17 are similar but for Z=0.002. Figures 14-16 are discussed in Section 3, while Figure 17 is discussed in Section 5.

Figure 14: Similar to Figure 5 but for $Z=0.002$.
Figure 15: Similar to Figure 6 but for $Z=0.002$.
Figure 16: Similar to Figure 7 but for $Z=0.002$.
Figure 17: Similar to Figure 10 but for $Z=0.002$. In this case, the Yggdrasil predictions are for $Z=0.004$.

## Appendix C Extinction and age PDF.

For all clusters in our sample, the left three panels of Figures 18-21 show the V-band extinction PDFs from GALAXEV-C for models with log(U${}_{\rm S})=-3$ and $Z=0.014$ (left column) and from SLUG for models with log(U${}_{\rm S})=-3$ and $Z=0.020$ (middle column); and the age PDFs from SLUG (right column). The code and cluster ID are indicated in the column titles. The blue, red, and green lines give the 16th, 50th, and 84th percentiles, respectively. In each figure, the clusters are arranged in order of increasing age. The right five panels of Figures 18-21 show the NUV, U, B, V, and I postage stamps of the clusters, from left to right. Although the top three clusters of Figure 18 have the youngest median SLUG ages, their age PDFs show a second peak at an older age. These three clusters also have high Av values (for the sample), as expected if the surrounding dust has not been as affected by the ionising photons from massive stars as in other clusters. The high extinction of the youngest clusters leads to NUV/U/B-band postage stamps with a low number of counts, because reddening due to dust increases as wavelength decreases. Figures 18-21 also show that for some clusters the extinction PDF is multi-peaked according to one code but single-peaked according to the other. For instance, for cluster 451-W in Figure 18, GALAXEV-C yields two peaks, while for cluster 589-E in Figure 20 it is the opposite. Cases where both codes yield single-peaked PDFs (e.g., cluster 534-E) or multiple-peaked PDFs (e.g., cluster 2732-W) also occur. The different shapes of the extinction PDFs are attributed to the different ways of modelling the ionising continuum in the GALAXEV-C and SLUG libraries (see the introduction of this paper).

Figure 18: Left: PDFs of V-band extinction (left and middle columns) and age (right column) for five clusters in our sample. The left and middle columns show the GALAXEV-C and SLUG PDFs, respectively, while the right column shows the SLUG age PDF. The clusters are arranged in order of increasing SLUG age. In each PDF sub-panel, we give the median value (50th percentile) of the extinction or age. Right: Postage stamps of the clusters in the NUV, U, B, V and I LEGUS bands, from left to right. We use a logarithmic scale from 0 to 10 and SAO-ds9’s colour scale "b", such that blue corresponds to the lowest number of counts and yellow corresponds to pixels with 10 counts or more.
Figure 19: Similar to Figure 18 but for five different clusters.
Figure 20: Similar to Figure 18 but for five different clusters. Note that cluster 1381-W, which is $\sim$4 Myr old and has a high Av value (for the sample) according to GALAXEV-C and the second PDF peak of SLUG, has postage stamps that follow a similar pattern to that of the youngest ($<1$ Myr) high-Av clusters of Figure 18.
Figure 21: Similar to Figure 18 but for five different clusters.

## Acknowledgements

We thank the jury of ROD’s M.S. thesis (E. Terlevich, S. Sánchez, L. Aguilar, S. Srinivasan) as well as M. Cerviño and B. Elmegreen for comments and suggestions which have greatly improved the quality of this paper. ROD and AW acknowledge the support of UNAM via grant agreement PAPIIT no. IA-102120. AVG, SC and GB acknowledge support from the ERC via an Advanced Grant under grant agreement no. 321323-NEOGAL. AVG also acknowledges support from the ERC Advanced Grant MIST (FP7/2017-2022, No 742719). MRK acknowledges support from the Australian Research Council’s Future Fellowship funding scheme, award FT180100375, and from resources and services provided by the National Computational Infrastructure (NCI), which is supported by the Australian Government.

## Appendix D Data availability

The HST data underlying this article are available in the Mikulski Archive for Space Telescopes at https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html, and can be accessed with the dataset identifiers 13364 and 13773. LEGUS high level science products can be found at https://archive.stsci.edu/prepds/legus/dataproducts-public.html

## References

* Adamo et al. (2017) Adamo A., et al., 2017, ApJ, 841, 131
* Ashworth et al. (2017) Ashworth G., et al., 2017, MNRAS, 469, 2464
* Asplund et al. (2009) Asplund M., Grevesse N., Sauval A. J., Scott P., 2009, ARA&A, 47, 481
* Bica et al. (1991) Bica E., Claria J. J., Dottori H., Santos J. F. C. J., Piatti A., 1991, ApJ, 381, L51
* Bothwell et al. (2009) Bothwell M. S., Kennicutt R. C., Lee J. C., 2009, MNRAS, 400, 154
* Bruzual & Charlot (2003) Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
* Calzetti et al. (2015) Calzetti D., et al., 2015, AJ, 149, 51
* Cardelli et al. (1989) Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245
* Cerviño & Luridiana (2003) Cerviño M., Luridiana V., 2003, in Reyes-Ruiz M., Vázquez-Semadeni E., eds, Revista Mexicana de Astronomia y Astrofisica Conference Series Vol. 18, Revista Mexicana de Astronomia y Astrofisica Conference Series. pp 11–13 (arXiv:astro-ph/0302577)
* Cerviño et al. (2013) Cerviño M., Román-Zúñiga C., Luridiana V., Bayo A., Sánchez N., Pérez E., 2013, A&A, 553, A31
* Chabrier (2003) Chabrier G., 2003, ApJ, 586, L133
* Chandar et al. (2004) Chandar R., Leitherer C., Tremonti C. A., 2004, ApJ, 604, 153
* Chandar et al. (2016) Chandar R., Whitmore B. C., Dinino D., Kennicutt R. C., Chien L. H., Schinnerer E., Meidt S., 2016, ApJ, 824, 71
* Cook et al. (2019) Cook D. O., et al., 2019, MNRAS, 484, 4897
* Eldridge et al. (2017) Eldridge J. J., Stanway E. R., Xiao L., McClelland L. A. S., Taylor G., Ng M., Greis S. M. L., Bray J. C., 2017, Publ. Astron. Soc. Australia, 34, e058
* Ferland et al. (2017) Ferland G. J., et al., 2017, Rev. Mex. Astron. Astrofis., 53, 385
* Fouesneau & Lançon (2010) Fouesneau M., Lançon A., 2010, A&A, 521, A22
* Fouesneau & Lancon (2010) Fouesneau M., Lancon A., 2010, A Bayesian Approach Accounting for Stochastic Fluctuations in Stellar Cluster Properties. p. 171
* Fumagalli et al. (2011) Fumagalli M., da Silva R. L., Krumholz M. R., 2011, ApJ, 741, L26
* Guseva et al. (2006) Guseva N. G., Izotov Y. I., Thuan T. X., 2006, ApJ, 644, 890
* Gutkin et al. (2016) Gutkin J., Charlot S., Bruzual G., 2016, MNRAS, 462, 1757
* Hannon et al. (2019) Hannon S., et al., 2019, MNRAS, 490, 4648
* Hirschmann et al. (2019) Hirschmann M., Charlot S., Feltre A., Naab T., Somerville R. S., Choi E., 2019, MNRAS, 487, 333
* Johnson et al. (2017) Johnson L. C., et al., 2017, ApJ, 839, 78
* Kroupa (2012) Kroupa P., 2012, arXiv e-prints, p. arXiv:1210.1211
* Krumholz et al. (2015a) Krumholz M. R., Fumagalli M., da Silva R. L., Rendahl T., Parra J., 2015a, MNRAS, 452, 1447
* Krumholz et al. (2015b) Krumholz M. R., et al., 2015b, ApJ, 812, 147
* Krumholz et al. (2019) Krumholz M. R., McKee C. F., Bland-Hawthorn J., 2019, ARA&A, 57, 227
* Lamb et al. (2010) Lamb J. B., Oey M. S., Werk J. K., Ingleby L. D., 2010, ApJ, 725, 1886
* Lee et al. (2009) Lee J. C., et al., 2009, ApJ, 706, 599
* Leitherer et al. (1999) Leitherer C., et al., 1999, ApJS, 123, 3
* Leitherer et al. (2014) Leitherer C., Ekström S., Meynet G., Schaerer D., Agienko K. B., Levesque E. M., 2014, ApJS, 212, 14
* Massey et al. (2000) Massey P., Waterhouse E., DeGioia-Eastwood K., 2000, AJ, 119, 2214
* Mathis (1990) Mathis J. S., 1990, ARA&A, 28, 37
* Messa et al. (2018) Messa M., et al., 2018, MNRAS, 473, 996
* Pietrzyński et al. (2010) Pietrzyński G., et al., 2010, AJ, 140, 1475
* Pilyugin et al. (2014) Pilyugin L. S., Grebel E. K., Kniazev A. Y., 2014, AJ, 147, 131
* Plat et al. (2019) Plat A., Charlot S., Bruzual G., Feltre A., Vidal-García A., Morisset C., Chevallard J., Todt H., 2019, MNRAS, 490, 978
* Popescu (2010) Popescu B., 2010, PhD thesis, University of Cincinnati
* Sabbi et al. (2018) Sabbi E., et al., 2018, ApJS, 235, 23
* Salpeter (1955) Salpeter E. E., 1955, ApJ, 121, 161
* Schaerer & de Barros (2009) Schaerer D., de Barros S., 2009, A&A, 502, 423
* Schlafly & Finkbeiner (2011) Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103
* Vidal-García et al. (2017) Vidal-García A., Charlot S., Bruzual G., Hubeny I., 2017, MNRAS, 470, 3532
* Wofford et al. (2016) Wofford A., et al., 2016, MNRAS, 457, 4296
* Wofford et al. (2020) Wofford A., et al., 2020, MNRAS, 493, 2410
* Zackrisson et al. (2011) Zackrisson E., Rydberg C.-E., Schaerer D., Östlin G., Tuli M., 2011, ApJ, 740, 13
* da Silva et al. (2012) da Silva R. L., Fumagalli M., Krumholz M., 2012, ApJ, 745, 145
# CoNO: Complex Neural Operator for Continuous Dynamical Physical Systems

Karn Tiwari Department of Electrical Communication Engineering Indian Institute of Science, Bangalore Bengaluru, 560012, India <EMAIL_ADDRESS> &N M Anoop Krishnan Yardi School of Artificial Intelligence Indian Institute of Technology, Delhi New Delhi, 110016, India <EMAIL_ADDRESS> &Prathosh A P Department of Electrical Communication Engineering Indian Institute of Science, Bangalore Bengaluru, 560012, India <EMAIL_ADDRESS>

###### Abstract

Neural operators extend data-driven models to map between infinite-dimensional functional spaces. While these operators perform effectively in either the time or frequency domain, their performance may be limited when applied to non-stationary spatial or temporal signals whose frequency characteristics change with time. Here, we introduce a Complex Neural Operator (CoNO) that parameterizes the integral kernel using the Fractional Fourier Transform (FrFT), better representing non-stationary signals in a complex-valued domain. Theoretically, we prove the universal approximation capability of CoNO. We perform an extensive empirical evaluation of CoNO on seven challenging partial differential equations (PDEs), including regular grids, structured meshes, and point clouds. Empirically, CoNO consistently attains state-of-the-art performance, showcasing an average relative gain of 10.9%. Further, CoNO exhibits superior performance, outperforming all other models in additional tasks such as zero-shot super-resolution and robustness to noise. CoNO also exhibits the ability to learn from small amounts of data, giving the same performance as the next best model with just 60% of the training data. Altogether, CoNO presents a robust and superior model for modeling continuous dynamical systems, providing a fillip to scientific machine learning.

## 1 Introduction

Continuum systems span various scientific and engineering fields, such as robotics, biological systems, climate modeling, and fluid dynamics [15]. These systems are represented using Partial Differential Equations (PDEs), the solution of which provides the system’s time evolution. The solution of PDEs necessitates the identification of an optimal solution operator, which maps between functional spaces while incorporating the initial conditions and coefficients. Traditionally, numerical methods, such as finite element and spectral methods, have been employed to approximate the solution operator for PDEs. However, these approaches often incur high computational costs and exhibit limited adaptability to arbitrary resolutions and geometries [56]. The high computational costs of these numerical methods inhibit the real-time prediction that is crucial in weather forecasting and robotics.

Figure 1: FrFT heatmaps illustrating the temporal-frequency characteristics of the 2D Navier-Stokes equation for varying values of $\alpha$. Each subplot represents the magnitude of the transformed frequency content over time, obtained by applying the FrFT to the flattened 2D frequency map of the Navier-Stokes equation. Different subplots correspond to fractional orders $\alpha$, highlighting the diverse spectral behaviors captured by the FrFT across both the temporal and frequency domains. Note that $\alpha=0$ represents the time domain while $\alpha=1$ represents the frequency domain.

Recently, in the realm of scientific machine learning, neural networks have emerged as a promising alternative for solving PDEs through a data-driven approach.
Specifically, neural operators represent an extension of neural networks, facilitating the mapping between infinite-dimensional functional spaces and serving as universal approximators of operators. Notably, these operators learn the functionals without any prior knowledge of the underlying PDE, relying solely on data-driven training, which leads to faster inference times than the traditional methods [41, 35]. Among different neural operators, the Fourier Neural Operator (FNO) [41] has gained widespread recognition due to its ability to navigate the infinite-dimensional functional space via kernel integral operations in the Fourier domain. Renowned for its versatility, FNO has found successful applications across diverse domains, including weather forecasting [36], biomedical surrogate modeling [24], and expediting sampling processes in diffusion models [72]. Nevertheless, recent investigations have brought to light some specific challenges associated with FNO, including aliasing errors [18], a departure from continuous-discrete equivalence [6], susceptibility to vanishing gradient problems with an increasing number of layers [58], and susceptibility to poor performance when presented with noisy data [8]. FNO also exhibits suboptimal performance on time-dependent PDEs [40, 8], exemplified by the turbulent Navier-Stokes equations (chaotic flow). Notably, FNO encounters difficulties in making predictions over extended time horizons, constrained by the limitations of the Fourier Transform (FT), which is tailored to stationary signals and lacks time-frequency localization: it decomposes a function into monochromatic basis functions and is therefore unable to detect the oscillatory patterns present in chirp signals [57]. For instance, the change of frequency with time in the Navier-Stokes equation is non-stationary and nonlinear (see Fig. 1 and App. Fig. 6), leading to a concentration of the spectrum around low frequencies in the Fourier Transform. Note that the Navier-Stokes equation holds significant relevance across diverse practical domains, including but not limited to aerodynamics, weather forecasting, and oceanography. Addressing these challenges through a data-driven methodology underscores the pressing need for an alternative operator formulation that can learn non-stationary signals.

Employing the fractional FT (FrFT), a generalization of the FT suited to non-stationary signals whose spectra evolve with time, can potentially improve the prediction of long-horizon dynamics, particularly in handling highly nonlinear and rapidly changing time-frequency characteristics [11]. The FrFT extends the classical FT by using a general, adjustable, parameterized orthonormal basis. This basis allows us to break down a chirp function into simpler parts, and the adjustable parameter helps reconstruct chirp functions efficiently [57]. The FrFT generalizes the FT by rotating the signal between the time and frequency domains, transitioning from the real to the complex domain, and incorporating phase information [51], resulting in a complex representation of the signal. However, complex-valued representations have remained unexplored in the operator learning paradigm, although a large literature exists on complex-valued neural networks (CVNNs) [48]. CVNNs offer easier optimization [48], faster convergence during training [4, 14, 66], better generalization [30], data efficiency, and resilience to noise [14, 66].
Thus, combining the operator paradigm with CVNNs can potentially result in an architecture that exploits the best of both, leading to a model that provides improved long-horizon prediction.

Our Contributions. Motivated by the above observations, we introduce the Complex Neural Operator (CoNO). The major contributions of the present work are as follows.

1. Novel architecture. CoNO is, to our knowledge, the first operator-learning architecture that employs a CVNN and parameterizes the integral kernel through the FrFT, capturing richer information through the phase details learnable by CVNNs and suitable for non-stationary signals. Table 1 shows a comprehensive comparison of CoNO with current SOTA operators.
2. Universal approximation. We prove theoretically that CoNO satisfies the universal approximation theorem for operators (see Thm 4.3).
3. Superior performance. We show that CoNO consistently exhibits superior performance compared to SOTA operators, always ranking among the top two on all the datasets, with an average gain of 10.9% as presented in Table 2.
4. Data efficiency and robustness to noise. CoNO demonstrates superior performance even with limited data samples and training epochs. CoNO is more robust than existing methods when noise is injected into the training data, and performs better than SOTA methods even under 0.1% data noise.

## 2 Related Work

Table 1: A comprehensive comparative analysis of features of state-of-the-art operators with CoNO. "*" denotes not applicable.

Features | FNO | LSM | CoNO (Ours)
---|---|---|---
Integral Kernel | Frequency | Spatial | Spatial-Frequency
Elementary function | Sine | * | Linear Frequency Modulation
Representation | Real | Real | Complex
Pertinent Signal | Stationary | * | Time Varying Signal
$\alpha$ Parameter | Fixed ($90^{\circ}$) | Fixed ($0^{\circ}$) | Not Fixed (Learnable)
Applicability | Individual | Individual | Unified

Neural Operators (NO): Neural operators have shown promise in solving PDEs in a data-driven fashion [35]. Lu et al. [45] introduced DeepOnet, theoretically establishing its universal approximation capabilities. The DeepOnet architecture comprises a branch network and a trunk network, with the former dedicated to learning the input function operator and the latter tasked with learning the function space onto which it is projected. Another famous architecture, the FNO, proposed by Li et al. [41], utilizes a frequency domain method. FNO incorporates Fourier kernel-based integral transformations through fast Fourier transform and projection blocks. An enhanced version, F-FNO [58], improves upon the original FNO by integrating distinct spectral mixing and residual connections. Subsequently, various kernel integral neural operators based on transformations have emerged. For example, Fanaskov and Oseledets [18] introduced spectral methods, such as Chebyshev and Fourier series, to mitigate aliasing errors and enhance clarity in FNO outputs. Li et al. [42] incorporated specialized learnable transforms to facilitate operator learning on irregular domains, achieved by transforming them into uniform latent meshes. Kovachki et al. [35] demonstrated that the well-known attention mechanism can be seen as a particular case of a neural operator learning the integral kernel, specifically applied to irregular meshes for solving PDEs, a setting that Geo-FNO [42] handles differently.
Cao [10] employed two self-attention-based operators without a softmax layer, accompanied by a theoretical interpretation. Recently, Hao et al. [26] introduced the GNOT operator, featuring a linear cross-attention block designed to enhance the encoding of irregular geometries. However, transformer-based operators are susceptible to issues arising from limited data samples, displaying a tendency to overfit the training data without exhibiting robust generalization. In addressing the challenges posed by multiscale PDEs, Liu et al. [43] proposed a hierarchical attention-based operator.

Fractional Fourier Transform (FrFT): The FrFT represents a generalization of the classical Fourier Transform (FT), providing robust capabilities for spectral analysis. It achieves this by transforming input signals into an intermediate domain between the time and frequency domains, thereby establishing a time-frequency representation [51]. The FrFT is particularly effective in processing non-stationary signals, commonly called "chirp signals", i.e., signals whose frequency changes over time. App. Fig. 7 illustrates the efficacy of employing the FrFT for noise filtering within the signal spectrum through rotation of the fractional-order axis. In contrast to the FrFT, alternative signal processing techniques such as the wavelet or Gabor transforms do not provide a joint signal energy distribution across both the time and frequency domains. Additionally, these alternatives often grapple with challenges related to high computational complexity, the selection of wavelet functions, and sensitivity to signal noise. The FrFT, with its distinct advantages, has found applications in various domains, ranging from solving differential equations [46], wireless communication [39], and biomedical signal processing [23] to image encryption and image compression [47]. Yu et al. [71] demonstrated the benefits of employing the FrFT over conventional convolution across various tasks in computer vision, including segmentation, object detection, classification, and super-resolution.

Complex Valued Neural Networks (CVNNs): A CVNN incorporates complex-valued parameters and variables within its architecture, enabling the representation of both magnitude and phase information in the neural network [37, 13]. The utilization of CVNNs encompasses diverse advantages, extending from biological to signal processing applications. Danihelka et al. [14] demonstrated that complex numbers enhance efficiency and stability in information retrieval processes. Additionally, Arjovsky et al. [4] introduced complex recurrent neural networks (RNNs), highlighting that unitary matrices offer a more intricate representation, thereby mitigating issues associated with vanishing and exploding gradients. In image processing, phase information is a critical descriptor, offering detailed insights about image shape, orientation, and edges. Oppenheim and Lim [50] showed that the information encapsulated in the phase of an image is sufficient to recover a substantial proportion of the encoded magnitude information. CVNNs have been a research focus for a long time [20, 31, 29, 49]. Recently, Geuchen et al. [22] established the universal approximation capabilities of CVNNs for deep, narrow architectures, significantly contributing to understanding their expressive power. Prior works [54, 60, 4, 14, 12, 21] have made noteworthy strides in both experimental and theoretical aspects of CVNNs.
In the domain of computer vision, scattering-transform-based complex neural networks have demonstrated considerable promise, achieving performance on par with real-valued counterparts while employing significantly fewer parameters [33, 67, 53]. In NLP, complex embeddings have been incorporated for semantic and phonetic processing of natural languages [59, 16]. Moreover, Yang et al. [70] and Dong et al. [17] showcased the advantages of complex-valued transformers for the NLP community. Despite these notable applications across various domains, the applicability of CVNNs within the SciML community remains largely unexplored.

## 3 Preliminaries

Problem Setting: We follow the notation of Li et al. [41]. Let $D\subset\mathbb{R}^{d}$ be a bounded open set, with $A=A(D;\mathbb{R}^{d_{a}})$ and $U=U(D;\mathbb{R}^{d_{u}})$ separable Banach spaces of functions taking values in $\mathbb{R}^{d_{a}}$ and $\mathbb{R}^{d_{u}}$, respectively. Consider $\mathcal{G}^{\dagger}:A\rightarrow U$ to be a nonlinear surrogate mapping arising from the solution operator for a parametric PDE. It is assumed that there is access to i.i.d. observations ${(a_{j},u_{j})}_{j=1}^{N}$, where $a_{j}\sim\mu$ is drawn from the underlying probability measure $\mu$ supported on $A$, and $u_{j}=\mathcal{G}^{\dagger}(a_{j})$. The objective of operator learning is to construct an approximation of $\mathcal{G}^{\dagger}$ via a parametric mapping $\mathcal{G}:A\times\Theta\rightarrow U$, or equivalently, $\mathcal{G}_{\theta}:A\rightarrow U,\theta\in\Theta$, within a finite-dimensional parameter space $\Theta$. The aim is to select $\theta^{\dagger}\in\Theta$ such that $\mathcal{G}(\cdot,\theta^{\dagger})=\mathcal{G}_{\theta^{\dagger}}\approx\mathcal{G}^{\dagger}$. This framework facilitates learning in infinite-dimensional spaces, as the solution to the optimization problem in Eq. 1, constructed using a loss function $\mathcal{L}:U\times U\rightarrow\mathbb{R}$:

$\min_{\theta\in\Theta}\mathbb{E}_{a\sim\mu}\left[\mathcal{L}(\mathcal{G}(a,\theta),\mathcal{G}^{\dagger}(a))\right],$ (1)

The optimization problem is solved in operator learning frameworks using a data-driven empirical approximation of the loss function, akin to the regular supervised learning approach with train-test observations. Usually, $\mathcal{G}_{\theta}$ is parameterized using deep neural networks.

Fractional Fourier Transform (FrFT): Inspired by the kernel formulation for solving linear PDEs using Green's function, we construct the model $\mathcal{G}_{\theta}$ employing an iterative approach to map an input function $a$ to an output function $u$ within the CoNO framework, as detailed in Sec. 4. In CoNO, the kernel integral is formulated using the FrFT with a learnable order. The fractional order $\alpha$ ($\alpha\in\mathbb{R}$) parameterizes the FrFT, which corresponds to the $\alpha$-th power of the Fourier Transform (FT) and is denoted by $\mathcal{F}^{\alpha}$.

###### Definition 3.1 (FrFT).
###### Definition 3.1 (FrFT).

The fractional Fourier transform with angle $\alpha$ of a signal $f(y)$ is defined as:

$\mathcal{F}^{\alpha}(f)(m)=\int_{-\infty}^{\infty}f(y)\mathcal{K}_{\alpha}(m,y)\,dy,$ (2)

where

$\mathcal{K}_{\alpha}(m,y)=\begin{cases}c(\alpha)\exp\{j\pi[a(\alpha)(m^{2}+y^{2})-2b(\alpha)my]\}&\text{if }\alpha\notin\pi\mathbb{Z},\\ \delta(m-y)&\text{if }\alpha\in 2\pi\mathbb{Z},\\ \delta(m+y)&\text{if }\alpha\in 2\pi\mathbb{Z}+\pi,\end{cases}$

with $a(\alpha)=\cot\alpha$, $b(\alpha)=\csc\alpha$, and $c(\alpha)=\sqrt{1-j\cot\alpha}$. Here $\mathcal{F}^{\alpha}(f)(m)$ denotes the $m^{th}$ fractional Fourier coefficient of order $\alpha$ of $f$.

###### Remark 3.2.

For $\alpha=\frac{\pi}{2}$ we have $\cot\alpha=0$ and $\csc\alpha=1$, so Eq. 2 reduces to the standard Fourier Transform (FT).

Complex Valued Neural Networks (CVNNs): In the CoNO framework, the kernel integration in the operator $\mathcal{G}_{\theta}$ is performed in the complex-valued domain using CVNNs. A complex number is represented either by its real and imaginary parts or by its magnitude and phase [59]:

$z=x+jy=|z|e^{j\angle z}$ (3)

where $j=\sqrt{-1}$ is the imaginary unit, $x$ and $y$ are the real and imaginary parts of $z$, and $|z|$ and $\angle z$ are its magnitude and phase.

###### Definition 3.3 (Complex Valued Activation).

Let $z\in\mathbb{C}$ be a complex number with real part $\text{Re}(z)$ and imaginary part $\text{Im}(z)$. The Complex GeLU (CGeLU) activation function is defined as follows:

$\text{CGeLU}(z)=\text{GeLU}(\text{Re}(z))+j\cdot\text{GeLU}(\text{Im}(z)),$ (4)

where $\text{GeLU}(\cdot)$ is the Gaussian Error Linear Unit activation function [28]. The CGeLU activation function satisfies the Cauchy-Riemann equations when the real and imaginary parts of $z$ are strictly positive or negative.

Complex Valued Back-propagation: Complex-valued back-propagation extends the traditional back-propagation algorithm to complex numbers using mathematical tools such as the Wirtinger calculus [3], enabling the training of neural networks with complex-valued weights and activations and allowing the modeling of intricate relationships within complex data domains [5, 13].
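As a concrete illustration of Definition 3.3, such a split-type complex activation takes only a few lines of PyTorch; `cgelu` below is our own illustrative helper, applying GeLU independently to the real and imaginary parts.

```python
import torch
import torch.nn.functional as F

def cgelu(z: torch.Tensor) -> torch.Tensor:
    # CGeLU(z) = GeLU(Re z) + j * GeLU(Im z), applied elementwise
    # to a complex tensor (Eq. 4).
    return torch.complex(F.gelu(z.real), F.gelu(z.imag))

# usage on a complex-valued tensor
z = torch.randn(4, dtype=torch.cfloat, requires_grad=True)
out = cgelu(z)
```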
## 4 Complex Neural Operator (CoNO)

### 4.1 Proposed Method

Here, we introduce our framework, the Complex Neural Operator (CoNO), depicted in Fig. 2.

Figure 2: CoNO Architecture Overview. (Top) (1) The input function $a(x)$ undergoes a deformation $\phi^{-1}$ that converts an irregular mesh into a uniform mesh. (2) The deformed input is lifted to a higher-dimensional channel space using a neural network. (3) Iterative CoNO layers are applied in the complex domain, each consisting of a fractional integral kernel. (4) The output is projected back to a lower-dimensional channel space using a neural network. (5) The solution $u(x)$ is obtained by applying the deformation $\phi$. (Bottom) Zoomed-in view of the FrFT integral kernel defined in Eq. 6, with learnable parameters $R^{\alpha}$, $R^{\alpha^{\prime}}$, $W$ and fractional orders $\alpha$ and $\alpha^{\prime}$.

Overview: Suppose $\mathcal{G}_{\theta}$ represents an iterative sequence of functions, i.e., an operator $\mathcal{G}_{\theta}:A\rightarrow U$, where $A=A(D;\mathbb{R}^{d_{a}})$ and $U=U(D;\mathbb{R}^{d_{u}})$ are separable Banach spaces of functions taking values in $\mathbb{R}^{d_{a}}$ and $\mathbb{R}^{d_{u}}$, respectively. Our goal is to construct the operator in a structure-preserving manner, band-limiting functions over a given spectrum [62] so as to preserve complex continuous-discrete equivalence, such that the Shannon-Whittaker-Kotel'nikov theorem is obeyed for all continuous operations [61]. With this, our operator CoNO, denoted by $\mathcal{G}_{\theta}$, is defined as follows:

$\mathcal{G}_{\theta}=\phi\circ\mathcal{Q}\circ\mathcal{L}_{l}\circ\ldots\circ\mathcal{L}_{1}\circ\mathcal{P}\circ\phi^{-1},$ (5)

where $\circ$ denotes composition. On irregular geometries, $\phi$ is a function modeled by a neural network that maps the irregular domain onto a regular latent mesh for operator learning. The operators $\mathcal{P}:\mathbb{R}^{d_{a}}\rightarrow\mathbb{R}^{d_{1}}$ and $\mathcal{Q}:\mathbb{R}^{d_{l}}\rightarrow\mathbb{R}^{d_{u}}$ are the lifting and projection operations, encoding lower-dimensional spaces into higher-dimensional ones and vice versa; inspired by the Koopman operator [7], this helps convert nonlinear dynamics into (approximately) linear dynamics. The operator consists of $l$ layers of nonlinear integral operators $\sigma(\mathcal{L}_{i})$ (Eq. 6), where the activation function $\sigma$ is applied after layers $\mathcal{L}_{1},\ldots,\mathcal{L}_{l-1}$ to introduce nonlinearity, akin to standard neural networks, so that highly nonlinear operators can be learned; $\theta$ denotes all learnable parameters of the operator.

Non-linear Operator Layer, $\mathcal{L}$: In CoNO, the complex kernel operator layer is defined as follows:

$v^{l+1}(x)=\sigma\left(Wv^{l}(x)+b^{l}+\mathcal{K}(a,\alpha)v^{l}(x)\right)\quad\forall x\in D$ (6)

where $v^{l}\in\mathbb{C}^{d_{l}}$ is the representation at layer $l$; $\mathcal{K}:V^{l}\times\Theta_{K}\to\mathcal{L}(V^{l+1}(D;\mathbb{C}^{d_{l}}),V^{l+1}(D;\mathbb{C}^{d_{l+1}}))$ maps to bounded linear operators on $V^{l+1}(D;\mathbb{C}^{d_{l+1}})$ (the integral operator is linear and bounded, as shown in App. section B); $\sigma$ is a nonlinear complex activation function; $W:\mathbb{C}^{d_{l}}\rightarrow\mathbb{C}^{d_{l+1}}$ is a linear transformation; $\mathcal{K}(a,\alpha)$ is an integral kernel parameterized by a CVNN (Eq. 7); $a$ is a complex input function; $\alpha$ belongs to the parameter space of $\mathcal{G}_{\theta}$; and $b^{l}$ is the bias.

Fractional Integral Operator, $\mathcal{K}(a,\alpha)$: In CoNO, the integral operator $\mathcal{K}(a,\alpha):V^{l}(D)\rightarrow V^{l}(D)$ is defined as follows:

$\mathcal{K}(a,\alpha)v^{l}(x)=\sum_{\alpha,\alpha^{\prime}}\mathcal{F}^{-\alpha}(R^{\alpha}\cdot(\mathcal{F}^{\alpha}v^{l}))(x)\quad\forall x\in D$ (7)

where $\mathcal{F}^{\alpha}$ denotes the FrFT of order $\alpha$ and $\mathcal{F}^{-\alpha}$ its inverse, $R^{\alpha}$ is a learnable function estimated from data, $a$ is a complex input function, and $\alpha$ is a learnable fractional order, likewise learned from data, belonging to the parameter space of $\mathcal{G}_{\theta}$. Note that although $R^{\alpha}$ can be parameterized by a neural network, in the present work we observed that a linear transformation performed slightly better than a deep neural network and therefore used the former. In the subsequent subsection, we prove that the proposed operator can map between infinite-dimensional spaces, enabling the learning of nonlinear operators.
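To make Eqs. 6-7 concrete, the sketch below implements a single fractional spectral layer for 1-D signals in PyTorch. It is illustrative only: the discrete FrFT is realized here as a fractional matrix power of the unitary DFT built from a Schur eigendecomposition (one of many valid fractional powers; our actual implementation uses the Hermite-Gaussian eigenbasis construction of Candan et al. [9], see App. D.2), the order $\alpha$ is kept fixed rather than learned, and `cgelu` refers to the activation sketched in Sec. 3.

```python
import numpy as np
import torch
from scipy.linalg import dft, schur

def frft_matrix(n: int, alpha: float) -> torch.Tensor:
    # F^alpha = Q diag(lam^alpha) Q^H for the unitary DFT F = Q diag(lam) Q^H,
    # using principal-branch powers of the unit-circle eigenvalues.
    T, Q = schur(dft(n, scale='sqrtn'), output='complex')
    lam_pow = np.exp(1j * alpha * np.angle(np.diag(T)))
    return torch.from_numpy((Q * lam_pow) @ Q.conj().T).to(torch.cfloat)

class FractionalSpectralLayer(torch.nn.Module):
    """v^{l+1} = cgelu(W v + b + F^{-alpha}(R . F^{alpha} v)), cf. Eqs. 6-7."""
    def __init__(self, n: int, channels: int, alpha: float = 0.5):
        super().__init__()
        self.Fa, self.Fia = frft_matrix(n, alpha), frft_matrix(n, -alpha)
        self.R = torch.nn.Parameter(0.02 * torch.randn(channels, n, dtype=torch.cfloat))
        self.W = torch.nn.Parameter(torch.eye(channels, dtype=torch.cfloat))
        self.b = torch.nn.Parameter(torch.zeros(channels, 1, dtype=torch.cfloat))

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (batch, channels, n) complex tensor on a uniform grid
        spec = (self.R * (v @ self.Fa.T)) @ self.Fia.T   # kernel term, Eq. 7
        local = torch.einsum('oc,bcn->bon', self.W, v)   # pointwise linear term
        return cgelu(local + self.b + spec)              # cgelu from the Sec. 3 sketch
```

For multidimensional inputs, Thm. 4.1 below justifies applying the one-dimensional transform axis by axis.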
### 4.2 Theoretical Analysis

In this subsection, we present a theoretical analysis of several components of the proposed method. Specifically, we prove the following: (i) the $N$-dimensional FrFT decomposes into one-dimensional FrFTs, a property CoNO relies on since it uses the multidimensional FrFT (Thm. 4.1); (ii) multiplication in the fractional domain is equivalent to convolution in the spatial domain (Thm. 4.2); and (iii) CoNO enjoys a universal approximation property (Thm. 4.3).

###### Theorem 4.1 (Product Rule for FrFT).

Suppose $\mathcal{K}_{\alpha}(m_{1},x_{1},m_{2},x_{2},\ldots,m_{N},x_{N})$ denotes the fractional integral kernel of the $N$-dimensional FrFT of a signal $f(x_{1},x_{2},\ldots,x_{N})$, with $m_{1},m_{2},\ldots,m_{N}$ denoting the multidimensional FrFT coefficients. Then the following factorization of the fractional kernel holds:

$\mathcal{K}_{\alpha}(m_{1},x_{1},m_{2},x_{2},\ldots,m_{N},x_{N})=\prod_{i=1}^{N}\mathcal{K}_{\alpha}(m_{i},x_{i})$ (8)

Proof. Refer to App. B.9 for the proof. Thm. 4.1 shows that an $N$-dimensional fractional integral kernel is a product of one-dimensional fractional integral kernels along each dimension.

###### Theorem 4.2 (Convolution Theorem for FrFT).

Suppose $f$ and $g$ are square-integrable functions. Define $e_{\alpha}(m)=e^{i\pi|m|^{2}\cot\alpha}$ and $h(x)=(f\ast g)(x)$, where $\ast$ denotes the fractional convolution (Def. B.11 in App. B), and let $\mathcal{F}^{\alpha}$, $\mathcal{G}^{\alpha}$, and $\mathcal{H}^{\alpha}$ denote the FrFTs of $f$, $g$, and $h$, respectively. Then,

$\mathcal{H}^{\alpha}(m)=\mathcal{F}^{\alpha}(m)\mathcal{G}^{\alpha}(m)e_{-\alpha}(m).$ (9)

Proof. Refer to App. B.12 for the proof. Thm. 4.2 states that the convolution of two functions in the spatial domain is equivalent to the product of their respective fractional Fourier transforms, multiplied by a chirp factor depending on the fractional order. For $\alpha=\pi/2$, $\cot\alpha=0$ gives $e_{-\alpha}\equiv 1$, recovering the classical convolution theorem of the FT.

Finally, we present the universal approximation theorem of CoNO, analogous to that for FNO [34].

###### Theorem 4.3 (Universal Approximation).

Let $s,s^{\prime}>0$ and $\alpha\in\mathbb{R}$. Suppose $\mathcal{G}:H^{s}(\mathcal{T}_{\alpha}^{d};\mathbb{R}^{d_{\text{a}}})\rightarrow H^{s^{\prime}}(\mathcal{T}_{\alpha}^{d};\mathbb{R}^{d_{\text{u}}})$ is a continuous operator between Sobolev spaces, where $\mathcal{T}_{\alpha}^{d}$ denotes the fractional-order torus and $d_{a},d_{u}\in\mathbb{N}$, and let $K\subset H^{s}(\mathcal{T}_{\alpha}^{d};\mathbb{R}^{d_{\text{a}}})$ be a compact subset. Then, for any $\varepsilon>0$, there exist CoNO layers $\mathcal{N}:H^{s}(\mathcal{T}_{\alpha}^{d};\mathbb{R}^{d_{\text{a}}})\rightarrow H^{s^{\prime}}(\mathcal{T}_{\alpha}^{d};\mathbb{R}^{d_{\text{u}}})$ satisfying:

$\sup_{v\in K}\|\mathcal{G}(v)-\mathcal{N}(v)\|_{L^{2}}\leq\varepsilon$ (10)

Proof. Refer to App. B.15 for the proof.

## 5 Numerical Experiments

This section provides a thorough empirical investigation of CoNO against multiple vision models and neural operator baselines. We conduct extensive experiments on a diverse set of challenging benchmarks spanning various domains to demonstrate the efficacy of the proposed method.

Table 2: Main results with sixteen baselines on all benchmark datasets: the mean relative $\ell_{2}$ error (Eq. 11) is reported as the evaluation metric, where a smaller error indicates superior performance. "INCREMENT %" refers to the relative error reduction with respect to the second-best model on each benchmark. 
Specifically focusing on the 2D Navier-Stokes benchmark, a detailed comparison is conducted with KNO [69] and TF-Net [64], as they are designed for auto-regressive (time-dependent) tasks; entries '\' are accordingly left blank for these baselines on the remaining benchmarks. Instances marked with '*' indicate that the baseline cannot handle the benchmark. In the color legend, blue represents the best performance, green the second-best, and orange the third-best among the baselines.

MODEL | Elasticity-P | Elasticity-G | Plasticity | Navier-Stokes | Darcy | Airfoil | Pipe
---|---|---|---|---|---|---|---
U-NET [2015] | 0.0235 | 0.0531 | 0.0051 | 0.1982 | 0.0080 | 0.0079 | 0.0065
RESNET [2016] | 0.0262 | 0.0843 | 0.0233 | 0.2753 | 0.0587 | 0.0391 | 0.0120
TF-NET [2020] | \ | \ | \ | 0.1801 | \ | \ | \
SWIN [2021] | 0.0283 | 0.0819 | 0.0170 | 0.2248 | 0.0397 | 0.0270 | 0.0109
DEEPONET [2021] | 0.0965 | 0.0900 | 0.0135 | 0.2972 | 0.0588 | 0.0385 | 0.0097
FNO [2020] | 0.0229 | 0.0508 | 0.0074 | 0.1556 | 0.0108 | 0.0138 | 0.0067
U-FNO [2022] | 0.0239 | 0.0480 | 0.0039 | 0.2231 | 0.0183 | 0.0269 | 0.0056
WMT [2021] | 0.0359 | 0.0520 | 0.0076 | 0.1541 | 0.0082 | 0.0075 | 0.0077
GALERKIN [2021] | 0.0240 | 0.1681 | 0.0120 | 0.2684 | 0.0170 | 0.0118 | 0.0098
SNO [2022] | 0.0390 | 0.0987 | 0.0070 | 0.2568 | 0.0495 | 0.0893 | 0.0294
U-NO [2022] | 0.0258 | 0.0469 | 0.0034 | 0.1713 | 0.0113 | 0.0078 | 0.0100
HT-NET [2022] | 0.0372 | 0.0472 | 0.0333 | 0.1847 | 0.0079 | 0.0065 | 0.0059
F-FNO [2021] | 0.0263 | 0.0475 | 0.0047 | 0.2322 | 0.0077 | 0.0078 | 0.0070
KNO [2023] | \ | \ | \ | 0.2023 | \ | \ | \
GNOT [2023] | 0.0315 | 0.0494 | * | 0.1670 | 0.0105 | 0.0081 | *
LSM [2023] | 0.0218 | 0.0408 | 0.0025 | 0.1535 | 0.0065 | 0.0062 | 0.0050
CoNO (Ours) | 0.0210 | 0.0436 | 0.0019 | 0.1287 | 0.0051 | 0.0057 | 0.0054
INCREMENT % | 3.8% | -6.8% | 31.6% | 19.3% | 27.5% | 8.7% | -8.0%

### 5.1 Experiments Details and Main Result

Benchmarks: We assess the performance of our model on the Darcy and Navier-Stokes [41] benchmarks to gauge its proficiency on regular grids. Subsequently, we extend our experimentation to benchmarks featuring irregular geometries: Airfoil, Plasticity, and Pipe [42], modeled using structured meshes, and Elasticity [42], represented as a point cloud. Refer to App. section C for more details about the benchmarks and tasks.

Baselines: We assess CoNO against sixteen established models across seven benchmarks, including baselines from vision models (U-Net [55], ResNet [27], SwinTransformer [44]) and thirteen baselines specifically designed for PDEs (DeepONet [45], TF-Net [64], FNO [41], U-FNO [65], WMT [25], GalerkinTransformer [10], SNO [18], U-NO [52], HT-Net [43], F-FNO [58], KNO [69], GNOT [26], LSM [68]). Notably, for the Elasticity-P benchmark in the point-cloud setting, we incorporate the specialized transformation proposed by geo-FNO [42] at both the start and end of these models; this transformation converts irregular input domains into, or back from, a uniform mesh.

Evaluation Metric: The mean relative $\ell_{2}$ error is used throughout the experiments:

$\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\frac{\|\mathcal{G}_{\theta}(a_{i})-\mathcal{G}^{\dagger}(a_{i})\|_{2}}{\|\mathcal{G}^{\dagger}(a_{i})\|_{2}}$ (11)

Here, the regular mean-squared error (MSE) is augmented with the normalizer $\|\mathcal{G}^{\dagger}(a_{i})\|_{2}$ to account for discrepancies in absolute scale across the different benchmarks, as described in [41].
Implementation Details: We use the mean relative $\ell_{2}$ error (Eq. 11) as the training and evaluation metric and train all models for 500 epochs using the Adam optimizer [32]. Comprehensive details are provided in App. section D. All experiments are conducted on a Linux machine running Ubuntu 20.04.3 LTS with an Intel(R) Core(TM) i9-10900X processor and a single NVIDIA RTX A6000 GPU with 48 GB RAM.

Empirical Results: As illustrated in Table 2, CoNO consistently ranks among the top two models on every benchmark dataset, outperforming all baselines on most of them. This superiority is evident across benchmarks of diverse geometries and dimensions, with an average improvement of 10.9%. On the Pipe and Elasticity-G benchmarks, where CoNO is second best, our findings suggest that UNet-like architectures capture the underlying solution more effectively than CoNO. When applied to time-dependent PDEs, CoNO surpasses all previously established baselines, achieving an average improvement of 25.5%. This result underscores the efficacy of capturing frequency variation over time with the FrFT in the complex domain, showcasing the promise of our approach for temporal dynamics in PDEs.

### 5.2 Ablation Study and Additional Results

Table 3: Comprehensive ablation study of CoNO: investigating the impact of removing individual components on the Navier-Stokes and Darcy Flow benchmarks (w/o denotes performance without that component).

DESIGN | Navier-Stokes | Darcy
---|---|---
w/o Bias | 0.1425 | 0.0080
w/o FrFT | 0.1535 | 0.0086
w/o Complex NN | 0.1390 | 0.0072
w/o Alias Free Activation | 0.1295 | 0.0052
CoNO | 0.1287 | 0.0050

Figure 3: Results of different methods on the fluid datasets Darcy Flow (Left) and Navier-Stokes (Right). To compare the predicted outputs, we plot heatmaps of the absolute difference between ground truth and prediction. See App. Fig. 10 for further solid-physics and fluid-physics showcases.

To assess the efficacy of each component of the CoNO operator, we conducted a comprehensive ablation study by systematically excluding individual components. The results in Table 3 indicate that all components are crucial, as evidenced by notable changes in the $\ell_{2}$ error when an element is added or removed. Specifically, removing the FrFT block results in a substantial degradation in performance, underscoring its effectiveness in capturing non-stationary signals; the absence of the CVNN has a comparable adverse effect. Notably, our analysis reveals that the bias term is instrumental in introducing high-frequency components, further emphasizing its importance. Interestingly, CoNO's performance degraded only slightly after removing the alias-free activation function, which raises an intriguing question about its necessity for the operator's efficiency.

Visual Demonstrations: For an intuitive view of performance, Fig. 3 presents a comparative analysis of FNO, LSM, and CoNO. Notably, CoNO exhibits remarkable proficiency on time-dependent PDEs, including Navier-Stokes and Plasticity, as depicted in App. Fig. 10. Moreover, CoNO outperforms LSM and FNO significantly on Darcy, by 27%, with fewer artifacts in its predictions. 
Additionally, CoNO excels in capturing singularities around corners, as illustrated on the elasticity dataset in App. Fig. 10, emphasizing its robust and superior performance.

Performance across various Resolutions: As shown in Fig. 4 (Left), CoNO consistently outperforms the other operators on the Darcy flow PDE across resolutions. Notably, CoNO remains stable across resolutions, adhering to the principle of discrete-continuous equivalence (App. section F.1). This contrasts with HT-Net, which degrades at very high resolutions. Furthermore, FNO and CoNO are the only operators here capable of zero-shot super-resolution without explicit training.

Out of Distribution Generalization: In this investigation, we trained our model on the Navier-Stokes dataset with a viscosity coefficient of $10^{-5}$ and then assessed out-of-distribution generalization by evaluating the trained model at a viscosity coefficient of $10^{-4}$. Our findings consistently reveal that CoNO generalizes significantly better, with an improvement of $64.3\%$ over FNO. This experiment also highlights the significance of capturing latent-variable information and of the UNet architecture: LSM, which does both, outperforms all other operators, including CoNO, on this task (App. section F.3).

Data Efficiency: As demonstrated in Fig. 4 (Middle), CoNO matches the second-best operator, LSM, when trained on only 60% of the data. Furthermore, across various training-set ratios, CoNO consistently outperforms all other operators, underscoring its superior data efficiency relative to SOTA operators (App. section F.6).

Robustness to Noise: In this study, we introduced different levels of Gaussian noise into the training data. The noise is added as follows: each input sample $x$ in the dataset $D$ is perturbed with Gaussian noise $\gamma\,N(0,\sigma^{2}_{D})$, where $\sigma^{2}_{D}$ is the variance of the entire dataset and $\gamma$ specifies the noise intensity. As shown in Fig. 4 (Right), CoNO remains effective in the presence of noise in the training dataset; in particular, training with 0.1% noise still yields a better result than the LSM operator trained without noise, confirming CoNO's robustness to noisy data (App. section F.2).

Figure 4: (Left) Model performance under different resolutions on Darcy. (Middle) Model performance under different training-dataset ratios on Darcy. (Right) Model performance in the presence of noise on Darcy. Lower $\ell_{2}$ loss indicates better performance.

Figure 5: Learning curves for Darcy Flow (Left) and Navier-Stokes (Right), where the x-axis denotes epochs and the y-axis the $\ell_{2}$ error.

Training Stability: CoNO trains more stably, as visually observed in Fig. 5: the model exhibits reduced oscillations and consistently outperforms FNO and LSM. Remarkably, CoNO matches the best LSM performance within the first 200 epochs, demonstrating faster and more stable convergence during training. 
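The noise-injection procedure above amounts to a one-line perturbation of each training sample; a minimal sketch, where `dataset_var` is assumed to be precomputed over the entire training set:

```python
import torch

def add_noise(x: torch.Tensor, dataset_var: float, gamma: float) -> torch.Tensor:
    # Perturb a training sample with gamma * N(0, sigma_D^2), where sigma_D^2
    # (dataset_var) is the variance computed over the whole dataset.
    return x + gamma * dataset_var**0.5 * torch.randn_like(x)
```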
Long Time Prediction on Navier-Stokes: We evaluated the long-term behavior of the proposed operator by training it on the Navier-Stokes equation (viscosity coefficient $10^{-4}$); App. section F.5 shows that CoNO extrapolates beyond the prediction horizon better than LSM and FNO.

## 6 Conclusion

Altogether, we introduce a new operator-learning paradigm, CoNO, which capitalizes on CVNNs and the FrFT as the integral operator. We theoretically prove that CoNO satisfies a universal approximation theorem. We demonstrate the effectiveness of leveraging the expressive power of CVNNs within the operator-learning framework to construct resilient, data-efficient, and superior neural operators for learning function-to-function mappings. Empirically, we show that CoNO achieves SOTA results in terms of performance, zero-shot super-resolution, out-of-distribution generalization, and noise robustness. These advances make CoNO a promising method for developing efficient operators for real-time PDE inference, offering new tools for the SciML community.

Limitations and future work. Although CoNO shows improved empirical performance, the specific features that enable this superiority remain unclear; understanding the loss landscape and learning mechanisms behind this performance is crucial. Additionally, making CoNO computationally efficient is essential to accelerate inference. Further limitations and future work are discussed in App. section G, and the broader impacts of the work in App. section H.

## References

* Ahmad et al. [2020] Imtiaz Ahmad, Hijaz Ahmad, Phatiphat Thounthong, Yu-Ming Chu, and Clemente Cesarano. Solution of multi-term time-fractional pde models arising in mathematical biology and physics by local meshless method. _Symmetry_ , 12(7):1195, 2020. * Almeida [1993] Luís B Almeida. An introduction to the angular fourier transform. In _1993 IEEE International Conference on Acoustics, Speech, and Signal Processing_ , volume 3, pages 257–260. IEEE, 1993. * Amin et al. [2011] Md Faijul Amin, Muhammad Ilias Amin, Ahmed Yarub H Al-Nuaimi, and Kazuyuki Murase. Wirtinger calculus based gradient descent and levenberg-marquardt learning algorithms in complex-valued neural networks. In _International Conference on Neural Information Processing_ , pages 550–559. Springer, 2011. * Arjovsky et al. [2016] Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. In _International conference on machine learning_ , pages 1120–1128. PMLR, 2016. * Barrachina et al. [2023] Jose Agustin Barrachina, Chengfang Ren, Gilles Vieillard, Christele Morisseau, and Jean-Philippe Ovarlez. Theory and implementation of complex-valued neural networks. _arXiv preprint arXiv:2302.08286_ , 2023. * Bartolucci et al. [2023] Francesca Bartolucci, Emmanuel de Bézenac, Bogdan Raonić, Roberto Molinaro, Siddhartha Mishra, and Rima Alaifari. Are neural operators really neural operators? frame theory meets operator learning. _arXiv preprint arXiv:2305.19913_ , 2023. * Bevanda et al. [2021] Petar Bevanda, Stefan Sosnowski, and Sandra Hirche. Koopman operator dynamical models: Learning, analysis and control. _Annual Reviews in Control_ , 52:197–212, 2021. * Burark et al. [2024] Priyanshu Burark, Karn Tiwari, Meer Mehran Rashid, AP Prathosh, and NM Anoop Krishnan. Codbench: A critical evaluation of data-driven models for continuous dynamical systems. _Digital Discovery_ , 2024. * Candan et al. [2000] C. 
Candan, M.A. Kutay, and H.M. Ozaktas. The discrete fractional fourier transform. _IEEE Transactions on Signal Processing_ , 48(5):1329–1337, 2000. doi: 10.1109/78.839980. * Cao [2021] Shuhao Cao. Choose a transformer: Fourier or galerkin. _Advances in neural information processing systems_ , 34:24924–24940, 2021. * Chassande-Mottin and Flandrin [1999] Eric Chassande-Mottin and Patrick Flandrin. On the time–frequency detection of chirps1. _Applied and Computational Harmonic Analysis_ , 6(2):252–281, 1999. * Chatterjee et al. [2022] Soumick Chatterjee, Pavan Tummala, Oliver Speck, and Andreas Nürnberger. Complex network for complex problems: A comparative study of cnn and complex-valued cnn. In _2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)_ , pages 1–5. IEEE, 2022. * Chiheb et al. [2017] Trabelsi Chiheb, O Bilaniuk, D Serdyuk, et al. Deep complex networks. In _International Conference on Learning Representations_ , 2017. * Danihelka et al. [2016] Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalchbrenner, and Alex Graves. Associative long short-term memory. In _International conference on machine learning_ , pages 1986–1994. PMLR, 2016. * Debnath and Debnath [2005] Lokenath Debnath and Lokenath Debnath. _Nonlinear partial differential equations for scientists and engineers_. Springer, 2005. * Demir and Ngomo [2021] Caglar Demir and Axel-Cyrille Ngonga Ngomo. Convolutional complex knowledge graph embeddings. In _The Semantic Web: 18th International Conference, ESWC 2021, Virtual Event, June 6–10, 2021, Proceedings 18_ , pages 409–424. Springer, 2021. * Dong et al. [2021] Yihong Dong, Ying Peng, Muqiao Yang, Songtao Lu, and Qingjiang Shi. Signal transformer: Complex-valued attention and meta-learning for signal recognition. _arXiv preprint arXiv:2106.04392_ , 2021. * Fanaskov and Oseledets [2022] Vladimir Fanaskov and Ivan Oseledets. Spectral neural operators. _arXiv preprint arXiv:2205.10573_ , 2022. * Fu et al. [2022] Zunwei Fu, Xianming Hou, and Qingyan Wu. Convergence of fractional fourier series on the torus and applications. _arXiv preprint arXiv:2210.14720_ , 2022. * Georgiou and Koutsougeras [1992] George M Georgiou and Cris Koutsougeras. Complex domain backpropagation. _IEEE transactions on Circuits and systems II: analog and digital signal processing_ , 39(5):330–334, 1992. * Geuchen and Voigtlaender [2024] Paul Geuchen and Felix Voigtlaender. Optimal approximation using complex-valued neural networks. _Advances in Neural Information Processing Systems_ , 36, 2024. * Geuchen et al. [2023] Paul Geuchen, Thomas Jahn, and Hannes Matt. Universal approximation with complex-valued deep narrow neural networks. _arXiv preprint arXiv:2305.16910_ , 2023. * Gómez-Echavarría et al. [2020] Alejandro Gómez-Echavarría, Juan P Ugarte, and Catalina Tobón. The fractional fourier transform as a biomedical signal and image processing tool: A review. _Biocybernetics and Biomedical Engineering_ , 40(3):1081–1093, 2020. * Guan et al. [2021] Steven Guan, Ko-Tsung Hsu, and Parag V Chitnis. Fourier neural operator networks: A fast and general solver for the photoacoustic wave equation. _arXiv preprint arXiv:2108.09374_ , 2021. * Gupta et al. [2021] Gaurav Gupta, Xiongye Xiao, and Paul Bogdan. Multiwavelet-based operator learning for differential equations. _Advances in neural information processing systems_ , 34:24048–24062, 2021. * Hao et al. [2023] Zhongkai Hao, Zhengyi Wang, Hang Su, Chengyang Ying, Yinpeng Dong, Songming Liu, Ze Cheng, Jian Song, and Jun Zhu. 
Gnot: A general neural operator transformer for operator learning. In _International Conference on Machine Learning_ , pages 12556–12569. PMLR, 2023. * He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 770–778, 2016. * Hendrycks and Gimpel [2016] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). _arXiv preprint arXiv:1606.08415_ , 2016. * Hirose [2003] Akira Hirose. _Complex-valued neural networks: theories and applications_ , volume 5. World Scientific, 2003. * Hirose and Yoshida [2012] Akira Hirose and Shotaro Yoshida. Generalization characteristics of complex-valued feedforward neural networks in relation to signal coherence. _IEEE Transactions on Neural Networks and learning systems_ , 23(4):541–551, 2012. * Kim and Adalı [2003] Taehwan Kim and Tülay Adalı. Approximation by fully complex multilayer perceptrons. _Neural computation_ , 15(7):1641–1666, 2003. * Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014. * Ko et al. [2022] Manny Ko, Ujjawal K Panchal, Héctor Andrade-Loarca, and Andres Mendez-Vazquez. Coshnet: A hybird complex valued neural network using shearlets. _arXiv preprint arXiv:2208.06882_ , 2022. * Kovachki et al. [2021a] Nikola Kovachki, Samuel Lanthaler, and Siddhartha Mishra. On universal approximation and error bounds for fourier neural operators. _Journal of Machine Learning Research_ , 22(290):1–76, 2021a. * Kovachki et al. [2021b] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces. _arXiv preprint arXiv:2108.08481_ , 2021b. * Kurth et al. [2023] Thorsten Kurth, Shashank Subramanian, Peter Harrington, Jaideep Pathak, Morteza Mardani, David Hall, Andrea Miele, Karthik Kashinath, and Anima Anandkumar. Fourcastnet: Accelerating global high-resolution weather forecasting using adaptive fourier neural operators. In _Proceedings of the Platform for Advanced Scientific Computing Conference_ , pages 1–11, 2023. * Lee et al. [2022] ChiYan Lee, Hideyuki Hasegawa, and Shangce Gao. Complex-valued neural networks: A comprehensive survey. _IEEE/CAA Journal of Automatica Sinica_ , 9(8):1406–1426, 2022. * Li and Chen [2018] Changpin Li and An Chen. Numerical methods for fractional partial differential equations. _International Journal of Computer Mathematics_ , 95(6-7):1048–1099, 2018. * Li et al. [2018] Yong Li, Zhiqun Song, and Xuejun Sha. The multi-weighted type fractional fourier transform scheme and its application over wireless communications. _EURASIP Journal on wireless communications and networking_ , 2018(1):1–10, 2018. * Li et al. [2023] Zhijie Li, Wenhui Peng, Zelong Yuan, and Jianchun Wang. Long-term predictions of turbulence by implicit u-net enhanced fourier neural operator. _Physics of Fluids_ , 35(7), 2023. * Li et al. [2020] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. _arXiv preprint arXiv:2010.08895_ , 2020. * Li et al. [2022] Zongyi Li, Daniel Zhengyu Huang, Burigede Liu, and Anima Anandkumar. Fourier neural operator with learned deformations for pdes on general geometries. _arXiv preprint arXiv:2207.05209_ , 2022. * Liu et al. 
[2022] Xinliang Liu, Bo Xu, and Lei Zhang. Ht-net: Hierarchical transformer based operator learning model for multiscale pdes. _arXiv preprint arXiv:2210.10890_ , 2022. * Liu et al. [2021] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _Proceedings of the IEEE/CVF international conference on computer vision_ , pages 10012–10022, 2021. * Lu et al. [2021] Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. _Nature machine intelligence_ , 3(3):218–229, 2021. * McBride and Kerr [1987] AC McBride and FH Kerr. On namias’s fractional fourier transforms. _IMA Journal of applied mathematics_ , 39(2):159–175, 1987. * Naveen Kumar et al. [2019] R Naveen Kumar, BN Jagadale, and JS Bhat. A lossless image compression algorithm using wavelets and fractional fourier transform. _SN Applied Sciences_ , 1:1–8, 2019. * Nitta [2002] Tohru Nitta. On the critical points of the complex-valued neural network. In _Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP’02._ , volume 3, pages 1099–1103. IEEE, 2002. * Nitta [2004] Tohru Nitta. Orthogonality of decision boundaries in complex-valued neural networks. _Neural computation_ , 16(1):73–97, 2004. * Oppenheim and Lim [1981] Alan V Oppenheim and Jae S Lim. The importance of phase in signals. _Proceedings of the IEEE_ , 69(5):529–541, 1981. * Ozaktas and Kutay [2001] Haldun M Ozaktas and M Alper Kutay. The fractional fourier transform. In _2001 European Control Conference (ECC)_ , pages 1477–1483. IEEE, 2001. * Rahman et al. [2022] Md Ashiqur Rahman, Zachary E Ross, and Kamyar Azizzadenesheli. U-no: U-shaped neural operators. _arXiv preprint arXiv:2204.11127_ , 2022. * Rawat et al. [2021] Shubhankar Rawat, KPS Rana, and Vineet Kumar. A novel complex-valued convolutional neural network for medical image denoising. _Biomedical Signal Processing and Control_ , 69:102859, 2021. * Reichert and Serre [2013] David P Reichert and Thomas Serre. Neuronal synchrony in complex-valued deep networks. _arXiv preprint arXiv:1312.6115_ , 2013. * Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In _Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18_ , pages 234–241. Springer, 2015. * Sewell [2012] Granville Sewell. _Analysis of a finite element method: PDE/PROTRAN_. Springer Science & Business Media, 2012. * Shabbir [2019] Muhammad Shabbir. _Approximation of chirp functions by fractional Fourier series_. PhD thesis, Dissertation, Lübeck, Universität zu Lübeck, 2019, 2019. * Tran et al. [2021] Alasdair Tran, Alexander Mathews, Lexing Xie, and Cheng Soon Ong. Factorized fourier neural operators. _arXiv preprint arXiv:2111.13802_ , 2021. * Trouillon and Nickel [2017] Théo Trouillon and Maximilian Nickel. Complex and holographic embeddings of knowledge graphs: a comparison. _arXiv preprint arXiv:1707.01475_ , 2017. * Tygert et al. [2016] Mark Tygert, Joan Bruna, Soumith Chintala, Yann LeCun, Serkan Piantino, and Arthur Szlam. A mathematical motivation for complex-valued convolutional networks. _Neural computation_ , 28(5):815–825, 2016. * Unser [2000] Michael Unser. 
Sampling-50 years after shannon. _Proceedings of the IEEE_ , 88(4):569–587, 2000. * Vetterli et al. [2014] M Vetterli, J Kovacevic, and VK Goyal. _Foundations of Signal Processing_. Cambridge University Press, Cambridge, 2014. * Voigtlaender [2023] Felix Voigtlaender. The universal approximation theorem for complex-valued neural networks. _Applied and Computational Harmonic Analysis_ , 64:33–61, 2023. * Wang et al. [2020] Rui Wang, Karthik Kashinath, Mustafa Mustafa, Adrian Albert, and Rose Yu. Towards physics-informed deep learning for turbulent flow prediction. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 1457–1466, 2020. * Wen et al. [2022] Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, and Sally M Benson. U-fno—an enhanced fourier neural operator-based deep-learning model for multiphase flow. _Advances in Water Resources_ , 163:104180, 2022. * Wisdom et al. [2016] Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. _Advances in neural information processing systems_ , 29, 2016. * Worrall et al. [2017] Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow. Harmonic networks: Deep translation and rotation equivariance. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 5028–5037, 2017. * Wu et al. [2023] Haixu Wu, Tengge Hu, Huakun Luo, Jianmin Wang, and Mingsheng Long. Solving high-dimensional pdes with latent spectral models. _arXiv preprint arXiv:2301.12664_ , 2023. * Xiong et al. [2023] Wei Xiong, Xiaomeng Huang, Ziyang Zhang, Ruixuan Deng, Pei Sun, and Yang Tian. Koopman neural operator as a mesh-free solver of non-linear partial differential equations. _arXiv preprint arXiv:2301.10022_ , 2023. * Yang et al. [2020] Muqiao Yang, Martin Q Ma, Dongyu Li, Yao-Hung Hubert Tsai, and Ruslan Salakhutdinov. Complex transformer: A framework for modeling complex-valued sequence. In _ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 4232–4236. IEEE, 2020. * Yu et al. [2024] Hu Yu, Jie Huang, Lingzhi Li, Feng Zhao, et al. Deep fractional fourier transform. _Advances in Neural Information Processing Systems_ , 36, 2024. * Zheng et al. [2023] Hongkai Zheng, Weili Nie, Arash Vahdat, Kamyar Azizzadenesheli, and Anima Anandkumar. Fast sampling of diffusion models via operator learning. In _International Conference on Machine Learning_ , pages 42390–42402. PMLR, 2023.

## Appendix

## Appendix A Fractional Fourier Transform

Figure 6: The figure illustrates the FrFT of the Navier-Stokes partial differential equation (PDE) across varying $\alpha$ values. The top row displays heatmaps of the magnitude of the transformed data, while the bottom row shows the corresponding phase angles. Each column corresponds to a distinct $\alpha$ value, highlighting the impact of the fractional order on the learned representation of the Navier-Stokes PDE. Observe that for the FT ($\alpha=1$), most of the spectrum is concentrated at low frequencies.

Figure 7: The figure illustrates the inherent optimality of the FrFT for signal filtering of non-stationary image signals, showcasing its ability to separate effects along the fractional-order axis and examining its relationship with rotation of the spatial-frequency plane. 
Fig. 7 illustrates the intrinsic advantage of the Fractional Fourier Transform (FrFT) for filtering non-stationary image signals, in particular its ability to separate signal effects along the fractional-order axis, and examines the relationship between the FrFT and rotation of the spatial-frequency plane. This visualization contributes to a nuanced understanding of the FrFT's strengths for non-stationary image signals and its potential applications in the broader signal-processing domain.

Fig. 6 shows that as the parameter $\alpha$ increases, the spectral distribution becomes increasingly concentrated around the origin. This concentration results in a loss of spatial information while emphasizing frequency information. Conversely, lower values of $\alpha$ preserve information in the phase domain, which, however, becomes increasingly noisy when transformed into the frequency domain. This behavior highlights the delicate balance between spatial and frequency information and the critical role of $\alpha$ in shaping the signal's representation.

## Appendix B Mathematical Analysis

In this section, we first present the necessary definitions and then prove the theorems stated in the main text, together with a theoretical analysis of several components of the proposed method.

###### Definition B.1 (Fractional Torus). [19]

Let $\alpha\in\mathbb{R}$ with $\alpha\notin\pi\mathbb{Z}$. The fractional torus of order $\alpha$, denoted $\mathcal{T}_{\alpha}^{n}$, is the cube $[0,|\sin\alpha|]^{n}$ with opposite sides identified.

We define the FrFT on $\mathcal{T}_{\alpha}^{n}$ for the rest of the analysis, together with the fractional convolution and fractional approximation in $L^{p}(\mathcal{T}_{\alpha}^{n})$ for $1\leq p\leq\infty$.

###### Definition B.2.

Let $\alpha\in\mathbb{R}$ with $\alpha\notin\pi\mathbb{Z}$. Define $e_{\alpha}(x)=e^{i\pi|x|^{2}\cot\alpha}$ and $(e_{\alpha}f)(x)=e_{\alpha}(x)f(x)$ for functions on $\mathcal{T}_{\alpha}^{n}$.

###### Definition B.3.

Let $1\leq p\leq\infty$. A function $f$ on $\mathcal{T}_{\alpha}^{n}$ lies in the space $e_{-\alpha}L^{p}(\mathcal{T}_{\alpha}^{n})$ if

$f(x)=e_{-\alpha}(x)g(x),\quad g\in L^{p}(\mathcal{T}_{\alpha}^{n})$ (12)

and $\|f\|_{L^{p}(\mathcal{T}_{\alpha}^{n})}<\infty$.

###### Definition B.4 (Fractional Fourier Transform on the Fractional Torus, Fu et al. [19]).

For complex-valued functions $f\in e_{-\alpha}L^{1}(\mathcal{T}_{\alpha}^{n})$, $\alpha\in\mathbb{R}$, and $m\in\mathbb{Z}^{n}$, we define

$\mathcal{F}^{\alpha}(f)(m)=\begin{cases}\int_{\mathcal{T}_{\alpha}^{n}}f(x)K_{\alpha}(m,x)\,dx&\alpha\notin\pi\mathbb{Z}\\ f(m)&\alpha\in 2\pi\mathbb{Z}\\ f(-m)&\alpha\in 2\pi\mathbb{Z}+\pi\end{cases}$ (13)

where

$K_{\alpha}(m,x)=A_{\alpha}^{n}e_{\alpha}(x)e_{\alpha}(m,x)e_{\alpha}(m)$ (14)

with $A_{\alpha}=\sqrt{1-i\cot\alpha}$ and $e_{\alpha}(m,x)=e^{-i2\pi(m\cdot x)\csc\alpha}$. Here $\mathcal{F}^{\alpha}(f)(m)$ denotes the $m^{th}$ fractional Fourier coefficient of order $\alpha$ of $f\in e_{-\alpha}L^{1}(\mathcal{T}_{\alpha}^{n})$.

###### Remark B.5.

For $\alpha=\frac{\pi}{2}$ we have $\cot\alpha=0$ and $\csc\alpha=1$, so Eq. 13 reduces to the Fourier Transform (FT), with

$K_{\alpha}(m,x)=e^{-i2\pi(m\cdot x)}$ (15)
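Before stating the basic properties, a quick numerical sanity check on Definition B.4: in one dimension, the kernels $K_{\alpha}$ and $K_{-\alpha}$ are biorthonormal on $[0,|\sin\alpha|]$, which is what makes the inversion formula of Corollary B.8 below work. A minimal NumPy sketch, with the order and grid size chosen purely for illustration:

```python
import numpy as np

alpha = 0.7                                     # any order with sin(alpha) != 0
L = abs(np.sin(alpha))
x = np.linspace(0.0, L, 20000, endpoint=False)  # uniform grid on the torus
dx = L / 20000

def K(a, m, t):
    # K_a(m, t) = A_a e_a(t) e^{-2 pi i m t csc a} e_a(m)   (Eq. 14, n = 1)
    A = np.sqrt(1 - 1j / np.tan(a))
    e = lambda s: np.exp(1j * np.pi * s**2 / np.tan(a))
    return A * e(t) * np.exp(-2j * np.pi * m * t / np.sin(a)) * e(m)

# Riemann sum of K_alpha(m, x) K_{-alpha}(m', x) dx  ~  delta_{m m'}
for m, mp in [(0, 0), (1, 1), (1, 2)]:
    val = np.sum(K(alpha, m, x) * K(-alpha, mp, x)) * dx
    print(m, mp, np.round(val, 6))              # ~1, ~1, ~0
```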
###### Proposition B.6 (Linear and Bounded Operator).

Let $f,g\in e_{-\alpha}L^{1}(\mathcal{T}_{\alpha}^{n})$. Then, for $m,k\in\mathbb{Z}^{n}$, $\lambda\in\mathbb{C}$, and $y\in\mathcal{T}_{\alpha}^{n}$, the following hold:

1. 1. $\mathcal{F}^{\alpha}(f+g)(m)=\mathcal{F}^{\alpha}(f)(m)+\mathcal{F}^{\alpha}(g)(m)$.

2. 2. $\mathcal{F}^{\alpha}(\lambda f)(m)=\lambda\mathcal{F}^{\alpha}(f)(m)$.

3. 3. $\sup_{m\in\mathbb{Z}^{n}}|\mathcal{F}^{\alpha}(f)(m)|\leq|\csc\alpha|^{\frac{n}{2}}\|f\|_{L^{1}(\mathcal{T}_{\alpha}^{n})}$

Proof: See Fu et al. [19].

###### Proposition B.7 (Uniqueness of the Fractional Fourier Transform).

If $f,g\in e_{-\alpha}L^{1}(\mathcal{T}_{\alpha}^{n})$ satisfy $\mathcal{F}^{\alpha}(f)(m)=\mathcal{F}^{\alpha}(g)(m)$ for all $m\in\mathbb{Z}^{n}$, then $f=g$ a.e.

Proof: See Fu et al. [19].

###### Corollary B.8 (Fractional Fourier Inversion).

If $f\in e_{-\alpha}L^{1}(\mathcal{T}_{\alpha}^{n})$ and

$\sum_{m\in\mathbb{Z}^{n}}|\mathcal{F}^{\alpha}(f)(m)|<\infty,$ (16)

then

$f(x)=\sum_{m\in\mathbb{Z}^{n}}\mathcal{F}^{\alpha}(f)(m)K_{-\alpha}(m,x)\quad\text{a.e.},$ (17)

and therefore $f$ is almost everywhere equal to a continuous function.

###### Theorem B.9 (Product Rule).

Suppose $\mathcal{K}_{\alpha}(m_{1},x_{1},m_{2},x_{2},\ldots,m_{N},x_{N})$ denotes the fractional integral kernel of the $N$-dimensional FrFT of a signal $f(x_{1},x_{2},\ldots,x_{N})$, with $m_{1},m_{2},\ldots,m_{N}$ denoting the multidimensional FrFT coefficients. Then the following factorization of the fractional kernel holds:

$\mathcal{K}_{\alpha}(m_{1},x_{1},m_{2},x_{2},\ldots,m_{N},x_{N})=\prod_{i=1}^{N}\mathcal{K}_{\alpha}(m_{i},x_{i})$ (18)

Proof: Consider the $N$-dimensional signal $f(x_{1},x_{2},\ldots,x_{N})$ and define its FrFT as

$\mathcal{F}^{\alpha}(f)(m_{1},m_{2},\ldots,m_{N})=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}f(x_{1},x_{2},\ldots,x_{N})\prod_{i=1}^{N}\mathcal{K}_{\alpha}(m_{i},x_{i})\,dx_{i},$ (19)

From Eq. 19 one checks directly that for $\alpha=0$,

$\mathcal{F}^{\alpha}(f)(m_{1},m_{2},\ldots,m_{N})=f(m_{1},m_{2},\ldots,m_{N}),$ (20)

and that for $\alpha=1$ it reduces to the Fourier Transform (FT). Therefore, if we show the index additivity $\mathcal{F}^{\alpha}\cdot\mathcal{F}^{\beta}=\mathcal{F}^{\alpha+\beta}$, then $\mathcal{F}^{\alpha}$ satisfies the defining properties of the FrFT. Indeed,

$\mathcal{F}^{\alpha}\cdot\mathcal{F}^{\beta}(f)(m_{1},\ldots,m_{N})=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\prod_{i=1}^{N}\mathcal{K}_{\alpha}(m_{i},x_{i})\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}f(y_{1},\ldots,y_{N})\prod_{i=1}^{N}\mathcal{K}_{\beta}(x_{i},y_{i})\,dy_{i}\,dx_{i}$ (21)

$\mathcal{F}^{\alpha}\cdot\mathcal{F}^{\beta}(f)(m_{1},\ldots,m_{N})=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}f(y_{1},\ldots,y_{N})\Big\{\prod_{i=1}^{N}\int_{-\infty}^{\infty}\mathcal{K}_{\alpha}(m_{i},x_{i})\mathcal{K}_{\beta}(x_{i},y_{i})\,dx_{i}\Big\}\,dy_{1}\cdots dy_{N}$ (22)

Now, using the result from [2],

$\int_{-\infty}^{\infty}\mathcal{K}_{\alpha}(m,x)\mathcal{K}_{\beta}(x,y)\,dx=\mathcal{K}_{\alpha+\beta}(m,y).$ (23)

Using this property,

$\mathcal{F}^{\alpha}\cdot\mathcal{F}^{\beta}(f)(m_{1},\ldots,m_{N})=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}f(y_{1},\ldots,y_{N})\prod_{i=1}^{N}\mathcal{K}_{\alpha+\beta}(m_{i},y_{i})\,dy_{1}\cdots dy_{N}=\mathcal{F}^{\alpha+\beta}(f)(m_{1},\ldots,m_{N})$ (24)

Therefore, we conclude that

$\mathcal{K}_{\alpha}(m_{1},x_{1},m_{2},x_{2},\ldots,m_{N},x_{N})=\prod_{i=1}^{N}\mathcal{K}_{\alpha}(m_{i},x_{i})$ (25)
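The index additivity $\mathcal{F}^{\alpha}\mathcal{F}^{\beta}=\mathcal{F}^{\alpha+\beta}$ that drives this proof can also be observed numerically for a discrete fractional power of the DFT. The sketch below builds one such fractional power from a Schur eigendecomposition; this is one of infinitely many valid fractional powers, and the canonical discrete FrFT of Candan et al. [9] additionally fixes the Hermite-Gaussian eigenbasis.

```python
import numpy as np
from scipy.linalg import dft, schur

def frft_matrix(n: int, alpha: float) -> np.ndarray:
    # One fractional power of the unitary DFT: F^alpha = Q diag(lam^alpha) Q^H,
    # using principal-branch powers of the unit-circle eigenvalues lam.
    T, Q = schur(dft(n, scale='sqrtn'), output='complex')
    return (Q * np.exp(1j * alpha * np.angle(np.diag(T)))) @ Q.conj().T

Fa, Fb, Fab = (frft_matrix(16, a) for a in (0.3, 0.5, 0.8))
print(np.allclose(Fa @ Fb, Fab))                                  # True: F^a F^b = F^{a+b}
print(np.allclose(frft_matrix(16, 1.0), dft(16, scale='sqrtn')))  # True: F^1 = F
```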
###### Definition B.10.

Let $f$ and $g$ be square-integrable functions and let $h(x)=(f*g)(x)$, where $*$ denotes the ordinary convolution, i.e.,

$h(x)=(f*g)(x)=\int_{-\infty}^{\infty}f(t)g(x-t)\,dt.$ (26)

###### Definition B.11.

For any function $f(x)$, define $(e_{\alpha}f)(x)=e_{\alpha}(x)f(x)$. For any two functions $f$ and $g$, we define the fractional convolution operation $\ast$ by

$h(x)=(f\ast g)(x)=A_{\alpha}e_{-\alpha}(x)(e_{\alpha}f*e_{\alpha}g)(x),$ (27)

where $*$ on the right-hand side is the ordinary convolution of Eq. 26.

###### Theorem B.12 (Convolution Theorem for the Fractional Fourier Transform (FrFT)).

Suppose $f$ and $g$ are square-integrable functions. Define $e_{\alpha}(m)=e^{i\pi|m|^{2}\cot\alpha}$ and $h(x)=(f\ast g)(x)$, where $\ast$ denotes the fractional convolution of Definition B.11, and let $\mathcal{F}^{\alpha}$, $\mathcal{G}^{\alpha}$, $\mathcal{H}^{\alpha}$ denote the FrFTs of $f$, $g$, and $h$, respectively. Then,

$\mathcal{H}^{\alpha}(m)=\mathcal{F}^{\alpha}(m)\mathcal{G}^{\alpha}(m)e_{-\alpha}(m).$ (28)

Proof: By the FrFT definition in Eq. 2, we have

$\begin{split}\mathcal{H}^{\alpha}(h)(m)&=A_{\alpha}\int_{-\infty}^{\infty}h(t)e_{\alpha}(t)e_{\alpha}(m,t)e_{\alpha}(m)\,dt\\ &=A_{\alpha}^{2}\int_{-\infty}^{\infty}e_{\alpha}(t)e_{\alpha}(m,t)e_{\alpha}(m)e_{-\alpha}(t)\,dt\int_{-\infty}^{\infty}e_{\alpha}(x)f(x)e_{\alpha}(t-x)g(t-x)\,dx\\ &=A_{\alpha}^{2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x)g(t-x)e_{\alpha}(t)e_{\alpha}(m,t)e_{\alpha}(m)e_{-\alpha}(t)e_{\alpha}(x)e_{\alpha}(t-x)\,dx\,dt\end{split}$ (29)

Substituting $t-x=v$ in the above equation, we obtain

$\begin{split}\mathcal{H}^{\alpha}(h)(m)&=A_{\alpha}^{2}e_{\alpha}(m)\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x)g(v)e_{\alpha}(x+v)e_{\alpha}(m,x+v)e_{-\alpha}(x+v)e_{\alpha}(x)e_{\alpha}(v)\,dx\,dv\\ &=A_{\alpha}^{2}e_{-\alpha}(m)\int_{-\infty}^{\infty}f(x)e_{\alpha}(x)e_{\alpha}(m)e_{\alpha}(m,x)\,dx\int_{-\infty}^{\infty}g(v)e_{\alpha}(v)e_{\alpha}(m)e_{\alpha}(m,v)\,dv\\ &=\mathcal{F}^{\alpha}(m)\mathcal{G}^{\alpha}(m)e_{-\alpha}(m)\end{split}$ (30)

Therefore,

$\mathcal{H}^{\alpha}(m)=\mathcal{F}^{\alpha}(m)\mathcal{G}^{\alpha}(m)e_{-\alpha}(m).$ (31)

###### Definition B.13 (Fractional Fourier Projection Operator).

Define the set of fractional Fourier wave numbers

$\mathcal{K}_{N}=\{k\in\mathbb{Z}^{n}:|k|_{\infty}\leq N\}$ (32)

and the fractional projection operator, which truncates the high-order coefficients of a function:

$\mathcal{P}_{N}(f)(x)=\mathcal{F}^{-\alpha}(\mathcal{F}^{\alpha}(f)(k)\cdot\boldsymbol{1}_{\mathcal{K}_{N}}(k))$ (33)

where $\boldsymbol{1}_{\mathcal{K}_{N}}(k)$ is the indicator function, equal to $1$ when $k\in\mathcal{K}_{N}$ and $0$ otherwise.

###### Theorem B.14 (Convergence of the Fractional Fourier Series).

Let $\alpha\in\mathbb{R}$ with $\alpha\notin\pi\mathbb{Z}$, and let $f\in e_{-\alpha}L^{p}(\mathcal{T}_{\alpha}^{n})$ ($1\leq p<\infty$). Define

$\mathcal{P}_{N}(f)(x)=\mathcal{F}^{-\alpha}(\mathcal{F}^{\alpha}(f)(k)\cdot\boldsymbol{1}_{\mathcal{K}_{N}}(k))$ (34)

where $\mathcal{F}^{\alpha}(f)(k)$ denotes the $k^{th}$ fractional coefficient. Then

$\lim_{N\rightarrow\infty}\sup_{x}\|f(x)-\mathcal{P}_{N}(f)(x)\|\rightarrow 0$ (35)
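Before turning to the universal approximation theorem, Theorem B.12 can be verified numerically on the real line with Gaussian test functions: compute $h=f\ast g$ from Definition B.11 by quadrature, take FrFTs by direct quadrature against the kernel, and compare both sides of Eq. 28. A minimal sketch, with grid, order, and test functions chosen arbitrarily:

```python
import numpy as np

alpha = 0.7
cot, csc = 1 / np.tan(alpha), 1 / np.sin(alpha)
A = np.sqrt(1 - 1j * cot)
x = np.linspace(-8, 8, 4001); dx = x[1] - x[0]
e = lambda t: np.exp(1j * np.pi * t**2 * cot)      # chirp e_alpha(t)

f, g = np.exp(-x**2), np.exp(-2 * x**2)

# h = A_alpha e_{-alpha} (e_alpha f * e_alpha g): the fractional convolution
h = A * e(x).conj() * np.convolve(e(x) * f, e(x) * g, mode='same') * dx

def frft(sig, m):
    # F^alpha(sig)(m) by direct quadrature against the kernel K_alpha(m, t)
    return A * e(m) * np.sum(sig * e(x) * np.exp(-2j * np.pi * m * x * csc)) * dx

for m in (0.0, 0.5, 1.0):
    lhs = frft(h, m)
    rhs = frft(f, m) * frft(g, m) * e(m).conj()    # e_{-alpha}(m) factor, Eq. 28
    print(m, abs(lhs - rhs))                       # ~0 up to quadrature error
```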
###### Theorem B.15 (Universal Approximation of CoNO).

Let $s,s^{\prime}>0$ and $\alpha\in\mathbb{R}$. Suppose $\mathcal{G}:H^{s}(\mathcal{T}_{\alpha}^{d};\mathbb{R}^{d_{\text{a}}})\rightarrow H^{s^{\prime}}(\mathcal{T}_{\alpha}^{d};\mathbb{R}^{d_{\text{u}}})$ is a continuous operator between Sobolev spaces, where $\mathcal{T}_{\alpha}^{d}$ denotes the fractional-order torus and $d_{a},d_{u}\in\mathbb{N}$, and let $K\subset H^{s}(\mathcal{T}_{\alpha}^{d};\mathbb{R}^{d_{\text{a}}})$ be a compact subset. Then, for any $\varepsilon>0$, there exist CoNO layers $\mathcal{N}:H^{s}(\mathcal{T}_{\alpha}^{d};\mathbb{R}^{d_{\text{a}}})\rightarrow H^{s^{\prime}}(\mathcal{T}_{\alpha}^{d};\mathbb{R}^{d_{\text{u}}})$ satisfying:

$\sup_{v\in K}\|\mathcal{G}(v)-\mathcal{N}(v)\|_{L^{2}}\leq\varepsilon$ (36)

Proof: Let $\alpha\in\mathbb{R}$ and define the fractional orthogonal projection of $\mathcal{G}$:

$\mathcal{G}_{N}:H^{s}(\mathcal{T}_{\alpha}^{d})\rightarrow H^{s^{\prime}}(\mathcal{T}_{\alpha}^{d}),\quad\mathcal{G}_{N}(v)=\mathcal{P}_{N}\mathcal{G}(\mathcal{P}_{N}(v))$ (37)

By Thm. B.14, for every $\varepsilon>0$ there exists $N\geq 0$ such that:

$\|\mathcal{G}(v)-\mathcal{G}_{N}(v)\|_{L^{2}}\leq\varepsilon,\quad\forall v\in K$ (38)

It remains to find $\mathcal{N}$ approximating $\mathcal{G}_{N}$. Define the fractional conjugate of $\mathcal{G}_{N}$:

$\mathcal{\hat{G}}_{N}:\mathbb{C}^{\mathcal{K}_{N}}\rightarrow\mathbb{C}^{\mathcal{K}_{N}},\quad\mathcal{\hat{G}}_{N}(\hat{v})=\mathcal{F}^{\alpha}(\mathcal{G}_{N}(\mathcal{F}^{-\alpha}(\hat{v})))$ (39)

One can then show that

$\mathcal{G}_{N}=\mathcal{F}^{-\alpha}\circ\mathcal{\hat{G}}_{N}\circ(\mathcal{F}^{\alpha}\circ\mathcal{P}_{N}).$ (40)

With this decomposition, we construct CoNO by separately approximating the individual terms $\mathcal{F}^{-\alpha}$, $\mathcal{\hat{G}}_{N}$, and $\mathcal{F}^{\alpha}\circ\mathcal{P}_{N}$.

Approximating $\mathcal{\hat{G}}_{N}$: For every $\varepsilon>0$ and $\mathcal{\hat{G}}_{N}:\mathbb{C}^{\mathcal{K}_{N}}\rightarrow\mathbb{C}^{\mathcal{K}_{N}}$, there are $l$ CoNO layers $\mathcal{L}_{l}\circ\mathcal{L}_{l-1}\circ\ldots\circ\mathcal{L}_{1}$ satisfying:

$\|\mathcal{\hat{G}}_{N}(\hat{v})(x)-(\mathcal{L}_{l}\circ\mathcal{L}_{l-1}\circ\ldots\circ\mathcal{L}_{1})(\hat{w})(x)\|_{L^{2}}\leq\varepsilon,\quad\forall v\in K,\>\forall x\in\mathcal{T}^{d}_{\alpha}$ (41)

where $\hat{w}:\mathcal{T}^{d}_{\alpha}\rightarrow\mathbb{C}^{\mathcal{K}_{N}}$, $x\mapsto\hat{v}$, is the constant function on $\mathcal{T}^{d}_{\alpha}$. From the CoNO layer definition, we have

$\mathcal{L}_{l}(v)(x)=Wv(x)+\mathcal{F}^{-\alpha}(K_{l}(k)\mathcal{F}^{\alpha}(\mathcal{P}_{N}v)(k))$ (42)

To approximate $\mathcal{\hat{G}}_{N}$, we can invoke the Universal Approximation Theorem for CVNNs [63] after setting $K_{l}(k)$ to the identity. Then

$\mathcal{F}^{-\alpha}(K_{l}(k)\mathcal{F}^{\alpha}(\mathcal{P}_{N}v)(k))\approx\mathcal{P}_{N}(v),\quad\forall l,$ (43)

so that

$\mathcal{L}_{l}(v)(x)=Wv(x)+\mathcal{P}_{N}(v)(x)$ (44)

The Universal Approximation Theorem for CVNNs then guarantees that $\mathcal{L}_{l}\circ\mathcal{L}_{l-1}\circ\ldots\circ\mathcal{L}_{1}$ can approximate $\mathcal{\hat{G}}_{N}$ to any desired precision. 
Approximating $\mathcal{F}^{\alpha}\circ\mathcal{P}_{N}$: For any $\varepsilon>0$ there exist $l\geq 0$ and $\mathcal{L}_{l}\circ\mathcal{L}_{l-1}\circ\ldots\circ\mathcal{L}_{1}\circ\mathcal{P}$ satisfying:

$\|\mathcal{F}^{\alpha}(\mathcal{P}_{N}v)-(\mathcal{L}_{l}\circ\mathcal{L}_{l-1}\circ\ldots\circ\mathcal{L}_{1}\circ\mathcal{P})(v)(x)\|_{L^{2}}\leq\varepsilon,\quad\forall v\in K,\>\forall x\in\mathcal{T}^{d}$ (45)

Define $\mathcal{R}(k,x)=e_{\alpha}(k)e_{\alpha}(k,x)$, which constitutes an orthonormal basis for the FrFT and which we call the fractional Fourier basis. We first show that there exists $\mathcal{N}=\mathcal{L}_{l}\circ\mathcal{L}_{l-1}\circ\ldots\circ\mathcal{L}_{1}\circ\mathcal{P}$ satisfying:

$\begin{cases}\|\mathcal{N}(v)_{1,k}-\mathcal{P}_{N}v(x)\cdot Re(\mathcal{R}(k,x))\|_{L_{2}}<\varepsilon\\ \|\mathcal{N}(v)_{2,k}-\mathcal{P}_{N}v(x)\cdot Im(\mathcal{R}(k,x))\|_{L_{2}}<\varepsilon\end{cases}$ (46)

To construct $\mathcal{N}$, we define the lifting operator $\mathcal{P}$ with positional embeddings as follows:

$\mathcal{P}(v)(x)=\{v(x),Re(\mathcal{R}(k,x)),v(x),Im(\mathcal{R}(k,x))\}_{k\in\mathcal{K}_{N}}$ (47)

where the trigonometric polynomials of order $\alpha$ are embedded directly in the CoNO layer; $\mathcal{P}$ lifts the range of the function from $\mathbb{R}^{d_{a}}$ to $\mathbb{R}^{d_{u}}$. Then, leveraging the Universal Approximation Theorem for CVNNs, where we concatenate $\{v\cdot Re(\mathcal{R}(k,x)),v\cdot Im(\mathcal{R}(k,x))\}_{k\in\mathcal{K}_{N}}$, there are layers satisfying:

$(\mathcal{L}_{l}\circ\ldots\circ\mathcal{L}_{1})(\{v(x),Re(\mathcal{R}(k,x)),v(x),Im(\mathcal{R}(k,x))\}_{k\in\mathcal{K}_{N}})\approx\{v\cdot Re(\mathcal{R}(k,x)),v\cdot Im(\mathcal{R}(k,x))\}_{k\in\mathcal{K}_{N}},\quad\forall v\in K$ (48)

and the desired precision can be achieved by adjusting the width and depth of the layers. We next note that

$\mathcal{P}_{N}v(x)=\sum_{k\in\mathcal{K}_{N}}\hat{v}_{k}K_{-\alpha}(k,x)$ (49)

where $\hat{v}_{k}$ denotes the $k^{th}$ FrFT coefficient. Then, by the definition of the FrFT on the torus (Eq. 13):

$\mathcal{F}^{\alpha}(v)(0)=\int_{\mathcal{T}^{d}_{\alpha}}v(x)A_{\alpha}e_{\alpha}(x)\,dx=Re(\hat{v}_{0})+Im(\hat{v}_{0})$ (50)

$\mathcal{F}^{\alpha}(v\cdot\mathcal{R}(k,x))(0)=\int_{\mathcal{T}^{d}_{\alpha}}v(x)A_{\alpha}e_{\alpha}(x)e_{\alpha}(k)e_{\alpha}(k,x)\,dx=Re(\hat{v}_{k})+Im(\hat{v}_{k})$ (51)

Thus, in layer $\mathcal{L}_{l+1}$ we set:

$\begin{cases}K_{l+1}(k)=0&\text{if }k=0\\ K_{l+1}(k)=-Id&\text{if }k\neq 0\end{cases}$ (52)

and then we have:

$\mathcal{F}^{-\alpha}(K_{l+1}(k)\cdot\mathcal{F}^{\alpha}(\{v\cdot Re(\mathcal{R}(k,x)),v\cdot Im(\mathcal{R}(k,x))\}_{k\in\mathcal{K}_{N}}))=\{Re(\hat{v}_{k})-v\cdot Re(\mathcal{R}(k,x)),Im(\hat{v}_{k})-v\cdot Im(\mathcal{R}(k,x))\}_{k\in\mathcal{K}_{N}}$ (53)

Finally, setting $W_{l+1}$ in layer $\mathcal{L}_{l+1}$ to the identity, we obtain

$\mathcal{L}_{l+1}(\{v\cdot Re(\mathcal{R}(k,x)),v\cdot Im(\mathcal{R}(k,x))\}_{k\in\mathcal{K}_{N}})=\{Re(\hat{v}_{k}),Im(\hat{v}_{k})\}_{k\in\mathcal{K}_{N}}=\mathcal{F}^{\alpha}(\mathcal{P}_{N}v),$ (54)

which is the desired approximation. 
Approximating $\mathcal{F}^{-\alpha}$: For any $\varepsilon>0$, there exist $l\geq 0$ and $\mathcal{Q}\circ\mathcal{L}_{l}\circ\mathcal{L}_{l-1}\circ\ldots\circ\mathcal{L}_{1}$ satisfying:

$\|\mathcal{F}^{-\alpha}(\hat{v})-(\mathcal{Q}\circ\mathcal{L}_{l}\circ\mathcal{L}_{l-1}\circ\ldots\circ\mathcal{L}_{1})(v)\|_{L^{2}}\leq\varepsilon,\quad\forall v\in K$ (55)

where $\hat{v}\in\mathbb{C}^{\mathcal{K}_{N}}$ are the truncated fractional Fourier coefficients. Proceeding as in the previous steps, we construct $Re(\hat{v}_{k})\cdot\mathcal{R}(k,x)$ and $Im(\hat{v}_{k})\cdot\mathcal{R}(k,x)$ using $\mathcal{L}_{l}\circ\mathcal{L}_{l-1}\circ\ldots\circ\mathcal{L}_{1}$, and then construct the projection operator $\mathcal{Q}$ to recover the original function:

$\mathcal{Q}(\{\hat{v}_{k}\cdot\mathcal{R}(k,x)\}_{k\in\mathcal{K}_{N}})=\sum_{k\in\mathcal{K}_{N}}Re(\hat{v}_{k})\cdot\mathcal{R}(k,x)-Im(\hat{v}_{k})\cdot\mathcal{R}(k,x)=v(x)$ (56)

This completes the proof: combining the three approximations yields $\mathcal{G}_{N}$, and hence $\mathcal{G}$, to the desired precision, establishing the Universal Approximation Theorem for CoNO.

## Appendix C Details for the Benchmark

### C.1 Details for benchmarks tasks

The following sections provide a comprehensive account of the benchmark tasks.

Figure 8: Overview of the operator-learning tasks for distinct partial differential equations (PDEs) from fluid and solid physics. (Top) Three PDEs: Darcy Flow, Airfoil, and Plasticity. (Bottom) Three further PDEs: Navier-Stokes, Pipe, and Elasticity.

Table 4: Detailed benchmark tasks, systematically listing the specific characteristics of each task used in the experiments.

Dataset | Geometry | Task | Input | Output
---|---|---|---|---
ELASTICITY-P | Point Cloud | Estimate Stress | Material Structure | Inner Stress
ELASTICITY-G | Regular Grid | Estimate Stress | Material Structure | Inner Stress
PLASTICITY | Structured Mesh | Model Deformation | Boundary Condition | Mesh Displacement
NAVIER-STOKES | Regular Grid | Predict Future | Past Velocity | Future Velocity
AIRFOIL | Structured Mesh | Estimate Velocity | Structure | Fluid Velocity
PIPE | Structured Mesh | Estimate Velocity | Structure | Fluid Velocity
DARCY | Regular Grid | Estimate Pressure | Porous Medium | Fluid Pressure

### C.2 Description of Datasets

Table 4 and Fig. 8 comprehensively present the benchmark details. The generation details are categorized by the governing partial differential equations (PDEs) as follows:

Elasticity-P and Elasticity-G Dataset [42]: The dataset is designed to evaluate the internal stress of an incompressible material with an arbitrary void at its center, subject to external tension. The material's structural configuration is the input, and the resulting internal stress is the output. Two distinct representations of the material's geometry are employed: Elasticity-P uses a point cloud comprising 972 points, while Elasticity-G represents the data on a $41\times 41$ structured grid obtained by interpolation from Elasticity-P.

Plasticity Dataset [42]: This dataset addresses a plastic-forging scenario in which a die of arbitrary shape impacts a plastic material from above. The input is the die's shape, encoded on a structured mesh. 
The benchmark aims to predict the deformation of each mesh point over the subsequent 20 timesteps. The structured mesh employed has a resolution of $101\times 31$.

Navier-Stokes Dataset [41]: The 2D Navier-Stokes equation describes the flow of a viscous, incompressible fluid; in vorticity form on the unit torus it reads: $\begin{split}\partial_{t}w(x,t)+u(x,t)\cdot\nabla w(x,t)&=\nu\Delta w(x,t)+f(x),\quad x\in(0,1)^{2},\,t\in(0,T]\\ \nabla\cdot u(x,t)&=0,\quad x\in(0,1)^{2},\,t\in[0,T]\\ w(x,0)&=w_{0}(x),\quad x\in(0,1)^{2}\end{split}$ (57) where $u$ represents the velocity field, $w=\nabla\times u$ is the vorticity, $w_{0}$ is the initial vorticity, $\nu$ is the viscosity coefficient, and $f$ is the forcing function. In this dataset, the viscosity ($\nu$) is fixed at $10^{-5}$, and the 2D field has a resolution of $64\times 64$. Each sample within the dataset comprises 20 consecutive frames. The objective is to predict the subsequent ten frames based on the preceding ten.

Pipe Dataset [42]: This dataset focuses on incompressible flow through a pipe. The governing equations are the incompressible Navier-Stokes equations (58)-(59): $\nabla\cdot\mathbf{U}=0,$ (58) $\frac{\partial\mathbf{U}}{\partial t}+\mathbf{U}\cdot\nabla\mathbf{U}=\mathbf{f}-\frac{1}{\rho}\nabla p+\nu\nabla^{2}\mathbf{U}.$ (59) The dataset is constructed on a geometrically structured mesh with a $129\times 129$ resolution. We employ the mesh structure as input, with the output being the horizontal fluid velocity within the pipe.

Airfoil Dataset [42]: The dataset pertains to transonic flow over an airfoil. Due to the negligible viscosity of air, the viscous term $\nu\nabla^{2}U$ is omitted from the Navier-Stokes equation. Consequently, the governing equations for this scenario are: $\frac{\partial\rho_{f}}{\partial t}+\nabla\cdot(\rho_{f}U)=0$ (60) $\frac{\partial(\rho_{f}U)}{\partial t}+\nabla\cdot(\rho_{f}UU+pI)=0$ (61) $\frac{\partial E}{\partial t}+\nabla\cdot((E+p)U)=0,$ (62) where $\rho_{f}$ represents the fluid density and $E$ denotes the total energy. The data is on a structured mesh with dimensions $200\times 50$, and the mesh point coordinates are used as inputs. The corresponding output is the Mach number at each mesh point.

Darcy Flow Dataset [41]: This dataset represents flow through a porous medium. The 2D Darcy flow over the unit square is given by $-\nabla\cdot(a(x)\nabla u(x))=f(x),\quad x\in(0,1)^{2},$ (63) $u(x)=0,\quad x\in\partial(0,1)^{2}.$ (64) where $a(x)$ is the diffusion coefficient, $f(x)$ is the forcing term, and $u(x)$ is the solution. This dataset employs a constant forcing term $f(x)=\beta$. Further, Equation 63 is considered in the form of a temporal evolution as $\partial_{t}u(x,t)-\nabla\cdot(a(x)\nabla u(x,t))=f(x),\quad x\in(0,1)^{2}.$ (65) In this dataset, the input is the parameter $a$, and the corresponding output is the solution $u$. The samples are organized on a regular grid with a resolution of $85\times 85$.

## Appendix D Implementation Details

The following section provides an overview of the training and testing samples, including the shapes of the input and output tensors.

Table 5: Training details for benchmark datasets. The input-output resolutions are presented in the shape of (temporal, spatial, variate). The symbol "/" indicates excluded dimensions.
Dataset | Training Samples | Testing Samples | Input Tensor | Output Tensor
---|---|---|---|---
ELASTICITY-P | 1000 | 200 | (/, 972, 2) | (/, 972, 1)
ELASTICITY-G | 1000 | 200 | (/, 41 $\times$ 41, 1) | (/, 41 $\times$ 41, 1)
PLASTICITY | 900 | 80 | (/, 101 $\times$ 31, 2) | (20, 101 $\times$ 31, 4)
NAVIER-STOKES | 1000 | 200 | (10, 64 $\times$ 64, 1) | (10, 64 $\times$ 64, 1)
AIRFOIL | 1000 | 100 | (/, 200 $\times$ 50, 2) | (/, 200 $\times$ 50, 1)
PIPE | 1000 | 200 | (/, 129 $\times$ 129, 2) | (/, 129 $\times$ 129, 1)
DARCY | 1000 | 200 | (/, 85 $\times$ 85, 1) | (/, 85 $\times$ 85, 1)

### D.1 Training Details

Table 5 presents an overview of the experimental setup, including the training and testing split and the shapes of the input and output tensors. In training CoNO, the fractional orders must be initialized; they are then learned, like the other network parameters, with optimizers such as Adam. Notably, fractional orders can vary across axes, and there is no requirement that they be initialized uniformly across different axes. Furthermore, we conducted each experiment five times and observed that the standard deviation falls within 0.0003 for the Darcy, Airfoil, and Pipe datasets, 0.01 for Navier-Stokes, 0.002 for Elasticity-P and Elasticity-G, and 0.0002 for Plasticity.

### D.2 Discrete Implementation of Fractional Fourier Transform

The discrete implementation of the Fractional Fourier Transform (FrFT) is essential for CoNO. In CoNO, we utilize a matrix multiplication-based discrete FrFT. This approach leverages the spectral expansion of the fractional integral kernel using a complete set of eigenfunctions of the FrFT, which are Hermite-Gaussian functions [9]. We used the following PyTorch-based discrete implementation of the FrFT: Torch FrFT.

### D.3 Complex Neural Networks (CVNNs)

Figure 9: Complex Residual Network used in CoNO for learning the complex part for CVNNs.

Given that all datasets are inherently real-valued, we designed a neural network optimized for real-valued operations within the complex domain. Using Complex-Valued Neural Networks (CVNNs), the network handles the complex data components, converting real-valued data to complex representations through a residual block, as shown in Fig. 9. This architecture allows the neural network to process the real-valued datasets in the complex domain efficiently. Complex-valued backpropagation was implemented using the Wirtinger calculus [3], which generalizes the notion of complex derivatives and trains complex-valued neural networks (a generalized complex chain rule for real-valued loss functions) [5, 13]. If $L$ is a real-valued loss function and $z$ is a complex variable such that $z=x+iy$ where $x,y\in\mathbb{R}$: $\nabla_{z}L=\frac{\partial L}{\partial z}=\frac{\partial L}{\partial x}+i\frac{\partial L}{\partial y}=\frac{\partial L}{\partial(\text{Re}(z))}+i\frac{\partial L}{\partial(\text{Im}(z))}=\nabla_{\text{Re}(z)}L+i\,\nabla_{\text{Im}(z)}L$ (66) We used PyTorch to build blocks for CoNO based on the following GitHub repositories: 1. Deep Complex Network 2. Pytorch Complex

### D.4 Hyperparameters

Table 6: Hyperparameters used in CoNO on the Darcy Flow benchmark.
Parameters | Values
---|---
Learning Rate | 0.001
Batch Size | 20
Latent Dim | 64
Padding | 11
Gamma | 0.5
Step-Size | 100

This section details the hyperparameter values used in CoNO for the Darcy Flow dataset: learning rate, batch size, number of layers, activation functions, and the other parameters employed to optimize CoNO on this dataset, as shown in Table 6. Further, as shown in Fig. 2, we initialize $\alpha=1$ and $\alpha^{\prime}=0.5$. We use a pointwise convolution in $R^{\alpha^{\prime}}$ and a linear transformation in $R^{\alpha}$, truncating the higher frequencies similarly to FNO.

### D.5 Mitigation of Aliasing

In neural operator learning, non-linear operations, such as non-linear pointwise activations, can introduce high-frequency components into the output signal. Aliasing induced by the nonlinearity can distort symmetries inherent in the physical signal, leading to undesirable effects; translational invariance, a key characteristic of neural operators, likewise degrades under aliasing. We propose a two-step process to address aliasing errors within the continuous equivariance paradigm. First, before applying any activation function, we upsample the input function beyond its frequency bandwidth. Subsequently, the non-linear operation is applied to the upsampled signal, followed by a sinc-based filter and downsampling. The sinc-based low-pass filter attenuates the higher frequency components in the output signal. This method reduces distortions caused by aliasing, which is especially critical in complex signal-processing tasks, although in our experiments the alias-free activation did not yield a significant performance improvement. A schematic of this pipeline is sketched below.
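The following PyTorch sketch illustrates the upsample, activate, filter, downsample pipeline described above. It is our own illustrative reconstruction rather than the CoNO code: the function name `alias_free_activation` is hypothetical, bicubic interpolation stands in for exact bandlimited upsampling, and the sinc low-pass filter plus downsampling are realized jointly as a hard Fourier truncation (even grid sizes assumed).

```python
import torch
import torch.nn.functional as F

def alias_free_activation(u: torch.Tensor, factor: int = 2) -> torch.Tensor:
    # u: (batch, channels, H, W) on a regular periodic grid, H and W even.
    b, c, h, w = u.shape
    # 1. upsample beyond the original bandwidth (bicubic approximates
    #    bandlimited interpolation for the purposes of this sketch)
    v = F.interpolate(u, scale_factor=factor, mode="bicubic", align_corners=False)
    # 2. pointwise nonlinearity on the oversampled grid; this is where
    #    new high-frequency content is created
    v = F.gelu(v)
    # 3. ideal low-pass + downsampling in one step: keep only the Fourier
    #    modes representable on the original (h, w) grid and resynthesize
    #    there (Fourier truncation = sinc filtering on a periodic grid)
    V = torch.fft.rfft2(v, norm="forward")
    V_low = torch.zeros(b, c, h, w // 2 + 1, dtype=V.dtype, device=V.device)
    V_low[..., : h // 2, :] = V[..., : h // 2, : w // 2 + 1]
    V_low[..., h // 2 :, :] = V[..., -(h - h // 2) :, : w // 2 + 1]
    return torch.fft.irfft2(V_low, s=(h, w), norm="forward")

x = torch.randn(1, 1, 64, 64)
print(alias_free_activation(x).shape)  # torch.Size([1, 1, 64, 64])
```

Resynthesizing directly on the coarse grid from the retained modes fuses the sinc filter and the downsampling into a single Fourier truncation, which is the standard trick for ideal filters on periodic grids.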
## Appendix E Prediction Visualization

As illustrated in Fig. 10, the predictive performance of CoNO surpasses the other operators across the benchmark datasets, notably outperforming the state-of-the-art operator LSM. This superiority is evident in both time-dependent and time-independent partial differential equation (PDE) scenarios: CoNO shows enhanced predictive accuracy and significantly fewer artifacts. These findings underscore the efficacy of CoNO as a robust solution for a diverse range of PDE applications.

Figure 10: (Top) Showcase of Darcy Flow (Left) and Navier-Stokes (Right). (Middle) Showcase of Airfoil (Right) and Pipe (Left). (Bottom) Showcase of Plasticity (Left) and Elasticity (Right). For comparison of the predicted output, we plot the heatmap of the absolute value of the difference between ground truth and prediction.

## Appendix F Experimental Results

The following section provides an exhaustive examination of the results obtained from multiple tasks associated with the operator: different resolutions, resilience to noise, out-of-distribution generalization, data efficiency, and training stability. The analysis aims to offer detailed insight into the performance of the operator across these aspects and a nuanced understanding of its practical utility in diverse scenarios. All baseline models are instantiated and implemented using the official source code.

### F.1 Performance Under Different Resolutions

To assess the efficacy of our operator at diverse resolutions, we conducted experiments on the Darcy and Navier-Stokes problems across varying spatial resolutions. Notably, CoNO consistently outperformed the other operators across these resolutions. The comparative results, presented in Table 7 and Table 8, highlight the robust performance and versatility of CoNO under different spatial resolutions in both the Darcy and Navier-Stokes scenarios.

Table 7: Neural Operators performance comparison on Darcy Flow under different resolutions.

Resolution | UNET | FNO | MWT | UNO | FFNO | HTNET | LSM | CoNO
---|---|---|---|---|---|---|---|---
32×32 | 0.0059 | 0.0128 | 0.0083 | 0.0148 | 0.0103 | 0.0058 | 0.0049 | 0.0048
64×64 | 0.0052 | 0.0067 | 0.0078 | 0.0079 | 0.0064 | 0.0046 | 0.0042 | 0.0036
128×128 | 0.0054 | 0.0057 | 0.0064 | 0.0064 | 0.0050 | 0.0040 | 0.0038 | 0.0034
256×256 | 0.0251 | 0.0058 | 0.0057 | 0.0064 | 0.0051 | 0.0044 | 0.0043 | 0.0039
512×512 | 0.0496 | 0.0057 | 0.0066 | 0.0057 | 0.0042 | 0.0063 | 0.0039 | 0.0035
1024×1024 | 0.0754 | 0.0062 | 0.0077 | 0.0058 | 0.0069 | 0.0163 | 0.0050 | 0.0044

Table 8: Neural Operators performance comparison on the Navier-Stokes benchmark under different resolutions. "/" indicates poor $\ell_{2}$ error performance.

Resolution | UNET | FNO | MWT | UNO | FFNO | HTNET | LSM | CoNO
---|---|---|---|---|---|---|---|---
64×64 | 0.1982 | 0.1556 | 0.1541 | 0.1713 | 0.2322 | 0.1847 | 0.1535 | 0.1287
128×128 | / | 0.1028 | 0.1099 | 0.1068 | 0.1506 | 0.1088 | 0.0961 | 0.0817

### F.2 Robustness to Noise

To evaluate the robustness of our operator in the presence of noisy training input, we systematically conducted experiments across varying input noise levels. Remarkably, CoNO, even when subjected to 0.1% data noise, consistently outperformed LSM, the best-performing operator trained without exposure to noisy data. This confirms the enhanced robustness of our proposed operator under noisy training inputs, as highlighted in Table 9. At 0.1% noise we observe relative error increases of 22%, 47%, and 28% for CoNO, LSM, and FNO, respectively; here CoNO outperforms all the models. Under 0.5% data noise, the relative error increases are 62%, 186%, and 45% for CoNO, LSM, and FNO, respectively: CoNO degrades significantly less than LSM but more than FNO. However, the absolute performance of CoNO remains considerably better than that of FNO. Thus, it can be argued that CoNO is at least as robust to noise as FNO, if not more so, and more robust than LSM, the SOTA model.

Table 9: Neural Operators performance comparison on Darcy Flow under different noise levels in the training dataset. "/" indicates poor $\ell_{2}$ error performance.
Noise % | UNET | FNO | MWT | UNO | FFNO | HTNET | LSM | CoNO
---|---|---|---|---|---|---|---|---
0 % | 0.0080 | 0.0108 | 0.0082 | 0.0113 | 0.0083 | 0.0079 | 0.0065 | 0.0050
0.001 % | 0.0094 | 0.0114 | 0.0089 | 0.0115 | 0.0089 | 0.0087 | 0.0085 | 0.0056
0.01 % | 0.0105 | 0.0137 | 0.0097 | 0.0121 | 0.0095 | 0.0097 | 0.0087 | 0.0058
0.1 % | 0.0113 | 0.0139 | 0.0125 | 0.0126 | 0.0106 | 0.0124 | 0.0096 | 0.0061
0.5 % | / | 0.0157 | 0.0135 | 0.0138 | 0.0125 | 0.0148 | 0.0186 | 0.0081

### F.3 Out of Distribution Generalization

In this study, we trained our model on the Navier-Stokes dataset with a viscosity coefficient of $10^{-5}$ and then rigorously assessed its out-of-distribution generalization by testing it with a viscosity coefficient of $10^{-4}$, as shown in Table 10. Our observations demonstrate that CoNO generalizes notably better than FNO, with an improvement of $64.3\%$ over the FNO model's performance. Furthermore, our findings underscore the importance of capturing latent variable information, a task effectively accomplished by the UNET architecture and particularly exemplified by the Latent Spectral Operator. Significantly, LSM outperforms all other operators, including CoNO, emphasizing the role of latent information in enhancing generalization in fluid dynamics modeling.

Table 10: Neural Operators performance comparison on the Navier-Stokes benchmark under out-of-distribution evaluation. Models are trained on NS viscosity coefficient $1e^{-5}$ and tested on NS viscosity coefficient $1e^{-4}$. "/" indicates poor $\ell_{2}$ error performance.

NS Viscosity Coefficient | UNET | FNO | MWT | UNO | FFNO | HTNET | LSM | CoNO
---|---|---|---|---|---|---|---|---
$1e^{-5}$ | 0.1982 | 0.1556 | 0.1541 | 0.1713 | 0.2322 | 0.1847 | 0.1535 | 0.1287
$1e^{-4}$ | / | 0.6621 | 0.5864 | 0.5436 | 0.5606 | 0.4888 | 0.1887 | 0.2321

### F.4 Effect of Number of Layers

We conducted experiments to analyze the relationship between the number of layers and performance, in comparison to FNO, on the Darcy Flow dataset. Our findings, shown in Table 11, reveal that the performance of FNO initially improves with more layers but then drops significantly, which we attribute to the vanishing gradient problem. Importantly, our method avoids this degradation even as the number of layers increases.

Table 11: Neural Operators performance comparison on the Darcy benchmark under different numbers of layers.

Number of Layers | 2 | 4 | 6 | 8
---|---|---|---|---
FNO | 0.0114 | 0.0108 | 0.0087 | 0.0098
CoNO | 0.0066 | 0.0052 | 0.0053 | 0.0052

### F.5 Long Term Prediction

To further substantiate the enhanced stability of CoNO in long-horizon prediction, we conducted an additional experiment on the Navier-Stokes benchmark with a viscosity coefficient of $10^{-4}$. Here, we forecast the subsequent ten steps based solely on the preceding ten observations and then extrapolate the results over ten more steps. Our findings indicate that although not explicitly trained to predict the subsequent 20 timestamps, CoNO consistently outperforms LSM and FNO in extrapolating beyond the prediction horizon.
This observation underscores the heightened stability and robustness of CoNO in long-horizon prediction tasks, as shown in Table 12.

Table 12: Comparison of Neural Operators on the Navier-Stokes equations with viscosity coefficient $10^{-4}$. Models are trained to predict the subsequent 10 timestamps based on the preceding 10 timestamps; the extrapolated results are then used to forecast 10 further timestamps.

Neural Operator | $T=5$ | $T=10$ | $T=15$ | $T=20$
---|---|---|---|---
FNO | 0.028 | 0.050 | 0.17 | 0.34
LSM | 0.065 | 0.13 | 0.24 | 0.38
CoNO | 0.020 | 0.045 | 0.14 | 0.31

### F.6 Data Efficiency

As shown in Table 13, CoNO trained on only 60% of the available data is already competitive with the second-best operator, LSM, trained on the full data. Across all training dataset ratios, CoNO consistently surpasses the other operators, underscoring its data efficiency compared to state-of-the-art (SOTA) operators and its robustness across varying training scenarios. With a data ratio of 0.6, we observed relative error increases of 34%, 44%, and 35% for CoNO, LSM, and FNO, respectively. Thus, our approach consistently outperforms the second-best operator, LSM, across various ratio settings, and the absolute performance of CoNO is significantly better than that of FNO.

Table 13: Neural Operators performance comparison on the Darcy benchmark under different training dataset ratios. "/" indicates poor $\ell_{2}$ error performance.

Ratio | UNET | FNO | MWT | UNO | FFNO | HTNET | LSM | CoNO
---|---|---|---|---|---|---|---|---
0.2 | / | 0.2678 | 0.2854 | 0.2734 | 0.2573 | 0.2564 | 0.2465 | 0.2234
0.4 | / | 0.0176 | 0.0165 | 0.0183 | 0.0153 | 0.0145 | 0.0138 | 0.0105
0.6 | 0.1234 | 0.0146 | 0.0113 | 0.0153 | 0.0142 | 0.0105 | 0.0094 | 0.0067
0.8 | 0.0107 | 0.0122 | 0.0096 | 0.0134 | 0.0095 | 0.0094 | 0.0082 | 0.0056
1.0 | 0.0080 | 0.0108 | 0.0082 | 0.0113 | 0.0077 | 0.0079 | 0.0065 | 0.0050

### F.7 Inference Time Comparison

This section examines inference time across diverse neural operators applied to the Darcy Flow benchmark dataset in the fluid physics domain, elucidating the operators' computational efficiency in modeling fluid flow, as shown in Table 14.

Table 14: Evaluation of inference time across various neural operators on the Darcy Flow benchmark dataset in the domain of fluid physics.

Neural Operator | UNET | UFNO | FNO | MWT | LSM | CoNO
---|---|---|---|---|---|---
Inference (s) | 0.035 | 0.042 | 0.135 | 0.045 | 0.020 | 0.055

## Appendix G Limitation and Future Work

To enhance our understanding of CoNO, it is crucial to explore its mathematical and algorithmic principles thoroughly. We aim to uncover the latent space's learning mechanisms and build a solid theoretical foundation for complex operators. Our research highlights several key challenges that need careful investigation, including refining initialization procedures for fractional orders, designing streamlined architectures for complex neural operators, developing equivariant complex operators, and understanding the role of the Fractional Fourier Transform (FrFT) in the continuous dynamics of complex systems.
Additionally, our work raises questions about creating foundational models for Partial Differential Equations (PDEs). These research directions offer opportunities to further understand CoNO and contribute to the broader field of Scientific Machine Learning (SciML).

## Appendix H Broader Impact

This paper introduces innovative tools for scientific machine learning (SciML). By integrating Complex-Valued Neural Networks (CVNNs) and the Fractional Fourier Transform (FrFT) into the neural operator framework, we offer a novel approach to the challenges of partial differential equations (PDEs), particularly fractional PDEs, which lack explicit differential forms and are common in natural phenomena [1, 38]. Our research has implications for diverse fields such as biology, physics, and civil engineering, addressing a crucial scientific problem and paving the way for advancements in interdisciplinary problem-solving. We foresee no serious ethical issues with the proposed methodology.
The Secret Hyperbolic Life of Positive Scalar Curvature

Joachim Lohkamp

Mathematisches Institut, Universität Münster, Einsteinstrasse 62, Germany

Abstract

This survey introduces the _hyperbolic unfolding_ correspondence that links the geometric analysis of minimal hypersurfaces with that of Gromov hyperbolic spaces. Problems caused by hypersurface singularities often become solvable on associated Gromov hyperbolic spaces. Applied to scalar curvature geometry this yields _smoothing schemes_ that eliminate such singularities. We explain the two main lines of such smoothings: a top-down singular analysis using iterated blow-ups of the given hypersurface and bottom-up smoothings following the tree of blow-ups backwards.

###### Contents

1. Introduction
2. Singularities as Regular Boundaries
  2.1 Hyperbolic Unfoldings
  2.2 Reduction to Hyperbolic Geodesics
  2.3 Applications in Geometric Analysis
3. Minimal Splitting Factors - Top-Down Analysis
  3.1 Minimal Splitting Factors
  3.2 Stable Isoperimetry - Hyperbolic Geodesics Again
4. Smoothing Techniques - Bottom-up Constructions
  4.1 Surgeries Revisited
  4.2 Inductive Removal of Singularities
  4.3 From Ordinary to Locally Finite Homology

## 1 Introduction

The story began in the 70s when Hawking [H, Ch. 4] and Schoen and Yau [SY1] discovered the first cases of a remarkable $scal>0$-heredity one may state, in dimensions $n\geq 2$, as follows:

_Given a compact manifold $M^{n+1}$ with $scal>0$, any area minimizing hypersurface $(H^{n},g_{H})\subset(M^{n+1},g_{M})$ also has $scal>0$, after conformal deformations of $g_{H}$._

We indicate how to derive this powerful yet simple result. The first and second variations of $Area(H)$ of a hypersurface $H\subset M$ under a deformation by $f\cdot\nu$, where $\nu$ is the outward normal vector field of $H$ and $f\in C^{\infty}(H,\mathbb{R})$ with $supp\>f\subset reg(H)$ (= the set of regular points of $H$), are given by $\displaystyle Area^{\prime}(f)$ $\displaystyle=\int_{H}trA_{H}(z)\cdot f(z)\>dVol,$ $\displaystyle Area^{\prime\prime}(f)$ $\displaystyle=\int_{H}|\nabla_{H}f|^{2}+\left((trA_{H})^{2}-|A|^{2}-Ric(\nu,\nu)\right)\cdot f^{2}\>dVol$ where $trA_{H}$ is the mean curvature of $H$, $|A|^{2}$ is the sum of the squares of the principal curvatures of $H$ and $Ric$ the Ricci tensor of $M$. Since $H$ is supposed to be area minimizing, we have $Area^{\prime}(f)=0$ and $Area^{\prime\prime}(f)\geq 0$. That is, $trA_{H}=0$ and this gives (1) $\quad Area^{\prime\prime}(f)=\int_{H}|\nabla_{H}f|^{2}-\left(|A|^{2}+Ric(\nu,\nu)\right)\cdot f^{2}\>dVol\geq 0$ Now we recall the Gauß–Codazzi equation for hypersurfaces $|A|^{2}+Ric(\nu,\nu)=1/2\cdot\left(|A|^{2}+scal_{M}-scal_{H}+(trA_{H})^{2}\right)$ where $scal_{H}$ and $scal_{M}$ denote the scalar curvature of $H$ and $M$. Since $trA_{H}=0$ we may rewrite (1) as follows (here we assume $n\geq 3$; the computations slightly deviate when $n=2$): (2) $\int_{H}|\nabla f|^{2}+\frac{n-2}{4(n-1)}\cdot scal_{H}\cdot f^{2}dA\geq\int_{H}\frac{n}{2(n-1)}\cdot|\nabla f|^{2}+\frac{n-2}{4(n-1)}\cdot\left(|A|^{2}+scal_{M}\right)\cdot f^{2}dA.$ The left hand side of (2) is the variational integral for the first eigenvalue $\lambda_{1}$ of the conformal Laplacian $L_{H}$ on $H$. That is, when $scal_{M}>0$, inequality (2) shows that $\lambda_{1}>0$.
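For the reader's convenience, here is the elementary rearrangement behind the step from (1) to (2); it is a verification we add for exposition. Multiplying (1) by $\frac{n-2}{2(n-1)}$ and inserting the Gauß–Codazzi identity with $trA_{H}=0$ yields
$$\frac{n-2}{2(n-1)}\int_{H}|\nabla_{H}f|^{2}\>dVol\;\geq\;\frac{n-2}{4(n-1)}\int_{H}\left(|A|^{2}+scal_{M}-scal_{H}\right)\cdot f^{2}\>dVol.$$
Adding $\frac{n}{2(n-1)}\int_{H}|\nabla_{H}f|^{2}+\frac{n-2}{4(n-1)}\int_{H}scal_{H}\cdot f^{2}$ to both sides and using $\frac{n-2}{2(n-1)}+\frac{n}{2(n-1)}=1$ gives exactly (2).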
Following Kazdan and Warner [KW], the transformation law for scalar curvature under conformal transformation by the first eigenfunction $f_{1}>0$ can be used to show that $scal(f_{1}^{4/(n-2)}\cdot g_{H})\cdot f_{1}^{\frac{n+2}{n-2}}=L_{H}(f_{1})=\lambda_{1}\cdot f_{1}>0.$ In other words, we observe a reproduction of the scalar curvature constraint on $M$ on the lower dimensional space $H$, which is also called a scalar curvature _splitting factor_ of $M$. This suggests an appealing and versatile strategy: study scalar curvature by an inductive dimensional splitting until one reaches a space that is already understood. The problem is that these hypersurfaces are smooth submanifolds only in low dimensions $n\leq 7$. The ultimate goal of this paper is to explain how to use hyperbolic geometry to get such smooth splitting factors in arbitrary dimensions. The basic references are the papers [L1]–[L5]. The program is illustrated in the following diagram.

Figure 1: The hyperbolic unfolding gives us means to control the elliptic analysis on $H$, but it also reveals new geometric structures like canonical Semmes families on $H$.

Hyperbolic geometry is extensively used right from the beginning, in chapter 2. The smoothings occupy chapters 3 and 4. As a general remark, the theory we present naturally extends to considerably larger classes of almost minimizers, some of which do not even result from variational principles. Nevertheless, for the most part we talk about area minimizers to simplify the exposition.

## 2 Singularities as Regular Boundaries

The problem we encounter in dimensions $\geq 8$ is how to control $L_{H}$ near the singular set of $H$. The critical dimension $8$ comes from classical regularity theory. It says that an area minimizing hypersurface $H^{n}$ is smooth outside a potentially complicated singular set $\Sigma_{H}\subset M^{n+1}$ of codimension $\geq 8$. To overcome these subtleties we reinterpret $\Sigma_{H}$ as a boundary of $H\setminus\Sigma_{H}$ and show that, relative to $H\setminus\Sigma_{H}$, the singular set $\Sigma_{H}$ becomes regular, in a sense we explain below. We make use of some pieces of geometric measure theory, but we employ neither structural details of $\Sigma$ nor the full codimension estimate, only that $\Sigma$ has codimension $\geq 3$.

Motivation To motivate and properly describe the adequate notion of regularity of $\Sigma$ we consider the case of Euclidean domains. The validity of the boundary Harnack inequality on a Euclidean domain $G\subset\mathbb{R}^{n}$, in (3) below, is a litmus test for a reasonable elliptic analysis on $G$. It asserts that there are constants $A(G)$, $C(G)>1$ such that for any $p\in\partial G$, small $R>0$ and any two harmonic functions $u$, $v>0$ on $B_{A\cdot R}(p)\cap G$ which vanish along $B_{A\cdot R}(p)\cap\partial G$, (3) $u(x)/v(x)\leq C\cdot u(y)/v(y)\mbox{ for all }x,\,y\in B_{R}(p)\cap G.$ Remarkably, there is a purely geometric way to characterize such domains. The validity of (3) is essentially equivalent [Ai] to the regularity condition that $G$ is a uniform domain.

###### Definition 2.1 A Euclidean domain $G\subset\mathbb{R}^{n}$ is called uniform if there is some $c\geq 1$ such that any two points $p,q\in G$ can be joined by a uniform curve. This is a rectifiable curve $\gamma_{p,q}:[a,b]\rightarrow G$ going from $p$ to $q$ such that:

* • _Quasi-geodesic:_ $l(\gamma_{p,q})\leq c\cdot d(p,q)$.
* • _Twisted double cones:_ For any $z\in\gamma_{p,q}$ let $l_{min}(\gamma_{p,q}(z))$ be the minimum of the lengths of the two subcurves of $\gamma_{p,q}$ from $p$ to $z$ and from $z$ to $q$. Then $l_{min}(\gamma_{p,q}(z))\leq c\cdot dist(z,\partial G).$

One may describe uniformity as a quantitative and scaling invariant form of connectivity.

$\mathcal{S}$-Structures Turning to $H\setminus\Sigma_{H}$ relative to its boundary $\Sigma_{H}$, we observe that $G\subset\mathbb{R}^{n}$ is flat until we reach $\partial G$, whereas $H\setminus\Sigma_{H}$ degenerates towards $\Sigma_{H}$ with diverging second fundamental form. The isoperimetric inequality for the area minimizer $H$ allows us to compensate for the additional twist from the divergence of the second fundamental form and to establish the even stronger $\mathcal{S}$-uniformity of $H$. To make this precise we introduce a measure $\langle A\rangle_{H}$ for this degeneration. An assignment $H\mapsto\langle A\rangle_{H}$ of a non-negative, measurable function to any connected area minimizing hypersurface $H$ is called an $\mathcal{S}$-transform provided

* • (S1) $\langle A\rangle_{H}$ is naturally assigned to $H$; in other words, the assignment commutes with the convergence of sequences of underlying area minimizers.
* • (S2) $\langle A\rangle_{H}\geq|A_{H}|$, with $\langle A\rangle_{H}\equiv 0$ if $H\subset M$ is totally geodesic; otherwise, $\langle A\rangle_{H}$ is strictly positive.
* • (S3) When $H$ is not totally geodesic, the $\mathcal{S}$-distance $\delta_{\langle A\rangle}:=1/\langle A\rangle$ is well-defined and it is $L_{\langle A\rangle}$-Lipschitz regular, for some constant $L_{\langle A\rangle}=L(\langle A\rangle,n)>0$: (4) $|\delta_{\langle A\rangle}(p)-\delta_{\langle A\rangle}(q)|\leq L_{\langle A\rangle}\cdot d(p,q),\mbox{ for }p,q\in H\setminus\Sigma.$

The $\mathcal{S}$-distance is a measure for the distance to singular and highly curved portions of $H$ that also takes the curvature into account. Constructions of $\mathcal{S}$-transforms are given in [L1], where we merge $g_{H}$ and $A_{H}$ in a suitable way. The main geometric application is the following result.

###### Theorem 2.2 There exists some $c>0$ such that $H\setminus\Sigma$ is an $\mathcal{S}$-uniform space. This means that any pair $p,q\in H\setminus\Sigma$ can be joined by an $\mathcal{S}$-uniform curve in $H\setminus\Sigma$, i.e., a rectifiable curve $\gamma_{p,q}:[a,b]\rightarrow H\setminus\Sigma$ with $\gamma_{p,q}(a)=p$, $\gamma_{p,q}(b)=q$ and so that:

* • _Quasi-geodesic:_ $l(\gamma)\leq c\cdot d(p,q).$
* • _Twisted double $\mathcal{S}$-cones:_ $l_{min}(\gamma_{p,q}(z))\leq c\cdot\delta_{\langle A\rangle}(z)$ for any $z\in\gamma_{p,q}$.

Figure 2: Schematic view of $H$, where $\Sigma$ is just the bottom line. The cold colors indicate low and the hot colors high curvature of $H$. $\mathcal{S}$-uniform curves are also sensitive to the underlying curvature.

Some Remarks In this theory the totally geodesic hypersurfaces play the rôle of the trivial case. They are always smooth submanifolds [L1, Corollary A.6] and in the non-compact case they are just Euclidean hyperplanes. In this case, many results in this paper are either obvious or they degenerate to conventions.
The Lipschitz regular $\mathcal{S}$-distance $\delta_{\langle A\rangle}$ admits a Whitney type $C^{\infty}$-smoothing $\delta_{\langle A\rangle^{*}}$ which satisfies (S1)–(S3) and is quasi-natural in the sense that $c_{1}\cdot\delta_{\langle A\rangle}(x)\leq\delta_{\langle A\rangle^{*}}(x)\leq c_{2}\cdot\delta_{\langle A\rangle}(x)$, for some constants $c_{1}$, $c_{2}>0$, cf. [L1, Proposition B.3]. We note in passing that in older papers, now superseded by [L1]–[L3], we used the term _skin transform_ for the subclass of $\mathcal{S}$-transforms that includes a so-called Hardy inequality, now called _Hardy $\mathcal{S}$-transforms_, cf. [L3]. The renaming had no deeper reason: the Hardy inequality is only needed in applications, not in the basic theory. Secondly, the level sets of $\langle A\rangle_{H}$, the _$|A|$-skins_, originally served important technical purposes that are now covered by functional relations we have for $\langle A\rangle_{H}$.

### 2.1 Hyperbolic Unfoldings

Remarkably, the uniformity of $G$ is also equivalent to the purely geometric condition that the quasi-hyperbolic metric, which we get from conformally deforming the Euclidean metric $g_{Eucl}$ to $dist(\cdot,\partial G)^{-2}\cdot g_{Eucl}$, is Gromov hyperbolic, has bounded geometry and its Euclidean boundary is homeomorphic to the Gromov boundary, cf. [BHK]. To make this plausible, we first observe that a (twisted) cone conformally deformed by $dist(\cdot,\partial G)^{-2}$ roughly looks like a piece of a hyperbolic space (a generalized Poincaré metric), and the scaling invariance of the uniformity conditions globalizes this to the whole space. We will see that an area minimizer $H$ with singular set $\Sigma$ (= boundary of $H\setminus\Sigma$) has a similar hyperbolic nature. This is more challenging since $H\setminus\Sigma$ degenerates as we approach $\Sigma$. In zeroth order this can be compared with the extension of the Riemann uniformization from complex domains to arbitrary Riemann surfaces. For starters we recall the notions of Gromov hyperbolicity and of bounded geometry. Gromov hyperbolicity has no local impact but strong consequences for the geometry near infinity. We mention [BH] as a general reference.

###### Definition 2.3 A locally compact geodesic metric space $X$ is Gromov hyperbolic if its geodesic triangles are $\mathbf{\delta}$-thin for some $\delta=\delta_{X}>0$. That is, each point on the edge of any geodesic triangle is within $\delta$-distance of one of the other two edges.

Two geodesic rays in $X$ are _equivalent_ if they have finite Hausdorff distance. The set $\partial_{G}X$ of equivalence classes $[\gamma]$ of geodesic rays from a fixed base point $p\in X$ is called the Gromov boundary of $X$. This definition of $\partial_{G}X$ is independent of $p$. The space $\overline{X}_{G}=X\cup\partial_{G}X$ admits a natural topology that makes it a compact metrizable space. It is called the Gromov compactification of $X$.

###### Definition 2.4 A Riemannian manifold $M$ has $(\varrho,\ell)$-bounded geometry if there exist constants $\varrho={\varrho_{M}}>0$ and $\ell=\ell_{M}\geq 1$ for $M$ such that for each ball $B_{\varrho}(p)\subset M$ there is a smooth $\ell$-bi-Lipschitz chart $\phi_{p}$ onto an open set $U_{p}\subset\mathbb{R}^{n}$ with its Euclidean metric.

The $\mathcal{S}$-metric on $H\setminus\Sigma$ In general the quasi-hyperbolic metric $dist(\cdot,\Sigma_{H})^{-2}\cdot g_{H}$ is not well-behaved on $H\setminus\Sigma$.
It neither has bounded geometry nor does it have a good blow-up behavior, that is, the corresponding quasi-hyperbolic metric on tangent cones of points in $\Sigma_{H}\subset H$ does not approximate that on $H\setminus\Sigma$. The key point about $\mathcal{S}$-uniformity of $H$ is that it implies all the desirable properties for the so-called $\mathcal{S}$-metric $d_{\langle A\rangle}=d_{\langle A\rangle_{H}}$ defined by (5) $d_{\langle A\rangle}(x,y):=\inf\Bigl{\\{}\int_{\gamma}\langle A\rangle\,\,\Big{|}\,\gamma\subset H\setminus\Sigma\mbox{ rectifiable curve joining }x\mbox{ and }y\Bigr{\\}}$ for $x$, $y\in H\setminus\Sigma$. The metric $d_{\langle A\rangle}$ is even well-defined for _smooth_ $H$ where $\Sigma=\emptyset$. Alternatively, the $\mathcal{S}$-metric can be written $\langle A\rangle^{2}\cdot g_{H}$, but this is not a regular Riemannian metric since $\langle A\rangle$ is merely a locally Lipschitz function. However, one may use the Whitney type $C^{\infty}$-smoothing $\delta_{\langle A\rangle^{*}}$ if one needs a smooth version. Now we can formulate the following hyperbolization result, cf. [L1, Theorem 1.11, Proposition 3.10 and Theorem 1.13]. ###### Theorem 2.5 The $\mathcal{S}$-metric $d_{\langle A\rangle}$ has the following properties: * • The metric space $(H\setminus\Sigma,d_{\langle A\rangle})$ and its quasi- isometric Whitney smoothing, i.e., the smooth Riemannian manifold $(H\setminus\Sigma,d_{\langle A\rangle^{*}})=(H\setminus\Sigma,1/\delta_{\langle A\rangle^{*}}^{2}\cdot g_{H})$, are complete Gromov hyperbolic spaces with bounded geometry. * • $d_{\langle A\rangle}$ is natural, that is, the assignment $H\mapsto d_{\langle A\rangle_{H}}$ commutes with compact convergence of regular domains of the underlying area minimizers. The typical example is the blow-up around singular points and the resulting tangent cone approximations. * • For any _singular_ $H$ the identity map on $H\setminus\Sigma$ extends to homeomorphisms $\displaystyle\widehat{H}$ $\displaystyle\cong\overline{(H\setminus\Sigma,d_{\langle A\rangle})}_{G}\cong\overline{(H\setminus\Sigma,d_{\langle A\rangle^{*}})}_{G}\,\mbox{ and }$ $\displaystyle\widehat{\Sigma}$ $\displaystyle\cong\partial_{G}(H\setminus\Sigma,d_{\langle A\rangle})\cong\partial_{G}(H\setminus\Sigma,d_{\langle A\rangle^{*}}),$ where for $X=(H\setminus\Sigma,d_{\langle A\rangle})$ or $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$, $\overline{X}_{G}$ and $\partial_{G}(X)$ denote the Gromov compactification and the Gromov boundary, respectively. The spaces $(H\setminus\Sigma,d_{\langle A\rangle})$ and $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$ are conformally equivalent to the original space $(H\setminus\Sigma,g_{H})$. We refer to both these spaces as hyperbolic unfoldings of $(H\setminus\Sigma,g_{H})$. Here, $\widehat{H}$ and $\widehat{\Sigma}$ denote the one-point compactifications of $H$ and $\Sigma$ in the non-compact case of Euclidean hypersurfaces with the extra condition that for $H\subset\mathbb{R}^{n+1}$ we always add the point $\infty$ to $\Sigma$, even for compact $\Sigma$. ### 2.2 Reduction to Hyperbolic Geodesics Hyperbolic unfoldings reveal that the potential theory of elliptic operators $L$, like the conformal Laplacian or the Jacobi field operator, on area minimizers conforms to the underlying geometry. 
Intuitively, the evolution of the minimal Green's function $\widetilde{G}$ of $\widetilde{L}=\delta_{\langle A\rangle^{*}}^{2}\cdot L$ on $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$ concentrates along hyperbolic geodesics, and this largely controls the global analysis of $\widetilde{L}$. (The minimal Green's function $\widetilde{G}$ of an operator $\widetilde{L}$ on a manifold $M$ is the smallest function $G(x,y)>0$ on $M\times M$, singular on $\{(x,x)\>|\>x\in M\}$, satisfying the equation $\widetilde{L}\,G(\cdot,y)=\delta_{y}$, where $\delta_{y}$ is the Dirac function in $y$.)

This focussing/squeezing effect of hyperbolic geodesics is reflected in so-called $\boldsymbol{3G}$-inequalities, named for the triple appearance of $G$ in one inequality, which one may interpret as follows:

_Let $x,y,z\in(H\setminus\Sigma,d_{\langle A\rangle^{*}})$ be such that $y$ lies on a hyperbolic geodesic connecting $x$ and $z$. Then, up to universal constants, there are as many "Brownian particles" travelling directly from $x$ to $z$, measured by $\widetilde{G}(x,z)$, as there are particles travelling from $x$ to $y$, measured by $\widetilde{G}(x,y)$, and then from $y$ to $z$, measured by $\widetilde{G}(y,z)$. That is, we have $\widetilde{G}(x,z)\approx\widetilde{G}(x,y)\cdot\widetilde{G}(y,z)$._

The potential theory on hyperbolic manifolds of bounded geometry naturally extends to hyperbolic graphs of bounded valence, where the constant $\delta$ measures the deviation of the graph from a tree. In turn, we can approximate $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$, along with its potential theory, by such graphs. This formalizes the intuition of a reduction to hyperbolic geodesics. The admissible operators on $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$ to make this work are the adapted weakly coercive operators. These are the linear second order elliptic operators $L$ which are uniformly elliptic on $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$ and weakly coercive, that is, there is a $u>0$ with $L\,u\geq\varepsilon\cdot u$ for some $\varepsilon>0.$ For an exposition of this potential theory on Gromov hyperbolic spaces, which is largely due to Ancona [A1, A2], see [KL]. An interesting aspect of this theory is that the geometric and the analytic conditions work hand in hand: the pair of conditions _bounded geometry_ $\leftrightarrow$ _uniformly elliptic_ is employed for most of the basic estimates and relations, and then these results are critically improved by a tandem use of _Gromov hyperbolicity_ $\leftrightarrow$ _weak coercivity_.

The point about hyperbolic unfoldings is that the transparent potential theory we have on $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$ transfers one-by-one to corresponding classes of operators on the original area minimizer. This way we get, on the $\mathcal{S}$-uniform spaces $H\setminus\Sigma$, the same type of potential theoretic results we have for uniform Euclidean domains, with the difference that we have literally outsourced all the analysis to the hyperbolic unfolding as our workbench. This is the unfolding correspondence, and the relevant operators $L$ on $(H\setminus\Sigma,g_{H})$ are called _$\mathcal{S}$-adapted_ provided their counterpart $\delta_{\langle A\rangle^{*}}^{2}\cdot L$ on $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$ is adapted weakly coercive.
More explicitly, we set

###### Definition 2.6 An elliptic operator $L$ is $\mathcal{S}$-adapted when $-L(u)=\sum_{i,j}a_{ij}\cdot\frac{\partial^{2}u}{\partial x_{i}\partial x_{j}}+\sum_{i}b_{i}\cdot\frac{\partial u}{\partial x_{i}}+c\cdot u$ so that for some $k\geq 1$ and suitable charts (6) $k^{-1}\cdot\sum_{i}\xi_{i}^{2}\leq\sum_{i,j}a_{ij}(p)\cdot\xi_{i}\xi_{j}\leq k\cdot\sum_{i}\xi_{i}^{2},$ (7) $\delta^{\beta}_{\langle A\rangle}(p)\cdot|a_{ij}|_{C^{\beta}(B_{\Theta(p)}(p))}\leq k,\quad\delta_{\langle A\rangle}(p)\cdot|b_{i}|_{L^{\infty}(B_{\Theta(p)}(p))}\leq k\mbox{ and }\delta^{2}_{\langle A\rangle}(p)\cdot|c|_{L^{\infty}(B_{\Theta(p)}(p))}\leq k,\mbox{ for }\Theta(p)=c_{0}/\langle A\rangle(p),\mbox{ for some constant }c_{0}(H)>0,$ and there is a $u>0$ with $L\,u\geq\varepsilon\cdot\langle A\rangle^{2}\cdot u\mbox{ for some }\varepsilon>0.$

A frequently considered type of elliptic problem is that of eigenvalues, typically for symmetric operators. For these it is possible and useful to bring the weak coercivity condition into a variational form. Then the weak coercivity condition is equivalent to the existence of a positive constant $\tau>0$ such that the Hardy type inequality (8) $\int_{H}f\cdot Lf\,dV\,\geq\,\tau\cdot\int_{H}\langle A\rangle^{2}\cdot f^{2}dV$ holds for any $f\in C^{\infty}_{0}(H\setminus\Sigma)$. That is, $L$ has an $\langle A\rangle$-weighted positive first eigenvalue $\lambda^{\langle A\rangle}_{1}(L)>0$, in this singular case commonly called the principal eigenvalue. We also mention that in the singular case, in sharp contrast to the smooth case, we have positive $\langle A\rangle$-weighted eigenfunctions for any $\lambda<\lambda^{\langle A\rangle}_{1}(L)$.

The boundary Harnack inequality we get for $\mathcal{S}$-adapted $L$ along the boundary $\widehat{\Sigma}$ of $(H\setminus\Sigma,g_{H})$ differs in two ways from that in the case of uniform domains equipped with the Laplacian.

* • In place of the balls of (3), we choose hyperbolic halfspaces $\mathcal{N}^{\delta}_{i}(z)$ in $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$, with completions $\mathbf{N}^{\delta}_{i}(z)$, uniformly contracting to $z\in\widehat{\Sigma}$ for $i\rightarrow\infty$. One may compare this with the Poincaré disc model on $B_{1}(0)\subset\mathbb{R}^{2}$, where the $B_{\rho}(p)\cap B_{1}(0)$, $\rho\in(0,1)$, for flat discs $B_{\rho}(p)$, $p\in\partial B_{1}(0)$, become hyperbolic halfspaces.
* • Solutions $u>0$ of $L\,f=0$ will usually diverge to infinity when we approach $\Sigma$. The generalization of vanishing boundary data is that of solutions of minimal growth towards $\Sigma_{H}$ when compared to other solutions $v>0$.

###### Theorem 2.7 There exists a constant $C(H,L)>1$ such that for any $z\in\widehat{\Sigma}$ and any two solutions $u$, $v>0$ of $L\,f=0$ on $H\setminus\Sigma$ with minimal growth along $\mathbf{N}^{\delta}_{i}(z)\cap\widehat{\Sigma}$, we have (9) $u(x)/v(x)\leq C\cdot u(y)/v(y)\mbox{ for all }x,\,y\in\mathcal{N}^{\delta}_{i+1}(z).$

Examples To relate the general setup of $\mathcal{S}$-adapted operators to some geometrically relevant examples, we upgrade the $\mathcal{S}$-transform $\langle A\rangle_{H}$ to a Hardy $\mathcal{S}$-transform, cf. [L3].
This is an $\mathcal{S}$-transform that additionally satisfies the following Hardy type inequality: for any $f\in C^{\infty}(H\setminus\Sigma,\mathbb{R})$ compactly supported in $H\setminus\Sigma$ we have $\int_{H}|\nabla f|^{2}+|A_{H}|^{2}\cdot f^{2}dA\geq\tau\cdot\int_{H}\langle A\rangle_{H}^{2}\cdot f^{2}dA,\mbox{ for some }\tau=\tau(\langle A\rangle,H)\in(0,1).$ Let $H^{n}\subset M^{n+1}$ be a singular area minimizer. Then we have for any Hardy $\mathcal{S}$-transform:

* • For $scal_{M}\geq 0$ we have $\lambda^{\langle A\rangle}_{1}(L_{H})>0$ for the conformal Laplacian $L_{H}$.
* • We have $\lambda^{\langle A\rangle}_{1}(J_{H})\geq 0$ for the Jacobi field operator $J_{H}:=-\Delta_{H}-|A|^{2}-Ric_{M}(\nu,\nu)$.

On area minimizers, we can turn any operator that satisfies (6) and (7) into an $\mathcal{S}$-adapted operator $L$ provided $\lambda^{\langle A\rangle}_{1}(L)>-\infty$, since $L-\lambda\cdot\langle A\rangle^{2}\cdot Id$ becomes $\mathcal{S}$-adapted for $\lambda<\lambda^{\langle A\rangle}_{1}(L)$.

### 2.3 Applications in Geometric Analysis

A basic application of the boundary Harnack inequality is a transparent Martin theory.

###### Definition 2.8 Let $X$ be a non-compact Riemannian manifold and $L$ be a linear second order elliptic operator on $X$ with a minimal Green's function $G:X\times X\rightarrow(0,\infty]$. We choose a base point $p$ and consider the space $S$ of sequences $s=\{p_{n}\}$ in $X$, $n\geq 1$, such that

* • $s$ has no accumulation points in $X$.
* • $K(x,p_{n}):=G(x,p_{n})/G(p,p_{n})\rightarrow K_{s}(x)$ compactly to some function $K_{s}$ on $X$ as $n\rightarrow\infty$.

The Martin boundary $\partial_{M}(X,L)$ is the quotient of $S$ modulo the following relation on $S$: $s\sim s^{*}$ if and only if $K_{s}\equiv K_{s^{*}}$. Moreover, we define the Martin kernel $k(x;y)$ on $X\times\partial_{M}(X,L)$ by $k(x;y):=K_{s}(x)$, for some sequence $s$ representing $y\in\partial_{M}(X,L)$.

As for the Gromov boundary, these definitions do not depend on the choice of the base point $p$. The (metrizable) Martin topology on $\overline{X}_{M}:=X\cup\partial_{M}(X,L)$ is defined through the convergence of the functions $K(x,p_{n})$. $\overline{X}_{M}$ and $\partial_{M}(X,L)$ turn out to be compact. $\overline{X}_{M}$ is called the Martin compactification of $(X,L)$. A point in the convex set $S_{L}(X)$ of solutions $u>0$ of $L\,f=0$ normalized by $u(p)=1$ is called extremal if it cannot be written as a non-trivial convex combination of other points of $S_{L}(X)$. The subset $\partial^{0}_{M}(X,L)\subset\partial_{M}(X,L)$ of extremal functions belonging to $\partial_{M}(X,L)$ is called the minimal Martin boundary. The point about $\partial^{0}_{M}(X,L)$ is that for any solution $v>0$ of $L\,f=0$ we have a unique finite Radon measure $\mu=\mu(v)$ on $\partial^{0}_{M}(X,L)$ so that (10) $v(x)=\int_{\partial^{0}_{M}(X,L)}k(x;y)\,d\mu(y).$ The problem with this Martin integral is that a proper understanding of $\partial^{0}_{M}(X,L)$ can be very difficult. Different from classical contour formulas, $\partial_{M}(X,L)$ and $\partial^{0}_{M}(X,L)$ may strongly depend on $L$ and they usually differ from intrinsically defined topological boundaries of $X$. Remarkably, these problems disappear for $\mathcal{S}$-adapted operators on area minimizers.

###### Theorem 2.9 For any $\mathcal{S}$-adapted operator $L$ on $H\setminus\Sigma$ we have

* • the identity map on $H\setminus\Sigma$ extends to a homeomorphism between $\widehat{H}$ and the Martin compactification $\overline{(H\setminus\Sigma)}_{M}$.
* • all Martin boundary points are minimal: $\partial^{0}_{M}(H\setminus\Sigma,L)\equiv\partial_{M}(H\setminus\Sigma,L)$.

Thus, $\widehat{\Sigma}$ and the minimal Martin boundary $\partial^{0}_{M}(H\setminus\Sigma,L)$ are homeomorphic.

Quantitative Results The unfolding correspondence is not restricted to the transition between $L$ on $(H\setminus\Sigma,g_{H})$ and $\delta_{\langle A\rangle^{*}}^{2}\cdot L$ on $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$. Inspired by the Doob transform in stochastic analysis, we define (11) $L^{\mathcal{S}}\phi:=\delta^{(n+2)/2}_{\langle A\rangle^{*}}\cdot L({\delta^{-(n-2)/2}_{\langle A\rangle^{*}}}\cdot\phi)\mbox{ for sufficiently regular }\phi\mbox{ on }(H\setminus\Sigma,d_{\langle A\rangle^{*}}).$ We call $L^{\mathcal{S}}$ the $\mathcal{S}$-Doob transform of $L$. The minimal Green's functions $G$ of an $\mathcal{S}$-adapted $L$ on $(H\setminus\Sigma,d_{H})$ and $G^{\mathcal{S}}$ of the adapted weakly coercive $L^{\mathcal{S}}$ on $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$ satisfy (12) $G^{\mathcal{S}}(x,y)=\delta_{\langle A\rangle^{*}}^{(n-2)/2}(x)\cdot\delta_{\langle A\rangle^{*}}^{(n-2)/2}(y)\cdot G(x,y),\mbox{ for }x\neq y\in H\setminus\Sigma.$ There are constants $\beta(H),\alpha(H)>0$ so that (typically applied along hyperbolic geodesics) (13) $G^{\mathcal{S}}(x,y)\leq\beta\cdot\exp(-\alpha\cdot d_{\langle A\rangle^{*}}(x,y)),\mbox{ for }x,y\in H\setminus\Sigma\mbox{ and }d_{\langle A\rangle^{*}}(x,y)>2\cdot\varrho_{(H\setminus\Sigma,d_{\langle A\rangle^{*}})}.$ In the case of the conformal Laplacian, the combination of (12) and (13) with the axioms for $\langle A\rangle$ yields upper radius and distance estimates after conformal deformations by $G$ and, via the boundary Harnack inequality, for arbitrary solutions of minimal growth.

Minimal Growth under Blow-Ups In the case of a singular minimal cone $C$ the Martin theory shows that there is exactly one solution $u>0$ with minimal growth towards $\Sigma_{C}$, up to multiples. This means that $\mu_{u}$ is the Dirac measure in $\infty\in\widehat{\Sigma}_{C}$. Now we assume that $L$ reproduces under scalings; $L_{C}$ and $J_{C}$ are examples. This implies a separation of variables: $u=\psi_{C}(\omega)\cdot r^{\alpha_{C}}$, $(\omega,r)\in(\partial B_{1}(0)\cap C\setminus\Sigma_{C})\times\mathbb{R}^{>0}$, for some function $\psi_{C}$ on $\partial B_{1}(0)\cap C\setminus\Sigma_{C}$ and $\alpha_{C}<0$. When $u$ solves $L\,v=0$ with minimal growth along $B\cap\widehat{\Sigma}$ for a ball $B\subset H$ around some $p\in\Sigma$, and we consider any solution $v$ on any tangent cone $C$ of $H$ in $p$ induced by $u$ under blow-up, we find that $v$ has minimal growth towards all of $\Sigma_{C}$; asymptotically, $u$ looks like $\psi_{C}(\omega)\cdot r^{\alpha_{C}}$. The outcome is a top-down to bottom-up asymptotic analysis towards $p$.

$\bullet$ Top-Down Given a solution $u>0$ on $H\setminus\Sigma_{H}$ of minimal growth near some $p\in\Sigma_{H}$, choose some tangent cone $C$ and consider the (uniquely determined) induced solution of minimal growth towards $\Sigma_{C}\subset C$ on $C\setminus\Sigma_{C}$. Blow up in a point of $\Sigma_{C}\setminus\{0\}$ and iterate this process, at most $\dim H-7$ times, until we reach a product cone $\mathbb{R}^{m}\times C^{n-m}$, for some $C^{n-m}\subset\mathbb{R}^{n-m+1}$ singular only in $0$. We think of this as a terminal node in a blow-up tree with root $H$.
The uniqueness shows that the induced minimal solutions we end up with are $\mathbb{R}^{m}$-translation symmetric and amenable to an explicit description.

$\bullet$ Bottom-Up Starting from terminal nodes $\mathbb{R}^{m}\times C^{n-m}$, for some $C^{n-m}\subset\mathbb{R}^{n-m+1}$ singular only in $0$, we transfer our understanding of the induced solutions on the tangent cones stepwise backwards to $H$ in the blow-up tree. We get an asymptotic portrait of the solution $u$ on $H\setminus\Sigma$ near $p$ from the family of Martin theories on $H\setminus\Sigma$ and on its (iterated) tangent cones.

## 3 Minimal Splitting Factors - Top-Down Analysis

For a compact singular area minimizer $H^{n}$ in some $scal>0$-manifold $M^{n+1}$ we recall that $\lambda^{\langle A\rangle}_{H}:=\lambda^{\langle A\rangle}_{1}(L_{H})>0$. We apply our theory to get a nicely controllable conformal but still singular $scal>0$-geometry on $H^{n}$, the minimal splitting factor geometry. One may think of it as some kind of $\boldsymbol{scal>0}$-twin of $H^{n}$, and we will explain why.

### 3.1 Minimal Splitting Factors

To this end, we can use neither $L_{H}$ nor $L_{H,\lambda^{\langle A\rangle}_{H}}=L_{H}-\lambda^{\langle A\rangle}_{H}\cdot\langle A\rangle^{2}\cdot Id$. When we deform $H$, more precisely $H\setminus\Sigma$, by solutions of $L_{H}\phi=0$ we get a scalar flat metric. One may feel tempted to consider solutions of $L_{H,\lambda^{\langle A\rangle}_{H}}\phi=0$, but this operator is no longer $\mathcal{S}$-adapted, simply because it obviously has a vanishing principal eigenvalue, and many of the results do not apply. Instead we choose (super)solutions of $L_{H,\lambda}\phi=L_{H}\phi-\lambda\cdot\langle A\rangle^{2}\cdot\phi=0$ for (14) $0<\lambda<\lambda^{\langle A\rangle}_{H}.$ Due to the locally Lipschitz regular coefficients of $L_{H,\lambda}=L_{H}-\lambda\cdot\langle A\rangle^{2}\cdot Id$, solutions of $L_{H,\lambda}\,\phi=0$ are $C^{2,\alpha}$-regular, for any $\alpha\in(0,1)$. This suggests the following regularity assumptions. For $\lambda<\lambda^{\langle A\rangle}_{H}$, let $\Phi>0$ be a $C^{2,\alpha}$-supersolution of $L_{H,\lambda}\phi=0$ on $H\setminus\Sigma_{H}$ so that:

* • For compact area minimizers $H$: $\Phi$ is a solution in a neighborhood of $\Sigma$ and it has minimal growth towards $\Sigma$.
* • For non-compact Euclidean area minimizers $H$: $\Phi$ is a solution on $H\setminus\Sigma_{H}$ with minimal growth towards $\Sigma$.

###### Theorem 3.1 The metric completion $(\widehat{H\setminus\Sigma},\widehat{d_{\mathcal{S}}}(\Phi))$ of $(H\setminus\Sigma,\Phi^{4/(n-2)}\cdot g_{H})$ is homeomorphic to $(H,d_{H})$. Thus, we can write it as $(H,d_{\mathcal{S}}(\Phi))$. The Hausdorff dimension of $\Sigma$ relative to $(H,d_{\mathcal{S}}(\Phi))$ is $\leq n-7$.

Here $(H,d_{H})$ is the metric space that results from the embedding $H^{n}\subset M^{n+1}$. We write simply $d_{\mathcal{S}}$ when the specific choice of $\Phi$ is not needed or already known from the context.

Some Details To get the homeomorphism, we use the $\mathcal{S}$-Doob transform (11) and the relation (12) to transfer the basic upper growth estimate (13) to the minimal Green's function of $L_{H,\lambda}$ on $(H\setminus\Sigma,g_{H})$. It also applies to $\Phi$ by the boundary Harnack inequality. This shows that $(\widehat{H\setminus\Sigma},\widehat{d_{\mathcal{S}}})$ is not larger than $(H,d_{H})$. This works as soon as $\lambda<\lambda^{\langle A\rangle}_{H}$.
Remarkably, we need $\lambda>0$ to also get lower estimates showing that it is not smaller and, hence, to establish the homeomorphism from the Bombieri–Giusti Harnack inequality [BG]. One might expect that the Hausdorff dimension of $\Sigma$ relative to $(H,d_{\mathcal{S}})$ must be the same as for area minimizers. However, the Hausdorff dimension is not a topological but only a bi-Lipschitz invariant, while the identity map from $(H,d_{H})$ to $(H,d_{\mathcal{S}})$ is not Lipschitz continuous. Hölder continuous homeomorphisms can increase the dimension of a subset of dimension $a\in(0,n)$ to any value $b\in(a,n)$, cf. [Bi]. We actually need the minimal growth of $\Phi$ and essential parts of the hyperbolic unfolding theory to control the Hausdorff dimension.

Metric Measure Spaces We augment $(H,d_{\mathcal{S}}(\Phi_{H}))$ to a metric measure space. To this end we show that there is a canonical extension of $\Phi^{2\cdot n/(n-2)}\cdot\mu_{H}$ on $H\setminus\Sigma$ to a measure $\mu_{\mathcal{S}}$ on $H$, where $\mu_{H}$ is the $n$-dimensional Hausdorff measure on $(H^{n},g_{H})\subset(M^{n+1},g_{M})$. In fact, this extension $\mu_{\mathcal{S}}$ is a Borel measure on $(H,d_{\mathcal{S}})$, cf. [HKST, pp. 62–64].

###### Definition 3.2 We call $(H,d_{\mathcal{S}})$ a minimal splitting factor of its ambient space $M$ and we define the minimal factor measure $\mu_{\mathcal{S}}$ by $\mu_{\mathcal{S}}(E):=\int_{E\setminus\Sigma_{H}}\Phi^{2\cdot n/(n-2)}\cdot d\mu_{H},$ for any Borel set $E\subset H$.

The small Hausdorff dimension of $\Sigma\subset(H,d_{\mathcal{S}}(\Phi_{H}))$ ensures that $\mu_{\mathcal{S}}$ is an outer regular measure; it also allows us to define $\mu^{n-1}_{\mathcal{S}}$ for hypersurfaces within $(H,d_{\mathcal{S}}(\Phi_{H}))$. We get $d\mu^{n-1}_{\mathcal{S}}$ from extending $\Phi^{2\cdot(n-1)/(n-2)}\cdot d\mu^{n-1}_{H}$ on $H\setminus\Sigma$, where $d\mu_{H}^{n-1}$ is the hypersurface element on $(H\setminus\Sigma,g_{H})$.

###### Theorem 3.3 We consider $(H,d_{\mathcal{S}}(\Phi_{H}))$, some $p\in\Sigma_{H}$ and some tangent cone $C$ in $p$. Then we get the following blow-up invariance: Any sequence $(H,\tau_{i}\cdot d_{\mathcal{S}}(\Phi_{H}))$, scaled by a sequence $\tau_{i}\rightarrow\infty$, $i\rightarrow\infty$, around $p$, subconverges, and the limit of any converging subsequence is $(C,d_{\mathcal{S}}(\Phi_{C}))$ for some tangent cone $C$. $(C,d_{\mathcal{S}}(\Phi_{C}))$ is invariant under scaling around $0\in C$, that is, it is again a cone.

This is the geometric counterpart of the top-down blow-up analysis of minimal growth solutions we discussed in the previous chapter. The result means that $(H,d_{\mathcal{S}},\mu_{\mathcal{S}})$ has a simple asymptotic geometry near $\Sigma$ and it admits inductive tangent cone reductions similar to area minimizers. Note that the principal eigenvalues of $H$ and $C$ usually differ, with $\lambda^{\langle A\rangle}_{H}<\lambda^{\langle A\rangle}_{C}$. That is, we would encounter non-principal eigenvalues in the blow-up analysis even if we started from $\lambda^{\langle A\rangle}_{H}$.

### 3.2 Stable Isoperimetry - Hyperbolic Geodesics Again

A vital feature of minimal splitting factors is that they still satisfy an isoperimetric inequality.
###### Theorem 3.4

There are some constants $\gamma(H)>0$, $\gamma^{*}(H)>0$ so that for every ball $B_{\rho}$ and any open set $U\subset H$ with compact closure and rectifiable boundary $\partial U$:

(15) $\mu_{\mathcal{S}}(U)^{(n-1)/n}\leq\gamma\cdot\mu^{n-1}_{\mathcal{S}}(\partial U),$

(16) $\min\\{\mu_{\mathcal{S}}(B_{\rho}\cap U),\mu_{\mathcal{S}}(B_{\rho}\setminus U)\\}^{(n-1)/n}\leq\gamma^{*}\cdot\mu^{n-1}_{\mathcal{S}}(B_{\rho}\cap\partial U).$

The proof has two steps. We first show that $(H,d_{\mathcal{S}},\mu_{\mathcal{S}})$ is doubling and has a volume decay property of order $n$. Both follow from the stronger Ahlfors regularity estimate that for some $a(H),b(H)>0$,

(17) $a\cdot r^{n}\leq Vol(B_{r}(q),d_{\mathcal{S}})\leq b\cdot r^{n}.$

The trick is to check that $Vol(B_{1}(q),d_{\mathcal{S}})$ for Euclidean hypersurfaces can be bounded in terms of constants that only depend on the dimension. This uses the same techniques as the proof of the homeomorphism theorem above. Scalings to other radii commute with scaling of the original area minimizing metric and readily give these growth rates.

The main step is the proof of the Poincaré inequality on $(H,d_{\mathcal{S}},\mu_{\mathcal{S}})$. It says that there is a constant $C_{0}=C_{0}(H,\Phi)>0$ so that, whenever $B\subset H$ is an open ball and $u:B\rightarrow\mathbb{R}$ is an $L^{1}$-function that is $C^{1}$ on $B\setminus\Sigma$, we have, setting $|\nabla u|\equiv 0$ on $\Sigma$,

(18) $\fint_{B}|u-u_{B}|\,d\mu_{\mathcal{S}}\leq C_{0}\cdot\fint_{B}|\nabla u|\,d\mu_{\mathcal{S}},\quad\mbox{where }u_{B}:=\fint_{B}u\,d\mu_{\mathcal{S}}:=\frac{1}{\mu_{\mathcal{S}}(B)}\int_{B}u\,d\mu_{\mathcal{S}}.$

The volume decay property of order $n$ then improves this Poincaré inequality to the Sobolev inequality with exponent $n$ and, from this, we get the isoperimetric inequalities for $(H,d_{\mathcal{S}},\mu_{\mathcal{S}})$. A broad reference is [HKST].

Semmes Families: To derive the Poincaré inequality on rather general metric measure spaces, Semmes has deconstructed the classical proof of the Poincaré inequality on $\mathbb{R}^{n}$ [Se, He, HKST]. In an important step, we encounter uniformly distributed families of curves linking any two given points. The abstracted concept is that of _thick families of curves_, also called _Semmes families_, satisfying the two conditions (i) and (ii) below. For reasonable metric spaces, the presence of Semmes families implies the validity of a Poincaré inequality. On area minimizers, we can use hyperbolic unfoldings to define canonical Semmes families on $(H,d_{H})$ with the interesting feature that they are still Semmes families relative to $(H,d_{\mathcal{S}})$.

###### Proposition 3.5

There is a $C=C(H)>0$ and for any two $p,q\in H$ a family $\Gamma_{p,q}$ of rectifiable curves $\gamma:I_{\gamma}\rightarrow H$, $I_{\gamma}\subset\mathbb{R}$, joining $p$ and $q$, so that:

(i) For any $\gamma\in\Gamma_{p,q}$: $\ell(\gamma|_{[s,t]})<C\cdot d(\gamma(s),\gamma(t))$, for $s,t\in I_{\gamma}$.

(ii) Each family $\Gamma_{p,q}$ carries a probability measure $\sigma_{p,q}$ so that for any Borel set $A\subset H$, the assignment $\gamma\mapsto\ell(\gamma\cap A)$ is $\sigma$-measurable with

(19) $\int_{\Gamma_{p,q}}\ell(\gamma\cap A)\,d\sigma(\gamma)\leq C\cdot\int_{A_{C,p,q}}\left(\frac{d(p,z)}{\mu(B_{d(p,z)}(p))}+\frac{d(q,z)}{\mu(B_{d(q,z)}(q))}\right)d\mu(z)$

for $A_{C,p,q}:=(B_{C\cdot d(p,q)}(p)\cup B_{C\cdot d(p,q)}(q))\cap A$.

The family $\Gamma_{p,q}$ uniformly surrounds a central curve, its core $\gamma_{p,q}$.
It is a hyperbolic geodesic linking $p$ and $q$ in the Gromov compactification of the hyperbolic unfolding. The first assertion, for the core, is part of the proof that $(H\setminus\Sigma,d_{\langle A\rangle^{*}})$ is Gromov hyperbolic in [L1]. It shows that each segment of $\Gamma_{p,q}$ is an $\mathcal{S}$-uniform curve relative to $(H,d_{H})$. For the other curves, this follows from the way we define $\Gamma_{p,q}$.

To define $\Gamma_{p,q}$ and the probability measure $\sigma_{p,q}$, we start with any two points $x,y\in\mathbb{R}^{n}$ and consider the hyperplane $L^{n-1}(x,y)$ orthogonal to the line segment $[x,y]\subset\mathbb{R}^{n}$ passing through the midpoint $m(x,y)$ of $[x,y]$. For $\rho=d(x,y)$, we consider a ball $B_{r}=B_{r}^{n-1}(m(x,y))\subset L^{n-1}(x,y)$ of radius $r\in(0,\rho]$ which we will choose later. For any $z\in B_{r}$, let $\gamma_{z}$ be the unit speed curve from $x$ to $y$ we get when we follow the line segments $[x,z]$ and $[z,y]$. We define $\Gamma^{\mathbb{R}^{n}}_{x,y}=\Gamma^{\mathbb{R}^{n}}_{x,y}(r):=\\{\gamma_{z}\,|\,z\in B_{r}\\}$ and the point set $D_{r}(x,y)$ we get as a union of the curves. We introduce the probability measure $\alpha^{\mathbb{R}^{n}}_{x,y}$ on $\Gamma^{\mathbb{R}^{n}}_{x,y}$ as follows: if $W\subset\Gamma^{\mathbb{R}^{n}}_{x,y}$,

(20) $\alpha^{\mathbb{R}^{n}}_{x,y}(W):=\mathcal{H}^{n-1}(\\{z\in B_{r}^{n-1}\,|\,\gamma_{z}\in W\\})/\mathcal{H}^{n-1}(B_{r}^{n-1}).$

For any Borel set $A\subset\mathbb{R}^{n}$, the function $\gamma\mapsto\ell(\gamma\cap A)$ on $\Gamma^{\mathbb{R}^{n}}_{x,y}$ is $\alpha^{\mathbb{R}^{n}}_{x,y}$-measurable. Now assume that all points in $A$ are closer to $x$ than to $y$. For the distance between corresponding points on any two segments, we have

(21) $d(s\cdot z_{1}+(1-s)\cdot x,s\cdot z_{2}+(1-s)\cdot x)\leq 2\cdot r\cdot s,\mbox{ for }s\in[0,1],z_{1},z_{2}\in B_{r}.$

The coarea formula gives the following inequality for the annuli $A_{j}:=A\cap B(x,2^{-j}d(x,y))\setminus B(x,2^{-j-1}d(x,y))$, $j\in\mathbb{Z}^{\geq 0}$:

$\int_{B_{r}}\ell(\gamma_{z}\cap A)d\mathcal{H}^{n-1}(z)=\sum_{j=0}^{\infty}\int_{B_{r}}\ell(\gamma_{z}\cap A_{j})d\mathcal{H}^{n-1}(z)\leq\sum_{j=0}^{\infty}2^{(j+2)(n-1)}\mu(A_{j})\leq 4^{n-1}\cdot\int_{A\cap B(x,d(x,y))}\frac{d(x,z)}{\mu(B_{d(x,z)}(x))}d\mu(z).$

Thus we have a Semmes family on $\mathbb{R}^{n}$ and can now identify the line segment $[x,y]\subset\mathbb{R}^{n}$ with the hyperbolic geodesic $\gamma_{p,q}$ and use its $\mathcal{S}$-uniformity to transfer the Semmes family $\Gamma^{\mathbb{R}^{n}}_{x,y}$ to $(H,d_{H})$. In other words, what we really do is define a curve fibration of the twisted double $\mathcal{S}$-cone surrounding $\gamma_{p,q}$ which we already had from the $\mathcal{S}$-uniformity of $H$. This is the desired canonical Semmes family on $(H,d_{H})$. Once again, the estimates for minimal Green’s functions along hyperbolic geodesics show that, with other constants and probability measures, $\Gamma_{p,q}$ is still a Semmes family for $(H,d_{\mathcal{S}},\mu_{\mathcal{S}})$ and thus we get the desired Poincaré inequality on $(H,d_{\mathcal{S}},\mu_{\mathcal{S}})$.
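To make the model family $\Gamma^{\mathbb{R}^{n}}_{x,y}$ and the averaged intersection length in (19) and (20) more tangible, here is a minimal numerical sketch we add purely for illustration; it is not part of the original construction. The planar case $n=2$, the discretisation of the curves and the choice of the set $A$ are all assumptions of the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def semmes_average_length(x, y, r, indicator_A, n_curves=2000, n_steps=400):
    """Monte Carlo estimate of int_Gamma l(gamma ∩ A) d alpha(gamma) for the
    planar model family Gamma^{R^2}_{x,y}(r): broken segments [x,z] ∪ [z,y]
    through points z drawn uniformly from the radius-r ball B_r on the
    perpendicular bisector of [x,y], i.e. the measure alpha of (20)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = 0.5 * (x + y)                                     # midpoint m(x,y)
    d = y - x
    normal = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal to [x,y]
    total = 0.0
    for _ in range(n_curves):
        # in R^2 the ball B_r^{n-1} is a segment; sample z uniformly on it
        z = m + r * (2.0 * rng.random() - 1.0) * normal
        for a, b in ((x, z), (z, y)):                     # the two legs of gamma_z
            t = np.linspace(0.0, 1.0, n_steps)
            pts = a[None, :] + t[:, None] * (b - a)[None, :]
            step = np.linalg.norm(b - a) / (n_steps - 1)
            # approximate l(gamma_z ∩ A) by counting discretisation points in A
            total += step * np.count_nonzero(indicator_A(pts))
    return total / n_curves

# Demo: A is a small disk lying closer to x than to y
in_disk = lambda p: np.linalg.norm(p - np.array([0.3, 0.05]), axis=1) < 0.1
print(semmes_average_length((0.0, 0.0), (1.0, 0.0), r=0.5, indicator_A=in_disk))
```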
The isoperimetric inequalities also control area minimizers within $(H,d_{\mathcal{S}},\mu_{\mathcal{S}})$ through the following type of standard consequence:

###### Corollary 3.6

There is a $\rho_{H}>0$ so that for any $r\in(0,\rho_{H})$ and any area minimizing boundary $L^{n-1}$ bounding some open set $L^{+}\subset H$ in $(H,d_{\mathcal{S}},\mu_{\mathcal{S}})$:

(22) $\kappa^{-}_{n}\cdot r^{n}\leq\mu_{\mathcal{S}}(L^{+}\cap B_{r}(p))\leq\kappa^{+}_{n}\cdot r^{n},$

where $\kappa^{-}_{n},\kappa^{+}_{n}>0$ denote constants depending only on the dimension $n$.

This volume growth estimate shows that there are no horn-shaped pieces of area minimizers in $(H,d_{\mathcal{S}},\mu_{\mathcal{S}})$ entering narrow hollow cylinders. In particular, such an area minimizer cannot _stretch out_ along $\Sigma$. This is a crucial detail to improve the efficiency of the smoothing techniques we discuss in the next chapter.

## 4 Smoothing Techniques - Bottom-up Constructions

We describe how to deform the still singular minimal splitting factors $(H,d_{\mathcal{S}})$ to regular $scal>0$-geometries on $H\setminus U$ with minimal boundary $\partial U$, where $U$ is a small neighborhood of $\Sigma_{H}$. This yields schemes of an inductive dimensional descent with a built-in partial regularization. The point is that $\partial U$ may also have singularities, but of lower dimension, and the minimality of $\partial U$ matches the intended inductive descent: Area minimizers in $H\setminus U$ either coincide with $\partial U$ or they are entirely supported in $H\setminus\overline{U}$. Thus in each loop of the induction such area minimizers experience a smooth $scal>0$-environment and we eventually reach exclusively smooth geometries in dimensions $\leq 7$.

### 4.1 Surgeries Revisited

We shed some new light on the well-known $scal>0$-preserving surgeries on a manifold $M$ one can do along any given submanifold $N\subset M$ of codimension $\geq 3$, cf. [GL] and [SY2]. We approach these surgeries in a way that represents the gluing boundaries, after removing tubes around $N$, as minimal surfaces. One may keep the intermediate bounded manifolds per se or glue some complementary piece to them to get closed $scal>0$-manifolds with altered topology. This illustrates the smoothing results for minimal splitting factors we discuss below. In a quite similar manner, we remove tubes around the singularities $\Sigma\subset H$. Again, we will get minimal boundaries and keep $scal>0$. One may use this manifold with minimal boundary directly or one can glue a smooth complementary part to it to get a new closed $scal>0$-manifold.

###### Proposition 4.1

Let $M^{n}$, $n\geq 3$, be a closed manifold and assume that the first eigenvalue $\lambda_{1}$ of the conformal Laplacian $L_{M}$ is positive. For any submanifold $N^{k}\subset M^{n}$ of dimension $k\leq n-3$ there are arbitrarily small neighborhoods $U$ of $N$, such that $M\setminus U$ is conformal to a $scal>0$-manifold $X=X_{U}$ with a (locally) area minimizing boundary $\partial X$.

We break the construction of the conformal deformation into two steps: In Step A we choose a canonical conformal deformation to reach a _basic $scal>0$-metric_ on $M$. In Step B we add a _secondary_ conformal change of $M\setminus N$ using some positive function diverging to $+\infty$ as we approach $N$ to achieve the desired _bending_ effect towards $N$.
When we evaluate these conformal deformations, we find two neighborhoods $U\subset V$ of $N$ with $\partial V$ having _positive_ mean curvature and $\partial U$ with _negative_ mean curvature (the sign convention is such that $\partial B_{1}(0)\subset\mathbb{R}^{n+1}$, viewed from $0\in\mathbb{R}^{n+1}$, has positive mean curvature). We think of $\partial V$ as an outer barrier and $\partial U$ as an inner barrier. They keep area minimizers from escaping $V\setminus U$. In fact, the mean curvature constraints show that we may replace any $Area_{n-1}$-minimizing sequence of boundaries $T_{i}\subset V$ _surrounding_ $U$, that is $T_{i}=\partial W_{i}$, for open $W_{i}$ with $U\subset W_{i}\subset V$, so that for $i\rightarrow\infty$:

$Area_{n-1}(T_{i})\rightarrow\inf\\{Area_{n-1}(T)\,|\,T=\partial W\mbox{ for an open }W,U\subset W\subset V\\},$

by another $Area_{n-1}$-minimizing sequence $T^{*}_{i}$ with $Area_{n-1}(T^{*}_{i})\leq Area_{n-1}(T_{i})$ and support outside a neighborhood of $\partial V\cup\partial U$. Then standard arguments from basic geometric measure theory [Gi, 1.20] show that there is an area minimizer $T^{*}=\partial W^{*}$ for some open $W^{*}$ with $U\subset W^{*}\subset V$. We set $X:=M\setminus W^{*}$ and have $\partial X=T^{*}$.

Step A: We can use the first eigenfunction $u=u_{M}>0$ of $L_{M}$ on $M$, the assumption $\lambda_{1}>0$ and the transformation law for scalar curvature under conformal transformation to see:

(23) $scal(u^{4/(n-2)}\cdot g_{M})\cdot u^{\frac{n+2}{n-2}}=L_{M}(u)=\lambda_{1}\cdot u>0.$

That is, $scal(u^{4/(n-2)}\cdot g_{M})>0$. This observation is due to Kazdan and Warner [KW]. $u^{4/(n-2)}\cdot g_{M}$ is our basic $scal>0$-metric on $M$.

Step B: We also choose a function $\psi>0$ on $M\setminus N$ with $L_{M}\psi=0$ and

(24) $\psi=\begin{cases}r^{2+k-n}+O(r^{3+k-n}),&\text{when }k<n-3\\ r^{2+k-n}+O(\log(r)^{-1}),&\text{when }k=n-3,\end{cases}$

where $r$ is the distance function to $N$. A construction of $\psi$ is given in [SY2, Appendix]. Roughly speaking, one averages a Green’s function for $L_{M}$ along $N$. Now we add $\delta\cdot\psi$ to $u$, for some small $\delta>0$, as a secondary deformation: we choose $g_{\delta}:=(\delta\cdot\psi+u)^{4/(n-2)}\cdot g_{M}$. The linearity of $L_{M}$ shows that $scal(g_{\delta})>0$.

We denote the $r$-distance neighborhoods of $N$ relative to $u^{4/(n-2)}\cdot g_{M}$ by $V_{r}$. Then, for any small $\varepsilon>0$, we have that $\partial V_{\varepsilon}$ is diffeomorphic to $N^{k}\times S^{n-k-1}$. Thus for small enough $\delta(\varepsilon)>0$ we readily see that $\partial V_{\varepsilon}$ is essentially determined from $u^{4/(n-2)}\cdot g_{M}$ and it has _positive mean curvature_ $\approx(n-k-1)/\varepsilon$, largely as in the Euclidean case of distance tubes around $\mathbb{R}^{k}\subset\mathbb{R}^{n}$. This makes $\partial V_{\varepsilon}$ an _outer barrier_ for area minimizers in $V_{\varepsilon}$ homologous to $\partial V_{\varepsilon}$. It keeps them inside $V_{\varepsilon}$. In turn, for such a (now fixed) $\delta>0$ and $\rho>0$ small enough we claim that $\partial V_{\rho}$ has _negative mean curvature_. To see this, we recall that the second fundamental form $A_{L}(g)$ of a submanifold $L$ with respect to some metric $g$ on the ambient manifold transforms under conformal deformations $u^{4/(n-2)}\cdot g$, for smooth $u>0$, according to the formula [Be, 1.163, p. 60]:
(25) $A_{L}(u^{4/(n-2)}\cdot g)(v,w)=A_{L}(g)(v,w)-\frac{2}{n-2}\cdot{\cal{N}}(\nabla u/u)\cdot g(v,w),$

where ${\cal{N}}(\nabla u/u)$ is the normal component of $\nabla u/u$ with respect to $L$. In our case, we choose $u=\psi$ and locally consider $V_{\rho}$ as a tube around $\mathbb{R}^{k}$ in $\mathbb{R}^{k}\times\mathbb{R}^{n-k}$ where $\mathbb{R}^{k}$ represents $N$. Thus we have from (24) that $tr\,A_{\partial V_{\rho}}(g)\approx-(n-k-1)/\rho$, since the $\mathbb{R}^{n-k}$-factor of $V_{\rho}$ is totally geodesic. The trace of the second summand is $(n-1)\cdot 2/(n-2)\cdot(2+k-n)/\rho$ and we get:

(26) $\begin{aligned}\psi^{4/(n-2)}\cdot tr\,A_{\partial V_{\rho}}(\psi^{4/(n-2)}\cdot g)&=-\left((n-k-1)+(n-1)\cdot 2/(n-2)\cdot(2+k-n)\right)\cdot\rho^{-1}\\ &=(2-(3+k)\cdot n+n^{2})/(n-2)\cdot\rho^{-1}\\ &\geq 2/(n-2)\cdot\rho^{-1}>0,\mbox{ since }n-3\geq k.\end{aligned}$

This makes $\partial V_{\rho}$ an _inner barrier_ for area minimizers in $M\setminus V_{\rho}$ homologous to $\partial V_{\rho}$, keeping them from reaching $N$. Thus for any $\varepsilon>0$, there is some $\delta_{\varepsilon}>0$ so that there is a neighborhood $U_{\varepsilon}$ of $N$, with $V_{\rho}\subset U_{\varepsilon}\subset V_{\varepsilon}$, such that $\partial U_{\varepsilon}$ is (locally) area minimizing relative to $g_{\delta_{\varepsilon}}$.

The resulting $scal>0$-manifold $X=X_{U}$ with its minimal boundary $\partial X$ can be used in several ways:

Doublings: We take a second copy $X^{*}$ of $X$ and glue $X^{*}$ to $X$ along $\partial X$ to get a doubling $X\cup_{\div}X^{*}$ with the obvious identification $\div$ along $\partial X$. This can be done in such a way that we get a smooth $scal>0$-metric on $X\cup_{\div}X^{*}$, since one can approximate $\partial X$, slightly (re)enlarging $X$, by a smooth hypersurface $Z$ with positive mean curvature [G2], and then a standard bending of a collar of $Z$ gives a $scal>0$-metric with totally geodesic boundary $Z$. If there is another manifold $Y$ with boundary $\partial Y$ being locally isometric to $X^{*}$ near $\partial X^{*}$ then we can alternatively glue $Y$ to $X$ along $\partial X$. This is just a variant of the classical surgery in [GL] and [SY2].

### 4.2 Inductive Removal of Singularities

As already indicated in the codimension $\geq 3$-surgery reconstructed above, the main result is a partial regularization method where we replace the singularities of $(H^{n},d_{\mathcal{S}})$ by minimal boundaries:

###### Theorem 4.2

Let $H^{n}$ be a singular area minimizing hypersurface in some compact $scal\geq 0$-manifold $M^{n+1}$. Then there are arbitrarily small neighborhoods $U$ of $\Sigma$, such that $H\setminus U$ is conformal to some $\boldsymbol{scal>0}$-manifold $X_{U}$ with locally area minimizing boundary $\partial X_{U}$.

This is the generic step in an inductive dimensional descent with a built-in partial regularization scheme. Starting from a singular area minimizing hypersurface $H^{n}$ in some smooth $scal\geq 0$ ambient space we descend to a possibly again singular area minimizing hypersurface $L^{n-1}$ in the smooth $scal>0$ ambient space $X_{U}$. The process shifts the singular issues with $H^{n}$ to lower dimensions:

A. Let $L^{n-1}\subset\overline{X_{U}^{n}}$ be an area minimizer. At this point $\partial X_{U}$ merely constrains $L^{n-1}$ to stay within $\overline{X_{U}^{n}}$. It does not matter whether $\partial X_{U}$ is smooth [Gi, Th.1.20, Rm.1.22].

B. The point is that $\partial X_{U}$ is minimal.
From this the strict maximum principle [Si] shows, componentwise, that either $L^{n-1}\equiv\partial X_{U}^{n-1}$ or $L^{n-1}\cap\partial X_{U}^{n-1}=\emptyset$. This renders $L^{n-1}$ an actually unconstrained minimal hypersurface in an extension $Y_{U}^{n}$ of $\overline{X_{U}^{n}}$, a smooth $scal>0$-manifold surrounding $L^{n-1}$.

C. $L^{n-1}$ may be singular, with a lower dimensional singular set, but we can reapply Theorem 4.2 to $L^{n-1}$ in dimension $n-1$. This and further iterations inductively sweep out all singular issues to lower dimensions before they ultimately disappear in dimension $7$.

To indicate the proof of this theorem, we follow the lines of the case without singularities as discussed in the previous section.

Step A: Minimal Splitting Factors. The counterpart to Step A is the transition from the compact singular area minimizer $H^{n}$ in some $scal>0$-manifold $M^{n+1}$ to an associated minimal splitting factor $(H,d_{\mathcal{S}}(\Phi_{H}))$, where $\Phi_{H}>0$ is a supersolution of $L_{H,\lambda}\,\phi=0$ for some $0<\lambda<\lambda^{\langle A\rangle}_{H}$. We actually choose a rather small $\lambda$ to get uniform growth estimates for $\Phi$ towards $\Sigma$, which are used to control the secondary deformations in the following Step B.

Step B: Removal of Singularities. We deform $(H,d_{\mathcal{S}})$ in a similar way as in Step B in the smooth sample case above. We find a small neighborhood $U$ of $\Sigma$ and an elementary deformation of $g_{\mathcal{S}}$ to a $scal>0$-metric on $H\setminus U$ so that $\partial U$ becomes minimal. We construct this elementary deformation of $(H,d_{\mathcal{S}})$ from the top-down analysis of the previous chapter. We apply tangent cone reductions in the class of minimal splitting factors. To this end, we cover $\Sigma\subset H$ by finitely many small balls $B_{r_{i}}(p_{i})\subset H$ with uniformly bounded intersection numbers which, after scaling to unit size, are well-approximated by the ball $B_{1}(0)\cap C_{i}$ in some tangent cone $C_{i}$ at $p_{i}$ with its minimal splitting factor geometry. Similarly, we choose a ball cover for $\partial B_{1}(0)\cap\Sigma_{C_{i}}\subset\partial B_{1}(0)\cap C_{i}$ and finitely many locally approximating tangent cones. The iteration of this process defines a blow-up tree $\mathbb{T}$ of cones with root $H$. The branching of $\mathbb{T}$ ends, after at most $n-7$ blow-ups, with some product cone $\mathbb{R}^{k}\times C^{n-k}$ singular only along $\mathbb{R}^{k}\times\\{0\\}$. In this case, the minimal splitting factor metric has a simple form. For some function $c:\partial B_{1}(0)\cap C\rightarrow\mathbb{R}^{>0}$ and a $\gamma\in(-(n-2)/4,0)$, we have

$g_{\mathcal{S}}=(c(\zeta)\cdot\rho^{\gamma})^{4/(n-2)}\cdot g_{C}\mbox{ on }\mathbb{R}^{k}\times C^{n-k}\setminus\mathbb{R}^{k}\times\\{0\\},$

where $\rho(x)=dist(x,\mathbb{R}^{k}\times\\{0\\})$, $\zeta(x)=\pi_{2}(x/|x|)$ and $\pi_{2}:\mathbb{R}^{k}\times C^{n-k}\rightarrow C^{n-k}$ is the projection. For this metric, we easily find $\mathbb{R}^{k}$-translation invariant conformal deformations, supported away from $\mathbb{R}^{k}\times\\{0\\}$, to another $scal>0$-metric which contains an elementary inner barrier for area minimizers, illustrated in the left and middle part of Figure 3. An inner barrier is a bumpy deformation of $g_{\mathcal{S}}$ that locally increases the volume element of $g_{\mathcal{S}}$ to keep local area minimizers away from $\Sigma$.
These barriers result from a simple cut-off construction of a secondary deformation added to the minimal factor metric.

Figure 3: The minimal factors are drawn as simple planes. The singular set is visualized as centered spots and lines. The red rubber bands $L^{n-1}$ illustrate locally area minimizing hypersurfaces kept from contracting to (parts of) $\Sigma$ by these barriers.

To assemble a global barrier along $\Sigma\subset H$, we localize the barriers in the terminal nodes of $\mathbb{T}$ keeping $scal>0$: we truncate the bump deformation to a deformation supported in a compact subset of $C\setminus\Sigma_{C}$, indicated on the right portion of the picture. This particular localization is important for the transfer between different nodes of the blow-up tree. The transfer uses smooth tangent cone approximations, that is, approximations described as sections of the normal bundle of the tangent cone in $C^{k}$-norm, for some $k\geq 2$, and we use these sections to transfer constructions between different spaces via pull-back or push-forward. But these smooth approximations exist only at distances to the singular sets that are bounded from above and below by positive constants. These transfer maps between nodes in the blow-up tree allow us to bring the harvest home: we transport the individual bump deformations backwards to the next node in $\mathbb{T}$ and inductively assemble barriers until we reach $H$. This is a simple process but it needs a considerable amount of bookkeeping using families of well-controlled coverings. The isoperimetry of $g_{\mathcal{S}}$ is applied once these deformations have been completed. The isoperimetric inequality appears in the guise of the volume estimates in Cor. 3.6.

Auto-Aligned Coverings: As an alternative to the use of blow-up trees there is a refined covering argument, in [L5], that guides the placement of the local barriers.

* • The local barriers are placed disjointly to keep control over their curving effect. The resulting global barrier along $\Sigma$ does not topologically separate the singular region from the main regular part of $H$. However, using the isoperimetric inequality of the underlying minimal splitting factor, it separates geometrically in the sense that it keeps area minimizers from approaching $\Sigma$. Intuitively, one may compare these topologically permeable barriers with a Faraday cage.
* • In turn, these volume estimates also ensure that there is such an area minimizer, since they show that there is a larger neighborhood of the deformed region that area minimizers do not leave.

Doublings: As in the sample case of codimension $\geq 3$ surgeries, we have arbitrarily small neighborhoods $U$ of $\Sigma$, such that after some smoothing of $\partial X_{U}$ we can take a second copy $X_{U}^{*}$ of $X_{U}$ and glue $X_{U}^{*}$ to $X_{U}$ so that $X_{U}\cup_{\div}X_{U}^{*}$ is smooth with $scal>0$.

### 4.3 From Ordinary to Locally Finite Homology

Here we will see how to apply the partial regularization method in scalar curvature geometry and general relativity to extend results valid in dimensions $\leq 7$ to higher dimensions. We start with one of the most prominent examples: the Riemannian positive mass theorem. We refer to [L6] for the explicit statement and its physical relevance. In geometric terms it can be formulated as the non-existence of $\boldsymbol{scal>0}$-islands:

###### Theorem 4.3

There exists no complete Riemannian manifold $(M^{n+1},g)$, $n\geq 2$, such that:

* • $scal(g)>0$ on a non-empty open set $U\subset M^{n+1}$ with compact closure.
* • $(M^{n+1}\setminus U,g)$ is isometric to $(\mathbb{R}^{n+1}\setminus B_{1}(0),g_{Eucl})$.

We indicate the contradiction argument of [L6]. Let us assume we had such a Riemannian manifold $(M^{n+1},g)$. Then we can place a large cube around $B_{1}(0)$ and compactify $(M^{n+1},g)$ to a closed $scal>0$-manifold diffeomorphic to $N^{n+1}\# T^{n+1}$, for some closed manifold $N^{n+1}$. Now we can find a possibly singular area minimizer $H^{n}\subset N^{n+1}\# T^{n+1}$ homologous to the standard $T^{n}\subset T^{n+1}$ and apply Theorem 4.2 to get the $scal>0$-manifold $X_{U}$ for some small neighborhood $U$ of $\Sigma_{H}$. The regularity theory also shows that $H^{n}$ again contains a nearly flat torus summand, even after the conformal deformation of Theorem 4.2, i.e., $T^{n}\setminus B\subset H^{n}$ for some small ball $B$. This means that there is a compact area minimizer $L^{n-1}\subset X_{U}\subset H^{n}$ homologous to $T^{n-1}$ with $\partial U\cap L^{n-1}=\emptyset$ and, again, $L^{n-1}$ contains a nearly flat torus summand. We iterate this process until we reach a $scal>0$-surface $F^{2}\# T^{2}$ that does not exist, due to the Gauss–Bonnet theorem. This proves the non-existence of $scal>0$-islands and, hence, the positive mass theorem.

Locally Finite Homology: The argument above exploited the existence of a nearly flat torus component of $N^{n+1}\# T^{n+1}$ to iteratively bypass the singular sets from a suitable selection of homology classes.

Figure 4: The left-hand picture represents the case of a compact $L^{n-1}$ with $L^{n-1}\cap\partial U=\emptyset$. In the second picture we have a homology class that hits $\overline{U}$. Here we can still find an intrinsically complete local area minimizer $L^{n-1}$ with $L^{n-1}\cap\partial U=\emptyset$ that asymptotically approaches $\partial U$.

Theorem 4.2 can also be used in a different way to include more general homology classes. We start with any non-trivial family of classes $\alpha[1],\dots,\alpha[k]\in H^{1}(M^{n+1},\mathbb{Z})$. Geometric measure theory shows that there is an area minimizer $H^{n}\subset M^{n+1}$ that represents $\alpha[1]\cap[M]$ in the homology of integral currents. Since $M$ is smooth, the integral current homology groups of $M$ are isomorphic to ordinary homology groups [D], but, in general, a singular $H^{n}$ does _not_ represent the associated class in ordinary homology.

* • $H^{n}$ might not be a singular $n$-cycle or may have non-finitely generated homology, and $H^{n}\setminus\Sigma$ does not represent a class in ordinary homology.

This advocates the interpretation of the $\alpha[1]\cap\cdots\cap\alpha[m]\cap[M]$, $m\leq k$, as classes in integral current homology. However, Theorem 4.2 and the regularity theory for minimal hypersurfaces, proving the low dimensionality of the singular set, suggest still another interpretation. We think of the $\alpha[1]\cap\cdots\cap\alpha[m]\cap[M]$ as classes in a _locally finite_ homology [HR]. In more detail, for small $U$, the isoperimetric inequality and the fact that $\partial U$ is locally area minimizing can be used to show that there is an intrinsically complete local area minimizer $L^{n-1}\subset Y_{U}\subset H^{n}$ so that, componentwise, either

* • $L^{n-1}$ is compact and $L^{n-1}\cap\partial X_{U}=\emptyset$ (or $L^{n-1}=\partial X_{U}$); this is the case we have already discussed above, or
* • $L^{n-1}$ is non-compact with an end.
It asymptotically approaches $\partial X_{U}$, like a geodesic ray on a surface that approaches a closed geodesic in infinitely many loops. We apply Theorem 4.2 to $\partial X_{U}$ and eventually transfer its metric to $L^{n-1}$ to get a periodic $scal>0$-end structure. For the present, we assume that $L^{n-1}$ is smooth. Then $L^{n-1}$ represents a class in the Borel–Moore homology $H_{n-1}^{BM}(X_{U})$ of $X_{U}$. $L^{n-1}$ can be thought of as representing $\alpha[1]\cap\alpha[2]\cap[M]$. To make this intuition more precise, the local finiteness of chains in this homology becomes important. $L^{n-1}$ spins around $\partial X_{U}$, but each of the infinitely many loops can be annihilated by one ordinary homology operation. That is, $L^{n-1}$ is Borel–Moore homologous to the restriction $L_{*}^{n-1}\cap X_{U}$ of a hypersurface $L_{*}^{n-1}\subset H$ representing $\alpha[1]\cap\alpha[2]\cap[M]$ in integral current homology. In our context, this suffices to view $L^{n-1}$ as a representing cycle of $\alpha[1]\cap\alpha[2]\cap[M]$, since area minimizers reaching into the periodic end of $L^{n-1}$ can be made to stay in a compact subset of $L^{n-1}$. A sample of such a truncation style process is explained in the proof that topologically large manifolds cannot admit $scal>0$-metrics [L7]. _It would be nice to formalize such ad-hoc truncation arguments in a suitable locally finite or coarse homology theory where $L^{n-1}$ properly represents $\alpha[1]\cap\alpha[2]\cap[M]$, cf. [HR], [R] for some background._

To finish the argument, we inductively apply the same strategy to get potentially singular and non-compact area minimizers with ends that, in the sense above, represent $\alpha[1]\cap\cdots\cap\alpha[m]\cap[M]$ in smooth $scal>0$-manifolds of dimension $n-m+1$. Then we regularize them, using Theorem 4.2, before we start the next loop. This way, we inductively sweep out all singular issues to lower dimensions before they ultimately disappear in dimension $7$.

## References

* [A1] Ancona, A.: Negatively curved manifolds, elliptic operators, and the Martin boundary, _Ann. of Math._ 125 (1987), 495–536
* [A2] Ancona, A.: _Théorie du potentiel sur les graphes et les variétés_, in: École d’été de Prob. de Saint-Flour XVIII-1988, LNM 1427, Springer (1990), 1–112
* [Ai] Aikawa, H.: Potential-theoretic characterizations of nonsmooth domains, _Bull. London Math. Soc._ 36 (2004), no. 4, 469–482
* [Be] Besse, A.: _Einstein Manifolds_, Springer (1987)
* [Bi] Bishop, C.J.: Quasiconformal mappings which increase dimension, _Ann. Acad. Sci. Fenn. Ser. A I Math._ 24 (1999), 397–407
* [BG] Bombieri, E. and Giusti, E.: Harnack’s inequality for elliptic differential equations on minimal surfaces, _Invent. Math._ 15 (1972), 24–46
* [BHK] Bonk, M., Heinonen, J. and Koskela, P.: _Uniformizing Gromov hyperbolic spaces_, Astérisque 270, SMF (2001)
* [BH] Bridson, M. and Haefliger, A.: _Metric Spaces of Non-Positive Curvature_, Springer (1999)
* [D] De Pauw, T.: Comparing homologies: Cech’s theory, singular chains, integral flat chains and integral currents, _Rev. Mat. Iberoam._ 23 (2007), 143–189
* [Gi] Giusti, E.: _Minimal Surfaces and Functions of Bounded Variation_, Birkhäuser (1984)
* [G1] Gromov, M.: Metric inequalities with scalar curvature, _GAFA_ 28 (2018), 645–726
* [G2] Gromov, M.: Plateau-Stein manifolds, _Central European J. of Math._ 12 (2014), 923–951
* [GL] Gromov, M. and Lawson, B.: The Classification of Simply Connected Manifolds of Positive Scalar Curvature, _Ann. of Math._ 111 (1980), 423–434
* [H] Hawking, S.: Black holes in general relativity, _Commun. Math. Phys._ 25 (1972), 152–166
* [He] Heinonen, J.: _Lectures on Analysis on Metric Spaces_, Universitext, Springer (2001)
* [HKST] Heinonen, J., Koskela, P., Shanmugalingam, N. and Tyson, J.: _Sobolev Spaces on Metric Measure Spaces_, Cambridge University Press, Cambridge (2015)
* [HR] Hughes, B. and Ranicki, A.: _Ends of Complexes_, Cambridge Tracts in Math., Vol. 123, Cambridge Univ. Press (1996)
* [KW] Kazdan, J. and Warner, F.: Existence and conformal deformations of metrics with prescribed Gaussian and scalar curvatures, _Ann. of Math._ 101 (1975), 317–331
* [KL] Kemper, M. and Lohkamp, J.: Potential Theory on Gromov Hyperbolic Spaces (2022), arXiv:2203.16447
* [L1] Lohkamp, J.: Hyperbolic Unfoldings of Minimal Hypersurfaces, _Analysis and Geometry in Metric Spaces_ 6 (2018), 96–128, https://doi.org/10.1515/agms-2018-0006
* [L2] Lohkamp, J.: Potential Theory on Minimal Hypersurfaces I: Singularities as Martin Boundaries, _Potential Analysis_ 53 (2020), 1493–1528, http://dx.doi.org/10.1007/s11118-019-09815-6
* [L3] Lohkamp, J.: Potential Theory on Minimal Hypersurfaces II: Hardy Structures and Schrödinger Operators, _Potential Analysis_ 55 (2021), 563–602, https://doi.org/10.1007/s11118-020-09869-x
* [L4] Lohkamp, J.: Scalar Curvature Splittings I: Minimal Factors (2022), arXiv:2012.12223
* [L5] Lohkamp, J.: Scalar Curvature Splittings II: Removal of Singularities (2022), arXiv:2203.15531
* [L6] Lohkamp, J.: The Higher Dimensional Positive Mass Theorem I, arXiv:math/0608795v2
* [L7] Lohkamp, J.: Contracting Maps and Scalar Curvature, arXiv:1812.11839v1
* [R] Roe, J.: _Lectures on Coarse Geometry_, University Lecture Series 31, AMS (2003)
* [SY1] Schoen, R. and Yau, S.T.: Existence of incompressible minimal surfaces and the topology of three dimensional manifolds with non-negative scalar curvature, _Ann. of Math._ 110 (1979), 127–142
* [SY2] Schoen, R. and Yau, S.T.: On the structure of manifolds with positive scalar curvature, _manuscripta mathematica_ 28 (1979), 159–183
* [Se] Semmes, S.: Finding curves on general spaces through quantitative topology, with applications to Sobolev and Poincaré inequalities, _Selecta Math._ 2 (1996), 155–295
* [Si] Simon, L.: A strict maximum principle for area minimizing hypersurfaces, _J. Diff. Geom._ 26 (1987), 327–335
# MA-VAE: Multi-head Attention-based Variational Autoencoder Approach for Anomaly Detection in Multivariate Time-series Applied to Automotive Endurance Powertrain Testing

Lucas Correia$^{1,2}$, Jan-Christoph Goos$^{1}$, Philipp Klein$^{1}$, Thomas Bäck$^{2}$ and Anna V. Kononova$^{2}$

$^{1}$Mercedes-Benz AG, Stuttgart, Germany

$^{2}$Leiden University, Leiden, The Netherlands

<EMAIL_ADDRESS>

###### Abstract

A clear need for automatic anomaly detection applied to automotive testing has emerged as more and more attention is paid to the data recorded and manual evaluation by humans reaches its capacity. Such real-world data is massive, diverse, multivariate and temporal in nature, therefore requiring modelling of the testee behaviour. We propose a variational autoencoder with multi-head attention (MA-VAE), which, when trained on unlabelled data, not only provides very few false positives but also manages to detect the majority of the anomalies presented. In addition to that, the approach offers a novel way to avoid the bypass phenomenon, an undesirable behaviour investigated in literature. Lastly, the approach also introduces a new method to remap individual windows to a continuous time series. The results are presented in the context of a real-world industrial data set and several experiments are undertaken to further investigate certain aspects of the proposed model. When configured properly, it is wrong only 9% of the time when an anomaly is flagged and discovers 67% of the anomalies present. Also, MA-VAE has the potential to perform well with only a fraction of the training and validation subset; to extract this potential, however, a more sophisticated threshold estimation method is required.

## 1 Introduction

Powertrain testing is an integral part of the wider automotive powertrain development and is undertaken at different stages of development. Each of these stages is composed of many integration levels. These integration levels range from powertrain sub-component testing, such as the electric drive unit (EDU) controller or high-voltage battery (HVB) management system, to whole vehicle powertrain testing. Each of these has its special type of controlled environment, called a test bench. The use-case in this paper is on an endurance powertrain test bench, where the EDU and HVB on their own are tested under different conditions and loads for longer periods to simulate wear over time.

Given the high maintenance and upkeep costs of such test benches, it is desirable to keep downtime at a minimum and to avoid faulty measurements. Also, it is desirable to detect problems early to prevent damage to the testee. Given that evaluation is done manually by inspection, it is not feasible to analyse every single measurement. Evaluation also tends to be delayed, often being undertaken only days after the measurement is recorded. Hence, there is a clear need for an automatic, fast and unsupervised evaluation methodology that can flag anomalous measurements before the next measurement is started.

To achieve this, we propose a multi-head attention variational autoencoder (MA-VAE). MA-VAE consists of a bidirectional long short-term memory (BiLSTM) variational autoencoder architecture that maps a time-series window into a temporal latent distribution [Park et al., 2018] [Su et al., 2019]. Also, a multi-head attention (MA) mechanism is added to further enhance the sampled latent matrix before it is passed on to the decoder.
As shown in the ablation study, this approach avoids the so-called bypass phenomenon [Bahuleyan et al., 2018], which is the first contribution. Furthermore, this paper offers a unique methodology for the reverse-window process. It is used for remapping the fixed-length windows the model is trained on to continuous variable-length sequences.

This paper is structured as follows: First, a short background is provided in Section 2 on the powertrain testing methodology specific to this use case, as well as the theory behind VAE and MA mechanisms. Then, related work in variational autoencoder-based time-series anomaly detection is presented in Section 3, followed by an in-depth introduction of the real-world data set and the approach we propose in Section 4. Then, several experiments testing different aspects of the proposed method are conducted and discussed in Section 5, along with the final results. Finally, conclusions from this work are drawn and an outlook into future work is provided in Section 6. The source code for the data pre-processing, model training as well as evaluation can be found under https://github.com/lcs-crr/MA-VAE.

## 2 Background

### 2.1 Real-world Application

During endurance testing a portfolio of different driving cycles is run, where a cycle is a standardised driving pattern, which enables repeatability of measurements. For this type of testing the portfolio consists exclusively of proprietary cycles, which differ from the public cycles used, for example, for vehicle fuel/energy consumption certification like the New European Driving Cycle (NEDC) or the Worldwide Harmonised Light Vehicles Test Cycle (WLTC). The reason why proprietary cycles are used for endurance runs is that they allow for more extensive loading of the powertrain. Given the presence of a battery in the testee, some time has to be dedicated to battery soaking (sitting idle) and charging. These procedures are also standardised using cycles, although, for the purposes of this paper, they are omitted. What is left are the eight dynamic driving cycles representing short, long, fast, slow and dynamic trips ranging from $5$ to $30$ minutes. There are multiple versions of the same cycle, which mostly differ in starting conditions such as state-of-charge (SoC) and temperature of the battery. On powertrain test benches, there are several control methods to ensure the testee maintains the given driving cycle. On this particular test bench, the regulation is done by the acceleration pedal and the EDU revolutions-per-minute (rpm), which is nothing more than a non-SI version of the angular velocity.

### 2.2 Data Set

This real-world data set consists of 3385 normal measurement files, each of which contains hundreds of (mostly redundant or empty) channels. A measurement is considered normal when the testee behaviour conforms to the norm. For this work, a list of $d_{\textbf{X}}=13$ channels was hand-picked in consultation with the test bench engineers to choose a reasonable and representative number of channels. This list includes the vehicle speed, EDU torque, current, voltage, rotor temperature and stator temperature, left and right wheel shaft torque, HVB current, voltage, temperature and SoC and inverter temperature. Given that some channels (such as torque) are sampled much faster than others (like temperature and SoC), a common sampling rate of $2$Hz is chosen.
Channels sampled slower than $2$Hz are linearly interpolated, which is seen as permissible due to the lower amplitude resolution of those channels. Channels sampled faster than $2$Hz are passed through a low-pass filter with a cut-off frequency of $1$Hz and then resampled to $2$Hz, as is consistent with the Whittaker–Nyquist–Shannon theorem [Shannon, 1949]. Then the driving cycles are z-score normalised, i.e. transformed such that the mean for each channel lies at $0$ and the standard deviation at $1$. Lastly, the driving cycles (generally referred to as sequences in this paper) are windowed to create a set of fixed-length sub-sequences, or windows. First, each channel is auto-correlated to obtain the number of lags of the slowest dynamic process present in the signal. Then, the window size $W$ is set as the smallest power of two larger than the longest lag, in this case, $W=256$ time steps or $128$ seconds. Each window overlaps its preceding and succeeding windows by half a window, i.e. the shift between windows is $W/2=128$ time steps, in order to reduce computational load compared to a shift of one time step. A code sketch of this preprocessing pipeline is given after Figure 1.

Due to the absence of labelled anomalies in the data set, realistic anomalous events are intentionally simulated and recorded following the advice of test bench engineers. To this end, five anomaly types were recorded. In the first type, the virtual wheel diameter is changed, such that the resulting vehicle speed deviates from the norm. The wheel diameter is a parameter, as resistances are connected to the shafts rather than actual wheels. The second type of anomaly involves changing the driving mode from comfort to sport, which leads to a higher HVB SoC drop over the cycle and a different torque response. In the third anomaly, the recuperation level is turned from maximum to zero, hence the minimum EDU torque is always non-negative and the HVB SoC experiences a higher drop in SoC. In the case of the fourth anomaly, the HVB is swapped for a battery simulator, where the HVB voltage behaviour deviates from a real battery. In the fifth anomaly, the cooling capacity of the cooling loop shared by the inverter and EDU is reduced at the beginning or middle of the cycle, leading to higher EDU rotor, EDU stator and inverter temperatures than normal. Every anomaly type is recorded during every cycle at least once, leading to $60$ anomalous driving cycles that are all used as the anomalous subset of the test set.

A plot of one normal and one wheel-diameter anomalous cycle is shown in Figure 1. Due to the long channel names, the plot only shows the channel indices; a table containing the legend is shown in Table 1 for context. Visual inspection may suggest that the red plot is anomalous, since the EDU and HVB voltage, temperature and state of charge deviate from the black plot. This deviation is to be expected, because they depend on how charged the battery is and on how much the battery is used prior to the current cycle. In the case of this anomaly, the only channel that demonstrates anomalous behaviour is the vehicle speed, since:

$v_{\text{vehicle}}=r\times\omega$ (1)

where $r$ is the wheel radius and $\omega$ the angular velocity. Evidently, the anomalous behaviour is most visible at higher speeds.

Figure 1: Features of a normal (black) and an anomalous (red) cycle plotted with respect to time. The anomalous cycle plotted represents a scenario where the wheel diameter has not been set correctly. The amplitude axis is z-score normalised to comply with confidentiality guidelines.
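The preprocessing steps described above (resampling to a common 2 Hz grid, z-score normalisation and overlapping windowing) can be summarised in a short sketch. This is our illustrative reading of Section 2.2, not the authors' released code; in particular, the filter order and type (a 4th-order Butterworth low-pass) are assumptions, as the paper only fixes the 1 Hz cut-off.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS_TARGET = 2.0      # common sampling rate in Hz (Section 2.2)
F_CUTOFF = 1.0       # low-pass cut-off in Hz (Whittaker-Nyquist-Shannon)
WINDOW = 256         # window size W in time steps
SHIFT = WINDOW // 2  # consecutive windows overlap by half a window

def resample_channel(t, x, fs_orig, t_new):
    """Bring one channel onto the common 2 Hz time grid t_new. Channels
    slower than 2 Hz are linearly interpolated; faster channels are
    low-pass filtered at 1 Hz first to avoid aliasing."""
    if fs_orig > FS_TARGET:
        # filter order/type are assumptions; the paper only fixes the cut-off
        sos = butter(4, F_CUTOFF, btype="low", fs=fs_orig, output="sos")
        x = sosfiltfilt(sos, x)
    return np.interp(t_new, t, x)

def zscore(X, mean, std):
    """Z-score normalisation with statistics taken from the training subset."""
    return (X - mean) / std

def window_sequence(X, window=WINDOW, shift=SHIFT):
    """Slice a (T, d) driving cycle into overlapping (W, d) windows."""
    return np.stack([X[i:i + window]
                     for i in range(0, len(X) - window + 1, shift)])
```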
Table 1: Legend for the channel names in Figure 1.

| No. | Name |
|---|---|
| 1 | Vehicle Speed |
| 2 | EDU Torque |
| 3 | Left Axle Torque |
| 4 | Right Axle Torque |
| 5 | EDU Current |
| 6 | EDU Voltage |
| 7 | HVB Current |
| 8 | HVB Voltage |
| 9 | HVB Temperature |
| 10 | HVB State of Charge |
| 11 | EDU Rotor Temperature |
| 12 | EDU Stator Temperature |
| 13 | Inverter Temperature |

In an operative environment, it is desirable to find out whether the previously recorded sequence showed any problems before the next measurement is recorded. Also, a model that performs as well as possible with as little data as possible translates to faster deployment. Good performance is indicated by a model that can detect as many anomalies as possible and rarely labels normal measurements wrongly. To investigate the required training subset size of the model, it is trained with $1$h, $8$h, $64$h, and $512$h worth of dynamic testing data, which corresponds to the first $6$, $44$, $348$, and $2785$ driving cycles, respectively. The results are also presented in Section 5. In each of the above-mentioned cases, the training subset is further split into training ($80\%$) and validation ($20\%$) subsets. Both the training and validation subsets are batched to sets of 512 windows. Given the anomalous subset size of $60$ driving cycles, $600$ normal driving cycles recorded after the ones in the training subset are chosen to make up the normal test subset. This would imply that 9% of measurements at the test bench are anomalous; in reality, however, this value is estimated to be much lower. This amount of anomalous data in relation to normal data is used as it approximately matches the anomaly ratio in public data sets and because the data set is not large enough to create a larger normal test subset.

### 2.3 Variational Autoencoders

The variational autoencoder [Kingma and Welling, 2014] [Rezende et al., 2014] is a generative model that structurally resembles an autoencoder, but is theoretically derived from variational Bayesian statistics. As opposed to the regular deterministic autoencoder, the VAE uses the evidence lower bound (ELBO), which is a lower bound approximation of the so-called log evidence $\log p_{\theta}(\textbf{X})$, as its objective function. The ELBO, Equation 2, can be expressed as the reconstruction log-likelihood and the negative Kullback-Leibler divergence ($D_{\text{KL}}$) between the approximate posterior $q_{\phi}(\textbf{Z}|\textbf{X})$ and the prior $p_{\theta}(\textbf{Z})$, which is typically assumed to be a Gaussian distribution [Goodfellow et al., 2016].

$\begin{split}\mathcal{L}_{\theta,\phi}(\textbf{X})&=\mathbb{E}_{\textbf{Z}\sim q_{\phi}(\textbf{Z}|\textbf{X})}\left[\log p_{\theta}(\textbf{X}|\textbf{Z})\right]\\ &-D_{\text{KL}}(q_{\phi}(\textbf{Z}|\textbf{X})||p_{\theta}(\textbf{Z}))\end{split}$ (2)

where $\textbf{Z}\in\mathbb{R}^{W\times d_{\textbf{Z}}}$ is the sampled latent matrix and $\textbf{X}\in\mathbb{R}^{W\times d_{\textbf{X}}}$ is the input window. $W$ refers to the window length, whereas $d_{\textbf{X}}$ and $d_{\textbf{Z}}$ refer to the input window and latent matrix dimensionality, respectively. Gradient-based optimisation minimises an objective function and the goal is the maximisation of the ELBO, hence the final loss function is defined as the negative of Equation 2, shown in Equation 3.
$\mathcal{L}_{\text{VAE}}=-\mathcal{L}_{\theta,\phi}(\textbf{X})$ (3)

Finally, to enable the backpropagation through the otherwise intractable gradient of the ELBO, the reparametrisation trick [Kingma and Welling, 2014] is applied, shown in Equation 4.

$\textbf{Z}=\boldsymbol{\mu}_{\textbf{Z}}+\epsilon\cdot\boldsymbol{\sigma}_{\textbf{Z}}$ (4)

where $\epsilon\sim\mathcal{N}(0,1)$ and $(\boldsymbol{\mu}_{\textbf{Z}},\log\boldsymbol{\sigma}_{\textbf{Z}}^{2})=q_{\phi}(\textbf{X})$.

### 2.4 Multi-head Attention Mechanism

To simplify the explanation of MA as employed in this work, multi-head self-attention (MS) will be explained instead, with the small difference between MA and MS being pointed out at the end. MS consists of two different concepts: self-attention and its multi-head extension. Self-attention is nothing more than scaled dot-product attention [Vaswani et al., 2017] where the key, query and value are the same. The scaled dot-product attention score is the softmax [Bridle, 1990] of the product between the query matrix Q and the key matrix K, scaled by a factor of $1/\sqrt{d_{\textbf{K}}}$. The product between the attention score and the value matrix V yields the context matrix C, as shown in Equation 5.

$\textbf{C}=\mathrm{Softmax}\left(\frac{\textbf{Q}\textbf{K}^{T}}{\sqrt{d_{\textbf{K}}}}\right)\textbf{V}$ (5)

Compared to recurrent or convolutional layers, self-attention offers a variety of benefits, such as the reduction of computational complexity, as well as an increased amount of operations that can be parallelised [Vaswani et al., 2017]. Also, self-attention inherits an advantage over Bahdanau-style attention [Bahdanau et al., 2015] from the underlying scaled dot-product attention mechanism: it can run efficiently in a matrix-multiplication manner [Vaswani et al., 2017].

Multi-head self-attention then allows the attention model to attend to different representation subspaces [Vaswani et al., 2017], in addition to learning useful projections rather than it being a stateless transformation [Chollet, 2021]. This is achieved using weight matrices $\textbf{W}^{Q}_{i}$, $\textbf{W}^{K}_{i}$, $\textbf{W}^{V}_{i}$, which contain trainable parameters and are unique for each head $i$, as shown in Equation 6.

$\textbf{Q}_{i}=\textbf{Q}\textbf{W}^{Q}_{i}\quad\textbf{K}_{i}=\textbf{K}\textbf{W}^{K}_{i}\quad\textbf{V}_{i}=\textbf{V}\textbf{W}^{V}_{i}$ (6)

Once the query, key and value matrices are linearly transformed via the weight matrices, the context matrix $\textbf{C}_{i}$ for each head $i$ is computed using Equation 7.

$\textbf{C}_{i}=\mathrm{Softmax}\left(\frac{\textbf{Q}_{i}\textbf{K}_{i}^{T}}{\sqrt{d_{\textbf{K}}}}\right)\textbf{V}_{i}$ (7)

Then, for $h$ heads, the different context matrices are concatenated and linearly transformed again via the weight matrix $\textbf{W}^{O}$, resulting in the multi-head context matrix $\textbf{C}\in\mathbb{R}^{W\times d_{\textbf{Z}}}$, Equation 8.

$\textbf{C}=[\textbf{C}_{1},...,\textbf{C}_{h}]\textbf{W}^{O}$ (8)

The underlying mechanism of MA is identical to MS, with the only difference being that K $=$ Q $\not=$ V. The benefit of this alteration is discussed in Section 4.

## 3 Related Work

MA-VAE belongs to the so-called generative model class, which encompasses both variational autoencoders and generative adversarial networks. This section focuses solely on the work on VAE proposed in the context of time-series anomaly detection.
In time-series anomaly detection literature, the only other model that uses the combination of a VAE and an attention mechanism is by [Pereira and Silveira, 2018]. For the purpose of our paper, it is named VS-VAE. Their approach consists of a BiLSTM encoder and decoder, where, for an input window of length W, the $t=W$ encoder hidden states of each direction are passed on to the variational self-attention (VS) mechanism [Bahuleyan et al., 2018]. The resulting context vector is then concatenated with the sampled latent vector and then passed on to the decoder. The authors claim that applying VS to the VAE model solves the bypass phenomenon; however, no evidence for this claim is provided.

The first published time-series anomaly detection approach based on VAE was LSTM-VAE [Park et al., 2018]. One of the contributions is its use of a dynamic prior $\mathcal{N}(\mu_{p},1)$, rather than a static one $\mathcal{N}(0,1)$. In addition to that, they introduce a state-based threshold estimation method consisting of a support-vector regressor (SVR), which maps the latent distribution parameters $(\mu_{\textbf{z}},\sigma_{\textbf{z}})$ to the resulting anomaly score using the validation data. Hence, the dynamic threshold can be obtained through Equation 9.

$\eta_{t}=\text{SVR}(\mu_{\textbf{z},t},\sigma_{\textbf{z},t})+c$ (9)

where $c$ is a pre-defined constant to control sensitivity.

OmniAnomaly [Su et al., 2019] attempts to create a temporal connection between latent distributions by applying a linear Gaussian state space model to them. For the purpose of this paper, it is called OmniA. Also, it concatenates the last gated recurrent unit (GRU) hidden state with the latent vector sampled in the previous time step. In addition to that, it uses planar normalising flow [Rezende and Mohamed, 2015] by applying $K$ transformations to the latent vector in order to approximate a non-Gaussian posterior, as shown in Equation 10.

$f^{k}(\textbf{z}_{t}^{k-1})=\textbf{u}\tanh(\textbf{w}\textbf{z}^{k-1}_{t})+\textbf{b}$ (10)

where u, w and b are trainable parameters.

A simplified VAE architecture [Pereira and Silveira, 2019] based on BiLSTM layers is also proposed. For the purpose of our paper, it is called W-VAE. Unlike its predecessor [Pereira and Silveira, 2018], it drops the attention mechanism but provides contributions elsewhere. It offers two strategies to detect anomalies based on the VAE outputs. The first involves clustering the space characterised by the mean parameter of the latent distribution into two clusters and labelling the larger one as normal. This strategy has a few weaknesses: it cannot be used in an operative environment, as it requires some sort of history of test windows to form the clusters, and it assumes that there are always anomalous samples present. The second strategy finds the Wasserstein similarity measure (hence the W in the name) between the latent mean space mapping of the test window in question and each mapping $i$ resulting from a representative data subset, such as the validation subset. Equation 11 shows how the Wasserstein similarity measure is computed:

$W_{i}(\textbf{z}_{\text{test}},\textbf{z}_{i})=\|\mu_{\textbf{z}_{\text{test}}}-\mu_{\textbf{z}_{i}}\|_{2}^{2}+\|\Sigma_{\textbf{z}_{\text{test}}}^{1/2}-\Sigma_{\textbf{z}_{i}}^{1/2}\|_{F}^{2}$ (11)

where the first term represents the $L2$-norm between the mean distribution parameters resulting from the test window and each point of the representative subset.
The second term represents the Frobenius norm between the covariance matrix resulting from the test window and each point of the representative subset.

SWCVAE [Chen et al., 2020] is the first approach that applies convolutional neural networks (CNN) to VAE for multivariate time-series anomaly detection. Peculiarly, 2D CNN layers are used with the justification of being able to process the input both spatially and temporally. We, however, doubt the ability of the model to properly detect anomalies through spatial processing, as a kernel moving along the feature axis can only capture features adjacent to each other. To create a continuous anomaly score from windows, they append the last value of each window to the previous one. For the purpose of this paper, this process is referred to as last-type reverse-windowing.

SISVAE [Li et al., 2021] tries to improve the modelling robustness by the addition of a smoothing term in the loss function, which reduces sudden changes in the reconstructed signal and makes it less sensitive to noisy time steps.

As part of the VASP framework [von Schleinitz et al., 2021], a variational autoencoder architecture is proposed to increase the robustness of time-series prediction when faced with anomalies. While the main contribution is attributed to the framework itself, not the VAE, it should be noted that during inference only the mean parameter of the latent distribution is passed to the decoder.

## 4 Proposed approach

Figure 2: An illustration of the proposed MA-VAE model. Blue shapes designate trainable models, orange deterministic tensors and green distribution parameters. The shape of each tensor is designated below it. During training Z is used as the value matrix, denoted by the solid arrow, whereas during inference $\boldsymbol{\mu}_{\textbf{Z}}$ is used as the value matrix, denoted by the traced arrow.

### 4.1 Overview

To detect anomalies in multivariate time-series data, we propose a variational autoencoder architecture consisting of BiLSTM layers. The model architecture is illustrated in Figure 2. During training, the encoder $q_{\phi}$ maps the multivariate input window X to a temporal distribution with parameters $\boldsymbol{\mu}_{\textbf{Z}}$ and $\log\boldsymbol{\sigma}_{\textbf{Z}}^{2}$ in the forward pass, Equation 12.

$(\boldsymbol{\mu}_{\textbf{Z}},\log\boldsymbol{\sigma}_{\textbf{Z}}^{2})=q_{\phi}(\textbf{X})$ (12)

Given the latent distribution parameters $\boldsymbol{\mu}_{\textbf{Z}}$ and $\log\boldsymbol{\sigma}_{\textbf{Z}}^{2}$, the latent matrix is sampled from the resulting distribution, as shown in Equation 13.

$\textbf{Z}\sim\mathcal{N}(\boldsymbol{\mu}_{\textbf{Z}},\log\boldsymbol{\sigma}_{\textbf{Z}}^{2})$ (13)

Then, the input window X is linearly transformed to obtain the query matrices $\textbf{Q}_{i}$ and key matrices $\textbf{K}_{i}$ for each head $i$. Likewise, the sampled latent matrix Z is also transformed to the value matrix $\textbf{V}_{i}$, as shown in Equation 14.

$\textbf{Q}_{i}=\textbf{X}\textbf{W}^{Q}_{i}\quad\textbf{K}_{i}=\textbf{X}\textbf{W}^{K}_{i}\quad\textbf{V}_{i}=\textbf{Z}\textbf{W}^{V}_{i}$ (14)

To output the context matrix $\textbf{C}_{i}$ for each head $i$, the softmax of the query and key product, normalised by $\sqrt{d_{\textbf{K}}}$, is multiplied with the value matrix, Equation 15.
$\textbf{C}_{i}=\mathrm{Softmax}\left(\frac{\textbf{Q}_{i}\textbf{K}_{i}^{T}}{\sqrt{d_{\textbf{K}}}}\right)\textbf{V}_{i}$ (15) The final context matrix C is the result of the linearly-transformed concatenation of the head-specific context matrices $\textbf{C}_{i}$, as expressed in Equation 16. $\textbf{C}=[\textbf{C}_{1},...,\textbf{C}_{h}]\textbf{W}^{O}$ (16) The decoder $p_{\theta}$ then maps the context matrix C to an output distribution with parameters $\boldsymbol{\mu}_{\textbf{X}}$ and $\log\boldsymbol{\sigma}_{\textbf{X}}^{2}$, as shown in Equation 17. $(\boldsymbol{\mu}_{\textbf{X}},\log\boldsymbol{\sigma}_{\textbf{X}}^{2})=p_{\theta}(\textbf{C})$ (17) ### 4.2 Inference Mode Despite the generative capabilities of VAEs, MA-VAE does not leverage generation for anomaly detection. Rather than sampling a latent matrix as shown in Equation 13, sampling is disabled during inference and only $\boldsymbol{\mu}_{\textbf{Z}}$ is taken as the input for the multi-head attention mechanism, as in [von Schleinitz et al., 2021]. Equation 13 in the forward pass is therefore replaced by Equation 18. $\textbf{Z}=\boldsymbol{\mu}_{\textbf{Z}}$ (18) This not only accelerates inference by eliminating the sampling process but is also empirically found to be a good approximation of the average latent matrix that would result from sampling several times, as in [Pereira and Silveira, 2018]. The MA-VAE layout during inference is shown in Figure 2, where the dashed arrow designates the information flow from the encoder to the MA mechanism. ### 4.3 Threshold Estimation Method Anomalies are by definition very rare events; hence an ideal anomaly detector flags measurements very rarely but accurately. Test bench engineers prefer an algorithm that only flags a sequence it is sure is an anomaly, in other words, an algorithm that outputs very few to no false positives. A high false positive count would lead to many stoppages and therefore lost testing time and additional cost. Of course, the vast majority of measurements evaluated will be normal, and it is hence paramount to classify them correctly, naturally leading to a high precision value. Also, there is currently no automatic evaluation methodology running at test benches other than rudimentary rule-based methods; a solution that plugs into the existing system and automatically detects some or most anomalies undetectable by rules is therefore already a gain. To achieve this, the threshold $\tau$ is set as the maximum negative log probability observed when the model is fed with validation data. ### 4.4 Bypass Phenomenon A VAE, when combined with an attention mechanism, can exhibit a behaviour called the bypass phenomenon [Bahuleyan et al., 2018]. When the bypass phenomenon happens, the latent path between encoder and decoder is ignored and information flows mostly or exclusively through the attention mechanism, as it has deterministic access to the encoder hidden states and therefore avoids regularisation through the $D_{\text{KL}}$ term. In an attempt to avoid this, [Bahuleyan et al., 2018] propose variational attention, which, like the VAE, maps the input to a distribution rather than a deterministic vector. Applied to natural language processing, [Bahuleyan et al., 2018] demonstrate that this leads to a diversified portfolio of generated sentences, indicating alleviation of the bypass phenomenon.
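To make the information flow of Equations 12-18 concrete, the following is a minimal TensorFlow sketch of the forward pass; layer sizes, names and the `training` switch are illustrative assumptions of this sketch, not the exact implementation. It makes explicit that the attention value matrix is derived from the latent code, so attention alone cannot bypass the variational path.

```python
# Minimal sketch of the MA-VAE forward pass (Eqs. 12-18); sizes are assumptions.
import tensorflow as tf

W, d_x, d_z, heads = 256, 8, 16, 8

encoder = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Dense(2 * d_z),   # emits (mu_Z, log sigma_Z^2), Eq. 12
])
attention = tf.keras.layers.MultiHeadAttention(num_heads=heads, key_dim=1)
decoder = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Dense(2 * d_x),   # emits (mu_X, log sigma_X^2), Eq. 17
])

def forward(x, training=True):
    mu_z, log_var_z = tf.split(encoder(x), 2, axis=-1)        # Eq. 12
    if training:                                              # Eq. 13
        eps = tf.random.normal(tf.shape(mu_z))
        z = mu_z + tf.exp(0.5 * log_var_z) * eps
    else:                                                     # Eq. 18
        z = mu_z  # sampling disabled at inference
    # Q and K are derived from X, V from the latent code Z (Eqs. 14-16),
    # so information cannot flow around the variational path.
    c = attention(query=x, key=x, value=z)
    mu_x, log_var_x = tf.split(decoder(c), 2, axis=-1)        # Eq. 17
    return mu_x, log_var_x

mu_x, log_var_x = forward(tf.random.normal((4, W, d_x)), training=False)
```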
As previously mentioned, only [Pereira and Silveira, 2018] applies this insight in the anomaly detection domain; however, they do not present any proof that it alleviates the bypass phenomenon in their work. MA-VAE, on the other hand, cannot suffer from the bypass phenomenon in the sense of information flow ignoring the latent variational path between encoder and decoder, since the MA mechanism requires the value matrix V, derived from the latent code produced by the encoder, to output the context matrix. Assuming the bypass phenomenon also applies to the case where information flow ignores the attention mechanism, one could claim that MA-VAE is not immune. To disprove this claim, the attention mechanism is removed from the model in an ablation study to see whether anomaly detection performance remains the same; in this case, V is instead directly input into the decoder. If performance drops, this is evidence of the contribution of the attention mechanism to model performance and hence that it is not bypassed. The results of this ablation study are shown and discussed in Section 5. ### 4.5 Impact of Seed Choice Given the stochastic nature of the VAE, the chosen seed can impact the anomaly detection performance, as it can lead to a different local minimum during training. To investigate the impact the seed choice has on model training, MA-VAE is trained with three different seeds; the respective results are also shown in Section 5. ### 4.6 Reverse-window Process Since the model is trained to reconstruct fixed-length windows, the same applies during inference. However, to decide whether a given measurement sequence $\mathcal{S}\in\mathbb{R}^{T\times d_{\textbf{X}}}$ is anomalous, a continuous reconstruction of the measurement is required. The easiest way to do so would be to window the input measurement using a shift of $1$, input the windows into the model and chain the last time step of each output window to obtain a continuous sequence [Chen et al., 2020]. However, given the BiLSTM nature of the encoder and decoder, the first and last time steps of a window can only be computed from the states of one direction, making these values, in theory, less accurate. To overcome this, we propose averaging matching time steps in overlapping windows, which we call the mean-type reverse-window method. This is done by pre-allocating an array with NaN values, filling it, and taking the mean for each time step while ignoring the NaN values. This process, and the general anomaly detection process, are described in Algorithm 1 and in the sketch below. The reverse-window process is applied to the mean and variance parameters of the output distribution; the variance is only converted to a standard deviation after averaging, since two distributions cannot be combined by averaging their standard deviations. With a continuous mean and standard deviation, the continuous negative log probability, i.e. the anomaly score $s$, is computed for the respective measurement. A comparison between the mean, last and first reverse-window processes is provided in Section 5.
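A minimal NumPy sketch of this mean-type reverse-window procedure, complementing Algorithm 1 below; `model` is a placeholder for the MA-VAE forward pass in inference mode and is an assumption of this sketch.

```python
# Mean-type reverse-windowing: NaN pre-allocation, fill, nanmean per time step.
import numpy as np

def reverse_window(seq, model, W):
    T, d = seq.shape
    n_windows = T - W + 1
    mu = np.full((n_windows, T, d), np.nan)    # pre-allocate with NaNs
    var = np.full((n_windows, T, d), np.nan)
    for i in range(n_windows):
        mu_i, log_var_i = model(seq[None, i:i + W])   # one window at shift 1
        mu[i, i:i + W] = mu_i[0]
        var[i, i:i + W] = np.exp(log_var_i[0])
    # Average matching time steps across overlapping windows, ignoring NaNs;
    # variances (not standard deviations) are averaged.
    mu_seq = np.nanmean(mu, axis=0)
    sigma_seq = np.sqrt(np.nanmean(var, axis=0))
    return mu_seq, sigma_seq

def anomaly_score(seq, mu_seq, sigma_seq):
    # Continuous negative Gaussian log probability per time step.
    return 0.5 * (np.log(2 * np.pi * sigma_seq**2)
                  + ((seq - mu_seq) / sigma_seq) ** 2).sum(axis=-1)
```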
Algorithm 1 Anomaly Detection Process

Input: sequence $\mathcal{S}\in\mathbb{R}^{T\times d_{\textbf{X}}}$, threshold $\tau$
Output: label $l$

$n_{\text{windows}}\leftarrow T-W+1$
$\boldsymbol{\mu}_{\textbf{X},\text{temp}}\leftarrow\text{zeros}(n_{\text{windows}},T,d_{\textbf{X}})+\text{NaN}$
$\boldsymbol{\sigma}_{\textbf{X},\text{temp}}^{2}\leftarrow\text{zeros}(n_{\text{windows}},T,d_{\textbf{X}})+\text{NaN}$
for $i=1\to n_{\text{windows}}$ do
  $\textbf{X}\leftarrow\mathcal{S}[i:W+i]$
  $(\boldsymbol{\mu}_{\textbf{Z}},\log\boldsymbol{\sigma}_{\textbf{Z}}^{2})\leftarrow q_{\phi}(\textbf{X})$
  $\textbf{C}\leftarrow\text{MA}(\textbf{X},\textbf{X},\boldsymbol{\mu}_{\textbf{Z}})$
  $(\boldsymbol{\mu}_{\textbf{X}},\log\boldsymbol{\sigma}_{\textbf{X}}^{2})\leftarrow p_{\theta}(\textbf{C})$
  $\boldsymbol{\mu}_{\textbf{X},\text{temp}}[i,i:i+W]\leftarrow\boldsymbol{\mu}_{\textbf{X}}$
  $\boldsymbol{\sigma}_{\textbf{X},\text{temp}}^{2}[i,i:i+W]\leftarrow\boldsymbol{\sigma}_{\textbf{X}}^{2}$
end for
$\boldsymbol{\mu}_{\textbf{X},\text{seq}}\leftarrow\text{nanmean}(\boldsymbol{\mu}_{\textbf{X},\text{temp}})$
$\boldsymbol{\sigma}_{\textbf{X},\text{seq}}^{2}\leftarrow\text{nanmean}(\boldsymbol{\sigma}_{\textbf{X},\text{temp}}^{2})$
$s\leftarrow-\log p(\textbf{X}|\boldsymbol{\mu}_{\textbf{X},\text{seq}},\boldsymbol{\sigma}_{\textbf{X},\text{seq}})$
$l\leftarrow\max(s)>\tau$

## 5 Results ### 5.1 Setup The encoder and decoder both consist of two BiLSTM layers, the outer ones with hidden- and cell-state sizes of 512 and the inner ones with 256. All other parameters are left at their defaults in the TensorFlow API. During training only, input windows are corrupted with Gaussian noise with a standard deviation of $0.01$ to increase robustness to noise. Key factors investigated in Section 5 are given default values which apply to all experiments unless otherwise specified: the training/validation subset size is set to $512$h, the seed is kept at $1$, the mean-type reverse-window method is used, the latent dimension size is set to $d_{\textbf{Z}}=16$, and the MA mechanism is set up as proposed in [Vaswani et al., 2017] with a head count of $h=8$ and a key dimension size $d_{\textbf{K}}=\lfloor d_{\textbf{X}}/h\rfloor=1$. The optimiser used is AMSGrad with the default parameters in the TensorFlow API. Cyclical $D_{\text{KL}}$ annealing [Fu et al., 2019] is applied to the training of MA-VAE to avoid the $D_{\text{KL}}$ vanishing problem, which occurs when regularisation is too strong at the beginning of training, i.e. when the Kullback-Leibler divergence term has a larger magnitude relative to the reconstruction term. Cyclical $D_{\text{KL}}$ annealing allows the model to weigh the Kullback-Leibler divergence lower than the reconstruction term in a cyclical manner through a weight $\beta$. This callback is configured with a grace period of 25 epochs, during which $\beta$ is linearly increased from $0$ to $10^{-8}$. After the grace period, $\beta$ is set to $10^{-8}$ and gradually increased linearly to $10^{-2}$ over the following $25$ epochs, representing one loss cycle. This loss cycle is repeated until training stops. All priors in this work are set as standard Gaussian distributions, i.e. $p=\mathcal{N}(0,1)$. To prevent overfitting, early stopping is implemented.
It works by monitoring the log probability component of the validation loss during training and stopping if it does not improve for $250$ epochs. Naturally, the model weights at the lowest log probability validation loss are saved. Training is done on a workstation configured with an NVIDIA RTX A6000 GPU. The library used for model training is TensorFlow 2.10.1 on Python 3.10 on Windows 10 Enterprise LTSC version 21H2. The results are given in the form of the calibrated and uncalibrated anomaly detection performance, i.e. with and without consideration of the threshold $\tau$, respectively. Recall that the threshold used is the maximum negative log probability obtained from the validation set. The calibrated metrics are precision, recall and $F_{1}$ score. Precision $P$ is the ratio between correctly identified anomalies (true positives) and all positives (true and false), shown in Equation 19. Recall $R$ is the ratio between true positives and all anomalies, also shown in Equation 19. The $F_{1}$ score is the harmonic mean of precision and recall, shown in Equation 20. The underlying quantities used to calculate all of the below are the true positives ($TP$), false negatives ($FN$) and false positives ($FP$). $P=\frac{TP}{TP+FP}\quad R=\frac{TP}{TP+FN}$ (19) $F_{1}=\frac{TP}{TP+\frac{1}{2}(FP+FN)}=2\,\frac{P\cdot R}{P+R}$ (20) The theoretical maximum $F_{1}$ score, $F_{1,\text{best}}$, is also provided to aid discussion. It represents the best possible score achievable by the approach if the ideal threshold were known, i.e. the point on the precision-recall curve that comes closest to the $P=R=1$ point; in reality, this value is not observable and hence cannot be obtained in an unsupervised manner. The uncalibrated anomaly detection performance, i.e. the performance for a range of thresholds, each $0.1$ apart, is represented by the area under the continuous precision-recall curve $A_{\text{PRC}}^{\text{cont}}$, Equation 21, where precision is treated as a function of recall, $P=f(R)$. $A_{\text{PRC}}^{\text{cont}}=\int_{0}^{1}P\,dR$ (21) As the integral cannot be computed for the continuous function, the area under the discrete precision-recall curve $A_{\text{PRC}}^{\text{disc}}$ is used instead, computed with the trapezoidal rule, Equation 22. $A_{\text{PRC}}^{\text{disc}}=\sum^{N}_{k=1}\frac{f(R_{k-1})+f(R_{k})}{2}\Delta R_{k}$ (22) where $N$ is the number of discrete sub-intervals, $k$ the sub-interval index and $\Delta R_{k}$ the sub-interval length at index $k$. ### 5.2 Ablation Study MA-VAE is tested without the MA mechanism, with a direct connection from the encoder to the decoder, to observe whether this impacts the results. The anomaly detection performance of MA-VAE and its counterpart without MA, henceforth referred to as the No MA model, is shown in Table 2. While the precision value of the No MA model is slightly higher than that of MA-VAE, its recall value is much lower. Overall, MA-VAE has a higher $F_{1}$ score, as well as a higher theoretical maximum $F_{1}$ score, although the two values are close enough that one could claim the threshold is near-ideal. The uncalibrated performance is also higher for MA-VAE, as evident in the precision-recall plot in Figure 3. Interestingly, MA-VAE features a lower precision value for the chosen unsupervised threshold but has the potential to reach a higher maximum recall at $P=1$.
The results hence point towards an improvement brought about by the addition of the MA mechanism, and the bypass phenomenon can therefore be ruled out. Table 2: Precision $P$, recall $R$, $F_{1}$ score, theoretical best $F_{1}$ score $F_{1,\text{best}}$ and area under the precision-recall curve $A_{\text{PRC}}$ results for the model variant without the MA mechanism and MA-VAE. The best values for each metric are given in bold. Model | $P$ | $R$ | $F_{1}$ | $F_{1,\text{best}}$ | $A_{\text{PRC}}$ ---|---|---|---|---|--- No MA | 1.00 | 0.35 | 0.52 | 0.54 | 0.52 MA-VAE | 0.92 | 0.55 | 0.69 | 0.70 | 0.66 Figure 3: Precision-recall curves for the model variant without MA and MA-VAE. ### 5.3 Data Set Size Requirements To evaluate how much data is required to train MA-VAE to a point of adequate anomaly detection performance, it has been trained with $1$h, $8$h, $64$h, and $512$h worth of dynamic testing data. The results for this experiment are presented in Table 3. On the one hand, as the training/validation subset increases in size, the precision value improves, with the largest jump occurring when the dynamic testing time goes from $1$h to $8$h. The recall value, on the other hand, decreases as the subset grows. This can be attributed to the fact that smaller subset sizes lead to a small validation set and therefore less data from which to obtain a threshold. With a limited amount of data, it is more difficult to obtain a representative error distribution, leading to a threshold that is very small and hence marks most anomalies correctly but also produces a lot of false positives. The $F_{1}$ score reaches a point of diminishing returns from the $8$h subset onwards; this can also be observed for the theoretical maximum $F_{1}$ score, $F_{1,\text{best}}$, as well as for the $A_{\text{PRC}}$ value, and is further supported by the precision-recall plot in Figure 4. Lastly, the $F_{1}$ score seems to approach the $F_{1,\text{best}}$ score as the subset grows, also backing the observation that a good threshold cannot easily be obtained from a small subset. Table 3: Precision $P$, recall $R$, $F_{1}$ score, theoretical best $F_{1}$ score $F_{1,\text{best}}$ and area under the precision-recall curve $A_{\text{PRC}}$ results for the different training/validation subset sizes. The best values for each metric are given in bold. Size | $P$ | $R$ | $F_{1}$ | $F_{1,\text{best}}$ | $A_{\text{PRC}}$ ---|---|---|---|---|--- $1$h | 0.09 | 0.88 | 0.17 | 0.55 | 0.49 $8$h | 0.66 | 0.63 | 0.64 | 0.72 | 0.69 $64$h | 0.71 | 0.57 | 0.63 | 0.69 | 0.68 $512$h | 0.92 | 0.55 | 0.69 | 0.70 | 0.66 Figure 4: Precision-recall curves for the model trained on different training/validation subset sizes. Therefore, for application at the test bench, the largest subset size is desirable due to the higher precision value and a closer-to-ideal threshold value. ### 5.4 Impact of Seed Choice To illustrate the impact seed choice has on the performance metrics, results are presented for three different seeds. Table 4 shows that while the precision values are roughly in the same range for all seeds, the recall values vary more significantly, which is also reflected in the $F_{1}$ score. However, inspecting the $F_{1,\text{best}}$ and $A_{\text{PRC}}$ values makes clear that the seeds are not as far apart as the recall values suggest and that the issue may lie with the threshold choice. Figure 5 further supports this, as all curves follow roughly the same path, with the exception of seed $3$ at very high precision values.
The plot clearly shows that a more suitable (lower) threshold would lead to seed $3$ having a recall value comparable to the other seeds while maintaining high precision. Some differences can be observed between the seeds, especially in the recall values; however, these can be attributed to the unsupervised threshold choice. Table 4: Precision $P$, recall $R$, $F_{1}$ score, theoretical best $F_{1}$ score $F_{1,\text{best}}$ and area under the precision-recall curve $A_{\text{PRC}}$ results for the different seeds. The best values for each metric are given in bold. Seed | $P$ | $R$ | $F_{1}$ | $F_{1,\text{best}}$ | $A_{\text{PRC}}$ ---|---|---|---|---|--- 1 | 0.92 | 0.55 | 0.69 | 0.70 | 0.66 2 | 0.90 | 0.60 | 0.72 | 0.73 | 0.67 3 | 0.96 | 0.40 | 0.56 | 0.70 | 0.64 Figure 5: Precision-recall curves for the model trained on different seeds. ### 5.5 Reverse-window Process To investigate the effect of the mean-type reverse-window method, it is compared with the first-type and last-type methods, where the first and last values of each window are carried over, respectively. The results in this subsection, Table 5 and Figure 6, tell a similar story to the previous subsection. The metrics independent of the chosen threshold are very similar regardless of the reverse-window method, implying that the methods are comparable and that any differences in the calibrated metrics can be attributed to the chosen threshold. The mean-type reverse-window method results in a higher computational load, though a negligible one: for a rather long sequence of $4000$ time steps, i.e. around $33$ minutes of measurement, the mean-type method only takes around $2$ seconds longer. One source of delay that can appear, however, is during online anomaly detection, where an online algorithm is defined as one that evaluates the sequence as it is being recorded. To obtain time step $t$ using the mean-type (or the first-type) method, one has to wait for time step $t+W$, since later overlapping windows still contribute to the value at $t$. This translates to a delay of around $2$ minutes in the real world, given the chosen window size. If the evaluation is done offline, this delay is eliminated, since at the end of a completed sequence the final values have no further overlapping windows to wait for. Table 5: Precision $P$, recall $R$, $F_{1}$ score, theoretical best $F_{1}$ score $F_{1,\text{best}}$ and area under the precision-recall curve $A_{\text{PRC}}$ results for the different reverse-window types. The best values for each metric are given in bold. Type | $P$ | $R$ | $F_{1}$ | $F_{1,\text{best}}$ | $A_{\text{PRC}}$ ---|---|---|---|---|--- first | 0.97 | 0.48 | 0.64 | 0.69 | 0.64 last | 0.88 | 0.58 | 0.71 | 0.71 | 0.67 mean | 0.92 | 0.55 | 0.69 | 0.70 | 0.66 Figure 6: Precision-recall curves for different reverse-window methods. ### 5.6 Hyperparameter Optimisation As part of the hyperparameter optimisation of MA-VAE, a list of latent dimension sizes $d_{\textbf{Z}}$ in combination with a list of key dimension sizes $d_{\textbf{K}}$ is tested. Despite the larger learning capacity associated with a higher $d_{\textbf{K}}$, the concatenation is always transformed to a matrix of size $d_{\textbf{O}}=d_{\textbf{Z}}$. For the two variables, values of 1, 4, 16, and 64 are tested. The best result is achieved with $d_{\textbf{Z}}=d_{\textbf{K}}=64$. Given that these are the highest values of $d_{\textbf{Z}}$ and $d_{\textbf{K}}$ tested, even higher values should be experimented with in the future, though they will lead to higher model complexity and training/inference time.
The attention head count $h$ was also experimented with, using the same range of values as for $d_{\textbf{Z}}$ and $d_{\textbf{K}}$; however, none performed better than the $h=8$ configuration. The results are presented in Table 6, and the corresponding precision-recall plot is shown in Figure 7. $91\%$ of the sequences flagged as anomalous were actually anomalous, and $67\%$ of the total number of anomalous sequences in the test set were detected. One example of the anomalous cycles and the respective reconstructions is plotted in Figure 8. Table 6: Precision $P$, recall $R$, $F_{1}$ score, theoretical best $F_{1}$ score $F_{1,\text{best}}$ and area under the precision-recall curve $A_{\text{PRC}}$ result for the best $d_{\textbf{Z}}$, $d_{\textbf{K}}$ and $h$ values. $d_{\textbf{Z}}$ | $d_{\textbf{K}}$ | $h$ | $P$ | $R$ | $F_{1}$ | $F_{1,\text{best}}$ | $A_{\text{PRC}}$ ---|---|---|---|---|---|---|--- 64 | 64 | 8 | 0.91 | 0.67 | 0.77 | 0.79 | 0.74 Figure 7: Precision-recall curve for the final MA-VAE. Figure 8: Wheel diameter anomaly plotted in black and the output distribution in red, as well as the anomaly score plotted in blue and the threshold as a straight orange line. ### 5.7 Benchmarking Of course, MA-VAE is not the first model proposed for time-series anomaly detection. To underline its anomaly detection performance, it is compared with a series of other models based on variational autoencoders. The chosen subset of models is based on the work discussed in Section 3, selecting those which either linked source code or contained enough information for implementation. The models are implemented using the hyperparameters specified in their respective publications. To level the playing field, the models are trained on the $512$h subset with early stopping, which is parametrised equally across all models. The anomaly detection process specified in Algorithm 1 is also applied to all models, along with the threshold estimation method. The results can be seen in Table 7. Table 7: Precision $P$, recall $R$, $F_{1}$ score, theoretical best $F_{1}$ score $F_{1,\text{best}}$ and area under the precision-recall curve $A_{\text{PRC}}$ results for competing models and MA-VAE (Ours). The best values for each metric are given in bold. Model | $P$ | $R$ | $F_{1}$ | $F_{1,\text{best}}$ | $A_{\text{PRC}}$ ---|---|---|---|---|--- VS-VAE | 1.00 | 0.33 | 0.50 | 0.56 | 0.51 W-VAE | 1.00 | 0.30 | 0.46 | 0.46 | 0.41 OmniA | 0.96 | 0.37 | 0.53 | 0.58 | 0.53 SISVAE | 1.00 | 0.30 | 0.46 | 0.50 | 0.51 MA-VAE | 0.91 | 0.67 | 0.77 | 0.79 | 0.74 As is evident, MA-VAE outperforms all other models in every metric except precision. As stated in Section 4, a high precision figure is important in this type of powertrain testing; however, the reduced precision is still considered tolerable, and it comes with the benefit of a much higher recall figure, which is reflected in the superior $F_{1}$ figure. Furthermore, the $F_{1,\text{best}}$ figure, which is obtained at $P=0.98$ and $R=0.67$, suggests that MA-VAE has the potential to achieve even higher precision without sacrificing recall if the threshold were optimised. The higher $A_{\text{PRC}}$ also shows that MA-VAE performs well over a wider range of thresholds. ## 6 Conclusion and Outlook In this paper, a multi-head attention variational autoencoder (MA-VAE) for anomaly detection in automotive testing is proposed. It not only features an attention configuration that avoids the bypass phenomenon but also introduces a novel method of remapping windows to whole sequences.
A number of experiments are conducted to demonstrate the anomaly detection performance of the model, as well as to underline the benefits of key aspects introduced with it. From the results obtained, MA-VAE clearly benefits from the MA mechanism, indicating the avoidance of the bypass phenomenon. Moreover, the proposed approach requires only a small training/validation subset to reach good uncalibrated performance but then fails to obtain a suitable threshold, as with increasing subset size it is mainly the calibrated anomaly detection performance that improves. Training with different seeds is also shown to have little impact on the anomaly detection metrics, provided the threshold is chosen suitably, further underlining the previous point. Moreover, mean-type reverse-windowing fails to significantly outperform its first-type and last-type counterparts, while introducing additional lag when applied to online anomaly detection. Lastly, the hyperparameter optimisation revealed that the MA-VAE variant with the largest latent dimension and attention key dimension yields the best anomaly detection performance. It is wrong only $9\%$ of the time when an anomaly is flagged and manages to discover $67\%$ of the anomalies present in the test data set. It also outperforms all competing models it is compared with. In the future, a method of threshold choice involving active learning will be investigated, which can use user feedback to home in on a better threshold. Also, MA-VAE is set to be tested in the context of online anomaly detection, i.e. during the driving cycle measurement. ## REFERENCES * Bahdanau et al., 2015 Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations (ICLR). * Bahuleyan et al., 2018 Bahuleyan, H., Mou, L., Vechtomova, O., and Poupart, P. (2018). Variational Attention for Sequence-to-Sequence Models. In International Conference on Computational Linguistics (COLING). * Bridle, 1990 Bridle, J. S. (1990). Probabilistic Interpretation of Feedforward Classification Network Outputs, with Relationships to Statistical Pattern Recognition. Neurocomputing, pages 227-236. * Chen et al., 2020 Chen, T., Liu, X., Xia, B., Wang, W., and Lai, Y. (2020). Unsupervised Anomaly Detection of Industrial Robots Using Sliding-Window Convolutional Variational Autoencoder. IEEE Access, 8:47072-47081. * Chollet, 2021 Chollet, F. (2021). Deep Learning with Python. Manning Publications. * Fu et al., 2019 Fu, H., Li, C., Liu, X., Gao, J., Celikyilmaz, A., and Carin, L. (2019). Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing. In Conference of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). * Goodfellow et al., 2016 Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. MIT Press. * Kingma and Welling, 2014 Kingma, D. P. and Welling, M. (2014). Auto-Encoding Variational Bayes. In International Conference on Learning Representations (ICLR). * Li et al., 2021 Li, L., Yan, J., Wang, H., and Jin, Y. (2021). Anomaly Detection of Time Series With Smoothness-Inducing Sequential Variational Auto-Encoder. Transactions on Neural Networks and Learning Systems, 32(3):1177-1191. * Park et al., 2018 Park, D., Hoshi, Y., and Kemp, C. C. (2018). A Multimodal Anomaly Detector for Robot-Assisted Feeding Using an LSTM-Based Variational Autoencoder. IEEE Robotics and Automation Letters, 3(3):1544-1551. * Pereira and Silveira, 2018 Pereira, J. and Silveira, M. (2018).
Unsupervised Anomaly Detection in Energy Time Series Data Using Variational Recurrent Autoencoders with Attention. In International Conference on Machine Learning and Applications (ICMLA). * Pereira and Silveira, 2019 Pereira, J. and Silveira, M. (2019). Unsupervised representation learning and anomaly detection in ECG sequences. International Journal of Data Mining and Bioinformatics, 22(4):389. * Rezende and Mohamed, 2015 Rezende, D. J. and Mohamed, S. (2015). Variational Inference with Normalizing Flows. In International Conference on Machine Learning (ICML). * Rezende et al., 2014 Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In International Conference on Machine Learning (ICML). * Shannon, 1949 Shannon, C. (1949). Communication in the Presence of Noise. Proceedings of the IRE, 37(1):10–21. * Su et al., 2019 Su, Y., Zhao, Y., Niu, C., Liu, R., Sun, W., and Pei, D. (2019). Robust Anomaly Detection for Multivariate Time Series through Stochastic Recurrent Neural Network. In International Conference on Knowledge Discovery & Data Mining (KDD). * Vaswani et al., 2017 Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention Is All You Need. In Conference on Neural Information Processing Systems (NIPS). * von Schleinitz et al., 2021 von Schleinitz, J., Graf, M., Trutschnig, W., and Schröder, A. (2021). VASP: An autoencoder-based approach for multivariate anomaly detection and robust time series prediction with application in motorsport. Engineering Applications of Artificial Intelligence, 104:104354.
# Universal relation for operator complexity

Zhong-Ying Fan$^{1\dagger}$

$^{1\dagger}$ Department of Astrophysics, School of Physics and Material Science, Guangzhou University, Guangzhou 510006, China

Email: <EMAIL_ADDRESS>

ABSTRACT

We study Krylov complexity $C_{K}$ and operator entropy $S_{K}$ in operator growth. We find that for a variety of systems, including chaotic ones and integrable theories, the two quantities always enjoy a logarithmic relation $S_{K}\sim\mathrm{ln}{C_{K}}$ at long times, where dissipative behavior emerges in unitary evolution. Otherwise, the relation no longer holds. Universality of the relation is deeply connected to the irreversibility of operator growth.

###### Contents

1. Introduction
2. Preliminaries: recursion method and irreversibility of operator dynamics
   2.1 The recursion method
   2.2 Irreversibility and ergodicity
3. Operator growth at initial times
   3.1 Initial growth
   3.2 Testing the initial growth
4. Operator growth at long times
   4.1 Chaotic systems (4.1.1 With logarithmic correction)
   4.2 Integrable theories (4.2.1 With logarithmic correction)
   4.3 With bounded support and beyond
5. Finite chains
6. Conclusion and discussion

## 1 Introduction Time evolution in quantum mechanical systems is generally local and unitary. However, it is also known that many quantum systems have effectively irreversible hydrodynamic descriptions, for example their transport properties. Understanding the emergence of this thermal behavior is a central goal of theoretical research on quantum many body systems. Operator growth and its relation to thermalization is an exciting topic in this direction; see for example [1, 2, 3, 4, 5]. Consider an initial operator $\mathcal{O}_{0}$ and suppose it can be written as a sum of a few basis vectors in some local basis. Heisenberg evolution in the space of operators, $\mathcal{O}(t)=e^{iHt}\mathcal{O}_{0}e^{-iHt}$, is described by a set of nested commutators of $\mathcal{O}_{0}$ with the Hamiltonian $H$. Evaluating all these commutators is equivalent to solving the dynamics of the operator completely. However, for complex systems, the number of these commutators (or the number of nonzero coefficients of $\mathcal{O}(t)$ in any local basis) increases monotonically during the evolution and can blow up exponentially: the initially simple operator $\mathcal{O}_{0}$ grows irreversibly into a complex one. Because of the exponential size of the problem, a statistical description should emerge for the process. This implies that operator growth should exhibit some form of universality, similar to statistical mechanics. It is of great interest to search for universal features of operator dynamics. In this paper, we study two information quantities, the Krylov complexity (or K-complexity) [6] and the operator entropy (or K-entropy) [7], in operator growth. Previously, the two quantities were examined separately in the literature [8, 9, 10, 11, 12, 13]. Here we are interested in their functional relation during the time evolution. We will show that the two enjoy a logarithmic relation at long times (in this paper, the base of the logarithm is the constant $e$), $S_{K}(t)=\tilde{\eta}\,\mathrm{ln}{C_{K}(t)}+\cdots\,,$ (1) where dissipative behavior emerges. Here $\tilde{\eta}$ is a positive constant, depending on the choice of dynamics. We propose that it is bounded above as $\tilde{\eta}\leq 1$. The relation is universal to all irreversible growth of operators, not necessarily exponential growth, as far as we can check.
The paper is organized as follows. In section 2, we briefly review the recursion method and a quantitative measure of the irreversibility of operator growth. In section 3, we study the functional relation between $S_{K}$ and $C_{K}$ at initial times and show that they enjoy a product-logarithmic relation $S_{K}\sim-C_{K}\,\mathrm{ln}{C_{K}}$ to leading order. In section 4, we study the long time behavior of $S_{K}$ and $C_{K}$ for a variety of systems in which dissipative behavior emerges (throughout, "dissipative behavior" refers to the decaying behavior of the operator wave functions at long times), including chaotic ones and integrable theories. We find that they always enjoy a logarithmic relation $S_{K}\sim\mathrm{ln}{C_{K}}$ at leading order, irrespective of dynamical details. In section 5, as a comparison, we examine simple systems, for which no dissipative behavior emerges, and show that in this case the logarithmic relation no longer holds. We conclude in section 6. ## 2 Preliminaries: recursion method and irreversibility of operator dynamics For a lattice system with Hamiltonian $H$, the initial operator $\mathcal{O}_{0}$ evolves in time according to the Heisenberg equation $\partial_{t}\mathcal{O}(t)=i[H\,,\mathcal{O}(t)]$, or explicitly $\mathcal{O}(t)=e^{iHt}\mathcal{O}_{0}e^{-iHt}=\sum_{n=0}{\frac{(it)^{n}}{n!}}\tilde{\mathcal{O}}_{n}\,,$ (2) where $\tilde{\mathcal{O}}_{n}$ stands for the nested commutators: $\tilde{\mathcal{O}}_{1}=[H,\mathcal{O}_{0}]\,,\tilde{\mathcal{O}}_{2}=[H,\tilde{\mathcal{O}}_{1}]\,,\cdots\,,\tilde{\mathcal{O}}_{n}=[H,\tilde{\mathcal{O}}_{n-1}]\,.$ However, evaluating these commutators is very difficult for complex systems. It is sometimes helpful to think of the problem as solving a Schrödinger equation, taking the operator as a wave function. Defining the Liouvillian $\mathcal{L}\equiv[H\,,\cdot]$, the operator wave function evolves as $|\mathcal{O}(t)\rangle=e^{i\mathcal{L}t}|\mathcal{O}_{0}\rangle=\sum_{n=0}{\frac{(it)^{n}}{n!}}|\tilde{\mathcal{O}}_{n}\rangle\,,$ (3) where $|\tilde{\mathcal{O}}_{n}\rangle=\mathcal{L}^{n}|\mathcal{O}_{0}\rangle.$ In this language, one generally needs to introduce a proper inner product on the operator Hilbert space. For example, in the infinite temperature limit, one usually takes $\langle A|B\rangle=\mathrm{Tr}\big{(}A^{{\dagger}}B\big{)}$. We refer the reader to [14] for more details. The physical information about operator growth is essentially encoded in the so-called auto-correlation function, defined as $C(t):=\langle\mathcal{O}_{0}|\mathcal{O}(t)\rangle=\langle\mathcal{O}_{0}|e^{i\mathcal{L}t}|\mathcal{O}_{0}\rangle\,.$ (4) The same information can also be extracted from the moments $\mu_{2n}$, $\mu_{2n}:=\langle\mathcal{O}_{0}|\mathcal{L}^{2n}|\mathcal{O}_{0}\rangle={\frac{d^{2n}}{dt^{2n}}}C(-it)\Big{|}_{t=0}\,.$ (5) Note that $C(t)=\sum_{n=0}{\frac{\mu_{2n}(it)^{2n}}{(2n)!}}=1-{\frac{1}{2}}\mu_{2}t^{2}+\cdots$, implying that the initial growth of operators is determined by the lowest moment $\mu_{2}$. We will come back to this point in the next section. Two other quantities encoding the same information about the dynamics of operators are the relaxation function $\phi_{0}(z)$ and the spectral density $\Phi(\omega)$.
They are defined as the Laplace transform and the Fourier transform of $C(t)$, respectively: $\phi_{0}(z):=\int_{0}^{+\infty}dt\,e^{-zt}C(t)\,,\qquad\Phi(\omega):=\int_{-\infty}^{+\infty}dt\,e^{-i\omega t}C(t)\,.$ (6) Note that they (and the moments) are linearly related to the auto-correlation function. In particular, one has [14] $\Phi(\omega)=2\lim_{\varepsilon\rightarrow 0}\mathrm{Re}\big{[}\phi_{0}(\varepsilon-i\omega)\big{]}\,.$ (7) In the remainder of this paper, we will frequently switch between these quantities and use whichever is most convenient for the physics under discussion; this should cause no confusion. ### 2.1 The recursion method In general, the original operator basis $\\{|\tilde{\mathcal{O}}_{n}\rangle\\}$ is not orthogonal. Just as in ordinary quantum mechanics, one can study the dynamics of the operator wave function in different bases, of which a particularly interesting one is orthonormal. This leads to a different approach to operator growth, which we briefly review in the following. Using the Gram-Schmidt scheme, one starts with a normalized vector $|\mathcal{O}_{0}\rangle$. The first vector is then given by $|\mathcal{O}_{1}\rangle:=b_{1}^{-1}\mathcal{L}|\mathcal{O}_{0}\rangle$, where $b_{1}:=\langle\mathcal{L}\mathcal{O}_{0}|\mathcal{L}\mathcal{O}_{0}\rangle^{1/2}$. For the $n$-th vector, one inductively defines $|A_{n}\rangle:=\mathcal{L}|\mathcal{O}_{n-1}\rangle-b_{n-1}|\mathcal{O}_{n-2}\rangle\,,\qquad|\mathcal{O}_{n}\rangle:=b_{n}^{-1}|A_{n}\rangle\,,\quad b_{n}:=\langle A_{n}|A_{n}\rangle^{1/2}\,.$ (8) The output of the above procedure is a set of orthonormal vectors $\\{|\mathcal{O}_{n}\rangle\\}$, called the Krylov basis, and a sequence of positive numbers $\\{b_{n}\\}$, referred to as the Lanczos coefficients, which have units of energy (in this paper, time is measured in units of the inverse Lanczos coefficients, for example $1/b_{1}$). The physical information encoded in the auto-correlation function or the moments can be equivalently extracted from the Lanczos coefficients; however, they are related via nonlinear transformations. In the Krylov basis, the Liouvillian is tridiagonal, $L_{nm}:=\langle O_{n}|\mathcal{L}|O_{m}\rangle=\left(\begin{array}{ccccc}0&b_{1}&0&0&\ldots\\ b_{1}&0&b_{2}&0&\ldots\\ 0&b_{2}&0&b_{3}&\ldots\\ 0&0&b_{3}&0&\ddots\\ \vdots&\vdots&\vdots&\ddots&\ddots\\ \end{array}\right)\,,$ (9) whereas the moments are given by $\mu_{2n}=\langle\mathcal{O}_{0}|\mathcal{L}^{2n}|\mathcal{O}_{0}\rangle=(L^{2n})_{00}\,.$ (10) Some low-lying examples are $\mu_{2}=b_{1}^{2}\,,\quad\mu_{4}=b_{1}^{4}+b_{1}^{2}b_{2}^{2}\,,\quad\cdots\,.$ (11) General transformations between these two sets of constants can be found in [14] (or Appendix A of [6]). The nonlinear information coding of the Lanczos coefficients plays an indispensable role in demonstrating the universality of operator growth [6]. In the Krylov basis, the evolution of the operator $\mathcal{O}(t)$ can be formally written as $|\mathcal{O}(t)\rangle:=\sum_{n=0}i^{n}\varphi_{n}(t)|\mathcal{O}_{n}\rangle\,,$ (12) where $\varphi_{n}(t):=i^{-n}\langle\mathcal{O}_{n}|\mathcal{O}(t)\rangle$ is a discrete set of wave functions. The Heisenberg evolution of $\mathcal{O}(t)$ gives rise to a discrete set of equations $\partial_{t}\varphi_{n}=b_{n}\varphi_{n-1}-b_{n+1}\varphi_{n+1}\,,$ (13) subject to the boundary condition $\varphi_{n}(0)=\delta_{n0}$, with $b_{0}=0=\varphi_{-1}(t)$ by convention.
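As a quick sanity check of (9)-(11), one can build a truncated tridiagonal Liouvillian numerically and read off the moments; a minimal sketch, with arbitrary illustrative coefficients of our own choosing:

```python
# Verify that the truncated tridiagonal L of Eq. (9) reproduces Eqs. (10)-(11).
import numpy as np

b = np.array([1.3, 0.7, 2.1, 0.9])          # illustrative Lanczos coefficients
L = np.diag(b, 1) + np.diag(b, -1)           # Krylov-basis Liouvillian, Eq. (9)

mu2 = np.linalg.matrix_power(L, 2)[0, 0]     # (L^2)_{00}
mu4 = np.linalg.matrix_power(L, 4)[0, 0]     # (L^4)_{00}
assert np.isclose(mu2, b[0]**2)                         # mu_2 = b_1^2
assert np.isclose(mu4, b[0]**4 + b[0]**2 * b[1]**2)     # mu_4 = b_1^4 + b_1^2 b_2^2
```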
Equation (13), together with these boundary conditions, defines a one-dimensional quantum mechanical problem on a half chain and uniquely determines the wave functions $\varphi_{n}(t)$ for a given set of Lanczos coefficients. Since the auto-correlation function is simply $C(t)=\varphi_{0}(t)$, it is completely equivalent to the Lanczos coefficients. Taking the Laplace transform of (13), one finds $z\phi_{n}(z)-\delta_{n0}=b_{n}\phi_{n-1}(z)-b_{n+1}\phi_{n+1}(z)\,,$ (14) where $\phi_{n}(z)$ is the Laplace transform of $\varphi_{n}(t)$. This relation gives the continued fraction representation of the relaxation function, $\phi_{0}(z)=\dfrac{1}{z+\dfrac{b_{1}^{2}}{z+\dfrac{b_{2}^{2}}{z+\ddots}}}\,.$ (15) For finite chains, the recursion method naturally terminates at some finite order and the relaxation function has a compact expression. This already solves the operator dynamics for simple systems completely; see Sec. 5 for more details. For (sufficiently) complex systems, the one-dimensional chain is semi-infinite and different approaches are needed. ### 2.2 Irreversibility and ergodicity It turns out that the condition for the emergence of ergodic behavior in complex systems can be formulated directly in terms of operator growth. Define a quantity $W$ as [15] $W={\frac{b_{2}^{2}\,b_{4}^{2}\,b_{6}^{2}\cdots}{b_{1}^{2}\,b_{3}^{2}\,b_{5}^{2}\cdots}}\,,$ (16) which is referred to as the canonical form of $W$. Equivalently, it can be evaluated using either the relaxation function, $W=\phi_{0}(0)\,,$ (17) or the auto-correlation function directly, as $W=\int_{0}^{+\infty}dt\,\varphi_{0}(t)\,.$ (18) For simple systems, the recursion method terminates at some finite order, so that $b_{2k}=0$ or $b_{2k+1}=0$ and one has $W=0$ or $W=\infty$; the process is reversible. Otherwise, a finite nonzero $W$ describes an irreversible process. It was shown [15] that this is equivalent to Kubo's condition, which is formulated in terms of time averages of correlation functions; however, evaluating $W$ provides a simpler way to test the ergodicity of the theory. There are also other criteria believed to probe the irreversibility of operator growth. For example, at long times the auto-correlation function should approach zero. However, as pointed out in [15], this is only a necessary condition. In fact, it is hard to find an alternative condition equivalent to a finite $W$. Our new observation concerns information quantities: the relation between K-complexity and K-entropy may be such a candidate. ## 3 Operator growth at initial times According to (12), the wave functions $\varphi_{n}(t)$ on the one-dimensional chain can be interpreted probabilistically, as in ordinary quantum mechanics. Normalization of the operator wave function $|\mathcal{O}(t)\rangle$ implies $\sum_{n=0}|\varphi_{n}(t)|^{2}=1$. The K-complexity was defined as the average position on the chain [6], $C_{K}:=\langle\mathcal{O}(t)|n|\mathcal{O}(t)\rangle=\sum_{n=0}n|\varphi_{n}|^{2}\,.$ (19) Clearly, it depends on the Krylov basis, the initial operator and the dynamics under consideration. Yet the physical meaning of $C_{K}$ is far from transparent. It was established [6] that K-complexity provides an upper bound for other notions of complexity, such as OTOCs. Application of the theorem to chaotic systems leads to an upper bound on the Lyapunov exponent, $\lambda_{L}\leq 2\alpha$, where $\alpha$ is the asymptotic growth rate of the Lanczos coefficients, $b_{n}\rightarrow\alpha n+\gamma$ as $n\rightarrow+\infty$.
Moreover, extending the analysis to finite temperatures, it was argued that the bound is tighter than the MSS bound [16]: $\lambda_{L}\leq 2\alpha\leq 2\pi T$. Though these results are interesting, K-complexity has not, as far as we know, been connected to the irreversibility of operator growth. On the other hand, the operator entropy (or K-entropy) was defined as [7] $S_{K}(t):=-\sum_{n=0}|\varphi_{n}(t)|^{2}\,\mathrm{ln}{|\varphi_{n}(t)|^{2}}\,.$ (20) Similar to $C_{K}$, $S_{K}$ depends on the Krylov basis, the initial operator and the choice of dynamics; its physical meaning is not clear either. Properties of $S_{K}$ were previously examined in the scrambling and post-scrambling regimes for chaotic systems [7]. In this paper, we investigate $C_{K}$ and $S_{K}$ for a variety of systems with dissipative behavior emerging at long times. We will show that in this case they always enjoy a logarithmic relation $S_{K}\sim\mathrm{ln}{C_{K}}$. ### 3.1 Initial growth Before studying the long time dynamics, let us first address operator growth at initial times. According to the discrete Schrödinger equation (13) and the boundary conditions, one easily finds $\varphi_{n}(0)=\delta_{n0}\,,\quad\dot{\varphi}_{n}(0)=b_{1}\delta_{n1}\,,\quad\ddot{\varphi}_{n}(0)=-b_{1}^{2}\delta_{n0}+b_{1}b_{2}\delta_{n2}\,,$ (21) where a dot denotes a derivative with respect to $t$. Using these results, we deduce $C_{K}(0)=0\,,\quad\dot{C}_{K}(0)=0\,,\quad\ddot{C}_{K}(0)=2\mu_{2}\,.$ (22) This implies that initially the Krylov complexity grows quadratically, $C_{K}(t)=\mu_{2}t^{2}+\cdots\,.$ (23) Extension of the analysis to the operator entropy yields $S_{K}(0)=0\,,\quad\dot{S}_{K}(0)=0\,,\quad\ddot{S}_{K}(0)=-2\mu_{2}\,\mathrm{ln}{(\mu_{2}t^{2})}-4\mu_{2}\,.$ (24) To leading order, the operator entropy behaves as $S_{K}(t)=-\mu_{2}t^{2}\,\mathrm{ln}{(\mu_{2}t^{2})}+O(\mu_{2}t^{2})\,.$ (25) These results imply a product-logarithmic relation between $S_{K}$ and $C_{K}$ at initial times, $S_{K}(t)=-C_{K}(t)\,\mathrm{ln}{C_{K}(t)}+O\big{(}C_{K}(t)\big{)}\,.$ (26) Notice that this relation is universal to both reversible and irreversible processes; it does not capture any information about the long time dynamics. Comparing it with the result at long times will help us understand the long time dynamics better. ### 3.2 Testing the initial growth Let us test the relation (26) using several exact examples. The first is $b_{n}=\alpha\sqrt{n(n-1+\eta)}$, which naturally arises in the SYK model. The wave functions on the semi-infinite chain can be solved as [6] $\varphi_{n}(t)=\sqrt{{\textstyle{\frac{\scriptstyle\Gamma(n+\eta)}{\scriptstyle n!\Gamma(\eta)}}}}\,{\frac{\tanh^{n}(\alpha t)}{\cosh^{\eta}{(\alpha t)}}}\,.$ (27) Evaluating the K-complexity yields $C_{K}=\sum_{n}n|\varphi_{n}|^{2}=\eta\sinh^{2}{(\alpha t)}$. Clearly, at initial times, $C_{K}\simeq\eta\alpha^{2}t^{2}=\mu_{2}t^{2}$, where $\mu_{2}=b_{1}^{2}=\eta\alpha^{2}$. On the other hand, $\mathrm{ln}{|\varphi_{n}|^{2}}\simeq n\,\mathrm{ln}{(\alpha^{2}t^{2})}\simeq n\,\mathrm{ln}{C_{K}}$, leading to $S_{K}\simeq-\sum_{n}n|\varphi_{n}|^{2}\,\mathrm{ln}{C_{K}}=-C_{K}\,\mathrm{ln}{C_{K}}$. This coincides with the relation (26). The second example is $b_{n}=\alpha\sqrt{n}$, which appears in several integrable models [6]. The operator state $|\mathcal{O}(t)\rangle$ can be viewed as a Glauber coherent state in the operator Hilbert space [8].
This gives $\varphi_{n}(t)=e^{-\alpha^{2}t^{2}/2}{\frac{\alpha^{n}t^{n}}{\sqrt{n!}}}\,.$ (28) The K-complexity can be evaluated as $C_{K}=\alpha^{2}t^{2}=\mu_{2}t^{2}$: the initially quadratic growth persists throughout the full time evolution. On the other hand, evaluation of the operator entropy yields $S_{K}=-C_{K}\,\mathrm{ln}{C_{K}}+C_{K}+\sum_{n}|\varphi_{n}|^{2}\,\mathrm{ln}{n!}\,.$ (29) Clearly, at initial times the first term on the r.h.s. is dominant. This again coincides with the relation (26). The third example is $b_{n}=\alpha\sqrt{n(2j-n+1)}$, where $j$ is an integer or half-integer [8]. The operator Hilbert space has a finite dimension, with $0\leq n\leq 2j$. It describes a reversible process, since $W=\infty$ (or $W=0$) for an integer (or half-integer) $j$. One has $\varphi_{n}(t)=\sqrt{{\textstyle{\frac{\scriptstyle\Gamma(2j+1)}{\scriptstyle n!\Gamma(2j-n+1)}}}}\,{\frac{\tan^{n}(\alpha t)}{\cos^{-2j}{(\alpha t)}}}\,.$ (30) Evaluation of the K-complexity and K-entropy yields $C_{K}=2j\sin^{2}{(\alpha t)}$ and $S_{K}=-C_{K}\,\mathrm{ln}{\tan^{2}{(\alpha t)}}-\,\mathrm{ln}{\Big{(}\Gamma(2j+1)\cos^{4j}(\alpha t)\Big{)}}+\sum_{n}|\varphi_{n}|^{2}\,\mathrm{ln}{\Big{(}n!\,\Gamma(2j-n+1)\Big{)}}\,.$ (31) Again, at initial times $C_{K}\simeq\mu_{2}t^{2}$ and $S_{K}\simeq-C_{K}\,\mathrm{ln}{C_{K}}$, consistent with the relation (26). In fact, for reversible processes the operator wave functions can always be solved exactly, and hence there are many examples testing the relation (26); see, for example, Sec. 5. ## 4 Operator growth at long times We now move on to study the functional relation between K-complexity and K-entropy for a variety of dissipative systems, including chaotic ones, integrable theories and many others. We adopt the continuum limit as well as a numerical approach. For semi-infinite chains, the continuum limit is good at capturing the long time behavior of K-complexity and K-entropy using coarse-grained wave functions. We briefly review it, following [7]. Introduce a lattice cutoff $\epsilon$, a coordinate $x=\epsilon n$ and a velocity $v(x)=2\epsilon b_{n}$. The interpolating wave function is defined as $\varphi(x\,,t)=\varphi_{n}(t)$. The continuum version of the discrete equation (13) is given by $\partial_{t}\varphi(x\,,t)={\frac{1}{2\epsilon}}\Big{[}v(x)\varphi(x-\epsilon)-v(x+\epsilon)\varphi(x+\epsilon)\Big{]}\,.$ (32) Expanding in powers of $\epsilon$, one finds a chiral wave equation to leading order, $\partial_{t}\varphi=-v(x)\partial_{x}\varphi-{\frac{1}{2}}\partial_{x}v(x)\varphi+O(\epsilon)\,,$ (33) with a position-dependent velocity $v(x)$ and mass ${\frac{1}{2}}\partial_{x}v(x)$. Introducing a new coordinate $y$ via $v(x)\partial_{x}=\partial_{y}$ and a rescaled wave function $\psi(y\,,t)=\sqrt{v(y)}\,\varphi(y\,,t)\,,$ (34) the equation simplifies to $(\partial_{t}+\partial_{y})\psi(y\,,t)=0+O(\epsilon)\,.$ (35) The general solution is given by $\psi(y\,,t)=\psi_{i}(y-t)\,,$ (36) where $\psi_{i}(y)=\psi(y\,,0)$ stands for the initial amplitude. This implies that at leading order the coarse-grained wave function moves ballistically. It turns out that this leading-order approximation always captures the growth of K-complexity correctly; for the K-entropy, however, higher-order corrections should be included in certain cases. In fact, the method captures the leading long time dependence of both K-complexity and K-entropy only qualitatively. Hence, we will also adopt a numerical approach as a supplement.
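A minimal version of such a numerical supplement: integrate the discrete Schrödinger equation (13) for a given sequence $b_n$ and evaluate $C_K$ and $S_K$ from the wave functions. The truncation size, the integrator and the illustrative choice $b_n=\sqrt{n}$ are assumptions of this sketch.

```python
# Integrate Eq. (13) and evaluate C_K (Eq. 19) and S_K (Eq. 20) numerically.
import numpy as np
from scipy.integrate import solve_ivp

N = 2000                                # chain truncation (must exceed C_K)
k = np.arange(1, N)
b = np.sqrt(k)                          # b[k-1] = b_k; here the integrable case

def rhs(t, phi):
    dphi = np.zeros_like(phi)
    dphi[1:] += b * phi[:-1]            # + b_n * phi_{n-1}
    dphi[:-1] -= b * phi[1:]            # - b_{n+1} * phi_{n+1}
    return dphi

phi0 = np.zeros(N); phi0[0] = 1.0       # phi_n(0) = delta_{n0}
ts = np.linspace(0.0, 20.0, 81)
sol = solve_ivp(rhs, (0.0, ts[-1]), phi0, t_eval=ts, rtol=1e-10, atol=1e-12)

p = sol.y**2                            # |phi_n(t)|^2
logp = np.log(p, where=p > 0, out=np.zeros_like(p))
C_K = (np.arange(N)[:, None] * p).sum(axis=0)
S_K = -(p * logp).sum(axis=0)
# Fitting S_K against ln(C_K) at late times reads off eta-tilde
# (for b_n = sqrt(n) one expects eta-tilde close to 1/2, cf. Sec. 4.2).
```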
Normalization in both the $x$-frame and the $y$-frame reads $1=\sum_{n}|\varphi_{n}(t)|^{2}={\frac{1}{\epsilon}}\int\mathrm{d}x\,\varphi^{2}(x\,,t)={\frac{1}{\epsilon}}\int\mathrm{d}y\,\psi^{2}(y\,,t)\,.$ (37) The K-complexity can be evaluated as $C_{K}(t)=\sum_{n}n|\varphi_{n}(t)|^{2}={\frac{1}{\epsilon}}\int\mathrm{d}x\,{\frac{x}{\epsilon}}\,\varphi^{2}(x\,,t)={\frac{1}{\epsilon}}\int\mathrm{d}y\,{\frac{x(y)}{\epsilon}}\,\psi_{i}^{2}(y-t)={\frac{1}{\epsilon}}\int\mathrm{d}y\,{\frac{x(y+t)}{\epsilon}}\,\psi_{i}^{2}(y)\,,$ (38) and the K-entropy reads $S_{K}(t)=-\sum_{n}|\varphi_{n}(t)|^{2}\,\mathrm{ln}{|\varphi_{n}(t)|^{2}}=-{\frac{1}{\epsilon}}\int\mathrm{d}x\,\varphi^{2}(x\,,t)\,\mathrm{ln}{\varphi^{2}(x\,,t)}=-{\frac{1}{\epsilon}}\int\mathrm{d}y\,\psi_{i}^{2}(y-t)\,\mathrm{ln}{\Big{[}{\frac{\psi_{i}^{2}(y-t)}{v(y)}}\Big{]}}=-{\frac{1}{\epsilon}}\int\mathrm{d}y\,\psi_{i}^{2}(y)\,\mathrm{ln}{\psi_{i}^{2}(y)}+{\frac{1}{\epsilon}}\int\mathrm{d}y\,\psi_{i}^{2}(y)\,\mathrm{ln}{v(y+t)}\,,$ (39) where the time dependence of $S_{K}$ is contained only in the last term. Once the transformation between the two frames is known, we can immediately extract the leading time dependence of $C_{K}$ and $S_{K}$ at long times. ### 4.1 Chaotic systems It was first conjectured in [6] that for chaotic systems the Lanczos coefficients grow asymptotically linearly, $b_{n}\rightarrow\alpha n+\gamma$ as $n\rightarrow+\infty$. (This is valid in the infinite temperature limit; at finite temperatures, it was shown in [11] that the asymptotically linear behavior of $b_{n}$ can also be obtained for free quantum field theories, which however do not probe chaos.) This gives rise to $v(x)=2\alpha x+2\epsilon\gamma$, where the subleading correction $\gamma$ plays a role only at early times in the scrambling regime and hence is negligible in the continuum limit. One finds $y={\frac{1}{2\alpha}}\mathrm{ln}{\big{(}{\frac{x}{\epsilon}}\big{)}}\qquad\mathrm{or}\qquad x=\epsilon\,e^{2\alpha y}\,.$ (40) Evaluation of the K-complexity yields $C_{K}(t)={\frac{1}{\epsilon}}\int\mathrm{d}y\,e^{2\alpha(y+t)}\,\psi_{i}^{2}(y)=C_{K}(0)\,e^{2\alpha t}\,.$ (41) It grows exponentially with the correct exponent, in agreement with the SYK model. The operator entropy is given by $S_{K}(t)={\frac{1}{\epsilon}}\int\mathrm{d}y\,\psi_{i}^{2}(y)\,\mathrm{ln}{v(y+t)}+\cdots={\frac{1}{\epsilon}}\int\mathrm{d}y\,\psi_{i}^{2}(y)\,2\alpha(y+t)+\cdots=2\alpha t+\cdots\,,$ (42) which grows linearly with time. These results were already derived in [7]. Our new contribution here is the realization that at long times (asymptotic growth of $C_{K}$ suggests defining the time scale at which the logarithmic relation emerges by $C_{K}\sim O(1)$; for chaotic systems this coincides with the scrambling time $\sim\mathrm{ln}{S}$, where $S$ stands for the number of degrees of freedom) they imply a logarithmic relation $S_{K}(t)=\mathrm{ln}{C_{K}(t)}+\cdots\,,$ (43) which is very similar to the celebrated Boltzmann relation in statistical mechanics.
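The closed-form wave functions (27) for $b_{n}=\alpha\sqrt{n(n-1+\eta)}$ allow a direct numerical check of (43); a minimal sketch, with truncation and sample times of our own choosing:

```python
# Check S_K / ln(C_K) -> 1 at long times for the SYK-type chain, using Eq. (27).
import numpy as np
from scipy.special import gammaln

alpha, eta, N = 1.0, 1.5, 200_000
n = np.arange(N)

def log_p(t):
    # ln|phi_n(t)|^2 from Eq. (27), computed in log-space for stability
    return (gammaln(n + eta) - gammaln(n + 1) - gammaln(eta)
            + 2 * n * np.log(np.tanh(alpha * t))
            - 2 * eta * np.log(np.cosh(alpha * t)))

for t in (2.0, 4.0, 6.0):
    lp = log_p(t)
    p = np.exp(lp)                       # underflows harmlessly to 0 in the tail
    C_K = (n * p).sum()                  # should equal eta * sinh(alpha t)^2
    S_K = -(p * lp).sum()                # exact since p = exp(lp)
    print(t, S_K / np.log(C_K))          # approaches 1 at long times
```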
Indeed, operator randomization is very efficient for fast scramblers [7], so that a statistical description should emerge in the scrambling regime (and beyond). This suggests that the above relation may signal the irreversibility of operator growth in general, irrespective of the choice of dynamics, and motivates us to study the relation between $S_{K}$ and $C_{K}$ for many other systems. However, it should be emphasized that in the relation (43) the proportionality coefficient is undetermined, since the continuum limit captures the leading time dependence of $C_{K}$ and $S_{K}$ only qualitatively. In general, we may take the relation in the form of Eq. (1). It turns out, however, that the relation (43) is correct for chaotic systems. For example, consider the SYK model with $\eta=1$; the K-complexity and K-entropy can be evaluated exactly as $C_{K}=\sinh^{2}{(\alpha t)}\,,\qquad S_{K}=\cosh^{2}{(\alpha t)}\,\mathrm{ln}{\cosh^{2}{(\alpha t)}}-\sinh^{2}{(\alpha t)}\,\mathrm{ln}{\sinh^{2}{(\alpha t)}}\,.$ (44) In the long time limit, $C_{K}\rightarrow e^{2\alpha t}/4$ and $S_{K}\rightarrow 2\alpha t$, which leads to the relation (43), with the proportionality coefficient exactly equal to unity. For generic $\eta$, we find that this is always true. From a statistical point of view, the Boltzmann-like relation (43) emerges from a uniform distribution. This suggests that the relation may hold for general chaotic systems, in which operator randomization is most efficient. We check this idea for a variety of chaotic models and find that it is indeed true. It also implies that for chaotic systems the operator wave functions at long times are, at leading order, effectively described by a uniform distribution. As an example, consider a class of model spectral densities [14], $\Phi(\omega)={\frac{\pi}{\omega_{0}\Gamma(\nu+1)}}\Big{|}{\frac{\omega}{\omega_{0}}}\Big{|}^{\nu}\mathrm{exp}\Big{(}-\big{|}{\frac{\omega}{\omega_{0}}}\big{|}\Big{)}\,,$ (45) where $\omega_{0}=2\alpha/\pi$. Notice that exponential decay of $\Phi(\omega)$ at large frequency is equivalent to asymptotically linear growth of the Lanczos coefficients [14]; hence the model spectral density describes certain chaotic systems. It turns out that in this case the frequency moments can be written in closed form as $\mu_{2n}=\omega_{0}^{2n}\,\Gamma(1+\nu+2n)/\Gamma(1+\nu)\,.$ (46) The Lanczos coefficients can be computed using recurrence relations [14]. The auto-correlation function turns out to be $C(t)=\varphi_{0}(t)={\frac{1}{2\big{(}1-i\omega_{0}t\big{)}^{1+\nu}}}+{\frac{1}{2\big{(}1+i\omega_{0}t\big{)}^{1+\nu}}}\,.$ (47) Figure 1: The functional relation $S_{K}\sim\mathrm{ln}C_{K}$ is shown for the model spectral density (45), with $\nu=0\,,1\,,2$ from left to right. At long times, the linear relation Eq. (1) holds, with (shown as dashed lines) $S_{K}=0.976348\,\mathrm{ln}{C_{K}}+1.06661$ ($\nu=0$), $S_{K}=0.978129\,\mathrm{ln}{C_{K}}+0.934119$ ($\nu=1$) and $S_{K}=1.01102\,\mathrm{ln}{C_{K}}+0.63022$ ($\nu=2$). Within our numerical accuracy, $\tilde{\eta}\simeq 1$ for all these cases. For a clearer presentation, the curves have been shifted along the horizontal axis. The remaining wave functions $\varphi_{n}(t)$ can be deduced using the discrete equation (13). To simplify matters, we set $\omega_{0}=2/\pi$ so that $\alpha=1$. The functional relation $S_{K}(C_{K})$ for several values of $\nu$ is shown numerically in Fig. 1.
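As a concrete aid for reproducing such numerics, one standard (though numerically delicate) route from the closed-form moments (46) to the Lanczos coefficients is a Cholesky factorization of the Hankel matrix of moments. The sketch below is our own illustration, not the exact procedure of [14]; for more than a handful of $b_n$, extended precision is needed.

```python
# Moments (46) -> Lanczos coefficients via Hankel-Cholesky:
# if H_{jk} = mu_{j+k} and H = R^T R (R upper triangular),
# then b_n = R_{nn} / R_{n-1,n-1} for a symmetric spectral density.
import numpy as np
from scipy.special import gammaln

nu, omega0 = 1.0, 2 / np.pi
K = 6                                        # number of b_n to extract
m = np.arange(2 * K + 1)
mu = np.where(m % 2 == 0,                    # odd moments vanish by symmetry
              np.exp(m * np.log(omega0) + gammaln(1 + nu + m) - gammaln(1 + nu)),
              0.0)
H = mu[np.add.outer(np.arange(K + 1), np.arange(K + 1))]   # Hankel matrix
R = np.linalg.cholesky(H).T                  # H = R^T R
b = np.diag(R)[1:] / np.diag(R)[:-1]
print(b)                                     # grows roughly linearly, ~ alpha*n
```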
It is easily seen that at long times the logarithmic relation (1) indeed holds, with the proportionality coefficient $\tilde{\eta}$ close to unity (because of the exponential growth of $C_{K}$, the computational cost increases exponentially for chaotic models, which limits our numerical accuracy): $\nu=0:\ \tilde{\eta}=0.976348\,;\qquad\nu=1:\ \tilde{\eta}=0.978129\,;\qquad\nu=2:\ \tilde{\eta}=1.011020\,.$ (48) This supports our intuitive idea that the Boltzmann relation (43) holds for general chaotic systems. To end this subsection, let us comment on the case beyond the scrambling regime. It was argued [7] that in this case the Lanczos coefficients approach a constant, $b_{n}\rightarrow b=\alpha S$, for systems with $S$ extensive degrees of freedom. Using the continuum limit, it was shown that K-complexity grows linearly, $C_{K}(t)_{\mathrm{post-scrambling}}\sim 2b(t-t_{*})+\cdots\,,$ (49) until arriving at its maximum value of order $\sim e^{O(S)}$, where $t_{*}$ stands for the scrambling time. On the other hand, a careful examination of the long time tails of the wave functions leads to $S_{K}(t)\sim\mathrm{ln}{\big{(}2bt\big{)}}+\cdots\,.$ (50) The operator entropy continues to grow until reaching its maximum value $S_{K}\sim O(S)$. This again leads to the logarithmic relation (1), though the constant $\tilde{\eta}$ is undetermined. However, we argue that in this case $\tilde{\eta}$ should still be equal to unity, since in the post-scrambling regime the operator wave functions should still obey a uniform distribution to leading order. The above result supports our intuitive idea that the logarithmic relation (1) simply signals irreversibility, irrespective of the dynamical details of the process. We would like to extend the analysis to general systems with emergent dissipative behavior, where the proportionality constant $\tilde{\eta}$ is undetermined. It is worth emphasizing here that the K-entropy reaches its maximum value for a uniform distribution, by the maximal entropy principle. Hence, we propose that the constant $\tilde{\eta}$ is bounded as $0<\tilde{\eta}\leq 1\,,$ (51) where the upper bound is saturated for fast scramblers, including $\mathrm{1d}$ chaotic systems. We find that this is true as far as we can check. #### 4.1.1 With logarithmic correction It was argued in [6] that for $\mathrm{1d}$ chaotic systems the asymptotic behavior of the Lanczos coefficients acquires a logarithmic correction: $b_{n}=\alpha n/\mathrm{ln}{n}+\cdots$. However, for our purposes we may take this as a nearly chaotic model in diverse dimensions: the Lanczos coefficients grow faster than in integrable theories (which have a power law $b_{n}\sim n^{\delta}\,,0<\delta<1$) but still slower than for fast scramblers. A more general case may be taken as $b_{n}=\alpha n/(\mathrm{ln}{n})^{\sigma}$, with $\sigma>0$, corresponding to a velocity $v(x)=2\alpha x/(\mathrm{ln}{{\textstyle{\frac{\scriptstyle x}{\scriptstyle\epsilon}}}})^{\sigma}$. In this case, the two frames are connected by Figure 2: The functional relation $S_{K}(C_{K})$ for nearly chaotic models: $b_{n}={\textstyle{\frac{\scriptstyle\alpha n}{\scriptstyle\mathrm{ln}{(n+1)}}}}$ for the left panel and $b_{n}={\textstyle{\frac{\scriptstyle\alpha n}{\scriptstyle\mathrm{ln}^{2}{(n+1)}}}}$ for the right panel. We set $\alpha=1$ in the numerical calculations.
$x=\epsilon\,\mathrm{exp}\Big{[}\big{(}2\tilde{\alpha}y\big{)}^{{\frac{1}{1+\sigma}}}\Big{]}\,,$ (52) where $\tilde{\alpha}=\alpha(1+\sigma)$. It is straightforward to deduce $C_{K}(t)={\frac{1}{\epsilon}}\int\mathrm{d}y\,\psi_{i}^{2}(y)\,\mathrm{exp}\Big{[}\big{(}2\tilde{\alpha}(y+t)\big{)}^{{\frac{1}{1+\sigma}}}\Big{]}\sim\mathrm{exp}\Big{[}\big{(}2\tilde{\alpha}t\big{)}^{{\frac{1}{1+\sigma}}}\Big{]}\,,$ (53) and $\displaystyle S_{K}(t)$ $\displaystyle=$ $\displaystyle{\frac{1}{\epsilon}}\int\mathrm{d}y\,\psi_{i}^{2}(y)\,\Big{[}\big{(}2\tilde{\alpha}(y+t)\big{)}^{{\frac{1}{1+\sigma}}}-\mathrm{ln}{\big{(}2\tilde{\alpha}(y+t)\big{)}^{{\frac{1}{1+\sigma}}}}\Big{]}+\cdots$ (54) $\displaystyle\sim$ $\displaystyle\big{(}2\tilde{\alpha}t\big{)}^{{\frac{1}{1+\sigma}}}-\mathrm{ln}{\big{(}2\tilde{\alpha}t\big{)}^{{\frac{1}{1+\sigma}}}}+\cdots\,,$ where $\sim$ denotes the leading behavior in the long time limit and we have ignored the constant coefficient in each term. Combining these results again leads to the logarithmic relation (1), except that the subleading correction is of order $O(\mathrm{ln}\mathrm{ln}{C_{K}})$. However, we do not expect the method to extract the subleading term of $S_{K}$ (or $C_{K}$) correctly, since even the leading term is only determined qualitatively. Nevertheless, the analysis is consistent with our numerical calculations for several examples, see Fig. 2. Numerically, it is also hard to pin down the form of the subleading corrections: for the data shown in the figure, including the $\mathrm{ln}\mathrm{ln}{C_{K}}$ term fits only slightly better than omitting it, so this should not be considered conclusive. However, an exception occurs for $\mathrm{1d}$ chaotic systems. We find that in this case $\tilde{\eta}$ is equal to unity if and only if the $\mathrm{ln}\mathrm{ln}{C_{K}}$ term is included!

### 4.2 Integrable theories

Theories whose Lanczos coefficients grow asymptotically as $b_{n}\rightarrow\alpha n^{\delta}$, with $0<\delta<1$, were referred to as integrable [6]. In this case, $v(x)=2\alpha\epsilon^{1-\delta}x^{\delta}$ and the two frames are connected by $y={\frac{1}{2\tilde{\alpha}}}\Big{(}{\frac{x}{\epsilon}}\Big{)}^{1-\delta}\qquad\mathrm{or}\qquad x=\epsilon\,\big{(}2\tilde{\alpha}y\big{)}^{{\frac{1}{1-\delta}}}\,,$ (55) where $\tilde{\alpha}=\alpha(1-\delta)$. Evaluation of the K-complexity yields $C_{K}(t)={\frac{1}{\epsilon}}\int\mathrm{d}y\,\big{[}2\tilde{\alpha}(y+t)\big{]}^{{\frac{1}{1-\delta}}}\,\psi_{i}^{2}(y)\sim\big{(}2\tilde{\alpha}t\big{)}^{{\frac{1}{1-\delta}}}+\cdots\,.$ (56) It grows as a power law at sufficiently long times. On the other hand, the operator entropy is given by $\displaystyle S_{K}(t)$ $\displaystyle=$ $\displaystyle{\frac{1}{\epsilon}}\int\mathrm{d}y\,\psi_{i}^{2}(y)\,\mathrm{ln}{\big{[}2\tilde{\alpha}(y+t)\big{]}^{\frac{\delta}{1-\delta}}}+\cdots$ (57) $\displaystyle\sim$ $\displaystyle{\frac{\delta}{1-\delta}}\,\mathrm{ln}{\big{(}2\tilde{\alpha}t\big{)}}+\cdots$ $\displaystyle\sim$ $\displaystyle\delta\,\mathrm{ln}{C_{K}(t)}+\cdots\,.$ Again this leads to the logarithmic relation (1).
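As a quick numerical cross-check of $\tilde{\eta}\simeq\delta$ for pure power-law growth, one can evolve a long truncated chain with $b_{n}=\alpha n^{\delta}$ and fit $S_{K}$ against $\mathrm{ln}\,C_{K}$ at late times. The sketch below is again our illustration, using the same conventions as above; the chain must be long enough that the wave packet never reaches the truncation boundary.

```python
# Sketch (ours): fit eta in S_K ~ eta * ln C_K for b_n = alpha * n**delta.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

ALPHA, DELTA, N = 1.0, 0.5, 4000
bvals = ALPHA * np.arange(1, N, dtype=float) ** DELTA        # b_1 .. b_{N-1}
L = sp.diags([bvals, -bvals], [-1, 1], format="csc")         # dphi/dt = L phi

phi0 = np.zeros(N); phi0[0] = 1.0
ts = np.linspace(10.0, 40.0, 16)                             # late times
phis = expm_multiply(L, phi0, start=ts[0], stop=ts[-1],
                     num=len(ts), endpoint=True)

C_K, S_K = [], []
for p in phis ** 2:
    C_K.append(np.dot(np.arange(N), p))
    q = np.clip(p, 1e-300, None)
    S_K.append(-np.dot(p, np.log(q)))
eta, const = np.polyfit(np.log(C_K), S_K, 1)
print(f"eta = {eta:.3f} (delta = {DELTA}); C_K(t_max) = {C_K[-1]:.1f} << N = {N}")
```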
However, unlike the previous cases, the proportionality constant $\tilde{\eta}=\delta$ is exact for $\delta\geq 1/2$ (given the qualitative nature of the method, this should simply be a coincidence). We check this against many numerical examples and find that it always holds. For example, consider integrable models with $b_{n}=\alpha n^{\delta}$. Numerical evaluation of the operator entropy (29) yields, at long times, $\displaystyle\delta=1/2\,,\qquad S_{K}(t)=0.500917\,\mathrm{ln}{C_{K}(t)}+1.41373\,,$ $\displaystyle\delta=2/3\,,\qquad S_{K}(t)=0.669477\,\mathrm{ln}{C_{K}(t)}+1.43501\,,$ $\displaystyle\delta=3/4\,,\qquad S_{K}(t)=0.752683\,\mathrm{ln}{C_{K}(t)}+1.44995\,.$ (58) In all these cases, the proportionality constant $\tilde{\eta}$ is equal to $\delta$, within our numerical accuracy. We do not yet have a physical interpretation of this result.

#### 4.2.1 With logarithmic correction

To test whether the above result can be extended to slightly different cases, let us consider logarithmic corrections to the integrable models. For instance, suppose the Lanczos coefficients behave asymptotically as $b_{n}=\alpha n^{\delta}/\mathrm{ln}{n}$, giving rise to $v(x)=2\alpha\epsilon\big{(}{\textstyle{\frac{\scriptstyle x}{\scriptstyle\epsilon}}}\big{)}^{\delta}/\mathrm{ln}{\big{(}{\textstyle{\frac{\scriptstyle x}{\scriptstyle\epsilon}}}\big{)}}$. The two frames are connected by $y={\frac{1}{2\bar{\alpha}}}\big{(}X\,\mathrm{ln}{X}-X\big{)}\,,\quad X=\big{(}{\textstyle{\frac{\scriptstyle x}{\scriptstyle\epsilon}}}\big{)}^{1-\delta}\,,$ (59) where $\bar{\alpha}=\alpha(1-\delta)^{2}$. Notice that at very large $y$, $X\sim(2\bar{\alpha}y)/\mathrm{ln}{(2\bar{\alpha}y)}$ to leading order. Hence, at long times the K-complexity and K-entropy behave as $\displaystyle C_{K}(t)\sim x(t)/\epsilon\sim(2\bar{\alpha}t)^{\gamma}/\mathrm{ln}^{\gamma}{(2\bar{\alpha}t)}\,,$ $\displaystyle S_{K}(t)\sim\mathrm{ln}{v(t)}\sim\delta\,\mathrm{ln}\big{(}{\textstyle{\frac{\scriptstyle x(t)}{\scriptstyle\epsilon}}}\big{)}\sim\delta\,\mathrm{ln}{C_{K}(t)}\,,$ (60) where $\gamma=1/(1-\delta)$. It turns out that the K-complexity grows a bit more slowly than in the integrable models. However, while the logarithmic relation (1) still holds, the value $\tilde{\eta}=\delta$ is falsified by our numerical results. One may also consider positive corrections to the Lanczos coefficients, $b_{n}=\alpha n^{\delta}\,\mathrm{ln}{n}$, corresponding to $v(x)=2\alpha\epsilon\big{(}{\textstyle{\frac{\scriptstyle x}{\scriptstyle\epsilon}}}\big{)}^{\delta}\,\mathrm{ln}{\big{(}{\textstyle{\frac{\scriptstyle x}{\scriptstyle\epsilon}}}\big{)}}$. The two frames are connected as $y={\textstyle{\frac{\scriptstyle 1}{\scriptstyle 2\alpha}}}\mathrm{li}(X)\,,$ (61) where $\mathrm{li}(x)$ stands for the logarithmic integral function. At large $y$, one has $X\sim 2\alpha y\,\mathrm{ln}{(2\alpha y)}$, implying that the K-complexity grows a bit faster than in the integrable models: $C_{K}\sim(2\alpha t)^{\gamma}\,\mathrm{ln}^{\gamma}{(2\alpha t)}\,.$ (62) Again the K-entropy turns out to be $S_{K}(t)\sim\mathrm{ln}{v(t)}\sim\delta\,\mathrm{ln}{C_{K}(t)}$, but the coefficient $\tilde{\eta}=\delta$ is falsified by our numerical results. It seems that the result $\tilde{\eta}=\delta$ is only valid for integrable theories with $\delta\geq 1/2$.
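The inversion of the frame map (59) can also be checked exactly: the equation $X\,\mathrm{ln}X-X=c$ is solved by $X=c/W(c/e)$ in terms of the Lambert $W$ function, a standard identity. The snippet below (ours) compares the exact inversion with the leading large-$y$ behavior $X\sim(2\bar{\alpha}y)/\mathrm{ln}(2\bar{\alpha}y)$ used in (60); the ratio approaches unity only logarithmically, which is one reason the subleading terms are hard to pin down numerically.

```python
# Sketch (ours): exact inversion of X ln X - X = 2*abar*y via Lambert W,
# versus the leading asymptotics X ~ (2*abar*y)/ln(2*abar*y).
import numpy as np
from scipy.special import lambertw

abar = 1.0                                  # \bar{alpha} = alpha (1 - delta)^2
y = np.logspace(2, 10, 5)
c = 2 * abar * y
X_exact = np.real(c / lambertw(c / np.e))   # solves X ln X - X = c
assert np.allclose(X_exact * np.log(X_exact) - X_exact, c, rtol=1e-10)

X_lead = c / np.log(c)
for yi, xe, xl in zip(y, X_exact, X_lead):
    print(f"y = {yi:.1e}:  leading/exact = {xl / xe:.4f}")   # -> 1, very slowly
```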
### 4.3 With bounded support and beyond

Consider the $\delta\rightarrow 0$ limit of the integrable theories. The Lanczos coefficients approach a constant, $b_{n}\rightarrow b$, asymptotically, corresponding to a constant velocity $v(x)=v=2\epsilon b$. This case is very similar to the post-scrambling regime of chaotic systems [7], except for the initial amplitude. A careful analysis using the continuum limit (with higher-order corrections) implies that the K-entropy grows logarithmically at long times, $S_{K}(t)\sim\mathrm{ln}{(2bt)}\,,$ (63) whereas the K-complexity increases linearly, $C_{K}(t)\sim 2bt$. This again leads to the logarithmic relation (1). To test the result, consider two simple models (both arise from a chain of classical harmonic oscillators with $2N$ atoms and periodic boundary conditions [17]). The first is $b_{n}=\omega_{0}/2$. The wave functions are solved in terms of Bessel functions, $\varphi_{n}(t)=J_{n}(\omega_{0}t)+J_{n+2}(\omega_{0}t)\,.$ (64) Numerical evaluation of the K-complexity and K-entropy yields, at long times, $S_{K}(t)=0.729302\,\mathrm{ln}{C_{K}(t)}+0.353124\,.$ (65) The second example is $b_{1}=\omega_{0}/\sqrt{2}\,,b_{n}=\omega_{0}/2$ for $n>1$. The wave functions are given by $\varphi_{n}(t)=c_{n}J_{n}(\omega_{0}t)\,,$ (66) where $c_{-1}=0\,,c_{0}=1$ and $c_{n}=\sqrt{2}$ for $n\geq 1$. We find, at long times, $S_{K}(t)=0.870024\,\mathrm{ln}{C_{K}(t)}+0.603974\,.$ (67) These results support the continuum limit analysis.

To proceed, consider a critical case where the Lanczos coefficients grow asymptotically faster than in the bounded case but still more slowly than a power law, for example $b_{n}=\alpha\,\mathrm{ln}{n}$. This corresponds to a velocity $v(x)=2\epsilon\alpha\,\mathrm{ln}{({\textstyle{\frac{\scriptstyle x}{\scriptstyle\epsilon}}})}$, and the two frames are transformed as $y(x)={\textstyle{\frac{\scriptstyle 1}{\scriptstyle 2\alpha}}}\mathrm{li}\big{(}{\textstyle{\frac{\scriptstyle x}{\scriptstyle\epsilon}}}\big{)}\,.$ (68) Since at large $y$, $x/\epsilon\sim 2\alpha y\,\mathrm{ln}{(2\alpha y)}$, one has $C_{K}(t)\sim x(t)/\epsilon\sim 2\alpha t\,\mathrm{ln}{(2\alpha t)}\,,$ (69) i.e., linear growth enhanced by a logarithm. The K-complexity grows faster than in the bounded case (which is linear) but still more slowly than in the integrable theories. This is in accordance with our expectations.

Figure 3: Left panel: $b_{n}=\alpha\,\mathrm{ln}{(n+1)}$. Right panel: $b_{n}=\alpha\,\mathrm{ln}{n}+\gamma$. We set $\alpha=\gamma=1$. In both cases, the logarithmic relation (1) already appears at the time scale where $C_{K}(t_{c})\sim O(1)$ (see the orange lines) but the coefficient $\tilde{\eta}$ changes at later times (see the red lines). Around $t\sim t_{c}$, we have $S_{K}=0.613581\,\mathrm{ln}{C_{K}}+1.35088$ (left) and $S_{K}=0.619693\,\mathrm{ln}{C_{K}}+1.36223$ (right), whereas at long times $S_{K}=0.166437\,\mathrm{ln}{C_{K}}+2.59585$ (left) and $S_{K}=0.170329\,\mathrm{ln}{C_{K}}+2.40017$ (right). Within our numerical accuracy, both cases have the same coefficient $\tilde{\eta}$ in the same time regimes.

However, at this order, the K-entropy turns out to be $S_{K}\sim\mathrm{ln}v(t)\sim\mathrm{ln}\mathrm{ln}{C_{K}}$, apparently violating the logarithmic relation (1). To resolve the issue, one may include higher-order corrections in the wave equation, as in the bounded case. Unfortunately, we do not find a definite answer using this approach. Nevertheless, our numerical results still suggest that the logarithmic relation (1) holds in this case, see Fig. 3.
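Both bounded-support models are easy to reproduce because the wave functions are closed-form Bessel functions. The sketch below (ours; it relies only on the stated formula (64)) verifies the normalization $\sum_{n}\varphi_{n}^{2}=1$ and fits the late-time relation, to be compared with (65).

```python
# Sketch (ours): the bounded model b_n = omega0/2, with wave functions
# phi_n(t) = J_n(omega0 t) + J_{n+2}(omega0 t) from eq. (64).
import numpy as np
from scipy.special import jv

omega0, N = 1.0, 400
t = np.linspace(20.0, 200.0, 40)
n = np.arange(N)[:, None]
p = (jv(n, omega0 * t) + jv(n + 2, omega0 * t)) ** 2   # shape (N, len(t))
assert np.allclose(p.sum(axis=0), 1.0, atol=1e-8)      # probabilities sum to 1

C_K = (np.arange(N)[:, None] * p).sum(axis=0)
q = np.clip(p, 1e-300, None)
S_K = -(p * np.log(q)).sum(axis=0)
eta, const = np.polyfit(np.log(C_K), S_K, 1)
print(f"S_K ~ {eta:.3f} ln C_K + {const:.3f}   (compare eq. 65)")
```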
To end this section, let us discuss the relation $S_{K}(C_{K})$ over the full time evolution. We have seen that for systems described by semi-infinite chains, $S_{K}\sim-C_{K}\,\mathrm{ln}{C_{K}}$ at initial times and $S_{K}\sim\mathrm{ln}{C_{K}}$ at long times, where dissipative behavior emerges. These two regimes are universal, irrespective of the choice of dynamics. The former is determined by the lowest moment $\mu_{2}$, whereas the latter probably signals irreversibility of the process. There is a smooth crossover between them, which however depends on the dynamical details. Since the functional relation $S_{K}(C_{K})$ is well defined over the full time evolution, it makes sense to study it in operator dynamics and connect it to the emergence of ergodic behavior of the theories.

## 5 Finite chains

In this section, we study the K-complexity and K-entropy for finite chains and compare the results with the previous cases. In this case, the recursion method naturally comes to a stop at some order $K$, so that $b_{K+1}=0$. As a consequence, the continued-fraction representation of the relaxation function terminates at the $K$-th level: $\phi_{0}(z)=\cfrac{1}{z+\cfrac{b_{1}^{2}}{z+\cfrac{b_{2}^{2}}{z+\cdots+\cfrac{b_{K}^{2}}{z}}}}\,.$ (70) This is a rational function $p_{K}(z)/q_{K+1}(z)$. There are two cases, depending on whether $K$ is odd or even, corresponding to $W=0$ or $W=\infty$. For odd $K$, $\displaystyle p_{K}(z)=z^{K}+z^{K-2}+\cdots+z\,,$ $\displaystyle q_{K+1}(z)=z^{K+1}+z^{K-1}+\cdots+c\,,$ (71) where we have omitted the constant coefficient of each term in the polynomials and $c\neq 0$. This leads to $W=p_{K}(0)/q_{K+1}(0)=0$. On the other hand, for even $K$, the results (71) still hold but with the last term interchanged between $p_{K}(z)$ and $q_{K+1}(z)$, giving rise to $W=\infty$. Hence, in both cases, the operator dynamics is nonergodic. We will show that, as a consequence, the functional relation $S_{K}\sim\log{C_{K}}$ at long times no longer holds.

To proceed, consider first odd $K$. The relaxation function contains $K+1$ poles, which are all located on the imaginary axis. The spectral function, inferred via $\Phi(\omega)=\lim_{\varepsilon\rightarrow 0}2\mathrm{Re}\big{[}\phi_{0}(\varepsilon-i\omega)\big{]}\,,$ (72) consists of $L=(K+1)/2$ pairs of $\delta$-functions, $\Phi(\omega)=\pi\sum_{\ell=1}^{L}a_{\ell}\big{[}\delta(\omega-\omega_{\ell})+\delta(\omega+\omega_{\ell})\big{]}\,,$ (73) where all the frequencies $\omega_{\ell}$ are generally nonzero. The auto-correlation function turns out to be $\varphi_{0}(t)=\sum_{\ell=1}^{L}a_{\ell}\cos{(\omega_{\ell}t)}\,.$ (74) However, if one of the $L$ frequencies $\omega_{\ell}$ happens to be zero, the total number of Lanczos coefficients is reduced by one, so that $K=2L-2$. This is exactly the even $K$ case. Nevertheless, the result (74) is valid for both cases. Given the above auto-correlation function, it turns out that all the remaining (nonzero) wave functions $\varphi_{n}$ are sums of sine or cosine functions. It is immediately seen that in this case neither the K-complexity nor the K-entropy grows monotonically in the time evolution. Of course, the functional relation $S_{K}\sim\mathrm{ln}{C_{K}}$ will then not hold at long times.
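In practice, the frequencies $\omega_{\ell}$ and amplitudes $a_{\ell}$ in (73)-(74) can be read off by diagonalizing the chain generator directly: with the same flow convention as before, $\dot{\varphi}=\mathsf{L}\varphi$ with the antisymmetric matrix $\mathsf{L}_{n,n-1}=b_{n}=-\mathsf{L}_{n-1,n}$, the spectrum consists of pairs $\pm i\omega_{\ell}$ (plus possibly a zero mode), and the weight of the seed vector on each pair gives $a_{\ell}$. The sketch below is our illustration of this bookkeeping:

```python
# Sketch (ours): extract (w_l, a_l) of eqs. (73)-(74) from a finite chain.
import numpy as np

def chain_spectrum(b, tol=1e-9):
    K = len(b)
    L = np.zeros((K + 1, K + 1))
    for n, bn in enumerate(b, start=1):
        L[n, n - 1], L[n - 1, n] = bn, -bn          # dphi/dt = L phi
    lam, V = np.linalg.eig(L)                        # L normal: +/- i w_l pairs
    w, a = np.imag(lam), np.abs(V[0]) ** 2           # weights of the seed e_0
    pos, zero = w > tol, np.abs(w) <= tol
    return w[pos], 2 * a[pos], a[zero].sum()         # (w_l, a_l, constant a_0)

ws, amps, a0 = chain_spectrum(np.array([1.0, 0.5, 0.8]))   # some K = 3 chain
print("w_l :", np.round(np.sort(ws), 6))
print("a_l :", np.round(amps, 6), "  a_0 =", round(a0, 6),
      "  sum =", round(amps.sum() + a0, 6))          # equals phi_0(0) = 1
```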
Figure 4: Trajectories on the $C_{K}$-$S_{K}$ plane for finite chains. Left panel: $K=1$. Middle panel: $K=3$ and $\omega_{2}=2\omega_{1}$. In both cases, the particle moves periodically between $O$ and $F$: in the first half period of $C_{K}$, $t\in[0\,,T_{C}/2]$, it moves to the right, from $O$ to $F$, whereas in the next half period, $t\in[T_{C}/2\,,T_{C}]$, it moves in the opposite direction, similar to a harmonic oscillator. Right panel: $K=3$ and $\omega_{2}=\sqrt{3}\omega_{1}$. The motion of the particle is not periodic and the trajectory becomes more and more complex as time increases. Here we have chosen $t\in[0\,,20]$.

Let us consider several examples. The first is $K=1$: $b_{1}=\omega$ and $b_{n}=0$ otherwise. The auto-correlation function is simply a cosine, $\varphi_{0}(t)=\cos{(\omega t)}$, and $\varphi_{1}(t)=\sin{(\omega t)}$. One easily finds $\displaystyle C_{K}(t)=\sin^{2}(\omega t)\,,$ $\displaystyle S_{K}(t)=-\cos^{2}(\omega t)\,\mathrm{ln}{\cos^{2}(\omega t)}-\sin^{2}(\omega t)\,\mathrm{ln}{\sin^{2}(\omega t)}\,.$ (75) It is clear that both are periodic in time. The minimal period is $T=\pi/\omega$ for $C_{K}$ and $T=\pi/(2\omega)$ for $S_{K}$. The functional relation $S_{K}(C_{K})$ is shown in the left panel of Fig. 4. We may think of the time evolution as a particle moving on the $C_{K}$-$S_{K}$ plane. In the first half period of $C_{K}$, $t\in[0\,,T_{C}/2]$, the particle moves to the right, starting at $O$ and stopping at $F$, giving rise to a finite path. In the next half period, $t\in[T_{C}/2\,,T_{C}]$, the particle moves along the same trajectory but exactly in the opposite direction: it goes from $F$ to $O$. This is very similar to a harmonic oscillator. The same behavior is repeated as time increases.

The second example is $K=3$: $b_{1}^{2}={\frac{\omega_{1}^{2}+\omega_{2}^{2}}{2}}\,,\quad b_{2}^{2}={\frac{(\omega_{1}^{2}-\omega_{2}^{2})^{2}}{2(\omega_{1}^{2}+\omega_{2}^{2})}}\,,\quad b_{3}^{2}={\frac{2\omega_{1}^{2}\omega_{2}^{2}}{\omega_{1}^{2}+\omega_{2}^{2}}}\,.$ (76) The auto-correlation function is given by $\varphi_{0}(t)=\big{(}\cos(\omega_{1}t)+\cos(\omega_{2}t)\big{)}/2$. If the ratio $\omega_{2}/\omega_{1}$ is rational, $\varphi_{0}(t)$ is periodic, as are the K-complexity and K-entropy. The situation is quite similar to the previous case, except for some slight differences, as shown in the middle panel of Fig. 4. However, when $\omega_{2}/\omega_{1}$ is irrational, the auto-correlation function is not periodic. The particle does not move on a fixed path, and the trajectory on the plane becomes more and more complex in the time evolution; see the right panel of Fig. 4. These are general features for odd $K$.

For even $K$, as previously emphasized, the results can be obtained from the $(K+1)$ case by setting one of the $L$ frequencies $\omega_{\ell}$ equal to zero. For example, for the $K=2$ case, the auto-correlation function is given by $\varphi_{0}(t)=a_{0}+a_{1}\cos{(\omega t)}$, where $a_{0}+a_{1}=1$. Of course, the time evolution is periodic, except that $W=\infty$ because of the constant term $a_{0}\neq 0$. For the $K=4$ case, $\varphi_{0}(t)=a_{0}+a_{1}\cos{(\omega_{1}t)}+a_{2}\cos{(\omega_{2}t)}$, where $a_{0}+a_{1}+a_{2}=1$. Whether the time evolution is periodic or not depends on whether the ratio $\omega_{2}/\omega_{1}$ is rational or irrational. In any case, the trajectory on the $C_{K}$-$S_{K}$ plane qualitatively shows the same features as in the odd $K$ case. Therefore, we safely conclude that for reversible processes, characterized by $W=0$ or $W=\infty$, the functional relation $S_{K}\sim\log{C_{K}}$ at long times no longer holds.
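For completeness, one can verify numerically that the coefficients (76) indeed reproduce the two-frequency auto-correlation function, and trace the resulting trajectory on the $C_{K}$-$S_{K}$ plane; with $\omega_{2}/\omega_{1}=\sqrt{3}$ the trajectory never closes, as in the right panel of Fig. 4. The sketch is ours:

```python
# Sketch (ours): check eq. (76) against phi_0 = (cos w1 t + cos w2 t)/2 and
# trace the (C_K, S_K) trajectory for an irrational frequency ratio.
import numpy as np
from scipy.linalg import expm

w1, w2 = 1.0, np.sqrt(3.0)
b = np.sqrt([(w1**2 + w2**2) / 2,
             (w1**2 - w2**2) ** 2 / (2 * (w1**2 + w2**2)),
             2 * w1**2 * w2**2 / (w1**2 + w2**2)])
L = np.zeros((4, 4))
for n, bn in enumerate(b, start=1):
    L[n, n - 1], L[n - 1, n] = bn, -bn

for t in np.linspace(0.0, 20.0, 9):
    phi = expm(L * t)[:, 0]                          # phi(0) = e_0
    assert abs(phi[0] - (np.cos(w1 * t) + np.cos(w2 * t)) / 2) < 1e-10
    p = phi ** 2
    q = np.clip(p, 1e-300, None)
    print(f"t={t:5.2f}  C_K={np.dot(np.arange(4), p):.4f}  "
          f"S_K={-np.dot(p, np.log(q)):.4f}")        # bounded, non-monotonic
```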
## 6 Conclusion and discussion

In this paper, we have studied two quantum information quantities in operator growth, the K-complexity $C_{K}(t)$ and the K-entropy $S_{K}(t)$, searching for features that may diagnose irreversibility of the process. We have studied a variety of systems in which dissipative behavior emerges, including chaotic and integrable theories. Our main result is that, for irreversible processes, the two quantities enjoy the logarithmic relation (1) to leading order at sufficiently long times, with the proportionality constant bounded above as $\tilde{\eta}\leq 1$. It should be emphasized that in practical calculations we choose a certain inner product on the operator space and take the thermodynamic limit. However, the logarithmic relation only depends on the asymptotic behavior of the Lanczos coefficients $b_{n}$. Hence, choosing a different inner product or considering finite temperatures will not change the relation, provided $b_{n}$ takes the same asymptotic form as those studied in this paper. Inspired by the similarity of the relation to the Boltzmann formula in statistical mechanics, we propose that the relation is a sufficient condition for irreversibility of operator growth, irrespective of the choice of dynamics. This is true as far as we can check, although we cannot prove it analytically. The physical consequences of the relation deserve further investigation.

model | chaotic | $1d$-chaotic | integrable | unknown | unknown | bounded
---|---|---|---|---|---|---
$b_{n}$ | $\alpha n$ | $\alpha n/\mathrm{ln}{n}$ | $\alpha n^{\delta}$ | $\alpha n^{\delta}(\mathrm{ln}{n})^{\pm}$ | $\alpha\,\mathrm{ln}{n}$ | $b$
$C_{K}$ | $e^{2\alpha t}$ | $e^{\sqrt{4\alpha t}}$ | $(2\alpha t)^{\gamma}$ | $(2\alpha t)^{\gamma}\,\mathrm{ln}^{\pm\gamma}{(2\alpha t)}$ | $2\alpha t\,\mathrm{ln}{(2\alpha t)}$ | $2bt$
$S_{K}$ | $2\alpha t$ | $\sqrt{4\alpha t}$ | $\mathrm{ln}{(2\alpha t)}$ | $\mathrm{ln}{(2\alpha t)}$ | $\mathrm{ln}{(2\alpha t)}$ | $\mathrm{ln}{(2bt)}$

Table 1: Asymptotic growth of the Lanczos coefficients $b_{n}$ and the leading long-time dependence of the K-complexity and K-entropy. The constants $\alpha$ and $b$ have dimension of energy, and $\gamma={\frac{1}{1-\delta}}$, $0<\delta<1$. The last column corresponds to the case with bounded support, of which a particularly interesting example is chaotic systems in the post-scrambling regime. The fifth and sixth columns can be viewed as the integrable models and the bounded case with logarithmic corrections, respectively.

In addition to the logarithmic relation, the behavior of the K-complexity itself is also closely related to irreversibility. In Table 1, we summarize the asymptotic growth of the Lanczos coefficients $b_{n}$ and the leading long-time dependence of $C_{K}$ (and $S_{K}$). It is intriguing to observe that the former is positively related to the latter: when $b_{n}$ grows faster asymptotically, $C_{K}$ grows faster in the time evolution as well (in a one-to-one mapping). If this is correct, from the long-time behavior of $C_{K}$ one can read off the asymptotic behavior of the Lanczos coefficients. The underlying relation between $C_{K}$ and the emergence of dissipative behavior certainly deserves further investigation.

## Acknowledgments

Z.Y. Fan was supported in part by the National Natural Science Foundation of China with Grants No. 11805041 and No. 11873025.

## References

* [1] C. von Keyserlingk, T. Rakovszky, F. Pollmann and S. Sondhi, Operator hydrodynamics, OTOCs, and entanglement growth in systems without conservation laws, Phys. Rev. X 8, no.2, 021013 (2018) doi:10.1103/PhysRevX.8.021013 [arXiv:1705.08910 [cond-mat.str-el]].
* [2] A. Nahum, S. Vijay and J. Haah, Operator Spreading in Random Unitary Circuits, Phys. Rev. X 8, no.2, 021014 (2018) doi:10.1103/PhysRevX.8.021014 [arXiv:1705.08975 [cond-mat.str-el]]. * [3] V. Khemani, A. Vishwanath and D. A. Huse, Operator spreading and the emergence of dissipation in unitary dynamics with conservation laws, Phys. Rev. X 8, no.3, 031057 (2018) doi:10.1103/PhysRevX.8.031057 [arXiv:1710.09835 [cond-mat.stat-mech]]. * [4] T. Rakovszky, F. Pollmann and C. W. von Keyserlingk, Diffusive hydrodynamics of out-of-time-ordered correlators with charge conservation, Phys. Rev. X 8, no.3, 031058 (2018) doi:10.1103/PhysRevX.8.031058 [arXiv:1710.09827 [cond-mat.stat-mech]]. * [5] S. Gopalakrishnan, D. A. Huse, V. Khemani and R. Vasseur, Hydrodynamics of operator spreading and quasiparticle diffusion in interacting integrable systems, Phys. Rev. B 98, no.22, 220303 (2018) doi:10.1103/PhysRevB.98.220303 [arXiv:1809.02126 [cond-mat.stat-mech]]. * [6] D. E. Parker, X. Cao, A. Avdoshkin, T. Scaffidi and E. Altman, A Universal Operator Growth Hypothesis, Phys. Rev. X 9, no.4, 041017 (2019) doi:10.1103/PhysRevX.9.041017 [arXiv:1812.08657 [cond-mat.stat-mech]]. * [7] J. L. F. Barbón, E. Rabinovici, R. Shir and R. Sinha, On The Evolution Of Operator Complexity Beyond Scrambling, JHEP 10, 264 (2019) doi:10.1007/JHEP10(2019)264 [arXiv:1907.05393 [hep-th]]. * [8] P. Caputa, J. M. Magan and D. Patramanis, Geometry of Krylov complexity, Phys. Rev. Res. 4, no.1, 013041 (2022) doi:10.1103/PhysRevResearch.4.013041 [arXiv:2109.03824 [hep-th]]. * [9] S. K. Jian, B. Swingle and Z. Y. Xian, Complexity growth of operators in the SYK model and in JT gravity, JHEP 03, 014 (2021) doi:10.1007/JHEP03(2021)014 [arXiv:2008.12274 [hep-th]]. * [10] E. Rabinovici, A. Sánchez-Garrido, R. Shir and J. Sonner, Operator complexity: a journey to the edge of Krylov space, JHEP 06, 062 (2021) doi:10.1007/JHEP06(2021)062 [arXiv:2009.01862 [hep-th]]. * [11] A. Dymarsky and M. Smolkin, Krylov complexity in conformal field theory, Phys. Rev. D 104, no.8, L081702 (2021) doi:10.1103/PhysRevD.104.L081702 [arXiv:2104.09514 [hep-th]]. * [12] J. Kim, J. Murugan, J. Olle and D. Rosa, Operator delocalization in quantum networks, Phys. Rev. A 105, no.1, L010201 (2022) doi:10.1103/PhysRevA.105.L010201 [arXiv:2109.05301 [quant-ph]]. * [13] D. Patramanis, Probing the entanglement of operator growth, [arXiv:2111.03424 [hep-th]]. * [14] V. S. Viswanath and G. Müller, The Recursion Method: Applications to Many-body Dynamics (Springer, 2008). * [15] M. H. Lee, Ergodic Theory, Infinite Products, and Long Time Behavior in Hermitian Models, Phys. Rev. Lett. 87, 250601 (2001). * [16] J. Maldacena, S. H. Shenker and D. Stanford, A bound on chaos, JHEP 08, 106 (2016) doi:10.1007/JHEP08(2016)106 [arXiv:1503.01409 [hep-th]]. * [17] M. H. Lee, J. Florencio, and J. Hong, Dynamic equivalence of a two-dimensional quantum electron gas and a classical harmonic oscillator chain with an impurity mass, J. Phys. A 22, L331 (1989).
\left.-\hat{u}(\tilde{F}-F(0)+w_{A})-\hat{\tilde{v}}(F-F(0)+w_{B})\right)\right]\end{aligned}\end{aligned}$ $\displaystyle\quad-\frac{\mu}{R_{u}}Q_{f^{\prime}}\bar{f}(0)(G_{-}(0)-w_{D_{-}})-\frac{\mu}{R_{v}}\bar{Q}_{\bar{f}^{\prime}}f(0)(G_{+}(0)-w_{D_{+}})\ .$ (A.23) The first line vanishes identically, using (2.25). The terms proportional to $\Theta$ functions add up to $\displaystyle\begin{aligned} \frac{\mu}{2}\int d\sigma d{\tilde{\sigma}}&\left[f^{\prime}\frac{\mathcal{H}+\mathcal{P}}{R_{u}}\frac{\tilde{\bar{f}}^{\prime}}{R_{v}}\left(-\tilde{v}^{\prime}\tilde{G}_{-}+\mu^{2}(\tilde{\mathcal{H}}-\tilde{\mathcal{P}})\tilde{F}\right)\Theta(\sigma-{\tilde{\sigma}})\right.\\\ &~{}\left.+\frac{f^{\prime}}{R_{u}}\tilde{\bar{f}}^{\prime}\frac{\tilde{\mathcal{H}}-\tilde{\mathcal{P}}}{R_{v}}\left(-u^{\prime}G_{+}+\mu^{2}(\mathcal{H}+\mathcal{P})F\right)\Theta({\tilde{\sigma}}-\sigma)\right]\end{aligned}$ $\displaystyle=-\frac{\mu}{2}\int d\sigma d{\tilde{\sigma}}f^{\prime}\tilde{\bar{f}}^{\prime}\frac{\mathcal{H}+\mathcal{P}}{R_{u}}\frac{\tilde{\mathcal{H}}+\tilde{\mathcal{P}}}{R_{v}}[\Theta(\sigma-{\tilde{\sigma}})+\Theta({\tilde{\sigma}}-\sigma)]=-\frac{2\mu}{R_{u}R_{v}}Q_{f^{\prime}}\bar{Q}_{\bar{f}^{\prime}}\ .$ (A.24) Similarly, the terms proportional to $\hat{u}$ and $\hat{\tilde{v}}$ give $\displaystyle\frac{\mu}{R_{u}^{2}}Q_{uf^{\prime}}\int d{\tilde{\sigma}}\frac{\tilde{\bar{f}}^{\prime}}{R_{v}}\left[\tilde{v}^{\prime}\tilde{G}_{-}-\mu^{2}(\tilde{\mathcal{H}}-\tilde{\mathcal{P}})(\tilde{F}-F(0)+w_{A})\right]$ $\displaystyle+\frac{\mu}{R_{v}^{2}}\bar{Q}_{v\bar{f}^{\prime}}\int d\sigma\frac{f^{\prime}}{R_{u}}\left[u^{\prime}G_{+}-\mu^{2}(\mathcal{H}+\mathcal{P})(F-F(0)+w_{B})\right]$ $\displaystyle=\frac{2\mu}{R_{u}R_{v}}\left[\frac{1}{R_{u}}Q_{uf^{\prime}}\bar{Q}_{\bar{f}^{\prime}}(1+\mu^{2}[F(0)-w_{A}])+\frac{1}{R_{v}}Q_{f^{\prime}}\bar{Q}_{v\bar{f}^{\prime}}(1+\mu^{2}[F(0)-w_{B}])\right]\ .$ (A.25) Using these equations to work out the Poisson bracket of the charges, we finally find $\displaystyle\\{Q_{f},\bar{Q}_{\bar{f}}\\}$ (A.26) $\displaystyle=\frac{\mu}{R_{u}}Q_{f^{\prime}}\left(\bar{f}(0)(w_{D_{-}}-G_{-}(0))+\int d{\tilde{\sigma}}\frac{\tilde{\bar{f}}^{\prime}}{R_{v}}\left[(\tilde{\mathcal{H}}-\tilde{\mathcal{P}})\left(-\frac{1}{2}+\hat{\tilde{v}}(1+\mu^{2}[F(0)-w_{B}])+\mu^{2}\tilde{B}\right)-\tilde{v}^{\prime}\tilde{D}_{-}\right]\right)$ $\displaystyle\,+\frac{\mu}{R_{v}}\bar{Q}_{\bar{f}^{\prime}}\left(f(0)(w_{D_{+}}-G_{+}(0))+\int d\sigma\frac{f^{\prime}}{R_{u}}\left[(\mathcal{H}+\mathcal{P})\left(-\frac{1}{2}+\hat{u}(1+\mu^{2}[F(0)-w_{A}])+\mu^{2}A\right)-u^{\prime}D_{+}\right]\right)$ The terms proportional to $u,v$ in the integrand, which represent the contribution of the state-dependent radii to the Poisson brackets, are dangerous, as they would produce terms proportional to $Q_{uf^{\prime}}$ on the right-hand side, which are not conserved due to the non-periodicity of the associated function. 
A very similar derivation for the left-left and right-right components of the Poisson bracket algebra gives $\displaystyle\\{Q_{f},Q_{g}\\}$ $\displaystyle=\frac{1}{R_{u}}Q_{fg^{\prime}-f^{\prime}g}$ $\displaystyle\quad-\frac{\mu^{2}}{R_{u}}Q_{f^{\prime}}\left(g(0)(F(0)-w_{B})+\int\frac{d{\tilde{\sigma}}}{R_{u}}\tilde{g}^{\prime}\left[\tilde{u}^{\prime}\tilde{B}-(\tilde{\mathcal{H}}+\tilde{\mathcal{P}})\left(\tilde{D}_{-}+\hat{\tilde{u}}[G_{-}(0)-w_{D_{-}}]\right)\right]\right)$ $\displaystyle\quad+\frac{\mu^{2}}{R_{u}}Q_{g^{\prime}}\left(f(0)(F(0)-w_{B})+\int\frac{d\sigma}{R_{u}}f^{\prime}\left[u^{\prime}B-(\mathcal{H}+\mathcal{P})\left(D_{-}+\hat{u}[G_{-}(0)-w_{D_{-}}]\right)\right]\right)\ .$ (A.27) $\displaystyle\\{\bar{Q}_{\bar{f}},\bar{Q}_{\bar{g}}\\}$ $\displaystyle=\frac{1}{R_{v}}\bar{Q}_{\bar{f}^{\prime}\bar{g}-\bar{f}\bar{g}^{\prime}}$ $\displaystyle\quad-\frac{\mu^{2}}{R_{v}}\bar{Q}_{\bar{f}^{\prime}}\left(\bar{g}(0)(F(0)-w_{A})+\int\frac{d{\tilde{\sigma}}}{R_{v}}\tilde{\bar{g}}^{\prime}\left[\tilde{v}^{\prime}\tilde{A}-(\tilde{\mathcal{H}}-\tilde{\mathcal{P}})\left(\tilde{D}_{+}+\hat{\tilde{v}}[G_{+}(0)-w_{D_{+}}]\right)\right]\right)$ $\displaystyle\quad+\frac{\mu^{2}}{R_{v}}\bar{Q}_{\bar{g}^{\prime}}\left(\bar{f}(0)(F(0)-w_{A})+\int\frac{d\sigma}{R_{v}}\bar{f}^{\prime}\left[v^{\prime}A-(\mathcal{H}-\mathcal{P})\left(D_{+}+\hat{v}[G_{+}(0)-w_{D_{+}}]\right)\right]\right)\ .$ (A.28) With these results, we can deduce how to choose the functions $A$, $B$ and $D_{\pm}$ in order to guarantee, for consistency, that the right-hand sides are conserved. In the following, we will analyse this requirement step by step. Concretely, we need to cancel the radius contributions proportional to $Q_{uf^{\prime}}$ and $\bar{Q}_{v\bar{f}^{\prime}}$, which are not conserved due to the lack of periodicity of the functions that label them. Let us first consider the result (A.3), and understand the possible choices for the winding of $D_{-}$ so as to cancel this term. There are three types of winding one can consider: proportional to $\hat{u}(\sigma)$, proportional to $\Theta(\sigma)$, or proportional to $\hat{v}(\sigma)$. In the case of $D_{-}$, the first type of winding does not help cancel the non-conserved terms proportional to $Q_{uf^{\prime}}$ in the total Poisson bracket. Moreover, it introduces $u$-type winding in (A.26), which cannot be cancelled by $u$-type winding in $B$ (or the other functions) without introducing additional $u$-type winding in (A.3). Consequently, the $u$-type winding in $B$ and $D_{-}$ must be zero. A similar argument sets the $v$-type winding in $A$ and $D_{+}$ to zero. Whatever other type of winding we may have, the vanishing of the terms proportional to $Q_{uf^{\prime}}$ in (A.3) and $\bar{Q}_{v\bar{f}^{\prime}}$ in (A.3) sets $w_{D_{\pm}}=G_{\pm}(0)$.
The conservation equation (A.4) then requires that $w_{A}=w_{B}=F(0)$, so the charge algebra becomes $\displaystyle\\{Q_{f},\bar{Q}_{\bar{f}}\\}$ $\displaystyle=\frac{\mu Q_{f^{\prime}}}{R_{u}R_{v}}\int d\sigma\bar{f}^{\prime}\left[\left(\hat{v}-\frac{1}{2}+\mu^{2}B\right)(\mathcal{H}-\mathcal{P})-v^{\prime}D_{-}\right]$ $\displaystyle+\frac{\mu\bar{Q}_{\bar{f}^{\prime}}}{R_{u}R_{v}}\int d\sigma f^{\prime}\left[\left(\hat{u}-\frac{1}{2}+\mu^{2}A\right)(\mathcal{H}+\mathcal{P})-u^{\prime}D_{+}\right]$ (A.29) $\displaystyle\\{Q_{f},Q_{g}\\}$ $\displaystyle=\frac{1}{R_{u}}Q_{fg^{\prime}-f^{\prime}g}-\frac{\mu^{2}}{R_{u}^{2}}Q_{f^{\prime}}\int d\sigma g^{\prime}\,(u^{\prime}B-(\mathcal{H}+\mathcal{P})D_{-})+\frac{\mu^{2}}{R_{u}^{2}}Q_{g^{\prime}}\int d\sigma f^{\prime}\,(u^{\prime}B-(\mathcal{H}+\mathcal{P})D_{-})$ and a similar equation for $\\{\bar{Q}_{\bar{f}},\bar{Q}_{\bar{g}}\\}$. The first term in the first equation can be cancelled if we choose a $v$-type winding in $D_{-}$ and/or $B$; however, if we do not want it to introduce non-conserved terms in the second equation, we need to choose $u^{\prime}B^{(v)}-(\mathcal{H}+\mathcal{P})D_{-}^{(v)}=0\;,\;\;\;\;\;\;v^{\prime}D_{-}^{(v)}-\mu^{2}(\mathcal{H}-\mathcal{P})B^{(v)}=\mathcal{H}-\mathcal{P}$ (A.30) where the superscript $(v)$ indicates the coefficient of $\hat{v}$ in the corresponding function. These equations determine $D_{-}^{(v)}=G_{-}$ and $B^{(v)}=F$. Since the total winding of the functions $D_{-}$ and $B$ is $G_{-}(0)$ and $F(0)$, respectively, we see that the $v$-type winding accounts for all of it, leaving no room for a $\Theta$-type of winding. An identical argument sets $v^{\prime}A^{(u)}-(\mathcal{H}-\mathcal{P})D_{+}^{(u)}=0\;,\;\;\;\;\;\;\;u^{\prime}D_{+}^{(u)}-\mu^{2}(\mathcal{H}+\mathcal{P})A^{(u)}=\mathcal{H}+\mathcal{P}$ (A.31) Therefore, we find that the winding part of the functions $A,B,D_{\pm}$ is entirely fixed by charge conservation and the non-appearance of non-conserved terms in the charge algebra. This, however, is not sufficient to fix the charge algebra, as the periodic parts of these functions have not yet been fixed. The integrals we are left with are only conserved if they are of the form $\mathcal{H}+\mathcal{P}$ times a periodic function of $\hat{u}$ or $\mathcal{H}-\mathcal{P}$ times a function of $\hat{v}$. Combining the first line of section A.3 with the third, we find the requirement $\displaystyle(\tilde{\mathcal{H}}-\tilde{\mathcal{P}})\left(-\frac{1}{2}+\mu^{2}\tilde{B}^{(p)}\right)-\tilde{v}^{\prime}\tilde{D}_{-}$ $\displaystyle=-(\tilde{\mathcal{H}}-\tilde{\mathcal{P}})\bar{X}_{p}(\hat{v})\ ,$ $\displaystyle\tilde{u}^{\prime}\tilde{B}^{(p)}-(\tilde{\mathcal{H}}+\tilde{\mathcal{P}})\tilde{D}_{-}^{(p)}$ $\displaystyle=-(\tilde{\mathcal{H}}+\tilde{\mathcal{P}})X_{p}(\hat{u})\ ,$ (A.32) where the superscript $(p)$ indicates the periodic part and where $X_{p}(\hat{u})$ and $\bar{X}_{p}(\hat{v})$ are periodic functions that are arbitrary for now. Comparing this with (A.13), however, we find that $X_{p}=X$ and $\bar{X}_{p}=\bar{X}-\hat{v}$. Similarly, the other contributions to (A.26)–(A.3) are conserved when (2.35) holds with $Y_{p}\equiv Y-\hat{u}$ and $\bar{Y}_{p}\equiv\bar{Y}$ periodic.
The resulting algebra is $\displaystyle\\{Q_{f},\bar{Q}_{\bar{f}}\\}$ $\displaystyle=-\frac{\mu}{R_{u}R_{v}}\left(Q_{f^{\prime}}Q_{\bar{X}_{p}\bar{f}^{\prime}}+\bar{Q}_{\bar{f}^{\prime}}Q_{Y_{p}f^{\prime}}\right)\ ,$ $\displaystyle\\{Q_{f},Q_{g}\\}$ $\displaystyle=\frac{1}{R_{u}}Q_{fg^{\prime}-f^{\prime}g}+\frac{\mu^{2}}{R_{u}^{2}}\left(Q_{f^{\prime}}Q_{X_{p}g^{\prime}}-Q_{g^{\prime}}Q_{X_{p}f^{\prime}}\right)\ ,$ $\displaystyle\\{\bar{Q}_{\bar{f}},\bar{Q}_{\bar{g}}\\}$ $\displaystyle=\frac{1}{R_{v}}\bar{Q}_{\bar{f}^{\prime}\bar{g}-\bar{f}\bar{g}^{\prime}}+\frac{\mu^{2}}{R_{v}^{2}}\left(\bar{Q}_{\bar{f}^{\prime}}\bar{Q}_{\bar{Y}_{p}\bar{g}^{\prime}}-\bar{Q}_{\bar{g}^{\prime}}\bar{Q}_{\bar{Y}_{p}\bar{f}^{\prime}}\right)\ .$ (A.33) We can finally rewrite $\partial_{u}f=f^{\prime}/R_{u}$ and $\partial_{v}\bar{f}=\bar{f}^{\prime}/R_{v}$ to recover section 2.2. The undetermined functions $X_{p},\bar{X}_{p},Y_{p},\bar{Y}_{p}$ could more generally be constrained by the Jacobi identities that the Poisson bracket algebra must satisfy. These involve products of $\delta$-function distributions, so their full analysis is rather involved. We will not pursue it here. (Note added in v2: to simplify the analysis, it is possible to study the Jacobi identities involving a “smeared” version of the field-dependent coordinates $u_{0}\equiv\int_{0}^{R}d\sigma\,u(\sigma)$, $v_{0}\equiv\int_{0}^{R}d\sigma\,v(\sigma)$ and the currents, as was done in [73] for $J\bar{T}$. The simplest Jacobi identity is $\\{\\{P,\tilde{P}\\},u_{0}\\}+\text{cyclic}=0$. Plugging in the minimal solution, we find that this Jacobi identity _is not_ satisfied. More generally, it seems to be very difficult to find a solution to the Jacobi identities that has the non-trivial winding required by the charge conservation equation (A.4). If no such solution exists, this means that the functions $u,v$ as defined in this note are not good functions on phase space when the $T\bar{T}$-deformed theory lives on a compact space. This does not, however, preclude the existence of related field-dependent coordinates whose action on phase space is well-defined, as was the case for $J\bar{T}$.)

## Appendix B The $J\bar{T}$ charge algebra

In this appendix, we present some details of the calculation of the Poisson brackets of the conserved charges in $J\bar{T}$-deformed CFTs. As explained in the main text, the undeformed commutators (3.29) and the general formula (3.25) allow us to compute the commutator of the deformed right-moving Hamiltonian with various quantities of interest.
These read $\\{\mathcal{H}_{R}(\sigma),\mathcal{H}_{R}(\tilde{\sigma})\\}=\frac{-\mathcal{H}_{R}^{(0)}(\sigma)-\mathcal{H}_{R}^{(0)}(\tilde{\sigma})+\frac{\lambda^{2}}{2}\mathcal{H}_{R}(\sigma)\mathcal{H}_{R}(\tilde{\sigma})}{\sqrt{\left(1-\lambda\mathcal{J}_{+}(\sigma)\right)^{2}-\lambda^{2}\mathcal{H}_{R}^{(0)}(\sigma)}\sqrt{\left(1-\lambda\mathcal{J}_{+}(\tilde{\sigma})\right)^{2}-\lambda^{2}\mathcal{H}_{R}^{(0)}(\tilde{\sigma})}}\partial_{\sigma}\delta(\sigma-\tilde{\sigma})$ (B.1) $\\{\mathcal{P}(\sigma),\mathcal{H}_{R}(\tilde{\sigma})\\}=\frac{\mathcal{H}_{R}^{(0)}(\sigma)+\mathcal{H}_{R}^{(0)}(\tilde{\sigma})+\lambda\mathcal{J}_{+}(\sigma)\mathcal{H}_{R}(\tilde{\sigma})}{\sqrt{(1-\lambda\mathcal{J}_{+}(\tilde{\sigma}))^{2}-\lambda^{2}\mathcal{H}_{R}^{(0)}(\tilde{\sigma})}}\partial_{\sigma}\delta(\sigma-\tilde{\sigma})$ (B.2) $\\{\mathcal{H}_{R}(\sigma),\mathcal{J}_{+}(\tilde{\sigma})\\}=\frac{\lambda\mathcal{H}_{R}(\sigma)}{2\sqrt{\left(1-\lambda\mathcal{J}_{+}(\sigma)\right)^{2}-\lambda^{2}\mathcal{H}_{R}^{(0)}}}\partial_{\sigma}\delta(\sigma-\tilde{\sigma})$ (B.3) $\\{\mathcal{H}_{R}(\sigma),\mathcal{J}_{-}(\tilde{\sigma})\\}=-\frac{\mathcal{J}_{-}(\sigma)}{\sqrt{\left(1-\lambda\mathcal{J}_{+}(\sigma)\right)^{2}-\lambda^{2}\mathcal{H}_{R}^{(0)}(\sigma)}}\partial_{\sigma}\delta(\sigma-\tilde{\sigma})$ (B.4) Using the fact that $\mathcal{J}_{+}-\mathcal{J}_{-}=\phi^{\prime}_{1}$ and that the last two commutators are total $\tilde{\sigma}$ derivatives, we can deduce the commutator of $\mathcal{H}_{R}$ with $\phi_{1}$, which will be used to compute the Poisson bracket of the energy currents with the field-dependent coordinate $v=V-\lambda\phi_{1}$. We find $\\{\mathcal{H}_{R}(\sigma),\phi_{1}(\tilde{\sigma})\\}=\frac{-\mathcal{J}_{-}(\sigma)-\lambda\mathcal{H}_{R}(\sigma)/2}{\sqrt{(1-\lambda\mathcal{J}_{+}(\sigma))^{2}-\lambda^{2}\mathcal{H}_{R}^{(0)}(\sigma)}}\delta(\sigma-\tilde{\sigma})$ (B.5) Note that, in principle, we could have added an arbitrary integration function, $A(\sigma)$, to the right-hand side of this commutator. However, the fact that the commutator is local with respect to $\sigma-{\tilde{\sigma}}$ compels us to set this potential integration function to zero. Using the above commutators, we can easily compute those of $\mathcal{H}_{L}=\mathcal{H}_{R}+\mathcal{P}$.
As noted in the main text, the $\\{\mathcal{H}_{L},\mathcal{H}_{L}\\}$ commutator can be shown to be equivalent (using the criteria in appendix C) to $\\{\mathcal{H}_{L}(\sigma),\mathcal{H}_{L}(\tilde{\sigma})\\}=(\mathcal{H}_{L}(\sigma)+\mathcal{H}_{L}(\tilde{\sigma}))\partial_{\sigma}\delta(\sigma-\tilde{\sigma})$ (B.6) The $\\{\mathcal{H}_{L}(\sigma),\mathcal{H}_{R}(\tilde{\sigma})\\}$ commutator is somewhat involved; however, in the calculations below, in which we will integrate over ${\tilde{\sigma}}$, it suffices to know that it is proportional to $\partial_{\sigma}\delta(\sigma-{\tilde{\sigma}})$, which implies, using (C.1), that we only need to know its value at ${\tilde{\sigma}}=\sigma$, as well as that of its first ${\tilde{\sigma}}$ derivative evaluated at ${\tilde{\sigma}}=\sigma$: $\left.\\{\mathcal{H}_{L}(\sigma),\mathcal{H}_{R}(\tilde{\sigma})\\}\right|_{\tilde{\sigma}=\sigma}=\mathcal{H}_{R}\left(1-\frac{1}{\sqrt{\left(1-\lambda\mathcal{J}_{+}\right)^{2}-\lambda^{2}\mathcal{H}_{R}^{(0)}}}\right)\;,\;\;\;\;\;\;\partial_{\tilde{\sigma}}\left.\\{\mathcal{H}_{L}(\sigma),\mathcal{H}_{R}(\tilde{\sigma})\\}\right|_{\tilde{\sigma}=\sigma}=0$ (B.7) The $\\{\mathcal{H}_{L},\phi\\}$ commutator can be inferred from (B.5). We can now compute the algebra of the symmetry generators (3.12). It is trivial to show that the left-movers satisfy a Virasoro algebra, since $\\{Q_{f},Q_{g}\\}=\int d\sigma d\tilde{\sigma}f(U)g(\tilde{U})\\{\mathcal{H}_{L}(\sigma),\mathcal{H}_{L}(\tilde{\sigma})\\}=\int d\sigma\mathcal{H}_{L}(\sigma)(fg^{\prime}-f^{\prime}g)=Q_{fg^{\prime}-f^{\prime}g}$ (B.8) The commutator of the left and the right generators can be shown to vanish: $\displaystyle\\{Q_{f},\bar{Q}_{\bar{f}}\\}$ $\displaystyle=$ $\displaystyle\int d\sigma d\tilde{\sigma}f(U)\left[\\{\mathcal{H}_{L}(\sigma),\mathcal{H}_{R}(\tilde{\sigma})\\}\bar{f}(\tilde{v})+\bar{f}^{\prime}(\tilde{v})\\{\mathcal{H}_{L}(\sigma),\tilde{v}\\}\mathcal{H}_{R}(\tilde{\sigma})\right]$ (B.9) $\displaystyle=\;\int d\sigma f(U)\left[\left.\\{\mathcal{H}_{L}(\sigma),\mathcal{H}_{R}(\tilde{\sigma})\\}\right|_{\tilde{\sigma}=\sigma}\partial_{\sigma}\bar{f}(v)-\lambda\bar{f}^{\prime}(v)\left.\\{\mathcal{H}_{L}(\sigma),\phi(\tilde{\sigma})\\}\right|_{\tilde{\sigma}=\sigma}\mathcal{H}_{R}(\sigma)\right]=0$ where we used the definition of the field-dependent coordinate $v=V-\lambda\phi_{1}$ (so that $\partial_{\sigma}\bar{f}(v)=\bar{f}^{\prime}(v)(1-\lambda\phi^{\prime}_{1})$), the commutators (B.7), and the formula (C.5) for the integral of a derivative of a delta function. Finally, the right-moving generators have the commutation relations $\displaystyle\\{\bar{Q}_{\bar{f}},\bar{Q}_{\bar{g}}\\}$ $\displaystyle=$ $\displaystyle\int d\sigma d\tilde{\sigma}\left[\bar{f}(v)\bar{g}(\tilde{v})\\{\mathcal{H}_{R},\tilde{\mathcal{H}}_{R}\\}+\bar{f}^{\prime}(v)\\{v,\mathcal{H}_{R}(\tilde{\sigma})\\}\mathcal{H}_{R}\bar{g}(\tilde{v})+\bar{f}(v)\bar{g}^{\prime}(\tilde{v})\\{\mathcal{H}_{R},\tilde{v}\\}\tilde{\mathcal{H}}_{R}\right]$ (B.10) $\displaystyle=$ $\displaystyle\int d\sigma\left[\bar{f}(v)\left.\partial_{\tilde{\sigma}}\left(\bar{g}(\tilde{v})\\{\mathcal{H}_{R},\tilde{\mathcal{H}}_{R}\\}\right)\right|_{\tilde{\sigma}=\sigma}+\lambda(\bar{f}^{\prime}\bar{g}-\bar{g}^{\prime}\bar{f})\mathcal{H}_{R}\left.\\{\mathcal{H}_{R}(\sigma),\phi(\tilde{\sigma})\\}\right|_{\tilde{\sigma}=\sigma}\right]$ Note that there is no $\\{v,\tilde{v}\\}$ commutator, as $v=V-\lambda\phi_{1}$ only involves $\phi$.
Using the fact that $\left.\partial_{\tilde{\sigma}}\left(\\{\mathcal{H}_{R},\tilde{\mathcal{H}}_{R}\\}\right)\right|_{\tilde{\sigma}=\sigma}=\frac{2}{\lambda^{2}}\partial_{\sigma}\left(1-\frac{1-\lambda\mathcal{J}_{+}}{\sqrt{\left(1-\lambda\mathcal{J}_{+}\right)^{2}-\lambda^{2}\mathcal{H}_{R}^{(0)}}}\right)=-\partial_{\sigma}\frac{\mathcal{H}_{R}}{\sqrt{\left(1-\lambda\mathcal{J}_{+}\right)^{2}-\lambda^{2}\mathcal{H}_{R}^{(0)}}}$ (B.11) and integrating by parts, using the specific values of the commutators, we find $\\{\bar{Q}_{\bar{f}},\bar{Q}_{\bar{g}}\\}=\int d\sigma(\bar{f}^{\prime}\bar{g}-\bar{g}^{\prime}\bar{f})\mathcal{H}_{R}=\bar{Q}_{\bar{f}^{\prime}\bar{g}-\bar{f}\bar{g}^{\prime}}$ (B.12) Let us now compute the Poisson brackets with the currents. It is easy to show, using the explicit expression (3.16) for $K_{U}$, that $\\{Q_{f},P_{\chi}\\}=\int d\sigma d\tilde{\sigma}f(U)\chi(\tilde{U})\\{\mathcal{H}_{L}(\sigma),K_{U}(\tilde{\sigma})\\}=\int d\sigma f(U)\chi^{\prime}(U)K_{U}(\sigma)=P_{f\chi^{\prime}}$ (B.13) The commutator of two $U(1)$ charges gives the usual central extension, $\\{P_{\chi},P_{\eta}\\}=1/2\int d\sigma\chi\eta^{\prime}$, and the commutator with the right-moving Virasoro generators vanishes. As for the right-moving $U(1)$ charges, $\bar{P}_{\bar{\chi}}=\int d\sigma\frac{\pi-\phi^{\prime}}{2}\bar{\chi}(v)$ (B.14) it can easily be shown that they commute with the left-moving $U(1)$ and the left-moving Virasoro. The algebra of these $U(1)$ currents is, however, not Kac–Moody: $\\{\bar{P}_{\bar{\chi}},\bar{P}_{\bar{\eta}}\\}=\frac{1}{2}\int d\sigma\left(-\bar{\chi}\bar{\eta}^{\prime}+\lambda\bar{\chi}\bar{\eta}^{\prime}\frac{\pi+\phi^{\prime}}{2}-\lambda\bar{\chi}^{\prime}\bar{\eta}\frac{\pi-\phi^{\prime}}{2}\right)$ (B.15) $\\{\bar{Q}_{\bar{f}},\bar{P}_{\bar{\eta}}\\}=-\int d\sigma\left(\bar{f}\bar{\eta}^{\prime}\mathcal{J}_{-}+\frac{\lambda}{2}\bar{f}^{\prime}\bar{\eta}\,\mathcal{H}_{R}\right)$ (B.16) As explained in the main text, we can obtain an operator that has the standard commutator with the right-moving Virasoro by adding a multiple of $T_{\alpha\bar{z}}$ to the current. It turns out that the combination $\bar{P}_{\bar{\eta}}^{KM}=\int d\sigma\left(\frac{\pi-\phi^{\prime}}{2}+\frac{\lambda}{2}\mathcal{H}_{R}\right)\bar{\eta}$ (B.17) does the job. The commutator of these new charges is $\\{\bar{P}_{\bar{\chi}},\bar{P}_{\bar{\eta}}\\}=-\frac{1}{2}\int d\sigma\bar{\chi}\bar{\eta}^{\prime}(1-\lambda\phi^{\prime})=-\frac{1}{2}\int d\sigma\bar{\chi}\partial_{\sigma}\bar{\eta}$ (B.18) Thus, we find two copies of the Virasoro–Kac–Moody algebra. As explained in the main text, on a compact space we need to divide the coordinates by the associated radii, but this does not affect the charge algebra, because the zero modes of $\mathcal{J}_{\pm}$ that enter $R_{v}$ commute with all the currents.

## Appendix C Equivalence of distributions

The derivative of a Dirac-$\delta$ distribution acts on test functions as $\int d\tilde{\sigma}f(\sigma)g(\tilde{\sigma})\partial_{\sigma}\delta(\sigma-\tilde{\sigma})=f(\sigma)g^{\prime}(\sigma)$ (C.1) It is useful to know when two distributions of the form $F(\sigma,{\tilde{\sigma}})\partial_{\sigma}\delta(\sigma-{\tilde{\sigma}})$ are equivalent.
The following criterion is necessary and sufficient: $\displaystyle F_{1}\partial_{\sigma}\delta(\sigma-{\tilde{\sigma}})\cong F_{2}\partial_{\sigma}\delta(\sigma-{\tilde{\sigma}})$ $\displaystyle\Leftrightarrow\begin{cases}F_{1}(\sigma,\sigma)&=F_{2}(\sigma,\sigma)\\\ \partial_{\sigma}F_{1}(\sigma,{\tilde{\sigma}})|_{{\tilde{\sigma}}=\sigma}&=\partial_{\sigma}F_{2}(\sigma,{\tilde{\sigma}})|_{{\tilde{\sigma}}=\sigma}\\\ \partial_{\tilde{\sigma}}F_{1}(\sigma,{\tilde{\sigma}})|_{{\tilde{\sigma}}=\sigma}&=\partial_{\tilde{\sigma}}F_{2}(\sigma,{\tilde{\sigma}})|_{{\tilde{\sigma}}=\sigma}\\\ \end{cases}\ .$ (C.2) An example of equivalent distributions is given by $F_{1}=\sqrt{f(\sigma)f({\tilde{\sigma}})}$ and $F_{2}=[f(\sigma)+f({\tilde{\sigma}})]/2$, or $[f(\sigma)g(\tilde{\sigma})+g(\sigma)f(\tilde{\sigma})]\partial_{\sigma}\delta(\sigma-\tilde{\sigma})=[f(\sigma)g(\sigma)+f(\tilde{\sigma})g(\tilde{\sigma})]\partial_{\sigma}\delta(\sigma-\tilde{\sigma})$ (C.3) which indeed satisfies eq. C.2. The remainder of this subsection is dedicated to showing eq. C.2. To this end, note that two distributions $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$ are equivalent iff they integrate to the same result against arbitrary test functions, i.e. $\displaystyle\int_{\sigma_{1}}^{\sigma_{2}}d\sigma\,\mathcal{F}_{1}$ $\displaystyle=f(\sigma_{1},\sigma_{2},{\tilde{\sigma}})=\int_{\sigma_{1}}^{\sigma_{2}}d\sigma\,\mathcal{F}_{2}\ ,$ (C.4a) $\displaystyle\int_{{\tilde{\sigma}}_{1}}^{{\tilde{\sigma}}_{2}}d{\tilde{\sigma}}\,\mathcal{F}_{1}$ $\displaystyle=g({\tilde{\sigma}}_{1},{\tilde{\sigma}}_{2},\sigma)=\int_{{\tilde{\sigma}}_{1}}^{{\tilde{\sigma}}_{2}}d{\tilde{\sigma}}\,\mathcal{F}_{2}\ ,$ (C.4b) $\displaystyle\int_{\sigma_{1}}^{\sigma_{2}}d\sigma\int_{{\tilde{\sigma}}_{1}}^{{\tilde{\sigma}}_{2}}d{\tilde{\sigma}}\,\mathcal{F}_{1}$ $\displaystyle=h(\sigma_{1},\sigma_{2},{\tilde{\sigma}}_{1},{\tilde{\sigma}}_{2})=\int_{\sigma_{1}}^{\sigma_{2}}d\sigma\int_{{\tilde{\sigma}}_{1}}^{{\tilde{\sigma}}_{2}}d{\tilde{\sigma}}\,\mathcal{F}_{2}$ (C.4c) Obviously, if $f$ or $g$ are ordinary functions (i.e. not distributions), the requirement eq. C.4c follows from either eq. C.4a or eq. C.4b. For distributions proportional to $\delta^{\prime}$, the left-hand side of the requirement eq. C.4a gives $\displaystyle\int_{\sigma_{1}}^{\sigma_{2}}d\sigma\,F(\sigma,{\tilde{\sigma}})\partial_{\sigma}\delta(\sigma-{\tilde{\sigma}})$ (C.5) $\displaystyle=F(\sigma_{2},\sigma_{2})\delta({\tilde{\sigma}}-\sigma_{2})-F(\sigma_{1},\sigma_{1})\delta({\tilde{\sigma}}-\sigma_{1})-\partial_{\sigma}F(\sigma,{\tilde{\sigma}})|_{\sigma={\tilde{\sigma}}}\Theta(\sigma_{1}<{\tilde{\sigma}}<\sigma_{2})\ .$ The result is not a simple function, but it is a distribution containing only $\delta$ functions, for which we know the equivalence relations: the coefficients of the delta functions have to agree, as well as the remainder. This leads to the first two equations on the right-hand side of eq. C.2. The third equation follows from eq. C.4b. Equation C.4c does not provide additional constraints. The fact that the result of eq. C.5 contains $\delta$ functions suggests a generalization to distributions of the form $[F(\sigma,{\tilde{\sigma}})\partial_{\sigma}+G(\sigma,{\tilde{\sigma}})]\delta(\sigma-{\tilde{\sigma}})$.
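As a concrete sanity check, the pair $F_{1}=\sqrt{f(\sigma)f({\tilde{\sigma}})}$, $F_{2}=[f(\sigma)+f({\tilde{\sigma}})]/2$ can be tested against the three conditions of eq. C.2 symbolically. The short sketch below is ours, not part of the original text (any smooth positive $f$ works; we pick one concrete choice):

```python
# Sketch (ours): symbolic check of the criterion eq. C.2 for
# F1 = sqrt(f(s) f(st)) and F2 = (f(s) + f(st)) / 2.
import sympy as sp

s, st = sp.symbols("sigma sigma_t", real=True)

def f(x):
    return sp.exp(sp.sin(x)) + 2        # concrete smooth positive function

F1 = sp.sqrt(f(s) * f(st))
F2 = (f(s) + f(st)) / 2

conditions = [
    F1.subs(st, s) - F2.subs(st, s),                   # equal values
    (sp.diff(F1, s) - sp.diff(F2, s)).subs(st, s),     # equal d/d sigma
    (sp.diff(F1, st) - sp.diff(F2, st)).subs(st, s),   # equal d/d sigma_t
]
assert all(sp.simplify(c) == 0 for c in conditions)
print("F1 and F2 satisfy all three conditions of eq. C.2")
```

The same bookkeeping extends directly to the generalized distributions of the form $[F(\sigma,{\tilde{\sigma}})\partial_{\sigma}+G(\sigma,{\tilde{\sigma}})]\delta(\sigma-{\tilde{\sigma}})$ introduced above.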
Two such distributions are equivalent iff $\displaystyle F_{1}(\sigma,\sigma)$ $\displaystyle=F_{2}(\sigma,\sigma)\ ,$ (C.6a) $\displaystyle\partial_{\sigma}F_{1}(\sigma,{\tilde{\sigma}})-G_{1}(\sigma,{\tilde{\sigma}})|_{\sigma={\tilde{\sigma}}}$ $\displaystyle=\partial_{\sigma}F_{2}-G_{2}(\sigma,{\tilde{\sigma}})|_{\sigma={\tilde{\sigma}}}\ ,$ (C.6b) $\displaystyle\partial_{\tilde{\sigma}}F_{1}(\sigma,{\tilde{\sigma}})+G_{1}(\sigma,{\tilde{\sigma}})|_{\sigma={\tilde{\sigma}}}$ $\displaystyle=\partial_{\tilde{\sigma}}F_{2}(\sigma,{\tilde{\sigma}})+G_{2}(\sigma,{\tilde{\sigma}})|_{\sigma={\tilde{\sigma}}}\ .$ (C.6c) The derivation is identical. ## References * [1] A. B. Zamolodchikov, _Expectation value of composite field T anti-T in two-dimensional quantum field theory_ , hep-th/0401146. * [2] F. A. Smirnov and A. B. Zamolodchikov, _On space of integrable quantum field theories_ , _Nucl. Phys. B_ 915 (2017) 363–383, [1608.05499]. * [3] A. Cavaglià, S. Negro, I. M. Szécsényi and R. Tateo, _$T\bar{T}$ -deformed 2D Quantum Field Theories_, _JHEP_ 10 (2016) 112, [1608.05534]. * [4] S. Dubovsky, R. Flauger and V. Gorbenko, _Solving the Simplest Theory of Quantum Gravity_ , _JHEP_ 09 (2012) 133, [1205.6805]. * [5] S. Dubovsky, V. Gorbenko and M. Mirbabayi, _Natural Tuning: Towards A Proof of Concept_ , _JHEP_ 09 (2013) 045, [1305.6939]. * [6] P. Cooper, S. Dubovsky and A. Mohsen, _Ultraviolet complete Lorentz-invariant theory with superluminal signal propagation_ , _Phys. Rev. D_ 89 (2014) 084044, [1312.2021]. * [7] S. Dubovsky, V. Gorbenko and M. Mirbabayi, _Asymptotic fragility, near AdS 2 holography and $T\overline{T}$_, _JHEP_ 09 (2017) 136, [1706.06604]. * [8] J. Cardy, _The $T\overline{T}$ deformation of quantum field theory as random geometry_, _JHEP_ 10 (2018) 186, [1801.06895]. * [9] S. Dubovsky, V. Gorbenko and G. Hernández-Chifflet, _$T\overline{T}$ partition function from topological gravity_, _JHEP_ 09 (2018) 158, [1805.07386]. * [10] S. Datta and Y. Jiang, _$T\bar{T}$ deformed partition functions_, _JHEP_ 08 (2018) 106, [1806.07426]. * [11] O. Aharony, S. Datta, A. Giveon, Y. Jiang and D. Kutasov, _Modular invariance and uniqueness of $T\bar{T}$ deformed CFT_, _JHEP_ 01 (2019) 086, [1808.02492]. * [12] J. Cardy, _$T\overline{T}$ deformations of non-Lorentz invariant field theories_, 1809.07849. * [13] G. Jorjadze and S. Theisen, _Canonical maps and integrability in $T\bar{T}$ deformed 2d CFTs_, 2001.03563. * [14] J. Cardy, _$T\bar{T}$ deformation of correlation functions_, _JHEP_ 12 (2019) 160, [1907.03394]. * [15] V. Rosenhaus and M. Smolkin, _Integrability and renormalization under $T\bar{T}$_, _Phys. Rev. D_ 102 (2020) 065009, [1909.02640]. * [16] J. Kruthoff and O. Parrikar, _On the flow of states under $T\overline{T}$_, 2006.03054. * [17] S. Dubovsky, R. Flauger and V. Gorbenko, _Effective String Theory Revisited_ , _JHEP_ 09 (2012) 044, [1203.1054]. * [18] S. Dubovsky, R. Flauger and V. Gorbenko, _Evidence from Lattice Data for a New Particle on the Worldsheet of the QCD Flux Tube_ , _Phys. Rev. Lett._ 111 (2013) 062006, [1301.2325]. * [19] M. Caselle, D. Fioravanti, F. Gliozzi and R. Tateo, _Quantisation of the effective string with TBA_ , _JHEP_ 07 (2013) 071, [1305.1278]. * [20] S. Dubovsky and G. Hernandez-Chifflet, _Yang–Mills Glueballs as Closed Bosonic Strings_ , _JHEP_ 02 (2017) 022, [1611.09796]. * [21] C. Chen, P. Conkey, S. Dubovsky and G. Hernández-Chifflet, _Undressing Confining Flux Tubes with $T\bar{T}$_, _Phys. Rev. D_ 98 (2018) 114024, [1808.01339]. * [22] M. 
# Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks

Chenyuan Feng, Howard H. Yang, Deshun Hu, Zhiwei Zhao, Tony Q. S. Quek, and Geyong Min

C. Feng and T. Q. S. Quek are with Information Systems Technology and Design, Singapore University of Technology and Design, 487372, Singapore (email<EMAIL_ADDRESS><EMAIL_ADDRESS>. H. H. Yang is with the Zhejiang University/University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China (email: [email protected]). D. Hu is with the Department of Communication Engineering, Harbin Institute of Technology, Harbin 150001, China (email: [email protected]). Z. Zhao is with the School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610051, China (e-mail: [email protected]). G. Min is with the Department of Computer Science, College of Engineering Mathematics and Physical Sciences, The University of Exeter, Exeter EX4 4QF, U.K. (e-mail: [email protected]). The corresponding author of this paper is H. H. Yang.

###### Abstract

Implementing federated learning (FL) algorithms in wireless networks has garnered a wide range of attention. However, few works have considered the impact of user mobility on the learning performance. To fill this research gap, we first develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks, where the mobile users may roam across multiple edge access points (APs), leading to incomplete or inconsistent FL training. Secondly, we provide a convergence analysis of HFL with user mobility. Our analysis proves that the learning performance of HFL deteriorates drastically with highly mobile users, and that this decline is exacerbated by a small number of participants and large distribution divergences among the users' local data. To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm by redesigning the access mechanism, the local update rule, and the model aggregation scheme. Finally, we provide experiments to evaluate the learning performance of HFL and our MACFL. The results show that our MACFL can enhance the learning performance, especially in three cases: users with non-independent and identically distributed (non-IID) data, users with high mobility, and networks with a small number of users.
###### Index Terms:

Hierarchical federated learning, user mobility, data heterogeneity, convergence analysis

## I Introduction

To support emerging intelligent services and applications, federated learning (FL) has been proposed in [1] as a promising approach to generate high-quality models without collecting the distributed data. It provides great convenience for parallel processing and can significantly reduce the cost of message exchange [2, 3]. As for the learning efficiency, the convergence of FL in wireless networks has been analyzed in [4, 5], which characterizes the impact of unreliable communication links. To accelerate the convergence of FL, sophisticated user scheduling schemes in wireless networks have been designed in [6], and a theoretical bound has been analyzed in [7]. To fully exploit the processing power of cloud and edge computing servers, a client-edge-cloud hierarchical federated learning (HFL) paradigm has been proposed in [8], and the corresponding resource optimization has been designed in [9].

Despite its popularity, it has been demonstrated that FL yields suboptimal results if the local clients' data distributions diverge, which causes a significant decline in model accuracy [10]. Due to the heterogeneous environment, a shared global model trained by conventional FL might not generalize well for each user. To tackle this problem, a new version named cluster multi-task FL is proposed in [11] and [12]. Cluster multi-task FL improves the FL framework by dividing clients into different clusters, so that different clusters learn different tasks and shared models. In [13], the authors have proved that cluster FL converges faster than single-cell FL. Therefore, cluster FL is more suitable for large-scale hierarchical wireless networks. Aside from dividing users into different clusters, another technique named personalized FL has been proposed to cope with the data heterogeneity among users. In [14], the authors propose the Per-FedAvg algorithm based on a model-agnostic meta-learning formulation. In [15], the pFedMe algorithm is proposed by adding regularization terms, in the form of Moreau envelopes, to the local loss functions. The authors of [16] propose the FedAMP algorithm to facilitate the collaboration between similar clients based on an attention mechanism. However, most of the works related to personalized FL are formulated for single-cell networks rather than hierarchical wireless networks.

In addition to data heterogeneity, user mobility is another important factor in hierarchical wireless networks. However, most of the previous works only consider a static network topology, while the impact of user mobility has been ignored. Indeed, in many practical scenarios, a mobile user may download the latest global model from a certain access point (AP), keep moving, and leave the AP's coverage area during the local training procedure. As far as we know, the state-of-the-art works that consider user mobility in the design of FL paradigms are [17] and [18]. However, in [17] and [18], the APs do not select the users with high mobility that might leave their coverage area, which decreases the number of participants and hence results in missing valid training data and an under-fitted global model.
Although cluster FL with mobile users seems promising and more practical, several essential challenges still exist, which can be listed as follows. Firstly, it is difficult to evaluate the impact of user mobility on the convergence rate and accuracy of HFL. Besides single-cell networks, the convergence rate of multi-layer FL is analyzed in [19, 20, 21]. However, there is little work considering the convergence performance of HFL with mobile users. Secondly, it is difficult to efficiently assign mobile users with non-IID datasets to different clusters under limited communication and computation resources: existing clustering schemes are inefficient, owing to the additional communication and computation costs, and impractical, owing to the neglect of physical accessibility [22, 23, 24, 25]. Finally, the existing aggregation schemes are based on the conventional equal-weighted averaging scheme [1], which becomes the performance bottleneck due to the distribution divergences of non-IID data and user mobility. Motivated by these critical issues, we analyze the performance of the HFL algorithm with mobile users and propose a mobility-aware cluster federated learning (MACFL) algorithm. Our main contributions are summarized as follows:

* • We develop a theoretical framework for the analysis of HFL with mobile users, where convergence rates are derived by accounting for the impact of user mobility as well as the heterogeneity of both the dataset and the network architecture.
* • Building upon the analytical results, we redesign the access mechanism to allow mobile users to effectively participate in collaboration. We also introduce personalized FL and attentive averaging schemes to boost the convergence rate and enhance model accuracy.
* • We conduct extensive experiments to evaluate the performance of our proposed scheme, and also study the impact of network parameters on the learning performance. The experimental results show that our proposed schemes can significantly improve the convergence and accuracy performance of HFL with mobile users.

## II System Model

In this section, we introduce the architecture of the hierarchical wireless network, the mobility model of the users, as well as the implementation of FL in such a system.

Figure 1: An illustration of the system model: the hierarchical wireless network consists of one cloud server and multiple edge APs; each user communicates with its nearest edge AP and might lose the connection due to mobility. The figure shows an example of the HFL procedure with $\kappa_{1}=3$ and $\kappa_{2}=2$.

### II-A Network Setup

We consider a statistical learning task conducted in a hierarchical wireless network. The network consists of $M$ mobile users, $N$ edge APs, and one cloud server, as depicted in Fig. 1. We denote the set of users by $\{u_{m}\}_{m=1}^{M}$, where a generic user $u_{m}$ holds a local dataset $\mathcal{D}_{u_{m}}$. The size of dataset $\mathcal{D}_{u_{m}}$ is denoted as $|\mathcal{D}_{u_{m}}|$, where $|\cdot|$ represents the cardinality of a set. We further denote the set of edge APs by $\{c_{n}\}_{n=1}^{N}$ and assume that each edge AP is equipped with an edge server; hence, we use the terms edge AP and edge server interchangeably in the rest of the paper. Without loss of generality, we consider $M\gg N$ because an AP is usually connected by multiple users in the real world. In this work, we consider a simple cluster scheme based on physical accessibility.
Specifically, each mobile user will connect to the nearest edge AP based on its real-time location, without additional communication and computation overhead. As such, we denote by $\mathcal{C}_{n}$ the set of mobile users connected to the edge AP $c_{n}$. The objective of all the entities in this network is to jointly minimize the global loss function, $F(\mathbf{w},\mathcal{D})$, given as follows

$\displaystyle F(\mathbf{w},\mathcal{D})$ $\displaystyle=\frac{1}{|\mathcal{D}|}\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{D}}\ell(\mathbf{w};\mathbf{x}_{i},y_{i})=\sum_{m=1}^{M}\alpha_{u_{m}}F_{u_{m}}(\mathbf{w},\mathcal{D}_{u_{m}})$ (1)

in which $\mathcal{D}=\cup_{m=1}^{M}\mathcal{D}_{u_{m}}$ is the dataset aggregated from all the mobile users and $\ell(\mathbf{w};\mathbf{x}_{i},y_{i})$ is the loss function evaluated on the $i$-th data sample pair $(\mathbf{x}_{i},y_{i})$. Particularly, $\mathbf{x}_{i}\in\mathbb{R}^{d}$ denotes the $i$-th input sample and $y_{i}$ the corresponding label, $\mathbf{w}\in\mathbb{R}^{d}$ is the minimizer, which fully parametrizes the machine learning model, and $\alpha_{u_{m}}$ denotes the linear combination weight of user $u_{m}$, satisfying $\sum_{m=1}^{M}\alpha_{u_{m}}=1$ with $\alpha_{u_{m}}\in[0,1]$. Moreover, the function $F_{u_{m}}(\mathbf{w},\mathcal{D}_{u_{m}})$ represents the empirical loss function comprised by the local dataset of the mobile user $u_{m}$, given as

$F_{u_{m}}(\mathbf{w},\mathcal{D}_{u_{m}})=\frac{1}{|\mathcal{D}_{u_{m}}|}\sum_{(\mathbf{x}_{i},y_{i})\in\mathcal{D}_{u_{m}}}\ell(\mathbf{w};\mathbf{x}_{i},y_{i}).$ (2)

Due to the privacy concerns of the users, the local datasets are accessible to neither the APs nor the cloud server. Therefore, the global loss function $F(\mathbf{w},\mathcal{D})$ cannot be directly evaluated, and its minimization needs to be carried out by means of federated computing. Fig. 1 illustrates a typical procedure of HFL. Particularly, inside the coverage – also referred to as the cluster – of each edge AP, model training is conducted between the AP and the mobile users connected to it. As such, the objective function of this cluster is given by

$\displaystyle f_{c_{i}}(\mathbf{w}_{c_{i}},\mathcal{D}_{c_{i}})=\sum_{u_{m}\in\mathcal{C}_{i}}\alpha_{u_{m}}^{c}F_{u_{m}}(\mathbf{w}_{c_{i}},\mathcal{D}_{u_{m}})$ (3)

where $\mathcal{D}_{c_{i}}=\bigcup_{u_{m}\in\mathcal{C}_{i}}\mathcal{D}_{u_{m}}$ denotes the training dataset owned by all users that connect to edge AP $c_{i}$ and $\alpha_{u_{m}}^{c}\in[0,1]$ denotes the weight of $u_{m}$ at the edge aggregation in its assigned cluster, with $\sum_{u_{m}\in\mathcal{C}_{i}}\alpha_{u_{m}}^{c}=1$. Similarly, the edge APs will further aggregate their parameters at the cloud server to minimize the following function

$\displaystyle f(\mathbf{w},\mathcal{D})=\sum_{i=1}^{N}\alpha_{c_{i}}f_{c_{i}}(\mathbf{w},\mathcal{D}_{c_{i}})$ (4)

where $\alpha_{c_{i}}\in[0,1]$ is the weight of $c_{i}$ and we have $\sum_{i=1}^{N}\alpha_{c_{i}}=1$. Recalling the cluster and global loss functions, we have $\alpha_{u_{m}}=\alpha_{u_{m}}^{c}\alpha_{c_{u_{m}}}$, where $c_{u_{m}}$ denotes the cluster that user $u_{m}$ belongs to. In this work, the communication between the users and the edge APs takes place every $\kappa_{1}$ local updates, and the communication between the cloud and the edge servers takes place every $\kappa_{2}$ edge aggregations.
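To make the nested structure of (1)–(4) concrete, the following minimal sketch (our illustration, not code from the paper; `loss_fn` and the datasets are placeholders) evaluates the global objective as a weighted average of cluster objectives, which are in turn weighted averages of the users' empirical losses:

```python
import numpy as np

def user_loss(w, data, loss_fn):
    # Empirical loss F_{u_m}(w, D_{u_m}) in (2): mean per-sample loss.
    return np.mean([loss_fn(w, x, y) for (x, y) in data])

def cluster_loss(w, users, loss_fn):
    # Cluster objective f_{c_i} in (3); users = [(alpha_c, dataset), ...],
    # where the in-cluster weights alpha_c sum to one.
    return sum(a * user_loss(w, d, loss_fn) for (a, d) in users)

def global_loss(w, clusters, loss_fn):
    # Cloud objective f in (4); clusters = [(alpha_cluster, users), ...],
    # where the cluster weights alpha_cluster sum to one.
    return sum(ac * cluster_loss(w, us, loss_fn) for (ac, us) in clusters)
```

Note how the effective weight of a user in the global objective is the product $\alpha_{u_{m}}=\alpha_{u_{m}}^{c}\alpha_{c_{u_{m}}}$, exactly as stated above.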
### II-B User Mobility Model

In the presence of user mobility, the set of mobile users associated with each edge AP varies over time. In order to characterize this feature, we assume all the users are uniformly distributed over the entire network at the beginning of time, and then each user will stay or move to a neighboring cluster according to a certain probability during the local training. We use a Markov chain to model the dynamics of the mobile users. Specifically, we use a vector $\mathbf{\pi}_{u_{m}}^{t}\in\mathbb{R}^{N}$ to capture the state of connection for $u_{m}$ at the $t$-th iteration, where the entries are defined as follows

$\pi^{t}_{u_{m}}[i]=\left\{\begin{matrix}1,&\text{if}~u_{m}\in\mathcal{C}_{i}^{t}\\ 0,&\text{otherwise}\end{matrix}\right.$ (5)

in which $\mathcal{C}_{i}^{t}$ denotes the set of mobile users connected to $c_{i}$ at the $t$-th iteration. Moreover, we use an adjacency matrix $\mathbf{A}\in\mathbb{R}^{N\times N}$ to characterize the spatial topology between the edge APs. The elements of this matrix are defined as $\mathbf{A}_{ij}=\left\{\begin{matrix}1,&\text{if}~c_{j}\in\mathcal{N}(c_{i})\\ 0,&\text{otherwise}\end{matrix}\right.$ where $\mathcal{N}(c_{i})$ denotes the set of neighboring APs of $c_{i}$, including itself, and the size of $\mathcal{N}(c_{i})$ is given by $|\mathcal{N}(c_{i})|=\sum_{j=1}^{N}\mathbf{A}_{ij}$.

Figure 2: An example of a linear graph and the transition probabilities of mobile users between different clusters.

We assume all the users are uniformly and randomly distributed over the entire network at the beginning of time, and that the probabilities of a user moving to its different neighboring APs are equal. Therefore, the elements of the transition probability matrix can be written as

$\mathbf{P}^{t}_{i,j}=\left\{\begin{matrix}p_{s,c_{i}}^{t},&\text{if}~i=j\\ \frac{1-p_{s,c_{i}}^{t}}{\left|\mathcal{N}(c_{i})\right|-1},&\text{if}~~c_{j}\in\mathcal{N}(c_{i})\setminus c_{i}\\ 0,&\text{otherwise}\end{matrix}\right.$ (6)

If $p_{s,c_{i}}^{t}=1,\forall i,\forall t$, the scenario boils down to the conventional HFL with static users. In this paper, we assume all the mobile users share the same transition probability matrix $\mathbf{P}^{t}\in\mathbb{R}^{N\times N}$ at the $t$-th local computation step, which characterizes the trajectories of the mobile users. As shown in Fig. 2, the entry at $(i,j)$, i.e., $\mathbf{P}^{t}_{ij}$, denotes the probability that a user located at AP $c_{i}$ moves to $c_{j}$ at the $t$-th local computation time: each user stays in the same cluster with probability $p_{s,c_{i}}^{t}$ and leaves for one of its neighbors with probability $1-p_{s,c_{i}}^{t}$. Given the current state vector and the transition probability matrix, the future state vector can be estimated by

$\pi^{t+1}_{u_{m}}=\pi^{t}_{u_{m}}\mathbf{P}^{t}=\pi^{0}_{u_{m}}\prod_{\tau=0}^{t}\mathbf{P}^{\tau}$ (7)

Following the above, we can calculate the size $S_{i}^{t}$ of the set $\mathcal{C}_{i}^{t}$ as follows

$S_{i}^{t}=\sum_{m=1}^{M}\pi^{t}_{u_{m}}[i]=\sum_{m=1}^{M}\pi^{0}_{u_{m}}\left(\prod_{\tau=0}^{t-1}\mathbf{P}^{\tau}\right)_{[:,i]}$ (8)

where $(\cdot)_{[:,i]}$ represents the $i$-th column of a matrix. The evolution of $S_{i}^{t}$ is a complicated quasi-birth-and-death process, which depends on the initial conditions, the intensity matrix of the Markov process, and the number of iterations [26].
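A short simulation of the mobility model (5)–(8) may help fix ideas. The sketch below is our illustration, assuming a time-invariant staying probability and the linear graph of Fig. 2; none of the names come from the paper:

```python
import numpy as np

def transition_matrix(A, p_stay):
    # Build P from (6): stay with probability p_stay, otherwise move
    # uniformly to a neighboring AP; A is the adjacency matrix with
    # self-loops, so row i encodes N(c_i).
    N = A.shape[0]
    P = np.zeros((N, N))
    for i in range(N):
        neighbors = np.flatnonzero(A[i])
        others = neighbors[neighbors != i]
        P[i, i] = p_stay
        if others.size > 0:
            P[i, others] = (1.0 - p_stay) / others.size
    return P

rng = np.random.default_rng(0)
N, M, T = 5, 50, 20
A = np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
P = transition_matrix(A, p_stay=0.5)

pi = np.eye(N)[rng.integers(N, size=M)]  # one-hot states pi^0_{u_m}, cf. (5)
for t in range(T):
    pi = pi @ P                          # state-distribution update, cf. (7)
S = pi.sum(axis=0)                       # expected cluster sizes S_i^t, cf. (8)
print(S)
```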
For the sake of tractability, we consider an equilibrium condition with a steady-state probability distribution as follows:

$S_{i}^{t}(1-p_{s,c_{i}}^{t})=\sum_{c_{j}\in\mathcal{N}(c_{i})\setminus c_{i}}S_{j}^{t}p_{c_{j},c_{i}}^{t},\forall i,\forall t.$ (9)

This equation indicates that the number of incoming and departing users in each cluster is balanced; hence, we have $S_{i}^{t}=S_{i},\forall t,\forall i$. It is noteworthy that the indices of the users in $\mathcal{C}_{i}^{t}$ vary over time even though the size remains the same.

## III HFL with Mobile Users

In this section, we extend the conventional HFL algorithm [8] to account for cases with mobile users. The general scheme is given in Algorithm 1, and the key steps, as well as the convergence analysis, are elaborated in the sequel.

### III-A Algorithm Description

Algorithm 1 Conventional HFL with Mobile Users

1: Initialize $\{\mathcal{D}_{u_{m}}\}_{m=1}^{M}$, $\{\pi_{u_{m}}^{0}\}_{m=1}^{M}$, $\mathbf{P}$, $\mathcal{C}^{0}=\{\mathcal{C}_{1}^{0},...,\mathcal{C}_{N}^{0}\}$, ${\{\mathbf{w}_{c_{i}}^{(0)}\}}_{i=1}^{N}$.
2: for each edge communication round $b=0,1,...,B-1$ do
3:  for each user $m=1,2,...,M$ in parallel do
4:   $t=b\kappa_{1}$
5:   Local update based on (10) and (11).
6:  for each edge AP $n=1,...,N$ in parallel do
7:   Edge update based on (12).
8:  if $b\mod\kappa_{2}=0$ then
9:   Cloud update based on (13).
10:   for each edge AP $n=1,...,N$ in parallel do
11:    $\mathbf{w}_{c_{n}}^{t+1}\leftarrow\mathbf{w}_{g}^{t+1}$
12: return Global model $\mathbf{w}_{g}^{T}$

The key steps of the HFL algorithm are presented in Algorithm 1. During the local training procedure, each mobile user downloads the cluster model from its closest AP and then updates its local model through local training. Only the mobile users that stay in the same coverage area during the local training procedure can successfully upload to the original edge AP, and the cloud server aggregates the cluster models of all the edge APs after every $\kappa_{2}$ edge model aggregations. The procedures of the user update, edge update, and cloud update are detailed below.

#### III-A1 Local Update

Let $t=b\kappa_{1}+j$ denote the index of the local update iteration, with $b$ denoting the index of the edge communication round and $j$ denoting the index of the epoch during local training. Let $\mathbf{w}^{t}_{u_{m}}$ denote the local model parameter of $u_{m}$ at the $t$-th local update iteration. When $t\mod\kappa_{1}=0$, each user downloads the latest cluster model from its tagged edge AP, i.e.,

$\mathbf{w}^{t}_{u_{m}}=\mathbf{w}^{t}_{c_{n}},\text{ if }\pi_{u_{m}}^{t}[n]=1,$ (10)

and then performs $\kappa_{1}\geq 1$ steps of local updates via the SGD method, which can be expressed as

$\displaystyle\mathbf{w}^{t+1}_{u_{m}}\leftarrow\mathbf{w}^{t}_{u_{m}}-\eta g(\mathbf{w}^{t}_{u_{m}},\xi^{t+1}_{u_{m}}),$ (11)

where $\eta$ is the learning rate, $\xi^{t}_{u_{m}}\subset\mathcal{D}_{u_{m}}$ is a mini-batch independently and identically sampled from the local dataset at the $t$-th local update iteration, and $g(\mathbf{w}^{t}_{u_{m}},\xi^{t}_{u_{m}})$ denotes the mini-batch gradient of the local loss function, which satisfies $\mathbb{E}_{\xi}\{g(\mathbf{w}^{t}_{u_{m}},\xi^{t}_{u_{m}})\}=\nabla F(\mathbf{w}^{t}_{u_{m}},\mathcal{D}_{u_{m}})$. For notational simplicity, we write $g(\mathbf{w}^{t}_{u_{m}})$ and $\nabla F(\mathbf{w}^{t}_{u_{m}})$ for short in the rest of this paper.
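As an illustration of (10)–(11), a minimal sketch of one user's local phase is given below (our code, not the authors'; `grad_fn` is a placeholder for the mini-batch gradient oracle $g(\mathbf{w},\xi)$):

```python
import numpy as np

def local_update(w_cluster, dataset, grad_fn, eta, kappa1, batch_size, rng):
    # (10): initialize from the cluster model downloaded at t = b * kappa1.
    w = w_cluster.copy()
    for _ in range(kappa1):
        # Sample a mini-batch xi from the local dataset D_{u_m}.
        idx = rng.choice(len(dataset), size=batch_size, replace=False)
        batch = [dataset[i] for i in idx]
        # (11): one SGD step with the mini-batch gradient g(w, xi).
        w = w - eta * grad_fn(w, batch)
    return w
```

The edge update (12) introduced next then averages only the returned models of those users whose mobility indicator equals one, i.e., of the users who remained in the same cluster.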
#### III-A2 Edge Update

The communications between the users and their associated edge APs take place every $\kappa_{1}$ local update iterations. Due to mobility, the set of mobile users in cluster $c_{n}$ at the $b$-th edge communication round might be different from that at the $(b+1)$-th edge communication round. As such, only the users that are still connected to $c_{n}$ at the $(b+1)$-th edge aggregation can upload successfully, and the edge AP only collects the updated gradients from those users. Let a Boolean variable $\mathbb{I}_{u_{m}}^{b\kappa_{1}}$ represent the uploading state, i.e., $\mathbb{I}_{u_{m}}^{b\kappa_{1}}=1$ if $\pi_{u_{m}}^{(b+1)\kappa_{1}}[i]=\pi_{u_{m}}^{b\kappa_{1}}[i],\forall i$, and $\mathbb{I}_{u_{m}}^{b\kappa_{1}}=0$ otherwise. Then the cluster model is updated as follows:

$\mathbf{w}_{c_{i}}^{(b+1)\kappa_{1}}=\mathbf{w}_{c_{i}}^{b\kappa_{1}}-\eta\sum_{u_{m}\in\mathcal{C}_{i}^{t}}\sum_{\tau=1}^{\kappa_{1}}\alpha_{u_{m}}^{c}g(\mathbf{w}^{b\kappa_{1}+\tau}_{u_{m}})\mathbb{I}_{u_{m}}^{b\kappa_{1}}$ (12)

where $\mathbf{w}_{c_{i}}^{b\kappa_{1}}$ and $\mathbf{w}_{c_{i}}^{(b+1)\kappa_{1}}$ are the cluster models of the edge AP $c_{i}$ after the $b$-th and $(b+1)$-th edge communication rounds, respectively. If $b\mod\kappa_{2}\neq 0$, the edge AP broadcasts the latest cluster model to its connected users. After every $\kappa_{2}$ edge aggregations, the edge APs upload the cluster models to the cloud server, update the cluster models after receiving the global model from the cloud, and finally broadcast them to the connected users.

#### III-A3 Cloud Update

The global aggregation happens every $\kappa_{2}$ edge aggregations, which means the global model is updated as follows when $b\mod\kappa_{2}=0$:

$\mathbf{w}_{g}^{b\kappa_{1}}=\sum_{n=1}^{N}\alpha_{c_{n}}\mathbf{w}_{c_{n}}^{b\kappa_{1}}$ (13)

where $\alpha_{c_{n}}$ denotes the weight of the updated cluster model from $c_{n}$. Since the global aggregation happens every $\kappa_{1}\kappa_{2}$ local update iterations, we also have

$\mathbf{w}_{g}^{(b+\kappa_{2})\kappa_{1}}=\mathbf{w}_{g}^{b\kappa_{1}}-\eta\sum_{m=1}^{M}\sum_{q=0}^{\kappa_{2}-1}\sum_{\tau=1}^{\kappa_{1}}\alpha_{u_{m}}g(\mathbf{\tilde{w}}^{(b+q)\kappa_{1}+\tau}_{u_{m}})\mathbb{I}_{u_{m}}^{(b+q)\kappa_{1}}$ (14)

Compared to the time spent in local training, the time duration of parameter uploading, model aggregation, and downloading is relatively short. Therefore, we assume the connection states of the mobile users only change during the local training procedures and remain the same during the communication and model aggregation procedures.

### III-B Convergence Analysis

Owing to mobility, users may drop out of their associated edge APs, which leads to a decrease in the number of participants in the FL. In this section, we analyze the impact of user mobility on the convergence rate of HFL. Similar to previous works, we make the following assumptions:

* • The loss functions are lower-bounded: $f(\mathbf{w})\geq f_{\text{inf}},\forall\mathbf{w}$.
* • The gradients of the loss functions are bounded: $\|\nabla F(\mathbf{w})\|^{2}\leq G^{2},\forall\mathbf{w}$.
* • The loss functions are $L$-smooth: $\|\nabla F(\mathbf{w}_{1})-\nabla F(\mathbf{w}_{2})\|\leq L\|\mathbf{w}_{1}-\mathbf{w}_{2}\|,\forall\mathbf{w}_{1},\mathbf{w}_{2}$.
* • The mini-batch gradients are unbiased with bounded variance: $\mathbb{E}_{\xi|\mathbf{w}}\{g(\mathbf{w})\}=\nabla F(\mathbf{w})$, ${\|g(\mathbf{w})-\nabla F(\mathbf{w})\|}^{2}\leq\sigma^{2},\forall\mathbf{w},\xi$.
* • The divergences of the local, cluster, and global loss functions are bounded: for all $u_{m},c_{i},\mathbf{w}$, we have $\frac{1}{N}\sum_{i=1}^{N}{\|\nabla f_{c_{i}}(\mathbf{w})-\nabla f(\mathbf{w})\|}^{2}\leq\epsilon_{g}^{2}$, $\frac{1}{S_{i}}\sum_{u_{m}\in\mathcal{C}_{i}}{\|\nabla F_{u_{m}}(\mathbf{w})-\nabla f_{c_{i}}(\mathbf{w})\|}^{2}\leq\epsilon_{c}^{2}.$

Although the averaged global model is not observable when $t\mod\kappa_{1}\kappa_{2}\neq 0$ in this system, here we use $\mathbf{\bar{w}}_{g}^{t}$ for the analysis, similar to the related works [20, 21]. Because the objective function $F(\mathbf{w},\mathcal{D})$ can be non-convex, a learning algorithm based on SGD methods may converge to a local minimum or saddle point. Similar to [20, 21] and [28, 29], we leverage the expected gradient norm as an indicator of convergence, namely, the training algorithm achieves a $\theta$-suboptimal solution if $\mathbb{E}\left[\frac{1}{T}\sum_{t=1}^{T}{\|\nabla f(\mathbf{\bar{w}}_{g}^{t})\|}^{2}\right]\leq\theta$, where $\theta>0$. This condition can guarantee the convergence of an algorithm to a stationary point.

###### $\mathbf{Lemma}$ 1

For the employed FL system, when the learning rate satisfies $\eta<\frac{1}{L}$, after $T$ rounds of global aggregations, we have

$\displaystyle\theta_{HFL}\leq$ $\displaystyle\frac{2}{\eta p_{s}T}\big[\,\mathbb{E}f(\mathbf{\bar{w}}^{0}_{g})-f_{\inf}\,\big]+\eta L\sigma^{2}\sum_{m=1}^{M}(\alpha_{u_{m}})^{2}+4\epsilon_{g}^{2}+4\epsilon_{c}^{2}$ $\displaystyle+\frac{4L^{2}}{T}\sum_{t=0}^{T-1}\bigg(\sum_{i=1}^{N}\alpha_{c_{i}}\mathbb{E}\Big[\|\mathbf{\bar{w}}^{t}_{g}-\mathbf{\bar{w}}_{c_{i}}^{t}\|^{2}\Big]+\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\Big[\|\mathbf{\bar{w}}_{c_{u_{m}}}^{t}-\mathbf{w}_{u_{m}}^{t}\|^{2}\Big]\bigg).$ (15)

###### Proof:

Please refer to Appendix -A. ∎

Notably, the first term on the right-hand side of (15) is similar to the setting of centralized learning [29] when $p_{s}=1$, and the second term is introduced by the randomness arising from the mini-batch gradients; the third and fourth terms correspond to the divergences among the local, cluster, and global loss functions, while the fifth and sixth terms are introduced by the divergences among the local, cluster, and global model parameters. When $p_{s}=0$, the right-hand side of (15) is unbounded, which means the training algorithm does not converge if no mobile user participates in the model aggregation. We further bound the mean square error (MSE) of the local and cluster model parameters below.

###### $\mathbf{Lemma}$ 2

For the employed FL system, if $\eta<\frac{1}{\sqrt{12}L\kappa_{1}}$, the MSE of the local model parameters can be bounded as follows:

$\displaystyle\frac{1}{T}\sum_{t=0}^{T-1}\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\|\mathbf{w}^{t}_{u_{m}}-\mathbf{\bar{w}}_{c_{u_{m}}}^{t}\|^{2}$ $\displaystyle\leq\frac{1}{1-12\eta^{2}L^{2}\kappa_{1}^{2}p_{s}^{2}}\bigg(6\eta^{2}p_{s}^{2}\kappa_{1}^{2}\epsilon_{c}^{2}+2\eta^{2}\kappa_{1}p_{s}\left(\sigma^{2}+(1-p_{s})G^{2}\right)\sum_{m=1}^{M}\alpha_{u_{m}}(1-\alpha_{u_{m}}^{c})\bigg).$ (16)

###### Proof:

Please refer to Appendix -B.
∎

###### $\mathbf{Lemma}$ 3

For the employed FL system, if $\eta<\frac{1}{\sqrt{12}L\kappa_{1}\kappa_{2}}$, the MSE of the cluster model parameters can be bounded as follows:

$\displaystyle\frac{1}{T}\sum_{t=0}^{T-1}\sum_{i=1}^{N}\alpha_{c_{i}}\mathbb{E}\|\mathbf{\bar{w}}^{t}_{g}-\mathbf{\bar{w}}_{c_{i}}^{t}\|^{2}$ $\displaystyle\leq\frac{1}{1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}p_{s}^{2}}\bigg\{6\eta^{2}p_{s}^{2}\kappa_{1}^{2}\kappa_{2}^{2}\epsilon_{g}^{2}+4\eta^{2}\kappa_{1}\kappa_{2}p_{s}(\sigma^{2}+(1-p_{s})G^{2})\sum_{m=1}^{M}\alpha_{u_{m}}(\alpha_{u_{m}}^{c}-\alpha_{u_{m}})$ $\displaystyle+\frac{4\eta^{2}p_{s}^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}}{1-12\eta^{2}L^{2}\kappa_{1}^{2}p_{s}^{2}}\bigg[2\eta^{2}\kappa_{1}p_{s}(\sigma^{2}+(1-p_{s})G^{2})\sum_{m=1}^{M}\alpha_{u_{m}}(1-\alpha_{u_{m}}^{c})+6\eta^{2}p_{s}^{2}\kappa_{1}^{2}\epsilon_{c}^{2}\bigg]\bigg\}$ (17)

###### Proof:

Please refer to Appendix -C. ∎

We are now in a position to obtain the upper bound of $\theta_{HFL}$ by applying Lemma 2 and Lemma 3 back into Lemma 1.

###### $\mathbf{Theorem}$ 1

Under the employed FL system, when the learning rate is chosen as $\eta<\frac{1}{\sqrt{12}L\kappa_{1}\kappa_{2}}$, then after $T$ rounds of global aggregations, Algorithm 1 achieves a $\theta_{HFL}$-suboptimal solution, where $\theta_{HFL}$ is bounded as

$\displaystyle\theta_{HFL}$ $\displaystyle\leq\frac{2}{\eta p_{s}T}\big[\mathbb{E}f(\mathbf{\bar{w}}^{0}_{g})-f_{\inf}\big]+\eta L\sigma^{2}\sum_{m=1}^{M}\alpha_{u_{m}}^{2}+\frac{4(1-6L^{2}\eta^{2}p_{s}^{2}\kappa_{1}^{2}\kappa_{2}^{2})}{1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}p_{s}^{2}}\epsilon_{g}^{2}$ $\displaystyle+4\bigg[1+\frac{(1-8\eta^{2}p_{s}^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2})6\eta^{2}L^{2}p_{s}^{2}\kappa_{1}^{2}}{(1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}p_{s}^{2})(1-12\eta^{2}L^{2}\kappa_{1}^{2}p_{s}^{2})}\bigg]\epsilon_{c}^{2}+\frac{8L^{2}\eta^{2}\kappa_{1}p_{s}(\sigma^{2}+(1-p_{s})G^{2})}{1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}p_{s}^{2}}$ $\displaystyle\times\sum_{m=1}^{M}\alpha_{u_{m}}\bigg[2\kappa_{2}(\alpha_{u_{m}}^{c}-\alpha_{u_{m}})+\frac{1-8\eta^{2}p_{s}^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}}{1-12\eta^{2}L^{2}\kappa_{1}^{2}p_{s}^{2}}(1-\alpha_{u_{m}}^{c})\bigg].$ (18)

From the right-hand side of (18), we can see that the convergence error floor depends on the initial states, the divergences between the local, cluster, and global loss functions, the SGD variance, and the rate of descent of the loss functions.

Remark 1: By taking the partial derivatives with respect to $\kappa_{1}$ and $\kappa_{2}$, respectively, we find that $\theta_{HFL}$ monotonically decreases with a decreasing value of $\kappa_{1}$ or $\kappa_{2}$, which implies that the more often the users and edge APs exchange information, the faster the training algorithm converges. Additionally, if the product $\kappa_{1}\kappa_{2}$ is fixed, $\theta_{HFL}$ will decline with a decrease in $\kappa_{1}$ or an increase in $\kappa_{2}$. This observation suggests that exchanging local models frequently helps more than exchanging global models, when the duration of the global aggregation is fixed.

Remark 2: By taking a partial derivative with respect to $p_{s}$, we can see that $\theta_{HFL}$ is not a monotonic function of $p_{s}$; the trend depends on the properties of the training data and the loss functions employed in local training.
Particularly, when $p_{s}=0$, the algorithm will never converge since there is no mobile user participating in the model aggregation, which means a new model aggregation scheme needs to be devised for users with high mobility. In addition, by recalling the definitions of $\epsilon_{g}^{2}$ and $\epsilon_{c}^{2}$, it is clear that the divergences of the local and cluster loss functions with IID data are smaller than those with non-IID data.

###### $\mathbf{Corollary}$ 1

Under the employed FL system, if the weights for the parameter aggregations are chosen as $\alpha_{c_{n}}=\frac{1}{N}$, $\alpha_{u_{m}}=\frac{1}{M}$, and $\alpha_{u_{m}}^{c}=\frac{N}{M}$, and the learning rate is set as $\eta<\frac{1}{\sqrt{12}L\kappa_{1}\kappa_{2}}$, then after $T$ rounds of global aggregations, Algorithm 1 can achieve a $\theta_{HFL}$-suboptimal solution, where $\theta_{HFL}$ is bounded as

$\displaystyle\theta_{HFL}$ $\displaystyle\leq\frac{2(\mathbb{E}f(\mathbf{\bar{w}}^{0}_{g})-f_{\inf})}{\eta p_{s}T}+\frac{\eta L}{M}\sigma^{2}$ $\displaystyle+\frac{4(1-6L^{2}\eta^{2}p_{s}^{2}\kappa_{1}^{2}\kappa_{2}^{2})}{1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}p_{s}^{2}}\epsilon_{g}^{2}+4\bigg[1+\frac{(1-8\eta^{2}p_{s}^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2})6\eta^{2}L^{2}p_{s}^{2}\kappa_{1}^{2}}{(1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}p_{s}^{2})(1-12\eta^{2}L^{2}\kappa_{1}^{2}p_{s}^{2})}\bigg]\epsilon_{c}^{2}$ $\displaystyle+\frac{8L^{2}\eta^{2}\kappa_{1}p_{s}(\sigma^{2}+(1-p_{s})G^{2})}{1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}p_{s}^{2}}\bigg[2\kappa_{2}\frac{N-1}{M}+\frac{1-8\eta^{2}p_{s}^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}}{1-12\eta^{2}L^{2}\kappa_{1}^{2}p_{s}^{2}}\frac{M-N}{M}\bigg].$ (19)

Following Corollary 1, we can see that if $M=N$, we have $\epsilon_{c}^{2}=0$ and $\alpha_{u_{m}}^{c}=1$; if $N=1$, we have $\epsilon_{g}^{2}=0$, $\kappa_{2}=1$, and $\alpha_{u_{m}}^{c}=\alpha_{u_{m}}$, and the case turns into single-cell FL. If $M=N=1$, we have $\theta_{HFL}\leq\frac{2}{\eta p_{s}T}(\mathbb{E}f(\mathbf{\bar{w}}^{0}_{g})-f_{\inf})+\eta L\sigma^{2}$, which accords with centralized learning via the SGD method [29] when $p_{s}=1$.

## IV Mobility-Aware Cluster Federated Learning

In this section, we propose the MACFL algorithm by redesigning the access mechanism, the personalized local model update rule, and the weighted averaging scheme. Specifically, our proposed MACFL algorithm allows a mobile user to download a cluster model from one edge AP and, if it roams to another cell, upload the parameter to the corresponding AP, which allows more mobile users to successfully participate in the model aggregations. Based on the analysis in Theorem 1, the shared cluster models learned by conventional HFL cannot generalize well due to the user mobility and data heterogeneity. Therefore, we also introduce personalized FL and attentive weighted averaging schemes to reduce the performance degradation caused by the SGD variance and the divergences among the local, cluster, and global loss functions. In this section, we apply these state-of-the-art schemes to HFL with mobile users and analyze the convergence rate.

### IV-A Learning Task

In our proposed MACFL algorithm, newly arrived users can update their local models based on the cluster model downloaded from one edge AP, and then upload the updated results to a different edge AP. This mechanism leads to an increasing number of participants.
However, considering user mobility, the shared cluster model aims to learn a common model for the time-varying user set and does not adapt to each user. Inspired by personalized FL in single-cell networks, such as Per-FedAvg [14], we propose a novel personalized cluster FL for hierarchical wireless networks to learn shared cluster and global models for each server and personalized local models for each mobile user at the same time. Moreover, motivated by attention-based FL in single-cell networks, such as FedAMP [16], we introduce an attentive collaboration mechanism into HFL to boost the collaboration effectiveness between users without leakage of private data, which can also mitigate the impact of user mobility and data heterogeneity.

In our work, we aim to learn personalized local and cluster models and a shared global model at the same time. Similar to [14], we first rewrite the global loss function in the following way

$\displaystyle\tilde{f}(\{\mathbf{w}_{u_{m}}\}_{m=1}^{M},\mathbf{w}_{g},\mathcal{D})=\sum_{m=1}^{M}\beta_{u_{m}}\tilde{F}_{u_{m}}(\mathbf{w}_{u_{m}},\mathbf{w}_{g},\mathcal{D}_{u_{m}}),$ $\displaystyle\tilde{F}_{u_{m}}(\mathbf{w}_{u_{m}},\mathbf{w}_{g},\mathcal{D}_{u_{m}})=F_{u_{m}}((\mathbf{w}_{g}-\rho\nabla F_{u_{m}}(\mathbf{w}_{u_{m}},\mathcal{D}_{u_{m}})),\mathcal{D}_{u_{m}})$ (20)

where $\tilde{f}(\{\mathbf{w}_{u_{m}}\}_{m=1}^{M},\mathbf{w}_{g},\mathcal{D})$ and $\tilde{F}_{u_{m}}(\mathbf{w}_{u_{m}},\mathbf{w}_{g},\mathcal{D}_{u_{m}})$ denote the global and local loss functions in the MACFL algorithm, with $F_{u_{m}}$ defined in (2), and $\beta_{u_{m}}$ is the weight assigned to the local model learned by $u_{m}$ at the global aggregation. The global model $\mathbf{w}_{g}$ is obtained by the collaborative learning of all users, and the personalized local model is learned by each user based on the shared model and one more step of gradient descent, which can be computed as $\mathbf{w}_{g}-\rho\nabla F_{u_{m}}(\mathbf{w}_{u_{m}},\mathcal{D}_{u_{m}})$, with $\rho$ denoting the step size of local training.

### IV-B Algorithm Description

#### IV-B1 Local Update

With the loss function given in (20), at the $b$-th edge communication round, each user will first download the latest cluster model from its nearest edge AP, i.e., $\mathbf{w}^{b\kappa_{1}}_{u_{m}}=\mathbf{w}^{b\kappa_{1}}_{c_{n}}$ if $u_{m}\in\mathcal{C}_{n}^{b\kappa_{1}}$, and then perform $\kappa_{1}\geq 1$ steps of local updates via SGD. Let $t=b\kappa_{1}+j,j=0,...,\kappa_{1}-1$; the evolution of each iteration can be expressed as

$\displaystyle\mathbf{w}^{t+1}_{u_{m}}=\mathbf{w}^{t}_{u_{m}}-\eta\tilde{g}(\mathbf{w}^{t}_{u_{m}}),~\text{with }\tilde{g}(\mathbf{w}^{t}_{u_{m}})=(\mathbf{I}-\rho\nabla g(\mathbf{w}^{t}_{u_{m}}))g(\mathbf{w}^{t}_{u_{m}}-\rho g(\mathbf{w}^{t}_{u_{m}}))$ (21)

where $\tilde{g}(\mathbf{w}^{t}_{u_{m}})$ denotes the mini-batch gradient of the local loss function $\tilde{F}_{u_{m}}(\mathbf{w}^{t}_{u_{m}})$ given in (20) at the $j$-th local update after the $b$-th edge communication round, $\mathbf{I}$ denotes the identity matrix, and $\nabla g(\mathbf{w}^{t}_{u_{m}})$ denotes the gradient of the mini-batch gradient $g(\mathbf{w}^{t}_{u_{m}})$ given in (11). As stated in [14], the gradient estimate can be replaced by its first-order approximation, with a slight loss of accuracy, in order to avoid the high computational complexity of the Hessian term, namely, $\tilde{g}(\mathbf{w}^{t}_{u_{m}})\approx g(\mathbf{w}^{t}_{u_{m}}-\rho g(\mathbf{w}^{t}_{u_{m}}))$.
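For concreteness, a sketch (ours, not the authors') of one personalized local step with the first-order approximation above is given below; drawing two independent mini-batches for the inner and outer gradients is an implementation choice we assume, not something fixed by the text:

```python
def macfl_local_step(w, grad_fn, batch_inner, batch_outer, eta, rho):
    # Inner step: the one-step adapted model w - rho * g(w), cf. (21).
    w_adapted = w - rho * grad_fn(w, batch_inner)
    # Outer step with the first-order surrogate of tilde_g(w): the
    # Hessian-dependent factor (I - rho * grad g(w)) is dropped.
    return w - eta * grad_fn(w_adapted, batch_outer)
```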
#### IV-B2 Edge Update

In conventional FL, the aggregation strategy usually weights the importance of each local model equally or proportionally to the size of the local data. However, this averaging scheme might not reflect the true weight of each user's local data in the mixture data distributions underlying the cluster and global models. Moreover, the set of mobile users in each cluster changes every communication round, and newly arrived users might carry a different cluster model from the last iteration; therefore, a new edge update method needs to be designed. As shown in [10], the performance of model aggregation depends on a reasonable design of the weight coefficients. Motivated by the attention scheme employed in FedAMP [16], we redesign the edge update rule for the $(b+1)$-th edge communication round as follows

$\mathbf{w}_{c_{n}}^{(b+1)\kappa_{1}}=\sum_{u_{m}\in\mathcal{C}_{n}^{(b+1)\kappa_{1}}}\beta_{u_{m}}^{c}\mathbf{w}^{(b+1)\kappa_{1}}_{u_{m}},~\text{with }\beta_{u_{m}}^{c}=\frac{e^{-\sigma_{1}\cos(\mathbf{w}_{u_{m}}^{(b+1)\kappa_{1}},\mathbf{w}_{c_{n}}^{b\kappa_{1}})}}{\sum_{u_{m}\in\mathcal{C}_{n}^{(b+1)\kappa_{1}}}e^{-\sigma_{1}\cos(\mathbf{w}_{u_{m}}^{(b+1)\kappa_{1}},\mathbf{w}_{c_{n}}^{b\kappa_{1}})}},$ (22)

where the attention coefficient $\beta_{u_{m}}^{c}$ denotes the linear combination weight of the model parameters aggregated from user $u_{m}$ to AP $c_{n}$, $\sigma_{1}$ is a scalar hyper-parameter, and $\cos(\mathbf{w}_{u_{m}}^{(b+1)\kappa_{1}},\mathbf{w}_{c_{n}}^{b\kappa_{1}})$ is the cosine similarity between $\mathbf{w}_{u_{m}}^{(b+1)\kappa_{1}}$ and $\mathbf{w}_{c_{n}}^{b\kappa_{1}}$, defined as $\cos(\mathbf{x},\mathbf{y})=\frac{\langle\mathbf{x},\mathbf{y}\rangle}{\|\mathbf{x}\|\,\|\mathbf{y}\|}$. In this work, we employ a simple but effective attention mechanism with only one scalar hyper-parameter. Compared with other clustered FL schemes with deterministic weights, the weight coefficients in our work adapt to the uploaded models. There are more complex methods to learn attention coefficients, such as neural networks [30], which would result in a larger computation overhead. Future work will focus on sophisticated attention schemes under limited computation resources. After the edge update, the edge server distributes the cluster model to its connected users, and when it is time for a cloud update, the cluster model is sent to the cloud server.

#### IV-B3 Cloud Update

When $b\mod\kappa_{2}=0$, the cloud server collects the cluster models from all the edge APs and updates the global parameter as follows

$\displaystyle{\mathbf{w}}_{g}^{b\kappa_{1}}=\sum_{n=1}^{N}\beta_{c_{n}}{\mathbf{w}}^{b\kappa_{1}}_{c_{n}},~\text{with }\beta_{c_{n}}=\frac{e^{-\sigma_{2}\cos(\mathbf{w}_{c_{n}}^{b\kappa_{1}},\mathbf{w}_{g}^{(b-\kappa_{2})\kappa_{1}})}}{\sum_{n=1}^{N}e^{-\sigma_{2}\cos(\mathbf{w}_{c_{n}}^{b\kappa_{1}},\mathbf{w}_{g}^{(b-\kappa_{2})\kappa_{1}})}}$ (23)

where $\beta_{c_{n}}$ is the attention coefficient and $\sigma_{2}$ is a scalar hyper-parameter. After the cloud update, the cloud server sends the latest global model to all edge APs, and all edge servers distribute the latest shared model to their connected users for a new round of local computing.
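The two attentive updates share the same structure, so a single routine can serve both. The following sketch (ours; all names are illustrative) computes the softmax weights of (22)–(23) and the resulting weighted average, using the previous cluster model as the reference for the edge update and the previous global model for the cloud update:

```python
import numpy as np

def cosine(x, y):
    # Cosine similarity used in (22)-(23).
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def attentive_average(models, w_ref, sigma):
    # Attention weights beta from (22)/(23): each uploaded model is
    # weighted by exp(-sigma * cos(model, reference)), normalized over
    # all uploads received in this round.
    scores = np.array([-sigma * cosine(w, w_ref) for w in models])
    scores -= scores.max()  # subtract the max for numerical stability
    beta = np.exp(scores) / np.sum(np.exp(scores))
    w_new = sum(b * w for b, w in zip(beta, models))
    return w_new, beta
```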
### IV-C Convergence Analysis

By invoking approaches similar to those in Section III, we can analyze the convergence rate of the proposed training scheme.

###### $\mathbf{Theorem}$ 2

For the employed FL system, if the learning rate is chosen as $\eta\leq\frac{1}{\sqrt{12}L\kappa_{1}\kappa_{2}}$, then after $T$ rounds of global aggregations, Algorithm 2 can achieve a $\theta_{MACFL}$-suboptimal solution, where $\theta_{MACFL}$ is bounded as follows

$\displaystyle\theta_{MACFL}$ $\displaystyle\leq\frac{2}{\eta T}\big[\mathbb{E}\tilde{f}(\mathbf{\bar{w}}^{0}_{g})-\tilde{f}_{\inf}\big]+\eta L\sigma_{M}^{2}\sum_{m=1}^{M}\beta_{u_{m}}^{2}$ $\displaystyle+\frac{12L^{2}\eta^{2}\kappa_{1}^{2}\kappa_{2}^{2}}{1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}}\epsilon_{M,g}^{2}+\frac{12L^{2}\eta^{2}\kappa_{1}^{2}(1-8\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2})}{(1-12\eta^{2}L^{2}\kappa_{1}^{2})(1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2})}\epsilon_{M,c}^{2}$ $\displaystyle+\frac{4L^{2}\eta^{2}\kappa_{1}}{1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}}\sigma_{M}^{2}\sum_{m=1}^{M}\beta_{u_{m}}\bigg(\frac{1-8\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}}{1-12\eta^{2}L^{2}\kappa_{1}^{2}}(1-\beta_{u_{m}}^{c})+2\kappa_{2}(\beta_{u_{m}}^{c}-\beta_{u_{m}})\bigg).$ (24)

###### Proof:

Please refer to Appendix -D. ∎

## V Performance Evaluation

In this section, we conduct an experimental evaluation of the cluster FL algorithm on heterogeneous model and dataset configurations to verify the efficacy of our analysis and the proposed scheme.

### V-A Experimental Settings

In our experiments, we consider an FL system consisting of 50 users, 5 clusters formed by the APs, and one cloud server, where the 5 clusters constitute a linear graph as shown in Fig. 2. All users are uniformly and randomly assigned to the 5 clusters at the initial time. We consider an image classification learning task based on the standard MNIST dataset [31], which consists of 60,000 images for training and 10,000 images for testing and contains 10 different hand-written digits. At the user side, we consider convolutional neural network (CNN) models with cross-entropy loss functions for local training. The mini-batch size at each SGD step is set as 10, and the learning rate is set as 0.001. We set the total number of iterations to 2000. Because data heterogeneity is critical in FL, we employ two configurations of data distributions: the IID and non-IID dataset cases. The size of the local dataset owned by each user is set as $|\mathcal{D}_{u_{m}}|=600$ for both the IID and non-IID dataset cases. As for the IID dataset case, we assume the training dataset of each user is an IID sample of the global training dataset. As for the non-IID dataset case, we consider the pathological non-IID setting in [1], in which each user only has at most two categories of digits.

### V-B Impact of User Mobility on Conventional HFL

We first carry out experiments to assess the convergence rate of the HFL algorithm with mobile users, so as to illustrate the impact of user mobility under different system parameters. Prior to giving out the results, we would like to note that, on the one hand, user mobility leads to a decreasing number of participants at each model aggregation round, which may result in missing training data and an under-fitted training model. On the other hand, mobility of the users also has the potential to shuffle the data, which helps reduce the divergences among clusters and construct a representative sample at each SGD step of the global model.

#### V-B1 Impact of the Frequency of Aggregation

Figure 3: Evaluation of test accuracy performance with different $\kappa_{1},\kappa_{2}$, when $p_{s}=0.5,M=50$ and $N=5$.
Fig. 3 demonstrates the impact of the parameter aggregation intervals, $\kappa_{1}$ and $\kappa_{2}$, on the accuracy of the trained model. It is worthwhile to note that $\kappa_{1}$ and $\kappa_{2}$ characterize the frequencies of edge aggregation and global aggregation, respectively, and there are $\kappa_{2}$ communication rounds between the users and the edge servers and one communication round between the cloud and the edge servers during one cloud update procedure. From Fig. 3 we can see that the convergence rates under the IID and non-IID datasets can be boosted by reducing $\kappa_{1}$ or $\kappa_{2}$, i.e., increasing the frequency of communications between either the users and edge APs or the edge APs and cloud server. Additionally, if we fix the product of $\kappa_{1}$ and $\kappa_{2}$, the convergence rate is accelerated with decreasing $\kappa_{1}$, which suggests that allowing more frequent edge aggregations helps to enhance the learning performance. These observations verify the convergence analysis given in Theorem 1 and accord with related multi-layer FL settings [20] and [21]. This result differs from the conclusions drawn for single-cell FL [1], where the test accuracy can be improved by increasing $\kappa_{1}$. The main reason is that increasing $\kappa_{1}$ enlarges the divergences among the local models, and increasing $\kappa_{2}$ aggravates the weight differences among the cluster models. Similar to [21], our results show that a high frequency of local averaging, instead of a large number of local update iterations, is beneficial to the learning performance in HFL. Moreover, it can be seen that the learning performance in the IID dataset case is better than that in the non-IID dataset case, which also coincides with the conclusions indicated by Theorem 1. This is because the divergences of the local and cluster loss functions and the SGD variance are smaller in the IID dataset case than in the non-IID dataset case.

#### V-B2 Impact of Staying Probability

Figure 4: Evaluation of test accuracy performance with different $p_{s}$ when $\kappa_{1}=20,\kappa_{2}=1,M=50$ and $N=5$.

In Fig. 4, we illustrate the impact of the user staying probability $p_{s}$ on the convergence rate of the HFL algorithm. From this figure, we first notice that the convergence rate changes only slightly in the IID dataset case for different $p_{s}>0$, which implies that user mobility has a mild effect on the learning performance as long as a portion of the mobile users can participate in the model aggregation. This is because the test accuracy is insensitive to the number of participants for ideal FL with IID datasets in both single-cell [1] and multi-cell [19] networks, as long as the mini-batch sample is a representative IID sample of the entire dataset. Therefore, the test accuracy will not deteriorate severely by reducing $p_{s}$ as long as $p_{s}\neq 0$. For the non-IID dataset case, we observe that increasing $p_{s}$ brings a marked gain in the learning performance. This comes from the fact that when the dataset is non-IID, a large number of participants is more likely to contribute a representative sample at each model aggregation step [1, 19]. Therefore, increasing $p_{s}$ retains more effective participants in each cluster, which can enhance the learning performance with non-IID datasets. Moreover, the staying probability is also related to the impact of $\kappa_{1}$ in Fig. 3 in practice.
Particularly, a small $\kappa_{1}$ means a shorter duration of local updates, and thus the staying probability during the local update becomes larger, which helps to speed up HFL in the non-IID dataset case. Additionally, increasing $p_{s}>0$ also helps to reduce the fluctuations of the convergence curves, especially in the non-IID dataset case. Two special scenarios are also noteworthy: when $p_{s}=0$, no updated results are successfully collected by the edge servers due to the high mobility, and the test accuracy merely depends on the initialization model for both the IID and non-IID dataset cases; when $p_{s}=1$, the setting turns into the conventional HFL with static users, and the test accuracy with $p_{s}=1$ in the IID dataset case is higher than in the non-IID dataset case, which is reasonable.

#### V-B3 Impact of the Number of Users

Figure 5: Evaluation of test accuracy performance with different $M$ when $p_{s}=0.5,\kappa_{1}=20,\kappa_{2}=1$ and $N=5$.

Fig. 5 depicts the convergence rate of HFL under different user numbers $M$. It can be seen that the convergence rate increases with respect to $M$ for both the IID and non-IID dataset cases, which accords with the convergence analysis of conventional FL [20] and [29]. Additionally, increasing $M$ also helps to reduce the fluctuations of the convergence curves, especially when the total number of global aggregations is small. This suggests that increasing the number of participating users benefits the learning performance when $T$ is small for the IID dataset. This result confirms our analysis given in Theorem 1, namely, updates via SGD from a larger number of users reduce the variance of SGD at each iteration, which is also in line with [27].

### V-C Comparison between HFL and Our Proposed MACFL Algorithm

We now turn our attention to the performance of the proposed MACFL. In this part, we perform several comparisons between our proposed MACFL and the conventional HFL algorithm. For a fair comparison, we adopt the same system and common hyperparameters, namely $\eta$, $\kappa_{1}$, $\kappa_{2}$, and $M$, for both algorithms. We also use the same initial state for both algorithms. For the hyperparameters used only in the MACFL algorithm, namely $\sigma_{1}$, $\sigma_{2}$, and $\rho$, we conduct a grid search to find a fine-tuned combination, guided by the experiences of Per-FedAvg [14] and FedAMP [16]. In our simulations, we set $\sigma_{1}=\sigma_{2}=25$ and $\rho=0.001$ for both the IID and non-IID datasets.

Figure 6: Comparison of test accuracy performance between HFL and our proposed MACFL algorithm with different data distributions and system parameters $\kappa_{1},\kappa_{2}$ when $p_{s}=0.5,M=50$ and $N=5$.

Fig. 6 plots the convergence rates of the MACFL and HFL algorithms under different values of $\kappa_{1}$ and $\kappa_{2}$. Firstly, the convergence rate of our proposed MACFL algorithm can be enhanced by decreasing $\kappa_{1}$ and $\kappa_{2}$ for both IID and non-IID data distributions, which is consistent with Theorem 2. Secondly, our proposed MACFL algorithm attains a faster convergence rate than the HFL algorithm in all cases. Particularly, for the IID dataset case, the test accuracy can be improved from $87.75\%$ to $89.10\%$ with $\kappa_{1}=20,\kappa_{2}=2$ by using our proposed scheme, from $92.96\%$ to $94.01\%$ with $\kappa_{1}=20,\kappa_{2}=1$, and from $93.32\%$ to $94.57\%$ with $\kappa_{1}=10,\kappa_{2}=1$.
As for the non-IID dataset case, the performance gains are more pronounced: the test accuracy can be improved from $58.92\%$ to $66.07\%$ with $\kappa_{1}=20,\kappa_{2}=2$ by the MACFL algorithm, from $77.95\%$ to $86.01\%$ with $\kappa_{1}=20,\kappa_{2}=1$, and from $85.35\%$ to $91.97\%$ with $\kappa_{1}=10,\kappa_{2}=1$. Moreover, the fluctuations in the convergence curves can also be largely reduced, especially in the non-IID dataset case. Such an improvement in the learning performance is ascribed to the sophisticated weighted averaging scheme and the more effective model update and aggregation rules employed in our proposed MACFL algorithm.

Figure 7: Comparison of test accuracy performance between HFL and our proposed MACFL algorithm with different data distributions and system parameter $p_{s}$ when $\kappa_{1}=20,\kappa_{2}=1,M=50$ and $N=5$.

In Fig. 7, we compare the performance of our proposed MACFL and the conventional HFL algorithm under various staying probabilities $p_{s}$ of the mobile users. The first noteworthy observation is that, unlike the HFL algorithm, the test accuracy of our proposed MACFL algorithm is not sensitive to variations of $p_{s}$ for both IID and non-IID data distributions. This phenomenon can be explained by our proposed model aggregation rule, in which the staying probability $p_{s}$ does not affect the number of effective participants. Secondly, it shows that our proposed algorithm outperforms the conventional HFL algorithm with mobile users for different $p_{s}$ settings, especially for $p_{s}=0$, where the test accuracy can be improved from $11.37\%$ to $93.86\%$ in the IID dataset case by using our proposed scheme, and from $11.37\%$ to $80.85\%$ in the non-IID dataset case. Unlike the HFL algorithm with $p_{s}=0$, in which the global model never gets updated due to the extremely high mobility of the users, our proposed model aggregation rule allows all the mobile users to upload their computed results at each communication round, even though no user stays connected to the same tagged edge AP during the local training phase. Moreover, our algorithm also helps to reduce the fluctuations of the convergence curves, especially in the non-IID dataset case, owing to the attentive weighting scheme in the proposed algorithm.

Figure 8: Comparison of test accuracy performance between HFL and our proposed MACFL algorithm with different data distributions and system parameter $M$ when $p_{s}=0.5,\kappa_{1}=20,\kappa_{2}=1$ and $N=5$.

In Fig. 8, the test accuracy performance of our proposed MACFL algorithm under different $M$ is evaluated, with the conventional HFL with mobile users chosen as a benchmark. Firstly, similar to the HFL algorithm, we note that the test accuracy of our proposed MACFL algorithm can be improved by increasing $M$ for both IID and non-IID data distributions. This can be explained by the same reasons as for the HFL algorithm, and the results confirm our convergence analysis given in Theorem 2. Secondly, it shows that our proposed algorithm outperforms the conventional HFL algorithm with mobile users, especially for small values of $M$. For the IID dataset case, the test accuracy can be improved from $65.83\%$ to $71.31\%$ with $M=5$ by our proposed scheme, from $93.06\%$ to $94.01\%$ with $M=50$, and from $95.47\%$ to $95.93\%$ with $M=100$. As for the non-IID dataset case, the improvements are more pronounced.
In particular, as Fig. 8 shows, the test accuracy can be improved from $45.89\%$ to $56.85\%$ with $M=5$ by using our proposed averaging scheme, from $77.95\%$ to $86.01\%$ with $M=50$, and from $85.32\%$ to $93.60\%$ with $M=100$. Such enhancements are mainly owed to our proposed model update and aggregation rules, which help each user learn a more accurate personalized local model and help both the edge servers and the cloud server construct more precise shared models; this also unveils the importance of a well-designed averaging scheme in an FL system.

## VI Conclusion

In this paper, we studied the performance of FL in the context of a hierarchical wireless network consisting of one cloud server, multiple edge APs, and a large number of mobile users. We derived analytical expressions for the convergence rate of FL in the conventional setup, by accounting for several key factors of the system, including the heterogeneity arising from both the dataset and the network architecture, as well as the user mobility. The analysis revealed that increasing the communication frequency amongst the users and their connected APs can accelerate the convergence rate. In contrast, an increase in user mobility leads to a dropout of participants and decreases the convergence rate. Furthermore, by exploiting the correlations amongst the parameters of the mobile users and the edge APs, we proposed a mobility-aware scheme to aggregate the users' parameters. The efficacy of the proposed approach is corroborated via extensive experiments, where the gain over the conventional training method is particularly pronounced when mobile users possess non-IID datasets. Future extensions of this work can focus on a more delicate design of HFL with mobile users, such as personalized local model updates, data-driven model aggregation schemes, etc.

### -A Proof of Lemma 1

In this appendix, we use $g(\mathbf{w}_{u_{m}}^{t})$, $F(\mathbf{w}_{u_{m}}^{t})$, $f_{c_{i}}(\mathbf{w}_{c_{i}}^{t})$ and $f(\mathbf{w}_{g}^{t})$ as shorthand for the mini-batch gradients and the local, cluster, and global loss functions. Similar to [20, 21], we use the averaged global model parameters for the analysis even though they are not observable at each iteration; the virtual global model parameters evolve as $\mathbf{\bar{w}}_{g}^{t+1}=\mathbf{\bar{w}}_{g}^{t}-\eta\sum_{m=1}^{M}\alpha_{u_{m}}g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}$. Then we have

$\displaystyle\mathbb{E}f(\mathbf{\bar{w}}^{t+1}_{g})$ $\displaystyle=\mathbb{E}f\left(\mathbf{\bar{w}}^{t}_{g}-\eta\sum_{m=1}^{M}\alpha_{u_{m}}g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}\right)$ $\displaystyle\overset{a}{\leq}\mathbb{E}f(\mathbf{\bar{w}}^{t}_{g})+\frac{\eta^{2}L}{2}\mathbb{E}\|\sum_{m=1}^{M}\alpha_{u_{m}}g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}\|^{2}-\eta\mathbb{E}<\nabla f(\mathbf{\bar{w}}^{t}_{g}),\sum_{m=1}^{M}\alpha_{u_{m}}g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}>$ (25)

where step (a) holds because of the $L$-smoothness of the loss functions, i.e., $f(\mathbf{y})\leq f(\mathbf{x})+\langle\nabla f(\mathbf{x}),\mathbf{y}-\mathbf{x}\rangle+\frac{L}{2}\|\mathbf{y}-\mathbf{x}\|^{2}$.
For the second term in (25), we have

$\displaystyle\mathbb{E}\|\sum_{m=1}^{M}\alpha_{u_{m}}g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}\|^{2}\overset{a}{=}$ $\displaystyle\mathbb{E}\|\sum_{m=1}^{M}\alpha_{u_{m}}\left(g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-p_{s}\nabla F(\mathbf{w}_{u_{m}}^{t})\right)\|^{2}+\mathbb{E}\|p_{s}\sum_{m=1}^{M}\alpha_{u_{m}}\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}$ $\displaystyle\overset{b}{\leq}$ $\displaystyle\mathbb{E}\|\sum_{m=1}^{M}\alpha_{u_{m}}\left(g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-p_{s}\nabla F(\mathbf{w}_{u_{m}}^{t})\right)\|^{2}+p_{s}^{2}\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\|\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}$ (26)

where step (a) in (26) holds because $\mathbb{E}\|\mathbf{x}\|^{2}=\mathbb{E}\|\mathbf{x}-\mathbb{E}\mathbf{x}\|^{2}+\|\mathbb{E}\mathbf{x}\|^{2}$, and step (b) in (26) holds because $\|\sum_{i=1}^{N}a_{i}x_{i}\|^{2}\leq\sum_{i=1}^{N}a_{i}\|x_{i}\|^{2}$ with $0\leq a_{i}\leq 1,\sum_{i=1}^{N}a_{i}=1$. For the first term in inequality (26), we have

$\displaystyle\mathbb{E}\|\sum_{m=1}^{M}\alpha_{u_{m}}\left(g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-p_{s}\nabla F(\mathbf{w}_{u_{m}}^{t})\right)\|^{2}$ $\displaystyle=$ $\displaystyle\mathbb{E}\bigg\|\sum_{m=1}^{M}\alpha_{u_{m}}\bigg(g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-\nabla F(\mathbf{w}_{u_{m}}^{t})+(1-p_{s})\nabla F(\mathbf{w}_{u_{m}}^{t})\bigg)\bigg\|^{2}$ $\displaystyle=$ $\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}^{2}\mathbb{E}\bigg\|g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-\nabla F(\mathbf{w}_{u_{m}}^{t})+(1-p_{s})\nabla F(\mathbf{w}_{u_{m}}^{t})\bigg\|^{2}+\sum_{m=1}^{M}\sum_{j=1,j\neq m}^{M}\alpha_{u_{m}}\alpha_{u_{j}}\mathbb{E}\bigg\{<g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}$ $\displaystyle-\nabla F(\mathbf{w}_{u_{m}}^{t})+(1-p_{s})\nabla F(\mathbf{w}_{u_{m}}^{t}),g(\mathbf{w}_{u_{j}}^{t})\mathbb{I}_{u_{j}}^{t}-\nabla F(\mathbf{w}_{u_{j}}^{t})+(1-p_{s})\nabla F(\mathbf{w}_{u_{j}}^{t})>\bigg\}$ (27)

For the first term in (27), we have

$\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}^{2}\mathbb{E}\bigg\|g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-\nabla F(\mathbf{w}_{u_{m}}^{t})+(1-p_{s})\nabla F(\mathbf{w}_{u_{m}}^{t})\bigg\|^{2}$ $\displaystyle=$ $\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}^{2}\bigg(\mathbb{E}\|g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}+(1-p_{s})^{2}\mathbb{E}\|\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}+\mathbb{E}<(1-p_{s})\nabla F(\mathbf{w}_{u_{m}}^{t}),$ $\displaystyle g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-\nabla F(\mathbf{w}_{u_{m}}^{t})>+\mathbb{E}<g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-\nabla F(\mathbf{w}_{u_{m}}^{t}),(1-p_{s})\nabla F(\mathbf{w}_{u_{m}}^{t})>\bigg)$ $\displaystyle\overset{a}{=}$ $\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}^{2}\mathbb{E}\|g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}-\sum_{m=1}^{M}\alpha_{u_{m}}^{2}(1-p_{s})^{2}\mathbb{E}\|\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}$ $\displaystyle\leq$ $\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}^{2}\bigg(p_{s}\sigma^{2}+p_{s}(1-p_{s})\mathbb{E}\|\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}\bigg),$ (28)

where step (a) holds because the mini-batch gradients are assumed to be unbiased with bounded variance and the mobility variable is independent of the mini-batch sampling, so that $\mathbb{E}\{g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}\}=p_{s}\nabla F(\mathbf{w}_{u_{m}}^{t})$.
For the second term in inequality (27), we have $\sum_{m=1}^{M}\sum_{j=1,j\neq m}^{M}\alpha_{u_{m}}\alpha_{u_{j}}\mathbb{E}\big\{<g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}-\nabla F(\mathbf{w}_{u_{m}}^{t})+(1-p_{s})\nabla F(\mathbf{w}_{u_{m}}^{t}),g(\mathbf{w}_{u_{j}}^{t})\mathbb{I}_{u_{j}}^{t}-\nabla F(\mathbf{w}_{u_{j}}^{t})$ $+(1-p_{s})\nabla F(\mathbf{w}_{u_{j}}^{t})>\big\}=0$ because of the unbiasedness of the mini-batch gradients and the independence between user mobility and random sampling. Plugging (28) back into (27) and then into (26), we have

$\displaystyle\mathbb{E}\|\sum_{m=1}^{M}\alpha_{u_{m}}g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}\|^{2}\leq p_{s}\sigma^{2}\sum_{m=1}^{M}\alpha_{u_{m}}^{2}+\sum_{m=1}^{M}\alpha_{u_{m}}p_{s}\left(p_{s}+\alpha_{u_{m}}(1-p_{s})\right)\mathbb{E}\|\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}$ (29)

For the third term in (25), we have

$\displaystyle-\eta\mathbb{E}<\nabla f(\mathbf{\bar{w}}_{g}^{t}),\sum_{m=1}^{M}\alpha_{u_{m}}g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}>$ $\displaystyle\overset{a}{=}$ $\displaystyle-\eta p_{s}\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}<\nabla f(\mathbf{\bar{w}}_{g}^{t}),\nabla F(\mathbf{w}_{u_{m}}^{t})>$ $\displaystyle\overset{b}{=}$ $\displaystyle-\frac{\eta p_{s}}{2}\sum_{m=1}^{M}\alpha_{u_{m}}\bigg(\mathbb{E}\|\nabla f(\mathbf{\bar{w}}^{t}_{g})\|^{2}+\mathbb{E}\|\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}-\mathbb{E}\|\nabla f(\mathbf{\bar{w}}^{t}_{g})-\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}\bigg)$ (30)

where a conditional expectation is taken in step (a), and step (b) holds because $<\mathbf{x},\mathbf{y}>=\frac{1}{2}(\|\mathbf{x}\|^{2}+\|\mathbf{y}\|^{2}-\|\mathbf{x}-\mathbf{y}\|^{2})$. For the last term in (30), we have

$\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\|\nabla f(\mathbf{\bar{w}}^{t}_{g})-\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}$ $\displaystyle=$ $\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\bigg\|\nabla f(\mathbf{\bar{w}}^{t}_{g})\pm\nabla f(\mathbf{\bar{w}}^{t}_{c_{u_{m}}})\pm\nabla f_{c_{u_{m}}}(\mathbf{\bar{w}}^{t}_{c_{u_{m}}})\pm\nabla F(\mathbf{\bar{w}}^{t}_{c_{u_{m}}})-\nabla F(\mathbf{w}_{u_{m}}^{t})\bigg\|^{2}$ $\displaystyle\overset{a}{\leq}$ $\displaystyle 4L^{2}\sum_{i=1}^{N}\alpha_{c_{i}}\mathbb{E}\|\mathbf{\bar{w}}^{t}_{g}-\mathbf{\bar{w}}_{c_{i}}^{t}\|^{2}+4\epsilon_{g}^{2}+4L^{2}\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\|\mathbf{\bar{w}}_{c_{u_{m}}}^{t}-\mathbf{w}_{u_{m}}^{t}\|^{2}+4\epsilon_{c}^{2}$ (31)

where $\mathbf{\bar{w}}_{c_{u_{m}}}^{t}$ denotes the averaged cluster model owned by the edge AP in which $u_{m}$ is located at the $t$-th iteration, and step (a) holds because of the smoothness assumption and the bounded divergences among the local, cluster and global loss functions.
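As a quick numerical sanity check of the elementary identities invoked in (26) and (30), the following NumPy sketch (an illustration only, not part of the proof) verifies the variance decomposition, the convexity bound, and the polarization identity on random data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Variance decomposition behind step (a) of (26):
#   E||x||^2 = E||x - E[x]||^2 + ||E[x]||^2   (note the plus sign)
x = rng.normal(loc=1.5, size=(200000, 3))
mean = x.mean(axis=0)
lhs = np.mean(np.sum(x**2, axis=1))
rhs = np.mean(np.sum((x - mean)**2, axis=1)) + np.sum(mean**2)
assert np.isclose(lhs, rhs, rtol=1e-3)

# Convexity (Jensen) bound behind step (b) of (26):
#   ||sum_i a_i x_i||^2 <= sum_i a_i ||x_i||^2  for a_i >= 0, sum_i a_i = 1
a = rng.dirichlet(np.ones(5))
xs = rng.normal(size=(5, 3))
assert np.sum(np.sum(a[:, None] * xs, axis=0)**2) <= np.sum(a * np.sum(xs**2, axis=1))

# Polarization identity behind step (b) of (30):
#   <x, y> = (||x||^2 + ||y||^2 - ||x - y||^2) / 2
u, v = rng.normal(size=3), rng.normal(size=3)
assert np.isclose(u @ v, 0.5 * (u @ u + v @ v - np.sum((u - v)**2)))
```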
Plugging (29), (30) and (31) back into inequality (25), rearranging terms, dividing both sides by $\frac{\eta p_{s}}{2}$ and taking the average over time, we have

$\displaystyle\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\|\nabla f(\mathbf{\bar{w}}^{t}_{g})\|^{2}$ $\displaystyle\leq\frac{2}{\eta p_{s}T}(\mathbb{E}f(\mathbf{\bar{w}}^{0}_{g})-f_{\inf})+\eta L\sigma^{2}\sum_{m=1}^{M}\alpha_{u_{m}}^{2}+4\epsilon_{g}^{2}+4\epsilon_{c}^{2}$ $\displaystyle+\frac{4L^{2}}{T}\sum_{t=0}^{T-1}\sum_{i=1}^{N}\alpha_{c_{i}}\mathbb{E}\|\mathbf{\bar{w}}^{t}_{g}-\mathbf{\bar{w}}_{c_{i}}^{t}\|^{2}+\frac{4L^{2}}{T}\sum_{t=0}^{T-1}\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\|\mathbf{\bar{w}}_{c_{u_{m}}}^{t}-\mathbf{w}_{u_{m}}^{t}\|^{2}$ $\displaystyle+\frac{1}{T}\sum_{t=0}^{T-1}\sum_{m=1}^{M}\alpha_{u_{m}}\left(\eta L(p_{s}+\alpha_{u_{m}}(1-p_{s}))-1\right)\mathbb{E}\|\nabla F(\mathbf{w}_{u_{m}}^{t})\|^{2}$ (32)

The last term in (32) characterizes the efficiency of gradient descent for the specific loss functions; if the learning rate is small enough, this term is non-positive. Specifically, if $\eta\leq\frac{1}{L}$, we have $\eta L(p_{s}+\alpha_{u_{m}}(1-p_{s}))\leq 1$ since $0\leq p_{s}\leq 1$ and $0\leq\alpha_{u_{m}}\leq 1$. This concludes the proof of Lemma 1.

### -B Proof of Lemma 2

For the MSE between the local and cluster model parameters, if $t\bmod\kappa_{1}=0$ we have $\mathbf{w}_{u_{m}}^{t}=\mathbf{w}_{c_{u_{m}}}^{t}$; otherwise, let $b=\lfloor\frac{t}{\kappa_{1}}\rfloor$, and we have

$\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\|\mathbf{w}^{t}_{u_{m}}-\mathbf{\bar{w}}_{c_{u_{m}}}^{t}\|^{2}$ $\displaystyle=$ $\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\bigg\|\mathbf{w}^{b\kappa_{1}}_{u_{m}}-\eta\sum_{\tau=b\kappa_{1}}^{t-1}g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-\bigg(\mathbf{\bar{w}}_{c_{u_{m}}}^{b\kappa_{1}}-\eta\sum_{u_{j}\in\mathcal{C}_{u_{m}}}\sum_{\tau=b\kappa_{1}}^{t-1}\alpha_{u_{j}}^{c}g(\mathbf{w}^{\tau}_{u_{j}})\mathbb{I}^{\tau}_{u_{j}}\bigg)\bigg\|^{2}$ $\displaystyle=$ $\displaystyle\eta^{2}\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\bigg\|\sum_{\tau=b\kappa_{1}}^{t-1}\bigg(\sum_{u_{j}\in\mathcal{C}_{u_{m}}}\alpha_{u_{j}}^{c}g(\mathbf{w}^{\tau}_{u_{j}})\mathbb{I}^{\tau}_{u_{j}}-g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}\bigg)\bigg\|^{2}$ $\displaystyle=$ $\displaystyle\eta^{2}\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\bigg\|\sum_{\tau=b\kappa_{1}}^{t-1}\bigg(\sum_{u_{j}\in\mathcal{C}_{u_{m}}}\alpha_{u_{j}}^{c}g(\mathbf{w}^{\tau}_{u_{j}})\mathbb{I}^{\tau}_{u_{j}}\pm p_{s}\sum_{u_{j}\in\mathcal{C}_{u_{m}}}\alpha_{u_{j}}^{c}\nabla F(\mathbf{w}^{\tau}_{u_{j}})\pm p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})-g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}\bigg)\bigg\|^{2}$ $\displaystyle\leq$ $\displaystyle 2\eta^{2}\sum_{m=1}^{M}\alpha_{u_{m}}\bigg\{\mathbb{E}\bigg\|\sum_{\tau=b\kappa_{1}}^{t-1}\bigg[\sum_{u_{j}\in\mathcal{C}_{u_{m}}}\alpha_{u_{j}}^{c}\bigg(g(\mathbf{w}^{\tau}_{u_{j}})\mathbb{I}^{\tau}_{u_{j}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{j}})\bigg)-\bigg(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})\bigg)\bigg]\bigg\|^{2}$ $\displaystyle+p_{s}^{2}\mathbb{E}\bigg\|\sum_{\tau=b\kappa_{1}}^{t-1}\bigg(\sum_{u_{j}\in\mathcal{C}_{u_{m}}}\alpha_{u_{j}}^{c}\nabla F(\mathbf{w}^{\tau}_{u_{j}})-\nabla F(\mathbf{w}^{\tau}_{u_{m}})\bigg)\bigg\|^{2}\bigg\}$ (33)

For the first term in (33), we have
$\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\bigg\|\sum_{\tau=b\kappa_{1}}^{t-1}\bigg[\sum_{u_{j}\in\mathcal{C}_{u_{m}}}\alpha_{u_{j}}^{c}\bigg(g(\mathbf{w}^{\tau}_{u_{j}})\mathbb{I}^{\tau}_{u_{j}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{j}})\bigg)-\bigg(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})\bigg)\bigg]\bigg\|^{2}$ $\displaystyle\overset{a}{=}$ $\displaystyle\sum_{n=1}^{N}\sum_{u_{m}\in\mathcal{C}_{n}}\alpha_{c_{n}}\alpha_{u_{m}}^{c}\bigg[\mathbb{E}\bigg\|\sum_{\tau=b\kappa_{1}}^{t-1}\bigg(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})-\sum_{u_{j}\in\mathcal{C}_{n}}\alpha_{u_{j}}^{c}(g(\mathbf{w}^{\tau}_{u_{j}})\mathbb{I}^{\tau}_{u_{j}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{j}}))\bigg)\bigg\|^{2}\bigg]$ $\displaystyle\overset{b}{=}$ $\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\big\|\sum_{\tau=b\kappa_{1}}^{t-1}\big(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})\big)\big\|^{2}-\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\big\|\sum_{\tau=b\kappa_{1}}^{t-1}\sum_{u_{m}\in\mathcal{C}_{n}}\alpha_{u_{m}}^{c}\big(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})\big)\big\|^{2}$ $\displaystyle\overset{c}{=}$ $\displaystyle\sum_{m=1}^{M}\sum_{\tau=b\kappa_{1}}^{t-1}\alpha_{u_{m}}\bigg(\mathbb{E}\bigg\|g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})\bigg\|^{2}-\alpha_{u_{m}}^{c}\mathbb{E}\bigg\|g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})\bigg\|^{2}\bigg)$ $\displaystyle\leq$ $\displaystyle\kappa_{1}\sum_{m=1}^{M}\alpha_{u_{m}}(1-\alpha_{u_{m}}^{c})\mathbb{E}\bigg\|g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})\bigg\|^{2}$ (34)

where step (a) holds because $\alpha_{u_{m}}=\alpha_{u_{m}}^{c}\alpha_{c_{u_{m}}}$, step (b) holds because $\sum_{i=1}^{N}a_{i}\|x_{i}-\sum_{j=1}^{N}a_{j}x_{j}\|^{2}=\sum_{i=1}^{N}a_{i}\|x_{i}\|^{2}-\|\sum_{i=1}^{N}a_{i}x_{i}\|^{2}$ with $\sum_{i=1}^{N}a_{i}=1,0\leq a_{i}\leq 1$, and step (c) holds because $\mathbb{E}\|\sum_{i=1}^{N}(x_{i}-\mathbb{E}\{x_{i}\})\|^{2}=\sum_{i=1}^{N}\mathbb{E}\|x_{i}-\mathbb{E}\{x_{i}\}\|^{2}$ for independent $x_{i}$. We then have $\kappa_{1}\sum_{m=1}^{M}\alpha_{u_{m}}(1-\alpha_{u_{m}}^{c})\mathbb{E}\big\|g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})\big\|^{2}\leq\kappa_{1}p_{s}(\sigma^{2}+(1-p_{s})G^{2})\sum_{m=1}^{M}\alpha_{u_{m}}(1-\alpha_{u_{m}}^{c})$ because $\mathbb{E}\|x_{i}-\mathbb{E}\{x_{i}\}\|^{2}=\mathbb{E}\|x_{i}\|^{2}-\|\mathbb{E}\{x_{i}\}\|^{2}$.
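The two identities used in steps (b) and (c) of (34) can likewise be checked numerically (a sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)

# Weighted variance decomposition, step (b) of (34):
#   sum_i a_i ||x_i - xbar||^2 = sum_i a_i ||x_i||^2 - ||xbar||^2,
#   where xbar = sum_i a_i x_i, a_i >= 0 and sum_i a_i = 1.
a = rng.dirichlet(np.ones(6))
xs = rng.normal(size=(6, 4))
xbar = np.sum(a[:, None] * xs, axis=0)
lhs = np.sum(a * np.sum((xs - xbar)**2, axis=1))
rhs = np.sum(a * np.sum(xs**2, axis=1)) - xbar @ xbar
assert np.isclose(lhs, rhs)

# Additivity of variance for independent terms, step (c) of (34):
#   E||sum_i (x_i - E[x_i])||^2 = sum_i E||x_i - E[x_i]||^2
samples = rng.normal(loc=2.0, size=(200000, 6))  # columns are independent
centered = samples - samples.mean(axis=0)
lhs = np.mean(centered.sum(axis=1)**2)
rhs = np.sum(np.mean(centered**2, axis=0))
assert np.isclose(lhs, rhs, rtol=1e-2)
```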
For the second term in (33), we have

$\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\|\sum_{\tau=b\kappa_{1}}^{t-1}(\nabla F(\mathbf{w}^{\tau}_{u_{m}})-\sum_{u_{j}\in\mathcal{C}_{u_{m}}}\alpha_{u_{j}}^{c}\nabla F(\mathbf{w}^{\tau}_{u_{j}}))\|^{2}$ $\displaystyle=$ $\displaystyle\sum_{m=1}^{M}\alpha_{u_{m}}\mathbb{E}\bigg\|\sum_{\tau=b\kappa_{1}}^{t-1}\bigg(\nabla F(\mathbf{w}^{\tau}_{u_{m}})\pm\nabla F(\mathbf{\bar{w}}^{\tau}_{c_{u_{m}}})\pm\nabla f_{c_{u_{m}}}(\mathbf{\bar{w}}^{\tau}_{c_{u_{m}}})-\sum_{u_{j}\in\mathcal{C}_{u_{m}}}\alpha_{u_{j}}^{c}\nabla F(\mathbf{w}^{\tau}_{u_{j}})\bigg)\bigg\|^{2}$ $\displaystyle\leq$ $\displaystyle 3\sum_{m=1}^{M}\alpha_{u_{m}}\bigg(\mathbb{E}\bigg\|\sum_{\tau=b\kappa_{1}}^{t-1}\bigg(\nabla F(\mathbf{w}^{\tau}_{u_{m}})-\nabla F(\mathbf{\bar{w}}^{\tau}_{c_{u_{m}}})\bigg)\bigg\|^{2}+\mathbb{E}\bigg\|\sum_{\tau=b\kappa_{1}}^{t-1}\bigg(\nabla F(\mathbf{\bar{w}}^{\tau}_{c_{u_{m}}})-\nabla f_{c_{u_{m}}}(\mathbf{\bar{w}}^{\tau}_{c_{u_{m}}})\bigg)\bigg\|^{2}$ $\displaystyle+\mathbb{E}\bigg\|\sum_{\tau=b\kappa_{1}}^{t-1}(\nabla f_{c_{u_{m}}}(\mathbf{\bar{w}}^{\tau}_{c_{u_{m}}})-\sum_{u_{j}\in\mathcal{C}_{u_{m}}}\alpha_{u_{j}}^{c}\nabla F(\mathbf{w}^{\tau}_{u_{j}}))\bigg\|^{2}\bigg)$ $\displaystyle\leq$ $\displaystyle 6L^{2}\kappa_{1}\sum_{m=1}^{M}\sum_{\tau=b\kappa_{1}}^{t-1}\alpha_{u_{m}}\mathbb{E}\|\mathbf{w}^{\tau}_{u_{m}}-\mathbf{\bar{w}}^{\tau}_{c_{u_{m}}}\|^{2}+3\kappa_{1}^{2}\epsilon_{c}^{2}$ (35)

Plugging (34) and (35) back into (33), and taking the average over time, if $\eta<\frac{1}{\sqrt{12}L\kappa_{1}}$, we can conclude the proof of Lemma 2.

### -C Proof of Lemma 3

Similar to [20, 21], we carry out the analysis on the averaged cluster model parameters even though they are not observed at each iteration; the virtual cluster model parameters evolve as $\mathbf{\bar{w}}_{c_{n}}^{t+1}=\mathbf{\bar{w}}_{c_{n}}^{t}-\eta\sum_{u_{m}\in\mathcal{C}_{n}^{t}}\alpha_{u_{m}}^{c}g(\mathbf{w}_{u_{m}}^{t})\mathbb{I}_{u_{m}}^{t}$. In the remainder of this paper, we use $c(u_{m})$ to denote the index of the edge AP to which mobile user $u_{m}$ is connected. For the MSE between the cluster and global model parameters, if $t\bmod\kappa_{1}\kappa_{2}=0$ we have $\mathbf{w}_{c_{n}}^{t}=\mathbf{w}_{g}^{t}$; otherwise, let $b_{1}=\lfloor\frac{t}{\kappa_{1}\kappa_{2}}\rfloor$.
Similar to Lemma 2, we have

$\displaystyle\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\|\mathbf{\bar{w}}^{t}_{g}-\mathbf{\bar{w}}_{c_{n}}^{t}\|^{2}=\eta^{2}\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\sum_{m=1}^{M}\alpha_{u_{m}}g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}\pm p_{s}\nabla f_{c_{n}}(\mathbf{\bar{w}}^{\tau}_{c_{n}})$ $\displaystyle\pm p_{s}\sum_{j=1}^{N}\alpha_{c_{j}}\nabla f_{c_{j}}(\mathbf{\bar{w}}^{\tau}_{c_{j}})-\sum_{u_{m}\in\mathcal{C}_{n}^{\tau}}\alpha_{u_{m}}^{c}g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}\bigg)\bigg\|^{2}$ $\displaystyle\leq$ $\displaystyle 2\eta^{2}\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\sum_{u_{m}\in\mathcal{C}_{n}^{\tau}}\alpha_{u_{m}}^{c}g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla f_{c_{n}}(\mathbf{\bar{w}}^{\tau}_{c_{n}})-\bigg(\sum_{m=1}^{M}\alpha_{u_{m}}g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}$ $\displaystyle-p_{s}\sum_{j=1}^{N}\alpha_{c_{j}}\nabla f_{c_{j}}(\mathbf{\bar{w}}^{\tau}_{c_{j}})\bigg)\bigg)\bigg\|^{2}+2\eta^{2}p_{s}^{2}\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\nabla f_{c_{n}}(\mathbf{\bar{w}}^{\tau}_{c_{n}})-\sum_{j=1}^{N}\alpha_{c_{j}}\nabla f_{c_{j}}(\mathbf{\bar{w}}^{\tau}_{c_{j}})\bigg)\bigg\|^{2}$ $\displaystyle\leq$ $\displaystyle 4\eta^{2}\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\sum_{u_{m}\in\mathcal{C}_{n}^{\tau}}\alpha_{u_{m}}^{c}(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}}))-\sum_{m=1}^{M}\alpha_{u_{m}}(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}$ $\displaystyle-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}}))\bigg)\bigg\|^{2}+4\eta^{2}p_{s}^{2}\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\sum_{u_{m}\in\mathcal{C}_{n}^{\tau}}\alpha_{u_{m}}^{c}\nabla F(\mathbf{w}^{\tau}_{u_{m}})-\nabla f_{c_{n}}(\mathbf{\bar{w}}^{\tau}_{c_{n}})\bigg)\bigg\|^{2}$ $\displaystyle+2\eta^{2}p_{s}^{2}\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\nabla f_{c_{n}}(\mathbf{\bar{w}}^{\tau}_{c_{n}})-\sum_{j=1}^{N}\alpha_{c_{j}}\nabla f_{c_{j}}(\mathbf{\bar{w}}^{\tau}_{c_{j}})\bigg)\bigg\|^{2}$ (36)

For the first term in (36), we have

$\displaystyle\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\sum_{u_{m}\in\mathcal{C}_{n}^{\tau}}\alpha_{u_{m}}^{c}(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}}))-\sum_{m=1}^{M}\alpha_{u_{m}}(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}}))\bigg)\bigg\|^{2}$ $\displaystyle\overset{a}{=}$ $\displaystyle\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\sum_{u_{m}\in\mathcal{C}_{n}}\alpha_{u_{m}}^{c}(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}}))\bigg)\bigg\|^{2}-\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\sum_{m=1}^{M}\alpha_{u_{m}}(g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}$ $\displaystyle-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}}))\bigg)\bigg\|^{2}$ $\displaystyle=$
$\displaystyle\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\sum_{m=1}^{M}\alpha_{u_{m}}\bigg(\alpha_{u_{m}}^{c}\mathbb{E}\|g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})\|^{2}-\alpha_{u_{m}}\mathbb{E}\|g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-p_{s}\nabla F(\mathbf{w}^{\tau}_{u_{m}})\|^{2}\bigg)$ $\displaystyle\overset{b}{=}$ $\displaystyle\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\sum_{m=1}^{M}\alpha_{u_{m}}(\alpha_{u_{m}}^{c}-\alpha_{u_{m}})\bigg(\mathbb{E}\bigg\|g(\mathbf{w}^{\tau}_{u_{m}})\mathbb{I}^{\tau}_{u_{m}}-\nabla F(\mathbf{w}^{\tau}_{u_{m}})\bigg\|^{2}-\mathbb{E}\bigg\|(p_{s}-1)\nabla F(\mathbf{w}^{\tau}_{u_{m}})\bigg\|^{2}\bigg)$ $\displaystyle\leq$ $\displaystyle\kappa_{1}\kappa_{2}p_{s}(\sigma^{2}+(1-p_{s})G^{2})\sum_{m=1}^{M}\alpha_{u_{m}}(\alpha_{u_{m}}^{c}-\alpha_{u_{m}})$ (37)

where step (a) holds because $\sum_{i=1}^{N}a_{i}\|x_{i}-\sum_{j=1}^{N}a_{j}x_{j}\|^{2}=\sum_{i=1}^{N}a_{i}\|x_{i}\|^{2}-\|\sum_{i=1}^{N}a_{i}x_{i}\|^{2}$ with $\sum_{i=1}^{N}a_{i}=1,0\leq a_{i}\leq 1$, and step (b) holds because $\mathbb{E}\|x_{i}-\mathbb{E}\{x_{i}\}\|^{2}=\mathbb{E}\|x_{i}\|^{2}-\|\mathbb{E}\{x_{i}\}\|^{2}$. For the second term in (36), we have $4\eta^{2}p_{s}^{2}\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\sum_{u_{m}\in\mathcal{C}_{n}}\alpha_{u_{m}}^{c}\nabla F(\mathbf{w}^{\tau}_{u_{m}})-\nabla f_{c_{n}}(\mathbf{\bar{w}}^{\tau}_{c_{n}})\bigg)\bigg\|^{2}\\\ \leq 4\eta^{2}p_{s}^{2}\kappa_{1}\kappa_{2}L^{2}\sum_{m=1}^{M}\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\alpha_{u_{m}}\mathbb{E}\bigg\|\mathbf{w}^{\tau}_{u_{m}}-\mathbf{\bar{w}}^{\tau}_{c_{u_{m}}}\bigg\|^{2}$ because $\|\sum_{i=1}^{N}x_{i}\|^{2}\leq N\sum_{i=1}^{N}\|x_{i}\|^{2}$ and $\|\sum_{i=1}^{N}a_{i}x_{i}\|^{2}\leq\sum_{i=1}^{N}a_{i}\|x_{i}\|^{2}$ with $\sum_{i=1}^{N}a_{i}=1,0\leq a_{i}\leq 1$. For the third term in (36), we have

$\displaystyle 2\eta^{2}p_{s}^{2}\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\nabla f_{c_{n}}(\mathbf{\bar{w}}^{\tau}_{c_{n}})-\sum_{j=1}^{N}\alpha_{c_{j}}\nabla f_{c_{j}}(\mathbf{\bar{w}}^{\tau}_{c_{j}})\bigg)\bigg\|^{2}$ $\displaystyle=$ $\displaystyle 2\eta^{2}p_{s}^{2}\sum_{n=1}^{N}\alpha_{c_{n}}\mathbb{E}\bigg\|\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\bigg(\nabla f_{c_{n}}(\mathbf{\bar{w}}^{\tau}_{c_{n}})\pm\nabla f(\mathbf{\bar{w}}^{\tau}_{c_{n}})\pm\nabla f(\mathbf{\bar{w}}^{\tau}_{g})-\sum_{j=1}^{N}\alpha_{c_{j}}\nabla f_{c_{j}}(\mathbf{\bar{w}}^{\tau}_{c_{j}})\bigg)\bigg\|^{2}$ $\displaystyle\overset{a}{\leq}$ $\displaystyle 12\eta^{2}p_{s}^{2}\kappa_{1}\kappa_{2}L^{2}\sum_{n=1}^{N}\sum_{\tau=b_{1}\kappa_{1}\kappa_{2}}^{t-1}\alpha_{c_{n}}\mathbb{E}\|\mathbf{\bar{w}}^{\tau}_{c_{n}}-\mathbf{\bar{w}}^{\tau}_{g}\|^{2}+6p_{s}^{2}\eta^{2}\kappa_{1}^{2}\kappa_{2}^{2}\epsilon_{g}^{2}$ (38)

where step (a) holds because of the assumptions of bounded variance of the mini-batch gradients and bounded divergences among the loss functions, together with $\|\sum_{i=1}^{N}x_{i}\|^{2}\leq N\sum_{i=1}^{N}\|x_{i}\|^{2}$. Plugging (37) and (38), together with the bound on the second term, back into (36), taking the average over time, and invoking Lemma 2, if $\eta<\frac{1}{\sqrt{12}L\kappa_{1}\kappa_{2}}$, we can conclude the proof of Lemma 3.
### -D Proof of Theorem 2

Analogously to HFL, the mini-batch gradients and the local, cluster and global loss functions in our proposed MACFL algorithm are denoted by $\tilde{g}(\mathbf{w}_{u_{m}}^{t})$, $\tilde{F}(\mathbf{w}_{u_{m}}^{t})$, $\tilde{f}_{c_{i}}(\mathbf{w}_{c_{i}}^{t})$ and $\tilde{f}(\mathbf{w}_{g}^{t})$, and the averaged cluster and global models can be rewritten as $\mathbf{\bar{w}}_{c_{n}}^{t+1}=\mathbf{\bar{w}}_{c_{n}}^{t}-\sum_{u_{m}\in\mathcal{C}_{n}^{t}}\beta_{u_{m}}^{c}(\mathbf{\bar{w}}_{c_{n}}^{t}-\mathbf{w}_{u_{m}}^{t+1})$ and $\mathbf{\bar{w}}_{g}^{t+1}=\mathbf{\bar{w}}_{g}^{t}-\sum_{i=1}^{N}\beta_{c_{i}}(\mathbf{\bar{w}}_{g}^{t}-\mathbf{\bar{w}}_{c_{i}}^{t+1})$. In our proposed algorithm, all mobile users participate in the edge model aggregation at each iteration. Similar to the proof of Lemma 1, we have

$\mathbb{E}\tilde{f}(\mathbf{\bar{w}}^{t+1}_{g})\leq\mathbb{E}\tilde{f}(\mathbf{\bar{w}}^{t}_{g})-\eta\mathbb{E}<\nabla\tilde{f}(\mathbf{\bar{w}}_{g}^{t}),\sum_{m=1}^{M}\beta_{u_{m}}\tilde{g}(\mathbf{w}_{u_{m}}^{t})>+\frac{\eta^{2}L}{2}\mathbb{E}\|\sum_{m=1}^{M}\beta_{u_{m}}\tilde{g}(\mathbf{w}_{u_{m}}^{t})\|^{2}$ (39)

For the second term, we have

$\displaystyle-\mathbb{E}<\nabla\tilde{f}(\mathbf{\bar{w}}_{g}^{t}),\sum_{m=1}^{M}\beta_{u_{m}}\tilde{g}(\mathbf{w}_{u_{m}}^{t})>$ $\displaystyle=$ $\displaystyle\frac{1}{2}\bigg(\mathbb{E}\|\nabla\tilde{f}(\mathbf{\bar{w}}^{t}_{g})-\sum_{m=1}^{M}\beta_{u_{m}}\nabla\tilde{F}(\mathbf{w}_{u_{m}}^{t})\|^{2}-\mathbb{E}\|\nabla\tilde{f}(\mathbf{\bar{w}}^{t}_{g})\|^{2}-\mathbb{E}\|\sum_{m=1}^{M}\beta_{u_{m}}\nabla\tilde{F}(\mathbf{w}_{u_{m}}^{t})\|^{2}\bigg)$ $\displaystyle\leq$ $\displaystyle L^{2}\big(\sum_{i=1}^{N}\beta_{c_{i}}\mathbb{E}\|\mathbf{\bar{w}}^{t}_{g}-\mathbf{\bar{w}}_{c_{i}}^{t}\|^{2}+\sum_{m=1}^{M}\beta_{u_{m}}\mathbb{E}\|\mathbf{\bar{w}}_{c_{u_{m}}}^{t}-\mathbf{w}_{u_{m}}^{t}\|^{2}\big)-\frac{1}{2}\big(\mathbb{E}\|\nabla\tilde{f}(\mathbf{\bar{w}}^{t}_{g})\|^{2}+\mathbb{E}\|\sum_{m=1}^{M}\beta_{u_{m}}\nabla\tilde{F}(\mathbf{w}_{u_{m}}^{t})\|^{2}\big)$ (40)

For the third term, we have

$\displaystyle\mathbb{E}\|\sum_{m=1}^{M}\beta_{u_{m}}\tilde{g}(\mathbf{w}_{u_{m}}^{t})\|^{2}=$ $\displaystyle\bigg(\mathbb{E}\|\sum_{m=1}^{M}\beta_{u_{m}}(\tilde{g}(\mathbf{w}_{u_{m}}^{t})-\nabla\tilde{F}(\mathbf{w}_{u_{m}}^{t}))\|^{2}+\mathbb{E}\|\sum_{m=1}^{M}\beta_{u_{m}}\nabla\tilde{F}(\mathbf{w}_{u_{m}}^{t})\|^{2}\bigg)$ $\displaystyle\leq$ $\displaystyle\sigma_{M}^{2}\sum_{m=1}^{M}\beta_{u_{m}}^{2}+\mathbb{E}\|\sum_{m=1}^{M}\beta_{u_{m}}\nabla\tilde{F}(\mathbf{w}_{u_{m}}^{t})\|^{2}$ (41)

where $\sigma_{M}^{2}$ is the bound on the variance of $\tilde{g}(\mathbf{w}_{u_{m}}^{t})$.
Rearranging terms, dividing both sides by $\frac{\eta}{2}$ and taking the average over time, if $\eta\leq\frac{1}{L}$, we have

$\displaystyle\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\|\nabla\tilde{f}(\mathbf{\bar{w}}^{t}_{g})\|^{2}\leq$ $\displaystyle\frac{2}{\eta T}(\mathbb{E}\tilde{f}(\mathbf{\bar{w}}^{0}_{g})-\tilde{f}_{\inf})+\eta L\sigma_{M}^{2}\sum_{m=1}^{M}\beta_{u_{m}}^{2}$ $\displaystyle+\frac{2L^{2}}{T}\sum_{t=0}^{T-1}\bigg(\sum_{i=1}^{N}\beta_{c_{i}}\mathbb{E}\|\mathbf{\bar{w}}^{t}_{g}-\mathbf{\bar{w}}_{c_{i}}^{t}\|^{2}+\sum_{m=1}^{M}\beta_{u_{m}}\mathbb{E}\|\mathbf{\bar{w}}_{c_{u_{m}}}^{t}-\mathbf{w}_{u_{m}}^{t}\|^{2}\bigg)$ (42)

Similar to Lemma 2, if $\eta<\frac{1}{\sqrt{12}L\kappa_{1}}$, we can obtain the upper bound on the MSE between the local and cluster model parameters as follows:

$\frac{1}{T}\sum_{t=0}^{T-1}\sum_{m=1}^{M}\beta_{u_{m}}\mathbb{E}\|\mathbf{w}^{t}_{u_{m}}-\mathbf{\bar{w}}_{c_{u_{m}}}^{t}\|^{2}\leq\frac{2\eta^{2}\kappa_{1}}{1-12\eta^{2}L^{2}\kappa_{1}^{2}}\bigg(3\kappa_{1}\epsilon_{M,c}^{2}+\sigma_{M}^{2}\sum_{m=1}^{M}\beta_{u_{m}}(1-\beta_{u_{m}}^{c})\bigg)$ (43)

Similar to Lemma 3, if $\eta<\frac{1}{\sqrt{12}L\kappa_{1}\kappa_{2}}$, we can obtain the upper bound on the MSE between the cluster and global model parameters as follows:

$\displaystyle\frac{1}{T}\sum_{t=0}^{T-1}\sum_{i=1}^{N}\beta_{c_{i}}\mathbb{E}\|\mathbf{\bar{w}}^{t}_{g}-\mathbf{\bar{w}}_{c_{i}}^{t}\|^{2}$ $\displaystyle\leq\frac{2\eta^{2}\kappa_{1}\kappa_{2}}{1-12\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}^{2}}\bigg[3\kappa_{1}\kappa_{2}\epsilon_{M,g}^{2}+\frac{12\eta^{2}L^{2}\kappa_{1}^{3}\kappa_{2}}{1-12\eta^{2}L^{2}\kappa_{1}^{2}}\epsilon_{M,c}^{2}$ $\displaystyle+2\sigma_{M}^{2}\sum_{m=1}^{M}\beta_{u_{m}}\bigg((\beta_{u_{m}}^{c}-\beta_{u_{m}})+\frac{2\eta^{2}L^{2}\kappa_{1}^{2}\kappa_{2}}{1-12\eta^{2}L^{2}\kappa_{1}^{2}}(1-\beta_{u_{m}}^{c})\bigg)\bigg]$ (44)

Plugging (43) and (44) back into (42), if all models are initialized at the same point and $\eta<\frac{1}{\sqrt{12}L\kappa_{1}\kappa_{2}}$, we can conclude the proof of Theorem 2.

## References

* [1] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. Y. Arcas, “Communication-efficient learning of deep networks from decentralized data,” in _Proc. 20th Int. Conf. Artif. Intell. Stat. (AISTATS)_ , Fort Lauderdale, USA, Apr. 2017, pp. 1273-1282. * [2] X. Wang, Y. Han, C. Wang, Q. Zhao, X. Chen and M. Chen, “In-Edge AI: Intelligentizing mobile edge computing, caching and communication by federated learning,” _IEEE Network_ , vol. 33, no. 5, pp. 156-165, Sept.-Oct. 2019. * [3] M. Chen, Z. Yang, W. Saad, C. Yin, H. V. Poor and S. Cui, “A joint learning and communications framework for federated learning over wireless networks,” _IEEE Trans. Wireless Commun._ , vol. 20, no. 1, pp. 269-283, Jan. 2021. * [4] M. Chen, H. V. Poor, W. Saad and S. Cui, “Convergence time optimization for federated learning over wireless networks,” _IEEE Trans. Wireless Commun._ , 2021. [Early Access] * [5] S. Wang et al., “Adaptive federated learning in resource constrained edge computing systems,” _IEEE J. Select. Areas Commun._ , vol. 37, no. 6, pp. 1205-1221, Jun. 2019. * [6] H. H. Yang, Z. Liu, T. Q. S. Quek and H. V. Poor, “Scheduling policies for federated learning in wireless networks,” _IEEE Trans. Commun._ , vol. 68, no. 1, pp. 317-333, Jan. 2020. * [7] M. M. Amiri, D. Gündüz, S. R. Kulkarni and H. V. Poor, “Convergence of update aware device scheduling for federated learning at the wireless edge,” _IEEE Trans. Wireless Commun._ , 2021. [Early Access] * [8] L.
Liu, J. Zhang, S. H. Song and K. B. Letaief, “Client-edge-cloud hierarchical federated learning,” in _Proc. of 2020 IEEE Int. Conf. Commun. (ICC)_ , Dublin, Ireland, 2020, pp. 1-6. * [9] S. Luo, X. Chen, Q. Wu, Z. Zhou and S. Yu, “HFEL: Joint edge association and resource allocation for cost-efficient hierarchical federated edge learning,” _IEEE Trans. Wireless Commun._ , vol. 19, no. 10, pp. 6535-6548, Oct. 2020. * [10] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, “Federated learning with non-IID data,” _Available as ArXiv_ : https://arxiv.org/abs/1806.00582. * [11] L. Jacob, F. Bach, and J.-P. Vert, “Clustered multi-task learning: A convex formulation,” _Available as ArXiv_ : http://arxiv.org/abs/0809.2085. * [12] V. Smith, C. K. Chiang, M. Sanjabi, and A. S. Talwalkar, “Federated multi-task learning,” in _Proc. Adv. Neural Inf. Process. Syst._ , 2017, pp. 4424–4434. * [13] C. Chen, Z. Chen, Y. Zhou and B. Kailkhura, “FedCluster: Boosting the convergence of federated learning via cluster-cycling,” _Available as ArXiv_ : https://arxiv.org/pdf/2009.10748. * [14] A. Fallah, A. Mokhtari, and A. Ozdaglar, “Personalized federated learning: A meta-learning approach,” _Available as ArXiv_ : https://arxiv.org/abs/2002.07948. * [15] C. T. Dinh, N. H. Tran and T. D. Nguyen, “Personalized federated learning with Moreau envelopes,” _Available as ArXiv_ : https://arxiv.org/abs/2006.08848. * [16] Y. Huang, L. Chu, Z. Zhou, L. Wang, J. Liu, J. Pei and Y. Zhang, “Personalized cross-silo federated learning on non-IID data,” _Available as ArXiv_ : https://arxiv.org/pdf/2007.03797. * [17] Z. Yu, J. Hu, G. Min, Z. Zhao, W. Miao and M. S. Hossain, “Mobility-aware proactive edge caching for connected vehicles using federated learning,” _IEEE Trans. Intell. Transport. Systems_ , pp. 1-11, 2020. * [18] H. T. Nguyen, N. C. Luong, J. Zhao, C. Yuen and D. Niyato, “Resource allocation in mobility-aware federated learning networks: A deep reinforcement learning approach,” in _IEEE Proc. World Forum Intern. Things (WF-IoT)_ , New Orleans, LA, USA, 2020, pp. 1-6. * [19] J. Wang and G. Joshi, “Cooperative SGD: A unified framework for the design and analysis of communication-efficient SGD algorithms,” _Available as ArXiv_ : https://arxiv.org/abs/1808.07576. * [20] J. Wang, S. Wang, R. Chen and M. Ji, “Local averaging helps: Hierarchical federated learning and convergence analysis,” _Available as ArXiv_ : https://arxiv.org/abs/2010.12998. * [21] T. Castiglia, A. Das and S. Patterson, “Multi-level local SGD for heterogeneous hierarchical networks,” _Available as ArXiv_ : https://arxiv.org/abs/2007.13819. * [22] F. Sattler, K. R. Müller and W. Samek, “Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints,” _IEEE Trans. Neural Netw. Learn. Syst._ , Aug. 2020, pp. 1-13. * [23] C. Briggs, Z. Fan and P. Andras, “Federated learning with hierarchical clustering of local updates to improve training on non-IID data,” in _2020 Int. Joint Conf. Neural Net. (IJCNN)_ , Glasgow, United Kingdom, 2020, pp. 1-9. * [24] M. S. H. Abad, E. Ozfatura, D. Gunduz and O. Ercetin, “Hierarchical federated learning across heterogeneous cellular networks,” in _IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP)_ , Barcelona, Spain, 2020, pp. 8866-8870. * [25] N. Shlezinger, S. Rini and Y. C. Eldar, “The communication-aware clustered federated learning problem,” in _Proc. IEEE Int. Symp. Inf. Theory_ , 2020, pp. 2610-2615. * [26] V.
Bezborodov, “Markov birth-and-death dynamics of populations,” _Available as ArXiv_ : https://arxiv.org/pdf/1502.06783. * [27] K. Yuan, B. Ying, J. Liu, and A. H. Sayed, “Variance-reduced stochastic learning by network agents under random reshuffling, ” _IEEE Trans. Signal Process._ , vol. 67, no. 2, pp. 351-366, Sep. 2018. * [28] X. Lian, Y. Huang, Y. Li, and J. Liu, “Asynchronous parallel stochastic gradient for nonconvex optimization,” in _Proc. Int. Conf. Neural Infor. Process. Systems (NeurIPS)_ , pp. 2737–2745, 2015. * [29] L. Bottou, F. E. Curtis, and J. Nocedal, “Optimization methods for large-scale machine learning,” _SIAM Review_ , 60(2):223–311, 2018. * [30] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, “Graph attention networks,” in _Int. Conf. Learning Representations (ICLR)_ , 2018. * [31] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms,” _Available as ArXiv_ : https://arxiv.org/abs/1708.07747, 2017.
# Recurrent neural network based parameter estimation of Hawkes model on high-frequency financial data

Kyungsub Lee (Department of Statistics, Yeungnam University, Gyeongsan, Gyeongbuk 38541, Korea)

###### Abstract

This study examines the use of a recurrent neural network for estimating the parameters of a Hawkes model based on high-frequency financial data, and subsequently, for computing volatility. Neural networks have shown promising results in various fields, and interest in finance is also growing. Our approach demonstrates significantly faster computational performance than traditional maximum likelihood estimation while yielding comparable accuracy in both simulation and empirical studies. Furthermore, we demonstrate the application of this method to real-time volatility measurement, enabling the continuous estimation of financial volatility as new price data arrive from the market.

## 1 Introduction

We propose a real-time estimation and volatility measurement scheme. Specifically, we continuously estimate the financial model parameters and volatility in real time as new price observations become available during market operation. Our method uses a recurrent neural network to estimate the parameters of a Hawkes model based on high-frequency financial data, and these estimates are subsequently used to compute volatility. This approach exhibits significantly faster computational performance than the traditional maximum likelihood estimation (MLE) method.

The Hawkes process introduced by Hawkes (1971) is a type of point process used to model the occurrence of events over time. It is frequently utilized in finance to model the arrival of trades or other financial events in the market; here, we use it to describe the fluctuations of the price in the tick structure. We employ a two-dimensional Hawkes process with a symmetric kernel, characterized by two key features: self-excitation and mutual excitation (Bacry et al., 2013). Self-excitation means that the occurrence of an event increases the likelihood of the same type of event occurring, while mutual excitation implies that the occurrence of one event can increase the likelihood of occurrence of other types of events.

For the estimation, we use a long short-term memory (LSTM) network introduced by Hochreiter and Schmidhuber (1997). This is a type of recurrent neural network capable of learning long-term dependencies in data. LSTMs can be used for a variety of tasks that involve sequential data, including language modeling, machine translation, speech recognition, time series forecasting based on historical data, and financial analysis (Zhang et al., 2021; Ghosh et al., 2022). They are particularly useful for tasks where capturing long-term dependencies is important, as the gating mechanism allows the network to retain important earlier information in the sequence while discarding irrelevant or outdated information.

Thus, our method uses a neural network for parameter estimation. Similar attempts have been made in recent years in various fields (Wlas et al., 2008; Wang et al., 2022; Wei and Jiang, 2022), but there is still limited research on using neural networks for estimation in financial time series. Specifically, we use a direct parameter estimation approach in which the neural network is trained to directly predict the model parameters based on the observed data. This method exemplifies one facet of network-based parameter estimation and is expected to extend to more complex models.
## 2 Method

### 2.1 Price model

First, we explain the stochastic model used to describe high-frequency stock price movements. We model the up and down movements of the tick-level price process as a marked Hawkes process, which captures both the timing and size of the movements. This process is defined by the random measures,

$\bm{M}(\mathrm{d}u\times\mathrm{d}z)=\begin{bmatrix}M_{1}(\mathrm{d}u\times\mathrm{d}z_{1})\\ M_{2}(\mathrm{d}u\times\mathrm{d}z_{2})\end{bmatrix}$ (1)

in the product space of time and jump size, $\mathbb{R}\times E_{i}$, for $i=1,2$, where $E_{i}=\mathbb{N}\times\{i\}$ denotes the space of mark (jump) sizes for up and down price movements, respectively. Each measure in Eq. (1) is associated with a sequence of $E_{i}$-valued random variables $\{Z_{i,n}\}$ in addition to the sequence of random times $\{\tau_{i,n}\}$ for each $i$. That is, $M_{i}(\mathrm{d}u\times\mathrm{d}z_{i})=\sum_{n}\delta_{\tau_{i,n},Z_{i,n}}(\mathrm{d}u\times\mathrm{d}z_{i})$ with the Dirac measure $\delta$, which is defined as follows: for any time interval $I$ and $A_{i}\subset E_{i}$,

$\delta_{\tau_{i,n},Z_{i,n}}(I\times A_{i})=\left\{\begin{array}{ll}1,&\text{ if }\tau_{i,n}\in I\text{ and }Z_{i,n}\in A_{i},\\ 0,&\text{ otherwise.}\end{array}\right.$

A vector of càdlàg counting processes is defined by

$\bm{N}_{t}=\begin{bmatrix}N_{1}(t)\\ N_{2}(t)\end{bmatrix}=\int_{(0,t]\times E}\mathrm{Dg}(z)\bm{M}(\mathrm{d}u\times\mathrm{d}z),\quad\mathrm{Dg}(z)=\begin{bmatrix}z_{1}&0\\ 0&z_{2}\end{bmatrix},\quad E=E_{1}\cup E_{2}$

whose components count the events weighted by their size, that is,

$N_{i}(t)=N_{i}((0,t])=\sum_{n}Z_{i,n}\mathbbm{1}_{\{0<\tau_{i,n}\leq t\}},\quad\textrm{for }i=1,2,$

which reduces to the number of $\tau_{i,n}\in(0,t]$ when all marks equal one.

###### Assumption 1. The stochastic intensity $\lambda_{i}$ for $N_{i}$ is represented by the following:

$\bm{\lambda}_{t}=\begin{bmatrix}\lambda_{1}(t)\\ \lambda_{2}(t)\end{bmatrix}=\bm{\mu}+\int_{(-\infty,t]\times E}\bm{\alpha}\circ\bm{b}(t-u)\bm{M}(\mathrm{d}u\times\mathrm{d}z)$ (2)

where $\bm{\mu}$ is a $2\times 1$ positive constant base intensity vector, $\bm{\alpha}$ is a positive $2\times 2$ constant matrix, $\bm{b}$ is a decay function matrix, and $\circ$ denotes the element-wise product.

In Eq. (2), for simplicity, we assume that the future impact of an event on the intensities is independent of the jump size, as the integrand of the equation does not contain the jump variable $z$. In addition, for further parsimony, we assume that:

$\bm{\mu}=\begin{bmatrix}\mu\\ \mu\end{bmatrix},\quad\bm{\alpha}=\begin{bmatrix}\alpha_{1}&\alpha_{2}\\ \alpha_{2}&\alpha_{1}\end{bmatrix},\quad\bm{b}(t)=\begin{bmatrix}\mathrm{e}^{-\beta t}&\mathrm{e}^{-\beta t}\\ \mathrm{e}^{-\beta t}&\mathrm{e}^{-\beta t}\end{bmatrix}.$ (3)

Hence, the set of parameters to be estimated is $\{\mu,\alpha_{1},\alpha_{2},\beta\}$. We also assume that the mark $Z_{i}$ at time $t$ is independent of the $\sigma$-algebra generated by $(N_{j}(s),\lambda_{j}(s))_{s<t}$ for $j=1,2$. The intensity process is assumed to be stationary, which requires the spectral radius of $\int_{0}^{\infty}\bm{\alpha}\circ\bm{b}(t)\mathrm{d}t$ to be less than 1.
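To make the parameter set $\{\mu,\alpha_{1},\alpha_{2},\beta\}$ concrete, the following sketch simulates arrival times and movement types from the two-dimensional exponential-kernel Hawkes process of Eqs. (2) and (3) via Ogata's thinning algorithm. It is an illustration only and uses unit marks, which is harmless here because the intensity does not depend on the jump size:

```python
import numpy as np

def simulate_hawkes(mu, alpha1, alpha2, beta, horizon, seed=0):
    """Ogata thinning for the 2D Hawkes model of Eqs. (2)-(3).

    Returns event times and types (0 = up move, 1 = down move).
    """
    rng = np.random.default_rng(seed)
    alpha = np.array([[alpha1, alpha2], [alpha2, alpha1]])
    r = np.zeros(2)   # r[j] = sum over past type-j events of exp(-beta * (t_last - tau))
    t_last, t = 0.0, 0.0
    times, types = [], []
    while True:
        lam = mu + alpha @ r      # intensities just after the last event
        bound = lam.sum()         # valid bound: intensities only decay between events
        t += rng.exponential(1.0 / bound)
        if t > horizon:
            break
        r_cand = r * np.exp(-beta * (t - t_last))
        lam_cand = mu + alpha @ r_cand
        if rng.uniform() * bound <= lam_cand.sum():   # accept the candidate event
            i = 0 if rng.uniform() * lam_cand.sum() <= lam_cand[0] else 1
            r = r_cand
            r[i] += 1.0           # the new event excites future intensities
            t_last = t
            times.append(t)
            types.append(i)
    return np.array(times), np.array(types)

# Example with the parameter values used in the sampling study of Section 3:
times, types = simulate_hawkes(0.3, 0.4, 0.7, 1.5, horizon=1000.0)
```

For these values the stationarity condition holds, since the eigenvalues of $\bm{\alpha}/\beta$ are $(\alpha_{1}\pm\alpha_{2})/\beta$, both less than 1 in absolute value.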
Under the stationarity assumption, the Hawkes volatility of price movements – the standard deviation of the total net up and down movements – is represented by

$\mathrm{SD}(N_{1}(t)-N_{2}(t))=\sqrt{\bm{\mathrm{u}}^{\top}\left[\mathcal{T}\left\{\overline{\bm{\mathrm{Z}}}\circ\bm{\mathrm{B}}\right\}+\overline{\bm{\mathrm{Z}}}^{(2)}\circ\mathrm{Dg}(\mathbb{E}[\bm{\lambda}_{t}])\right]\bm{\mathrm{u}}t},\quad\bm{\mathrm{u}}=\begin{bmatrix}1\\ -1\end{bmatrix}$ (4)

where $\mathcal{T}$ is an operator such that $\mathcal{T}(\bm{\mathrm{M}})=\bm{\mathrm{M}}+\bm{\mathrm{M}}^{\top}$ for a square matrix $\bm{\mathrm{M}}$, and $\mathrm{Dg}(\cdot)$ denotes the diagonal matrix whose diagonal entries are given by the argument. Furthermore,

$\mathbb{E}[\bm{\lambda}_{t}]=(\bm{\beta}-\bm{\alpha})^{-1}\bm{\beta}\bm{\mu},\quad\bm{\beta}=\begin{bmatrix}\beta&0\\ 0&\beta\end{bmatrix}$ (5)

and

$\mathbb{E}[\bm{\lambda}_{t}\bm{\lambda}_{t}^{\top}]=(\bm{\beta}-\bm{\alpha})^{-1}\left(\frac{1}{2}\bm{\alpha}\mathrm{Dg}(\mathbb{E}[\bm{\lambda}_{t}])\bm{\alpha}+\bm{\beta}\bm{\mu}\mathbb{E}[\bm{\lambda}_{t}^{\top}]\right)$ (6)

and

$\bm{\mathrm{B}}=\left\{\overline{\bm{\mathrm{Z}}}^{\top}\circ\mathbb{E}[\bm{\lambda}_{t}\bm{\lambda}_{t}^{\top}]+\mathrm{Dg}(\mathbb{E}[\bm{\lambda}_{t}])\left(\bm{\alpha}\circ\overline{\bm{\mathrm{Z}}}\right)^{\top}-\mathrm{Dg}(\overline{\bm{\mathrm{Z}}})\mathbb{E}[\bm{\lambda}_{t}]\mathbb{E}[\bm{\lambda}_{t}]^{\top}\right\}(\bm{\beta}-\bm{\alpha})^{-1}$ (7)

and, by the mark independence assumption,

$\overline{\bm{\mathrm{Z}}}=\begin{bmatrix}\mathbb{E}[Z_{1}]&\mathbb{E}[Z_{2}]\\ \mathbb{E}[Z_{1}]&\mathbb{E}[Z_{2}]\end{bmatrix},\quad\overline{\bm{\mathrm{Z}}}^{(2)}=\begin{bmatrix}\mathbb{E}[Z_{1}^{2}]&\mathbb{E}[Z_{2}^{2}]\\ \mathbb{E}[Z_{1}^{2}]&\mathbb{E}[Z_{2}^{2}]\end{bmatrix}.$

To calculate the volatility of price changes, rather than of the number of movements, we multiply Eq. (4) by the minimum tick size. Further details can be found in Lee and Seo (2017) and Lee (2022).

### 2.2 Network model

Next, we construct a recurrent neural network for parameter estimation. The traditional method of estimating the parameters of a Hawkes process is MLE, which maximizes the log-likelihood function of the model,

$L(T,\bm{\theta})=\sum_{i=1}^{2}\left(\int_{(0,T]\times E}\log\lambda_{i}(u)M_{i}(\mathrm{d}u\times\mathrm{d}z_{i})-\int_{0}^{T}\lambda_{i}(u)\mathrm{d}u\right)$

to find the parameters most likely to generate the observed data.
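For the exponential kernel of Eq. (3), this log-likelihood has a closed form that can be evaluated in a single pass over the event sequence. The following sketch (an illustration; a full MLE would wrap it in a numerical optimizer with positivity constraints) evaluates it for a path of event times and types:

```python
import numpy as np

def hawkes_loglik(params, times, types, horizon):
    """Log-likelihood L(T, theta) for the 2D exponential-kernel Hawkes model.

    `times` are event times in (0, T], `types` in {0, 1}; marks are ignored,
    consistent with the mark-independent intensity assumption.
    """
    mu, a1, a2, beta = params
    alpha = np.array([[a1, a2], [a2, a1]])
    r = np.zeros(2)             # exponentially decayed counts of past events by type
    t_last, loglik = 0.0, 0.0
    for t, i in zip(times, types):
        r *= np.exp(-beta * (t - t_last))
        loglik += np.log(mu + alpha[i] @ r)   # intensity of the observed type at t-
        r[i] += 1.0
        t_last = t
    # Compensators: int_0^T lambda_i(u) du for i = 1, 2.
    decay_int = (1.0 - np.exp(-beta * (horizon - times))) / beta
    for i in (0, 1):
        excitation = sum(alpha[i, j] * decay_int[types == j].sum() for j in (0, 1))
        loglik -= mu * horizon + excitation
    return loglik
```

For instance, `scipy.optimize.minimize(lambda p: -hawkes_loglik(p, times, types, T), x0, bounds=[(1e-6, None)] * 4)` would recover the maximum likelihood estimates, with the stationarity condition additionally enforced in practice.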
In contrast, neural network based parameter estimation involves training a neural network to predict the parameters of a Hawkes process from input data. To do this, numerous sample paths of inter-arrival times and movement types (up or down) are generated, where the parameters of each path are drawn randomly. These sample paths are then used as feature variables, and the associated true parameter values as target variables. The neural network is trained on these data. Once trained, it can predict the parameter values of a new sample path of Hawkes process data. These predicted parameter values can then be used to evaluate more complicated formulae, such as the Hawkes volatility in Eq. (4), which is a measure of the variability of the process over time.

Here, we use an LSTM model with three layers as the neural network. The LSTM network is known for its capability of retaining information over a long duration, making it appropriate for tasks that require context comprehension or state preservation. The gating mechanism in the LSTM architecture regulates the flow of information between the memory cells and the rest of the network, enabling the network to choose which information to preserve or discard. We also tested gated recurrent unit networks (Cho et al., 2014); however, the LSTM performed slightly better on our problem. A thorough account of the network’s implementation is presented in the following section.

## 3 Simulation result

In this simulation study, we generate a set of paths of Hawkes processes to create a training dataset for the neural network. The dataset comprises a sufficient quantity of synthetic data, which is used to train the network to predict the true parameters of the Hawkes process. The true parameters for each Hawkes process are randomly selected to cover the entire range of possible values, with each path having distinct parameters. For the ranges of the parameters, we use the ranges of the estimates obtained by fitting past intraday price processes of various stocks to the Hawkes model. Approximately 30 stock symbols, including AAPL, AMZN, C, FB, GOOG, IBM, MCD, MSFT, NVDA, and XOM, from 2018 to 2019 are used.

For the estimation, we focus on high-frequency rather than ultra-high-frequency data. More precisely, the raw data are filtered as follows. We observe the mid-price at intervals of $\Delta t=0.1$ seconds, noting any changes from the previously observed price. If a change is detected, we record the exact time of the change and the new price. If the price remains the same, we move on to the next interval of 0.1 seconds and repeat the process. This method allows us to filter out unnecessary movements, commonly referred to as microstructure noise, observed at ultra-high frequencies.

Once the set of estimates is obtained, we generate Hawkes process paths of 2,000 time steps. These are then used for neural network training, with the estimates serving as target variables. This procedure yields tens of thousands of datasets, which are sufficient to construct a comprehensive training set.

The implementation of the LSTM network model is as follows. The first layer consists of 12 units and handles the sequential data, a two-dimensional input time series comprising inter-arrivals and types of movements (up or down). Up and down movements are encoded as 1 and 2, respectively. The output of this layer is a sequence of vectors of length 12, where each vector is the output of the first layer at a given time step. Thus, if the original time series has 2,000 time steps, the output of the first layer is a $2,000\times 12$ matrix, i.e., time steps $\times$ number of units. The second layer has 12 units and produces a single vector (not a sequence) of length 12. The final layer is a dense (fully connected) layer with four units, whose output represents the parameters of the Hawkes model, $\mu,\alpha_{1},\alpha_{2}$ and $\beta$. If we extend the model for more complexity, the number of units in the last dense layer is adjusted accordingly, since each unit in the last layer represents one parameter. As pointed out by Mei and Eisner (2017), a natural extension of the LSTM is the deep LSTM, especially for complex structured data such as high-frequency financial data, where the effectiveness of multi-layering seems promising. However, we utilize a relatively parsimonious Hawkes model and achieve sufficient performance without employing a large number of layers, so we use the LSTM model proposed above, which is relatively faster to train. If the data structure and model become more complex, an extension to deep LSTM would be helpful.
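The network just described is compact enough to state in a few lines; below is a sketch in Keras (the layer sizes follow the description above, while details such as the loss and the exact input pipeline are assumptions):

```python
import tensorflow as tf

# Input: sequences of (inter-arrival time, movement type) pairs, with the
# movement type encoded as 1 (up) or 2 (down); the sequence length equals
# the number of time steps in a path (2,000 in the simulation study).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2000, 2)),
    tf.keras.layers.LSTM(12, return_sequences=True),  # 2,000 x 12 output sequence
    tf.keras.layers.LSTM(12),                         # single vector of length 12
    tf.keras.layers.Dense(4),                         # (mu, alpha_1, alpha_2, beta)
])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
# model.fit(X_train, y_train, epochs=300)  # X_train: sample paths, y_train: true parameters
```

For the empirical variant of Section 4, the final dense layer would use a softplus activation and the last target would be $\beta-\alpha_{1}-\alpha_{2}$ instead of $\beta$.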
For training, we use 75,000 sample data points; hence, the dataset for the neural network’s input has a shape of $75,000\times 1,000\times 2$. The Adam optimizer (Kingma and Ba, 2014) is used for training, generally with more than 300 epochs. For testing, we use 15,000 data points that were not used for training, to evaluate the model’s ability to make predictions on unseen data in terms of mean squared error (MSE). We then compare the results with traditional MLE. The computation times were measured on a typical commercial PC. The results show that MLE performs slightly better in terms of MSE, but the neural network also yields reasonable results. Meanwhile, the usual numerical optimization for MLE requires many iterations and is time consuming, whereas a well-trained neural network computes an estimate very quickly, in less than a hundredth of the time required by MLE.

|            | Neural network | MLE    |
|------------|----------------|--------|
| MSE        | 0.0513         | 0.0417 |
| Time (sec) | 0.0120         | 1.763  |

To understand the basic properties of the neural network estimator, we investigate its sampling distribution. To this end, we generate 100,000 paths of length 2,000 using the fixed parameter values $\mu=0.3$, $\alpha_{1}=0.4$, $\alpha_{2}=0.7$, $\beta=1.5$, and compare the sampling distributions obtained with the neural network and MLE. Overall, MLE slightly outperforms the neural network; even so, the general performance of the neural network is quite good.

| Parameter    | True   | Neural network mean | Neural network S.D. | MLE mean | MLE S.D. |
|--------------|--------|---------------------|---------------------|----------|----------|
| $\mu$        | 0.3000 | 0.3151              | 0.0358              | 0.3036   | 0.0314   |
| $\alpha_{1}$ | 0.4000 | 0.4429              | 0.0661              | 0.3988   | 0.0500   |
| $\alpha_{2}$ | 0.7000 | 0.5733              | 0.0697              | 0.7024   | 0.0608   |
| $\beta$      | 1.5000 | 1.5736              | 0.1364              | 1.5078   | 0.1145   |

Specifically, the above example compares the performance of the numerical optimizer (used in MLE) and the neural network. Owing to the nature of simulation studies, the numerical optimizer has several advantages. The outcome of a numerical optimizer is often influenced by its initial value; in the example above, because the true parameter value is known, it was used directly as the initial value. This may have improved the performance compared to a random initial value. As finding a good initial value is sometimes challenging, a numerical optimizer may perform worse in real-world problems. In addition, if the numerical optimizer exhibits unexpected behavior, such as failure to converge, human intervention may be necessary, such as adjusting the initial value or retrying the procedure. As the complexity of the model and the level of noise in the empirical data increase, the advantages of the numerical optimizer may diminish. In such cases, further research is required to determine whether the numerical optimizer still outperforms the neural network.

## 4 Empirical result

The approach in the previous section can be directly applied to empirical data. However, we need to consider whether robust estimation is possible when the empirical data do not perfectly follow a Hawkes process.
For example, in filtered high-frequency price process data, a subduing effect (where an event reduces the intensity) can sometimes occur. This can result in negative estimates of the parameter $\alpha$, which violates the definition of the Hawkes model. To address this, a more complex model would be needed; as this falls outside the scope of this study, an alternative is to use a softplus activation function, $\log(1+\exp(x))$, in the last layer of the neural network. This is similar in spirit to constrained optimization, as it ensures positive estimates. Furthermore, instead of predicting $\beta$ directly, we train on and predict $\beta-\alpha_{1}-\alpha_{2}$. Combined with the softplus function, this ensures that the branching ratio condition of the Hawkes model is met.

To further increase robustness, a combination of empirical data and the corresponding maximum likelihood estimates can be used as training data, rather than relying solely on simulated data. This approach accounts for possible model mis-specification; for instance, the observed data may not perfectly align with the Hawkes process. By incorporating the MLE into the training data, the neural network can better mimic the MLE of the Hawkes model. Thus, if the goal is to construct a neural network that closely approximates the MLE, even under possible model mis-specification, this method can be effective.

The step-by-step procedure is as follows. We select segments of observed intraday data of inter-arrivals and movement types, each segment consisting of 2,000 time steps. These selected paths are used to fit the Hawkes model by MLE. The resulting dataset is then used to train the neural network, where the inter-arrivals and types of the real data serve as feature variables and the maximum likelihood estimates as target variables.

Figure 1 illustrates the intraday dynamics of the estimates of the Hawkes model on a specific date. The data used for this illustration are out-of-sample data not used for training. Specifically, the model was estimated on segments of data corresponding to every 2,000 time steps, which corresponds to a time horizon of approximately 10-20 minutes. To create a more continuous graph, the estimation time windows were moved forward gradually with sufficient overlap. The neural network shows results very consistent with MLE.

Figure 1: Intraday estimates of (a) $\mu$, (b) $\beta$, (c) $\alpha_{1}$ and (d) $\alpha_{2}$ for the NBBO of AAPL using MLE and the neural network.

Figure 2 presents the instantaneous intraday annualized Hawkes volatility calculated using MLE and the neural network, with the nonparametric realized volatility of Andersen et al. (2003) as a benchmark. The realized volatility is calculated from the price process observed at 1-second intervals over the period. All three measures exhibit a similar trend throughout the day. Although it is not possible to present all the results examined, in some cases MLE showed unstable volatility dynamics. This is likely because the 2,000-step window used for estimation is a relatively small sample for estimating the parameters of the Hawkes model.

Figure 2: Intraday annualized volatilities for the NBBO of AAPL using MLE and the neural network.

## 5 Conclusion

This study shows that a neural network can accurately estimate time series parameters, with accuracy similar to MLE and much faster computation. While the example used here concerns the calculation of Hawkes volatility, the proposed method can be applied in various fields.
It can be particularly useful when the model is complex and traditional estimation procedures are challenging, such as in modeling the entire limit order book. Further research in this area is expected to be ongoing and diverse.

## Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1C1C1007692).

## References

* Andersen et al. (2003) Andersen, T. G., T. Bollerslev, F. X. Diebold, and P. Labys (2003). Modeling and forecasting realized volatility. Econometrica 71, 579–625. * Bacry et al. (2013) Bacry, E., S. Delattre, M. Hoffmann, and J.-F. Muzy (2013). Modelling microstructure noise with mutually exciting point processes. Quantitative Finance 13, 65–77. * Cho et al. (2014) Cho, K., B. Van Merriënboer, D. Bahdanau, and Y. Bengio (2014). On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. * Ghosh et al. (2022) Ghosh, P., A. Neufeld, and J. K. Sahoo (2022). Forecasting directional movements of stock prices for intraday trading using LSTM and random forests. Finance Research Letters 46, 102280. * Hawkes (1971) Hawkes, A. G. (1971). Point spectra of some mutually exciting point processes. Journal of the Royal Statistical Society. Series B (Methodological) 33, 438–443. * Hochreiter and Schmidhuber (1997) Hochreiter, S. and J. Schmidhuber (1997). Long short-term memory. Neural Computation 9, 1735–1780. * Kingma and Ba (2014) Kingma, D. P. and J. Ba (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. * Lee (2022) Lee, K. (2022). Application of Hawkes volatility in the observation of filtered high-frequency price process in tick structures. arXiv preprint arXiv:2207.05939. * Lee and Seo (2017) Lee, K. and B. K. Seo (2017). Marked Hawkes process modeling of price dynamics and volatility estimation. Journal of Empirical Finance 40, 174–200. * Mei and Eisner (2017) Mei, H. and J. M. Eisner (2017). The neural Hawkes process: A neurally self-modulating multivariate point process. Advances in Neural Information Processing Systems 30. * Wang et al. (2022) Wang, X., J. Feng, Q. Liu, Y. Li, and Y. Xu (2022). Neural network-based parameter estimation of stochastic differential equations driven by Lévy noise. Physica A: Statistical Mechanics and its Applications 606, 128146. * Wei and Jiang (2022) Wei, Y. M. and Z. Jiang (2022). Estimating parameters of structural models using neural networks. USC Marshall School of Business Research Paper. * Wlas et al. (2008) Wlas, M., Z. Krzeminski, and H. A. Toliyat (2008). Neural-network-based parameter estimations of induction motors. IEEE Transactions on Industrial Electronics 55, 1783–1794. * Zhang et al. (2021) Zhang, Y., G. Chu, and D. Shen (2021). The role of investor attention in predicting stock prices: The long short-term memory networks perspective. Finance Research Letters 38, 101484.
# LiteTransformerSearch: Training-free Neural Architecture Search for Efficient Language Models

Mojan Javaheripi1, Gustavo H. de Rosa2, Subhabrata Mukherjee2, Shital Shah2, Tomasz L. Religa3, Caio C.T. Mendes2, Sebastien Bubeck2, Farinaz Koushanfar1, Debadeepta Dey2

1University of California San Diego, 2Microsoft Research, 3Microsoft

<EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

The Transformer architecture is ubiquitously used as the building block of large-scale autoregressive language models. However, finding architectures with the optimal trade-off between task performance (perplexity) and hardware constraints like peak memory utilization and latency is non-trivial. This is exacerbated by the proliferation of various hardware. We leverage the somewhat surprising empirical observation that the number of decoder parameters in autoregressive Transformers has a high rank correlation with task performance, irrespective of the architecture topology. This observation organically induces a simple Neural Architecture Search (NAS) algorithm that uses decoder parameters as a proxy for perplexity without the need for any model training. The search phase of our training-free algorithm, dubbed Lightweight Transformer Search (LTS; code available at https://github.com/microsoft/archai/tree/neurips_lts/archai/nlp), can be run directly on target devices since it does not require GPUs. Using on-target-device measurements, LTS extracts the Pareto-frontier of perplexity versus any hardware performance cost. We evaluate LTS on diverse devices, from ARM CPUs to NVIDIA GPUs, and on two popular autoregressive Transformer backbones: GPT-2 and Transformer-XL. Results show that the perplexity of $16$-layer GPT-2 and Transformer-XL can be achieved with up to $1.5\times$ and $2.5\times$ faster runtime, respectively, and $1.2\times$ and $2.0\times$ lower peak memory utilization. When evaluated in zero and one-shot settings, LTS Pareto-frontier models achieve higher average accuracy than the $350$M parameter OPT across $14$ tasks, with up to $1.6\times$ lower latency. LTS extracts the Pareto-frontier in under $3$ hours while running on a commodity laptop. We effectively remove the carbon footprint of hundreds of GPU hours of training during search, offering a strong and simple baseline for future NAS methods in autoregressive language modeling.

## 1 Introduction

The Transformer architecture [42] has been used as the de-facto building block of most pre-trained language models like GPT [5]. A common problem arises when one tries to create smaller versions of Transformer models for edge or real-time applications (e.g., text prediction) with strict memory and latency constraints: it is not clear what the architectural hyperparameters should be, e.g., the number of attention heads, the number of layers, the embedding dimension, and the inner dimension of the feed-forward network. This problem is exacerbated if each Transformer layer is allowed the freedom to have different values for these settings, which results in a combinatorial explosion of architectural hyperparameter choices and a large heterogeneous search space. For instance, the search space considered in this paper consists of over $10^{54}$ possible architectures. Neural Architecture Search (NAS) is an organic solution due to its ability to automatically search through candidate models with multiple conflicting objectives like latency vs. task performance.
The central challenge in NAS is the prohibitively expensive function evaluation, i.e., evaluating each architecture requires training it on the dataset at hand. Thus it is often infeasible to evaluate more than a handful of architectures during the search phase. Supernets [31] have emerged as a dominant paradigm in NAS which combine all possible architectures into a single graph and jointly train them using weight-sharing. Nevertheless, supernet training imposes constraints on the expressiveness of the search space [29] and is often memory-hungry [52, 6, 51] as it creates large networks during search. Additionally, training supernets is non-trivial, as child architectures may interfere with each other and the ranking between sub-architectures based on task performance is not preserved [29] (see [29] for a comprehensive treatment of the difficulties of training supernets). We consider a different approach by proposing a training-free proxy that provides a highly accurate ranking of candidate architectures during NAS without the need for costly function evaluation or supernets. Our scope is NAS for efficient autoregressive Transformers used in language modeling. We design a lightweight search method that is target hardware-aware and outputs a gallery of models on the Pareto-frontier of perplexity versus hardware metrics. We term this method Lightweight Transformer Search (LTS). LTS relies on our somewhat surprising observation: the decoder parameter count has a high rank correlation with the perplexity of fully trained autoregressive Transformers. Given a set of autoregressive Transformers, one can accurately rank them using decoder parameter count as the proxy for perplexity. Our observations are also well-aligned with the power laws in [22], shown for homogeneous autoregressive Transformers, i.e., when all decoder layers have the same configuration. We provide extensive experiments that establish a high rank correlation between perplexity and decoder parameter count for _both_ homogeneous and heterogeneous search spaces.

Figure 1: High-level overview of LTS. We propose a training-free zero-cost proxy for evaluating the validation perplexity of candidate architectures. Pareto-frontier search is powered by evolutionary algorithms which use the proposed proxy along with real latency and memory measurements on the target hardware to evaluate sampled architectures.

The above phenomenon, coupled with the fact that a candidate architecture’s hardware performance can be measured on the target device, leads to a training-free search procedure: pick one’s favorite discrete search algorithm (e.g., evolutionary search); sample candidate architectures from the search space; count their decoder parameters as a proxy for task performance (i.e., perplexity); measure their hardware performance (e.g., latency and memory) directly on the target device; and progressively create a Pareto-frontier estimate. While we have chosen a reasonable search algorithm in this work, one can plug and play any Pareto-frontier search method such as those in [20]. Building upon these insights, Figure 1 shows a high-level overview of LTS. We design the first training-free Transformer search that is performed entirely on the target (constrained) platform. As such, LTS easily performs a multi-objective NAS where several underlying hardware performance metrics, e.g., latency and peak memory utilization, are simultaneously optimized.
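To make this concrete, below is a minimal sketch (our illustration, not the authors' released code) of ranking candidates by decoder parameter count. The per-layer formula assumes standard multi-head attention and feed-forward blocks with biases and two LayerNorms per block, so exact counts will differ across backbone implementations; the two candidate configurations are hypothetical.

```python
# Minimal sketch: decoder (non-embedding) parameter count as a
# training-free proxy for perplexity. Shapes assume a standard decoder
# block; exact counts depend on the backbone implementation.

def decoder_params(d_model, d_inners):
    """d_inners holds one feed-forward width per layer (heterogeneous space)."""
    total = 0
    for d_inner in d_inners:
        attn = 4 * d_model * d_model + 4 * d_model       # QKV + output projection (+biases)
        ffn = 2 * d_model * d_inner + d_inner + d_model  # two linear layers (+biases)
        norms = 2 * (2 * d_model)                        # two LayerNorms (scale + shift)
        total += attn + ffn + norms
    return total

# Rank two hypothetical candidates: a larger decoder parameter count
# predicts a lower (better) validation perplexity.
cand_a = decoder_params(512, [1024] * 8)
cand_b = decoder_params(640, [1280] * 6)
print(cand_a, cand_b, "prefer A" if cand_a > cand_b else "prefer B")
```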
Using our training-free proxy, we extract the $3$-dimensional Pareto-frontier of perplexity versus latency and memory in a record-breaking time of $<3$ hours on a commodity Intel Core i7 CPU. Notably, LTS eliminates the carbon footprint from hundreds of GPU hours of training associated with legacy NAS methods. To corroborate the effectiveness of our proxy, we train over $2900$ Transformers on three large language modeling benchmark datasets, namely, WikiText-103 [27], One Billion Word [7], and the Pile [17]. We use LTS to search for Pareto-optimal architectural hyperparameters in two popularly used autoregressive Transformer backbones, namely, Transformer-XL [10] and GPT-2 [32]. We believe decoder parameter count should be regarded as a competitive baseline for evaluating Transformer NAS, both in terms of ranking capability and ease of computation. We open-source our code along with tabular information on our trained models to foster future NAS research on Transformers.

## 2 Related Work

Here, we discuss the literature on automated search for Transformer architectures in the language domain. We refer to extensive surveys on NAS [14, 49] for a broader overview of the field.

Decoder-only Architectures. So et al. [37] search over TensorFlow programs that implement an autoregressive language model via evolutionary search. Since most random sequences of programs either have errors or underperform, the search has to be seeded with the regular Transformer architecture; the discovered model is termed “Primer”. As opposed to “Primer”, which uses large computation to search a general space, we aim to efficiently search the “backbone” of traditional decoder-only Transformers. Additionally, the objective in “Primer” is to find models that train faster. Our objective for NAS, however, is to deliver Pareto-frontiers for inference, with respect to perplexity and hardware constraints.

Encoder-only Architectures. Relative to decoder-only autoregressive language models, encoder-only architectures like BERT [11] have received much more recent attention from the NAS community. NAS-BERT [50] trains a supernet to efficiently search for masked language models (MLMs) which are compressed versions of the standard BERT. Such models can then be used in downstream tasks as is standard practice. Similar to NAS-BERT, Xu et al. [51] train a supernet to conduct architecture search with the aim of finding more efficient BERT variants. They find interesting empirical insights into supernet training issues, like differing gradients at the same node from different child architectures and different tensors as input and output at every node in the supernet. The authors propose fixes that significantly improve supernet training. Tsai et al. [41], Yin et al. [53], and Gao et al. [16] also conduct variants of supernet training with the aim of finding more efficient BERT models.

Encoder-Decoder Architectures. Applying the well-known DARTS [24] approach to Transformer search spaces leads to memory-hungry supernets. To mitigate this issue, Zhao et al. [61] propose a multi-split reversible network and a memory-efficient backpropagation algorithm. One of the earliest papers that applied discrete NAS to Transformer search spaces was [36], which uses a modified form of evolutionary search. Due to the expense of directly performing discrete search on the search space, this work incurs an extremely large computation overhead. Follow-up work by [46] uses the Once-For-All [6] approach to train a supernet for encoder-decoder architectures used in machine translation.
Search is performed on subsamples of the supernet that inherit weights to estimate task accuracy. For each target device, the authors train a small neural network regressor on thousands of architectures to estimate latency. As opposed to using a latency estimator, LTS evaluates the latency of each candidate architecture on the target hardware. Notably, by performing the search directly on the target platform, LTS can easily incorporate various hardware performance metrics, e.g., peak memory utilization, for which accurate estimators may not exist. To the best of our knowledge, such holistic integration of multiple hardware metrics in Transformer NAS has not been explored previously.

## 3 Lightweight Transformer Search

We perform an evolutionary search over candidate architectures to extract models that lie on the Pareto-frontier. In contrast to the vast majority of prior methods that deliver a single architecture from the search space, our search is performed over the entire Pareto-frontier, generating architectures with a wide range of latency, peak memory utilization, and perplexity in one round of search. This alleviates the need to repeat the NAS algorithm for each hardware performance constraint. To evaluate candidate models during the search, LTS uses a training-free proxy for the validation perplexity. By incorporating training-free evaluation metrics, LTS, for the first time, performs the entire search directly on the target (constrained) hardware. Therefore, we can use real measurements of hardware performance during the search. Algorithm 1 outlines the iterative process performed in LTS for finding candidate architectures in the search space ($\mathcal{D}$) that lie on the $3$-dimensional Pareto-frontier ($\mathbb{F}$) of perplexity versus latency and memory. (The Pareto-frontier search method in Algorithm 1 is inspired by [13] and [21]; other possibilities include variations proposed in [20], evaluation of which is orthogonal to our contributions in this work.)

Algorithm 1: LTS’s training-free NAS
Input: search space $\mathcal{D}$, $n_{iter}$
Output: perplexity-latency-memory Pareto-frontier $\mathbb{F}$
1. $\mathcal{L},\mathcal{M},\mathcal{P},\mathbb{F}\leftarrow\emptyset,\emptyset,\emptyset,\emptyset$
2. while $N\leq n_{iter}$ do
3.   $\mathbb{F}^{\prime}\leftarrow$ Subsample($\mathbb{F}$)
4.   $\mathbb{S}_{N}\leftarrow EA(\mathbb{F}^{\prime},\mathcal{D})$
5.   $\mathcal{L}\leftarrow\mathcal{L}\bigcup$ Latency($\mathbb{S}_{N}$)  // hardware profiling
6.   $\mathcal{M}\leftarrow\mathcal{M}\bigcup$ Memory($\mathbb{S}_{N}$)
7.   $\mathcal{P}\leftarrow\mathcal{P}\bigcup$ Proxy($\mathbb{S}_{N}$)  // estimate perplexity
8.   $\mathbb{F}\leftarrow$ LowerConvexHull($\mathcal{P},\mathcal{L},\mathcal{M}$)  // update the Pareto-frontier

At each iteration, a set of points ($\mathbb{F}^{\prime}$) is subsampled from the current Pareto-frontier. A new batch of architectures ($\mathbb{S}_{N}$) is then sampled from $\mathbb{F}^{\prime}$ using evolutionary algorithms ($EA(\cdot)$). The new samples are evaluated in terms of latency ($\mathcal{L}$), peak memory utilization ($\mathcal{M}$), and validation perplexity ($\mathcal{P}$). Latency and memory are measured directly on the target hardware, while the perplexity is indirectly estimated using our accurate and training-free proxy methods. Finally, the Pareto-frontier is recalibrated using the lower convex hull of all sampled architectures.
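For readers who prefer code, a compact Python rendering of this loop is sketched below. The search space, proxy, and profilers are toy stand-ins (assumptions of ours; LTS measures latency and memory on the actual target device), and the LowerConvexHull step is simplified to plain nondominated filtering.

```python
import random

def dominates(q, p):
    """q dominates p if it is no worse on every objective and better on one."""
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def pareto_front(entries):
    """entries: list of (objectives, arch); keeps nondominated points.
    A simplification of the LowerConvexHull step in Algorithm 1."""
    return [e for e in entries if not any(dominates(f[0], e[0]) for f in entries)]

# Toy stand-ins for the real search space, proxy, and on-device profilers.
def sample_arch():
    return {"n_layer": random.randrange(2, 17),
            "d_model": random.choice(range(128, 1025, 64))}

def mutate(arch):
    child = dict(arch)
    key = random.choice(list(child))
    child[key] = sample_arch()[key]
    return child

def proxy(a):    # decoder parameter count; higher predicts better perplexity
    return a["n_layer"] * 8 * a["d_model"] ** 2

def latency(a):  # placeholder; LTS measures this on the target device
    return 1e-6 * a["n_layer"] * a["d_model"]

def memory(a):   # placeholder peak-memory model
    return 4e-9 * a["n_layer"] * a["d_model"] ** 2

def lts_search(n_iter=30, batch=40):
    evaluated = [((-proxy(a), latency(a), memory(a)), a)
                 for a in (sample_arch() for _ in range(batch))]
    for _ in range(n_iter):
        front = pareto_front(evaluated)                      # current frontier
        parents = random.sample(front, min(20, len(front)))  # Subsample(F)
        children = [mutate(random.choice(parents)[1]) for _ in range(batch)]
        evaluated += [((-proxy(a), latency(a), memory(a)), a) for a in children]
    return [a for _, a in pareto_front(evaluated)]

print(len(lts_search()))
```

The perplexity proxy is negated so that all three objectives are minimized, which keeps the dominance check uniform.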
In the context of multi-objective NAS, Pareto-frontier points are those where no single metric (e.g., perplexity, latency, or memory) can be improved without degrading at least one other metric [20]. To satisfy application-specific needs, optional upper bounds can be placed on the latency and/or memory of sampled architectures during search.

Search Space. Figure 2 shows all elastic parameters in the LTS search space, namely, the number of layers (nlayer), number of attention heads (nhead), decoder output dimension (dmodel), inner dimension of the feed-forward network (dinner), embedding dimension (dembed), and the division factor ($k$) of adaptive embedding [3]. These architectural parameters are compatible with popularly used autoregressive Transformer backbones, e.g., GPT. For preliminaries on autoregressive Transformers, please see Appendix A.

Figure 2: Elastic parameters in the LTS search space.

We adopt a heterogeneous search space where the backbone parameters are decided on a per-layer basis. This is in contrast to the homogeneous structure commonly used in Transformers [10, 5], which reuses the same configuration for all layers. Compared to homogeneous models, the flexibility associated with heterogeneous architectures enables them to obtain much better hardware performance under the same perplexity budget (see Section 4.4). A heterogeneous search space was previously explored in [46]. However, due to the underlying supernet structure, not all design parameters can change freely. As an example, the dimensionality of the Q, K, V vectors inside the encoder and decoder layers is fixed to a large value of $512$ to accommodate inheritance from the supernet. Our search space, however, allows exploration of all internal dimensions without constraints. By not relying on the supernet structure, our search space easily encapsulates various Transformer backbones with different configurations of the input/output embedding layers and elastic internal dimensions.

LTS searches over the following values for the architectural parameters in our backbones: nlayer $\in\{2,\dots,16\,|\,1\}$ (we use the notation $\{v_{min},\dots,v_{max}\,|\,\text{step size}\}$ to show the valid range of values), dmodel $\in\{128,\dots,1024\,|\,64\}$, dinner $\in\{256,\dots,4096\,|\,64\}$, and nhead $\in\{2,4,8\}$. Additionally, we explore adaptive input embedding [3] with dembed $\in\{128,256,512\}$ and factor $k\in\{1,2,4\}$. Once a dmodel is sampled, we adjust the lower bound of the above range for dinner to $2\times$dmodel. Encoding this heuristic inside the search ensures that the acquired models will not suffer from training collapse. Our heterogeneous search space encapsulates more than $10^{54}$ different architectures. Such high dimensionality further validates the critical need for training-free NAS.

### 3.1 Training-free Architecture Ranking

$\blacktriangleright$ Low-cost Ranking Proxies. Recently, Abdelfattah et al. [1] utilize the summation of pruning scores over all model weights as the ranking proxy for Convolutional Neural Networks (CNNs), where a higher score corresponds to a higher architecture rank in the search space. White et al. [48] analyze these and more recent proxies and find that no particular proxy performs consistently well over various tasks and baselines, while parameter and floating point operations (FLOPs) count proxies are quite competitive. However, they did not include Transformer-based search spaces in their analysis.
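To give a flavor of what these pruning-based proxies involve in practice, here is a minimal sketch of one of the simplest, grad_norm, applied to a toy PyTorch language model. This is our illustrative rendering, not the implementation from [1]; TinyLM is a hypothetical stand-in, and, anticipating the caveat raised in the next paragraph, embedding parameters are excluded from the score.

```python
import torch
import torch.nn as nn

def grad_norm_score(model, input_ids, targets):
    """grad_norm proxy: sum of per-parameter gradient norms after a single
    forward/backward pass on one mini-batch (cf. Abdelfattah et al. [1]).
    Embedding (lookup) parameters are skipped, as discussed in the text."""
    model.zero_grad()
    logits = model(input_ids)
    loss = nn.functional.cross_entropy(
        logits.view(-1, logits.size(-1)), targets.view(-1))
    loss.backward()
    score = 0.0
    for name, p in model.named_parameters():
        if "embed" in name or p.grad is None:  # skip lookup layers
            continue
        score += p.grad.norm().item()
    return score

# Toy autoregressive LM to exercise the proxy (an illustrative stand-in).
class TinyLM(nn.Module):
    def __init__(self, vocab=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, x):
        return self.head(self.block(self.embed(x)))

x = torch.randint(0, 1000, (2, 16))
print(grad_norm_score(TinyLM(), x, x))
```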
To the best of our knowledge, low-cost (pruning-based) proxies have not been evaluated on Transformer search spaces in the language domain. Note that one cannot naively apply these proxies to language models. Specifically, since the embedding layer in Transformers is equivalent to a lookup operation, special care must be taken to omit this layer from the proxy computation. Using this insight, we perform the first systematic study of low-cost proxies for NAS on autoregressive Transformers for text prediction. We leverage various pruning metrics, namely, grad_norm, snip [23], grasp [45], fisher [40], and synflow [38]. We also study jacob_cov [26] and relu_log_det [25], which are low-cost scoring mechanisms proposed for NAS on CNNs in vision tasks. While these low-cost techniques do not perform model training, they require forward and backward passes over the architecture to compute the proxy, which can be time-consuming on low-end hardware. Additionally, the aforesaid pruning techniques, by definition, incorporate the final softmax projection layer in their score assessment. Such an approach seems reasonable for CNNs dealing with a few classification labels; however, it can skew the evaluation for autoregressive Transformers dealing with a large output vocabulary space. To overcome these shortcomings, we introduce a zero-cost architecture ranking strategy in the next section that outperforms the aforementioned low-cost proxies in terms of ranking precision, is data-free, and does not perform any forward/backward propagation.

Figure 3: Our training-free zero-cost proxy based on decoder parameter count is highly correlated with the (ground truth) validation perplexity after full training. Each plot contains $200$ architectures sampled randomly from the search space of the Transformer-XL or GPT-2 backbone.

$\blacktriangleright$ Decoder Parameter Count as a Proxy. We empirically establish a strong correlation between the parameter count of decoder layers and final model performance in terms of validation perplexity. We evaluate $200$ architectures sampled uniformly at random from the search space of two autoregressive Transformer backbones, namely, Transformer-XL and GPT-2. These architectures are trained fully on the WikiText-103 and One Billion Word (LM1B) datasets, which consumes over $25{,}000$ GPU-hours on NVIDIA A100 and V100 nodes. We compare the ranking obtained using the decoder parameter count proxy and the ground truth ranking after full training in Figure 3. On WikiText-103, zero-cost ranking using the decoder parameter count obtains a Spearman’s Rank Correlation (SRC) of $0.97$ with full training. SRC further increases to $0.99$ for the more complex LM1B benchmark on both backbones. This validates that the decoder parameter count is strongly correlated with final model performance, thereby providing a reliable training-free proxy for NAS.

## 4 Experiments

We conduct experiments to seek answers to the following critical questions: (1) How well can training-free proxies perform compared to training-based methods for estimating the performance of Transformer models? (2) How does model topology affect the performance of the proposed decoder parameter proxy? (3) Can our training-free decoder parameter count proxy be integrated inside a search algorithm to estimate the Pareto-frontier? How accurate is such an estimate of the Pareto-frontier? (4) Which models are on the Pareto-frontier of perplexity, latency, and memory for different hardware?
(5) How well do LTS models perform in zero and one-shot settings compared to hand-designed variants when evaluated on downstream tasks? We empirically answer questions 1, 2, 4, and 5 in Sections 4.2, 4.3, 4.4, and 4.5, respectively. We further address question 3 in Appendix C, where we show that the Pareto-frontier models extracted by the decoder parameter count proxy are very close to the ground truth Pareto-frontier, with an average perplexity difference of $0.6\%$. Additionally, we show the efficacy of the decoder parameter count proxy when performing search on different ranges of model sizes in Appendix C, Figure 12.

### 4.1 Experimental Setup

Please refer to Appendix B for information about the benchmarked datasets, along with details of our training and evaluation setup, hyperparameter optimization, and evolutionary search algorithm.

Backbones. We apply our search to two widely used autoregressive Transformer backbones, namely, Transformer-XL [10] and GPT-2 [32], which are trained from scratch with varying architectural hyperparameters. The internal structure of these backbones is quite similar, containing decoder blocks with attention and feed-forward layers. The difference between the backbones lies mainly in their dataflow structure; the Transformer-XL backbone adopts a recurrence methodology over past states coupled with relative positional encoding, which enables modeling longer-term dependencies.

Performance Criteria. To evaluate the ranking performance of various proxies, we first establish a ground truth ranking of candidate architectures by training them until full convergence. This ground truth ranking is then utilized to compute two performance criteria as follows (a short code sketch of both criteria appears below):

$\blacktriangleright$ Common Ratio (CR): We define CR as the percentage overlap between the ground truth ranking of architectures and the ranking obtained from the proxy. CR quantifies the ability of the proxy ranking to identify the top-$k\%$ architectures based on their validation perplexity after full training.

$\blacktriangleright$ Spearman’s Rank Correlation (SRC): We use this metric to measure the correlation between the proxy ranking and the ground truth. Ideally, the proxy ranking should have a high correlation with the ground truth over the entire search space as well as among high-performing candidate models.

### 4.2 How do training-free proxies perform compared to training-based methods?

In this section, we benchmark several proxy methods for estimating the rank of candidate architectures. Specifically, we investigate three different ranking techniques, namely, partial training, low-cost methods, and the number of decoder parameters.

$\blacktriangleright$ Partial Training. We first analyze the relationship between the validation perplexity after a shortened training period and that of full training for ranking candidate models. We stop the training after $\tau\in[1.25\%,87.5\%]$ of the total training iterations needed for model convergence. Figure 4 demonstrates the SRC and CR of partial training with various $\tau$s, evaluated on $100$ randomly selected models from the Transformer-XL backbone, trained on WikiText-103. The horizontal axis denotes the average time required for $\tau$ iterations of training across all sampled models. Intuitively, a higher number of training iterations results in a more accurate estimate of the final perplexity. Nevertheless, the increased wall-clock time prohibits training during search and also imposes the need for GPUs.
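The promised sketch of the two criteria follows; spearmanr from scipy computes SRC, common_ratio implements our reading of the CR definition (overlap of the top-$k\%$ sets under the proxy and under full training), and the data at the bottom is synthetic.

```python
import numpy as np
from scipy.stats import spearmanr

def src(proxy_scores, true_ppl):
    """Spearman's rank correlation; perplexity is negated so that a
    higher proxy score should align with a better (lower-ppl) model."""
    rho, _ = spearmanr(proxy_scores, -np.asarray(true_ppl))
    return rho

def common_ratio(proxy_scores, true_ppl, k=0.1):
    """Overlap between the top-k% sets under the proxy vs. full training."""
    n = len(true_ppl)
    top = max(1, int(k * n))
    by_proxy = set(np.argsort(proxy_scores)[::-1][:top])  # high proxy = good
    by_truth = set(np.argsort(true_ppl)[:top])            # low ppl = good
    return len(by_proxy & by_truth) / top

# Toy check: proxy = decoder params, ground truth = noisy inverse relation.
rng = np.random.default_rng(0)
params = rng.uniform(5e6, 60e6, size=200)
ppl = 1e5 / np.sqrt(params) * rng.lognormal(0.0, 0.05, size=200)
print(f"SRC = {src(params, ppl):.2f}, CR@top-10% = {common_ratio(params, ppl):.2f}")
```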
Interestingly, very few training iterations, i.e., $1.25\%$, provide a very good proxy for final performance with an SRC of $>0.9$ on the entire population. Our training-free proxy, i.e., decoder parameter count, also shows a competitive SRC compared to partial training.

Figure 4: Comparison between partial training and our zero-cost proxy, i.e., decoder parameter count, in terms of ranking performance and timing overhead. Each subplot corresponds to a top-$k\%$ of the randomly sampled models, based on their validation perplexity after full training.

$\blacktriangleright$ Low-cost Proxies. We benchmark the various low-cost methods introduced in Section 3.1 on $200$ randomly sampled architectures from the Transformer-XL backbone, trained on WikiText-103. Figure 5 shows the SRC between low-cost proxies and the ground truth ranking after full training.

Figure 5: SRC between low-cost proxies and the ground truth ranking after full training of $200$ randomly sampled Transformers. The decoder parameter count obtains the best SRC with zero cost. We measure the cost of each proxy in terms of FLOPs.

As seen, the evaluated low-cost proxies have a strong correlation with the ground truth ranking (even the lowest-performing relu_log_det has $>0.8$ SRC), validating the effectiveness of training-free NAS on autoregressive Transformers. The lower performance of relu_log_det can be attributed to the much higher frequency of ReLU activations in CNNs, for which the method was originally developed, compared to Transformer-based architectures. Our analysis of randomly selected models with homogeneous structures also shows a strong correlation between the low-cost proxies and validation perplexity, with decoder parameter count outperforming the other proxies (see Appendix D).

$\blacktriangleright$ Parameter Count. Figure 6a demonstrates the final validation perplexity versus the total number of model parameters for $200$ randomly sampled architectures from the GPT-2 and Transformer-XL backbones. This figure contains two important observations: (1) the validation perplexity has a downward trend as the number of parameters increases; and (2) the discontinuity is caused by the dominance of embedding parameters when moving to the small Transformer regime. We highlight several example points in Figure 6a where the architectures are nearly identical but the adaptive input embedding factor $k$ is changed. Changing $k\in\{1,2,4\}$ (shown with different colors in Figure 6a) varies the total parameter count without much influence on the validation perplexity.

Figure 6: (a) Validation perplexity after full training versus total parameters for $200$ randomly sampled architectures trained on WikiText-103. The clear downward trend suggests a strong correlation between parameter count and perplexity. (b), (c) Performance of parameter count proxies for ranking the randomly sampled architectures with Transformer-XL and GPT-2 backbones.

The above observations motivate us to evaluate two proxies, i.e., the total number of parameters and the decoder parameter count. Figures 6b and 6c demonstrate the CR and SRC metrics evaluated on the $200$ randomly sampled models divided into top-$k\%$ bins based on their validation perplexity. As shown, the total number of parameters generally has a lower SRC with the validation perplexity compared to the decoder parameter count. This is due to the masking effect of embedding parameters, particularly in the Transformer-XL backbone.
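The masking effect is easy to reproduce with back-of-the-envelope counts. The sketch below contrasts embedding and decoder parameters for a small word-level model with adaptive input embeddings; the vocabulary size follows Table 2 in Appendix B, while the three-cluster adaptive-embedding estimate and its cutoffs are simplifying assumptions of ours.

```python
# Rough illustration: with a 267,735-token word-level vocabulary (Table 2),
# embedding parameters dwarf decoder parameters for small Transformers, so
# ranking by total parameter count becomes unreliable. The adaptive-embedding
# count assumes three vocabulary clusters, each k-times narrower than the
# last, plus up-projections back to d_embed (a simplification).

VOCAB = 267_735

def embedding_params(d_embed, k, cuts=(20_000, 60_000)):  # cutoffs: assumed
    sizes = [cuts[0], cuts[1] - cuts[0], VOCAB - cuts[1]]
    total = 0
    for i, n in enumerate(sizes):
        dim = d_embed // (k ** i)
        total += n * dim + dim * d_embed  # lookup table + projection
    return total

def decoder_params(n_layer, d_model, d_inner):
    return n_layer * (4 * d_model ** 2 + 2 * d_model * d_inner)

for k in (1, 2, 4):
    emb = embedding_params(d_embed=256, k=k)
    dec = decoder_params(n_layer=4, d_model=256, d_inner=512)
    print(f"k={k}: embedding={emb / 1e6:5.1f}M  decoder={dec / 1e6:4.1f}M  "
          f"total={(emb + dec) / 1e6:5.1f}M")
```

Under these assumptions, changing $k$ alone moves the total parameter count by tens of millions of parameters while the decoder parameter count stays fixed, which is precisely the discontinuity highlighted in Figure 6a.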
The total number of decoder parameters, however, provides a highly accurate, zero-cost proxy, with an SRC of $0.97$ with the perplexity over all models after full training. We further show the high correlation between decoder parameter count and validation perplexity for Transformer architectures with homogeneous decoder blocks in the supplementary material, Appendix D. While our main focus is on autoregressive, decoder-only Transformers, we provide preliminary results on the ranking performance of parameter count proxies for encoder-only and encoder-decoder Transformers in Appendix J.

### 4.3 How does variation in model topology affect decoder parameter count as a proxy?

The low-cost proxies introduced in Section 3.1 rely on forward and backward passes through the network. As such, they automatically capture the topology of the underlying architecture via the dataflow. The decoder parameter count proxy, however, is topology-agnostic. In this section, we investigate the effect of topology on the performance of the decoder parameter count proxy. Specifically, we seek to answer whether, for a given decoder parameter count budget, the aspect ratio of the architecture, i.e., trading off the width versus the depth, can affect the final validation perplexity. We define the aspect ratio of the architecture as dmodel (width) divided by nlayer (depth). This metric provides a sense of how skewed the topology is and has been used in prior works which study scaling laws for language models [22]. For a given decoder parameter count budget, we generate several random architectures from the GPT-2 backbone with a wide range of width-to-depth aspect ratios (we control the aspect ratio by changing the width, i.e., dmodel, while keeping dinner = $2\times$dmodel and nhead = $8$; the number of layers is then derived such that the total parameter count remains the same). The generated models span wide, shallow topologies (e.g., dmodel = $1256$, nlayer = $2$) to narrow, deep topologies (e.g., dmodel = $112$, nlayer = $100$). Figure 7(a) shows the validation perplexity of said architectures after full training on WikiText-103 versus their aspect ratio. The maximum deviation (from the median) of the validation perplexity is $<12.8\%$ for a given decoder parameter count, across a wide range of aspect ratios $\in[1,630]$. Our findings on the heterogeneous search space complement the empirical results by [22], where decoder parameter count largely determines perplexity for homogeneous Transformer architectures, irrespective of shape (see Figure 5 in [22]). We observe stable training when scaling models from the GPT-2 backbone up to $100$ layers, with the perplexity increasing only when the aspect ratio nears $1$. Nevertheless, such deep models are not part of our search space as they have high latency and are unsuitable for lightweight inference. For the purposes of hardware-aware and efficient Transformer NAS, the decoder parameter count proxy holds a very high correlation with validation perplexity, regardless of the architecture topology, as shown in Figure 7(a). We further validate the effect of topology on the decoder parameter count proxy for the Transformer-XL backbone in Figure 14 of Appendix E. Our results demonstrate less than $7\%$ deviation (from the median) in validation perplexity for different aspect ratios $\in[8,323]$.
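The width/depth recipe from the parenthetical above can be written out directly. The sketch below derives the depth for a target decoder parameter budget, approximating per-layer decoder parameters as $8\cdot$dmodel$^2$ (attention plus a $2\times$-wide feed-forward block, ignoring biases and LayerNorms); the $25$M budget is a hypothetical choice.

```python
# Generate configs with (approximately) equal decoder parameter budgets
# across different width-to-depth aspect ratios (dmodel / nlayer).
# Per-layer decoder params ~ 4*d^2 (attention) + 2*d*(2d) (FFN) = 8*d^2.

def config_for(d_model, budget):
    per_layer = 8 * d_model ** 2
    n_layer = max(1, round(budget / per_layer))
    return {"d_model": d_model, "d_inner": 2 * d_model, "n_head": 8,
            "n_layer": n_layer, "aspect": d_model / n_layer,
            "decoder_params": n_layer * per_layer}

BUDGET = 25e6  # hypothetical 25M decoder-parameter budget
for d in (128, 320, 640, 1280):
    c = config_for(d, BUDGET)
    print(f"dmodel={c['d_model']:>4}  nlayer={c['n_layer']:>3}  "
          f"aspect={c['aspect']:7.1f}  decoder_params={c['decoder_params'] / 1e6:5.1f}M")
```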
Note that while models with the same parameter count have very similar validation perplexities, the topology in fact affects their hardware performance, i.e., latency (up to $2.8\times$) and peak memory utilization (up to $5.5\times$), as shown in Figures 7(b) and 7(c). This motivates the need for incorporating hardware metrics in NAS to find the best topology.

Figure 7: Validation perplexity after full training versus the (a) width-to-depth aspect ratio, (b) latency, and (c) peak memory utilization. Models are randomly generated from the GPT-2 backbone and trained on WikiText-103. For a given decoder parameter count, we observe low variation in perplexity across different models, regardless of their topology. The topology, however, significantly affects the latency (up to $2.8\times$) and peak memory utilization (up to $5.5\times$) for models with the same perplexity.

### 4.4 Pareto-frontier models for various hardware platforms

We run LTS on different target hardware and obtain a range of Pareto-optimal architectures with various latency/memory/perplexity characteristics. During search, we fix the adaptive input embedding factor to $k=4$ to search for models that are lightweight while ensuring nearly on-par validation perplexity with non-adaptive input embedding. As the baseline Pareto-frontier, we benchmark the Transformer-XL (base) and GPT-2 (small) models with homogeneous layers $\in[1,16]$. This is because the straightforward way to produce architectures of different latency/memory is varying the number of layers (layer-scaling) [42, 10]. We compare our NAS-generated architectures with layer-scaled backbones and achieve better validation perplexity and/or lower latency and peak memory utilization. All baseline and NAS-generated models are trained using the same setup, enclosed in Table 2 of Appendix B. (The best reported result in the literature for GPT-2 or Transformer-XL might differ based on the specific training hyperparameters, which is orthogonal to our investigation.)

Figure 8 shows the Pareto-frontier architectures found by LTS versus the layer-scaled baseline. Here, all models are trained on the LM1B dataset (see Figure 16 in Appendix G for results on WikiText-103). Note that the Pareto-frontier search is performed in a $3$-dimensional space, an example of which is enclosed in Appendix F, Figure 15. For better visualization, in Figure 8 we plot $2$-dimensional slices of the Pareto-frontier with validation perplexity on the y-axis and one hardware performance metric (either latency or memory) on the x-axis. As seen, in the low-latency regime, LTS consistently finds models that have significantly lower perplexity compared to naive scaling of the baseline Transformer-XL or GPT-2. On the Transformer-XL backbone, LTS finds architectures with an average of $19.8\%$ and $28.8\%$ lower latency and memory, while achieving similar perplexity compared to the baseline on the ARM CPU. Specifically, the perplexity of the $16$-layer Transformer-XL base can be replicated on the ARM device with a lightweight model that is $1.6\times$ faster and utilizes $1.9\times$ less memory during execution. On the Corei7 CPU, the Pareto-frontier models found by LTS are on average $25.8\%$ faster and consume $30.0\%$ less memory under the same validation perplexity constraint. In this setting, LTS finds a model that replicates the perplexity of the $16$-layer Transformer-XL base while achieving $1.7\times$ faster runtime and $1.9\times$ less peak memory utilization.
The savings are even higher on the GPU device, where the NAS-generated models achieve the same perplexity as the baseline with an average of $30.5\%$ lower latency and $27.0\%$ less memory. Specifically, an LTS model with the same perplexity as the $16$-layer Transformer-XL base has $2.5\times$ lower latency and consumes $2.0\times$ less peak memory on the TITAN Xp. On the GPT-2 backbone, NAS-generated models consume on average $11.8\%$ less memory while achieving the same validation perplexity and latency on an ARM CPU. The benefits are larger on Corei7 and TITAN Xp, where the latency savings are $13.8\%$ and $11.9\%$, respectively. The peak memory utilization also decreases by $9.7\%$ and $12.9\%$, on average, compared to the baseline GPT-2s on Corei7 and TITAN Xp. Notably, NAS finds new architectures with the same perplexity as the $16$-layer GPT-2 with $1.3\times$ and $1.5\times$ faster runtime and $1.2\times$ lower memory utilization on Corei7 and TITAN Xp. Our heterogeneous search space allows us to find a better parameter distribution among decoder layers. Therefore, LTS delivers architectures with better performance in terms of perplexity, while reducing both latency and memory when compared to the homogeneous baselines. We provide the architecture of all baseline and LTS models shown in Figure 8 in Tables 4-7 of Appendix I.

Figure 8: 2D visualization of the perplexity versus latency and memory Pareto-frontier found by LTS, versus the scaled backbone models with varying numbers of layers, trained on the LM1B dataset. Architectural parameters for models shown here are detailed in Appendix I.

$\blacktriangleright$ Search Efficiency. The main component in LTS search time is the latency/peak memory utilization measurement for candidate architectures, since evaluating the model perplexity is instant using the decoder parameter count. Therefore, our search finishes in a few hours on commodity hardware, e.g., taking only $0.9$, $2.6$, and $17.2$ hours on a TITAN Xp GPU, a Corei7 CPU, and an ARM core, respectively. To provide more context for the timing analysis, full training of even one $16$-layer Transformer-XL base model on LM1B using a machine with $8\times$ NVIDIA V100 GPUs takes $15.8$ hours. Once the Pareto-frontier models are identified, the user can pick a model based on their desired hardware constraints and fully train it on the target dataset. LTS is an alternate paradigm to that of training large supernets; our search can run directly on the target device, and GPUs are only needed for training the final chosen Pareto-frontier model after search. In Table 1 we study the ranking performance of partial training ($500$ steps) versus the decoder parameter count proxy for evaluating $1200$ architectures from the Transformer-XL backbone during LTS search. Astonishingly, the decoder parameter count proxy achieves a higher SRC than partial training, while effectively removing training from the inner loop of search for NAS.

| Method | Train Iter | GPU Hours | $CO_{2}$e (lbs) | SRC |
|---|---|---|---|---|
| Full Training | 40,000 | 19,024 | 5433 | 1.0 |
| Partial Training | 500 | 231 | 66 | 0.92 |
| Partial Training | 5,000 | 2690 | 768 | 0.96 |
| # Decoder Params | 0 | 0 | $\sim$0 | 0.98 |

Table 1: Ranking abilities of full and partial training versus our proxy for $1200$ models sampled during LTS search. Training time is reported for WikiText-103 and an NVIDIA V100 GPU. The decoder parameter count proxy obtains an SRC of $0.98$ using zero compute.
Figure 9: Average zero and one-shot accuracy obtained by LTS models (dots) and the baseline OPT-$350$M (triangle) across $14$ NLP tasks. Latency is measured on an NVIDIA A6000 GPU. Architectural parameters for all models shown here are detailed in Appendix I.

### 4.5 Zero and one-shot performance comparison of LTS models with OPT

Zhang et al. [58] open-source a set of pre-trained decoder-only language models, called OPT, which can be used for zero or few-shot inference on various NLP tasks. Below, we compare the performance of LTS Pareto-frontier models with the hand-designed OPT architecture in zero and one-shot settings. We use LTS to search for language models with a GPT-2 backbone that have $300$M to $500$M total parameters to compare with the $350$M parameter OPT. Our search space is detailed in Appendix H. The search is conducted with latency as the target hardware metric and decoder parameter count as a proxy for perplexity. Once the search concludes, we train $20$ models from the Pareto-frontier along with OPT-$350$M on $28$B tokens from the Pile [17]. The pretrained models are then evaluated on $14$ downstream NLP tasks, namely, HellaSwag [57], PIQA [4], ARC (easy and challenge) [9], OpenBookQA [28], WinoGrande [34], and the SuperGLUE [44] benchmarks BoolQ, CB, COPA, WIC, WSC, MultiRC, RTE, and ReCoRD. The training hyperparameters and the evaluation setup are outlined in Appendix B. Figure 9 shows the overall average accuracy obtained across all $14$ tasks versus the inference latency for LTS models and the baseline OPT. As shown, NAS-generated models achieve a higher average accuracy with lower latency compared to the hand-designed OPT-$350$M model. We provide a per-task breakdown of zero and one-shot accuracy in Appendix H, Figure 17.

$\blacktriangleright$ Zero-shot Performance. Figure 17(a) demonstrates the zero-shot accuracy obtained by LTS and OPT-$350$M on the benchmarked tasks. Compared to the OPT-$350$M architecture, LTS finds models that achieve higher accuracy and lower latency in the zero-shot setting on all evaluated downstream tasks. Specifically, the maximum achievable accuracy of our NAS-generated models is $0.2-8.6\%$ higher than OPT-$350$M, with an average speedup of $1.2\times$. If latency is prioritized, LTS delivers models which are, on average, $1.5\times$ faster and up to $4.6\%$ more accurate than OPT-$350$M.

$\blacktriangleright$ One-shot Performance. Similar trends can be observed for one-shot evaluation, as shown for different tasks in Figure 17(b). LTS Pareto-frontier models improve the per-task accuracy of OPT-$350$M on $12$ out of $14$ tasks by $0.1-8.0\%$, while achieving an average speedup of $1.2\times$. On the same tasks, the LTS Pareto-frontier includes models that enjoy up to $1.6\times$ speedup over OPT-$350$M with an average $1.5\%$ higher accuracy. On the RTE task, the best LTS model has $0.4\%$ lower accuracy but $1.6\times$ faster runtime. On the WSC task, the best-performing LTS model obtains a similar one-shot accuracy to OPT-$350$M, but with $1.5\times$ faster runtime.

## 5 Limitations and Future Work

Decoder parameter count provides a simple yet accurate proxy for ranking autoregressive Transformers. This should serve as a strong baseline for future work on Transformer NAS. Our focus is mainly on autoregressive, decoder-only Transformers. We therefore study perplexity as the commonly used metric for language modeling tasks.
Nevertheless, recent literature on scaling laws for Transformers suggests that a similar correlation between parameter count and task metrics may exist for encoder-only (BERT-style) Transformers or encoder-decoder models used in neural machine translation (NMT) [19]. Additionally, recent findings [39] show that specific scaling laws exist between model size and downstream task metrics, e.g., GLUE [43]. Inspired by these observations, we provide preliminary studies that suggest parameter count proxies may be applicable to Transformers in other domains. Detailed investigation of such zero-cost proxies for NAS on heterogeneous BERT-style or NMT models with new performance metrics is an important future avenue of research.

## 6 Acknowledgements

Professor Farinaz Koushanfar’s effort has been supported in part by the NSF TILOS AI institute, award number CCF-2112665.

## References

* Abdelfattah et al. [2020] Mohamed S Abdelfattah, Abhinav Mehrotra, Łukasz Dudziak, and Nicholas Donald Lane. Zero-cost proxies for lightweight nas. In _International Conference on Learning Representations_ , 2020. * [2] Meta AI. Codebase for open pre-trained transformers. https://github.com/facebookresearch/metaseq. * Baevski and Auli [2018] Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. In _International Conference on Learning Representations_ , 2018. * Bisk et al. [2020] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 34, pages 7432–7439, 2020. * Brown et al. [2020] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _arXiv preprint arXiv:2005.14165_ , 2020. * Cai et al. [2019] Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. In _International Conference on Learning Representations_ , 2019. * Chelba et al. [2013] Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. _arXiv preprint arXiv:1312.3005_ , 2013. * Chen et al. [2021] Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling. Autoformer: Searching transformers for visual recognition. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pages 12270–12280, 2021. * Clark et al. [2018] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. _arXiv preprint arXiv:1803.05457_ , 2018. * Dai et al. [2019] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 2978–2988, 2019. * Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In _NAACL-HLT (1)_ , 2019. * Dong and Yang [2019] Xuanyi Dong and Yi Yang. Nas-bench-201: Extending the scope of reproducible neural architecture search.
In _International Conference on Learning Representations_ , 2019. * Elsken et al. [2019a] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. In _International Conference on Learning Representations_ , 2019a. * Elsken et al. [2019b] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey, 2019b. * [15] Hugging Face. Openai gpt2 by hugging face. https://huggingface.co/docs/transformers/model_doc/gpt2. * Gao et al. [2021a] Jiahui Gao, Hang Xu, Han shi, Xiaozhe Ren, Philip L. H. Yu, Xiaodan Liang, Xin Jiang, and Zhenguo Li. Autobert-zero: Evolving bert backbone from scratch, 2021a. * Gao et al. [2020] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. _arXiv preprint arXiv:2101.00027_ , 2020. * Gao et al. [2021b] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation. https://github.com/facebookresearch/metaseq, September 2021b. * Ghorbani et al. [2021] Behrooz Ghorbani, Orhan Firat, Markus Freitag, Ankur Bapna, Maxim Krikun, Xavier Garcia, Ciprian Chelba, and Colin Cherry. Scaling laws for neural machine translation. In _International Conference on Learning Representations_ , 2021. * Guerrero-Viu et al. [2021] Julia Guerrero-Viu, Sven Hauns, Sergio Izquierdo, Guilherme Miotto, Simon Schrodi, Andre Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, and Frank Hutter. Bag of baselines for multi-objective joint neural architecture search and hyperparameter optimization. _arXiv preprint arXiv:2105.01015_ , 2021. * Hu et al. [2019] Hanzhang Hu, John Langford, Rich Caruana, Saurajit Mukherjee, Eric J Horvitz, and Debadeepta Dey. Efficient forward architecture search. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/6c468ec5a41d65815de23ec1d08d7951-Paper.pdf. * Kaplan et al. [2020] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. _arXiv preprint arXiv:2001.08361_ , 2020. * Lee et al. [2019] Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY. In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=B1VZqjAcYX. * Liu et al. [2019] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=S1eYHoC5FX. * Mellor et al. [2021a] Joe Mellor, Jack Turner, Amos Storkey, and Elliot J Crowley. Neural architecture search without training. In _International Conference on Machine Learning_ , pages 7588–7598. PMLR, 2021a. * Mellor et al. [2021b] Joseph Mellor, Jack Turner, Amos Storkey, and Elliot J. Crowley. Neural architecture search without training, 2021b. URL https://openreview.net/forum?id=g4E6SAAvACo. * Merity et al. 
[2016] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. _arXiv preprint arXiv:1609.07843_ , 2016. * Mihaylov et al. [2018] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. _arXiv preprint arXiv:1809.02789_ , 2018. * Ning et al. [2021] Xuefei Ning, Changcheng Tang, Wenshuo Li, Zixuan Zhou, Shuang Liang, Huazhong Yang, and Yu Wang. Evaluating efficient performance estimators of neural architectures. _Advances in Neural Information Processing Systems_ , 34, 2021. * [30] NVIDIA. Transformer-xl for pytorch. https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/Transformer-XL. * Pham et al. [2018] Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In _International Conference on Machine Learning_ , pages 4095–4104. PMLR, 2018. * Radford et al. [2019] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. _OpenAI blog_ , 1(8):9, 2019. * Raffel et al. [2020] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. _J. Mach. Learn. Res._ , 21(140):1–67, 2020. * Sakaguchi et al. [2021] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. _Communications of the ACM_ , 64(9):99–106, 2021. * Siems et al. [2020] Julien Siems, Lucas Zimmer, Arber Zela, Jovita Lukasik, Margret Keuper, and Frank Hutter. Nas-bench-301 and the case for surrogate benchmarks for neural architecture search. _arXiv preprint arXiv:2008.09777_ , 2020. * So et al. [2019] David R. So, Chen Liang, and Quoc V. Le. The evolved transformer, 2019. * So et al. [2021] David R. So, Wojciech Mańke, Hanxiao Liu, Zihang Dai, Noam Shazeer, and Quoc V. Le. Primer: Searching for efficient transformers for language modeling, 2021. * Tanaka et al. [2020] Hidenori Tanaka, Daniel Kunin, Daniel L Yamins, and Surya Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. _Advances in Neural Information Processing Systems_ , 33, 2020. * Tay et al. [2021] Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. Scale efficiently: Insights from pre-training and fine-tuning transformers. _arXiv preprint arXiv:2109.10686_ , 2021. * Theis et al. [2018] Lucas Theis, Iryna Korshunova, Alykhan Tejani, and Ferenc Huszár. Faster gaze prediction with dense networks and fisher pruning. _arXiv preprint arXiv:1801.05787_ , 2018. * Tsai et al. [2020] Henry Tsai, Jayden Ooi, Chun-Sung Ferng, Hyung Won Chung, and Jason Riesa. Finding fast transformers: One-shot neural architecture search by component composition, 2020. * Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in neural information processing systems_ , pages 5998–6008, 2017. * Wang et al. [2018] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding.
_arXiv preprint arXiv:1804.07461_ , 2018. * Wang et al. [2019] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. _Advances in neural information processing systems_ , 32, 2019. * Wang et al. [2020a] Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before training by preserving gradient flow. In _International Conference on Learning Representations_ , 2020a. URL https://openreview.net/forum?id=SkgsACVKPH. * Wang et al. [2020b] Hanrui Wang, Zhanghao Wu, Zhijian Liu, Han Cai, Ligeng Zhu, Chuang Gan, and Song Han. Hat: Hardware-aware transformers for efficient natural language processing. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7675–7688, 2020b. * Wang et al. [2022] Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, and Furu Wei. Deepnet: Scaling transformers to 1,000 layers. _arXiv preprint arXiv:2203.00555_ , 2022. * White et al. [2022] Colin White, Mikhail Khodak, Renbo Tu, Shital Shah, Sébastien Bubeck, and Debadeepta Dey. A deeper look at zero-cost proxies for lightweight nas. In _ICLR Blog Track_ , 2022. URL https://iclr-blog-track.github.io/2022/03/25/zero-cost-proxies/. * Wistuba et al. [2019] Martin Wistuba, Ambrish Rawat, and Tejaswini Pedapati. A survey on neural architecture search, 2019. * Xu et al. [2021a] Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, and Tie-Yan Liu. Nas-bert. _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_, Aug 2021a. doi: 10.1145/3447548.3467262. URL http://dx.doi.org/10.1145/3447548.3467262. * Xu et al. [2021b] Jin Xu, Xu Tan, Kaitao Song, Renqian Luo, Yichong Leng, Tao Qin, Tie-Yan Liu, and Jian Li. Analyzing and mitigating interference in neural architecture search, 2021b. * Xu et al. [2019] Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai Xiong. Pc-darts: Partial channel connections for memory-efficient architecture search. In _International Conference on Learning Representations_ , 2019. * Yin et al. [2021] Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. Autotinybert: Automatic hyper-parameter optimization for efficient pre-trained language models. _arXiv preprint arXiv:2107.13686_ , 2021. * You et al. [2019] Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes. _arXiv preprint arXiv:1904.00962_ , 2019. * Zafrir et al. [2021] Ofir Zafrir, Ariel Larey, Guy Boudoukh, Haihao Shen, and Moshe Wasserblat. Prune once for all: Sparse pre-trained language models. _arXiv preprint arXiv:2111.05754_ , 2021. * Zela et al. [2021] Arber Zela, Julien Niklas Siems, Lucas Zimmer, Jovita Lukasik, Margret Keuper, and Frank Hutter. Surrogate nas benchmarks: Going beyond the limited search spaces of tabular nas benchmarks. In _International Conference on Learning Representations_ , 2021. * Zellers et al. [2019] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? _arXiv preprint arXiv:1905.07830_ , 2019. * Zhang et al.
[2022] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. _arXiv preprint arXiv:2205.01068_ , 2022. * [59] Xuan Zhang and Kevin Duh. Reproducible and efficient benchmarks for hyperparameter optimization of neural machine translation systems. https://github.com/Este1le/hpo_nmt. * Zhang and Duh [2020] Xuan Zhang and Kevin Duh. Reproducible and efficient benchmarks for hyperparameter optimization of neural machine translation systems. _Transactions of the Association for Computational Linguistics_ , 8:393–408, 2020. * Zhao et al. [2021] Yuekai Zhao, Li Dong, Yelong Shen, Zhihua Zhang, Furu Wei, and Weizhu Chen. Memory-efficient differentiable transformer architecture search, 2021. * Zhou et al. [2022] Qinqin Zhou, Kekai Sheng, Xiawu Zheng, Ke Li, Xing Sun, Yonghong Tian, Jie Chen, and Rongrong Ji. Training-free transformer architecture search. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 10894–10903, 2022.

## Checklist

1. For all authors…
   (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes]
   (b) Did you describe the limitations of your work? [Yes] See Sections 4.3, 5, and Appendix E
   (c) Did you discuss any potential negative societal impacts of your work? [Yes] See Appendix K
   (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results…
   (a) Did you state the full set of assumptions of all theoretical results? [N/A]
   (b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments…
   (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] The source code for LTS is available at https://github.com/microsoft/archai/tree/neurips_lts/archai/nlp
   (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix B
   (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A]
   (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets…
   (a) If your work uses existing assets, did you cite the creators? [Yes] See Appendix B
   (b) Did you mention the license of the assets? [N/A] All used code is open-source and publicly available
   (c) Did you include any new assets either in the supplemental material or as a URL? [Yes]
   (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A]
   (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects…
   (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
   (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
   (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

## Appendix A Preliminaries on Autoregressive Transformers

Perplexity. Perplexity is a widely used metric for evaluating the performance of autoregressive language models. This metric encapsulates how well the model can predict a word. Formally, the perplexity of a language model $M$ is derived from the cross-entropy $H(L,M)$ as:

$\mathrm{Perplexity}(M)=2^{H(L,M)}=2^{-\sum_{x}L(x)\log_{2}M(x)}$ (1)

where $L$ is the ground-truth distribution over words. As seen, the perplexity is closely tied to the cross-entropy loss of the model, i.e., $H(L,M)$.

Parameter count. Contemporary autoregressive Transformers consist of three main components, namely, the input embedding layer, the hidden layers, and the final (softmax) projection layer. The embedding layer often comprises look-up table-based modules that map the input language tokens to vectors. These vectors then enter a stack of multiple hidden layers, a.k.a. the decoder blocks. Each decoder block is made up of an attention layer and a feed-forward network. Once the features are extracted by the stack of decoder blocks, the final prediction is generated by passing through the softmax projection layer. When counting the number of parameters in an autoregressive Transformer, the total number of parameters enclosed in the hidden layers is dubbed the decoder parameter count or, equivalently, the non-embedding parameter count. These parameters are architecture-dependent and do not change based on the underlying tokenization or the vocabulary size. The embedding parameter count, however, accounts for the parameters enclosed in the input embedding layer as well as the final softmax projection layer, as they are both closely tied to the word embedding and vocabulary size. We visualize an autoregressive Transformer in Figure 10, where the orange blocks contain the decoder parameters and the gray blocks hold the embedding parameters.

Figure 10: High-level visualization of different components in autoregressive Transformers. Here, the parameters enclosed in the orange blocks are counted as decoder parameters, while the parameters contained in the gray boxes are included in the embedding parameter count.

## Appendix B Experimental Setup

Datasets. We conduct experiments on three datasets, namely, WikiText-103, LM1B, and the Pile. The datasets are tokenized using word-level and byte-pair encoding for models with Transformer-XL and GPT-2 backbones, respectively.

Training and Evaluation. We adopt the open-source code by [15] and [30] to implement the GPT-2 and Transformer-XL backbones, respectively. We further use the source code provided in [2] to implement the baseline OPT-$350$M and the LTS models used in zero and one-shot evaluations. Table 2 encloses the hyperparameters used for training. In this paper, each model is trained separately from scratch. In many scenarios, the user only needs to train one model from the Pareto-frontier, which is selected based on their needs for perplexity, latency, and memory. However, if the user is interested in multiple models, they can either train all models separately or fuse them and train them simultaneously using weight sharing as in [46, 55]. As an example, if the user is interested in two particular models from the Pareto-frontier which have $3$ and $5$ layers, the user can fuse them into a single $5$-layer (super)net and train both models at the same time using weight sharing.
Throughout the paper, validation perplexity is measured over a sequence length of $192$ and $32$ tokens for the WikiText-103 and LM1B datasets, respectively. For our zero and one-shot evaluations, we adopt the open-source code by Gao et al. [18]. Inference latency and peak memory utilization are measured on the target hardware for a sequence length of $192$, averaged over $10$ measurements. The sequence length is increased to $2048$ for the latency comparison with the OPT baseline. We utilize PyTorch's native benchmarking interface for measuring the latency and memory utilization of candidate architectures.

Choice of Training Hyperparameters. For each backbone, dataset, and task, we use the same training setup for all models generated by NAS. This is the common setting used in the vast majority of NAS papers, including popular benchmarks [12, 35, 56], due to the extremely high cost of NAS combined with hyperparameter optimization (HPO). The setup for our training hyperparameters is based on the evidence provided in prior art in Transformer design [22, 33, 19] and NAS [46, 62, 8]. Specifically, for the range of model sizes studied in this paper, prior work adopts the same batch size (see Table 2.1 in GPT-3 [5]), which suggests there is no significant benefit in optimizing the batch size per architecture. The original GPT-3 paper [5] also adopts the same learning rate scheduler for all models, regardless of their size. Similarly, the authors of [22] show that the choice of learning rate scheduler does not have a significant effect on final model performance, which further validates that exploration of the scheduler will not alter the empirical findings in this paper. The authors of [22] further provide a rule of thumb for setting the optimal learning rate (see Equation D.1 of [22]). This rule shows that changes in the optimal learning rate are negligible for the range of model sizes in our search space. We validate this by conducting an experiment that aims to find the optimal learning rate per architecture. We sweep the learning rate over $\{0.0001, 0.001, 0.01, 0.1\}$ for $100$ randomly sampled models from the GPT-2 backbone and train them on WikiText-103. The studied models span a wide range of configurations with $2$ to $16$ layers and $2$ to $65$M total parameters. We then pick the optimal learning rate for each architecture, i.e., the one which results in the lowest perplexity. We remeasure the correlation between the newly obtained perplexities and the decoder parameter count proxy. Our learning rate optimization experiment results in two important observations: 1) for the vast majority of the architectures ($98\%$), the optimal learning rate is equal to $0.01$, i.e., the value used in all experiments (see Table 2), and 2) the ranking of architectures after convergence remains largely unchanged, leading to a correlation of $0.93$ with decoder parameter count, compared to $0.96$ when using the same learning rate for all models. The above evidence suggests that the same training setup can be used for all architectures in the search space without affecting the results.

Table 2: LTS training hyperparameters for different backbones. Here, DO represents dropout layers.
| Backbone | Dataset | Tokenizer | # Vocab | Optim. | # Steps | Batch size | LR | Scheduler | Warmup | DO | Attn DO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Transformer-XL | WT103 | Word | 267735 | LAMB [54] | 4e4 | 256 | 1e-2 | Cosine | 1e3 | 0.1 | 0.0 |
| Transformer-XL | LM1B | Word | 267735 | Adam | 1e5 | 224 | 2.5e-4 | Cosine | 2e4 | 0.0 | 0.0 |
| GPT-2 | WT103 | BPE | 50264 | LAMB [54] | 4e4 | 256 | 1e-2 | Cosine | 1e3 | 0.1 | 0.1 |
| GPT-2 | LM1B | BPE | 50264 | LAMB [54] | 1e5 | 224 | 2.5e-4 | Cosine | 2e4 | 0.1 | 0.1 |
| GPT-2 | Pile | BPE | 50272 | Adam | 5.48e4 | 256 | 3e-5 | Linear | 715 | 0.1 | 0.0 |

Search Setup. Evolutionary search is performed for $30$ iterations with a population size of $100$; the parent population accounts for $20$ samples out of the total $100$; $40$ mutated samples are generated per iteration with a mutation probability of $0.3$, and $40$ samples are created using crossover.

## Appendix C How Good is the Decoder Parameters Proxy for Pareto-frontier Search?

In this section, we validate whether the decoder parameter count proxy actually helps find Pareto-frontier models that are close to the ground-truth Pareto-frontier. We first fully train all $1200$ architectures sampled from the Transformer-XL backbone during the evolutionary search. Using the validation perplexity obtained after full training, we rank all sampled architectures and extract the ground-truth Pareto-frontier of perplexity versus latency. We train the models on the WikiText-103 dataset and use an Intel Xeon E5-2690 CPU as the target hardware platform for latency measurement in this experiment.

Figure 11 presents a scatter plot of the validation perplexity (after full training) versus latency for all architectures sampled during the search. The ground-truth Pareto-frontier, by definition, is the lower convex hull of the dark navy dots, corresponding to models with the lowest validation perplexity for any given latency constraint. We mark the Pareto-frontier points found by the training-free proxy in orange. As shown, the architectures that were selected as the Pareto-frontier by the proxy method are either on or very close to the ground-truth Pareto-frontier.

Figure 11: Perplexity versus latency Pareto obtained from full training of $1200$ architectures sampled during NAS on the Transformer-XL backbone. Orange points are the Pareto-frontier extracted using the decoder parameter count proxy, which lies close to the actual Pareto-frontier. Decoder parameter count holds an SRC of $0.98$ with the ground-truth perplexity after full training.

We define the mean average perplexity difference as a metric to evaluate the distance ($d_{avg}$) between the proxy and ground-truth Pareto-frontiers:

$d_{avg}=\frac{1}{N}\sum_{i=1}^{N}\frac{|p_{i}-p_{gt,i}|}{p_{gt,i}}$ (2)

Here, $p_{i}$ denotes the $i$-th point on the proxy Pareto-frontier and $p_{gt,i}$ is the closest point, in terms of latency, to $p_{i}$ on the ground-truth Pareto-frontier. The mean average perplexity difference for Figure 11 is $d_{avg}=0.6\%$. This small difference validates the effectiveness of our zero-cost proxy in correctly ranking the sampled architectures and estimating the true Pareto-frontier. In addition to the small distance between the proxy-estimated Pareto-frontier and the ground truth, our zero-cost proxy holds a high SRC of $0.98$ over the entire Pareto, i.e., all $1200$ sampled architectures.
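The $d_{avg}$ metric of Equation 2 can be computed with a few lines of code. The sketch below is a minimal illustration; the input format (each front as a list of (latency, perplexity) pairs) is our own assumption for demonstration, not the paper's data layout.

```python
def mean_avg_ppl_difference(proxy_front, gt_front):
    """Mean relative perplexity gap between a proxy-estimated and a
    ground-truth Pareto-frontier (Equation 2). Each front is a list of
    (latency, perplexity) points; this format is assumed for illustration."""
    total = 0.0
    for lat, ppl in proxy_front:
        # match each proxy point to the ground-truth point closest in latency
        _, gt_ppl = min(gt_front, key=lambda p: abs(p[0] - lat))
        total += abs(ppl - gt_ppl) / gt_ppl
    return total / len(proxy_front)

# toy example: a 1% perplexity gap at every latency point -> d_avg = 0.01
print(mean_avg_ppl_difference([(1.0, 20.2), (2.0, 18.18)],
                              [(1.0, 20.0), (2.0, 18.0)]))
```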
We further study the decoder parameter count proxy in scenarios where the range of model sizes provided for search is limited. We categorize the total $1200$ sampled architectures into different bins based on the decoder parameter count. Figure 12 demonstrates the SRC between the decoder parameter count proxy and the validation perplexity after full training for different model sizes. The proposed proxy provides a highly accurate ranking of candidate architectures even when exploring a small range of model sizes.

Figure 12: SRC between the decoder parameter count proxy and validation perplexity. Results are gathered on $1200$ models grouped into four bins based on their decoder parameter count. Our proxy performs well even when exploring within a small range of model sizes.

## Appendix D Analysis on Homogeneous Models

In this section, we evaluate the efficacy of the proposed proxies on the homogeneous search space, i.e., when all decoder layers have the same parameter configuration. In this scenario, the parameters are sampled from the valid ranges in Section 3 to construct one decoder block. This block is then replicated based on the selected nlayer to create the Transformer architecture. In what follows, we provide experimental results gathered on $100$ randomly sampled Transformer models from the Transformer-XL backbone with homogeneous decoder blocks, trained on WikiText-103.

$\blacktriangleright$ Low-cost Proxies. Figure 13(a) demonstrates the SRC between various low-cost methods and the validation perplexity after full training. On the horizontal axis, we report the total computation required for each proxy in terms of FLOPs. Commensurate with the findings on the heterogeneous models, we observe a strong correlation between the low-cost proxies and validation perplexity, with the decoder parameter count outperforming the other proxies. Note that we omit the relu_log_det method from Figure 13(a), as it provides a low SRC of $0.42$ due to its heavy reliance on ReLU activations.

Figure 13: Experiments conducted on $100$ randomly sampled Transformers with homogeneous decoder blocks, trained on WikiText-103. (a) SRC between the ranking obtained from low-cost proxies and the ground-truth ranking after full training. The decoder parameter count obtains the best SRC with zero cost. (b) Performance of parameter count proxies. The decoder parameter count provides a very accurate ranking proxy with an SRC of $0.95$ over all models.

$\blacktriangleright$ Parameter Count. As seen in Figure 13(b), the total parameter count has a low SRC with the validation perplexity, while the decoder parameter count provides an accurate proxy with an SRC of $0.95$ over all architectures. These findings on the homogeneous search space are well-aligned with the observations in the heterogeneous space.

## Appendix E How Does Model Topology Affect the Training-free Proxies?

Figure 14(a) shows the validation perplexity versus the aspect ratio of random architectures sampled from the Transformer-XL backbone and trained on WikiText-103. Here, the models span wide, shallow topologies (e.g., dmodel=$1024$, nlayer=$3$) to narrow, deep topologies (e.g., dmodel=$128$, nlayer=$35$). The maximum change in the validation perplexity for a given decoder parameter count is $<7\%$ for a wide range of aspect ratios $\in[8,323]$. Nevertheless, for the same decoder parameter count budget, the latency and peak memory utilization vary by $1.3\times$ and $2.0\times$, respectively, as shown in Figures 14(b) and 14(c).
For deeper architectures (more than $40$ layers) with the Transformer-XL backbone, we observe an increase in the validation perplexity, which results in a deviation from the pattern in Figure 14(a). This observation is associated with the inherent difficulty of training deeper architectures, which can be mitigated with techniques proposed in the literature [47]. Nevertheless, such deep models have a high latency, which makes them unsuitable for lightweight inference. For hardware-aware and efficient Transformer NAS, our search space contains architectures with fewer than $16$ layers. In this scenario, the decoder parameter count proxy holds a very high correlation with validation perplexity, regardless of the architecture topology, as shown in Figure 14(a).

Figure 14: Validation perplexity after full training versus (a) the width-to-depth aspect ratio, (b) latency, and (c) peak memory utilization. Models are randomly generated from the Transformer-XL backbone and trained on WikiText-103. For a given decoder parameter count, we observe low variation in perplexity across different models, regardless of their topology. The topology, however, significantly affects the latency and peak memory utilization of models with the same perplexity.

## Appendix F 3D Pareto Visualization

Figure 15 visualizes the $3$-dimensional Pareto obtained during search on the GPT-2 backbone. Here, the black and blue points denote regular and Pareto-frontier architectures, respectively. The pair of red dots are architectures which match in both memory and decoder parameter count ($\sim$ perplexity). However, as shown, their latency differs by $2\times$. The pair of green points correspond to models with the same decoder parameter count ($\sim$ perplexity) and latency, while the memory still differs by $30$MB, which is non-negligible for memory-constrained applications. In a $2$-objective Pareto-frontier search of perplexity versus memory (or latency), each pair of red (or green) dots would result in similar evaluations, while in reality they have very different characteristics in terms of the overlooked metric. This experiment validates the need for multi-objective Pareto-frontier search, which simultaneously takes into account multiple hardware performance metrics.

Figure 15: 3D visualization of our multi-objective NAS for the GPT-2 backbone on a TITAN Xp GPU. Architectures with similar memory and decoder parameter count can result in drastically different runtimes (up to a $2\times$ difference). Similarly, architectures with similar decoder parameter count and latency may have different peak memory utilization. Therefore, it is important to perform multi-objective NAS where several hardware characteristics are simultaneously taken into account when extracting the Pareto-frontier.

Figure 16: 2D visualization of the perplexity versus latency and memory Pareto-frontier found by LTS and scaled backbone models with a varying number of layers. All models are trained on the WikiText-103 dataset. The architectural parameters for all models are enclosed in Appendix I.

## Appendix G LTS Pareto-frontier on WikiText-103

We compare the Pareto-frontier architectures found by LTS with the baseline after full training on the WikiText-103 dataset in Figure 16. Commensurate with the findings on the LM1B dataset, the NAS-generated models outperform the baselines in at least one of the three metrics, i.e., perplexity, latency, and peak memory utilization.
We note that the gap between the baseline models and those obtained from NAS is larger when training on the LM1B dataset. This is due to the challenging nature of LM1B, which exceeds the WikiText-103 dataset size by $\sim 10\times$. Thus, it is harder for hand-crafted baseline models to compete with the optimized LTS architectures on LM1B.

On the Transformer-XL backbone, the models on the LTS Pareto-frontier for the ARM CPU have, on average, $3.8\%$ faster runtime and $20.7\%$ less memory under the same validation perplexity budget. On the Corei7, the runtime and memory savings increase to $13.2\%$ and $19.6\%$, respectively, while matching the baseline perplexity. We achieve our highest benefits on the TITAN Xp GPU, where LTS Pareto-frontier models have, on average, $31.8\%$ lower latency and $21.5\%$ lower memory utilization. Notably, the validation perplexity of the baseline $16$-layer Transformer-XL base can be achieved with a lightweight model with $2.1\times$ lower latency while consuming $1.6\times$ less memory at runtime.

On the GPT-2 backbone, LTS achieves $6.3$ to $11.2$ points lower perplexity in the low-latency-and-memory regime. As we transition to larger models and higher latency, our results show that the GPT-2 architecture is nearly optimal on WikiText-103 when performing inference on a CPU. The benefits are more significant when targeting a GPU; for any given perplexity achieved by the baseline, the LTS Pareto-frontier on TITAN Xp delivers, on average, $9.0\%$ lower latency and $4.5\%$ lower memory. Therefore, the perplexity and memory of the baseline $16$-layer GPT-2 can be achieved by a new model that runs $1.4\times$ faster and consumes $1.2\times$ less memory on TITAN Xp.

## Appendix H Zero and One-Shot Evaluation of LTS Models

Figure 17: LTS Pareto-frontier models (dots) achieve higher zero and one-shot accuracy with lower latency compared to the hand-designed OPT-$350$M model (triangle). Panels: (a) zero-shot, (b) one-shot. Latency is measured on an A6000 NVIDIA GPU. Architectural parameters for all models shown here are detailed in Appendix I.

For this experiment, we design our search space to cover models with a similar parameter count budget as the OPT-$350$M model. To this end, we search over the following values for the architectural parameters: nlayer $\in\{3,\dots,29\}$ with step $1$, dmodel $\in\{512,\dots,1472\}$ with step $64$, dinner $\in\{512,\dots,6080\}$ with step $64$, and nhead $\in\{2,4,8,16\}$. To directly compare with OPT, we use a generic, non-adaptive embedding layer for our models. Therefore, the search space does not include the $k$ factor, and dembed=dmodel. Figures 17(a) and 17(b) show the per-task zero and one-shot performance of the LTS models and OPT-$350$M. Please refer to Section 4.5 of the main paper for a summary of the results in these figures.

## Appendix I Architecture Details

Tables 4, 5, 6, and 7 list the architectural parameters of the baseline and NAS-generated models in Figures 8 and 16 for the Transformer-XL and GPT-2 backbones. Table 3 further provides the architectural details of the models used in our zero and one-shot evaluations of Figures 9 and 17. For each target hardware, the rows of each table are ordered by increasing decoder parameter count (decreasing validation perplexity). For all models, dhead=dmodel/nhead and dembed=dmodel. For the models in Tables 4, 5, 6, and 7, the adaptive input embedding factor is set to $k=4$. The models in Table 3, however, use the generic, non-adaptive input embedding ($k=1$), following the original OPT architecture [58].
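The DecoderParams column in Tables 3 through 7 is determined by (nlayer, dmodel, dinner). A rough back-of-the-envelope sketch of this relationship is given below; it is our own approximation for a standard decoder block (four $d_{model}\times d_{model}$ attention projections, a two-layer FFN, biases, and two LayerNorms), not the exact counting code used in the paper.

```python
def decoder_param_count(n_layer, d_model, d_inner, use_bias=True):
    """Approximate decoder (non-embedding) parameter count of a homogeneous
    Transformer stack. Illustrative approximation; exact counts depend on
    backbone details (e.g., relative-position parameters in Transformer-XL)."""
    attn = 4 * d_model * d_model + (4 * d_model if use_bias else 0)
    ffn = 2 * d_model * d_inner + ((d_model + d_inner) if use_bias else 0)
    ln = 2 * 2 * d_model  # two LayerNorms per block, each with scale and shift
    return n_layer * (attn + ffn + ln)

# e.g., a homogeneous 16-layer block with d_model=512, d_inner=2048
print(decoder_param_count(16, 512, 2048) / 1e6, "M")  # ~50 M
```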
## Appendix J Transformers in Other Domains

In what follows, we perform preliminary experiments on Transformers used in other domains to investigate the applicability of parameter-based proxies for ranking.

Encoder-only Transformers. BERT [11] is a widely popular Transformer composed of encoder blocks, which is used in a variety of tasks, e.g., question answering and language inference. The main difference between BERT and the Transformers studied in this paper is the usage of bidirectional versus causal attention. Specifically, the encoder blocks in BERT are trained to compute attention between each input token and all surrounding tokens. In autoregressive models, however, attention is only computed over tokens appearing prior to the current token. BERT is trained with a mixture of masked language modeling and next sentence prediction objectives to ensure applicability to language modeling as well as downstream language understanding tasks.

We use the architectural parameters described in Section 3 to construct the search space and randomly sample $300$ models from the BERT backbone. We then train all models on WikiText-103 for 40K steps, following the training setup provided in the original BERT paper [11] for the batch size, sequence length, optimizer, learning rate, vocabulary size, and tokenizer. Figure 18 demonstrates the CR and SRC between encoder parameter count and test perplexity, measured on various top-$k\%$ performing BERT models. As seen, both the encoder and total parameter count provide a highly accurate proxy for the test perplexity of BERT, achieving an SRC of $0.96$ and $0.98$, respectively. This trend suggests that parameter-based proxies for NAS can be applicable to encoder-only search spaces as well.

Figure 18: Performance of parameter count proxies on $300$ randomly sampled models from the BERT backbone, trained on WikiText-103. The encoder and total parameter counts provide very accurate ranking proxies, with SRCs of 0.96 and 0.98 over all models, respectively.

Encoder-Decoder Transformers. Transformers in this domain comprise both encoder and decoder layers with bidirectional and causal attention computation. This unique structure makes these models suitable for sequence-to-sequence tasks such as Neural Machine Translation (NMT). Recent work [19] shows that the performance of encoder-decoder Transformers also follows a scaling law with model size. This power-law behavior between model size and performance can be leveraged to develop training-free proxies for ranking these architectures during search.

We test our hypothesis by performing experiments on the open-source NMT benchmark by [60, 59], which consists of $2000$ Transformers trained on various language pairs. The pre-trained Transformers in this benchmark have homogeneous layers, i.e., the architectural parameters are the same for all layers and identical for the encoder and the decoder. In addition to architectural parameters, the search space for this benchmark also includes various BPE tokenizations and learning rates. We therefore pre-process the benchmark by gathering all instances of Transformers for a fixed BPE. Then, for each given architecture, we keep the results corresponding to the best-performing learning rate. Figure 19 shows a heatmap of the SRC between parameter count proxies and perplexity as well as the BLEU score. As seen, the ranking performance of the total parameter count versus the non-embedding parameter count, i.e., parameters enclosed in the encoder and decoder blocks, is largely similar.
On certain tasks, e.g., ‘ja-en’, ‘so-en’, and ‘sw-en’, the parameter count proxies perform quite well, achieving a high SRC with both the BLEU score and perplexity. Interestingly, on ‘so-en’ and ‘sw-en’, the parameter count and performance are inversely correlated, which may be due to the limited training data for these language pairs, giving smaller models an advantage over larger architectures. While these preliminary results show promise for parameter-based proxies in NAS for NMT, several aspects require further investigation, e.g., the effect of architectural heterogeneity and dataset size on the performance of these proxies. Studying these aspects may lead to a new formulation of training-free proxies for NMT, but is out of the scope of this paper.

Figure 19: SRC between parameter count proxies and performance metrics, i.e., the BLEU score and perplexity (PPL), for translation between various language pairs. The “NonEmbParams” label denotes the parameters enclosed in the encoder and decoder blocks, while the “TotalParams” label corresponds to the total parameter count, including those in the embedding layers. Here, darker versus lighter colors show high positive and negative correlation, respectively.

## Appendix K Ethics Statement and Broader Impact

We provide an extremely lightweight method for NAS on autoregressive Transformers. Our work is likely to increase the adoption of NAS in the NLP domain, providing several benefits. Firstly, widespread adoption of automated techniques such as NAS eliminates the need for laborious trial and error in the manual design of Transformer architectures, freeing up hundreds of hours of man-power as well as computational resources. Secondly, automating architecture design can trigger the generation of new models with superior performance, which benefits the ever-growing applications of NLP in everyday life. Finally, by making the search algorithm efficient, we ensure it is accessible to the general scientific public without the need for any expensive model training, thereby minimizing the unwanted byproducts of the Deep Learning era, such as carbon footprint and power consumption.

While the benefits of automation in NLP are plenty, it can lead to potential side effects that have not yet been fully unveiled. Since our work advances the use of NAS in the NLP design pipeline, there is a need to scrutinize automatically designed models with respect to aspects such as bias, misinformation, and nefarious activity, to name a few.

Table 3: Detailed architectural parameters for all models in Figure 9 with the GPT-2 backbone.
| Model | nlayer | dmodel | nhead | dinner | DecoderParams (M) |
|---|---|---|---|---|---|
| baseline (OPT-$350$M) | 24 | 1024 | 16 | 4096 | 304.4 |
| M1 | 26 | 1024 | 16 | 2816 | 261.4 |
| M2 | 15 | 1280 | 16 | 4480 | 273.2 |
| M3 | 24 | 1280 | 8 | 1856 | 274.3 |
| M4 | 16 | 1344 | 8 | 3840 | 283.8 |
| M5 | 14 | 1344 | 8 | 4800 | 284.8 |
| M6 | 20 | 1216 | 4 | 3456 | 289.2 |
| M7 | 16 | 1344 | 16 | 4096 | 294.8 |
| M8 | 28 | 1344 | 8 | 1344 | 306.6 |
| M9 | 28 | 1088 | 8 | 2816 | 306.7 |
| M10 | 26 | 1152 | 16 | 2816 | 309.4 |
| M11 | 25 | 832 | 2 | 5760 | 310.9 |
| M12 | 20 | 1280 | 16 | 3456 | 310.9 |
| M13 | 19 | 1280 | 8 | 3840 | 314.2 |
| M14 | 26 | 1152 | 4 | 3008 | 320.9 |
| M15 | 19 | 1472 | 8 | 2816 | 325.5 |
| M16 | 13 | 1472 | 4 | 5568 | 329.0 |
| M17 | 14 | 1480 | 2 | 5824 | 367.3 |
| M18 | 20 | 1152 | 8 | 5760 | 374.3 |
| M19 | 26 | 1024 | 4 | 5696 | 414.8 |
| M20 | 25 | 1408 | 8 | 3136 | 422.3 |

Table 4: Detailed architectural parameters for all models in Figure 8 with the Transformer-XL backbone.

| HW | Model | nlayer | dmodel | nhead | dinner | DecoderParams (M) |
|---|---|---|---|---|---|---|
| | baseline | $\in$[1,16] | 512 | 8 | 2048 | - |
| ARM | M1 | 2 | 512 | [2, 2] | [1216, 1280] | 5.2 |
| | M2 | 3 | 320 | [2, 4, 2] | [1472, 2368, 3392] | 6.2 |
| | M3 | 2 | 512 | [2, 2] | [2560, 2176] | 7.5 |
| | M4 | 2 | 512 | [2, 2] | [3904, 1792] | 8.5 |
| | M5 | 2 | 640 | [2, 2] | [3520, 3456] | 13.0 |
| | M6 | 2 | 704 | [8, 2] | [3904, 3968] | 16.1 |
| | M7 | 2 | 832 | [2, 2] | [3264, 3968] | 19.0 |
| | M8 | 2 | 960 | [2, 2] | [3648, 3968] | 23.9 |
| | M9 | 2 | 960 | [2, 2] | [3904, 3968] | 24.4 |
| | M10 | 3 | 960 | [2, 2, 2] | [1856, 2368, 3392] | 28.5 |
| | M11 | 3 | 832 | [2, 2, 2] | [3904, 3968, 3008] | 28.5 |
| | M12 | 3 | 960 | [2, 4, 2] | [3328, 2368, 3200] | 30.9 |
| | M13 | 3 | 960 | [4, 2, 2] | [3648, 3584, 3584] | 34.6 |
| | M14 | 3 | 960 | [2, 2, 2] | [3904, 3584, 3456] | 34.9 |
| | M15 | 3 | 960 | [2, 2, 8] | [4032, 3968, 3904] | 36.7 |
| | M16 | 4 | 896 | [4, 2, 8, 2] | [3904, 3008, 3520, 3584] | 41.2 |
| | M17 | 4 | 960 | [8, 8, 8, 4] | [4032, 3968, 2880, 3200] | 45.5 |
| | M18 | 4 | 960 | [2, 2, 2, 2] | [3840, 3904, 3520, 3072] | 46.0 |
| | M19 | 4 | 960 | [2, 2, 2, 2] | [4032, 3648, 3136, 4032] | 47.0 |
| | M20 | 4 | 960 | [8, 2, 4, 8] | [4032, 3584, 3840, 3584] | 47.4 |
| | M21 | 4 | 960 | [2, 2, 4, 2] | [3904, 3968, 3840, 3584] | 47.8 |
| | M22 | 5 | 960 | [2, 2, 2, 2, 2] | [3904, 3968, 3264, 3456, 3200] | 57.3 |
| | M23 | 5 | 960 | [2, 2, 2, 8, 2] | [3904, 3648, 3136, 3648, 3840] | 58.0 |
| | M24 | 6 | 960 | [2, 2, 2, 2, 2, 8] | [3328, 2624, 3392, 2944, 3008, 3904] | 64.6 |
| | M25 | 6 | 960 | [2, 2, 4, 2, 8, 8] | [3584, 2624, 3392, 3968, 3008, 3328] | 65.9 |
| | M26 | 6 | 960 | [2, 4, 2, 2, 2, 2] | [2112, 3840, 3328, 3264, 3968, 3648] | 66.4 |
| | M27 | 6 | 960 | [2, 4, 2, 2, 8, 2] | [3904, 3008, 3392, 3648, 3392, 3584] | 67.9 |
| | M28 | 6 | 960 | [2, 2, 2, 2, 2, 4] | [3968, 3968, 3456, 3456, 3776, 2432] | 68.1 |
| | M29 | 6 | 960 | [2, 4, 8, 4, 2, 8] | [3904, 3008, 3392, 3200, 3968, 3904] | 68.8 |
| | M30 | 6 | 960 | [8, 8, 2, 4, 2, 4] | [3904, 3648, 3136, 3648, 3200, 3840] | 68.8 |
| | M31 | 6 | 960 | [8, 4, 8, 4, 2, 8] | [3904, 3648, 3392, 3200, 3968, 3840] | 69.9 |
| | M32 | 8 | 896 | [4, 2, 2, 4, 4, 2, 4, 8] | [3584, 3968, 3392, 3904, 2240, 1856, 2560, 3264] | 76.6 |
| | M33 | 8 | 896 | [4, 2, 2, 2, 4, 4, 4, 2] | [3584, 3584, 3520, 2368, 2752, 4032, 3520, 3264] | 79.9 |
| | M34 | 9 | 896 | [4, 2, 4, 4, 8, 2, 8, 8, 2] | [3840, 3136, 3520, 2880, 3200, 3008, 3328, 2560, 3136] | 87.5 |
| | M35 | 8 | 960 | [2, 4, 4, 4, 4, 8, 2, 2] | [3968, 3584, 3520, 3072, 3968, 4032, 1856, 3712] | 90.2 |
| | M36 | 12 | 832 | [2, 4, 4, 2, 2, 8, 8, 8, 4, 4, 2, 8] | [3136, 2112, 2112, 2368, 2752, 2432, 2432, 2176, 3456, 3712, 2880, 3712] | 97.0 |
| | M37 | 9 | 960 | [4, 4, 8, 2, 2, 2, 8, 8, 2] | [2112, 3008, 3520, 3648, 3968, 4032, 1984, 3200, 3520] | 97.2 |
| | M38 | 9 | 960 | [8, 2, 4, 2, 8, 8, 8, 2, 2] | [3968, 3008, 3520, 3200, 3200, 4032, 1984, 2816, 3520] | 97.7 |
| | M39 | 12 | 832 | [4, 4, 4, 2, 2, 8, 4, 8, 2, 8, 2, 8] | [3136, 3968, 2112, 2368, 3072, 2240, 2624, 2112, 3456, 3072, 2880, 3264] | 98.7 |
| Corei7 | M1 | 2 | 384 | [2, 2] | [896, 2816] | 4.3 |
| | M2 | 2 | 576 | [2, 2] | [1792, 2816] | 8.6 |
| | M3 | 2 | 576 | [2, 2] | [1408, 3776] | 9.3 |
| | M4 | 2 | 832 | [2, 2] | [1728, 1536] | 12.4 |
| | M5 | 2 | 768 | [2, 2] | [3776, 1920] | 14.7 |
| | M6 | 2 | 768 | [2, 2] | [2112, 3584] | 14.7 |
| | M7 | 2 | 832 | [2, 2] | [3776, 3392] | 18.9 |
| | M8 | 2 | 832 | [2, 2] | [3968, 3584] | 19.5 |
| | M9 | 2 | 960 | [2, 4] | [1984, 3840] | 20.4 |
| | M10 | 2 | 960 | [8, 8] | [3968, 3584] | 23.7 |
| | M11 | 2 | 960 | [2, 2] | [3904, 3904] | 24.2 |
| | M12 | 3 | 896 | [2, 2, 2] | [2304, 3904, 3904] | 30.2 |
| | M13 | 3 | 960 | [2, 2, 4] | [2176, 3840, 2880] | 30.9 |
| | M14 | 3 | 960 | [2, 2, 4] | [3776, 2880, 3904] | 34.1 |
| | M15 | 3 | 960 | [2, 8, 2] | [3840, 3840, 3904] | 36.1 |
| | M16 | 3 | 960 | [2, 8, 8] | [3904, 3840, 3904] | 36.2 |
| | M17 | 3 | 960 | [2, 2, 8] | [3968, 3904, 3904] | 36.5 |
| | M18 | 4 | 960 | [2, 4, 2, 2] | [3904, 2112, 4032, 3584] | 44.6 |
| | M19 | 4 | 960 | [2, 2, 2, 4] | [2112, 3840, 3904, 3904] | 44.9 |
| | M20 | 4 | 960 | [2, 4, 8, 4] | [3776, 3392, 3520, 3904] | 46.5 |
| | M21 | 4 | 960 | [2, 2, 2, 4] | [3904, 3776, 3904, 3904] | 48.2 |
| | M22 | 5 | 960 | [2, 2, 2, 2, 2] | [3776, 1984, 3904, 3904, 3456] | 55.8 |
| | M23 | 5 | 960 | [2, 4, 2, 4, 2] | [3968, 3584, 3520, 3904, 3200] | 58.0 |
| | M24 | 5 | 960 | [2, 4, 4, 4, 2] | [3776, 3840, 3904, 3904, 3968] | 60.3 |
| | M25 | 6 | 960 | [2, 4, 4, 2, 2, 4] | [3776, 3840, 3904, 3904, 3008, 2304] | 67.5 |
| | M26 | 6 | 960 | [2, 4, 2, 4, 2, 4] | [3776, 2112, 4032, 3584, 3200, 4032] | 67.5 |
| | M27 | 6 | 960 | [2, 4, 2, 4, 4, 4] | [3776, 3840, 3904, 4032, 3648, 2432] | 69.2 |
| | M28 | 6 | 960 | [4, 2, 8, 4, 2, 2] | [3840, 3712, 3520, 4032, 3200, 4032] | 70.6 |
| | M29 | 7 | 960 | [2, 2, 8, 4, 2, 2, 4] | [3776, 3840, 3904, 1856, 3072, 3648, 4032] | 78.7 |
| | M30 | 8 | 960 | [2, 2, 2, 4, 2, 4, 8, 2] | [3392, 1792, 3904, 3904, 3200, 2432, 1792, 2496] | 80.9 |
| | M31 | 8 | 960 | [2, 2, 4, 4, 2, 4, 8, 2] | [3776, 3008, 4032, 3904, 3520, 3136, 1984, 3648] | 88.8 |
| | M32 | 8 | 960 | [8, 2, 2, 4, 8, 4, 4, 8] | [3776, 3008, 3904, 3904, 2176, 4032, 4032, 3648] | 91.6 |
| | M33 | 13 | 768 | [2, 8, 2, 4, 2, 2, 4, 2, 2, 8, 8, 8, 4] | [3776, 2112, 1600, 3904, 3840, 2880, 2304, 3200, 2048, 2944, 2816, 3328, 3968] | 97.9 |
| | M34 | 9 | 960 | [4, 2, 4, 4, 4, 4, 8, 8, 2] | [3840, 3136, 3520, 4032, 3200, 4032, 3648, 2112, 2368] | 98.9 |
| | M35 | 9 | 960 | [8, 2, 8, 8, 2, 4, 8, 2, 2] | [3520, 3008, 2880, 4032, 3200, 2432, 4032, 3904, 3136] | 99.4 |

Table 5: Detailed architectural parameters for all models in Figure 8 with the Transformer-XL backbone.

| HW | Model | nlayer | dmodel | nhead | dinner | DecoderParams (M) |
|---|---|---|---|---|---|---|
| | baseline | $\in$[1,16] | 512 | 8 | 2048 | - |
| TITAN Xp | M1 | 2 | 384 | [2, 2] | [1152, 2432] | 4.2 |
| | M2 | 2 | 448 | [8, 2] | [2944, 3008] | 7.4 |
| | M3 | 2 | 576 | [2, 2] | [2048, 1728] | 7.7 |
| | M4 | 2 | 512 | [2, 2] | [2368, 3072] | 8.2 |
| | M5 | 2 | 832 | [8, 2] | [3264, 3072] | 17.5 |
| | M6 | 2 | 768 | [2, 2] | [3968, 4032] | 18.2 |
| | M7 | 2 | 896 | [8, 4] | [4032, 2880] | 20.4 |
| | M8 | 2 | 960 | [4, 8] | [3968, 3008] | 22.6 |
| | M9 | 2 | 960 | [4, 8] | [3968, 3648] | 23.9 |
| | M10 | 2 | 960 | [2, 2] | [3840, 3968] | 24.2 |
| | M11 | 3 | 896 | [8, 4, 8] | [4032, 2112, 3392] | 29.2 |
| | M12 | 3 | 896 | [2, 2, 2] | [3840, 2880, 3840] | 31.0 |
| | M13 | 3 | 960 | [2, 2, 2] | [3584, 3072, 2624] | 31.7 |
| | M14 | 3 | 960 | [4, 2, 2] | [3840, 3008, 3840] | 34.4 |
| | M15 | 3 | 960 | [8, 2, 8] | [4032, 4032, 3520] | 36.1 |
| | M16 | 3 | 960 | [2, 2, 8] | [3584, 4032, 4032] | 36.2 |
| | M17 | 3 | 960 | [2, 2, 8] | [4032, 4032, 3840] | 36.7 |
| | M18 | 3 | 960 | [8, 4, 8] | [4032, 4032, 4032] | 37.1 |
| | M19 | 4 | 896 | [4, 4, 8, 8] | [4032, 3456, 3328, 3392] | 41.6 |
| | M20 | 4 | 960 | [4, 2, 8, 8] | [3840, 3008, 3328, 3584] | 44.9 |
| | M21 | 4 | 960 | [2, 2, 8, 8] | [4032, 3968, 3904, 3840] | 48.7 |
| | M22 | 4 | 960 | [4, 2, 4, 4] | [3840, 4032, 3904, 4032] | 48.8 |
| | M23 | 5 | 960 | [4, 2, 4, 4, 8] | [3840, 3008, 3392, 2496, 4032] | 55.3 |
| | M24 | 5 | 960 | [4, 2, 8, 8, 8] | [3840, 3008, 3840, 3328, 3968] | 57.6 |
| | M25 | 5 | 960 | [2, 2, 4, 4, 4] | [3968, 4032, 3328, 4032, 2752] | 57.9 |
| | M26 | 6 | 896 | [8, 4, 8, 4, 8, 8] | [3328, 2112, 3392, 3904, 3328, 3264] | 58.8 |
| | M27 | 5 | 960 | [8, 2, 8, 8, 4] | [4032, 3008, 3840, 3904, 3968] | 59.1 |
| | M28 | 5 | 960 | [2, 4, 2, 2, 8] | [3968, 3968, 3840, 4032, 3904] | 60.9 |
| | M29 | 6 | 896 | [2, 2, 4, 4, 2, 2] | [3840, 3968, 3840, 3328, 3904, 3904] | 65.0 |
| | M30 | 6 | 960 | [4, 8, 8, 4, 8, 4] | [3072, 3584, 3392, 3840, 3328, 3712] | 67.9 |
| | M31 | 6 | 960 | [4, 8, 8, 8, 4, 4] | [3840, 3584, 3392, 3328, 3968, 3776] | 69.7 |
| | M32 | 6 | 960 | [4, 8, 8, 8, 8, 2] | [3840, 3840, 3392, 3840, 3328, 3712] | 69.9 |
| | M33 | 6 | 960 | [4, 2, 2, 4, 2, 8] | [3840, 3008, 3840, 3904, 4032, 3392] | 70.0 |
| | M34 | 6 | 960 | [2, 4, 8, 8, 4, 2] | [3840, 3968, 3840, 3328, 4032, 3776] | 71.5 |
| | M35 | 7 | 960 | [4, 8, 8, 8, 8, 2, 8] | [3840, 3968, 3840, 3328, 3968, 3328, 4032] | 82.8 |
| | M36 | 8 | 960 | [4, 2, 8, 8, 8, 4, 8, 8] | [3840, 3968, 3840, 3328, 3072, 3328, 4032, 3072] | 91.6 |
| | M37 | 10 | 896 | [8, 4, 8, 8, 8, 2, 8, 2, 4, 8] | [4032, 3008, 3840, 2560, 3904, 3904, 3072, 3264, 2368, 2496] | 98.4 |
| | M38 | 12 | 832 | [2, 4, 8, 8, 8, 8, 8, 8, 8, 8, 4, 2] | [3840, 2816, 2112, 3584, 3648, 2432, 2304, 3008, 2880, 1664, 2432, 3776] | 99.0 |
| | M39 | 9 | 960 | [8, 8, 8, 4, 4, 8, 8, 4, 2] | [2752, 3456, 2880, 3904, 2752, 3904, 4032, 3264, 3136] | 99.3 |
| | M40 | 10 | 896 | [8, 8, 8, 2, 8, 2, 2, 2, 8, 2] | [3840, 3072, 3840, 2560, 3648, 3328, 3840, 3008, 2880, 3328] | 100.0 |

Table 6: Detailed architectural parameters for all models in Figure 8 with the GPT-2 backbone.

| HW | Model | nlayer | dmodel | nhead | dinner | DecoderParams (M) |
|---|---|---|---|---|---|---|
| | baseline | $\in$[1,16] | 1024 | 12 | 3072 | - |
| TITAN Xp | M1 | 3 | 256 | [2, 2, 2] | [3072, 3776, 3904] | 6.3 |
| | M2 | 2 | 448 | [2, 2] | [3456, 3776] | 8.1 |
| | M3 | 2 | 448 | [2, 4] | [4032, 3904] | 8.7 |
| | M4 | 3 | 384 | [2, 2, 2] | [3072, 2176, 4032] | 8.9 |
| | M5 | 2 | 576 | [2, 2] | [3456, 3584] | 10.8 |
| | M6 | 4 | 448 | [2, 2, 2, 2] | [4032, 3904, 1920, 3072] | 14.8 |
| | M7 | 4 | 512 | [2, 2, 4, 2] | [3904, 3136, 1280, 2624] | 15.4 |
| | M8 | 2 | 832 | [8, 2] | [3456, 3584] | 17.3 |
| | M9 | 2 | 960 | [2, 8] | [3456, 3648] | 21.0 |
| | M10 | 2 | 960 | [2, 2] | [3968, 3584] | 21.9 |
| | M11 | 5 | 640 | [2, 2, 2, 2, 2] | [4032, 2560, 2176, 2304, 3136] | 26.4 |
| | M12 | 3 | 832 | [2, 8, 4] | [3840, 3840, 3776] | 27.4 |
| | M13 | 5 | 704 | [2, 2, 2, 4, 4] | [2368, 3648, 1856, 3712, 3200] | 30.8 |
| | M14 | 3 | 960 | [2, 2, 2] | [3584, 3648, 4032] | 32.7 |
| | M15 | 3 | 960 | [2, 2, 2] | [3904, 3520, 4032] | 33.1 |
| | M16 | 6 | 640 | [2, 2, 2, 2, 2, 2] | [2624, 2560, 2880, 3776, 3648, 3840] | 34.6 |
| | M17 | 4 | 896 | [2, 2, 4, 2] | [4032, 3712, 3328, 3072] | 38.2 |
| | M18 | 5 | 832 | [2, 2, 2, 4, 4] | [3392, 3648, 2880, 3712, 3200] | 41.9 |
| | M19 | 4 | 960 | [2, 2, 4, 2] | [3904, 3136, 3328, 3776] | 42.0 |
| | M20 | 4 | 960 | [8, 8, 2, 4] | [3904, 3712, 4032, 3776] | 44.4 |
| | M21 | 6 | 832 | [2, 2, 4, 2, 2, 2] | [3904, 3456, 4032, 1792, 3072, 2496] | 47.9 |
| | M22 | 5 | 896 | [4, 2, 2, 2, 4] | [3968, 3200, 3840, 3328, 3648] | 48.3 |
| | M23 | 5 | 960 | [2, 2, 2, 2, 2] | [3904, 3264, 3328, 3776, 3392] | 52.4 |
| | M24 | 5 | 960 | [2, 2, 4, 2, 2] | [3584, 3456, 3776, 2944, 4032] | 52.7 |
| | M25 | 5 | 960 | [2, 8, 2, 4, 2] | [3904, 3648, 4032, 3776, 3968] | 55.6 |
| | M26 | 6 | 960 | [8, 8, 2, 2, 2, 2] | [3904, 2560, 2880, 3776, 2240, 3840] | 59.1 |
| | M27 | 6 | 960 | [2, 2, 2, 4, 2, 2] | [2496, 3456, 3328, 3904, 3968, 2944] | 60.8 |
| | M28 | 6 | 960 | [4, 2, 4, 4, 2, 8] | [4032, 3456, 3328, 3776, 4032, 2752] | 63.2 |
| | M29 | 6 | 960 | [2, 2, 2, 4, 4, 4] | [3968, 3648, 3840, 3776, 3584, 2624] | 63.4 |
| | M30 | 7 | 960 | [2, 2, 2, 4, 2, 4, 2] | [3904, 2368, 4032, 3008, 3520, 2944, 2496] | 68.7 |
| | M31 | 7 | 960 | [2, 2, 4, 2, 2, 2, 4] | [3072, 3648, 3520, 3584, 3136, 1984, 3584] | 69.1 |
| | M32 | 7 | 960 | [4, 2, 2, 2, 8, 2, 2] | [3712, 3648, 3584, 3520, 2752, 3008, 3392] | 71.2 |
| | M33 | 8 | 960 | [2, 4, 4, 2, 2, 2, 2, 2] | [3904, 2816, 3072, 1920, 3328, 3456, 2304, 2368] | 74.1 |
| | M34 | 8 | 960 | [2, 2, 2, 4, 2, 2, 8, 2] | [3520, 2368, 4032, 1792, 3200, 3776, 3200, 3648] | 78.6 |
| | M35 | 8 | 960 | [4, 2, 4, 4, 8, 8, 4, 2] | [3520, 3712, 3328, 3776, 3200, 2752, 3200, 2112] | 78.7 |
| | M36 | 8 | 960 | [8, 4, 2, 8, 2, 2, 2, 2] | [3520, 3840, 3328, 3776, 3200, 3776, 3968, 3648] | 85.4 |
| | M37 | 10 | 960 | [2, 8, 2, 4, 2, 2, 4, 2, 8, 8] | [3648, 2560, 3776, 1792, 3968, 2752, 3200, 2368, 4032, 2368] | 95.5 |
| | M38 | 10 | 960 | [2, 4, 2, 2, 4, 2, 4, 2, 4, 8] | [3840, 2240, 3328, 3776, 3648, 3200, 2944, 2368, 3968, 2880] | 98.8 |
| | M39 | 10 | 960 | [2, 4, 2, 2, 2, 2, 4, 2, 4, 8] | [3840, 2240, 3328, 3776, 3200, 3200, 3968, 2368, 3968, 2816] | 99.8 |

Table 7: Detailed architectural parameters for all models in Figure 8 with the GPT-2 backbone.

| HW | Model | nlayer | dmodel | nhead | dinner | DecoderParams (M) |
|---|---|---|---|---|---|---|
| | baseline | $\in$[1,16] | 1024 | 12 | 3072 | - |
| ARM | M1 | 2 | 512 | [2, 2] | [1920, 1920] | 6.0 |
| | M2 | 3 | 320 | [8, 2, 4] | [1920, 1920, 3712] | 6.1 |
| | M3 | 2 | 576 | [2, 2] | [1344, 3200] | 7.9 |
| | M4 | 3 | 384 | [2, 8, 2] | [3840, 2368, 3328] | 9.1 |
| | M5 | 5 | 384 | [4, 4, 2, 4, 4] | [2880, 1920, 960, 2496, 1280] | 10.3 |
| | M6 | 2 | 768 | [2, 2] | [1600, 2240] | 10.6 |
| | M7 | 5 | 320 | [4, 2, 2, 4, 2] | [1344, 2240, 3776, 3008, 3648] | 11.0 |
| | M8 | 3 | 768 | [2, 2, 4] | [1856, 1792, 1920] | 15.7 |
| | M9 | 3 | 704 | [2, 2, 2] | [3136, 2112, 1920] | 16.1 |
| | M10 | 2 | 960 | [4, 2] | [3584, 2304] | 18.7 |
| | M11 | 6 | 448 | [4, 4, 2, 2, 4, 2] | [3072, 2112, 4032, 2688, 1600, 3072] | 19.7 |
| | M12 | 3 | 960 | [4, 4, 2] | [2368, 2560, 2048] | 24.5 |
| | M13 | 4 | 704 | [4, 8, 4, 2] | [3008, 3776, 2560, 3648] | 26.3 |
| | M14 | 5 | 704 | [4, 2, 4, 2, 8] | [3584, 3136, 3776, 3072, 1856] | 31.7 |
| | M15 | 3 | 960 | [2, 2, 2] | [3392, 3648, 3840] | 32.0 |
| | M16 | 4 | 960 | [4, 2, 8, 2] | [2048, 3328, 1984, 1856] | 32.5 |
| | M17 | 7 | 704 | [2, 4, 4, 4, 8, 2, 2] | [3008, 2560, 1920, 1856, 2112, 1728, 3136] | 36.9 |
| | M18 | 4 | 960 | [2, 2, 4, 8] | [3392, 3456, 2432, 2304] | 37.0 |
| | M19 | 5 | 832 | [4, 4, 4, 4, 4] | [3840, 1920, 4032, 3072, 3968] | 41.9 |
| | M20 | 5 | 960 | [8, 4, 2, 2, 4] | [2560, 2048, 3648, 1728, 2304] | 42.1 |
| | M21 | 5 | 960 | [4, 4, 2, 2, 2] | [3072, 2240, 1984, 2176, 3520] | 43.4 |
| | M22 | 5 | 960 | [2, 4, 4, 4, 2] | [2496, 3648, 3328, 3392, 2112] | 47.2 |
| | M23 | 6 | 832 | [4, 2, 4, 4, 2, 4] | [2496, 3200, 1664, 3904, 3520, 3840] | 47.7 |
| | M24 | 6 | 960 | [8, 2, 2, 2, 8, 4] | [2304, 3328, 3456, 1856, 1792, 2112] | 50.7 |
| | M25 | 5 | 960 | [4, 8, 2, 4, 4] | [3264, 2688, 4032, 3968, 3712] | 52.4 |
| | M26 | 6 | 960 | [2, 4, 4, 2, 2, 2] | [3008, 2624, 4032, 2688, 3520, 2624] | 57.7 |
| | M27 | 6 | 960 | [2, 4, 4, 2, 8, 2] | [2304, 3648, 3328, 3648, 3904, 1728] | 57.8 |
| | M28 | 6 | 960 | [4, 4, 2, 4, 2, 2] | [3072, 2368, 4032, 4032, 3776, 3264] | 61.6 |
| | M29 | 7 | 960 | [2, 2, 2, 8, 4, 8, 4] | [3008, 2304, 1920, 1984, 3520, 2816, 3712] | 62.9 |
| | M30 | 7 | 960 | [2, 4, 4, 4, 4, 2, 2] | [3200, 4032, 2048, 2624, 2112, 2752, 2880] | 63.6 |
| | M31 | 7 | 960 | [2, 4, 4, 4, 4, 2, 4] | [3584, 3648, 3328, 3392, 3200, 1984, 3200] | 68.8 |
| | M32 | 7 | 960 | [2, 4, 8, 8, 2, 2, 8] | [3008, 3648, 3584, 3648, 3008, 1728, 3712] | 68.8 |
| | M33 | 7 | 960 | [4, 4, 2, 4, 4, 8, 4] | [3584, 3840, 3328, 3392, 3136, 2944, 2496] | 69.5 |
| | M34 | 8 | 960 | [8, 2, 2, 8, 2, 2, 8, 2] | [3008, 3648, 1792, 1984, 3008, 2816, 3712, 3520] | 74.7 |
| | M35 | 8 | 960 | [2, 2, 2, 2, 8, 4, 4, 2] | [3008, 2304, 1792, 3008, 3520, 2880, 3712, 3456] | 75.1 |
| | M36 | 8 | 960 | [2, 2, 2, 2, 2, 2, 4, 8] | [3008, 1792, 3840, 3392, 3520, 3136, 3712, 3520] | 79.4 |
| | M37 | 9 | 960 | [2, 2, 4, 4, 8, 8, 4, 2, 4] | [1664, 1792, 2240, 3904, 3648, 3264, 2176, 3712, 1856] | 79.9 |
| | M38 | 11 | 832 | [8, 4, 2, 4, 4, 2, 8, 4, 4, 8, 8] | [3072, 2368, 4032, 3968, 1664, 3968, 2176, 2624, 3840, 2176, 2112] | 83.8 |
| | M39 | 9 | 960 | [4, 2, 4, 8, 2, 2, 4, 2, 4] | [2496, 3648, 3328, 3392, 3648, 1728, 2880, 3520, 2368] | 85.1 |
| | M40 | 9 | 960 | [4, 2, 4, 8, 4, 2, 4, 2, 4] | [3072, 2816, 4032, 2560, 3648, 1728, 3840, 3264, 3456] | 87.8 |
| | M41 | 10 | 960 | [8, 2, 4, 4, 2, 2, 4, 8, 2, 4] | [3648, 1792, 2432, 1856, 3392, 2304, 3776, 2944, 3136, 3904] | 93.0 |
| | M42 | 10 | 960 | [8, 2, 2, 4, 2, 2, 2, 4, 2, 2] | [3264, 2048, 3520, 3904, 3840, 3840, 2624, 3072, 3776, 2304] | 98.8 |
| | M43 | 12 | 896 | [4, 4, 4, 2, 4, 2, 4, 8, 8, 2, 4, 2] | [2048, 3136, 4032, 1792, 3584, 1728, 3136, 3008, 2560, 3200, 3648, 1728] | 98.9 |
| | M44 | 10 | 960 | [4, 2, 8, 4, 2, 8, 4, 4, 4, 2] | [3584, 3968, 3328, 3904, 2368, 2112, 3904, 3520, 3328, 2688] | 99.8 |
| | M45 | 10 | 960 | [8, 2, 4, 4, 4, 4, 4, 2, 2, 8] | [2688, 3200, 3840, 3392, 3520, 3136, 3392, 3520, 2880, 3200] | 99.9 |
| Corei7 | M1 | 2 | 384 | [2, 2] | [3840, 2432] | 6.0 |
| | M2 | 3 | 320 | [2, 2, 2] | [2176, 3072, 2496] | 6.2 |
| | M3 | 2 | 512 | [2, 2] | [1408, 2624] | 6.2 |
| | M4 | 3 | 384 | [2, 2, 2] | [3264, 3456, 3584] | 9.7 |
| | M5 | 2 | 576 | [2, 2] | [3136, 3648] | 10.5 |
| | M6 | 3 | 448 | [2, 2, 2] | [4032, 3648, 4032] | 12.9 |
| | M7 | 4 | 448 | [2, 2, 4, 4] | [3072, 3648, 4032, 1792] | 14.5 |
| | M8 | 2 | 768 | [2, 2] | [3968, 3328] | 15.9 |
| | M9 | 4 | 576 | [2, 2, 2, 2] | [3072, 2752, 3456, 3136] | 19.6 |
| | M10 | 2 | 960 | [2, 2] | [3840, 3264] | 21.0 |
| | M11 | 4 | 640 | [2, 2, 2, 2] | [2176, 3648, 3584, 1920] | 21.1 |
| | M12 | 3 | 960 | [2, 2, 2] | [2176, 3264, 2432] | 26.2 |
| | M13 | 4 | 768 | [2, 2, 2, 2] | [3584, 2112, 3392, 1920] | 26.4 |
| | M14 | 4 | 768 | [2, 2, 2, 2] | [3584, 2560, 3776, 1536] | 27.1 |
| | M15 | 4 | 832 | [2, 2, 2, 2] | [3904, 1984, 3392, 3136] | 31.8 |
| | M16 | 3 | 960 | [2, 2, 2] | [3968, 4032, 2880] | 32.0 |
| | M17 | 5 | 768 | [2, 2, 4, 2, 2] | [3648, 3072, 3392, 1984, 2944] | 34.9 |
| | M18 | 4 | 960 | [2, 2, 2, 2] | [3136, 1984, 3392, 2944] | 36.8 |
| | M19 | 4 | 960 | [2, 2, 2, 4] | [3968, 3456, 3584, 3136] | 42.0 |
| | M20 | 6 | 768 | [4, 2, 2, 4, 2, 4] | [3584, 2112, 3456, 3136, 3840, 2560] | 42.9 |
| | M21 | 7 | 768 | [2, 4, 2, 4, 4, 4, 2] | [2624, 1984, 2496, 3968, 2880, 2112, 4032] | 47.5 |
| | M22 | 5 | 960 | [2, 2, 4, 2, 4] | [2176, 3264, 3392, 3008, 3328] | 47.6 |
| | M23 | 6 | 960 | [4, 4, 2, 4, 2, 2] | [2048, 2624, 3520, 1984, 2880, 2624] | 52.3 |
| | M24 | 6 | 960 | [2, 4, 4, 4, 2, 2] | [1792, 3456, 2752, 2240, 1664, 3840] | 52.4 |
| | M25 | 6 | 960 | [4, 2, 2, 2, 4, 4] | [2176, 1664, 3648, 3136, 3968, 3904] | 57.7 |
| | M26 | 7 | 960 | [2, 2, 4, 4, 2, 2, 8] | [2816, 1792, 3968, 1728, 1664, 3328, 2944] | 60.9 |
| | M27 | 7 | 896 | [2, 2, 4, 2, 2, 2, 2] | [3904, 3264, 3328, 3968, 1728, 2624, 4032] | 63.5 |
| | M28 | 7 | 960 | [4, 2, 4, 2, 2, 2, 2] | [3584, 2560, 1792, 1920, 3968, 2112, 3968] | 64.1 |
| | M29 | 8 | 960 | [2, 2, 2, 4, 2, 2, 2, 4] | [3328, 2432, 2624, 2752, 1664, 2240, 2304, 2816] | 68.3 |
| | M30 | 7 | 960 | [4, 2, 4, 2, 2, 2, 2] | [3904, 2304, 2368, 3584, 3264, 2880, 3904] | 68.5 |
| | M31 | 8 | 960 | [4, 2, 4, 2, 2, 4, 2, 4] | [2560, 3648, 2624, 2112, 3328, 2112, 1792, 3328] | 70.9 |
| | M32 | 8 | 960 | [4, 4, 4, 2, 2, 4, 2, 4] | [2560, 2304, 2624, 4032, 2688, 2624, 3840, 2816] | 74.7 |
| | M33 | 9 | 960 | [2, 4, 2, 4, 2, 4, 2, 2, 4] | [3072, 3264, 2944, 1984, 2880, 3520, 2112, 2624, 1728] | 79.6 |
| | M34 | 10 | 896 | [2, 2, 4, 2, 2, 2, 2, 2, 4, 2] | [2816, 3264, 3584, 1792, 3136, 3584, 2240, 2240, 1920, 2752] | 81.2 |
| | M35 | 9 | 960 | [8, 2, 2, 2, 4, 4, 2, 4, 4] | [3904, 3648, 2432, 3136, 3264, 2816, 2240, 3072, 3840] | 87.7 |
| | M36 | 10 | 960 | [4, 4, 2, 2, 4, 4, 2, 4, 4, 2] | [2176, 3264, 2752, 3136, 3968, 3520, 3776, 3328, 1728, 2496] | 94.9 |
| | M37 | 10 | 960 | [4, 2, 4, 2, 2, 2, 2, 4, 2, 2] | [3904, 2112, 2496, 3968, 3968, 2624, 3904, 2304, 3200, 3840] | 99.0 |
| | M38 | 11 | 960 | [4, 2, 2, 4, 2, 4, 2, 2, 4, 4, 4] | [2176, 4032, 3264, 3840, 2688, 1984, 1728, 2944, 1920, 2368, 3840] | 99.8 |
# Predictable Bandwidth Slicing with Open vSwitch

Jesse Chen and Behnam Dezfouli
Internet of Things Research Lab, Department of Computer Science and Engineering, Santa Clara University, USA

###### Abstract

Software switching, a.k.a. virtual switching, plays a vital role in network virtualization and network function virtualization, enhances configurability, and reduces deployment and operational costs. Software switching also facilitates the development of edge and fog computing networks by allowing the use of commodity hardware for both data processing and packet switching. Despite these benefits, characterizing and ensuring deterministic performance is harder with software switches than with physical switching appliances. In particular, achieving deterministic performance is essential for adopting software switching in mission-critical applications, especially those deployed in edge and fog computing architectures. In this paper, we study the impact of switch configurations on bandwidth slicing and predictable packet latency. We demonstrate that latency and predictability depend on the implementation of the bandwidth slicing mechanism, and that the packet schedulers used in OVS Kernel-Path and OVS-DPDK each focus on different aspects of switching performance.

###### Index Terms: Software Switching, Deterministic Performance, Latency Prediction, Edge Computing, Fog Computing

## I Introduction

With the arrival of new paradigms such as edge and fog computing, a comprehensive understanding of network behavior becomes increasingly important, as tasks often have a multitude of requirements that depend on network performance guarantees, such as minimum flow bandwidth or packet latency constraints. Some requirements are easy to fulfill. For example, it is straightforward to track the available processing and memory resources of edge and fog nodes. However, the prediction of network parameters such as end-to-end packet latency depends on a variety of factors and requires a comprehensive understanding of network configuration and topology [1, 2]. An essential step towards this understanding is the characterization of packet switching behaviors.

In this paper, we focus on software switching, considering its high applicability in edge and fog computing scenarios [3]. For example, software switches are utilized to build multi-function nodes in edge and fog systems, where each node performs both networking and computation tasks. Specifically, with commodity hardware that is capable of computation, software switches are used to add switching capability to a network, resulting in lower costs for deployment, maintenance, and upgrades. Software switches also offer greater configuration flexibility, making them more suitable for edge and fog networks, which need to handle highly dynamic workloads. In contrast, traditional hardware switches, even when enabled with Software Defined Networking (SDN) protocols such as OpenFlow and NETCONF, are limited in their ability to accomplish effective bandwidth slicing because of queue limitations. While hardware switches are usually limited to eight queues per port, software switches do not impose this limitation [4]. Furthermore, the size of software switches' flow tables is flexible and can be extended in ways that hardware switches' cannot. Software switches open up the possibility of efficient bandwidth isolation for each task's data flows and simplify the process of developing and applying new network policies [3, 5].
Although one of the main benefits of software switches over traditional hardware switches is the high degree of flexibility they offer, there remain areas of study that have been largely neglected in existing analyses. Specifically, existing studies overwhelmingly focus on the switches' maximum throughput capabilities [6, 7, 8] or on latency measurements in best-case, unrealistic scenarios [9, 1, 10, 11, 12, 2, 13]. These studies are important for understanding the performance limitations of software switches, but they do little to characterize performance in real-world scenarios. In particular, they fail to provide relevant analysis of packet latency in edge and fog networking scenarios where the bandwidth is sliced to provide queue rate guarantees.

In this work, we fill the gap in the existing literature by studying how various aspects of bandwidth slicing, such as packet scheduling and queue rate, affect latency. We study and evaluate bandwidth slicing using OVS Kernel-Path (OVS-KP) and OVS-DPDK, and identify their strengths and weaknesses in terms of latency and resource efficiency. In addition to characterizing the latency patterns in bandwidth slicing scenarios, we also identify and analyze the underlying causes of these patterns. We observe that although the packet latency of OVS-DPDK is considerably lower than that of OVS-KP, the latency of OVS-KP is stable and predictable using M/M/1 queueing. This is because OVS-KP is able to efficiently utilize the available queue buffers. OVS-DPDK achieves its lower latency by minimizing the time spent by packets in the packet scheduler queue; however, this comes at the cost of inefficient resource utilization. To keep the queue length short, it drops all packets that are in excess of the allocated queue rate, which results in high TCP retransmission rates and the need for excess ingress bandwidth in order to maintain the target throughput. The observations of this paper can be leveraged to employ software switching in various scenarios, such as building edge and fog computing systems that need to handle the diverse latency and throughput requirements of IoT applications.

The rest of this paper is organized as follows. We present the related work in Section II. In Section III, we overview the two software switches used in this work. In Section IV, we discuss the importance and extraction of the effective queue rate. In Section V, we show that the latency of OVS-KP is predictable using the M/M/1 queueing model. We discuss the latency of software switching with a user-space data plane in Section VI. In Section VII, we discuss the resource efficiency tradeoffs between different variants of Open vSwitch. In Section VIII, we present discussions on the current and future applicability of this work, highlight its significance, and conclude the paper.

## II Related Work

Existing works on the performance evaluation of software switches are either limited in scope and ignore latency as a performance parameter, or present evaluations of oversimplified use-cases that are not representative of real-world network configurations. Fang et al. [6] evaluate a broad spectrum of the available software switching solutions and present a direct comparison of the maximum throughput values of each of the evaluated switches. Their evaluation of software switches remains surface-level, as they focus on the intricacies of inter-switch comparability, leaving much to be desired in terms of performance analysis. McGuinness et al.
[7] focus on performance evaluations of the BESS software switch in the context of high-throughput datacenter use-cases. Although they evaluate the throughput accuracy of the rate limiter, its effect on latency is neglected. Furthermore, datacenters cannot be compared to edge and fog networking scenarios, as the two types of networks have different hardware and applications. Meyer et al. [8] present a model for software switch performance, but limit the scope of their model to throughput measurements only. Overall, these studies evaluate the performance of software switches primarily in terms of throughput and neglect to include any measurements of latency.

In [9], Zhang et al. perform evaluations across various state-of-the-art software switching solutions. They analyze the performance of the OVS-DPDK, snabb, BESS, FastClick, VPP, and netmap VALE software switches and present comparisons of their maximum throughput and packet latency. Despite the breadth of comparisons, their performance analysis is narrow and only encompasses the most basic configurations and measurements. Emmerich et al. [1] present an in-depth performance evaluation of Open vSwitch (OVS). However, their work primarily focuses on throughput, and the analysis of latency covers only very simple scenarios that are insufficient to model edge and fog networking use-cases. They evaluate latency only as a function of flow throughput and ignore the performance impacts of bandwidth slicing in multi-queue scenarios. He et al. [14] evaluate a software switch bandwidth slicing mechanism; however, their tests were performed in simple scenarios and their results are presented without a thorough analysis of the latency values. The same shortcoming is exhibited in [10, 11, 12, 13] in terms of latency evaluation: their models of packet latency are for simple, synthetic testing scenarios. The latency of a single flow is evaluated in [10, 11, 12], while [13] only evaluates the latency of a single packet. These scenarios are rudimentary and cannot be used to accurately generate models of bandwidth-sliced software switch behaviors.

## III Methodology and Background

Figure 1: The testbed used for the studies of this paper. The two software switches used are OVS-KP and OVS-DPDK.

### III-A Testbed Setup

Figure 1 presents the testbed architecture. To measure the latency of OVS packet switching, we run experiments on a testbed consisting of three machines: a traffic source, a software switch, and a traffic destination. We use Intel 82580 1GbE and Intel X550T 10GbE NICs. The traffic source sends UDP and TCP traffic to the traffic destination through the OVS. For UDP flows with fixed bandwidth, we use iPerf, which supports the specification of UDP flow bandwidth. In the tests, we modify the number of queues, the allocated throughput of each queue, and the type of data flows in each queue. We further discuss the details of each test in its relevant section. Packets are captured using tshark at the egress and ingress ports of the traffic source and traffic destination, respectively. To ensure the synchronization of timestamp values, the traffic source and traffic destination are two VMs running on a single machine, and the clocks of these two VMs are synchronized with that of the hypervisor. This configuration allows us to accurately measure and analyze latency values.
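Given the synchronized captures, per-packet latency can be computed by matching each packet's egress timestamp at the source with its ingress timestamp at the destination. The sketch below is a minimal illustration of this post-processing step; the record format and the packet identifier (e.g., the IP ID field or a payload sequence number extracted from the tshark output) are our own illustrative assumptions, not a prescription of the paper's tooling.

```python
def per_packet_latency(tx_records, rx_records):
    """Per-packet latency (seconds) from two capture traces.

    Each record is an assumed (timestamp, packet_id) tuple, where
    packet_id uniquely identifies a packet across both captures.
    """
    rx_by_id = {pkt_id: ts for ts, pkt_id in rx_records}
    return [rx_by_id[pkt_id] - ts
            for ts, pkt_id in tx_records if pkt_id in rx_by_id]

# toy example: two packets, each taking 1.5 ms through the switch
print(per_packet_latency([(0.000, 1), (0.001, 2)],
                         [(0.0015, 1), (0.0025, 2)]))
```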
### III-B Open vSwitch

Open vSwitch (OVS) [15, 16, 17] is an open-source, production-quality software switch that is compatible with various hypervisors and container systems. OVS is highly programmable and is configured using the OpenFlow and OVSDB protocols [18]. We consider the two main variants of OVS: (i) OVS Kernel-Path (OVS-KP), which implements its data path via a kernel module, and (ii) OVS with the Data Plane Development Kit (OVS-DPDK), which implements its data path through Poll Mode Drivers (PMDs) in user space. We use OVS 2.15.0 and DPDK 20.11.1.

We perform bandwidth slicing on the switches by using their packet schedulers. The packet schedulers that we utilize are configured to shape flows via minimum guaranteed rate and/or maximum limited rate parameters. When combined with flow rules that direct packets to the queues, data flows are shaped to specific minimum and maximum rates.

For OVS-KP, we use the Hierarchical Token Bucket (HTB) packet scheduler, since it is widely used and available in the Linux traffic control module. HTB is a classful queueing discipline that supports hierarchical traffic shaping. Its rate control mechanisms are implemented with the token bucket filter algorithm, and its hierarchical token borrowing system allows parent classes to share tokens with their child classes. This token sharing system allows each child class to enforce a guaranteed minimum rate, while also sharing excess available bandwidth with its sibling classes.

OVS-DPDK uses a different packet scheduler, based on the Two-Rate, Three-Color Marker (TRTCM) algorithm. Similar to HTB, TRTCM also uses a token bucket for rate control and provides traffic shaping abilities such as guaranteed minimum and maximum queue rates. Although HTB and TRTCM are very similar on the surface, their rate control mechanisms are significantly different, which results in different latency and throughput behaviors for queues with the same allocated rate. HTB is a hierarchical implementation of the token bucket filter algorithm, meaning that when a packet arrives at the head of the queue and there are no tokens available, the packet waits in the queue until tokens become available, at which time the packet is dequeued and sent to the NIC. On the other hand, for OVS-DPDK, when a packet arrives at the head of the queue, the TRTCM token buckets are checked for tokens, and if there are not enough tokens for the packet, the packet is dropped. Most significantly, this difference in behavior results in different dequeue rates from the queues; as we will discuss in Sections IV and VI, the dequeue rate impacts flow latency and throughput.
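This shaping-versus-policing distinction can be captured in a toy model. The sketch below is illustrative only and abstracts away the real HTB and TRTCM implementations (class hierarchies, two-rate buckets, and color marking); it isolates just the head-of-queue decision that drives the latency difference: an HTB-like shaper delays packets, while a TRTCM-like policer drops them.

```python
from collections import deque

def htb_like_dequeue(queue, tokens):
    """Shaping (HTB-like): a head packet larger than the available tokens
    WAITS in the queue until the next token refill, adding queueing latency."""
    sent = []
    while queue and queue[0] <= tokens:
        tokens -= queue[0]
        sent.append(queue.popleft())
    return sent, tokens  # remaining packets stay queued

def trtcm_like_dequeue(queue, tokens):
    """Policing (TRTCM-like): every head packet is removed from the queue;
    packets without sufficient tokens are DROPPED instead of delayed,
    which for TCP triggers retransmissions."""
    sent, dropped = [], []
    while queue:
        pkt = queue.popleft()
        if pkt <= tokens:
            tokens -= pkt
            sent.append(pkt)
        else:
            dropped.append(pkt)
    return sent, dropped, tokens

q1 = deque([12000] * 10)  # ten 1500-byte packets, sizes in bits
q2 = deque([12000] * 10)
print(htb_like_dequeue(q1, 60000))    # 5 packets sent, 5 still queued
print(trtcm_like_dequeue(q2, 60000))  # 5 packets sent, 5 dropped
```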
## IV Extracting Effective Queue Rate

In a switching system, a packet experiences three types of latency: transmission latency, processing latency, and queueing latency. Transmission latency is directly related to the NIC's transmission rate and can be easily calculated. Processing latency is caused by various factors, including copying a packet to and from different queues, looking up a packet's forwarding decision in the tables, or waiting for hardware I/O operations. Since hardware switches are solely dedicated to one function, packet processing delay in a hardware switch is consistently low. On the other hand, the processing delay of software switches is usually higher and shows greater statistical variation. The choice of packet scheduler also affects processing delay. Software switches sacrifice processing latency in exchange for greater flexibility.

Last, and the focus of this section, is queueing latency ($L_{q}$), defined as the time spent by a packet in the switch's queue, waiting to be transmitted. This latency depends on the queue rate, the flow rate, and the packet scheduler. In every software switch, each packet must traverse three queues: the ingress NIC queue, the egress NIC queue, and the packet scheduler's queue. In this system, throughput is limited by the lowest-rate queue, which is usually the user-allocated packet scheduler queue. As a result, a packet spends the most time waiting in the packet scheduler queue, because the queueing latency experienced by a packet grows sharply as the queue input rate approaches the queue service rate. We show that this queueing latency follows an M/M/1 queue trend and that, with enough knowledge of the system, the latency can be predicted when using OVS-KP.

### IV-A Observing Packet Scheduler Behavior

An important factor in predicting packet latency through any queueing system is the queue's dequeue behavior. In our case, we need to understand how the packet schedulers dequeue traffic from their rate-limited queues. Although rate-limited queues are allocated with bits/s or bytes/s values, the packet scheduler does not actually dequeue in such small increments. The dequeue behavior varies depending on how the packet scheduler is implemented, and even different packet schedulers with similar queue parameters behave differently. This often-overlooked variable causes packets to experience different queueing latency values, even if the allocated queue rates are identical.

Figure 2: Burst latency and per-packet latency for OVS-KP (a, b) and OVS-DPDK (c, d). Packets are dequeued from HTB and TRTCM queues in different-sized groupings: HTB dequeues in groupings of 16 packets, while TRTCM dequeues in groupings of 8 packets.

In these experiments, we generate UDP flows with various packet burst sizes; then we track the individual packet latencies and the latency of each burst as a whole. Figure 2 presents the results. As Figures 2(a) and 2(c) demonstrate, when we increase the number of packets in each burst, the latency of the whole burst increases in a stepped pattern. This indicates that both HTB and TRTCM dequeue packets from their queues in bursts of packets, instead of one at a time. To further support this, the analysis of individual packet latencies in Figures 2(b) and 2(d) shows that the latencies of the packets in each burst are grouped in segments of about 16 and 8 packets, respectively, although there are small variations in the grouping sizes of the dequeue segments. Figures 2(b) and 2(d) also highlight that the grouping pattern holds true for bursts of all sizes: no matter how big the burst, packets are always dequeued in fixed-size groups. Using this information, we extrapolate that the effective rate of HTB queues must be calculated in units of 16 packets, and the effective rate of TRTCM queues must be calculated in units of 8 packets.

Figure 3: A scenario where an IoT edge device needs to send data and a command to a server. Only case (c) allows faster transmission of the data to the server. Once the server receives the data, it can start processing, and when the command is received, the action can be performed immediately.
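The stepped burst latency of Figure 2 follows directly from grouped dequeueing: a burst leaves the queue in a whole number of dequeue rounds. The sketch below is a back-of-the-envelope model of this effect; the constant time between dequeue rounds is an assumed parameter for illustration, not a measured value.

```python
import math

def burst_latency_steps(burst_pkts, group_size, group_interval_s):
    """Stepped burst latency under grouped dequeueing: a burst of N packets
    drains in ceil(N / group_size) dequeue rounds, so with group_size=16
    a 5-packet and a 15-packet message experience the same latency."""
    rounds = math.ceil(burst_pkts / group_size)
    return rounds * group_interval_s

for n in (5, 15, 17, 33):
    # same latency for 5 and 15 packets; a step up at 17; another at 33
    print(n, burst_latency_steps(n, group_size=16, group_interval_s=0.002))
```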
For example, this knowledge of burst latency behaviors provides devices with the ability to send messages that take advantage of the fact that the latency of a 5-packet message is equal to that of a 15-packet message, since both fit within a single 16-packet dequeue group. For instance, assume an IoT edge device needs to send data and a command to a server. In the first scenario, the data and command are sent as a single message, which is segmented into 20 packets, as Figure 3a shows. This results in the delivery of the data and command at once. In the second scenario, two messages are generated for the data and one message for the command. Assume the first message is segmented into 7 packets, the second into 8 packets, and the third into 5 packets. Once these messages are received by the transport layer of the device, the packets are sent in an interleaved manner. As can be observed in Figure 3b, the command message is delivered first, which cannot be used because the data messages have not been received yet. In the third scenario, we rely on the behavior of software switches and generate two messages: one for the data, which is transmitted first, and one for the command, transmitted second. As Figure 3c shows, once the data is received by the server, it can start processing the data, and when the command arrives, the server can perform the action immediately. Therefore, the third solution provides the minimum latency and better utilization of resources. A similar method can be used for controller-switch communication in SDNs. To ensure timely delivery and execution of commands, the controller can manage the ordering of sent packets based on the command type and size.

## V Delay Prediction of OVS-KP Bandwidth Slicing

We confirm that the queueing latency of OVS-KP switching follows an M/M/1 trend. We set up an experiment to measure the relationship between the latency of packets in a TCP flow and the queue rate. We generate and route a TCP flow through a rate-limited queue in the switch. We do not set any flow rate at the traffic source, because the TCP flow rate naturally increases until it detects packet loss caused by the rate-limiter in the software switch.

Figure 4: The queueing latency of TCP packets when switched by OVS-KP, for (a) a 1GbE NIC and (b) a 10GbE NIC. The delay is dependent on the allocated queue rate, and with knowledge of the queue ingress rate and the packet scheduler implementation, the expected latency can be predicted.

We present the results in Figure 4. This figure demonstrates that (i) queue rates are the determining factor for rate-limited TCP packet latencies, and (ii) the pattern of observed latencies aligns with the latency values that one would expect when modeling each queue as an M/M/1 queue. The results for 1GbE (Figure 4(a)) and 10GbE (Figure 4(b)) NICs are presented side-by-side to show that NIC line rate is not a significant factor in this experiment. It is important to note that when calculating the expected latency, the queue rate must be represented via the amount of data that is dequeued in one instance, i.e., the effective queue rate. In Section IV-A, we showed that HTB dequeues 16 packets at a time. Thus, instead of calculating expected latency using the queue's bit-rate value, we calculate the expected latency using the queue rate in units of 16 packets. This is where knowledge of the average packet size is important, as we now combine average packet size, queue bit rate, and packet scheduler dequeue behavior to generate effective queue rate values.
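To make this conversion concrete, here is a minimal sketch with illustrative, assumed values (a 500 Mbps queue and 1500-byte packets; only the 16-packet HTB grouping comes from the measurements above). It produces the effective queue rate $\mu$ in dequeue groupings per second, which feeds the M/M/1 waiting-time formula introduced next:

```python
def effective_queue_rate(queue_bps: float, avg_pkt_bytes: float,
                         group_size: int) -> float:
    """Effective queue service rate in dequeue groupings per second."""
    pkts_per_sec = queue_bps / (avg_pkt_bytes * 8)
    return pkts_per_sec / group_size

# Illustrative values: 500 Mbps allocated rate, 1500-byte packets, HTB grouping of 16.
mu = effective_queue_rate(500e6, 1500, 16)
print(f"mu = {mu:.1f} groupings/s")          # ~2604 groupings/s

# With the M/M/1 formula of this section, an illustrative utilization of
# rho = 0.98 then predicts L_q = 1/(mu*(1 - rho)) ~ 19 ms.
rho = 0.98
print(f"L_q = {1 / (mu * (1 - rho)) * 1e3:.1f} ms")
```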
We calculate the expected latency via:

$L_{q}=\frac{1}{\mu(1-\rho)}$ (1)

where $\mu$ is the effective queue rate and $\rho$ is the queue utilization ratio [19]. We include the latency predictions in Figure 4 for comparison against the observed values. The relationship between queue rate and observed latency is what is expected from an M/M/1 queueing system. In Figure 4, we validate that predictions based on the observed throughput ($\rho$) and packet scheduler knowledge ($\mu$) are accurate. We generate values for $\rho$ by comparing the observed throughput at the traffic source's egress port with the allocated queue rates. The observed throughput is extracted from the Wireshark captures at the traffic source's egress port. $\mu$ is calculated by converting the units of each queue rate from bits per second to packet groupings per second. Given that variations as small as 0.5% in the queue utilization ratio significantly affect the predicted latency, the agreement confirms that this latency prediction methodology is valid and accurate.

We showed that for bandwidth-sliced flows, queueing latency is the most significant portion of end-to-end latency and that it overshadows transmission latencies; the transmission latency on a 1GbE link for a single packet is on the order of microseconds, while the observed switching latencies are up to four orders of magnitude greater. Current task allocation schemes either ignore latency as a task request parameter or assume that network latencies consist only of trivially calculated transmission latencies. In contrast, our method allows the prediction of communication latency, which can be leveraged to address the requirements of various tasks in edge and fog computing systems. As another example of leveraging this method, an SDN controller can accurately enforce bandwidth slicing schemes that satisfy the expected communication latency between edge devices and switches. Also, to configure switches with latency bounds, the controller can enforce bandwidth slicing methods along all the controller-switch paths.

## VI OVS-DPDK Bandwidth Slicing

In this section, we focus on OVS-DPDK and the effect of the TRTCM packet scheduler on packet latency, in comparison to OVS-KP's HTB.

Figure 5: For OVS-DPDK, the end-to-end packet latency is not as predictable as that of OVS-KP, due to the differences between HTB and TRTCM; (a) 1GbE NIC, (b) 10GbE NIC.

For a direct comparison between the two variants of OVS, this time we use OVS-DPDK and run an experiment similar to that of Section V. We present the results in Figure 5. The results show that the latency behaviors are not at all similar to those of Figure 4. OVS-DPDK queues that are rate-limited with TRTCM cannot be modeled as an M/M/1 queueing system, because the queues are not being dequeued at the allocated queue rate. Although the rate of data sent to the egress NIC matches the allocated rate, the rate at which packets are removed from the queue depends on the CPU frequency and OVS-DPDK's tick rate. Unlike HTB, which uses the availability of tokens to limit the rate at which packets are removed from the queue, TRTCM uses the availability of tokens to decide which action to take. If there are tokens available in the bucket when a packet is dequeued, the packet is passed on to the NIC. If there are not enough tokens for the packet, the packet is dropped. The token bucket is refilled at the allocated queue rate; hence, the amount of data sent to the NIC is limited by that value.
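The contrast between the two policies can be summarized in a schematic toy model; this is not OVS code, and both functions are deliberately simplified, but they capture the behavioral difference just described: HTB holds a head-of-line packet until tokens accumulate, while TRTCM dequeues immediately and drops the packet on a token shortfall.

```python
def htb_step(tokens: float, rate_Bps: float, dt: float, pkt_bytes: int):
    """HTB-style shaping: on a token shortfall the packet stays queued,
    so the shortfall turns into queueing delay."""
    tokens += rate_Bps * dt                 # bucket refilled at allocated rate
    if tokens >= pkt_bytes:
        return tokens - pkt_bytes, "sent"
    return tokens, "waiting"                # packet remains at head of queue

def trtcm_step(tokens: float, rate_Bps: float, dt: float, pkt_bytes: int):
    """TRTCM-style policing: the packet is dequeued regardless, and a
    token shortfall turns into packet loss instead of delay."""
    tokens += rate_Bps * dt                 # bucket refilled at allocated rate
    if tokens >= pkt_bytes:
        return tokens - pkt_bytes, "sent"
    return tokens, "dropped"
```

Both policies limit the long-run egress rate to the token refill rate, but HTB converts a token shortfall into queueing delay, whereas TRTCM converts it into loss; this is the origin of the latency and retransmission differences discussed next.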
As the sketch above suggests, this drop-based approach results in a very high dequeue rate for all TRTCM queues, and the effective dequeue rate is on the order of several Gbps. For OVS-KP, the value of $\rho$ in Equation (1) is close to 1 because the flow rate approaches the effective queue rate, whereas for OVS-DPDK, that value is now much closer to 0 because the effective queue rate is much higher than the flow rate. This results in packets spending significantly less time waiting in the packet scheduler's queues. As we can see from a direct comparison of Figures 4(a) and 5(a), for TCP flows that are rate-limited to 500 Mbps, the latency is reduced from 19.22 ms with HTB to 0.27 ms with TRTCM, a 70x reduction. Although the magnitude of latency reduction varies depending on the allocated queue rate and NIC line rate, this shows that a significant portion of the latency experienced by packets that traverse OVS-KP's rate-limited queues is the time spent waiting in the packet scheduler's queue. OVS-DPDK's rate control mechanism is able to avoid these long queueing latencies, while still being able to accurately rate-limit the traffic to the egress NIC.

## VII Resource Efficiency Comparisons

On the surface, OVS-KP's and OVS-DPDK's rate control mechanisms accomplish the same goal: limit the rate of traffic that is sent to the egress NIC. Although the differences in implementation have significant implications for latency, another implication that is just as important is the effect of the packet scheduler on resource utilization efficiency. One of the main goals in edge/fog task allocation is to utilize resources effectively and efficiently, which, for network resources, is usually accomplished through bandwidth slicing and rate control mechanisms. We have observed that the HTB and TRTCM packet schedulers are both capable of accurately rate-limiting a queue; however, our observations also show that OVS-DPDK's choice to drop all packets that are in excess of the allocated queue rate is a tradeoff between latency and effective bandwidth utilization.

Figure 6: Comparing the performance of switching a TCP flow through OVS-KP and OVS-DPDK using 1GbE ((a) and (b)) and 10GbE ((c) and (d)) NICs. The queue rate control method of OVS-DPDK is considerably less efficient than that of OVS-KP.

We run experiments with the same setup as Figures 4 and 5, and we use iperf3 to capture data for the TCP retransmission rate and TCP congestion window size. Figures 6a and 6c present the results for the TCP retransmission rate. We observe that rate-limiting with TRTCM causes significantly more TCP retransmissions compared to HTB. Rate-limiting to 500 Mbps with TRTCM results in 4233 and 4652 TCP retransmissions per second for 1GbE and 10GbE NICs, respectively. This indicates that for TCP traffic, maintaining an egress throughput of 500 Mbps out of the switch requires an additional 50.8 Mbps and 55.8 Mbps of retransmission traffic, due to the large number of packets that TRTCM drops. More importantly, although OVS-DPDK is able to switch individual packets with lower latency than OVS-KP, the high rate of packet drops/retransmissions has an adverse effect on application message latency. The application layer is not dependent on individual packet latency, but rather on the latency of messages, which can be composed of multiple packets. In a situation with a 10% retransmission rate, large application-layer messages are very likely to experience retransmissions and slowdowns due to the inefficiencies of TRTCM.
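The retransmission overhead quoted above follows directly from the measured retransmission rates; a quick arithmetic check, assuming 1500-byte packets (an assumption made here for illustration, since the exact packet size for these runs is not restated):

```python
def retx_overhead_mbps(retx_per_sec: float, pkt_bytes: int = 1500) -> float:
    """Extra traffic (Mbps) generated by TCP retransmissions."""
    return retx_per_sec * pkt_bytes * 8 / 1e6

for nic, retx in [("1GbE", 4233), ("10GbE", 4652)]:
    overhead = retx_overhead_mbps(retx)
    print(f"{nic}: {overhead:.1f} Mbps overhead "
          f"({overhead / 500 * 100:.0f}% of the 500 Mbps slice)")
# -> 1GbE: 50.8 Mbps (~10%); 10GbE: 55.8 Mbps (~11%)
```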
Figures 6b and 6d show that rate-limited flows with TRTCM have much smaller congestion window sizes than flows that are rate-limited with HTB. Once again, this is related to TRTCM: dropping all packets that are over the allocated queue rate is extremely limiting for the TCP congestion window size. Each time a packet is dropped and retransmitted, the congestion window of that TCP connection is halved. For TCP flows with high retransmission rates, like those we observed with TRTCM, the congestion windows are severely limited and are unable to grow due to the constant packet drops and subsequent window size adjustments. The frequent congestion window size adjustments also result in spikes and dips in flow throughput, which have a detrimental effect on latency predictability. As such, application-layer messages that are sent through OVS-DPDK always have an element of unpredictability due to high retransmission rates, while messages sent through OVS-KP do not.

Since OVS-DPDK operates completely in user-space, it achieves its high performance by constantly consuming 100% of at least one processor core. For high-performance use-cases, a separate core is used for each port, resulting in several processor cores being dedicated entirely to running the DPDK user-space data path. In low-cost and low-energy edge and fog computing scenarios, this is not desirable, especially when compared to OVS-KP, which consumes less than 5% of a single processor core with HTB while switching 10Gbps traffic with hundreds of flow rules and queues.

## VIII Conclusion and Future Work

In this paper, we studied how packet schedulers affect switching latency and resource efficiency. We first developed models to predict the latency of the M/M/1 queueing system that can be found in the HTB packet scheduler. Specifically, we analyzed the behavior of the packet scheduler, and then used these observations to predict the packet latency of TCP flows. We then discussed the design differences between the OVS-KP and OVS-DPDK packet schedulers and showed how each achieves either low latency or resource utilization efficiency at the cost of the other. The results presented in this work provide a foundation from which we can begin to build deterministic software switching systems, specifically low-cost processing and packet switching systems using commodity hardware.

The design decisions that allow OVS-DPDK's TRTCM to achieve low latency in comparison to OVS-KP's HTB come at the cost of inefficient bandwidth usage, throughput instability, and reduced latency predictability. This information is especially important for designing networks with specific performance metrics in mind. Moreover, this information can be leveraged to design packet schedulers that combine the desired properties of HTB and TRTCM. For example, a new packet scheduler that seeks to enforce latency bounds while also achieving flow reliability could dequeue packets according to the queue rate, similar to HTB, and also dynamically adjust the queue length so that packets that cannot meet the latency requirements are dropped, similar to TRTCM. This way, the benefits of using the queue buffers can be realized, while also keeping queueing latency within established bounds.

## References

* [1] P. Emmerich, D. Raumer, S. Gallenmüller, F. Wohlfart, and G. Carle, "Throughput and latency of virtual switching with Open vSwitch: A quantitative analysis," _Journal of Network and Systems Management_, vol. 26, no. 2, pp. 314–338, 2018.
* [2] U. Javed, A. Iqbal, S. Saleh, S. A.
Haider, and M. U. Ilyas, "A stochastic model for transit latency in OpenFlow SDNs," _Computer Networks_, vol. 113, pp. 218–229, 2017.
* [3] C. Powell, C. Desiniotis, and B. Dezfouli, "The fog development kit: A platform for the development and management of fog systems," _IEEE Internet of Things Journal_, vol. 7, no. 4, pp. 3198–3213, 2020.
* [4] P. Heise, F. Geyer, and R. Obermaisser, "Deterministic OpenFlow: Performance evaluation of SDN hardware for avionic networks," in _11th International Conference on Network and Service Management (CNSM)_. IEEE, 2015, pp. 372–377.
* [5] J. Sheth and B. Dezfouli, "Enhancing the energy-efficiency and timeliness of IoT communication in WiFi networks," _IEEE Internet of Things Journal_, vol. 6, no. 5, pp. 9085–9097, 2019.
* [6] V. Fang, T. Lévai, S. Han, S. Ratnasamy, B. Raghavan, and J. Sherry, "Evaluating software switches: hard or hopeless?" _EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2018-136_, 2018.
* [7] R. McGuinness and G. Porter, "Evaluating the performance of software NICs for 100-Gb/s datacenter traffic control," in _Symposium on Architectures for Networking and Communications Systems_, 2018, pp. 74–88.
* [8] T. Meyer, F. Wohlfart, D. Raumer, B. E. Wolfinger, and G. Carle, "Validated model-based performance prediction of multi-core software routers," _Praxis der Informationsverarbeitung und Kommunikation_, vol. 37, no. 2, pp. 93–107, 2014.
* [9] T. Zhang, L. Linguaglossa, J. Roberts, L. Iannone, M. Gallo, and P. Giaccone, "A benchmarking methodology for evaluating software switch performance for NFV," in _IEEE Conference on Network Softwarization (NetSoft)_. IEEE, 2019, pp. 251–253.
* [10] S. Shanmugalingam, A. Ksentini, and P. Bertin, "DPDK Open vSwitch performance validation with mirroring feature," in _23rd International Conference on Telecommunications (ICT)_. IEEE, 2016, pp. 1–6.
* [11] T. Begin, B. Baynat, G. A. Gallardo, and V. Jardin, "An accurate and efficient modeling framework for the performance evaluation of DPDK-based virtual switches," _IEEE Transactions on Network and Service Management_, vol. 15, no. 4, pp. 1407–1421, 2018.
* [12] D. Sattar and A. Matrawy, "An empirical model of packet processing delay of the Open vSwitch," in _IEEE 25th International Conference on Network Protocols (ICNP)_, 2017, pp. 1–6.
* [13] A. W. Manggala, A. Tanwidjaja _et al._, "Performance analysis of white box switch on software defined networking using Open vSwitch," in _International Conference on Wireless and Telematics (ICWT)_. IEEE, 2015, pp. 1–7.
* [14] K. He, W. Qin, Q. Zhang, W. Wu, J. Yang, T. Pan, C. Hu, J. Zhang, B. Stephens, A. Akella _et al._, "Low latency software rate limiters for cloud networks," in _Proceedings of the First Asia-Pacific Workshop on Networking_, 2017, pp. 78–84.
* [15] B. Pfaff, J. Pettit, K. Amidon, M. Casado, T. Koponen, and S. Shenker, "Extending networking into the virtualization layer," in _Hotnets_, 2009.
* [16] J. Pettit, J. Gross, B. Pfaff, M. Casado, and S. Crosby, "Virtual switching in an era of advanced edges," 2010.
* [17] B. Pfaff, J. Pettit, T. Koponen, E. Jackson, A. Zhou, J. Rajahalme, J. Gross, A. Wang, J. Stringer, P. Shelar _et al._, "The design and implementation of Open vSwitch," in _12th USENIX Symposium on Networked Systems Design and Implementation (NSDI)_, 2015, pp. 117–130.
* [18] J. Chen and B. Dezfouli, "Modeling control traffic in software-defined networks," in _7th IEEE International Conference on Network Softwarization (NetSoft)_, 2021.
* [19] J. Abate and W. Whitt, "Transient behavior of the M/M/1 queue: starting at the origin," _Queueing Systems_, vol. 2, no. 1, pp. 41–65, 1987.
# Roller-Coaster in a Flatland: Magnetoresistivity in Eu-intercalated Graphite

A. L. Chernyshev
Department of Physics and Astronomy, University of California, Irvine, California 92697, USA

O. A. Starykh
Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112, USA

###### Abstract

Novel phenomena in magnetically-intercalated graphite have been a subject of much research, pioneered and promoted by M. S. and G. Dresselhaus and many others in the 1980s. Among the most enigmatic findings of that era was the dramatic, roller-coaster-like behavior of the magnetoresistivity in the EuC6 compound, in which magnetic Eu2+ ions form a triangular lattice that is commensurate with the graphite honeycomb planes. In this study, we provide a long-awaited microscopic explanation of this behavior, demonstrating that the resistivity of EuC6 is dominated by spin excitations in the Eu-planes and their highly nontrivial evolution with the magnetic field. Together with our theoretical analysis, the present study showcases the power of synthetic 2D materials as a source of potentially significant insights into the nature of exotic spin excitations.

###### Contents

1. I Introduction
2. II Phenomenology and modeling
   1. II.1 Electronic structure of EuC6
   2. II.2 Spin model and parameters
      1. II.2.1 Classical ground states
      2. II.2.2 Tilt angles and critical fields
      3. II.2.3 Parameters
3. III Spin excitations
   1. III.1 General case of a coplanar state
   2. III.2 $1/S$-expansion
   3. III.3 LSWT Hamiltonian
   4. III.4 Diagonalization
   5. III.5 Magnon eigenenergies
4. IV Kondo coupling and resistivity
   1. IV.1 Kondo coupling
   2. IV.2 Resistivity
      1. IV.2.1 Large-${\bf q}$ insights
      2. IV.2.2 Estimate of $J_{K}$
5. V Results
   1. V.1 $T$-dependence of resistivity
   2. V.2 Magnetoresistivity vs biquadratic-exchange
   3. V.3 Magnetoresistivity, role of $k_{F}$
   4. V.4 Outlook
6. VI Summary
7. A First order transitions
   1. A.1 Y-UUD transition
   2. A.2 V-FM transition
8. B Particular cases
   1. B.1 Polarized state
   2. B.2 120° state
   3. B.3 Plateau state
9. C Transport formalism, $1/\tau$ approximation
   1. C.1 Basics and conventions
   2. C.2 Phonon-like spin-conserving scattering
      1. C.2.1 Ansatz and solution
      2. C.2.2 2D case
   3. C.3 Spin-flip scattering
   4. C.4 Quasiparticle $1/\tau_{\rm qp}$, angular dependence
      1. C.4.1 Quasiparticle vs transport $1/\tau$
      2. C.4.2 Angular dependence of $1/\tau_{\rm qp}$

## I Introduction

The two-dimensional (2D) world of Flatland, a mathematical abstraction and a subject of popular culture [1], has, arguably, received its ultimate physical realization in the form of graphene [2, 3], whose unique properties [4] have ushered in a new era of making artificial heterostructures via a Lego-like [5] assembly of layered materials. Perhaps with an exception of research in the twisted bi- and $n$-layer graphene [6], the fledgling field of van der Waals magnets holds the most promise in opening new horizons for fundamental studies and applications along the path of using this technology [7, 8, 9, 10, 11].

Historically, a more traditional, if not ancient [12], way of achieving similar goals of synthesizing materials with novel properties from a stack of carbon layers and various elements and compounds has relied on the process of intercalation, also known in popular culture [13]. The research in graphite intercalation compounds (GICs) has attracted significant attention in the past, with the evolution of such studies and understanding of these materials outlined in several books and, specifically, in the reviews by M. S.
and G. Dresselhaus, whose efforts contributed to much of the progress in this area, see Refs. [14, 15, 16, 17].

Figure 1: (a) The schematics of EuC6. Small and large dots are C and Eu atoms, respectively. (b) The EuC6 in-plane resistivity data vs magnetic field, $\rho(H)$, for various temperatures, see Ref. [18]. Arrows mark the fields of the anomalies in $\rho(H)$ that correspond to transitions between magnetic states, extrapolated to $T\!=\!0$. [Reprinted with permission from S. T. Chen, M. S. Dresselhaus, G. Dresselhaus, H. Suematsu, H. Minemoto, K. Ohmatsu, and Y. Yosida, Phys. Rev. B 34, 423 (1986), Ref. [18]. Copyright (1986) by the American Physical Society.] (c) Our representative results for the transport scattering rate.

Of the fundamental footprint of this research, it is the magnetically-intercalated compounds that have produced the most intriguing phenomena [16]. The case of EuC6, made of alternating honeycomb layers of carbon and triangular-lattice layers of Eu ions, shown schematically in Fig. 1(a), particularly stands out. A highly dramatic, roller-coaster-like dependence of the in-plane resistivity on the magnetic field, reproduced from Ref. [18] in Fig. 1(b), is clearly indicative of the intricately intertwined magnetic and electronic degrees of freedom of this material. Incidentally, EuC6 is also the first magnet to exhibit the fabled $1/3$-magnetization plateau [19] and was inspirational for an understanding of this state [20].

The pioneering studies of EuC6 [21, 22, 23, 18] have analyzed and successfully identified the key exchange terms of the triangular-lattice spin-$7/2$ model Hamiltonian of the localized $4f$ orbitals of Eu2+ ions that are necessary to understand the field-induced phases and the concomitant magnetization data [19]. However, while yielding a reasonable estimate of the Kondo coupling, the sole attempt to explain the magnetoresistivity itself [24] provided a largely unsuccessful modeling of it via a crude consideration of the spin-scattering of electrons and suggested a rather relic backflow mechanism to explain the $T$-dependence of the resistivity. Thus, it is fair to say that, by and large and to the best of our knowledge, there exists no proper explanation of the key resistivity results observed in EuC6, shown in Fig. 1(b). Furthermore, the refocusing of the research of the 1980s and 1990s on correlated systems and high-temperature superconductivity has left these striking results in their enigmatic state.

In this work, we provide a microscopic theory of the magnetoresistivity of EuC6 and demonstrate that its highly nontrivial evolution with the magnetic field must be dominated by the scattering off the spin excitations in the Eu-planes. Figure 1(c) demonstrates representative results of our theory, which capture most of the qualitative and quantitative features of the experimental data in Fig. 1(b), with the details of the theory provided below.

In anticipation of the upcoming revival of the research in graphite intercalation compounds, driven by an interest in novel magnetic systems, our effort will help bring together different approaches to the search for new phenomena [8, 17] in graphite-derived artificial magnetic materials. More broadly, we would also like to highlight that there are a number of conducting magnetic materials that exhibit a highly non-monotonic magnetoresistivity [25, 26, 27, 28], showing that such measurements can serve as a very sensitive probe of field-induced phase transitions.
However, most theoretical explanations, if any, are limited to an associative construction of phenomenological spin models to match the number of phase transitions and broad trends in magnetization [28], without any attempt to explicate scattering mechanisms and calculate resistivity. In that respect, our present study is also the one that accomplishes precisely this goal: a fully microscopic calculation of the resistivity throughout all the phases in the phase diagram of the underlying spin model. We anticipate our results not only to be inspirational for the broader research in metallic magnets, but also to provide technical guidance for similar studies.

We outline, in broad strokes, our approach and results. We build on the achievements of the prior work on EuC6 [21, 22, 23, 18] and reanalyze the phenomenological constraints on the triangular-lattice spin-$7/2$ model of Eu2+ layers. In this analysis, we also use more recent experimental insights into the magnetic ground state of EuC6 [29] and density-functional theory of its electronic structure [30]. Thus, we establish bounds on the exchange parameters as related to the phenomenology of different magnetic phases of EuC6, examine the ranges of parameters that make transitions between the phases first-order, and formulate a minimal model to describe EuC6.

We proceed by constructing the spin-wave theory for all the field-induced phases of that model. Although a numerical procedure is generally needed to obtain magnon eigenenergies, the approach leading to them, as well as the results for some of the phases, are fully analytical. While the Kondo coupling between conducting electronic states and Eu2+ spins is fully local, the matrix elements of electron scattering on magnons have a nontrivial form, owing to the internal structure of quasiparticle eigenstates in different phases. This structure leads, among other things, to the non-spin-flip scattering processes in the non-collinear phases. We articulate that these matrix elements are essential for a consistent calculation of the transport scattering rate. The expression for the latter, given in a concise form, is derived using the Boltzmann formalism, which we revisit for both spin-flip and non-spin-flip channels, providing a thorough derivation of the relaxation-time approximation in the process.

The temperature-dependence of the resistivity anticipated from our theory is discussed for all field-induced phases. Significantly, the zero-field results of our theory demonstrate an analogue of the phonon-dominated resistivity behavior, but due to scattering off the acoustic magnons of the 120$\degree$ state, with a 2D equivalent of the Bloch-Grüneisen low-temperature asymptote of $\rho\!\propto\!T^{4}$ and the high-temperature Ohm's law, $\rho\!\propto\!T$. Given the extent of the magnon bandwidth, the nearly linear trend of $\rho(T)$ observed in Ref. [24] above 8 K is shown to be well within the onset of the Ohmic regime.

The resistivity calculations are performed at experimentally relevant temperatures for various parameters of the minimal model, to demonstrate qualitative trends, and for a specific set of parameters that best describes EuC6. We also investigate the dependence of our results on the filling fraction of the electronic bands, encoded in the Fermi momentum $k_{F}$, and conclude that the relatively smaller values of $k_{F}\!\lesssim\!\pi/3$ provide a better correspondence to the EuC6 phenomenology, inviting more research into a verification of its electronic properties.
Other intriguing features of the resistivity for the larger values of $k_{F}$, potentially controllable by doping, are also discussed. Altogether, the results of our model for the transport relaxation rate, offered in Fig. 1(c) for a representative $k_{F}\!=\!\pi/3$, show a striking similarity to the experimental data in Fig. 1(b), with the possible origin of the discrepancies at higher fields discussed below.

Our theory implicitly contains the field-dependence via that of the magnon spectra and scattering matrix elements, which, in turn, depend on the spin arrangement in each of the field-induced phases. It also properly accounts for the effect of the thermal population of magnetic scatterers on the resistivity. One of the qualitative messages of our study is the importance of the non-spin-flip channel of the scattering, which is present in the phases with the non-collinear spin configurations, but is absent for the collinear ones. This effect explains the weaker scattering and lower resistivity in the $1/3$-magnetization plateau and fully polarized phases.

The general picture that emerges from our analysis is that of the resistivity as a very informative probe of not only field-induced phase transitions, but also of the elementary spin excitations in these phases. The provided thorough theoretical analysis of the iconic two-dimensional triangular-lattice antiferromagnet coupled to conduction electrons showcases a largely untapped power of the synthetic 2D materials as a source of potentially significant insights into the nature of exotic spin excitations. Our approach and findings can be applied, for example, to the electron scattering by the fractionalized spinons of the Kitaev spin liquid [31, 32] and to the other magnetically-intercalated systems such as chalcogenides [33, 34].

The paper is organized as follows. Section II.1 discusses the electronic structure of EuC6 and the approximate values of the Fermi momenta. Section II.2 gives an overview of the phenomenologically-motivated spin model of EuC6, its classical ground states and critical fields, and the parameters of the minimal model. Details on the first-order transitions are delegated to Appendix A. Spin excitations of the model for all field-induced phases are discussed in Section III, which provides details of the spin-wave formalism and results for representative magnon eigenenergies and eigenfunctions. The fully analytical results for the polarized, 120$\degree$, and plateau phases are given in Appendix B. The Kondo coupling and its estimate, as well as the resistivity and some qualitative insight into it, are discussed in Section IV. This consideration relies, in no small way, on a detailed derivation of the relaxation rates for the spin-flip and non-spin-flip channels from the Boltzmann formalism, provided in Appendix C, which also discusses possible limitations of this approach and potential new phenomena at large values of $2k_{F}$. The temperature-dependence of the magnetoresistivity, results for various values of the key model parameters and Fermi momentum, and an outlook on possible future extensions are given in Section V. We provide a summary in Section VI.

Figure 2: (a) Energy bands of graphene in its full BZ (dashed lines), and folded onto the Eu-lattice BZ to represent the rigid-band structure of Eu-intercalated graphite (solid lines) along the paths M${}_{\rm gr}\Gamma$M(K)KgrMgr shown in (b); high-symmetry points are highlighted. Two Dirac bands are color-coded to indicate their origin before folding.
Energies are in units of the graphene tight-binding hopping parameter $t\!=\!3.16$ eV [4]. The horizontal line is the Fermi energy $E_{F}\!=\!0.83t\!\approx\!2.62$ eV. It matches the Fermi momenta $k_{F,{\rm min}}\!\approx\!0.48\pi/a$ and $k_{F,{\rm max}}\!\approx\!0.7\pi/a$ from Ref. [30] of the trigonally warped Fermi surfaces shown in (b), which approximately correspond to the $0.5e$ filling of the bands. Inset: crystal structure of Eu-GIC from Fig. 1 with the primitive translational vectors of the Eu and C lattices. (b) Brillouin zones of the graphene and of the Eu-based triangular lattice. Fermi surfaces at $E_{F}$ are color-coded according to the bands in (a). Two representative circular Fermi surfaces, with $k_{F}\!=\!0.4\pi/a$ and $0.7\pi/a$, are shown by dashed lines. High-symmetry paths are indicated by the arrows.

## II Phenomenology and modeling

### II.1 Electronic structure of EuC6

The electronic structure of Eu-intercalated graphite EuC6 was investigated experimentally and theoretically in the mid-90s of the last century [30], with a summary of these efforts given in Ref. [17]. Structurally, EuC6 is a so-called stage-I intercalated compound, meaning that the Eu layers alternate with those of carbon. Viewed from a graphite layer, the rare-earth atoms are located on top of the centers of the graphite hexagons and form a $\sqrt{3}\times\!\sqrt{3}$ superstructure, as illustrated in Fig. 1(a). The material is characterized by the so-called $A\alpha A\beta$ stacking (space group $P6_{3}/mmc$), in which Eu atoms form a hexagonal close-packed structure with alternating positions $\alpha$ and $\beta$ between consecutive layers, while the carbons follow the $AA$ stacking [35, 36, 29]. This arrangement of carbon sheets is different from the $AB$, or Bernal, stacking of graphite. As a result, the principal unit associated with the Eu-based triangular lattice can be seen as containing one Eu and six carbon atoms, while the structural unit cell contains two Eu atoms and twelve carbons. As is shown in Fig. 2(b), the two-dimensional (2D) Brillouin zone (BZ) of the triangular Eu lattice is three times smaller than that of graphene. The lattice constants of the triangular Eu lattice and of the honeycomb graphene lattice are related as $a\!=\!\sqrt{3}a_{\rm gr}$, see Fig. 2.

The key features of the electronic band structure of EuC6 can be understood within the "rigid-band" approximation, see Ch. 5 of Ref. [17]. One assumes that the band structure of the graphene layer is not changed by the Eu intercalation, with the latter resulting only in a partial filling of the graphene bands up to a Fermi energy $E_{F}$, illustrated in Fig. 2(a) by a horizontal line. Upon folding onto the Eu-based Brillouin zone, the Dirac bands are mapped from the proximities of the Kgr and K${}^{\prime}_{\rm gr}$ points of the graphene Brillouin zone onto the neighborhood of the $\Gamma$ point, see Fig. 2. These bands are equivalent up to a $\pi/3$ rotation, with a representative constant-energy cut demonstrating characteristic "flower-petal" Fermi surfaces originating from the trigonal $C_{3}$ symmetry of the graphene lattice, see Fig. 2(b). These two bands from the two valleys at Kgr and K${}^{\prime}_{\rm gr}$ are the ones being filled away from the charge-neutrality point by the doping provided by the intercalated Eu. To estimate the size of the Fermi surfaces produced by doping, one can approximate them as circles with a radius $k_{F}$, neglecting their trigonal warping.
Naturally, the Fermi momentum $k_{F}$ is determined by the 2D density of electrons donated to a graphene sheet by the Eu layer. Taking into account band (valley) and spin degeneracy factors yields $n_{e}^{2D}\!=\!k_{F}^{2}/\pi$ [17]. The nominal valence state of Eu is Eu2+. Assuming that all 2$e$/Eu go into the conduction bands and using the 2D volume of the Eu unit cell $V_{c}\!=\!a^{2}\sqrt{3}/2$, one obtains $k_{F,2e}\!=\!(4\pi/\sqrt{3}a^{2})^{1/2}\!\approx\!0.86\pi/a$. The same result can be obtained by matching the area (2D volume) of the fully occupied, doubly-degenerate triangular-lattice Brillouin zone of the Eu lattice, $V_{BZ}^{\triangle}\!=\!8\pi^{2}/\sqrt{3}a^{2}$, with the four-fold degenerate (valley$\times$spin) Fermi circle of radius $k_{F,2e}$. Altogether, the Fermi surface in EuC6, estimated within this approach, is expected to be large.

The detailed calculations of the electronic structure of EuC6 in Ref. [30] feature a band structure that is not unlike the rigid-band picture in Fig. 2(a), with the bands that are crossing the Fermi level clearly reminiscent of the folded graphene bands. However, two key differences are a significantly lower doping of the carbon $\pi$-orbitals, which accounts for about $0.5e$ per Eu2+, and the rest of the electrons filling up the Eu-derived $spd$-hybrid band, with the latter absent in the rigid-band description [30, 17]. These findings are also supported by the angle-resolved photoemission studies of stage-I EuC6 and stage-II EuC12 materials, reported in Ref. [30]. The most direct implication of the first result for our analysis of the Fermi surfaces is the four times smaller density of donated electrons, which straightforwardly translates into a two times smaller Fermi momentum in the graphene conduction bands, $k_{F,e/2}\!\approx\!0.43\pi/a$. We also estimate the Fermi momenta of the "true," trigonally warped Fermi surfaces from the band structure in Ref. [30] as $k_{F,{\rm max}}\!\approx\!0.7\pi/a$ and $k_{F,{\rm min}}\!\approx\!0.45\pi/a$, in qualitative agreement with the estimate of $k_{F,e/2}$ above. Our choice of the representative $E_{F}\!=\!0.83t$ (in units of $t\!=\!3.16$ eV [4]) in Fig. 2(a) and of the resultant Fermi surfaces in Fig. 2(b) is made to match the Fermi momenta from Ref. [30], which, in turn, should approximately correspond to the $0.5e$ filling of the bands.

The other shortcoming of the rigid-band approximation is the omission of the Eu-derived partially filled $sd$-hybrid band [30, 17]. It was also argued that the hybridization of the Eu $sd$-orbitals and graphene $\pi$-orbitals is responsible for the mediation of the strong Kondo interaction between the localized $4f$-orbital spins of Eu and the conduction $\pi$-orbital electrons of graphite, estimated at $J_{K}\!\approx\!0.15$ eV [24]. This key element of our study is described in Sec. IV.1. In our analytical treatment of the scattering rate in Sec. IV.2 and Appendix C, we are motivated by the analysis and discussion provided in this Section and approximate the relevant electronic degrees of freedom of EuC6 by the two degenerate bands with circular Fermi surfaces of radius $k_{F}$ centered around the $\Gamma$ point. We treat $k_{F}$ as a parameter and show how the key features of the calculated magnetoresistivity evolve with it, see Sec. V.
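The Fermi-momentum estimates above follow from simple electron counting; a minimal numeric sketch, with the lattice constant set to $a\!=\!1$ so that $k_F$ comes out in units of $\pi/a$:

```python
import numpy as np

def k_fermi(electrons_per_Eu: float, a: float = 1.0) -> float:
    """k_F (in units of pi/a) for a 4-fold degenerate (valley x spin) band,
    n_e = k_F^2/pi, with the given number of electrons donated per Eu cell."""
    cell_area = np.sqrt(3) * a**2 / 2      # 2D volume of the Eu unit cell
    n_e = electrons_per_Eu / cell_area     # donated 2D electron density
    return np.sqrt(np.pi * n_e) / (np.pi / a)

print(f"2e/Eu  : k_F = {k_fermi(2.0):.2f} pi/a")   # ~0.86, rigid-band count
print(f"0.5e/Eu: k_F = {k_fermi(0.5):.2f} pi/a")   # ~0.43, per Ref. [30]
```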
We expect the renewed interest in the problem to result in a convergence of the band-structure calculations with the experimental data regarding the relevant electronic structure and parameters of EuC6 and other GICs.

### II.2 Spin model and parameters

It has been proposed in Refs. [18, 21, 22, 23] that the minimal model that describes the phenomenology of the magnetism in EuC6 is the triangular-lattice $S\!=\!7/2$ model

$\mathcal{H}=\sum_{\langle ij\rangle_{n}}J_{n}\,\mathbf{S}_{i}\cdot\mathbf{S}_{j}-B\sum_{\langle ij\rangle_{1}}\left(\mathbf{S}_{i}\cdot\mathbf{S}_{j}\right)^{2}-\mathbf{h}\cdot\sum_{i}\mathbf{S}_{i},$ (1)

where $\langle ij\rangle_{1(2)}$ denote the (next-)nearest-neighbor bonds with the corresponding exchanges $J_{1(2)}$ and ${\bf h}\!=\!g\mu_{B}\mathbf{H}$ in the Zeeman term. A crucial ingredient of this model is the biquadratic term. While $B$ may be small compared to the exchanges, it is important because of the $S^{2}$ amplification factor. It was argued in Refs. [18, 21, 22, 23] that this minimal model would not be complete without the ring-exchange term, which is discussed below in some more detail. While the biquadratic and ring-exchange terms play a similar role in stabilizing the up-up-down (UUD or plateau) state in a wide range of fields, our analysis of the EuC6 phenomenology provided below points to values of the ring exchange that are secondary to $B$, differing from the values advocated in Refs. [18, 21]. However, given the close similarity of their effects, this variation is likely inconsequential and amounts to a different parametrization of such effects within an effective model. In the spin-wave consideration that follows, we will ignore the ring-exchange term entirely, citing the cumbersomeness of its treatment.

Another difference of our model from the consideration of Refs. [18, 21, 22, 23] is that the exchange terms in (1) are taken as Heisenberg, not $XY$. This makes no difference for the classical phase diagram in the in-plane field, which was simulated using classical Monte Carlo in Ref. [18] in the $XY$ limit. However, the actual anisotropy in EuC6 is unlikely to exceed 10%, as is evidenced by the very similar saturation fields in the in-plane and out-of-plane magnetization and by the nearly isotropic $g$-factors [18], justifying our choice of the isotropic limit of the model.

Figure 3: Triangular lattice, its elementary translation vectors ${\bm{\delta}}_{\alpha}$, the primitive unit cell for the three-sublattice spin structures (shaded) with its basis vectors ${\bf a}_{\alpha}$, and an example of such a structure with $\{{\rm A},{\rm B},{\rm C}\}$ sublattices. The laboratory reference frame $\{x_{0},y_{0},z_{0}\}$ and the field direction are indicated.

#### II.2.1 Classical ground states

In this work, we focus exclusively on the field orientation that is in the plane of Eu2+ ions, see Fig. 3. While for the isotropic approximation that we choose in model (1) the direction of the field is irrelevant, the phenomenology that follows identifies with that of the in-plane field data for EuC6, which exhibits a weak easy-plane ($XXZ$) anisotropy [21]. The out-of-plane field direction in this latter case yields a different, and much simpler, magnetization and ground-state evolution [37]. Triangular-lattice antiferromagnets host a rather astonishing variety of unconventional field-induced phases, see Refs. [38, 39, 40].
As we have noted above, EuC6 was the first material in which the best known of such unconventional phases, the UUD magnetization-plateau state, has been identified [19].

Figure 4: (a) The schematics of the evolution of the magnetic order with the field, from the 120${\degree}$ state at $H\!=\!0$ to the Y phase, with the transition to the UUD plateau phase at $H_{c1}$, from the UUD phase to the V phase at $H_{c2}$, followed by a transition to the saturated FM phase at the saturation field $H_{s}$. The representative three-sublattice spin structures are shown. (b) Angles of spins with the laboratory $z_{0}$ axis (field direction) for an arbitrary coplanar three-sublattice structure. Angles $\widetilde{\alpha}\!=\!\{\beta,\alpha_{1},\alpha_{2}\}$ correspond to the $\alpha\!=\!\{{\rm A},{\rm B},{\rm C}\}$ sublattices. (c) Sets of $\widetilde{\alpha}$ for all phases.

For the model (1), the field-evolution of the classical ground states is known from the earlier works [20, 39, 21], with the schematics of the evolution of the magnetic order with the field shown in Fig. 4(a). At $H\!=\!0$, spins assume a 120${\degree}$ configuration, which was confirmed for EuC6 by muon-spin spectroscopy [29]. A finite field continuously deforms it into the so-called Y structure, followed by a transition to the UUD (plateau) state at $H_{c1}$. The spin angles and the field direction are shown in Fig. 4(b) and (c). A higher field induces a transition from the UUD phase to the V phase at $H_{c2}$ and to the fully polarized FM phase at the saturation field $H_{s}$. It is worth noting that in all ordered phases, the spin configurations are coplanar and belong to the three-sublattice structure with the same unit cell, see Fig. 3 and Fig. 4.

In the earlier studies of EuC6 [18, 21, 22, 23], the minimal model (1) was also augmented by the ring-exchange term

$\mathcal{H}_{K}=K\sum_{\langle ijk\ell\rangle}P_{ijk\ell},\qquad P_{ijk\ell}=\left(\mathbf{S}_{i}\cdot\mathbf{S}_{j}\right)\left(\mathbf{S}_{k}\cdot\mathbf{S}_{\ell}\right)+\left(\mathbf{S}_{i}\cdot\mathbf{S}_{\ell}\right)\left(\mathbf{S}_{j}\cdot\mathbf{S}_{k}\right)-\left(\mathbf{S}_{i}\cdot\mathbf{S}_{k}\right)\left(\mathbf{S}_{j}\cdot\mathbf{S}_{\ell}\right),$ (2)

where the spins belong to the elementary nearest-neighbor four-site plaquettes $\langle ijk\ell\rangle$.
After some deliberation, one can write the classical energy of the model (1) with the ring-exchange term (2) for an arbitrary coplanar three-sublattice structure as

$\frac{E_{cl}}{NS^{2}J_{1}}=(1-k)\big(\cos\widetilde{\alpha}_{AB}+\cos\widetilde{\alpha}_{AC}+\cos\widetilde{\alpha}_{BC}\big)+3j_{2}-b\big(\cos^{2}\widetilde{\alpha}_{AB}+\cos^{2}\widetilde{\alpha}_{AC}+\cos^{2}\widetilde{\alpha}_{BC}\big)-h\big(\cos\widetilde{\alpha}_{A}+\cos\widetilde{\alpha}_{B}+\cos\widetilde{\alpha}_{C}\big)+2k\big(\cos\widetilde{\alpha}_{AB}\cos\widetilde{\alpha}_{AC}+\cos\widetilde{\alpha}_{AB}\cos\widetilde{\alpha}_{BC}+\cos\widetilde{\alpha}_{AC}\cos\widetilde{\alpha}_{BC}\big),$ (3)

where $N$ is the number of sites in the triangular lattice, the dimensionless field and exchange parameters are in units of the nearest-neighbor exchange $J_{1}$, $h\!=\!g\mu_{B}H/3J_{1}S$, $j_{2}\!=\!J_{2}/J_{1}$, $b\!=\!BS^{2}/J_{1}$, and $k\!=\!KS^{2}/J_{1}$, the spin angles with the field direction $\widetilde{\alpha}_{\alpha}\!=\!\{\beta,\alpha_{1},\alpha_{2}\}$ correspond to the $\alpha\!=\!\{{\rm A},{\rm B},{\rm C}\}$ sublattices according to Fig. 4(b) and (c), and the mutual angles of the spins are $\widetilde{\alpha}_{AB}\!=\!\beta-\alpha_{1}$, $\widetilde{\alpha}_{AC}\!=\!\beta-\alpha_{2}$, and $\widetilde{\alpha}_{BC}\!=\!\alpha_{1}-\alpha_{2}$.

#### II.2.2 Tilt angles and critical fields

Energy minimization in (3) at a fixed field with respect to the spin angles should produce both the equilibrium spin configurations and the critical fields for the transitions between phases. Figure 4 shows that in the Y and V phases the spin angles depend on the field continuously, while the spins are (anti)collinear with the field for the full extent of the UUD and FM phases. In the Y and V phases, the general form of the classical energy in (3) simplifies, with the energy of the Y phase controlled by one independent angle and that of the V phase by two angles. For the Y phase, straightforward algebra gives an equation for the angle $\alpha_{1}$

$\frac{\partial E^{\rm Y}_{cl}}{\partial\alpha_{1}}=0=(1+b)\,x-a_{0}-6k\,x^{2}-4b\,x^{3}\,,$ (4)

where $x\!=\!\cos\alpha_{1}$ and $a_{0}\!=\!(1+h-3k)/2$. Since a cubic equation allows for analytical solutions [41], the angles of the spin configuration within the Y phase are fully determined by such a solution of (4). In the spin-wave treatment of the model (1) presented below, the equilibrium spin configuration in the Y phase is obtained from the $k\!=\!0$ version of (4).

As was first noted in Ref. [21], the evolution of $\alpha_{1}$ with $H$ becomes discontinuous and the transition to the UUD phase turns first-order at larger values of $B\!>\!0$ and $K\!>\!0$. However, leaving this detail aside for a moment, one can always find a solution for the transition field between the Y and UUD phases by assuming it to be continuous and putting $\cos\alpha_{1}\!=\!1$ in (4), which yields

$h_{c1}=1-6b-9k\,,$ (5)

in agreement with Ref. [21]. The meaning of this critical field is twofold. It is the true critical field for a phase transition at the smaller values of $B$ and $K$, where the transition is continuous. In what follows, we focus on the $K\!=\!0$, "$B$-only" model (1), for which the continuous transition can be shown to exist up to $b_{c}\!=\!1/11$.
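For the $K\!=\!0$ model, the equilibrium Y-phase angle follows from the cubic equation (4); a minimal numeric sketch, with $b$ set to the illustrative value used for EuC6 later in the text:

```python
import numpy as np

def y_phase_cos_alpha1(h: float, b: float) -> float:
    """Equilibrium cos(alpha_1) in the Y phase from Eq. (4) with k = 0:
    (1+b)x - a0 - 4b x^3 = 0, where a0 = (1+h)/2. Valid inside the Y
    phase; near the weakly first-order boundary two physical roots may
    coexist, and min() selects the Y branch."""
    a0 = (1 + h) / 2
    roots = np.roots([-4 * b, 0.0, 1 + b, -a0])
    x = [r.real for r in roots
         if abs(r.imag) < 1e-9 and 0.5 - 1e-6 <= r.real <= 1 + 1e-6]
    return min(x)

b = 0.0922   # b = B*S^2/J1 for the EuC6 parameter set discussed below
print(np.degrees(np.arccos(y_phase_cos_alpha1(0.0, b))))   # ~60 deg: 120-degree state
print(np.degrees(np.arccos(y_phase_cos_alpha1(0.4, b))))   # spins tilt toward the field
# As h -> h_c1 = 1 - 6b, the root approaches x = 1 (UUD boundary), cf. Eq. (5).
```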
For the values of $b\!>\!b_{c}$, the Y phase is stable up to the _higher_ critical field

$\widetilde{h}_{c1}=\sqrt{\frac{4(1+b)^{3}}{27b}}-1\,,$ (6)

at which the angle changes discontinuously. However, the critical field in (5) continues to define the region $\widetilde{h}_{c1}\!>\!h\!>\!h_{c1}$ where the plateau phase is (meta)stable, meaning that the spin excitations defined within the UUD phase are stable down to $h_{c1}$ in (5). A detailed consideration of the critical fields associated with the first-order transitions is provided in Appendix A. Somewhat fortuitously, our choice of parameters for EuC6 discussed below corresponds to $b$ only very slightly larger than $b_{c}$, so the transitions that we find are only marginally first-order. Experiments in EuC6 [18] have also indicated small hysteresis effects in the magnetoresistance, suggesting a correspondence between the two.

For the V phase, energy minimization in (3) yields the following equations in the angles, $\sin\beta\!=\!2\sin\alpha_{1}$ and

$h\,\sin\beta=2\sin\gamma\,\big(1+k+2(k-b)\cos\gamma\big)\,,$ (7)

where $\gamma\!=\!\alpha_{1}+\beta$. For the "$B$-only" model (1) that we focus on below, one can simplify (7) to an equation for $\beta$ of the form $h\!=\!F(\cos\beta,b)$, with

$F(x,b)=\big(x+\sqrt{x^{2}+3}\big)\big(1-b\big(x\sqrt{x^{2}+3}-1+x^{2}\big)\big)\,,$ (8)

which can be solved numerically to find the $\alpha_{1}$ and $\beta$ angles of the equilibrium spin configuration in the V phase. An approach to the transitions from the UUD to the V and from the V to the FM phases by assuming their continuity and the (anti)collinearity of the spins in (7) yields

$h_{c2}=1+2b-k\,,\ \ \ h_{s}=3\big(1-2b+3k\big)\,,$ (9)

also in agreement with Ref. [21]. While the transition at $h_{c2}$ remains continuous for a wide range of parameters, the transition to the saturated phase for the "$B$-only" model (1) turns first-order at the same $b_{c}\!=\!1/11$ as the Y-to-UUD transition at $h_{c1}$ discussed above, showing a similar phenomenology. Given that the range of parameters discussed below is only weakly affected by the associated discontinuities, we will carry on referring to $h_{c1}$, $h_{c2}$, and $h_{s}$ in Eq. (5) and Eq. (9) as the "true" critical fields, see Appendix A for more detail.
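Equation (8) is easily solved by bracketing: at $k\!=\!0$, $F(-1,b)\!=\!1+2b$ and $F(1,b)\!=\!3(1-2b)$ reproduce $h_{c2}$ and $h_{s}$ of Eq. (9), so for any field in between a standard root finder recovers the V-phase angles. A minimal sketch, again with the illustrative $b\!=\!0.0922$:

```python
import numpy as np
from scipy.optimize import brentq

def F(x: float, b: float) -> float:
    """Right-hand side of h = F(cos(beta), b), Eq. (8)."""
    s = np.sqrt(x * x + 3)
    return (x + s) * (1 - b * (x * s - 1 + x * x))

def v_phase_angles(h: float, b: float):
    """Equilibrium (beta, alpha_1) in degrees in the V phase, h_c2 < h < h_s."""
    x = brentq(lambda t: F(t, b) - h, -1.0, 1.0)   # x = cos(beta)
    beta = np.arccos(x)
    alpha1 = np.arcsin(np.sin(beta) / 2)           # from sin(beta) = 2 sin(alpha_1)
    return np.degrees(beta), np.degrees(alpha1)

b = 0.0922
print(F(-1, b), 1 + 2 * b)       # both ~1.184: the UUD boundary h_c2
print(F(1, b), 3 * (1 - 2 * b))  # both ~2.447: the saturation field h_s
print(v_phase_angles(1.8, b))    # spins close toward the field as h grows
```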
Eu2+ spins order antiferromagnetically at $T_{N}\\!\approx\\!40$ K, with the 120${\degree}$ structure of their zero-field ground state confirmed more recently [29]. The critical fields of all the transitions discussed above can be inferred directly from the $T\\!\rightarrow\\!0$ extrapolations of the associated anomalies in the resistivity data in Fig. 1(b), which is reproduced from Ref. [18]. Thus, the experimental value of the saturation field is $H_{s}^{\rm exp}\\!\approx\\!21.5$ T, while the Y-to-UUD and UUD-to-V transitions are at $H_{c1}^{\rm exp}\\!\approx\\!1.6$ T and $H_{c2}^{\rm exp}\\!\approx\\!9.0$ T, respectively, see also Table 1. Given Eq. (5) and Eq. (9), the experimental values of the three critical fields are sufficient to uniquely determine three parameters of the model, $J_{1}$, $B$, and $K$. In broad strokes, an overall energy scale dictated by $J_{1}$ sets an extent of the ordered phases that is determined from the saturation field $H_{s}$, while the width of the plateau between $H_{c1}$ and $H_{c2}$ and their relation to $H_{s}/3$ fixes $B$ and $K$. The results are listed in the first line of Table 1 where we have also used the Lande g-factor g=1.94 [18]. In agreement with the prior estimates [18] and general expectations, the biquadratic and ring-exchange terms are much smaller than the leading exchanges, yet they are essential for the existence of the unconventional UUD phase. Importantly, the ring-exchange is subleading to the biquadratic term with the ratio $B/K\\!\approx\\!3$. Provided our discussion above, the dominance of $B$ over $K$ is clear already from the fact that the UUD-to-V critical field $H_{c2}^{\rm exp}$ is substantially larger than $H_{s}^{\rm exp}/3$. It is, therefore, rather puzzling to find almost exactly opposite hierarchy of $B$ and $K$ in Refs. [18, 21, 22, 23], based on the same data for EuC6. The reason for the difference is the following. With the rest of the phenomenological constraints being the same, the UUD-to-V critical field in Refs. [18, 21] is chosen as $\widetilde{H}_{c2}^{\rm exp}\\!\approx\\!6.4$ T, which is less than $H_{s}^{\rm exp}/3$, hence implying the dominance of $K$ over $B$. The smaller critical field is inferred from a rather broad magnetization data, which, given the second-order nature of the UUD-to-V transition, is strongly affected by the finite-temperature effects, see also Ref. [42] on a different material highlighting the same effect. It is difficult for us to understand why the lower $\widetilde{H}_{c2}^{\rm exp}$ was insisted upon in the prior works, except for the premeditated importance of the ring-exchange terms. | $J_{1}$ | $J_{2}$ | $BS^{2}$ | $KS^{2}$ | $H_{c1}$ | $H_{c2}$ | $H_{s}$ ---|---|---|---|---|---|---|--- Exp. | 0.974 | -0.783 | 0.086 | 0.029 | 1.6 | 9.0 | 21.5 Model | 1.085 | -0.728 | 0.1 | 0 | 3.91 | 10.35 | 21.39 Table 1: Exchange parameters (K) and critical fields (T), $S\\!=\\!7/2$. Exp.: experimental values of the fields define parameters of the model as described in text. Model: chosen parameters of the model (1) with the resultant critical fields. The remaining parameter of the model (1) is the second-neighbor exchange $J_{2}$, which is necessary to reconcile the value of the ordering temperature, $T_{N}$, with that of the saturation field, as the two are not fully compatible for the model that contains only the nearest-neighbor exchanges. 
The remaining parameter of the model (1) is the second-neighbor exchange $J_{2}$, which is necessary to reconcile the value of the ordering temperature, $T_{N}$, with that of the saturation field, as the two are not fully compatible for a model that contains only the nearest-neighbor exchanges. Since the leading mechanism that provides the spin couplings in EuC6 is believed to be of the RKKY type [18], the $J_{2}$-term with $J_{2}\!<\!0$ is seen as natural. Another element of the consideration that is easy to justify is the use of the mean-field approximation for the ordering temperature, despite the quasi-2D character of EuC6 and the continuous symmetries of the model (1). The large spin value $S\!=\!7/2$, the aforementioned $XXZ$ anisotropy, and the presence of small interplane couplings [18] that are ignored in our model all give strong grounds for the use of the mean-field approach [43]

$T_{N}^{\rm MF}=-\frac{S(S+1)}{3k_{B}}\ \lambda_{\rm min}(\mathbf{Q}),$ (10)

where $\lambda_{\rm min}(\mathbf{Q})$ is the lowest eigenvalue of the Fourier transform of the exchange matrix in (1) at the ordering vector $\mathbf{Q}$. For the three-sublattice orders, $\mathbf{Q}\!=\!(\pm 4\pi/3,0)$ and $\lambda_{\rm min}(\mathbf{Q})$ can also be inferred from the classical energy in (3) as $\lambda_{\rm min}(\mathbf{Q})\!=\!2E_{cl}^{120\degree}/NS^{2}$ to yield

$T_{N}^{\rm MF}\approx S(S+1)\big(J_{1}-2J_{2}\big),$ (11)

where the contributions of small interplane couplings are ignored and we have also dropped the even smaller and nearly canceling contributions from the $B$ and $K$ terms. Since $J_{1}$ is already determined from the critical fields, $T_{N}^{exp}\!=\!T_{N}^{\rm MF}$ in Eq. (11) gives $J_{2}$ in the first line of Table 1.

Having established the secondary role of the ring-exchange term in the EuC6 phenomenology, we are going to ignore such a term completely in the model consideration of the scattering of electrons by spin excitations presented next. This step is motivated both by the strong similarity of the effects provided by the ring exchange to those of the biquadratic terms and by the considerable cumbersomeness of the spin-wave treatment of the ring exchange on the triangular lattice, see Refs. [44, 45]. With the number of model parameters reduced, there are more phenomenological constraints than there are parameters. Fixing one of $H_{c2}$ or $H_{c1}$ to its experimental value either narrows or widens the extent of the plateau by about $4$ T compared to the data, with $BS^{2}$ being 0.06 K and 0.17 K, respectively. Instead, we fix $BS^{2}$ to an intermediate value of 0.1 K, which leads to an only slightly narrower plateau and somewhat higher critical fields than in experiment, $H^{th}_{c1}\!\approx\!3.9$ T and $H^{th}_{c2}\!\approx\!10.4$ T, see Table 1 for the full set of the model parameters. This is the set of parameters that will be used henceforth in all calculations of the magnetoresistivity. It corresponds to the dimensionless parameters $j_{2}\!=\!J_{2}/J_{1}\!=\!-0.671$ and $b\!=\!BS^{2}/J_{1}\!=\!0.0922$. For the representative pictures of the spin-wave spectra shown in Sec. III.5 below, we choose a close set of $j_{2}\!=\!-0.8$ and $b\!=\!0.1$.

## III Spin excitations

In this Section, a general spin-wave approach is formulated for all coplanar three-sublattice states in Fig. 4. In Appendix B, we provide a consideration of the FM, 120${\degree}$, and UUD states, for which a simplified approach is possible, allowing us to obtain fully analytical results. We would like to note that the biquadratic exchange has been widely employed to emulate quantum effects in a variety of spin models, including the Heisenberg and $XXZ$ triangular-lattice models, to stabilize their plateau state.
However, we are not aware of a spin-wave theory consideration of the model (1) in the literature, with the exception of the early work [20], which provided a consideration of the zone-center, ${\bf k}\\!=\\!0$, modes. The possibility of a consistent spin-wave expansion for an arbitrary coplanar three-sublattice structure presented next was motivated, in part, by a general formalism in Ref. [46].

### III.1 General case of a coplanar state

For a spin-wave expansion, the laboratory reference frame $\\{x_{0},y_{0},z_{0}\\}$ in Fig. 3 and Fig. 4 needs to be rotated to the local reference frame $\\{x,y,z\\}$ on each site, so that the $z$ axis is along the direction dictated by the classical spin configuration obtained in Sec. II.2.1. For the coplanar states in Fig. 4, such a transformation is a simple rotation in the $x_{0}$–$z_{0}$ plane, such that $S_{\alpha}^{y_{0}}\\!=\\!S_{\alpha}^{y}$ and $\displaystyle S^{x_{0}}_{\alpha}=S^{x}_{\alpha}\cos\widetilde{\alpha}-S^{z}_{\alpha}\sin\widetilde{\alpha},$ (12) $\displaystyle S^{z_{0}}_{\alpha}=S^{z}_{\alpha}\cos\widetilde{\alpha}+S^{x}_{\alpha}\sin\widetilde{\alpha},$ where $\alpha$ labels the sublattices and $\widetilde{\alpha}$ are the corresponding spin angles in Fig. 4(b).

### III.2 $1/S$-expansion

Consider the $1/S$-expansion of each individual term in the model (1) separately. For the nearest-neighbor $J_{1}$ term, it is convenient to rewrite it first as $\displaystyle\mathcal{H}_{J_{1}}=J_{1}\sum_{\langle ij\rangle_{1}}\left(\hat{h}_{ij}^{(e)}+\hat{h}_{ij}^{(o)}\right),$ (13) where the “even” (e) and “odd” (o) parts $\displaystyle\hat{h}_{ij}^{(e)}$ $\displaystyle=S_{i}^{y}S_{j}^{y}+\cos\widetilde{\alpha}_{ij}\left(S_{i}^{x}S_{j}^{x}+S_{i}^{z}S_{j}^{z}\right),$ (14) $\displaystyle\hat{h}_{ij}^{(o)}$ $\displaystyle=\sin\widetilde{\alpha}_{ij}\left(S_{i}^{z}S_{j}^{x}-S_{i}^{x}S_{j}^{z}\right),$ are separated according to whether they subsequently contribute even or odd powers of the bosonic operators to the $1/S$-expansion; here $\widetilde{\alpha}_{ij}\\!=\\!\widetilde{\alpha}_{i}-\widetilde{\alpha}_{j}$ are the angles between neighboring spins. In the lowest orders, the even part yields a contribution to the classical energy and to the harmonic, ${\cal O}(S)$, linear spin-wave theory (LSWT) order of the expansion $\displaystyle\hat{h}_{ij}^{(e)}$ $\displaystyle\Rightarrow S^{2}\cos\widetilde{\alpha}_{ij}+\hat{h}_{ij,LSWT}^{(e)}\,,$ (15) while the odd part in (14) gives the linear order, ${\cal O}(S^{3/2})$, which must vanish upon summation in (13) for the classical energy minimum, followed by the higher-order, ${\cal O}(S^{1/2})$, anharmonic interactions that can be neglected for large spin values.
However, for the biquadratic term of the model (1) $\mathcal{H}_{B}=-B\sum_{\langle ij\rangle_{1}}\big{(}\mathbf{S}_{i}\cdot\mathbf{S}_{j}\big{)}^{2}=-B\sum_{\langle ij\rangle_{1}}\left(\hat{h}_{ij}^{(e)}+\hat{h}_{ij}^{(o)}\right)^{2},$ (16) both even and odd parts play a role in its LSWT order $\left(\mathbf{S}_{i}\cdot\mathbf{S}_{j}\right)^{2}\Rightarrow 2S^{2}\cos\widetilde{\alpha}_{ij}\,\hat{h}_{ij,LSWT}^{(e)}+\Big{(}\hat{h}_{ij}^{(o)}\Big{)}^{2}_{LSWT}\,,\ \ $ (17) with their contributions, obtained from the standard Holstein-Primakoff bosonization of spins in the rotated reference frame, $S_{i}^{z}\\!=\\!S-a^{\dagger}_{i}a^{\phantom{{\dagger}}}_{i}$ and $S_{i}^{-}\\!=\\!a_{i}^{\dagger}\sqrt{2S}$ (so that, to the needed order, $S_{i}^{x}\\!\approx\\!\sqrt{S/2}\,\big{(}a^{\phantom{{\dagger}}}_{i}+a^{\dagger}_{i}\big{)}$ and $S_{i}^{y}\\!\approx\\!i\sqrt{S/2}\,\big{(}a^{\dagger}_{i}-a^{\phantom{{\dagger}}}_{i}\big{)}$), are $\displaystyle\hat{h}_{ij,LSWT}^{(e)}=-\frac{S}{2}\Big{[}\big{(}a_{i}^{\dagger}-a_{i}\big{)}\big{(}a_{j}^{\dagger}-a_{j}\big{)}$ (18) $\displaystyle\quad\quad+\cos\widetilde{\alpha}_{ij}\left(2\big{(}a_{i}^{\dagger}a_{i}+a^{\dagger}_{j}a_{j}\big{)}-\big{(}a_{i}^{\dagger}+a_{i}\big{)}\big{(}a_{j}^{\dagger}+a_{j}\big{)}\right)\Big{]},$ $\displaystyle\Big{(}\hat{h}_{ij}^{(o)}\Big{)}^{2}_{LSWT}=\frac{S^{3}}{2}\sin^{2}\widetilde{\alpha}_{ij}\left(a_{i}^{\dagger}+a_{i}-a_{j}^{\dagger}-a_{j}\right)^{2}.$

Of the remaining terms in the model (1), the Zeeman term is particularly simple, and so is the next-nearest-neighbor $J_{2}$ term, as the former involves only the local energy of bosons while the latter connects spins that belong to the same sublattice, giving, in the LSWT order, $\displaystyle\mathcal{H}_{H}=$ $\displaystyle g\mu_{B}H\sum_{i}\cos\widetilde{\alpha}_{i}\,a_{i}^{\dagger}a_{i}\,,$ (19) $\displaystyle\mathcal{H}_{J_{2}}=$ $\displaystyle-J_{2}S\sum_{\langle ij\rangle_{2}}\left(a_{i}^{\dagger}a_{i}+a^{\dagger}_{j}a_{j}-\big{(}a_{i}^{\dagger}a_{j}+a^{\dagger}_{j}a_{i}\big{)}\right).$ (20)

We point out, as a side remark, that it is relatively straightforward to modify the model (1) and the resultant LSWT Hamiltonian to include the effects of the easy-plane anisotropy that is present in EuC6. However, we did not find significant qualitative changes in the results for some of the key phases studied in this work. Given the extra cumbersomeness this anisotropy would introduce in the LSWT matrix below, we leave a detailed study of such an extension to future work.

### III.3 LSWT Hamiltonian

The LSWT order of the model (1), explicated in Eqs. (13)–(20), is obtained for a general coplanar state. To make further progress, one needs to specify the spin arrangement of the classical ground state. In our case, all such states of interest can be represented as the three-sublattice states highlighted in Fig. 4. Thus, a general approach to all of them can be pursued [46]. The first step is to switch from the site notation $i$ to that of the unit cells of the three-sublattice structure, $\ell$, and the sublattice index $\alpha$: $i\\!\rightarrow\\!\\{\alpha,\ell\\}$. As a result, the Holstein-Primakoff boson operators are split into three species $a^{({\dagger})}_{\alpha,\ell}\\!=\\!\\{a_{\ell}^{({\dagger})},b_{\ell}^{({\dagger})},c_{\ell}^{({\dagger})}\\}$ corresponding to the $\alpha\\!=\\!\\{{\rm A},{\rm B},{\rm C}\\}$ sublattices. The Fourier transformation for them is $a^{\phantom{{\dagger}}}_{\alpha,\ell}=\frac{1}{\sqrt{N_{c}}}\sum_{\bf q}\,a^{\phantom{{\dagger}}}_{\alpha,\bf q}\,e^{-i{\bf q}{\bf r}_{\alpha,\ell}},$ (21) where ${\bf r}_{\alpha,\ell}\\!=\\!{\bf R}_{\ell}\\!+\\!\bm{\rho}_{\alpha}$ and $N_{c}\\!=\\!N/3$ is the number of unit cells.
The sublattice coordinates within the unit cell can be chosen as $\bm{\rho}_{A}\\!=\\!0$, $\bm{\rho}_{B}\\!=\\!-\bm{\delta}_{2}$, and $\bm{\rho}_{C}\\!=\\!\bm{\delta}_{3}$, see Fig. 3. After some algebra, using these boson species and their Fourier transforms, the LSWT Hamiltonian for an arbitrary coplanar three-sublattice state reads as $\displaystyle\hat{\cal H}^{(2)}=\frac{3J_{1}S}{2}\sum_{\bf q}\hat{\bf x}_{\bf q}^{\dagger}\hat{\bf H}_{\bf q}\hat{\bf x}_{\bf q}^{\phantom{\dagger}}\,,$ (22) where $\hat{\bf x}^{\dagger}_{\bf q}\\!=\\!\left(a^{\dagger}_{\bf q},b^{\dagger}_{\bf q},c^{\dagger}_{\bf q},a^{\phantom{{\dagger}}}_{-\bf q},b^{\phantom{{\dagger}}}_{-\bf q},c^{\phantom{{\dagger}}}_{-\bf q}\right)$ and $\hat{\bf H}_{\bf q}$ is a matrix $\displaystyle\hat{\bf H}_{\bf q}=\left(\begin{array}[]{cc}\hat{\bf A}^{\phantom{\dagger}}_{\bf q}&\hat{\bf B}^{\phantom{\dagger}}_{\bf q}\\\\[2.15277pt] \hat{\bf B}^{\dagger}_{\bf q}&\hat{\bf A}^{*}_{-\bf q}\end{array}\right),$ (25) with the $3\times 3$ matrices $\hat{\bf A}^{\phantom{{\dagger}}}_{\bf q}$ and $\hat{\bf B}^{\phantom{{\dagger}}}_{\bf q}$ $\displaystyle\hat{\bf A}^{\phantom{{\dagger}}}_{\bf q}=\left(\begin{array}[]{ccc}A_{\bf q}&D_{\bf q}&E^{*}_{\bf q}\\\ D^{*}_{\bf q}&B_{\bf q}&F_{\bf q}\\\ E_{\bf q}&F^{*}_{\bf q}&C_{\bf q}\end{array}\right),\ \ \ \hat{\bf B}^{\phantom{{\dagger}}}_{\bf q}=\left(\begin{array}[]{ccc}G&J_{\bf q}&K^{*}_{\bf q}\\\ J^{*}_{\bf q}&H&L_{\bf q}\\\ K_{\bf q}&L^{*}_{\bf q}&I\end{array}\right).\ \ \ \ \ \ $ (32) The elements of the $\hat{\bf A}^{\phantom{{\dagger}}}_{\bf q}$ matrix are given by $\displaystyle A_{\bf q}=$ $\displaystyle\,h\cos\beta-\cos\widetilde{\alpha}_{AB}-\cos\widetilde{\alpha}_{AC}-2j_{2}\big{(}1-\gamma^{(2)}_{\bf q}\big{)}$ $\displaystyle+b\big{(}1+3\left(\cos 2\widetilde{\alpha}_{AB}+\cos 2\widetilde{\alpha}_{AC}\right)/2\big{)},$ $\displaystyle B_{\bf q}=$ $\displaystyle\,h\cos\alpha_{1}-\cos\widetilde{\alpha}_{AB}-\cos\widetilde{\alpha}_{BC}-2j_{2}\big{(}1-\gamma^{(2)}_{\bf q}\big{)}$ $\displaystyle+b\big{(}1+3\left(\cos 2\widetilde{\alpha}_{AB}+\cos 2\widetilde{\alpha}_{BC}\right)/2\big{)},$ $\displaystyle C_{\bf q}=$ $\displaystyle\,h\cos\alpha_{2}-\cos\widetilde{\alpha}_{BC}-\cos\widetilde{\alpha}_{AC}-2j_{2}\big{(}1-\gamma^{(2)}_{\bf q}\big{)}$ $\displaystyle+b\big{(}1+3\left(\cos 2\widetilde{\alpha}_{BC}+\cos 2\widetilde{\alpha}_{AC}\right)/2\big{)},$ (33) $\displaystyle D_{\bf q}=$ $\displaystyle\,\gamma_{\bf q}\big{(}1+2b(1-2\cos\widetilde{\alpha}_{AB})\big{)}\cos^{2}(\widetilde{\alpha}_{AB}/2),$ $\displaystyle E_{\bf q}=$ $\displaystyle\,\gamma_{\bf q}\big{(}1+2b(1-2\cos\widetilde{\alpha}_{AC})\big{)}\cos^{2}(\widetilde{\alpha}_{AC}/2),$ $\displaystyle F_{\bf q}=$ $\displaystyle\,\gamma_{\bf q}\big{(}1+2b(1-2\cos\widetilde{\alpha}_{BC})\big{)}\cos^{2}(\widetilde{\alpha}_{BC}/2),$ and of the $\hat{\bf B}^{\phantom{{\dagger}}}_{\bf q}$ matrix, respectively, $\displaystyle G=$ $\displaystyle-b\big{(}\sin^{2}\widetilde{\alpha}_{AB}+\sin^{2}\widetilde{\alpha}_{AC}\big{)},$ $\displaystyle H=$ $\displaystyle-b\big{(}\sin^{2}\widetilde{\alpha}_{AB}+\sin^{2}\widetilde{\alpha}_{BC}\big{)},$ $\displaystyle I=$ $\displaystyle-b\big{(}\sin^{2}\widetilde{\alpha}_{BC}+\sin^{2}\widetilde{\alpha}_{AC}\big{)},$ (34) $\displaystyle J_{\bf q}=$ $\displaystyle-\gamma_{\bf q}\big{(}1-2b(1+2\cos\widetilde{\alpha}_{AB})\big{)}\sin^{2}(\widetilde{\alpha}_{AB}/2),$ $\displaystyle K_{\bf q}=$ $\displaystyle-\gamma_{\bf q}\big{(}1-2b(1+2\cos\widetilde{\alpha}_{AC})\big{)}\sin^{2}(\widetilde{\alpha}_{AC}/2),$ 
$\displaystyle L_{\bf q}=$ $\displaystyle-\gamma_{\bf q}\big{(}1-2b(1+2\cos\widetilde{\alpha}_{BC})\big{)}\sin^{2}(\widetilde{\alpha}_{BC}/2),$ where $h\\!=\\!g\mu_{B}H/3J_{1}S$, $j_{2}\\!=\\!J_{2}/J_{1}$, and $b\\!=\\!BS^{2}/J_{1}$ as before, $\widetilde{\alpha}_{AB}\\!=\\!\beta-\alpha_{1}$, $\widetilde{\alpha}_{AC}\\!=\\!\beta-\alpha_{2}$, $\widetilde{\alpha}_{BC}\\!=\\!\alpha_{1}-\alpha_{2}$, and $\displaystyle\gamma_{\bf q}=\frac{1}{3}\sum_{\alpha}e^{i{\bf q}\bm{\delta}_{\alpha}},\ \ \ \gamma^{(2)}_{\bf q}=\frac{1}{3}\sum_{\alpha}\cos{\bf q}\bm{a}_{\alpha}\,,$ (35) with the first- and second-neighbor translation vectors $\bm{\delta}_{1}\\!=\\!(1,0)a$, $\bm{\delta}_{2}\\!=\\!(-1,\sqrt{3})a/2$, $\bm{\delta}_{3}\\!=\\!-(1,\sqrt{3})a/2$, and $\bm{a}_{1}\\!=\\!(3,-\sqrt{3})a/2$, $\bm{a}_{3}\\!=\\!(0,\sqrt{3})a$, $\bm{a}_{2}\\!=\\!-(3,\sqrt{3})a/2$, respectively, see Fig. 3; $a$ is the lattice constant.

### III.4 Diagonalization

The eigenvalues of $\hat{\bf g}\hat{\bf H}_{\bf q}$ in (25) give the magnon eigenenergies $\\{\omega_{1{\bf q}},\omega_{2{\bf q}},\omega_{3{\bf q}},-\omega_{1-{\bf q}},-\omega_{2-{\bf q}},-\omega_{3-{\bf q}}\\}$ (in units of $3J_{1}S$). Here $\hat{\bf g}$ is the diagonal matrix $[1,1,1,-1,-1,-1]$, see Ref. [47]. While the magnon energies are crucial for our consideration of the spin-wave scattering of electrons that follows, an essential role is also played by the matrix elements, which are related to the $U$ and $V$ parameters of the generalized Bogolyubov transformation from the Holstein-Primakoff bosons to the ones of the quasiparticle eigenmodes $\displaystyle a^{\phantom{{\dagger}}}_{\alpha,{\bf q}}=\sum_{\gamma}\left(U_{\alpha,{\bf q}}^{({\gamma})}\,A^{\phantom{{\dagger}}}_{\gamma,{\bf q}}+V_{\alpha,{\bf q}}^{({\gamma})}\,A^{{\dagger}}_{\gamma,-{\bf q}}\right),$ (36) with the quasiparticle operators $A_{\gamma,{\bf q}}\\!=\\!\left\\{A_{\bf q},B_{\bf q},C_{\bf q}\right\\}$ and $\displaystyle\sum_{\gamma}\left(\big{|}U_{\alpha,{\bf q}}^{({\gamma})}\big{|}^{2}-\big{|}V_{\alpha,{\bf q}}^{({\gamma})}\big{|}^{2}\right)=1\,.$ (37) The transformation (36) can be written in matrix form $\displaystyle\hat{\bf x}_{\bf q}\\!=\\!\left(\\!\begin{array}[]{c}\hat{\bf a}^{\phantom{\dagger}}_{\bf q}\\\\[2.15277pt] \hat{\bf a}^{\dagger}_{-\bf q}\end{array}\\!\right)\\!=\\!\left(\\!\begin{array}[]{cc}\hat{\bf U}_{\bf q}&\hat{\bf V}_{\bf q}\\\\[2.15277pt] \hat{\bf V}^{*}_{-\bf q}&\hat{\bf U}^{*}_{-\bf q}\end{array}\\!\right)\\!\left(\\!\begin{array}[]{c}\hat{\bf\cal A}^{\phantom{\dagger}}_{\bf q}\\\\[2.15277pt] \hat{\bf\cal A}^{\dagger}_{-\bf q}\end{array}\\!\right)\\!=\hat{\bf S}_{\bf q}\cdot\hat{\bf z}_{\bf q},\ \ \ $ (44) where the vectors $\hat{\bf a}_{\bf q}\\!=\\!\big{[}a_{\bf q},b_{\bf q},c_{\bf q}\big{]}^{T}$, $\hat{\bf a}^{\dagger}_{-\bf q}\\!=\\!\big{[}a^{\dagger}_{-\bf q},b^{\dagger}_{-\bf q},c^{\dagger}_{-\bf q}\big{]}^{T}$ and $\hat{\bf\cal A}_{\bf q}\\!=\\!\big{[}A_{\bf q},B_{\bf q},C_{\bf q}\big{]}^{T}$, $\hat{\bf\cal A}^{\dagger}_{-\bf q}\\!=\\!\big{[}A^{\dagger}_{-\bf q},B^{\dagger}_{-\bf q},C^{\dagger}_{-\bf q}\big{]}^{T}$ are introduced. It follows that the transformation matrix $\hat{\bf S}_{\bf q}$ diagonalizes $\hat{\bf g}\hat{\bf H}_{\bf q}$ in Eq. (25), see Refs. [47, 48]. Thus, the $U_{\alpha}^{({\gamma})}$ and $V_{\alpha}^{({\gamma})}$ parameters can be extracted as the elements of the properly normalized eigenvectors of $\hat{\bf g}\hat{\bf H}_{\bf q}$ from a diagonalization procedure.
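As an illustration of this procedure, below is a minimal NumPy sketch (ours; the implementation used for the figures in this work is a separate MATHEMATICA code, as noted next). It builds $\hat{\bf H}_{\bf q}$ of Eq. (25) from Eqs. (32)–(35) for given sublattice angles $(\beta,\alpha_{1},\alpha_{2})$ and extracts the magnon energies as the positive eigenvalues of $\hat{\bf g}\hat{\bf H}_{\bf q}$; the zero-field 120${\degree}$ state, whose three folded branches are all gapless at the $\Gamma$ point, serves as a sanity check.

```python
import numpy as np

# First- and second-neighbor vectors of Eq. (35), in units of a (see Fig. 3)
D1 = [np.array([1.0, 0.0]), np.array([-0.5, np.sqrt(3)/2]),
      np.array([-0.5, -np.sqrt(3)/2])]
D2 = [np.array([1.5, -np.sqrt(3)/2]), np.array([-1.5, -np.sqrt(3)/2]),
      np.array([0.0, np.sqrt(3)])]

def build_Hq(q, angles, h, j2, b):
    """6x6 LSWT matrix of Eq. (25) with the blocks of Eqs. (32)-(34);
    angles = (beta, alpha1, alpha2) of sublattices A, B, C, Fig. 4(b)."""
    beta, a1, a2 = angles
    tAB, tAC, tBC = beta - a1, beta - a2, a1 - a2
    def blocks(k):
        g1 = np.mean([np.exp(1j*np.dot(k, d)) for d in D1])  # gamma_q
        g2 = np.mean([np.cos(np.dot(k, d)) for d in D2])     # gamma^(2)_q
        dg = lambda ca, t1, t2: (h*ca - np.cos(t1) - np.cos(t2)
             - 2*j2*(1 - g2) + b*(1 + 1.5*(np.cos(2*t1) + np.cos(2*t2))))
        oA = lambda t: g1*(1 + 2*b*(1 - 2*np.cos(t)))*np.cos(t/2)**2
        oB = lambda t: -g1*(1 - 2*b*(1 + 2*np.cos(t)))*np.sin(t/2)**2
        A = np.array([[dg(np.cos(beta), tAB, tAC), oA(tAB), np.conj(oA(tAC))],
                      [np.conj(oA(tAB)), dg(np.cos(a1), tAB, tBC), oA(tBC)],
                      [oA(tAC), np.conj(oA(tBC)), dg(np.cos(a2), tBC, tAC)]])
        G = -b*(np.sin(tAB)**2 + np.sin(tAC)**2)   # G, H, I of Eq. (34)
        Hh = -b*(np.sin(tAB)**2 + np.sin(tBC)**2)
        I = -b*(np.sin(tBC)**2 + np.sin(tAC)**2)
        B = np.array([[G, oB(tAB), np.conj(oB(tAC))],
                      [np.conj(oB(tAB)), Hh, oB(tBC)],
                      [oB(tAC), np.conj(oB(tBC)), I]])
        return A, B
    Aq, Bq = blocks(q)
    Amq, _ = blocks(-q)
    return np.block([[Aq, Bq], [Bq.conj().T, Amq.conj()]])

GMAT = np.diag([1.0, 1, 1, -1, -1, -1])

def lswt_energies(q, angles, h, j2, b):
    """Three magnon branches (units of 3*J1*S): the positive eigenvalues
    of g*H_q for a stable phase."""
    w = np.linalg.eigvals(GMAT @ build_Hq(q, angles, h, j2, b)).real
    return np.sort(w)[3:]

# Sanity check: the zero-field 120-degree state has three (folded) Goldstone
# modes at the Gamma point of the magnetic BZ, for any j2 and b
print(lswt_energies(np.array([1e-3, 0.0]), (0.0, 2*np.pi/3, -2*np.pi/3),
                    h=0.0, j2=-0.8, b=0.1))   # -> three values ~ 0
```

The `build_Hq` helper and `GMAT` are reused in the sketches below.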
With all components of the $\hat{\bf A}_{\bf q}$ and $\hat{\bf B}_{\bf q}$ matrices (32) given explicitly in (33) and (34), the $6\\!\times\\!6$ LSWT Hamiltonian (25) has to be diagonalized numerically. We have implemented such a procedure using MATHEMATICA. In Sec. III.5, we provide plots of the magnon energies (Fig. 6) along a representative cut through the Brillouin zone (BZ) shown in Fig. 5, for representative field values from all the phases in Fig. 4(a).

Figure 5: Full BZ of the triangular lattice (outer hexagon) and magnetic BZ of the three-sublattice structures (inner hexagon), high-symmetry points in units of inverse lattice spacing $1/a$, and the direction of a representative K$\Gamma$MK cut.

We also point out that although the approach to the multi-flavor boson problem discussed here is very general, there are significant simplifications in our case owing to the high symmetry of the model (1) and the lattice. Specifically, $\hat{\bf A}^{*}_{-\bf q}\\!=\\!\hat{\bf A}_{\bf q}$ and $\hat{\bf B}^{\dagger}_{\bf q}\\!=\\!\hat{\bf B}_{\bf q}$ in (25), as their off-diagonal matrix elements (33), (34) are simply proportional to the complex hopping amplitude $\gamma^{*}_{-\bf q}\\!=\\!\gamma_{\bf q}$ (35). As a result, all eigenenergies of $\hat{\bf g}\hat{\bf H}_{\bf q}$ are reciprocal, $\omega_{\gamma-{\bf q}}\\!=\\!\omega_{\gamma{\bf q}}$, and $\\{\hat{\bf V}^{*}_{-\bf q},\hat{\bf U}^{*}_{-\bf q}\\}\\!=\\!\\{\hat{\bf V}_{\bf q},\hat{\bf U}_{\bf q}\\}$ in Eq. (44).

### III.5 Magnon eigenenergies

Figure 6: Magnon energies in units of $3SJ_{1}$ for several representative field values from the phases sketched in Fig. 4, $j_{2}\\!=\\!J_{2}/J_{1}\\!=\\!-0.8$ and $b\\!=\\!BS^{2}/J_{1}\\!=\\!0.1$. (a) 120$\degree$-phase, $h\\!=\\!0$, (b) Y phase, $h\\!=\\!0.3$, (c) UUD phase, $h\\!=\\!h_{c1}\\!=\\!0.4$, $h\\!=\\!0.8$, and $h\\!=\\!h_{c2}\\!=\\!1.2$, (d) V phase, $h\\!=\\!2.0$, (e) FM phase $h\\!=\\!h_{s}\\!=\\!2.4$, $h\\!=\\!h_{s}+1$, and $h\\!=\\!h_{s}+2$. Transitions are at $h_{c1}\\!=\\!1-6b$, $h_{c2}\\!=\\!1+2b$, and $h_{s}\\!=\\!3(1-2b)$. The 1D phase diagram vs field with representative field values is sketched in (f).

In Fig. 6 we provide plots of the magnon eigenenergies for several representative field values from all of the phases sketched in Fig. 4(a) and for the parameters of the model (1) $j_{2}\\!=\\!J_{2}/J_{1}\\!=\\!-0.8$ and $b\\!=\\!BS^{2}/J_{1}\\!=\\!0.1$. Energies are in units of $3SJ_{1}$ and the dimensionless field is $h\\!=\\!g\mu_{B}H/3J_{1}S$. All plots are along a representative cut K$\Gamma$MK through the full Brillouin zone shown in Fig. 5.

In zero field, $h\\!=\\!0$, the magnetic order is the canonical 120$\degree$ phase, whose $O(3)$ symmetry is spontaneously broken. Since the symmetry is broken fully by the choice of the ordering plane and by the spin arrangement within the plane, there are three Goldstone modes, which one can observe in Fig. 6(a). As we discuss in some more detail in Appendix B for the 120$\degree$ case, the three magnon branches that are defined within the magnetic BZ can be related to a single branch defined within the full BZ using a “rotated” reference frame for the spin quantization axes. This allows one to represent the full spectrum as the “original” branch, labeled by $\omega_{\bf k}$ in Fig. 6(a), and two modes that are “shifted” by the ordering vector $\pm{\bf Q}\\!=\\!(4\pi/3,0)$.

For a finite in-plane field, the symmetry of the model (1) is lowered to $U(1)$. Spontaneous breaking of the $U(1)$ symmetry within the Y phase in Fig.
4(a) at $h\\!<\\!h_{c1}$ results in a single magnon branch with a Goldstone mode and two gapped branches, as is shown in Fig. 6(b) for $h\\!=\\!0.3$. A characteristic feature of the gapless branch is an upward curvature of the dispersion in the long-wavelength limit, $\omega_{\bf k}\\!\approx\\!c|{\bf k}|+r|{\bf k}|^{3}$ with $r\\!>\\!0$.

The UUD phase in Fig. 4(a) is sandwiched between two critical fields, $h_{c1}\\!=\\!1-6b\\!=\\!0.4$ and $h_{c2}\\!=\\!1+2b\\!=\\!1.2$. Since the $U(1)$ symmetry is preserved throughout this phase, the spectrum is, generally, gapped except at the transition points, see Fig. 6(c), which shows the magnon spectra at both $h_{c1}$ and $h_{c2}$ and at the intermediate $h\\!=\\!0.8$. The partially polarized, $U(1)$-preserving UUD state is, in a way, similar to the fully polarized FM phase, with the spectra of the latter for fields at and above the saturation field $h_{s}\\!=\\!3(1-2b)$ shown in Fig. 6(e). Because of the continuous $U(1)$ symmetry of the model (1), the magnetic field couples to a conserved total magnetization in both the UUD and FM cases, which leads to the linear dependence of the magnon energies in Fig. 6(c) and (e) on the field. This also makes the transitions at $h_{s}$ and $h_{c1(2)}$ analogous to Bose-Einstein condensation (BEC) [49, 50]. We note that the absolute minima of $\omega_{\bf k}$ and the corresponding BEC condensation points in the FM case are at the ordering vectors of the three-sublattice order, $\pm{\bf Q}\\!=\\!\pm$K, not at the $\Gamma$ point. To emphasize this feature of the FM phase, the magnon energies in Fig. 6(e) are shown without folding onto the magnetic BZ, see also Appendix B.1.

The last phase of the model (1) with a spontaneously broken $U(1)$ symmetry, realized at $h_{c2}\\!<\\!h\\!<\\!h_{s}$, is the V phase, see Fig. 4(a). Its spectrum is similar to that of the Y phase, having one concave Goldstone mode and two gapped modes, see Fig. 6(d). It can be seen as interpolating between the spectrum of the UUD phase at $h_{c2}$ and that of the FM phase at the saturation field.

## IV Kondo coupling and resistivity

In this Section, we derive the electron-magnon interaction Hamiltonian, originating from the Kondo coupling, for the general case of a coplanar spin arrangement and present the expression for the electronic transport relaxation rate due to such a scattering mechanism.

### IV.1 Kondo coupling

The most reasonable minimal model for the interaction of conduction electrons with the local spins in the magnetic layers of EuC6 is the Kondo coupling $\displaystyle{\cal H}_{int}=J_{K}\sum_{i}{\bf s}_{i}\cdot{\bf S}_{i},$ (45) where the electron spin operators are $s^{a}_{i}\\!=\\!\frac{1}{2}f^{\dagger}_{i,\alpha}\hat{\sigma}^{a}_{\alpha\beta}f^{\phantom{{\dagger}}}_{i,\beta}$ with $\hat{\bm{\sigma}}$ being the vector of Pauli matrices.
With the external field providing the spin quantization axis, it is natural to split (45) into spin-flip and non-spin-flip parts, ${\cal H}_{int}\\!=\\!{\cal H}_{int}^{+-}+{\cal H}_{int}^{zz}$, $\displaystyle{\cal H}_{int}^{+-}=$ $\displaystyle\,\frac{J_{K}}{2}\sum_{i}\big{(}f^{\dagger}_{i,\uparrow}f^{\phantom{{\dagger}}}_{i,\downarrow}S_{i}^{-_{0}}+f^{\dagger}_{i,\downarrow}f^{\phantom{{\dagger}}}_{i,\uparrow}S_{i}^{+_{0}}\big{)},$ (46) $\displaystyle{\cal H}_{int}^{zz}=$ $\displaystyle\,\frac{J_{K}}{2}\sum_{i}\big{(}f^{\dagger}_{i,\uparrow}f^{\phantom{{\dagger}}}_{i,\uparrow}-f^{\dagger}_{i,\downarrow}f^{\phantom{{\dagger}}}_{i,\downarrow}\big{)}S_{i}^{z_{0}},$ where $f^{({\dagger})}_{i,\uparrow(\downarrow)}$ are operators of the conduction electrons and the $\\{-_{0},+_{0},z_{0}\\}$ indices in the operators of the local spins refer to the “laboratory” reference frame $\\{x_{0},y_{0},z_{0}\\}$ associated with the field direction, see Fig. 3 and Fig. 4.

For a general coplanar spin configuration in Fig. 4, the local axes are rotated from the laboratory ones (12) to introduce quantized spin excitations for a given spin arrangement. Consider the non-spin-flip part. Here, according to (12), $S^{z_{0}}_{i}\\!=\\!S^{z}_{i}\cos\widetilde{\alpha}+S^{x}_{i}\sin\widetilde{\alpha}$, with $\widetilde{\alpha}$ being the angle between the spin's local $z$ axis on site $i$ and the $z_{0}$ axis. Upon quantization, $S^{z}_{i}$ converts into a two-magnon term, while $S^{x}_{i}$ yields one-magnon emission/absorption. Similarly to the problem of electron-phonon scattering, it is the lowest-order coupling that needs to be considered, unless it is forbidden for a symmetry reason or its scattering kinematics is suppressed. In our case, there are no such constraints, and the $S^{z}$ part is also of higher order in the $1/S$ sense.

Therefore, we approximate the local spin operators in (46) by their single-magnon components $\displaystyle S_{i}^{+_{0}}\approx$ $\displaystyle\,2\,\sqrt{\frac{S}{2}}\,\Big{(}\cos^{2}\left(\widetilde{\alpha}/2\right)\,a^{\phantom{{\dagger}}}_{i}-\sin^{2}\left(\widetilde{\alpha}/2\right)\,a^{{\dagger}}_{i}\Big{)},$ $\displaystyle S_{i}^{z_{0}}\approx$ $\displaystyle\,\sqrt{\frac{S}{2}}\,\sin\widetilde{\alpha}\,\Big{(}a^{\phantom{{\dagger}}}_{i}+a^{{\dagger}}_{i}\Big{)},$ (47) where the angle $\widetilde{\alpha}$ depends on the sublattice.
Using Fourier transform (21) in (46) together with (47), one arrives at $\displaystyle{\cal H}_{int}^{+-}=$ $\displaystyle\frac{2\widetilde{J}_{K}}{\sqrt{3N}}\sum_{{\bf k},{\bf q}}\Big{[}f^{\dagger}_{{\bf k}-{\bf q}\uparrow}f^{\phantom{{\dagger}}}_{{\bf k}\downarrow}\sum_{\alpha}\Big{(}\cos^{2}\left(\widetilde{\alpha}/2\right)a^{{\dagger}}_{\alpha,{\bf q}}$ $\displaystyle\phantom{\frac{2\widetilde{J}_{K}}{\sqrt{3N}}\sum_{{\bf k},{\bf q}}\Big{[}f^{\dagger}_{{\bf k}-{\bf q}\uparrow}f}-\sin^{2}\left(\widetilde{\alpha}/2\right)a^{\phantom{{\dagger}}}_{\alpha,-{\bf q}}\Big{)}+{\rm H.c.}\Big{]},$ $\displaystyle{\cal H}_{int}^{zz}=$ $\displaystyle\,\frac{\widetilde{J}_{K}}{\sqrt{3N}}\sum_{{\bf k},{\bf q}}\Big{[}f^{\dagger}_{{\bf k}-{\bf q}\uparrow}f^{\phantom{{\dagger}}}_{{\bf k}\uparrow}-f^{\dagger}_{{\bf k}-{\bf q}\downarrow}f^{\phantom{{\dagger}}}_{{\bf k}\downarrow}\Big{]}$ (48) $\displaystyle\phantom{\frac{\widetilde{J}_{K}}{\sqrt{3N}}\sum_{{\bf k},{\bf q}}\Big{[}f^{\dagger}_{{\bf k}-{\bf q}\uparrow}}\times\sum_{\alpha}\sin\widetilde{\alpha}\,\Big{(}a^{{\dagger}}_{\alpha,{\bf q}}+a^{\phantom{{\dagger}}}_{\alpha,-{\bf q}}\Big{)},$ where $\widetilde{J}_{K}\\!=\\!\frac{1}{2}J_{K}\sqrt{S/2}$, $N$ is the total number of sites and summation in ${\bf k}$ and ${\bf q}$ is over the full Brillouin zone of the triangular lattice, Fig. 5. We note that the single- magnon non-spin-flip terms are nonzero in the 120$\degree$, Y, and V phases, in which the angles $\widetilde{\alpha}\\!\neq\\!\\{0,\pi\\}$, because in these states the symmetry of the Hamiltonian (1) is broken completely and a spin-flip does not correspond to a particular spin value. The last transformation is to the quasiparticle operators given by Eq. (36), which yields $\displaystyle{\cal H}_{int}^{+-}=$ $\displaystyle\frac{\widetilde{J}_{K}}{\sqrt{3N}}\sum_{{\bf k},{\bf q}}\Big{[}f^{\dagger}_{{\bf k}-{\bf q}\uparrow}f^{\phantom{{\dagger}}}_{{\bf k}\downarrow}\sum_{\gamma}\Big{(}M^{+-}_{\gamma,{\bf q}}A^{{\dagger}}_{\gamma,{\bf q}}$ $\displaystyle\phantom{\frac{\widetilde{J}_{K}}{\sqrt{3N}}\sum_{{\bf k},{\bf q}}\Big{[}f^{\dagger}_{{\bf k}-{\bf q}\uparrow}f^{\phantom{{\dagger}}}_{{\bf k}\downarrow}}+N^{+-}_{\gamma,{\bf q}}A^{\phantom{{\dagger}}}_{\gamma,-{\bf q}}\Big{)}+{\rm H.c.}\Big{]},$ $\displaystyle{\cal H}_{int}^{zz}=$ $\displaystyle\,\frac{\widetilde{J}_{K}}{\sqrt{3N}}\sum_{{\bf k},{\bf q}}\Big{[}f^{\dagger}_{{\bf k}-{\bf q}\uparrow}f^{\phantom{{\dagger}}}_{{\bf k}\uparrow}-f^{\dagger}_{{\bf k}-{\bf q}\downarrow}f^{\phantom{{\dagger}}}_{{\bf k}\downarrow}\Big{]}$ (49) $\displaystyle\phantom{\frac{\widetilde{J}_{K}}{\sqrt{3N}}\sum_{{\bf k},{\bf q}}\Big{[}f^{\dagger}_{{\bf k}-{\bf q}\uparrow}}\times\sum_{\gamma}M^{zz}_{\gamma,{\bf q}}\Big{(}A^{{\dagger}}_{\gamma,{\bf q}}+A^{\phantom{{\dagger}}}_{\gamma,-{\bf q}}\Big{)},$ with the matrix elements $\displaystyle M^{+-}_{\gamma,{\bf q}}=$ $\displaystyle\,2\sum_{\alpha}\Big{(}\cos^{2}\left(\widetilde{\alpha}/2\right)U_{\alpha,-{\bf q}}^{({\gamma})}-\sin^{2}\left(\widetilde{\alpha}/2\right)V_{\alpha,-{\bf q}}^{({\gamma})}\Big{)},$ $\displaystyle N^{+-}_{\gamma,{\bf q}}=$ $\displaystyle\,2\sum_{\alpha}\Big{(}\cos^{2}\left(\widetilde{\alpha}/2\right)V_{\alpha,-{\bf q}}^{({\gamma})}-\sin^{2}\left(\widetilde{\alpha}/2\right)U_{\alpha,-{\bf q}}^{({\gamma})}\Big{)},$ $\displaystyle M^{zz}_{\gamma,{\bf q}}=$ $\displaystyle\,\sum_{\alpha}\sin\widetilde{\alpha}\,\Big{(}U_{\alpha,-{\bf q}}^{({\gamma})}+V_{\alpha,-{\bf q}}^{({\gamma})}\Big{)}.$ (50) One can see that while the structure of the 
non-spin-flip term in (49) is similar to that of the electron-phonon scattering, the spin-flip part is different, as the amplitudes of magnon emission and absorption by the same $\uparrow(\downarrow)$ electron are, generally, different. This is, of course, most obvious in the polarized FM state, in which magnons do have a definite spin and, therefore, can be emitted only by electrons with the spin $\downarrow$ and absorbed only by electrons with the spin $\uparrow$.

With the electron-magnon couplings explicated in Eqs. (49) and (50), one has a clear path toward a calculation of the electron's relaxation rate and, therefore, resistivity as a function of the field and temperature. The derivation of the electron-magnon interaction above and the calculation of the relaxation rate that follows can be repeated for the individual cases of the 120$\degree$, FM, and UUD phases with the alternative spin-wave formulation considered in Appendix B. Each of these considerations follows the same structure, with a varying degree of simplification compared to the general case described above. While we do not present these alternative solutions here, as they lead to identical outcomes, they do offer an important verification and an analytical insight into the makeup of our solution.

Figure 7: Combinations $\Phi_{\gamma,{\bf k}}$ (53) of the matrix elements in (50) for several representative field values along a representative cut K$\Gamma$MK through the full BZ, see Fig. 5: Y phase, $h\\!=\\!0.3$, UUD phase, $h\\!=\\!0.8$, and V phase, $h\\!=\\!2.0$.

We have repeatedly emphasized the importance of the field-induced changes in the magnon energies and in the electron-magnon matrix elements for our key results that follow next. With the representative magnon energies shown in Fig. 6, we complement them here with similar representative plots of the combinations of matrix elements given in (53), which enter the integral expression for the resistivity, see Eq. (52) below. In Fig. 7 we show the combinations $\Phi_{\gamma,{\bf k}}$ from (53) for three representative field values from different phases: the Y phase at $h\\!=\\!0.3$, the UUD phase at $h\\!=\\!0.8$, and the V phase at $h\\!=\\!2.0$. The plot is along the representative cut K$\Gamma$MK through the full BZ, as in Fig. 6, and for the same parameter choices in the model (1) as above, $j_{2}\\!=\\!J_{2}/J_{1}\\!=\\!-0.8$ and $b\\!=\\!BS^{2}/J_{1}\\!=\\!0.1$. Since the numeration of the magnon modes in Fig. 6(b)-(d) is from the lowest to the highest in energy, the solutions for the matrix elements in Fig. 7 switch between branches whenever the branches cross. Some of the crossings are at the high-symmetry points and some are not. A general trend that can be observed in Fig. 7 is that some of the matrix elements are strongly suppressed around the $\Gamma$ point and are either maximal or singular at the $K$ point.

An interesting feature of the matrix elements in the UUD phase can be noted: $\Phi_{\gamma,{\bf k}}$ does not depend on the field, aside from the switching between branches according to their numeration. That is, while there is a definite reshuffling of the magnon modes vs field that can be seen in Fig. 6(c), the same combination of matrix elements as depicted in the middle panel of Fig. 7 applies at any other point of the magnetization-plateau (UUD) phase. This observation provides an interesting connection between the structure of the quasiparticle states, encoded in their wave functions, and the conserved magnetization.
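For completeness, the following sketch (ours, and only schematic: phase conventions and degenerate points require care) extracts the $U$ and $V$ parameters by normalizing the positive-norm eigenvectors of $\hat{\bf g}\hat{\bf H}_{\bf q}$ with respect to the $\hat{\bf g}$ metric, per Eq. (44), and assembles the matrix-element combination $\Phi_{\gamma,{\bf q}}$ of Eqs. (50) and (53), using the reciprocity $\hat{\bf U}^{*}_{-\bf q}\\!=\\!\hat{\bf U}_{\bf q}$, $\hat{\bf V}^{*}_{-\bf q}\\!=\\!\hat{\bf V}_{\bf q}$ noted in Sec. III.4. Since $\Phi_{\gamma,{\bf q}}$ involves only moduli of sublattice sums within a single eigenvector, the arbitrary overall phase of each eigenvector drops out.

```python
import numpy as np

def bogolyubov(q, angles, h, j2, b):
    """Energies and normalized (U; V) columns of Eq. (44); reuses
    build_Hq and GMAT from the previous sketch."""
    vals, vecs = np.linalg.eig(GMAT @ build_Hq(q, angles, h, j2, b))
    out = []
    for j in range(6):
        v = vecs[:, j]
        norm = (v.conj() @ GMAT @ v).real    # positive for physical branches
        if norm > 0:
            out.append((vals[j].real, v/np.sqrt(norm)))
    out.sort(key=lambda p: p[0])             # branches from lowest to highest
    S = np.array([p[1] for p in out]).T
    return np.array([p[0] for p in out]), S[:3, :], S[3:, :]

def Phi(q, angles, h, j2, b):
    """Matrix-element combination of Eq. (53) built from Eq. (50)."""
    w, U, V = bogolyubov(q, angles, h, j2, b)
    ang = np.array(angles)                   # sublattice angles (beta, a1, a2)
    c2, s2, sa = np.cos(ang/2)**2, np.sin(ang/2)**2, np.sin(ang)
    Mpm = 2*(c2 @ U - s2 @ V)                # overall eigenvector phases drop
    Npm = 2*(c2 @ V - s2 @ U)                # out of the |...|^2 below
    Mzz = sa @ (U + V)
    return w, 2*np.abs(Mzz)**2 + np.abs(Mpm)**2 + np.abs(Npm)**2

# Collinear UUD state (angles 0, 0, pi) at h = 0.8: sin(alpha) = 0 on all
# sublattices, so M^zz = 0 and only the spin-flip channel contributes
w, phi = Phi(np.array([1.1, 0.3]), (0.0, 0.0, np.pi), h=0.8, j2=-0.8, b=0.1)
print(w, phi)
```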
### IV.2 Resistivity

Similarly to the theory of the electron-phonon contribution to the resistivity of metals, the Fermi energy $E_{F}$ is by far the dominant energy scale of the problem, perhaps even more so in our case, as the magnon bandwidth, field strengths, and temperature range of interest are all $\lesssim\\!50$ K while $E_{F}\\!\sim\\!1$–3 eV [30]. Because of that, magnon-induced scattering of electrons occurs within a thin energy shell around the Fermi surface. For the effectively 2D magnetic excitations, the transport scattering rate can be shown to reduce to a 1D integral that is limited by that shell.

With the technical details of the Boltzmann-equation approach to the electron-magnon scattering in Eq. (49) delegated to Appendix C and the mild assumption of a circular 2D Fermi surface, we obtain the transport relaxation rate for electrons with both spin projections $\displaystyle\frac{\hbar}{\tau_{F}}=\frac{\sqrt{3}}{\pi}\,\frac{|\widetilde{J}_{K}|^{2}}{E_{F}}\,(k_{F}a)^{2}\,I_{k_{F}}(T,H)\,,$ (51) where $\widetilde{J}_{K}\\!=\\!\frac{1}{2}J_{K}\sqrt{S/2}$, $E_{F}\\!=\\!\hbar^{2}k_{F}^{2}/2m$ with $m$ being the effective electron mass, $k_{F}$ is the Fermi momentum, and the 1D integral is given by $\displaystyle I_{k_{F}}(T,H)=$ $\displaystyle\,\int_{0}^{1}\frac{z^{2}\,dz}{\sqrt{1-z^{2}}}$ (52) $\displaystyle\,\phantom{\int_{0}^{1}}\times\frac{1}{3}\sum_{\gamma}\Phi_{\gamma,{\bf q}}\,{\rm n}^{0}_{\gamma,{\bf q}}\big{(}{\rm n}^{0}_{\gamma,{\bf q}}+1\big{)}\,\frac{\omega_{\gamma,{\bf q}}}{T}\,,$ with the 2D momentum parametrization along the 1D contour ${\bf q}\\!=\\!2k_{F}(z^{2},z\sqrt{1-z^{2}})$, the Bose distribution function ${\rm n}^{0}_{\gamma,{\bf q}}$ for a magnon with the energy $\omega_{\gamma,{\bf q}}$, $\gamma$ labeling the magnon branches, and $\Phi_{\gamma,{\bf q}}$ abbreviating the matrix-element contribution $\displaystyle\Phi_{\gamma,{\bf q}}=2\big{|}M^{zz}_{\gamma,{\bf q}}\big{|}^{2}+\big{|}M^{+-}_{\gamma,{\bf q}}\big{|}^{2}+\big{|}N^{+-}_{\gamma,{\bf q}}\big{|}^{2}.$ (53)

The result (51)-(53) combines the effort of the entire work in a concise form. It accumulates the solution of the transport theory that proves the validity of the $1/\tau$-approximation in our case, implicitly contains the field-dependence of the magnon spectra and matrix elements (50) via the spin angles $\widetilde{\alpha}$ and the parameters of the generalized Bogolyubov transformation (36), incorporates the field-induced transitions between different phases, and includes the effect of the thermal population of magnetic scatterers on the resistivity. As is discussed in Section V, the thermal distribution of magnons and the matrix-element combination (53) are both essential for the resistivity results. While the general expressions (51) and (52) may not be too intuitive, the following consideration provides the essential ingredients for such an intuition.

Figure 8: The kernel ${\cal K}_{2k_{F}}$, Eq. (54), vs $H$ for $k_{F}\\!=\\!\pi/4$ and for (a) $b\\!=\\!0$, and (b) $b\\!=\\!1/11$. Contributions of the spin-flip and non-spin-flip channels are represented by shadings; the Y, UUD, V, and FM phases and the transitions between them are indicated.

#### IV.2.1 Large-${\bf q}$ insights

The key elements of the physics packed in Eq. (51) can be extracted from the kernel of the integral in (52). We begin by noting that the $z^{2}$ factor in (52) originates from a suppression of the small-angle scattering processes of electrons in the transport relaxation rate.
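Both kinematic factors of Eq. (52) can be made explicit in a direct quadrature; the sketch below (ours, reusing `Phi` and the parameter conventions from the previous snippets) uses the substitution $z\\!=\\!\sin\theta$, which absorbs the integrable $1/\sqrt{1-z^{2}}$ endpoint singularity, and also evaluates the high-$T$ kernel of Eq. (54) defined next.

```python
import numpy as np

def I_kF(T, kF, angles, h, j2, b, ntheta=400):
    """Quadrature of Eq. (52); T and magnon energies in units of 3*J1*S,
    momenta in 1/a. The substitution z = sin(theta) turns
    z^2 dz/sqrt(1-z^2) into sin^2(theta) d(theta)."""
    thetas = np.linspace(1e-3, np.pi/2 - 1e-3, ntheta)
    vals = np.empty(ntheta)
    for i, t in enumerate(thetas):
        z = np.sin(t)
        q = 2*kF*np.array([z**2, z*np.sqrt(1 - z**2)])  # |q| = 2*kF*z
        w, phi = Phi(q, angles, h, j2, b)
        w = np.maximum(w, 1e-12)                        # guard Goldstone zeros
        n = 1.0/np.expm1(w/T)                           # Bose factor
        vals[i] = z**2*np.mean(phi*n*(n + 1)*w/T)       # (1/3)*sum over gamma
    return np.trapz(vals, thetas)

def kernel_2kF(kF, angles, h, j2, b):
    """High-T kernel of Eq. (54) at the 'typical' momentum q* = (2*kF, 0)."""
    w, phi = Phi(np.array([2*kF, 0.0]), angles, h, j2, b)
    return np.sum(phi/np.maximum(w, 1e-12))

# Example: UUD phase at h = 0.8, kF = pi/3, j2 = -0.8, b = 0.1
print(I_kF(0.5, np.pi/3, (0.0, 0.0, np.pi), h=0.8, j2=-0.8, b=0.1))
```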
Thus, the integral is dominated by the large-${\bf q}$ scattering events that correspond to $z\\!\rightarrow\\!1$ and $q\\!\rightarrow\\!2k_{F}$. The $1/\sqrt{1-z^{2}}$ factor is due to angular integration in 2D and also contributes to an enhancement of the large-${\bf q}$ contributions. Further intuition, which also lays out expectations for the results presented in the next Section, is provided by the remainder of the kernel in the second line of (52) taken at a “typical” momentum ${\bf q}^{*}\\!=\\!(2k_{F},0)$ and in the high-temperature limit, approximating Bose-factors as ${\rm n}^{0}_{\gamma,{\bf q}}\\!\approx\\!T/{\omega_{\gamma,{\bf q}}}$ and omitting an overall prefactor $T/3$, ${\cal K}_{2k_{F}}=\sum_{\gamma}\frac{\Phi_{\gamma,{\bf q}^{*}}}{\omega_{\gamma,{\bf q}^{*}}}.$ (54) Referred to as “the kernel” below, ${\cal K}_{2k_{F}}$ is a sum over the branch index $\gamma$ of the ratios of the matrix elements (53) and magnon energies, taken at ${\bf q}^{*}$. We also note that the high-temperature approximation is closely relevant to EuC6 phenomenology discussed in Sec. V. The kernel ${\cal K}_{2k_{F}}$ allows one to analyze contributions of different magnon modes to the resistivity and also to compare the relative importance of the spin-flip and non-spin-flip channels in the scattering. While the latter is absent in the collinear UUD and FM phases, it is present in the noncollinear Y and V phases, where $\sin\widetilde{\alpha}\\!\neq\\!0$, see (50). This partitioning of (54) into the channels is done by separating the non-spin-flip matrix element contribution $|M^{zz}_{\gamma,{\bf q}^{*}}|^{2}$ in $\Phi_{\gamma,{\bf q}^{*}}$ from the rest, see (53). Figure 8 shows ${\cal K}_{2k_{F}}$ (54) vs magnetic field for a representative $k_{F}\\!=\\!\pi/4$ and for two values of the biquadratic parameter $b$ in model (1): (a) Heisenberg, $b\\!=\\!0$, and (b) $b\\!=\\!b_{c}\\!=\\!1/11$. The rest of the parameters are from Table 1, see Sec. II.2.3. In the Heisenberg limit, the UUD phase reduces to a critical point separating Y and V phases. Contributions of the spin-flip and non-spin-flip scattering channels are shaded by different colors. The key lesson of Figure 8 is that the non-spin-flip scattering channel, although secondary to the spin-flip one, is responsible for an enhancement of ${\cal K}_{2k_{F}}$ in the noncollinear Y and V phases relative to the collinear UUD and FM phases where it is not available. Accordingly, one should expect the higher transport relaxation rates (51) and higher resistivity in these noncollinear phases. Fig. 8 also shows that the biquadratic interaction enhances non-spin-flip scattering and causes stronger variations of the kernel near Y-UUD and V-FM transitions. One can anticipate all these trends to persist in the results for the resistivity discussed in Sec. V. #### IV.2.2 Estimate of $J_{K}$ Using considerations of the electronic structure of EuC6 provided in Sec. II.1 and assuming two doubly-degenerate bands with cylindrical Fermi surfaces to describe it, the 3D electronic concentration $n$ is related to the value of the Fermi momentum $k_{F}$ via $n=\frac{k_{F}^{2}}{\pi c}\,,$ (55) where $c$ is the interplane distance between Eu layers. 
With that and some rearranging, the expression for the resistivity can be cast in the following form $\rho=\frac{m}{ne^{2}\tau_{F}}=\frac{c}{4}\,R_{K}\cdot\frac{\hbar}{E_{F}\tau_{F}},$ (56) in which the von Klitzing constant $R_{K}\\!=\\!h/e^{2}\\!\approx\\!25.8$ k$\Omega$ and the interplane distance $c$ set the proper units, and the relaxation rate of (51) is made dimensionless by a normalization to the Fermi energy.

One can use the expression for $\rho$ in Eq. (56) with $\hbar/\tau_{F}$ from (51) to estimate the Kondo coupling constant $J_{K}$ in (45) that is needed to reproduce the experimental values of $\rho$ in EuC6. By taking $\rho_{H=0}(24{\rm K})\\!\approx\\!12.5$ $\mu\Omega$cm from Fig. 1(b) and $c\\!=\\!4.87$ Å [21], one obtains $\hbar/E_{F}\tau_{F}\\!\approx\\!0.091$, which, if matched to the theory results for $k_{F}a\\!=\\!\pi/3$ in Fig. 1(c), yields $J_{K}/E_{F}\\!\approx\\!0.275$. By scaling the value of the Fermi energy in Fig. 2(a) to what it would be for $k_{F}a\\!=\\!\pi/3$, one has $E_{F}\\!\approx\\!1.17$ eV and $J_{K}\\!\approx\\!0.32$ eV. This estimate is of the same order as, albeit somewhat larger than, the value 0.15 eV quoted in the early literature [24]. However, most of the discrepancy could be due to a factor-of-two difference in the definition of the electronic spin in the Kondo coupling (45), making the remaining difference rather academic.

## V Results

With all the elements of our approach and the qualitative and quantitative considerations and estimates provided above, we can now offer a detailed overview of the results that follow from our theory. A comparison of the experimental data for the magnetoresistivity in EuC6 vs field with our calculations for the model parameters from Table 1 and for a representative value of $k_{F}\\!=\\!\pi/3$ is given in Figs. 1(b) and (c). Given the simplicity of our model and the potential additional unaccounted effects discussed in more detail in Sec. V.4, the similarity between experiment and theory is rather astounding.

This similarity includes the high resistivity in the Y phase and its quick roll-down near the Y-UUD transition, a gentle downward slope of $\rho$ vs $H$ in the UUD phase, followed by a smooth rise in the V phase. The temperature evolution of the $\rho(H)$ curves is also consistent with the data, perhaps with the exception of the lowest temperatures. A discrepancy can also be seen in the larger values of $\rho$ in the FM phase and a strong rise toward it near the V-FM transition in the theory results. This is likely due to a proximity to the $H$–$T$ phase transition boundary, where interactions between magnetic excitations, neglected in our consideration, become important.

This successful comparison strongly suggests the correctness of the advocated mechanism of the magnetoresistance in magnetically intercalated graphite as dominated by electron scattering on magnetic excitations, which, in turn, allows insights into the nature of such excitations. In the following, we present further evidence of the success of our theory together with a detailed analysis of the dependence of our results on the key model parameters, such as the biquadratic interaction of spins $b$ and the electron Fermi momentum $k_{F}$, summarized in Figures 9–11. This analysis provides implications for the microscopic parameters that should describe EuC6 and also offers a glimpse of the prospective new phenomena that can be induced in intercalated magnetic materials and similar systems by means of the chemical-, pressure-, or gate-doping.
Figure 9: The $T$-dependence of $\rho_{\rm mag}$ due to magnetic scattering (51) on the (a) linear and (b) log-log scale in the 120$\degree$ state, $H\\!=\\!0$, and in the middle of the UUD phase, $H\\!=\\!(H_{c2}+H_{c1})/2$, for the parameters from Table 1 and $k_{F}\\!=\\!\pi/3$. Dashed lines are the Bloch-Grüneisen, $\sim\\!T^{4}$ in 2D, Ohm, $\sim\\!T$, and exponential asymptotics, see text.

### V.1 $T$-dependence of resistivity

We complement our results for the field-dependence of $1/\tau_{F}$ in Fig. 1(c) by the temperature-dependence of $\rho(H,T)$ at fixed $H$. Our Figure 9(a) shows the results for two field values: $H\\!=\\!0$, the 120$\degree$ spin state, and the middle of the UUD phase, $H\\!=\\!(H_{c2}+H_{c1})/2$. The results are for the same optimal choice of parameters to describe EuC6 from Table 1 as in Fig. 1(c), and for $k_{F}\\!=\\!\pi/3$. Our Fig. 9(b) shows the same data on the log-log scale in order to emphasize two distinct temperature regimes, the “low-$T$” and the “high-$T$.”

The overall energy scale for the scattering is set by the magnon bandwidth, which plays a role analogous to that of the Debye energy in the electron-phonon resistivity [51]. Drawing from this analogy, a transition between the low- and high-$T$ regimes can be expected at a fraction of the magnetic Debye energy [51, 52, 53], which can be estimated from the magnon spectra in Fig. 6 as $W_{\rm m}\\!\approx\\!10J_{1}S$ with some variation between phases. Using $J_{1}\\!\approx\\!1.1$ K, see Table 1, and $S\\!=\\!7/2$ yields $W_{\rm m}\\!\approx\\!40$ K. Indeed, the transition between the two regimes can be observed in Fig. 9 at $W_{\rm m}/5\\!\approx\\!8$ K. This consideration implies that the majority of the experimental results on EuC6 in Refs. [21, 22, 23], in our Fig. 1(b), which is reproduced from Ref. [18], and in all our theoretical plots are in that “high-$T$” regime, $T\\!\gtrsim\\!8$ K. The nature of this regime, where the resistivity crosses over to a linear-$T$ dependence as indicated by the asymptotes in Fig. 9, is simply an equivalent of Ohm's law. Approximating the Bose factors in (52) by their high-temperature limit, ${\rm n}^{0}_{\gamma,{\bf q}}\\!\approx\\!T/{\omega_{\gamma,{\bf q}}}$, naturally yields $1/\tau_{F}\\!\propto\\!T$. Parenthetically, this also motivates the high-$T$ approximation used in the consideration of the kernel in Sec. IV.2.1.

Figure 10: $1/\tau_{F}$ vs $H$ from (51) for a representative $k_{F}\\!=\\!\pi/3$ and representative temperatures; exchange parameters are from Table 1. (a) Results for the biquadratic exchange $b\\!=\\!0$, 0.03, 0.06, and 0.0922 are displaced down for clarity by increments of 0.05 in the given units. Circles/squares mark transitions between magnetic phases. (b) $b\\!=\\!0.13$, $j_{2}\\!=\\!0.08$, dotted lines indicate discontinuities.

Thus, the nearly-linear $T$-dependence of $\rho(T)$ in EuC6 observed in Ref. [24] above 8 K is simply within the onset of the Ohm regime, with no need for the artificial backflow scenario proposed in that work. Needless to say, the details of the spectra do not matter at high temperatures, and the same $\rho(T)\\!\propto\\!T$ dependence should hold for all field-induced phases, as is shown by a comparison of the UUD and 120$\degree$ states in Fig. 9. Naturally, in very high fields, the field-induced gaps $g\mu_{B}(H\\!-\\!H_{s})\\!\gg\\!k_{B}T$ will lead to a freeze-out of the magnon scattering.

The low-$T$ regime is a bit more subtle and depends on the magnetic phase.
The case of the $120^{\circ}$ state and, by proxy, of the Y and V phases with the Goldstone modes that are linear at low energies, $\omega_{\bf q}\\!\propto\\!q$, see Fig. 6(a), (b) and (d), is very much similar to the textbook case of acoustic phonons. For the $120^{\circ}$ state, one can show from Appendix B.2 that the matrix-element contribution (53) associated with the coupling to such a mode is also linear in $q$ in that limit, $\Phi_{\gamma,{\bf q}}\\!\propto\\!q$. Then, a simple power-counting in (52) for $T\\!\gtrsim\\!\omega_{\bf q}$ using $z\\!\propto\\!q\\!\propto\\!T$ yields a 2D analogue of the Bloch-Grüneisen asymptotic regime, $1/\tau_{F}\\!\propto\\!T^{4}$, shown in Fig. 9; the $z^{2}dz$ factor contributes $T^{3}$ and $\Phi_{\gamma,{\bf q}}\\!\propto\\!q$ one more power of $T$.

As opposed to the gapless phases, the UUD and FM phases are gapped away from the transition points, see Fig. 6(c) and (e). Thus, one can expect to see an activated behavior of the resistivity, $\rho\\!\propto\\!e^{-\Delta/T}$, at sufficiently low temperatures. While this regime can be visible in Fig. 9(b), in practice its detection requires reaching temperatures $T\ll\Delta$. There is also an additional smallness due to a $\propto\\!T^{7/2}$ prefactor of the exponent, associated with a suppressed coupling to the lowest mode, $\Phi_{\gamma,{\bf q}}\\!\propto\\!q^{4}$. An estimate for the gap in the middle of the UUD phase for EuC6 gives $\Delta\\!\approx\\!1.1SJ_{1}\\!\approx\\!4.2$ K, providing guidance for future observations.

The locus of magnon momenta ${\bf q}$ that are involved in the scattering depends on the value of $k_{F}$, as we discuss below. In the field-polarized FM case, however, it is, generally, away from the energy minimum in Fig. 6(e), leading to a larger gap in the exponent, which is further increased by the Zeeman energy $g\mu_{B}(H-H_{s})$ away from the saturation field. Together with a more favorable prefactor, $\propto\\!T^{1/2}$, this means that the freezing-out should be readily observable in the FM phase at higher temperatures.

Lastly, one can naively expect a power law that is different from $\propto\\!T^{4}$ near the Y-UUD and UUD-V transition points, as both are associated with BEC-like transitions, at which the magnon energy is quadratic, $\omega_{\bf q}\\!\propto\\!q^{2}$, see Fig. 6(c). Although we refrain from discussing it in any significant detail, the situation is more complicated, as the coupling to these BEC modes is different at $H_{c1}$ and $H_{c2}$. In the first case the coupling vanishes, maintaining an exponential trend due to the higher-energy modes, while in the second case it indeed leads to a different power law, $\propto\\!T^{7/2}$, due to a suppressed coupling, $\Phi_{\gamma,{\bf q}}\\!\propto\\!q^{4}$.

Altogether, the consideration given above presents further evidence of the validity of our theoretical approach, providing a physically transparent description of the temperature-dependence of the resistivity of EuC6 in the previously accessed temperature regime. It also invites further such studies in finite fields and especially at lower temperatures, where the resistivity should be a sensitive probe of the spin excitation spectra.

### V.2 Magnetoresistivity vs biquadratic-exchange

The discussion provided below serves two goals. The first is to investigate how prevalent strong anomalies in the magnetoresistivity, $\rho$ vs $H$, are in the model of conduction electrons coupled via the Kondo coupling (45) to spins described by the Heisenberg-biquadratic model (1).
The second is to demonstrate that the magnetoresistivity of EuC6 is consistent with a substantial biquadratic-exchange parameter $b$ in such a model.

In Figure 10, we present the transport relaxation rate vs field obtained from (51) for several representative temperatures, the Fermi momentum $k_{F}\\!=\\!\pi/3$, and the exchange parameters from Table 1, except that now we vary the key biquadratic-exchange parameter $b\\!=\\!BS^{2}/J_{1}$. The corresponding magnetoresistivity is related to these results by a dimensional constant factor, see Eq. (56). Figure 10(a) shows two sequences of curves, offset for clarity, with the biquadratic exchange increasing in nearly equal steps from the Heisenberg limit, $b\\!=\\!0$, to the value $b\\!=\\!0.0922$ that we use as the optimal choice for EuC6, see Sec. II.2.3. Figure 10(b) shows the results for the biquadratic exchange $b\\!=\\!0.13$, which substantially exceeds the “critical” value $b_{c}\\!=\\!1/11$ that corresponds to a change of the Y-UUD and V-FM transitions to the first-order type, as discussed in detail in Sec. II.2.2 and Appendix A.

The evolution of $1/\tau_{F}$ with $b$ in Fig. 10(a) features the already anticipated trends. First, the opening of the $1/3$-magnetization plateau (UUD) phase away from the Heisenberg limit, see Sec. II.2, is clearly visible. Second, the results in Fig. 10(a) are in close accord with the behavior of the kernel, discussed in Sec. IV.2.1 and illustrated in Fig. 8, providing an explicit confirmation that the transport relaxation rate and magnetoresistivity are dominated by the $2k_{F}$ scattering processes.

The key observation from the results in Fig. 10(a) is that strong, roller-coaster-like variations in the magnetoresistivity, such as the ones observed in EuC6, must be associated with nearly critical values of the biquadratic exchange within our model. Although some aspects resembling strong variations and indicating clear differences of the $\rho$ vs $H$ dependence between different phases can already be observed in the pure Heisenberg model, see, for example, a kink-like feature at the Y-V boundary in the upper curves in Fig. 10(a), others are much less pronounced, see the rather small change of slope at the V-FM transition at $H_{s}$ in the same results.

Our Fig. 10(a) demonstrates that the role of the biquadratic term in the model (1) goes far beyond just establishing the UUD phase boundaries, which are clearly marked by the kinks in $1/\tau_{F}$. With increasing $b$, the Y-UUD transition becomes steeper upon shifting to the lower fields, showing a divergent derivative for $b\\!=\\!0.0922$ that is related to a similar behavior of the spin angles in Fig. 12. Still, the biggest change takes place at the V-FM transition, which too becomes weakly first-order, as is elaborated on in Appendix A. Here, the $1/\tau_{F}$ field-dependence evolves from a nearly featureless one in the Heisenberg limit to a “shock-wave”-like shape for $b\\!=\\!0.0922$. In contrast, the UUD-V transition remains continuous throughout these transformations, although the slope of $1/\tau_{F}$ at $H_{c2}$ also changes visibly.

Increasing $b$ beyond the critical $b_{c}$ should lead to a hysteresis in the magnetoresistance. Figure 10(b) illustrates the case of $b\\!=\\!0.13\\!>\\!b_{c}$. These results are obtained by using the locally stable solutions for the magnetic configurations, their corresponding spin-wave energies, and the electron-magnon matrix elements within the overlap regions of the coexisting phases.
For example, coming from within the Y phase, the Y magnetic configuration persists up to $\widetilde{h}_{c1}$, as is described in Appendix A.1. From within the UUD phase, the same field region can be accessed starting from $h_{c1}\\!<\\!\widetilde{h}_{c1}$. Therefore, in the overlap region, $h_{c1}\\!<\\!h\\!<\\!\widetilde{h}_{c1}$, the relaxation rate $1/\tau_{F}$ can be calculated in two different ways, resulting in the sizable discontinuities in Fig. 10(b), indicated by the vertical dotted lines marking the overlap intervals of $h_{c1}\\!<\\!h\\!<\\!\widetilde{h}_{c1}$ for the Y-UUD and $h_{s}\\!<\\!h\\!<\\!\widetilde{h}_{s}$ for the V-FM transition.

We note that the transition regions and discontinuities in Fig. 10(b) are only illustrative. As is discussed in Sec. A.1, the transition between the two overlapping phases should take place at $h_{c}^{*}$, at which the energies of the two phases become equal. At a finite temperature, a proper consideration of the first-order transition should include the entropic contribution to the free energy of the competing phases. In addition, one can expect the coexistence region to be affected by secondary anisotropies that are neglected in our minimal model. Nonetheless, we believe that Fig. 10(b) faithfully represents the qualitative effect of a strong biquadratic interaction on the magnetoresistance across the first-order transition.

Altogether, the results presented in this section provide an important overview of the characteristic evolution of the magnetotransport within the model of electrons coupled to a spin subsystem described by the Heisenberg-biquadratic model. As is discussed in Sec. II.2.3, the microscopic parameters of the spin model (1) describing EuC6 are determined entirely from thermodynamic quantities, such as the critical fields and the transition temperature. Therefore, to claim a success of the theoretical description, it is crucial that the resulting set of microscopic parameters yields distinctive features that are in accord with a wider phenomenology of the material, especially the one that involves less trivial quantities such as the dynamical response and transport. We can claim such a success here, as the parameters chosen to describe EuC6 in Table 1 are also the ones that produce the sharp, nearly singular features in the magnetoresistivity that follow from our theory and closely match the observed ones.

Figure 11: $1/\tau_{F}$ from (51) vs $H$ for (a) $k_{F}\\!=\\!\pi/4$, (b) $k_{F}\\!=\\!0.4\pi$, (c) $k_{F}\\!=\\!0.5\pi$, and (d) $k_{F}\\!=\\!0.6\pi$. Parameters are from Table 1.

### V.3 Magnetoresistivity, role of $k_{F}$

Two more aspects of our study merit further discussion. First, as is mentioned in Sec. II, the electronic band filling fraction in EuC6 and the Fermi momentum $k_{F}$ parametrizing it are not well-determined. While the nominal Eu2+ valence naively implies a large Fermi surface, the electronic structure and angle-resolved photoemission study [30] suggested a substantially smaller electron fraction in the relevant carbon orbitals and a smaller $k_{F}$. We would like to weigh in on this subject, with the magnetoresistivity in our model arguing for a still somewhat smaller Fermi surface, with $k_{F}\\!\lesssim\\!\pi/3$. Second, much of the interest in synthetic materials in general and in the graphite-derived systems in particular is due to a significant flexibility regarding electronic density manipulation.
Then, in addition to varying the parameters of the spin model, it is also important to explore the outcomes of our theory in a wider range of electronic parameters in order to anticipate potential new effects that can become accessible due to such flexibility. To that end, we discuss some of the larger-$k_{F}$ results.

Our Figure 11 shows the constant-$T$ curves of the transport relaxation rate $1/\tau_{F}$ vs $H$ calculated using (51), as in Fig. 1(c) and Fig. 10, for a set of representative temperatures from $T\\!=\\!24$ K down to $2$ K in 2 K steps. The results are for the model parameters from Table 1, which describe EuC6, and for four different values of the Fermi momentum, $k_{F}\\!=\\!\pi/4$, $0.4\pi$, $0.5\pi$, and $0.6\pi$. In this case, the field-independent constant factor that relates $1/\tau_{F}$ to the magnetoresistivity $\rho(H,T)$, Eq. (56), is different for the four sets, as they correspond to different electronic concentrations $n$ via (55).

Consider the $k_{F}\\!=\\!\pi/4$ results in Fig. 11(a) first. All of the features in the data are the same as in Fig. 1(c) and as discussed in Sec. V.2 for Fig. 10(a), including the steep Y-UUD transition, a “shock-wave” feature at the V-FM boundary, and a decline in the FM phase due to the Zeeman-induced gap that depletes the magnon population. In agreement with the analysis of $\rho$ vs $T$ in Sec. V.1, the temperature-induced offset of the curves is nearly linear in $T$ except for the lowest sets. On a closer and more quantitative inspection, one can argue that in terms of the overall trends in the magnetoresistivity curves, the $k_{F}\\!=\\!\pi/3$ results in Fig. 1(c) provide a somewhat better fit to the EuC6 data in Fig. 1(b) than the $k_{F}\\!=\\!\pi/4$ ones. Moreover, to match the experimental data, the decrease of $k_{F}$ requires a nearly proportional increase of the Kondo coupling constant (45) relative to the Fermi energy, $J_{K}/E_{F}$, thus restricting $k_{F}$ from being too small.

A surprising trend begins to emerge in the results for the larger $k_{F}\\!=\\!0.4\pi$ in Fig. 11(b). Although the features in the constant-$T$ curves are qualitatively similar to the $k_{F}\\!=\\!\pi/4$ case, the changes at the transitions are less steep and less like the ones in the experimental data in Fig. 1(b). They are nearly gone for the Y-UUD boundary in the $k_{F}\\!=\\!0.5\pi$ results in Fig. 11(c), and the V-FM transition for this $k_{F}$ is also marked by spike-like structures, certainly unlike anything observed in EuC6. The $k_{F}\\!=\\!0.6\pi$ results in Fig. 11(d) complete this unexpected trend, with all the transitions, including the formerly rather featureless UUD-V one, showing spikes. These qualitative transformations signify a change in the dominant scattering that contributes to the resistivity.

Regardless of its nature, which we discuss below, an immediate outcome of this analysis is the phenomenological restriction on the size of the Fermi surface in EuC6. As was described in Sec. II.1, the trigonally warped Fermi surfaces from the band structure in Ref. [30] have the extent from $k_{F,{\rm min}}\\!\approx\\!0.45\pi$ to $k_{F,{\rm max}}\\!\approx\\!0.7\pi$, in qualitative agreement with a rigid-band estimate assuming a circular Fermi surface and $e/2$ per Eu2+ doping of the carbon bands, which gives $k_{F,e/2}\\!\approx\\!0.43\pi$, see also Fig. 2. However, the magnetoresistivity of EuC6 within our theory suggests a still smaller Fermi surface, with an optimal $k_{F}$ near $\pi/3$.
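The rigid-band number quoted above is easy to reproduce under the stated assumptions (two doubly degenerate bands with circular cross-sections, i.e., an areal density $n_{\rm 2D}\\!=\\!k_{F}^{2}/\pi$ consistent with Eq. (55), and an area $\sqrt{3}a^{2}/2$ per Eu site); the check below is ours:

```python
import numpy as np

nu = 0.5                                  # e/2 electrons per Eu^2+
# n_2D * (area per Eu) = nu  =>  (kF^2/pi) * (sqrt(3)/2) * a^2 = nu
kF = np.sqrt(np.pi*nu/(np.sqrt(3)/2))     # in units of 1/a
print(kF/np.pi)                           # -> 0.429, i.e. k_F ~ 0.43*pi/a
```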
These results invite more research into the band structure and direct measurements of the Fermi surface of EuC6.

To understand the transformation of the relaxation rates with $k_{F}$ in Fig. 11, we need to return to the analysis of $1/\tau_{F}$ in Eq. (51) and in Sec. IV.2.1. Because of the hierarchy $\omega_{\bf q},T\\!\ll\\!E_{F}$, electrons participating in a conduction process scatter between momenta that are in the close vicinity of the Fermi surface. Under the assumption of a circular 2D Fermi surface, the magnon momenta that are involved in such a scattering also form a circular locus of points in the ${\bf q}$ space, see Fig. 13(b) and Fig. 16 in Appendix C. These momenta extend from $|{\bf q}|\\!=\\!0$ to the maximum of $|{\bf q}|\\!=\\!2k_{F}$, with the small-momentum contribution to the transport scattering rate in (51) suppressed and the large-momentum contribution enhanced, as is discussed in Sec. IV.2.1.

Then it follows for the $k_{F}\\!=\\!\pi/3$ case that the typical large-momentum “$2k_{F}$”-magnons, responsible for most of the scattering, are from the set of $|{\bf q}|$ near $2\pi/3$. Referring to the Brillouin zones in Fig. 5, this value corresponds to the proximity of the $\widetilde{M}$ point of the magnetic Brillouin zone and to the high-energy magnons near the maxima of $\omega_{\gamma,{\bf q}}$, see Fig. 6. However, a further increase of $k_{F}$ drives the extent of the ${\bf q}$-contour outside of the first magnetic BZ and also brings the $2k_{F}$-magnon energy down. Then, the truly “dangerous” value of the Fermi momentum of the circular Fermi surface is $k_{F}\\!=\\!2\pi/3$, as it allows magnon momenta to reach the corners of the full Brillouin zone, $K$ and $K^{\prime}$, which correspond to the ordering vector of all ordered phases, ${\bf Q}\\!=\\!\pm(4\pi/3,0)$, with gapless or nearly gapless modes. Thus, it is the approach of $k_{F}\\!\to\\!2\pi/3$, or, rather, $2k_{F}\\!\to\\!|{\bf Q}|$, which is responsible for the dramatic changes in Fig. 11 from (a) to (d). This analysis also shows that at a given $T$, the population of the relevant scatterers for $k_{F}\\!=\\!\pi/3$ is lower than that for the larger $k_{F}$ values, which explains the order-of-magnitude enhancement of $1/\tau_{F}$ from $k_{F}\\!=\\!\pi/3$ in Fig. 1(c) to $k_{F}\\!=\\!0.6\pi$ in Fig. 11(d) that is only partially accounted for by the $(k_{F}a)^{2}$ factors in (51).

Since the argument above relies only on $2k_{F}\\!\rightarrow\\!|{\bf Q}|$, it suggests a degree of universality. Specifically, we argue that $1/\tau_{F}$ in all gapless phases should diverge in this limit as $\propto\\!|2k_{F}-Q|^{-1}$, with a field-dependent prefactor, leading to the overall increase of the relaxation rates observed in Fig. 11(d). In addition, the $2k_{F}\\!\to\\!|{\bf Q}|$ behavior of $1/\tau_{F}$ should apply equally to the pure Heisenberg case, which offers an opportunity for a quantitative analytical insight. Using the expressions for the FM and 120$\degree$ phases from Appendix B and the high-$T$ limit for the Bose factors in (52), neglecting $b$ and $j_{2}$, and expanding ${\bf q}$ near ${\bf Q}$, after some algebra, indeed yields $I_{k_{F}}(T,H_{s})\\!\approx\\!(8\pi T/3J_{1}S)(2k_{F}|2k_{F}-Q|)^{-1}$ for the FM phase at the gapless $H_{s}$ point. The result is the same for the 120$\degree$ phase at $H\\!=\\!0$, but smaller by a factor of 3/4. Since $H_{s}$ is the transition point that exhibits spike-like features in Fig.
Since $H_{s}$ is the transition point that exhibits spike-like features in Fig. 11, while the 120$\degree$ state is away from the transition, this result confirms our hypothesis that the entire set of $1/\tau_{F}$ is divergent, or nearly divergent in the weakly gapped UUD phase, with the spikes being a quantitative effect associated with the $\propto\!q^{2}$ Goldstone modes at the transitions, as opposed to the $\propto\!q$ modes inside the gapless Y and V phases. We can also verify that the factor of 3/4 between the $H_{s}$ and $H\!=\!0$ (120$\degree$ phase) points is indeed in reasonable accord with the results in Fig. 11(d). The $\propto\!|2k_{F}-Q|^{-1}$ divergence is also consistent with the difference between the $k_{F}\!=\!0.5\pi$ and $k_{F}\!=\!0.6\pi$ results at $H_{s}$ in Figs. 11(c) and (d).

This study of the divergence brings in one more important aspect of the problem that has been neglected so far. We use a fairly reasonable and certainly simplifying assumption of a cylindrical Fermi surface. However, by itself this assumption does not automatically make $1/\tau_{F}$ independent of the direction of the electron momentum ${\bf k}$, with the angular dependence originating from the discrete lattice symmetry that is still encoded in the spin excitations and electron-magnon matrix elements. As may be clear intuitively, the reason why this issue is important in the context of the $\propto\!|2k_{F}-Q|^{-1}$ divergence is that the “dangerous” ${\bf Q}$-vectors correspond to discrete points (the BZ corners) in momentum space, see Fig. 5, leading to a truly divergent $1/\tau_{F}$ only for these directions. This subject is considered in Appendix C.4.2 for the closely related quasiparticle relaxation rate $1/\tau_{\rm qp}$, for which the effect of the angular dependence can be taken into account without any additional approximations. In Appendix C.4.2, we demonstrate that the effect of the angle dependence in $1/\tau_{\rm qp}$ is negligible up to $k_{F}\!\approx\!0.5\pi$, and even for $k_{F}\!\gtrsim\!0.6\pi$ it is still very modest, confirming the accuracy of the results presented in this work and justifying our initial approximation that omitted this effect.

Given the limitations of the cylindrical Fermi-surface approximation, which should become problematic for larger $k_{F}$, and the possible effects of a Fermi-surface reconstruction at the magnetic zone boundaries, it is not entirely clear whether the true divergences will survive, but they may still have strong effects even if avoided. This points to an interesting avenue for potential studies of the magnetic scattering effects in large-Fermi-surface EuC6, induced by chemical, pressure, or gate doping. Some of the considered phenomenology is reminiscent of the “hot spots” phenomena, much discussed in the theory of cuprates [54], where certain parts of the Fermi surface are suggested to experience strong scattering due to the low-energy magnetic excitations with a particular ${\bf Q}$-vector. It is not unthinkable that the suggested further studies of large-Fermi-surface EuC6 may also shed new light on this important problem.

### V.4 Outlook

We would like to reiterate that the present study has provided a thorough consideration of one of the iconic models in frustrated magnetism, the triangular-lattice Heisenberg model, enriched by the biquadratic exchange and coupled to conduction electrons, with the goal of understanding magnetoresistivity throughout its phase diagram in an external magnetic field.
The use of this model as a microscopic description of EuC6, with additional approximations for the electron bands and with parameters estimated from the experimental critical fields and temperature, is clearly a simplification. Yet, the evidence of the success of such a description is undeniable, with many, if not most, features of the magnetoresistivity reproduced, also leading to the constraints on the model parameters for both localized spins and electron densities discussed above. However, the presented description is not complete. Below, we briefly discuss other possible terms in the model that might be missing, their expected effects, possible sources of the remaining discrepancies of our theory with the experimental data, and desirable future studies.

The first additional term in the spin model (1) is the ring-exchange term (2), inspired by the early works [55, 56], see a discussion of it and of its secondary role for EuC6 in Sec. II.2. According to our estimates in Table 1, the ring exchange is about three times smaller than the biquadratic term. Since the symmetry of the model is unaltered by it, and since the field-induced dependencies of the spin angles on the parameters that make the transitions first-order are very similar to the biquadratic-only case [21], it was reasonable to neglect it. The only unexplored outcome of the ring-exchange term is a possible stabilization of an additional magnetic state between the V and FM phases, with some evidence of such a phase in EuC6 suggested by the data, see Fig. 1(b) and Ref. [18].

Next is the $XXZ$ anisotropy in the $J_{1}$ and $J_{2}$ terms, which is necessary to explain the very different magnetization behavior for the in- and out-of-plane field directions [18]. Given the close values of the saturation fields for these directions and the nearly isotropic g-factor, it is expected to be relatively weak, at the level of 10% [21], see also Sec. II.2. However, unlike the ring exchange, it lowers the symmetry of the model. Therefore, for the in-plane field, none of the considered phases will have true Goldstone modes, and gapless excitations will exist only at the field-induced transitions. This is going to alter the low-$T$ behavior of the resistivity discussed in Sec. V.1, and also to change the dynamical critical exponent at all critical fields from the BEC-like ($z\!=\!2$, $\omega_{\bf q}\!\propto\!q^{2}$) to the relativistic-like ($z\!=\!1$, $\omega_{\bf q}\!\propto\!q$). However, as most of the experimentally relevant theory results pertain to the “high-$T$” regime, see Sec. V.1, the effect on them is expected to be secondary. We have performed a limited study of the anisotropic $XXZ$ model for some of the phases and found no qualitatively significant differences from the Heisenberg-limit results in that regime, even for a strong anisotropy.

Since the Eu2+ spins are large, the dipole-dipole interactions are not necessarily negligible. However, using the analysis of Ref. [57] and the size of the unit cell of EuC6, we estimate this term to be at least an order of magnitude smaller than the values of the exchanges. Since the dipolar terms also break spin-rotational symmetries, one can expect effects similar to those of the $XXZ$ anisotropy. The spin and electronic degrees of freedom are not purely 2D in EuC6, with the 3D interplane spin couplings estimated to be of order $0.1J_{1}$ in Ref. [18]. While nominally essential for the finite Néel temperature, their effect on the spin excitations can be expected to be minimal.
However, even if they are small, both the electron and spin 3D dispersions can be crucial for the softening of the $2k_{F}$ singularities discussed in Sec. V.3.

Perhaps the most significant difference of the outcome of our theory from the magnetoresistance data in EuC6 in Figs. 1(b) and (c) is the larger values of $\rho$ in the FM phase and a strong rise toward it near the V-FM transition. An obvious reason for the discrepancy is the neglect of the temperature dependence of the phase-transition boundaries in our approach. While it can be dismissed for the Y-UUD and UUD-V boundaries, for which the results are in a close accord with the data, the transition at the V-FM boundary has a substantial downward suppression with $T$. One possible approach, which we leave for future studies, is to include the temperature dependence of $H_{s}$ by accounting for the interactions between magnetic excitations that are ignored in our consideration.

A less straightforward suggestion is related to the form of the Kondo coupling (45). We have two Fermi surfaces, originating from the downfolding of the two Dirac graphene bands onto the Brillouin zone of the Eu lattice. The coupling to the local spins is treated as fully diagonal in the band index in (45). This may, or may not, be the full story, with an intriguing prospect that the spin arrangement can permit or forbid interband scattering. This is again an interesting subject for a further investigation.

Another notable difference of our results in Fig. 1(c) from the data in Fig. 1(b) is at the lowest temperatures that are accessed experimentally. While the scattering on magnetic excitations dies out in our theory, the magnetoresistivity data retain sizable differences between the magnetic phases. One of the scenarios is a feedback of the spin orders on the electronic density of states, producing a different imprint onto the resistivity in different field-induced phases. Further studies, with the help of electronic-structure calculations, can be envisioned here. A different scenario for the same effect involves a compelling picture associated with the impurity-induced spin textures [58, 59], which generally arise in frustrated spin systems due to magnetic couplings that are modified in the presence of impurities. While the impact of such textures on the dynamical properties has been investigated recently [60, 61], their effect on the resistivity in the field-induced phases is simply unknown. However, one can expect the spin textures to exist readily in the noncollinear phases and to be suppressed in the collinear ones [58, 60], suggesting a profile that is similar to the one observed in the magnetoresistivity. Needless to say, this is yet another direction for future investigations.

We conclude this Section by suggesting several extensions of the experimental work on EuC6. As is discussed in Sec. V.1 and above, precise low-temperature measurements would provide a significant source of information on the field-induced transitions to and from the UUD phase, would allow one to study intriguing critical behaviors, and would help to determine the effective model more precisely. As we have expounded on in Sec. V.3, tunable-$k_{F}$ experiments would allow one to study a singular behavior in the resistivity that may have significant implications for other systems, and, as is briefly mentioned in Appendix C.4.2, a significant violation of the Wiedemann-Franz law can be expected at the field-induced transitions in EuC6.
## VI Summary

The main goal of the present study has been to develop a microscopic theoretical description of the dramatic evolution of the resistivity in EuC6 with the magnetic field. The results and discussions presented in the prior sections provide a strong affirmation that we have succeeded in that goal, with our results capturing most of the qualitative and quantitative features of the experimental data. This success is based on a physical picture of the scattering of electrons from the graphene-derived bands of the carbon layers by spin excitations from the triangular-lattice Eu planes.

In the course of this work, we have provided a thorough theoretical investigation of the ground states, field-induced phase transitions, spin excitations, and their couplings to the conduction electrons in the paradigmatic two-dimensional triangular-lattice antiferromagnet with a biquadratic exchange, throughout its phase diagram. Our effort highlights the virtues of the full-scale microscopic approach to the problem, applied not only to the spin model, but also to the transport formalism for the spin-flip and non-spin-flip channels, allowing rigorously obtained numerical results to receive comprehensive analytical and physical insights and interpretations.

The research advanced in the present study yields predictions of new field-induced and doping-induced phenomena in magnetically intercalated graphite and related systems, also offering an inspiration for bringing together different approaches in the search for new effects in the graphite-derived artificial magnetic materials. We anticipate our effort to be relevant to broader research in metallic magnets and to provide significant technical guidance for similar theoretical studies. Presently, our study invites more research into the electronic, thermodynamic, transport, and magnetic properties of EuC6.

Lastly, our work has advocated resistivity measurements, combined with a detailed theoretical analysis, as a very informative probe of not only field-induced phase transitions, but also of the unconventional spin excitations in magnetic materials. We believe that synthetic 2D materials may become a significant source of potentially novel insights into the nature of exotic spin excitations such as, for example, fractionalized spinons in a quantum spin liquid.

###### Acknowledgements.

We are indebted to Roser Valentí and Vladislav Borisov for a fruitful discussion regarding the electronic structure of intercalated graphite and for sharing their unpublished data that provided an important first-principles and moral support to our Fermi-surface consideration. We would like to wholeheartedly thank KITP for the semi-virtual hospitality during the workshop of the pandemic-impacted Fall of 2020, when the bulk of this work was completed. An unexpected and enlightening encounter by one of the authors (A. L. C.) with Urobatis Halleri during the partial in-person attendance has undoubtedly impacted the reported results. O. A. S. thanks Hassan Allami and Dima Pesin for discussions of experiments on EuC6 and initial attempts at the theoretical formulation of the problem. A. L. C. is grateful to Pavel Maksimov for a substantial Mathematica help and to Ilya Krivorotov for an illuminating discussion regarding possible values of the dipole-dipole terms. The work of A. L. C. was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0021221. The work of O. A. S.
was supported by the National Science Foundation CMMT program under Grant No. DMR-1928919. KITP is supported by the National Science Foundation under Grant No. NSF PHY-1748958.

## Appendix A First-order transitions

Here we describe the analysis of the first-order Y-UUD and V-FM transitions. We focus on the Heisenberg-biquadratic model (1) and provide some technical details on the classical ground states introduced in Sec. II.2.1.

### A.1 Y-UUD transition

Without the ring-exchange term, $k\!=\!0$, the equation for $x\!=\!\cos\alpha_{1}$ in the Y phase, Eq. (4), reduces to
$x^{3}-\frac{1+b}{4b}x=-\frac{1+h}{8b}.$ (57)
A substitution $x\!=\!q\,\sqrt{(1+b)/3b}$ gives
$q^{3}-\frac{3}{4}q=-\frac{1+h}{8}\sqrt{\frac{27b}{(1+b)^{3}}}.$ (58)
Comparison with the trigonometric identity
$\sin^{3}\phi-\frac{3}{4}\sin\phi=-\frac{1}{4}\sin 3\phi,$ (59)
leads to the solution for $x\!=\!\cos\alpha_{1}$,
$x\!=\!\sqrt{\frac{1+b}{3b}}\sin\!\left(\frac{1}{3}\arcsin\left[(1+h)\sqrt{\frac{27b}{4(1+b)^{3}}}\right]\right).$ (60)
It is now easy to check that the Y-UUD transition for $b\leq b_{c}\!=\!1/11$ is continuous and takes place at $h_{c1}\!=\!1-6b$, see Eq. (5), at which $\cos\alpha_{1}\!=\!1$. For $b\!>\!b_{c}$, the Y phase remains locally stable up to a critical field $\widetilde{h}_{c1}\!=\!\sqrt{4(1+b)^{3}/27b}-1$, see Eq. (6), which is found from the condition that the argument of $\arcsin$ in (60) reaches its maximal value of $1$. Given that the UUD phase remains locally stable for all $h\!\geq\!h_{c1}$, we observe that the field interval $h_{c1}\!\leq\!h\!\leq\!\widetilde{h}_{c1}$ determines the overlap region of the Y and UUD phases. Within our approach, however, the actual transition between the two phases takes place when the classical energies of the two phases become equal. This defines another field, $h_{c1}^{*}$, which can be found as follows. First, with the help of (57), the per-site energy of the Y phase, $\widetilde{E}_{Y}\!=\!E_{Y}/NS^{2}J_{1}-3j_{2}$, can be simplified to a quadratic form of $x\!=\!\cos\alpha_{1}$,
$\widetilde{E}_{Y}=(1+b)x^{2}-1.5(1+h)x+h-1-b,$ (61)
which, upon equating with the energy of the UUD phase, $\widetilde{E}_{\rm UUD}\!=\!-1-h-3b$, yields
$x^{*}=\frac{3(1+h)-\sqrt{9(1+h)^{2}-32(1+b)(h+b)}}{4(1+b)}.$ (62)
For a given $b\!>\!b_{c}$, equating the right-hand sides of (60) and (62) determines the transition field $h_{c1}^{*}$. The solution is easily obtained numerically using MATHEMATICA. For our choice of $b\!=\!0.0922$, see Table 1, which is only slightly larger than $b_{c}\!=\!1/11$, the resultant critical fields are nearly indistinguishable from each other: $\{h_{c1},h_{c1}^{*},\widetilde{h}_{c1}\}\!=\!\{0.4468,0.446869,0.446891\}$. For larger values of $b$, the three fields become sufficiently different and allow one to study resistivity hysteresis. For example, for $b\!=\!0.13$ considered in Sec. V.2 we have $\{h_{c1},h_{c1}^{*},\widetilde{h}_{c1}\}\!=\!\{0.22,0.25556,0.282313\}$.

### A.2 V-FM transition

Figure 12: (a) The cosines and (b) the sines of the angles $\alpha_{1}$ and $\beta$ vs $h$ throughout the Y-UUD-V-FM sequence of the phases in Fig. 4, $h\!=\!g\mu_{B}H/3J_{1}S$. Dashed lines are for the pure Heisenberg model, $b\!=\!0$; solid lines are for $b\!=\!BS^{2}/J_{1}\!=\!0.0922$ from Table 1. Small discontinuities due to the very weakly first-order transitions can be seen at the Y-UUD and V-FM transitions, as $b\!=\!0.0922\!>\!b_{c}\!=\!1/11\!\approx\!0.0909$.
In the V phase, denoting $y\!=\!\cos\beta$, introducing $t\!=\!y+\sqrt{3+y^{2}}$, and rewriting Eq. (8) as a cubic equation for the variable $t$, defined in the interval $1\!\leq\!t\!\leq\!3$, yields
$t^{3}-\frac{2+5b}{b}t=-\frac{2h}{b}.$ (63)
This equation is solved by mapping to the same identity (59) as above, with the result given by
$t=2\sqrt{\frac{2+5b}{3b}}\sin\left(\frac{1}{3}\arcsin\left[h\sqrt{\frac{27b}{(2+5b)^{3}}}\right]\right),$ (64)
from which the angles are obtained as
$\cos\beta=\frac{t^{2}-3}{2t},\ \ \ \cos\alpha_{1}=\frac{t^{2}+3}{4t}.$ (65)
It is easy to check that the V-FM transition is continuous for $b\leq b_{c}$, with the same $b_{c}\!=\!1/11$ as above, and that the critical field of the transition is given by $h_{s}\!=\!3-6b$, see Eq. (9). For $b\!>\!b_{c}$, the V phase remains locally stable up to a larger field, $\widetilde{h}_{s}\!=\!\sqrt{(2+5b)^{3}/27b}$, which is found from the condition that the argument of $\arcsin$ in (64) is equal to $1$. Similarly to the case of the discontinuous Y-UUD transition described above, the actual V-FM transition field $h_{s}^{*}$ is found by equating the energies of the V and FM phases. Some tedious algebra gives the energy of the V phase,
$\widetilde{E}_{V}=-\frac{b}{8}t^{4}+\frac{2+5b}{4}t^{2}-ht-\frac{33b}{8}-\frac{3}{2},$ (66)
while $\widetilde{E}_{\rm FM}\!=\!3(1-h-b)$. Solving $\widetilde{E}_{V}\!=\!\widetilde{E}_{\rm FM}$ numerically, with $t$ given by (64), we find that the actual transition field $h_{s}^{*}$ satisfies $h_{s}\!<\!h_{s}^{*}\!<\!\widetilde{h}_{s}$. For $b\!=\!0.0922$ used in our work, we find $\{h_{s},h_{s}^{*},\widetilde{h}_{s}\}\!=\!\{2.4468,2.44689,2.44692\}$, which are, again, essentially identical. For $b\!=\!0.13$ considered in Sec. V.2, these fields become $\{h_{s},h_{s}^{*},\widetilde{h}_{s}\}\!=\!\{2.22,2.28231,2.30258\}$. The evolution of the cosines and sines of the spin angles $\alpha_{1}$ and $\beta$ with $h$ throughout the Y-UUD-V-FM sequence of the phases in Fig. 4 is shown in Fig. 12 for two representative values of $b$.
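As a cross-check of the closed-form boundary fields quoted in this Appendix, the short sketch below evaluates $h_{c1}$, $\widetilde{h}_{c1}$, $h_{s}$, and $\widetilde{h}_{s}$ for both values of $b$; the first-order fields $h_{c1}^{*}$ and $h_{s}^{*}$ require the energy-equality root, which, as in the text, is left to a numerical solver.

```python
import numpy as np

def y_uud_fields(b):
    """Y-UUD boundaries for b > b_c = 1/11: h_c1 of Eq. (5) and the
    local-stability limit of the Y phase, Eq. (6)."""
    return 1 - 6 * b, np.sqrt(4 * (1 + b) ** 3 / (27 * b)) - 1

def v_fm_fields(b):
    """V-FM boundaries: h_s of Eq. (9) and the stability limit of the V phase."""
    return 3 - 6 * b, np.sqrt((2 + 5 * b) ** 3 / (27 * b))

def cos_alpha1_Y(h, b):
    """cos(alpha_1) in the Y phase from the trigonometric solution, Eq. (60)."""
    arg = (1 + h) * np.sqrt(27 * b / (4 * (1 + b) ** 3))
    return np.sqrt((1 + b) / (3 * b)) * np.sin(np.arcsin(arg) / 3)

for b in (0.0922, 0.13):
    print(b, y_uud_fields(b), v_fm_fields(b))
# b = 0.0922 -> (0.4468, 0.44689...) and (2.4468, 2.44692...)
# b = 0.13   -> (0.22, 0.28231...)   and (2.22, 2.30258...)
```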
## Appendix B Particular cases

With the general spin-wave approach for the coplanar three-sublattice states outlined in Sec. III.1, it is still immensely useful to have a fully analytical approach developed for some of the states. This is for the sake of both explicit analytical results and an independent verification of the partially numerical approach of Sec. III. For the fully polarized FM and 120${\degree}$ states, a single-sublattice formulation of the SWT is possible. For the UUD state, the Hamiltonian matrix in (25) can be reduced to a $3\!\times\!3$ matrix and solved in a compact form.

### B.1 Polarized state

In the fully polarized FM state, see Fig. 4(a), all angles are the same, $\widetilde{\alpha}_{\alpha}\!=\!0$. In the absence of the easy-plane anisotropy, the off-diagonal $aa$ ($a^{\dagger}a^{\dagger}$) terms cancel out and the LSWT Hamiltonians in Eqs. (13)–(20) reduce to a tight-binding form similar to that of (20). Since there is no distinction between the sublattices in this case, a Fourier transform of the Holstein-Primakoff bosons,
$a^{\phantom{{\dagger}}}_{i}=\frac{1}{\sqrt{N}}\sum_{\bf q}\,\widetilde{a}^{\phantom{{\dagger}}}_{\bf q}\,e^{-i{\bf q}{\bf r}_{i}},$ (67)
where $N$ is the total number of sites and ${\bf q}$ belongs to the full Brillouin zone of the triangular lattice, is sufficient to diagonalize the LSWT model.

The magnon energy is
$\omega_{\bf q}=3J_{1}S\Big{(}h-2(1-2b)\big{(}1-\overline{\gamma}_{\bf q}\big{)}-2j_{2}\big{(}1-\gamma^{(2)}_{\bf q}\big{)}\Big{)}\,,$ (68)
where $h\!=\!g\mu_{B}H/3J_{1}S$, $j_{2}\!=\!J_{2}/J_{1}$, and $b\!=\!BS^{2}/J_{1}$ as before, $\gamma^{(2)}_{\bf q}$ is given in (35), and
$\overline{\gamma}_{\bf q}=\frac{1}{3}\sum_{\alpha}\cos{\bf q}\bm{\delta}_{\alpha}\,.$ (69)
Here, the consideration of the Kondo coupling (45) simplifies substantially, as the single-magnon spin-conserving terms in the electron-magnon interaction in (46) are not present, the laboratory and local spin axes are the same, and all sublattices are equivalent. Using the Fourier transform (67) in (46) with (47) and $\widetilde{\alpha}_{\alpha}\!=\!0$ yields
${\cal H}_{int}^{+-}=\frac{2\widetilde{J}_{K}}{\sqrt{N}}\sum_{{\bf k},{\bf q}}\Big{[}f^{\dagger}_{{\bf k}-{\bf q}\uparrow}f^{\phantom{{\dagger}}}_{{\bf k}\downarrow}\widetilde{a}^{{\dagger}}_{{\bf q}}+{\rm H.c.}\Big{]},$ (70)
where $\widetilde{J}_{K}\!=\!\frac{1}{2}J_{K}\sqrt{S/2}$ as before. The spin-flip scattering term is simple, with a matrix element containing no momentum dependence. According to Appendix C.3, the spin-flip scattering straightforwardly leads to the relaxation rate in the form of Eq. (51), with the 1D integral in (52) taking the form
$I_{k_{F}}(T,H)=4\int_{0}^{1}\frac{z^{2}\,dz}{\sqrt{1-z^{2}}}\,{\rm n}^{0}_{{\bf q}}({\rm n}^{0}_{{\bf q}}+1)\,\frac{\omega_{{\bf q}}}{T}\,,$ (71)
with the same momentum parametrization along the 1D contour, ${\bf q}\!=\!2k_{F}(z^{2},z\sqrt{1-z^{2}})$, and the Bose distribution function ${\rm n}^{0}_{{\bf q}}$ with the magnon energy $\omega_{{\bf q}}$ from Eq. (68).
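As an illustration of how Eqs. (68) and (71) are used in practice, the minimal sketch below evaluates $I_{k_{F}}(T,H)$ in the polarized phase on a simple grid. The values of $h$ and $T$ (in units of $J_{1}S$) are placeholders, $j_{2}$ is set to zero for brevity, and the nearest-neighbor vectors $\bm{\delta}_{\alpha}$ are chosen consistently with ${\bf Q}=(4\pi/3,0)$.

```python
import numpy as np

b, h = 0.0922, 3.2   # biquadratic term (Table 1); h = g*mu_B*H/(3*J1*S) > h_s
J1S, T = 1.0, 0.3    # placeholder energy unit J1*S and temperature in its units

# nearest-neighbor vectors of the triangular lattice (a = 1),
# consistent with the ordering vector Q = (4*pi/3, 0)
deltas = np.array([[1.0, 0.0], [-0.5, np.sqrt(3) / 2], [-0.5, -np.sqrt(3) / 2]])

def omega(q):
    """Magnon energy of the polarized phase, Eq. (68), with j2 = 0 for brevity."""
    gbar = np.mean(np.cos(q @ deltas.T), axis=-1)   # gamma-bar of Eq. (69)
    return 3 * J1S * (h - 2 * (1 - 2 * b) * (1 - gbar))

def I_kF(kF, T, n=2000):
    """Eq. (71) via the substitution z = sin(phi), which removes the
    integrable 1/sqrt(1 - z^2) endpoint singularity."""
    phi = (np.arange(n) + 0.5) * (np.pi / 2) / n
    z = np.sin(phi)
    q = 2 * kF * np.column_stack([z**2, z * np.sqrt(1 - z**2)])
    w = omega(q)
    nB = 1.0 / np.expm1(w / T)                      # Bose factor n0_q
    return 4 * np.sum(z**2 * nB * (nB + 1) * (w / T)) * (np.pi / 2) / n

print(I_kF(np.pi / 3, T), I_kF(0.6 * np.pi, T))     # larger kF -> larger I_kF
```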
Given the simplicity of the polarized FM state, it may be instructive to demonstrate the relation of the single-sublattice formalism to the general three-sublattice one described in Secs. III.3 and III.4, as the latter is supposed to give an identical description. For all spins polarized, $\widetilde{\alpha}_{\alpha}\!=\!0$, the off-diagonal term in the Hamiltonian matrix (25) vanishes, $\hat{\bf B}^{\phantom{\dagger}}_{\bf q}\!\equiv\!0$, and
$\hat{\bf A}^{\phantom{\dagger}}_{\bf q}=C_{\bf q}\hat{\bf I}+(1-2b)\hat{\bm{\Lambda}}^{\phantom{\dagger}}_{\bf q}\,,\qquad C_{\bf q}=h-2(1-2b)-2j_{2}\big{(}1-\gamma^{(2)}_{\bf q}\big{)},$ (72)
where $\hat{\bf I}$ is the $3\times 3$ identity matrix and
$\hat{\bm{\Lambda}}^{\phantom{\dagger}}_{\bf q}=\left(\begin{array}[]{ccc}0&\gamma_{\bf q}&\gamma^{*}_{\bf q}\\ \gamma^{*}_{\bf q}&0&\gamma_{\bf q}\\ \gamma_{\bf q}&\gamma^{*}_{\bf q}&0\end{array}\right),$ (76)
with $\gamma_{\bf q}$ from (35). Since $\hat{\bf A}^{\phantom{\dagger}}_{\bf q}$ differs from $\hat{\bm{\Lambda}}^{\phantom{\dagger}}_{\bf q}$ only by a term proportional to the identity matrix, the three magnon branches have the energies
$\omega_{\gamma\bf q}=3J_{1}S\big{(}C_{\bf q}+(1-2b)\lambda_{\gamma\bf q}\big{)}\,,$ (77)
where $\lambda_{\gamma\bf q}$ are the eigenvalues of $\hat{\bm{\Lambda}}^{\phantom{\dagger}}_{\bf q}$. A straightforward algebra with (76) gives $\lambda_{1\bf q}\!=\!2\overline{\gamma}_{\bf q}$ and $\lambda_{2(3)\bf q}\!=\!2\overline{\gamma}_{{\bf q}\pm{\bf Q}}$, with $\overline{\gamma}_{\bf q}\!=\!(\gamma_{\bf q}+\gamma^{*}_{\bf q})/2$ from (69) and ${\bf Q}\!=\!(4\pi/3,0)$. Thus, the three magnon branches are the “original” single-sublattice result in (68), $\omega_{1\bf q}\!=\!\omega_{\bf q}$, and the other two are “shifted” by the ordering vectors, $\omega_{2(3)\bf q}\!=\!\omega_{{\bf q}\pm{\bf Q}}$.

In the single-sublattice treatment shown above, the one-magnon coupling involves the $\widetilde{a}^{\phantom{{\dagger}}}_{\bf q}$ ($\widetilde{a}^{{\dagger}}_{\bf q}$) operators from (67) with a constant matrix element, see Eq. (70). In the three-sublattice approach, the matrix elements (50) of the general form of the electron-magnon coupling in Eq. (49) require the knowledge of the Hamiltonian eigenfunctions. In the polarized FM case, the anomalous terms in the transformation to quasiparticles (44) are absent, $\hat{\bf V}_{\bf q}\!=\!0$, and all angles are $\widetilde{\alpha}_{\alpha}\!=\!0$, immediately simplifying (50) to just
${\cal H}_{int}^{+-}=\frac{\widetilde{J}_{K}}{\sqrt{3N}}\sum_{{\bf k},{\bf q},\gamma}\Big{[}M^{+-}_{\gamma,{\bf q}}f^{\dagger}_{{\bf k}-{\bf q}\uparrow}f^{\phantom{{\dagger}}}_{{\bf k}\downarrow}a^{{\dagger}}_{\gamma,{\bf q}}\ +{\rm H.c.}\Big{]},$ (78)
with the matrix elements $M^{+-}_{\gamma,{\bf q}}\!=\!2\sum_{\alpha}U_{\alpha,-{\bf q}}^{({\gamma})}$, where the matrix of vectors $\hat{\bf U}_{\bf q}$ should diagonalize $\hat{\bm{\Lambda}}^{\phantom{\dagger}}_{\bf q}$ in (76). A simple algebra yields
${\bf U}^{(1)}\!=\!\frac{1}{\sqrt{3}}(1,1,1)^{T},\ \ {\bf U}^{(2,3)}\!=\!\frac{1}{\sqrt{3}}(1,e^{\pm i\theta},e^{\mp i\theta})^{T},$ (79)
for the “original” $\omega_{1\bf q}$ and the “shifted” $\omega_{2(3)\bf q}$, respectively; here $\theta\!=\!4\pi/3$. Because of the local nature of the Kondo interaction, the matrix element of the coupling to an eigenmode $\gamma$ in (78) is simply proportional to the sum of the components of the corresponding vector, $\sum_{\alpha}U_{\alpha}^{(\gamma)}$. Thus, as trivially follows from (79), the resultant matrix elements of the coupling to the “shifted” ${\bf q}\pm{\bf Q}$ modes are identically zero, and only the “original” $\omega_{1\bf q}$-mode contributes to (78), with $M^{+-}_{\gamma,{\bf q}}\!=\!2\sqrt{3}$. Needless to say, this renders the coupling Hamiltonians in the single-sublattice and three-sublattice treatments, (70) and (78), equivalent. Their resultant scattering rate is given in (51) and (71). A closely associated problem is the relation of the single-sublattice operators $\widetilde{a}^{\phantom{{\dagger}}}_{\bf q}$ ($\widetilde{a}^{{\dagger}}_{\bf q}$) to the three-flavor operators $a^{\phantom{{\dagger}}}_{\alpha\bf q}$ ($a^{{\dagger}}_{\alpha\bf q}$). A simple algebra gives $a^{\phantom{{\dagger}}}_{\bf q}$
# Scalable Federated Unlearning via Isolated and Coded Sharding

Anonymous submission

Yijing Lin1,2 Zhipeng Gao1∗ Hongyang Du4 Dusit Niyato4 Gui Gui5 Shuguang Cui3,2 Jinke Ren2,3

1 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications
2 The Future Network of Intelligence Institute, The Chinese University of Hong Kong (Shenzhen)
3 School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen)
4 School of Computer Science and Engineering, Nanyang Technological University
5 School of Automation, Central South University

<EMAIL_ADDRESS> <EMAIL_ADDRESS>

∗Corresponding authors: Jinke Ren and Zhipeng Gao.

###### Abstract

Federated unlearning has emerged as a promising paradigm to erase the client-level data effect without affecting the performance of collaborative learning models. However, the federated unlearning process often introduces extensive storage overhead and consumes substantial computational resources, thus hindering its implementation in practice. To address this issue, this paper proposes a scalable federated unlearning framework based on isolated sharding and coded computing. We first divide distributed clients into multiple isolated shards across stages to reduce the number of clients being affected. Then, to reduce the storage overhead of the central server, we develop a coded computing mechanism by compressing the model parameters across different shards. In addition, we provide a theoretical analysis of the time efficiency and storage effectiveness of the isolated and coded sharding. Finally, extensive experiments on two typical learning tasks, i.e., classification and generation, demonstrate that our proposed framework can achieve better performance than three state-of-the-art frameworks in terms of accuracy, retraining time, storage overhead, and F1 scores for resisting membership inference attacks.

## 1 Introduction

With the increasing awareness and stringent regulations of data protection, there is a growing demand for new machine learning technologies that can reap the full benefit of rich data while preserving data privacy. Federated learning (FL) McMahan et al. (2017) has become a promising paradigm to collaboratively train a learning model across multiple clients without sharing their local data. As shown in Figure 1(a), each client independently trains a local model using its local dataset and uploads the model parameters to a central server. The server collects the model parameters from the clients and broadcasts the aggregated model to all clients. This process is iterated until model convergence.

Figure 1: Federated Learning vs. Federated Unlearning. (a) In federated learning, the clients and the server collaboratively train a global model by exchanging model parameters. (b) In federated unlearning, upon receiving an unlearning request from a specific client $C^{\prime}$, the well-trained global model will be unlearned by using the local models of other clients for calibration, thus removing the corresponding data effect.

Despite the great potential of FL, it encounters challenges in data privacy since the shared model parameters still contain some data information.
On the other hand, the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) in the United States have released critical requirements for the “right to be forgotten”, which allows clients to erase the data effects from the model parameters trained on their local datasets Chen and Yang (2023); Liu et al. (2022); Su and Li (2023), thus motivating a new computing paradigm called federated unlearning Liu et al. (2021, 2022); Su and Li (2023). As shown in Figure 1(b), each client performs local model calibration and sends the calibrated model parameters to the central server, which removes the data effect from the well-trained FL model via calibration and aggregation. Since the unlearned model does not need to be trained from scratch, the computational overhead can be significantly reduced.

Recent studies on federated unlearning have identified two distinct strategies: provable and unprovable guarantees. Provable guarantees Bourtoule et al. (2021) ensure the complete removal of data effects, while unprovable guarantees indicate that the model has almost forgotten the data effects of specific clients. To reduce the retraining time, a new framework called FedEraser is proposed in Liu et al. (2021), which removes the data effects of specific clients by storing intermediate model parameters on the central server. Although FedEraser achieves provable guarantees and satisfactory unlearning performance, it lacks scalability due to its extensive storage overhead. On the other hand, several orthogonal works Liu et al. (2022); Wu et al. (2022); Wang et al. (2022) employ various unprovable guarantee-based techniques to enhance the unlearning effectiveness. However, these works mainly optimize loss functions and prune network layers to bound the removal level of the data effect, and thus cannot provide stringent provable guarantees.

To overcome the aforementioned challenges, this paper proposes a scalable federated unlearning framework to efficiently remove the data effects of specific clients while maintaining the model accuracy. Specifically, the entire learning and unlearning process is divided into multiple stages, in which we construct several isolated shards and perform efficient retraining to reduce the storage overhead of the central server. Based on this, we carry out coded computing on different shards to further improve the storage efficiency and reduce the training time. To demonstrate the effectiveness of the proposed framework, we conduct extensive experiments on two typical learning tasks, i.e., classification and generation, and compare our proposed framework with three state-of-the-art frameworks, i.e., FedEraser Liu et al. (2021), RapidRetrain Liu et al. (2022), and FedRetrain Wang et al. (2022); Su and Li (2023). Our main contributions can be summarized as follows:

* • We introduce, for the first time, a stage-based isolated sharding mechanism to reduce the number of affected clients for federated unlearning. Theoretical analysis proves that the expected time cost for processing unlearning requests can be significantly reduced in both the sequential and concurrent cases.

* • We design a coded computing-based sharding mechanism to improve the system scalability. Specifically, it can improve the storage efficiency by $(1-2\mu)C$ and the throughput by $S/O(C^{2}\log^{2}C\log\log C)$, where $C$ represents the number of clients, $\mu$ is the proportion of erroneous results, and $S$ is the number of shards.
* • Experiment results show that the proposed framework can reduce the retraining time by at least 65% and the storage overhead by 98%, while achieving unlearning effectiveness comparable to FedEraser, RapidRetrain, and FedRetrain.

## 2 Related Work

To mitigate the data effects of specific clients, machine unlearning is proposed by partitioning the training data into isolated slices and applying the sharded, isolated, sliced, and aggregated (SISA) method Bourtoule et al. (2021); Xu et al. (2023). Specifically, upon receiving an unlearning request, only the affected slices are retrained, which completely removes the corresponding data effects while saving computational resources and achieving unlearning with provable guarantees. Besides SISA, some previous works also adopt unprovable guarantee-based methods, such as gradient ascent Chen and Yang (2023), adding unlearning layers via a selective teacher-student formulation Jang et al. (2022), and transforming the unlearning process into a single-class classification task Yan et al. (2022). Although these methods have achieved good unlearning performance, they need direct access to client data and thus cannot be applied to FL due to data privacy.

Recent studies have paid attention to removing the data effects of specific clients from well-trained FL models. Specifically, the authors in Liu et al. (2021) first propose the concept of federated unlearning and develop a new framework called FedEraser. By using the storage resources of the central server, it retains intermediate model parameters to achieve unlearning with provable guarantees. To reduce the computational overhead while maintaining model accuracy, many unprovable guarantee-based frameworks have also been developed, such as RapidRetrain Liu et al. (2022), TF-IDF Wang et al. (2022); Salton and Buckley (1988), FedRecovery Zhang et al. (2023), KNOT Su and Li (2023), and BFU Wang et al. (2023). Specifically, RapidRetrain modifies the loss function to accelerate the retraining process and remove the data effects of all clients. In classification tasks, TF-IDF is utilized to unlearn the contributions of specific classes by pruning the most relevant class-discriminative channels Wang et al. (2022). FedRecovery considers differential privacy in the federated unlearning process, where intermediate model parameters from clients are utilized to retrain the global model. KNOT proposes a clustered aggregation-based asynchronous unlearning mechanism to reduce the retraining cost. BFU develops a parameter self-sharing approach to maintain model accuracy while erasing the data effects of clients. While these works have achieved impressive unlearning performance in experiments, they mainly consider unprovable guarantees, such that the unlearning requirements may not be fulfilled Bourtoule et al. (2021).

## 3 Methodology

In this section, we first describe the federated unlearning framework and then introduce two mechanisms based on isolated sharding and coded computing.

Figure 2: Scalable Federated Unlearning Framework. For the unlearning requests initiated at different stages, only the affected shards perform calibrations to remove the data effects of specific clients. To reduce the storage overhead and improve the scalability, intermediate model parameters are encoded/decoded as slices for efficient communication between clients and servers.

### 3.1 Scalable Federated Unlearning Framework

We consider an FL system consisting of $S$ central servers and $C$ distributed clients, denoted by a set $\mathcal{C}=\{C_{1},\ldots,C_{C}\}$.
Each client collects a fraction of data and constitutes its local dataset. A shared learning model $\mathbf{w}$ needs to be collaboratively trained across all clients and servers. In the training process, the intermediate model parameters are stored on the servers for subsequent unlearning purposes. To remove the data effects of specific clients, we define a subset of clients, denoted by $\mathcal{C}^{\prime}\subseteq\mathcal{C}$, in which each client needs to be unlearned based on the unlearning requests. Meanwhile, we define $\mathcal{D}^{\prime}\subseteq\mathcal{D}$ as the overall dataset associated with the unlearned clients. Let $\mathbf{w}^{\prime}$ denote the unlearned global model. Then, according to Kurmanji et al. (2023), the unlearned global model should satisfy
$\left\{\begin{split}&I\left(\mathbf{w}(\mathcal{D}^{\prime});\mathbf{w}^{\prime}(\mathcal{D}^{\prime})\right)=0,\\ &I\left(\mathbf{w}(\mathcal{D}-\mathcal{D}^{\prime});\mathbf{w}^{\prime}(\mathcal{D}-\mathcal{D}^{\prime})\right)=1,\end{split}\right.$ (1)
where $I(\cdot)$ represents the mutual information. To achieve (1), most existing solutions utilize the intermediate model parameters stored on the servers to unlearn the well-trained global model, which inevitably increases the storage overhead of the servers. To solve this problem, we present a new federated unlearning framework, which considers both uncoded and coded sharding to accommodate adaptive and evenly distributed unlearning requests. As shown in Figure 2, uncoded sharding utilizes a stage-based isolated sharding mechanism to erase the data effect in the global model, which will be illustrated in Section 3.2. Coded sharding compresses intermediate model parameters into slices, which keeps the storage overhead manageable as the number of clients increases and will be detailed in Section 3.3.

### 3.2 Stage-based Isolated Sharding Mechanism

Initialization. Since each client may join or leave the system at any time, we divide the entire learning and unlearning process into multiple stages, where the learning and unlearning operations are performed within each stage. Specifically, at each stage, the clients are distributed over multiple shards, denoted by a set $\mathcal{S}$. In each shard, there is a particular server for model aggregation. Therefore, the number of shards is equal to the number of servers, i.e., $S$. The clients in shard $s$ are defined by a set $\mathcal{C}_{s}$. In addition, the dataset and the server associated with shard $s$ are denoted by $\mathcal{D}_{s}$ and $J_{s}$, respectively.

At each stage, the clients in the same shard perform the FedAvg algorithm McMahan et al. (2017) to train a global model. Specifically, each client first trains its local model for $L$ epochs and then transmits the model parameters to the corresponding server for aggregation. Thereafter, the aggregated model parameters are broadcast to the clients for local model updates. These steps are iterated for $G$ rounds to obtain a well-trained FL model, which serves as the foundational model for the subsequent unlearning process. Moreover, the server stores the intermediate model parameters of the clients in the same shard, which will also be used in the subsequent unlearning process.

Preparation. At each stage, we assume that there are $K$ unlearning requests across the impacted shards, denoted by a set $\mathcal{S}^{\prime}$.
In each impacted shard $s_{i}\in\mathcal{S}^{\prime}$, the clients and the unlearning clients are denoted by $\mathcal{C}_{s_{i}}$ and $\mathcal{C}_{s_{i}}^{\prime}$, respectively. Let $J_{s_{i}}$ denote the server associated with the impacted shard $s_{i}$, which collects the intermediate model parameters $\mathbf{w}_{\mathcal{C}_{s_{i}}}^{g},\forall g\in\mathcal{G}$ from the clients in the same shard for storage, where $\mathcal{G}=\{1,\cdots,G\}$ is the set of global learning rounds. Next, the server $J_{s_{i}}$ removes the intermediate model parameters associated with the unlearning clients $\mathcal{C}_{s_{i}}^{\prime}$ from the whole parameter set, as given by $\mathbf{w}_{s_{i}}^{g}=\mathbf{w}_{\mathcal{C}_{s_{i}}}^{g}-\mathbf{w}_{\mathcal{C}_{s_{i}}^{\prime}}^{g},\forall g\in\mathcal{G}$. Then, the server aggregates the model parameters of the retained clients to obtain the initial global unlearned model, as given by
$\mathbf{w}_{s_{i}}^{g^{\prime}=0}=\frac{1}{M}\sum_{m=1}^{M}\mathbf{w}_{s_{i},m}^{g},$ (2)
where $\mathbf{w}_{s_{i},m}^{g}$ is the local model of the retained client $m$ in shard $s_{i}$, $M$ is the number of retained clients, $g^{\prime}$ denotes the unlearning round, and $\mathcal{G}^{\prime}=\{1,\cdots,G\}$ represents the set of all unlearning rounds. Finally, $\mathbf{w}_{s_{i}}^{g^{\prime}=0}$ is sent to the retained clients in the same shard, i.e., $\mathcal{C}_{s_{i}}-\mathcal{C}_{s_{i}}^{\prime}$, for subsequent retraining.

Retraining. In the unlearning round $g^{\prime}$, instead of retraining from scratch, each retained client in shard $s_{i}$ receives the global unlearned model $\mathbf{w}_{s_{i}}^{g^{\prime}}$ and utilizes its local dataset to update $\mathbf{w}_{s_{i}}^{g^{\prime}}$ for $\dfrac{L}{r}$ local epochs, where $r$ is a ratio for reducing the number of local retraining rounds Liu et al. (2021). Then, the updated model parameters are uploaded to the server, which calibrates and updates the global unlearned model by
$\mathbf{w}_{s_{i}}^{{g^{\prime}}+1}=\mathbf{w}_{s_{i}}^{g^{\prime}}+\frac{1}{M}\sum_{m=1}^{M}\frac{\|\mathbf{w}_{\mathcal{C}_{s_{i},m}}^{g}\|}{\|\mathbf{w}_{\mathcal{C}_{s_{i},m}^{\prime}}^{g^{\prime}}\|}\mathbf{w}_{\mathcal{C}_{s_{i},m}^{\prime}}^{g^{\prime}}.$ (3)
Here, $g=g^{\prime}$ indicates that $\mathbf{w}_{\mathcal{C}_{s_{i},m}^{\prime}}^{g^{\prime}}$ is used to calibrate the local model in the same global learning round. Finally, $\mathbf{w}_{s_{i}}^{g^{\prime}+1}$ is distributed to the retained clients in the shard. This process is iterated for $G$ rounds. According to (1), the provable guarantees of the unlearning process are achieved if the global unlearned model satisfies
$\left\{\begin{split}&I\left(\mathbf{w}_{s_{i}}^{g}(\mathcal{D}_{s_{i}}^{\prime});\mathbf{w}_{s_{i}}^{{g}^{\prime}}(\mathcal{D}_{s_{i}}^{\prime})\right)=0,\\ &I\left(\mathbf{w}_{s_{i}}^{g}(\mathcal{D}_{s_{i}}-\mathcal{D}_{s_{i}}^{\prime});\mathbf{w}_{s_{i}}^{{g}^{\prime}}(\mathcal{D}_{s_{i}}-\mathcal{D}_{s_{i}}^{\prime})\right)=1.\end{split}\right.$ (4)
To achieve this, it is important to maintain the isolation of each shard throughout the unlearning process, i.e., to avoid cross-shard interactions at each stage. We shall note that there are scenarios where cross-shard interactions are necessary at different stages. In such cases, unprovable guarantee-based methods can be employed. However, this is not the main focus of this paper and will not be discussed herein.
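To make the preparation and retraining steps concrete, the following minimal sketch implements Eqs. (2) and (3) with each client model flattened to a vector; the sizes, the random history, and the placeholder local-retraining update are our assumptions for illustration only.

```python
import numpy as np

def initial_unlearned_model(stored_round0):
    """Eq. (2): average of the retained clients' stored round-0 parameters."""
    return stored_round0.mean(axis=0)

def calibrate(w_global, stored_g, retrained_gp):
    """Eq. (3): rescale each retained client's retrained update by the norm
    ratio of its stored round-g parameters, then average into the global model."""
    ratios = np.linalg.norm(stored_g, axis=1) / np.linalg.norm(retrained_gp, axis=1)
    return w_global + (ratios[:, None] * retrained_gp).mean(axis=0)

rng = np.random.default_rng(0)
G, M, D = 3, 5, 10                      # rounds, retained clients, model size
stored = rng.normal(size=(G, M, D))     # server-side history w^g of retained clients
w = initial_unlearned_model(stored[0])
for g in range(G):
    # placeholder for the L/r local retraining epochs on each retained client
    retrained = 0.9 * stored[g] + rng.normal(scale=0.01, size=(M, D))
    w = calibrate(w, stored[g], retrained)
```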
### 3.3 Coded Computing-based Sharding Mechanism

As stated in Bourtoule et al. (2021), an effective federated unlearning framework should be lightweight and scalable, balancing the retraining time, storage overhead, and computational cost of unlearning. To achieve this goal, we develop a new coded computing-based sharding mechanism, which is composed of two parts: coded computation and coded reconstruction.

Coded Computation. In the isolated sharding mechanism, the server stores the uncoded intermediate model parameters $\mathbf{w}_{\mathcal{C}_{s}}^{g},\forall g\in\mathcal{G}$ of the clients in the same shard, which are used to remove the data effects from the global model. Conversely, in the coded computing-based sharding mechanism, the server collects the coded intermediate model parameters, denoted by $\mathbf{\widetilde{w}}_{i},\forall i\in\{1,\cdots,C\}$, which are generated from $\mathbf{w}_{\mathcal{C}_{s}}^{g}$ and distributed among the clients. Without loss of generality, we utilize the Lagrange interpolation polynomial method Roth (2006) to compute $\mathbf{\widetilde{w}}_{i}$. Specifically, we first select $S$ different real numbers $\{\omega_{1},\ldots,\omega_{S}\}$, where $\omega_{s}$ is associated with shard $s$. Then, the Lagrange polynomial with variable $\alpha$ is given by
$u(\alpha)=\sum_{s=1}^{S}\mathbf{w}_{\mathcal{C}_{s}}^{g}\prod_{j\neq s}\frac{\alpha-\omega_{j}}{\omega_{s}-\omega_{j}}.$ (5)
To distribute the intermediate model parameters over all clients, we further select $C$ different real numbers $\{\alpha_{1},\ldots,\alpha_{C}\}$, where $\alpha_{i}$ is associated with client $i$ and $C$ is the total number of clients. Based on this, the coded intermediate model parameters stored in client $i$ can be generated at the point $\alpha_{i}$, as
$\mathbf{\widetilde{w}}_{i}=u(\alpha_{i})=\sum_{s=1}^{S}\mathbf{w}_{\mathcal{C}_{s}}^{g}\prod_{j\neq s}\frac{\alpha_{i}-\omega_{j}}{\omega_{s}-\omega_{j}}.$ (6)

In the coded computing-based sharding mechanism, each client not only stores a mix of coded intermediate model parameters across different shards but also generates a series of keys. These keys are shared by the clients and the server in each shard and will be used in the following coded reconstruction. In particular, each client distributes the coded intermediate model parameters to other clients within or outside its shard. We shall note that coded computation is utilized in the initialization phase described in Section 3.2. Instead of directly storing the intermediate model parameters, the server only collects coded intermediate model parameters from the clients for unlearning. Since the size of the coded model parameters is much smaller than that of the uncoded ones, the storage overhead of the server can be significantly reduced.

Coded Reconstruction. In the preparation phase introduced in Section 3.2, instead of directly utilizing the intermediate model parameters, the server in each impacted shard utilizes the keys (generated in the coded computation) to retrieve and reconstruct these model parameters from the clients. This process is akin to decoding a Reed-Solomon code Roth (2006). Given the evaluations at $C$ distinct points, the server reconstructs the model parameters by decoding a Reed-Solomon code with dimension $S$ and length $C$.
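A minimal end-to-end sketch of this encode/decode cycle is given below, before the step-by-step description; the toy sizes, interpolation points, and random parameter blocks are our assumptions. Reconstruction is done here by solving the square linear system built from any $S$ available slices, which is equivalent, up to a change of basis, to the Vandermonde inversion of Eq. (7) below.

```python
import numpy as np

S, C = 4, 12                                       # shards (= servers) and clients
omega = np.arange(1, S + 1, dtype=float)           # one interpolation point per shard
alpha = np.arange(S + 1, S + C + 1, dtype=float)   # one evaluation point per client
W = np.random.default_rng(1).normal(size=(S, 8))   # toy per-shard parameter blocks

def lagrange_basis(a):
    """ell_s(a) = prod_{j != s} (a - omega_j) / (omega_s - omega_j), cf. Eq. (5)."""
    return np.array([np.prod([(a - omega[j]) / (omega[s] - omega[j])
                              for j in range(S) if j != s]) for s in range(S)])

# Eq. (6): the coded slice held by client i is u(alpha_i)
slices = np.array([lagrange_basis(a) @ W for a in alpha])

# Eq. (7): any S error-free slices determine the degree-(S-1) polynomial;
# solving the square system recovers the original per-shard parameters
pick = [0, 3, 5, 9]
L = np.array([lagrange_basis(alpha[i]) for i in pick])
W_rec = np.linalg.solve(L, slices[pick])
assert np.allclose(W_rec, W)                       # exact reconstruction
```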
The detailed decoding process can be described as follows:

Step 1: Let $\{\mathbf{\widetilde{w}}_{1},\ldots,\mathbf{\widetilde{w}}_{C}\}$ denote the corresponding coded slices of the model parameters among the clients, which can be obtained according to (6);

Step 2: The server uses the keys shared by the clients in the same shard to access these slices;

Step 3: The server reconstructs the original model parameters by solving the following linear equation derived from the Reed-Solomon decoding process, as
$\mathbf{w}_{\mathcal{C}_{s}}^{g}=\left[\begin{array}[]{cccc}1&\alpha_{1}&\cdots&\alpha_{1}^{S-1}\\ 1&\alpha_{2}&\cdots&\alpha_{2}^{S-1}\\ \vdots&\vdots&\ddots&\vdots\\ 1&\alpha_{C}&\cdots&\alpha_{C}^{S-1}\end{array}\right]^{-1}\cdot\left[\begin{array}[]{c}\mathbf{\widetilde{w}}_{1}\\ \mathbf{\widetilde{w}}_{2}\\ \vdots\\ \mathbf{\widetilde{w}}_{C}\end{array}\right],$ (7)
where the matrix formed by the $\alpha_{i}$ is a Vandermonde matrix. In practice, the server selects any $S$ of the $C$ available slices, so that the corresponding $S\times S$ submatrix is square and invertible as long as the selected $\alpha_{i}$ are distinct.

According to (7), the server can accurately reconstruct the intermediate model parameters from the coded slices distributed among all clients. Moreover, since each client only receives a single slice of the coded model parameters, it is difficult for other clients to reconstruct all coded model parameters, thus preserving data privacy. After finishing the decoding process, the server in each impacted shard removes the intermediate model parameters associated with the clients requesting unlearning, initiates the retraining phase, and retrains the global model. The overall federated unlearning process is summarized in Algorithm 1.

Algorithm 1 Federated Unlearning with Isolated Sharding and Coded Computing
1: for each shard $s\in\mathcal{S}$ do
2:  Initialization: // Run on clients
3:   Perform coded computation via (6).
4:   Distribute $\mathbf{\widetilde{w}}_{i}$ to the clients within or outside the shard.
5:  if $s\in\mathcal{S}^{\prime}$ then
6:   Preparation: // Run on the server
7:    Perform coded reconstruction via (7).
8:    Remove the intermediate model parameters of the unlearning clients.
9:    Obtain the initial unlearned global model via (2).
10:   Retrain: // Run on clients and server
11:    Retrain the global model for unlearning via (3).
12:  end if
13: end for

## 4 Theoretical Analysis

### 4.1 Time Efficiency for Isolated Sharding

Sequential Unlearning Requests. In the sequential setting, each unlearning request is addressed individually. Given $S$ shards and $K$ unlearning requests, the probability that a shard $s$ is selected for retraining $j$ times across $i-1$ unlearning requests is given by
$P_{s}={i-1\choose j}\left(\frac{1}{S}\right)^{j}\left(1-\frac{1}{S}\right)^{i-1-j},$ (8)
where $\dfrac{1}{S}$ is the probability that a request affects any given shard. Let $\overline{C}_{t}$ denote the average retraining time cost of a shard. Then, the expected time cost for processing all unlearning requests can be estimated by
$T_{s}=\sum_{i=1}^{K}\sum_{j=0}^{i-1}{i-1\choose j}\left(\frac{1}{S}\right)^{j}\left(1-\frac{1}{S}\right)^{i-1-j}\overline{C}_{t}\overset{(a)}{=}K\overline{C}_{t},$ (9)
where $(a)$ is obtained by the binomial theorem Bourtoule et al. (2021).

Concurrent Unlearning Requests. In the concurrent setting, the unlearning requests are aggregated in a batch for joint processing. Let $\{b_{1},\ldots,b_{S}\}$ denote a set of Bernoulli random variables, where $b_{s}$ indicates whether shard $s$ is affected by at least one of the $K$ requests. Since each request independently hits shard $s$ with probability $\dfrac{1}{S}$, we have $P(b_{s}=1)=1-\left(1-\dfrac{1}{S}\right)^{K}$.
Accordingly, the expected time cost for processing all unlearning requests can be estimated by
$T_{c}=\mathbb{E}\left(\sum_{s=1}^{S}b_{s}\overline{C}_{t}\right)\overset{(b)}{=}S\overline{C}_{t}\left(1-\left(1-\frac{1}{S}\right)^{K}\right),$ (10)
where $(b)$ is derived from the expected value of a Bernoulli random variable.

### 4.2 Storage Effectiveness for Coded Sharding

In this work, we use two typical metrics to evaluate the effectiveness of the coded sharding mechanism: 1) Storage efficiency, denoted by $\gamma$, which is defined as the ratio of the size of the intermediate model parameters to that of the data stored in the server; 2) Throughput, denoted by $\lambda$, which characterizes the capacity for processing unlearning requests. The throughput is mainly determined by two factors, i.e., the number of unlearning requests being processed and the associated computational cost. We shall note that the storage efficiency and throughput are the same in the sequential and concurrent settings, since the two settings differ only in the size of the model parameters being processed.

We take the full storage mechanism Liu et al. (2021) as the benchmark, where all the intermediate model parameters are stored in the servers. Therefore, its storage efficiency is set as $\gamma_{f}=1$. Moreover, since the servers handle all unlearning requests, the throughput is $\lambda_{f}=1$. For the proposed uncoded sharding mechanism, all clients are divided into $S$ shards. Therefore, its storage efficiency and throughput are given by $\gamma_{s}=S$ and $\lambda_{s}=S$, respectively. On the other hand, the proposed coded sharding mechanism is resistant to $\mu C$ erroneous results, where $\mu$ is the proportion of the erroneous results among the coded slices Roth (2006). Thus, it follows that
$2\mu C\leq C-S,$ (11)
which implies that the upper bound of $S$ is $(1-2\mu)C$. Accordingly, we have
$S\leq\gamma_{c}\leq(1-2\mu)C.$ (12)
Then, given $O(C^{2}\log^{2}C\log\log C)$ as the additional computational overhead for coded computing Li et al. (2020), the throughput of the coded sharding mechanism can be expressed as
$\lambda_{c}=S/O(C^{2}\log^{2}C\log\log C).$ (13)
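As a quick sanity check of these estimates, the sketch below simulates the expected number of affected shards in the concurrent case, Eq. (10), and evaluates the storage-efficiency bound of Eq. (12); the values of $S$, $K$, $C$, and $\mu$ are chosen here for illustration only.

```python
import numpy as np

S, K, C, mu = 4, 6, 100, 0.1
rng = np.random.default_rng(2)

# Eq. (10): expected number of distinct shards hit by K uniform requests
sim = np.mean([len(set(rng.integers(0, S, size=K))) for _ in range(100_000)])
print(sim, S * (1 - (1 - 1 / S) ** K))      # both ~3.29 for S = 4, K = 6

# Eq. (12): upper bound on the coded storage efficiency
print("S <= gamma_c <=", (1 - 2 * mu) * C)  # 80 for C = 100, mu = 0.1
```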
## 5 Experiments

### 5.1 Experimental Settings

Federated Learning and Unlearning. We consider a total of 100 clients in our experiments to demonstrate the effectiveness of the proposed framework. In the learning process, only 20 clients are randomly selected in each training round. These clients are divided into 4 shards such that each shard has 5 clients. The numbers of local epochs and training rounds are set as 10 and 30, respectively. In the unlearning process, we consider two types of unlearning requests: 1) Even, where all requests are evenly distributed across shards; 2) Adapt, where all requests are adaptively initiated in one shard Bourtoule et al. (2021).

Figure 3: Performance with a single unlearning request.

| F1 Score ($\downarrow$) | MNIST IID | MNIST Non-IID | FMNIST IID | FMNIST Non-IID | CIFAR-10 IID | CIFAR-10 Non-IID | Shakespeare IID | Shakespeare Non-IID |
|---|---|---|---|---|---|---|---|---|
| FR | 0.5455 | 0.5354 | 0.5423 | 0.5434 | 0.5074 | 0.4742 | 0.5426 | 0.5995 |
| FE | 0.5450 | 0.1953 | 0.4659 | 0.2803 | 0.1649 | 0.6536 | - | - |
| RR | 0.5496 | 0.2289 | 0.5203 | 0.5553 | 0.5264 | 0.6684 | - | - |
| SE | 0.5486 | 0.5959 | 0.5447 | 0.5560 | 0.4812 | 0.5141 | 0.6240 | 0.0392 |

| Retraining Time ($\downarrow$) | MNIST IID | MNIST Non-IID | FMNIST IID | FMNIST Non-IID | CIFAR-10 IID | CIFAR-10 Non-IID | Shakespeare IID | Shakespeare Non-IID |
|---|---|---|---|---|---|---|---|---|
| FR | 565.68 | 563.66 | 556.57 | 558.25 | 573.78 | 571.92 | 2406.96 | 2343.51 |
| FE | 293.84 | 287.40 | 290.90 | 291.94 | 301.57 | 304.22 | 1196.03 | 1213.69 |
| RR | 391.71 | 396.40 | 376.49 | 390.88 | 386.00 | 392.61 | - | - |
| SE | 96.79 | 96.51 | 96.50 | 98.12 | 105.05 | 103.96 | 148.06 | 136.95 |
| Gain | $\downarrow$ 67.06% | $\downarrow$ 66.41% | $\downarrow$ 66.82% | $\downarrow$ 66.39% | $\downarrow$ 65.16% | $\downarrow$ 65.82% | $\downarrow$ 87.62% | $\downarrow$ 88.71% |

Table 1: F1 score and retraining time in IID and non-IID scenarios.

Datasets and Models. We use four commonly adopted datasets, including MNIST LeCun et al. (1998), Fashion-MNIST Xiao et al. (2017), CIFAR-10 Krizhevsky et al. (2009), and Tiny Shakespeare McMahan et al. (2017), which are applicable to a diverse range of tasks. On the other hand, we consider two typical learning tasks, i.e., image classification and language generation. To accomplish the two tasks, we use two learning models: 1) a convolutional neural network, which is composed of 2 convolutional, 2 pooling, and 2 fully connected layers (a sketch is given below) and is trained on the MNIST, Fashion-MNIST, and CIFAR-10 datasets; 2) NanoGPT (https://github.com/karpathy/nanoGPT) Radford et al. (2019), which consists of a 4-layer transformer with 4 attention heads Vaswani et al. (2017), an embedding layer with dimension = 16, a block layer, and a vocabulary with size = 109. In particular, NanoGPT is trained on the Tiny Shakespeare dataset. For the data distribution, we consider two data-partition approaches: 1) Independent and identically distributed (IID), where all data samples are randomly divided into 100 equal parts and each client is assigned one part; 2) Non-IID. Specifically, for the image classification task, 80% of the data samples of each client belong to one primary class, while the remaining data samples belong to other classes Wang et al. (2020). For the language generation task, the entire dataset is divided into several unbalanced buckets, and each client is assigned two buckets to ensure a non-IID data distribution across different clients.
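A minimal PyTorch sketch of the classification model is shown below; the channel widths and the single-channel $28\times 28$ input are our assumptions (for CIFAR-10, the input would be $3\times 32\times 32$ with the layer sizes adjusted accordingly).

```python
import torch.nn as nn

class CNN(nn.Module):
    """2 convolutional + 2 pooling + 2 fully connected layers; the widths
    below are illustrative choices for MNIST/Fashion-MNIST inputs."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                            # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                            # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```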
Note that the F1 score is utilized to verify whether data has been successfully unlearned from the model Bourtoule et al. (2021); Liu et al. (2021). These four metrics align with the fundamental principles outlined in Bourtoule et al. (2021), which focus on maintaining model accuracy, reducing unlearning time, and providing security guarantees.

### 5.2 Experimental Results

Figure 4: Performance with concurrent unlearning requests.

Figure 5: Communication time and storage overhead of different frameworks with concurrent adaptive unlearning requests.

Performance with Single Unlearning Request. We first consider a simple scenario in which each retained client has a single unlearning request. In this scenario, we test the unlearning performance of the four frameworks, and the results are shown in Figure 3. As can be seen from Figures 3(a)-(c), the proposed framework SE achieves accuracy comparable to the baseline framework FR in both IID and non-IID cases. Moreover, SE always outperforms the other two baseline frameworks, i.e., FE and RR. The results on the Tiny Shakespeare dataset are very similar, except that the rapid retraining framework RR cannot converge Su and Li (2023). On the other hand, since FR retrains the model from scratch and entirely removes the data effects of specific clients, its F1 score and accuracy should be considered jointly for performance comparison. As shown in Table 1 and Figure 3, the F1 score of FE is lower than those of FR, RR, and SE. However, this result is attributable to the reduced accuracy of FE and should not be interpreted as improved unlearning performance. Instead, our proposed framework SE achieves F1 scores comparable to FR while maintaining model accuracy. Besides, in terms of retraining time, our framework SE always surpasses the other three frameworks. For example, in the non-IID case, SE achieves about 66.41%, 66.39%, 65.82%, and 88.71% time reduction relative to FE on the MNIST, Fashion-MNIST, CIFAR-10, and Tiny Shakespeare datasets, respectively. Therefore, our proposed framework can significantly reduce the retraining time without sacrificing accuracy.

Performance with Concurrent Unlearning Requests. To demonstrate the scalability of the proposed framework, we further consider a scenario in which each retained client has multiple concurrent unlearning requests. In particular, each request can be evenly or adaptively initiated across different shards. The experimental results are illustrated in Figure 4. Specifically, on the CIFAR-10 dataset, the proposed framework SE outperforms FE and RR, and offers an F1 score comparable to all baseline frameworks. Moreover, it reduces the retraining time by 70% in the even scenario. Similar trends can be observed on the Tiny Shakespeare dataset. These gains are attributed to the fact that SE significantly reduces the number of affected clients in the unlearning process. As for the adaptive scenario, the variance in data quality across different shards may affect the loss of SE on the Tiny Shakespeare dataset. Nevertheless, its retraining time is still the shortest among all frameworks, demonstrating the effectiveness of the proposed framework in terms of time efficiency.

Storage Overhead with Concurrent Unlearning Requests. Since FR and RR do not utilize storage resources to improve unlearning performance, we only compare the proposed framework SE with the baseline framework FE. We take the concurrent adaptive scenario for the experiments.
Therein, the communication time for transmitting model parameters consists of two parts: 1) Base network delay, which is set to 0.1 seconds; 2) Model transmission time, which is computed as the ratio of the model size (in bits) to the network data rate (in bit/s). Moreover, to demonstrate the benefit of the coded computing mechanism, we define Coded SE as the framework with isolated and coded sharding, and Uncoded SE as the framework with only isolated sharding. In both frameworks, the storage overhead comprises the model parameters stored in one shard. Since the storage overhead is independent of the data distribution, we take the IID scenario as an example.

Figure 5(a) and Figure 5(b) show the communication time and storage overhead of FE, Uncoded SE, and Coded SE with concurrent adaptive unlearning requests. We can observe that Coded SE achieves the smallest storage overhead and the minimum communication time among all frameworks. For example, the storage overhead can be reduced by almost 98%. On the other hand, we perform experiments on the Shakespeare dataset to investigate the impact of the total number of clients and global rounds on the storage overhead. The results are depicted in Figures 5(c)-(d). From these figures, we can see that Coded SE shows a sharp reduction in storage overhead, with a slight increase in communication time, as the number of clients increases. This is because the model parameters are divided across more shards for storage as the number of clients grows. Also, there is a marginal increase in the distribution and retrieval time when the server utilizes the model parameters for unlearning.

## 6 Conclusion

In this paper, we have developed a scalable federated unlearning framework by introducing two mechanisms based on isolated sharding and coded computing. The isolated sharding mechanism divides distributed clients into multiple shards and removes the data effects according to sequential or concurrent unlearning requests. The coded computing mechanism extends the scalability of the framework by encoding, decoding, distributing, and retrieving intermediate model parameters. By doing so, the storage overhead of the central server can be significantly reduced. Theoretical analysis and experimental results demonstrate the effectiveness of the proposed framework compared with three baseline frameworks. Future works may consider integrating unlearning layers into model architectures to achieve cross-shard unlearning.

## References

* Bourtoule et al. [2021] Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP), pages 141–159. IEEE, 2021.
* Chen and Yang [2023] Jiaao Chen and Diyi Yang. Unlearn what you want to forget: Efficient unlearning for LLMs. In EMNLP 2023 - Empirical Methods in Natural Language Processing, 2023.
* Jang et al. [2022] Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, and Minjoon Seo. Knowledge unlearning for mitigating privacy risks in language models. 2022.
* Krizhevsky et al. [2009] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* Kurmanji et al. [2023] Meghdad Kurmanji, Peter Triantafillou, and Eleni Triantafillou. Towards unbounded machine unlearning. arXiv preprint arXiv:2302.09880, 2023.
* LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
* Li et al. [2020] Songze Li, Mingchao Yu, Chien-Sheng Yang, Amir Salman Avestimehr, Sreeram Kannan, and Pramod Viswanath. Polyshard: Coded sharding achieves linearly scaling efficiency and security simultaneously. IEEE Transactions on Information Forensics and Security, 16:249–261, 2020.
* Liu et al. [2021] Gaoyang Liu, Xiaoqiang Ma, Yang Yang, Chen Wang, and Jiangchuan Liu. FedEraser: Enabling efficient client-level data removal from federated learning models. In 2021 IEEE/ACM 29th International Symposium on Quality of Service (IWQOS), pages 1–10. IEEE, 2021.
* Liu et al. [2022] Yi Liu, Lei Xu, Xingliang Yuan, Cong Wang, and Bo Li. The right to be forgotten in federated learning: An efficient realization with rapid retraining. In IEEE INFOCOM 2022-IEEE Conference on Computer Communications, pages 1749–1758. IEEE, 2022.
* McMahan et al. [2017] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273–1282. PMLR, 2017.
* Radford et al. [2019] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
* Roth [2006] Ron M Roth. Introduction to coding theory. Cambridge University Press, 2006.
* Salton and Buckley [1988] Gerard Salton and Christopher Buckley. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513–523, 1988.
* Shokri et al. [2017] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE, 2017.
* Su and Li [2023] Ningxin Su and Baochun Li. Asynchronous federated unlearning. In IEEE INFOCOM 2023-IEEE Conference on Computer Communications, pages 1–10. IEEE, 2023.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
* Wang et al. [2020] Hao Wang, Zakhary Kaplan, Di Niu, and Baochun Li. Optimizing federated learning on non-IID data with reinforcement learning. In IEEE INFOCOM 2020-IEEE Conference on Computer Communications, pages 1698–1707. IEEE, 2020.
* Wang et al. [2022] Junxiao Wang, Song Guo, Xin Xie, and Heng Qi. Federated unlearning via class-discriminative pruning. In Proceedings of the ACM Web Conference 2022, pages 622–632, 2022.
* Wang et al. [2023] Weiqi Wang, Zhiyi Tian, Chenhan Zhang, An Liu, and Shui Yu. BFU: Bayesian federated unlearning with parameter self-sharing. In Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security, pages 567–578, 2023.
* Wu et al. [2022] Leijie Wu, Song Guo, Junxiao Wang, Zicong Hong, Jie Zhang, and Yaohong Ding. Federated unlearning: Guarantee the right of clients to forget. IEEE Network, 36(5):129–135, 2022.
* Xiao et al. [2017] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
* Xu et al. [2023] Heng Xu, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, and Philip S Yu. Machine unlearning: A survey.
ACM Computing Surveys, 56(1):1–36, 2023. * Yan et al. [2022] Haonan Yan, Xiaoguang Li, Ziyao Guo, Hui Li, Fenghua Li, and Xiaodong Lin. Arcane: An efficient architecture for exact machine unlearning. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 4006–4013, 2022. * Zhang et al. [2023] Lefeng Zhang, Tianqing Zhu, Haibin Zhang, Ping Xiong, and Wanlei Zhou. Fedrecovery: Differentially private machine unlearning for federated learning frameworks. IEEE Transactions on Information Forensics and Security, 2023.
# Optimal charging of open spin-chain quantum batteries via homodyne-based feedback control

Y. Yao Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun 130024, China Center for Advanced Optoelectronic Functional Materials Research, and Key Laboratory for UV Light-Emitting Materials and Technology of Ministry of Education, Northeast Normal University, Changchun 130024, China X. Q. Shao<EMAIL_ADDRESS>Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun 130024, China Center for Advanced Optoelectronic Functional Materials Research, and Key Laboratory for UV Light-Emitting Materials and Technology of Ministry of Education, Northeast Normal University, Changchun 130024, China

###### Abstract

We study the problem of charging a dissipative one-dimensional $XXX$ spin-chain quantum battery using local magnetic fields in the presence of spin decay. The introduction of quantum feedback control based on homodyne measurement helps to improve various performance measures of the quantum battery, such as energy storage, ergotropy, and effective space utilization rate. For a zero-temperature environment, there is a set of optimal parameters ensuring that the spin-chain quantum battery can be fully charged and the energy stored in the battery can be fully extracted under the perfect measurement condition; this set is found through the analytical calculation of a simple two-site spin-chain quantum battery and further verified by numerical simulation of a four-site spin-chain counterpart. For completeness, the adverse effects of imperfect measurement, the anisotropic parameter, and finite temperature on the charging process of the quantum battery are also considered.

## I Introduction

Quantum batteries have become an important topic for researchers owing to the demand for efficient and miniaturized energy storage devices. Initially, the concept of a quantum battery, proposed by Alicki and Fannes Alicki and Fannes (2013), was defined as a small quantum mechanical system for temporarily storing energy. Subsequently, a series of quantum battery models have been proposed in closed systems. Simply put, the charging process can be achieved by coupling the battery to a charger, which can be regarded as both energy provider and energy mediator. For simple models, such as a two-level-system quantum battery or a quantum-harmonic-oscillator quantum battery Andolina _et al._ (2018); Chen and Hasegawa ; Zhang _et al._ (2019); Delmonte _et al._ (2021); Crescente _et al._ (2020a); Seah _et al._ (2021), such batteries can be fully charged directly, but the stored energy oscillates because the charging process is unitary. Once the optimal charging time is missed, the stored energy of the battery drops noticeably. To solve this problem, some researchers proposed adiabatic charging schemes to stabilize the energy storage of the battery Santos _et al._ (2019); Dou _et al._ (2020); Santos _et al._ (2020).
In addition to the single-body quantum battery model, there are also many-body quantum battery models Qi and Jing (2021); Andolina _et al._ (2019a, b); Rossini _et al._ (2019); Julià-Farré _et al._ (2020); Huangfu and Jing (2021); Liu _et al._ (2021); Rossini _et al._ (2020); Lu _et al._ (2021); Crescente _et al._ (2020b); Zhang and Blaauboer ; Dou _et al._ (2022); Caravelli _et al._ (2020); Le _et al._ (2018); Ghosh _et al._ (2020); Kamin _et al._ (2020a); Zhao _et al._ (2022); Barra _et al._ (2022); Arjmandi _et al._ (2022); Mondal and Bhattacharjee , including Sachdev-Ye-Kitaev batteries Rossini _et al._ (2020), the Tavis-Cummings quantum battery Lu _et al._ (2021), the Dicke quantum battery Crescente _et al._ (2020b); Zhang and Blaauboer ; Dou _et al._ (2022), random quantum batteries Caravelli _et al._ (2020), and spin-chain quantum batteries Le _et al._ (2018); Ghosh _et al._ (2020); Kamin _et al._ (2020a); Zhao _et al._ (2022); Barra _et al._ (2022); Arjmandi _et al._ (2022); Mondal and Bhattacharjee ; Ghosh and Sen (De), and so on. These many-body quantum batteries exhibit remarkable charging speeds compared with single-body quantum batteries Binder _et al._ (2015); Ferraro _et al._ (2018); Gyhm _et al._ (2022).

As a matter of fact, quantum batteries cannot be completely isolated from the environment. Therefore, it is necessary to investigate how to stabilize the stored energy and resist energy leakage caused by decoherence when the quantum battery is immersed in the environment Farina _et al._ (2019); Barra (2019); Pirmoradian and Mølmer (2019); Tacchino _et al._ (2020); Hovhannisyan _et al._ (2020); Kamin _et al._ (2020b); García-Pintos _et al._ (2020); Ito and Watanabe ; Gherardini _et al._ (2020); Bai and An (2020); Quach and Munro (2020); Tabesh _et al._ (2020); Zhao _et al._ (2021); Ghosh _et al._ (2021); Peng _et al._ (2021); Santos (2021); Xu _et al._ (2021); Yao and Shao (2021); Centrone _et al._ ; Mitchison _et al._ (2021); Carrasco _et al._ (2022). In 2020, Gherardini et al. Gherardini _et al._ (2020) put forward a stable charging scheme based on continuous measurement to compensate for the entropy increase and keep the open quantum battery in the highest-entropy state. Later, Quach and Munro Quach and Munro (2020) introduced an open quantum battery model using dark states to achieve superextensive capacity and power density. In general, quantum batteries can hardly be fully charged in the presence of environmental noise. Therefore, how to improve the energy storage of quantum batteries in specific systems is still a challenging problem. At the same time, we should pay attention not only to the stable and effective charging process of the battery but also to the maximum capacity of the battery itself, so as to provide sufficient energy for other equipment.

Recently, a model of an $N$-spin-chain quantum battery interacting with environmental noise was constructed in Ref. Zhao _et al._ (2021), and the maximum energy of a quantum battery driven by a coherent cavity driving field or a heat reservoir was investigated. The results show that the nearest-neighbor hopping interaction enhances the energy storage and ergotropy of quantum batteries. Then, Ghosh et al. advanced an open spin-chain battery charging scheme Ghosh _et al._ (2021), in which each spin is connected to two local bosonic reservoirs. During the charging process, the energy dissipation channel is closed and the energy absorption channel is opened.
Their research shows that when the quantum battery is affected by Markovian noise, the energy-storage speed of the quantum battery in the transient regime is faster than that without noise, and the maximum recoverable work quantified by the ergotropy is also higher. It is worth noting that these results only hold when the ratio of the dephasing noise rate to the energy absorption rate lies within a certain range, and the advantage of noise disappears when the system reaches the steady state.

In order to charge the quantum battery stably and effectively, we propose a dissipative protocol based on homodyne feedback for a battery composed of an $XXX$ spin chain with open boundary conditions. The energy injection into the battery is performed by applying a local external magnetic field to each spin. During the charging process, the photons radiated by the spins into the environment are collected, so that a homodyne current is generated after detection, and the strength of the local charging field then becomes conditioned on the photocurrent signal. By adjusting the feedback strength and the feedback direction, the $XXX$ spin-chain quantum battery can be fully charged and stabilized. Therefore, the energy stored by the battery in the steady state is more favorable than in the Markovian scheme of Ref. Ghosh _et al._ (2021). The advantage of the homodyne-based feedback control mechanism is its simplicity in practical applications, because it avoids the challenge of real-time state estimation required by Bayesian or state-based feedback Doherty and Jacobs (1999); Carvalho _et al._ (2008). The relevant feedback theory has been widely used in various fields Wang _et al._ (2005); Liu _et al._ (2010); Bushev _et al._ (2006); Wang and Wiseman (2001); Campagne-Ibarcq _et al._ (2016); Genoni _et al._ (2013), and, guided by it, a number of experimental results have been obtained Smith _et al._ (2002); Armen _et al._ (2002); Geremia _et al._ (2004).

The remainder of the paper is organized as follows. In Sec. II, we first analyze the stored energy and the maximum energy extracted from the spin-chain quantum battery (the ergotropy) under feedback control, and then introduce the concept of the effective space utilization rate. In Sec. III, we obtain the analytical solution of the charging process for an $XXX$ spin-chain quantum battery consisting of two spins, and numerically verify the case of multiple spins. In addition, the stochastic dynamics of the battery charging process and the effects of the spin-spin interaction, imperfect measurement, the anisotropic parameter, and finite temperature on the battery charging process are also discussed in detail. Finally, we give a summary of the present protocol in Sec. IV.

## II DISSIPATIVE SPIN-CHAIN QUANTUM BATTERY

Figure 1: The spin-chain quantum battery. Each spin is coupled to an independent reservoir (light grey ellipse) and has a spontaneous emission rate $\Gamma$. Initially, the battery is prepared in its ground state and each spin is subjected to a homodyne measurement. After the detection, a corresponding homodyne photocurrent is produced. Then the feedback control is activated by applying local magnetic fields depending on the measured results.

### II.1 Master equation of the system

We model the quantum battery as a one-dimensional Heisenberg spin chain consisting of $N$ spins on a lattice, where each spin interacts with a local zero-temperature reservoir, as shown in Fig. 1.
In the absence of charging, the static Hamiltonian of the battery can be described as ($\hbar=1$)
$H_{B}=\frac{h}{2}\sum\limits_{j=1}^{N}\sigma_{j}^{z}+\sum\limits_{j=1}^{N-1}J[(1+\gamma)\sigma_{j}^{x}\sigma_{j+1}^{x}+(1-\gamma)\sigma_{j}^{y}\sigma_{j+1}^{y}+\Delta\sigma_{j}^{z}\sigma_{j+1}^{z}],$ (1)
where $h$ represents the strength of the external magnetic field breaking the degeneracy between $\mid\downarrow\rangle$ and $\mid\uparrow\rangle$, and $J$ refers to the nearest-neighbor coupling strength between spins. $\sigma^{k}$ $(k=x,y,z)$ is the corresponding Pauli spin matrix, and $\gamma$ and $\Delta$ are the anisotropy parameters. For convenience, we assume that all the parameters here are positive real numbers. To inject energy into the battery, a local external magnetic field is applied to each spin. The charging Hamiltonian is chosen as
$H_{\rm charging}=\sum_{j=1}^{N}\{\Omega_{j}[\sigma_{j}^{x}\sin(\alpha)+\sigma_{j}^{y}\cos(\alpha)]\},$ (2)
where $\alpha\in(-\pi,\pi]$ acts as a regulator setting the direction of the magnetic field, and $\Omega_{j}$ is the corresponding strength applied to the $j$th spin. The evolution of the system can be characterised by the following Lindblad master equation
$\dot{\rho}=-i[H_{B}+H_{\rm charging},\rho]+\Gamma\sum_{j=1}^{N}\mathcal{D}[\sigma_{j}^{-}]\rho,$ (3)
where $\mathcal{D}[o]\bullet=o\bullet o^{\dagger}-(o^{\dagger}o\bullet+\bullet o^{\dagger}o)/2$ and $\Gamma$ is the spontaneous emission rate of each spin constituting the quantum battery. If the above model is considered in a closed system, the energy stored in the battery usually oscillates over time, so the protocol is only effective for instantaneous charging of the quantum battery; in a realistic environment, a non-zero value of $\Gamma$ further reduces the charging performance of the battery. Therefore, designing an efficient charging scheme in open quantum systems is a problem worthy of consideration. From this perspective, we introduce a quantum feedback mechanism based on homodyne measurement into the charging protocol of the quantum battery. In the feedback control process, a homodyne measurement of each spin is performed, and the resulting photocurrent $J^{\rm hom}_{j}$ can be approximated as signal plus Gaussian white noise Wiseman and Milburn (1993); Wiseman (1994); Gardiner (1985); Wiseman and Milburn (2009), i.e. (see Appendix A),
$J^{\rm hom}_{j}(t)=\langle\sigma_{j}^{x}\rangle+\frac{\xi(t)}{\sqrt{\eta\Gamma}},$ (4)
where $\eta\leq 1$ is the total measurement efficiency of the detection, and $\xi(t)=dw(t)/dt$ represents Gaussian white noise with a complex Wiener increment $dw(t)$ satisfying $[dw(t)]^{2}=dt$ and an average over the noise $E[dw(t)]=0$ Gardiner (1985). The local magnetic field thus becomes time-dependent, and its strength, which depends on the measurement results, can be expressed as
$\Omega_{j}(t)=\Omega_{0j}+fJ^{\rm hom}_{j}(t-\tau),$ (5)
where $f$ is the feedback strength, and $\tau$ represents a small time delay in the feedback loop. For simplicity, the constant amplitude $\Omega_{0j}$ can be set to 0; the corresponding feedback Hamiltonian then reads
$H_{\rm fb}=\sum_{j=1}^{N}J^{\rm hom}_{j}(t-\tau)F_{j},$ (6)
where $F_{j}=f[\sigma^{x}_{j}\sin(\alpha)+\sigma^{y}_{j}\cos(\alpha)]$.
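As a side illustration of Eqs. (1)-(3) (this sketch is ours, not part of the original derivation; QuTiP is assumed and all parameter values are placeholders), the oscillatory, dissipation-degraded charging without feedback can be simulated directly:

```python
import numpy as np
from qutip import sigmax, sigmay, sigmaz, sigmam, qeye, tensor, mesolve

# Illustrative parameters (hbar = 1): two spins, XXX point, weak decay
N, h, J, gam, Delta, Gamma, Omega, alpha = 2, 1.0, 1.0, 0.0, 1.0, 0.2, 0.5, np.pi

def site(op, j):
    """Embed a single-spin operator op at site j of the N-spin chain."""
    return tensor([op if k == j else qeye(2) for k in range(N)])

sx = [site(sigmax(), j) for j in range(N)]
sy = [site(sigmay(), j) for j in range(N)]
sz = [site(sigmaz(), j) for j in range(N)]

# Static battery Hamiltonian, Eq. (1)
HB = 0.5 * h * sum(sz) + sum(
    J * ((1 + gam) * sx[j] * sx[j + 1]
         + (1 - gam) * sy[j] * sy[j + 1]
         + Delta * sz[j] * sz[j + 1]) for j in range(N - 1))

# Constant local charging fields, Eq. (2), with Omega_j = Omega
Hc = Omega * sum(np.sin(alpha) * sx[j] + np.cos(alpha) * sy[j] for j in range(N))

# Lindblad evolution, Eq. (3), starting from the ground state of HB
E0, psi0 = HB.groundstate()
c_ops = [np.sqrt(Gamma) * site(sigmam(), j) for j in range(N)]
result = mesolve(HB + Hc, psi0, np.linspace(0, 40, 400), c_ops, e_ops=[HB])

# Stored energy Delta E(t); it oscillates and is damped by the decay
stored = result.expect[0] - E0
print("max transient stored energy:", stored.max())
```

The damped oscillation of `stored` is precisely the behavior that motivates the feedback mechanism formalized in Eqs. (7)-(9) below.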
Now the total conditioned equation including feedback obeys the following rule Wiseman and Milburn (2009)
$\rho_{J}(t+dt)=\sum_{j=1}^{N}e^{\mathcal{K}_{j}J^{\rm hom}_{j}(t-\tau)dt}(\rho_{J}(t)+\mathcal{A}[\rho_{J}(t)]),$ (7)
where $\mathcal{K}_{j}$ is a Liouville superoperator satisfying $\mathcal{K}_{j}\rho=-i[F_{j},\rho]$, and $\mathcal{A}[\rho_{J}(t)]=\Gamma\mathcal{D}[\sigma^{-}_{j}]\rho_{J}(t)dt+\mathcal{H}[-iH_{B}]\rho_{J}(t)dt+\sqrt{\eta\Gamma}d\omega(t)\mathcal{H}[\sigma^{-}_{j}]\rho_{J}(t)$, with $\mathcal{H}$ being a nonlinear superoperator defined as $\mathcal{H}[o]\bullet=o\bullet+\bullet o^{\dagger}-{\rm Tr}\{\bullet(o+o^{\dagger})\}\bullet$. In the Markovian limit $\tau\rightarrow 0$, Eq. (7) can be expanded to second order in $\mathcal{K}_{j}$ while retaining the first order of $dt$. Then, by applying the rules of Ito calculus (i.e., $[dw(t)]^{2}=dt$), the stochastic master equation describing the charging process of the spin-chain quantum battery is
$\dot{\rho}_{J}=\sum_{j=1}^{N}\{\mathcal{K}_{j}(\sigma^{-}_{j}\rho_{J}+\rho_{J}\sigma^{+}_{j})+\frac{1}{2\eta\Gamma}\mathcal{K}_{j}^{2}\rho_{J}+\sqrt{\eta\Gamma}\xi(t)[\mathcal{H}[\sigma^{-}_{j}]+(\eta\Gamma)^{-1}\mathcal{K}_{j}]\rho_{J}\}+\mathcal{L}\rho_{J},$ (8)
where $\mathcal{L}\rho_{J}=-i[H_{B},\rho_{J}]+\Gamma\sum_{j=1}^{N}\mathcal{D}[\sigma^{-}_{j}]\rho_{J}$. Taking the ensemble average of Eq. (8), we have
$\dot{\rho}=-i[H_{B},\rho]+\Gamma\sum_{j=1}^{N}\mathcal{D}[\sigma^{-}_{j}]\rho-i\sum_{j=1}^{N}\{[F_{j},\sigma_{j}^{-}\rho+\rho\sigma_{j}^{+}]+\frac{1}{2\eta\Gamma}[F_{j},-i[F_{j},\rho]]\},$ (9)
where the first two terms describe the dynamical evolution caused by the internal Hamiltonian of the battery and the spontaneous decay, respectively. The last two terms describe the charging coherence introduced by the feedback operation and the measurement noise fed back into the system. The central idea is to use continuous measurement records to control the dynamics of the system.

### II.2 The relevant performance parameters of the battery

The stored energy of the quantum battery at an arbitrary time $t$ is defined as
$\Delta E(t)={\rm Tr}[H_{B}\rho_{B}(t)]-{\rm Tr}[H_{B}\rho_{B}(0)],$ (10)
where $\rho_{B}(t)$ is the density matrix of the battery and $\rho_{B}(0)$ is the initial state. In order to describe the maximum energy that can be extracted from the battery, we need to calculate the corresponding ergotropy Allahverdyan _et al._ (2004)
$\mathcal{E}(t)={\rm Tr}[H_{B}\rho_{B}(t)]-\mathop{\rm min}\limits_{U}{\rm Tr}[U\rho_{B}(t)U^{\dagger}H_{B}],$ (11)
where the second term on the r.h.s. of Eq. (11) is the minimum energy that cannot be extracted from the battery by applying any unitary $U$ to the system.
If the Hamiltonian $H$ and density matrix $\rho$ of a system are written in ordered form as $H=\sum_{i}\varepsilon_{i}|\varepsilon_{i}\rangle\langle\varepsilon_{i}|$ ($\varepsilon_{1}<\varepsilon_{2}\cdots<\varepsilon_{N}$) and $\rho=\sum_{k}r_{k}|r_{k}\rangle\langle r_{k}|$ ($r_{1}>r_{2}\cdots>r_{N}$), respectively, the corresponding passive state Allahverdyan _et al._ (2004); Lenard (1978) of the system can be expressed as
$\sigma=\sum_{j}r_{j}|\varepsilon_{j}\rangle\langle\varepsilon_{j}|.$ (12)
Thus the ergotropy can be explicitly rewritten as
$\mathcal{E}(t)={\rm Tr}[H_{B}\rho_{B}(t)]-{\rm Tr}[H_{B}\sigma]={\rm Tr}[H_{B}\rho_{B}(t)]-\sum_{j}r_{j}\varepsilon_{j}.$ (13)
It is well known that the energy spectrum of a spin chain is closely related to the nearest-neighbor coupling strength between spins; thus it is not comprehensive to measure the performance of the battery only by the stored energy and the ergotropy. Here, we introduce the effective space utilization rate of the battery, i.e., the fraction of the total battery capacity occupied by the stored energy, which is also an important parameter for benchmarking the performance of a quantum battery with internal interactions,
$R(t)=\frac{\Delta E(t)}{C_{\rm max}},$ (14)
where $C_{\rm max}=E_{\rm max}-E_{\rm min}$ is the maximum capacity of the battery, and $E_{\max}$ and $E_{\min}$ are the highest and lowest energies in the spectrum of $H_{B}$, respectively. Undoubtedly, $R=1$ indicates that the battery can be fully charged.

## III THE OPTIMAL FEEDBACK CONDITION

### III.1 The charging of a two-spin quantum battery under homodyne-based feedback control

For convenience, we mainly take the $XXX$ spin-chain quantum battery ($\Delta=1$ and $\gamma=0$) consisting of two spins as a toy model to study the charging process. This simple model not only exhibits characteristics clearly different from those of a single-body quantum battery, but also helps us determine the optimal parameter range when extending to a many-body spin quantum battery. According to Eq. (1), we can get the following eigenstates
$|E_{a}\rangle=\mid\downarrow\downarrow\rangle,\quad|E_{b}\rangle=\frac{1}{\sqrt{2}}(\mid\uparrow\downarrow\rangle-\mid\downarrow\uparrow\rangle),\quad|E_{c}\rangle=\frac{1}{\sqrt{2}}(\mid\uparrow\downarrow\rangle+\mid\downarrow\uparrow\rangle),\quad|E_{d}\rangle=\mid\uparrow\uparrow\rangle,$ (15)
with the corresponding eigenvalues $E_{a}=-h+J$, $E_{b}=-3J$, $E_{c}=J$, and $E_{d}=h+J$, respectively. It is easy to check that the order of the eigenvalues is $E_{a}<E_{b}<E_{c}<E_{d}$ when $J<h/4$, and that it becomes $E_{b}<E_{a}<E_{c}<E_{d}$ once $J>h/4$. The above results show that the ground state of the system depends on the coupling strength $J$ between spins, but the highest-energy state of the battery is always $|E_{d}\rangle=\mid\uparrow\uparrow\rangle$ regardless of $J$.
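Before giving the analytic steady state, a numerical cross-check of Eq. (9) is straightforward (again our own sketch, with QuTiP assumed and illustrative parameters): build the Liouvillian, add the two feedback terms, and evaluate the stored energy of Eq. (10) and the ergotropy via the passive state of Eqs. (12)-(13).

```python
import numpy as np
from qutip import (sigmax, sigmay, sigmaz, sigmam, qeye, tensor, liouvillian,
                   lindblad_dissipator, spre, spost, steadystate, expect)

N, h, J, Gamma, eta = 2, 1.0, 1.0, 0.2, 1.0
alpha, chi = np.pi, 1.0                  # candidate optimal feedback parameters
f = chi * Gamma                          # feedback strength

def site(op, j):
    return tensor([op if k == j else qeye(2) for k in range(N)])

sm = [site(sigmam(), j) for j in range(N)]
HB = 0.5 * h * sum(site(sigmaz(), j) for j in range(N)) + sum(
    J * (site(sigmax(), j) * site(sigmax(), j + 1)
         + site(sigmay(), j) * site(sigmay(), j + 1)
         + site(sigmaz(), j) * site(sigmaz(), j + 1)) for j in range(N - 1))

# Liouvillian of Eq. (9): bare decay plus the two feedback terms
L = liouvillian(HB, [np.sqrt(Gamma) * s for s in sm])
for j in range(N):
    F = f * (np.sin(alpha) * site(sigmax(), j) + np.cos(alpha) * site(sigmay(), j))
    # -i [F_j, sigma_j^- rho + rho sigma_j^+]
    L += -1j * (spre(F) - spost(F)) * (spre(sm[j]) + spost(sm[j].dag()))
    # measurement-noise term; for Hermitian F_j it equals D[F_j]/(eta*Gamma)
    L += lindblad_dissipator(F) / (eta * Gamma)

rho_ss = steadystate(L)
energies = HB.eigenenergies()                              # ascending
stored = expect(HB, rho_ss) - energies[0]                  # Eq. (10), ground-state start
pops = np.sort(np.linalg.eigvalsh(rho_ss.full()))[::-1]    # descending populations
ergotropy = expect(HB, rho_ss) - float(pops @ energies)    # Eqs. (12)-(13)
print(stored, ergotropy)  # both should approach h + 4J for these parameters
```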
The steady-state solution of Eq. (9) (see Appendix B) is
$\rho_{11}(\infty)=\frac{\chi^{4}}{(2\chi^{2}+2\chi\eta\cos\alpha+\eta)^{2}},$ (16a)
$\rho_{22}(\infty)=\frac{\chi^{2}(\chi^{2}+2\chi\eta\cos{\alpha}+\eta)}{(2\chi^{2}+2\chi\eta\cos{\alpha}+\eta)^{2}},$ (16b)
$\rho_{33}(\infty)=\rho_{22}(\infty),$ (16c)
$\rho_{44}(\infty)=\frac{(\chi^{2}+2\chi\eta\cos{\alpha}+\eta)^{2}}{(2\chi^{2}+2\chi\eta\cos{\alpha}+\eta)^{2}},$ (16d)
where we have defined $\chi=f/\Gamma$ for simplicity. The optimal feedback condition can be determined from the stationarity condition $\partial\rho_{11}(\infty)/\partial\chi=4\eta\chi^{3}(1+\chi\cos\alpha)/(2\chi^{2}+2\chi\eta\cos\alpha+\eta)^{3}=0$. By selecting $\chi=-1/\cos\alpha$, the maximum value of $\rho_{11}(\infty)$ is obtained as
$\rho_{11}^{\rm max}(\infty)=\frac{1}{(2-\eta\cos^{2}{\alpha})^{2}}.$ (17)
Under the perfect measurement condition $\eta=1$, the optimal value of $\rho_{11}^{\rm max}(\infty)$ is 1 when $\alpha=0$ or $\pi$, which means the battery can be fully charged [$R(\infty)=1$] from its ground state by applying $y$-direction feedback to the battery, even in the presence of dissipation. Thus, $\alpha=0$ $(\pi)$ and $\chi=\mp 1$ are viewed as the optimal feedback parameters of the $XXX$ spin-chain quantum battery. In what follows, we stipulate that the $y$-direction feedback refers to the case of $\alpha=\pi$, unless otherwise specified. According to Eqs. (10) and (13), the corresponding ratio of the ergotropy to the stored energy of the battery in the steady state is computed to be $\mathcal{E}(\infty)/\Delta E(\infty)=1$, whether the ground state of the $XXX$ model is $\mid\downarrow\downarrow\rangle$ ($J<h/4$) or $(\mid\uparrow\downarrow\rangle-\mid\downarrow\uparrow\rangle)/\sqrt{2}$ ($J>h/4$), which means the stored energy in the battery can be fully extracted.

### III.2 The charging process of a battery made up of four spins

In the above study, we mainly took the spin-chain quantum battery composed of two spins as an example to analyze the charging performance. In order to verify that similar conclusions also apply to batteries with more spins, we now numerically discuss the case of a spin-chain quantum battery composed of four spins. Considering more spins would increase the computational time without changing the conclusions. In the numerical simulation, we set the ground state of the battery as the initial state, and the measurement efficiency $\eta$ is 1.

Figure 2: (a) The stored energy $\Delta E(\infty)/h$ of the battery at steady state as a function of $\alpha$ and $\chi$. (b) The maximum energy $\mathcal{E}(\infty)/h$ that can be extracted (ergotropy) from the battery. Other parameters are $\eta=1$ and $J=h$.

In order to determine the optimal feedback parameters, the stored energy and the ergotropy of the battery in the steady state are displayed as functions of $\alpha$ and $\chi$ in Fig. 2, governed by Eq. (9), with the other parameters $\eta=1$ and $J=h$. The figure clearly shows that there are three equivalent optimal parameter pairs ($\alpha$, $\chi$) in the corresponding parameter range, which is consistent with the case of two spins (see Appendix C for more spins). For convenience, we choose the optimal feedback condition ($\alpha=\pi$, $\chi=1$) for the following discussion.

Figure 3: The $y$-direction feedback control is applied to the system, i.e., $\alpha=\pi$.
(a) The variation of the optimal stored energy $\Delta E(\infty)/h$ of the battery at steady state with $J/h$. The inset describes the optimal feedback parameter $\chi$ at different $J$. (b) The variations of $R(\infty)$ and $\mathcal{E}(\infty)/\Delta E(\infty)$ of the battery in the steady state with the coupling strength $J$. The measurement efficiency is given as $\eta=1$.

Fig. 3(a) shows that the optimal stored energy of the battery in the steady state increases with the coupling strength $J/h$ between spins, which means the interaction between spins expands the capacity of the battery itself. From the inset, we see that the feedback strength required for the battery to achieve the optimal energy storage does not change with $J$. In order to measure the battery performance under the feedback mechanism more comprehensively, we characterize the variation of the space utilization rate [$R(\infty)$] and the ratio of the ergotropy to the stored energy [$\mathcal{E}(\infty)/\Delta E(\infty)$] of the battery with the coupling strength between spins in Fig. 3(b). It indicates that the coupling strength between spins is not a factor affecting the ability of energy storage and energy extraction, because the highest-energy state of the $XXX$ spin-chain battery is always the state with all spins pointing upward, irrespective of the coupling strength between the spins. After the above analysis of four spins, we qualitatively obtain conclusions similar to the two-spin case, which clearly shows that our scheme remains effective for a quantum battery composed of multiple spins.

### III.3 Random dynamics, imperfect measurement, and different initial state

Figure 4: Stochastic charging dynamics of the battery under the optimal feedback condition. In the inset, the measurement efficiency is set to $\eta=0.8$. The light grey lines represent 200 random trajectories of the battery stored energy obtained by Eq. (8), the red solid line represents the average value of the 200 energy trajectories, and the green dashed line represents the ensemble-average energy obtained by Eq. (9).

In order to show the stochastic nature of the measurement and feedback processes, in Fig. 4 we numerically simulate 200 random trajectories of the energy stored in the $XXX$ spin-chain battery as a function of $\Gamma t$, based on the stochastic master equation (8). Each trajectory represents the result of a single run (see the light grey lines). Obviously, the energies of different trajectories disperse during the charging process; however, when the system reaches the steady state, the feedback strongly suppresses the fluctuations of the battery energy. In addition, the average of the 200 energy trajectories (the red solid line) coincides with the ensemble-averaged energy obtained from the master equation (9) (the green dashed line). The above results show that the battery can be fully charged not only in the ensemble-average sense but also at the single-trajectory level. The inset shows that the battery can be stably and effectively charged even if the measurement efficiency is not perfect.

Figure 5: (a) The stored energy $\Delta E(\infty)/h$ (solid line) and the ergotropy (dash-dotted line) of the spin-chain quantum battery with different coupling strengths $J/h$. The inset shows the optimal feedback parameter $\chi$ required for the stored energy (solid line) and the ergotropy (dash-dotted line). (b) The effective utilization rate $R(\infty)$ of the battery as a function of $J/h$ under the optimal feedback strength.
The measurement efficiency is $\eta=0.8$.

Fig. 5(a) describes the optimal stored energy (solid line) and ergotropy (dash-dotted line) of the battery in the steady state with imperfect measurement efficiency $\eta=0.8$. The corresponding optimal feedback strengths for different $J$ values are shown in the inset. We can clearly see that the maximum energy extracted from the battery is lower than the optimal stored energy of the battery. When the spin-spin interaction strength increases to a certain value $J_{c}$ ($J_{c}=2h$ in terms of energy storage and $J_{c}=0.8763h$ in terms of energy extraction; see Appendix D for details), the required optimal feedback strength becomes zero, which means that the feedback control is ineffective in this case. Fig. 5(b) describes the variation of the effective space utilization rate $R(\infty)$ of the battery with $J$ at the optimal feedback strength [the inset of Fig. 5(a)]. The results show that imperfect measurement weakens the role of feedback control and reduces the performance of the battery.

Figure 6: The stored energy $\Delta E(\infty)/h$ of the battery in the steady state as a function of $J/h$ under different measurement efficiencies $\eta$. The initial state of the battery is the spin-down state, and the feedback parameter $\chi$ at each $J$ value is optimal.

So far, we have studied the charging performance of the spin-chain quantum battery using the ground state as the initial state. Next, we mainly study the charging process of the battery when all the spins are initialized downward. Fig. 6 reveals that the stored energy of the battery is independent of $J$ under the condition of perfect measurement (red solid line), because the stored energy of the battery over the whole charging process is $\Delta E(\infty)=(3J+2h)-(-2h+3J)=4h$. Nevertheless, when the measurement efficiency is imperfect ($\eta=0.8$), the stored energy of the battery at steady state is strongly affected (blue dashed line). We can see again that the nearest-neighbor hopping between spins has a critical value that renders the feedback control ineffective, as shown in the inset on the left. To better understand this phenomenon, we use $J/h=3$ to plot the energy-storage curve of the battery as a function of the feedback strength in the right inset. The result shows that the stored energy of the battery becomes negative when the feedback control is switched on, indicating that the battery has evolved into a state of lower energy than the initial one, i.e., the battery is in a discharging process.

### III.4 Performance of the $XY$ spin-chain quantum battery and the $XYZ$ spin-chain quantum battery

The effect of the anisotropy parameter ($\gamma$) on the charging process of the battery can be characterized by analyzing the related properties of the $XY$ spin-chain quantum battery ($\Delta=0$, $0<\gamma<1$) and the $XYZ$ spin-chain quantum battery ($\Delta=1$, $0<\gamma<1$). In this part, we assume the measurement efficiency is $\eta=1$, and the feedback parameter $\chi$ takes the optimal value for each respective $\gamma$.

Figure 7: The effective space utilization rate $R(\infty)$ in the steady state of the $XY$ spin-chain quantum battery ($\Delta=0$) with spin numbers $N=2$ (a) and $N=4$ (b) as a function of the anisotropy parameter $\gamma$ under different coupling strengths $J$. The insets in (a) and (b) describe the variations of the stored energy $\Delta E(\infty)/h$ of the battery with $\gamma$ in the steady state.
The feedback strengths at different $\gamma$ values are optimal. Other parameters are $\eta=1$, $\alpha=\pi$ and $\Gamma=h$. (c) and (d) show the variations of the effective space utilization with $\gamma$ for batteries with spin numbers $N=2$ and $N=4$ under different spontaneous emission rates $\Gamma$ with $J=h$, respectively.

Figs. 7(a) and 7(b) respectively describe the influence of the anisotropy parameter $\gamma$ on the effective space utilization rate $R(\infty)$ and the optimal stored energy $\Delta E(\infty)/h$ of the $XY$ spin-chain battery for spin numbers $N=2$ and $N=4$. Different curves correspond to different spin-spin coupling strengths, where the spontaneous emission rate is set to $\Gamma=h$. The results in Fig. 7(a) and its inset show that for a weak coupling $J=0.1h$ the anisotropy parameter $\gamma$ barely affects the effective space utilization rate $R(\infty)$ and the optimal stored energy $\Delta E(\infty)$. However, for a relatively strong coupling $J=3h$, $R(\infty)$ and $\Delta E(\infty)$ increase significantly with $\gamma$. When the number of spins increases to $N=4$, we obtain a conclusion similar to that for $N=2$, as shown in Fig. 7(b). The above results indicate that, under strong interaction between spins, increasing the anisotropy parameter $\gamma$ plays a positive role in improving the performance of the $XY$ spin-chain quantum battery. Figs. 7(c) and 7(d) respectively show that the effective space utilization rates of the batteries with $N=2$ and $N=4$ decrease with increasing dissipation rate for a given spin-spin coupling strength ($J=h$), which is completely different from the $XXX$ model discussed above. Nevertheless, for a relatively weak dissipation rate, ranging from $0.1h$ to $h$, the effective space utilization rate remains comparatively high. It is worth noting that the specific values $\gamma=0$ and $\gamma=1$ correspond to the typical transverse $XX$ model and the transverse Ising model, respectively.

Figure 8: (a) The effective space utilization rate $R(\infty)$ of the $XYZ$ spin-chain quantum battery ($\Delta=1$) in the steady state as a function of the anisotropy parameter $\gamma$ for $N=2$. The inset describes the change of the effective space utilization rate $R$ of the battery with $\gamma$ in the steady state. (b) The same as (a) but with spin number $N=4$. Other conditions are the same as in Fig. 7. (c) and (d) show the variations of the effective space utilization with $\gamma$ for batteries with spin numbers $N=2$ and $N=4$ under different spontaneous emission rates $\Gamma$ with $J=h$, respectively.

The influence of the anisotropy parameter $\gamma$ on the effective space utilization rate and the optimal energy storage of $XYZ$ spin-chain quantum batteries with $N=2$ and $N=4$ is illustrated in Figs. 8(a) and 8(b), respectively, with $\Gamma=h$. It indicates that, for both $N=2$ and $N=4$, when the spin coupling strength is weak ($J=0.1h$) the anisotropy parameter still has no significant effect on the effective space utilization rate and the optimal energy storage. However, when the coupling strength between spins increases to $J=3h$, the batteries with $N=2$ and $N=4$ exhibit different behaviors in terms of the optimal energy storage as the anisotropy parameter increases. Meanwhile, we see that the effective space utilization of the batteries with $N=2$ and $N=4$ decreases as $\gamma$ increases, which means that increasing the anisotropy is detrimental to the chargeability of the battery.
Figs. 8(c) and 8(d) show the effects of different spontaneous emission rates $\Gamma$ on the effective space utilization rate of the $XYZ$ spin-chain quantum batteries with spin numbers $N=2$ and $N=4$, respectively. The influence of the spontaneous emission rate on the current model is obviously different from that on the $XY$ spin-chain model; in particular, as shown in Fig. 8(d), the effective space utilization increases with the dissipation rate. Similarly, the specific value $\gamma=0$ corresponds to the $XXX$ model in Sec. III.2, whose performance, as shown in the previous discussion, is affected by neither the spin-spin interaction strength nor the spontaneous emission rate.

### III.5 The effect of finite temperature

Figure 9: (a)-(b) describe the optimal energy storage and ergotropy of the battery at steady state for spin numbers $N=2$ and $N=4$, respectively, with the corresponding optimal feedback parameter $\chi$ shown in the insets. (c) describes the variation of the effective space utilization rate of the battery with temperature. (d) shows the evolution of the energy stored in the battery ($N=4$) with time $\Gamma t$ at different temperatures, where the feedback strengths are adjusted to their optimal values and the initial state of the battery is set to the corresponding ground state. Other parameters are $\eta_{d}=\eta_{c}=0.8$ and $J=h$.

When the environment has a finite temperature $T$, the uncollected photons enter a thermal radiation reservoir with an average occupancy of $n_{\rm T}=[\exp(\hbar\omega/k_{B}T)-1]^{-1}$. Therefore, we assume that the photons radiated by the spins enter two channels: a finite-temperature thermal reservoir for the uncollected photons, and an effective zero-temperature channel for the collected photons. Then the conditional state of the system under homodyne measurement can be written as
$\dot{\rho}_{J}=-i[H_{B},\rho_{J}]+\sum_{j=1}^{N}\{\eta_{c}\Gamma\mathcal{D}[\sigma^{-}_{j}]\rho_{J}+\sqrt{\eta\Gamma}\xi(t)\mathcal{H}[\sigma^{-}_{j}]\rho_{J}\}+(1-\eta_{c})\sum_{j=1}^{N}\mathcal{L}_{j\rm T}\rho_{J},$ (18)
where $\mathcal{L}_{j\rm T}\rho_{J}=\Gamma\{(1+n_{\rm T})\mathcal{D}[\sigma^{-}_{j}]\rho_{J}+n_{\rm T}\mathcal{D}[\sigma^{+}_{j}]\rho_{J}\}$ is the standard Lindblad dissipator of the thermal radiation reservoir Mitchison _et al._ (2021); Wiseman and Milburn (2009). $\eta$ is the total measurement efficiency of the detection system, satisfying $\eta=\eta_{c}\eta_{d}$, which incorporates both the photon collection efficiency $\eta_{c}$ and the detector efficiency $\eta_{d}$. The master equation based on the homodyne feedback control should then be rewritten as
$\dot{\rho}=-i[H_{B},\rho]+\eta_{c}\Gamma\sum_{j=1}^{N}\mathcal{D}[\sigma^{-}_{j}]\rho+(1-\eta_{c})\sum_{j=1}^{N}\mathcal{L}_{j\rm T}\rho-i\sum_{j=1}^{N}\{[F_{j},\sigma_{j}^{-}\rho+\rho\sigma_{j}^{+}]+\frac{1}{2\eta\Gamma}[F_{j},-i[F_{j},\rho]]\}.$ (19)
By solving Eq. (19) with $N=2$, we find that the steady-state populations are given by
$\rho_{11}(\infty)=\frac{(f^{2}+n_{\rm T}\Gamma^{2}\eta-n_{\rm T}\Gamma^{2}\eta\eta_{c})^{2}}{[2f^{2}-2f\Gamma\eta+(1+2n_{\rm T})\Gamma^{2}\eta-2n_{\rm T}\Gamma^{2}\eta\eta_{c}]^{2}},$ (20a)
$\rho_{22}(\infty)=\frac{1}{4}-\frac{\Gamma^{2}(\Gamma-2f)^{2}\eta^{2}}{4[2f^{2}-2f\Gamma\eta+(1+2n_{\rm T})\Gamma^{2}\eta-2n_{\rm T}\Gamma^{2}\eta\eta_{c}]^{2}},$ (20b)
$\rho_{33}(\infty)=\rho_{22}(\infty),$ (20c)
$\rho_{44}(\infty)=\frac{[f^{2}-2f\Gamma\eta+(1+n_{\rm T})\Gamma^{2}\eta-n_{\rm T}\Gamma^{2}\eta\eta_{c}]^{2}}{[2f^{2}-2f\Gamma\eta+(1+2n_{\rm T})\Gamma^{2}\eta-2n_{\rm T}\Gamma^{2}\eta\eta_{c}]^{2}}.$ (20d)
For a given temperature and collection efficiency, the optimal feedback strength $f$ for the stored energy of the battery can be determined as
$f/\Gamma=\frac{1}{2}[1+\sqrt{1+4n_{\rm T}\eta(1-\eta_{c})}],$ (21)
and the corresponding maximum value of $\rho_{11}(\infty)$ is obtained as
$\rho_{11}^{\rm max}(\infty)=\frac{1}{4}\left[1+\frac{\eta\sqrt{1+\mu}}{1+\mu+(1-\eta)\sqrt{1+\mu}}\right]^{2},$ (22)
where $\mu=4n_{\rm T}\eta(1-\eta_{c})$. The existence of a finite temperature increases the required optimal feedback strength $f$, reduces the steady-state population $\rho_{11}(\infty)$, and prevents the battery from being fully charged.

In Fig. 9, we set the coupling strength between spins to $J=h$, and the total measurement efficiency is $\eta=0.64$. Whether the spin-chain quantum battery is composed of two spins [Fig. 9(a)] or four spins [Fig. 9(b)], it can be clearly seen that the ergotropy and stored energy of the battery in the steady state decrease monotonically with increasing temperature. Moreover, the finite temperature increases the required optimal feedback strength, as shown by the corresponding insets; we once again observe that the feedback control fails in terms of the ergotropy when the temperature is relatively low, similar to the behavior in Fig. 5(a). Fig. 9(c) depicts the change of the effective space utilization rate of the battery with temperature, and the results show that $R(\infty)$ decreases as the temperature increases. Finally, the energy stored in the battery at different temperatures is plotted as a function of $\Gamma t$ in Fig. 9(d), where the feedback strength is set to the corresponding optimal value. It indicates that the time for the system to reach the steady state is shortened as the temperature increases. Although the finite temperature is not conducive to energy storage, the stored energy can still be maintained at a high value when the temperature is relatively low ($n_{\rm T}\in[0,0.2]$).

## IV Summary

In summary, we have constructed a dissipative $XXX$ spin-chain quantum battery model based on homodyne measurement. Its core idea is to use continuous measurement records to control the system dynamics. The analytical solution of the two-spin $XXX$ model shows that the battery can be charged stably and fully under the optimal feedback condition. We also extend the model to the case of four spins and qualitatively obtain conclusions similar to those of the two-spin model. The interaction between spins expands the capacity of the $XXX$ spin-chain battery itself, but does not affect the degree of storage and extraction of the battery energy.
The stochastic dynamics of the battery charging process reflects that the present feedback mechanism is effective not only in the ensemble-average sense but also for a single trajectory. Imperfect measurement and finite temperature weaken the influence of the feedback control on the system, and are not conducive to the storage and extraction of battery energy. In addition, under the imperfect measurement condition there is a critical value $J_{c}$ of the coupling strength beyond which the feedback control fails. For the $XY$ spin-chain quantum battery, increasing the anisotropy is beneficial for improving the effective space utilization rate; meanwhile, a lower dissipation rate is more favorable for battery energy storage. For the $XYZ$ spin-chain quantum battery, a relatively large spontaneous emission rate helps to improve the effective space utilization rate. We hope that our work will provide a valuable reference for future research on open quantum batteries.

## Acknowledgements

X. Q. Shao would like to express his thanks to Dr. Lingzhen Guo for his valuable discussion on the numerical simulation of homodyne-based feedback control, and Dr. Gangcheng Wang for his helpful comment on the features of the $XXX$ spin-chain model. The anonymous reviewers are also thanked for constructive comments that helped in improving the quality of this paper. This work is supported by the National Natural Science Foundation of China (NSFC) under Grants No. 11774047 and No. 12174048.

## Appendix A The relevant derivation of Eq. (4)

In the feedback control process, the photons emitted by each spin are collected into a beam with annihilation operator $\sqrt{\Gamma}\sigma^{-}_{j}$. The beam then enters one port of a 50:50 beam splitter, while a strong local oscillator $\beta$ enters the other port Wiseman and Milburn (1993); Wiseman (1994); Gardiner (1985); Wiseman and Milburn (2009). After output by the beam splitter, two field operators $b_{1j}$ and $b_{2j}$ are obtained in the form
$b_{kj}=[\sqrt{\Gamma}\sigma^{-}_{j}-(-1)^{k}\beta]/\sqrt{2}.$ (23)
When both fields are detected, the two corresponding photocurrents are
$\overline{J_{kj}}=\langle|\beta|^{2}-(-1)^{k}(\sqrt{\Gamma}\beta\sigma^{+}_{j}+\sqrt{\Gamma}\beta^{\ast}\sigma^{-}_{j})+\Gamma\sigma^{+}_{j}\sigma^{-}_{j}\rangle/2,$ (24)
where $\sqrt{\Gamma}\beta\sigma^{+}_{j}+\sqrt{\Gamma}\beta^{\ast}\sigma^{-}_{j}$ represents the interference between the system and the local oscillator. The above equation refers to the average photocurrent; in practice, however, the instantaneous photocurrent rather than the average photocurrent is recorded, and due to the randomness of the quantum measurement process the photocurrent fluctuates randomly. In the ideal case of homodyne detection, namely $|\beta|^{2}\gg\Gamma$, the rate of photodetections tends to infinity, and the corresponding photocounts can be approximated as signal plus Gaussian white noise,
$J^{\rm{hom}}_{j}(t)=\frac{J_{1j}(t)-J_{2j}(t)}{|\beta|}=\sqrt{\Gamma}\langle e^{-i\Phi}\sigma^{+}_{j}+e^{i\Phi}\sigma^{-}_{j}\rangle+\xi(t),$ (25)
where $\Phi={\rm arg}\,\beta$ is the phase of the local oscillator, and $\xi(t)=dw(t)/dt$ represents Gaussian white noise satisfying a normal distribution, with $dw(t)$ a complex Wiener increment.
When a detector with efficiency $\eta$ is considered in the whole process, the above expression can be modified as
$J^{\rm{hom}}_{j}(t)=\langle\sigma^{x}_{j}\rangle+\frac{\xi(t)}{\sqrt{\eta\Gamma}},$ (26)
where we have set $\Phi=0$ for simplicity.

## Appendix B Steady-state solution of the two-body $XXX$ spin-chain quantum battery

The Hilbert space of the considered $XXX$ spin-chain quantum battery is spanned by the following four basis states: $|1\rangle=\mid\uparrow\uparrow\rangle$, $|2\rangle=\mid\uparrow\downarrow\rangle$, $|3\rangle=\mid\downarrow\uparrow\rangle$, and $|4\rangle=\mid\downarrow\downarrow\rangle$. The density matrix $\rho$ of the system at an arbitrary time $t$ can be written as
$\rho(t)=\left(\begin{array}{cccc}\rho_{11}(t)&\rho_{12}(t)&\rho_{13}(t)&\rho_{14}(t)\\ \rho_{21}(t)&\rho_{22}(t)&\rho_{23}(t)&\rho_{24}(t)\\ \rho_{31}(t)&\rho_{32}(t)&\rho_{33}(t)&\rho_{34}(t)\\ \rho_{41}(t)&\rho_{42}(t)&\rho_{43}(t)&\rho_{44}(t)\end{array}\right).$ (27)
By solving Eq. (9), we obtain the following coupled differential equations for the diagonal elements:
$\dot{\rho}_{11}(t)=-2\Gamma\rho_{11}(t)-4f\cos\alpha\,\rho_{11}(t)+\frac{f^{2}}{\Gamma\eta}[\rho_{22}(t)+\rho_{33}(t)-2\rho_{11}(t)],$ (28)
$\dot{\rho}_{22}(t)=2iJ[\rho_{23}(t)-\rho_{32}(t)]+(\Gamma+2f\cos\alpha)[\rho_{11}(t)-\rho_{22}(t)]+\frac{f^{2}}{\Gamma\eta}[\rho_{11}(t)-2\rho_{22}(t)+\rho_{44}(t)],$ (29)
$\dot{\rho}_{33}(t)=-2iJ[\rho_{23}(t)-\rho_{32}(t)]+(\Gamma+2f\cos\alpha)[\rho_{11}(t)-\rho_{33}(t)]+\frac{f^{2}}{\Gamma\eta}[\rho_{11}(t)-2\rho_{33}(t)+\rho_{44}(t)],$ (30)
and $\dot{\rho}_{44}(t)=-\dot{\rho}_{11}(t)-\dot{\rho}_{22}(t)-\dot{\rho}_{33}(t)$. The corresponding differential equations for the off-diagonal elements are
$\dot{\rho}_{12}(t)=\dot{\rho}_{21}^{\ast}(t)=-i\{h\rho_{12}(t)+2J[\rho_{12}(t)-\rho_{13}(t)]\}-f\{[3\rho_{12}(t)+\rho_{21}(t)]\cos\alpha+i[\rho_{12}(t)+\rho_{21}(t)]\sin\alpha\}-\frac{f^{2}}{\Gamma\eta}[2\rho_{12}(t)+e^{2i\alpha}\rho_{21}(t)-\rho_{34}(t)]-\frac{3\Gamma\rho_{12}(t)}{2},$ (31)
$\dot{\rho}_{13}(t)=\dot{\rho}_{31}^{\ast}(t)=-i\{h\rho_{13}(t)+2J[\rho_{13}(t)-\rho_{12}(t)]\}-f\{[3\rho_{13}(t)+\rho_{31}(t)]\cos\alpha+i[\rho_{13}(t)+\rho_{31}(t)]\sin\alpha\}-\frac{f^{2}}{\Gamma\eta}[2\rho_{13}(t)+e^{2i\alpha}\rho_{31}(t)-\rho_{24}(t)]-\frac{3\Gamma\rho_{13}(t)}{2},$ (32)
$\dot{\rho}_{14}(t)=\dot{\rho}_{41}^{\ast}(t)=-e^{i\alpha}f[2\rho_{14}(t)+\rho_{23}(t)+\rho_{32}(t)]-2ih\rho_{14}(t)-\Gamma\rho_{14}(t)-\frac{f^{2}}{\Gamma\eta}\{2\rho_{14}(t)+e^{2i\alpha}[\rho_{23}(t)+\rho_{32}(t)]\},$ (33)
$\dot{\rho}_{23}(t)=\dot{\rho}_{32}^{\ast}(t)=2iJ[\rho_{22}(t)-\rho_{33}(t)]-\Gamma\rho_{23}(t)+\frac{f^{2}}{\Gamma\eta}\{-2\rho_{23}(t)-e^{-2i\alpha}[\rho_{14}(t)+e^{4i\alpha}\rho_{41}(t)]\}-f\cos\alpha[\rho_{14}(t)+2\rho_{23}(t)+\rho_{41}(t)]+if\sin\alpha[\rho_{14}(t)-\rho_{41}(t)],$ (34)
$\dot{\rho}_{24}(t)=\dot{\rho}_{42}^{\ast}(t)=[\rho_{13}(t)-\frac{\rho_{24}(t)}{2}]\Gamma-i\{h\rho_{24}(t)+2J[\rho_{34}(t)-\rho_{24}(t)]\}+\frac{f^{2}}{\Gamma\eta}[\rho_{13}(t)-2\rho_{24}(t)-e^{2i\alpha}\rho_{42}(t)]+f\{[2\rho_{13}(t)-\rho_{24}(t)-\rho_{42}(t)]\cos\alpha-i[\rho_{24}(t)+\rho_{42}(t)]\sin\alpha\},$ (35)
and
$\displaystyle\dot{\rho}_{34}(t)$ $\displaystyle=$ $\displaystyle\dot{\rho}_{43}^{\ast}(t)=[\rho_{12}(t)-\frac{\rho_{34}(t)}{2}]\Gamma-i\\{h\rho_{34}(t)+2J[\rho_{24}(t)-\rho_{34}(t)]\\}+\frac{f^{2}}{\Gamma\eta}[\rho_{12}(t)-2\rho_{34}(t)-e^{2i\alpha}\rho_{43}(t)]$ (36) $\displaystyle+f\\{[2\rho_{12}(t)-\rho_{34}(t)-\rho_{43}(t)]\cos\alpha-i[\rho_{34}(t)+\rho_{43}(t)]\sin\alpha\\}.$ Then, by imposing the condition $\dot{\rho}(t)=0$, we get the following steady-state solution $\rho_{11}(\infty)=\frac{f^{4}}{(2f^{2}+2f\Gamma\eta\cos{\alpha}+\Gamma^{2}\eta)^{2}},\qquad\rho_{22}(\infty)=\frac{f^{2}(f^{2}+2f\Gamma\eta\cos{\alpha}+\Gamma^{2}\eta)}{(2f^{2}+2f\Gamma\eta\cos{\alpha}+\Gamma^{2}\eta)^{2}},$ (37) and $\rho_{33}(\infty)=\rho_{22}(\infty)$, $\rho_{44}(\infty)=1-\rho_{11}(\infty)-\rho_{22}(\infty)-\rho_{33}(\infty)$. According to Eq. (10), the energy stored by the quantum battery is $\displaystyle\Delta E_{XXX}(\infty)$ $\displaystyle=$ $\displaystyle(J+h)\rho_{11}(\infty)-J\rho_{22}(\infty)+2J\rho_{32}(\infty)+2J\rho_{23}(\infty)-J\rho_{33}(\infty)+(J-h)\rho_{44}(\infty)$ (38) $\displaystyle-[(J+h)\rho_{11}(0)-J\rho_{22}(0)+2J\rho_{32}(0)+2J\rho_{23}(0)-J\rho_{33}(0)+(J-h)\rho_{44}(0)].$

## Appendix C The highest-energy state of an $N$-site $XXX$ spin chain

For a general $N$-site $XXX$ spin chain ($\gamma=0$, $\Delta=1$), the corresponding Hamiltonian can be rewritten as $H_{B}=h\sum\limits_{j=1}^{N}S_{j}^{z}+\sum\limits_{j=1}^{N-1}2J[(\bm{S}_{j}+\bm{S}_{j+1})^{2}-\bm{S}_{j}^{2}-\bm{S}_{j+1}^{2}],$ (39) where $\bm{S}_{j}\equiv(S^{x}_{j},S^{y}_{j},S^{z}_{j})=(\sigma^{x}_{j},\sigma^{y}_{j},\sigma^{z}_{j})/2$. Since the two parts of the above Hamiltonian commute, it is easy to check that the highest-energy eigenstate of the $XXX$ spin chain is $\mid\uparrow\uparrow\cdots\uparrow\rangle$ for all positive real parameters; this state is independent of the nearest-neighbor coupling strength between spins, and the corresponding eigenenergy is $E_{\rm max}=Nh/2+(N-1)J$ (a direct numerical check is sketched after the figure captions below).

In Sec. III.2, we observed that the optimal feedback conditions of the quantum battery composed of four spins are the same as those of the two-spin battery. We now present, in Fig. 10, numerical results for the stored energy and ergotropy of a battery composed of six spins as functions of the parameters $\alpha$ and $\chi$. For convenience, we also reproduce the numerical simulation results for the quantum battery composed of two spins.

Figure 10: (a)-(b) describe the stored energy and ergotropy of the battery with spin number $N=2$ as functions of $\alpha$ and $\chi$, respectively. Other parameters are $\eta=1$ and $J=h$. (c)-(d) are the same as (a)-(b), but the spin number is $N=6$.

Fig. 11 further depicts the variation of the effective space utilization rate and of $\mathcal{E}(\infty)/\Delta E(\infty)$ in the steady state with the number of spins, under the three groups of optimal feedback conditions. These results indicate, to some extent, that our feedback scheme is independent of the number of spins constituting the spin-chain quantum battery.

Figure 11: The values of $R(\infty)$ and $\mathcal{E}(\infty)/\Delta E(\infty)$ of the battery with different spin numbers under the three groups of optimal feedback parameters, where the measurement efficiency is $\eta=1$.

Figure 12: (a) The critical value $J_{c}$ of the spin-spin interaction in energy storage varies with the spin number. (b) The critical value $J_{c}$ of the spin-spin interaction in energy extraction (ergotropy) varies with the spin number. Different curves correspond to different measurement efficiencies $\eta$.
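The eigenenergy claim for the $XXX$ chain of Eq. (39) can be verified by direct diagonalization for small $N$. The following Python sketch (ours, not part of the original derivation; the values $h=J=1$ are illustrative) builds $H_{B}$ via Kronecker products, using $2J[(\bm{S}_{j}+\bm{S}_{j+1})^{2}-\bm{S}_{j}^{2}-\bm{S}_{j+1}^{2}]=4J\,\bm{S}_{j}\cdot\bm{S}_{j+1}$, and checks that the largest eigenvalue equals $Nh/2+(N-1)J$.

```python
import numpy as np

# Pauli matrices; spin operators are S^a = sigma^a / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, j, N):
    """Embed the single-site operator `op` at site j of an N-spin chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, op if k == j else id2)
    return out

def H_XXX(N, h, J):
    """Eq. (39): h * sum_j S^z_j + 4J * sum_j S_j . S_{j+1}."""
    H = sum(0.5 * h * site_op(sz, j, N) for j in range(N))
    for j in range(N - 1):
        # 4J * S^a_j S^a_{j+1} = J * sigma^a_j sigma^a_{j+1}
        for s in (sx, sy, sz):
            H = H + J * site_op(s, j, N) @ site_op(s, j + 1, N)
    return H

N, h, J = 6, 1.0, 1.0  # illustrative parameter choice
E_max = np.linalg.eigvalsh(H_XXX(N, h, J)).max()
print(E_max, N * h / 2 + (N - 1) * J)  # both equal 8.0 for these parameters
```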
## Appendix D The charging process under imperfect measurement

Under imperfect measurement conditions ($\eta<1$), if the ground state of the battery with $N=2$ is $\mid\downarrow\downarrow\rangle$ ($J<h/4$), the stored energy of the battery in the steady state is $\Delta E_{1}(\infty)=\frac{2(h-2J)\vartheta(\eta+2\chi\eta\cos\alpha)}{2\chi^{2}+2\chi\eta\cos\alpha+\eta}+4(h-J)\vartheta^{2},$ (40) where $\vartheta=\chi^{2}/(2\chi^{2}+2\chi\eta\cos\alpha+\eta)$. Since $(h-2J)>0$ in this regime, $\Delta E_{1}(\infty)$ can always be maximized by selecting $\chi=1$, which means that the feedback control remains valid under imperfect measurement. If instead the ground state of the system is $(\mid\uparrow\downarrow\rangle-\mid\downarrow\uparrow\rangle)/\sqrt{2}$ ($J>h/4$), the stored energy of the battery in the steady state becomes $\Delta E_{2}(\infty)=-h+4J+4J\vartheta^{2}+2(h-2J)\vartheta.$ (41) The third and fourth terms on the r.h.s. of Eq. (41) compete with each other for large $J$; thus the feedback control may fail when $J$ exceeds a certain value $J_{c}$, which can be determined as $J_{c}/h=(2-\eta)/[2(1-\eta)]$ from the condition $\Delta E_{2}(\infty)\mid_{\chi=1}=\Delta E_{2}(\infty)\mid_{\chi=0}$. To illustrate the influence of the measurement efficiency $\eta$ and the spin number $N$ on the critical coupling strength $J_{c}$, Figs. 12(a) and 12(b) show the variation of $J_{c}$ with the spin number for different measurement efficiencies, from the perspectives of energy storage and energy extraction, respectively.

## References

* Alicki and Fannes (2013) R. Alicki and M. Fannes, “Entanglement boost for extractable work from ensembles of quantum batteries,” Phys. Rev. E 87, 042123 (2013).
* Andolina _et al._ (2018) G. M. Andolina, D. Farina, A. Mari, V. Pellegrini, V. Giovannetti, and M. Polini, “Charger-mediated energy transfer in exactly solvable models for quantum batteries,” Phys. Rev. B 98, 205423 (2018).
* (3) Y. Chen and Y. Hasegawa, “Indefinite causal order in quantum batteries,” arXiv:2105.12466.
* Zhang _et al._ (2019) Y.-Y. Zhang, T.-R. Yang, L. Fu, and X. Wang, “Powerful harmonic charging in a quantum battery,” Phys. Rev. E 99, 052106 (2019).
* Delmonte _et al._ (2021) A. Delmonte, A. Crescente, M. Carrega, D. Ferraro, and M. Sassetti, “Characterization of a two-photon quantum battery: Initial conditions, stability and work extraction,” Entropy 23, 612 (2021).
* Crescente _et al._ (2020a) A. Crescente, M. Carrega, M. Sassetti, and D. Ferraro, “Charging and energy fluctuations of a driven quantum battery,” New J. Phys. 22, 063057 (2020a).
* Seah _et al._ (2021) S. Seah, M. Perarnau-Llobet, G. Haack, N. Brunner, and S. Nimmrichter, “Quantum speed-up in collisional battery charging,” Phys. Rev. Lett. 127, 100601 (2021).
* Santos _et al._ (2019) A. C. Santos, B. Çakmak, S. Campbell, and N. T. Zinner, “Stable adiabatic quantum batteries,” Phys. Rev. E 100, 032107 (2019).
* Dou _et al._ (2020) F.-Q. Dou, Y.-J. Wang, and J.-A. Sun, “Closed-loop three-level charged quantum battery,” Europhys. Lett. 131, 43001 (2020).
* Santos _et al._ (2020) A. C. Santos, A. Saguia, and M. S. Sarandy, “Stable and charge-switchable quantum batteries,” Phys. Rev. E 101, 062114 (2020).
* Qi and Jing (2021) S.-F. Qi and J. Jing, “Magnon-mediated quantum battery under systematic errors,” Phys. Rev. A 104, 032606 (2021).
* Andolina _et al._ (2019a) G. M. Andolina, M. Keck, A. Mari, M. Campisi, V. Giovannetti, and M. Polini, “Extractable work, the role of correlations, and asymptotic freedom in quantum batteries,” Phys. Rev. Lett. 122, 047702 (2019a).
* Andolina _et al._ (2019b) G. M. Andolina, M. Keck, A. Mari, V. Giovannetti, and M. Polini, “Quantum versus classical many-body batteries,” Phys. Rev. B 99, 205437 (2019b).
* Rossini _et al._ (2019) D. Rossini, G. M. Andolina, and M. Polini, “Many-body localized quantum batteries,” Phys. Rev. B 100, 115142 (2019).
* Julià-Farré _et al._ (2020) S. Julià-Farré, T. Salamon, A. Riera, M. N. Bera, and M. Lewenstein, “Bounds on the capacity and power of quantum batteries,” Phys. Rev. Res. 2, 023113 (2020).
* Huangfu and Jing (2021) Y. Huangfu and J. Jing, “High-capacity and high-power collective charging with spin chargers,” Phys. Rev. E 104, 024129 (2021).
* Liu _et al._ (2021) J.-X. Liu, H.-L. Shi, Y.-H. Shi, X.-H. Wang, and W.-L. Yang, “Entanglement and work extraction in the central-spin quantum battery,” Phys. Rev. B 104, 245418 (2021).
* Rossini _et al._ (2020) D. Rossini, G. M. Andolina, D. Rosa, M. Carrega, and M. Polini, “Quantum advantage in the charging process of Sachdev-Ye-Kitaev batteries,” Phys. Rev. Lett. 125, 236402 (2020).
* Lu _et al._ (2021) W. Lu, J. Chen, L.-M. Kuang, and X. Wang, “Optimal state for a Tavis-Cummings quantum battery via the Bethe ansatz method,” Phys. Rev. A 104, 043706 (2021).
* Crescente _et al._ (2020b) A. Crescente, M. Carrega, M. Sassetti, and D. Ferraro, “Ultrafast charging in a two-photon Dicke quantum battery,” Phys. Rev. B 102, 245407 (2020b).
* (21) X. Zhang and M. Blaauboer, “Enhanced energy transfer in a Dicke quantum battery,” arXiv:1812.10139.
* Dou _et al._ (2022) F.-Q. Dou, Y.-Q. Lu, Y.-J. Wang, and J.-A. Sun, “Extended Dicke quantum battery with interatomic interactions and driving field,” Phys. Rev. B 105, 115405 (2022).
* Caravelli _et al._ (2020) F. Caravelli, G. Coulter-De Wit, L. P. García-Pintos, and A. Hamma, “Random quantum batteries,” Phys. Rev. Res. 2, 023095 (2020).
* Le _et al._ (2018) T. P. Le, J. Levinsen, K. Modi, M. M. Parish, and F. A. Pollock, “Spin-chain model of a many-body quantum battery,” Phys. Rev. A 97, 022106 (2018).
* Ghosh _et al._ (2020) S. Ghosh, T. Chanda, and A. Sen(De), “Enhancement in the performance of a quantum battery by ordered and disordered interactions,” Phys. Rev. A 101, 032115 (2020).
* Kamin _et al._ (2020a) F. H. Kamin, F. T. Tabesh, S. Salimi, and A. C. Santos, “Entanglement, coherence, and charging process of quantum batteries,” Phys. Rev. E 102, 052109 (2020a).
* Zhao _et al._ (2022) F. Zhao, F.-Q. Dou, and Q. Zhao, “Charging performance of the Su-Schrieffer-Heeger quantum battery,” Phys. Rev. Res. 4, 013172 (2022).
* Barra _et al._ (2022) F. Barra, K. V. Hovhannisyan, and A. Imparato, “Quantum batteries at the verge of a phase transition,” New J. Phys. 24, 015003 (2022).
* Arjmandi _et al._ (2022) M. B. Arjmandi, H. Mohammadi, and A. C. Santos, “Enhancing self-discharging process with disordered quantum batteries,” Phys. Rev. E 105, 054115 (2022).
* (30) S. Mondal and S. Bhattacharjee, “Charging of quantum battery with periodic driving,” arXiv:2112.10451.
* Ghosh and Sen(De) (2022) S. Ghosh and A. Sen(De), “Dimensional enhancements in a quantum battery with imperfections,” Phys. Rev. A 105, 022628 (2022).
* Binder _et al._ (2015) F. C. Binder, S. Vinjanampathy, K. Modi, and J. Goold, “Quantacell: powerful charging of quantum batteries,” New J. Phys. 17, 075015 (2015).
* Ferraro _et al._ (2018) D. Ferraro, M. Campisi, G. M. Andolina, V. Pellegrini, and M. Polini, “High-power collective charging of a solid-state quantum battery,” Phys. Rev. Lett. 120, 117702 (2018).
* Gyhm _et al._ (2022) J. Y. Gyhm, D. Šafránek, and D. Rosa, “Quantum charging advantage cannot be extensive without global operations,” Phys. Rev. Lett. 128, 140501 (2022).
* Farina _et al._ (2019) D. Farina, G. M. Andolina, A. Mari, M. Polini, and V. Giovannetti, “Charger-mediated energy transfer for quantum batteries: An open-system approach,” Phys. Rev. B 99, 035421 (2019).
* Barra (2019) F. Barra, “Dissipative charging of a quantum battery,” Phys. Rev. Lett. 122, 210601 (2019).
* Pirmoradian and Mølmer (2019) F. Pirmoradian and K. Mølmer, “Aging of a quantum battery,” Phys. Rev. A 100, 043833 (2019).
* Tacchino _et al._ (2020) F. Tacchino, T. F. F. Santos, D. Gerace, M. Campisi, and M. F. Santos, “Charging a quantum battery via nonequilibrium heat current,” Phys. Rev. E 102, 062133 (2020).
* Hovhannisyan _et al._ (2020) K. V. Hovhannisyan, F. Barra, and A. Imparato, “Charging assisted by thermalization,” Phys. Rev. Res. 2, 033413 (2020).
* Kamin _et al._ (2020b) F. H. Kamin, F. T. Tabesh, S. Salimi, F. Kheirandish, and A. C. Santos, “Non-Markovian effects on charging and self-discharging process of quantum batteries,” New J. Phys. 22, 083007 (2020b).
* García-Pintos _et al._ (2020) L. P. García-Pintos, A. Hamma, and A. Del Campo, “Fluctuations in extractable work bound the charging power of quantum batteries,” Phys. Rev. Lett. 125, 040601 (2020).
* (42) K. Ito and G. Watanabe, “Collectively enhanced high-power and high-capacity charging of quantum batteries via quantum heat engines,” arXiv:2008.07089.
* Gherardini _et al._ (2020) S. Gherardini, F. Campaioli, F. Caruso, and F. C. Binder, “Stabilizing open quantum batteries by sequential measurements,” Phys. Rev. Res. 2, 013095 (2020).
* Bai and An (2020) S.-Y. Bai and J.-H. An, “Floquet engineering to reactivate a dissipative quantum battery,” Phys. Rev. A 102, 060201 (2020).
* Quach and Munro (2020) J. Q. Quach and W. J. Munro, “Using dark states to charge and stabilize open quantum batteries,” Phys. Rev. Appl. 14, 024092 (2020).
* Tabesh _et al._ (2020) F. T. Tabesh, F. H. Kamin, and S. Salimi, “Environment-mediated charging process of quantum batteries,” Phys. Rev. A 102, 052223 (2020).
* Zhao _et al._ (2021) F. Zhao, F.-Q. Dou, and Q. Zhao, “Quantum battery of interacting spins with environmental noise,” Phys. Rev. A 103, 033715 (2021).
* Ghosh _et al._ (2021) S. Ghosh, T. Chanda, S. Mal, and A. Sen(De), “Fast charging of a quantum battery assisted by noise,” Phys. Rev. A 104, 032207 (2021).
* Peng _et al._ (2021) L. Peng, W.-B. He, S. Chesi, H.-Q. Lin, and X.-W. Guan, “Lower and upper bounds of quantum battery power in multiple central spin systems,” Phys. Rev. A 103, 052220 (2021).
* Santos (2021) A. C. Santos, “Quantum advantage of two-level batteries in the self-discharging process,” Phys. Rev. E 103, 042118 (2021).
* Xu _et al._ (2021) K. Xu, H.-J. Zhu, G.-F. Zhang, and W.-M. Liu, “Enhancing the performance of an open quantum battery via environment engineering,” Phys. Rev. E 104, 064143 (2021).
* Yao and Shao (2021) Y. Yao and X. Q. Shao, “Stable charging of a Rydberg quantum battery in an open system,” Phys. Rev. E 104, 044116 (2021).
* (53) F. Centrone, L. Mancino, and M. Paternostro, “Charging batteries with quantum squeezing,” arXiv:2106.07899.
* Mitchison _et al._ (2021) M. T. Mitchison, J. Goold, and J. Prior, “Charging a quantum battery with linear feedback control,” Quantum 5, 500 (2021).
* Carrasco _et al._ (2022) J. Carrasco, J. R. Maze, C. Hermann-Avigliano, and F. Barra, “Collective enhancement in dissipative quantum batteries,” Phys. Rev. E 105, 064119 (2022).
* Doherty and Jacobs (1999) A. C. Doherty and K. Jacobs, “Feedback control of quantum systems using continuous state estimation,” Phys. Rev. A 60, 2700–2711 (1999).
* Carvalho _et al._ (2008) A. R. R. Carvalho, A. J. S. Reid, and J. J. Hope, “Controlling entanglement by direct quantum feedback,” Phys. Rev. A 78, 012334 (2008).
* Wang _et al._ (2005) J. Wang, H. M. Wiseman, and G. J. Milburn, “Dynamical creation of entanglement by homodyne-mediated feedback,” Phys. Rev. A 71, 042309 (2005).
* Liu _et al._ (2010) Z. Liu, L. Kuang, K. Hu, L. Xu, S. Wei, L. Guo, and X.-Q. Li, “Deterministic creation and stabilization of entanglement in circuit QED by homodyne-mediated feedback control,” Phys. Rev. A 82, 032335 (2010).
* Bushev _et al._ (2006) P. Bushev, D. Rotter, A. Wilson, F. Dubin, C. Becher, J. Eschner, R. Blatt, V. Steixner, P. Rabl, and P. Zoller, “Feedback cooling of a single trapped ion,” Phys. Rev. Lett. 96, 043003 (2006).
* Wang and Wiseman (2001) J. Wang and H. M. Wiseman, “Feedback-stabilization of an arbitrary pure state of a two-level atom,” Phys. Rev. A 64, 063810 (2001).
* Campagne-Ibarcq _et al._ (2016) P. Campagne-Ibarcq, S. Jezouin, N. Cottet, P. Six, L. Bretheau, F. Mallet, A. Sarlette, P. Rouchon, and B. Huard, “Using spontaneous emission of a qubit as a resource for feedback control,” Phys. Rev. Lett. 117, 060502 (2016).
* Genoni _et al._ (2013) M. G. Genoni, S. Mancini, and A. Serafini, “Optimal feedback control of linear quantum systems in the presence of thermal noise,” Phys. Rev. A 87, 042333 (2013).
* Smith _et al._ (2002) W. P. Smith, J. E. Reiner, L. A. Orozco, S. Kuhr, and H. M. Wiseman, “Capture and release of a conditional state of a cavity QED system by quantum feedback,” Phys. Rev. Lett. 89, 133601 (2002).
* Armen _et al._ (2002) M. A. Armen, J. K. Au, J. K. Stockton, A. C. Doherty, and H. Mabuchi, “Adaptive homodyne measurement of optical phase,” Phys. Rev. Lett. 89, 133602 (2002).
* Geremia _et al._ (2004) J. M. Geremia, J. K. Stockton, and H. Mabuchi, “Real-time quantum feedback control of atomic spin-squeezing,” Science 304, 270–273 (2004).
* Wiseman and Milburn (1993) H. M. Wiseman and G. J. Milburn, “Quantum theory of optical feedback via homodyne detection,” Phys. Rev. Lett. 70, 548–551 (1993).
* Wiseman (1994) H. M. Wiseman, “Quantum theory of continuous feedback,” Phys. Rev. A 49, 2133–2150 (1994).
* Gardiner (1985) C. W. Gardiner, _Handbook of Stochastic Methods_ (Springer, Berlin, 1985).
* Wiseman and Milburn (2009) H. M. Wiseman and G. J. Milburn, _Quantum Measurement and Control_ (Cambridge University Press, 2009).
* Allahverdyan _et al._ (2004) A. E. Allahverdyan, R. Balian, and Th. M. Nieuwenhuizen, “Maximal work extraction from finite quantum systems,” Europhys. Lett. 67, 565–571 (2004).
* Lenard (1978) A. Lenard, “Thermodynamical proof of the Gibbs formula for elementary quantum systems,” J. Stat. Phys. 19, 575–586 (1978).
# Lattice ground states for embedded-atom models in 2D and 3D

Laurent Bétermin Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria<EMAIL_ADDRESS>https://sites.google.com/site/homepagelaurentbetermin/ , Manuel Friedrich Applied Mathematics, University of Münster, Einsteinstr. 62, D-48149 Münster, Germany<EMAIL_ADDRESS>https://www.uni-muenster.de/AMM/en/Friedrich/ and Ulisse Stefanelli Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria, Vienna Research Platform on Accelerating Photoreaction Discovery, University of Vienna, Währingerstraße 17, 1090 Wien, Austria, and Istituto di Matematica Applicata e Tecnologie Informatiche E. Magenes - CNR, via Ferrata 1, 27100 Pavia, Italy<EMAIL_ADDRESS>http://www.mat.univie.ac.at/$\sim$stefanelli

###### Abstract.

The Embedded-Atom Model (EAM) provides a phenomenological description of atomic arrangements in metallic systems. It consists of a configurational energy depending on atomic positions and featuring the interplay of two-body atomic interactions and nonlocal effects due to the corresponding electronic clouds. The purpose of this paper is to mathematically investigate the minimization of the EAM energy among lattices in two and three dimensions. We present a suite of analytical and numerical results under different reference choices for the underlying interaction potentials. In particular, Gaussian, inverse-power, and Lennard-Jones-type interactions are addressed.

###### Key words and phrases: Embedded-atom model, lattice energy minimization, Epstein zeta function.

###### 2010 Mathematics Subject Classification: 70G75, 74G65, 74N05

## 1\. Introduction

Understanding the structure of matter is a central scientific and technological quest, cutting across disciplines and motivating an ever-increasing computational effort. First-principles calculations deliver accurate predictions but are often impeded by the inherent quantum complexity as systems grow in size [30]. One is hence led to consider a range of approximations. The minimization of empirical atomic pair-potentials represents the simplest such approximation, capable of describing specific properties of large-scale atomic systems. Still, atomic pair-interactions fall short of describing the basic nature of metallic bonding, which is inherently multi-body, and often deliver inaccurate predictions for metallic systems.

The Embedded-Atom Model (EAM) is a semi-empirical, many-atom potential aiming at describing the atomic structure of metallic systems by including a nonlocal electronic correction. Introduced by Daw and Baskes [17], it has been used to efficiently address different aspects of atomic arrangements, including defects, dislocations, fracture, grain boundary structure and energy, surface structure, and epitaxial growth. Being capable of reproducing experimental observations and relatively simple to implement, the Embedded-Atom Model is now routinely used in molecular dynamics simulations [19, 28]. In particular, it has been applied to a variety of metallic systems [22], including the alkali metals Li, Na, K [20, 27, 38], the transition metals Fe, Ni, Cu, Pd, Ag, Pt, Au [14, 23, 27, 28], the post-transition metals Al, Pb [14, 25, 35], the metalloid Si [4], and some of their alloys [14, 26].
In the case of a metallic system with a single atomic species, the EAM energy is specified as $\sum_{i}F(\overline{\rho}_{i})+\sum_{i\not=j}\phi(|x_{i}-x_{j}|)\quad\text{with}\quad\overline{\rho}_{i}=\sum_{j\not=i}\rho(|x_{i}-x_{j}|).$ Here, $\\{x_{i}\\}$ indicate atomic positions in ${\mathbb{R}}^{d}$ and the long-range interaction potential $\phi\colon\mathbb{R}_{+}:=(0,\infty)\to\mathbb{R}_{+}$ modulates atomic pair-interactions. Atomic positions induce electronic-cloud distributions. The function $\rho\colon\mathbb{R}_{+}\to\mathbb{R}_{+}$ models the long-range electron-cloud contribution of an atom placed at $x_{j}$ on an atom placed at $x_{i}$. The sum $\overline{\rho}_{i}$ describes the cumulative effect on the atom placed at $x_{i}$ of the electronic clouds related to all other atoms. Finally, the function $F\colon\mathbb{R}_{+}\to\mathbb{R}_{+}$ describes the energy needed to place (embed) an atom at position $x_{i}$ in the host electron gas created by the other atoms at positions $\\{x_{j}\\}$.

Purely pair-interaction potentials can be re-obtained from the EAM model by choosing $F=0$ and have been the subject of intense mathematical research under different choices for $\phi$. The reader is referred to [13] for a survey of the available mathematical results. The setting $F=0$ indeed corresponds to the so-called Born-Oppenheimer approximation [30], which is well adapted to the case of very low temperatures and is based on the successive solution of the electronic and the atomic problem. As mentioned, this approximation turns out not to be always appropriate for metallic systems at finite temperatures [35, 8] and one is asked to tame the quantum nature of the problem. This is however very challenging from the mathematical viewpoint, and rigorous optimality results for point configurations in the quantum setting are scarce [12, 11]. The EAM model hence represents an intermediate model between zero-temperature phenomenological pair-interaction energies and quantum systems. Electronic effects are still determined by atomic positions, but in a more realistic nonlocal fashion when $F$ is nonlinear, resulting in truly multi-body interaction systems, see [17, 21, 35] and [19] for a review.

The aim of this paper is to investigate point configurations minimizing the EAM energy. Being interested in periodic arrangements, we restrict our analysis to the class of lattices, namely infinite configurations of the form $L=\oplus_{i=1}^{d}\mathbb{Z}u_{i}$ where $\\{u_{i}\\}_{i=1}^{d}$ is a basis of $\mathbb{R}^{d}$. This reduces the optimality problem to finite dimensions, making it analytically and numerically amenable. In particular, the EAM energy-per-atom of the lattice $L$ takes the specific form $\mathcal{E}[L]=F\Big{(}\sum_{q\in L\setminus\\{0\\}}\rho(|q|)\Big{)}+\sum_{q\in L\setminus\\{0\\}}\phi(|q|).$ In the classical pair-interaction case $F=0$, the lattice energy $\mathcal{E}$ has already received attention and a variety of results are available, see [29, 32, 15, 5, 10, 9] and the references therein. Such results are of course dependent on the choice of the potential $\phi$. Three reference choices for $\phi$ are the Gaussian $\phi(r)=e^{-\pi\delta r^{2}}$ for $\delta>0$, the inverse-power law $\phi(r)=r^{-s}$ for $s>d$, and the Lennard-Jones-type form $\phi(r)=ar^{-\alpha}-br^{-\beta}$ for $d<\beta<\alpha$ and $a,\,b>0$.
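As an illustration of how the energy-per-atom $\mathcal{E}[L]$ can be evaluated in practice, the following Python sketch (ours, not from the paper) approximates the lattice sums by truncating the integer coefficients to $[-M,M]$; the cutoff $M$ and the parameter values are arbitrary illustrative choices.

```python
import numpy as np

def lattice_points(U, M=30):
    """Nonzero points of L = Z u_1 + ... + Z u_d (columns of U), coefficients in [-M, M]."""
    d = U.shape[1]
    grids = np.meshgrid(*([np.arange(-M, M + 1)] * d), indexing="ij")
    coeffs = np.stack([g.ravel() for g in grids], axis=1)
    coeffs = coeffs[np.any(coeffs != 0, axis=1)]  # remove q = 0
    return coeffs @ U.T

def eam_energy(U, F, rho, phi, M=30):
    """Truncated EAM energy-per-atom: F(sum_q rho(|q|)) + sum_q phi(|q|)."""
    r = np.linalg.norm(lattice_points(U, M), axis=1)
    return F(np.sum(rho(r))) + np.sum(phi(r))

# Example: F(r) = r log r, rho(r) = r^{-6}, phi = classical Lennard-Jones
U = np.eye(2)  # basis of the square lattice Z^2
print(eam_energy(U, lambda r: r * np.log(r),
                 lambda r: r**-6.0,
                 lambda r: r**-12.0 - r**-6.0))
```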
In the Gaussian case, it has been shown by Montgomery [29] that, for all $\delta>0$, the triangular lattice of unit density is the unique minimizer (up to isometries) of $\mathcal{E}$ with $F=0$ among unit-density lattices. The same can be checked for the inverse-power-law case by a Mellin-transform argument. More generally, the minimality of the triangular lattice of unit density is conjectured by Cohn and Kumar in [15, Conjecture 9.4] to hold among all unit-density periodic configurations. This fact is called universal optimality and has been recently proved in dimensions $8$ and $24$ for the lattice $\mathsf{E}_{8}$ and the Leech lattice $\Lambda_{24}$, respectively [16]. In the Lennard-Jones case, the minimality in 2d of the triangular lattice at fixed density has been investigated in [11, 5], the minimality in 3d of the cubic lattice is proved in [7], and more general properties in arbitrary dimensions have been investigated in [10]. A recap of the main properties of the Lennard-Jones case is presented in Subsection 2.3. These play a relevant role in our analysis.

In this paper, we focus on the general case $F\not=0$, when $F$ is nonlinear. More precisely, we discuss the reference cases of embedding functions $F$ of the form $F(r)=r^{t}\log(\gamma r)\quad\text{or}\quad F(r)=r^{t}$ for $t,\,\gamma>0$. The first, logarithmic choice is the classical one, chosen to fit the so-called Universal Binding Curve (see, e.g., [31]), and favors a specific minimizing value $r_{0}>0$, see [2, 14]. The second, power-law form favors, on the contrary, $r_{0}=0$ and allows for a particularly effective computational approach. Let us mention that other choices for $F$ could be of interest. In particular, the form $F(r)=-c\sqrt{r}$, $c>0$, is related to the Finnis-Sinclair model [21] and is discussed in Remark 4.3. Some of our theory holds for general functions $F$, provided that they are minimized at a sole value $r_{0}$. We call such functions of one-well type.

The electronic-cloud contribution function $\rho\colon\mathbb{R}_{+}\to\mathbb{R}_{+}$ is assumed to be decreasing and integrable. We specifically focus on the Gaussian and inverse-power law $\rho(r)=e^{-\delta r^{2}}\quad\text{or}\quad\rho(r)=r^{-s}$ for $\delta>0$ and $s>d$, discussed, e.g., in [39] and [17, 21, 35], respectively. As for the pair-interaction potential $\phi\colon\mathbb{R}_{+}\to\mathbb{R}_{+}$, we assume a Lennard-Jones-type form [3, 34] or an inverse-power law [17, 35], i.e., $\phi(r)=ar^{-\alpha}-br^{-\beta}\quad\text{or}\quad\phi(r)=r^{-\alpha}$ for $d<\beta<\alpha$ and $a,\,b>0$. Note that short-ranged potentials $\phi$ have been considered as well [17, 18].

Our main theoretical results amount to identifying minimizers in the specific reference case of $F(r)=r\log r$ and $\rho(r)=r^{-s}$. More precisely, we find the following:

* • (Inverse-power law) If $\phi(r)=r^{-\alpha}$, the minimizers of $\mathcal{E}$ coincide with those of the Lennard-Jones potential $r\mapsto r^{-\alpha}-r^{-s}$, up to rescaling (Theorem 4.1); * • (Lennard-Jones) If $\phi(r)=ar^{-\alpha}-br^{-\beta}$, under some compatibility assumptions on the parameters, the minimizers of $\mathcal{E}$ coincide with those of the Lennard-Jones potential $r\mapsto r^{-\alpha}-r^{-s}$ (Theorem 5.2).

Actually, both results hold for more general embedding functions $F$, see (4.1) and Remarks 4.2–4.5. With this at hand, the problem can be reduced to the pure Lennard-Jones case (i.e., $F=0$), which is already well understood.
In particular, in the two-dimensional case we find that the triangular lattice, up to rescaling and isometries, is the unique minimizer of $\mathcal{E}$ in specific parameter regimes. These theoretical findings are illustrated by numerical experiments in two and three dimensions. By alternatively choosing the Gaussian $\rho(r)=e^{-\delta r^{2}}$, in two dimensions we additionally observe the onset of a phase transition between the triangular and an orthorhombic lattice as $\delta$ decreases. In three dimensions, both in the inverse-power-law case $\rho(r)=r^{-s}$ and in the Gaussian case $\rho(r)=e^{-\delta r^{2}}$, the simple cubic lattice $\mathbb{Z}^{3}$ is favored against the face-centered and the body-centered cubic lattices for small $s$ or small $\delta$, respectively.

In the power-law case $F(r)=r^{t}$, for $\rho$ of inverse-power-law type, $\phi$ of Lennard-Jones type, and specific, physically relevant choices of parameters, one can conveniently reduce the complexity of the optimization problem from the analytical standpoint. This reduction allows us to compute the EAM energy explicitly for any lattice of unit density and hence to investigate minimality numerically in two and three dimensions. Depending on the parameters, the relative minimality of the triangular, square, and orthorhombic lattices in two dimensions and of the simple cubic, body-centered cubic, and face-centered cubic lattices in three dimensions is ascertained.

This is the plan of the paper: notation for potentials and energies is introduced in Subsections 2.1 and 2.2. The two subcases $F=0$ and $\phi=0$ are discussed in Subsection 2.3 and in Section 3, respectively. In particular, known results on Lennard-Jones-type interactions are recalled in Subsection 2.3. The inverse-power-law case $\phi(r)=r^{-\alpha}$ is investigated in Section 4. The Lennard-Jones case $\phi(r)=ar^{-\alpha}-br^{-\beta}$ is addressed theoretically and numerically in Section 5. In particular, Subsection 5.1 contains the classical case $F(r)=r\log r$, and Subsection 5.2 discusses the power-law case $F(r)=r^{t}$.

## 2\. Notation and preliminaries

### 2.1. Lattices

For any dimension $d$, we write $\mathcal{L}_{d}$ for the set of all lattices $L=\bigoplus_{i=1}^{d}\mathbb{Z}u_{i}$, where $\\{u_{i}\\}_{i=1}^{d}$ is a basis of $\mathbb{R}^{d}$. We write $\mathcal{L}_{d}(1)\subset\mathcal{L}_{d}$ for the set of all lattices with unit density, which corresponds to $|\det(u_{1},\ldots,u_{d})|=1$. In dimension two, any lattice $L\in\mathcal{L}_{2}(1)$ can be written as $L:=\mathbb{Z}\left(\frac{1}{\sqrt{y}},0\right)\oplus\mathbb{Z}\left(\frac{x}{\sqrt{y}},\sqrt{y}\right),$ for $(x,y)\in\mathcal{D}$, where $\mathcal{D}=\big{\\{}(x,y)\in\mathbb{R}^{2}\,:\,0\leq x\leq 1/2,\,y>0;\,x^{2}+y^{2}\geq 1\big{\\}}$ (2.1) is the so-called (half) fundamental domain for $\mathcal{L}_{2}(1)$ (see, e.g., [29, Page 76]).
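To illustrate the parameterization (2.1), one can check directly that every admissible pair $(x,y)$ yields a unit-density lattice; the short Python sketch below (an illustration of ours, with arbitrary sample points of $\mathcal{D}$) verifies $|\det(u_{1},u_{2})|=1$.

```python
import numpy as np

def basis(x, y):
    """Basis of the lattice L(x, y) in the parameterization (2.1); columns are u_1, u_2."""
    return np.array([[1 / np.sqrt(y), x / np.sqrt(y)],
                     [0.0, np.sqrt(y)]])

for x, y in [(0.0, 1.0), (0.5, np.sqrt(3) / 2), (0.3, 1.2)]:  # sample points of D
    print(abs(np.linalg.det(basis(x, y))))  # prints 1.0: unit density
```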
In particular, the square lattice $\mathbb{Z}^{2}$ and the triangular lattice with unit density, denoted by $\mathsf{A}_{2}\in\mathcal{L}_{2}(1)$, are given by the respective choices $(x,y)=(0,1)$ and $(x,y)=\left(1/2,{\sqrt{3}}/{2}\right)$, i.e., $\mathbb{Z}^{2}=\mathbb{Z}(1,0)\oplus\mathbb{Z}(0,1)\quad\text{ and }\quad\mathsf{A}_{2}:=\sqrt{\frac{2}{\sqrt{3}}}\left[\mathbb{Z}(1,0)\oplus\mathbb{Z}\left(\frac{1}{2},\frac{\sqrt{3}}{2}\right)\right].$ In dimension three, the fundamental domain of $\mathcal{L}_{3}(1)$ is much more difficult to describe (see, e.g., [36, Section 1.4.3]), and its 5-dimensional nature makes it impossible to plot, in contrast with the 2-dimensional domain $\mathcal{D}$ defined in (2.1). The Face-Centered Cubic (FCC) and Body-Centered Cubic (BCC) lattices with unit density are respectively indicated by $\mathsf{D}_{3}\in\mathcal{L}_{3}(1)$ and $\mathsf{D}_{3}^{*}\in\mathcal{L}_{3}(1)$, and are defined as $\displaystyle\mathsf{D}_{3}:=2^{-\frac{1}{3}}\left[\mathbb{Z}(1,0,1)\oplus\mathbb{Z}(0,1,1)\oplus\mathbb{Z}(1,1,0)\right];$ $\displaystyle\mathsf{D}_{3}^{*}:=2^{\frac{1}{3}}\left[\mathbb{Z}(1,0,0)\oplus\mathbb{Z}(0,1,0)\oplus\mathbb{Z}\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\right].$

###### Remark 2.1 (Periodic configurations).

All results in this paper are stated in terms of lattices, for the sake of definiteness. Let us however point out that the same statements hold in the more general setting of periodic configurations in dimensions $d\in\\{8,24\\}$, on the basis of the recently proved optimality results from [16]. In dimension $d=2$, universal optimality is only known among lattices, see [29]. Still, the validity of the Cohn-Kumar conjecture (see [15, Conjecture 9.4]) would allow us to consider more general periodic configurations as well.

### 2.2. Potentials and energies

For any dimension $d$, let $\mathcal{S}_{d}$ be the set of all functions $f\colon\mathbb{R}_{+}\to\mathbb{R}$ such that $|f(r)|={\rm O}(r^{-d-\eta})$ for some $\eta>0$ as $r\to\infty$. By $\mathcal{S}_{d}^{+}\subset\mathcal{S}_{d}$ we denote the subset of nonnegative functions. We say that a continuous function $F\colon\mathbb{R}_{+}\to\mathbb{R}$ is a one-well potential if there exists $r_{0}>0$ such that $F$ is decreasing on $(0,r_{0})$ and increasing on $(r_{0},\infty)$. For any $\phi\in\mathcal{S}_{d}$, we define the interaction energy $E_{\phi}\colon\mathcal{L}_{d}\to\mathbb{R}$ by $E_{\phi}[L]:=\sum_{q\in L\backslash\\{0\\}}\phi(|q|).$ (2.2) If $\phi(r)=r^{-s}$, $s>d$, $E_{\phi}[L]$ actually corresponds to the Epstein zeta function, which is defined by $\displaystyle\zeta_{L}(s):=\sum_{q\in L\backslash\\{0\\}}\frac{1}{|q|^{s}}.$ (2.3) For any function $F\colon\mathbb{R}_{+}\to\mathbb{R}$ and for any $\rho\in\mathcal{S}_{d}^{+}$, we define the embedding energy $E_{F,\rho}\colon\mathcal{L}_{d}\to\mathbb{R}$ by $\displaystyle E_{F,\rho}[L]:=F(E_{\rho}[L])\quad\text{with}\quad E_{\rho}[L]:=\sum_{q\in L\backslash\\{0\\}}\rho(|q|).$ (2.4) Finally, for any $\phi\in\mathcal{S}_{d}$, any $\rho\in\mathcal{S}_{d}^{+}$, and any $F\colon\mathbb{R}_{+}\to\mathbb{R}$, we define the _total energy_ $\mathcal{E}\colon\mathcal{L}_{d}\to\mathbb{R}$ by $\displaystyle\mathcal{E}[L]:=E_{F,\rho}[L]+E_{\phi}[L]=F(E_{\rho}[L])+E_{\phi}[L].$ (2.5) In the following, we investigate $\mathcal{E}$ under different choices of the potentials $F$, $\rho$, and $\phi$. In some parts, we will require merely abstract conditions on the potentials, such as a monotone decreasing $\rho$ or a one-well potential $F$.
In other parts, we will consider more specific potentials. In particular, we will choose, for $\gamma,\delta,t,a,b>0$, $s>d$, and $\alpha>\beta>d$, $F(r)\in\\{r^{t},r^{t}\log(\gamma r)\\},\quad\rho(r)\in\\{r^{-s},e^{-\delta r^{2}}\\},\quad\phi(r)\in\\{r^{-\alpha},ar^{-\alpha}-br^{-\beta}\\}.$ Note that the choice of $s$, $\delta$, $\alpha$, and $\beta$ implies that $\phi\in\mathcal{S}_{d}$ and $\rho\in\mathcal{S}_{d}^{+}$, so that the sums in (2.2) and (2.4) are well defined. For any $L\in\mathcal{L}_{d}(1)$, any $\phi\in\mathcal{S}_{d}$, any $\rho\in\mathcal{S}_{d}^{+}$, and any $F\colon\mathbb{R}_{+}\to\mathbb{R}$, we define, if they uniquely exist, the following optimal scaling parameters for the energies: $\displaystyle\lambda^{\mathcal{E}}_{L}:=\mathop{\rm argmin}\nolimits_{\lambda>0}\mathcal{E}[\lambda L],\quad\lambda^{F,\rho}_{L}:=\mathop{\rm argmin}\nolimits_{\lambda>0}E_{F,\rho}[\lambda L],\quad\lambda^{\phi}_{L}:=\mathop{\rm argmin}\nolimits_{\lambda>0}E_{\phi}[\lambda L].$ (2.6)

### 2.3. A recap on the Lennard-Jones-type energy

A classical problem is to study the $F=0$ case for a Lennard-Jones-type potential $\phi(r)=ar^{-\alpha}-br^{-\beta},\quad\alpha>\beta>d,\quad a,\,b>0.$ (2.7) Let us recap some known facts in this case [5, 10], which will be used later on. We start by reducing the minimization problem on _all_ lattices to a minimization problem on lattices of _unit density only_. This is achieved by computing the optimal scaling parameter of the energy $\lambda^{\phi}_{L}$, see (2.6), for each $L\in\mathcal{L}_{d}(1)$, which in turn allows us to find the minimum of the energy among dilations of $L$. More precisely, in case (2.7), for all $\lambda>0$ and all lattices $L\in\mathcal{L}_{d}(1)$, one has $E_{\phi}[\lambda L]=a\lambda^{-\alpha}\zeta_{L}(\alpha)-b\lambda^{-\beta}\zeta_{L}(\beta),$ where we use (2.3). (This energy was first studied in [5, Section 6.3].) Then, we find the unique minimizer $\displaystyle\lambda^{\phi}_{L}=\left(\frac{\alpha a\zeta_{L}(\alpha)}{\beta b\zeta_{L}(\beta)}\right)^{\frac{1}{\alpha-\beta}},$ (2.8) and therefore the minimal energy is given by $\min_{\lambda>0}E_{\phi}[\lambda L]=E_{\phi}[\lambda^{\phi}_{L}L]=\frac{b^{\frac{\alpha}{\alpha-\beta}}\zeta_{L}(\beta)^{\frac{\alpha}{\alpha-\beta}}}{a^{\frac{\beta}{\alpha-\beta}}\zeta_{L}(\alpha)^{\frac{\beta}{\alpha-\beta}}}\left(\left(\frac{\beta}{\alpha}\right)^{\frac{\alpha}{\alpha-\beta}}-\left(\frac{\beta}{\alpha}\right)^{\frac{\beta}{\alpha-\beta}}\right)<0.$ The latter inequality follows from the fact that $\alpha>\beta$. Consequently, for any lattices $L,\Lambda\in\mathcal{L}_{d}(1)$, we have that $E_{\phi}[\lambda_{L}^{\phi}L]\leq E_{\phi}[\lambda^{\phi}_{\Lambda}\Lambda]\iff\frac{\zeta_{L}(\alpha)^{\beta}}{\zeta_{L}(\beta)^{\alpha}}\leq\frac{\zeta_{\Lambda}(\alpha)^{\beta}}{\zeta_{\Lambda}(\beta)^{\alpha}}.$ This means that finding the lattice with minimal energy amounts to minimizing the function $\displaystyle L\mapsto e^{*}(L):=\frac{\zeta_{L}(\alpha)^{\beta}}{\zeta_{L}(\beta)^{\alpha}}$ (2.9) on $\mathcal{L}_{d}(1)$. This is particularly effective in dimension two, where for fixed $(\alpha,\beta)$ the minimizer can be found numerically by plotting $L\mapsto\min_{\lambda}\mathcal{E}[\lambda L]$ in the fundamental domain $\mathcal{D}$. Figure 1 shows the case $(\alpha,\beta)=(12,6)$, i.e., when $\phi$ is the classical Lennard-Jones potential. The global minimum of $E_{\phi}$ in $\mathcal{L}_{2}$ appears to be attained at the triangular lattice $\lambda^{\phi}_{\mathsf{A}_{2}}\mathsf{A}_{2}$.
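The comparison behind Figure 1 can be reproduced with a simple truncation of the lattice sums. The Python sketch below (ours; the cutoff $M=40$ is an arbitrary illustrative choice, adequate since $s>2$ ensures fast convergence) evaluates $e^{*}$ at the square and triangular points of $\mathcal{D}$ using the parameterization (2.1).

```python
import numpy as np

def zeta_L(x, y, s, M=40):
    """Truncated Epstein zeta function (2.3) for the unit-density lattice L(x, y) of (2.1)."""
    m, n = np.meshgrid(np.arange(-M, M + 1), np.arange(-M, M + 1))
    px = (m + x * n) / np.sqrt(y)   # first coordinate of m*u_1 + n*u_2
    py = n * np.sqrt(y)             # second coordinate
    r2 = px**2 + py**2
    r2[r2 == 0] = np.inf            # drop the origin q = 0
    return np.sum(r2 ** (-s / 2))

def e_star(x, y, alpha=12.0, beta=6.0):
    """The quotient e*(L) of (2.9)."""
    return zeta_L(x, y, alpha) ** beta / zeta_L(x, y, beta) ** alpha

print(e_star(0.0, 1.0))             # square lattice Z^2
print(e_star(0.5, np.sqrt(3) / 2))  # triangular lattice A_2: smaller value
```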
For a certain range of parameters $(\alpha,\beta)$, this observation can be rigorously ascertained. Indeed, for $d=2$, it is shown in [5, Theorem 1.2.B.] that the global minimum of $E_{\phi}$ is uniquely achieved by a triangular lattice $\lambda_{\mathsf{A}_{2}}^{\phi}\mathsf{A}_{2}$ if $H(\alpha)<H(\beta),\quad\textnormal{where}\quad H(t):=\frac{1}{2}\pi^{-t/2}\Gamma\left(\frac{t}{2}\right)t,$ (2.10) and $\Gamma$ is the classical Gamma function $\Gamma(r)=\int_{0}^{\infty}x^{r-1}e^{-x}\,{\rm d}x$ for $r>0$. (In the sequel, all statements on uniqueness are intended up to isometries, without further notice.) In fact, under condition (2.10) one has that [5]

* • $\mathsf{A}_{2}$ is the unique minimizer in $\mathcal{L}_{2}(1)$ of $\displaystyle L\mapsto\lambda^{\phi}_{L}=\left(\frac{\alpha a\zeta_{L}(\alpha)}{\beta b\zeta_{L}(\beta)}\right)^{\frac{1}{\alpha-\beta}}$, * • $\mathsf{A}_{2}$ is the unique minimizer in $\mathcal{L}_{2}(1)$ of $e^{*}$ defined in (2.9).

As pointed out in [5, Remark 6.18], it is necessary to choose $2<\beta<\alpha<M\approx 9.2045818$ in order to obtain these optimality results by using the method developed there. In particular, this means that the following pairs of integer exponents can be chosen: $(\alpha,\beta)\in\\{(4,3);(5,3);(6,3);(5,4);(6,4)\\}$. Note that the classical Lennard-Jones potential $(\alpha,\beta)=(12,6)$ is not covered by [5, Theorem 1.2.B.].

Figure 1. Contour plot of $L\mapsto e^{*}(L)=\frac{\zeta_{L}(12)^{6}}{\zeta_{L}(6)^{12}}$ in the fundamental domain $\mathcal{D}$. The triangular lattice $\mathsf{A}_{2}$ with coordinates $(1/2,\sqrt{3}/2)$ appears to be the unique minimizer. Moreover, $\mathbb{Z}^{2}$ with coordinates $(0,1)$ appears to be a saddle point.

We now ask for the minimal scaling parameter $\lambda$ and the corresponding lattice $L\in\mathcal{L}_{d}(1)$ for which $E_{\phi}[\lambda L]$ is minimized. Physically, this would correspond to identifying the first minimum of $E_{\phi}$ starting from a high-density configuration by progressively decreasing the density. We have the following.

###### Proposition 2.2 (Smallest volume meeting the global minimum).

Let $\phi$ be a Lennard-Jones-type potential as in (2.7). If $L_{d}\in\mathcal{L}_{d}(1)$ is the minimizer of $L\mapsto\zeta_{L}(\beta)$ on $\mathcal{L}_{d}(1)$ and $\lambda_{L_{d}}^{\phi}L_{d}$ is the unique global minimizer of $E_{\phi}$ on $\mathcal{L}_{d}$, then $L_{d}$ is the unique minimizer of $L\mapsto\lambda^{\phi}_{L}$ on $\mathcal{L}_{d}(1)$.

###### Proof.

As discussed above, if $\lambda_{L_{d}}^{\phi}L_{d}$ is a global minimizer of $E_{\phi}$ on $\mathcal{L}_{d}$, then $L_{d}$ minimizes the function $e^{*}$ defined in (2.9) on $\mathcal{L}_{d}(1)$. This yields $\displaystyle\frac{\zeta_{L_{d}}(\alpha)^{\beta}}{\zeta_{L_{d}}(\beta)^{\alpha}}\leq\frac{\zeta_{L}(\alpha)^{\beta}}{\zeta_{L}(\beta)^{\alpha}}$ (2.11) for all $L\in\mathcal{L}_{d}(1)$.
We thus have $\left(\frac{\zeta_{L}(\beta)\zeta_{L_{d}}(\alpha)}{\zeta_{L}(\alpha)\zeta_{L_{d}}(\beta)}\right)^{\beta}=\frac{\zeta_{L}(\beta)^{\alpha}\zeta_{L_{d}}(\alpha)^{\beta}}{\zeta_{L}(\alpha)^{\beta}\zeta_{L_{d}}(\beta)^{\alpha}}\left(\frac{\zeta_{L_{d}}(\beta)}{\zeta_{L}(\beta)}\right)^{\alpha-\beta}\leq\left(\frac{\zeta_{L_{d}}(\beta)}{\zeta_{L}(\beta)}\right)^{\alpha-\beta}.$ As we are assuming that $\zeta_{L_{d}}(\beta)\leq\zeta_{L}(\beta)$ for all $L\in\mathcal{L}_{d}(1)$, we further get $\frac{\zeta_{L}(\beta)\zeta_{L_{d}}(\alpha)}{\zeta_{L}(\alpha)\zeta_{L_{d}}(\beta)}\leq\left(\frac{\zeta_{L_{d}}(\beta)}{\zeta_{L}(\beta)}\right)^{\frac{\alpha-\beta}{\beta}}\leq 1,$ (2.12) where we use that $\alpha>\beta$. In view of (2.8), this shows that $\lambda^{\phi}_{L}\geq\lambda_{L_{d}}^{\phi}$ for all $L\in\mathcal{L}_{d}(1)$. If $\lambda^{\phi}_{L}=\lambda_{L_{d}}^{\phi}$, then we have a double equality in (2.12). This also implies equality in (2.11), which is equivalent to $e^{*}(L)=e^{*}(L_{d})$. Therefore, it follows that $L=L_{d}$ up to rotation, by uniqueness of the minimizer $L_{d}$ of $e^{*}$. ∎

Figure 2. Contour plot of $L\mapsto(b/a)^{1/6}\lambda_{L}^{\phi}=\left(\frac{12\zeta_{L}(12)}{6\zeta_{L}(6)}\right)^{1/6}$, see (2.8), in the fundamental domain $\mathcal{D}$. The triangular lattice $\mathsf{A}_{2}$ with coordinates $(1/2,\sqrt{3}/2)$ is the unique minimizer for any choice of $a,\,b>0$.

We refer to Figure 2 for an illustration in the two-dimensional case $(\alpha,\beta)=(12,6)$. Note that in this case the global minimum is not known. Still, the triangular lattice appears to be the first stable structure reached by increasing the volume (decreasing the density). This is in agreement with Figure 1 and Proposition 2.2. Recall that the triangular lattice also minimizes $L\mapsto\zeta_{L}(\beta)$ on $\mathcal{L}_{2}(1)$, as required in the statement of Proposition 2.2, see [29]. Notice that in dimension $d=3$ there is no rigorous result concerning the minimizer of $E_{\phi}$ in $\mathcal{L}_{3}$. Only local minimality results for the cubic lattices $\mathbb{Z}^{3},\mathsf{D}_{3},\mathsf{D}_{3}^{*}$ have been derived in [7]. Numerical investigations suggest that $\lambda_{\mathsf{D}_{3}}^{\phi}\mathsf{D}_{3}$ is the unique minimizer of $E_{\phi}$ in $\mathcal{L}_{3}$ for any values $\alpha>\beta>d$ of the exponents, see, e.g., [40, Figure 5], [10, Figures 5 and 6] and [7, Conjecture 1.7]. Therefore, we can conjecture that $\mathsf{D}_{3}$ is the unique minimizer of $L\mapsto\lambda_{L}^{\phi}$ in $\mathcal{L}_{3}(1)$ by application of Proposition 2.2.

## 3\. Properties of the embedding energy $E_{F,\rho}$

In this section we focus on the properties of the embedding energy $E_{F,\rho}$ given in (2.4). Although other choices for the potential $F$ may be considered (see, e.g., [21, 19]), we concentrate on the one-well case (see, e.g., [14] and references therein). In that case, it is clear that the global minimum of $E_{F,\rho}$ in $\mathcal{L}_{d}$ can be achieved for any $L$ by simply choosing $\lambda$ such that $E_{\rho}[\lambda L]=r_{0}=\mathop{\rm argmin}\nolimits_{r>0}F(r)$. We now ask for the minimal scaling parameter $\lambda$ and the corresponding lattice $L\in\mathcal{L}_{d}(1)$ for which $E_{F,\rho}[\lambda L]$ achieves $\min F$. In other words, what is the minimizer of $L\mapsto\lambda^{F,\rho}_{L}$ in $\mathcal{L}_{d}(1)$ (recall (2.6))?
Physically, this would correspond to reaching the ground state of the embedding energy $\min F$ starting from a high-density configuration by progressively decreasing the density.

###### Theorem 3.1 (Smallest volume meeting the global minimum).

Let $F\colon\mathbb{R}_{+}\to\mathbb{R}$ be a one-well potential and let $\rho\in\mathcal{S}^{+}_{d}$ be strictly decreasing. Then, $\lambda^{F,\rho}_{L}$ exists and $\min F$ is achieved by choosing $\lambda^{F,\rho}_{L}L$ for all $L\in\mathcal{L}_{d}(1)$. Furthermore, if $L_{d}$ is the unique minimizer in $\mathcal{L}_{d}(1)$ of $L\mapsto E_{\rho}[\lambda_{L_{d}}^{F,\rho}L]$, then $L_{d}$ is the unique minimizer in $\mathcal{L}_{d}(1)$ of $L\mapsto\lambda^{F,\rho}_{L}$.

###### Proof.

Let $r_{0}>0$ be the unique minimizer of $F$, namely, $F(r_{0})=\min F$. Given any $L\in\mathcal{L}_{d}(1)$, the fact that $\rho\in\mathcal{S}_{d}^{+}$ is strictly decreasing implies that $\lambda\mapsto E_{\rho}[\lambda L]$ is strictly decreasing and goes to $0$ at infinity and to $\infty$ at $0$. Therefore, there exists a unique $\lambda>0$ such that $E_{\rho}[\lambda L]=r_{0}$. Such $\lambda$ obviously coincides with $\lambda_{L}^{F,\rho}$ given in (2.6). This shows the first part of the statement. Suppose now that $L_{d}$ is the unique minimizer in $\mathcal{L}_{d}(1)$ of $L\mapsto E_{\rho}[\lambda_{L_{d}}^{F,\rho}L]$. Assume by contradiction that there exists $L\in\mathcal{L}_{d}(1)$, $L\neq L_{d}$, with $\lambda^{F,\rho}_{L}\leq\lambda_{L_{d}}^{F,\rho}$. By using that $\lambda\mapsto E_{\rho}[\lambda L]$ is decreasing, this would imply $E_{\rho}[\lambda^{F,\rho}_{L}L]\geq E_{\rho}[\lambda_{L_{d}}^{F,\rho}L]>E_{\rho}[\lambda_{L_{d}}^{F,\rho}L_{d}]=r_{0}=E_{\rho}[\lambda^{F,\rho}_{L}L],$ a contradiction. We thus deduce that $\lambda_{L_{d}}^{F,\rho}\leq\lambda^{F,\rho}_{L}$ for all $L\in\mathcal{L}_{d}(1)$, with equality if and only if $L=L_{d}$. ∎

We note that Theorem 3.1 can be applied to the choice $\rho(r)=r^{-s}$, $s>d$, and the triangular lattice $\mathsf{A}_{2}$, the $\mathsf{E}_{8}$ lattice, or the Leech lattice $\Lambda_{24}$ in dimensions $2$, $8$, and $24$, respectively. In fact, these lattices are the unique minimizers of $L\mapsto E_{\rho}[\lambda L]$ for all $\lambda>0$, see [29, 16]. Let us mention that, in this setting, asking $F$ to be one-well is not restrictive. In fact, if $F$ is a strictly increasing (resp. decreasing) function, no optimal scaling parameter $\lambda>0$ can be found since, for any $L\in\mathcal{L}_{d}(1)$ and owing to the monotonicity of $\lambda\mapsto E_{\rho}[\lambda L]$, the energy $E_{F,\rho}[\lambda L]$ is minimized only in the limit $\lambda\to\infty$ (resp. $\lambda\to 0$).

## 4\. The EAM energy with inverse-power interaction $\phi(r)=r^{-\alpha}$

In this section, we study the energy $\mathcal{E}$ defined in (2.5) when $\phi$ is given by the inverse-power interaction $\phi(r)=r^{-\alpha}$. The main result of this section is the following.

###### Theorem 4.1 (EAM energy for inverse-power interaction).

For any $\alpha>s>d$, let $\rho(r)=r^{-s}$, let $\phi(r)=r^{-\alpha}$, and let $F\in C^{1}(\mathbb{R}_{+})$. We assume that the functions $\displaystyle g(r):=r^{1-{\alpha}/{s}}F^{\prime}(r)\quad\textnormal{and}\quad h(r):=F(r)-\frac{s}{\alpha}rF^{\prime}(r)\quad\quad\text{for $r>0$}$ (4.1) satisfy that $g$ is strictly increasing on $I:=\\{F^{\prime}<0\\}$, that $g(I)=(-\infty,0)$, and that $h\circ g^{-1}$ is strictly decreasing on $(-\infty,0)$. (Note that $g^{-1}$ exists on $(-\infty,0)$ and takes values in $\mathbb{R}_{+}$.)
Then, $\lambda^{\mathcal{E}}_{L}$ exists for all $L\in\mathcal{L}_{d}(1)$ and the following statements are equivalent:

* • $L_{d}$ is the unique minimizer in $\mathcal{L}_{d}(1)$ of $L\mapsto e^{*}(L)=\frac{\zeta_{L}(\alpha)^{s}}{\zeta_{L}(s)^{\alpha}}$, see (2.9); * • $\lambda^{\mathcal{E}}_{L_{d}}L_{d}$ is the unique minimizer of $\mathcal{E}$ in $\mathcal{L}_{d}$; * • $\lambda_{L_{d}}^{\bar{\phi}}L_{d}$ is the unique minimizer in $\mathcal{L}_{d}$ of $E_{\bar{\phi}}$ for $\bar{\phi}(r)=r^{-\alpha}-r^{-s}$, see (2.7).

In particular, if $d=2$ and $H(\alpha)<H(s)$, where $H$ is defined by (2.10), then the unique minimizer of $\mathcal{E}$ in $\mathcal{L}_{2}$ is the triangular lattice $\lambda_{\mathsf{A}_{2}}^{\mathcal{E}}\mathsf{A}_{2}$. Furthermore, if $L_{d}$ is the unique minimizer of $L\mapsto\zeta_{L}(s)$ in $\mathcal{L}_{d}(1)$ as well as a minimizer of $e^{*}$ in $\mathcal{L}_{d}(1)$, then $L_{d}$ is the unique minimizer of $L\mapsto\lambda^{\mathcal{E}}_{L}$ in $\mathcal{L}_{d}(1)$, where $\lambda^{\mathcal{E}}_{L}$ is defined in (2.6).

The gist of this result is the coincidence of the minimizers of $\mathcal{E}$ with those of $E_{\bar{\phi}}$ for $\bar{\phi}(r)=r^{-\alpha}-r^{-s}$ (up to proper rescaling), under quite general choices of $F$. This results in a simplification of the minimality problem for $\mathcal{E}$, as one reduces to the study of minimality for the Lennard-Jones-type potential $\bar{\phi}$, which is already well understood, see Subsection 2.3. In particular, in two dimensions and under the condition $H(\alpha)<H(s)$, the unique minimizer is a properly rescaled triangular lattice. Before proving the theorem, let us present some applications to specific choices of $F$.

###### Remark 4.2 (Application 1 - The classical case $F(r)=r\log r$).

We can apply this theorem to $F(r)=r^{t}\log(\gamma r)$ for $t\in(0,\alpha/s)$ and $\gamma>0$, which is a one-well potential with minimum attained at the point $r_{0}^{t}:=\frac{1}{\gamma}e^{-1/t}$. In particular, the case $F(r)=r\log r$ is admissible since $s<\alpha$. In fact, we have $\displaystyle g(r)=r^{t-\alpha/s}\big{(}t\log(\gamma r)+1\big{)},\quad g^{\prime}(r)=r^{t-\alpha/s-1}\left(\left(t-\frac{\alpha}{s}\right)(t\log(\gamma r)+1)+t\right),$ $\displaystyle h(r)=r^{t}\left(\left(1-\frac{ts}{\alpha}\right)\log(\gamma r)-\frac{s}{\alpha}\right),\quad h^{\prime}(r)=r^{t-1}\left(t\left(1-\frac{ts}{\alpha}\right)\log(\gamma r)-\frac{2ts}{\alpha}+1\right).$ Since $g$ is strictly increasing on $(0,r_{1}^{t})$ for $r_{1}^{t}:=\frac{1}{\gamma}e^{\frac{2ts-\alpha}{t(\alpha-ts)}}$ and $r_{0}^{t}<r_{1}^{t}$, we have that $g$ is strictly increasing on $I$. Moreover, $g(I)=(-\infty,0)$. On the other hand, $h$ is strictly decreasing on $(0,r_{1}^{t})$. Therefore, also $h\circ g^{-1}$ is strictly decreasing on $(-\infty,0)$. Hence, Theorem 4.1 applies.

###### Remark 4.3 (Application 2 - Finnis-Sinclair model).

Theorem 4.1 can also be applied to $F(r)=-c\sqrt{r}$ for $c>0$. This case is known as the long-range Finnis-Sinclair model defined in [35], based on the work of Finnis and Sinclair [21] on the description of cohesion in metals, and also used as a model to test the validity of machine-learning algorithms [24]. In this case, we obtain $g(r)=-\frac{c}{2r^{\frac{\alpha}{s}-\frac{1}{2}}}\quad\textnormal{and}\quad h(r)=c\sqrt{r}\left(\frac{s}{2\alpha}-1\right).$ Since $s<\alpha$, the exponent $\frac{\alpha}{s}-\frac{1}{2}$ is positive, so $g$ is strictly increasing on $I=\\{F^{\prime}<0\\}=\mathbb{R}_{+}$, $g(I)=(-\infty,0)$, and $h$ is strictly decreasing on $\mathbb{R}_{+}$.
Therefore, Theorem 4.1 applies. ###### Remark 4.4 (Application 3 - inverse-power law). Also the inverse-power law $F(r)=r^{-t}$ for $t>0$ satisfies the assumption of the theorem. In fact, we have $g(r)=-tr^{-t-{\alpha}/{s}}\quad\text{and}\quad h(r)=\left(1+\frac{st}{\alpha}\right)r^{-t}.$ In particular, $g$ is strictly increasing on $I=\\{F^{\prime}<0\\}=\mathbb{R}_{+}$ and $g(I)=(-\infty,0)$. Moreover, $h$ is strictly decreasing on $\mathbb{R}_{+}$ and therefore also $h\circ g^{-1}$ is strictly decreasing on $(-\infty,0)$. ###### Remark 4.5 (Application 4 - negative-logarithm). We can apply Theorem 4.1 to the inverse-logarithmic case $F(r)=-\log r$. Indeed, we compute $g(r)=-r^{-\alpha/s}\quad\text{and}\quad h(r)=-\log r+\frac{s}{\alpha}.$ We hence have that $g$ is strictly increasing on $I=\\{F^{\prime}<0\\}=\mathbb{R}_{+}$ and $g(I)=(-\infty,0)$. As $h$ is strictly decreasing on $\mathbb{R}_{+}$, we have that $h\circ g^{-1}$ is strictly decreasing on $(-\infty,0)$. ###### Proof of Theorem 4.1. In view of (2.3) and (2.5), for any $\lambda>0$ and $L\in\mathcal{L}_{d}(1)$ we have that $\mathcal{E}[\lambda L]=F(\lambda^{-s}\zeta_{L}(s))+\lambda^{-\alpha}\zeta_{L}(\alpha).$ The critical points of $\lambda\mapsto\mathcal{E}[\lambda L]$ for fixed $L$ are the solutions of $\partial_{\lambda}\mathcal{E}[\lambda L]=-s\lambda^{-s-1}\zeta_{L}(s)\,F^{\prime}(\lambda^{-s}\zeta_{L}(s))-\alpha\lambda^{-\alpha-1}\zeta_{L}(\alpha)=0.$ (4.2) This is equivalent to $g(\lambda^{-s}\zeta_{L}(s))=-\frac{\alpha}{s}\frac{\zeta_{L}(\alpha)}{\zeta_{L}(s)^{\frac{\alpha}{s}}}=-\frac{\alpha}{s}e^{*}(L)^{\frac{1}{s}},\quad$ where $g$ is given in (4.1), and $e^{*}(L)=\frac{\zeta_{L}(\alpha)^{s}}{\zeta_{L}(s)^{\alpha}}$ was defined in (2.9). Since $g^{-1}$ is positive and strictly increasing on $(-\infty,0)$, we have that the unique critical point is given by $\displaystyle\lambda^{*}:=\left(\frac{\zeta_{L}(s)}{g^{-1}\left(-\frac{\alpha}{s}e^{*}(L)^{\frac{1}{s}}\right)}\right)^{\frac{1}{s}}.$ (4.3) In view of (4.2), we also have that $\partial_{\lambda}\mathcal{E}[\lambda L]\geq 0$ if and only if $g(\lambda^{-s}\zeta_{L}(s))\leq-\frac{\alpha}{s}e^{*}(L)^{\frac{1}{s}}$, which is equivalent to $\lambda\geq\lambda^{*}$. In particular, $\lambda\mapsto\mathcal{E}[\lambda L]$ is decreasing on $(0,\lambda^{*})$ and increasing on $(\lambda^{*},\infty)$. This shows that $\lambda^{*}$ is a minimizer and thus $\lambda^{*}=\lambda^{\mathcal{E}}_{L}$, where $\lambda^{\mathcal{E}}_{L}$ is defined in (2.6). By using the fact that $(\lambda^{\mathcal{E}}_{L})^{-\alpha}\zeta_{L}(\alpha)=-\frac{s}{\alpha}(\lambda^{\mathcal{E}}_{L})^{-s}\zeta_{L}(s)F^{\prime}((\lambda^{\mathcal{E}}_{L})^{-s}\zeta_{L}(s))$ from (4.2) and the identity $\lambda^{*}=\lambda^{\mathcal{E}}_{L}$, the minimal energy among dilated copies $\lambda L$ of a given lattice $L$ can be checked to be $\displaystyle\mathcal{E}[\lambda^{\mathcal{E}}_{L}L]$ $\displaystyle=F\big{(}(\lambda^{\mathcal{E}}_{L})^{-s}\zeta_{L}(s)\big{)}+(\lambda^{\mathcal{E}}_{L})^{-\alpha}\zeta_{L}(\alpha)$ $\displaystyle=F\big{(}(\lambda^{\mathcal{E}}_{L})^{-s}\zeta_{L}(s)\big{)}-\frac{s}{\alpha}(\lambda^{\mathcal{E}}_{L})^{-s}\zeta_{L}(s)F^{\prime}\big{(}(\lambda^{\mathcal{E}}_{L})^{-s}\zeta_{L}(s)\big{)}$ $\displaystyle=h\big{(}(\lambda^{\mathcal{E}}_{L})^{-s}\zeta_{L}(s)\big{)}$ $\displaystyle=h\circ g^{-1}\left(-\frac{\alpha}{s}e^{*}(L)^{\frac{1}{s}}\right),$ where $h$ is defined in (4.1). By assumption $h\circ g^{-1}$ is strictly decreasing on $(-\infty,0)$. 
Hence, $L_{d}$ minimizes $L\mapsto\mathcal{E}[\lambda^{\mathcal{E}}_{L}L]$ in $\mathcal{L}_{d}(1)$ (uniquely) if and only if $L_{d}$ minimizes $e^{*}$ (uniquely). This shows the equivalence of the first two items in the statement. The equivalence to the third item has already been addressed in the discussion before (2.9). The two-dimensional case is a simple application of [5, Theorem 1.2.B.], which ensures that $\mathsf{A}_{2}$ is the unique minimizer of $e^{*}$ in $\mathcal{L}_{2}(1)$, as already recalled in Subsection 2.3. To complete the proof, it remains to show the final statement in $d$ dimensions. Assume that $L_{d}$ is the unique minimizer of $L\mapsto\zeta_{L}(s)$ in $\mathcal{L}_{d}(1)$ as well as a minimizer of $e^{*}$ in $\mathcal{L}_{d}(1)$. In this case, by using (4.3) and the identity $\lambda^{*}=\lambda^{\mathcal{E}}_{L}$, it indeed follows that $L_{d}$ is the unique minimizer of $L\mapsto\lambda^{\mathcal{E}}_{L}$ in $\mathcal{L}_{d}(1)$, since $g^{-1}$ is positive and increasing on $(-\infty,0)$. ∎

## 5\. The EAM energy with Lennard-Jones-type interaction $\phi(r)=ar^{-\alpha}-br^{-\beta}$

We now move on to consider the full EAM energy $\mathcal{E}$ defined in (2.5) for Lennard-Jones-type potentials $\phi$ as in (2.7). We split this section into two parts. At first, we address the classical case $F(r)=r\log r$ analytically and numerically. Afterwards, we provide some further numerical studies for the power-law case $F(r)=r^{t}$.

### 5.1. The classical case $F(r)=r\log r$

We start with two theoretical results and then proceed with several numerical investigations.

#### 5.1.1. Two theoretical results

The following corollary is a straightforward application of Theorem 3.1.

###### Corollary 5.1 (Existence of parameters for the optimality of $\mathsf{A}_{2}$).

Let $F(r)=r^{t}\log(\gamma r),\quad\rho(r)=r^{-s},\quad\textnormal{and}\quad\phi(r)=ar^{-\alpha}-br^{-\beta},$ for $\gamma,t>0$, $s>2$, $\alpha>\beta>2$, and $a,b>0$. Then, given parameters $(\alpha,\beta,\gamma,s,t)$ such that $H(\alpha)<H(\beta)$, where $H$ is defined in (2.10), one can find coefficients $a$ and $b$ such that the unique global minimizer in $\mathcal{L}_{2}$ of $\mathcal{E}$ is the triangular lattice $\lambda_{\mathsf{A}_{2}}\mathsf{A}_{2}$ where $\lambda_{\mathsf{A}_{2}}=e^{\frac{t^{-1}+\log\gamma}{s}}\zeta_{\mathsf{A}_{2}}(s)^{\frac{1}{s}}.$ Moreover, $\mathsf{A}_{2}$ is the unique minimizer of $L\mapsto\lambda^{\mathcal{E}}_{L}$ in $\mathcal{L}_{2}(1)$.

###### Proof.

We first remark that $F$ and $\rho$ satisfy the assumptions of Theorem 3.1. By recalling (2.3), (2.6) and using the fact that $\mathop{\rm argmin}\nolimits F=\frac{1}{\gamma}e^{-1/t}$, we have $\lambda_{\mathsf{A}_{2}}^{F,\rho}=e^{\frac{t^{-1}+\log\gamma}{s}}\zeta_{\mathsf{A}_{2}}(s)^{\frac{1}{s}},$ and $E_{F,\rho}[\lambda_{\mathsf{A}_{2}}^{F,\rho}\mathsf{A}_{2}]=F((\lambda_{\mathsf{A}_{2}}^{F,\rho})^{-s}\zeta_{\mathsf{A}_{2}}(s))=\min F$. On the other hand, we know from [5, Theorem 1.2] that $E_{\phi}$ is uniquely minimized in $\mathcal{L}_{2}$ by $\lambda_{\mathsf{A}_{2}}^{\phi}\mathsf{A}_{2}$ where $\lambda_{\mathsf{A}_{2}}^{\phi}=\left(\frac{\alpha a\zeta_{\mathsf{A}_{2}}(\alpha)}{\beta b\zeta_{\mathsf{A}_{2}}(\beta)}\right)^{\frac{1}{\alpha-\beta}},$ see (2.8).
Hence, if $\lambda_{\mathsf{A}_{2}}^{F,\rho}=\lambda_{\mathsf{A}_{2}}^{\phi}$, then $\lambda_{\mathsf{A}_{2}}^{F,\rho}\mathsf{A}_{2}=\lambda_{\mathsf{A}_{2}}^{\phi}\mathsf{A}_{2}$ is the unique minimizer of the sum of the two energies $E_{F,\rho}$ and $E_{\phi}$. The identity $\lambda_{\mathsf{A}_{2}}^{F,\rho}=\lambda_{\mathsf{A}_{2}}^{\phi}$ is equivalent to the equation $\frac{a}{b}=\frac{\beta\zeta_{\mathsf{A}_{2}}(\beta)}{\alpha\zeta_{\mathsf{A}_{2}}(\alpha)}\zeta_{\mathsf{A}_{2}}(s)^{\frac{\alpha-\beta}{s}}e^{\frac{\alpha-\beta}{s}(t^{-1}+\log\gamma)}.$ For this choice of $a$ and $b$, we thus get that the unique global minimizer in $\mathcal{L}_{2}$ of $\mathcal{E}$ is the triangular lattice $\lambda^{\mathcal{E}}_{\mathsf{A}_{2}}\mathsf{A}_{2}$ with $\lambda^{\mathcal{E}}_{\mathsf{A}_{2}}=\lambda_{\mathsf{A}_{2}}^{F,\rho}=\lambda_{\mathsf{A}_{2}}^{\phi}$. The last statement follows by applying Proposition 2.2 to $L_{d}=\mathsf{A}_{2}$. ∎ The drawback of the result is that it is not _generic_ in the sense that it holds only for specific coefficients $a$ and $b$. We now give a result which holds in any dimension for _all_ coefficients $a,\,b>0$, at the expense of the fact that $\phi$ and $\rho$ need to have the same decay ${\rm O}(r^{-s})$. In this regard, the result is in the spirit of Theorem 4.1 but under the choice $\phi(r)=ar^{-\alpha}-br^{-s}$. ###### Theorem 5.2 (EAM energy for Lennard-Jones-type interaction). Let $F$ be as in Theorem 4.1 and additionally suppose that $F$ is convex and in $C^{2}(\mathbb{R}_{+})$. Let $\rho(r)=r^{-s},\quad\phi(r)=ar^{-\alpha}-br^{-s},\quad\text{ for }\ d<s<\alpha\quad\text{and}\quad a,\,b>0.$ Then, $\lambda^{\mathcal{E}}_{L}$ exists for all $L\in\mathcal{L}_{d}(1)$ and the following statements are equivalent: * • $L_{d}$ is the unique minimizer of $L\mapsto e^{*}(L)=\frac{\zeta_{L}(\alpha)^{s}}{\zeta_{L}(s)^{\alpha}}$, see (2.9); * • $\lambda_{L_{d}}^{\mathcal{E}}L_{d}$ is the unique minimizer of $\mathcal{E}$ in $\mathcal{L}_{d}$; * • $\lambda_{L_{d}}^{\phi}L_{d}$ is the unique minimizer in $\mathcal{L}_{d}$ of $E_{{\phi}}$. In particular, when $d=2$ and $H(\alpha)<H(s)$ where $H$ is defined by (2.10), then the unique minimizer of $\mathcal{E}$ in $\mathcal{L}_{2}$ is the triangular lattice $\lambda_{\mathsf{A}_{2}}^{\mathcal{E}}\mathsf{A}_{2}$. Furthermore, if $L_{d}$ is the unique minimizer of $L\mapsto\zeta_{L}(s)$ in $\mathcal{L}_{d}(1)$ as well as a minimizer of $e^{*}$ in $\mathcal{L}_{d}(1)$, then $L_{d}$ is the unique minimizer of $L\mapsto\lambda^{\mathcal{E}}_{L}$ in $\mathcal{L}_{d}(1)$. ###### Proof. In view of (2.3), the energy $\mathcal{E}$ can be written as $\mathcal{E}[L]=F(\zeta_{L}(s))+a\zeta_{L}(\alpha)-b\zeta_{L}(s)=a\big{(}\tilde{F}(\zeta_{L}(s))+\zeta_{L}(\alpha)\big{)},$ where $\tilde{F}(r)=a^{-1}(F(r)-br)$. In a similar fashion to (4.1), we define $\tilde{g}(r):=r^{1-{\alpha}/{s}}\tilde{F}^{\prime}(r)=a^{-1}g(r)-\frac{b}{a}r^{1-{\alpha}/{s}},\quad\tilde{h}(r):=\tilde{F}(r)-\frac{s}{\alpha}r\tilde{F}^{\prime}(r)=a^{-1}h(r)-\frac{b}{a}\Big{(}1-\frac{s}{\alpha}\Big{)}r,$ where $g$ and $h$ are defined in (4.1). We first check that $\tilde{g}$ is strictly increasing on $\tilde{I}:=\\{\tilde{F}^{\prime}<0\\}$. Indeed, since $F$ (and hence $\tilde{F}$) is convex and $\alpha>s$, we get that $\tilde{g}^{\prime}(r)=\left(1-\frac{\alpha}{s}\right)r^{-{\alpha}/{s}}\tilde{F}^{\prime}(r)+r^{1-{\alpha}/{s}}\tilde{F}^{\prime\prime}(r)\geq\left(1-\frac{\alpha}{s}\right)r^{-{\alpha}/{s}}\tilde{F}^{\prime}(r)>0$ for all $r\in\tilde{I}$.
Since by assumption $g(\\{F^{\prime}<0\\})=(-\infty,0)$ and $\tilde{I}=\\{\tilde{F}^{\prime}<0\\}\supset\\{{F}^{\prime}<0\\}$, we find $\tilde{g}(\tilde{I})=(-\infty,0)$. Finally, $\tilde{h}\circ\tilde{g}^{-1}$ is strictly decreasing on $(-\infty,0)$ as well. We can hence apply Theorem 4.1 and obtain the assertion. ∎ ###### Remark 5.3. As a consequence of Remark 4.2, the previous result can be applied to $F(r)=r\log r$. Already for this $F$, in the case of a more general Lennard-Jones potential $\phi(r)=ar^{-\alpha}-br^{-\beta}$, the equation for the critical points of $\lambda\mapsto\mathcal{E}[\lambda L]$ for a fixed lattice $L$ is $\log\lambda=\frac{a^{\prime}}{b^{\prime}}\lambda^{s-\alpha}-\frac{d^{\prime}}{b^{\prime}}\lambda^{s-\beta}+\frac{c^{\prime}}{b^{\prime}}$ for $a^{\prime}=\alpha a\zeta_{L}(\alpha)$, $b^{\prime}=s^{2}\zeta_{L}(s)$, $c^{\prime}=s\zeta_{L}(s)(1+\log\zeta_{L}(s))$, and $d^{\prime}=\beta b\zeta_{L}(\beta)$. This is generically not solvable in closed form when $s\neq\beta$, and this makes the computation of $\mathcal{E}[\lambda^{\mathcal{E}}_{L}L]$ more difficult. This is why we choose $s=\beta$ in the above result. #### 5.1.2. Numerical investigation in 2d We choose $s$ as a parameter and fix $t=\gamma=a=b=1$, and $\alpha=12$, $\beta=6$, i.e., $F(r)=r\log r,\quad\rho(r)=r^{-s},\quad\phi(r)=\frac{1}{r^{12}}-\frac{1}{r^{6}}.$ (5.1) We employ here a gradient descent method, which is rather computationally intensive. Note that a more efficient numerical method will become available in Subsection 5.2, owing to the different structure of the potentials there. Numerically, we observe the following (see Figure 3): * • For $s>s_{1}$, $s_{1}\approx 5.14$, the triangular lattice $\lambda_{\mathsf{A}_{2}}^{\mathcal{E}}\mathsf{A}_{2}$ is apparently the unique global minimizer of $\mathcal{E}$. * • For $s<s_{1}$, the energy does not seem to have a global minimizer. Furthermore, for $s>s_{0}$, $s_{0}\approx 5.09$, we have checked (see Figure 4) that $\min_{\lambda}\mathcal{E}[\lambda\mathbb{Z}^{2}]=\mathcal{E}[\lambda_{\mathbb{Z}^{2}}^{\mathcal{E}}\mathbb{Z}^{2}]>\mathcal{E}[\lambda_{\mathsf{A}_{2}}^{\mathcal{E}}\mathsf{A}_{2}]=\min_{\lambda}\mathcal{E}[\lambda\mathsf{A}_{2}],$ whereas the inequality is reversed if $s<s_{0}$. Figure 3. Case (5.1) in two dimensions. Plot of $L\mapsto\min_{\lambda}\mathcal{E}[\lambda L]$ in the fundamental domain $\mathcal{D}$ for $s=6$ (up left), $s=5.3$ (up right), $s=5.15$ (down left) and $s=5.07$ (down right). Figure 4. Case (5.1) in two dimensions. Plot of $s\mapsto\min_{\lambda}\mathcal{E}[\lambda\mathbb{Z}^{2}]-\min_{\lambda}\mathcal{E}[\lambda\mathsf{A}_{2}]$ for $s\in[2.1,7]$. We now replace $\rho$ by a Gaussian function. Namely, we consider the case $F(r)=r\log r,\quad\rho(r)=e^{-\delta r^{2}},\quad\phi(r)=\frac{1}{r^{12}}-\frac{1}{r^{6}}.$ (5.2) Figure 5. Case (5.2) in two dimensions. Plot of $\delta\mapsto\min_{\lambda}\mathcal{E}[\lambda\mathbb{Z}^{2}]-\min_{\lambda}\mathcal{E}[\lambda\mathsf{A}_{2}]$ for $\delta\in[0.1,5]$. In this case, the triangular lattice $\lambda_{\mathsf{A}_{2}}^{\mathcal{E}}\mathsf{A}_{2}$ still seems to be minimizing $\mathcal{E}$ for large $\delta$, see Figure 6. More precisely: * • There exists $\delta_{0}\approx 1.04$ such that, for $\delta>\delta_{0}$, the triangular lattice $\lambda_{\mathsf{A}_{2}}^{\mathcal{E}}\mathsf{A}_{2}$ is the global minimizer of $\mathcal{E}$ in $\mathcal{L}_{2}$.
* • For $\delta<\delta_{0}$, the global minimizer of $\mathcal{E}$ seems to move continuously in $\mathcal{D}$ upwards along the $y$-axis as $\delta$ decreases to $0$. For instance, * – If $\delta=1$, then the minimizer is $(0,y_{1})$ where $y_{1}\approx 1.014$. * – If $\delta=0.95$, then the minimizer is $(0,y_{0.95})$ where $y_{0.95}\approx 1.665$. * • Furthermore, we have checked that, for $\delta>\delta_{0}$, $\min_{\lambda}\mathcal{E}[\lambda\mathbb{Z}^{2}]=\mathcal{E}[\lambda_{\mathbb{Z}^{2}}^{\mathcal{E}}\mathbb{Z}^{2}]>\mathcal{E}[\lambda_{\mathsf{A}_{2}}^{\mathcal{E}}\mathsf{A}_{2}]=\min_{\lambda}\mathcal{E}[\lambda\mathsf{A}_{2}],$ whereas the inequality is reversed if $\delta<\delta_{0}$ (see Figure 5). Figure 6. Case (5.2) in two dimensions. Plot of $L\mapsto\min_{\lambda}\mathcal{E}[\lambda L]$ when $\delta=2$ (up left), $\delta=1$ (up right) and $\delta=0.95$ (down left and right) in the fundamental domain $\mathcal{D}$. #### 5.1.3. Numerical investigation in 3d Let us go back to case (5.1), now in three dimensions. We investigate the difference of energies between the Simple Cubic (SC), Face-Centered Cubic (FCC), and Body-Centered Cubic (BCC) lattices, namely, $\mathbb{Z}^{3},\mathsf{D}_{3},\mathsf{D}_{3}^{*}$, as $s$ increases. Examples of FCC and BCC metals are Al, Cu, Ag, Au, Ni, Pd, Pt, and Nb, Cr, V, Fe, respectively [37]. Po is the only metal crystallizing in an SC structure [33]. Before giving our numerical results, let us remark that the lattices $\mathbb{Z}^{3}$, $\mathsf{D}_{3}$, and $\mathsf{D}_{3}^{*}$ are critical points of $\mathcal{E}$ in $\mathcal{L}_{3}(1)$. Moreover, recall the following conjectures: * • Sarnak-Strömbergsson’s conjecture (see [32, Equation (44)]): for all $s\geq 3/2$ (and in particular for $s>3$, so that $r\mapsto r^{-s}\in\mathcal{S}_{3}^{+}$), $\mathsf{D}_{3}$ is the unique minimizer of $L\mapsto\zeta_{L}(s)$ in $\mathcal{L}_{3}(1)$. * • The global minimizer of the Lennard-Jones energy $E_{\phi}$ is $\lambda_{\mathsf{D}_{3}}^{\phi}\mathsf{D}_{3}$ (see e.g. [40, Figure 5] and [7, Conjecture 1.7]). We have numerically studied the function $s\mapsto\min_{\lambda>0}\mathcal{E}[\lambda L],\quad L\in\\{\mathsf{D}_{3},\mathsf{D}_{3}^{*},\mathbb{Z}^{3}\\}$ for $s>3$, see Figure 7. We have found that there exist $s_{0}<s_{1}<s_{2}$, where $s_{0}\approx 5.4985$, $s_{1}\approx 5.576$, and $s_{2}\approx 5.584$, such that * • For $s\in(3,s_{0})$, $\min_{\lambda>0}\mathcal{E}[\lambda\mathbb{Z}^{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}^{*}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}]$; * • For $s\in(s_{0},s_{1})$, $\min_{\lambda>0}\mathcal{E}[\lambda\mathbb{Z}^{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}^{*}]$; * • For $s\in(s_{1},s_{2})$, $\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathbb{Z}^{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}^{*}]$; * • For $s>s_{2}$, $\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}^{*}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathbb{Z}^{3}]$. It is remarkable that for small values of $s$ the simple cubic lattice $\mathbb{Z}^{3}$ has lower energy than the usually energetically favored $\mathsf{D}_{3}$ and $\mathsf{D}_{3}^{*}$. Figure 7. Case (5.1) in three dimensions.
Plots of $s\mapsto\min_{\lambda}\mathcal{E}[\lambda L]$ for $L=\mathsf{D}_{3}$ (red), $L=\mathsf{D}_{3}^{*}$ (blue) and $L=\mathbb{Z}^{3}$ (black) on two different intervals. Consider now the Gaussian case (5.2) in three dimensions. The total energy then reads $\mathcal{E}[L]:=\theta_{L}(\delta)\log\theta_{L}(\delta)+\zeta_{L}(12)-\zeta_{L}(6),\quad\textnormal{where}\quad\theta_{L}(\delta):=\sum_{p\in L\backslash\\{0\\}}e^{-\delta|p|^{2}}.$ In the following, we will call $\theta_{L}(\delta)$ the lattice theta function with parameter $\delta>0$. Note however that under this name one usually refers to such a sum including the term for $p=0$ and with weight $e^{-\delta\pi|p|^{2}}$. We recall the following conjectures: * • Sarnak-Strömbergsson’s conjecture (see [32, Equation (43)]): if $\delta<\pi$, then $\mathsf{D}_{3}^{*}$ minimizes $L\mapsto\theta_{L}(\delta)$ in $\mathcal{L}_{3}(1)$. If $\delta>\pi$, then $\mathsf{D}_{3}$ minimizes the same lattice theta function in $\mathcal{L}_{3}(1)$ (with, in fact, a coexistence phase around $\pi$). * • As mentioned before, the unique minimizer of the Lennard-Jones energy $E_{\phi}$ in $\mathcal{L}_{3}$ is $\lambda_{\mathsf{D}_{3}}^{\phi}\mathsf{D}_{3}$ (see e.g. [7] and [40, Figure 5]). In Figure 8 we plot the functions $\delta\mapsto\min_{\lambda>0}\mathcal{E}[\lambda L]$ for $L\in\\{\mathsf{D}_{3},\mathsf{D}_{3}^{*},\mathbb{Z}^{3}\\}$. We numerically observe that there exist $0<\delta_{1}<\delta_{2}<\delta_{3}$, where $\delta_{1}\approx 1.13$, $\delta_{2}\approx 1.21$, and $\delta_{3}\approx 1.223$, such that * • for all $\delta\in(0,\delta_{1})$, $\min_{\lambda>0}\mathcal{E}[\lambda\mathbb{Z}^{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}^{*}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}]$; * • for all $\delta\in(\delta_{1},\delta_{2})$, $\min_{\lambda>0}\mathcal{E}[\lambda\mathbb{Z}^{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}^{*}]$; * • for all $\delta\in(\delta_{2},\delta_{3})$, $\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathbb{Z}^{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}^{*}]$; * • for all $\delta>\delta_{3}$, $\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathsf{D}_{3}^{*}]<\min_{\lambda>0}\mathcal{E}[\lambda\mathbb{Z}^{3}]$. It is indeed important that the EAM energy favors $\mathsf{D}_{3}$ or $\mathsf{D}_{3}^{*}$ for some specific choice of parameters. In fact, FCC and BCC lattices commonly emerge in metals. It is also remarkable that the simple cubic lattice $\mathbb{Z}^{3}$ (up to rescaling) is favored with respect to $\mathsf{D}_{3}$ or $\mathsf{D}_{3}^{*}$ for some other choice of parameters. In [7], we were able to identify a range of densities such that cubic lattices are locally optimal at fixed density, but it is the first time – according to our knowledge – that such a phenomenon is observed at the level of the global minimizer. Figure 8. Case (5.2) in three dimensions. Plot of $\delta\mapsto\min_{\lambda}\mathcal{E}[\lambda L]$ for $L=\mathsf{D}_{3}$ (red), $L=\mathsf{D}_{3}^{*}$ (blue) and $L=\mathbb{Z}^{3}$ (black) on two different intervals. ### 5.2. The power-law case $F(r)=r^{t}$ In this subsection, we study the case where $F(r)=r^{t}$, $t>0$. Although $F$ is not a one-well potential, this case turns out to be mathematically interesting.
Indeed, we are able to present a special case where we can explicitly compute $\min_{\lambda}\mathcal{E}[\lambda L]$ for any $L\in\mathcal{L}_{d}(1)$. As we have seen above, this dimension reduction is extremely helpful when one looks for the ground state of $\mathcal{E}$ in $\mathcal{L}_{d}$, especially for $d=2$, since we can plot $L\mapsto\min_{\lambda}\mathcal{E}[\lambda L]$ in the fundamental domain $\mathcal{D}$. #### 5.2.1. A special power-law case Let us now assume that $F(r)=r^{t},\quad\rho(r)=r^{-s},\quad\phi(r)=ar^{-\alpha}-br^{-\beta},$ for $t>0$, $s>d$, $\alpha>\beta>d$, and $a,\,b>0$. Therefore, by (2.3) we have, for any $\lambda>0$ and any $L\in\mathcal{L}_{d}(1)$, that $\mathcal{E}[\lambda L]=\lambda^{-st}\zeta_{L}(s)^{t}+a\lambda^{-\alpha}\zeta_{L}(\alpha)-b\lambda^{-\beta}\zeta_{L}(\beta).$ For a fixed lattice $L$, the critical points of $\lambda\mapsto\mathcal{E}[\lambda L]$ are the solutions of the following equation $\displaystyle b\beta\zeta_{L}(\beta)\lambda^{st+\alpha}-st\zeta_{L}(s)^{t}\lambda^{\alpha+\beta}-a\alpha\zeta_{L}(\alpha)\lambda^{st+\beta}=0.$ (5.3) Solving this equation in closed form is impracticable outside of a discrete set of parameter values. Correspondingly, comparing energy values is even more complicated than in the pure Lennard-Jones-type case, which is already challenging when treated in full generality. Having pointed out this difficulty, we now focus on some additional specifications of the parameters, allowing us to proceed further with the analysis. We have the following. ###### Theorem 5.4 (Special power-law case). Let $\alpha,\beta,s$, and $t$ be such that $\displaystyle d<s,\quad d<\beta<st<\alpha,\quad\textnormal{and}\quad\alpha=2st-\beta.$ (5.4) Then, $\lambda^{\mathcal{E}}_{L}$ exists for all $L\in\mathcal{L}_{d}(1)$. Moreover, $\lambda^{\mathcal{E}}_{L_{d}}L_{d}$ is a global minimizer in $\mathcal{L}_{d}$ of $\mathcal{E}$, now reading $\mathcal{E}[L]=\zeta_{L}(s)^{t}+a\zeta_{L}(\alpha)-b\zeta_{L}(\beta),$ if and only if $L_{d}$ is a minimizer in $\mathcal{L}_{d}(1)$ of $\displaystyle e_{*}(L):$ $\displaystyle=-\frac{\displaystyle C_{1}\frac{\zeta_{L}(s)^{2t}}{\zeta_{L}(\beta)}+C_{2}\zeta_{L}(s)^{t}\sqrt{c_{1}\frac{\zeta_{L}(s)^{2t}}{\zeta_{L}(\beta)^{2}}+c_{2}\frac{\zeta_{L}(\alpha)}{\zeta_{L}(\beta)}}+C_{3}\zeta_{L}(\alpha)}{\displaystyle\left(\sqrt{c_{1}}\frac{\zeta_{L}(s)^{t}}{\zeta_{L}(\beta)}+\sqrt{c_{1}\frac{\zeta_{L}(s)^{2t}}{\zeta_{L}(\beta)^{2}}+c_{2}\frac{\zeta_{L}(\alpha)}{\zeta_{L}(\beta)}}\right)^{\frac{\alpha}{\alpha-st}}},$ where $C_{i},c_{j}$, $i\in\\{1,2,3\\}$, $j\in\\{1,2\\}$, are positive constants defined by $C_{1}:=\frac{st}{2b\beta}\left(\frac{st}{\beta}-1\right),\quad C_{2}:=\frac{st}{\beta}-1,\quad C_{3}:=a\left(\frac{\alpha}{\beta}-1\right),\quad c_{1}:=\frac{s^{2}t^{2}}{4b^{2}\beta^{2}},\quad c_{2}:=\frac{a\alpha}{b\beta}.$ (5.5) ###### Proof.
For any $L\in\mathcal{L}_{d}(1)$, any critical point of $\lambda\mapsto\mathcal{E}[\lambda L]$ satisfies (see (5.3)) $\lambda^{st+\beta}\left(b\beta\zeta_{L}(\beta)\lambda^{\alpha-\beta}-st\zeta_{L}(s)^{t}\lambda^{\alpha-st}-a\alpha\zeta_{L}(\alpha)\right)=0.$ Since $\lambda>0$, by writing $X=\lambda^{\alpha-st}$ and using (5.4) we want to solve $b\beta\zeta_{L}(\beta)X^{2}-st\zeta_{L}(s)^{t}X-a\alpha\zeta_{L}(\alpha)=0,\quad X>0,$ for which the unique solution is $X=\frac{st\zeta_{L}(s)^{t}+\sqrt{s^{2}t^{2}\zeta_{L}(s)^{2t}+4ab\alpha\beta\zeta_{L}(\alpha)\zeta_{L}(\beta)}}{2b\beta\zeta_{L}(\beta)}.$ Since $\alpha-st>0$ and $b\beta\zeta_{L}(\beta)>0$, we find that the critical point is a minimizer and thus coincides with $\lambda^{\mathcal{E}}_{L}$ defined in (2.6). More precisely, we have $\lambda^{\mathcal{E}}_{L}=\left(\frac{st\zeta_{L}(s)^{t}+\sqrt{s^{2}t^{2}\zeta_{L}(s)^{2t}+4ab\alpha\beta\zeta_{L}(\alpha)\zeta_{L}(\beta)}}{2b\beta\zeta_{L}(\beta)}\right)^{\frac{1}{\alpha-st}}.$ We hence get, for any $L\in\mathcal{L}_{d}(1)$, that $\displaystyle\min_{\lambda}\mathcal{E}[\lambda L]=\mathcal{E}[\lambda^{\mathcal{E}}_{L}L]$ $\displaystyle=(\lambda^{\mathcal{E}}_{L})^{-st}\zeta_{L}(s)^{t}+a(\lambda^{\mathcal{E}}_{L})^{-\alpha}\zeta_{L}(\alpha)-b(\lambda^{\mathcal{E}}_{L})^{-\beta}\zeta_{L}(\beta)$ $\displaystyle=(\lambda^{\mathcal{E}}_{L})^{-\alpha}\left\\{\zeta_{L}(s)^{t}(\lambda^{\mathcal{E}}_{L})^{\alpha-st}-b\zeta_{L}(\beta)(\lambda^{\mathcal{E}}_{L})^{\alpha-\beta}+a\zeta_{L}(\alpha)\right\\}$ $\displaystyle=(\lambda^{\mathcal{E}}_{L})^{-\alpha}\left\\{\zeta_{L}(s)^{t}(\lambda^{\mathcal{E}}_{L})^{\alpha-st}-\frac{st\zeta_{L}(s)^{t}(\lambda^{\mathcal{E}}_{L})^{\alpha-st}+a\alpha\zeta_{L}(\alpha)}{\beta}+a\zeta_{L}(\alpha)\right\\}$ $\displaystyle=(\lambda^{\mathcal{E}}_{L})^{-\alpha}\left\\{\zeta_{L}(s)^{t}\left(1-\frac{st}{\beta}\right)(\lambda^{\mathcal{E}}_{L})^{\alpha-st}+a\zeta_{L}(\alpha)\left(1-\frac{\alpha}{\beta}\right)\right\\}$ $\displaystyle=(\lambda^{\mathcal{E}}_{L})^{-\alpha}\left\\{\zeta_{L}(s)^{t}\left(1-\frac{st}{\beta}\right)\left(\frac{st\zeta_{L}(s)^{t}+\sqrt{s^{2}t^{2}\zeta_{L}(s)^{2t}+4ab\alpha\beta\zeta_{L}(\alpha)\zeta_{L}(\beta)}}{2b\beta\zeta_{L}(\beta)}\right)+a\zeta_{L}(\alpha)\left(1-\frac{\alpha}{\beta}\right)\right\\}$ $\displaystyle=(\lambda^{\mathcal{E}}_{L})^{-\alpha}\left\\{\frac{st}{2b\beta}\left(1-\frac{st}{\beta}\right)\frac{\zeta_{L}(s)^{2t}}{\zeta_{L}(\beta)}+\left(1-\frac{st}{\beta}\right)\zeta_{L}(s)^{t}\sqrt{\frac{s^{2}t^{2}\zeta_{L}(s)^{2t}}{4b^{2}\beta^{2}\zeta_{L}(\beta)^{2}}+\frac{a\alpha\zeta_{L}(\alpha)}{b\beta\zeta_{L}(\beta)}}+a\left(1-\frac{\alpha}{\beta}\right)\zeta_{L}(\alpha)\right\\},$ where in the fourth line we have used the fact that $\lambda^{\mathcal{E}}_{L}$ is a critical point of $\lambda\mapsto\mathcal{E}[\lambda L]$, i.e., $b\beta\zeta_{L}(\beta)(\lambda^{\mathcal{E}}_{L})^{\alpha-\beta}-st\zeta_{L}(s)^{t}(\lambda^{\mathcal{E}}_{L})^{\alpha-st}-a\alpha\zeta_{L}(\alpha)=0$.
Note that by assumption we have $1-\frac{st}{\beta}<0,\quad 1-\frac{\alpha}{\beta}<0.$ It follows, by defining the positive constants $C_{i},c_{j}$, $i\in\\{1,2,3\\}$, $j\in\\{1,2\\}$, as in (5.5), that $\displaystyle\min_{\lambda}\mathcal{E}[\lambda L]$ $\displaystyle=-(\lambda^{\mathcal{E}}_{L})^{-\alpha}\left\\{C_{1}\frac{\zeta_{L}(s)^{2t}}{\zeta_{L}(\beta)}+C_{2}\zeta_{L}(s)^{t}\sqrt{c_{1}\frac{\zeta_{L}(s)^{2t}}{\zeta_{L}(\beta)^{2}}+c_{2}\frac{\zeta_{L}(\alpha)}{\zeta_{L}(\beta)}}+C_{3}\zeta_{L}(\alpha)\right\\}$ $\displaystyle=-\frac{\displaystyle C_{1}\frac{\zeta_{L}(s)^{2t}}{\zeta_{L}(\beta)}+C_{2}\zeta_{L}(s)^{t}\sqrt{c_{1}\frac{\zeta_{L}(s)^{2t}}{\zeta_{L}(\beta)^{2}}+c_{2}\frac{\zeta_{L}(\alpha)}{\zeta_{L}(\beta)}}+C_{3}\zeta_{L}(\alpha)}{\displaystyle\left(\sqrt{c_{1}}\frac{\zeta_{L}(s)^{t}}{\zeta_{L}(\beta)}+\sqrt{c_{1}\frac{\zeta_{L}(s)^{2t}}{\zeta_{L}(\beta)^{2}}+c_{2}\frac{\zeta_{L}(\alpha)}{\zeta_{L}(\beta)}}\right)^{\frac{\alpha}{\alpha-st}}},$ which completes the proof. ∎ #### 5.2.2. Numerical investigations of the special power-law case in 2d and 3d We let $t\in(0,9/d)$ vary and fix $a=b=1,\quad\alpha=12,\quad\beta=6,\quad s=9/t,$ so that $F(r)=r^{t},\quad\rho(r)=r^{-{9}/{t}},\quad\phi(r)=\frac{1}{r^{12}}-\frac{1}{r^{6}}.$ (5.6) Note that (5.4) holds under these assumptions. In two dimensions, testing increasing values of $t\in(0,4.5)$, we observe numerically the following: * • If $t\in(0,t_{1})$, $t_{1}\approx 1.605$, then $\mathsf{A}_{2}$ minimizes $e_{*}$ (see Figures 9 and 10); * • If $t\in(t_{1},t_{2})$, where $t_{2}\approx 1.633$, then $\mathbb{Z}^{2}$ is a local minimizer of $e_{*}$ but there seems to be no global minimizer for $e_{*}$ (see Figure 11); * • If $t\in(t_{2},4.5)$, there seems to be no global minimizer for $e_{*}$, and $\mathbb{Z}^{2}$ is a saddle point (see Figure 12). Figure 9. Special power-law case (5.6) in two dimensions. Plot of $e_{*}$ on the fundamental domain $\mathcal{D}$. For $t=1$ (left) and $t=1.5$ (right), the minimizer of $e_{*}$ is the triangular lattice $\mathsf{A}_{2}$ given by the point $(1/2,\sqrt{3}/2)$. Figure 10. Special power-law case (5.6) in two dimensions. Plot of $e_{*}$ on the fundamental domain $\mathcal{D}$ for $t=1.605$. The triangular lattice is the global minimizer of $e_{*}$ whereas $\mathbb{Z}^{2}$ (given by the point $(0,1)$) is a local minimizer. Similarly to the discussion of Subsection 5.1, for some choice of parameters, a square lattice seems to be locally minimizing the EAM energy, at least within the range of our numerical testing. In [6], we have identified a range of densities for which a square lattice is optimal at fixed density. This seems, however, to be the first occurrence of such minimality among all possible lattices, without a density constraint. Indeed, when minimizing among all lattices, the square lattice $\mathbb{Z}^{2}$ usually happens to be a saddle point, see, e.g., Figure 1 for the Lennard-Jones case. Figure 11. Special power-law case (5.6) in two dimensions. Plot of $e_{*}$ on the fundamental domain $\mathcal{D}$ for $t=1.606$. The square lattice is a local minimizer of $e_{*}$, which does not have any global minimizer; $\mathsf{A}_{2}$ is still a local minimizer. Figure 12. Special power-law case (5.6) in two dimensions. Plot of $e_{*}$ on the fundamental domain $\mathcal{D}$. For $t=1.632$ (left) the square lattice is a local minimizer of $e_{*}$ whereas $\mathsf{A}_{2}$ is a local maximizer.
For $t=2$ (right) it seems that $e_{*}$ does not have any local minimizer and $\mathsf{A}_{2}$ stays a local maximizer. In both cases, there is no global minimum. We have numerically investigated the three-dimensional case as well, comparing the energies of $L\in\\{\mathbb{Z}^{3},\mathsf{D}_{3},\mathsf{D}_{3}^{*}\\}$. Figure 13 illustrates the numerical results. We observe that there exist $t_{1},t_{2},t_{3}$, where $t_{1}\approx 1.5505$, $t_{2}\approx 1.5515$, and $t_{3}\approx 1.5647$, such that: * • If $t\in(0,t_{1})$, $e_{*}(\mathsf{D}_{3})<e_{*}(\mathsf{D}_{3}^{*})<e_{*}(\mathbb{Z}^{3})$; * • If $t\in(t_{1},t_{2})$, $e_{*}(\mathsf{D}_{3})<e_{*}(\mathbb{Z}^{3})<e_{*}(\mathsf{D}_{3}^{*})$; * • If $t\in(t_{2},t_{3})$, $e_{*}(\mathbb{Z}^{3})<e_{*}(\mathsf{D}_{3})<e_{*}(\mathsf{D}_{3}^{*})$; * • If $t\in(t_{3},3)$, $e_{*}(\mathbb{Z}^{3})<e_{*}(\mathsf{D}_{3}^{*})<e_{*}(\mathsf{D}_{3})$. When $t\to 0$, since $s=9/t\to\infty$ and $r^{t}\to 1$ for fixed $r>0$, it is expected that the global minimizer of $\mathcal{E}$ in $\mathcal{L}_{3}$ converges to the one of $E_{\phi}$, which in turn is expected to be an FCC lattice. This is supported by our numerics for $t<t_{1}$. Figure 13. Special power-law case (5.6) in three dimensions. Plot of $t\mapsto e_{*}(L)$ for $L=\mathsf{D}_{3}$ (red), $L=\mathsf{D}_{3}^{*}$ (blue) and $L=\mathbb{Z}^{3}$ (black) for $t\in(0,3)$. The graph on the right is a close-up of the two transitions at $t_{1}$ and $t_{2}$. ## Acknowledgments MF and US are supported by the DFG-FWF international joint project FR 4083/3-1/I 4354. MF is also supported by the Deutsche Forschungsgemeinschaft under Germany’s Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics–Geometry–Structure. US and LB are supported by the FWF project F 65. US is also supported by the FWF project P 32788. ## References * [2] A. Banerjea and J. R. Smith. Origins of the universal binding-energy relation. Phys. Rev. B, 37(12):6632–6645, 1988. * [3] M. I. Baskes. Many-body effects in fcc metals: a Lennard-Jones embedded-atom potential. Phys. Rev. Lett., 83(13):2592–2595, 1983. * [4] M. I. Baskes. Application of the Embedded-Atom Method to covalent materials: a semiempirical potential for silicon. Phys. Rev. Lett., 59(23):2666–2669, 1987. * [5] L. Bétermin. Two-dimensional Theta Functions and crystallization among Bravais lattices. SIAM J. Math. Anal., 48(5):3236–3269, 2016. * [6] L. Bétermin. Local variational study of 2d lattice energies and application to Lennard-Jones type interactions. Nonlinearity, 31(9):3973–4005, 2018. * [7] L. Bétermin. Local optimality of cubic lattices for interaction energies. Anal. Math. Phys., 9(1):403–426, 2019. * [8] L. Bétermin. Minimizing lattice structures for Morse potential energy in two and three dimensions. J. Math. Phys., 60(10):102901, 2019. * [9] L. Bétermin. Effect of periodic arrays of defects on lattice energy minimizers. Preprint. arXiv:2008.00676, 2020. * [10] L. Bétermin and M. Petrache. Optimal and non-optimal lattices for non-completely monotone interaction potentials. Anal. Math. Phys., 9(4):2033–2073, 2019. * [11] L. Bétermin and P. Zhang. Minimization of energy per particle among Bravais lattices in $\mathbb{R}^{2}$: Lennard-Jones and Thomas-Fermi cases. Commun. Contemp. Math., 17(6):1450049, 2015. * [12] X. Blanc and C. Le Bris. Periodicity of the infinite-volume ground state of a one-dimensional quantum model. Nonlinear Anal., 48(6):791–803, 2002. * [13] X. Blanc and M. Lewin. The Crystallization Conjecture: a review. EMS Surv. Math.
Sci., 2:255–306, 2015. * [14] J. Cai and Y. Y. Ye. Simple analytical embedded-atom-potential model including a long-range force for fcc metals and their alloys. Phys. Rev. B, 54(12):8398–8410, 1996. * [15] H. Cohn and A. Kumar. Universally optimal distribution of points on spheres. J. Amer. Math. Soc., 20(1):99–148, 2007. * [16] H. Cohn, A. Kumar, S. D. Miller, D. Radchenko, and M. Viazovska. Universal optimality of the $E_{8}$ and Leech lattices and interpolation formulas. Preprint. arXiv:1902.05438, 2019. * [17] M. S. Daw and M. I. Baskes. Semiempirical, Quantum Mechanical Calculation of Hydrogen Embrittlement in Metals. Phys. Rev. Lett., 50(17):1285–1288, 1983. * [18] M. S. Daw and M. I. Baskes. Embedded-atom method: Derivation and application to impurities, surfaces and other defects in metals. Phys. Rev. B, 29(12):6443–6453, 1984. * [19] M. S. Daw, S. M. Foiles, and M. I. Baskes. The embedded-atom method: a review of theory and applications. Materials Science Reports, 9(7-8):251–310, 1993. * [20] J. Dorrell and L. B. Pártay. Pressure-temperature phase diagram of lithium, predicted by Embedded Atom Model potentials. J. Phys. Chem. B, 124:6015–6023, 2020. * [21] M. W. Finnis and J. E. Sinclair. A simple empirical n-body potential for transition metals. Philosophical Magazine A, 50(1):45–55, 1984. * [22] S. Foiles. Embedded-Atom and related methods for modeling metallic systems. MRS Bulletin, 21(2):24–28, 1996. * [23] G. Grochola, S. P. Russo, and I. K. Snook. On fitting a gold embedded atom method potential using the force matching method. J. Chem. Phys., 123:204719, 2005. * [24] A. Hernandez, A. Balasubramanian, F. Yuan et al. Fast, accurate, and transferable many-body interatomic potentials by symbolic regression. npj Comput Mater, 5:112, 2019. * [25] J. E. Jaffe, R. J. Kurtz, and M. Gutowski. Comparison of embedded-atom models and first-principles calculations for Al phase equilibrium. Computational Materials Science, 18(2):199–204, 2000. * [26] R. A. Johnson. Alloy models with the embedded-atom method. Phys. Rev. B, 39:12554, 1989. * [27] R. A. Johnson and D. J. Oh. Analytic embedded atom method model for bcc metals. J. Mater. Res., 4(5):1195–1201, 1989. * [28] R. LeSar. Introduction to Computational Materials Science. Fundamentals to Applications. Cambridge University Press, 2013. * [29] H. L. Montgomery. Minimal Theta functions. Glasg. Math. J., 30(1):75–85, 1988. * [30] C. Poole. Encyclopedic Dictionary of Condensed Matter Physics. Elsevier, 1st edition, 2004. * [31] J. H. Rose, J. R. Smith, F. Guinea, and J. Ferrante. Universal features of the equation of state of metals. Phys. Rev. B, 29(6):2963–2969, 1984. * [32] P. Sarnak and A. Strömbergsson. Minima of Epstein’s Zeta Function and heights of flat tori. Invent. Math., 165:115–151, 2006. * [33] A. Silva and J. van Wezel. The simple-cubic structure of elemental Polonium and its relation to combined charge and orbital order in other elemental chalcogens. SciPost Phys., 4 (2018), 028. * [34] S. G. Srinivasan and M. I. Baskes. On the Lennard-Jones EAM potential. Proc. R. Soc. London, Ser. A, 460:1649–1672, 2004. * [35] A. P. Sutton and J. Chen. Long-range Finnis-Sinclair potentials. Philosophical Magazine Letters, 61(3):139–146, 1990. * [36] A. Terras. Harmonic analysis on symmetric spaces and applications II. Springer New York, 1988. * [37] A. F. Wells. Structural Inorganic Chemistry. Clarendon Press, Oxford, 1975. * [38] X.-J. Yuan, N.-X. Chen, and J. Shen.
Construction of embedded-atom-method interatomic potentials for alkaline metals (Li, Na, and K) by lattice inversion. Chin. Phys. B, 21(5):053401, 2012. * [39] Y. Zhang, C. Hu, and B. Jiang. Embedded-atom neural network potentials: efficient and accurate machine learning with a physically inspired representation. J. Phys. Chem. Lett., 10(17):4962–4967, 2019. * [40] M. Zschornak, T. Leisegang, F. Meutzner, H. Stöcker, T. Lemser, T. Tauscher, C. Funke, C. Cherkouk, and D. C. Meyer. Harmonic principles of elemental crystals — from atomic interaction to fundamental symmetry. Symmetry, 10(6):228, 2018.
# Quasiperfect graph Veronica Phan ###### Abstract. A perfect graph is a graph in which every induced subgraph has clique number equal to chromatic number. In this paper, I introduce a new family of graphs, the quasiperfect graphs, which generalizes the perfect graphs. *Ho Chi Minh City; email<EMAIL_ADDRESS> ## 1\. Introduction All graphs in this paper are finite. For a graph $G=(V,E)$, let $\omega(G),\alpha(G),\chi(G)$ denote the clique number, independence number and chromatic number of $G$, respectively; let $\overline{G}$ be the complement of $G$, $G[S]$ the subgraph of $G$ induced by $S$, $K_{n}$ the complete graph of order $n$, and $K_{0}$ the trivial graph. A perfect graph is a graph in which every induced subgraph has clique number equal to chromatic number [1, 2]. Perfect graphs are important objects in graph theory. In this paper, I introduce a new family of graphs, the quasiperfect graphs, which generalizes the perfect graphs, and I show that the quasiperfect graphs also share some properties with the perfect graphs. ###### Definition 1. The definition of a quasiperfect graph is inductive. Let $K_{0}$ be quasiperfect. Let $G=(V,E)$ be a graph and assume that we have already decided, for every graph with fewer vertices than $G$, whether it is quasiperfect. Then $G$ is quasiperfect if and only if: $-$ there exists an independent set $PI$ that intersects all maximum cliques, such that each vertex of $PI$ is contained in a maximum clique and $G[V-PI]$ is quasiperfect; $-$ there exists a clique $PK$ that intersects all maximum independent sets, such that each vertex of $PK$ is contained in a maximum independent set and $G[V-PK]$ is quasiperfect. We call the sets $PI$ and $PK$ as above a prime independent set and a prime clique, respectively. ## 2\. The properties of quasiperfect graphs We want quasiperfect graphs to have the most basic property: clique number equals chromatic number. ###### Theorem 1. A quasiperfect graph $G=(V,E)$ has clique number equal to chromatic number. ###### Proof. We prove this by induction. The statement is trivial for $G=K_{0}$. Assume the statement is true for every induced quasiperfect subgraph of $G$; we need to prove that $G$ has clique number equal to chromatic number. Let $PI$ be a prime independent set of $G$. Then $G[V-PI]$ is quasiperfect, so by the induction hypothesis, $\omega(G[V-PI])=\chi(G[V-PI])$. The set $PI$ intersects all maximum cliques, and each clique contains at most one vertex of $PI$ because $PI$ is independent, so $\omega(G)=\omega(G[V-PI])+1$. We obtain an $\omega(G)$-coloring of $G$ by taking an $\omega(G[V-PI])$-coloring of $G[V-PI]$ and coloring all vertices in $PI$ with one new color; since $\chi(G)\geq\omega(G)$ always holds, the clique number of $G$ equals its chromatic number. ∎ The next theorem is the analog of the _weak perfect graph theorem_: ###### Theorem 2. The complement of a quasiperfect graph $G=(V,E)$ is also quasiperfect. ###### Proof. We prove this by induction. The statement is trivial for $G=K_{0}$. Assume the statement is true for every induced quasiperfect subgraph of $G$; we need to prove that $\overline{G}$ is quasiperfect. The complement of a graph turns independent sets into cliques and vice versa, so a prime independent set of $G$ is a prime clique of $\overline{G}$ and vice versa. Hence, by Definition 1, we only need to prove that $\overline{G[V-PI]}$ and $\overline{G[V-PK]}$ are quasiperfect, which is true by the induction hypothesis and Definition 1. ∎ So we see that a quasiperfect graph has clique number equal to chromatic number, and that the complement of a quasiperfect graph is also a quasiperfect graph. These are the basic properties of perfect graphs that we wanted.
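Definition 1 is effectively a recursive test, so quasiperfectness can be checked directly on very small graphs. The following is a minimal brute-force sketch of such a check (all function names are mine; a graph is a dictionary mapping each vertex to its set of neighbors; the running time is exponential, so this is an illustration only, not an efficient algorithm):

```python
from itertools import combinations

def induced(adj, s):
    # subgraph induced by the vertex set s
    return {u: adj[u] & s for u in s}

def is_clique(adj, c):
    return all(v in adj[u] for u, v in combinations(c, 2))

def is_independent(adj, c):
    return all(v not in adj[u] for u, v in combinations(c, 2))

def maximum_sets(adj, test):
    # all largest vertex subsets passing `test` (maximum cliques / independent sets)
    nodes = list(adj)
    for k in range(len(nodes), 0, -1):
        found = [set(c) for c in combinations(nodes, k) if test(adj, c)]
        if found:
            return found
    return []

def has_prime_set(adj, test, targets):
    # search for a set passing `test` that meets every target set, whose vertices
    # all lie in some target set, and whose removal leaves a quasiperfect graph
    nodes = set(adj)
    covered = set().union(*targets)
    for k in range(1, len(nodes) + 1):
        for c in combinations(sorted(nodes, key=str), k):
            s = set(c)
            if (test(adj, c) and s <= covered
                    and all(t & s for t in targets)
                    and is_quasiperfect(induced(adj, nodes - s))):
                return True
    return False

def is_quasiperfect(adj):
    # brute-force test of Definition 1; feasible for tiny graphs only
    if not adj:
        return True  # K_0 is quasiperfect
    return (has_prime_set(adj, is_independent, maximum_sets(adj, is_clique))
            and has_prime_set(adj, is_clique, maximum_sets(adj, is_independent)))

c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(is_quasiperfect(c5))  # False: C_5 has clique number 2 but chromatic number 3
```

As expected from Theorem 1, the check fails for the five-cycle: $C_{5}$ has clique number $2$ but chromatic number $3$, so it cannot be quasiperfect.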
## 3\. Some families of graphs that are quasiperfect We will show that the quasiperfect graphs are a non-trivial generalization of the perfect graphs. ### 3.1. The perfect graphs We of course want every perfect graph to be quasiperfect. Because all induced subgraphs of a perfect graph are also perfect, we just need to prove that a perfect graph has a prime clique and a prime independent set, and the result follows by induction. By taking complements, we just need to find a prime clique. We are inspired by Lovász’s proof of the _weak perfect graph theorem_ [3, 4]. Let $G$ be a perfect graph. We replace each vertex $v$ of $G$ by a clique with $t_{v}$ vertices to create a new graph $G^{\prime}$, where $t_{v}$ is the number of maximum independent sets containing $v$; if $t_{v}=0$, we just delete $v$. Because all induced subgraphs of a perfect graph are also perfect and replacing a vertex by a clique creates a new perfect graph, $G^{\prime}$ is perfect, so it has clique number equal to chromatic number. By the construction of $G^{\prime}$, we can assign each vertex of $G^{\prime}$ to a maximum independent set of $G$, so we have an $I$-coloring of $G^{\prime}$, where $I$ is the number of maximum independent sets of $G$. We see that the set of all vertices colored by the same color is a maximum independent set of $G^{\prime}$, so this is an optimal coloring, and $\omega(G^{\prime})=\chi(G^{\prime})=I$. We take a maximum clique $K^{\prime}$ of size $I$ in $G^{\prime}$; then $K^{\prime}$ intersects all maximum independent sets of $G^{\prime}$. Let $PK$ be the set of all vertices $v$ of $G$ such that there is a vertex $v^{\prime}\in K^{\prime}$ created from $v$; then we easily have that $PK$ is a prime clique of $G$. So the quasiperfect graphs are a generalization of the perfect graphs, and we can use ideas from perfect graphs to work with quasiperfect graphs. ### 3.2. Graphs created from odd cycles Now we create an imperfect graph which is quasiperfect. The easiest way is to create one from an odd cycle $C_{n}$ with $n\geq 5$. The chromatic number of $C_{n}$ is $3$, so we need to add cliques of size $3$. Let $v_{1}=v_{n+1},v_{2},...,v_{n}$ be the vertices of $C_{n}$, with $v_{i}$ and $v_{i+1}$ joined. We add new vertices $w_{k_{1}},w_{k_{2}},...,w_{k_{o}}$, $1\leq k_{p}\leq n$, and join $w_{k_{p}}$ with $v_{k_{p}}$ and $v_{k_{p}+1}$ to create a new graph $G=(V,E)$. We can choose the prime clique $PK=\\{w_{k_{1}},v_{k_{1}},v_{k_{1}+1}\\}$; then it is easy to see that $G[V-PK]$ is a block graph (a graph in which every biconnected component is a clique), so it is perfect and also quasiperfect. Now we choose a prime independent set $PI$. There are two cases: $-$ If $o=n=2m+1$, we choose $PI=\\{w_{n},v_{2},v_{4},...,v_{2m}\\}$. It is easy to see that $PI$ is a prime independent set, and $G[V-PI]$ is a forest, so it is perfect and also quasiperfect. $-$ If $o<n$, we take the set $P=\\{v_{k_{1}},v_{k_{2}},...,v_{k_{o}}\\}$; if there exists $l$ such that $v_{l-1},v_{l},v_{l+1}\in P$ and $v_{l},v_{l+1}$ are joined but $v_{l},v_{l-1}$ are not, we delete $v_{l}$ from the set; in the end we have a prime independent set $PI$ (because $o<n$). It is easy to see that $G[V-PI]$ is a forest, so it is perfect and also quasiperfect. So $G$ is quasiperfect, and the quasiperfect graphs are a non-trivial generalization of the perfect graphs; a small computational check of this construction is sketched below.
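To make the construction of Subsection 3.2 concrete, here is a small self-contained sketch (the function names and the particular choice $n=7$, $k_{p}\in\\{1,3,5\\}$ are mine) that builds such a graph $G$ and confirms by brute force that $\omega(G)=\chi(G)=3$, as Theorem 1 requires of a quasiperfect graph:

```python
from itertools import combinations, product

def example_graph(n=7, ks=(1, 3, 5)):
    # odd cycle C_n on vertices 0..n-1, plus a vertex "w{k}" joined to
    # vertices k and k+1 (mod n) for each k in ks (the Section 3.2 construction)
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for k in ks:
        w = f"w{k}"
        adj[w] = {k, (k + 1) % n}
        adj[k].add(w)
        adj[(k + 1) % n].add(w)
    return adj

def clique_number(adj):
    nodes = list(adj)
    for size in range(len(nodes), 0, -1):
        for c in combinations(nodes, size):
            if all(v in adj[u] for u, v in combinations(c, 2)):
                return size
    return 0

def chromatic_number(adj):
    nodes = list(adj)
    for k in range(1, len(nodes) + 1):
        for colors in product(range(k), repeat=len(nodes)):
            coloring = dict(zip(nodes, colors))
            if all(coloring[u] != coloring[v] for u in nodes for v in adj[u]):
                return k
    return 0

g = example_graph()
print(clique_number(g), chromatic_number(g))  # expect: 3 3
```

This only verifies the weak-perfection property of Theorem 1, not quasiperfectness itself, which would require recursing through Definition 1 as in the earlier sketch.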
## 4\. Further remarks The trick of replacing a vertex by a clique does not work in the quasiperfect case. For example, add a vertex to $C_{5}$ and join it to two vertices that are joined, then replace all vertices of $C_{5}$ by large enough cliques of the same size; it is easy to see that the resulting graph does not have clique number equal to chromatic number. Moreover, we can prove that the trick only works for perfect graphs, so we cannot hope for any better generalization in this direction. It is a natural question to ask which induced subgraphs of a quasiperfect graph are quasiperfect. In the coloring in Theorem 1, we see that the prime independent set is the set of all vertices having the same color, so we can hope that, given a minimum coloring and a color $c$, removing all the vertices of color $c$ from a quasiperfect graph creates a new quasiperfect graph. There are some families of graphs based on algebraic objects that have clique number equal to chromatic number, such as the enhanced power graphs of groups and the annihilating-ideal graphs of commutative rings [5, 6, 7]. We hope that those families of graphs are also quasiperfect. I have shown above that there are quasiperfect graphs that have an odd cycle $C_{n}$ with $n\geq 5$ as an induced subgraph. It is a natural question to ask: for an arbitrary graph $G=(V,E)$, is there a quasiperfect graph $G^{\prime}=(V^{\prime},E^{\prime})$ that has $G$ as an induced subgraph, and, for a fixed value of $|V|$, how small can $|V^{\prime}|$ be? It is likely that the property of being quasiperfect is completely global, so by researching the quasiperfect graphs (or even classifying them), we can learn more about global properties of the perfect graphs and of other families of graphs that are quasiperfect. ## 5\. Acknowledgement I would like to thank Professor Peter J. Cameron for endorsing me to submit this paper. ## References * [1] C. Berge, _Färbung von Graphen, deren sämtliche bzw. deren ungerade Kreise starr sind_, Wiss. Z. Martin-Luther-Univ. Halle-Wittenberg Math.-Natur. Reihe (1961), 114–115. * [2] C. Berge. “Perfect graphs”. _Six Papers on Graph Theory_. Calcutta: Indian Statistical Institute, 1963, pp. 1–21. * [3] L. Lovász. A characterization of perfect graphs. _Journal of Combinatorial Theory, Series B_, 13:95–98, 1972. * [4] L. Lovász. Normal hypergraphs and the perfect graph conjecture. _Discrete Mathematics_, 2:253–267, 1972. * [5] J. Kumar and Parveen. The complement of enhanced power graph of a finite group. arXiv:2207.04641. * [6] P. J. Cameron and V. Phan. Enhanced power graphs of groups are weakly perfect. arXiv:2207.07156. * [7] R. Nikandish, M. Mehrara, and M. J. Nikmehr. Coloring in essential annihilating-ideal graphs of commutative rings. arXiv:2208.03751. * [8] R. Diestel. _Graph Theory_. Springer, New York, third edition, 2006. * [9] M. Chudnovsky, N. Robertson, P. Seymour, and R. Thomas. The strong perfect graph theorem. _Annals of Mathematics_, 164(1):51–229, 2006. * [10] G. Aalipour, S. Akbari, P. J. Cameron, R. Nikandish, and F. Shaveisi. On the structure of the power graph and the enhanced power graph of a group. _Electronic J. Combinatorics_, 24(3) (2017), P3.16.
# The Fact Selection Problem in LLM-Based Program Repair Nikhil Parasaram University College London <EMAIL_ADDRESS> Huijie Yan† University College London <EMAIL_ADDRESS> Boyu Yang† University College London <EMAIL_ADDRESS> Zineb Flahy University College London <EMAIL_ADDRESS> Abriele Qudsi University College London <EMAIL_ADDRESS> Damian Ziaber University College London <EMAIL_ADDRESS> Earl T. Barr University College London <EMAIL_ADDRESS> Sergey Mechtaev University College London <EMAIL_ADDRESS> ###### Abstract Recent research has shown that incorporating bug-related facts, such as stack traces and GitHub issues, into prompts enhances the bug-fixing capabilities of large language models (LLMs). Considering the ever-increasing context window of these models, a critical question arises: what and how many facts should be included in prompts to maximise the chance of correctly fixing bugs? To answer this question, we conducted a large-scale study, employing over 19K prompts featuring various combinations of seven diverse facts to rectify 314 bugs from open-source Python projects within the BugsInPy benchmark. Our findings revealed that each fact, ranging from simple syntactic details like code context to semantic information previously unexplored in the context of LLMs such as angelic values, is beneficial. Specifically, each fact aids in fixing some bugs that would remain unresolved or only be fixed with a low success rate without it. Importantly, we discovered that the effectiveness of program repair prompts is non-monotonic over the number of used facts; using too many facts leads to subpar outcomes. These insights led us to define the fact selection problem: determining the optimal set of facts for inclusion in a prompt to maximise LLM’s performance on a given task instance. We found that there is no one-size-fits-all set of facts for bug repair. Therefore, we developed a basic statistical model, named Maniple, which selects facts specific to a given bug to include in the prompt. This model significantly surpasses the performance of the best generic fact set. To underscore the significance of the fact selection problem, we benchmarked Maniple against the state-of-the-art zero-shot, non-conversational LLM-based bug repair methods. On our testing dataset of 157 bugs, Maniple repairs 88 bugs, 17% above the best configuration. ###### Index Terms: automated program repair, large language models, prompt engineering † These authors contributed equally to this work. ## I Conclusion In this paper, we explore the construction of effective prompts for LLM-based APR, leveraging facts extracted from the buggy program and external sources. Through a systematic investigation, we incorporate seven bug-relevant facts into the prompts, notably including angelic values, a factor previously not considered in this domain. Furthermore, we define the fact selection problem and demonstrate that a universally optimal set of facts for addressing various bugs does not exist. Building on this insight, we devise a bug-tailored fact selection strategy enhancing the effectiveness of APR.
# Predicting Individualized Effects of Internet-Based Treatment for Genito-Pelvic Pain/Penetration Disorder: Development and Internal Validation of a Multivariable Decision Tree Model Anna-Carlotta Zarski Friedrich-Alexander-Universität Erlangen-Nürnberg & Technical University of Munich <EMAIL_ADDRESS> & Mathias Harrer Technical University of Munich & Friedrich-Alexander-Universität Erlangen-Nürnberg <EMAIL_ADDRESS> & Paula Kuper Otto-von-Guericke University Magdeburg & Technical University of Munich <EMAIL_ADDRESS> & Antonia A. Sprenger Otto-von-Guericke University Magdeburg & Technical University of Munich <EMAIL_ADDRESS> & Matthias Berking Friedrich-Alexander-Universität Erlangen-Nürnberg <EMAIL_ADDRESS> & David Daniel Ebert Technical University of Munich <EMAIL_ADDRESS> ###### Abstract Genito-Pelvic Pain/Penetration Disorder (GPPPD) is a common disorder but is rarely treated in routine care. Previous research documents that GPPPD symptoms can be treated effectively using internet-based psychological interventions. However, non-response remains common for all state-of-the-art treatments, and it is unclear which patient groups are expected to benefit most from an internet-based intervention. Multivariable prediction models are increasingly used to identify predictors of heterogeneous treatment effects and to allocate treatments with the greatest expected benefits. In this study, we developed and internally validated a multivariable decision tree model that predicts effects of an internet-based treatment on a multidimensional composite score of GPPPD symptoms. Data from a randomized controlled trial comparing the internet-based intervention to a waitlist control group ($N$=200) were used to develop a decision tree model using model-based recursive partitioning. Model performance was assessed by examining the apparent and bootstrap bias-corrected performance. The final pruned decision tree consisted of one splitting variable, joint dyadic coping, based on which two response clusters emerged. No effect was found for patients with low dyadic coping ($n$=33; $d$=0.12; 95% CI: -0.57 to 0.80), while large effects ($d$=1.00; 95% CI: 0.68 to 1.32; $n$=167) are predicted for those with high dyadic coping at baseline. The bootstrap bias-corrected performance of the model was $R^{2}$=27.74% (RMSE=13.22). _Keywords:_ Genito-Pelvic Pain/Penetration Disorder $\cdot$ Heterogeneity of Treatment Effects $\cdot$ Clinical Prediction Model ## 1 Introduction Genito-Pelvic Pain/Penetration Disorder (GPPPD) is a sexual dysfunction, previously known as dyspareunia and vaginismus, that is defined by genito-pelvic pain, vaginal intercourse difficulties, pelvic floor muscle tension, and fear of pain or vaginal penetration (American Psychiatric Association, 2013). Overlapping conditions include Vulvodynia and Provoked Vestibulodynia (Bergeron et al., 2015). GPPPD adversely affects women’s sexuality, quality of life, physical and mental health, and relationships (Arnold et al., 2006; Khandker et al., 2011; Pâquet et al., 2016; Thomtén, 2014). GPPPD symptoms are experienced by 20.8% (1%-72%) of premenopausal women in the general population (McCool et al., 2016), with a 12-month prevalence rate ranging from 4.9 to 10.9 depending on the associated impairment (Briken et al., 2020). Multiple biopsychosocial factors influence the etiology of GPPPD, with psychological factors such as fear avoidance and pain catastrophizing playing an important role in its maintenance (Thomtén et al., 2014; Thomtén & Linton, 2013).
Psychological interventions have been shown to be effective in reducing genito-pelvic pain and associated distress in vaginismus and dyspareunia and in enabling sexual intercourse (Flanagan et al., 2015; Maseroli et al., 2018). However, willingness to participate in psychological interventions is low due to limited availability, fear of stigmatization, and feelings of shame (Bergvall & Himelein, 2014; Bond et al., 2015; Donaldson & Meana, 2011). Internet-based treatment approaches are considered particularly suitable for sexual dysfunctions because they make evidence-based treatment 1) more easily accessible at any time and from any location, 2) scalable in terms of the ratio of invested resources to people reached, and 3) anonymous, so that stigmatization can be reduced (Ebert et al., 2018). Meta-analytic results showed medium to large effects of internet-based treatment compared to control conditions with regard to female sexual functioning ($g$=0.59, 95% CI: 0.28–0.90, $I^2$=0%) and satisfaction ($g$=0.90, 95% CI: 0.02–1.79, $I^2$=82%) (Zarski et al., 2022). For GPPPD specifically, we evaluated an internet-based treatment which resulted in an improvement rate of 31% of women being able to successfully have sexual intercourse, compared to 13% in the control group, and small to medium effects on the other GPPPD core symptom dimensions of genito-pelvic pain and coital and noncoital fear of sexuality ($d$=0.40–0.74) (Zarski et al., 2021).

However, similar to other forms of treatment for various mental disorders (Kessler et al., 2017), not all participants benefit to the same extent from internet-based treatment for sexual dysfunctions. Although individuals with mental disorders who do not receive treatment are at higher risk for deterioration, about 25% of treated individuals show no changes, and 5–6% become worse (Rozental et al., 2017, 2019). Differences in treatment outcomes can be attributed to heterogeneity in personal and disorder characteristics (Barber, 2007; Delgadillo et al., 2016; DeRubeis et al., 2014). This heterogeneity may result in different needs with regard to treatment. In order to enable appropriate allocation of treatment modalities, as well as to adapt a chosen treatment to individual needs, these varying effects should be identified (Kraemer et al., 2002). Collecting a risk profile at baseline could allow different baseline conditions to be addressed in treatment, with the goal of promoting treatment success.

Few existing studies have identified promising effect modifiers of psychological treatment outcomes for GPPPD. Higher levels of pretreatment pain intensity and catastrophizing have been shown to be associated with poorer genito-pelvic pain outcomes after psychological treatment (Bergeron et al., 2008; Brotto et al., 2015, 2020; Desrochers et al., 2010). Higher relationship satisfaction, in turn, has been found to correlate with better treatment outcomes regarding sexual satisfaction and distress (Hummel et al., 2018; Stephenson et al., 2013). In contrast, one study found that high joint dyadic coping and a high evaluation of dyadic coping were associated with lower odds of sexual intercourse (Zarski et al., 2017). Results on relationship duration as a predictor of pain outcomes were mixed (Brotto et al., 2020; N. O. Rosen et al., 2021). Looking at sociodemographic variables, younger age (Brotto et al., 2020; Zarski et al., 2017) and lower levels of education (Zarski et al., 2017) have been associated with better treatment outcomes for GPPPD.
With regard to psychological variables, lower anxiety in women has been found to be associated with better pain outcomes after treatment (Brotto et al., 2020; N. O. Rosen et al., 2021). Additionally, women with higher levels of childhood maltreatment receiving CBT-based couples therapy had poorer outcomes in sexual satisfaction and sexual function at posttreatment compared to women in a lidocaine condition (Charbonneau-Lefebvre et al., 2022). Aside from some high-quality moderator analyses, however, most studies did not include (1) a specific a priori statement about the intention to test moderators, (2) evidence- or theory-based selection of moderators, (3) measurement of moderators prior to randomization, (4) adequate quality of measurement of the moderators, and (5) a specific test of the interaction between moderators and treatment (Pincus et al., 2011). Moreover, moderators were almost exclusively evaluated on a variable-by-variable basis, and more complex multivariate interactions were not taken into account. Modeling multivariate interactions within a joint prognostic model could better capture the heterogeneity of treatment effects across patients (Dahabreh et al., 2016; Kent et al., 2020). In this study, we therefore aim to develop and internally validate a multivariate prognostic model predicting individualized treatment effects of an internet-based treatment for GPPPD.

## 2 Methods

Where applicable, this article adheres to the "transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD)" statement (Collins et al., 2015).

### 2.1 Study Design

The data for this study came from a two-armed randomized controlled trial ($n_{\text{intervention}}$ = 100, $n_{\text{waitlist}}$ = 100) designed to evaluate the efficacy of an internet-based treatment for GPPPD in comparison to a waitlist control condition (study registration number: DRKS00010228; ethical approval by University of Erlangen-Nürnberg: no. 324_15B). Further information on the study and the intervention can be found in the published study protocol, a case study, and the efficacy evaluation paper (Zarski, Berking, & Ebert, 2018; Zarski, Berking, Hannig, et al., 2018; Zarski et al., 2021).

### 2.2 Study Inclusion Criteria and Process

Participants were women 18 years and older who had been unable to have sexual intercourse for the last six months or longer and who were in a heterosexual relationship. Additionally, pre-existing medical conditions causing GPPPD symptoms had to have been ruled out before enrollment. In order to participate in the study, individuals were required to have sufficient German language skills and internet access, and to provide informed consent. Those with 1) current or lifetime psychosis or dissociative symptoms, 2) present substance dependency or abuse, 3) present moderate or severe depression or bipolar disorder, 4) current or lifetime posttraumatic stress disorder or traumatization caused by sexual abuse, or 5) ongoing treatment of GPPPD were excluded from the study. After completion of the first assessment, a 1:1 randomization through an automated computer-based random integer generator (randlist) took place, and participants were allocated to either the intervention group (IG) or the waitlist control group (WCG). The research staff was blinded to the randomization until group allocation was successfully completed.
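The allocation step can be illustrated with a short sketch. The trial used an automated random integer generator (randlist); the Python stand-in below implements generic permuted-block 1:1 allocation. The block size of 4 is an assumption made for illustration only and is not a detail stated in the text.

```python
import random

def randomize_1to1(participant_ids, block_size=4, seed=42):
    """Permuted-block 1:1 allocation to intervention (IG) or waitlist (WCG)."""
    rng = random.Random(seed)
    allocation = {}
    for start in range(0, len(participant_ids), block_size):
        block = ["IG", "WCG"] * (block_size // 2)  # equal arms within each block
        rng.shuffle(block)
        for pid, arm in zip(participant_ids[start:start + block_size], block):
            allocation[pid] = arm
    return allocation

# Example with eight hypothetical participant IDs
print(randomize_1to1(["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]))
```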
### 2.3 Intervention

The 10-week online intervention consisted of 9 sessions in total, including an extra booster session which took place four weeks after the program ended. These sessions incorporated psychoeducation, communication exercises, non-judgmental awareness, body exposure and genital self-exploration, attention-focusing for pain management, cognitive restructuring, sensate focus, gradual exposure using fingers and dilators, sexual intercourse preparation exercises, and relapse prevention. Additionally, participants kept an online diary in which they monitored weekly transfer tasks as well as exercises in their daily life. They also received encouraging text messages, personalized written feedback on completed sessions, and practice reminders throughout the treatment program. Specifically, if a module had not been completed within seven days, a reminder from an eCoach was sent out.

### 2.4 Outcome measures

For this study, outcome data assessed in both the IG and WCG via self-report after completion of treatment/12 weeks after randomization (T2) were used. Potential moderators were assessed prior to randomization at baseline.

#### 2.4.1 Multidimensional composite primary outcome measure

As the primary outcome, we built a composite measure to comprise the core dimensions of GPPPD by aggregating scores of the following components: coital and noncoital penetration ability, genito-pelvic pain and interference of genital pain with sexual intercourse, fear of coitus and noncoital sexual activity, and sexual satisfaction. The Primary Endpoint Questionnaire (PEQ) was used to assess intercourse penetration behavior (1 item, scores: 0 [not attempted, or attempted but unsuccessful] and 1 [attempted and sometimes successful, or attempted and always successful]) and noncoital self-insertion behavior (3 items, $\alpha$=.71) (van Lankveld et al., 2006). Genito-pelvic pain (3 items, $\alpha$=.90) and sexual satisfaction (3 items, $\alpha$=.75) were measured using the respective subscales of the Female Sexual Functioning Index (FSFI) (Berner et al., 2004; R. Rosen et al., 2000). Interference of genital pain with sexual intercourse was assessed according to the Diagnostic Guidelines for the Assessment of Genito-Pelvic Pain/Penetration Disorder (3 items, $\alpha$=.60) (Binik, 2010), and fear of coitus (3 items, $\alpha$=.78) and of noncoital sexual activity (5 items, $\alpha$=.82) was assessed with the Fear of Sexuality Questionnaire (FSQ) (ter Kuile et al., 2007).

Comparable scaling among the included measures was achieved by 1) harmonizing measures so that higher values indicated lower GPPPD symptom severity (i.e., better outcomes), 2) building the same number of quantiles over each scale, and 3) allocating scores in the same quantile to the same value. For the dichotomous coital penetration scale, no coital penetration was assigned to the lowest quantile and coital penetration to the highest. A solution with eleven quantiles was chosen since the primary outcome should sufficiently differentiate between participants. At the same time, the chosen number of quantiles should neither overly exceed the smallest range of the scales included nor disproportionately increase the impact of the dichotomous coital penetration scale. When computing the quantiles, the skewness, mean, and standard deviation of the distribution of each scale were taken into account using the R package sn (Azzalini, 2021). The new scores of each scale were added to a sum score. Figure 1 shows the density plot of the aggregated outcome.
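The harmonization just described can be made concrete with a simplified sketch. The authors accounted for skewness by fitting skew-normal distributions with the R package sn; the version below instead uses plain empirical quantile bins, and all data and variable names are hypothetical.

```python
import numpy as np
import pandas as pd

def harmonize(scores: pd.Series, higher_is_better: bool, n_bins: int = 11) -> pd.Series:
    """Map a scale onto 0..n_bins-1 quantile bins, oriented so higher = better."""
    s = scores if higher_is_better else -scores       # step 1: harmonize direction
    # step 2: same number of quantile bins over each scale (ranks break ties)
    return pd.qcut(s.rank(method="first"), q=n_bins, labels=False)

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pain": rng.normal(3.0, 1.0, 200),           # higher = worse, so flipped below
    "satisfaction": rng.normal(3.5, 1.0, 200),   # higher = better
})
composite = (harmonize(df["pain"], higher_is_better=False)
             + harmonize(df["satisfaction"], higher_is_better=True))
# Dichotomous coital penetration maps to the lowest (0) or highest (10) bin:
coital = pd.Series(rng.integers(0, 2, 200))
composite += coital * 10
print(composite.describe())
```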
As sensitivity analyses, the aggregated outcome was recalculated several times according to the leave-one-out principle, i.e., one scale score at a time was excluded when building the outcome. Respective density plots and histograms are shown in Figures 4 and 5 in the Appendix.

Figure 1: _Density plot of the aggregated outcome_

_Note._ Outcome at post-assessment based on imputed data.

#### 2.4.2 Participant-Level Moderators

In order to examine the potential moderating role of various baseline variables relevant to GPPPD on the effect of the intervention, 39 potential moderator variables were included: nine sociodemographic variables, i.e., age (years), nationality (German/non-German), level of education (low/middle/high), married (_yes/no_), children (_yes/no_), previous treatment for GPPPD (_yes/no_), experience with psychotherapy (_yes/no_), online training experience (_yes/no_); 14 variables related to sexual function, i.e., duration of GPPPD (years), lifelong GPPPD (_yes/no_), sexual intercourse attempts in the past 6 months, experience of sexual abuse (_yes/no_), control (four items, $\alpha$=.72), catastrophic and pain (five items, $\alpha$=.73), self-image (six items, $\alpha$=.81), genital incompatibility (two items, $\alpha$=.66), and positive (five items, $\alpha$=.80) cognitions regarding vaginal penetration assessed by the Vaginal Penetration Cognition Questionnaire (VPCQ) (Klaassen & ter Kuile, 2009), sexual desire (two items, $\alpha$=.84), sexual arousal (four items, $\alpha$=.93), lubrication (four items, $\alpha$=.94), and orgasm (three items, $\alpha$=.93) assessed by the FSFI (Berner et al., 2004; R. Rosen et al., 2000), and noncoital insertion by the partner (three items, $\alpha$=.60) assessed by the Primary Endpoint Questionnaire (PEQ) (van Lankveld et al., 2006); 5 partnership-related variables, i.e., duration of partnership (years), delegated (two items, $\alpha$=.92), joint (five items, $\alpha$=.76), and evaluation of (two items, $\alpha$=.87) dyadic coping assessed by the Dyadic Coping Inventory (DCI) (Bodenmann, 2000; Ledermann et al., 2010), satisfaction with partnership (9 items, $\alpha$=.79) and partnership happiness (1 item) assessed by the Partnership Questionnaire Short Form (PFB-k) (Kliem et al., 2012); 4 mental health variables, i.e., self-esteem assessed by the Self-Esteem Scale (SES; ten items, $\alpha$=.90) (von Collani & Herzberg, 2003), generalized anxiety disorder symptoms assessed by the Generalized Anxiety Disorder Assessment (GAD-7; 7 items, $\alpha$=.81) (Spitzer et al., 2006), trait anxiety assessed by the State Trait Anxiety Inventory (STAI-T; 20 items, $\alpha$=.89) (Laux et al., 1981; Spielberger et al., 1970), and well-being assessed by the Well-Being Index (WHO-5; $\alpha$=.84; Brähler et al., 2007); as well as the 7 GPPPD-related items used to build the aggregated outcome.

### 2.5 Imputation

Missing data at baseline and post-test were imputed under the "missing at random" assumption using random forest methodology (Stekhoven & Bühlmann, 2012). The R package missForest (Stekhoven, 2013) implements a non-parametric missing value imputation method that can handle mixed-type data on the basis of Breiman’s random forests (Breiman, 2001). The imputation procedure consists of the following principal steps: First, missing values are estimated using a simple procedure (e.g., mean imputation) and the variables are sorted in ascending order according to the number of missing values.
Subsequently, a random forest is fitted on the observed parts of the dataset to predict the missing values variable by variable. This is repeated iteratively until a stopping criterion is met (Stekhoven & Bühlmann, 2012). The imputation model included all demographic and scale-level questionnaire scores collected at baseline. Additionally, variables at post-test used to build the primary outcome were included in the model. The number of trees in each forest was set to 100.

### 2.6 Decision Tree Model

Model-based recursive partitioning (MOB) (Zeileis et al., 2008) was used to examine potential treatment moderators using the R partykit package (Hothorn et al., 2015). This method can be applied when it is assumed that a single global model does not fit the data well. In such a case, MOB allows the treatment effect to be tested for heterogeneity with respect to moderator variables (via parameter stability tests). This results in a tree in which each node is associated with a local model consisting of subgroups of patients with similar model trends (Zeileis et al., 2008).

### 2.7 Data Analysis

#### 2.7.1 Preselection of Moderator Candidates

Before starting MOB, potential moderator candidates are preselected in order to reduce the number of variables in the final analysis model, resulting in a more parsimonious model. This was done using model-based random forest analysis (Garge et al., 2013). It allows one to calculate variable permutation importance for the moderator variables through random forest methodology by constructing multiple ($n$=300) model-based trees (Garge et al., 2013). For each tree, a random subset of the splitting variables is sampled to construct the tree, resulting in more stable and less sample-specific predictions. All baseline variables except for the outcome and predictor variables were selected as potential moderator variables. Potential moderator variables were ranked in order of their contribution to the production of accurate predictions (Garge et al., 2013).

#### 2.7.2 Model-Based Recursive Partitioning

For the subsequent model-based tree analysis, variables with a positive variable importance were included as partitioning variables. The model was a linear model in which the aggregated outcome at post-treatment was regressed on the treatment indicator variable and the aggregated GPPPD symptom severity outcome at baseline. The algorithm consists of several steps that are performed iteratively over the model until no significant parameter instability with respect to the moderators can be determined anymore (Zeileis et al., 2008): 1) the model is fitted to the observations in the current node; 2) parameter instability of the treatment effect is assessed with respect to the covariates; 3) the split point that locally optimizes the objective function is computed; and 4) the current node is split into child nodes. By separating variable and cut-point selection, an unbiased variable selection is enabled, unlike in other tree-based methods (Fokkema et al., 2021). To avoid overfitting, the significance level for parameter stability tests was set to $\alpha$=0.05 and $P$ values were Bonferroni-corrected. The minimum number of observations in each terminal node was set to 10, i.e., the default of ten times the number of estimated parameters divided by the number of responses per observation (Zeileis et al., 2008). Effect sizes were calculated in each node as Cohen’s $d$ ($d$=0.2 small, $d$=0.5 medium, and $d$=0.8 large effects) with 95% confidence intervals (CIs); a minimal sketch of this computation is given below.
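The node-level effect size computation can be sketched as follows. The data here are synthetic, and the normal-theory confidence interval is one common approximation; the paper does not state which CI method was used.

```python
import numpy as np

def cohens_d_ci(treat: np.ndarray, control: np.ndarray):
    """Cohen's d with an approximate 95% CI (large-sample standard error)."""
    n1, n2 = len(treat), len(control)
    # Pooled standard deviation across the two groups in the node
    sp = np.sqrt(((n1 - 1) * treat.var(ddof=1) + (n2 - 1) * control.var(ddof=1))
                 / (n1 + n2 - 2))
    d = (treat.mean() - control.mean()) / sp
    # Standard error of d (Hedges & Olkin approximation)
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - 1.96 * se, d + 1.96 * se

# Synthetic post-treatment composite scores for one terminal node
rng = np.random.default_rng(1)
d, lo, hi = cohens_d_ci(rng.normal(60, 13, 84), rng.normal(47, 13, 83))
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```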
#### 2.7.3 Model Performance

Model performance was assessed using the (adjusted) $R^{2}$ and the root mean squared error (RMSE) to capture the (adjusted) proportion of variance explained by the decision tree model and the root mean squared difference between predicted and observed values. In line with recommendations by Moons et al. (2019) and Steyerberg (2019), bootstrap bias correction was applied to internally validate the decision tree model. Via bootstrap bias correction, the proportion of excess performance that is due to overfitting and does not reflect the true population-based performance is quantified.

## 3 Results

### 3.1 Participant Characteristics

The women participating in this study were on average 28.75 (SD=8.89) years old, and over half had a university degree (53.50%, $n$=107). All women were in a partnership and had not been able to have sexual intercourse with vaginal penetration due to GPPPD symptoms for at least the last 6 months prior to study participation (see Table 1 and Table 2) (Zarski et al., 2021). At post-treatment, 78% of participants in the IG and 92% in the WCG completed the assessment.

Table 1: Participant characteristics and potential moderators at baseline.

Potential moderators | IG ($n=100$) | WCG ($n=100$) | Total ($N=200$)
---|---|---|---
 | | |
_Sociodemographic Variables_ | | |
Age (years), $M$ (SD), range | 29.46 (9.82) | 28.04 (7.84) | 28.75 (8.89)
German nationality, $n$ (%) | 89 (89.00) | 93 (93.00) | 182 (91.00)
Level of education, $n$ (%) | | |
- Low | 0 | 4 (4.00) | 4 (2.00)
- Middle | 42 (42.00) | 47 (47.00) | 89 (44.50)
- High | 58 (58.00) | 49 (49.00) | 107 (53.50)
Married, $n$ (%) | 27 (27.00) | 24 (24.00) | 51 (25.50)
Have children, $n$ (%) | 13 (13.00) | 3 (3.00) | 16 (8.00)
Previous GPPPD treatment, $n$ (%) | 32 (32.00) | 32 (32.00) | 64 (32.00)
Psychotherapy experience, $n$ (%) | 36 (36.00) | 36 (36.00) | 72 (36.00)
Online training experience, $n$ (%) | 10 (10.00) | 4 (4.00) | 14 (7.00)
 | | |
_Sexual Function Variables_ | | |
Duration of GPPPD (years), $M$ (SD)${}^{\text{a}}$ | 7.90 (7.15) | 8.13 (7.00) | 8.02 (7.06)
Lifelong GPPPD, $n$ (%) | 32 (32.00) | 44 (44.00) | 76 (38.00)
Intercourse attempts past 6 months, $M$ (SD) | 6.55 (11.29) | 7.34 (20.87) | 6.95 (16.74)
Sexual abuse, $n$ (%)${}^{\text{b}}$ | 10 (10.99) | 9 (9.28) | 19 (10.11)
Control cognitions, $M$ (SD) | 2.62 (1.43) | 2.84 (1.46) | 2.73 (1.45)
Catastrophic and pain cognitions, $M$ (SD) | 4.16 (1.18) | 4.09 (1.30) | 4.12 (1.24)
Self-image cognitions, $M$ (SD) | 3.28 (1.37) | 3.47 (1.38) | 3.38 (1.37)
Genital incompatibility cognitions, $M$ (SD) | 3.05 (1.78) | 3.27 (1.77) | 3.16 (1.77)
Positive cognitions, $M$ (SD) | 2.26 (1.11) | 2.23 (1.22) | 2.25 (1.16)
Sexual desire, $M$ (SD) | 3.26 (1.06) | 3.35 (1.17) | 3.31 (1.11)
Arousal, $M$ (SD) | 4.03 (1.49) | 3.89 (1.55) | 3.96 (1.52)
Lubrication, $M$ (SD) | 3.99 (1.63) | 3.91 (1.73) | 3.95 (1.68)
Orgasm, $M$ (SD) | 3.96 (1.81) | 3.65 (1.82) | 3.81 (1.82)
Noncoital insertion by the partner, $M$ (SD) | 0.52 (0.64) | 0.49 (0.63) | 0.50 (0.64)
 | | |
_Partnership-Related Variables_ | | |
Duration of partnership (years), $M$ (SD) | 6.72 (7.09) | 5.62 (4.59) | 6.17 (5.98)
Delegated dyadic coping, $M$ (SD) | 7.14 (2.02) | 7.00 (1.87) | 7.07 (1.94)
Joint dyadic coping, $M$ (SD) | 17.00 (4.04) | 17.26 (3.64) | 17.13 (3.84)
Evaluation of dyadic coping, $M$ (SD) | 7.37 (1.94) | 7.56 (1.63) | 7.47 (1.79)
Relationship quality, $M$ (SD) | 21.20 (4.21) | 20.86 (4.40) | 21.03 (4.30)
Happiness in the relationship, $M$ (SD) | 3.73 (0.97) | 3.64 (1.16) | 3.68 (1.07)
 | | |
_Mental Health Variables_ | | |
Self-esteem, $M$ (SD) | 20.99 (5.71) | 20.55 (5.96) | 20.77 (5.82)
Generalized anxiety, $M$ (SD) | 6.94 (4.05) | 7.65 (4.05) | 7.30 (4.06)
Trait anxiety, $M$ (SD) | 50.03 (13.25) | 51.83 (14.47) | 50.93 (13.87)
Well-being, $M$ (SD) | 53.08 (17.18) | 47.48 (17.40) | 50.28 (17.47)

* • _Note._ ${}^{\text{a}}$refers to a subsample of $n$ = 199, ${}^{\text{b}}$refers to a subsample of $n$ = 188.

Table 2: GPPPD-related variables at baseline and post-treatment.

GPPPD-Related Outcomes | IG ($n=100$) | WCG ($n=100$)
---|---|---
Intercourse penetration behavior T1, $n$ (%) | 0 | 0
Intercourse penetration behavior T2, $n$ (%) | 31 (31.00) | 13 (13.00)
Noncoital self-insertion T1, $M$ (SD) | 1.11 (0.93) | 1.11 (0.92)
Noncoital self-insertion T2, $M$ (SD) | 1.82 (0.85) | 1.17 (0.91)
Genito-pelvic pain T1, $M$ (SD) | 1.61 (1.00) | 1.64 (1.06)
Genito-pelvic pain T2, $M$ (SD) | 2.82 (1.36) | 1.84 (1.26)
Genital pain interference T1, $M$ (SD) | 4.63 (0.45) | 4.63 (0.56)
Genital pain interference T2, $M$ (SD) | 3.72 (0.97) | 4.37 (0.72)
Coital fear T1, $M$ (SD) | 11.03 (2.86) | 11.12 (3.37)
Coital fear T2, $M$ (SD) | 8.50 (3.20) | 9.93 (3.43)
Noncoital fear T1, $M$ (SD) | 11.76 (4.39) | 11.42 (4.09)
Noncoital fear T2, $M$ (SD) | 10.09 (3.55) | 11.70 (4.51)
Sexual Satisfaction T1, $M$ (SD) | 3.35 (1.37) | 3.65 (1.33)
Sexual Satisfaction T2, $M$ (SD) | 3.89 (1.12) | 3.57 (1.36)

* • _Note._ T1 = baseline, T2 = post-treatment, $n$ = sample size, $M$ = mean, SD = standard deviation.

### 3.2 Decision Tree Model

The resulting regression-based tree depicts scatter plots for GPPPD symptom severity as the aggregated outcome at baseline and at post-treatment, as well as box-and-whisker plots by treatment group with GPPPD symptom severity at post-treatment as the outcome measure. The data are partitioned at a joint dyadic coping index of 13, which corresponds roughly to one standard deviation below the sample mean ($M$ = 17.13; SD = 3.84).

### 3.3 Preselection of Moderator Candidates

Three potential moderators, namely noncoital insertion by the partner, self-esteem, and interference of genital pain with sexual intercourse, showed a negative variable importance and were thus excluded from the pool of partitioning variables. Variable importance was highest for joint dyadic coping, with a score above 1.0. Figure 6 in the Appendix illustrates the variable permutation importance of the potential moderator variables using model-based random forest analysis.

### 3.4 Terminal Node Models

The final model-based tree is displayed in Figure 2 and the terminal-node-specific regression coefficients in Table 3. For women with low joint dyadic coping (Node 2; $N$ = 33), the effect of the intervention on GPPPD symptom severity was not statistically significant ($b$ = 4.08, $p$ = 0.139), while higher GPPPD symptom severity at baseline was significantly associated with higher GPPPD symptom severity scores at post-treatment ($b$ = 0.88, $p<$0.001). In contrast, women with average-to-high joint dyadic coping (Node 3), a group comprising more than three-quarters of the participants ($N$ = 167), profited significantly from the intervention ($b$ = 13.24, $p<$0.001). Again, higher aggregated outcome scores in GPPPD symptom severity at baseline were significantly associated with higher aggregated outcome scores at post-treatment ($b$ = 0.43, $p<$0.001).
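Applying the fitted tree to a new patient takes only a few lines. The sketch below is an illustrative reconstruction from Table 3 and Section 3.2, not code from the paper; the function name is hypothetical, and since the text does not state on which side of the split the boundary value 13 falls, "<" is assumed.

```python
def predicted_effect(joint_dyadic_coping: float) -> float:
    """Predicted treatment benefit (composite-scale points) from the pruned tree."""
    # Single split at a joint dyadic coping index of 13; node coefficients
    # are the group effects reported in Table 3.
    if joint_dyadic_coping < 13:
        return 4.08   # Node 2: low dyadic coping (not significant, p = .139)
    return 13.24      # Node 3: average-to-high dyadic coping (p < .001)

print(predicted_effect(11.0))  # -> 4.08
print(predicted_effect(17.0))  # -> 13.24
```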
Figure 2: _Decision tree model; based on imputed data._

Accordingly, standardized mean treatment effects were highest for women with average-to-high joint dyadic coping (Cohen’s $d$ = 1.00, 95% CI: 0.68–1.32), with a proportion of explained variance of $R^{2}$ = 30.36% (adjusted $R^{2}$ = 29.51%). In women with low joint dyadic coping, the standardized mean treatment effect size was Cohen’s $d$ = 0.12 (95% CI: $-$0.57 to 0.80). A forest plot of the subgroup-conditional effects is presented in Figure 3. Taken together, the model suggests that women with average-to-high joint dyadic coping will significantly benefit from the intervention with respect to their GPPPD symptom severity, while women with low joint dyadic coping will probably not benefit. Assessing joint dyadic coping at an early stage might thus be helpful for beneficial treatment assignment.

Figure 3: _Differential treatment effects in each terminal node resulting from the model-based decision tree_

_Note._ $d$ denotes Cohen’s $d$ in each terminal node; SE denotes the respective standard errors.

### 3.5 Model Performance

The proportion of variance explained by the decision tree model was $R^{2}$ = 38.64% (adjusted $R^{2}$ = 38.33%). After bootstrap bias correction, the proportion of explained variance was reduced by about 10 percentage points ($R^{2}$ = 27.74%; adjusted $R^{2}$ = 27.38%). The root mean squared difference between predicted and observed values was RMSE = 11.93. After bootstrap bias correction, this difference was slightly increased (RMSE = 13.22). Both measures indicated substantial predictive performance of the decision tree model even after bootstrap bias correction.

### 3.6 Planned Further Analyses

To draw firm conclusions about this prediction model of individualized effects of internet-based treatment for GPPPD and to derive evidence-based practical implications, external validation of the multivariable decision tree model is needed. Therefore, the second aim of this study is to validate the tree-based prediction model in a second, independent randomized controlled trial on the evaluation of internet-based treatment for GPPPD.

## Funding

The project was supported by the "Gender & Diversity" funding of the Friedrich-Alexander-Universität Erlangen-Nürnberg.

## Supplementary Information

Data are available upon reasonable request from the first author.

## References

* American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders (5th ed.). American Psychiatric Association. * Arnold, L. D., Bachmann, G. A., Rosen, R., Kelly, S., & Rhoads, G. G. (2006). Vulvodynia: characteristics and associations with comorbidities and quality of life. _Obstetrics and Gynecology, 107_(3), 617–624. https://doi.org/10.1097/01.AOG.0000199951.26822.27 * Azzalini, A. (2021). _sn: The Skew-Normal and Related Distributions Such as the Skew-t and the SUN_. R package version 2.0.0. * Barber, J. P. (2007). Issues and findings in investigating predictors of psychotherapy outcome: Introduction to the special section. In _Psychotherapy Research_ (Vol. 17, Issue 2, pp. 131–136). https://doi.org/10.1080/10503300601175545 * Bergeron, S., Corsini-Munt, S., Aerts, L., Rancourt, K., & Rosen, N. O. (2015). Female sexual pain disorders: A review of the literature on etiology and treatment. _Current Sexual Health Reports, 1–11_. https://doi.org/10.1007/s11930-015-0053-y * Bergeron, S., Khalifé, S., Glazer, H. I., & Binik, Y. M. (2008). Surgical and behavioral treatments for vestibulodynia.
Two-and-one-half year follow-up and predictors of outcome. _Obstetrics and Gynecology, 111_(1), 159–166. https://doi.org/10.1097/01.AOG.0000295864.76032.a7 * Bergvall, L., & Himelein, M. J. (2014). Attitudes toward seeking help for sexual dysfunctions among US and Swedish college students. _Sexual and Relationship Therapy, 29_(2), 215–228. https://doi.org/10.1080/14681994.2013.860222 * Berner, M. M., Kriston, L., Zahradnik, H.-P., Härter, M., & Rohde, A. (2004). Überprüfung der Gültigkeit und Zuverlässigkeit des Deutschen Female Sexual Function Index (FSFI-d). _Geburtshilfe Und Frauenheilkunde, 64_(3), 293–303. https://doi.org/10.1055/s-2004-815815 * Binik, Y. M. (2010). The DSM diagnostic criteria for vaginismus. _Archives of Sexual Behavior, 39_(2), 278–291. https://doi.org/10.1007/s10508-009-9560-0 * Bodenmann, G. (2000). _Stress und Coping bei Paaren_ [Stress and coping in couples]. Hogrefe. * Bond, K., Mpofu, E., & Millington, M. (2015). Treating women with genito-pelvic pain/penetration disorder: Influences of patient agendas on help-seeking. _Journal of Family Medicine, 2_(4), 1–8. issn: 2380-0658 * Brähler, E., Mühlan, H., Albani, C., & Schmidt, S. (2007). Teststatistische Prüfung und Normierung der Deutschen Versionen des EUROHIS-QOL Lebensqualität-index und des WHO-5 Wohlbefindens-index. _Diagnostica, 53_(2), 83–96. https://doi.org/10.1026/0012-1924.53.2.83 * Breiman, L. (2001). Random forests. _Machine Learning, 45_(1), 5–32. * Briken, P., Matthiesen, S., Pietras, L., Wiessner, C., Klein, V., Reed, G. M., & Dekker, A. (2020). Prävalenzschätzungen sexueller Dysfunktionen anhand der neuen ICD-11-Leitlinien. _Deutsches Ärzteblatt International, 117_(39), 653–658. https://doi.org/10.3238/arztebl.2020.0653 * Brotto, L. A., Basson, R., Smith, K. B., Driscoll, M., & Sadownik, L. (2015). Mindfulness-based Group Therapy for Women with Provoked Vestibulodynia. _Mindfulness, 6_(3), 417–432. https://doi.org/10.1007/s12671-013-0273-z * Brotto, L. A., Zdaniuk, B., Rietchel, L., Basson, R., & Bergeron, S. (2020). Moderators of Improvement from Mindfulness-Based vs Traditional Cognitive Behavioral Therapy for the Treatment of Provoked Vestibulodynia. _Journal of Sexual Medicine, 17_(11), 2247–2259. https://doi.org/10.1016/j.jsxm.2020.07.080 * Charbonneau-Lefebvre, V., Vaillancourt-Morel, M. P., Rosen, N. O., Steben, M., & Bergeron, S. (2022). Attachment and Childhood Maltreatment as Moderators of Treatment Outcome in a Randomized Clinical Trial for Provoked Vestibulodynia. _Journal of Sexual Medicine, 19_(3), 479–495. https://doi.org/10.1016/j.jsxm.2021.12.013 * Collins, G. S., Reitsma, J. B., Altman, D. G., & Moons, K. G. M. (2015). Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD Statement. _BMC Medicine, 13_(1). https://doi.org/10.1186/s12916-014-0241-z * Dahabreh, I. J., Hayward, R., & Kent, D. M. (2016). Using group data to treat individuals: Understanding heterogeneous treatment effects in the age of precision medicine and patient-centred evidence. _International Journal of Epidemiology, 45_(6), 2184–2193. https://doi.org/10.1093/ije/dyw125 * Delgadillo, J., Moreea, O., & Lutz, W. (2016). Different people respond differently to therapy: A demonstration using patient profiling and risk stratification. _Behaviour Research and Therapy, 79_ , 15–22. https://doi.org/10.1016/j.brat.2016.02.003 * DeRubeis, R. J., Gelfand, L. A., German, R. E., Fournier, J. C., & Forand, N. R. (2014).
Understanding processes of change: How some patients reveal more than others—and some groups of therapists less—about what matters in psychotherapy. _Psychotherapy Research, 24_(3), 419–428. https://doi.org/10.1080/10503307.2013.838654 * Desrochers, G., Bergeron, S., Khalifé, S., Dupuis, M. J., & Jodoin, M. (2010). Provoked vestibulodynia: Psychological predictors of topical and cognitive-behavioral treatment outcome. _Behaviour Research and Therapy, 48_(2), 106–115. https://doi.org/10.1016/j.brat.2009.09.014 * Donaldson, R. L., & Meana, M. (2011). Early dyspareunia experience in young women: Confusion, consequences, and help-seeking barriers. _Journal of Sexual Medicine, 8_(3), 814–823. https://doi.org/10.1111/j.1743-6109.2010.02150.x * Ebert, D. D., van Daele, T., Nordgreen, T., Karekla, M., Compare, A., Zarbo, C., Brugnera, A., Øverland, S., Trebbi, G., Jensen, K. L., Kaehlke, F., Baumeister, H., & Taylor, J. (2018). Internet and mobile-based psychological interventions: Applications, efficacy and potential for improving mental health. A report of the EFPA E-Health Taskforce. _European Psychologist, 23_(2), 167–187. https://doi.org/10.1027/1016-9040/a000346 * Flanagan, E., Herron, K. A., O’Driscoll, C. C., & de Williams, A. C. C. C. (2015). Psychological treatment for vaginal pain: Does etiology matter? A systematic review and meta-analysis. _Journal of Sexual Medicine, 12_(1), 3–16. https://doi.org/10.1111/jsm.12717 * Fokkema, M., Edbrooke-Childs, J., & Wolpert, M. (2021). Generalized linear mixed-model (GLMM) trees: A flexible decision-tree method for multilevel and longitudinal data. _Psychotherapy Research, 31_(3), 329–341. https://doi.org/10.1080/10503307.2020.1785037 * Garge, N. R., Bobashev, G., & Eggleston, B. (2013). _Random forest methodology for model-based recursive partitioning: the mobForest package for R_. * Hothorn, T., Zeileis, A., Cheng, E., & Ong, S. (2015). partykit: A Modular Toolkit for Recursive Partytioning in R. In _Journal of Machine Learning Research_ (Vol. 16). * Hummel, S. B., van Lankveld, J. J., Oldenburg, H. S. A., Hahn, D. E. E., Broomans, E., & Aaronson, N. K. (2018). Internet-based Cognitive Behavioral Therapy for DSM-IV Sexual Dysfunctions in Breast Cancer Survivors: Predictors of Treatment Response. _International Journal of Sexual Health, 30_(3), 281–294. https://doi.org/10.1080/19317611.2018.1491925 * Kent, D. M., van Klaveren, D., Paulus, J. K., D’Agostino, R., Goodman, S., Hayward, R., Ioannidis, J. P. A., Patrick-Lake, B., Morton, S., Pencina, M., Raman, G., Ross, J. S., Selker, H. P., Varadhan, R., Vickers, A., Wong, J. B., & Steyerberg, E. W. (2020). The Predictive Approaches to Treatment effect Heterogeneity (PATH) statement: Explanation and elaboration. _Annals of Internal Medicine, 172_(1), W1–W25. https://doi.org/10.7326/M18-3668 * Kessler, R. C., van Loo, H. M., Wardenaar, K. J., Bossarte, R. M., Brenner, L. A., Ebert, D. D., de Jonge, P., Nierenberg, A. A., Rosellini, A. J., Sampson, N. A., Schoevers, R. A., Wilcox, M. A., & Zaslavsky, A. M. (2017). Using patient self-reports to study heterogeneity of treatment effects in major depressive disorder. _Epidemiology and Psychiatric Sciences, 26_(1), 22–36. https://doi.org/10.1017/S2045796016000020 * Khandker, M., Brady, S. S., Vitonis, A. F., MacLehose, R. F., Stewart, E. G., & Harlow, B. L. (2011). The Influence of depression and anxiety on risk of adult onset vulvodynia. _Journal of Women’s Health, 20_(10), 1445–1451. https://doi.org/10.1089/jwh.2010.2661 * Klaassen, M., & ter Kuile, M. M.
(2009). Development and initial validation of the vaginal penetration cognition questionnaire (VPCQ) in a sample of women with vaginismus and dyspareunia. _J Sex Med, 6_(6), 1617–1627. https://doi.org/10.1111/j.1743-6109.2009.01217.x * Kliem, S., Job, A.-K., Kröger, C., Bodenmann, G., Stöbel-Richter, Y., Hahlweg, K., & Brähler, E. (2012). Entwicklung und Normierung einer Kurzform des Partnerschaftsfragebogens (PFB-K) an einer repräsentativen deutschen Stichprobe. _Zeitschrift Für Klinische Psychologie Und Psychotherapie, 41_(2), 81–89. https://doi.org/10.1026/1616-3443/a000135 * Laux, L., Glanzmann, P., Schaffner, P., & Spielberger, C. D. (1981). Das State-Trait-Angstinventar (Testmappe mit Handanweisung, Fragebogen STAI-G Form X 1 und Fragebogen STAI-G Form X 2). Beltz. * Ledermann, T., Bodenmann, G., Gagliardi, S., Charvoz, L., Verardi, S., Rossier, J., Bertoni, A., & Iafrate, R. (2010). Psychometrics of the dyadic coping inventory in three language groups. _Swiss Journal of Psychology, 69_(4), 201–212. https://doi.org/10.1024/1421-0185/a000024 * Maseroli, E., Scavello, I., Campone, B., Di Stasi, V., Cipriani, S., Felciai, F., Camartini, V., Magini, A., Castellini, G., Ricca, V., Maggi, M., & Vignozzi, L. (2018). Psychosexual Correlates of Unwanted Sexual Experiences in Women Consulting for Female Sexual Dysfunction According to Their Timing Across the Life Span. _Journal of Sexual Medicine, 15_(12), 1739–1751. https://doi.org/10.1016/j.jsxm.2018.10.004 * McCool, M. E., Zuelke, A., Theurich, M. A., Knuettel, H., Ricci, C., & Apfelbacher, C. (2016). Prevalence of female sexual dysfunction among premenopausal women: A systematic review and meta-analysis of observational studies. _Sexual Medicine Reviews, 4_(3), 197–212. https://doi.org/10.1016/j.sxmr.2016.03.002 * Moons, K. G. M., Wolff, R. F., Riley, R. D., Whiting, P. F., Westwood, M., Collins, G. S., Reitsma, J. B., Kleijnen, J., & Mallett, S. (2019). PROBAST: A tool to assess risk of bias and applicability of prediction model studies: Explanation and elaboration. In _Annals of Internal Medicine_ (Vol. 170, Issue 1, pp. W1–W33). American College of Physicians. https://doi.org/10.7326/M18-1377 * Pâquet, M., Bois, K., Rosen, N. O., Mayrand, M.-H., Charbonneau-Lefebvre, V., & Bergeron, S. (2016). Why us? Perceived injustice is associated with more sexual and psychological distress in couples coping with genito-pelvic pain. _The Journal of Sexual Medicine, 13_(1), 79–87. https://doi.org/10.1016/j.jsxm.2015.11.007 * Pincus, T., Miles, C., Froud, R., Underwood, M., Carnes, D., & Taylor, S. J. (2011). Methodological criteria for the assessment of moderators in systematic reviews of randomised controlled trials: A consensus study. _BMC Medical Research Methodology, 11_. https://doi.org/10.1186/1471-2288-11-14 * Rosen, N. O., Vaillancourt-Morel, M. P., Corsini-Munt, S., Steben, M., Delisle, I., Baxter, M. Lou, & Bergeron, S. (2021). Predictors and Moderators of Provoked Vestibulodynia Treatment Outcome Following a Randomized Trial Comparing Cognitive-Behavioral Couple Therapy to Overnight Lidocaine. _Behavior Therapy_. https://doi.org/10.1016/j.beth.2021.05.002 * Rosen, R., Brown, C., Heiman, J., Leiblum, S., Meston, C., Shabsigh, R., Ferguson, D., & D’Agostino, R. (2000). The female sexual function index (FSFI): A multidimensional self-report instrument for the assessment of female sexual function. _Journal of Sex & Marital Therapy, 26_(2), 191–208. https://doi.org/10.1080/009262300278597 * Rozental, A., Andersson, G., & Carlbring, P. (2019).
In the absence of effects: An individual patient data meta-analysis of non-response and its predictors in internet-based cognitive behavior therapy. _Frontiers in Psychology, 10_. * Rozental, A., Magnusson, K., Boettcher, J., Andersson, G., & Carlbring, P. (2017). For Better or Worse: An Individual Patient Data Meta-Analysis of Deterioration Among Participants Receiving Internet-Based Cognitive Behavior Therapy. _Journal of Consulting and Clinical Psychology, 85_(2), 160–177. https://doi.org/10.1037/ccp0000158 * Spielberger, C. D., Gorsuch, R. L., & Lushene, R. E. (1970). _State-Trait Anxiety Inventory, Manual for the State-Trait Anxiety Inventory_. Consulting Psychologist Press. * Spitzer, R. L., Kroenke, K., Williams, J. B., & Löwe, B. (2006). A brief measure for assessing generalized anxiety disorder. _Arch Intern Med, 166_ , 1092–1097. https://doi.org/10.1001/archinte.166.10.1092 * Stekhoven, D. J. (2013). _missForest: Nonparametric Missing Value Imputation using Random Forest_. R package version 1.4. * Stekhoven, D. J., & Bühlmann, P. (2012). MissForest: non-parametric missing value imputation for mixed-type data. _Bioinformatics, 28_(1), 112–118. https://doi.org/10.1093/bioinformatics/btr597 * Stephenson, K. R., Rellini, A. H., & Meston, C. M. (2013). Relationship satisfaction as a predictor of treatment response during cognitive behavioral sex therapy. _Archives of Sexual Behavior, 42_(1), 143–152. https://doi.org/10.1007/s10508-012-9961-3 * Steyerberg, E. W. (2019). Clinical Prediction Models: A Practical Approach to Development, Validation, and Updating (2nd ed.). http://www.springer.com/series/2848 * ter Kuile, M. M., van Lankveld, J. J., de Groot, E., Melles, R., Nefs, J., & Zandbergen, M. (2007). Cognitive-behavioral therapy for women with lifelong vaginismus: Process and prognostic factors. _Behaviour Research and Therapy, 45_(2), 359–373. https://doi.org/10.1016/j.brat.2006.03.013 * Thomtén, J. (2014). Living with genital pain: Sexual function, satisfaction, and help-seeking among women living in Sweden. _Scandinavian Journal of Pain, 5_(1), 19–25. https://doi.org/10.1016/j.sjpain.2013.10.002 * Thomtén, J., & Linton, S. J. (2013). A psychological view of sexual pain among women: Applying the fear-avoidance model. _Women’s Health, 9_(3), 251–263. https://doi.org/10.2217/whe.13.19 * Thomtén, J., Lundahl, R., Stigenberg, K., & Linton, S. (2014). Fear avoidance and pain catastrophizing among women with sexual pain. _Women’s Health, 10_(6), 571–581. * van Lankveld, J. J., Melles, R., Zandbergen, M., ter Kuile, M. M., de Groot, H. E., & Nefs, J. (2006). Cognitive-behavioral therapy for women with lifelong vaginismus: A randomized waiting-list controlled trial of efficacy. _Journal of Consulting and Clinical Psychology, 74_(1), 168–178. https://doi.org/10.1037/0022-006X.74.1.168 * von Collani, G., & Herzberg, P. Y. (2003). Eine revidierte Fassung der deutschsprachigen Skala zum Selbstwertgefühl von Rosenberg. _Zeitschrift Für Differentielle Und Diagnostische Psychologie, 24_(1), 3–7. https://doi.org/10.1024//0170-1789.24.1.3 * Zarski, A.-C., Berking, M., & Ebert, D. D. (2018). Efficacy of internet-based guided treatment for genito-pelvic pain/penetration disorder: Rationale, treatment protocol, and design of a randomized controlled trial. _Frontiers in Psychiatry, 8_, 260. https://doi.org/10.3389/fpsyt.2017.00260 * Zarski, A.-C., Berking, M., & Ebert, D. D. (2021).
Efficacy of internet-based treatment for genito-pelvic pain/penetration disorder: Results of a randomized controlled trial. _Journal of Consulting and Clinical Psychology, 89_(11), 909–924. https://doi.org/10.1037/ccp0000665 * Zarski, A.-C., Berking, M., Fackiner, C., Rosenau, C., & Ebert, D. D. (2017). Internet-based guided self-help for vaginal penetration difficulties: Results of a randomized controlled pilot trial. _The Journal of Sexual Medicine, 14_(2), 238–254. https://doi.org/10.1016/j.jsxm.2016.12.232 * Zarski, A.-C., Berking, M., Hannig, W., & Ebert, D. D. (2018). Internet-Based Treatment for Genito-Pelvic Pain/Penetration Disorder: A Case Report. _Verhaltenstherapie, 28_(3), 177–184. https://doi.org/10.1159/000485041 * Zarski, A.-C., Velten, J., Knauer, J., Berking, M., & Ebert, D. D. (2022). Internet- and mobile-based psychological interventions for sexual dysfunctions: a systematic review and meta-analysis. In _npj Digital Medicine_ (Vol. 5, Issue 1). Nature Research. https://doi.org/10.1038/s41746-022-00670-1 * Zeileis, A., Hothorn, T., & Hornik, K. (2008). Model-based recursive partitioning. _Journal of Computational and Graphical Statistics, 17_(2), 492–514. https://doi.org/10.1198/106186008X319331

## Appendix

Figure 4: _Leave-one-out density plots of the aggregated outcome if one scale score at a time is excluded to build the outcome; based on imputed data._

Figure 5: _Leave-one-out histograms of the aggregated outcome if one scale score at a time is excluded to build the outcome; based on imputed data._

Figure 6: _Variable permutation importance of potential moderator variables using model-based random forest analysis; based on imputed data._

Table 3: Regression coefficients of composite scores at post-treatment on composite scores at baseline and group in terminal node models

Variable | Estimate | _SE_ | _t_ | _p_
---|---|---|---|---
Node 2${}^{\text{a}}$ | | | |
Constant | 3.435 | 3.982 | 0.863 | .395
Composite Score at Baseline | 0.877 | 0.122 | 7.189 | <.001
Group${}^{\text{c}}$ | 4.084 | 2.686 | 1.520 | .139
Node 3${}^{\text{b}}$ | | | |
Constant | 18.694 | 3.301 | 5.663 | <.001
Composite Score at Baseline | 0.428 | 0.088 | 4.873 | <.001
Group${}^{\text{c}}$ | 13.243 | 1.978 | 6.695 | <.001

* • Note. Node 2 comprises participants with low, Node 3 participants with average-to-high joint dyadic coping.
* • ${}^{\text{a}}$N = 33. ${}^{\text{b}}$N = 167. ${}^{\text{c}}$0 = waitlist control group, 1 = intervention group.
# The Space Density of Intermediate Redshift, Extremely Compact, Massive Starburst Galaxies

Kelly E. Whalen Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755, USA Ryan C. Hickox Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755, USA Alison L. Coil Center for Astrophysics and Space Sciences, University of California, San Diego, La Jolla, CA 92093, USA Aleksandar M. Diamond-Stanic Department of Physics and Astronomy, Bates College, Lewiston, ME, 04240, USA James E. Geach Centre for Astrophysics Research, University of Hertfordshire, Hatfield, Hertfordshire AL10 9AB, UK John Moustakas Department of Physics and Astronomy, Siena College, Loudonville, NY 12211, USA Gregory H. Rudnick Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045, USA David S. N. Rupke Department of Physics, Rhodes College, Memphis, TN, 38112, USA Paul H. Sell Department of Astronomy, University of Florida, Gainesville, FL, 32611 USA Christy A. Tremonti Department of Astronomy, University of Wisconsin-Madison, Madison, WI 53706, USA Julie D. Davis Department of Astronomy, University of Wisconsin-Madison, Madison, WI 53706, USA Serena Perrotta Center for Astrophysics and Space Sciences, University of California, San Diego, La Jolla, CA 92093, USA Grayson C. Petter Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755, USA (Accepted for publication in ApJ)

###### Abstract

We present a measurement of the intrinsic space density of intermediate redshift ($z\sim 0.5$), massive ($M_{*}\sim 10^{11}\ \text{M}_{\odot}$), compact ($R_{e}\sim 100$ pc) starburst ($\Sigma_{SFR}\sim 1000\ \text{M}_{\odot}\ \text{yr}^{-1}\ \text{kpc}^{-2}$) galaxies with tidal features indicative of them having undergone recent major mergers. A subset of them host kiloparsec-scale, $>1000\ \text{km}\ \text{s}^{-1}$ outflows and show little indication of AGN activity, suggesting that extreme star formation can be a primary driver of large-scale feedback. The aim of this paper is to calculate their space density so that we can place them in their proper cosmological context. We do this by empirically modeling the stellar populations of massive, compact starburst galaxies. We determine the average timescale over which galaxies that have recently undergone an extreme nuclear starburst would be targeted and included in our spectroscopically selected sample. We find that massive, compact starburst galaxies targeted by our criteria would be selectable for $\sim 148^{+27}_{-24}$ Myr and have an intrinsic space density $n_{\text{CS}}\sim(1.1^{+0.5}_{-0.3})\times 10^{-6}\ \text{Mpc}^{-3}$. This space density is broadly consistent with our $z\sim 0.5$ compact starbursts being the most extremely compact and star-forming low-redshift analogs of the compact star-forming galaxies in the early Universe, as well as with them being the progenitors of a fraction of intermediate-redshift post-starburst and compact quiescent galaxies.

galaxies: general — galaxies: evolution — galaxies: starburst — galaxies: interactions

## 1 Introduction

Galaxy formation models within a $\Lambda$-Cold Dark Matter ($\Lambda$CDM) framework that do not include feedback typically over-predict the present-day baryon fraction as well as the number density of galaxies at the high- and low-mass ends of the local stellar mass function (SMF) (e.g., Croton, 2006; Kereš et al., 2009; Moster et al., 2010; Moustakas et al., 2013).
This implies that star formation over cosmic timescales is inefficient, which requires that galaxy formation models inject energy into cooling clouds of gas. This is typically done by invoking feedback from massive stars and active galactic nuclei (AGNs) to heat and eject gas, thus reducing star formation efficiency (e.g., Springel et al., 2005b; Di Matteo et al., 2005; Somerville & Davé, 2015). Feedback as a driver of the cosmic star formation inefficiency is supported by evidence of large-scale gas outflows and/or relativistic jets in star forming and active galaxies (e.g., Veilleux et al., 2005; McNamara & Nulsen, 2007; Fabian, 2012; Somerville & Davé, 2015). In massive galaxies, feedback-driven outflows are often attributed to AGN activity since dark matter halo mass, galaxy stellar mass, bulge mass, and black hole mass all scale with one another (e.g., Ferrarese & Merritt, 2000; Guo et al., 2010; Kormendy & Ho, 2013). However, cosmological galaxy formation simulations show that the exclusion of stellar feedback in models leads to the formation of galaxies that are $\sim 10$ times more massive than observed at a given redshift, demonstrating that stellar-driven feedback plays an integral role in regulating star formation in massive galaxies (e.g., Springel et al., 2005b; Hopkins et al., 2012). On small (giant molecular cloud) scales, feedback can slow the local star formation rate by decreasing the gas surface density in a region, but this alone is not sufficient to produce simulated galaxies whose masses match those observed. Large-scale galactic wind-driven outflows where $\dot{M}_{*,outflow}\sim\text{SFR}$ are necessary to model galaxies with masses that are consistent with observations (e.g., Veilleux et al., 2005).

Constraining the importance of feedback-driven quenching is crucial to understanding how massive galaxies form, especially at high redshift. Massive, quiescent galaxies at $z>1.5$ are typically more compact than their local counterparts by roughly a factor of 5 (e.g., Zirm et al., 2007; van Dokkum et al., 2008; van der Wel et al., 2014). The likely progenitors of these massive, compact quiescent galaxies are similarly compact star forming galaxies that were formed in gas-rich mergers of disk galaxies and were then rapidly quenched via some dissipative feedback (e.g., Barro et al., 2013; Stefanon et al., 2013; van Dokkum et al., 2015). However, heavy dust obscuration coupled with high redshift makes constraining the role of AGN vs. stellar-driven feedback difficult with the typical UV signatures of outflows (e.g., van Dokkum et al., 2015).

We have been studying a population of $z\sim 0.5$ massive, compact galaxies which show signs of recent, extreme bursts of star formation and gas depletion, similar to what we would expect for the progenitors of high-$z$ massive, quiescent galaxies (Tremonti et al., 2007; Diamond-Stanic et al., 2012, 2021; Geach et al., 2013; Sell et al., 2014; Geach et al., 2014; Rupke et al., 2019; Petter et al., 2020). Our sample of galaxies consists of sources initially targeted as SDSS quasars, but subsequently classified as young post-starburst galaxies due to their blue stellar continua, weak nebular emission lines, and bright infrared photometry (Tremonti et al., 2007). Hubble Space Telescope (HST) imaging showed that these galaxies have extremely compact morphologies ($R_{e}\sim 100$ pc) with tidal features indicative of having recently undergone a major merger event (see Figure 1) (Diamond-Stanic et al., 2012; Sell et al., 2014).
We also note that rings and diffraction spikes from the HST PSF are visible in the images of our sources, showing that their angular sizes are on the order of that of the PSF, which further highlights their compactness (Sell et al., 2014; Diamond-Stanic et al., 2021; Davis et al., in prep). The sources in our sample can have SFR surface densities up to $\sim 1000\ \text{M}_{\odot}\ \text{yr}^{-1}\ \text{kpc}^{-2}$ (Diamond-Stanic et al., 2012; Sell et al., 2014), and lie below the $0.5<z<1$ size-mass relations for star forming and quiescent galaxies (see Figure 2; Mowla et al. 2019; Diamond-Stanic et al. 2021). Spectroscopic observations show that these galaxies host outflows with velocities $>1000\ \text{km}\ \text{s}^{-1}$ that can extend to tens of kpc (Tremonti et al., 2007; Rupke et al., 2019; Davis et al., in prep). There is also little evidence that these massive outflows are primarily driven by AGN activity, based on X-ray, IR, radio, and spectral line diagnostics, meaning that extreme star formation can be responsible for gas depletion in these galaxies (Diamond-Stanic et al., 2012; Sell et al., 2014; Petter et al., 2020).

These galaxies are important because they allow us to directly observe the effects of extreme star formation on gas kinematics in starburst and post-merger galaxies. In merger-driven galaxy evolution scenarios, a major merger event can trigger a strong burst of obscured star formation. Dissipative feedback via AGN or starburst activity can then expel large amounts of gas and dust from the galaxy, allowing it to passively evolve into a gas-poor massive elliptical galaxy (e.g., Sanders et al., 1988; Lonsdale et al., 2006). The objects we are studying may be representative of galaxies that are actively undergoing quenching, and might constitute an important phase in the buildup of the massive, quiescent elliptical population. However, this is difficult to determine without knowing the space density of extreme compact starburst galaxies like the ones we have been studying. We are broadly defining our compact starbursts as massive, centrally concentrated galaxies that have recently experienced a burst of star formation.

The space density of extreme massive, compact starbursts is strongly dependent on the timescales over which starburst events can be observed using our selection criteria. The aim of this paper is to estimate the average amount of time sources in a simulated galaxy population would be selected as extreme compact starburst galaxies under our selection criteria, in addition to their space density. We also place our galaxies into context with their high-redshift compact star forming analogs, compact quiescent galaxies, post-starburst galaxies, ultraluminous infrared galaxies (ULIRGs), the merger rate density, and massive, quiescent galaxies within the same redshift interval (e.g., Sanders et al., 1988; Lonsdale et al., 2006; Lotz et al., 2011; Barro et al., 2013; van der Wel et al., 2014; Wild et al., 2016). The outline of the paper is as follows: in Section 2 we discuss the selection of the parent sample of galaxies. In Section 3 we discuss empirical model construction and constraining model free parameters via an MCMC routine. In Section 4 we discuss our implementation of the SDSS quasar selection function. In Section 5 we calculate the average observability timescale and space density for our population of compact starbursts. In Section 6 we place our galaxies into cosmological context with other phases of merger-driven galaxy evolution.
We adopt a cosmology of $H_{0}=70.2\ \textrm{km}\ \textrm{s}^{-1}\ \textrm{Mpc}^{-1}$, $\Omega_{M}=\Omega_{CDM}+\Omega_{b}=0.229+0.046=0.275$, and $\Omega_{\Lambda}=0.725$ (Komatsu et al., 2011).

## 2 The observed sample

The selection criteria used for our sample will be detailed in Tremonti et al. in prep, but we give a brief summary in this section. Our sample was originally selected with the objective of understanding the role galaxy-scale winds play in star formation quenching for massive, intermediate redshift galaxies. The parent sample of galaxies we use in this work is drawn from the Eighth Data Release of SDSS (York et al., 2000; Aihara et al., 2011). We set out to select sources that were targeted as quasars (flagged as qso_hiz, qso_cap, qso_skirt, or qso_mag_outlier), since the SDSS QSO sample extends to fainter magnitudes than the main galaxy sample (Strauss et al., 2002). Selecting sources that have been targeted as quasars allows our sample to consist of objects that are massive and compact. The magnitude limits ensure that our sources are massive, highly star forming, and not strongly dust attenuated, and the SDSS quasar selection algorithm requires that our sources are either unresolved or that they are resolved but satisfy more stringent color-magnitude cuts. This is described in more detail in Section 4.1. We required that our sources were spectrally classified as galaxies with apparent $16<i<20$. We selected sources within $0.4<z<0.9$ to ensure that the MgII $\lambda\lambda 2796,2804$ line would be shifted into the optical so we could use it as a probe of galactic winds. We also excluded sources that were classified as distant red galaxies (legacy_target1 != DRG). Sources with redshift warnings and bad quality plates were also discarded. This initial cut left us with a sample of 1198 galaxies.

We fit the SDSS spectra with a combination of simple stellar population models, similar to Tremonti et al. (2004), and a type I quasar template. From the spectral fitting, we calculated the fraction of light attributed to the quasar model ($f_{qso}$). We also measured nebular emission and stellar absorption line indices (following Kauffmann et al., 2003) for the sources in our parent sample, as well as the strength of the 4000 Å break (D${}_{n}(4000)$) (Balogh et al., 1999). Our initial aim was to target post-starburst galaxies (PSBs) by selecting galaxies with evidence of having gone through a starburst event within the last 1 Gyr (via a cut on $(\text{H}\delta_{A}+\text{H}\gamma_{A})/2$ or $D_{n}(4000)<1.2$), but with little ongoing star formation within the last 10 Myr ([OII] 3727 Å equivalent width (EW) $>-20$ Å). These cuts reduce our sample to 645 sources. Lastly, the sample was limited to brighter galaxies, with tighter cuts on the [OII] EW and a cut on the measured quasar fraction to further ensure that strong AGN were not included. The new cuts imposed were [OII] 3727 Å EW $>-15$ Å and $f_{qso}<0.25$. We also required that the apparent $g$ and $i$ magnitudes were brighter than $g<20$ or $i<19.1$. Although we select for weak nebular emission to exclude ongoing starbursts, many of our sources were detected in WISE (Wright et al., 2010), and SED fitting through the mid-infrared shows they can have SFRs $=20$–$500\ \text{M}_{\odot}\ \text{yr}^{-1}$ (Diamond-Stanic et al., 2012; Perrotta et al., 2021; Davis et al., in prep). These cuts leave us with a sample of 121 galaxies.
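For concreteness, the spectral-index and brightness cuts just described can be written as a boolean mask over a hypothetical catalog table. The column names below are illustrative only, and the H$\delta_{A}$/H$\gamma_{A}$ threshold elided above is left out of the sketch.

```python
import pandas as pd

def select_psb_candidates(cat: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the spectroscopic cuts described above (column names assumed)."""
    mask = (
        (cat["z"] > 0.4) & (cat["z"] < 0.9)                # MgII in the optical
        & (cat["i_mag"] > 16) & (cat["i_mag"] < 20)        # apparent magnitude window
        & (cat["oii_ew"] > -15.0)                          # weak [OII] 3727 A emission
        & (cat["f_qso"] < 0.25)                            # small type I quasar fraction
        & ((cat["g_mag"] < 20.0) | (cat["i_mag"] < 19.1))  # brighter galaxies only
    )
    return cat[mask]
```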
We take advantage of the WISE detections for our sources and make an IR color cut of $W1-W2<0.8$ to further limit AGN contamination (Stern et al., 2012; Hickox et al., 2017). The WISE AGN cut leaves us with a population of 115 galaxies in what we consider to be our parent sample. We include these selection criteria in our modeling of compact starburst galaxies to estimate the amount of time our galaxies would be targeted and selected by this set of criteria. A full list of targets is given in Table 1 along with their redshifts, stellar masses, and SDSS photometry. In addition to the SDSS and WISE data for our parent sample, we also have high-S/N ($\sim 15-30$ per pixel) spectra from the Blue Channel Spectrograph on the 6.5-m MMT (Angel et al., 1979), the Magellan Echellette (MagE; Marshall et al. 2008) spectrograph on the Magellan Clay telescope, and the Low Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) on the Keck I telescope for 37 of the sources in our parent sample. These observations and their data reduction are detailed in Davis et al. (in prep), but broadly these observations were taken using 1” slits, resulting in spectra with resolution $R\sim 600-4100$. We refer to these 37 galaxies as the MgII sample. ## 3 Model construction The aim of this work is to constrain the importance of massive, compact starburst events in galaxy quenching at $z\sim 0.5$ by estimating the space density of these objects. Here, we do this by constructing an empirical model based on the galaxies we have in our sample and then evolving a large simulated population of compact starbursts to estimate the timescales upon which they would be targeted by our selection criteria. This process can be broken down into two steps: 1. Construct a set of template distributions of stellar population parameters and SFHs by fitting SDSS $ugriz$ model mags and W1, W2 photometry for the 115 galaxies in our sample with a Markov Chain Monte Carlo (MCMC; Metropolis et al. 1953; Foreman-Mackey et al. 2013) fitter. 2. Use the posterior distribution of SFH parameters from step 1 to predict the luminous properties of a set of mock galaxies whose SFHs are consistent with our observed sample. The luminous properties are computed using the Flexible Stellar Population Synthesis models (FSPS; Conroy et al. 2009). Since our small sample of galaxies consists of sources that are unresolved in SDSS imaging, we have to make a number of assumptions about their underlying stellar populations. First, we assume that the light from our compact starburst galaxies can largely be broken down into two components: a young, simple stellar population (SSP) that formed in a single, nuclear burst, and an older component that has a star formation history representative of a massive, star forming galaxy at $z\sim 0.5$. We note that there is likely clumpy star formation occurring outside of the nuclear regions of our galaxies, but due to their extremely compact HST morphologies it is fair to assume that the contribution of these star forming regions to the total emitted light is minimal compared to the large nuclear burst. We also assume that our galaxies will only experience one burst of nuclear star formation and will then passively evolve. Although HST observations (Sell et al., 2014) showed that many of our sources have more than one core that could trigger a starburst event, we note that these sources are still unresolved in SDSS, so the burst would not be localized to a particular core.
This assumption is also consistent with the single burst of star formation triggered by a merger event seen in simulations (e.g., Springel et al., 2005a). Next, we naively assume that since the nuclear burst component dominates the spectral energy distribution (SED) of the total system, the differences observed between the galaxies in our sample can solely be attributed to differences in the properties of the nuclear starburst. This assumption is consistent with the galaxies in the MgII sample having very blue spectra and young ages as derived from spectral modeling (e.g., Davis et al., in prep). These assumptions allow us to construct a model that utilizes FSPS to simulate the stellar populations for the nuclear starburst component as well as the older, non-burst underlying stellar population. In our modeling framework, we introduce four free parameters that are fit via an MCMC routine for each of the galaxies in our sample: the age of the burst ($t_{age}$), the fraction of total galaxy stellar mass formed in the nuclear burst ($f_{burst}$), the optical depth for the dust around young stars formed in the nuclear burst ($\tau_{dust,1}$), and the total stellar mass of the system ($M_{*}$). We separately calculate the $ugriz$, W1, W2, and [OII] (3727 Å) fluxes for the nuclear burst and non-burst components and their $f_{burst}$-weighted sum to determine the SED and [OII] EW for the total simulated galaxy. In this section, we describe the assumptions made in the FSPS modeling of both the extended non-burst and nuclear starburst components as well as the MCMC fitting we use to constrain values for the free parameters in our model. For both the non-burst and nuclear burst components, we make the following assumptions. We assume a Chabrier (2003) initial mass function (imf_type = 1) and $\log Z/Z_{\odot}=-0.3$ metallicity (logzsol = -0.3) using the $M_{*}-Z$ relation presented in Gillman et al. (2021), calibrated for solar $12+\log(\text{O/H})=8.66$ and $Z_{\odot}=0.0121$. We set add_neb_emission = true to allow for nebular emission from CLOUDY models (Byler et al., 2017). We assume Charlot & Fall (2000) extinction (dust_type = 0) with dust_tesc = 7 ($\log(t/\text{yr})$) (e.g., Blitz & Shu, 1980; Charlot & Fall, 2000; Conroy et al., 2009), where dust_tesc is the age in the Charlot & Fall (2000) extinction model below which stars are attenuated by both $\tau_{dust,1}$ and $\tau_{dust,2}$. We also set agb_dust = true, since IR SEDs of star forming galaxies are poorly fit without incorporating dust shells around AGB stars (Villaume et al., 2015). ### 3.1 Modeling the extended, non-burst component The photometric and morphological properties of the extended stellar population are most important in the later stages of the compact starburst’s evolution, since the contribution of the nuclear burst wanes over time. Here, we describe the assumptions we make in the FSPS modeling of the extended, non-burst component. We initialize FSPS such that tage is the Hubble time (in Gyr) at the redshift of a given galaxy, dust1 = 1, and dust2 = 0.5. We chose these dust optical depths to ensure that the $ug$ photometry for the modeled extended stellar component would be fainter than that of the reddest observed sources in our sample, while being consistent with the recommended values given in Charlot & Fall (2000).
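As a concrete illustration, the shared FSPS configuration and the initialization of the extended component described above might be set up as follows with python-fsps. This is a minimal sketch; the helper function, example redshift, and cosmology call are ours:

```python
import fsps
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.2, Om0=0.275)  # cosmology adopted in Section 1

def make_population():
    """Shared FSPS configuration for both components (python-fsps names)."""
    return fsps.StellarPopulation(
        zcontinuous=1,          # interpolate SSPs to the requested logzsol
        imf_type=1,             # Chabrier (2003) IMF
        logzsol=-0.3,           # log(Z/Z_sun)
        add_neb_emission=True,  # CLOUDY-based nebular emission (Byler et al. 2017)
        dust_type=0,            # Charlot & Fall (2000)-style attenuation
        dust_tesc=7.0,          # log(t/yr) below which dust1 also applies
        agb_dust=1.0,           # dust shells around AGB stars
    )

# Extended, non-burst component: evolve to the Hubble time at the galaxy
# redshift with the dust optical depths quoted above (z = 0.5 is an example).
sp_host = make_population()
sp_host.params['dust1'] = 1.0
sp_host.params['dust2'] = 0.5
t_hubble = cosmo.age(0.5).value  # Gyr; used as tage when computing magnitudes
```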
We explored the effects of changing tage and the dust parameters for the extended components in the galaxies shown in Figure 1 to ensure that our modeling is largely robust to extended component assumptions, and found that the results of our MCMC fitting do not change with changing non-burst initial conditions. A crucial piece of modeling the stellar population of the extended, non-burst component is assuming a particular star formation history (SFH). HST images show hints of a smooth, extended underlying stellar population (Diamond-Stanic et al., 2021). The presence of tidal features in our HST observations suggests that the galaxies in our sample have recently undergone merger events, and their high star formation surface densities indicate that these mergers were likely gas rich (e.g., Diamond-Stanic et al., 2012; Sell et al., 2014). Based on this, we assume that the extended, non-burst stellar populations have a star formation history typical of actively star forming disk galaxies. However, the SFHs of star forming disk galaxies are uncertain. There are many possible SFHs that would be able to build up the tightly-correlated star formation main sequence at late cosmic times (e.g., Oemler et al., 2017). For simplicity, since young stars dominate the light output from a stellar population, we approximate the SFH as being flat over cosmic time to ensure that the progenitor galaxies in the system were experiencing some degree of star formation prior to merging. We do this by setting the FSPS SFH parameter to a delayed-burst SFH (sfh = 4 in FSPS) but with the constant star formation fraction set to 1. We also note that we explored other SFHs that peaked at earlier cosmic times, such as the dark matter halo mass dependent models constructed in Behroozi et al. (2019), but our MCMC chains for these models were not able to reach convergence. The inability of our chains to converge is consistent with the fact that we do not believe that Behroozi et al. (2019)-like SFHs would be physically representative of galaxies like those in our sample. For massive ($M_{*}\sim 10^{11}\ \text{M}_{\odot}$) galaxies like the ones in our sample, such SFHs would suggest that our sources peaked in star formation at $z\sim 2$ and then passively evolved until $z\sim 0.5$. This would imply that the progenitors of our compact starbursts would be almost entirely quiescent, which is unlikely due to their high gas fractions. Therefore, we do not include models like this in our analysis. ### 3.2 Modeling the nuclear burst Recent observational evidence has shown that intermediate redshift, extreme compact starburst galaxies are likely to exhibit flat age gradients, meaning that their optical light is dominated by star formation that began and ended in one uniform event (e.g., Setton et al., 2020). Since we expect all of the stars formed in the nuclear burst to have formed at approximately the same time, we model the starburst as a simple stellar population (SSP) in FSPS (sfh = 0). This choice is consistent with the very short burst durations we derive from non-parametric SFH modeling of a subset of our sample with high S/N spectra (Geach et al., 2018; Tremonti et al., in prep; Davis et al., in prep). This work (detailed in Davis et al., in prep) is done by fitting the rest frame UV-mid IR broadband photometry and high-resolution spectra simultaneously using Prospector (Leja et al., 2019; Johnson et al., 2021).
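Continuing the sketch above, the two SFH configurations amount to a pair of parameter settings; filter names are as listed by fsps.list_filters(), and the 50 Myr example age is ours:

```python
# Host: FSPS "delayed-burst" SFH mode with the constant fraction set to 1,
# i.e. a flat SFH over cosmic time.
sp_host.params['sfh'] = 4
sp_host.params['const'] = 1.0

# Nuclear burst: a simple stellar population formed in a single event.
sp_burst = make_population()
sp_burst.params['sfh'] = 0

# Per-solar-mass magnitudes at a given age (tage in Gyr).
bands = ['sdss_u', 'sdss_g', 'sdss_r', 'sdss_i', 'sdss_z', 'wise_w1', 'wise_w2']
mags_burst = sp_burst.get_mags(tage=0.05, bands=bands)  # a 50 Myr old burst
mags_host = sp_host.get_mags(tage=t_hubble, bands=bands)
```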
We also assume that the dust in the vicinity of the nuclear starburst attenuates some of the light from the newly formed stars. We leave the age of the central burst ($\log t_{age}$) and the optical depth ($\tau_{dust}$) as free parameters that will later be constrained with MCMC fits to the photometric data of the sources in our observed sample. We set dust2 = $\tau_{dust}/2$ (e.g., Wild et al., 2011). We calculate SDSS $ugriz$ and WISE W1 & W2 magnitudes for the nuclear burst in the same way as for the extended, non-burst stellar population. ### 3.3 Calculating PSF magnitudes Once we have the model photometry for the extended, non-burst stellar populations and their nuclear bursts, we can combine them to get the photometry for the entire system. We start by converting the modeled apparent AB magnitudes for the extended, non-burst stellar population and the burst component to flux densities. The output magnitudes of FSPS are normalized to $1\ \text{M}_{\odot}$ at every epoch, so we calculate the fluxes for our galaxies and nuclei by multiplying their $1\ \text{M}_{\odot}$ flux densities by their respective masses. We define the mass of the nuclear burst as $M_{nuc}=f_{burst}\times M_{*}$ and the host mass as $M_{host}=(1-f_{burst})\times M_{*}$. We leave $f_{burst}$ and $M_{*}$ as free parameters in our MCMC fitting, in addition to $\tau_{dust}$ and $\log t_{age}$ as described earlier. For sources observed in SDSS, the QSO targeting pipeline takes a source’s $ugriz$ PSF magnitudes as input rather than its de Vaucouleurs or exponential disk model magnitudes (Richards et al., 2002). The output magnitudes from FSPS are representative of model magnitudes, so we must first convert these to PSF magnitudes before we run the SDSS QSO targeting algorithm on our modeled sample. We do this by first assigning surface brightness profiles to both components of the galaxy. For the extended, non-burst component, we assume an $n=1$ Sérsic profile where the effective radius ($R_{\text{eff}}$) is taken from the redshift-dependent star forming galaxy size-mass relation presented in Mowla et al. (2019). Due to the nuclear starburst’s compact nature, we assume an $n=4$ Sérsic profile where $R_{\text{eff}}$ is $\sim 300$ pc, as motivated by observations (e.g., Geach et al., 2013; Sell et al., 2014). Diamond-Stanic et al. (2021) showed that $R_{\text{eff}}<1$ kpc for the HST-observed galaxies. We do not vary $R_{\text{eff}}$ for the nuclear components of our modeled galaxies, since $\sim 100$ pc scale starbursts would always be unresolved in SDSS and are effectively observed as point sources. We convert $R_{\text{eff}}$ for each component from kpc to arcsec using their cosmological angular size distances and normalize the surface brightness profiles ($I(r)$) for each component such that $2\pi\int^{\infty}_{0}I_{comp}(r)\,r\,dr=f_{\nu,comp}.$ We then convolve these component surface brightness profiles with the SDSS PSF in each photometric band. The full widths at half maximum (FWHMs) for the $ugriz$ bands are 1.53, 1.44, 1.32, 1.26, and 1.29 arcsec, respectively. The convolved burst and disk components are then added together to create a modeled total galaxy surface brightness profile. We then fit this profile with a 2D-Gaussian model of the SDSS PSF and integrate the Gaussian model fit to obtain PSF fluxes in each respective band. The PSF fluxes are then converted to apparent AB magnitudes so that they can later (Section 4.1) be passed through the SDSS QSO selection pipeline.
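A rough numerical sketch of this procedure, using astropy for the Sérsic profiles, PSF convolution, and Gaussian fit, is shown below; the grid size, pixel scale, and function names are our choices, and adequate sampling of the compact $n=4$ component is assumed:

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve_fft
from astropy.modeling import fitting, models

def psf_flux(f_nu_host, f_nu_burst, reff_host_arcsec, reff_burst_arcsec,
             psf_fwhm_arcsec, npix=201, scale=0.1):
    """Sketch of the PSF-magnitude procedure in one band.

    Builds n=1 (host) and n=4 (burst) Sersic images normalized so that each
    integrates to its component flux density, convolves with a Gaussian PSF,
    fits a 2D Gaussian, and integrates the fit to return a PSF flux.
    """
    y, x = np.mgrid[:npix, :npix] * scale            # grid in arcsec
    x0 = y0 = npix * scale / 2.0
    sigma_psf = psf_fwhm_arcsec / 2.355              # FWHM -> Gaussian sigma

    image = np.zeros((npix, npix))
    for f_nu, reff, n in [(f_nu_host, reff_host_arcsec, 1.0),
                          (f_nu_burst, reff_burst_arcsec, 4.0)]:
        comp = models.Sersic2D(amplitude=1.0, r_eff=reff, n=n,
                               x_0=x0, y_0=y0)(x, y)
        comp *= f_nu / (comp.sum() * scale ** 2)     # normalize: integral = f_nu
        image += comp

    image = convolve_fft(image, Gaussian2DKernel(sigma_psf / scale))

    g_init = models.Gaussian2D(amplitude=image.max(), x_mean=x0, y_mean=y0,
                               x_stddev=sigma_psf, y_stddev=sigma_psf)
    g = fitting.LevMarLSQFitter()(g_init, x, y, image)
    # integral of a 2D Gaussian: 2*pi * A * sigma_x * sigma_y
    return 2.0 * np.pi * g.amplitude.value * g.x_stddev.value * g.y_stddev.value
```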
### 3.4 Constraining model free parameters with MCMC We have constructed a 4-parameter model for the photometry and [OII] (3727 Å) EW of intermediate-$z$ compact starbursts by utilizing FSPS. FSPS directly outputs model mags and spectra of stellar populations. We calculate the [OII] (3727 Å) EW from the FSPS output spectrum using specutils (Earl et al., 2022). As stated above, our compact starburst model is the sum of separately modeled host galaxy and nuclear burst contributions to the overall photometric and spectral properties. In this model, we leave the age of the nuclear starburst ($\log\ t_{age}/\text{Myr}$), the burst fraction ($f_{burst}$), the optical depth of dust attenuating young stellar light ($\tau_{dust}$), and the galaxy stellar mass ($\log\ M_{*}/M_{\odot}$) as free parameters. Here we detail how we constrain possible parameter values using MCMC fitting to the $ugriz$ and W1/W2 photometry for our observed galaxies. Figure 1: HST WFC3 cutouts of 6 representative galaxies in our sample that overlap with those presented in Sell et al. (2014). We note that we omit J0944+0930 and J1104+5946 from Sell et al. (2014) as they do not satisfy all of our selection criteria. All of these galaxies show clear signs of tidal disruptions, consistent with their extreme nuclear starbursts being triggered by major merger events. Figure 2: Location of our galaxies (black star) within the $0.5<z<1$ size-mass plane as presented in Mowla et al. (2019). Blue and red points are van der Wel et al. (2014) star forming and quiescent galaxies, respectively. The red, blue, and grey lines are the best fit size-mass relations for the quiescent, star forming, and total CANDELS/3DHST galaxies in Mowla et al. (2019). Our data point represents the average $R_{\text{eff}}$ and $M_{*}$ for a subset of the MgII galaxies presented in Davis et al. (in prep). Our sources are significantly more compact than other galaxies at similar $z$ and $M_{*}$. #### 3.4.1 Parameter fitting As discussed in Section 2, our collaboration has been studying a sample of 115 intermediate-$z$ compact starburst galaxies. Archival SDSS $ugriz$ and WISE W1 and W2 photometry are available for the full parent sample. For each of these, we constrain the probability densities for $\log\ t_{age}$, $f_{burst}$, $\tau_{dust}$, and $\log M_{*}$ using the ensemble adaptation of the Metropolis-Hastings MCMC algorithm from the emcee package (Metropolis et al., 1953; Foreman-Mackey et al., 2013). Each step of our MCMC calculates the model SDSS $ugriz$ and WISE W1 and W2 photometry and compares it to that of each observed galaxy. For each galaxy, we run the MCMC such that the run time is $\sim 50$ times the autocorrelation time for each walker. For most of our galaxies this is $\sim 60,000$ steps. We use the emcee ensemble stretch move with scale parameter $a=2$. We randomly initialize each walker in the intervals $0.5<\log\ t_{age}/\text{Myr}<2$, $0.05<f_{burst}<0.4$, $0.3<\tau_{dust}<1$, and $10<\log M_{*}/M_{\odot}<11$, and allow the walkers to explore the parameter space $0.5<\log\ t_{age}/\text{Myr}<3$, $0.05<f_{burst}<0.65$, $0<\tau_{dust}<5$, and $10<\log M_{*}/M_{\odot}<12$, such that the sampler finds the parameter values that minimize the difference between the model and observed photometry. Figure 3: Panel (a): Best fit SED for galaxy J0826+4305. The red points and error bars are the observed photometry and $\pm 0.25$ magnitude uncertainty region, respectively. The open black squares are the modeled photometry.
The blue, violet, and green curves are the modeled SED for the total galaxy system, nuclear burst, and host galaxy, respectively. Panel (b): Triangle plot of parameter posterior distributions for galaxy J0826+4305. We calculate the mean and covariances of these posterior distributions to model them as 4D-Gaussian distributions. We then randomly draw sets of parameter values from the Gaussian-modeled posterior to construct a mock population of compact starbursts. Panel (c): Galaxy cutout as seen in Figure 1. For each galaxy in our sample, we output the mean parameter values and their covariance from the MCMC-calculated posterior distributions. We use these means and covariances to model the posteriors as 4-dimensional Gaussian distributions whose means and standard deviations are identical to those of the MCMC output. We do this to reduce noise later in our analysis, since we use these distributions to randomly draw sets of parameter values to model mock galaxies based on the ones in our observed sample. The best fit SED and parameter probability distributions for a constantly star forming host based on the galaxy J0826+4305 can be seen in panels (a) and (b) of Figure 3, respectively. We also include these for J1713+2817, J2118+0017, J1506+6131, J1558+3957, and J1613+2834 in Figures 9, 10, 11, 12, and 13, respectively. For consistency with other studies of our objects, we note general agreement between our best fit stellar masses and those presented in Sell et al. (2014) for the galaxies that were included in both of our samples. This is shown in Table 2. For each of the 115 galaxies in our sample we randomly draw $\log\ t_{age}$, $f_{burst}$, $\tau_{dust}$, and $\log M_{*}$ values from their respective Gaussian-modeled posterior distributions, taking into account the covariances between each of the parameters, to model a population of galaxies with properties similar to the observed source. We can then evolve these modeled galaxies to estimate a distribution of selectable lifetimes for each of the galaxies in our sample. ## 4 Modeling the targeting algorithm & selection function The ultimate goal of our model is to estimate the space density of $z\sim 0.5$, massive, compact starburst galaxies. To do this, we need to understand the timescales upon which these galaxies would be selected under a set of targeting criteria. Here, we detail how we model the various components of the selection function we use to identify sources in our sample. ### 4.1 The SDSS QSO targeting algorithm All of the sources in our observed sample were initially targeted for SDSS spectroscopy as QSOs based on their bright magnitudes and blue colors. In order to ensure that our modeled galaxies would satisfy these criteria, we need to incorporate this selection into our modeled targeting function. The SDSS QSO targeting algorithm identifies sources based on their location in three-dimensional color space. This is the $(u-g)$-$(g-r)$-$(r-i)$ ($ugri$) color cube for $z<3$ sources and the $(g-r)$-$(r-i)$-$(i-z)$ ($griz$) cube for galaxies at higher redshifts. The QSO catalog constructed from SDSS DR8 sources was selected using the Richards et al. (2002) targeting algorithm (a Python adaptation of the Richards et al. 2002 QSO selection algorithm can be found at www.github.com/ke27whal/sdss_qso_selection). The SDSS quasar selection function aims to identify sources that lie far from the region of color space where stars are most likely to be found and that satisfy general color/magnitude cuts.
All magnitudes referenced in the targeting algorithm are PSF magnitudes. Since we are working with modeled data that is free from observational uncertainty, we do not include the steps in the algorithm that flag sources for having data with fatal errors. Since quasars and local stars both exhibit bright apparent magnitudes and are unresolved point sources, the algorithm needs to be able to differentiate between them in color-color-color space. The algorithm makes use of the method described in Newberg & Yanny (1997) that defines a “stellar locus” in color-color-color space where stars are most likely to exist. The stellar locus is constructed by analyzing the distribution of SDSS identified stars in color space. To maintain generality, we will refer to the main coordinate system describing the color-color-color cube as $\langle\hat{x},\hat{y},\hat{z}\rangle$, where $\hat{x}$ is in the direction of the bluest color axis and $\hat{z}$ in the direction of the reddest. The locus construction algorithm begins by setting the endpoints of the stellar distribution in color space and then iteratively calculating midpoints. This process allows a local coordinate system ($\langle\hat{i}_{i},\hat{j}_{i},\hat{k}_{i}\rangle$) to be defined at each locus point. At each locus point ($p_{i}$), $\hat{k}_{i}$ is defined as a unit vector in the direction $\overrightarrow{p_{i+2}-p_{i}}$. As detailed in Newberg & Yanny (1997), the unit vectors $\hat{i}_{i}$, $\hat{j}_{i}$, and $\hat{k}_{i}$ are given as $\hat{k}_{i}\equiv k_{x}\hat{x}+k_{y}\hat{y}+k_{z}\hat{z},$ $\hat{j}_{i}\equiv(\hat{k}_{i}\times\hat{z})/|\hat{k}_{i}\times\hat{z}|=(k_{y}\hat{x}-k_{x}\hat{y})/\sqrt{k_{x}^{2}+k_{y}^{2}},$ $\hat{i}_{i}\equiv\hat{j}_{i}\times\hat{k}_{i}=[-k_{x}k_{z}\hat{x}-k_{y}k_{z}\hat{y}+(k_{x}^{2}+k_{y}^{2})\hat{z}]/\sqrt{k_{x}^{2}+k_{y}^{2}}.$ The cross section of the stellar locus is measured by fitting an ellipse perpendicular to $\hat{k}_{i}$ at each point. The semi-major and semi-minor axes of the ellipses are in the direction of unit vectors $\hat{l}_{i}$ and $\hat{m}_{i}$, respectively, and are defined as $\hat{l}_{i}\equiv\hat{i}_{i}\cos\theta_{i}+\hat{j}_{i}\sin\theta_{i},$ $\hat{m}_{i}\equiv-\hat{i}_{i}\sin\theta_{i}+\hat{j}_{i}\cos\theta_{i},$ where $\theta_{i}$ is the angle between the major axis of the ellipse and the unit vector $\hat{i}_{i}$. We adopted the locus point positions and the $\theta_{i}$, $\hat{k}_{i}$, $|\vec{l}_{i}|$, and $|\vec{m}_{i}|$ values from Richards et al. (2002), and proceeded to construct right cylinders that define the $4\sigma$ stellar locus probability region in color-color-color space. We also incorporate the mid-$z$ inclusion region as well as the white dwarf/A star exclusion regions detailed in Richards et al. (2002). Sources targeted as quasars must also satisfy color and magnitude cuts in addition to not belonging to the stellar locus. For low-$z$ sources in the $ugri$ color cube, all objects must have apparent $i$-band magnitude $15<i<19.1$ (Richards et al., 2002). Both extended and point source objects are allowed to be selected as quasars, but they need to satisfy different sets of criteria. Point source objects only need to fulfill the magnitude and stellar locus cuts to be targeted. Extended sources are kept if they are likely to contain an active nucleus. This is most likely when $(u-g)<0.9$, as redder AGN would be at high-$z$ and would not be extended (Richards et al., 2002; Adelman-McCarthy et al., 2006).
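These relations translate directly into code. Below is a sketch under the assumption that the locus points are given as 3-vectors in the color cube; the function and variable names are ours:

```python
import numpy as np

def local_locus_frame(p, theta):
    """Unit vectors (i_hat, j_hat, k_hat, l_hat, m_hat) at each locus point.

    `p` is an (N, 3) array of locus points in color-color-color space and
    `theta` the ellipse position angles, following the equations above.
    """
    frames = []
    for i in range(len(p) - 2):
        k = p[i + 2] - p[i]
        k = k / np.linalg.norm(k)                   # k_hat along the locus
        kx, ky, kz = k
        norm_xy = np.hypot(kx, ky)
        j = np.array([ky, -kx, 0.0]) / norm_xy      # j_hat = (k x z)/|k x z|
        ih = np.cross(j, k)                         # i_hat = j_hat x k_hat
        l = ih * np.cos(theta[i]) + j * np.sin(theta[i])    # semi-major axis
        m = -ih * np.sin(theta[i]) + j * np.cos(theta[i])   # semi-minor axis
        frames.append((ih, j, k, l, m))
    return frames
```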
This $(u-g)$ cut does not remove blue, extended star forming galaxies, so a second cut of $l_{i}>0$ and $m_{i}>0$ is applied, where $l_{i}$ and $m_{i}$ are positions within the $\langle\hat{k},\hat{l},\hat{m}\rangle$ coordinate space defined earlier. In the high-$z$ $griz$ color cube, all outliers from the stellar locus with $15<i<20.4$ are targeted as quasars. However, to avoid contamination from low-$z$ quasars, sources are removed from the high-$z$ sample when all of the following criteria are met: $(g-r)<1.0,$ $(u-g)\geq 0.8,$ $i\geq 19.1\ \ \text{OR}\ \ (u-g)<2.5.$ We allow the sources in our sample to be targeted as either low-$z$ or high-$z$ quasars, since our observed sample contains a mixture of both target types. ### 4.2 Spectroscopic/photometric selection In addition to being blue, unresolved sources, the galaxies in our sample also exhibit weak nebular emission characteristic of post starburst galaxies. As mentioned earlier, we implement an emission line equivalent width (EW) cut on [OII] (3727 Å) such that [OII] EW $>-15$ Å, consistent with that used for our parent sample (Sell et al., 2014; Davis et al., in prep; Tremonti et al., in prep). We also model the $g<20$ flux limit and the $W1-W2<0.8$ WISE color cut that we impose on our sample. ## 5 Estimating the space density In this section, we discuss the various parameters that contribute to the calculated compact starburst space density ($n_{CS}$) as well as the possible sources of uncertainty. We estimate the space density in the redshift range $0.4<z<0.9$ as $n_{CS}\sim\frac{N_{targeted}}{f_{complete}}\cdot\frac{t_{cosmic}}{V_{0.4<z<0.9}}\cdot\frac{A_{sky}}{A_{SDSS}}\cdot\left\langle\frac{1}{t_{obs}}\right\rangle.$ (1) Here, $N_{targeted}$ is defined as the number of galaxies in our observed sample of massive, compact starburst galaxies, $f_{complete}$ is the completeness of the SDSS QSO catalog ($f_{complete}\sim 0.9$; Vanden Berk et al. 2005), $V_{0.4<z<0.9}$ is the volume in Mpc${}^{3}$ contained within the redshift range $0.4<z<0.9$, $A_{SDSS}/A_{sky}$ is the fractional area of the SDSS footprint relative to the area of the entire sky, $t_{cosmic}$ is the amount of cosmic time in Myr contained in the redshift range $0.4<z<0.9$, and $\langle 1/t_{obs}\rangle$ is the average of the inverse selectability timescale in Myr${}^{-1}$. The only model-dependent factor in this calculation is the amount of time our sources would be selected under a particular set of targeting criteria, so we will spend the first part of this section focusing on calculating this value. It is also worth highlighting that the timescale we are calculating for our sources is the amount of time these objects would be targeted under our set of selection criteria. This is a separate quantity from the amount of physical time galaxies might spend undergoing an extremely compact starburst phase. The physical timescale is also dependent on how we define these sources. A unifying feature of the observed sources in our sample is that they are late-stage major mergers that host extremely young stellar populations. It is possible that some of them have quenched and are very recent PSBs and that others are still forming stars. Broadly, we define our sources as galaxies that have recently experienced an extreme nuclear burst of star formation. Calculating the physical timescale for these sources would require much more detailed modeling, which is beyond the scope of this work.
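For reference, Equation (1) transcribes directly into a short function; this is a sketch whose argument names are ours, with units following the definitions above:

```python
def compact_starburst_density(n_targeted, f_complete, t_cosmic_myr,
                              volume_mpc3, f_area_sdss, mean_inv_tobs_myr):
    """Equation (1): n_CS in Mpc^-3.

    f_area_sdss is A_SDSS/A_sky; dividing by it applies the A_sky/A_SDSS
    factor. t_cosmic_myr is in Myr and mean_inv_tobs_myr is <1/t_obs> in
    Myr^-1, so the duty-cycle factor is dimensionless.
    """
    return (n_targeted / f_complete) * (t_cosmic_myr / volume_mpc3) \
        / f_area_sdss * mean_inv_tobs_myr
```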
Our goal here is to estimate the space density of objects that would be targeted by our selection criteria at some point in their evolution. ### 5.1 Calculating observed lifetimes Figure 4: Modeled evolutionary tracks of the apparent $i$-band and $g$-band SDSS magnitudes (panels (a) & (b)), [OII] equivalent width (panel (c)), and WISE $W1-W2$ color (panel (d)) for a sub-sample of modeled galaxies. The x-axis is age relative to the burst peak. The grey-shaded rectangles represent the regions of parameter space that would not be selected by the criteria placed on that given parameter. This is a schematic representation; the full details of our source selection can be found in Section 2. For each of the 115 galaxies in our sample, we used SDSS $ugriz$ model mags and WISE W1/W2 measured photometry to construct SEDs, which were then fit by our MCMC routine to obtain the posterior distributions for $\log t_{age}/\text{Myr}$, $f_{burst}$, $\tau_{dust}$, and $\log\ M_{*,tot}/M_{\odot}$. These posterior distributions were then modeled as 4-dimensional Gaussian distributions, and we output their covariance matrices. For each of the 115 observed galaxies in our sample, we draw 200 sets of parameters from the respective posterior distributions while taking into account covariances between parameters. This gives us 115$\times$200 mock galaxies, which we then evolve. We evolve our modeled galaxies within the time interval $-1<\log\ t_{age}/{\text{Myr}}<2.5$ in 1000 uniformly spaced steps. At each step, we calculate [OII] EWs from the output FSPS spectrum using specutils (Earl et al., 2022), as well as the photometry, to determine whether the sources would be targeted by our selection criteria at that time. This allows us to construct selected lifetime distributions for each of the 115 observed galaxies in our sample. The evolutionary tracks of a subset of randomly selected galaxies’ $i$- and $g$-band magnitudes, [OII] EWs, and $W1-W2$ colors, as well as the selection limits on each respective parameter, can be seen in Figure 4. We note that Figure 4 does not include the SDSS QSO targeting selection, since that is a much more complicated set of criteria and would be difficult to display visually. However, we do apply it in our target selection. Figure 5: Distribution of average selected lifetimes from the mock sample. We find that extreme nuclear starbursts like the ones observed in our galaxies would be selected for $\sim 148^{+27}_{-24}$ Myr, consistent with the burst ages calculated in Davis et al. (in prep). In the following section, we detail how we determine the space density of our sources by randomly sampling with replacement from the selected lifetime distributions calculated by evolving mock galaxies. In short, we bootstrap by generating 100,000 randomly sampled (with replacement) populations of 115 mock galaxies, as sketched below. For each iteration, we randomly draw an array of 115 indices, each of which corresponds to one of the observed galaxies in our sample. We use the randomly drawn indices to pull selected lifetimes from the corresponding selected lifetime distributions. We then average these lifetimes to determine a selectability timescale for that given mock population of galaxies. The average selected lifetime distribution for the 100,000 samples of 115 mock galaxies is shown in Figure 5. We find that on average, compact starburst galaxies like the ones we observe would be selected under our set of targeting criteria for $148^{+27}_{-24}$ Myr.
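The bootstrap just described reduces to a few lines of numpy; in this sketch, `lifetimes[i]` is assumed to hold the 200 mock selected lifetimes, in Myr, for observed galaxy $i$:

```python
import numpy as np

rng = np.random.default_rng()

def bootstrap_mean_lifetimes(lifetimes, n_gal=115, n_boot=100_000):
    """Distribution of average selected lifetimes (cf. Figure 5)."""
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_gal, size=n_gal)         # resample galaxies
        draws = [rng.choice(lifetimes[i]) for i in idx]  # one lifetime each
        means[b] = np.mean(draws)
    return means
```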
This timescale is broadly consistent with the average post-starburst peak age of $70\pm 106$ Myr calculated in Davis et al. (in prep). In our modeling, we find that our mock galaxies would be targeted soon after the nuclear burst occurs, meaning that we can directly compare our selectability timescale to the post-starburst peak SF ages in Davis et al. (in prep). The light-weighted stellar ages of the MgII sample galaxies (ranging from $\sim$13-300 Myr) are consistent with the selectability timescale calculated in this work. This is a good consistency check, ensuring that our modeling shows that galaxies in our observed sample would be selectable at their best-fit stellar ages. We next use the selectability timescales of our modeled compact starburst galaxies to estimate their space density. ### 5.2 Calculating space density As stated above, we estimate the space density in the redshift range $0.4<z<0.9$ (Equation 1) by randomly sampling from our selected lifetime distributions. To ensure that we sample a sufficiently large population of mock galaxies, we iterate this part of the calculation 100,000 times. For each of the 100,000 iterations, we randomly sample with replacement 115 galaxies from our mock sample. For each of the galaxies in that sample, we randomly draw a $\log t_{obs}/\text{Myr}$ value from the observable lifetime distribution that corresponds to that particular galaxy. In each iteration, we use these $\log t_{obs}/\text{Myr}$ values to compute $\left\langle\frac{1}{t_{obs}}\right\rangle=\frac{1}{N_{sim}}\sum_{i}^{N_{sim}}\ \bigg{(}\frac{1}{t_{obs,i}}\bigg{)},$ (2) where $N_{sim}=115$. We then use this to calculate the space density for the random population generated in each iteration using the expression above. Figure 6: Space density distribution calculated from our mock population of galaxies. We estimate that the space density for our population of $0.4<z<0.9$ compact starburst galaxies is $(1.1^{+0.5}_{-0.3})\times 10^{-6}\ \text{Mpc}^{-3}$. The resulting space density distribution (calculated using Equation 1) can be seen in Figure 6. We estimate the space density of these massive, compact starbursts to be $(1.1^{+0.5}_{-0.3})\times 10^{-6}\ \text{Mpc}^{-3}$ in the redshift range $0.4<z<0.9$. ## 6 Cosmological context One of the most interesting questions surrounding our sample of galaxies is whether or not this type of compact starburst phase is characteristic of the evolution of many, if not most, massive galaxies. A widely supported view of galaxy formation and evolution is that mergers are responsible for building up increasingly massive galaxies and for triggering starbursts and AGN activity (e.g., Toomre, 1977; Sanders et al., 1988; Kauffmann et al., 1993; Mihos & Hernquist, 1996; Hopkins et al., 2006, 2008; Lotz et al., 2011; Somerville & Davé, 2015). Sanders et al. (1988) presented a basic framework in which the collision of two gas-rich disk galaxies funnels gas towards the center of the system via tidal streams or shocks, thus creating a dusty, gas-rich environment that fosters rapid star formation (e.g., Lonsdale et al., 2006). This dusty starburst stage would be selected as a ULIRG. As gas fuels rapid star formation, it is continuously funneled into the nucleus and also accreted onto the black hole, thus also triggering AGN activity (e.g., Hopkins et al., 2006, 2008). Within this framework, gas can be expelled from the galaxy in a blowout phase driven by violent, dissipative feedback.
Figure 7: Comparison of the average timescales (in Gyr) upon which various phases of massive galaxy evolution would be observable. The black star represents the average selectability timescale for the modeled compact starburst galaxies in our sample; its error bar along the redshift axis represents the size of the redshift range of our sources, and the error bar along the $t_{obs}$ axis is the statistical uncertainty calculated via bootstrapping as described in Section 5.2. The grey, purple, and blue shaded regions represent the range of observable timescales for galaxy mergers (Lotz et al., 2011), ULIRGs (Farrah et al., 2003), and post starburst galaxies (PSBs; Wild et al. 2016), respectively. We note that the timescales presented for galaxy mergers and PSBs correspond to the amount of time a source would be targeted under a set of selection criteria (similar to the value calculated for our sources), while the timescale for ULIRGs reflects the amount of physical time a source would experience star formation characteristic of the ULIRG phase. We elaborate on how we obtain the timescale estimates for the shaded regions in the text. It is clear that compact starburst galaxies like the ones in our sample occur on relatively short-lived timescales that are comparable to that of ULIRG star formation. Figure 8: Comparison of the space densities of various phases of massive galaxy evolution. The black star represents the modeled space density for compact starburst galaxies like those in our observed sample. Its error bar along the redshift axis represents the size of the redshift range of our sources, and the error bar along the space-density axis is the statistical uncertainty calculated via bootstrapping as described in Section 5.2. We note that there are additional systematic errors, including uncertainty in model assumptions, which make this statistical error a lower limit. The blue squares represent the space density evolution of massive, compact star forming galaxies from the CANDELS survey (Barro et al., 2013), the red points represent massive ($\log M_{*}/M_{\odot}\sim 11$), compact quiescent galaxies (van der Wel et al., 2014), the green triangle represents low-$z$ PSBs (Pattarakijwanich et al., 2016), and the purple hexagon represents low-$z$ ULIRGs (Kim & Sanders, 1998). The grey, red, purple, and green shaded regions depict the Lotz et al. (2011) observed merger rate density, the Stott et al. (2013) observed merger rate density (calculated using merger observability timescales), the ULIRG space density (Magnelli et al., 2011), and the intermediate-$z$ PSB space density (Wild et al., 2016) ranges, respectively. The Barro et al. (2013) points, Lotz et al. (2011) region, and Stott et al. (2013) region have been adjusted to account for the fact that our sources have masses $\log M_{*}/M_{\odot}>10.5$, while most of the other populations shown include galaxies with $\log M_{*}/M_{\odot}>10$. While only a relatively small fraction of intermediate-$z$ major mergers will result in an extreme compact starburst similar to those in our sample, it is likely that sources like ours are the more extreme, lower-$z$ analogs of compact star forming galaxies more common in the early Universe and are closely related to intermediate-$z$ PSBs. The galaxies in our observed sample have many features that could tie them into this evolutionary framework.
We know that the galaxies for which we have HST observations have disturbed morphological features, such as tidal tails or two nuclei, which is indicative of them having undergone a recent merger (e.g., Sell et al., 2014). In addition to having disturbed morphologies, our galaxies host high velocity ionized and molecular gas outflows which can extend out to kpc scales (e.g., Tremonti et al., 2007; Diamond-Stanic et al., 2012; Geach et al., 2013, 2014; Sell et al., 2014; Geach et al., 2018) or even over 100 kpc scales (Rupke et al., 2019). In order to understand the evolutionary significance of extreme, compact star formation events like those observed in our galaxies, we need to contextualize their space density relative to that of various phases within massive, merger-driven galaxy evolution. Our results are summarized in Figures 7 and 8, and we discuss them in greater detail within this section. ### 6.1 Evolution of massive compact galaxies The sample of galaxies we have been studying is comparable to a high-$z$ population of similarly compact, massive star forming galaxies. Massive, quiescent galaxies in the Universe at $z>1.5$ are typically more compact than their local counterparts by roughly a factor of 5 (e.g., Zirm et al., 2007; van Dokkum et al., 2008; van der Wel et al., 2014). The progenitors of these galaxies were likely compact star forming galaxies that were formed in gas-rich mergers of disk galaxies and were then rapidly quenched via some dissipative feedback, a formation scenario that is reminiscent of what we expect for ULIRGs and quiescent galaxies in the lower-$z$ Universe (e.g., Barro et al., 2013; Stefanon et al., 2013; van Dokkum et al., 2015). Barro et al. (2013) observed populations of compact quiescent and star forming galaxies in the redshift range $\sim 1<z<3$ to understand the evolutionary pathways that lead to the assembly of the massive, compact quiescent galaxies we see predominantly in the early Universe. We include their compact star forming galaxy space density evolution as the blue squares in Figure 8 for comparison with the intermediate-$z$ massive, compact starburst galaxies we are studying (black star). We adjust the points from Barro et al. (2013) using redshift-appropriate stellar mass functions (Moustakas et al., 2013; Adams et al., 2021) to account for the fact that their sample consists of sources with a wider stellar mass distribution than our sample. The adjusted space density is given as $n_{\text{adjusted}}=n_{\text{literature}}\times\frac{\int^{\infty}_{\text{lim, us}}\phi_{\text{SMF}}\ \text{d}\log M_{*}}{\int^{\infty}_{\text{lim, lit}}\phi_{\text{SMF}}\ \text{d}\log M_{*}},$ (3) where $n_{\text{literature}}$ is the literature space density calculated for a larger mass range than our sample, and $\phi_{\text{SMF}}$ is the stellar mass function. We use the Moustakas et al. (2013) and Adams et al. (2021) SMFs for $z\leq 1.5$ and $z>1.5$, respectively. The Barro et al. (2013) compact star forming galaxies have constant space densities at high redshift, which begin to decline at $z\lesssim 1.5$. This decline is consistent with the decline in galaxy merger, star formation, and cold gas densities with decreasing redshift (e.g., Tacconi et al., 2010; Daddi et al., 2010; Tacconi et al., 2013; Madau & Dickinson, 2014; Riechers et al., 2019). We show in Figure 8 that the space density of our sources lies only slightly below the space density evolution trend traced by the Barro et al. (2013) compact star forming galaxies.
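As an illustration of Equation (3), the adjustment can be evaluated by integrating an SMF between the relevant mass limits. The sketch below uses a single-Schechter form for brevity, whereas the cited SMFs are more complex fits, and the parameter values are placeholders:

```python
import numpy as np
from scipy.integrate import quad

def schechter_logm(logm, log_mstar, phi_star, alpha):
    """Schechter SMF per dex: ln(10) phi* x^(alpha+1) exp(-x), x = M/M*."""
    x = 10.0 ** (logm - log_mstar)
    return np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

def adjust_space_density(n_lit, logm_lim_us, logm_lim_lit,
                         smf_params=(10.8, 1e-3, -1.0), logm_max=13.0):
    """Equation (3): rescale a literature space density to our mass limit."""
    num, _ = quad(schechter_logm, logm_lim_us, logm_max, args=smf_params)
    den, _ = quad(schechter_logm, logm_lim_lit, logm_max, args=smf_params)
    return n_lit * num / den

# e.g., rescaling a value quoted for log M* > 10 to our log M* > 10.5 limit
n_adj = adjust_space_density(n_lit=1e-4, logm_lim_us=10.5, logm_lim_lit=10.0)
```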
We note that our galaxies are more extreme than the Barro et al. (2013) sources, as they are both more compact and more rapidly star forming. This likely biases our compact starburst space density to be slightly lower than that for the Barro et al. (2013) galaxies. It is possible that our sources represent the low redshift analogs of an extreme subset of compact starburst galaxies that are more prevalent in the early Universe. Understanding how stellar feedback rapidly quenches star formation at intermediate redshift is necessary to be able to build models for galaxy formation and evolution in the early Universe, when compact star formation events were significantly more common. For compact star-forming galaxies in the early Universe, it is difficult to observe the effects of feedback due to their high redshift and the fact that they are commonly obscured by dust, making it nearly impossible to observe UV spectral signatures of outflows (e.g., van Dokkum et al., 2015). The broad consistency between the space density of our extreme, compact starburst galaxies and the Barro et al. (2013) sample allows us to better understand how compact star formation might be a phase that massive galaxies go through across a wide range of cosmic time. Barro et al. (2013) also presented a schematic representation of how galaxies evolve onto the local size-mass relation. Within this framework, compact star forming galaxies experience rapid quenching via AGN or star formation feedback, resulting in a massive, compact quiescent galaxy population. Over cosmic time, these sources undergo minor and major mergers, resulting in a buildup of mass and size (e.g., Naab et al., 2009). If our sources are the low-redshift analogs of early Universe compact star forming galaxies beginning their quenching phase, we would expect that they would also end up as compact, quiescent galaxies. We show the space density evolution from van der Wel et al. (2014) for high-$z$, massive ($M_{*}\sim 10^{11}\ M_{\odot}$), compact ($R_{\text{eff}}/(M_{*}/10^{11}\ M_{\odot})^{0.75}<2.5$ kpc) galaxies as red points in Figure 8. The space density of compact quiescent galaxies peaks just as that of compact star forming galaxies begins to decline. It then wanes with decreasing redshift due to size buildup via galaxy mergers. Within the lowest redshift bin, the van der Wel et al. (2014) sources have a space density $\sim 10$ times larger than that of our compact, starburst galaxies. It is also worth noting that the compact quiescent galaxies would be considered “compact” for $\sim 2$ Gyr before minor mergers significantly contribute to size buildup (e.g., Naab et al., 2009; Newman et al., 2012), a timescale that is significantly longer than the $\sim 100$ Myr timescale for which our sample would be targeted as extremely compact starbursts (e.g., Barro et al., 2013). In addition to this, the effective radii for the van der Wel et al. (2014) sources are significantly larger than those of our nuclear starbursts. This could be because the compact quiescent radii are more closely linked to the stellar mass profiles, while ours might be biased to small values because of mass-to-light ratio (M/L) effects. However, Diamond-Stanic et al. (2021) showed that, even accounting for M/L effects, the stellar mass effective radius for our systems is on the order of 0.1-0.5 kpc, which indicates that our population could be even smaller and potentially more extreme than the compact quiescent galaxies in the van der Wel et al. (2014) sample.
All of this together suggests that a significant fraction of massive, compact quiescent sources at intermediate redshift could have recently gone through a starburst similar to what we observe for the galaxies in our sample. ### 6.2 Comparison to post starburst galaxies In order to get a full picture of the role intermediate-$z$, extremely compact starburst galaxies play in the buildup of a massive, quiescent population, we also need to understand the evolutionary stages that follow their bursts. By design of our selection criteria, the compact starburst galaxies in our sample are similar to PSBs in that they have B and A-star dominated spectral features and weak nebular emission. Understanding the population of PSBs in a similar redshift interval as our sources provides context for quenching timescales as well as what the progenitors of PSBs might look like. Wild et al. (2016) studied a population of massive PSBs within $0.5<z<2$, and determined that PSBs are a relatively short-lived, transitory phase in galaxy evolution, likely lasting $\sim 0.1-1$ Gyr (see also Wild et al., 2009). This timescale range was determined by modeling PSBs in both toy-model and hydrodynamic simulations, and evolving them to determine the amount of time they would be targeted as PSBs, a similar method to what we do here for our compact starburst galaxies. The PSB selectability timescale is given as the blue region in Figure 7. Our compact starburst galaxies, with selectability timescales of $\sim 100$ Myr, would be selected for $10-100$% of the time PSBs would be selected by their respective selection criteria. It would be expected that extremely compact starburst galaxies and PSBs would have similar space densities within a given redshift range if they were two directly related evolutionary stages. In other words, if compact starburst galaxies are the immediate progenitors of PSBs, they should be found in similar abundances. This is what is seen in Figure 8. The Wild et al. (2016) PSBs within the mass range $10.5<\log M_{*}/M_{\odot}<12.5$ show a decrease in space density with decreasing redshift. The lowest redshift bin for the Wild et al. (2016) PSBs overlaps with the upper limits of the redshift range probed for our compact starburst galaxies. The mass bin for Wild et al. (2016) is consistent with that of our sources, so we did not have to correct for integrating the SMF within different mass intervals. Our sources overlap within the margin of error with the estimated PSB space density at the lowest redshift included in the Wild et al. (2016) sample. The redshift evolution of the Wild et al. (2016) PSB space density is also consistent with declining star formation and cold-gas densities over cosmic time, properties that would also impact the frequency of extremely compact bursts of star formation (e.g., Madau & Dickinson, 2014; Riechers et al., 2019). Since the cosmic SFR density sharply declines at low-$z$, we also want to compare our compact starburst space density to that of low-$z$ PSBs to determine if our calculated space density is consistent with the decline in PSB space density over the interval $0<z<1$. We calculate the $z\sim 0.05$ PSB space density by integrating the lowest-$z$ luminosity function presented in Pattarakijwanich et al. (2016). This luminosity function is given per [5000 Å] magnitude, a fiducial top hat filter used to calculate the average $f_{\lambda}$ across $4950<\lambda/\text{\AA}<5100$ for the rest frame spectra of the PSBs in their sample.
In order to calculate a comparable space density from this, we needed to construct a [5000 Å] mass-luminosity relation to determine our bounds of integration. We did this by calculating [5000 Å] magnitudes from SDSS spectra for the low-$z$ PSBs studied in French et al. (2018), using the methodology described in Pattarakijwanich et al. (2016) and MPA-JHU stellar masses (Brinchmann et al., 2004; Tremonti et al., 2004). We then integrated the Pattarakijwanich et al. (2016) luminosity function within $10.5<\log M_{*}/\text{M}_{\odot}<11.5$, which corresponds to $-23.3<[5000\text{\AA}]<-21.3$, to obtain a low-$z$ PSB space density of $\sim(2.9^{+1.2}_{-1.3})\times 10^{-6}\ \text{Mpc}^{-3}$. This is given as the green triangle in Figure 8. This is of the same order of magnitude as that for our $z\sim 0.5$ compact starburst galaxies, which supports the idea that a fraction of the most extreme PSBs might have undergone an extremely compact starburst phase like that observed in our galaxies. ### 6.3 Comparison to ULIRGs Within the framework of merger-driven galaxy evolution, it is likely that extremely compact starburst events are most relevant in the remnants of major, gas-rich mergers. We also know that major, gas-rich mergers can trigger strong bursts of dusty star formation, which would be observed as a ULIRG with $L_{FIR}>10^{12}\ L_{\odot}$. It is possible that sources like the massive, extremely compact starburst galaxies in our sample could represent the transition between the dust-obscured ULIRG phase and the beginning of a galaxy-scale blowout. Here, we compare the selectability timescale and space density of our compact starbursts to those of ULIRGs in order to contextualize their importance in merger-driven galaxy evolution. The timescales upon which a galaxy will experience ULIRG-like star formation are poorly constrained. On the low end, SN-driven winds could cut the lifetime of a single starburst in a ULIRG to 1-10 Myr (e.g., Thornley et al., 2000). However, studies of ULIRGs with a wide variety of morphologies have allowed the ULIRG lifetime to be estimated to be in the 0.1-1 Gyr range (e.g., Farrah et al., 2001; Murphy et al., 2001; Farrah et al., 2003). It is possible that this wide range of estimated ULIRG lifetimes is due to the fact that a ULIRG likely undergoes multiple large bursts of star formation, allowing it to be selected as such on discontinuous time intervals (e.g., Bekki, 2001; Farrah et al., 2001). Farrah et al. (2003) analyzed a population of 41 local ULIRGs and found that most of their sources would have lifetimes of $10\lesssim t/\text{Myr}\lesssim 40$. From all of the values quoted above, we assume that the lifetime of a ULIRG is $\sim 1-100$ Myr, and show this range as the purple shaded region in Figure 7. However, it is important to make the distinction that these timescales are more strongly related to the physical timescales of dusty star formation than to the observable lifetimes set by the respective selection criteria discussed in other sections. The post-peak SF ages for the MgII galaxies in our sample calculated in Davis et al. (in prep) are better comparisons to the ULIRG lifetimes, due to the fact that they are tied more closely to the physical properties of the galaxies. As stated earlier, Davis et al. (in prep) calculated an average post-peak SF age of $\sim 70$ Myr, which is largely consistent with our estimate that these galaxies would be able to be targeted for $\sim 148^{+27}_{-24}$ Myr.
These timescales are of a similar order of magnitude to those of ULIRGs, which is largely unsurprising because both types of systems are characterized by their energetic starbursts, albeit ours are a bit more extreme. We next compare our estimated compact starburst space density to that of ULIRGs in a similar redshift interval. Koprowski et al. (2017) computed the evolution of the far-IR luminosity function for galaxies out to $z\sim 5$. We estimate the observed space density of ULIRGs by adopting the $0.5<z<1.5$ far-IR luminosity function presented there. Integrating the luminosity function for $L_{\text{IR}}>10^{12}\ L_{\odot}$ gives $n_{\text{ULIRG}}\sim 6\times 10^{-5}\ \text{Mpc}^{-3}$. This is shown as the purple shaded region in Figure 8, where the range of values is due to the uncertainty in the Schechter function fit as described in Koprowski et al. (2017). We note that we do not correct for differences in the mass distributions between the ULIRG sample and our sources because the ULIRG sample was luminosity selected. Similarly, Magnelli et al. (2009) calculated the evolving far-IR luminosity function and space density for ULIRGs in several redshift bins within the interval $0.4<z<1.3$. For the $0.4<z<0.7$ and $0.7<z<1$ bins, $n_{\text{ULIRG}}\sim 3\times 10^{-5}\ \text{Mpc}^{-3}$ and $n_{\text{ULIRG}}\sim 2\times 10^{-5}\ \text{Mpc}^{-3}$, respectively. Comparing these values to our estimated compact starburst space density ($(1.1^{+0.5}_{-0.3})\times 10^{-6}\ \text{Mpc}^{-3}$) suggests that it is possible that $\sim 3-8$% of intermediate-$z$ ULIRGs experience a phase similar to that observed in our sample of extremely compact starburst galaxies. The physical timescales of ULIRGs and our compact starbursts are driven by the same processes, and they are of the same order of magnitude, while there is a factor of $\sim 12-40$ difference in their space densities. It is possible that the sources in our sample represent a small fraction of the most extreme population of ULIRGs that have the highest SFRs and/or are the most compact. We also compare the space density of our intermediate-$z$ massive, compact starburst galaxies to that of low-$z$ ULIRGs, similar to what we have done in the previous subsection for PSBs, since we expect a sharp decline in the ULIRG space density alongside that of the cosmic SFR density (e.g., Madau & Dickinson, 2014). Kim & Sanders (1998) presented a luminosity function for $0.05<z<0.2$ ULIRGs, and integrating the luminosity function for $\log\ L_{\text{IR}}/\text{L}_{\odot}>12$ gives a space density of $\sim(4\pm 1)\times 10^{-7}\ \text{Mpc}^{-3}$. This is given as the purple hexagon in Figure 8. Given that the space density of our intermediate-$z$, compact starburst galaxies is calculated in a redshift range between that of the low and intermediate-$z$ ULIRGs, this very steep decline in ULIRG space density also suggests that a small fraction of ULIRGs could undergo a phase like that observed in our galaxies as they evolve. ### 6.4 Comparison to the $z\sim 0.5$ merger rate per co-moving unit volume Since extremely compact starburst galaxies are likely formed by the merging of gas-rich disk galaxies, it is important to characterize how many major mergers could produce events like those observed in our sample of galaxies. This requires knowledge of the major merger rate over a given redshift range. In the past few decades, much work has been done to constrain the galaxy-galaxy merger rate throughout cosmic time.
However, there are large systematic uncertainties in this measurement that have prevented a consensus from being reached between theory and observations, and even between different observational techniques. Here, we summarize the most recent results in calculating the $z\sim 0.5$ galaxy merger rate per co-moving unit volume and use them to contextualize our compact starburst space density. For concision, we will refer to the merger rate per co-moving unit volume as the merger rate density for the rest of this paper. A crucial piece of calculating the galaxy merger rate density is understanding the timescales upon which a system would be identified as a major merger. This is also the aspect of the calculation that contributes the most uncertainty to the major merger rate density. The two main methods to identify merging galaxies are to select systems with disturbed morphologies (e.g., Abraham et al., 1994, 2003; Conselice, 2009; Lotz et al., 2008) or to search for systems composed of close pairs (e.g., Le Fèvre et al., 2000; Bluck et al., 2009). Each of these methods probes a different stage of the merger, and each is susceptible to different biases. Close pair selection identifies sources before the merger begins, but morphological selection can detect systems before, during, and after the merger occurs, allowing morphologically selected galaxy mergers to be identifiable on different timescales than their close pair counterparts. In Figure 7, we compare the selectability timescale calculated for our modeled compact starburst galaxies (black star) to that of the galaxy mergers presented in Lotz et al. (2011) (grey shaded region). The Lotz et al. (2011) region reflects the range of timescales calculated for simulated systems with mass ratios $1:10<\mu<1:1$ that were selected morphologically (for a detailed review, see Abraham et al. 1994, 2003; Lotz et al. 2011). We find that extreme compact starburst events are selectable for a fraction of the amount of time that a morphologically selected galaxy merger would be under its own respective criteria. Having constraints on galaxy merger timescales allows the merger rate density to be calculated. We show our calculated compact starburst space density (black star) in conjunction with merger rate densities (grey and red shaded regions) as well as the space densities of other phases of merger-driven evolution in Figure 8. The grey shaded region represents the range of predicted observable merger rate densities calculated in Lotz et al. (2011), and the red shaded region represents the observed range of merger rate densities presented in Stott et al. (2013), which used the Lotz et al. (2011) predicted observable timescales. Both the Lotz et al. (2011) and Stott et al. (2013) merger rate densities were calculated for samples containing galaxies with $\log M_{*}/M_{\odot}>10$, while the compact starburst galaxies in our sample typically have $\log M_{*}/M_{\odot}>10.5$. We therefore adjusted the Lotz et al. (2011) and Stott et al. (2013) merger rate densities to ensure that we are working within the same mass interval of the galaxy stellar mass function (SMF) within the appropriate redshift range, as described above. We also converted these merger rate densities to merger space densities by assuming a typical merger timescale of 0.5 Gyr (Lotz et al., 2011).
We find that our estimated massive compact starburst space density is $\sim 200$ times smaller than the merger rate density within a similar redshift interval, suggesting that only a small fraction of galaxy mergers would trigger an extreme burst of compact star formation similar to our observed sample. However, we reiterate that the Lotz et al. (2011) and Stott et al. (2013) merger rates consider both major and minor mergers. It is likely that these compact starburst events are triggered only by gas-rich major (mass ratio 1:1 - 4:1) mergers, which make up only a fraction of the total number of mergers occurring across a given redshift range (e.g., Lin et al., 2010). This suggests that although only a small fraction of all galaxy mergers might result in extremely compact starbursts, these could be a likely result of a larger fraction of gas-rich major mergers.

### 6.5 Comparison to $z\sim 0.5$ massive, quiescent galaxies

Another way of understanding the role of compact starburst galaxies in the buildup of quiescent galaxy populations is to compare their space density to that of massive, quiescent galaxies within the same redshift range. Moustakas et al. (2013) presented a detailed study of galaxies targeted in the PRism Multi-object Survey (PRIMUS) and provided constraints on the evolution of the stellar mass function from $0<z<1$. The galaxies in PRIMUS were sorted into star forming and quiescent populations, and the evolution of their space density was calculated across different stellar mass and redshift bins. For quiescent PRIMUS galaxies in the mass range $10.5<\log M/M_{\odot}<11$, their space density increases by $\sim 2\times 10^{-4}\ \text{Mpc}^{-3}$ from $z\sim 0.8$ to $z\sim 0.35$. The net decline in space density for star forming galaxies in this redshift interval is $\sim 9\times 10^{-5}\ \text{Mpc}^{-3}$. These changes in space density are comparable to the merger rate in this redshift range and are a factor of $\sim 1000$ larger than our measured space density of $n\sim(1.1^{+0.5}_{-0.3})\times 10^{-6}\ \text{Mpc}^{-3}$ for our sample of massive, compact starburst galaxies. This is broadly consistent with short-lived compact starbursts existing for $\sim 100$ Myr and evolving into massive, quiescent galaxies which would exist on $\sim$Gyr timescales. It is likely that this is a relatively rare phase of galaxy evolution within the general population of massive, quiescent galaxies. However, it is possible that the fraction of those that have previously undergone extreme ULIRG or PSB phases could also have experienced extremely compact starbursts like those in our sample.

## 7 Summary & Conclusions

In order to build up a population of quiescent galaxies, otherwise gas-rich and star forming galaxies need to undergo some type of quenching process to either disrupt or expel the gas in the system. Violent, dissipative feedback, in which either AGN activity or rapid star formation injects energy into the ISM, is an important process that impedes the formation of stars in a galaxy. Observationally, feedback manifests as large-scale gas outflows driven from a galaxy. Within the context of merger-driven galaxy evolution, we expect gas-rich mergers of massive star forming galaxies to trigger dusty starburst events that would then be followed by a blowout event in which nuclear gas and dust is expelled from the system, thereby exposing the nuclear regions of the galaxy.
In this work, we have studied a population of 115 $z\sim 0.5$ massive galaxies that are experiencing extreme, compact starburst events and outflows. Resolved HST WFC3 observations of a subset of these show that they are merger remnants, suggesting that these types of events could be a phase within a simple merger-driven evolutionary pathway. Our goal for this work was to determine how long galaxies like the ones we observe would be selected under a certain set of selection criteria, to estimate their space density, and to place them into cosmological context with other evolutionary phases massive galaxies could experience.

We do this by empirically modeling the stellar populations of $z\sim 0.5$ massive, compact starburst galaxies. Our model depends on four parameters: nuclear burst age, burst mass fraction, optical depth of dust enshrouding newly formed stars, and total galaxy stellar mass. The posterior distributions for these parameter values are constrained for each of the 115 galaxies in our sample by fitting their SDSS $ugriz$ and WISE W1/W2 photometry using an MCMC technique. We randomly draw sets of parameters from the Gaussian models for the MCMC-calculated posterior distributions to assemble a mock population of compact starburst galaxies. We evolve the modeled sources to determine the timescales under which the galaxies we model would be selected by our targeting criteria. We find that this timescale is $148^{+27}_{-24}$ Myr and that the corresponding intrinsic space density is $n_{\text{CS}}\sim(1.1^{+0.5}_{-0.3})\times 10^{-6}\ \text{Mpc}^{-3}$.

Our results, as summarized in Figure 8, suggest that our observed population of extreme compact starburst galaxies could fit into the evolutionary scheme described in Barro et al. (2013). At higher redshifts massive, compact star forming galaxies are more common, and they are believed to be the progenitors of massive, compact quiescent galaxies. Based on comparisons with the Barro et al. (2013) sample of massive, compact galaxies, it is likely that our sources follow a similar life cycle in which a gas-rich major merger triggers a burst of star formation. This starburst then drives massive, high-velocity gas outflows, thus rapidly quenching the galaxy. Such a galaxy would be observable on $\sim 100$ Myr timescales as a PSB (e.g., Wild et al., 2016), and would then evolve into a massive, compact, quiescent galaxy. Throughout cosmic time, the massive, quiescent galaxy will undergo minor mergers, allowing it to grow in both mass and size to become a typical quiescent galaxy consistent with the mass-size relation of the massive quiescent galaxy population at $z=0$, which is notably devoid of compact quiescent galaxies (e.g., Taylor et al., 2010). Although it is more common for galaxies to experience this timeline earlier in the Universe, our galaxies appear to be consistent with these trends within their respective redshift interval. The space density of our massive, compact starbursts suggests that they can contribute to the buildup of a fraction of PSBs and massive, extreme compact quiescent galaxies within their epoch, which in turn could contribute to the overall population of massive, quiescent galaxies in the future.

We acknowledge support from the National Science Foundation (NSF) under a collaborative grant (AST-1814233, 1813299, 1813365, 1814159 and 1813702) and from the Heising-Simons Foundation grant 2019-1659.

## References

* Abraham et al. (1994) Abraham, R. G., Valdes, F., Yee, H. K.
C., & van den Bergh, S. 1994, ApJ, 432, 75, doi: 10.1086/174550 * Abraham et al. (2003) Abraham, R. G., van den Bergh, S., & Nair, P. 2003, ApJ, 588, 218, doi: 10.1086/373919 * Adams et al. (2021) Adams, N. J., Bowler, R. A. A., Jarvis, M. J., Häußler, B., & Lagos, C. D. P. 2021, MNRAS, 506, 4933, doi: 10.1093/mnras/stab1956 * Adelman-McCarthy et al. (2006) Adelman-McCarthy, J. K., Agüeros, M. A., Allam, S. S., et al. 2006, ApJS, 162, 38, doi: 10.1086/497917 * Aihara et al. (2011) Aihara, H., Allende Prieto, C., An, D., et al. 2011, ApJS, 193, 29, doi: 10.1088/0067-0049/193/2/29 * Angel et al. (1979) Angel, J. R. P., Hilliard, R. L., & Weymann, R. J. 1979, in The MMT and the Future of Ground-Based Astronomy, Vol. 385, 87 * Balogh et al. (1999) Balogh, M. L., Morris, S. L., Yee, H. K. C., Carlberg, R. G., & Ellingson, E. 1999, ApJ, 527, 54, doi: 10.1086/308056 * Barro et al. (2013) Barro, G., Faber, S. M., Pérez-González, P. G., et al. 2013, ApJ, 765, 104, doi: 10.1088/0004-637X/765/2/104 * Behroozi et al. (2019) Behroozi, P., Wechsler, R. H., Hearin, A. P., & Conroy, C. 2019, MNRAS, 488, 3143, doi: 10.1093/mnras/stz1182 * Bekki (2001) Bekki, K. 2001, ApJ, 546, 189, doi: 10.1086/318231 * Blitz & Shu (1980) Blitz, L., & Shu, F. H. 1980, ApJ, 238, 148, doi: 10.1086/157968 * Bluck et al. (2009) Bluck, A. F. L., Conselice, C. J., Bouwens, R. J., et al. 2009, MNRAS, 394, L51, doi: 10.1111/j.1745-3933.2008.00608.x * Brinchmann et al. (2004) Brinchmann, J., Charlot, S., White, S. D. M., et al. 2004, MNRAS, 351, 1151, doi: 10.1111/j.1365-2966.2004.07881.x * Byler et al. (2017) Byler, N., Dalcanton, J. J., Conroy, C., & Johnson, B. D. 2017, ApJ, 840, 44, doi: 10.3847/1538-4357/aa6c66 * Chabrier (2003) Chabrier, G. 2003, PASP, 115, 763, doi: 10.1086/376392 * Charlot & Fall (2000) Charlot, S., & Fall, S. M. 2000, ApJ, 539, 718, doi: 10.1086/309250 * Conroy et al. (2009) Conroy, C., Gunn, J. E., & White, M. 2009, ApJ, 699, 486, doi: 10.1088/0004-637X/699/1/486 * Conselice (2009) Conselice, C. J. 2009, MNRAS, 399, L16, doi: 10.1111/j.1745-3933.2009.00708.x * Croton (2006) Croton, D. J. 2006, MNRAS, 369, 1808, doi: 10.1111/j.1365-2966.2006.10429.x * Daddi et al. (2010) Daddi, E., Bournaud, F., Walter, F., et al. 2010, ApJ, 713, 686, doi: 10.1088/0004-637X/713/1/686 * Davis et al. ( in prep) Davis et al., J. in prep, ApJ * Di Matteo et al. (2005) Di Matteo, T., Springel, V., & Hernquist, L. 2005, Nature, 433, 604, doi: 10.1038/nature03335 * Diamond-Stanic et al. (2012) Diamond-Stanic, A. M., Moustakas, J., Tremonti, C. A., et al. 2012, ApJ, 755, L26, doi: 10.1088/2041-8205/755/2/L26 * Diamond-Stanic et al. (2021) Diamond-Stanic, A. M., Moustakas, J., Sell, P. H., et al. 2021, arXiv e-prints, arXiv:2102.11287. https://arxiv.org/abs/2102.11287 * Earl et al. (2022) Earl, N., Tollerud, E., Jones, C., et al. 2022, astropy/specutils: V1.7.0, v1.7.0, Zenodo, doi: 10.5281/zenodo.6207491 * Fabian (2012) Fabian, A. C. 2012, ARA&A, 50, 455, doi: 10.1146/annurev-astro-081811-125521 * Farrah et al. (2003) Farrah, D., Afonso, J., Efstathiou, A., et al. 2003, MNRAS, 343, 585, doi: 10.1046/j.1365-8711.2003.06696.x * Farrah et al. (2001) Farrah, D., Rowan-Robinson, M., Oliver, S., et al. 2001, MNRAS, 326, 1333, doi: 10.1111/j.1365-2966.2001.04721.x * Ferrarese & Merritt (2000) Ferrarese, L., & Merritt, D. 2000, ApJ, 539, L9, doi: 10.1086/312838 * Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067 * French et al. (2018) French, K. 
D., Yang, Y., Zabludoff, A. I., & Tremonti, C. A. 2018, ApJ, 862, 2, doi: 10.3847/1538-4357/aacb2d * Geach et al. (2013) Geach, J. E., Hickox, R. C., Diamond-Stanic, A. M., et al. 2013, ApJ, 767, L17, doi: 10.1088/2041-8205/767/1/L17 * Geach et al. (2014) —. 2014, Nature, 516, 68, doi: 10.1038/nature14012 * Geach et al. (2018) Geach, J. E., Tremonti, C., Diamond-Stanic, A. M., et al. 2018, ApJ, 864, L1, doi: 10.3847/2041-8213/aad8b6 * Gillman et al. (2021) Gillman, S., Tiley, A. L., Swinbank, A. M., et al. 2021, MNRAS, 500, 4229, doi: 10.1093/mnras/staa3400 * Guo et al. (2010) Guo, Q., White, S., Li, C., & Boylan-Kolchin, M. 2010, MNRAS, 404, 1111, doi: 10.1111/j.1365-2966.2010.16341.x * Hickox et al. (2017) Hickox, R. C., Myers, A. D., Greene, J. E., et al. 2017, ApJ, 849, 53, doi: 10.3847/1538-4357/aa8c77 * Hopkins et al. (2008) Hopkins, P. F., Hernquist, L., Cox, T. J., & Kereš, D. 2008, The Astrophysical Journal Supplement Series, 175, 356, doi: 10.1086/524362 * Hopkins et al. (2012) Hopkins, P. F., Quataert, E., & Murray, N. 2012, MNRAS, 421, 3522, doi: 10.1111/j.1365-2966.2012.20593.x * Hopkins et al. (2006) Hopkins, P. F., Somerville, R. S., Hernquist, L., et al. 2006, ApJ, 652, 864, doi: 10.1086/508503 * Johnson et al. (2021) Johnson, B. D., Leja, J., Conroy, C., & Speagle, J. S. 2021, ApJS, 254, 22, doi: 10.3847/1538-4365/abef67 * Kauffmann et al. (1993) Kauffmann, G., White, S. D. M., & Guiderdoni, B. 1993, MNRAS, 264, 201, doi: 10.1093/mnras/264.1.201 * Kauffmann et al. (2003) Kauffmann, G., Heckman, T. M., White, S. D. M., et al. 2003, MNRAS, 341, 33, doi: 10.1046/j.1365-8711.2003.06291.x * Kereš et al. (2009) Kereš, D., Katz, N., Davé, R., Fardal, M., & Weinberg, D. H. 2009, MNRAS, 396, 2332, doi: 10.1111/j.1365-2966.2009.14924.x * Kim & Sanders (1998) Kim, D. C., & Sanders, D. B. 1998, ApJS, 119, 41, doi: 10.1086/313148 * Komatsu et al. (2011) Komatsu, E., Smith, K. M., Dunkley, J., et al. 2011, ApJS, 192, 18, doi: 10.1088/0067-0049/192/2/18 * Koprowski et al. (2017) Koprowski, M. P., Dunlop, J. S., Michałowski, M. J., et al. 2017, MNRAS, 471, 4155, doi: 10.1093/mnras/stx1843 * Kormendy & Ho (2013) Kormendy, J., & Ho, L. C. 2013, Annual Review of Astronomy and Astrophysics, 51, 511, doi: 10.1146/annurev-astro-082708-101811 * Le Fèvre et al. (2000) Le Fèvre, O., Abraham, R., Lilly, S. J., et al. 2000, MNRAS, 311, 565, doi: 10.1046/j.1365-8711.2000.03083.x * Leja et al. (2019) Leja, J., Carnall, A. C., Johnson, B. D., Conroy, C., & Speagle, J. S. 2019, ApJ, 876, 3, doi: 10.3847/1538-4357/ab133c * Lin et al. (2010) Lin, L., Cooper, M. C., Jian, H.-Y., et al. 2010, ApJ, 718, 1158, doi: 10.1088/0004-637X/718/2/1158 * Lonsdale et al. (2006) Lonsdale, C. J., Farrah, D., & Smith, H. E. 2006, Ultraluminous Infrared Galaxies, ed. J. W. Mason, 285, doi: 10.1007/3-540-30313-8_9 * Lotz et al. (2011) Lotz, J. M., Jonsson, P., Cox, T. J., et al. 2011, ApJ, 742, 103, doi: 10.1088/0004-637X/742/2/103 * Lotz et al. (2008) Lotz, J. M., Jonsson, P., Cox, T. J., & Primack, J. R. 2008, MNRAS, 391, 1137, doi: 10.1111/j.1365-2966.2008.14004.x * Madau & Dickinson (2014) Madau, P., & Dickinson, M. 2014, ARA&A, 52, 415, doi: 10.1146/annurev-astro-081811-125615 * Magnelli et al. (2009) Magnelli, B., Elbaz, D., Chary, R. R., et al. 2009, A&A, 496, 57, doi: 10.1051/0004-6361:200811443 * Magnelli et al. (2011) —. 2011, A&A, 528, A35, doi: 10.1051/0004-6361/200913941 * Marshall et al. (2008) Marshall, J. L., Burles, S., Thompson, I. B., et al. 
2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7014, Ground-based and Airborne Instrumentation for Astronomy II, ed. I. S. McLean & M. M. Casali, 701454, doi: 10.1117/12.789972 * McNamara & Nulsen (2007) McNamara, B. R., & Nulsen, P. E. J. 2007, ARA&A, 45, 117, doi: 10.1146/annurev.astro.45.051806.110625 * Metropolis et al. (1953) Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. 1953, J. Chem. Phys., 21, 1087, doi: 10.1063/1.1699114 * Mihos & Hernquist (1996) Mihos, J. C., & Hernquist, L. 1996, ApJ, 464, 641, doi: 10.1086/177353 * Moster et al. (2010) Moster, B. P., Somerville, R. S., Maulbetsch, C., et al. 2010, ApJ, 710, 903, doi: 10.1088/0004-637X/710/2/903 * Moustakas et al. (2013) Moustakas, J., Coil, A. L., Aird, J., et al. 2013, ApJ, 767, 50, doi: 10.1088/0004-637X/767/1/50 * Mowla et al. (2019) Mowla, L. A., van Dokkum, P., Brammer, G. B., et al. 2019, ApJ, 880, 57, doi: 10.3847/1538-4357/ab290a * Murphy et al. (2001) Murphy, T. W., J., Soifer, B. T., Matthews, K., & Armus, L. 2001, ApJ, 559, 201, doi: 10.1086/322321 * Naab et al. (2009) Naab, T., Johansson, P. H., & Ostriker, J. P. 2009, ApJ, 699, L178, doi: 10.1088/0004-637X/699/2/L178 * Newberg & Yanny (1997) Newberg, H. J., & Yanny, B. 1997, ApJS, 113, 89, doi: 10.1086/313051 * Newman et al. (2012) Newman, A. B., Ellis, R. S., Bundy, K., & Treu, T. 2012, ApJ, 746, 162, doi: 10.1088/0004-637X/746/2/162 * Oemler et al. (2017) Oemler, Augustus, J., Abramson, L. E., Gladders, M. D., et al. 2017, ApJ, 844, 45, doi: 10.3847/1538-4357/aa789e * Oke et al. (1995) Oke, J. B., Cohen, J. G., Carr, M., et al. 1995, PASP, 107, 375, doi: 10.1086/133562 * Pattarakijwanich et al. (2016) Pattarakijwanich, P., Strauss, M. A., Ho, S., & Ross, N. P. 2016, ApJ, 833, 19, doi: 10.3847/0004-637X/833/1/19 * Perrotta et al. (2021) Perrotta, S., George, E. R., Coil, A. L., et al. 2021, arXiv e-prints, arXiv:2106.02366. https://arxiv.org/abs/2106.02366 * Petter et al. (2020) Petter, G. C., Kepley, A. A., Hickox, R. C., et al. 2020, ApJ, 901, 138, doi: 10.3847/1538-4357/abb19d * Richards et al. (2002) Richards, G. T., Fan, X., Newberg, H. J., et al. 2002, AJ, 123, 2945, doi: 10.1086/340187 * Riechers et al. (2019) Riechers, D. A., Pavesi, R., Sharon, C. E., et al. 2019, ApJ, 872, 7, doi: 10.3847/1538-4357/aafc27 * Rupke et al. (2019) Rupke, D. S. N., Coil, A., Geach, J. E., et al. 2019, Nature, 574, 643, doi: 10.1038/s41586-019-1686-1 * Sanders et al. (1988) Sanders, D. B., Soifer, B. T., Elias, J. H., et al. 1988, ApJ, 325, 74, doi: 10.1086/165983 * Sell et al. (2014) Sell, P. H., Tremonti, C. A., Hickox, R. C., et al. 2014, MNRAS, 441, 3417, doi: 10.1093/mnras/stu636 * Setton et al. (2020) Setton, D. J., Bezanson, R., Suess, K. A., et al. 2020, arXiv e-prints, arXiv:2010.04734. https://arxiv.org/abs/2010.04734 * Somerville & Davé (2015) Somerville, R. S., & Davé, R. 2015, ARA&A, 53, 51, doi: 10.1146/annurev-astro-082812-140951 * Springel et al. (2005a) Springel, V., Di Matteo, T., & Hernquist, L. 2005a, MNRAS, 361, 776, doi: 10.1111/j.1365-2966.2005.09238.x * Springel et al. (2005b) Springel, V., White, S. D. M., Jenkins, A., et al. 2005b, Nature, 435, 629, doi: 10.1038/nature03597 * Stefanon et al. (2013) Stefanon, M., Marchesini, D., Rudnick, G. H., Brammer, G. B., & Whitaker, K. E. 2013, ApJ, 768, 92, doi: 10.1088/0004-637X/768/1/92 * Stern et al. (2012) Stern, D., Assef, R. J., Benford, D. J., et al. 2012, ApJ, 753, 30, doi: 10.1088/0004-637X/753/1/30 * Stott et al. 
(2013) Stott, J. P., Sobral, D., Smail, I., et al. 2013, MNRAS, 430, 1158, doi: 10.1093/mnras/sts684 * Strauss et al. (2002) Strauss, M. A., Weinberg, D. H., Lupton, R. H., et al. 2002, AJ, 124, 1810, doi: 10.1086/342343 * Tacconi et al. (2010) Tacconi, L. J., Genzel, R., Neri, R., et al. 2010, Nature, 463, 781, doi: 10.1038/nature08773 * Tacconi et al. (2013) Tacconi, L. J., Neri, R., Genzel, R., et al. 2013, ApJ, 768, 74, doi: 10.1088/0004-637X/768/1/74 * Taylor et al. (2010) Taylor, E. N., Franx, M., Glazebrook, K., et al. 2010, ApJ, 720, 723, doi: 10.1088/0004-637X/720/1/723 * Thornley et al. (2000) Thornley, M. D., Förster Schreiber, N. M., Lutz, D., et al. 2000, ApJ, 539, 641, doi: 10.1086/309261 * Toomre (1977) Toomre, A. 1977, in Evolution of Galaxies and Stellar Populations, ed. B. M. Tinsley & R. B. Larson, 401 * Tremonti et al. (2007) Tremonti, C. A., Moustakas, J., & Diamond-Stanic, A. M. 2007, ApJ, 663, L77, doi: 10.1086/520083 * Tremonti et al. (2004) Tremonti, C. A., Heckman, T. M., Kauffmann, G., et al. 2004, ApJ, 613, 898, doi: 10.1086/423264 * Tremonti et al. (in prep) Tremonti, C., et al. in prep, ApJ * van der Wel et al. (2014) van der Wel, A., Franx, M., van Dokkum, P. G., et al. 2014, ApJ, 788, 28, doi: 10.1088/0004-637X/788/1/28 * van Dokkum et al. (2008) van Dokkum, P. G., Franx, M., Kriek, M., et al. 2008, ApJ, 677, L5, doi: 10.1086/587874 * van Dokkum et al. (2015) van Dokkum, P. G., Nelson, E. J., Franx, M., et al. 2015, ApJ, 813, 23, doi: 10.1088/0004-637X/813/1/23 * Vanden Berk et al. (2005) Vanden Berk, D. E., Schneider, D. P., Richards, G. T., et al. 2005, AJ, 129, 2047, doi: 10.1086/427856 * Veilleux et al. (2005) Veilleux, S., Cecil, G., & Bland-Hawthorn, J. 2005, ARA&A, 43, 769, doi: 10.1146/annurev.astro.43.072103.150610 * Villaume et al. (2015) Villaume, A., Conroy, C., & Johnson, B. D. 2015, ApJ, 806, 82, doi: 10.1088/0004-637X/806/1/82 * Wild et al. (2016) Wild, V., Almaini, O., Dunlop, J., et al. 2016, MNRAS, 463, 832, doi: 10.1093/mnras/stw1996 * Wild et al. (2011) Wild, V., Charlot, S., Brinchmann, J., et al. 2011, MNRAS, 417, 1760, doi: 10.1111/j.1365-2966.2011.19367.x * Wild et al. (2009) Wild, V., Walcher, C. J., Johansson, P. H., et al. 2009, MNRAS, 395, 144, doi: 10.1111/j.1365-2966.2009.14537.x * Wright et al. (2010) Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868, doi: 10.1088/0004-6256/140/6/1868 * York et al. (2000) York, D. G., Adelman, J., Anderson, John E., J., et al. 2000, AJ, 120, 1579, doi: 10.1086/301513 * Zirm et al. (2007) Zirm, A. W., van der Wel, A., Franx, M., et al. 2007, ApJ, 656, 66, doi: 10.1086/510713

## Appendix A Auxiliary MCMC Output

Table 1: Properties for the galaxies included in our sample.
SDSS ID | $z$ | $\langle\log M_{*}/M_{\odot}\rangle$ | $\sigma_{\log M_{*}/M_{\odot}}$ | SDSS $u$ | SDSS $g$ | SDSS $r$ | SDSS $i$ | SDSS $z$ | WISE W1 | WISE W2
---|---|---|---|---|---|---|---|---|---|---
 | | | | (AB) | (AB) | (AB) | (AB) | (AB) | (Vega) | (Vega)
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11)
J1015+0004 | 0.417 | 11.0 | 0.07 | 22.03 | 20.71 | 19.25 | 18.95 | 18.77 | 15.83 | 15.38
J1109-0040 | 0.593 | 11.4 | 0.47 | 22.07 | 20.88 | 19.46 | 18.8 | 18.61 | 15.26 | 15.22
J1210+0030 | 0.441 | 11.1 | 0.08 | 21.88 | 20.87 | 19.37 | 19.02 | 18.79 | 15.87 | 15.3
J1341-0009 | 0.446 | 11.0 | 0.19 | 22.34 | 20.96 | 19.38 | 19.05 | 18.79 | 15.74 | 15.74
J1434-0052 | 0.461 | 11.3 | 0.51 | 23.45 | 21.04 | 19.29 | 18.66 | 18.31 | 14.86 | 14.64
J1440+0039 | 0.564 | 11.2 | 0.10 | 20.93 | 20.4 | 19.27 | 18.86 | 18.74 | 15.59 | 15.59
J1125-0145 | 0.519 | 10.9 | 0.27 | 19.6 | 19.33 | 18.69 | 18.48 | 18.39 | 14.84 | 14.65
J0745+3754 | 0.406 | 10.7 | 0.22 | 20.27 | 19.86 | 19.14 | 18.79 | 18.46 | 14.78 | 14.13
J0251-0657 | 0.406 | 11.1 | 0.27 | 22.91 | 21.14 | 19.39 | 18.88 | 18.57 | 15.38 | 15.19
J0905+5759 | 0.711 | 10.8 | 0.28 | 19.91 | 19.58 | 19.4 | 19.1 | 19.14 | 15.56 | 15.46
J1219+0336 | 0.451 | 11.0 | 0.21 | 20.15 | 19.52 | 18.79 | 18.53 | 18.33 | 14.99 | 14.56
J1232+0226 | 0.418 | 11.1 | 0.22 | 21.55 | 20.36 | 18.81 | 18.53 | 18.4 | 15.41 | 15.25
J1440+0107 | 0.456 | 10.9 | 0.23 | 20.63 | 20.26 | 19.38 | 18.97 | 18.76 | 15 | 14.53
J1441+0116 | 0.537 | 11.0 | 0.22 | 20.35 | 19.76 | 19.34 | 18.97 | 18.68 | 15.33 | 15.1
J0901+0314 | 0.459 | 10.6 | 0.23 | 19.55 | 19.29 | 18.82 | 18.7 | 18.57 | 15.22 | 15.01
J1107+0417 | 0.467 | 10.6 | 0.22 | 19.96 | 19.52 | 19.07 | 18.89 | 18.7 | 15.58 | 14.93
J1453+6022 | 0.406 | 10.9 | 0.15 | 20.49 | 20.04 | 19.02 | 18.78 | 18.55 | 15.61 | 15.33
J1506+6131 | 0.437 | 10.3 | 0.17 | 19.69 | 19.58 | 19.12 | 19.04 | 19.16 | 15.72 | 15.52
J1610+5104 | 0.469 | 11.1 | 0.07 | 22.1 | 20.93 | 19.35 | 18.92 | 18.76 | 15.68 | 15.51
J1635+4709 | 0.699 | 11.6 | 0.13 | 20.65 | 20.28 | 19.51 | 18.75 | 18.56 | 15.21 | 15.11
J2116-0634 | 0.728 | 11.3 | 0.18 | 20.74 | 20.02 | 19.72 | 19.2 | 19.05 | 15.51 | 15.55
J2311-0839 | 0.725 | 11.7 | 0.14 | 21.15 | 20.89 | 19.93 | 18.92 | 18.71 | 15.4 | 15.29
J2140+1209 | 0.751 | 11.1 | 0.25 | 20.63 | 20.19 | 19.85 | 19.31 | 19.1 | 15.57 | 14.98
J2256+1504 | 0.727 | 11.4 | 0.22 | 20.76 | 20.1 | 19.59 | 18.91 | 18.74 | 15.12 | 15.19
J2319+1435 | 0.422 | 10.5 | 0.36 | 22.62 | 21.07 | 19.42 | 19.01 | 18.78 | 15.77 | 15.44
J0826+4305 | 0.603 | 10.7 | 0.27 | 19.64 | 19.43 | 19.14 | 18.88 | 18.85 | 15.42 | 15.13
J0951+5514 | 0.402 | 11.3 | 0.11 | 20.65 | 20.01 | 18.91 | 18.51 | 18.15 | 14.85 | 14.37
J1235+6140 | 0.599 | 11.3 | 0.48 | 20.91 | 20.31 | 19.19 | 18.61 | 18.51 | 15.4 | 15.13
J1253+6256 | 0.536 | 10.4 | 0.17 | 19.69 | 19.64 | 19.3 | 19.25 | 19.22 | 16.16 | 15.68
J1506+5402 | 0.608 | 10.7 | 0.27 | 19.28 | 19.13 | 18.88 | 18.65 | 18.61 | 15.26 | 14.78
J1248+0601 | 0.632 | 11.2 | 0.18 | 20.89 | 20.33 | 19.49 | 18.98 | 18.85 | 15.77 | 15.64
J1117+5123 | 0.49 | 11.3 | 0.11 | 21.06 | 20.42 | 19.24 | 18.91 | 18.68 | 15.33 | 15.38
J1020+5331 | 0.457 | 11.0 | 0.31 | 22.53 | 20.68 | 19.21 | 18.88 | 18.69 | 15.77 | 15.62
J1401-0223 | 0.402 | 11.0 | 0.20 | 20.36 | 19.91 | 19.05 | 18.64 | 18.29 | 15.01 | 14.54
J0933+4135 | 0.441 | 10.7 | 0.25 | 19.07 | 18.97 | 18.46 | 18.39 | 18.24 | 15.14 | 14.59
J0939+4251 | 0.411 | 10.9 | 0.17 | 20.05 | 19.58 | 18.73 | 18.52 | 18.24 | 15.18 | 14.89
J1142+6037 | 0.568 | 11.5 | 0.29 | 20.86 | 20.13 | 18.81 | 18.29 | 18.17 | 15.05 | 14.79
J1713+2817 | 0.577 | 11.3 | 0.16 | 20.82 | 20.3 | 19.33 | 18.91 | 18.86 | 15.52 | 15.23
J1720+3017 | 0.684 | 11.6 | 0.10 | 21.25 | 20.67 | 19.75 | 18.89 | 18.78 | 15.45 | 15.47
J2118+0017 | 0.459 | 10.8 | 0.25 | 20.17 | 19.78 | 18.96 | 18.74 | 18.53 | 14.96 | 14.25
J0922+0452 | 0.476 | 11.1 | 0.08 | 21.22 | 20.34 | 18.99 | 18.79 | 18.57 | 15.7 | 15.49
J1052+0607 | 0.555 | 10.9 | 0.14 | 20.53 | 20.14 | 19.32 | 19 | 18.86 | 15.69 | 15.82
J1353+5300 | 0.408 | 11.3 | 0.18 | 20.43 | 19.84 | 18.81 | 18.38 | 18.12 | 14.65 | 14.2
J1436+5017 | 0.454 | 11.0 | 0.19 | 20.22 | 19.81 | 18.83 | 18.61 | 18.42 | 15.36 | 14.91
J1558+3957 | 0.402 | 10.6 | 0.23 | 19.37 | 19.07 | 18.54 | 18.44 | 18.24 | 15.17 | 14.55
J1604+3939 | 0.564 | 11.7 | 0.29 | 20.85 | 20.01 | 18.8 | 18.21 | 18.06 | 14.58 | 14.48
J0828+0336 | 0.572 | 11.0 | 0.11 | 20.9 | 20.3 | 19.3 | 18.98 | 18.94 | 16.09 | 15.77
J0808+2709 | 0.563 | 11.1 | 0.19 | 20.63 | 20.09 | 19.4 | 18.9 | 18.77 | 15.52 | 14.94
J1009+4336 | 0.519 | 10.9 | 0.26 | 19.6 | 19.37 | 18.79 | 18.56 | 18.38 | 14.98 | 14.62
J1133+0956 | 0.483 | 11.0 | 0.19 | 20.45 | 19.93 | 19.05 | 18.8 | 18.75 | 15.2 | 14.95
J0900+3212 | 0.496 | 11.3 | 0.22 | 20.18 | 19.84 | 18.96 | 18.5 | 18.14 | 14.67 | 14.49
J1330+4821 | 0.444 | 11.5 | 0.16 | 20.6 | 19.73 | 18.76 | 18.32 | 17.97 | 14.64 | 14.13
J1420+5313 | 0.742 | 11.8 | 0.20 | 20.72 | 20.39 | 19.84 | 19.01 | 18.67 | 14.68 | 14.61
J1556+4234 | 0.401 | 11.4 | 0.11 | 20.42 | 19.76 | 18.52 | 18.19 | 17.88 | 14.8 | 14.38
J1456+3849 | 0.421 | 10.8 | 0.23 | 19.84 | 19.48 | 18.88 | 18.49 | 18.16 | 14.44 | 13.8
J1459+3844 | 0.433 | 10.5 | 0.22 | 19.93 | 19.64 | 19.05 | 18.9 | 18.68 | 15 | 14.56
J1037+4048 | 0.439 | 11.1 | 0.16 | 22.17 | 20.94 | 19.4 | 18.98 | 18.74 | 15.74 | 15.56
J1248+4444 | 0.43 | 10.7 | 0.22 | 19.71 | 19.49 | 18.83 | 18.62 | 18.48 | 15.14 | 14.76
J1447+3650 | 0.414 | 11.0 | 0.06 | 22.71 | 20.89 | 19.17 | 18.84 | 18.72 | 15.7 | 15.58
J1520+3334 | 0.516 | 11.3 | 0.12 | 21.91 | 20.78 | 19.39 | 18.98 | 18.88 | 15.41 | 15.18
J1611+2650 | 0.483 | 11.4 | 0.13 | 20.97 | 20.18 | 18.97 | 18.67 | 18.45 | 14.92 | 14.62
J1039+4537 | 0.634 | 11.2 | 0.26 | 20.37 | 20 | 19.42 | 18.98 | 18.86 | 15.02 | 14.87
J1035+3854 | 0.422 | 11.0 | 0.29 | 22.44 | 20.8 | 19.26 | 18.96 | 18.75 | 15.85 | 15.59
J1052+4104 | 0.576 | 10.9 | 0.18 | 20.14 | 19.84 | 19.27 | 18.96 | 18.89 | 15.78 | 15.74
J1215+4233 | 0.479 | 11.2 | 0.33 | 22.22 | 20.86 | 19.21 | 18.82 | 18.58 | 15.48 | 15.22
J1244+4140 | 0.459 | 10.8 | 0.18 | 19.91 | 19.54 | 18.79 | 18.64 | 18.45 | 15.71 | 15.15
J0921+3251 | 0.73 | 11.1 | 0.41 | 26.53 | 15.87 | 14.65 | 16.92 | 16.62 | 15.01 | 14.96
J1012+1134 | 0.411 | 11.0 | 0.42 | 24.01 | 21.22 | 19.6 | 19.07 | 18.79 | 15.43 | 14.81
J1113+1119 | 0.628 | 11.6 | 0.63 | 20.57 | 17.48 | 17.04 | 16.68 | 16.5 | 15.49 | 15.43
J1232+0723 | 0.401 | 10.7 | 0.22 | 19.86 | 19.41 | 18.73 | 18.6 | 18.42 | 14.96 | 14.7
J1239+0731 | 0.542 | 11.0 | 0.18 | 20.51 | 20.12 | 19.29 | 18.95 | 18.85 | 15.62 | 15.49
J1415+4830 | 0.496 | 11.0 | 0.21 | 19.66 | 19.2 | 18.73 | 18.34 | 18.08 | 14.12 | 13.4
J1450+4621 | 0.782 | 11.6 | 0.15 | 20.6 | 20.09 | 19.66 | 18.89 | 18.85 | 15.24 | 15.23
J1658+2354 | 0.498 | 11.4 | 0.17 | 19.74 | 19.22 | 18.33 | 18.07 | 17.94 | 14.54 | 14.36
J0908+1039 | 0.502 | 11.0 | 0.23 | 19.77 | 19.45 | 18.74 | 18.47 | 18.27 | 14.89 | 14.6
J1119+1526 | 0.491 | 11.1 | 0.07 | 22.09 | 20.95 | 19.43 | 19.04 | 18.87 | 15.82 | 15.66
J0830+5552 | 0.526 | 11.0 | 0.25 | 20.19 | 19.85 | 19.16 | 18.81 | 18.56 | 14.82 | 14.49
J1435+0846 | 0.404 | 11.3 | 0.15 | 20.28 | 19.73 | 18.61 | 18.32 | 18.05 | 14.99 | 14.56
J0742+4844 | 0.431 | 11.0 | 0.14 | 20.79 | 20.19 | 19.03 | 18.83 | 18.59 | 15.6 | 15.19
J0752+1806 | 0.619 | 10.5 | 0.13 | 20.44 | 19.91 | 19.5 | 19.03 | 20.65 | 14.66 | 14.32
J0836+2526 | 0.531 | 10.8 | 0.23 | 20.75 | 20.29 | 19.28 | 18.94 | 18.8 | 16.12 | 15.82
J1016+3026 | 0.402 | 10.8 | 0.30 | 23 | 20.8 | 19.18 | 18.83 | 18.64 | 15.72 | 16.02
J1133+3958 | 0.487 | 11.1 | 0.21 | 19.7 | 19.29 | 18.52 | 18.29 | 18.15 | 14.74 | 14.46
J1229+3545 | 0.614 | 11.4 | 0.41 | 20.57 | 20 | 19 | 18.46 | 18.33 | 15.18 | 15.1
J1301+3615 | 0.573 | 11.3 | 0.19 | 20.51 | 20.01 | 19.13 | 18.68 | 18.59 | 15.05 | 14.91
J0901+2338 | 0.438 | 10.8 | 0.23 | 20.06 | 19.45 | 18.96 | 18.58 | 18.39 | 14.39 | 13.7
J0911+2619 | 0.471 | 11.0 | 0.24 | 20.18 | 19.71 | 18.95 | 18.58 | 18.36 | 14.5 | 13.92
J1403+2440 | 0.455 | 11.0 | 0.15 | 22.2 | 20.79 | 19.24 | 18.93 | 18.76 | 15.77 | 15.69
J1505+2312 | 0.417 | 11.0 | 0.07 | 22.6 | 20.89 | 19.33 | 18.98 | 18.71 | 15.78 | 15.39
J1548+1834 | 0.688 | 11.2 | 0.22 | 20.53 | 20.08 | 19.55 | 18.93 | 18.89 | 15.37 | 15.31
J1634+1729 | 0.491 | 10.9 | 0.23 | 20.71 | 20.1 | 19.35 | 19.04 | 18.78 | 15.22 | 14.54
J1635+1749 | 0.469 | 11.0 | 0.24 | 20.71 | 20.17 | 19.21 | 18.93 | 18.75 | 15.08 | 14.67
J1226+2753 | 0.427 | 10.3 | 0.14 | 19.14 | 19.14 | 18.83 | 18.81 | 18.78 | 15.87 | 15.46
J0936+2237 | 0.571 | 11.3 | 0.49 | 22.66 | 21.12 | 19.48 | 18.86 | 18.63 | 15.41 | 15.34
J1012+2258 | 0.504 | 11.5 | 0.20 | 20.74 | 20.13 | 18.81 | 18.43 | 18.21 | 14.89 | 14.69
J1000+2816 | 0.469 | 11.2 | 0.14 | 20.74 | 20.25 | 19.17 | 18.76 | 18.56 | 15.22 | 15.22
J0941+1827 | 0.569 | 11.5 | 0.14 | 21.02 | 20.26 | 19.1 | 18.61 | 18.36 | 15.2 | 14.97
J1005+1836 | 0.402 | 10.8 | 0.33 | 24.96 | 21.13 | 19.48 | 19.02 | 18.79 | 15.7 | 15.56
J0912+1523 | 0.747 | 11.7 | 0.29 | 20.91 | 20.37 | 19.59 | 18.64 | 18.4 | 15.23 | 15.06
J0900+1130 | 0.407 | 11.2 | 0.21 | 20.64 | 19.97 | 19.04 | 18.62 | 18.18 | 14.66 | 14.04
J1203+1807 | 0.595 | 11.4 | 0.38 | 22.37 | 21.25 | 19.73 | 18.96 | 18.82 | 15.41 | 15.38
J1205+1818 | 0.526 | 10.6 | 0.27 | 19.01 | 18.88 | 18.54 | 18.41 | 18.45 | 15.19 | 14.84
J1256+1826 | 0.424 | 11.0 | 0.39 | 22.52 | 21.02 | 19.35 | 18.88 | 18.6 | 15.42 | 15.27
J1248+1954 | 0.561 | 11.0 | 0.17 | 20.15 | 19.8 | 19.13 | 18.81 | 18.79 | 15.68 | 15.65
J1352+1653 | 0.533 | 11.3 | 0.32 | 22.07 | 21.07 | 19.47 | 18.88 | 18.63 | 15.57 | 15.36
J1400+1524 | 0.564 | 11.3 | 0.15 | 20.72 | 20.35 | 19.35 | 18.86 | 18.76 | 15.38 | 15.24
J1412+1635 | 0.454 | 11.1 | 0.39 | 22.5 | 20.91 | 19.3 | 18.88 | 18.57 | 15.5 | 15.21
J1412+1943 | 0.413 | 10.9 | 0.20 | 21.76 | 20.68 | 19.26 | 19 | 18.75 | 15.79 | 15.57
J1500+1739 | 0.577 | 10.7 | 0.27 | 19.68 | 19.38 | 19.04 | 18.82 | 18.76 | 15.15 | 14.82
J1516+1650 | 0.589 | 11.0 | 0.24 | 19.73 | 19.35 | 18.93 | 18.54 | 18.35 | 14.46 | 13.95
J1049+6433 | 0.454 | 11.4 | 0.38 | 21.79 | 20.36 | 18.78 | 18.35 | 18.13 | 15 | 14.71
J1528+0126 | 0.403 | 10.9 | 0.22 | 20.3 | 19.78 | 19.03 | 18.62 | 18.23 | 14.53 | 14.06
J0811+4716 | 0.516 | 11.0 | 0.11 | 21.16 | 20.69 | 19.55 | 19.2 | 18.92 | 15.93 | 15.87
J0827+2954 | 0.682 | 11.5 | 0.11 | 21.48 | 21.05 | 20.12 | 19.42 | 19.14 | 15.69 | 15.48
J1613+2834 | 0.449 | 11.0 | 0.24 | 20.26 | 19.76 | 18.94 | 18.69 | 18.42 | 14.84 | 14.25

Table 2: Comparison between our derived stellar masses and those presented in Sell et al. (2014).
SDSS Name | $\langle\log M_{*}/M_{\odot}\rangle$ | $\log M_{*}/M_{\odot}$
---|---|---
 | (This work) | (Sell et al., 2014)
(1) | (2) | (3)
J1506+6131 | $10.3^{+0.22}_{-0.15}$ | 10.2
J0826+4305 | $10.7\pm 0.29$ | 10.8
J2118+0017 | $10.8^{+0.22}_{-0.27}$ | 11.1
J1558+3957 | $10.6\pm 0.24$ | 10.6
J1613+2834 | $11.0^{+0.17}_{-0.24}$ | 11.2

Figure 9: Panel (a): Best fit SED for galaxy J1713+2817. The red points and error bars are the observed photometry and $\pm 0.25$ magnitude uncertainty region, respectively. The open black squares are the modeled photometry. The blue, violet, and green curves are the modeled SED for the total galaxy system, nuclear burst, and host galaxy, respectively. Panel (b): Triangle plot of parameter posterior distributions for galaxy J1713+2817. We calculate the mean and covariances of these posterior distributions to model them as 4D-Gaussian distributions. We then randomly draw sets of parameter values from the Gaussian-modeled posterior to construct a mock population of compact starbursts. Panel (c): Galaxy cutout as seen in Figure 1.

Figure 10: Panel (a): Best fit SED for galaxy J2118+0017. The red points and error bars are the observed photometry and $\pm 0.25$ magnitude uncertainty region, respectively. The open black squares are the modeled photometry. The blue, violet, and green curves are the modeled SED for the total galaxy system, nuclear burst, and host galaxy, respectively. Panel (b): Triangle plot of parameter posterior distributions for galaxy J2118+0017. We calculate the mean and covariances of these posterior distributions to model them as 4D-Gaussian distributions. We then randomly draw sets of parameter values from the Gaussian-modeled posterior to construct a mock population of compact starbursts. Panel (c): Galaxy cutout as seen in Figure 1.

Figure 11: Panel (a): Best fit SED for galaxy J1506+6131. The red points and error bars are the observed photometry and $\pm 0.25$ magnitude uncertainty region, respectively. The open black squares are the modeled photometry. The blue, violet, and green curves are the modeled SED for the total galaxy system, nuclear burst, and host galaxy, respectively. Panel (b): Triangle plot of parameter posterior distributions for galaxy J1506+6131. We calculate the mean and covariances of these posterior distributions to model them as 4D-Gaussian distributions. We then randomly draw sets of parameter values from the Gaussian-modeled posterior to construct a mock population of compact starbursts. Panel (c): Galaxy cutout as seen in Figure 1.

Figure 12: Panel (a): Best fit SED for galaxy J1558+3957. The red points and error bars are the observed photometry and $\pm 0.25$ magnitude uncertainty region, respectively. The open black squares are the modeled photometry. The blue, violet, and green curves are the modeled SED for the total galaxy system, nuclear burst, and host galaxy, respectively. Panel (b): Triangle plot of parameter posterior distributions for galaxy J1558+3957. We calculate the mean and covariances of these posterior distributions to model them as 4D-Gaussian distributions. We then randomly draw sets of parameter values from the Gaussian-modeled posterior to construct a mock population of compact starbursts. Panel (c): Galaxy cutout as seen in Figure 1.

Figure 13: Panel (a): Best fit SED for galaxy J1613+2834. The red points and error bars are the observed photometry and $\pm$0.25 magnitude uncertainty region, respectively. The open black squares are the modeled photometry.
The blue, violet, and green curves are the modeled SED for the total galaxy system, nuclear burst, and host galaxy, respectively. Panel (b): Triangle plot of parameter posterior distributions for galaxy J1613+2834. We calculate the mean and covariances of these posterior distributions to model them as 4D-Gaussian distributions. We then randomly draw sets of parameter values from the Gaussian-modeled posterior to construct a mock population of compact starbursts. Panel (c): Galaxy cutout as seen in Figure 1.
# Benchmark Dataset for Precipitation Forecasting by Post-Processing the Numerical Weather Prediction

Taehyeon Kim, Namgyu Ho∗, Donggyu Kim, Se-Young Yun
KAIST AI, Seoul, South Korea
{potter32, itsnamgyu, eaststar<EMAIL_ADDRESS>
∗Equally contributed. Mainly contributed to our open-source Python code development.

###### Abstract

Precipitation forecasting is an important scientific challenge that has wide-reaching impacts on society. Historically, this challenge has been tackled using numerical weather prediction (NWP) models, grounded in physics-based simulations. Recently, many works have proposed an alternative approach, using end-to-end deep learning (DL) models to replace physics-based NWP models. While these DL methods show improved performance and computational efficiency, they exhibit limitations in long-term forecasting and lack explainability. In this work, we present a hybrid NWP-DL workflow to fill the gap between standalone NWP and DL approaches. Under this workflow, the outputs of NWP models are fed into a deep neural network, which post-processes the data to yield a refined precipitation forecast. The deep model is trained with supervision, using Automatic Weather Station (AWS) observations as ground-truth labels. This can achieve the best of both worlds, and can even benefit from future improvements in NWP technology. To facilitate study in this direction, we present a novel dataset focused on the Korean Peninsula, termed KoMet (Korea Meteorological Dataset), comprised of NWP outputs and AWS observations. For the NWP model, the Global Data Assimilation and Prediction Systems-Korea Integrated Model (GDAPS-KIM) is utilized. We provide analysis of a comprehensive set of baseline methods aimed at addressing the challenges of KoMet, including the sparsity of AWS observations and class imbalance. To lower the barrier to entry and encourage further study, we also provide an extensive open-source Python package for data processing and model development. Our benchmark data and code are available at https://github.com/osilab-kaist/KoMet-Benchmark-Dataset.

## 1 Introduction

Precipitation forecasting is an arduous problem: forecasting rainfall over a specific region based on current observations, from sources including radar [47; 54], satellites [28], and rain gauges [54]. Forecasting plays a vital role in society, aiding in a wide range of weather-dependent decision-making, including large-scale crop management, marine services, and air traffic control [72; 69; 37; 23]. A Numerical Weather Prediction (NWP) model [64] is a general tool for weather forecasting, which involves calculating complex physical processes pertaining to interactions across large time and space scales, spanning the Earth's atmosphere, ocean, land, and ice [36; 26; 4]. Despite continuous efforts to enhance NWP models [57], erroneous predictions can occur due to the sensitivity of the models to noise in the initially observed state.

The weather and climate research community is becoming more aware of contemporary deep learning (DL) technologies, and numerous attempts have been made to apply them to the task of precipitation forecasting. Most DL-based methods directly utilize radar observations to predict future precipitation. Despite the lack of expert knowledge or explicit physics-based modeling, these methods achieve notable performance while being computationally efficient [56; 1]. Ravuri et al.
[47] facilitate the performance of observation-driven methods with a deep generative model that learns probability distributions of the data and allows for easy generation of samples from the learned distributions. Espeholt et al. [16] extend the capability of precipitation forecasting up to twelve hours ahead, surpassing the performance of existing physics-based models widely used in the Continental United States. Despite its significance for instantaneous forecasting, it has been shown that pure observation-driven DL models underperform those that are supplemented by NWP outputs when the lead time is larger than 6 hours, revealing the limitations of DL in long-term forecasting. Furthermore, the end-to-end DL framework still lacks explainability and explicit consideration of physics constraints [53].

This paper presents a novel benchmark dataset for a hybrid workflow that combines the strengths of both NWP and DL approaches. Our new benchmark, termed KoMet (Korea Meteorological dataset), consists of two parts: predictions from an NWP model, specifically the Global Data Assimilation and Prediction Systems-Korea Integrated Model (GDAPS-KIM), and Automatic Weather Station (AWS) observations, whose rainfall amounts are monitored hourly in areas covering 13 km around each station's center. The data is provided by the Korean Meteorological Administration (KMA) for July and August of 2020 and 2021, the two months in which most precipitation on the Korean Peninsula occurs. Our dataset takes into account the characteristics of the Korean Peninsula, whose precipitation is difficult to predict due to pronounced spatio-temporal variability arising from complex terrain and the Asian monsoon season [42; 9]. For example, its complex topography leads to larger variation in annual precipitation in the southern part (1000 mm to 1800 mm) than in the central part (1100 mm to 1400 mm) [2]. Under the proposed workflow, GDAPS-KIM predictions are used as inputs to a DL model which is trained to output refined precipitation forecasts. AWS observations are used as ground-truth targets to train the deep model. In a nutshell, the overall goal is to post-process the predictions from GDAPS-KIM using deep models, under the supervision of AWS observations.

To date, many benchmark datasets have been introduced to facilitate existing approaches in precipitation forecasting, namely those based on NWP and on end-to-end DL [70; 44]. We advocate an alternative approach, in which NWP output is post-processed using DL models for a more refined prediction. This enables the prediction to respect physics-related constraints and to be partially explainable via the injected NWP predictions. To catalyze future work, we also provide extensive evaluation of several baseline models and learning methods on our KoMet dataset. Our key contributions include:

* KoMet is the most comprehensive public dataset for DL post-processing of NWP outputs. The spatial size of GDAPS-KIM is 50 × 65, covering the terrain and surrounding sea, with each pixel covering a 12 km × 12 km area; 484 AWS stations (ground-truth pixels) are installed within this grid.
* Hybrid NWP-DL Baselines. We present and analyze a comprehensive set of baseline methods for our proposed hybrid approach, and provide a suite of Python modules designed to facilitate further research from the ML community.

#### Post-NWP Optimization.

We refer to the optimization problem implicit in the post-processing of NWP model output for precipitation forecasting as post-NWP optimization, drawing a connection to skillful precipitation forecasting.
Post-NWP optimization has several key properties that differentiate it from a typical weather-forecast optimization problem (Figure 1):

* Influential Variable Selection. In NWP, each pixel on the grid has a list of variables expressing the state of atmospheric features (Table 1).
* Sensitivity to Hyperparameter Settings. NWP data can be regarded as tabular data. In this sense, as Kadra et al. [25] observe, finding the optimal cocktail of modern regularization techniques through tuning can boost performance to a level comparable to state-of-the-art end-to-end DL models.
* Robust Architectures. We expect new dedicated architectures that take into account the spatial and temporal characteristics of our benchmark.
* Non-Learning Points of AWS. AWS stations are not installed at all locations due to space and cost limitations. Therefore, ground-truth values for some regions are sparse, which must be overcome for the supervision of NWP post-processing.
* Class Imbalance. The distribution of precipitation amounts is highly asymmetric. Despite its rare occurrence, heavier rain can cause significant real-world damage.

Figure 1: Overview of challenges for KoMet. The central part shows our main workflow; as GDAPS-KIM simulations are released in a timely manner, a deep neural network post-processes the model output for rain forecasting. Challenges fall into five categories: (1) influential variable selection, which selects the key features for data pre-processing; (2) sensitivity to hyperparameter settings; (3) robust architectures; (4) non-learning points in the ground-truth labels; and (5) class-imbalance issues, where precipitation points are far fewer than non-precipitation points.

#### Organization.

The remainder of this paper is organized as follows. In Section 2, we discuss the related literature on traditional NWP approaches, data assimilation, and deep learning-based weather forecasting. In Section 3, we describe our benchmark KoMet thoroughly. In Section 4, we formulate the problem settings for the use of KoMet, and Section 5 presents the baseline experimental results. Finally, Section 6 concludes the paper.

## 2 Related Works

#### Numerical Weather Prediction (NWP).

NWP models explain and predict future changes in atmospheric conditions by solving dynamics and physics partial differential equations. Since the advent of NWP models, weather forecasting has relied heavily on them, owing to their steady progress over time [33; 4]. NWP applications include air quality [76], solar [41; 75; 67], wind power [11; 8], wildfire [13], anomaly (i.e., severe weather event) [14; 35; 73; 50], and hydrological forecasting. The last of these, hydrological forecasting, includes precipitation, humidity [15], drought [39; 62], and evapotranspiration [34] forecasting. Despite its utility in the real world, NWP faces some endemic challenges [64; 4]. Firstly, the quality of NWP models depends on how accurate the given observations of the present atmospheric state (the initial condition) are [27; 58; 68; 38]. In addition, the computational and power demands of NWP grow in lockstep with the resolution of the forecast, creating a trade-off between forecast accuracy, which necessitates higher resolution, and forecasting time. Lastly, some models utilize machine learning approaches for prediction while failing to account for two types of uncertainty: aleatoric uncertainty and epistemic uncertainty [12].

#### Data Assimilation (DA).
DA is the process of combining every possible source of information for the accurate estimation of the atmospheric state. Generally, DA serves to provide the initial condition for computing NWP in weather forecasting [27; 36]. Various observation platforms have evolved, spanning weather stations, radar, and satellites [73; 63; 15; 5; 6], and observation equipment is also becoming more sophisticated. To address model uncertainty, variational data assimilation methods have emerged as a line of work [66; 40]. Three-dimensional variational methods (3DVar) [10; 17; 31], four-dimensional approaches (4DVar) [43], and the ensemble Kalman filter [21; 32] are common examples of DA techniques used in global NWP workflows.

#### Deep Learning-based Weather Forecasting.

Deep learning has risen as one way to mitigate the shortcomings of NWP. DL achieves its success by leveraging the vast amount of available data as well as the geometrically increasing computational power of tensor processing units (e.g., GPUs or TPUs) [24]. Because a deluge of meteorological data has accumulated since the establishment of weather stations, there is an increasing demand for efforts to adopt modern DL techniques for weather forecasting [56; 53; 48; 51]. DL-based weather forecasting approaches can be categorized into two folds: end-to-end DL approaches and NWP-DL hybrid approaches. End-to-end DL approaches [30; 46; 49; 71; 74; 47] learn representations from meteorological observations using DNNs without access to physics knowledge. On the other hand, NWP-DL hybrid approaches [45; 71; 19; 65; 20; 22] integrate theoretical models (NWP) and data-driven models (DL), and thus offer explainability as well as accurate expressivity.

Table 1: List of variables contained in the benchmark dataset. Pres data contains atmospheric features at different pressure levels, and Unis data contains some other features as well as those in Pres data, but not at specific pressure levels. † indicates a variable with integer type.

Type | Long name | Short name | Description | Unit
---|---|---|---|---
Pres | U-component of wind | u | Wind in x/longitude direction | ($ms^{-1}$)
 | V-component of wind | v | Wind in y/latitude direction | ($ms^{-1}$)
 | Temperature | T | Temperature | (K)
 | Relative humidity | rh_liq | Humidity relative to saturation | (%)
 | Geopotential | hgt | Proportional to the height of a pressure level | ($m^{2}s^{-2}$)
Unis | Rain | rain | Rain | ($mm/h$)
 | Snow | snow | Snow | ($cm/h$)
 | Height of PBL | hpbl | Height of planetary boundary layer | (km)
 | Type of PBL† | pbltype | Type of planetary boundary layer | -
 | 2m specific humidity | q2m | Specific humidity at 2m height | (g/kg)
 | 2m relative humidity | rh2m | Humidity relative to saturation at 2m height | (%)
 | 2m temperature | t2m | Temperature at 2m height above surface | (K)
 | Surface temperature | tsfc | Surface temperature | (K)
 | 10m u component of wind | u10m | Wind in x/longitude direction at 10m height | ($ms^{-1}$)
 | 10m v component of wind | v10m | Wind in y/latitude direction at 10m height | ($ms^{-1}$)
 | Topography | topo | Topography | (m)
 | Pressure of surface | ps | Surface pressure | (Pa)

## 3 KoMet Dataset

The KoMet dataset is comprised of predictions from GDAPS-KIM, a global numerical weather prediction model operated by the Korea Meteorological Administration (KMA, https://www.kma.go.kr/eng/index.jsp), as well as AWS observations, which serve as ground-truth precipitation data.
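As detailed in Section 3.1 below, each GDAPS-KIM instance stacks the 5 Pres variables of Table 1 at 22 isobaric surfaces together with the 12 Unis variables, giving 122 channels per pixel. A minimal sketch of such a channel index follows; the concrete ordering here is our own illustrative assumption, not the authoritative layout of the released files:

```python
# Minimal sketch of a GDAPS-KIM channel index: 5 Pres variables at 22
# isobaric surfaces plus 12 Unis variables = 122 channels (Section 3.1).
# The ordering below is illustrative only; consult the released Python
# package for the authoritative layout.

PRES_VARS = ["u", "v", "T", "rh_liq", "hgt"]         # per Table 1
UNIS_VARS = ["rain", "snow", "hpbl", "pbltype", "q2m", "rh2m",
             "t2m", "tsfc", "u10m", "v10m", "topo", "ps"]
N_LEVELS = 22                                        # isobaric surfaces

channel_index = {}
c = 0
for var in PRES_VARS:                                # level-resolved variables
    for lev in range(N_LEVELS):
        channel_index[(var, lev)] = c
        c += 1
for var in UNIS_VARS:                                # single-level variables
    channel_index[(var, None)] = c
    c += 1

assert len(channel_index) == 122                     # 5 * 22 + 12
```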
### 3.1 Input: GDAPS-KIM

GDAPS-KIM is a global numerical weather prediction model designed to improve the prediction accuracy of weather phenomena, with a particular focus on the Korean Peninsula [57]. GDAPS-KIM provides hourly predictions of various atmospheric variables. This is achieved by assimilating observations from all over the globe to compute the initial state, and subsequently solving dynamics and physics equations to predict future states.

GDAPS-KIM prediction data is provided in the following format: $\mathbf{X}\in\mathbb{R}^{T\times L\times C\times W\times H}$, where each dimension corresponds to the origin time (T) at which the simulation took place, the lead time (L) between the origin time and the target prediction time, the variable index (C), and the spatial dimensions of the prediction map (W, H). Each GDAPS-KIM instance in the KoMet dataset is a daily simulation executed at 00 UTC. We provide predictions made between July 1st and August 31st (inclusive) of 2020 and 2021, i.e., $T=124$ days, when rainfall is most intensive due to the seasonal characteristics of the region. We provide predictions with lead times ranging from 0 to 89 hours, i.e., $L=90$. We provide data on $C=122$ atmospheric variables for the Korean Peninsula and the surrounding region lying within [32.94°N, 39.06°N] and [124.00°E, 132.00°E]. GDAPS-KIM has a resolution of 12 km $\times$ 12 km, and thus its spatial dimension is 65$\times$50.

GDAPS-KIM variables can be categorized into two types: Pres variables and Unis variables. While Pres variables include values for various altitudes corresponding to specific air pressure levels (i.e., different isobaric surfaces), Unis data does not. Each GDAPS-KIM instance has 5 Pres variable types at 22 isobaric surfaces (22$\times$5=110 variables) and 12 Unis variables (Table 1); thus it has 122 variables at each pixel. Despite this richness, using all variables may not always be a silver bullet, owing to high correlation among them and noisiness. Furthermore, as the assimilated initial condition used for numerical prediction has significant spatial and temporal variability, predictions for this region are more prone to inaccuracies than for other regions, leading to cascading errors. In this paper, for the sake of benchmark experiments, we use 12 of the 122 variables (Unis: rain, q2m, rh2m, t2m, tsfc, ps; Pres: T and rh_liq at 500, 700, and 850 hPa) on the advice of KMA experts.

### 3.2 Ground-Truth: AWS Observations

Surface observations from AWS provide ground-truth data on hourly precipitation. While AWS observation data includes various weather measurements such as temperature, wind speed, and solar radiation, only the precipitation is utilized for supervision. Unlike GDAPS-KIM predictions, only some pixels have AWS observations, as weather stations are sparsely installed. Even for installed stations, observations are occasionally omitted for miscellaneous reasons, leading to NaN values. Raw AWS observation data is provided in the following format: $\mathbf{Y}_{raw}\in\mathbb{R}^{h\times S}$, where each dimension corresponds to the hour (h) at which the measurement of accumulated rainfall (from the preceding hour) is taken and the weather station index (S). AWS observations are recorded at every hour for which GDAPS-KIM predictions are provided, with some additional leeway. More precisely, hourly observations range from July 1st, 00:00 UTC to September 3rd, 23:00 UTC of each year, totaling $h=3120$ hours, from 484 stations ($S=484$).
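Even before gridding (described next), the two raw arrays can be paired by time. Below is a minimal sketch of that alignment using toy stand-in arrays (real shapes are given above); the 0-indexed hour convention and the flat `t * 24 + l` mapping, which ignores the gap between the 2020 and 2021 observation periods, are our own simplifying assumptions:

```python
import numpy as np

# Minimal sketch: pair the GDAPS-KIM prediction for origin day t and
# lead time l with the AWS observation at absolute hour t * 24 + l,
# since each simulation starts at 00 UTC. Toy sizes are used for T and
# C; the real shapes are T, L, C, W, H = 124, 90, 122, 65, 50.

T, L, C, W, H = 4, 90, 3, 65, 50
X = np.zeros((T, L, C, W, H), dtype=np.float32)          # stand-in GDAPS-KIM tensor
Y_raw = np.full((3120, 484), np.nan, dtype=np.float32)   # stand-in AWS data (mm/h)

def target_hour(t_day: int, lead: int) -> int:
    """Absolute AWS hour index matching origin day t_day and lead time lead."""
    return t_day * 24 + lead

x = X[3, 14]                    # prediction issued on day 3 at 00 UTC, +14 h lead
y = Y_raw[target_hour(3, 14)]   # matching AWS measurements across 484 stations
valid = ~np.isnan(y)            # stations with an actual observation
```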
To facilitate use of AWS data to train GDAPS-KIM post-processing models, we use the location metadata of each weather station to convert AWS data into the spatial grid format used in GDAPS-KIM. The format is $\mathbf{Y}\in\mathbb{R}^{h\times W\times H}$, where each dimension corresponds to the hour (h) at which the measurement of accumulated rainfall is taken, and the spatial dimensions of the prediction map (W, H; identical to GDAPS-KIM).

We investigate the characteristics of the AWS data, considering the sparsity of AWS installation and the temporal and spatial rainfall distributions. Figure 2 provides the distributions of hourly and daily averages of rainfall amounts for each year. As Figure 2 shows, hourly rainfall events occur sporadically, and these characteristics vary greatly from year to year. For the spatial distribution, among the $65\times 50=3250$ pixels comprising the GDAPS-KIM grid, there are only 484 pixels where ground-truth data is available, and even many of those are concentrated in certain locations such as metropolitan areas (Figure 3). These properties pose a significant challenge for training calibration models, as the supervisory signal from ground-truth values is limited to specific locations and times.

Figure 2 (panels a–d: precipitation ratio by date and by hour, for 2020 and 2021): Temporal distribution of the rain ratio among all AWS observations. 'Rain' refers to hourly precipitation between 0.1 mm and 10 mm, while 'heavy rain' refers to that above 10 mm. In panels (c) and (d), the left axis gives the ratio of 'rain' and the right axis the ratio of 'heavy rain', in percentages.

Figure 3 (panels a–f: no rain, rain, and heavy rain ratios, for 2020 and 2021): Spatial distribution of precipitation from AWS observations in South Korea. 'Rain' refers to hourly precipitation between 0.1 mm and 10 mm, while 'heavy rain' refers to that above 10 mm.

### 3.3 Dataset Interface

Table 2: Statistics of the KoMet benchmark.

Rain rate (mm/h) | Proportion (%) | Rainfall Level
---|---|---
$[0.0,0.1)$ | 87.24 | No Rain
$[0.1,10.0)$ | 11.57 | Rain
$[10.0,\infty)$ | 1.19 | Heavy Rain

#### Inputs.

In our dataset, GDAPS-KIM predictions are provided in NumPy format. These predictions are propagated to deep models following normalization; each variable is linearly scaled based on min-max values derived from the entire dataset.

#### Outputs.

We formulate the precipitation forecasting problem as a pointwise classification task over three categorical classes: no rain, rain, and heavy rain (Table 2). Accordingly, the AWS observation data is pre-processed into a 2D array format according to the grid used in GDAPS-KIM. The location of each station within the grid is determined based on the location metadata of AWS stations and the grid specification of GDAPS-KIM. Of course, our benchmark also supports predicting rainfall as a regression task or as a classification task with more categories. However, the evaluation should be based on Table 2, which is generally used in South Korea [59].

### 3.4 Dataset Split

We split the data temporally into three non-overlapping datasets by repeatedly using approximately 4 days for training, followed by 2 days for validation and 2 days for testing. Separation by month or year could cause a shift in distribution between datasets. With reference to Sønderby et al. [60], we utilize this type of temporal split. Because the data scale is small, larger proportions are devoted to validation and testing compared to Sønderby et al. [60]. Due to the overlap between instances predicted for the same time by different GDAPS-KIM simulations, performing this split is not straightforward. For example, there are multiple predictions for April 21st, 2021 2:00PM, including a prediction run at midnight the same day with a lead time of 14 hours, and one run at midnight of April 20th with a 38-hour lead time, to name a few. In this paper, we follow the strategy that divides among prediction simulations, based on the point in time at which the predictions were run. Since predictions from GDAPS-KIM simulations are highly conditioned on the initial state, our benchmark uses the aforementioned scheme to prevent predictions from the same initial state from leaking into different data splits.

## 4 Optimization for Post-Processing the NWP Outputs

#### Problem Formulation.

In this work, we consider the following optimization model:

$\min_{{\bm{w}}}\Big\{\mathcal{L}({\bm{w}};\mathcal{D})\triangleq\mathbb{E}_{(\mathbb{X}_{t},\mathbb{Y}_{t})\sim\mathcal{D}}[\ell(\mathbb{X}_{t},\mathbb{Y}_{t};{\bm{w}})]\Big\}$ (1)

where $\mathcal{L}$ is the objective function parameterized by ${\bm{w}}$ on the dataset $\mathcal{D}$, whose inputs are the NWP outputs $\mathbb{X}_{t}$ and the corresponding rainfall targets $\mathbb{Y}_{t}$ at time $t$, and $\ell$ is the classification loss between the output of the neural network and the ground truth. $\mathbb{X}_{t}$ is a sequence of NWP predictions issued at time $t$, ${\bm{x}}_{(t,0)},{\bm{x}}_{(t,1)},\cdots,{\bm{x}}_{(t,L-1)}$, where ${\bm{x}}_{(t,i)}$ is the prediction for time $t+i$ based on an NWP simulation initialized at time $t$, $L$ is the length of the lead time, i.e., the number of additional hours predicted per simulation, and $\mathbb{Y}_{t}$ is a sequence of AWS observations ${\bm{y}}_{t},{\bm{y}}_{t+1},\cdots,{\bm{y}}_{t+L-1}$. In this approach, the prediction of the neural network for time $\tau$ is defined as the following probabilistic forecast:

$f(\tau;\mathbb{X}_{t},{\bm{w}},ws)=\mathbb{P}({\bm{y}}_{\tau}|{\bm{x}}_{(t,\Delta-ws+1)},\cdots,{\bm{x}}_{(t,\Delta-1)},{\bm{x}}_{(t,\Delta)})$ (2)

where $f(\cdot;{\bm{w}})$ is the neural network parameterized by ${\bm{w}}$, $t$ is the time at which the forecast simulation was issued, $\Delta=\tau-t$, $ws$ is the window size, i.e., the number of consecutive NWP predictions used for inference, and $\mathbb{P}(\cdot)$ is the probability matrix in which each pixel of the grid indicates the probability of each class among {'non-rain', 'rain', 'heavy rain'}.

#### Algorithm Description.

Here, we describe one inference step (say the $\tau$-th; $\tau\geq t$) of the proposed post-NWP optimization task. Firstly, the relevant subset of $\mathbb{X}_{t}$ is reorganized via channel-wise concatenation: $[{\bm{x}}_{(t,\Delta-ws+1)};\cdots;{\bm{x}}_{(t,\Delta-1)};{\bm{x}}_{(t,\Delta)}]\in\mathbb{R}^{(C\cdot ws)\times W\times H}$, where $W,H$ are the width and height of the grid map and $C$ is the number of variables at each pixel; thus each instance after reorganization is 3D-shaped. We then create batches based on the modified $\mathbb{X}_{t}$ and perform forward and backward propagation. The model is trained over the entire dataset until convergence, or for a pre-fixed number of epochs.

#### Temporal Dependencies on Window Size & Lead Time.
#### Temporal Dependencies on Window Size & Lead Time.

The problem formulation of post-NWP optimization elicits two important temporal parameters that influence the learning problem. First, the window size of NWP predictions used for post-processing determines the richness of the input information. While a larger window size allows the post-processing model to consider more information, it may also make the model more difficult to optimize. The second important temporal component is lead time. Predictions at different lead times exhibit distinct characteristics, leading to differences in evaluation performance. Taking this into consideration, it may be beneficial to train separate post-processing models for different lead times. For our baseline analysis, we focus on a single representative model trained to handle all possible lead times.

#### Evaluations.

For evaluation, the validity of forecasting algorithms is assessed using diverse statistical metrics commonly used for precipitation forecasting. Following common practice in multi-class classification settings [18], these metrics are calculated from the numbers of true positives ($TP_{k}$), false positives ($FP_{k}$), true negatives ($TN_{k}$), and false negatives ($FN_{k}$) for a generic class $k$:

* Accuracy (ACC) gives an overall measure of how often the model predicts correctly over the entire dataset.
* Probability of Detection (POD) is the recall, calculated as $\frac{TP_{k}}{TP_{k}+FN_{k}}$.
* Critical Success Index (CSI) [14] is a categorical score that, similarly to the F1-score, considers more aspects of the confusion matrix; it is computed as $\frac{TP_{k}}{TP_{k}+FN_{k}+FP_{k}}$.
* False Alarm Ratio (FAR) [3] is the number of false alarms divided by the total number of warnings or alarms, computed as $\frac{FP_{k}}{TP_{k}+FP_{k}}$; it should not be confused with the false alarm rate (probability of false detection) [3].
* Bias is the ratio of the frequency of occurrence predicted by the forecast model to the observed frequency of occurrence of the phenomenon, computed as $\frac{TP_{k}+FP_{k}}{TP_{k}+FN_{k}}$. A value greater than 1 means that the model predicts the phenomenon more frequently than it actually occurs. The more accurate the forecast, the closer this index is to 1.

## 5 Experiment

Table 3: Evaluation metrics of KoMet and baseline models for precipitation, with 12 variables used for training. Best performances are marked in bold.

 | Rain | Heavy Rain
---|---|---
 | Acc | POD | CSI | FAR | Bias | Acc | POD | CSI | FAR | Bias
GDAPS-KIM | 0.747 | 0.633 | 0.263 | 0.690 | 2.042 | 0.985 | 0.055 | 0.045 | 0.795 | 0.266
U-Net | 0.840 | 0.441 | 0.282 | 0.562 | 1.007 | 0.983 | 0.040 | 0.029 | 0.906 | 0.426
ConvLSTM | 0.869 | 0.387 | 0.296 | 0.444 | 0.696 | 0.986 | 0.007 | 0.006 | 0.889 | 0.059
MetNet | 0.854 | 0.468 | 0.314 | 0.512 | 0.959 | 0.986 | 0.013 | 0.012 | 0.838 | 0.079

Figure 4: CSI scores of GDAPS-KIM and baseline models for the class 'rain'. Scores are given for predictions corresponding to specific lead times, ranging from 6 to 20 hours.

Figure 5: Performance of the models under different sampling strategies (panels: Acc, CSI, Bias). 'Vanilla' indicates the baseline training strategy without resampling of AWS targets. The metrics are averaged over 10 different random seeds and error bars indicate the 95% confidence intervals.
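As a concrete companion to the verification scores defined above, here is a minimal sketch that computes them from per-class confusion counts; this is our own illustration, not the benchmark's evaluation code.

```python
import numpy as np

def categorical_scores(y_true, y_pred, k):
    """POD, CSI, FAR, and Bias for class k from flattened label arrays."""
    tp = np.sum((y_pred == k) & (y_true == k))
    fp = np.sum((y_pred == k) & (y_true != k))
    fn = np.sum((y_pred != k) & (y_true == k))
    pod = tp / (tp + fn)           # probability of detection (recall)
    csi = tp / (tp + fn + fp)      # critical success index
    far = fp / (tp + fp)           # false alarm ratio
    bias = (tp + fp) / (tp + fn)   # frequency bias
    return pod, csi, far, bias

# Example: classes 0 = no rain, 1 = rain, 2 = heavy rain
y_true = np.array([0, 0, 1, 1, 2, 0, 1])
y_pred = np.array([0, 1, 1, 0, 2, 0, 1])
print(categorical_scores(y_true, y_pred, k=1))
```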
Figure 6: CSI performance according to changes in window size in hours (panels a-c: U-Net, ConvLSTM, MetNet) and according to changes in weight decay (panels d-f: U-Net, ConvLSTM, MetNet).

### 5.1 Benchmarking Neural Network Architectures

Various models have been adopted or designed to effectively extrapolate future weather status from past and current states. Here, we provide three baseline architectures for precipitation forecasting: U-Net [52], ConvLSTM [55], and MetNet [60, 16] (detailed explanations are provided in the Appendix). Table 3 shows the results for lead times ranging from 6 to 87 hours, according to the choice of architecture. Compared to the statistics of GDAPS-KIM for predicting the class 'rain', the quality of the post-processed predictions is consistently improved by all three networks, with MetNet performing best. In contrast, deep learning-based post-processing actually hinders performance for 'heavy rain' prediction; in particular, MetNet faces severe performance degradation. We leave this phenomenon as a challenge to be addressed. Figure 4 illustrates the CSI scores for precipitation of $\geq 0.1$ mm/h over lead time. Here, MetNet also outperforms the other baseline models.

### 5.2 Influential Variable Selection

As listed in Table 1, GDAPS-KIM contains many variables. Naively using all available variables may impede generalization, so it is important to select the most informative variables. Variable selection experiments can be performed in various ways; we propose a baseline experiment in which the variables are divided into Pres-type and Unis-type groups and treated as manipulated variables and control variables, respectively. An example is illustrated in the Appendix, where we observe that selecting certain significant variables performs better than simply using more variables.

### 5.3 Sampling Strategies for the Class-Imbalance Issue

As a solution to the class-imbalance issue, we provide two probabilistic data sampling techniques for target transformation: (1) Under-Sampling for 'No Rain' Points, which uses only a fraction $p$ of the 'no rain' points for learning, to reduce the dominance of no-precipitation points; and (2) Balancing for 'Rain' Points, which maintains the ratio of 'rain' to 'no rain' points at $1:p$ by under-sampling 'no rain' points when there are excessively many of them (a sketch of both transformations is given at the end of this section). Figure 5 depicts the performance of models trained with the different sampling strategies. Although the performance change differs depending on the architecture, we confirm that generalization can be further facilitated when the data distribution is adjusted to be fairer. We expect that approaches manipulating the loss function can also be effective [29, 7].

### 5.4 Sensitivity to Hyperparameter Settings

#### Window Size.

The first row of Figure 6 shows the CSI scores as the window size changes. For 'rain', the CSI score is relatively unchanged, while the 'heavy rain' case is shown to be extremely sensitive.

#### Weight Decay.

The second row of Figure 6 shows the CSI scores according to changes in the magnitude of weight decay. When comparing the performance on rain and heavy rain, we see differing trends as weight decay increases. The results above suggest the need for combinatorial optimization over a variety of hyperparameters such as batch size, optimizer, beta of batch normalization, and learning rate.
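As a concrete illustration of the two sampling strategies of Section 5.3, the following sketch transforms a target map by masking discarded 'no rain' points; the function name and the NaN-masking convention are our own illustrative assumptions (the Appendix details the exact cases used in the benchmark).

```python
import numpy as np

def resample_targets(target, p=0.2, balance=False, rng=None):
    """Target transformation for the class-imbalance issue (Section 5.3).

    target: 2D array of class labels (0 = no rain, 1 = rain, 2 = heavy rain),
            with NaN marking grid cells without observations.
    balance=False: keep a random fraction p of the 'no rain' points.
    balance=True:  keep roughly p * (# rain points) 'no rain' points, i.e.,
                   maintain a 'rain' : 'no rain' ratio of about 1 : p.
    Discarded points are set to NaN and excluded from the loss.
    """
    rng = rng or np.random.default_rng()
    out = target.astype(float).copy()
    norain = np.argwhere(out == 0)
    n_rain = int(np.sum(out >= 1))
    n_keep = int(p * (n_rain if balance else len(norain)))
    drop = rng.permutation(len(norain))[n_keep:]  # empty if n_keep >= #norain
    out[tuple(norain[drop].T)] = np.nan
    return out
```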
## 6 Conclusion & Future Work

In this paper, we present a new benchmark dataset, termed KoMet, for a hybrid NWP-DL workflow that post-processes the rain predictions of South Korea's NWP model under the supervision of surface-level observations from AWS. Our work contributes to the solution of precipitation forecasting, and we believe it opens the door to developing robust and explainable forecasting services that benefit broader society. Our benchmark dataset will lower the barrier to entry for the DL community to contribute to research in precipitation forecasting.

## Acknowledgments and Disclosure of Funding

This work was supported by the Korea Meteorological Administration Research and Development Program "Development of AI techniques for Weather Forecasting" under Grant (KMA2021-00121) and the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)). We thank Jaehoon Oh and Sangmook Kim for discussing the pipelines of precipitation forecasting, and Yun Am Seo (Jeju National University) and Min-Gee Hong (National Institute of Meteorological Sciences) for discussing the data pre-processing.

## References

* [1] Georgy Ayzel, Tobias Scheffer, and Maik Heistermann. Rainnet v1.0: a convolutional neural network for radar-based precipitation nowcasting. Geoscientific Model Development, 13(6):2631–2644, 2020.
* [2] Muhammad Azam, Seung Jin Maeng, Hyung San Kim, Seung Wook Lee, and Jae Eun Lee. Spatial and temporal trend analysis of precipitation and drought in south korea. Water, 10(6):765, 2018.
* [3] Lindsey R Barnes, David M Schultz, Eve C Gruntfest, Mary H Hayden, and Charles C Benight. Corrigendum: False alarm rate or false alarm ratio? Weather and Forecasting, 24(5):1452–1454, 2009.
* [4] Peter Bauer, Alan Thorpe, and Gilbert Brunet. The quiet revolution of numerical weather prediction. Nature, 525(7567):47–55, 2015.
* [5] Souhail Boussetta, Gianpaolo Balsamo, Emanuel Dutra, Anton Beljaars, and Clement Albergel. Assimilation of surface albedo and vegetation states from satellite observations and their impact on numerical weather prediction. Remote Sensing of Environment, 163:111–126, 2015.
* [6] Mark Buehner and Dominik Jacques. Non-gaussian deterministic assimilation of radar-derived precipitation accumulations. Monthly Weather Review, 148(2):783–808, 2020.
* [7] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. Advances in neural information processing systems, 32, 2019.
* [8] Niya Chen, Zheng Qian, Ian T Nabney, and Xiaofeng Meng. Wind power forecasts using gaussian processes and numerical weather prediction. IEEE Transactions on Power Systems, 29(2):656–665, 2013.
* [9] Tsing-Chang Chen, Shih-Yu Wang, Wan-Ru Huang, and Ming-Cheng Yen. Variation of the east asian summer monsoon rainfall. Journal of Climate, 17(4):744–762, 2004.
* [10] Philippe Courtier, E Andersson, W Heckley, D Vasiljevic, M Hamrud, A Hollingsworth, F Rabier, M Fisher, and J Pailleux. The ecmwf implementation of three-dimensional variational assimilation (3d-var). i: Formulation. Quarterly Journal of the Royal Meteorological Society, 124(550):1783–1807, 1998.
* [11] Maria Grazia De Giorgi, Antonio Ficarella, and Marco Tarantino. Assessment of the benefits of numerical weather predictions in wind power forecasting based on statistical methods. Energy, 36(7):3968–3978, 2011.
* [12] Armen Der Kiureghian and Ove Ditlevsen. Aleatory or epistemic? does it matter? Structural safety, 31(2):105–112, 2009.
* [13] Francesca Di Giuseppe, Florian Pappenberger, Fredrik Wetterhall, Blazej Krzeminski, Andrea Camia, Giorgio Libertá, and Jesus San Miguel. The potential predictability of fire danger provided by numerical weather prediction. Journal of Applied Meteorology and Climatology, 55(11):2469–2491, 2016.
* [14] RJ Donaldson, Rosemary M Dyer, and Michael J Kraus. Objective evaluator of techniques for predicting severe weather events. In Bulletin of the American Meteorological Society, volume 56, pages 755–755, 1975.
* [15] M Drusch. Initializing numerical weather prediction models with satellite-derived surface soil moisture: Data assimilation experiments with ecmwf's integrated forecast system and the tmi soil moisture data set. Journal of Geophysical Research: Atmospheres, 112(D3), 2007.
* [16] Lasse Espeholt, Shreya Agrawal, Casper Sønderby, Manoj Kumar, Jonathan Heek, Carla Bromberg, Cenk Gazen, Jason Hickey, Aaron Bell, and Nal Kalchbrenner. Skillful twelve hour precipitation forecasts using large context neural networks. arXiv preprint arXiv:2111.07470, 2021.
* [17] Pierre Gauthier, C Charette, L Fillion, P Koclas, and S Laroche. Implementation of a 3d variational data assimilation system at the canadian meteorological centre. part i: The global analysis. Atmosphere-Ocean, 37(2):103–156, 1999.
* [18] Margherita Grandini, Enrico Bagli, and Giorgio Visani. Metrics for multi-class classification: an overview. arXiv preprint arXiv:2008.05756, 2020.
* [19] Peter Grönquist, Tal Ben-Nun, Nikoli Dryden, Peter Dueben, Luca Lavarini, Shigang Li, and Torsten Hoefler. Predicting weather uncertainty with deep convnets. arXiv preprint arXiv:1911.00630, 2019.
* [20] Peter Grönquist, Chengyuan Yao, Tal Ben-Nun, Nikoli Dryden, Peter Dueben, Shigang Li, and Torsten Hoefler. Deep learning for post-processing ensemble weather forecasts. Philosophical Transactions of the Royal Society A, 379(2194):20200092, 2021.
* [21] Thomas M Hamill and Chris Snyder. A hybrid ensemble kalman filter–3d variational analysis scheme. Monthly Weather Review, 128(8):2905–2919, 2000.
* [22] Sue Ellen Haupt, William Chapman, Samantha V Adams, Charlie Kirkwood, J Scott Hosking, Niall H Robinson, Sebastian Lerch, and Aneesh C Subramanian. Towards implementing artificial intelligence post-processing in weather and climate: proposed actions from the oxford 2019 workshop. Philosophical Transactions of the Royal Society A, 379(2194):20200091, 2021.
* [23] Gerrit Hoogenboom. Contribution of agrometeorology to the simulation of crop production and its applications. Agricultural and forest meteorology, 103(1-2):137–157, 2000.
* [24] Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th annual international symposium on computer architecture, pages 1–12, 2017.
* [25] Arlind Kadra, Marius Lindauer, Frank Hutter, and Josif Grabocka. Well-tuned simple nets excel on tabular datasets. Advances in Neural Information Processing Systems, 34, 2021.
* [26] E Kalnay, M Kanamitsu, and WE Baker. Global numerical weather prediction at the national meteorological center. Bulletin of the American Meteorological Society, 71(10):1410–1428, 1990.
* [27] Eugenia Kalnay.
Atmospheric modeling, data assimilation and predictability. Cambridge university press, 2003. * [28] Vadim Lebedev, Vladimir Ivashkin, Irina Rudenko, Alexander Ganshin, Alexander Molchanov, Sergey Ovcharenko, Ruslan Grokhovetskiy, Ivan Bushmarinov, and Dmitry Solomentsev. Precipitation nowcasting with satellite imagery. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2680–2688, 2019. * [29] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988, 2017. * [30] Yunjie Liu, Evan Racah, Joaquin Correa, Amir Khosrowshahi, David Lavers, Kenneth Kunkel, Michael Wehner, William Collins, et al. Application of deep convolutional neural networks for detecting extreme weather in climate datasets. arXiv preprint arXiv:1605.01156, 2016. * [31] AC Lorenc, SP Ballard, RS Bell, NB Ingleby, PLF Andrews, DM Barker, JR Bray, AM Clayton, T Dalby, D Li, et al. The met. office global three-dimensional variational data assimilation scheme. Quarterly Journal of the Royal Meteorological Society, 126(570):2991–3012, 2000. * [32] Andrew C Lorenc. The potential of the ensemble kalman filter for nwp—a comparison with 4d-var. Quarterly Journal of the Royal Meteorological Society: A journal of the atmospheric sciences, applied meteorology and physical oceanography, 129(595):3183–3203, 2003. * [33] Peter Lynch. The origins of computer weather prediction and climate modeling. Journal of computational physics, 227(7):3431–3444, 2008. * [34] Hanoi Medina and Di Tian. Comparison of probabilistic post-processing approaches for improving numerical weather prediction-based daily and weekly reference evapotranspiration forecasts. Hydrology and Earth System Sciences, 24(2):1011–1030, 2020. * [35] GA Mills and JR Colquhoun. Objective prediction of severe thunderstorm environments: Preliminary results linking a decision tree with an operational regional nwp model. Weather and Forecasting, 13(4):1078–1092, 1998. * [36] Ionel M Navon. Data assimilation for numerical weather prediction: a review. Data assimilation for atmospheric, oceanic and hydrologic applications, pages 21–65, 2009. * [37] Pertti Nurmi, Adriaan Perrels, and Väinö Nurmi. Expected impacts and value of improvements in weather forecasting on the road transport sector. Meteorological Applications, 20(2):217–223, 2013. * [38] Haraldur Olafsson and Jian-Wen Bao. Uncertainties in Numerical Weather Prediction. Elsevier, 2020. * [39] TN Palmer and Č Branković. The 1988 us drought linked to anomalous sea surface temperature. Nature, 338(6210):54–57, 1989. * [40] David F Parrish and John C Derber. The national meteorological center’s spectral statistical-interpolation analysis system. Monthly Weather Review, 120(8):1747–1763, 1992. * [41] Richard Perez, Elke Lorenz, Sophie Pelland, Mark Beauharnois, Glenn Van Knowe, Karl Hemker Jr, Detlev Heinemann, Jan Remund, Stefan C Müller, Wolfgang Traunmüller, et al. Comparison of numerical weather prediction solar irradiance forecasts in the us, canada and europe. Solar Energy, 94:305–326, 2013. * [42] Weihong Qian, H-S Kang, and D-K Lee. Distribution of seasonal rainfall in the east asian monsoon region. Theoretical and Applied Climatology, 73(3):151–168, 2002. * [43] Florence Rabier, Heikki Järvinen, E Klinker, J-F Mahfouf, and A Simmons. The ecmwf operational implementation of four-dimensional variational assimilation. 
i: Experimental results with simplified physics. Quarterly Journal of the Royal Meteorological Society, 126(564):1143–1170, 2000. * [44] Stephan Rasp, Peter D Dueben, Sebastian Scher, Jonathan A Weyn, Soukayna Mouatadid, and Nils Thuerey. Weatherbench: a benchmark data set for data-driven weather forecasting. Journal of Advances in Modeling Earth Systems, 12(11):e2020MS002203, 2020. * [45] Stephan Rasp and Sebastian Lerch. Neural networks for postprocessing ensemble weather forecasts. Monthly Weather Review, 146(11):3885–3900, 2018. * [46] Stephan Rasp, Michael S Pritchard, and Pierre Gentine. Deep learning to represent subgrid processes in climate models. Proceedings of the National Academy of Sciences, 115(39):9684–9689, 2018. * [47] Suman Ravuri, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons, Maria Athanassiadou, Sheleem Kashem, Sam Madge, et al. Skilful precipitation nowcasting using deep generative models of radar. Nature, 597(7878):672–677, 2021. * [48] Xiaoli Ren, Xiaoyong Li, Kaijun Ren, Junqiang Song, Zichen Xu, Kefeng Deng, and Xiang Wang. Deep learning-based weather prediction: a survey. Big Data Research, 23:100178, 2021. * [49] Eduardo Rocha Rodrigues, Igor Oliveira, Renato Cunha, and Marco Netto. Deepdownscale: a deep learning strategy for high-resolution weather forecast. In 2018 IEEE 14th International Conference on e-Science (e-Science), pages 415–422. IEEE, 2018. * [50] MJ Rodwell and TN Palmer. Using numerical weather prediction to assess climate models. Quarterly Journal of the Royal Meteorological Society: A journal of the atmospheric sciences, applied meteorology and physical oceanography, 133(622):129–146, 2007. * [51] David Rolnick, Priya L Donti, Lynn H Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, et al. Tackling climate change with machine learning. ACM Computing Surveys (CSUR), 55(2):1–96, 2022. * [52] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015. * [53] Martin G Schultz, Clara Betancourt, Bing Gong, Felix Kleinert, Michael Langguth, Lukas Hubert Leufen, Amirpasha Mozaffari, and Scarlet Stadtler. Can deep learning beat numerical weather prediction? Philosophical Transactions of the Royal Society A, 379(2194):20200097, 2021. * [54] Bora Shehu and Uwe Haberlandt. Relevance of merging radar and rainfall gauge data for rainfall nowcasting in urban hydrology. Journal of Hydrology, 594:125931, 2021. * [55] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems, 28, 2015. * [56] Xingjian Shi, Zhihan Gao, Leonard Lausen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun Woo. Deep learning for precipitation nowcasting: A benchmark and a new model. Advances in neural information processing systems, 30, 2017. * [57] Hyun-Cheol Shin, Ji-Hyun Ha, Kwang Deuk Ahn, Eun Hee Lee, Chang Hwan Kim, Yong Hee Lee, and Adam Clayton. An overview of kma’s operational nwp data assimilation systems. Data Assimilation for Atmospheric, Oceanic and Hydrologic Applications (Vol. IV), pages 665–687, 2022. * [58] Julia Slingo and Tim Palmer. Uncertainty in weather and climate prediction. 
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369(1956):4751–4767, 2011. * [59] Keon Tae Sohn, Jeong Hyeong Lee, Soon Hwan Lee, and Chan Su Ryu. Statistical prediction of heavy rain in south korea. Advances in Atmospheric Sciences, 22(5):703–710, 2005. * [60] Casper Kaae Sønderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, and Nal Kalchbrenner. Metnet: A neural weather model for precipitation forecasting. arXiv preprint arXiv:2003.12140, 2020. * [61] Hwan-Jin Song, Byunghwan Lim, and Sangwon Joo. Evaluation of rainfall forecasts with heavy rain types in the high-resolution unified model over south korea. Weather and Forecasting, 34(5):1277–1293, 2019. * [62] Petr Štěpánek, Miroslav Trnka, Filip Chuchma, Pavel Zahradníček, Petr Skalák, Aleš Farda, Rostislav Fiala, Petr Hlavinka, Jan Balek, Daniela Semerádová, et al. Drought prediction system for central europe and its validation. Geosciences, 8(4):104, 2018. * [63] Juanzhen Sun. Convective-scale assimilation of radar data: progress and challenges. Quarterly Journal of the Royal Meteorological Society: A journal of the atmospheric sciences, applied meteorology and physical oceanography, 131(613):3439–3463, 2005. * [64] Juanzhen Sun, Ming Xue, James W Wilson, Isztar Zawadzki, Sue P Ballard, Jeanette Onvlee-Hooimeyer, Paul Joe, Dale M Barker, Ping-Wah Li, Brian Golding, et al. Use of nwp for nowcasting convective precipitation: Recent progress and challenges. Bulletin of the American Meteorological Society, 95(3):409–426, 2014. * [65] Maxime Taillardat and Olivier Mestre. From research to applications–examples of operational ensemble post-processing in france using machine learning. Nonlinear Processes in Geophysics, 27(2):329–347, 2020. * [66] Olivier Talagrand and Philippe Courtier. Variational assimilation of meteorological observations with the adjoint vorticity equation. i: Theory. Quarterly Journal of the Royal Meteorological Society, 113(478):1311–1328, 1987. * [67] Hadrien Verbois, Yves-Marie Saint-Drenan, Alexandre Thiery, and Philippe Blanc. Statistical learning for nwp post-processing: A benchmark for solar irradiance forecasting. Solar Energy, 238:132–149, 2022. * [68] Sabrina Wahl. Uncertainty in mesoscale numerical weather prediction: Probabilistic forecasting of precipitation. PhD thesis, Bonn, Rheinische Friedrich-Wilhelms-Universität Bonn, Diss., 2015, 2015. * [69] Yong Wang, Estelle Coning, Abdoulaye Harou, Wilfried Jacobs, Paul Joe, Larisa Nikitina, Rita Roberts, Jianjie Wang, Jim Wilson, Aitor Atencia, Benedikt Bica, Barbara Brown, Steven Goodmann, Alexander Kann, Ping-wah Li, Isabel Monterio, Franziska Schmid, Alan Seed, and Jenny Sun. Guidelines for Nowcasting Techniques. 11 2017. * [70] Peter A. G. Watson. Applying machine learning to improve simulations of a chaotic dynamical system using empirical error correction. Journal of Advances in Modeling Earth Systems, 11(5):1402–1417, 2019. * [71] Jonathan A Weyn, Dale R Durran, and Rich Caruana. Can machines learn to predict weather? using deep learning to predict gridded 500-hpa geopotential height from historical weather data. Journal of Advances in Modeling Earth Systems, 11(8):2680–2693, 2019. * [72] James W Wilson, Yerong Feng, Min Chen, and Rita D Roberts. Nowcasting challenges during the beijing olympics: Successes, failures, and implications for future nowcasting systems. Weather and Forecasting, 25(6):1691–1714, 2010. 
* [73] Ming Xue, Donghai Wang, Jidong Gao, Keith Brewster, and Kelvin K Droegemeier. The advanced regional prediction system (arps), storm-scale numerical weather prediction and data assimilation. Meteorology & Atmospheric Physics, 82, 2003. * [74] Fuhan Zhang, Xiaodong Wang, Jiping Guan, Meihan Wu, and Lina Guo. Rn-net: A deep learning approach to 0–2 h rainfall nowcasting based on radar and automatic weather station data. Sensors, 21(6):1981, 2021. * [75] Gang Zhang, Dazhi Yang, George Galanis, and Emmanouil Androulakis. Solar forecasting with hourly updated numerical weather prediction. Renewable and Sustainable Energy Reviews, 154:111768, 2022. * [76] Yang Zhang, Marc Bocquet, Vivien Mallet, Christian Seigneur, and Alexander Baklanov. Real-time air quality forecasting, part i: History, techniques, and current status. Atmospheric Environment, 60:632–655, 2012. ## Appendix A Overview of Appendix In this supplementary material, we present additional details, results, and experiments that are not included in the main paper due to the space limit. ## Appendix B Access to Dataset * • URL: https://github.com/osilab-kaist/KoMet-Benchmark-Dataset * • Dataset URL: https://www.dropbox.com/s/qachyygl2ouuy1v/KoMet.v1.0.tar.gz?dl=0 * • Hosting and maintenance plan: The dataset is provided via Dropbox. Our GitHub repository provides tools for data access as well as model training and evaluation for our proposed workflow. We will continue to ensure public access to our dataset, and possibly provide the dataset through multiple hosting services, if the need arises. * • License: We include the MIT License about the use of public GitHub repository. ## Appendix C Ethics Statement To address potential concerns, we describe the ethical aspect with respect to security and the environment. #### Security. Weather information is an important factor in a country’s economic and military security. Nevertheless, general users do not have access to up-to- date NWP prediction data, so if access to live NWP data is well protected, the security problem will be sufficiently prevented. #### Environment. While our approach brings more benefits in generalizing precipitation forecasting, it still relies on NWP simulations which require vast amounts of computation, and thus energy consumption. The same is true for training and running deep learning models for post-processing. However, we believe that the positive impact of accurate weather forecasts on energy consumption and planning far outweighs these costs and will lead to more environmentally conscious energy policies. ## Appendix D Limitations and Future Directions In this section, we describe the limitations of our work and future directions for further development. #### Limitations. Although we illustrate the advantages of our benchmark dataset, the bottleneck lies in the accuracy of GDAPS-KIM as well as AWS. We only consider the summer season of South Korea, specifically, July and August. We do not investigate other NWP datasets. We also do not consider the precipitation nowcasting task, with lead times of less than 3 hours, as this would limit the deep model’s window size to less than 3, possibly hindering model performance. #### Future Directions. In future work, we aim to explore more robust training strategies under the data scarcity scenario. Furthermore, thorough investigation is needed for the selection of important variables in Table 1. In addition, we intend to develop our methods in other seasons such as Spring, Fall, and Winter. 
Lastly, we plan to explore methods to tackle precipitation nowcasting using a limited window size.

## Appendix E Additional Detailed Explanations

* Thresholds for Rain and Heavy Rain. KMA defines that rain occurs when the rainfall exceeds 0.1 mm/h and heavy rain occurs when the rainfall exceeds 10 mm/h. Literature on precipitation forecasting over the Korean Peninsula follows this standard [59, 61].
* U-Net [52] is a model designed to solve the image segmentation problem in biomedical images. During propagation through the encoder part, important features are captured in a low-dimensional form. Applying this to the task of NWP post-processing, U-Net is used to extract the meteorological features from GDAPS-KIM and decode them into a feature map containing precipitation predictions, similar to [47]. Note that [47] used radar observations as inputs, rather than NWP predictions.
* Convolutional Long Short-Term Memory Neural Network (ConvLSTM) [55, 56] is a model that combines LSTM and convolutional operations, designed to model temporal and spatial relationships, respectively, within sequences of images. This enables the model to identify relationships between sequential NWP prediction maps to derive refined precipitation predictions. The model structure consists of encoding, decoding and forecasting modules comprised of stacks of ConvLSTM layers.
* MetNet [60, 16]. In MetNet, the spatial downsampler module first reduces the input size by passing it through several convolutional layers. Next, a ConvLSTM structure is used in the temporal encoder to create an output tensor with spatial and temporal information for each pixel. Lastly, the propagated feature map goes through the self-attention block of the spatial aggregator, which collects information at the global range, and finally passes through a classifier that outputs the precipitation amount for each pixel as a probability distribution.

## Appendix F Implementation Details

The exact implementation details and reproduction instructions for our experiments are available in our GitHub repository. We explain the important details in the following paragraphs.

#### Data Preprocessing.

We refer to the settings in [16]. For the transformation of GDAPS-KIM, we only apply min-max normalization to each variable. We employ various transformations on the ground-truth AWS targets to mitigate the sparsity of observations. In all of our experiments, we apply linear interpolation between available observations to fill in the non-learning points, using functions provided by the scipy package (see the sketch below). This is illustrated in Figure 7. To tackle the class-imbalance issue, we study two sampling strategies in Subsection 5.3; their details are explained in the following paragraphs.

Figure 7: Visualization of AWS observations of precipitation before (a) and after (b) interpolation, on April 23, 2020 06:00 UTC. The unit of precipitation is mm/h.

#### Under-Sampling for 'No Rain' Points.

As the ratio of 'no rain' is significantly higher than that of 'rain' and 'heavy rain', we consider a simple strategy of randomly sampling a subset of 'no rain' points within each sample, at a rate of $p$. The remaining 'no rain' points are discarded during training, i.e., set to NaN. We use a sampling ratio of $p=0.20$.
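As referenced in the Data Preprocessing paragraph above, the following is a minimal sketch of the linear interpolation of sparse station observations onto the model grid. It assumes `scipy.interpolate.griddata`; the paper does not name the exact scipy function, so this choice is an illustrative assumption.

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_aws(station_xy, station_rain, W=65, H=50):
    """Linearly interpolate sparse AWS observations onto a (W, H) grid.

    station_xy:   (n, 2) array of station grid coordinates
    station_rain: (n,) array of hourly precipitation at those stations
    Cells outside the convex hull of the stations remain NaN.
    """
    gx, gy = np.meshgrid(np.arange(W), np.arange(H), indexing="ij")
    return griddata(station_xy, station_rain, (gx, gy),
                    method="linear", fill_value=np.nan)

# Example with 484 hypothetical stations on the 65x50 GDAPS-KIM grid
rng = np.random.default_rng(0)
xy = rng.uniform([0, 0], [65, 50], size=(484, 2))
rain = rng.exponential(scale=0.5, size=484)
dense = interpolate_aws(xy, rain)
```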
#### Balancing for 'Rain' Points.

This technique considers the ratio of 'rain' and 'heavy rain' to 'no rain' within each feature map. Specifically, let $R$ be the number of points with 'rain' or 'heavy rain', and $N$ be the number of 'no rain' points. Given the target rate $p$, we randomly sample 'no rain' points to match $p=N/R$ as closely as possible. We also use a sampling ratio of $p=0.20$ for this method. The specific strategy applied to each sample is as follows:

* Case 1. $R=0$: the sample is not used for training.
* Case 2. $R\times p<N$: $(N-R\times p)$ 'no rain' points are discarded from the sample.
* Case 3. $R\times p\geq N$: the sample is used as is.

#### Training Settings.

We use the Adam optimizer with the default learning rate of 0.001 and beta parameters of 0.9 and 0.999. We train the models for 20 epochs and select the best epoch based on the highest CSI performance on the validation set. Reported performances are based on the test set, unless otherwise specified. We use a window size of 3 and lead times ranging from 6 to 87 hours. We use two Pres-type variables (T, rh_liq) at three isobaric planes, {500hPa, 700hPa, 850hPa}, and six Unis-type variables (rain, q2m, rh2m, t2m, tsfc, ps). These baseline settings are used in all experiments, unless otherwise stated.

#### Resources.

We run our experiments on NVIDIA GeForce RTX 3090 Ti GPUs. Each training run takes approximately 30 minutes.

## Appendix G Additional Experiments

### G.1 Influential Variables

We conduct an ablation study to investigate the influence of each individual variable. We start with a baseline set of variables, curated under the guidance of weather forecasters in South Korea. This includes all baseline variables used in our other experiments, as well as one additional Unis variable: pbltype. We then measure the performance of our baseline U-Net model when adding or subtracting individual variables used for training. We consider the u, v and u10m, v10m variables jointly, as both pairs pertain to wind. For Pres variables, we use values corresponding to three isobaric surfaces at {500hPa, 700hPa, 850hPa}. We report our results in Table 4 and find that while some metrics such as accuracy are relatively unaffected, others such as CSI and bias vary depending on the choice of variables.

Table 4: Ablation study on the binary classification performance of our baseline U-Net model for 'rain' and 'heavy rain' depending on the selection of variables. Each row represents an addition (+) or a subtraction (-) of a single variable from our curated set.
Type | Ablation | Rain | Heavy Rain
---|---|---|---
 | | Acc | POD | CSI | FAR | Bias | Acc | POD | CSI | FAR | Bias
Curated | | 0.840 | 0.441 | 0.282 | 0.562 | 1.007 | 0.983 | 0.040 | 0.029 | 0.906 | 0.426
Pres | +uv | 0.859 | 0.385 | 0.280 | 0.493 | 0.759 | 0.983 | 0.042 | 0.031 | 0.896 | 0.407
 | -T | 0.857 | 0.388 | 0.279 | 0.503 | 0.782 | 0.984 | 0.037 | 0.030 | 0.874 | 0.296
 | -rh_liq | 0.854 | 0.444 | 0.302 | 0.515 | 0.915 | 0.984 | 0.018 | 0.015 | 0.930 | 0.264
 | +hgt | 0.851 | 0.392 | 0.272 | 0.530 | 0.834 | 0.983 | 0.046 | 0.035 | 0.880 | 0.388
Unis | +hpbl | 0.851 | 0.378 | 0.266 | 0.527 | 0.799 | 0.984 | 0.032 | 0.025 | 0.889 | 0.287
 | -pbltype | 0.840 | 0.483 | 0.300 | 0.558 | 1.093 | 0.983 | 0.041 | 0.031 | 0.891 | 0.378
 | +psl | 0.862 | 0.361 | 0.271 | 0.480 | 0.695 | 0.985 | 0.035 | 0.028 | 0.870 | 0.268
 | -q2m | 0.848 | 0.457 | 0.300 | 0.534 | 0.982 | 0.983 | 0.052 | 0.037 | 0.885 | 0.449
 | -rh2m | 0.838 | 0.489 | 0.301 | 0.561 | 1.112 | 0.983 | 0.065 | 0.046 | 0.860 | 0.462
 | -t2m | 0.846 | 0.460 | 0.299 | 0.540 | 1.000 | 0.983 | 0.044 | 0.032 | 0.892 | 0.409
 | -tsfc | 0.844 | 0.510 | 0.317 | 0.544 | 1.119 | 0.983 | 0.067 | 0.048 | 0.850 | 0.445
 | +uv10m | 0.843 | 0.447 | 0.288 | 0.553 | 0.999 | 0.984 | 0.049 | 0.039 | 0.837 | 0.299
 | +topo | 0.855 | 0.411 | 0.288 | 0.510 | 0.839 | 0.984 | 0.057 | 0.044 | 0.842 | 0.361
 | +ps | 0.852 | 0.445 | 0.300 | 0.521 | 0.927 | 0.985 | 0.026 | 0.022 | 0.884 | 0.227

### G.2 Lead Time on Heavy Rain Prediction

Figure 8 illustrates the CSI scores for precipitation of $\geq 10.0$ mm/h over lead time. Unlike the trends in rain prediction, GDAPS-KIM interestingly outperforms the baseline DL models. It seems that heavy rain prediction is a more arduous challenge to be addressed.

Figure 8: CSI scores of GDAPS-KIM and baseline models for binary heavy rain classification. Scores are given for predictions corresponding to specific lead times, ranging from 6 to 20 hours.

### G.3 Learning Rate

Figure 9 shows the CSI scores according to changes in learning rate. When comparing the performance on 'rain' and 'heavy rain', we see different trends as the learning rate increases.

Figure 9: CSI performance according to changes in learning rate (panels: U-Net, ConvLSTM, MetNet).

### G.4 Learning Curve

We visualize the learning curves of the networks according to changes in window size (Figures 10, 11 and 12).

Figure 10: U-Net: CSI learning curve according to window size (panels: window size = 1, 3, 6).

Figure 11: ConvLSTM: CSI learning curve according to window size (panels: window size = 1, 3, 6).

Figure 12: MetNet: CSI learning curve according to window size (panels: window size = 1, 3, 6).

### G.5 Generalization for Different Times of the Year

We evaluate the deep models trained on our benchmark for different times of the year, ranging from July 2020 to July 2021. Our benchmark dataset covers only the summers of 2020 and 2021, which raises concerns about generalization to other seasons. While the data of other months cannot be provided owing to licensing limits, the CSI distribution according to lead time as well as month is available as a substitute for the raw data. Figure 13 and Figure 14 show the CSI distribution by lead time and month for ConvLSTM and U-Net, respectively.
Regarding lead time, both models have larger CSI values at early lead times than at later ones, and the overall curve has inflection points at similar times to the curve of the rain ratio. Moreover, the curve of heavy-rain CSI closely resembles the distribution of the heavy-rain ratio over lead time. Regarding month, the annual average rain CSI is near 0.3, but performance is extremely low in winter for both models. This appears to be because the amount of rain in winter is close to zero, and the phenomenon is even more pronounced for heavy rain.

Figure 13: CSI distribution of ConvLSTM outputs from July 2020 to July 2021 (panels: rain and heavy-rain CSI by lead time and by month). The orange line indicates the rain CSI of the model and the blue dotted line is the distribution of the observed rain ratio from AWS.

Figure 14: CSI distribution of U-Net outputs from July 2020 to July 2021 (panels: rain and heavy-rain CSI by lead time and by month). The orange line indicates the rain CSI of the model and the blue dotted line is the distribution of the observed rain ratio from AWS.

### G.6 Training on Other Multiple Categorical Classes

Our framework allows users to specify the categorical classes used in training as they wish, provided that 0.1 and 10 are included in the threshold list. Referring to other regions [47, 16, 56], we conduct experiments with different categorical classes, while the evaluation is carried out with the standard thresholds for rain and heavy rain used in our framework. The following are the threshold lists with the resulting numbers of classes:

* V1: [0.1, 2.0, 10.0], 4 classes
* V2: [0.1, 5.0, 10.0], 4 classes
* V3: [0.1, 0.5, 5.0, 10.0], 5 classes
* V4: [0.1, 0.5, 5.0, 10.0, 30.0], 6 classes

Table 5 shows the performance according to changes in the threshold list. As classes of intermediate rain are newly created, the CSI changes. Furthermore, when the number of intermediate rain classes is further increased, the CSI of heavy rain increases significantly, with some degradation in rain CSI.

Table 5: Evaluation metrics of KoMet with MetNet for precipitation, with 12 variables used for training, according to changes in the number of categorical classes during training.

 | Rain | Heavy Rain
---|---|---
 | Acc | CSI | Bias | Acc | CSI | Bias
V1 | 0.862 | 0.293 | 0.769 | 0.986 | 0.011 | 0.079
V2 | 0.858 | 0.322 | 0.946 | 0.986 | 0.016 | 0.097
V3 | 0.871 | 0.270 | 0.581 | 0.987 | 0.048 | 0.210
V4 | 0.862 | 0.274 | 0.705 | 0.984 | 0.032 | 0.286

### G.7 Pointwise Architecture without the Use of Spatial Information

We conduct an experiment with methods that do not use spatial information for NWP correction. While our benchmark architectures include both a spatial encoder and a temporal encoder, it is important to check the need for spatial neural processing. Table 6 shows the results for lead times from 6 to 87 hours, according to the presence of a spatial kernel operator in the neural networks. Here, we design a pointwise architecture having no spatial kernel. 'Pointwise architecture with LSTM' denotes a modified version of ConvLSTM whose spatial encoder is replaced with the pointwise architecture. For NWP correction, as Table 6 shows, using only pointwise operators leads to worse optima than our benchmark models.
In other words, the methodology that considers spatial information is more effective; a sketch of such a pointwise baseline is given after Table 6.

Table 6: Evaluation metrics of KoMet and pointwise architectures for precipitation, with 12 variables used for training.

 | Rain | Heavy Rain
---|---|---
 | Acc | CSI | Bias | Acc | CSI | Bias
Pointwise Architecture | 0.871 | 0.195 | 0.346 | 0.987 | 0.001 | 0.009
Pointwise Architecture + LSTM | 0.866 | 0.253 | 0.575 | 0.987 | 0.006 | 0.060
ConvLSTM | 0.869 | 0.296 | 0.696 | 0.986 | 0.006 | 0.059
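As referenced above, here is a minimal PyTorch sketch of what such a pointwise baseline could look like: every convolution uses a 1x1 kernel, so each grid cell is post-processed independently of its neighbours. The layer sizes are illustrative assumptions, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class PointwiseNet(nn.Module):
    """Per-pixel classifier with no spatial kernel (all convolutions are 1x1)."""
    def __init__(self, in_channels, n_classes=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(hidden, n_classes, kernel_size=1),
        )

    def forward(self, x):   # x: (B, C*ws, W, H)
        return self.net(x)  # logits: (B, n_classes, W, H)

# Example: a batch of 4 windows with 36 input channels on the 65x50 grid
logits = PointwiseNet(36)(torch.randn(4, 36, 65, 50))
assert logits.shape == (4, 3, 65, 50)
```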
# Particle Mean Field Variational Bayes

Minh-Ngoc Tran, Discipline of Business Analytics, The University of Sydney Business School
Paco Tseng, ARC Centre for Data Analytics for Resources and Environments (DARE)
Robert Kohn, School of Economics, UNSW Business School

###### Abstract

Mean Field Variational Bayes (MFVB) is one of the most computationally efficient Bayesian inference methods. However, its use has been restricted to models with conjugate priors or those that allow analytical calculations. This paper proposes a novel particle-based MFVB approach that greatly expands the applicability of the MFVB method. We establish the theoretical basis of the new method by leveraging the connection between Wasserstein gradient flows and Langevin diffusion dynamics, and demonstrate the effectiveness of this approach using Bayesian logistic regression, stochastic volatility, and deep neural networks.

Key words: Bayesian computation, optimal transport, Bayesian deep learning

## 1 Introduction

The main challenge of Bayesian statistics is to conduct inference on a computationally intractable posterior distribution, $\pi(x)\propto\exp{\left(-U(x)\right)}$, $x\in\mathbb{R}^{d}$, generally known only up to a normalising constant. To solve this problem, there are two main classes of computational methods that provide different approaches to approximating $\pi$. The first is Markov chain Monte Carlo (MCMC) methods (Metropolis et al., 1953; Hastings, 1970; Robert and Casella, 1999). For many years, MCMC has been the standard approach for Bayesian analysis because of its theoretical soundness. The method constructs a Markov chain to produce simulation-consistent samples from the target distribution $\pi$. A general MCMC approach is the Metropolis-Hastings algorithm, which generates a Markov chain by first drawing a proposed state from a proposal distribution, then using an acceptance rule to decide whether to accept the proposal or stay at the current state (Robert and Casella, 1999). Another, and often more efficient, class of MCMC methods is based on the Langevin dynamics

$\text{\rm d}X_{t}=-\frac{1}{2}\nabla U\left(X_{t}\right)\text{\rm d}t+\text{\rm d}B_{t}$ (1)

where $\left\{B_{t}\right\}_{t\geq 0}$ is the Brownian process on $\mathbb{R}^{d}$. This stochastic differential equation (SDE) characterises the dynamics of the process $\{X_{t}\}_{t\geq 0}$, whose distribution, under some regularity conditions on the potential energy $U(x)$, converges to the stationary distribution $\pi$. In practice, however, it is necessary to work with a discretisation of the SDE in (1), whose distribution might not converge to $\pi$ (Roberts and Tweedie, 1996). The Metropolis-Hastings acceptance rule is then needed to correct for the error produced by the discretisation. This method is known as the Metropolis-adjusted Langevin algorithm (MALA) (Besag, 1994; Roberts and Tweedie, 1996). The Metropolis-Hastings acceptance rule is necessary to guarantee convergence of MCMC methods, but it also prevents the use of MCMC in big-data and big-model settings. This is because calculating the Metropolis-Hastings acceptance probability requires the full data (however, see, e.g., Quiroz et al. (2019) for speeding up MCMC using data subsampling), and this probability can easily get close to zero when the dimension $d$ is high.
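To make the discretisation-plus-correction idea concrete, the following is a minimal Python sketch of MALA for a generic target known up to a normalising constant; the step size and the Gaussian example are illustrative assumptions.

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, h, n_iter, rng):
    """Metropolis-adjusted Langevin algorithm for pi(x) proportional to exp(-U(x))."""
    def log_q(a, b):  # log density (up to a constant) of proposal b given state a
        mu = a + 0.5 * h * grad_log_pi(a)
        return -np.sum((b - mu) ** 2) / (2.0 * h)

    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iter):
        # Euler-Maruyama discretisation of the Langevin SDE (1)
        prop = x + 0.5 * h * grad_log_pi(x) + np.sqrt(h) * rng.standard_normal(x.shape)
        # Metropolis-Hastings correction for the discretisation error
        log_alpha = log_pi(prop) + log_q(prop, x) - log_pi(x) - log_q(x, prop)
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Example: standard Gaussian target, U(x) = ||x||^2 / 2
rng = np.random.default_rng(0)
draws = mala(lambda x: -0.5 * np.sum(x**2), lambda x: -x,
             x0=np.zeros(2), h=0.5, n_iter=1000, rng=rng)
```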
Other limitations of MCMC methods include the need for a sufficient burn-in period for the generated Markov chain to be distributed as $\pi$, and the absence of effective stopping criteria for checking convergence. These limitations can be circumvented by the Variational Bayes method at the cost of some approximation accuracy. Variational Bayes (VB) (Waterhouse et al., 1996; Attias, 1999; Blei et al., 2017) emerged as an alternative approach to inference for complex posterior distributions with large datasets. More recently, it has grown in popularity due to its ability to scale up in terms of both model complexity and data size. Unlike MCMC, the VB method proposes a family of distributions $\mathcal{Q}$, called the variational family, and then identifies within $\mathcal{Q}$ the variational distribution closest to $\pi$ with respect to the KL divergence, i.e.,

$q^{*}\in\arg\min_{q\in\mathcal{Q}}\left\{\text{KL}\left(q||\pi\right)\right\}.$ (2)

A representative example is Mean-Field VB (MFVB) (sec. 3), which imposes a factorisation structure on the variational distributions, i.e. $\mathcal{Q}=\left\{q=\prod q_{i}\right\}$; $q^{*}$, if it exists and is unique, is called the mean-field approximation of $\pi$. An obvious limitation of MFVB is that it fails to capture the posterior dependence between the factorised blocks. Despite this limitation, MFVB has been widely used in applications; see, e.g., Wand et al. (2011), Giordani et al. (2013), Wang and Blei (2013) and Zhang and Zhou (2017). Implementation of MFVB relies on conjugate priors and the ability to calculate the associated expectations (sec. 3). As a result, it is challenging to apply standard MFVB even to some simple models such as Bayesian logistic regression. Our work aims at extending the scope of MFVB to make it widely applicable by combining MFVB with the Langevin dynamics, circumventing the main issues of either method while maintaining the strengths of both; that is, we provide a scalable Bayesian inference algorithm with theoretical guarantees. The new method, called Particle Mean Field VB (PMFVB), leverages the Langevin dynamics to bypass the limitations of standard MFVB, and employs the theory of Wasserstein gradient flows to establish its theoretical guarantee. Wasserstein gradient flows are a fundamental element of Optimal Transport (OT) theory (Ambrosio et al., 2005; Villani et al., 2009), which quantifies the dissimilarity between probability measures and introduces a differential structure into the space of probability measures. Inspired by fluid dynamics, Jordan et al. (1998) introduced the concept of the gradient flow of a functional defined on this space, which is a continuous curve of probability measures along which the functional is optimised. It turns out that the gradient flow of the KL-divergence functional is identical to the Langevin dynamics (sec. 2). This important connection between Wasserstein gradient flows and Langevin dynamics, and Stochastic Differential Equation (SDE) theory in general, provides the theoretical foundation for our PMFVB procedure.

Our contribution. We study the KL-divergence functional on the space of factorised distributions $\mathcal{Q}$ equipped with the 2-Wasserstein distance, and show that the MFVB optimisation problem (2) has a unique solution $q^{*}$. We propose an algorithm for approximating the optimal mean-field distribution $q^{*}$ by particles that are moved by combining the classical MFVB framework with the Langevin diffusion.
We show that the distribution of these particles converges to $q^{*}$. We also study the posterior consistency of $q^{*}$ in terms of the data size. The numerical performance of PMFVB is demonstrated using Bayesian logistic regression and stochastic volatility, statistical models to which classical MFVB methods, to the best of our knowledge, have not been successfully applied. We then develop a variant of PMFVB for inference in Bayesian deep learning, in which modifications to standard PMFVB are introduced to make it suitable for deep neural networks. For Bayesian inference in deep neural networks, the Stochastic Gradient Langevin Dynamics (SGLD) of Welling and Teh (2011) is among the most commonly used methods. We discuss connections between PMFVB and SGLD, and numerically compare their performance using both simulated and real datasets. Code implementing the examples is available at https://github.com/VBayesLab/PMFVB.

Related work. Our work builds on recent advances in Optimal Transport theory and Langevin dynamics. The work most closely related to ours is Yao and Yang (2022) who, focusing on a particular MFVB framework for statistical models with local latent variables, combine Wasserstein gradient flows with MFVB to deal with the intractability of the optimal MFVB factorised distribution. Our PMFVB framework is more general than earlier works and can be applied to any statistical model, including deep learning. Galy-Fajou et al. (2021) develop a particle-based Gaussian approximation that uses a flow of linear transformations to move the particles; the resulting curve of Gaussian distributions can be viewed as an approximation of the Wasserstein gradient flow of the KL-divergence functional. Lambert et al. (2022) study convergence results for Gaussian Variational Inference using the theory of Bures-Wasserstein gradient flows. They use particles to realise the flow of Gaussian approximations, and also extend the Gaussian approximation to mixtures of Gaussians. The variant of PMFVB for neural networks is related to SGLD and its variants (Welling and Teh, 2011; Chen et al., 2014; Li et al., 2016; Kim et al., 2022).

Notation. $\nabla f(x)$ denotes the gradient vector of a scalar function $f$ defined on $\mathbb{R}^{d}$. For a vector-valued function $v$ defined on $\mathbb{R}^{d}$, its divergence is $\nabla\cdot v(x)=\sum_{j}\frac{\partial v_{j}(x)}{\partial x_{j}}$. $\Delta f(x)$ is the Laplacian of $f$, $\Delta f(x)=\nabla\cdot(\nabla f(x))=\sum_{j}\frac{\partial^{2}f(x)}{\partial x_{j}^{2}}$. For a generic set ${\cal X}\subset\mathds{R}^{d}$, we denote by $\mathcal{P}({\cal X})$ the set of probability measures on ${\cal X}$; and for a measure $q$, with some abuse of notation, we denote by $q(\text{\rm d}x)$ and $q(x)$ the probability measure and its density function, respectively.

## 2 Preliminaries

This section collects the preliminaries on optimal transport theory and Langevin diffusion that are used in the paper.

### 2.1 Wasserstein space and gradient flow

Consider a generic set ${\cal X}\subset\mathbb{R}^{d}$, and let ${\cal P}_{2}(\mathcal{X})$ be the set of absolutely continuous probability measures on ${\cal X}$ with finite second moments.
For any $p,q\in\mathcal{P}_{2}(\mathcal{X})$, let

$W_{2}(p,q)=\min_{\gamma\in\Gamma(p,q)}\Big\{\Big(\int_{\mathcal{X}\times\mathcal{X}}\|x-y\|^{2}\gamma(dx\times dy)\Big)^{1/2}\Big\}$ (3)

$\phantom{W_{2}(p,q)}=\min_{T:T_{\#}p=q}\Big\{\Big(\int_{\mathcal{X}}\|x-T(x)\|^{2}p(dx)\Big)^{1/2}\Big\}$ (4)

be the 2-Wasserstein metric on $\mathcal{P}_{2}(\mathcal{X})$. Here, $\Gamma(p,q)$ denotes the set of joint probability measures on $\mathcal{X}\times\mathcal{X}$ with marginals $p$ and $q$, and $T_{\#}(p)$ is the push-forward measure of $p$, i.e.

$T_{\#}(p)(A)=p(x:T(x)\in A),\;\;\;A\subset\mathcal{X}.$

The existence of the minima in (3) and (4) and their equivalence are well studied; see, e.g., Ambrosio et al. (2005). It is well known that, equipped with this metric, $\mathcal{P}_{2}(\mathcal{X})$ becomes a metric space, often called the Wasserstein space and denoted by $\mathbb{W}_{2}(\mathcal{X})$. The Wasserstein space $\mathbb{W}_{2}(\mathcal{X})$ has many attractive properties (Ambrosio et al., 2005; Villani et al., 2009) that make it possible to perform calculus on this space. In particular, $\mathbb{W}_{2}(\mathcal{X})$ can be viewed as a Riemannian manifold (Otto, 2001) whose rich geometric structure can be exploited to efficiently solve optimisation problems such as (2). Consider the functional $F(q)=\text{\rm KL}(q\|\pi)$ defined on $\mathbb{W}_{2}(\mathcal{X})$, with some fixed measure $\pi\in\mathbb{W}_{2}(\mathcal{X})$. Jordan et al. (1998) propose the following iterative scheme, known as the JKO scheme, to optimise $F(q)$. Let $q^{(0)}\in\mathbb{W}_{2}(\mathcal{X})$ be some initial measure and $\epsilon>0$. At step $k\geq 0$, define

$q^{(k+1)}:=\arg\min_{q\in\mathbb{W}_{2}(\mathcal{X})}\Big\{F(q)+\frac{1}{2\epsilon}W_{2}^{2}(q,q^{(k)})\Big\}.$ (5)

Denote by $\frac{\delta F}{\delta q}(q):\mathbb{R}^{d}\mapsto\mathbb{R}$ a first variation of the functional $F$ at $q$, i.e.

$\lim_{\epsilon\to 0}\frac{F(q+\epsilon\xi)-F(q)}{\epsilon}=\int\frac{\delta F}{\delta q}(q)\text{\rm d}\xi$ (6)

for all $\xi\in\mathbb{W}_{2}(\mathcal{X})$ such that $F(q+\epsilon\xi)$ is defined. The first variation, defined up to an additive constant, characterises the change of $F$ at $q$. Let $v^{(k)}(x)=-\nabla\frac{\delta F}{\delta q}(q^{(k)})(x):\mathbb{R}^{d}\mapsto\mathbb{R}^{d}$ be the corresponding (negative-gradient) velocity field; from (6), it can be shown that

$v^{(k)}(x)=\nabla\log\pi(x)-\nabla\log q^{(k)}(x).$ (7)

Jordan et al. (1998) prove (see also Santambrogio (2015, Chapter 8) and Ambrosio et al. (2005, Chapter 10)) that, as $\epsilon\to 0$, the discrete-time solution $\{q^{(k)}\}_{k=0,1,...}$ from (5) converges to the continuous-time solution $\{q_{t}\}_{t\geq 0}$ of the continuity equation

$\frac{\partial q_{t}(x)}{\partial t}+\nabla\cdot\big(q_{t}(x)v_{t}(x)\big)=0$ (8)

with $v_{t}(x)=-\nabla\frac{\delta F}{\delta q}(q_{t})(x)=\nabla\log\pi(x)-\nabla\log q_{t}(x)$.
For $\psi_{t}(x)=\log(q_{t}(x)/\pi(x))$, noting that $\int({\text{\rm d}\log q_{t}(x)}/{\text{\rm d}t})\,q_{t}(x)\text{\rm d}x=0$, we have

$\frac{\text{\rm d}F(q_{t})}{\text{\rm d}t}=\int\psi_{t}(x)\frac{\partial q_{t}(x)}{\partial t}\text{\rm d}x=-\int\psi_{t}(x)\nabla\cdot(q_{t}(x)v_{t}(x))\text{\rm d}x={\mathbb{E}}_{q_{t}}\langle\nabla\psi_{t},v_{t}\rangle=-{\mathbb{E}}_{q_{t}}\big(\|v_{t}(x)\|^{2}\big)<0$

(the third equality follows by integration by parts, and the last one uses $\nabla\psi_{t}=-v_{t}$), which justifies that the curve $\{q_{t}\}_{t\geq 0}$, called the gradient flow, minimises the KL functional $F(q)$.

### 2.2 Langevin Monte Carlo diffusion

Let $\pi(\text{\rm d}x)$ be a target probability measure defined on ${\cal X}\subset\mathbb{R}^{d}$ with density $\pi(x)$. The Langevin diffusion is the stochastic process $\{L_{t}\}_{t\geq 0}$ governed by the SDE

$L_{0}\sim p_{0},\qquad\text{\rm d}L_{t}=\frac{1}{2}\nabla\log\pi(L_{t})\text{\rm d}t+\text{\rm d}B_{t},\;\;\;t>0$ (9)

where $B_{t}$ is the $d$-dimensional Brownian motion and $p_{0}$ is an initial distribution on ${\cal X}\subset\mathbb{R}^{d}$. Under some regularity conditions on $U(x)=-\log\pi(x)$, this SDE has a unique solution, which is an ergodic Markov process with invariant distribution $\pi(\text{\rm d}x)$ (Pavliotis, 2014, Chapter 4). Let $q_{t}(x)$ be the probability density (w.r.t. the Lebesgue measure on $\mathbb{R}^{d}$) of $L_{t}$; then $\{q_{t}\}_{t\geq 0}$ is the unique solution of the Fokker–Planck equation (often called the forward Kolmogorov equation in the probability literature)

$\frac{\partial q_{t}(x)}{\partial t}=-\nabla\cdot\Big(q_{t}(x)\nabla\log\pi(x)\Big)+\Delta q_{t}(x),\;\;t>0,$ (10)

$q_{0}(x)=p_{0}(x);$ (11)

see Pavliotis (2014), Chapter 4. One can easily check that

$\nabla\cdot\Big(q_{t}(x)\nabla\log\pi(x)\Big)-\Delta q_{t}(x)=\nabla\cdot\big(q_{t}(x)v_{t}(x)\big),$

with $v_{t}(x)=\nabla\log\pi(x)-\nabla\log q_{t}(x)$; hence, the Fokker–Planck equation (10) is identical to the continuity equation in (8). Therefore the curve $\{q_{t}\}_{t\geq 0}$ induced by the Langevin dynamics can be viewed as a gradient flow that minimises a discrepancy between $q_{t}$ and $\pi$; see Dalalyan (2017) and Cheng and Bartlett (2018). For a fixed $h>0$, consider the following Langevin Monte Carlo (LMC) diffusion, which is a time-continuous discretisation approximation of (9):

$\text{\rm d}X_{t}^{h}=\frac{1}{2}\nabla\log\pi(X_{\tau(t)}^{h})\text{\rm d}t+\text{\rm d}B_{t},\;\;\;t\geq 0,$ (12)

where $\tau(t):=kh$ if $t\in[kh,(k+1)h)$. Equation (12) implies that, at the time points $kh$, $k=0,1,...$, we have

$X_{(k+1)h}^{h}=X_{kh}^{h}+\frac{h}{2}\nabla\log\pi(X_{kh}^{h})+\sqrt{h}\eta_{k},\;\;\eta_{k}\stackrel{{\scriptstyle iid}}{{\sim}}N(0,I_{d}).$ (13)

Denote by $\mu_{t}^{h}$ the distribution of $X_{t}^{h}$, $t\geq 0$, from the LMC diffusion (12). Cheng and Bartlett (2018) prove the following lemma, which says that the functional $F(\cdot)$ decreases along the curve $\{\mu_{t}^{h}\}_{t\geq 0}$.

###### Lemma 1.

[Cheng and Bartlett, 2018, Lemma 1] Suppose that $U(x)=-\log\pi(x)$ is strongly convex and has a Lipschitz continuous gradient.
That is, there exist constants $c>0$ and $C>0$ such that $cI_{d}\leq\nabla^{2}U(x)\leq CI_{d},\;\;\;\text{ for all }x\in{\cal X}.$ Let $F(\mu_{t}^{h})=\text{\rm KL}(\mu_{t}^{h}\|\pi)$. Then, when the step size $h$ is sufficiently small, $\frac{\text{\rm d}F(\mu_{t}^{h})}{\text{\rm d}t}\leq 0,\;\;\;t>0.$ (14) Although the Langevin dynamics in (9) converges to the invariant distribution $\pi(\text{\rm d}x)$, its convergence rate is not optimal (Pavliotis, 2014). Many studies have aimed to improve the speed of convergence. A simple yet effective method is to add a momentum term to the drift coefficient $\nabla\log\pi(x)$; see, e.g., Hwang et al., (2005) and Kim et al., (2022). We will use accelerated Langevin dynamics in our implementation of the PMFVB method in Section 7. ## 3 Mean Field Variational Bayes We are concerned with the problem of approximating a target probability measure $\pi(\text{\rm d}x\times\text{\rm d}y)$ defined on $\Theta={\cal X}\times{\cal Y}\subset\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}}$ with density $\pi(x,y)$ (with respect to some reference measure such as the Lebesgue measure). The methodology proposed in this paper can be easily extended to cases of more than two blocks, $\Theta={\cal X}_{1}\times{\cal X}_{2}\times\cdots\times{\cal X}_{k}$, $k>2$. MFVB approximates $\pi(\cdot)$ by a probability measure $q(\text{\rm d}x\times\text{\rm d}y)=q_{x}(\text{\rm d}x)q_{y}(\text{\rm d}y)$, where $q_{x}(\text{\rm d}x)\in\mathcal{P}({\cal X})$ and $q_{y}(\text{\rm d}y)\in\mathcal{P}({\cal Y})$. Consider the following optimisation problem $q^{*}\in\arg\min_{q\in\mathcal{P}({\cal X})\otimes\mathcal{P}({\cal Y})}\Big{\\{}F(q)=\text{\rm KL}(q\|\pi)\Big{\\}}.$ (15) We study in Section 5.1 conditions under which the problem in (15) is well-defined and has a unique solution. Define $q_{x}^{*}(x)\propto\exp\Big{(}{\mathbb{E}}_{q_{y}}\big{[}\log\pi(x,y)\big{]}\Big{)},\;\;\;\;q_{y}^{*}(y)\propto\exp\Big{(}{\mathbb{E}}_{q_{x}}\big{[}\log\pi(x,y)\big{]}\Big{)}.$ (16) MFVB turns the optimisation problem (15) into the following coordinate-descent-type problem: $\text{given $q_{y}$ solve: }\min_{q_{x}\in\mathcal{P}({\cal X})}\Big{\\{}\text{\rm KL}(q_{x}\|q_{x}^{*})\Big{\\}},$ (17) and $\text{given $q_{x}$ solve: }\min_{q_{y}\in\mathcal{P}({\cal Y})}\Big{\\{}\text{\rm KL}(q_{y}\|q_{y}^{*})\Big{\\}}.$ (18) Assuming that $q_{x}^{*}(x)$ and $q_{y}^{*}(y)$ have a standard form and that the expectations in (16) can be computed, the solutions of (17)-(18) are $q_{x}^{*}(x)$ and $q_{y}^{*}(y)$, respectively. These assumptions limit the use of MFVB to simple cases. For example, even in the simple Bayesian logistic regression model, MFVB cannot be used, as the assumptions above are not satisfied. The next section proposes a method for solving (15) without making these assumptions, and Section 5.1 studies the theoretical properties of $F(q)$ and its minimizer $q^{*}$. ## 4 Particle Mean Field Variational Bayes We now present our particle MFVB procedure. The key idea is that, whenever the optimal solutions $q_{x}^{*}(x)$ and $q_{y}^{*}(y)$ of (17) and (18) are unavailable in closed form, we use Langevin Monte Carlo diffusions to iteratively approximate them. We assume below that both $q_{x}^{*}(x)$ and $q_{y}^{*}(y)$ are intractable; in MFVB applications where the variational distribution is factorized into $K$ blocks, $q=q_{1}\times q_{2}\times\cdots\times q_{K}$, one only needs to use Langevin Monte Carlo diffusions to approximate those optimal solutions $q_{k}^{*}$ that are intractable.
Particle MFVB works most efficiently when all but a few of the optimal $q_{k}^{*}$ are tractable. Lemma 1 suggests that the KL functional $F(q)$ decreases after each iteration; together with the uniqueness of the minimizer of $F(q)$ (cf. Corollary 3), this justifies our particle MFVB procedure. Theorem 4 provides a formal proof. We use a set of particles $\\{X_{i}^{(t)},i=1,...,M\\}$ to approximate $q_{x}^{*}(x)$ at iteration $t$, and $\\{Y_{i}^{(t)},i=1,...,M\\}$ to approximate $q_{y}^{*}(y)$, $t\geq 1$. We note that, unlike Section 2 where $t$ denotes continuous time, in this section $t$ denotes the $t$th iteration in the PMFVB algorithm. Given $q_{y}^{(t)}(\text{\rm d}y)$, which is approximated by $\widehat{q}_{y}^{(t)}(\text{\rm d}y)=\frac{1}{M}\sum_{i}\delta_{Y_{i}^{(t)}}(\text{\rm d}y)$, a Langevin Monte Carlo diffusion is used to approximate $q_{x}^{*}(x)\propto\exp\Big{(}{\mathbb{E}}_{q_{y}^{(t)}}\big{[}\log\pi(x,y)\big{]}\Big{)}$: $X_{i}^{(t+1)}=X_{i}^{(t)}+\frac{h_{x}}{2}{\mathbb{E}}_{q_{y}^{(t)}}\big{[}\nabla_{x}\log\pi(X_{i}^{(t)},y)\big{]}+\sqrt{h_{x}}\eta_{x,i},\;\;\;i=1,...,M$ (19) with $\eta_{x,i}\sim N_{d_{x}}(0,I)$. The term ${\mathbb{E}}_{q_{y}^{(t)}}\big{[}\nabla_{x}\log\pi(X_{i}^{(t)},y)\big{]}$ can be approximated using a subset of $\\{Y_{i}^{(t)},i=1,...,M\\}$ by $\frac{1}{m}\sum_{k=1}^{m}\nabla_{x}\log\pi(X_{i}^{(t)},Y_{i_{k}}^{(t)})$, where $\\{Y_{i_{k}}^{(t)},k=1,...,m\\}$ is a random subset of size $m$ from $\\{Y_{i}^{(t)},i=1,...,M\\}$. That is, $X_{i}^{(t+1)}=X_{i}^{(t)}+\frac{h_{x}}{2m}\sum_{k=1}^{m}\nabla_{x}\log\pi(X_{i}^{(t)},Y_{i_{k}}^{(t)})+\sqrt{h_{x}}\eta_{x,i},\;\;\;i=1,...,M.$ (20) Similarly, given $q_{x}^{(t+1)}(\text{\rm d}x)$ approximated by the particles $\\{X_{i}^{(t+1)},i=1,...,M\\}$, we use a Langevin Monte Carlo diffusion to approximate $q_{y}^{*}(y)\propto\exp\Big{(}{\mathbb{E}}_{q_{x}^{(t+1)}}\big{[}\log\pi(x,y)\big{]}\Big{)}$: $Y_{i}^{(t+1)}=Y_{i}^{(t)}+\frac{h_{y}}{2m}\sum_{k=1}^{m}\nabla_{y}\log\pi(X_{i_{k}}^{(t+1)},Y_{i}^{(t)})+\sqrt{h_{y}}\eta_{y,i},\;\;\;i=1,...,M$ (21) with $\eta_{y,i}\sim N_{d_{y}}(0,I)$. Algorithm 1 summarises the procedure. Algorithm 1 Particle MFVB 1:procedure PMFVB($M,\epsilon,q^{0}_{x},q^{0}_{y}$) 2: Input: number of particles $M$, tolerance $\epsilon>0$, initial distributions $q^{(0)}_{x}$ and $q^{(0)}_{y}$. 3: Initialise $X_{i}\sim q^{(0)}_{x}$ and $Y_{i}\sim q^{(0)}_{y},~{}i=1,\dots,M$. $t\leftarrow 0$. 4: while Not $S(\epsilon)$ do 5: Update $\\{X_{i}^{(t+1)},i=1,...,M\\}$ as in (20) 6: Update $\\{Y_{i}^{(t+1)},i=1,...,M\\}$ as in (21) 7: $t\leftarrow t+1$. 8: end while 9:end procedure In Algorithm 1, $S(\epsilon)$ denotes a stopping rule as a function of some tolerance $\epsilon>0$. The computational complexity in each iteration is $O(mM)$. Once the subset $\\{Y_{i_{k}}^{(t)},k=1,...,m\\}$ has been selected, the update in (20) can be parallelised across the particles $i$ as there is no communication required between the particles. Similarly, the update in (21) can also be parallelised. We now discuss the stopping rule. When a validation data set is available, as is typically the case in deep learning applications, one can use a stopping rule based on the validation error and stop the iteration in Algorithm 1 if the validation error, or a rolling window smoothed version of it, no longer decreases. This stopping rule is recommended in Section 7 where PMFVB is used for training deep neural networks. Alternatively, the update in Algorithm 1 can be stopped using the lower bound.
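Before deriving the lower bound, the following minimal sketch shows one full sweep of Algorithm 1, i.e. the updates (20)-(21). Here `grad_log_pi_x` and `grad_log_pi_y` are hypothetical callables standing in for $\nabla_{x}\log\pi$ and $\nabla_{y}\log\pi$ of whatever model is being approximated:

```python
# One PMFVB iteration: the particle updates (20)-(21), in numpy.
import numpy as np

def pmfvb_step(X, Y, grad_log_pi_x, grad_log_pi_y, hx, hy, m, rng):
    """X: (M, d_x) particles for q_x; Y: (M, d_y) particles for q_y."""
    M = X.shape[0]
    # x-block, eq. (20): one shared random subset of y-particles, so the
    # loop over i can be parallelised as noted in the text.
    idx = rng.choice(M, size=m, replace=False)
    X_new = np.empty_like(X)
    for i in range(M):
        drift = np.mean([grad_log_pi_x(X[i], Y[k]) for k in idx], axis=0)
        X_new[i] = X[i] + 0.5 * hx * drift + np.sqrt(hx) * rng.standard_normal(X.shape[1])
    # y-block, eq. (21): uses the freshly updated x-particles.
    idx = rng.choice(M, size=m, replace=False)
    Y_new = np.empty_like(Y)
    for i in range(M):
        drift = np.mean([grad_log_pi_y(X_new[k], Y[i]) for k in idx], axis=0)
        Y_new[i] = Y[i] + 0.5 * hy * drift + np.sqrt(hy) * rng.standard_normal(Y.shape[1])
    return X_new, Y_new
```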
Let $\pi(x,y)=\widetilde{\pi}(x,y)/C$ with $C$ the normalising constant. Then, $\text{\rm KL}(q\|\pi)=-\mathcal{L}(q)+\log C,$ where $\mathcal{L}(q)$ is the lower bound term $\mathcal{L}(q)=\int\log\widetilde{\pi}(x,y)q(\text{\rm d}x,\text{\rm d}y)+H(q),\;\;\;H(q)=-\int\log q(x,y)q(\text{\rm d}x,\text{\rm d}y).$ The entropy term $H(q)$ encourages the spread of the particles to avoid their collapse to a degenerate distribution. Note, however, that the LMC diffusion already spreads the particles by adding Gaussian noise to them (cf. (13)), which by itself prevents convergence to a degenerate measure; a crude approximation of the entropy therefore suffices for monitoring purposes. At the $t$th iteration, given the $M$ particles $\\{X_{i}^{(t)},Y_{i}^{(t)}\\}_{i=1}^{M}$ approximating $q$, we suggest approximating the entropy $H(q)$ by $-(1/M)\sum_{i=1}^{M}\log(1/M)=\log(M)$. The lower bound term at the $t$th iteration is approximated by $\widehat{\mathcal{L}}=\frac{1}{M}\sum_{i}\log\widetilde{\pi}(X_{i}^{(t)},Y_{i}^{(t)})+\log(M).$ (22) ## 5 Theoretical analysis of particle MFVB In order to avoid technical complications, we will assume in this section that ${\cal X}$ and ${\cal Y}$ are compact sets in $\mathbb{R}^{d_{x}}$ and $\mathbb{R}^{d_{y}}$, respectively. All proofs of the theorems and corollaries are in Appendix B. ### 5.1 Properties of functional $F(q)$ on the Wasserstein space We first study the properties of the KL functional $F(q)$ on the Wasserstein space $\mathcal{Q}=\mathbb{W}_{2}(\mathcal{X})\otimes\mathbb{W}_{2}(\mathcal{Y})$. To the best of our knowledge, there is no previous work studying the theoretical properties of the MFVB problem (15). By limiting ${\cal P}({\cal X})\otimes\mathcal{P}({\cal Y})$ to the Wasserstein space $\mathbb{W}_{2}(\mathcal{X})\otimes\mathbb{W}_{2}(\mathcal{Y})$, the theorem below shows that the optimal MFVB distribution exists and is unique. ###### Theorem 2. Assume that ${\cal X}$ and ${\cal Y}$ are compact sets and that $\pi(x,y)$ is continuous in both $x$ and $y$. Then, * (i) $F(q)$ is lower semi-continuous (w.r.t. the weak convergence on ${\cal Q}$). * (ii) $F(q)$ is convex. ###### Corollary 3. Under the assumptions in Theorem 2, $F(q)$ has a unique minimizer on ${\cal Q}$. ### 5.2 Convergence of the particle MFVB algorithm As the number of particles $M\to\infty$, Algorithm 1 defines a sequence of measures $\\{q^{(t)}(\text{\rm d}x\times\text{\rm d}y)=q_{x}^{(t)}(\text{\rm d}x)q_{y}^{(t)}(\text{\rm d}y),\ t=1,2,...\\}$ on ${\cal Q}$. The theorem below shows that the KL functional $F(q^{(t)})$ is non-increasing over $t$, and hence $q^{(t)}$ converges to the solution $q^{*}$. ###### Theorem 4. Assume that, for each fixed $y$, $U(x)=-\log\pi(x,y)$ is strongly convex and has a Lipschitz continuous gradient w.r.t. $x$. That is, there exist constants $c_{1}>0$ and $C_{1}>0$ such that $c_{1}I_{d_{x}}\leq\nabla^{2}U(x)\leq C_{1}I_{d_{x}},\;\;\;\text{ for all $x$}.$ Similarly, for each fixed $x$, $V(y)=-\log\pi(x,y)$ is strongly convex and has a Lipschitz continuous gradient w.r.t. $y$. That is, there exist constants $c_{2}>0$ and $C_{2}>0$ such that $c_{2}I_{d_{y}}\leq\nabla^{2}V(y)\leq C_{2}I_{d_{y}},\;\;\;\text{ for all $y$}.$ Then, if the step sizes $h_{x}$ and $h_{y}$ are sufficiently small and $M\to\infty$, $F(q^{(t)})$ is non-increasing over $t$, and $q^{(t)}$ converges to the unique minimizer $q^{*}$ of $F(q)$.
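As a quick numerical sanity check of Theorem 4 (not from the paper), consider a bivariate Gaussian target $\pi=N(0,\Sigma)$, for which the mean-field solution is known in closed form: each optimal factor is Gaussian with precision equal to the corresponding diagonal entry of the target precision matrix $\Lambda=\Sigma^{-1}$. Running the particle updates (20)-(21), with the subset average replaced by the full particle mean and all settings hypothetical, the particle variances approach this known solution:

```python
# Toy check: PMFVB on a 2-D Gaussian target with correlation rho. The known
# mean-field variance of each factor is 1/Lambda_ii = 1 - rho^2.
import numpy as np

rho = 0.7
Lam = np.linalg.inv(np.array([[1.0, rho], [rho, 1.0]]))  # target precision Lambda
rng = np.random.default_rng(0)
M, h = 2000, 0.05
x, y = rng.standard_normal(M), rng.standard_normal(M)
for _ in range(2000):
    # E_{q_y}[grad_x log pi(x, y)] = -(Lam[0,0]*x + Lam[0,1]*E[y]), and symmetrically.
    x = x + 0.5 * h * (-(Lam[0, 0] * x + Lam[0, 1] * y.mean())) \
          + np.sqrt(h) * rng.standard_normal(M)
    y = y + 0.5 * h * (-(Lam[1, 1] * y + Lam[1, 0] * x.mean())) \
          + np.sqrt(h) * rng.standard_normal(M)
print(x.var(), 1.0 / Lam[0, 0])   # both roughly 1 - rho^2 = 0.51
```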
### 5.3 Posterior consistency of $q^{*}$ This section studies the properties of the particle MFVB solution $q^{*}=q_{n}^{*}$ in the context of Bayesian inference, where the target $\pi(x,y)=\pi_{n}(\theta)$ is a posterior distribution $\pi_{n}(\theta)=p(\theta|X^{(n)})$ of a model parameter $\theta\in\mathbb{R}^{d}$, and $X^{(n)}$ denotes the data of size $n$. Is the PMFVB approximation $q_{n}^{*}$ posterior consistent? That is, does $q_{n}^{*}(\text{\rm d}\theta)$ concentrate on a small neighborhood of the true parameter as the sample size $n\to\infty$? We look at this problem from the frequentist point of view, where we assume that the true data-generating parameter exists. To study this question, we first define some notation. Let $p_{\theta}^{(n)}(\text{\rm d}X^{(n)})$ be the distribution of data $X^{(n)}$ under the model, parameterized by $\theta\in\Theta$. Assume that $X^{(n)}$ is generated under some true parameter $\theta_{0}\in\Theta$, and denote by $p_{0}^{(n)}(\text{\rm d}X^{(n)})$ the true underlying distribution of $X^{(n)}$. Let $\pi_{0}(\text{\rm d}\theta)\in\mathcal{P}(\Theta)$ be the prior distribution of $\theta$. The posterior distribution is $\pi_{n}(\text{\rm d}\theta)\propto\pi_{0}(\text{\rm d}\theta)p_{\theta}^{(n)}(X^{(n)}).$ We shall study the asymptotic behavior of the PMFVB variational posterior $q_{n}^{*}=\arg\min_{q\in\mathcal{Q}}\text{\rm KL}\big{(}q\|\pi_{n}\big{)}$ with the space of probability measures $\mathcal{Q}$ having the factorised form in Section 3, $\mathcal{Q}={\cal P}({\cal X})\otimes{\cal P}({\cal Y})$, $\Theta={\cal X}\times{\cal Y}$. We note that it is unnecessary to equip $\mathcal{Q}$ with the Wasserstein distance in this section. The asymptotic behavior and convergence rate of the posterior $\pi_{n}(\text{\rm d}\theta)$ are well studied in the Bayesian statistics literature; see, e.g., Ghosal et al., (2000). The asymptotic behavior of the conventional VB approximation was studied recently in Zhang and Gao, (2019) and Alquier and Ridgway, (2019), who study the conditions on the variational family (as well as on the prior $\pi_{0}$ and the likelihood $p_{\theta}^{(n)}$) needed to characterize the convergence properties of the variational posterior. The difficulty with conventional VB is that the variational family should not be too large, so that the VB optimisation remains solvable; but it should not be too small either, so that the variational posterior can still achieve some form of posterior consistency. It turns out that the variational family $\mathcal{Q}$ in our particle MFVB is general enough; using the results in Zhang and Gao, (2019), we will show that the PMFVB approximation $q_{n}^{*}$ enjoys posterior consistency just as the exact posterior $\pi_{n}(\text{\rm d}\theta)$ does. We need the following assumptions on the prior $\pi_{0}$ and likelihood $p_{\theta}^{(n)}$. ###### Assumption 5. Let $\varepsilon_{n}$ be a sequence of positive numbers such that $\varepsilon_{n}\to 0$ and $n\varepsilon_{n}^{2}\geq 1$, and let $C_{1},C_{2}$ and $C$ be constants such that $C>C_{2}+2$.
* (A1) Testing condition: For any $\varepsilon>\varepsilon_{n}$, there exist a set $\Theta_{n}(\varepsilon)\subset\Theta$ and a test function $\phi_{n}=\phi_{n}(X^{(n)})\in[0,1]$ such that ${\mathbb{E}}_{p^{(n)}_{0}}(\phi_{n})\leq\exp(-Cn\varepsilon^{2})\;\;\text{and}\;\;\sup_{\begin{subarray}{c}\theta\in\Theta_{n}(\varepsilon),\\\ \|\theta-\theta_{0}\|_{2}^{2}>C_{1}\varepsilon^{2}\end{subarray}}{\mathbb{E}}_{p^{(n)}_{\theta}}(1-\phi_{n})\leq\exp(-Cn\varepsilon^{2})$ * (A2) Prior mass conditions: $\pi_{0}\big{(}\Theta_{n}(\varepsilon)^{c}\big{)}\leq\exp(-Cn\varepsilon^{2})$ and, for some $\rho>1$, $\pi_{0}\Big{(}\theta:D_{\rho}(p^{(n)}_{0}\|p^{(n)}_{\theta})\leq C_{2}n\varepsilon_{n}^{2}\Big{)}>0,\;\;\;\text{for any $n$}.$ * (A3) Smoothness: $|\log\pi_{0}(\theta)-\log\pi_{0}(\theta^{\prime})|\leq C_{3}\|\theta-\theta^{\prime}\|_{2},\;\;\;\forall\theta,\theta^{\prime}\in\Theta$ for some $C_{3}>0$, and $|\log p_{\theta}^{(n)}(X^{(n)})-\log p_{\theta^{\prime}}^{(n)}(X^{(n)})|\leq C_{4}(X^{(n)})\|\theta-\theta^{\prime}\|_{2},\;\;\;\forall\theta,\theta^{\prime}\in\Theta$ with $C_{5}:={\mathbb{E}}_{p^{(n)}_{0}}\big{[}C_{4}(X^{(n)})\big{]}<\infty$. Here, $D_{\rho}(p\|q)$ denotes the $\rho$-Rényi divergence between two probability measures $p$ and $q$, $D_{\rho}(p\|q)=\begin{cases}\frac{1}{\rho-1}\log\int\left(\frac{\text{\rm d}p}{\text{\rm d}q}\right)^{\rho}\text{\rm d}q,&\rho\not=1,\\\ \text{\rm KL}(p\|q),&\rho=1.\end{cases}$ Assumptions (A1) and (A2) are standard in the Bayesian statistics literature (Ghosal et al., 2000; Zhang and Gao, 2019), and are used to characterize the convergence rate of the posterior distribution. Assumption (A1) states that, restricted to a subset $\Theta_{n}(\varepsilon)$ of $\Theta$, there exists a test function that is able to distinguish the true probability measure from the complement of its neighborhood. Assumption (A2) requires that the prior distribution concentrates on $\Theta_{n}(\varepsilon)$ and puts positive mass on “good” values of $\theta$, in the sense that $p^{(n)}_{\theta}$ is close enough to the true measure $p^{(n)}_{0}$ in terms of the $\rho$-Rényi divergence. Under these assumptions, one can prove that the convergence rate of the posterior $\pi_{n}(\text{\rm d}\theta)$ to the true data-generating measure is $\varepsilon_{n}^{2}$ (Ghosal et al., 2000; Zhang and Gao, 2019). Typically, $\varepsilon_{n}=1/\sqrt{n}$. Assumption (A3) requires some smoothness of the prior and likelihood with respect to $\theta$. The theorem below shows that $q_{n}^{*}$ is posterior consistent. ###### Theorem 6. Suppose that conditions (A1), (A2) and (A3) are satisfied. Then, for any $\epsilon>0$, $q_{n}^{*}\Big{(}\|\theta-\theta_{0}\|_{2}^{2}>\epsilon\Big{)}=o(1)\stackrel{{\scriptstyle n\to\infty}}{{\longrightarrow}}0,\;\;\;\;p^{(n)}_{0}-a.s.$ (23) ## 6 Numerical examples ### 6.1 Bayesian logistic regression Although Bayesian logistic regression is a benchmark model in statistics, it is not straightforward to use the classical MFVB method here because of the lack of a conjugate prior. This section demonstrates that it is straightforward to use the PMFVB method for Bayesian logistic regression. Consider the model $\beta\sim N(0,\sigma_{0}^{2}I_{d}),\;\;\;y_{i}\sim\text{Binomial}\big{(}1,\sigma(x_{i}^{\top}\beta)\big{)},\;\;\;\sigma(x_{i}^{\top}\beta)=\frac{1}{1+\exp(-x_{i}^{\top}\beta)},$ (24) where $\beta=(\beta_{0},\beta_{1},\beta_{2},\beta_{3})^{\top}$ and $x_{i}=(1,x_{i,1},x_{i,2},x_{i,3})^{\top}$.
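For concreteness, a minimal sketch of simulating a dataset from (24); the specific settings used here match those stated in the next paragraph, but are otherwise arbitrary:

```python
# Simulate from the Bayesian logistic regression model (24).
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma0_sq = 200, 4, 4.0
beta = rng.normal(0.0, np.sqrt(sigma0_sq), size=d)                  # beta ~ N(0, sigma0^2 I_d)
X = np.column_stack([np.ones(n), rng.standard_normal((n, d - 1))])  # x_i = (1, x_i1, x_i2, x_i3)
p = 1.0 / (1.0 + np.exp(-X @ beta))                                 # sigma(x_i' beta)
y = rng.binomial(1, p)                                              # y_i ~ Binomial(1, p_i)
```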
We generated a dataset of size $n=200$ from (24) with $d=4$ and $\sigma_{0}^{2}=4$. The likelihood is $L(\beta)=\prod_{i=1}^{n}\big{(}\sigma(x_{i}^{\top}\beta)\big{)}^{y_{i}}\big{(}1-\sigma(x_{i}^{\top}\beta)\big{)}^{1-y_{i}}$ with posterior $\pi(\beta)\propto p(\beta)L(\beta).$ We consider the PMFVB procedure with the model parameters $\beta$ factorized into two blocks $\theta_{1}=(\beta_{0},\beta_{1})^{\top}$ and $\theta_{2}=(\beta_{2},\beta_{3})^{\top}$. Write $x_{i}^{(1)}=(1,x_{i,1})^{\top}$ and $x_{i}^{(2)}=(x_{i,2},x_{i,3})^{\top}$. The gradients of the log posterior with respect to $\theta_{1}$ and $\theta_{2}$ are given by $\nabla_{\theta_{1}}\log\pi(\theta_{1},\theta_{2})=\sum_{i=1}^{n}\left(y_{i}-\sigma\left({x_{i}^{(1)}}^{\top}\theta_{1}+{x_{i}^{(2)}}^{\top}\theta_{2}\right)\right)x_{i}^{(1)}-\frac{1}{\sigma_{0}^{2}}\theta_{1}$ and $\nabla_{\theta_{2}}\log\pi(\theta_{1},\theta_{2})=\sum_{i=1}^{n}\left(y_{i}-\sigma\left({x_{i}^{(1)}}^{\top}\theta_{1}+{x_{i}^{(2)}}^{\top}\theta_{2}\right)\right)x_{i}^{(2)}-\frac{1}{\sigma_{0}^{2}}\theta_{2},$ respectively. The PMFVB algorithm maintains a set of $M$ particles $\\{\theta_{1,i}^{(t)},\theta_{2,i}^{(t)}\\}_{i=1,...,M}$ over iterations $t\geq 0$. At iteration $t+1$, according to (20), the first block of the particles is updated as $\theta_{1,i}^{(t+1)}=\theta_{1,i}^{(t)}+\frac{h}{2m}\sum_{k=1}^{m}\nabla_{\theta_{1}}\log\pi(\theta_{1,i}^{(t)},\theta_{2,i_{k}}^{(t)})+\sqrt{h}N(0,I_{2}),$ (25) where $\\{\theta_{2,i_{k}}^{(t)}\\}_{k=1,...,m}$ is a random subset of size $m$ from $\\{\theta_{2,i}^{(t)}\\}_{i=1,...,M}$, and $h>0$ is a step size. The second block of particles is updated as $\theta_{2,i}^{(t+1)}=\theta_{2,i}^{(t)}+\frac{h}{2m}\sum_{k=1}^{m}\nabla_{\theta_{2}}\log\pi(\theta_{1,i_{k}}^{(t+1)},\theta_{2,i}^{(t)})+\sqrt{h}N(0,I_{2}),$ (26) where $\\{\theta_{1,i_{k}}^{(t+1)}\\}_{k=1,...,m}$ is a random subset from $\\{\theta_{1,i}^{(t+1)}\\}_{i=1,...,M}$. The PMFVB procedure for approximating the posterior $\pi(\beta)$ iterates between (25) and (26) until stopping. We implemented the PMFVB algorithm using $M=3000$ particles; the optimisation took $18$ seconds. In comparison, MCMC (Hamiltonian Monte Carlo) was conducted using PyMC with the standard setup (10,000 samples after a burn-in period of 1,000), and the sampling took 54 seconds. Both algorithms were run on the same Dell Optiplex 7490 AIO (i7-11700) computer. The trace plot in Figure 1 shows the lower bound (22) over the iterations. Figure 1: The trace plot of the PMFVB lower bound in Bayesian logistic regression. Figure 2 plots the marginal posterior densities, estimated by kernel density estimation from the PMFVB particles and from the HMC draws; the two sets of estimates are very similar. (a) $\beta_{0}$ (b) $\beta_{1}$ (c) $\beta_{2}$ (d) $\beta_{3}$ Figure 2: The estimated posterior density curves and rug plots for Bayesian logistic regression: the green curves are for PMFVB and the black for HMC (generated using PyMC). ### 6.2 Stochastic volatility This section applies the PMFVB method to Bayesian inference in the Stochastic Volatility (SV) model (Taylor, 1982). Let $\\{y_{t},\ t=1,2,...\\}$ be an asset return time series.
The SV model is $\displaystyle y_{t}$ $\displaystyle\sim$ $\displaystyle N\big{(}0,e^{x_{t}}\big{)},\;\;t=1,2,...,T$ (27) $\displaystyle x_{t}$ $\displaystyle\sim$ $\displaystyle N\big{(}\mu(1-\phi)+\phi x_{t-1},\sigma^{2}\big{)},\;\;t=2,...,T,\;\;x_{1}\sim N\big{(}\mu,\sigma^{2}/(1-\phi^{2})\big{)},$ (28) with $\mu\in\mathbb{R},\phi\in(-1,1)$ and $\sigma^{2}>0$ being the model parameters. Write $\theta=(\mu,\phi,\sigma^{2})$. Following Kim et al., (1998), we use the prior $\mu\sim N(0,\sigma_{0}^{2})$ with $\sigma_{0}^{2}=10$, $\tau=(1+\phi)/2\sim\text{Beta}(a_{0},b_{0})$ with $a_{0}=20$ and $b_{0}=1.5$, and $\sigma^{2}\sim\text{inverse-Gamma}(\alpha_{0},\beta_{0})$ with $\alpha_{0}=2.5$ and $\beta_{0}=0.025$. It is challenging to perform Bayesian inference for the SV model because the likelihood $p(y|\theta)$ is a high-dimensional integral over the latent variables $x=x_{1:T}$. A number of Bayesian methods are available for estimating the SV model, including SMC2 (Chopin et al., 2013; Gunawan et al., 2022) and fixed-form Variational Bayes (Tran et al., 2020). However, to the best of our knowledge, MFVB has never been successfully used for this model. To apply the PMFVB for SV, we use the following factorized variational distribution $q(\theta,x)=q(\mu)q(\sigma^{2})q(\phi,x).$ (29) This factorization leads to an analytical update for $q(\mu)$ and $q(\sigma^{2})$, and we only need one LMC procedure to update $q(\phi,x)$; see Appendix A for the derivation. One could also use $q(\theta,x)=q(\theta)q(x),$ (30) but then two LMC procedures are needed to update $q(\theta)$ and $q(x)$, as $q(\theta)$ cannot be updated analytically. We generate a return time series of $T=500$ observations from the SV model (27)-(28) with $\mu=1$, $\phi=0.8$ and $\sigma=0.5$. Figure 3 plots the posterior densities for $\theta$ estimated by the PMFVB and SMC methods. We use 500 particles in both methods. The CPU times taken by PMFVB and SMC were 8.2 and 29.3 minutes, respectively. The running time for SMC depends on many factors, such as the number of particles used in the particle filter for estimating the likelihood and the number of Markov moves. We select these numbers following their typical use in the literature; see, e.g., Gunawan et al., (2022). Figure 3 shows that the PMFVB estimates for $\mu$ and $\phi$ are almost identical to those of SMC; the exception is $\sigma^{2}$, for which PMFVB underestimates the posterior variance. This is a well-known problem with the MFVB method, caused by the factorization (29) it imposes on the variational distribution. Section 8 suggests a possible solution. Figure 3: The posterior density curves for the SV model parameters estimated by PMFVB (solid line) and SMC (dashed line). The right panel shows the smoothed lower bound in PMFVB over a window of size 50. ## 7 Particle MFVB for Bayesian neural networks This section presents a variant of the PMFVB approach for Bayesian inference in big models like deep neural networks. One of the most commonly used methods for Bayesian inference in deep learning is perhaps the Stochastic Gradient Langevin Dynamics (SGLD) method of Welling and Teh, (2011) and its variants. Let $\theta\in\mathbb{R}^{d}$ be the model parameters and $\pi(\text{\rm d}\theta)$ their posterior distribution.
The SGLD algorithm is based on the discretised Langevin diffusion (9) $\theta^{(t+1)}=\theta^{(t)}+\frac{h}{2}\widehat{\nabla\log\pi\big{(}\theta^{(t)}\big{)}}+\sqrt{h}\eta_{t},\;\;\;\eta_{t}\stackrel{{\scriptstyle i.i.d.}}{{\sim}}N(0,I_{d}),$ (31) where $\widehat{\nabla\log\pi(\theta)}$ is an unbiased estimator of the gradient $\nabla\log\pi(\theta)$, often computed from a data mini-batch. There is a large literature aimed at improving SGLD by exploiting the curvature structure of the log-target density function. For example, Li et al., (2016) propose a preconditioned Stochastic Gradient Langevin Dynamics (pSGLD) that rescales the gradient of the log target density by a diagonal matrix learnt using the second moments of the previous gradients. Kim et al., (2022) introduce several SGLD schemes that add an adaptive drift to the noisy log-gradient estimator. See also Girolami and Calderhead, (2011) and Chen et al., (2014). ### 7.1 The algorithm We introduce three refinements to make the PMFVB approach computationally efficient in big-data and complex-model situations. First, we choose the updating block randomly in each iteration $t$ and for each particle $i$. Let $\iota=\\{1\leq j_{1}<j_{2}<\cdots<j_{m}\leq d\\}$ be an index subset of size $m$ from $\\{1,2,...,d\\}$. We denote by $\theta(\iota)=(\theta_{j_{1}},...,\theta_{j_{m}})$ the sub-vector obtained from $\theta$ corresponding to the index set $\iota$, and by $\theta(\setminus\iota)$ the vector obtained from $\theta$ after removing the components in $\theta(\iota)$. For iteration $t$ and for each particle $i$, the index set $\iota_{i}$ is randomly selected (we suppress the dependence on $t$ for notational simplicity); the corresponding block of $m$ components $\theta_{i}^{(t)}(\iota_{i})$ of $\theta_{i}^{(t)}$ is updated via LMC $\theta_{i}^{(t+1)}(\iota_{i})=\theta_{i}^{(t)}(\iota_{i})+\frac{h}{2}{\mathbb{E}}_{q_{\theta(\setminus\iota)}^{(t)}}\Big{[}\nabla_{\theta(\iota_{i})}\log\pi\big{(}\theta_{i}^{(t)}(\iota_{i}),\theta(\setminus\iota_{i})\big{)}\Big{]}+\sqrt{h}\eta_{i},$ (32) with $\eta_{i}\sim N_{m}(0,I)$. Here, $q_{\theta(\setminus\iota_{i})}^{(t)}$ denotes the marginal distribution of the particles $\theta_{i}^{(t)}$ w.r.t. the $\theta(\setminus\iota_{i})$ components. Second, we approximate the expectation term ${\mathbb{E}}_{q_{\theta(\setminus\iota_{i})}^{(t)}}[\cdot]$ in (32) by $\nabla_{\theta(\iota_{i})}\log\pi\big{(}\theta_{i}^{(t)}(\iota_{i}),\bar{\theta}^{(t)}(\setminus\iota_{i})\big{)}$, where $\bar{\theta}^{(t)}$ is the sample mean of the particles $\\{\theta_{i}^{(t)},i=1,...,M\\}$. This leads to a significant reduction in computational time, compared to an alternative that averages over a subset of particles as in (20). Third, it is desirable to incorporate an adaptive SGLD scheme into the LMC update in PMFVB. Our work uses the ADAM-based adaptive-drift SGLD scheme of Kim et al., (2022). Algorithm 2 summarises the method. Algorithm 2 Particle MFVB for neural networks 1:procedure PMFVB-NN($h$, $\beta_{1},\beta_{2},a,\lambda,\epsilon$) 2: Input: step size $h$, smoothing weights $\beta_{1},\beta_{2}$, tolerance $\epsilon>0$, scale factor $a$ and $\lambda>0$. 3: Initialise particles $\theta_{i}^{(0)}\sim p_{0},~{}i=1,...,M$, $m_{0}=0$, $v_{0}=0$, $t=0$.
4: while Not $S(\epsilon)$ do 5: $g_{t}\leftarrow m_{t}\oslash\sqrt{v_{t}+\lambda}$ and $\bar{\theta}^{(t)}\leftarrow\frac{1}{M}\sum_{i=1}^{M}\theta^{(t)}_{i}$ 6: for $i$ in $[M]$ do 7: $\iota_{i}\leftarrow\\{1\leq j_{1}<j_{2}<\cdots<j_{m}\leq d\\}$, with the indices $j_{1},...,j_{m}$ randomly selected from $\\{1,...,d\\}$. 8: Update: $\tilde{g}_{t}:=\Big{(}\nabla_{\theta(\iota_{i})}\log\pi\big{(}\theta_{i}^{(t)}(\iota_{i}),\bar{\theta}^{(t)}(\setminus\iota_{i})\big{)}+ag_{t}(\iota_{i})\Big{)}$ $\displaystyle\theta_{i}^{(t+1)}(\iota_{i})$ $\displaystyle\leftarrow\theta_{i}^{(t)}(\iota_{i})+\frac{h}{2}\tilde{g}_{t}+\sqrt{h}\eta_{i},~{}\eta_{i}\sim N_{m}(0,I)$ (33) $\displaystyle\theta_{i}^{(t+1)}$ $\displaystyle\leftarrow\big{(}\theta_{i}^{(t+1)}(\iota_{i}),\theta_{i}^{(t)}(\setminus\iota_{i})\big{)}$ 9: $g^{(t)}_{i}\leftarrow\nabla_{\theta}\log\pi(\theta_{i}^{(t+1)})$ (gradient at the updated particle) 10: end for 11: Update the adaptive drift $\displaystyle\bar{g}_{t}\leftarrow\frac{1}{M}\sum_{i=1}^{M}g^{(t)}_{i},$ $\displaystyle~{}~{}\bar{v}_{t}\leftarrow\sqrt{\frac{1}{M}\sum_{i}(g^{(t)}_{i}-\bar{g}_{t})\odot(g^{(t)}_{i}-\bar{g}_{t})}$ $\displaystyle m_{t+1}\leftarrow\beta_{1}m_{t}+(1-\beta_{1})\bar{g}_{t},$ $\displaystyle~{}~{}v_{t+1}\leftarrow\beta_{2}v_{t}+(1-\beta_{2})\bar{v}_{t}.$ 12: $t\leftarrow t+1$. 13: end while 14:end procedure In Algorithm 2, $\oslash$ and $\odot$ denote the component-wise division and multiplication operators, respectively. Note that the adaptive-drift statistics in step 11 average over all $M$ particles and are therefore updated once per iteration, after the particle loop. The implementation below follows Kim et al., (2022) and sets $\beta_{1}=0.9$, $\beta_{2}=0.99$, $a=100$ and $\lambda=10^{-8}$. The block size $m$ is 10% of the length $d$ of $\theta$. The step size $h$ is typically $0.001$. ### 7.2 Applications #### 7.2.1 Bayesian neural networks for regression Consider the neural network regression model $y_{i}=\eta(x_{i},w)+\epsilon_{i},\;\;\epsilon_{i}\sim N(0,\sigma^{2}),\;\;i=1,...,n,$ (34) where $\eta(x_{i},w)$ is the output from a neural network with the input $x_{i}$ and weights $w=\\{w_{j},j=1,...,d_{w}\\}$. Neural networks are prone to overfitting, and the Bayesian framework provides a principled way of dealing with this problem by placing a regularization prior on the weights. We consider the following adaptive ridge-type regularization prior on the weights $\displaystyle w_{j}|\tau_{j}$ $\displaystyle\sim$ $\displaystyle N(0,\tau_{j}),\;\;j=1,...,d_{w},$ (35) $\displaystyle\tau_{j}$ $\displaystyle\sim$ $\displaystyle\text{Inverse Gamma}(\alpha_{0},\beta_{0}).$ We set $\alpha_{0}=1$ and $\beta_{0}=0.01$ in all the examples below. For the variance $\sigma^{2}$, we use the improper prior $p(\sigma^{2})\propto 1/\sigma^{2}$ (Park and Casella, 2008). The model parameters are $\theta=\big{(}w,\tau=\\{\tau_{j},j=1,...,d_{w}\\},\sigma^{2}\big{)}$. Given the different roles of $w$, $\tau$ and $\sigma^{2}$, it is natural to use the following factorized variational distribution in PMFVB $q(\theta)=q(w)q(\tau)q(\sigma^{2}).$ (36) It can be seen from (16) that the optimal MFVB distribution for $\tau$ is $q(\tau)=\prod_{j}q(\tau_{j})$, where $q(\tau_{j})$ is an Inverse-Gamma density with shape $\alpha_{0}+1/2$ and scale $\beta_{0}+\langle w_{j}^{2}\rangle/2$. Here, as common in the MFVB literature, $\langle\cdot\rangle$ denotes the expectation with respect to the variational distribution $q$. The optimal MFVB distribution for $\sigma^{2}$ is Inverse-Gamma with shape $n/2$ and scale $\frac{1}{2}\langle\sum_{i=1}^{n}(y_{i}-\eta(x_{i},w))^{2}\rangle$.
The terms $\langle w_{j}^{2}\rangle$ and $\langle\sum_{i=1}^{n}(y_{i}-\eta(x_{i},w))^{2}\rangle$ can be approximated using the $w$-particles. The optimal MFVB distribution for the weights $w$ has the log-density $\log q(w)=-\frac{1}{2}\sum_{j=1}^{d_{w}}\langle\frac{1}{\tau_{j}}\rangle w_{j}^{2}-\frac{1}{2}\langle\frac{1}{\sigma^{2}}\rangle\sum_{i=1}^{n}(y_{i}-\eta(x_{i},w))^{2},$ (37) which indicates that $q(\text{\rm d}w)$ cannot be updated analytically. Note that the expectation terms $\langle\frac{1}{\tau_{j}}\rangle$ and $\langle\frac{1}{\sigma^{2}}\rangle$ can be computed analytically. Based on (37), the Langevin MC procedure in Algorithm 2 is used to approximate the distribution $q(\text{\rm d}w)$. #### A simulation study We simulate data from the following non-linear model (Tran et al., 2020) $y=5+10x_{1}+\frac{10}{x_{2}^{2}+1}+5x_{3}x_{4}+2x_{4}+5x_{4}^{2}+5x_{5}+2x_{6}+\frac{10}{x_{7}^{2}+1}+5x_{8}x_{9}+5x_{9}^{2}+5x_{10}+\epsilon,$ where $\epsilon\sim{\cal N}(0,1)$, $(x_{1},...,x_{20})^{\top}$ are generated from a multivariate normal distribution with mean zero and covariance matrix $(0.5^{|i-j|})_{i,j}$; the last ten variables are not in the regression. The training data has 100,000 observations, and the validation and test datasets each have 10,000 observations. We compare the predictive performance of PMFVB with the Gaussian VB of Tran et al., (2020), the SGLD of Welling and Teh, (2011), the preconditioned SGLD of Li et al., (2016) and the ADAM-based adaptive drift SGLD of Kim et al., (2022). We use 300 particles in PMFVB. For the SGLD methods, one must first transform $\tau$ and $\sigma^{2}$ into an unconstrained space, then apply Langevin MC to jointly sample the weights $w$, the transformed $\sigma^{2}$ and the latent (transformed) $\tau$. The dimension of this LMC is more than double that of the LMC used in the PMFVB method, where one needs to sample $w$ only. We set the tuning parameters in the SGLD methods following the suggestions in the papers proposing the methods. The performance metrics include the best validation-data partial predictive score (PPS) and the test-data PPS, $\text{PPS}=-\frac{1}{|D|}\sum_{(x,y)\in D}\log p(y|x,\widehat{\theta})$ where $D$ is the validation or test dataset, respectively, and $\widehat{\theta}$ is the posterior mean estimate of the model parameters. We also report the test-data MSE and the CPU running time (in minutes). For each method, the Bayesian neural network model is trained until the validation PPS has not decreased for 100 iterations. Table 1 summarizes the results, which are based on 10 different runs for each algorithm. The PMFVB method achieves the best predictive performance; it is also stable across different runs, as reflected by the standard deviations. In terms of running time, the preconditioned SGLD is the most efficient. We note, however, that PMFVB is parallelisable and its running time can be greatly reduced on multi-core machines.
Method | Validation PPS | Test PPS | Test MSE | CPU
---|---|---|---|---
Gaussian VB | 1.2305 (0.0678) | 1.2513 (0.0714) | 4.3795 (0.6384) | 18.78
SGLD | 1.3751 (0.1060) | 1.3881 (0.1055) | 4.9997 (1.1608) | 2.74
Preconditioned SGLD | 1.0527 (0.1274) | 1.0676 (0.1335) | 3.1517 (0.8158) | 1.36
Adam SGLD | 1.1849 (0.0485) | 1.2021 (0.0487) | 3.7987 (0.4874) | 4.26
PMFVB | ${\bf 0.8239}$ (0.0642) | ${\bf 0.8598}$ (0.0640) | ${\bf 2.0631}$ (0.2603) | 16.29

Table 1: Simulation data: predictive performance in terms of the partial predictive score (PPS), the mean squared error (MSE) and CPU time (in minutes), averaged over 10 different runs. The numbers in brackets are the standard deviations across the replicates. The best scores for each performance metric are highlighted in bold. The structure of the neural net is (20,20,20,1), i.e., one input layer of 20 units, two hidden layers each of 20 units and one output unit. #### The HILDA data The Household, Income and Labour Dynamics in Australia (HILDA) Survey data consists of many household-based variables about economic and personal well-being. (The HILDA Survey was conducted by the Australian Government Department of Social Services (DSS); the findings and views reported in this paper, however, are those of the authors and should not be attributed to the Australian Government, DSS, or any of DSS’ contractors or partners.) We apply the Bayesian neural network model to predict income, using 43 covariates, many of which are dummy variables used to encode the categorical variables. The dataset is randomly divided into a training set of 14,010 observations for fitting the model, a validation set of 1,751 observations for stopping, and a test set of 1,751 observations for final performance evaluation. We use a neural network with three hidden layers, each having 50 units, and the same algorithm settings as in the previous simulation example. Table 2 summarizes the results, which show that the PMFVB algorithm achieves the best predictive performance together with the highest stability across different runs.

Method | Validation PPS | Test PPS | Test MSE | CPU
---|---|---|---|---
Gaussian VB | $-0.0957$ (0.0071) | $-0.1474$ (0.0143) | 0.2739 (0.0078) | 10.62
SGLD | $-0.1272$ (0.0035) | $-0.1850$ (0.0051) | 0.2541 (0.0026) | 2.40
Preconditioned SGLD | $-0.1065$ (0.0141) | $-0.1621$ (0.0190) | 0.2612 (0.0037) | 5.42
Adam SGLD | $-0.1321$ (0.0035) | $-0.1897$ (0.0022) | 0.2516 (0.0011) | 1.57
PMFVB | ${\bf-0.1331}$ (0.0019) | ${\bf-0.1932}$ (0.0014) | ${\bf 0.2498}$ (0.0007) | 9.07

Table 2: HILDA data: predictive performance in terms of the partial predictive score (PPS), the mean squared error (MSE) and CPU time (in minutes), averaged over 10 different runs. The numbers in brackets are the standard deviations across the replicates. The best scores for each performance metric are highlighted in bold. The structure of the neural net is (43,50,50,50,1), i.e. three hidden layers each of 50 units. #### 7.2.2 Bayesian neural networks for classification Consider the neural network binary classification model $\displaystyle y_{i}$ $\displaystyle\sim$ $\displaystyle\text{Binomial}\big{(}1,p(x_{i},w)\big{)},$ $\displaystyle p(x_{i},w)$ $\displaystyle=$ $\displaystyle\frac{1}{1+\exp\big{(}-\eta(x_{i},w)\big{)}},\;\;i=1,...,n$ where $\eta(x_{i},w)$ is the output from a neural network with the input $x_{i}$ and weights $w=\\{w_{j},j=1,...,d_{w}\\}$. We use the same regularisation prior as in the regression (35). The model parameters are $\theta=\big{(}w,\tau=\\{\tau_{j},j=1,...,d_{w}\\}\big{)}$.
We consider the following factorization in the PMFVB $q(\theta)=q(w)q(\tau).$ The optimal MFVB distribution for $\tau$ is $q(\tau)=\prod_{j}q(\tau_{j})$, where $q(\tau_{j})$ is an Inverse-Gamma density with shape $\alpha_{0}+1/2$ and scale $\beta_{0}+\langle w_{j}^{2}\rangle/2$. The term $\langle w_{j}^{2}\rangle$ can be approximated from the $w$-particles. The optimal MFVB distribution for the weights $w$ has the log-density $\log q(w)=-\frac{1}{2}\sum_{j=1}^{d_{w}}\langle\frac{1}{\tau_{j}}\rangle w_{j}^{2}+\sum_{i=1}^{n}\Big{(}y_{i}\eta(x_{i},w)-\log(1+e^{\eta(x_{i},w)})\Big{)}.$ (38) The expectation term $\langle\frac{1}{\tau_{j}}\rangle$ can be computed analytically. Based on (38), the Langevin MC procedure in Algorithm 2 is used to approximate the distribution $q(\text{\rm d}w)$. #### The census data The census dataset, obtained from the U.S. Census Bureau, is available on the UCI Machine Learning Repository. The task is to classify whether a person’s income is over $50K per year, based on 14 covariates including age, work class and race. After encoding the categorical variables with dummy variables, there are a total of 103 input variables in the Bayesian neural network model. The full dataset of 45,221 observations is divided into a training set (53%), a validation set (14%) and a test set (33%). We use 200 particles in the PMFVB method. Table 3 summarizes the results, which show that PMFVB obtains the best predictive performance.

Method | Validation PPS | Test PPS | Test MCR | CPU
---|---|---|---|---
SGLD | 0.3153 (0.0046) | 0.4304 (0.0348) | 0.1990 (0.0033) | 3.83
Preconditioned SGLD | 0.3133 (0.0012) | 0.4407 (0.0251) | 0.2034 (0.0051) | 3.47
Adam SGLD | 0.3109 (0.0011) | 0.4422 (0.0211) | 0.2097 (0.0079) | 3.20
PMFVB | ${\bf 0.3052}$ (0.0008) | ${\bf 0.4217}$ (0.0069) | ${\bf 0.1958}$ (0.0030) | 18.30

Table 3: Census data: predictive performance in terms of the partial predictive score (PPS), misclassification rate (MCR) and CPU time (in minutes), averaged over 10 different runs. The numbers in brackets are the standard deviations across the runs. The best scores for each performance metric are highlighted in bold. The structure of the neural net is (103,100,100,1). ## 8 Discussion We propose a particle-based MFVB procedure for Bayesian inference, which extends the scope of classical MFVB, is widely applicable and enjoys attractive theoretical properties. The new method can also be used for training Bayesian deep learning models. The main limitation of MFVB methods, including PMFVB, is the use of factorized variational distributions, which might fail to capture the dependence structure between the blocks of variables. This limitation can be mitigated using the reparametrisation method of Tan, (2021). Write the target distribution as $\pi(x,y)=\pi_{x}(x)\pi_{y|x}(y|x)$. Let $b(x)$ and $H(x)$ be the gradient and the negative Hessian of $\log\pi_{y|x}(y|x)$ at some point $y=y_{0}$. Assuming that $H(x)$ is positive definite, consider the transformation $\widetilde{y}=H(x)^{1/2}\big{(}y-\mu(x)\big{)}$ with $\mu(x)=H(x)^{-1}b(x)+y_{0}$. The joint density of $x$ and $\widetilde{y}$ is $\widetilde{\pi}(x,\widetilde{y})=|H(x)|^{-1/2}\pi_{x}(x)\pi_{y|x}\big{(}H(x)^{-1/2}\widetilde{y}+\mu(x)|x\big{)}.$ The motivation is that, if $\pi_{y|x}(y|x)=N\big{(}\mu(x),H(x)^{-1}\big{)}$, then $\widetilde{y}\sim N(0,I)$ and is independent of $x$. In general, we can expect the dependence between $x$ and $\widetilde{y}$ to be much weaker than that between $x$ and $y$.
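As an illustration of the transformation (this sketch is not from Tan, (2021) or from the paper; the toy model and all settings are hypothetical), the following code whitens a Gaussian conditional, using the Cholesky factor of $H(x)$ as the matrix square root:

```python
# Whitening reparametrisation y_tilde = H(x)^{1/2} (y - mu(x)) for a toy
# target whose conditional is Gaussian: y | x ~ N(a*x, Sigma).
import numpy as np

A_COEF = 0.8                                       # hypothetical regression coefficient a
SIGMA = np.array([[1.0, 0.5], [0.5, 2.0]])         # hypothetical conditional covariance
y0 = np.zeros(2)                                   # expansion point y_0

def b_and_H(x, y):
    """Gradient b(x) and negative Hessian H(x) of log pi(y|x) at y."""
    P = np.linalg.inv(SIGMA)                       # precision = negative Hessian
    return P @ (A_COEF * x - y), P                 # gradient of log N(y | a x, Sigma)

def whiten(x, y):
    b, H = b_and_H(x, y0)
    L = np.linalg.cholesky(H)                      # H = L L^T
    mu = np.linalg.solve(H, b) + y0                # mu(x) = H^{-1} b(x) + y0 (= a x here)
    return L.T @ (y - mu)                          # Cov(y_tilde) = L^T H^{-1} L = I

rng = np.random.default_rng(1)
x = rng.standard_normal(2)
y = A_COEF * x + np.linalg.cholesky(SIGMA) @ rng.standard_normal(2)  # y ~ N(a x, Sigma)
print(whiten(x, y))                                # ~ N(0, I), independent of x
```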
Tan, (2021) considers this reparametrisation approach in Gaussian Variational Bayes and documents significant improvement in posterior approximation accuracy. Coupling the reparametrisation approach with PMFVB could lead to an efficient technique for Bayesian inference. This research is in progress. ## Appendix A: Derivation for the SV example The joint posterior of $\theta$ and latent $x$ is $\displaystyle\pi(\theta,x)$ $\displaystyle\propto$ $\displaystyle p(\theta)p(x|\theta)p(y|x,\theta)$ $\displaystyle=$ $\displaystyle p(\mu)p(\phi)p(\sigma^{2})N\big{(}x_{1}|\mu,\frac{\sigma^{2}}{1-\phi^{2}}\big{)}\prod_{t=2}^{T}N\big{(}x_{t}|\mu(1-\phi)+\phi x_{t-1},\sigma^{2}\big{)}\prod_{t=1}^{T}N\big{(}y_{t}|0,e^{x_{t}}\big{)},$ where $p(\mu)=N(\mu|0,\sigma_{0}^{2})$, $p(\phi)\propto\big{(}(1+\phi)/2\big{)}^{a_{0}-1}\big{(}(1-\phi)/2\big{)}^{b_{0}-1}$, and $p(\sigma^{2})\propto(\sigma^{2})^{-(1+\alpha_{0})}\exp(-\beta_{0}/\sigma^{2})$. Using (16), the optimal MFVB distribution $q(\mu)$ is $N(\mu_{q},\sigma_{q}^{2})$ with $\mu_{q}=B/A$ and $\sigma_{q}^{2}=1/A$, where $A=\frac{1}{\sigma_{0}^{2}}+\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}\langle 1-\phi^{2}\rangle+(T-1)\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}\langle(1-\phi)^{2}\rangle$ and $B=\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}\langle(1-\phi^{2})x_{1}\rangle+\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}\langle(1-\phi)\sum_{t=2}^{T}(x_{t}-\phi x_{t-1})\rangle,$ with $\alpha_{\sigma^{2}}$ and $\beta_{\sigma^{2}}$ given below. Recall that $\langle\cdot\rangle$ denotes the expectation with respect to the variational distribution $q$. The optimal MFVB distribution $q(\sigma^{2})$ is Inverse-Gamma$(\alpha_{\sigma^{2}},\beta_{\sigma^{2}})$, where $\alpha_{\sigma^{2}}=\alpha_{0}+T/2$ and $\beta_{\sigma^{2}}=\beta_{0}+\frac{1}{2}\Big{\langle}(1-\phi^{2})\big{[}(x_{1}-\mu_{q})^{2}+\sigma_{q}^{2}\big{]}+\sum_{t=2}^{T}\big{[}(x_{t}-\mu_{q}(1-\phi)-\phi x_{t-1})^{2}+(1-\phi)^{2}\sigma_{q}^{2}\big{]}\Big{\rangle}.$ All the expectations $\langle\cdot\rangle$ in the expressions above are with respect to $q(\phi,x)$, which can be estimated from the $(\phi,x)$-particles. The logarithm of the optimal MFVB density for $(\phi,x)$ is $\displaystyle\log q(\phi,x)$ $\displaystyle=$ $\displaystyle\Big{\langle}\log p(\phi)+\log p(x_{1}|\theta)+\sum_{t=2}^{T}\log p(x_{t}|x_{t-1},\theta)+\sum_{t=1}^{T}\log p(y_{t}|x_{t})\Big{\rangle}+C$ $\displaystyle=$ $\displaystyle\Big{\langle}(a_{0}-1)\log(1+\phi)+(b_{0}-1)\log(1-\phi)+\frac{1}{2}\log(1-\phi^{2})-\frac{1-\phi^{2}}{2\sigma^{2}}(x_{1}-\mu)^{2}$ $\displaystyle-\sum_{t=2}^{T}\frac{1}{2\sigma^{2}}\big{(}x_{t}-\mu(1-\phi)-\phi x_{t-1}\big{)}^{2}-\sum_{t=1}^{T}\big{(}\frac{x_{t}}{2}+\frac{1}{2}y_{t}^{2}e^{-x_{t}}\big{)}\Big{\rangle}+C,$ where $C$ is a constant independent of $\phi$ and $x$. For the LMC step, we need the gradients $\nabla_{\phi}\log q(\phi,x)$ and $\nabla_{x}\log q(\phi,x)$.
$\displaystyle\nabla_{\phi}\log q(\phi,x)$ $\displaystyle=$ $\displaystyle\frac{a_{0}-1}{1+\phi}-\frac{b_{0}-1}{1-\phi}-\frac{\phi}{1-\phi^{2}}+\phi\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}\big{[}(x_{1}-\mu_{q})^{2}+\sigma_{q}^{2}\big{]}$ $\displaystyle+\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}\sum_{t=2}^{T}\big{[}(x_{t-1}-\mu_{q})(x_{t}-\mu_{q})-\phi(x_{t-1}-\mu_{q})^{2}+(1-\phi)\sigma_{q}^{2}\big{]},$ $\nabla_{x_{1}}\log q(\phi,x)=-\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}(1-\phi^{2})(x_{1}-\mu_{q})-\frac{1}{2}+\frac{y_{1}^{2}}{2}e^{-x_{1}}+\phi\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}\big{(}x_{2}-(1-\phi)\mu_{q}-\phi x_{1}\big{)},$ $\nabla_{x_{t}}\log q(\phi,x)=-\frac{1}{2}+\frac{y_{t}^{2}}{2}e^{-x_{t}}-\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}\big{(}x_{t}-(1-\phi)\mu_{q}-\phi x_{t-1}\big{)}+\phi\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}\big{(}x_{t+1}-(1-\phi)\mu_{q}-\phi x_{t}\big{)},$ for $t=2,...,T-1$, and finally, $\nabla_{x_{T}}\log q(\phi,x)=-\frac{1}{2}+\frac{y_{T}^{2}}{2}e^{-x_{T}}-\frac{\alpha_{\sigma^{2}}}{\beta_{\sigma^{2}}}\big{(}x_{T}-(1-\phi)\mu_{q}-\phi x_{T-1}\big{)}.$ ## Appendix B: Technical proofs ###### Proof of Theorem 2. Consider two measures in ${\cal Q}={\mathbb{W}}_{2}({\cal X})\otimes{\mathbb{W}}_{2}({\cal Y})$: $q^{(1)}(\text{\rm d}x\times\text{\rm d}y)=q_{x}^{(1)}(\text{\rm d}x)q_{y}^{(1)}(\text{\rm d}y)$ and $q^{(2)}(\text{\rm d}x\times\text{\rm d}y)=q_{x}^{(2)}(\text{\rm d}x)q_{y}^{(2)}(\text{\rm d}y)$. Then, $W_{2}^{2}(q^{(1)},q^{(2)})=W_{2}^{2}(q_{x}^{(1)},q_{x}^{(2)})+W_{2}^{2}(q_{y}^{(1)},q_{y}^{(2)}).$ (39) With a generic measure $q(\text{\rm d}x\times\text{\rm d}y)=q_{x}(\text{\rm d}x)q_{y}(\text{\rm d}y)\in{\cal Q}$, write $F(q)$ as $F(q)=F_{x}(q_{x})+F_{y}(q_{y})+F_{xy}(q)$ (40) where $F_{x}(q_{x})=\int_{{\cal X}}q_{x}(x)\log q_{x}(x)\text{\rm d}x,\;\;\;F_{y}(q_{y})=\int_{{\cal Y}}q_{y}(y)\log q_{y}(y)\text{\rm d}y$ and $F_{xy}(q)=\int_{{\cal X}\times{\cal Y}}\big{(}-\log\pi(x,y)\big{)}q(x,y)\text{\rm d}x\times\text{\rm d}y.$ To show that $F(q)$ is lower semi-continuous (l.s.c), consider a sequence of measures $\\{q^{n}(\text{\rm d}x\times\text{\rm d}y)=q_{x}^{n}(\text{\rm d}x)q_{y}^{n}(\text{\rm d}y)\\}_{n\geq 1}\subset{\mathbb{W}}_{2}({\cal Q})$ weakly converging to $q^{*}(\text{\rm d}x\times\text{\rm d}y)=q_{x}^{*}(\text{\rm d}x)q_{y}^{*}(\text{\rm d}y)$, i.e. $W_{2}(q^{n},q^{*})\to 0,~{}\text{ as }n\to\infty.$ Then (39) implies that $W_{2}(q_{x}^{n},q_{x}^{*})\to 0,\;\;\;\;\text{ and }\;\;\;\;\;W_{2}(q_{y}^{n},q_{y}^{*})\to 0,$ hence $q_{x}^{n}\stackrel{{\scriptstyle w}}{{\longrightarrow}}q_{x}^{*},\;\;\;\;\text{ and }\;\;\;\;\;q_{y}^{n}\stackrel{{\scriptstyle w}}{{\longrightarrow}}q_{y}^{*}.$ By Proposition 7.7 of Santambrogio, (2015), $F_{x}(q_{x})$ and $F_{y}(q_{y})$ are l.s.c; hence we have $\liminf_{n}F_{x}(q_{x}^{n})\geq F_{x}(q_{x}^{*}),\;\;\;\;\text{ and }\;\;\;\;\;\liminf_{n}F_{y}(q_{y}^{n})\geq F_{y}(q_{y}^{*}).$ (41) As $\log\pi(x,y)$ is continuous, and hence bounded, on the compact set ${\cal X}\times{\cal Y}$, by the definition of weak convergence, we have that $\lim_{n}F_{xy}(q^{n})=F_{xy}(q^{*}).$ (42) From (41)-(42), $\liminf_{n}F(q^{n})\geq F(q^{*}),$ (43) proving that $F(q)$ is l.s.c. We now show that $F(q)$ is convex. Consider two measures $q^{(1)},q^{(2)}\in{\cal Q}$ and any $t\in(0,1)$. Because $f(z)=z\log(z)$ is convex, $F_{x}(q_{x})=\int f(q_{x})\text{\rm d}x$ and $F_{y}(q_{y})=\int f(q_{y})\text{\rm d}y$ are convex. Also, $F_{xy}(q)$ is linear and hence convex. 
The same pointwise convexity of $f$ shows that the joint entropy term $q\mapsto\int_{{\cal X}\times{\cal Y}}f\big{(}q(x,y)\big{)}\text{\rm d}x\text{\rm d}y$ is convex in $q$ (note that a convex combination $tq^{(1)}+(1-t)q^{(2)}$ need not be a product measure, so we argue on the joint density directly); together with the linearity of $F_{xy}$, this gives, for any $t\in(0,1)$, $\displaystyle F(tq^{(1)}+(1-t)q^{(2)})\leq tF(q^{(1)})+(1-t)F(q^{(2)}),$ proving that $F(q)$ is convex. ∎ ###### Proof of Corollary 3. As ${\cal X}$ and ${\cal Y}$ are compact, by the Prokhorov theorem, ${\cal P}({\cal X})$ and ${\cal P}({\cal Y})$ are compact (w.r.t. the weak convergence, and also w.r.t. the Wasserstein metric). As ${\cal P}_{2}({\cal X})\subset{\cal P}({\cal X})$, for any sequence of measures $\\{\mu_{n}\\}$ in ${\cal P}_{2}({\cal X})$, there must exist a subsequence $\\{\mu_{n_{k}}\\}$ weakly converging to some measure $\mu\in{\cal P}({\cal X})$. As ${\cal X}$ is compact, $\int_{{\cal X}}|x|^{2}\mu(\text{\rm d}x)<\infty$, hence $\mu\in{\cal P}_{2}({\cal X})$. This implies that ${\cal P}_{2}({\cal X})$ is compact. Similarly, ${\cal P}_{2}({\cal Y})$ is compact, and therefore the product space ${\cal Q}={\mathbb{W}}_{2}({\cal X})\otimes{\mathbb{W}}_{2}({\cal Y})$ is compact. Recall that ${\mathbb{W}}_{2}({\cal X})$ (resp. ${\mathbb{W}}_{2}({\cal Y})$) is ${\cal P}_{2}({\cal X})$ (resp. ${\cal P}_{2}({\cal Y})$) equipped with the Wasserstein distance. From Theorem 2, $F(q)$ is l.s.c. on the compact space ${\cal Q}$; by the Weierstrass theorem, there exists $q^{*}\in{\cal Q}$ such that $F(q^{*})=\min\\{F(q):q\in{\cal Q}\\}$. The uniqueness of $q^{*}$ is implied by the fact that $F(q)$ is convex. ∎ ###### Proof of Theorem 4. Given $q_{y}^{(t)}$, define $q_{x}^{*}(x)=\exp\Big{(}{\mathbb{E}}_{q_{y}^{(t)}}\big{[}\log\pi(x,y)\big{]}+C(q_{y}^{(t)})\Big{)},$ where $C(q_{y}^{(t)})$ is the normalising constant. We have that $\displaystyle F(q^{(t)})=\text{\rm KL}(q_{x}^{(t)}q_{y}^{(t)}\|\pi)$ $\displaystyle=$ $\displaystyle\int q_{x}^{(t)}(x)q_{y}^{(t)}(y)\log\frac{q_{x}^{(t)}(x)q_{y}^{(t)}(y)}{\pi(x,y)}\text{\rm d}x\times\text{\rm d}y$ $\displaystyle=$ $\displaystyle\int q_{x}^{(t)}(x)\Big{(}\log q_{x}^{(t)}(x)-{\mathbb{E}}_{q_{y}^{(t)}}\big{(}\log\pi(x,y)\big{)}\Big{)}\text{\rm d}x+E(q_{y}^{(t)})$ $\displaystyle=$ $\displaystyle\text{\rm KL}(q_{x}^{(t)}\|q_{x}^{*})+E(q_{y}^{(t)})+C(q_{y}^{(t)})$ with $E(q_{y}^{(t)})=\int q_{y}^{(t)}(y)\log q_{y}^{(t)}(y)\text{\rm d}y$. If the step size $h_{x}$ is sufficiently small, Lemma 1 guarantees that $\text{\rm KL}(q_{x}^{(t+1)}\|q_{x}^{*})\leq\text{\rm KL}(q_{x}^{(t)}\|q_{x}^{*});$ hence $F(q_{x}^{(t+1)}q_{y}^{(t)})\leq F(q_{x}^{(t)}q_{y}^{(t)})=F(q^{(t)}).$ (44) Given $q_{x}^{(t+1)}$, define $q_{y}^{*}(y)=\exp\Big{(}{\mathbb{E}}_{q_{x}^{(t+1)}}\big{[}\log\pi(x,y)\big{]}+C(q_{x}^{(t+1)})\Big{)}.$ We have that $F(q_{x}^{(t+1)}q_{y}^{(t)})=\text{\rm KL}(q_{y}^{(t)}\|q_{y}^{*})+E(q_{x}^{(t+1)})+C(q_{x}^{(t+1)}).$ Lemma 1 guarantees that $\text{\rm KL}(q_{y}^{(t+1)}\|q_{y}^{*})\leq\text{\rm KL}(q_{y}^{(t)}\|q_{y}^{*})$ and hence $F(q^{(t+1)})=F(q_{x}^{(t+1)}q_{y}^{(t+1)})\leq F(q_{x}^{(t+1)}q_{y}^{(t)}).$ (45) From (44)-(45), $F(q^{(t)})$ is non-increasing in $t$. By Corollary 3, $q^{(t)}$ must converge to the unique minimizer $q^{*}$ of $F(q)$. ∎ ###### Proof of Theorem 6.
Under the conditions (A1) and (A2), by Theorem 1 of Zhang and Gao, (2019), ${\mathbb{E}}_{p_{0}^{(n)}}\Big{[}{\mathbb{E}}_{q_{n}^{*}}\Big{(}\|\theta-\theta_{0}\|_{2}^{2}\Big{)}\Big{]}=O(\varepsilon_{n}^{2}+\gamma_{n}^{2})$ (46) with $\gamma_{n}^{2}=\frac{1}{n}\min_{q\in\mathcal{Q}}{\mathbb{E}}_{p_{0}^{(n)}}\Big{[}\text{\rm KL}\big{(}q\|\pi_{n}\big{)}\Big{]}.$ (47) Denote by $p^{(n)}(X^{(n)})=\int p_{\theta}^{(n)}(X^{(n)})\pi_{0}(\text{\rm d}\theta)$ the marginal likelihood. For any $q\in{\cal Q}$, we have $\displaystyle\gamma_{n}^{2}$ $\displaystyle\leq$ $\displaystyle\frac{1}{n}{\mathbb{E}}_{p_{0}^{(n)}}\Big{[}\int\log\frac{q(\theta)p^{(n)}(X^{(n)})}{\pi_{0}(\theta)p_{\theta}^{(n)}(X^{(n)})}q(\text{\rm d}\theta)\Big{]}$ (48) $\displaystyle=$ $\displaystyle\frac{1}{n}\text{\rm KL}(q\|\pi_{0})+\frac{1}{n}\int\Big{(}p_{0}^{(n)}(X^{(n)})\int\log\frac{p^{(n)}(X^{(n)})}{p_{\theta}^{(n)}(X^{(n)})}q(\text{\rm d}\theta)\Big{)}\text{\rm d}X^{(n)}$ $\displaystyle=$ $\displaystyle\frac{1}{n}\text{\rm KL}(q\|\pi_{0})+\frac{1}{n}{\mathbb{E}}_{q}\Big{(}\int p_{0}^{(n)}(X^{(n)})\log\frac{p_{0}^{(n)}(X^{(n)})}{p_{\theta}^{(n)}(X^{(n)})}\text{\rm d}X^{(n)}-\int p_{0}^{(n)}(X^{(n)})\log\frac{p_{0}^{(n)}(X^{(n)})}{p^{(n)}(X^{(n)})}\text{\rm d}X^{(n)}\Big{)}$ $\displaystyle=$ $\displaystyle\frac{1}{n}\text{\rm KL}(q\|\pi_{0})+\frac{1}{n}{\mathbb{E}}_{q}\Big{(}\text{\rm KL}\big{(}p_{0}^{(n)}\|p_{\theta}^{(n)}\big{)}-\text{\rm KL}\big{(}p_{0}^{(n)}\|p^{(n)}\big{)}\Big{)}$ $\displaystyle\leq$ $\displaystyle\frac{1}{n}\text{\rm KL}(q\|\pi_{0})+\frac{1}{n}{\mathbb{E}}_{q}\Big{(}\text{\rm KL}\big{(}p_{0}^{(n)}\|p_{\theta}^{(n)}\big{)}\Big{)}.$ Select $q(\theta)=N(\theta_{0},1/nI_{d})$, and estimate the first term in (48). $\frac{1}{n}\text{\rm KL}(q\|\pi_{0})\leq\frac{1}{n}|{\mathbb{E}}_{q}\log q(\theta)|+\frac{1}{n}|{\mathbb{E}}_{q}\log\pi_{0}(\theta)|.$ We have that $\frac{1}{n}|{\mathbb{E}}_{q}\log q(\theta)|\leq\frac{1}{n}\big{(}\frac{d}{2}\log(2\pi)+\frac{d}{2}\log n+\frac{d}{2}\big{)}=o(1).$ By Assumption (A3), $\displaystyle\frac{1}{n}|{\mathbb{E}}_{q}\log\pi_{0}(\theta)|$ $\displaystyle\leq$ $\displaystyle\frac{1}{n}\Big{(}|\log\pi_{0}(\theta_{0})|+{\mathbb{E}}_{q}|\log\pi_{0}(\theta)-\log\pi_{0}(\theta_{0})|\Big{)}$ $\displaystyle\leq$ $\displaystyle\frac{1}{n}\Big{(}|\log\pi_{0}(\theta_{0})|+C_{3}{\mathbb{E}}_{q}\|\theta-\theta_{0}\|_{2}\Big{)}$ $\displaystyle\leq$ $\displaystyle\frac{1}{n}\Big{(}|\log\pi_{0}(\theta_{0})|+C_{3}\sqrt{\frac{d}{n}}\Big{)}=o(1).$ Therefore, $\frac{1}{n}\text{\rm KL}(q\|\pi_{0})=o(1).$ (49) We now estimate the second term in (48). By Assumption (A3), $\displaystyle\frac{1}{n}{\mathbb{E}}_{q}\Big{(}\text{\rm KL}\big{(}p_{0}^{(n)}\|p_{\theta}^{(n)}\big{)}\Big{)}$ $\displaystyle\leq$ $\displaystyle\frac{1}{n}{\mathbb{E}}_{q}\Big{(}{\mathbb{E}}_{p_{0}^{(n)}}\big{|}\log p_{\theta}^{(n)}(X^{(n)})-\log p_{\theta_{0}}^{(n)}(X^{(n)})\big{|}\Big{)}$ (50) $\displaystyle\leq$ $\displaystyle\frac{1}{n}C_{5}{\mathbb{E}}_{q}\big{(}\|\theta-\theta_{0}\|_{2}\big{)}$ $\displaystyle\leq$ $\displaystyle\frac{1}{n}C_{5}\sqrt{\frac{d}{n}}=o(1).$ Equations (48), (49) and (50) imply that $\gamma_{n}^{2}=o(1)$. 
Therefore, from (46) and noting that $\varepsilon_{n}^{2}=o(1)$, we have that ${\mathbb{E}}_{p_{0}^{(n)}}\Big{[}{\mathbb{E}}_{q_{n}^{*}}\Big{(}\|\theta-\theta_{0}\|_{2}^{2}\Big{)}\Big{]}=o(1).$ By Markov’s inequality ${\mathbb{E}}_{p_{0}^{(n)}}\Big{[}q_{n}^{*}\big{(}\|\theta-\theta_{0}\|_{2}^{2}>\epsilon\big{)}\Big{]}\leq\frac{{\mathbb{E}}_{p_{0}^{(n)}}\Big{[}{\mathbb{E}}_{q_{n}^{*}}\Big{(}\|\theta-\theta_{0}\|_{2}^{2}\Big{)}\Big{]}}{\epsilon}\to 0,$ implying $q_{n}^{*}\Big{(}\|\theta-\theta_{0}\|_{2}^{2}>\epsilon\Big{)}\stackrel{{\scriptstyle n\to\infty}}{{\longrightarrow}}0,\;\;\;\;p^{(n)}_{0}-a.s.$ ∎ ## References * Alquier and Ridgway, (2019) Alquier, P. and Ridgway, J. (2019). Concentration of tempered posteriors and of their variational approximations. Annals of Statistics (to appear). * Ambrosio et al., (2005) Ambrosio, L., Gigli, N., and Savaré, G. (2005). Gradient Flows in Metric Spaces and in the Space of Probability Measures. Birkhauser. * Attias, (1999) Attias, H. (1999). Inferring parameters and structure of latent variable models by variational Bayes. In Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence, pages 21–30. * Besag, (1994) Besag, J. (1994). Comments on “Representations of knowledge in complex systems” by U. Grenander and M. I. Miller. J. Roy. Statist. Soc. Ser. B, 56(4):591–592. * Blei et al., (2017) Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. (2017). Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877. * Chen et al., (2014) Chen, T., Fox, E., and Guestrin, C. (2014). Stochastic gradient Hamiltonian Monte Carlo. In Xing, E. P. and Jebara, T., editors, Proceedings of the 31st International Conference on Machine Learning, volume 32,2 of Proceedings of Machine Learning Research, pages 1683–1691, Beijing, China. PMLR. * Cheng and Bartlett, (2018) Cheng, X. and Bartlett, P. (2018). Convergence of Langevin MCMC in KL-divergence. In Janoos, F., Mohri, M., and Sridharan, K., editors, Proceedings of Algorithmic Learning Theory, volume 83 of Proceedings of Machine Learning Research, pages 186–211. PMLR. * Chopin et al., (2013) Chopin, N., Jacob, P. E., and Papaspiliopoulos, O. (2013). SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(3):397–426. * Dalalyan, (2017) Dalalyan, A. S. (2017). Theoretical guarantees for approximate sampling from a smooth and log-concave density. J. R. Stat. Soc. B, 79:651–676. * Galy-Fajou et al., (2021) Galy-Fajou, T., Perrone, V., and Opper, M. (2021). Flexible and efficient inference with particles for the variational Gaussian approximation. Entropy, 23(8). * Ghosal et al., (2000) Ghosal, S., Ghosh, J. K., and van der Vaart, A. W. (2000). Convergence rates of posterior distributions. Ann. Statist., 28(2):500–531. * Giordani et al., (2013) Giordani, P., Mun, X., Tran, M.-N., and Kohn, R. (2013). Flexible multivariate density estimation with marginal adaptation. Journal of Computational and Graphical Statistics, 22(4):814–829. * Girolami and Calderhead, (2011) Girolami, M. and Calderhead, B. (2011). Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123–214. * Gunawan et al., (2022) Gunawan, D., Kohn, R., and Tran, M. (2022). Flexible and robust particle density tempering for state space models. Econometrics and Statistics. * Hastings, (1970) Hastings, W.
K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika. * Hwang et al., (2005) Hwang, C.-R., Hwang-Ma, S.-Y., and Sheu, S.-J. (2005). Accelerating diffusions. The Annals of Applied Probability, 15(2):1433–1444. * Jordan et al., (1998) Jordan, R., Kinderlehrer, D., and Otto, F. (1998). The variational formulation of the Fokker–Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1–17. * Kim et al., (1998) Kim, S., Shephard, N., and Chib, S. (1998). Stochastic volatility: likelihood inference and comparison with ARCH models. The Review of Economic Studies, 65(3):361–393. * Kim et al., (2022) Kim, S., Song, Q., and Liang, F. (2022). Stochastic gradient Langevin dynamics with adaptive drifts. Journal of Statistical Computation and Simulation, 92(2):318–336. * Lambert et al., (2022) Lambert, M., Chewi, S., Bach, F., Bonnabel, S., and Rigollet, P. (2022). Variational inference via Wasserstein gradient flows. NeurIPS 2022. * Li et al., (2016) Li, C., Chen, C., Carlson, D., and Carin, L. (2016). Preconditioned stochastic gradient Langevin dynamics for deep neural networks. In Thirtieth AAAI Conference on Artificial Intelligence. * Metropolis et al., (1953) Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092. * Otto, (2001) Otto, F. (2001). The geometry of dissipative evolution equations: the porous medium equation. Comm. Partial Differential Equations, 26:101–174. * Park and Casella, (2008) Park, T. and Casella, G. (2008). The Bayesian lasso. Journal of the American Statistical Association, 103(482):681–686. * Pavliotis, (2014) Pavliotis, G. A. (2014). Stochastic Processes and Applications: Diffusion Processes, the Fokker–Planck and Langevin Equations. Springer. * Quiroz et al., (2019) Quiroz, M., Villani, M., Kohn, R., and Tran, M. (2019). Speeding up MCMC by efficient data subsampling. Journal of the American Statistical Association, 114:831–843. * Robert and Casella, (1999) Robert, C. P. and Casella, G. (1999). Monte Carlo Statistical Methods, volume 2. Springer. * Roberts and Tweedie, (1996) Roberts, G. O. and Tweedie, R. L. (1996). Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, pages 341–363. * Santambrogio, (2015) Santambrogio, F. (2015). Optimal Transport for Applied Mathematicians. Birkhauser. * Tan, (2021) Tan, L. S. (2021). Use of model reparametrization to improve variational Bayes. Journal of the Royal Statistical Society Series B, 83(1):30–57. * Taylor, (1982) Taylor, S. J. (1982). Financial returns modelled by the product of two stochastic processes — a study of daily sugar prices 1961-79. In Anderson, O. D., editor, Time Series Analysis: Theory and Practice, page 203–226. Amsterdam: North-Holland. * Tran et al., (2020) Tran, M.-N., Nguyen, N., Nott, D., and Kohn, R. (2020). Bayesian deep net GLM and GLMM. Journal of Computational and Graphical Statistics, 29(1):97–113. * Villani, (2009) Villani, C. (2009). Optimal Transport: Old and New, volume 338. Springer. * Wand et al., (2011) Wand, M. P., Ormerod, J. T., Padoan, S. A., and Frühwirth, R. (2011). Mean field variational Bayes for elaborate distributions. Bayesian Anal., 6(4):847–900. * Wang and Blei, (2013) Wang, C. and Blei, D. M. (2013). Variational inference in nonconjugate models. Journal of Machine Learning Research.
If $(s,a,s')$ is in $\IOAtrans(A)$, it means that $A$ can move from state $s$ to state $s'$ by executing action $a$. There is also an equivalence relation $\IOAtask(A)$ on the output and internal actions, which is used for enforcing fairness conditions—the basic idea is that in a fair execution some action in each equivalence class must be executed eventually (a more accurate definition will be given below).

The I/O automaton model carries with it a lot of specialized jargon. We'll try to avoid it as much as possible. One thing that will be difficult to avoid in reading [186] is the notion of a signature, which is just the tuple $\IOAsig(A) = (\IOAin(A), \IOAout(A), \IOAint(A))$ describing the actions of an automaton $A$.

### Enabled actions

An action $a$ is enabled in some state $s$ if $\IOAtrans(A)$ contains at least one transition $(s,a,s')$. Input actions are always enabled—this is a requirement of the model. Output and internal actions—the "locally controlled" actions—are not subject to this restriction. A state $s$ is quiescent if only input actions are enabled in $s$.

### Executions, fairness, and traces

An execution of $A$ is a sequence $s_{0} a_{0} s_{1} a_{1} \dots$ where each triple $(s_{i}, a_{i}, s_{i+1})$ is in $\IOAtrans(A)$. Executions may be finite or infinite; if finite, they must end in a state. A trace of $A$ is a subsequence of some execution consisting precisely of the external (i.e., input and output) actions, with states and internal actions omitted. If we don't want to get into the guts of a particular I/O automaton—and we usually don't, unless we can't help it because we have to think explicitly about states for some reason—we can describe its externally visible behavior by just giving its set of traces.

### Composition of automata

Composing a set of I/O automata yields a new super-automaton whose state set is the Cartesian product of the state sets of its components and whose action set is the union of the action sets of its components. A transition with a given action $a$ updates the states of all components that have $a$ as an action and has no effect on the states of the other components. The classification of actions into the three classes is used to enforce some simple compatibility rules on the component automata; in particular:

* An internal action of a component is never an action of another component—internal actions are completely invisible.
* No output action of a component can be an output action of another component.
* No action is shared by infinitely many components. (Note that infinite, but countable, compositions are permitted.) In practice this means that no action can be an input action of infinitely many components, since the preceding rules mean that any action is an output or internal action of at most one component.

All output actions of the components are also output actions of the composition. An input action of a component is an input action of the composition only if no other component supplies it as an output; if some component does supply it as an output, it becomes an output action of the composition. Internal actions remain internal (and largely useless, except for bookkeeping purposes). The equivalence relation of the composition is the union of the relations of the components; this turns out to give a genuine equivalence relation on output and internal actions precisely because the first two compatibility rules hold. A small executable sketch of these composition rules appears below.
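Since most of this is bookkeeping over signatures, an executable sketch may help. The encoding below is our own, not anything from [186]: states are arbitrary hashable values, actions are names, and the transition relation is a set of triples. It implements the pairwise compatibility rules, the composite signature, and the composite transition step described above.

```python
from dataclasses import dataclass

# Minimal encoding of an I/O automaton: action names partitioned into
# inputs/outputs/internals, start states, and a transition relation
# given as a set of (state, action, state) triples.
@dataclass(frozen=True)
class IOAutomaton:
    inputs: frozenset
    outputs: frozenset
    internals: frozenset
    starts: frozenset
    trans: frozenset

    def actions(self):
        return self.inputs | self.outputs | self.internals

    def enabled(self, s, a):
        """a is enabled in s if some transition (s, a, s') exists."""
        return any(s1 == s and act == a for (s1, act, _) in self.trans)

def compatible(A, B):
    # Internal actions are private; outputs never collide.
    return (not A.internals & B.actions()
            and not B.internals & A.actions()
            and not A.outputs & B.outputs)

def composite_signature(A, B):
    assert compatible(A, B)
    outputs = A.outputs | B.outputs
    internals = A.internals | B.internals
    # An input stays an input only if no component supplies it as an output.
    inputs = (A.inputs | B.inputs) - outputs
    return inputs, outputs, internals

def composite_step(A, B, state, a):
    """Successors of the composite state on action a: every component that
    has a in its signature moves; the other component stays put."""
    sa, sb = state
    succ_a = [s2 for (s1, act, s2) in A.trans if s1 == sa and act == a] \
        if a in A.actions() else [sa]
    succ_b = [s2 for (s1, act, s2) in B.trans if s1 == sb and act == a] \
        if a in B.actions() else [sb]
    return [(na, nb) for na in succ_a for nb in succ_b]
```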
Given an execution or trace $X$ of a composite automaton that includes $A$, we can construct the corresponding execution or trace $X|A$ of $A$, which includes just the states of $A$ and the actions visible to $A$ (events that don't change the state of $A$ drop out). The definition of composition is chosen so that $X|A$ is in fact an execution/trace of $A$ whenever $X$ is.

### Hiding actions

Composing $A$ and $B$ continues to expose the outputs of $A$ even if they line up with inputs of $B$. While this may sometimes be desirable, often we want to shove such internal communication under the rug. The model lets us do this by redefining the signature of an automaton to make some or all of the output actions into internal actions.

### Fairness

I/O automata come with a built-in definition of fair executions, where an execution of $A$ is fair if, for each equivalence class $C$ of actions in $\IOAtask(A)$, at least one of the following holds:

* the execution is finite and no action in $C$ is enabled in the final state, or
* the execution is infinite and there are infinitely many occurrences of actions in $C$, or
* the execution is infinite and there are infinitely many states in which no action in $C$ is enabled.

If we think of $C$ as corresponding to some thread or process, this says that $C$ gets infinitely many chances to do something in an infinite execution, but may not actually do anything if it gives up and stops waiting (the third case). The finite case essentially says that a finite execution isn't fair unless nobody is waiting at the end. The motivation for this particular definition is that it guarantees (a) that any finite execution can be extended to a fair execution and (b) that the restriction $X|A$ of a fair execution or trace $X$ is also fair.

Fairness is useful e.g. for guaranteeing message delivery in a message-passing system: make each message-delivery action its own task class and each message will eventually be delivered; similarly make each message-sending action its own task class and a process will eventually send every message it intends to send. Tweaking the task classes can allow for possibilities of starvation, e.g. if all message-delivery actions are equivalent then a spammer can shut down the system in a "fair" execution where only his (infinitely many) messages are delivered.

### Specifying an automaton

The typical approach is to write down preconditions and effects for each action (for input actions, the preconditions are empty). An example is the spambot below; the original algorithm environment did not survive conversion, so this is a reconstruction of its content:

* input action $\IOAsetMessageAction(m)$ — effects: $\IOAspambotState \leftarrow m$.
* output action $\IOAspamAction(m)$ — precondition: $\IOAspambotState = m$; effects: none (keep spamming).

(Plus an initial state, e.g. $\IOAspambotState = \bot$, where $\bot$ is not a possible message, and a task partition, of which we will speak more below when we talk about liveness properties.) An executable version of this automaton appears below.

## High-level view: traces

When studying the behavior of a system, traces are what we really care about, and we want to avoid talking about states as much as possible. So what we'll aim to do is to get rid of the states early by computing the set of traces (or fair traces) of each automaton in our system, then compose traces to get traces for the system as a whole.
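As a concrete illustration of the precondition/effect style and of trace extraction, here is one way the spambot might look in the toy encoding sketched earlier. The two-message alphabet, the `BOTTOM` sentinel, and all helper names are our own assumptions, not part of the model.

```python
# Spambot over a tiny message alphabet, in the IOAutomaton encoding above.
BOTTOM = None                 # initial "no message yet" state
MESSAGES = ("m1", "m2")       # assumed finite alphabet, illustration only

def spambot():
    states = (BOTTOM,) + MESSAGES
    trans = set()
    for s in states:
        for m in MESSAGES:
            # input setMessage(m): always enabled, effect: state <- m
            trans.add((s, ("setMessage", m), m))
    for m in MESSAGES:
        # output spam(m): precondition state == m, effect: none
        trans.add((m, ("spam", m), m))
    return IOAutomaton(
        inputs=frozenset(("setMessage", m) for m in MESSAGES),
        outputs=frozenset(("spam", m) for m in MESSAGES),
        internals=frozenset(),
        starts=frozenset({BOTTOM}),
        trans=frozenset(trans),
    )

def trace(execution, A):
    """Project an execution s0 a0 s1 a1 ... onto its external actions."""
    external = A.inputs | A.outputs
    return [x for x in execution[1::2] if x in external]

A = spambot()
run = [BOTTOM, ("setMessage", "m1"), "m1", ("spam", "m1"), "m1"]
assert all(A.enabled(s, a) for s, a in zip(run[0::2], run[1::2]))
print(trace(run, A))  # [('setMessage', 'm1'), ('spam', 'm1')]
```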
Our typical goal will be to show that the resulting set of traces has some desirable properties, usually of the form (1) nothing bad happens (a safety property); (2) something good eventually happens (a liveness property); or (3) the horribly complex composite automaton representing this concrete system acts just like that nice clean automaton representing a specification (a simulation).

Very formally, a trace property specifies both the signature of the automaton and a set of traces, such that all traces (or perhaps fair traces) of the automaton appear in the set. We'll usually forget about the first part. Tricky detail: it's OK if not all traces in $P$ are generated by $A$ (we want $\IOAtrace(A) \subseteq P$, but not necessarily $\IOAtrace(A) = P$). But $\IOAtrace(A)$ will be pretty big (it includes, for example, all finite sequences of input actions), so hopefully the fact that $A$ has to do something with inputs will tell us something useful.

### Example

A property we might demand of the spambot above (or some other abstraction of a message channel) is that it only delivers messages that have previously been given to it. As a trace property this says that in any trace $t$, if $t_{k} = \IOAspamAction(m)$, then $t_{j} = \IOAsetMessageAction(m)$ for some $j < k$. (As a set, this is just the set of all sequences of external spambot actions that have this property.) Call this property $P$. (A small checker for $P$ on finite traces appears below.)

To prove that the spambot automaton given above satisfies $P$, we might argue that in any execution $s_{0}a_{0}s_{1}a_{1}\dots$, the state $s_{i}$ equals the message $m$ of the last $\IOAsetMessageAction$ action preceding $s_{i}$, or $\bot$ if there is no such action. This is easily proved by induction on $i$. Since $\IOAspamAction(m)$ can only transmit the current state, it then follows that if $\IOAspamAction(m)$ follows $s_{i} = m$, it also follows some earlier $\IOAsetMessageAction(m)$, as claimed.

However, there are traces that satisfy $P$ that don't correspond to executions of the spambot; for example, consider the trace $\IOAsetMessageAction(0) \IOAsetMessageAction(1) \IOAspamAction(0)$. This satisfies $P$ (0 was previously given to the automaton), but the automaton won't generate it because the 0 was overwritten by the later $\IOAsetMessageAction(1)$ action. Whether this indicates a problem with our automaton not being nondeterministic enough, or with our trace property being too weak, is a question about what we really want the automaton to do.

### Types of trace properties

#### Safety properties

$P$ is a safety property if

* $P$ is nonempty,
* $P$ is prefix-closed, i.e. if $xy$ is in $P$ then $x$ is in $P$, and
* $P$ is limit-closed, i.e. if $x_{1}, x_{1}x_{2}, x_{1}x_{2}x_{3}, \dots$ are all in $P$, then so is the infinite sequence obtained by taking their limit.

Because of limit-closure, it's enough to prove that $P$ holds for all finite traces of $A$ to show that it holds for all traces (and thus for all fair traces), since any trace is a limit of finite traces. Conversely, if $P$ fails for some trace or fair trace, limit-closure implies that $P$ already fails on some finite prefix of that trace, so again looking at only finite prefixes is enough. The spambot property mentioned above is a safety property.

Safety properties are typically proved using invariants, properties that are shown by induction to hold in all reachable states.

#### Liveness properties

$P$ is a liveness property of $A$ if any finite sequence of actions in $\IOAacts(A)$ has an extension in $P$.
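Checking the spambot property $P$ on a finite trace is a one-pass scan; the helper name and the trace representation below are ours, matching the toy encoding used earlier.

```python
# Finite-trace membership check for the spambot property P above:
# every spam(m) must be preceded by an earlier setMessage(m).
def satisfies_P(trace):
    seen = set()  # messages already given to the automaton
    for kind, m in trace:
        if kind == "setMessage":
            seen.add(m)
        elif kind == "spam" and m not in seen:
            return False
    return True

# The trace from the example above satisfies P even though the spambot
# itself cannot generate it (the 0 was overwritten before being spammed).
assert satisfies_P([("setMessage", 0), ("setMessage", 1), ("spam", 0)])
assert not satisfies_P([("spam", 0), ("setMessage", 0)])
```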
Note that liveness properties will in general include many sequences of actions that aren't traces of $A$, since they are extensions of finite sequences that $A$ can't do (e.g. starting the execution with an action not enabled in the initial state). If you want to restrict yourself only to proper executions of $A$, use a safety property. (It's worth noting that the same property $P$ can't do both: any $P$ that is both a liveness and a safety property includes all sequences of actions, because of the closure rules.)

Liveness properties are those that are always eventually satisfiable; asserting one says that the property is eventually satisfied. The typical way to prove a liveness property is with a progress function: a function $f$ on states that (a) drops by at least 1 every time something that happens infinitely often happens (like an action from an always-enabled task class) and (b) guarantees $P$ once it reaches 0.

An example would be the following property we might demand of our spambot: any trace with at least one $\IOAsetMessageAction(\dots)$ action contains infinitely many $\IOAspamAction(\dots)$ actions. Whether the spambot automaton will satisfy this property (in fair traces) depends on its task partition. If all $\IOAspamAction(\dots)$ actions are in the same equivalence class, then any execution with at least one $\IOAsetMessageAction(\dots)$ action will have some $\IOAspamAction(\dots)$ action enabled at all times thereafter, so a fair trace containing a $\IOAsetMessageAction(\dots)$ action can't be finite (since some spam action is enabled in the last state) and, if infinite, contains infinitely many spam messages (since spam messages of some sort are enabled in all but an initial finite prefix). On the other hand, if $\IOAspamAction(m_1)$ and $\IOAspamAction(m_2)$ are not equivalent in $\IOAtask(A)$, then the spambot doesn't satisfy the liveness property: in an execution that alternates $\IOAsetMessageAction(m_1) \IOAsetMessageAction(m_2) \IOAsetMessageAction(m_1) \IOAsetMessageAction(m_2) \dots$ there are infinitely many states in which $\IOAspamAction(m_1)$ is not enabled, so fairness doesn't require doing it even once, and similarly for $\IOAspamAction(m_2)$.

#### Other properties

Any other property $P$ can be expressed as the intersection of a safety property (the closure of $P$) and a liveness property (the union of $P$ and the set of all finite sequences that aren't prefixes of traces in $P$). The intuition is that the safety property prunes out the excess junk we threw into the liveness property to make it a liveness property, since any sequence that isn't a prefix of a trace in $P$ won't go into the safety property. This leaves only the traces in $P$.

Example: let $P = \{ 0^{n}1^{\infty} \}$ be the set of traces where we eventually give up on our pointless 0-action and start doing only 1-actions forever. Then $P$ is the intersection of the safety property $S = \{ 0^{n}1^{m} \} \cup \{ 0^{\infty} \} \cup P$ (the extra junk comes from prefix- and limit-closure; in particular $0^{\infty}$, the limit of the sequences $0^{n}$, must be included for $S$ to be limit-closed) and the liveness property $L = \{ 0^{n}11^{m}0x \mid x \in \{0,1\}^{*} \} \cup P$. Property $S$ says that once we do a 1 we never do a 0, but allows executions of the form $0^{n}$ or $0^{\infty}$ where we never do a 1. Property $L$ says that we eventually do a 1-action, but that we can't stop unless we later do at least one 0-action. (A small sanity check of this decomposition on finite strings appears below.)

### Compositional arguments

The product of trace properties $P_{1}, P_{2}, \dots$ is the trace property $P$ where $T$ is in $P$ if and only if $T|\IOAsig(P_{i})$ is in $P_{i}$ for each $i$. If the $\{A_{i}\}$ satisfy the corresponding properties $\{P_{i}\}$ individually, then their composition satisfies the product property.
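The decomposition in the $0^{n}1^{\infty}$ example can be spot-checked mechanically on finite strings: $P$ itself contains only infinite traces, so the finite parts of $S$ and $L$ should never overlap. The membership predicates below are our own encodings of $S$ and $L$ restricted to finite binary strings.

```python
import itertools

# Finite-string membership tests for the safety/liveness decomposition above.
def in_S(w):
    # Finite part of S is {0^n 1^m}: once we do a 1, we never do a 0,
    # which for binary strings is exactly "no '10' substring".
    return "10" not in w

def in_L(w):
    # Finite part of L is {0^n 1 1^m 0 x}: some 1 occurs, and a 0 after it.
    i = w.find("1")
    return i != -1 and "0" in w[i:]

# S ∩ L should contain no finite string (their intersection is exactly P).
for n in range(8):
    for bits in itertools.product("01", repeat=n):
        w = "".join(bits)
        assert not (in_S(w) and in_L(w))
print("finite part of S ∩ L is empty, as expected")
```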
(For safety properties, often we prove something weaker about the $A_{i}$, which is that each $A_{i}$ individually is not the first to violate $P$—i.e., it can't leave $P$ by executing an internal or output action. In an execution where inputs by themselves can't violate $P$, $P$ then holds.)

Product properties let us prove trace properties by smashing together properties of the component automata, possibly with some restrictions on the signatures to get rid of unwanted actions. The product operation itself is in a sense a combination of a Cartesian product (pick traces $t_{i}$ and smash them together) filtered by a consistency rule (the smashed trace must be consistent); it acts much like intersection (and indeed can be made identical to intersection if we treat a trace property with a given signature as a way of describing the set of all $T$ such that $T|\IOAsig(P_{i})$ is in $P_{i}$).

#### Example

Consider two spambots $A_1$ and $A_2$, where we identify the $\IOAspamAction(m)$ operation of $A_1$ with the $\IOAsetMessageAction(m)$ operation of $A_2$; we'll call this combined action $\IOAspamAction_1(m)$ to distinguish it from the output actions of $A_2$. We'd like to argue that the composite automaton $A_1+A_2$ satisfies the safety property (call it $P_{m}$) that any occurrence of $\IOAspamAction(m)$ is preceded by an occurrence of $\IOAsetMessageAction(m)$, where the signature of $P_{m}$ includes $\IOAsetMessageAction(m)$ and $\IOAspamAction(m)$ for some specific $m$ but no other operations. (This is an example of where trace property signatures can be useful without being limited to actions of any specific component automaton.)

To do so, we'll prove a stronger property $P'_{m}$, which is $P_{m}$ modified to include the $\IOAspamAction_1(m)$ action in its signature. Observe that $P'_m$ is the product of the safety properties for $A_1$ and $A_2$ restricted to $\IOAsig(P'_m)$, since the latter says that any trace that includes $\IOAspamAction(m)$ has a previous $\IOAspamAction_1(m)$ and the former says that any trace that includes $\IOAspamAction_1(m)$ has a previous $\IOAsetMessageAction(m)$. Since these properties hold for the individual $A_1$ and $A_2$, their product, and thus the restriction $P'_m$, holds for $A_1+A_2$, and so $P_{m}$ (as a further restriction) holds for $A_1+A_2$ as well.

Now let's prove the liveness property for $A_1+A_2$, that at least one occurrence of $\IOAsetMessageAction$ yields infinitely many $\IOAspamAction$ actions. Here we let $L_{1} = \{ \text{at least one $\IOAsetMessageAction$ action} \Rightarrow \text{infinitely many $\IOAspamAction_1$ actions} \}$ and $L_{2} = \{ \text{at least one $\IOAspamAction_1$ action} \Rightarrow \text{infinitely many $\IOAspamAction$ actions} \}$. The product of these properties is the set of all sequences with (a) no $\IOAsetMessageAction$ actions or (b) infinitely many $\IOAspamAction$ actions, which is what we want. This product holds if the individual properties $L_{1}$ and $L_{2}$ hold for $A_1+A_2$, which will be the case if we set $\IOAtask(A_1)$ and $\IOAtask(A_2)$ correctly.

### Simulation arguments

We can show that $\IOAtraces(A)$ is a subset of $\IOAtraces(B)$ (possibly after hiding some actions of $A$) by exhibiting a simulation relation $f$ between states of $A$ and states of $B$; formally, $f$ assigns to each state of $A$ a set of states of $B$. The requirements on $f$ are:

* If $s$ is in $\IOAstart(A)$, then $f(s)$ includes some element of $\IOAstart(B)$.
* If $(s,a,s')$ is in $\IOAtrans(A)$ and $s$ is reachable, then for any reachable $u$ in $f(s)$, there is a sequence of actions $x$ that takes $u$ to some $v$ in $f(s')$ with $\IOAtrace(x) = \IOAtrace(a)$.

Using these, we construct an execution of $B$ matching (in trace) an execution of $A$ by starting in $f(s_{0})$ and applying the second part of the definition to each action in the $A$ execution (including the hidden ones!).

#### Example

A single spambot $A$ can simulate the conjoined spambots $A_1+A_2$. Proof: let $f(s) = (s,s)$. Then $f(\bot) = (\bot, \bot)$ is a start state of $A_1+A_2$. Now consider a transition $(s,a,s')$ of $A$; the action $a$ is either (a) $\IOAsetMessageAction(m)$, giving $s' = m$; here we let $x = \IOAsetMessageAction(m) \IOAspamAction_1(m)$, with $\IOAtrace(x) = \IOAtrace(a)$ since $\IOAspamAction_1(m)$ is internal, and $f(s') = (m,m)$ the result of applying $x$; or (b) $a = \IOAspamAction(m)$, which does not change $s$ or $f(s)$; the matching $x$ is $\IOAspamAction(m)$, which also does not change $f(s)$ and has the same trace.

A different proof could take advantage of $f$ being a relation by defining $f(s) = \{ (s,s') \mid s' \in \IOAstates(A_2) \}$. Now we don't care about the state of $A_2$, and treat a $\IOAsetMessageAction(m)$ action of $A$ as the sequence $\IOAsetMessageAction(m)$ in $A_1+A_2$ (which updates the first component of the state correctly), and treat a $\IOAspamAction(m)$ action as $\IOAspamAction_1(m) \IOAspamAction(m)$ (which updates the second component—which we don't care about—and has the correct trace). In some cases an approach of this sort is necessary, because we don't know which simulated state we are heading for until we get an action from $A$.

Note that the converse doesn't work: $A_1+A_2$ doesn't simulate $A$, since there are traces of $A_1+A_2$ (e.g. $\IOAsetMessageAction(0) \IOAspamAction_1(0) \IOAsetMessageAction(1) \IOAspamAction(0)$) that don't restrict to traces of $A$. See <cit.> for a more complicated example of how one FIFO queue can simulate two FIFO queues and vice versa (a situation called bisimulation).

Since we are looking at traces rather than fair traces, this kind of simulation doesn't help much with liveness properties, but sometimes the connection between states plus a liveness proof for $B$ can be used to get a liveness proof for $A$ (essentially we have to argue that $A$ can't do infinitely many actions without triggering a $B$-action in an appropriate task class). Again see <cit.>.

[1] Dan Alistarh and James Aspnes. Sub-logarithmic test-and-set against a weak adversary. In Distributed Computing: 25th International Symposium, DISC 2011, volume 6950 of Lecture Notes in Computer Science, pages 97–109. Springer-Verlag, September 2011.
[2] Yehuda Afek, Noga Alon, Omer Barad, Eran Hornstein, Naama Barkai, and Ziv Bar-Joseph. A biological solution to a fundamental distributed computing problem. Science, 331(6014):183–185, 2011.
[3] Yehuda Afek, Noga Alon, Ziv Bar-Joseph, Alejandro Cornejo, Bernhard Haeupler, and Fabian Kuhn. Beeping a maximal independent set. In Proceedings of the 25th International Conference on Distributed Computing, DISC'11, pages 32–50, Berlin, Heidelberg, 2011.
[4] Dan Alistarh, James Aspnes, Keren Censor-Hillel, Seth Gilbert, and Morteza Zadimoghaddam. Optimal-time adaptive tight renaming, with applications to counting. In Proceedings of the Thirtieth Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, pages 239–248, June 2011.
[5] James Aspnes, Hagit Attiya, and Keren Censor-Hillel.
Polylogarithmic concurrent data structures from monotone circuits. Journal of the ACM, 59(1):2:1–2:24, February 2012. [6] James Aspnes, Hagit Attiya, Keren Censor-Hillel, and Faith Ellen. Limited-use snapshots with polylogarithmic step complexity. Journal of the ACM, 62(1):3, February 2015. [7] James Aspnes, Hagit Attiya, Keren Censor-Hillel, and Faith Ellen. Erratum: Limited-use atomic snapshots with polylogarithmic step J. ACM, 65(6):38:1–38:2, November 2018. [8] Yehuda Afek, James Aspnes, Edo Cohen, and Danny Vainstein. Brief announcement: Object oriented consensus. In Elad Michael Schiller and Alexander A. Schwarzmann, editors, Proceedings of the ACM Symposium on Principles of Distributed Computing, PODC 2017, Washington, DC, USA, July 25-27, 2017, pages 367–369. ACM, [9] Yehuda Afek, Hagit Attiya, Danny Dolev, Eli Gafni, Michael Merritt, and Nir Atomic snapshots of shared memory. J. ACM, 40(4):873–890, 1993. [10] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Computation in networks of passively mobile finite-state sensors. Distributed Computing, pages 235–253, March 2006. [11] Dana Angluin, James Aspnes, and David Eisenstat. Stably computable predicates are semilinear. In PODC '06: Proceedings of the twenty-fifth annual ACM symposium on Principles of distributed computing, pages 292–299, New York, NY, USA, 2006. ACM Press. [12] Dana Angluin, James Aspnes, and David Eisenstat. Fast computation by population protocols with a leader. Distributed Computing, 21(3):183–199, September 2008. [13] Dana Angluin, James Aspnes, and David Eisenstat. A simple population protocol for fast robust approximate majority. Distributed Computing, 21(2):87–102, July 2008. [14] Dan Alistarh, Hagit Attiya, Seth Gilbert, Andrei Giurgiu, and Rachid Guerraoui. Fast randomized test-and-set and renaming. In Nancy A. Lynch and Alexander A. Shvartsman, editors, Distributed Computing, 24th International Symposium, DISC 2010, Cambridge, MA, USA, September 13-15, 2010. Proceedings, volume 6343 of Lecture Notes in Computer Science, pages 94–108. Springer, 2010. [15] Dan Alistarh, James Aspnes, Seth Gilbert, and Rachid Guerraoui. The complexity of renaming. In Fifty-Second Annual IEEE Symposium on Foundations of Computer Science, pages 718–727, October 2011. [16] Dan Alistarh, James Aspnes, George Giakkoupis, and Philipp Woelfel. Randomized loose renmaing in $O(\log \log n)$ time. In 2013 ACM Symposium on Principles of Distributed Computing, pages 200–209, July 2013. [17] Hagit Attiya, Amotz Bar-Noy, Danny Dolev, David Peleg, and Rüdiger Reischuk. Renaming in an asynchronous environment. J. ACM, 37(3):524–548, 1990. [18] Hagit Attiya, Amotz Bar-Noy, and Danny Dolev. Sharing memory robustly in message-passing systems. Journal of the ACM, 42(1):124–142, 1995. [19] Karl Abrahamson. On achieving consensus using a shared memory. In Proceedings of the 7th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 291–302, 1988. [20] Hagit Attiya and Keren Censor. Tight bounds for asynchronous randomized consensus. Journal of the ACM, 55(5):20, October 2008. [21] James Aspnes and Keren Censor. Approximate shared-memory counting despite a strong adversary. In SODA '09: Proceedings of the Nineteenth Annual ACM -SIAM Symposium on Discrete Algorithms, pages 441–450, Philadelphia, PA, USA, 2009. Society for Industrial and Applied Mathematics. [22] James Aspnes, Keren Censor-Hillel, Hagit Attiya, and Danny Hendler. Lower bounds for restricted-use objects. SIAM J. Comput., 45(3):734–762, 2016. 
[23] Hagit Attiya and Keren Censor-Hillel. Lower bounds for randomized consensus under a weak adversary. SIAM J. Comput., 39(8):3885–3904, 2010. [24] James Aspnes and Keren Censor-Hillel. Atomic snapshots in ${O}(log^3 n)$ steps using randomized helping. In Yehuda Afek, editor, Distributed Computing: 27th International Symposium, DISC 2013, Jerusalem, Israel, October 14–18, 2013. Proceedings, volume 8205 of Lecture Notes in Computer Science, pages 254–268. Springer Berlin Heidelberg, 2013. [25] James Aspnes and Keren Censor-Hillel. Atomic snapshots in expected ${O}(log^3 n)$ steps using randomized Submitted to Distributed Computing. Available from [26] Dan Alistarh, Keren Censor-Hillel, and Nir Shavit. Are lock-free concurrent algorithms practically wait-free? arXiv preprint arXiv:1311.3200, 2013. [27] James Aspnes and Faith Ellen. Tight bounds for anonymous adopt-commit objects. In 23rd Annual ACM Symposium on Parallelism in Algorithms and Architectures, pages 317–324, June 2011. [28] Yehuda Afek, Faith Ellen, and Eli Gafni. Deterministic objects: Life beyond consensus. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing, PODC '16, pages 97–106, New York, NY, USA, 2016. [29] E. A. Akkoyunlu, K. Ekanadham, and R. V. Huber. Some constraints and tradeoffs in the design of network SIGOPS Oper. Syst. Rev., 9:67–74, November 1975. [30] Hagit Attiya and Arie Fouren. Adaptive and efficient algorithms for lattice agreement and renaming. SIAM Journal on Computing, 31(2):642–664, 2001. [31] Eshrat Arjomandi, Michael J. Fischer, and Nancy A. Lynch. Efficiency of synchronous versus asynchronous distributed systems. J. ACM, 30(3):449–456, 1983. [32] Yehuda Afek and Eli Gafni. Time and message bounds for election in synchronous and asynchronous complete networks. SIAM Journal on Computing, 20(2):376–394, 1991. [33] Sarita V. Adve and Kourosh Gharachorloo. Shared memory consistency models: A tutorial. Technical Report 95/7, DEC Western Research Laboratory, 1995. [34] Dan Alistarh and Rati Gelashvili. Recent algorithmic advances in population protocols. SIGACT News, 49(3):63–73, October 2018. [35] Dan Alistarh, Seth Gilbert, Rachid Guerraoui, and Corentin Travers. Of choices, failures and asynchrony: The many faces of set agreement. In Yingfei Dong, Ding-Zhu Du, and Oscar H. Ibarra, editors, ISAAC, volume 5878 of Lecture Notes in Computer Science, pages 943–953. Springer, 2009. [36] Hagit Attiya, Rachid Guerraoui, Danny Hendler, and Petr Kouznetsov. Synchronizing without locks is inherently expensive. In PODC '06: Proceedings of the twenty-fifth annual ACM symposium on Principles of distributed computing, pages 300–307, New York, NY, USA, 2006. ACM. [37] Yehuda Afek, Eli Gafni, John Tromp, and Paul M. B. Vitányi. Wait-free test-and-set (extended abstract). In Adrian Segall and Shmuel Zaks, editors, Distributed Algorithms, 6th International Workshop, WDAG '92, Haifa, Israel, November 2-4, 1992, Proceedings, volume 647 of Lecture Notes in Computer Science, pages 85–94. Springer, 1992. [38] James Aspnes and Maurice Herlihy. Fast randomized consensus using shared memory. Journal of Algorithms, 11(3):441–461, September 1990. [39] James Aspnes and Maurice Herlihy. Wait-free data structures in the asynchronous PRAM model. In Second Annual ACM Symposium on Parallel Algorithms and Architectures, pages 340–349, July 1990. [40] Hagit Attiya, Eshcar Hillel, and Alessia Milani. Inherent limitations on disjoint-access parallel implementations of transactional memory. 
In Friedhelm Meyer auf der Heide and Michael A. Bender, editors, SPAA 2009: Proceedings of the 21st Annual ACM Symposium on Parallelism in Algorithms and Architectures, Calgary, Alberta, Canada, August 11-13, 2009, pages 69–78. ACM, 2009. [41] Hagit Attiya, Maurice Herlihy, and Ophir Rachman. Atomic snapshots using lattice agreement. Distributed Computing, 8(3):121–132, 1995. [42] James Aspnes, Maurice Herlihy, and Nir Shavit. Counting networks. Journal of the ACM, 41(5):1020–1048, September 1994. [43] James Aspnes, Bernhard Haeupler, Alexander Tong, and Philipp Woelfel. Allocate-on-use space complexity of shared-memory algorithms. In Ulrich Schmid and Josef Widder, editors, 32nd International Symposium on Distributed Computing, DISC 2018, New Orleans, LA, USA, October 15–19, 2018, volume 121 of LIPIcs, pages 8:1–8:17. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018. [44] Hagit Attiya, Danny Hendler, and Philipp Woelfel. Tight RMR lower bounds for mutual exclusion and other problems. In Proceedings of the 40th annual ACM symposium on Theory of computing, STOC '08, pages 217–226, New York, NY, USA, 2008. ACM. [45] Baruch Awerbuch, Shay Kutten, Yishay Mansour, Boaz Patt-Shamir, and George Time optimal self-stabilizing synchronization. In Proceedings of the twenty-fifth annual ACM symposium on Theory of computing, pages 652–661. ACM, 1993. [46] Baruch Awerbuch, Shay Kutten, Yishay Mansour, Boaz Patt-Shamir, and George A time-optional self-stabilizing synchronizer using a phase clock. IEEE Transactions on Dependable and Secure Computing, 4(3):180–190, July–September 2007. [47] Hagit Attiya, Fabian Kuhn, C. Greg Plaxton, Mirjam Wattenhofer, and Roger Efficient adaptive collect using randomization. Distributed Computing, 18(3):179–188, 2006. [48] M. Ajtai, J. Komlós, and E. Szemerédi. An $o(n \log n)$ sorting network. In Proceedings of the fifteenth annual ACM symposium on Theory of computing, pages 1–9, New York, NY, USA, 1983. ACM. [49] James H. Anderson and Mark Moir. Towards a necessary and sufficient condition for wait-free synchronization (extended abstract). In André Schiper, editor, Distributed Algorithms, 7th International Workshop, WDAG '93, Lausanne, Switzerland, September 27-29, 1993, Proceedings, volume 725 of Lecture Notes in Computer Science, pages 39–53. Springer, 1993. [50] Hagit Attiya and Marios Mavronicolas. Efficiency of semisynchronous versus asynchronous networks. Mathematical Systems Theory, 27(6):547–571, November 1994. [51] Yehuda Afek and Michael Merritt. Fast, wait-free $(2k-1)$-renaming. In PODC, pages 105–112, 1999. [52] Yehuda Afek, Adam Morrison, and Guy Wertheim. From bounded to unbounded concurrency objects and back. In Proceedings of the 30th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, pages 119–128. ACM, 2011. [53] Thomas E. Anderson. The performance of spin lock alternatives for shared-money IEEE Trans. Parallel Distrib. Syst., 1(1):6–16, 1990. [54] James H. Anderson. Multi-writer composite registers. Distributed Computing, 7(4):175–195, 1994. [55] Dana Angluin. Local and global properties in networks of processors (extended In Proceedings of the twelfth annual ACM symposium on Theory of computing, STOC '80, pages 82–93, New York, NY, USA, 1980. ACM. [56] Noa Agmon and David Peleg. Fault-tolerant gathering algorithms for autnonomous mobile robots. SIAM Journal on Computing, 36(1):56–82, 2006. [57] James Aspnes and Eric Ruppert. An introduction to population protocols. 
In Benoît Garbinato, Hugo Miranda, and Luís Rodrigues, editors, Middleware for Network Eccentric and Mobile Applications, pages 97–120. Springer-Verlag, 2009. [58] James Aspnes. Lower bounds for distributed coin-flipping and randomized consensus. Journal of the ACM, 45(3):415–450, May 1998. [59] James Aspnes. Slightly smaller splitter networks. Technical Report YALEU/DCS/TR-1438, Yale University Department of Computer Science, November 2010. [60] James Aspnes. Notes on randomized algorithms. <http://www.cs.yale.edu/homes/aspnes/classes/469/notes.pdf>, July [61] James Aspnes. Faster randomized consensus with an oblivious adversary. In 2012 ACM Symposium on Principles of Distributed Computing, pages 1–8, July 2012. [62] James Aspnes. A modular approach to shared-memory consensus, with applications to the probabilistic-write model. Distributed Computing, 25(2):179–188, May 2012. [63] Hagit Attiya, Marc Snir, and Manfred K. Warmuth. Computing on an anonymous ring. J. ACM, 35:845–875, October 1988. [64] Hagit Attiya. Lower bounds and impossibility results for transactional memory Bulletin of the European Association for Computer Science, 112:38–52, February 2014. [65] Yonatan Aumann. Efficient asynchronous consensus with the weak adversary scheduler. In PODC '97: Proceedings of the Sixteenth Annual ACM Symposium on Principles of Distributed Computing, pages 209–218, New York, NY, USA, 1997. ACM. [66] Yehuda Afek and Eytan Weisberger. The instancy of snapshots and commuting objects. J. Algorithms, 30(1):68–105, 1999. [67] Hagit Attiya and Jennifer Welch. Distributed Computing: Fundamentals, Simulations, and Advanced Wiley, second edition, 2004. On-line version: <http://dx.doi.org/10.1002/0471478210>. (This may not work outside Yale.). [68] Baruch Awerbuch. Complexity of network synchronization. J. ACM, 32:804–823, October 1985. [69] Yehuda Afek, Eytan Weisberger, and Hanan Weisman. A completeness theorem for a class of synchronization objects (extended abstract). In Proceedings of the Twelfth Annual ACM Symposium on Principles of Distributed Computing, pages 159–170, 1993. [70] K. E. Batcher. Sorting networks and their applications. In Proceedings of the AFIPS Spring Joint Computer Conference 32, pages 307–314, 1968. [71] P. Berenbrink, A. Brinkmann, R. Elsässer, T. Friedetzky, and L. Nagel. Randomized renaming in shared memory systems. In Parallel and Distributed Processing Symposium (IPDPS), 2015 IEEE International, pages 542–549, May 2015. [72] Christian Boulinier, Ajoy K. Datta, Lawrence L. Larmore, and Franck Petit. Space efficient and time optimal distributed BFS tree construction. Information Processing Letters, 108(5):273–278, November 2008. [73] Zohir Bouzid, Shantanu Das, and Sébastien Tixeuil. Wait-free gathering of mobile robots. CoRR, abs/1207.0226, 2012. [74] RE Bellman. On a routing problem. Quarterly of Applied Mathematics, 16:87–90, 1958. [75] S. Bellovin. The Security Flag in the IPv4 Header. RFC 3514 (Informational), April 2003. [76] Alex Brodsky, Faith Ellen, and Philipp Woelfel. Fully-adaptive algorithms for long-lived renaming. Distributed Computing, 24(2):119–134, 2011. [77] Elizabeth Borowsky and Eli Gafni. Generalized flp impossibility result for $t$-resilient asynchronous In STOC, pages 91–100, 1993. [78] Elizabeth Borowsky and Eli Gafni. A simple algorithmically reasoned characterization of wait-free computations (extended abstract). In PODC, pages 189–198, 1997. [79] Elizabeth Borowsky, Eli Gafni, and Yehuda Afek. Consensus power makes (some) sense! 
(extended abstract). In PODC, pages 363–372, 1994. [80] E. Borowsky, E. Gafni, N. Lynch, and S. Rajsbaum. The bg distributed simulation algorithm. Distrib. Comput., 14(3):127–146, October 2001. [81] Piotr Berman, Juan A. Garay, and Kenneth J. Perry. Towards optimal distributed consensus (extended abstract). In 30th Annual Symposium on Foundations of Computer Science, 30 October-1 November 1989, Research Triangle Park, North Carolina, USA, pages 410–415, 1989. [82] Mirza Ahad Baig, Danny Hendler, Alessia Milani, and Corentin Travers. Long-lived snapshots with polylogarithmic amortized step complexity. In Proceedings of the 39th Symposium on Principles of Distributed Computing, PODC '20, page 3140, New York, NY, USA, 2020. Association for Computing Machinery. [83] James E. Burns and Nancy A. Lynch. Bounds on shared memory for mutual exclusion. Inf. Comput., 107(2):171–184, 1993. [84] A. Bas-Noy and D. Dolev. Shared-memory vs. message-passing in an asynchronous distributed In Proceedings of the eighth annual ACM Symposium on Principles of distributed computing, PODC '89, pages 307–318, New York, NY, USA, 1989. [85] Michael Ben-Or. Another advantage of free choice: Completely asynchronous agreement protocols (extended abstract). In Proceedings of the Second Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, pages 27–30, Montreal, Quebec, Canada, August 1983. [86] Elizabeth Borowsky. Capturing the Power of Resiliency and Set Consensus in Distributed Systems. PhD thesis, University of California, Los Angeles, 1995. [87] Harry Buhrman, Alessandro Panconesi, Riccardo Silvestri, and Paul Vitányi. On the importance of having an identity or, is consensus really Distrib. Comput., 18:167–176, February 2006. [88] Gabriel Bracha and Ophir Rachman. Randomized consensus in expected $O(n^2 \log n)$ operations. In Sam Toueg, Paul G. Spirakis, and Lefteris M. Kirousis, editors, Distributed Algorithms, 5th International Workshop, volume 579 of Lecture Notes in Computer Science, pages 143–150, Delphi, Greece, 7–9 October 1991. Springer, 1992. [89] Quentin Bramas and Sébastien Tixeuil. Wait-free gathering without chirality. In Christian Scheideler, editor, Structural Information and Communication Complexity: 22nd International Colloquium, SIROCCO 2015, Montserrat, Spain, July 14-16, 2015. Post-Proceedings, pages 313–327, Cham, 2015. Springer International Publishing. [90] James E. Burns. A formal model for message passing systems. Technical Report 91, Computer Science Department, Indiana University, September 1980. [91] Luca Cardelli and Attila Csikász-Nagy. The cell cycle switch computes approximate majority. Scientific reports, 2, 2012. [92] Soma Chaudhuri. More choices allow more faults: Set consensus problems in totally asynchronous systems. Inf. Comput., 105(1):132–158, 1993. [93] Tushar Deepak Chandra. Polylog randomized wait-free consensus. In Proceedings of the Fifteenth Annual ACM Symposium on Principles of Distributed Computing, pages 166–175, Philadelphia, Pennsylvania, USA, 23–26 May 1996. [94] Anne Condon, Monir Hajiaghayi, David Kirkpatrick, and Ján Maňuch. Approximate majority analyses using tri-molecular chemical reaction Natural Computing, pages 1–22, 2019. [95] Tushar Deepak Chandra, Vassos Hadzilacos, and Sam Toueg. The weakest failure detector for solving consensus. J. ACM, 43:685–722, July 1996. [96] Benny Chor, Amos Israeli, and Ming Li. Wait-free consensus using asynchronous hardware. SIAM J. Comput., 23(4):701–712, 1994. [97] Alejandro Cornejo and Fabian Kuhn. 
Deploying wireless networks with beeps. In Proceedings of the 24th International Conference on Distributed Computing, DISC'10, pages 148–162, Berlin, Heidelberg, 2010. [98] K. Mani Chandy and Leslie Lamport. Distributed snapshots: Determining global states of distributed ACM Trans. Comput. Syst., 3(1):63–75, 1985. [99] Michael B. Cohen, Yin Tat Lee, Gary L. Miller, Jakub Pachocki, and Aaron Geometric median in nearly linear time. In Daniel Wichs and Yishay Mansour, editors, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 9–21. ACM, 2016. [100] Ernest Chang and Rosemary Roberts. An improved algorithm for decentralized extrema-finding in circular configurations of processes. Commun. ACM, 22:281–283, May 1979. [101] Ran Canetti and Tal Rabin. Fast asynchronous byzantine agreement with optimal resilience. In S. Rao Kosaraju, David S. Johnson, and Alok Aggarwal, editors, Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, May 16-18, 1993, San Diego, CA, USA, pages 42–51. ACM, 1993. [102] Armando Castañeda and Sergio Rajsbaum. New combinatorial topology upper and lower bounds for renaming. In Rida A. Bazzi and Boaz Patt-Shamir, editors, Proceedings of the Twenty-Seventh Annual ACM Symposium on Principles of Distributed Computing, PODC 2008, Toronto, Canada, August 18-21, 2008, pages 295–304. ACM, 2008. [103] Tushar Deepak Chandra and Sam Toueg. Unreliable failure detectors for reliable distributed systems. J. ACM, 43:225–267, March 1996. [104] Richard Cole and Uzi Vishkin. Deterministic coin tossing with applications to optimal parallel list Information and Control, 70(1):32–53, 1986. [105] Robert Danek and Vassos Hadzilacos. Local-spin group mutual exclusion algorithms. In Rachid Guerraoui, editor, Distributed Computing, 18th International Conference, DISC 2004, Amsterdam, The Netherlands, October 4-7, 2004, Proceedings, volume 3274 of Lecture Notes in Computer Science, pages 71–85. Springer, 2004. [106] Cynthia Dwork, Maurice Herlihy, and Orli Waarts. Contention in shared memory algorithms. J. ACM, 44(6):779–805, 1997. [107] Edsger W. Dijkstra. Self-stabilizing systems in spite of distributed control. Communications of the ACM, 17(11):643–644, November 1974. [108] Edsger W. Dijkstra. Guarded commands, non-determinacy and formal derivation of programs. Communications of the ACM, 18(8):453–457, 1975. [109] Danny Dolev, Nancy A. Lynch, Shlomit S. Pinter, Eugene W. Stark, and William E. Reaching approximate agreement in the presence of faults. J. ACM, 33(3):499–516, 1986. [110] Cynthia Dwork, Nancy A. Lynch, and Larry J. Stockmeyer. Consensus in the presence of partial synchrony. J. ACM, 35(2):288–323, 1988. [111] Shlomi Dolev. MIT Press, 2000. [112] Danny Dolev and H. Raymond Strong. Authenticated algorithms for byzantine agreement. SIAM J. Comput., 12(4):656–666, 1983. [113] David Doty and David Soloveichik. Stable leader election in population protocols requires linear time. In Yoram Moses, editor, Distributed Computing: 29th International Symposium, DISC 2015, Tokyo, Japan, October 7-9, 2015, Proceedings, pages 602–616, Berlin, Heidelberg, 2015. Springer Berlin [114] Faith Ellen, Rati Gelashvili, Nir Shavit, and Leqi Zhu. A complexity-based hierarchy for multiprocessor synchronization: [extended abstract]. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing, PODC '16, pages 289–298, New York, NY, USA, 2016. [115] Faith Ellen, Danny Hendler, and Nir Shavit. 
On the inherent sequentiality of concurrent objects. SIAM Journal on Computing, 41(3):519–536, 2012. [116] Robert Elsässer, Tomasz Radzik, et al. Recent results in population protocols for exact majority and leader Bulletin of EATCS, 3(126), 2018. [117] Faith Ellen and Philipp Woelfel. An optimal implementation of fetch-and-increment. In Yehuda Afek, editor, Distributed Computing, pages 284–298, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg. [118] Keir Fraser and Timothy L. Harris. Concurrent programming without locks. ACM Trans. Comput. Syst., 25(2), 2007. [119] Faith Ellen Fich, Maurice Herlihy, and Nir Shavit. On the space complexity of randomized synchronization. J. ACM, 45(5):843–862, 1998. [120] Faith Ellen Fich, Danny Hendler, and Nir Shavit. Linear lower bounds on real-world implementations of concurrent In Foundations of Computer Science, Annual IEEE Symposium on, pages 165–173, Los Alamitos, CA, USA, 2005. IEEE Computer Society. [121] Faith Fich. How hard is it to take a snapshot? In Peter Vojtáš, Mária Bieliková, Bernadette Charron-Bost, and Ondrej Sýkora, editors, SOFSEM 2005: Theory and Practice of Computer Science, volume 3381 of Lecture Notes in Computer Science, pages 28–37. Springer Berlin / Heidelberg, 2005. [122] Colin J. Fidge. Logical time in distributed computing systems. IEEE Computer, 24(8):28–33, 1991. [123] Panagiota Fatourou and Nikolaos D. Kallimanis. Time-optimal, space-efficient single-scanner snapshots & multi-scanner snapshots using CAS. In Indranil Gupta and Roger Wattenhofer, editors, Proceedings of the Twenty-Sixth Annual ACM Symposium on Principles of Distributed Computing, PODC 2007, Portland, Oregon, USA, August 12-15, 2007, pages 33–42. ACM, [124] Michael J. Fischer and Nancy A. Lynch. A lower bound for the time to assure interactive consistency. Inf. Process. Lett., 14(4):183–186, 1982. [125] Greg N. Frederickson and Nancy A. Lynch. Electing a leader in a synchronous ring. J. ACM, 34(1):98–115, 1987. [126] Rui Fan and Nancy A. Lynch. An $\omega(n \log n)$ lower bound on the cost of mutual exclusion. In Eric Ruppert and Dahlia Malkhi, editors, Proceedings of the Twenty-Fifth Annual ACM Symposium on Principles of Distributed Computing, PODC 2006, Denver, CO, USA, July 23-26, 2006, pages 275–284. ACM, 2006. [127] Ian Fleming. Jonathan Cape, 1959. [128] Michael J. Fischer, Nancy A. Lynch, and Michael Merritt. Easy impossibility proofs for distributed consensus problems. Distributed Computing, 1(1):26–39, 1986. [129] Faith Ellen Fich, Victor Luchangco, Mark Moir, and Nir Shavit. Obstruction-free algorithms can be practically wait-free. In Pierre Fraigniaud, editor, Distributed Computing, 19th International Conference, DISC 2005, Cracow, Poland, September 26-29, 2005, Proceedings, volume 3724 of Lecture Notes in Computer Science, pages 78–92. Springer, 2005. [130] Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson. Impossibility of distributed consensus with one faulty process. Journal of the ACM, 32(2):374–382, April 1985. [131] Lester Randolph Ford. Network Flow Theory. Rand Corporation, 1956. [132] Michael J. Fischer and Michael O. Rabin. Super-exponential complexity of presburger arithmetic. In Bob F. Caviness and Jeremy R. Johnson, editors, Quantifier Elimination and Cylindrical Algebraic Decomposition, pages 122–135. Springer Vienna, Vienna, 1998. [133] Roland Flury and Roger Wattenhofer. Slotted programming for sensor networks. In Tarek F. 
Abdelzaher, Thiemo Voigt, and Adam Wolisz, editors, Proceedings of the 9th International Conference on Information Processing in Sensor Networks, IPSN 2010, April 12-16, 2010, Stockholm, Sweden, pages 24–34. ACM, 2010. [134] Eli Gafni. Round-by-round fault detectors: Unifying synchrony and asynchrony (extended abstract). In Proceedings of the Seventeenth Annual ACM Symposium on Principles of Distributed Computing, pages 143–152, 1998. [135] Eli Gafni. The extended BG-simulation and the characterization of In Proceedings of the 41st annual ACM symposium on Theory of computing, pages 85–92. ACM, 2009. [136] Robert G. Gallager. Distributed minimum hop algorithms. Technical Report LIDS-P-1175, M.I.T. Laboratory for Information and Decision Systems, January 1982. [137] Rati Gelashvili. On the optimal space complexity of consensus for anonymous processes. In Yoram Moses, editor, Distributed Computing: 29th International Symposium, DISC 2015, Tokyo, Japan, October 7-9, 2015, Proceedings, pages 452–466, Berlin, Heidelberg, 2015. Springer Berlin [138] Wojciech Golab, Vassos Hadzilacos, Danny Hendler, and Philipp Woelfel. RMR-efficient implementations of comparison primitives using read and write operations. Distributed Computing, 25(2):109–162, May 2012. [139] George Giakkoupis, Maryam Helmi, Lisa Higham, and Philipp Woelfel. An $o(\sqrt{n})$ space bound for obstruction-free leader election. In Proceedings of the 27th International Symposium on Distributed Computing (DISC), pages 46–60, October 14–18 2013. [140] George Giakkoupis, Maryam Helmi, Lisa Higham, and Philipp Woelfel. Test-and-set in optimal space. In Proceedings of the Forty-seventh Annual ACM Symposium on Theory of Computing, STOC '15, pages 615–623, New York, NY, USA, 2015. ACM. [141] Wojciech Golab, Lisa Higham, and Philipp Woelfel. Linearizable implementations do not suffice for randomized distributed computation. In Proceedings of the Forty-third Annual ACM Symposium on Theory of Computing, STOC '11, pages 373–382, New York, NY, USA, 2011. ACM. [142] Mohsen Ghaffari and Fabian Kuhn. Deterministic distributed vertex coloring: Simpler, faster, and without network decomposition. CoRR, abs/2011.04511, 2020. [143] Rachid Guerraoui, Petr Kuznetsov, Matteo Monti, Matej Pavlovič, and Dragos-Adrian Seredinschi. The consensus number of a cryptocurrency. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC '19, page 307–316, New York, NY, USA, 2019. Association for Computing Machinery. [144] Juan A. Garay and Yoram Moses. Fully polynomial byzantine agreement for $n>3t$ processors in $t + 1$ SIAM J. Comput., 27(1):247–290, 1998. [145] Wojciech M. Golab. A complexity separation between the cache-coherent and distributed shared memory models. In Cyril Gavoille and Pierre Fraigniaud, editors, Proceedings of the 30th Annual ACM Symposium on Principles of Distributed Computing, PODC 2011, San Jose, CA, USA, June 6-8, 2011, pages 109–118. ACM, 2011. [146] Jim Gray. Notes on data base operating systems. In Operating Systems, An Advanced Course, pages 393–481. Springer-Verlag, London, UK, 1978. [147] Ronald L. Graham, Bruce L. Rothschild, and Joel H. Spencer. Ramsey Theory. Wiley-Interscience, 2nd edition, 1990. [148] George Giakkoupis and Philipp Woelfel. On the time and space complexity of randomized test-and-set. In Darek Kowalski and Alessandro Panconesi, editors, ACM Symposium on Principles of Distributed Computing, PODC '12, Funchal, Madeira, Portugal, July 16-18, 2012, pages 19–28. ACM, 2012. 
[149] George Giakkoupis and Philipp Woelfel. A tight RMR lower bound for randomized mutual exclusion. In Proceedings of the 44th symposium on Theory of Computing, pages 983–1002. ACM, 2012. [150] G. Giakkoupis and P. Woelfel. Randomized mutual exclusion with constant amortized RMR complexity on the DSM. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pages 504–513, Oct 2014. [151] George Giakkoupis and Philipp Woelfel. Randomized abortable mutual exclusion with constant amortized RMR complexity on the cc model. In Proceedings of the ACM Symposium on Principles of Distributed Computing, PODC '17, pages 221–229, New York, NY, USA, 2017. ACM. [152] Maurice Herlihy. Impossibility results for asynchronous PRAM (extended abstract). In Proceedings of the third annual ACM symposium on Parallel algorithms and architectures, SPAA '91, pages 327–336, New York, NY, USA, 1991. ACM. [153] Maurice Herlihy. Wait-free synchronization. ACM Trans. Program. Lang. Syst., 13(1):124–149, January 1991. [154] Maurice Herlihy. A methodology for implementing highly concurrent objects. ACM Trans. Program. Lang. Syst., 15(5):745–770, 1993. [155] Timothy L. Harris, Keir Fraser, and Ian A. Pratt. A practical multi-word compare-and-swap operation. In Dahlia Malkhi, editor, Distributed Computing, 16th International Conference, DISC 2002, Toulouse, France, October 28-30, 2002 Proceedings, volume 2508 of Lecture Notes in Computer Science, pages 265–279. Springer, 2002. [156] Vassos Hadzilacos, Xing Hu, and Sam Toueg. On linearizability and the termination of randomized algorithms, [157] Danny Hendler and Vitaly Khait. Complexity tradeoffs for read and update operations. In Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, PODC '14, pages 186–195, New York, NY, USA, 2014. [158] Maurice Herlihy, Victor Luchangco, and Mark Moir. Obstruction-free synchronization: Double-ended queues as an example. In 23rd International Conference on Distributed Computing Systems (ICDCS 2003), 19-22 May 2003, Providence, RI, USA, pages 522–529. IEEE Computer Society, 2003. [159] Maurice Herlihy and J. Eliot B. Moss. Transactional memory: Architectural support for lock-free data In ISCA, pages 289–300, 1993. [160] Daniel S. Hirschberg and J. B. Sinclair. Decentralized extrema-finding in circular configurations of Commun. ACM, 23(11):627–628, 1980. [161] Maurice Herlihy and Nir Shavit. The topological structure of asynchronous computability. J. ACM, 46(6):858–923, 1999. [162] Maurice Herlihy and Jeannette M. Wing. Linearizability: A correctness condition for concurrent objects. ACM Trans. Program. Lang. Syst., 12(3):463–492, 1990. [163] Danny Hendler and Philipp Woelfel. Randomized mutual exclusion with sub-logarithmic RMR-complexity. Distributed Computing, 24(1):3–19, 2011. [164] Michiko Inoue, Toshimitsu Masuzawa, Wei Chen, and Nobuki Tokura. Linear-time snapshot using multi-writer multi-reader registers. In Gerard Tel and Paul Vitányi, editors, Distributed Algorithms, volume 857 of Lecture Notes in Computer Science, pages 130–140. Springer Berlin / Heidelberg, 1994. [165] Damien Imbs and Michel Raynal. Visiting Gafni’s reduction land: From the BG simulation to the extended BG simulation. In Stabilization, Safety, and Security of Distributed Systems, pages 369–383. Springer, 2009. [166] Prasad Jayanti. Robust wait-free hierarchies. J. ACM, 44(4):592–614, 1997. [167] Prasad Jayanti. A time complexity lower bound for randomized implementations of some shared objects. 
# Handling missing data in large healthcare dataset: a case study of unknown trauma outcomes

E. M. Mirkes <EMAIL_ADDRESS>Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK

T.J. Coats <EMAIL_ADDRESS>Emergency Medicine Academic Group, Department of Cardiovascular Sciences, University of Leicester, Leicester, LE1 7RH, UK

J. Levesley <EMAIL_ADDRESS>Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK

A. N. Gorban <EMAIL_ADDRESS>Department of Mathematics, University of Leicester, Leicester, LE1 7RH, UK

###### Abstract

Handling of missing data is one of the main tasks in data preprocessing, especially in large public service datasets. We have analysed data from the Trauma Audit and Research Network (TARN) database, the largest trauma database in Europe. For the analysis we used 165,559 trauma cases. Among them, there are 19,289 cases (11.65%) with unknown outcome. We have demonstrated that these outcomes are not missing ‘completely at random’ and, hence, it is not valid simply to exclude these cases from the analysis despite the large amount of available data. We have developed a system of non-stationary Markov models for the handling of missing outcomes and validated these models on the data of 15,437 patients who arrived at TARN hospitals later than 24 hours but within 30 days after injury. We used these Markov models for the analysis of mortality. In particular, we corrected the observed fraction of deaths. Two naïve approaches give 7.20% (available case study) or 6.36% (if we assume that all unknown outcomes are ‘alive’). The corrected value is 6.78%. Following the seminal paper of Trunkey (1983), the multimodality of mortality curves has become a much-discussed idea. For the whole analysed TARN dataset the coefficient of mortality monotonically decreases in time, but the stratified analysis of the mortality gives a different result: for lower severities the coefficient of mortality is a non-monotonic function of the time after injury and may have maxima at the second and third weeks. The approach developed here can be applied to various healthcare datasets which experience the problem of lost patients and missing outcomes.

###### keywords:

Missing data, Big data, Data cleaning, Mortality, Markov models, Risk evaluation

## 1 Introduction

Enthusiasm for the use of big data in the improvement of health service is huge, but there is a concern that without proper attention to some specific challenges the mountain of big data efforts will bring forth a mouse [1]. Now, there is no technical problem with “big” in healthcare. Electronic health records include hundreds of millions of outpatient visits and tens of millions of hospitalizations, and these numbers grow exponentially. The main problem is the quality of the data. “Big data” very often means “dirty data”, and the fraction of data inaccuracies increases with data volume growth. Human inspection at the big data scale is impossible and there is a desperate need for intelligent tools for accuracy and believability control.

The second big challenge of big data in healthcare is missing information. There may be many reasons for data incompleteness. One of them is health service “fragmentation”. This problem can be solved partially by the national and international unification of electronic health records (see, for example, the Health Level Seven International (HL7) standards [13] or the discussion of the template for uniform reporting of trauma data [37]).
However, some fragmentation is unavoidable due to the diverse structure of the health service. In particular, the modern tendency towards personalization of medicine can lead to highly individualized sets of attributes for different patients or patient groups.

There are several universal technologies for the handling of missing data [40, 41, 36, 46, 23, 22, 14]. Nevertheless, the problem of handling missing values in large healthcare datasets is certainly not completely solved. It continues to attract the efforts of many researchers (see, for example, [8]) because the popular universal tools can lead to bias or loss of statistical power [21, 51]. For each system, it is desirable to combine various existing approaches for the handling of missing data (or to invent new ones) to minimize the damage to the results of data analysis. For the best possible solution, we have to take into account the peculiarities of each database and to specify the further use of the cleaned data: it is desirable to understand in advance how the preprocessed data will be used.

In our work we analyze missing values in the TARN database [54]. We use the preprocessed data for:

* 1. the evaluation of the risk of death,
* 2. the identification of the patterns of mortality,
* 3. approaching several old problems like the Trunkey hypothesis about the trimodal distribution of trauma mortality [55].

The ‘two stage lottery’ non-stationary Markov model developed below can be used for the analysis of missing outcomes in a much wider context than the TARN database and could be applied to the handling of data gaps in healthcare datasets which experience the problem of transferred and lost patients and missing outcomes. In this paper we analyze the unknown outcomes. The next task will be the analysis of missing data in the most common “input” attributes.

## 2 Data set

There are more than 200 hospitals which send information to TARN (TARN hospitals). This network is gradually growing. Participation in TARN is recommended by the Royal College of Surgeons of England and the Department of Health. More than 93% of hospitals across England and Wales submit their data to TARN. TARN also receives data from Dublin, Waterford (Eire), Copenhagen, and Bern.

We use TARN data collected from 01.01.2008 (start of treatment) to 05.05.2014 (date of discharge). The database contains 192,623 records and more than 200 attributes. Sometimes several records correspond to the same trauma case because the patients may be transferred between TARN hospitals. We join these records (a sketch of this step is given below). The resulting database includes data of 182,252 different trauma cases with various injuries.

Figure 1: The groups of the patients for analysis of mortality. FOD in the group ‘Available W30D’ can be calculated from the data directly. Mortality in the group ‘OUT30’ will be evaluated on the basis of the non-stationary Markov model. The group of 16,693 patients who arrived at (were transferred from other institutions to) TARN hospitals later than 24 hours after injury was excluded from the mortality analysis. Its subgroup ‘IN30’ of 15,437 patients is used for validation of the Markov model for the ‘OUT30’ group. The subgroups with age$<65$ and age$\geq 65$ should be separated because for age$\geq 65$ the following traumas are excluded from the database: Acetabulum fractures (AIS 8562xx), Pelvic/Acetabulum fractures (AIS 8563xx), Pelvic ring fractures (AIS 8561xx), Pubic rami and Femoral neck fractures (AIS 85316x).
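As an illustration of the record-joining step mentioned above, consider the minimal pandas sketch below. The schema is hypothetical: `case_id`, `admission_date`, `discharge_date` and `discharge_destination` are illustrative names, not TARN field names, and the real procedure may of course merge more attributes.

```python
import pandas as pd

def join_transfer_records(df: pd.DataFrame) -> pd.DataFrame:
    """Collapse several records of one trauma case into a single record.

    Assumes records of the same case share a `case_id`; keeps the earliest
    presentation and attaches the final discharge information to it.
    """
    df = df.sort_values(["case_id", "admission_date"])
    first = df.groupby("case_id").first()   # initial presentation
    last = df.groupby("case_id").last()     # final record of the case
    joined = first.copy()
    joined["discharge_date"] = last["discharge_date"]
    joined["discharge_destination"] = last["discharge_destination"]
    return joined.reset_index()
```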
16,693 records correspond to patients who arrived at (were transferred from other institutions to) TARN hospitals later than 24 hours after injury. This sample is biased: for example, the Fraction Of Dead outcomes (FOD) for this sample is 3.34%, whereas the FOD for all data is 6.05%. This difference is very significant for such a big sample. (If all the outcomes in a group of the trauma cases are known then we use the simple definition of FOD in the group: the ratio of the number of registered deaths in this group to the total number of patients there. Such a definition is not always applicable. A detailed and more sophisticated analysis of this notion follows in the next section.) We remove these 16,693 trauma cases from the analysis but use them later for validation of the “mortality after transfer” model. Among them, there are 15,437 patients who arrived at a TARN hospital within 30 days after injury. We call this group ‘IN30’ for short (Fig. 1).

As a result we have 165,559 records for analysis (‘Main group’). This main group consists of two subgroups: 146,270 patients from this group approached TARN during the first day of injury and remained in TARN hospitals or were discharged to a final destination during the first 30 days after injury. We call this group the ‘Available within 30 days after injury’ cases (or ‘Available W30D’ for short). The other 19,289 patients were transferred within 30 days after injury to a hospital or institution (or unknown destination) that did not return data to the TARN system. We call them ‘Transferred OUT OF TARN within 30 days after injury’ or just ‘OUT30’ (Fig. 1). The patients with the non-final discharge destinations ‘Other Acute hospital’ and ‘Other institution’ were transferred from a TARN hospital to a hospital (institution) outside TARN and did not return to the TARN hospitals within 30 days after injury.

The database includes several indicators for evaluation of the severity of the trauma case, in particular the Abbreviated Injury Scale (AIS), the Injury Severity Score (ISS) and the New Injury Severity Score (NISS). For a detailed description and comparison of the scores we refer readers to the reviews [30, 32]. The comparative study of the predictive ability of different scores has a long history [19, 43, 7, 42]. The scores are used for mortality predictions and are tested on different datasets [52, 29, 53, 4].

## 3 Definitions and distributions of outcomes

The widely used definition of the endpoint outcome in trauma research is survival or death within 30 days after injury [9, 4, 48]. A substantial number of TARN in-hospital deaths following trauma occur after 30 days: there are 957 such cases (or 8% of TARN in-hospital deaths) among 11,900 cases with ‘Mortuary’ discharge destination. This proportion is practically the same in the main group (165,559 cases): 894 deaths after 30 days in hospital (or 7.9%) among 11,347 cases with ‘Mortuary’ discharge destination.

Death later than 30 days after injury may be considered as caused by co-morbidity rather than as a direct consequence of the injury [4]. These later deaths are not very interesting from the perspective of an acute trauma care system (as we cannot influence them), but they might be very interesting from the perspective of a geriatric rehabilitation centre or of an injury prevention program for elderly patients. On the other hand, when “end of acute care” is used as an outcome definition then a significant portion of deaths remains unnoticed.
For example, among the 3332 trauma cases treated in the Ulleval University Hospital (Oslo, Norway, 2000–2004), 18% of deaths occurred after discharge from the hospital [48]. The question of whether it is possible to neglect trauma-caused mortality within 30 days after trauma for the patients with the discharge destination ‘Home’, ‘Rehabilitation’ and other ‘recovery’ outcomes is not trivial [48]. Moreover, there are two questions:

* 1. How do we collect all the necessary data after discharge within 30 days after trauma – a technical question?
* 2. How do we classify the death cases after discharge within 30 days after trauma; are they consequences of the trauma or should they be considered as comorbidity with some additional causes?

The best possible answer to the first question requires a special combination of technical and business processes to integrate data from different sources. The recent linkage from TARN to the Office for National Statistics (ONS) makes it possible to access the information about the dates of death in many cases. It is expected that the further data integration process will recover many gaps in the outcome data. The second question is far beyond the scope of data management and analysis and may be approached from different perspectives. Whether or not the late deaths are important in a model depends on the question being asked.

From the data management perspective, we have to give a formal definition of the outcome in terms of the available database fields. It is impossible to use the standard definition, survival or death within 30 days after injury, because these data are absent. We define the outcome ‘Alive W30D’ for the TARN database to be as close to the standard definition as possible. In the TARN database the discharge destinations ‘Home (own)’, ‘Home (relative or other carer)’, ‘Nursing Home’, and ‘Rehabilitation’ are considered as final. If we assume that these trauma cases have the outcome ‘Alive W30D’ then we lose some cases of death. From the acute care perspective these cases can be considered as irrelevant. Let us accept this definition.

There still remain many cases with unknown outcome. For the analysis of these cases we introduce the outcome category ‘Transferred’. In this category we include the cases that left the TARN registry to a hospital or other institution outside TARN, or to an unknown destination, within 30 days. The relations between the discharge destinations and these three outcomes are presented in Table 1.

Table 1: Distribution of outcomes in the main group (W30D means within 30 days after injury).

Subgroup | Alive W30D | Dead W30D | Unknown | Total
---|---|---|---|---
Available W30D | 135,733 | 10,537 | 0 | 146,270
OUT30 | 0∗ | 0∗ | 19,289 | 19,289
Total | 135,733 | 10,537 | 19,289 | 165,559

∗No known survivals or deaths.

As we can see from Table 1, 19,289 trauma cases (or 11.65% of all cases) have unknown outcome. The first standard question is: can we delete these data and apply available case analysis? For this purpose we would have to consider these outcome data as “Missing Completely at Random” (MCAR) [39, 40, 36, 46]. This is definitely not the case. The group with unknown outcomes is exactly the ‘OUT30’ group. The probability of belonging to this group depends, for example, on the severity of injury (which can be measured by the maximal severity, by NISS, by GCS, or by another severity score).
The $\chi^{2}$ test of independence shows that transfer depends on the severity with $p$-value $p<10^{-300}$ (this is the probability that such a strong dependence might appear by chance). One could consider all these cases as alive because these patients have been alive at the point of discharge from TARN hospitals. If we consider all transferred patients as alive then the FOD is 6.36%. If we delete all the transferred patients (and study only the Available W30D group) then the FOD is 7.20%. If we test this hypothesis on the 15,437 patients of the group ‘IN30’, transferred to TARN hospitals from outside the network within 30 days after injury, then we find a nonzero mortality (3.10%) for them.

A data table with known outcomes is necessary for further machine learning, where the main goal is outcome prediction and risk evaluation. We choose to remove the OUT30 group from the data table but simultaneously to adjust the weights of the retained cases to compensate for the removal. The information about the OUT30 cases will be used in the construction of the weights. It is necessary to evaluate the mortality of the patients transferred from TARN before removing their records and reweighting the rest. In the next section we develop, identify and validate Markov models for the analysis of the mortality of transferred patients. Another method for handling missing outcomes is multiple imputation of the outcomes (for multiple imputation see, for example, [22]). Both methods use similar stochastic models of mortality and transfer. The large number of cases allows us to use the reweighting approach. A significant majority of the evaluated weights are between 0.9 and 1.1 (see Section 6).

## 4 Non-stationary Markov model for the analysis of missing outcomes

### 4.1 Structure of model

Figure 2: a) The basic Markov model of mortality (‘recovery/death lottery’) with two absorbing states (states from which patients do not leave), ‘D’ (death) and ‘R’ (recovery). b) The ‘lottery of transfer’ (from the TARN network) with one absorbing state ‘L’ (‘left’). The transition probabilities $\alpha=\alpha(t,s)$, $\nu=\nu(t,s)$ and $\mu=\mu(t,s)$ depend on the time after injury $t$ and on the state of the patient on the first day after trauma, represented by the values of the attributes $s$.

Figure 3: The Markov model of mortality and transfer from TARN hospitals to hospitals outside TARN for the limit case of ‘advanced transfer’, when the lottery of transfer (Fig. 2 b) occurs every day before the lottery of survival (Fig. 2 a). It has six states: ‘H’ (an alive patient in a TARN hospital), ‘L’ (an alive patient in a hospital outside TARN), ‘D’ (death in a TARN hospital), ‘${\rm D_{L}}$’ (death in a hospital outside TARN), ‘R’ (recovery of a patient in a TARN hospital) and ‘${\rm R_{L}}$’ (recovery of a patient in a hospital outside TARN). Four of them are absorbing: ‘D’, ‘${\rm D_{L}}$’, ‘R’, and ‘${\rm R_{L}}$’. The transitions from H to ${\rm D_{L}}$ and ${\rm R_{L}}$ are superpositions of the same-day transitions: ${\rm H\to L\to D_{L}}$ and ${\rm H\to L\to R_{L}}$.

Figure 4: The Markov model of mortality and transfer from TARN hospitals to hospitals outside TARN for the limit case of ‘retarded transfer’, when the lottery of transfer (Fig. 2 b) occurs every day after the lottery of survival (Fig. 2 a). It has the same states as the model with advanced transfer (Fig. 3) but different transition probabilities.

We propose a system of Markov models for the evaluation of mortality in trauma datasets.
In these models each day each patient can participate in two ‘lotteries’ (Fig. 2). The first lottery (recovery/death), Fig. 2 a, has three outcomes: ‘R’ (recovery), ‘D’ (death), and ‘H’ (remains in a TARN hospital). The second lottery (of transfer), Fig. 2 b, has two outcomes: ‘H’ (remains in a TARN hospital) and ‘L’ (transfer from the TARN hospital to a hospital or ‘other institution’ outside TARN). The probabilities of the outcomes depend on the time from the injury $t$ and on the state of the patient after injury $s$.

It is important to stress that $s$ in our models characterizes the state of the patient on the first day after trauma and may include severity, type of injury (blunt/penetrating), localization of traumas, age, gender, airway status, systolic and diastolic blood pressure, etc., but cannot change in time. The description of the state $s$ may vary in the level of detail depending on the available information. We have fitted and tested two models based on the severity of trauma: the maximal severity model and the (binned) NISS model. In Section 5 we demonstrate that it is necessary to refine the model and to include the age group in $s$ for low severities. For different purposes the mortality model can include more detail.

The lotteries (Fig. 2) do not commute. We consider two limit cases: ‘advanced transfer’ (Fig. 3) and ‘retarded transfer’ (Fig. 4). In models with advanced transfer the lottery of transfer (Fig. 2 b) each day precedes the lottery of recovery/death (Fig. 2 a). In models with retarded transfer, conversely, the lottery of recovery/death precedes the lottery of transfer. These two models are important because many other, much more general Markov models are between them in the following exact sense.

It is a very strong assumption that every day there are two steps only: the recovery/death lottery and the transfer lottery. It may be more realistic to assume that every day there are many ‘fractional steps’ of recovery/death and of transfer from TARN, and the result of the day is the aggregate result of all of these fractional steps. Assume that the events of recovery, death and transfer are sampled for every day after injury $t$ from $M$ consecutive random choices with probabilities $\alpha_{i},\nu_{i}$ for recovery/death and $\mu_{i}$ for transfer out of TARN ($i=1,\ldots,M$), and that this chain of choices is Markovian (the choices for a patient do not depend on the previous choices directly but only on the current state, H, R or L). It is non-stationary because the transition probabilities depend on time. They are different for different days after injury.
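Before analysing such fractional-step schemes, it may help to fix the two limit models concretely. The following minimal simulation sketch is ours, not the authors' code; it assumes the daily probabilities `alpha(t, s)`, `nu(t, s)` and `mu(t, s)` are supplied as Python functions, and that transferred patients keep the same recovery/death probabilities (as the models below also assume).

```python
import random

def simulate_patient(alpha, nu, mu, s, advanced=True, max_days=30):
    """Simulate one trajectory of the six-state model (Figs. 3 and 4).

    Returns 'D'/'R' (death/recovery in TARN), 'D_L'/'R_L' (outside TARN),
    or 'H'/'L' if the patient is still in hospital after max_days.
    """
    in_tarn = True
    for t in range(1, max_days + 1):
        # Advanced transfer: the transfer lottery precedes recovery/death.
        if advanced and in_tarn and random.random() < mu(t, s):
            in_tarn = False
        r = random.random()
        if r < nu(t, s):
            return "D" if in_tarn else "D_L"
        if r < nu(t, s) + alpha(t, s):
            return "R" if in_tarn else "R_L"
        # Retarded transfer: the transfer lottery follows recovery/death.
        if not advanced and in_tarn and random.random() < mu(t, s):
            in_tarn = False
    return "H" if in_tarn else "L"
```

Averaging many such trajectories reproduces, up to sampling error, quantities such as the cumulative fractions of deaths defined below.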
The sequence of fractional choices just described is displayed as:

$\begin{split}\mbox{recovery/death}_{1}&\to\mbox{transfer}_{1}\to\ldots\\ &\to\mbox{recovery/death}_{M}\to\mbox{transfer}_{M}.\end{split}$

The probability of in-TARN death in the above model of sequential choice, on a given day after trauma, is

$\nu_{1}+\nu_{2}(1-\alpha_{1}-\nu_{1})(1-\mu_{1})+\ldots+\nu_{M}\prod_{i=1}^{M-1}(1-\alpha_{i}-\nu_{i})(1-\mu_{i}).$

Similarly, the probability of recovery is

$\alpha_{1}+\alpha_{2}(1-\alpha_{1}-\nu_{1})(1-\mu_{1})+\ldots+\alpha_{M}\prod_{i=1}^{M-1}(1-\alpha_{i}-\nu_{i})(1-\mu_{i}).$

Finally, the probability of transfer to a hospital outside of TARN is

$\begin{split}\mu_{1}(1-\alpha_{1}-\nu_{1})&+\mu_{2}(1-\mu_{1})(1-\alpha_{1}-\nu_{1})(1-\alpha_{2}-\nu_{2})\\ &+\ldots+\mu_{M}\prod_{i=1}^{M-1}(1-\mu_{i})\prod_{j=1}^{M}(1-\alpha_{j}-\nu_{j}).\end{split}$

The probabilities $\alpha_{i},\nu_{i}$ for the fractional steps should be consistent with the daily probabilities $\alpha,\nu$: if there is no transfer then the resulting probabilities of recovery or death should be the same:

$\begin{split}&\alpha_{1}+\alpha_{2}(1-\alpha_{1}-\nu_{1})+\ldots+\alpha_{M}\prod_{i=1}^{M-1}(1-\alpha_{i}-\nu_{i})=\alpha,\\ &\nu_{1}+\nu_{2}(1-\alpha_{1}-\nu_{1})+\ldots+\nu_{M}\prod_{i=1}^{M-1}(1-\alpha_{i}-\nu_{i})=\nu.\\ &\mbox{Also, }\prod_{i=1}^{M}(1-\alpha_{i}-\nu_{i})=1-\alpha-\nu.\end{split}$ (1)

Similarly, for $\mu_{i}$ we get the conditions

$\begin{split}\mu_{1}+\mu_{2}(1-\mu_{1})+\ldots+\mu_{M}\prod_{i=1}^{M-1}(1-\mu_{i})&=\mu\\ \mbox{ and }\prod_{i=1}^{M}(1-\mu_{i})&=1-\mu.\end{split}$ (2)

###### Proposition 1.

The probability of in-TARN death in the described model of sequential choice for every day after trauma is between the probabilities for the Markovian model with advanced transfer (Fig. 3) and the Markovian model with retarded transfer (Fig. 4):

$\begin{split}\nu(1-\mu)\leq\nu_{1}&+\nu_{2}(1-\alpha_{1}-\nu_{1})(1-\mu_{1})+\ldots\\ &+\nu_{M}\prod_{i=1}^{M-1}(1-\alpha_{i}-\nu_{i})(1-\mu_{i})\leq\nu.\end{split}$ (3)

###### Proof.

According to conditions (1), (2),

$\begin{split}&\nu(1-\mu)\\ &=\left[\nu_{1}+\nu_{2}(1-\alpha_{1}-\nu_{1})+\ldots+\nu_{M}\prod_{i=1}^{M-1}(1-\alpha_{i}-\nu_{i})\right]\\ &\quad\times\prod_{i=1}^{M}(1-\mu_{i}).\end{split}$ (4)

Notice that for every $j$ ($1\leq j\leq M$), $\prod_{i=1}^{M}(1-\mu_{i})\leq\prod_{i=1}^{j-1}(1-\mu_{i})$ because $0\leq 1-\mu_{i}\leq 1$ for all probabilities $\mu_{i}$. Therefore,

$\nu_{j}\prod_{i=1}^{j-1}(1-\alpha_{i}-\nu_{i})\prod_{k=1}^{M}(1-\mu_{k})\leq\nu_{j}\prod_{i=1}^{j-1}(1-\alpha_{i}-\nu_{i})(1-\mu_{i})$

and, summing over $j$, the following inequality holds:

$\begin{split}&\left[\nu_{1}+\nu_{2}(1-\alpha_{1}-\nu_{1})+\ldots+\nu_{M}\prod_{i=1}^{M-1}(1-\alpha_{i}-\nu_{i})\right]\\ &\times\prod_{i=1}^{M}(1-\mu_{i})\\ &\leq\nu_{1}+\nu_{2}(1-\alpha_{1}-\nu_{1})(1-\mu_{1})+\ldots\\ &\quad+\nu_{M}\prod_{i=1}^{M-1}(1-\alpha_{i}-\nu_{i})(1-\mu_{i}).\end{split}$ (5)

The left inequality in (3) is proven. The right inequality in (3) follows from condition (1) because, for every $j$,

$\nu_{j}\prod_{i=1}^{j-1}(1-\alpha_{i}-\nu_{i})(1-\mu_{i})\leq\nu_{j}\prod_{i=1}^{j-1}(1-\alpha_{i}-\nu_{i}).$

∎

The proofs of the following propositions are very similar.

###### Proposition 2.

The probability of in-TARN recovery in the described model of sequential choice for every day after trauma is between the probabilities for the Markovian model with advanced transfer (Fig. 3) and the Markovian model with retarded transfer (Fig. 4):
$\begin{split}\alpha(1-\mu)\leq\alpha_{1}&+\alpha_{2}(1-\alpha_{1}-\nu_{1})(1-\mu_{1})+\ldots\\ &+\alpha_{M}\prod_{i=1}^{M-1}(1-\alpha_{i}-\nu_{i})(1-\mu_{i})\leq\alpha.\end{split}$ (6)

$\square$

###### Proposition 3.

The probability of transfer outside TARN in the described model of sequential choice for every day after trauma is between the probabilities for the Markovian model with advanced transfer (Fig. 3) and the Markovian model with retarded transfer (Fig. 4):

$\mu(1-\alpha-\nu)\leq\sum_{j=1}^{M}\mu_{j}(1-\alpha_{j}-\nu_{j})\prod_{i=1}^{j-1}(1-\alpha_{i}-\nu_{i})(1-\mu_{i})\leq\mu.$ (7)

$\square$

### 4.2 Transition probabilities and their evaluation

In the above models (Figs. 3 and 4), death and recovery of the transferred patients have the same probabilities as for the patients in TARN hospitals. These probabilities are defined by the state of the patient $s$ and by the time after injury. Of course, in reality there is often a hope that the transfer will improve the situation and that the probability of death will decrease for the same state of the patient. Nevertheless, in this paper we will neglect the changes of probabilities after transfer (simply because we have no sufficient reason to assume such a change). Of course, these models could be extended to include the changes of mortality for transferred patients, if necessary.

Another question is the definition of $s$. Which attributes should be included in the ‘state’ for the models (Figs. 3, 4)? To motivate this choice, we should take into account two considerations:

1. 1. The models will be used to analyse data with unknown outcomes. Trauma cases with missing outcomes make up 10-12% of the dataset. Therefore, an error of 10% in mortality for data with unknown outcomes will cause an error of $\sim$1% in mortality for the whole dataset, and it is possible to use relatively coarse models (see below).
2. 2. The description of the state $s$ should include attributes whose values are known for a significant majority of cases. This is especially important because for cases with unknown outcomes many of the attributes are often also unknown (a more detailed analysis of data with missing attributes is presented in the next section).

Formally, there are many possibilities for defining $s$. It could include the initial state after trauma (characteristics of injury and coma status, for example), age, gender, the current state ($t$ days after trauma), fragments of history, etc. For our purposes, we select, identify and compare three coarse models:

1. 1. The coarsest model, $s=\emptyset$.
2. 2. The maximal severity model, $s=$ the maximal severity score (an integer from 1 to 6).
3. 3. The binned NISS model with seven bins: NISS=1-3, 4-8, 9, 10-16, 17-24, 25-35, 36+; $s$ is the bin number (7 values). The bins for $s=2,\ldots,7$ have approximately equal depth whereas the bin with $s=1$ (NISS=1-3) is much smaller.

We observe that the cases with maximal severity 1 (or NISS=1-3, which is the same) are very special. First of all, the age distributions in this group for the ‘Available W30D’ and the ‘OUT30’ subgroups are very different (Fig. 5). If we do not take this difference into account then we overestimate mortality in this group. The necessary refinement of the model, with isolation of elderly patients with low severity of trauma, is presented in Section 5.

Figure 5: Age distributions for two groups of low severity cases (NISS bin 1-3).
The age distribution for the low severity patients in TARN (‘Available W30D’ AND NISS=1-3), with age binned into five bins (0–5.5; 5.5–15.5; 15.5–54.5; 54.5–74.5; $>$74.5), has a clear maximum for elderly patients (age $>$74.5), whereas the absolute majority of the low severity patients who left TARN without a registered outcome (‘OUT30’ AND NISS=1-3) belong to the group with age 15.5–54.5.

Our approach may be combined with any stochastic model for early outcome prediction (see, for example, [29, 47, 5]). For the finite set of $s$ values, evaluation of all the coefficients $\alpha(t,s)$, $\nu(t,s)$, and $\mu(t,s)$ is a particular case of the standard statistical problem of proportion estimation for each given value of $s$; we use the Wilson score interval (CI) [56]:

$\frac{1}{1+\frac{z^{2}}{n}}\left[\hat{p}+\frac{z^{2}}{2n}\pm z\sqrt{\frac{\hat{p}(1-\hat{p})}{n}+\frac{z^{2}}{4n^{2}}}\right],$ (8)

where $\hat{p}$ is the coefficient estimate, $z$ is the error percentile ($z=1.96$ for the 95% confidence interval), and $n$ is the number of degrees of freedom (for a dataset without weights this is just the sample size).

For the coarsest model the fraction of patients transferred outside TARN is 11.65%. This is just the fraction of patients transferred (within 30 days after injury) in Table 1. The 95% CI (8) for this fraction is 11.5–11.8%. For the maximal severity (Table 2) and the binned NISS (Table 3) models the fraction of patients transferred outside TARN depends on $s$ (bins), and the CI in each bin is wider than that for the total fraction in the coarsest model. Nevertheless, the CIs for different bins in these models do not intersect (the only exception is the CI for the smallest bin, maximal severity 6, in the maximal severity model, Table 2). In particular, this means that the probability of transfer outside TARN hospitals depends strongly on the trauma severity.

Table 2: Sizes of bins and fractions of transfer out of TARN (within 30 days after injury) for the maximal severity models.

Max severity | OUT30 | Total | Fraction of OUT30 | 95% CI
---|---|---|---|---
1 | 1,905 | 3,005 | 63.39% | 61.66–65.10%
2 | 3,094 | 35,109 | 8.81% | 8.52–9.11%
3 | 6,203 | 77,518 | 8.00% | 7.81–8.20%
4 | 4,535 | 29,603 | 15.32% | 14.91–15.73%
5 | 3,542 | 20,175 | 17.56% | 17.04–18.09%
6 | 10 | 149 | 6.71% | 3.88–11.72%

Table 3: Sizes of bins and fractions of patients transferred to a hospital or institution (or unknown destination) (within 30 days after injury) for the binned NISS models.

NISS bin | OUT30 | Total | Fraction of OUT30 | 95% CI
---|---|---|---|---
1-3 | 1,905 | 3,005 | 63.39% | 61.66–65.10%
4-8 | 2,078 | 24,982 | 8.32% | 7.98–8.67%
9 | 2,159 | 36,722 | 5.88% | 5.64–6.12%
10-16 | 2,710 | 29,237 | 9.27% | 8.94–9.61%
17-24 | 2,882 | 25,074 | 11.49% | 11.11–11.89%
25-35 | 3,603 | 23,557 | 15.29% | 14.84–15.76%
36+ | 3,952 | 22,982 | 17.20% | 16.71–17.69%

For each value of $s$ and time after injury $t$ the following quantities are found for the analysed dataset:

* 1. $H(t,s)$ – the number of patients in state $s$ registered as alive in a TARN hospital at any time during day $t$ after injury (in this number we include the patients who have stayed at a TARN hospital during day $t$ after injury and the patients who have died in a TARN hospital, have been discharged, or have been transferred outside TARN on this day);
* 2. $\Delta D(t,s)$ – the number of patients in state $s$ who died in TARN hospitals on day $t$ after injury;
* 3.
$\Delta R(t,s)$ – the number of patients in state $s$ who recovered (were discharged to one of the final recovery destinations) from TARN hospitals on day $t$ after injury;
* 4. $\Delta L(t,s)$ – the number of patients in state $s$ who were transferred out of TARN hospitals to other hospitals, institutions or unknown destinations on day $t$ after injury.

As a consistency check, the following identity should hold:

$H(t+1,s)=H(t,s)-\Delta D(t,s)-\Delta R(t,s)-\Delta L(t,s)$

because state $s$ in our models does not change in time. For the model with advanced transfer from TARN hospitals the coefficients are defined following the scheme presented in Fig. 3:

$\begin{split}&\mu(t,s)=\frac{\Delta L(t,s)}{H(t,s)};\;\nu(t,s)=\frac{\Delta D(t,s)}{(1-\mu(t,s))H(t,s)};\;\\ &\alpha(t,s)=\frac{\Delta R(t,s)}{(1-\mu(t,s))H(t,s)}.\end{split}$ (9)

For the model with retarded transfer from TARN hospitals the coefficients are defined following the scheme presented in Fig. 4:

$\begin{split}&\nu(t,s)=\frac{\Delta D(t,s)}{H(t,s)};\;\alpha(t,s)=\frac{\Delta R(t,s)}{H(t,s)};\;\\ &\mu(t,s)=\frac{\Delta L(t,s)}{(1-\alpha(t,s)-\nu(t,s))H(t,s)}.\end{split}$ (10)

### 4.3 Evaluation of FOD

Each model provides us with a corrected FOD. We use the basic assumption that the probability of dying at time $t$ after injury depends on $s$ but is the same inside and outside TARN. For each $t$ and $s$ we define the specific cumulative FOD (scFOD$(t,s)$) as the fraction of patients with state $s$ who died during the time interval $[1,t]$:

$\begin{split}\mbox{scFOD}(t,s)=\nu(1,s)&+\nu(2,s)(1-\alpha(1,s)-\nu(1,s))+\ldots\\ &+\nu(t,s)\prod_{i=1}^{t-1}(1-\alpha(i,s)-\nu(i,s)).\end{split}$ (11)

The cumulative FOD at time $t$ (cFOD$(t)$) for the whole model (for all $s$ together) is

$\mbox{cFOD}(t)=\frac{\sum_{s}\mbox{scFOD}(t,s)H(1,s)}{H_{0}},$ (12)

where $H_{0}=\sum_{s}H(1,s)$ is the total number of patients in our dataset (in our case study, $H_{0}=165,559$). The functions cFOD$(t)$ and scFOD$(t,s)$, for all $s$, grow monotonically with $t$. If we define the final outcome as survival or death within 30 days after injury then the target value is FOD = cFOD(30).

Let us compare the following two naïve approaches to the handling of missing outcomes with the Markov models we have created.

* 1. Available case analysis. Just delete all of the 19,289 cases with the outcome ‘Transferred OUT OF TARN within 30 days after injury’ from the dataset. In the remaining cases all outcomes are known and the FOD is the ratio $\frac{\mbox{Dead (W30D)}}{\mbox{Total}}$ in the reduced dataset.
* 2. Consider all transferred patients as alive. In this case, the total number of patients does not change and the FOD is the ratio $\frac{\mbox{Dead (W30D)}}{\mbox{Total}}$, where the number ‘Dead (W30D)’ is the same but the number ‘Total’ is calculated for the whole original dataset (Table 1).

###### Remark 1.

If we apply available case analysis then none of the numbers $\Delta D(t,s)$ and $\Delta R(t,s)$ change, but the numbers $H(t,s)$ of the patients in TARN will decrease for all $t$ and $s$ (or remain unchanged if there is nothing to delete). The corresponding mortality coefficients $\nu(t,s)$ will then be larger than the coefficients (9), (10) for all the Markov models considered above. This means that the MCAR (Missing Completely At Random) approach to missing outcomes always overestimates mortality, while the second naïve approach (‘Consider all transferred patients as alive’) always underestimates mortality.
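The quantities above map directly onto code. The following sketch of formulas (8)–(12) is an illustration under our own assumption about the data layout (the daily counts `H`, `dD`, `dR`, `dL` stored as dictionaries keyed by `(t, s)`), not the authors' implementation:

```python
import math

def wilson_ci(p_hat, n, z=1.96):
    """Wilson score interval (8) for a proportion estimate."""
    centre = (p_hat + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(
        p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def coefficients(H, dD, dR, dL, t, s, advanced=True):
    """Daily transition probabilities: formula (9) if advanced, (10) if retarded."""
    h = H[(t, s)]
    if advanced:
        mu = dL[(t, s)] / h
        nu = dD[(t, s)] / ((1 - mu) * h)
        alpha = dR[(t, s)] / ((1 - mu) * h)
    else:
        nu = dD[(t, s)] / h
        alpha = dR[(t, s)] / h
        mu = dL[(t, s)] / ((1 - alpha - nu) * h)
    return alpha, nu, mu

def scfod(H, dD, dR, dL, s, t_max=30, advanced=True):
    """Specific cumulative FOD (11): fraction of state-s patients dying in [1, t_max]."""
    total, survive = 0.0, 1.0
    for t in range(1, t_max + 1):
        alpha, nu, _ = coefficients(H, dD, dR, dL, t, s, advanced)
        total += nu * survive
        survive *= 1 - alpha - nu
    return total

def cfod(H, dD, dR, dL, states, t_max=30, advanced=True):
    """Cumulative FOD (12): the H(1, s)-weighted average of scFOD over states."""
    h0 = sum(H[(1, s)] for s in states)
    return sum(scfod(H, dD, dR, dL, s, t_max, advanced) * H[(1, s)]
               for s in states) / h0
```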
We have created six Markov models for the mortality of transferred patients. They differ by the state variable $s$ (the coarsest model without $s$, the maximal severity model with six states and the binned NISS model with seven states) and by the order of the ‘recovery/death’ and ‘transfer’ lotteries (Fig. 2). In Table 4 we compare the mortality evaluated by these models and by the two naïve models. We can see that the differences between our Markov models are not significant; we cannot reject the hypothesis that they coincide with any one of them ($p$-value between 0.20 and 0.57). Both of the naïve models differ significantly from all of the six Markov models. The difference between the naïve models is also significant. The interval of mortality predicted by the Markov models is $(6.77\%,6.91\%)$. The average of the six Markovian predictions is 6.84%. None of the Markov model predictions differ significantly from this average. Both of the naïve predictions are significantly different.

Table 4: FOD for different models; the $p$-value is the probability that the difference between FOD and 6.85% (FOD for the coarsest model with advanced transfer) appears by chance.

Model | Alive | Dead | FOD | $p$-value
---|---|---|---|---
Available case study | 135,733 | 10,537 | 7.20% | $1.3\times 10^{-8}$
All transferred are alive | 155,022 | 10,537 | 6.36% | $5.0\times 10^{-15}$
Coarsest advanced | 154,217 | 11,342 | 6.85% | 1.00
Coarsest retarded | 154,350 | 11,209 | 6.77% | 0.20
Max severity, advanced | 154,120 | 11,439 | 6.91% | 0.34
Max severity, retarded | 154,266 | 11,293 | 6.82% | 0.41
NISS binned, advanced | 154,145 | 11,414 | 6.89% | 0.48
NISS binned, retarded | 154,292 | 11,267 | 6.81% | 0.57

### 4.4 Validation of the models on the excluded trauma cases: patients transferred to TARN (‘IN30’)

For each type of model the coefficients $\mu$, $\alpha$ and $\nu$ are evaluated using the dataset of 165,559 patients entering TARN on the first day of injury (Fig. 1, Main Group). Let us now test the models, with the coefficients evaluated above, on data we have not used before. These data consist of the 16,693 cases who came to TARN hospitals more than one day after injury and which we deleted from the original set before modelling. This is a special and biased sample (see Fig. 1). We now apply the models developed and identified in the previous subsections to analyse this sample.

We expect that there should be some similarity between the group of patients transferred from TARN (‘OUT30’) and the patients transferred to TARN (‘IN30’) (Fig. 1). We do not expect quantitative coincidence of the results for the groups ‘OUT30’ and ‘IN30’ because there is no precise symmetry between the patients moved to TARN and the patients moved from TARN. The hospitals in TARN are those with a special interest in trauma – in particular the large major trauma centres – so the transfers in (mainly for acute specialist care) will not be the same as those transferred out (mainly for complex rehabilitation, or special geriatric care, etc.). Therefore, the estimated behaviour of the mortality of the group transferred from TARN can be qualitatively validated using the observed mortality in the group who moved to TARN.

We consider survival during the first 30 days. Hence we have to use only the records which correspond to this period. There are 15,437 such records (the subgroup ‘IN30’) among the 16,693. In these estimates of the FOD we explicitly use the empirical fluxes into and from TARN hospitals.
For each $t,s$ we have the following quantities:

* 1. $L_{\rm in}(t,s)$ – the number of patients in state $s$ who came to TARN on day $t$ after injury;
* 2. $L_{\rm out}(t,s)$ – the number of patients in state $s$ from ‘IN30’ who were transferred from TARN on day $t$ after injury;
* 3. $h_{\rm IN30}(t,s)$ – the number of patients in ‘IN30’ in state $s$ on day $t$ after injury;
* 4. $D_{\rm IN30}(t,s)$ – the number of deaths in TARN of the patients from ‘IN30’ in state $s$ by day $t$ after injury (cumulative);
* 5. $R_{\rm IN30}(t,s)$ – the number of patients in ‘IN30’ in state $s$ who recovered by day $t$ after injury (cumulative).

We take the values $L_{\rm in}(t,s)$ and $L_{\rm out}(t,s)$ from the database, evaluate $h_{\rm IN30}(t,s)$, $D_{\rm IN30}(t,s)$, and $R_{\rm IN30}(t,s)$ for every model, and then compare the resulting outcomes (the evaluated numbers of deaths in TARN of the patients from ‘IN30’ within 30 days of injury, $\sum_{s}D_{\rm IN30}(30,s)$) to the empirical data from TARN records.

For each model with advanced transfer the variables $h_{\rm IN30}(t,s)$, $D_{\rm IN30}(t,s)$, and $R_{\rm IN30}(t,s)$ are evaluated by the recurrence formulas

$\begin{split}h_{\rm IN30}(t+1,s)&=[h_{\rm IN30}(t,s)+L_{\rm in}(t+1,s)\\ -&L_{\rm out}(t+1,s)][1-\alpha(t+1,s)-\nu(t+1,s)];\\ R_{\rm IN30}(t+1,s)&=R_{\rm IN30}(t,s)+\alpha(t+1,s)\\ \times[&h_{\rm IN30}(t,s)+L_{\rm in}(t+1,s)-L_{\rm out}(t+1,s)];\\ D_{\rm IN30}(t+1,s)&=D_{\rm IN30}(t,s)+\nu(t+1,s)\\ \times[&h_{\rm IN30}(t,s)+L_{\rm in}(t+1,s)-L_{\rm out}(t+1,s)],\end{split}$ (13)

with initial condition

$h_{\rm IN30}(0,s)=R_{\rm IN30}(0,s)=D_{\rm IN30}(0,s)=0.$

For each model with retarded transfer the variables $h_{\rm IN30}(t,s)$, $D_{\rm IN30}(t,s)$, and $R_{\rm IN30}(t,s)$ are evaluated by the recurrence formulas

$\begin{split}h_{\rm IN30}(t+1,s)=&h_{\rm IN30}(t,s)[1-\alpha(t+1,s)-\nu(t+1,s)]\\ &+L_{\rm in}(t+1,s)-L_{\rm out}(t+1,s);\\ R_{\rm IN30}(t+1,s)=&R_{\rm IN30}(t,s)+\alpha(t+1,s)h_{\rm IN30}(t,s);\\ D_{\rm IN30}(t+1,s)=&D_{\rm IN30}(t,s)+\nu(t+1,s)h_{\rm IN30}(t,s),\end{split}$ (14)

with initial condition

$h_{\rm IN30}(0,s)=R_{\rm IN30}(0,s)=D_{\rm IN30}(0,s)=0.$

For each model, the coefficients $\alpha(t,s)$ and $\nu(t,s)$ are evaluated using the previously analysed dataset (without ‘IN30’) by formulas (9) and (10). The results are presented in Table 5.

Table 5: Comparison of the models with the empirical data about patients from ‘IN30’.

Model | Alive | Dead | Total | FOD | CI 95
---|---|---|---|---|---
Empirical data | 13,038.00 | 417.00 | 13,455.00 | 3.10% | 2.82–3.41%
Coarsest advanced | 12,834.55 | 620.45 | 13,455.00 | 4.61% | 4.27–4.98%
Coarsest retarded | 12,933.67 | 521.33 | 13,455.00 | 3.87% | 3.56–4.21%
Max severity, advanced | 12,824.90 | 630.10 | 13,455.00 | 4.68% | 4.34–5.05%
Max severity, retarded | 12,920.71 | 534.29 | 13,455.00 | 3.97% | 3.65–4.31%
NISS binned, advanced | 12,885.93 | 569.07 | 13,455.00 | 4.23% | 3.90–4.58%
NISS binned, retarded | 12,971.22 | 483.78 | 13,455.00 | 3.60% | 3.29–3.92%

We can see that all the models overestimate mortality in ‘IN30’. The available case analysis demonstrates the worst performance (the relative error exceeds 100% of the empirical mortality). Models with retarded transfer perform better in this test than the models with advanced transfer. The NISS binned model with retarded transfer is the best: the relative error in the prediction of FOD is 16% of the empirical value and, at least, the 95% confidence intervals for the result of this model and for the empirical data intersect.
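The recurrences (13) and (14) translate directly into code. In the sketch below, the empirical fluxes `L_in(t, s)`, `L_out(t, s)` and the fitted coefficients `alpha(t, s)`, `nu(t, s)` are assumed to be supplied as functions; this interface is our assumption, not the authors' implementation.

```python
def predict_in30_deaths(L_in, L_out, alpha, nu, states, t_max=30,
                        advanced=True):
    """Predicted deaths, sum over s of D_IN30(t_max, s), via (13)/(14)."""
    total_dead = 0.0
    for s in states:
        h = R = D = 0.0          # h_IN30(0,s) = R_IN30(0,s) = D_IN30(0,s) = 0
        for t in range(1, t_max + 1):
            if advanced:         # recurrence (13): fluxes enter the day's pool
                pool = h + L_in(t, s) - L_out(t, s)
                R += alpha(t, s) * pool
                D += nu(t, s) * pool
                h = pool * (1 - alpha(t, s) - nu(t, s))
            else:                # recurrence (14): fluxes added after the lottery
                R += alpha(t, s) * h
                D += nu(t, s) * h
                h = h * (1 - alpha(t, s) - nu(t, s)) \
                    + L_in(t, s) - L_out(t, s)
        total_dead += D
    return total_dead
```

Comparing this prediction with the 417 observed deaths in ‘IN30’ is exactly the validation reported in Table 5.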
There exist further possibilities for improving the presented models, but even a relative error of 16% (as observed for ‘IN30’) in the mortality of transferred patients translates, in the estimation for the total database, into a relative error in the FOD of $\lesssim$1% (an absolute error of $\lesssim$0.07%). That is much better than the errors of the available case evaluations or of the ‘all are alive’ approach to the evaluation of the mortality of transferred patients.

## 5 Model refinement

We use a coarse model based on the severity of trauma for the evaluation of FOD in the group ‘OUT30’. The reason for the selection of such a coarse model is that the fraction of cases in the ‘OUT30’ cohort is relatively small with respect to the ‘Available W30D’ cases. As we can see from Table 3, this fraction is relatively small in all cells except the small severities (NISS bin 1-3). For refinement of the Markov model for this cell, we compare the age structure of the ‘Available W30D’ and the ‘OUT30’ fractions of this severity bin (Fig. 5). We see that the fraction of elderly patients with low severities in TARN hospitals is high, whereas for patients transferred from TARN this fraction is much lower. Mortality in the group of patients 74.5+ is much higher than in the adult group; therefore the unrefined model overestimates mortality in the low severity states.

To refine the model let us use two cells for low severity: ‘NISS 1-3 y’ (NISS bin 1-3 and age $<54.5$) and ‘NISS 1-3 o’ (NISS bin 1-3 and age $>54.5$). This refined model gives a significantly different FOD for NISS 1-3. In the cell ‘NISS 1-3 y’ the corrected FOD is 0.54% and in the cell ‘NISS 1-3 o’ it is 4.08% (almost eight times greater). The corrected overall FOD for NISS 1-3 is 1.42% versus 2.68% in the NISS retarded model without the above refinement. The effect of the refinement on the FOD for all trauma cases is smaller because the fraction of traumas with NISS severity 1-3 is relatively small (2.0%). For the refined model with retarded transfer the FOD for transferred patients decreases from 3.79% (retarded transfer NISS model) to 3.59%, and the total fraction of deaths changes from 6.81% to 6.78% (compare to Table 4).

## 6 Weighting adjustment of death cases for further analysis

Single imputation of missing values does not reflect the uncertainty in the data properly. From the probabilistic point of view, a datapoint with missing values should be considered as a conditional probability distribution of the form

$\mathbf{P}(\mbox{missing values }|\mbox{ known values}).$

Two approaches utilize this idea: multiple imputation and weighting adjustment. In the multiple imputation approach several replicas of the database are created, which differ in the imputed values [40, 41, 23, 51]. The distribution of these values should reflect the conditional means and conditional variances of the imputed attributes. It is not completely clear how many imputations should be generated. Rubin claims that “typically as few as five multiple imputations (or even three in some cases) is adequate under each model for nonresponse” [41]. Nevertheless, more recently, Graham et al produced practical recommendations for the selection of the number of imputations $m$ and demonstrated that a reasonable choice is $m\geq 20$, and that for some cases $m=100$ is not enough [23]. Multiple imputation algorithms are implemented in standard statistical software [38]. Sterne et al [51] discussed the use and misuse of imputation in epidemiological and clinical research and tried to produce a standard for reporting the handling of missing data in medical research.
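For contrast with the cell weighting used in the rest of this section, a minimal multiple-imputation sketch for a binary outcome is shown below. It is an illustration only: the per-cell death probabilities `p_death` would in practice come from a mortality model such as the Markov models above, and all names here are ours.

```python
import numpy as np

def pooled_fod(known_dead, known_alive, cells, p_death, m=20, seed=0):
    """Multiple imputation of unknown binary outcomes, pooled by averaging.

    known_dead, known_alive: counts of cases with known outcome;
    cells: list with the cell label of each case with unknown outcome;
    p_death: dict mapping a cell label to its estimated death probability.
    """
    rng = np.random.default_rng(seed)
    probs = np.array([p_death[c] for c in cells])
    n_total = known_dead + known_alive + len(cells)
    estimates = []
    for _ in range(m):
        imputed_dead = int((rng.random(len(cells)) < probs).sum())
        estimates.append((known_dead + imputed_dead) / n_total)
    # Rubin's rule for a point estimate: the mean over imputations; the
    # spread of `estimates` indicates the extra variance due to the
    # missing outcomes.
    return float(np.mean(estimates))
```

With $m\geq 20$ imputations, as recommended in [23], the pooled estimate stabilizes; the weighting adjustment described next avoids creating the $m$ replicas altogether.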
The weighting adjustment approach replaces a datapoint with missing values by a set of additional weights on the complete datapoints [33, 26, 34]. The simplest version of this approach is the cell weighting adjustment. It follows the assumption that the complete datapoints within a cell represent the incomplete datapoints within that cell. An incomplete datapoint within the cell is substituted by the equidistribution on the complete datapoints there. Of course, cell weighting can inflate the variances for large cells.

In this section, we use cell weighting adjustments for the handling of missing outcomes. Cells are defined by the state $s$ and the outcome. We will use the database for the evaluation of the death risk for trauma patients. The ‘Main Group’ selected for further analysis includes the ‘OUT30’ subgroup with 19,289 data cases transferred from TARN hospitals within 30 days after injury (Fig. 1). The targeted outcome (alive or dead within 30 days after injury) is unknown for these patients. Data without outcome cannot be used for risk evaluation and should be deleted. Let us call the result of this deletion the truncated database. It was demonstrated in the previous sections that the simple removal of the cases with unknown outcome shifts the risk estimates: the proportion of Dead and Alive outcomes in the truncated database differs from reality and the risk is overestimated (the pessimistic evaluation). This bias may be compensated for by reweighting the cases with known outcomes. There are 146,270 such ‘Available W30D’ cases.

In this section we estimate the weights $w(t,s)$ that should be assigned to the cases of death on day $t$ after injury in state $s$ to preserve the probability of death in the truncated database. For the estimation of the proper FOD that should be kept, we use the Markov model of mortality based on binned NISS with retarded transfer out of TARN (the transfer lottery after the selection of dead and recovered patients, see Fig. 4). This model demonstrates the best verification results (Table 5) and is the most plausible from the common sense point of view.

According to the model, the probability of a patient in state $s$ dying on day $t$ after injury is evaluated as

$p_{d}(t,s)=\frac{\Delta D(t,s)+\Delta D_{L}(t,s)}{H_{0}(s)},$

where $H_{0}(s)=H(1,s)$ is the initial number of patients in state $s$ on the first day after injury. For the truncated data with weights this probability is evaluated as the ratio of the sums with weights:

$p_{d}^{w}(t,s)=\frac{w(t,s)\Delta D(t,s)}{H_{0}^{w}(s)},$ (15)

where

${H_{0}^{w}(s)}=H(31,s)+R(30,s)+\sum_{t=1}^{30}w(t,s)\Delta D(t,s)$ (16)

and the superscript $w$ corresponds to the truncated dataset with weights. The numbers $H(t,s)$, $R(t,s)$ and $\Delta D(t,s)$ are the same for the original and truncated datasets.

The probability of dying within 30 days from injury is evaluated as the proportion of deaths (we use the model to find $D_{L}(30,s)$)

$p_{d}(s)=\frac{D(30,s)+D_{L}(30,s)}{H_{0}(s)}.$

For the truncated database $p_{d}(s)$ is evaluated as the proportion of weighted deaths:

$p^{w}_{d}(s)=\frac{\sum_{t=1}^{30}w(t,s)\Delta D(t,s)}{H_{0}^{w}(s)}.$

These two evaluations should give the same number. Therefore, the weighted sum of deaths for the truncated database is

$\sum_{t=1}^{30}w(t,s)\Delta D(t,s)=\frac{p_{d}(s)}{1-p_{d}(s)}(H(31,s)+R(30,s)).$

The last expression in the brackets is just the number of ‘Alive within 30 days’ outcomes.
Immediately we get

$H_{0}^{w}(s)=\frac{1}{1-p_{d}(s)}(H(31,s)+R(30,s)).$

The formula for the calculation of the weights of the death cases in the truncated database is

$w(t,s)=\frac{p_{d}(t,s)H_{0}^{w}(s)}{\Delta D(t,s)}.$ (17)

The weighting procedure changes the number of effective degrees of freedom and can affect the statistical power of the dataset, but for the TARN dataset this change is rather minor. For example, for the standard problem of the evaluation of the confidence interval in the proportion estimate, the number of degrees of freedom $n_{w}$ in the weighted database with weights $w_{i}$ is

$n_{w}=\frac{\left(\sum_{i}w_{i}\right)^{2}}{\sum_{i}{w_{i}^{2}}}.$ (18)

For our dataset $n_{w}=143,574.85$ and the number of Available W30D records is 146,270 (Fig. 1). The difference of degrees of freedom for the non-weighted and weighted datasets is less than 2%.

## 7 FOD and patterns of mortality

The models we have developed allow us to evaluate the FOD for various groups of patients. The rich TARN data give us the chance of studying various special groups and detailed stratifications of the trauma cases: by the severities of various injuries in combined traumas, by the age of patients, and by time (day) after trauma. Each example below is supplemented by a medical commentary.

### 7.1 Example: FOD as function of age

The age distribution of trauma cases and the dependence of FOD on age are shown in Fig. 6. Here we find a surprisingly high accuracy of the piecewise linear approximation of FOD for adult and elderly patients, with a jump in the slope at $\mbox{age}\approx 62$.

Figure 6: Age distribution of trauma cases in the ‘Available W30D’ group and the FOD (corrected) as a function of age. The piecewise linear segmentation of FOD(age) has an obvious break point at $\mbox{age}\approx 62$. The number of cases per year in the dataset drops at age 65 because for age$\geq 65$ some traumas are excluded from the database (see Fig. 1).

#### Medical commentary

The increase in mortality with age is well established. Previous versions of the standard trauma outcome prediction system had two different models with an age cutoff at 55 years. More recent models have age as a weighted continuous variable with an interaction term between gender and age. There has been a dramatic change in the trauma population over the last 10 years, with a rapid increase in the number of older patients with major injury. Understanding the effects of age on trauma care and adapting to a changing population will be a key challenge for trauma systems in the developed world over the next 10 years.

### 7.2 Example: FOD of combined traumas of various severity

Evaluation of the severity of combined traumas is a classical problem. The very popular solution is NISS – the sum of the squares of the three maximal severities, $s_{1}^{2}+s_{2}^{2}+s_{3}^{2}$ ($s_{1}\geq s_{2}\geq s_{3}$) (see, for example, [29, 53, 52]). The best severity score should give the best evaluation of mortality. This is a basic and rather old idea for defining and comparing trauma indices [44]. Of course, it is possible to use three (or more) severities together as a multi-dimensional trauma severity index (a ‘severity profile’ [43]), but the combination in one index may be beneficial from different points of view. The simplest method of combination is:

* 1. Calculate FOD for every combination of severities for combined traumas for a large database;
* 2. Either use this FOD instead of the severity score,
* 3.
Or find and use a convenient analytic approximation for this FOD (smoothed FOD).

Of course, such evaluations of probabilities for several input attributes have been used by many authors and compared to other approaches [4, 5]. In this paper, we use the TARN database and evaluate the FOD of combined traumas as a function of three input attributes, the three largest severity scores $s_{1}\geq s_{2}\geq s_{3}$ (as in NISS). We use the dataset of 146,270 ‘Available W30D’ patients who approached TARN during the first day of injury and remained in TARN hospitals or were discharged to a final destination within the first 30 days after injury (Fig. 1). Using our models, we calculate estimates with weights which take into account the modelled mortality/survival of the patients transferred from TARN and of other patients with unknown outcomes. Results for the maximal severity $s_{1}=5$ are presented in Table 6. The available case analysis gives qualitatively the same results; hence, the effects we observe are not generated by the reweighting procedure.

Table 6: FOD for the maximal severity $s_{1}=5$ and various $s_{2}$ and $s_{3}$ for data after reweighting.

$s_{2}$ \ $s_{3}$ | 0 | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---|---
0 | 0.3590 | | | | |
1 | 0.2324 | 0.2906 | | | |
2 | 0.1566 | 0.1496 | 0.0791 | | |
3 | 0.2466 | 0.2064 | 0.1315 | 0.1439 | |
4 | 0.2579 | 0.2881 | 0.1643 | 0.2105 | 0.3113 |
5 | 0.4073 | 0.5668 | 0.4067 | 0.3666 | 0.4140 | 0.5908

The results presented in Table 6 seem to be counterintuitive: the FOD for combined injuries with severities $s_{1}=5$ and $1\leq s_{2}\leq 4$ is less than the FOD for $s_{2}=s_{3}=0$ and the same maximal severity $s_{1}=5$. Similar non-monotonic behaviour is observed for other values of the maximal severities. Elementary estimates demonstrate that the probability $p$ of obtaining these (or larger) downward deviations from the FOD for single injuries ($s_{1}=5$, $s_{2}=s_{3}=0$) simultaneously for all cases with $1\leq s_{2}\leq 4$ is less than $10^{-10}$. The numbers of cases used for these estimates are given in Table 7. If the second severity coincides with the maximal one, $s_{2}=s_{1}=5$, then the FOD is larger than for single traumas.

Table 7: Number of cases for the maximal severity $s_{1}=5$ and various $s_{2}$ and $s_{3}$.

$s_{2}$ \ $s_{3}$ | 0 | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---|---
0 | 1,376 | | | | |
1 | 276 | 101 | | | |
2 | 302 | 163 | 332 | | |
3 | 577 | 243 | 645 | 1,580 | |
4 | 349 | 140 | 203 | 2,653 | 2,301 |
5 | 387 | 102 | 95 | 807 | 2,159 | 1,842

It may be convenient to have formulas for the estimation of FOD. This smoothed FOD (${\rm sFOD}_{s_{1}}$) is found for $s_{1}=2,\ldots,5$ as a linear combination of $s_{2,3}$ and $s^{2}_{2,3}$ (19). For $s_{1}=1$ the simple formulas do not make much sense and we have to use the refined model with the inclusion of age (Sec. 5); the number of cases is not sufficient for a good approximation in this extended model. For $s_{1}=6$ the number of cases is also not sufficient, and we use three bins for the trauma severities marked by the values of the coarse-grained variable $\hat{s}$: $0\leq s_{2}\leq 2$ ($\hat{s}_{2}=0$, 48 cases), $3\leq s_{2}\leq 4$ ($\hat{s}_{2}=1$, 53 cases), and $5\leq s_{2}\leq 6$ ($\hat{s}_{2}=2$, 38 cases). ${\rm sFOD}_{6}$ is presented as a quadratic function of $\hat{s}_{2}$.
$\begin{split}{\rm sFOD}_{2}=&\;0.01910+0.02124s_{2}+0.00037s_{3}\\ &-0.01054s_{2}^{2}-0.00084s_{3}^{2};\\ {\rm sFOD}_{3}=&\;0.02202+0.00256s_{2}-0.00238s_{3}\\ &+0.00099s_{2}^{2}+0.00101s_{3}^{2};\\ {\rm sFOD}_{4}=&\;0.06571-0.02075s_{2}-0.03116s_{3}\\ &+0.00706s_{2}^{2}+0.01086s_{3}^{2};\\ {\rm sFOD}_{5}=&\;0.35899-0.13335s_{2}-0.10879s_{3}\\ &+0.02963s_{2}^{2}+0.02748s_{3}^{2};\\ {\rm sFOD}_{6}=&\;0.80297-0.08750\hat{s}_{2}+0.06102\hat{s}_{2}^{2}.\end{split}$ (19)

All the coefficients are estimated using the weighted least squares method. The weight of the severities combination $(s_{1},s_{2},s_{3})$ is defined as the sum of weights of the corresponding trauma cases.

#### Medical commentary

The complete outcome dataset derived from this work allows all patients to be included in the analysis of the effect of combined injuries. The counter-intuitive results from this analysis (some combinations of injuries seem to have better outcomes than a single injury of the same severity) provide a fertile area for further work. It may be that the explanation is technical, within the way that the continuum of human tissue destruction from trauma is reduced to a simple 5-point scale. Each point on the scale is actually a band that covers a range of tissue damage. There might also be a true physiological explanation for the lower lethality of combined injuries, as each injury absorbs some of the force of impact. The same concept is used in Formula 1, where the cars are designed to break into pieces, with each piece absorbing some of the impact. In humans there is a well-known concept that the face can act as a ‘crumple zone’ and mitigate the effect of force on the brain. The effect of injury combinations shown in Table 6 is a novel finding that requires further analysis.
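For convenience, the fitted polynomials of Eq. (19) can be wrapped in a small evaluator. The following Python sketch is our own (the function names and interface are illustrative; only the coefficients come from Eq. (19)):

```python
# Coefficients of Eq. (19): sFOD = c0 + c1*s2 + c2*s3 + c3*s2^2 + c4*s3^2,
# indexed by the maximal severity s1.
COEF = {
    2: (0.01910,  0.02124,  0.00037, -0.01054, -0.00084),
    3: (0.02202,  0.00256, -0.00238,  0.00099,  0.00101),
    4: (0.06571, -0.02075, -0.03116,  0.00706,  0.01086),
    5: (0.35899, -0.13335, -0.10879,  0.02963,  0.02748),
}

def sfod(s1: int, s2: float, s3: float) -> float:
    """Smoothed FOD for 2 <= s1 <= 5, Eq. (19)."""
    c0, c1, c2, c3, c4 = COEF[s1]
    return c0 + c1 * s2 + c2 * s3 + c3 * s2**2 + c4 * s3**2

def sfod6(s2_hat: int) -> float:
    """Smoothed FOD for s1 = 6, using the binned severity s2_hat in {0, 1, 2}."""
    return 0.80297 - 0.08750 * s2_hat + 0.06102 * s2_hat**2

# Consistency check: a single injury of severity 5 (s2 = s3 = 0) gives
# 0.35899, reproducing the 0.3590 entry of Table 6.
print(sfod(5, 0, 0))
```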
### 7.3 Example: Time after trauma, non-monotone and multimodal mortality coefficients

In the early 1980s a hypothetical statement was published that deaths from trauma have a trimodal distribution with the following peaks: immediate, early and late death [3, 35]. This concept was clearly articulated in a popular review paper in Scientific American [55]. The motivation for this hypothesis is simple: Trunkey [55] explains that the distribution of death is the sum of three peaks: “The first peak (‘Immediate deaths’) corresponds to people who die very soon after an injury; the deaths in this category are typically caused by lacerations of the brain, the brain stem, the upper spinal cord, the heart or one of the major blood vessels. The second peak (‘Early deaths’) corresponds to people who die within the first few hours after an injury; most of these deaths are attributable to major internal hemorrhages or to multiple lesser injuries resulting in severe blood loss. The third peak (‘Late deaths’) corresponds to people who die days or weeks after an injury; these deaths are usually due to infection or multiple organ failure.” Strictly speaking, the sum of three peaks does not have to be a trimodal distribution. Many groups have published refutations of trimodality: they did not find the trimodal distribution of death. In 1995, Sauaia et al reported a “greater proportion of late deaths due to brain injury and lack of the classic trimodal distribution” [45]. Wyatt et al could not find this trimodal distribution in data from the Lothian and Borders regions of Scotland between 1 February 1992 and 31 January 1994 [57]. They hypothesised that this may be (partly) due to improvements in care.

Recently, more data have become available and many such reports have been published [12, 27, 6]. The suggestion that the improvement in care has led to the destruction of the second and third peaks has been advanced a number of times [27]. In 2012, Clark et al performed an analysis of the distribution of survival times after injury using interval-censored survival models [10]. They considered the trimodal hypothesis of Trunkey as an artifact and provided arguments that the second peak observed in some works is a result of differences in the definition of death.

Figure 7: Daily coefficient of mortality – the estimated probability that a patient dies on day $t$ given survival during days $1,\ldots,t-1$: a) for NISS 1–8, b) for the whole dataset. The coefficient is filtered by a 5-day moving average starting from the 3rd day. The mortality coefficients are evaluated with the Markov models with retarded transfer. Data for age $<65$ and age $\geq 65$ are represented separately.

K. Søreide et al analysed the time distribution from injury to death stratified by cause of death. They demonstrated that the trimodal structure may perhaps be extracted from the data, but its manifestation is model-dependent (see Fig. 6 in [50]). There were several discussion papers published: “Trimodal temporal distribution of fatal trauma – Fact or fiction?” [2, 28]. The trimodal hypothesis was tested on TARN data [31]. It was demonstrated that “the majority of in hospital trauma deaths occur soon after admission without further peaks in mortality”. Indeed, we reproduce the same results. But the TARN database, the largest European trauma database, allows us to perform a stratified analysis of mortality, and the preliminary results demonstrate the richness of the possible patterns of death.

Let us test the famous Trunkey hypothesis. In Fig. 7 the daily mortality coefficients are presented for low severities (a) (NISS severities 1–8; 27,987 cases in the database, 508 deaths in TARN, 3,983 patients transferred from TARN within 30 days after injury), and for the whole database (b). For the prediction of death in the ‘OUT30’ group we used the model with retarded transfer. The non-monotonicity and peaks in the mortality for low severities of injury are illustrated in Fig. 7. Further analysis of these patterns should involve other attributes such as the age of the patient and the type and localization of the injury.

#### Medical commentary

It has been widely accepted that the Trunkey trimodal distribution was a theoretical concept designed to illustrate the different modes of dying following injury. Previous analysis of trauma data has looked at all patients and has not shown any mortality peaks; however, this new analysis shows that there are peaks (patterns) if subgroups are studied. The underlying clinical or patient factors are not immediately obvious, but future analysis giving a better understanding of patterns of death could act as a stimulus to look for the clinical correlates of these patterns, with the potential to find modifiable factors. The pattern of death in various subgroups as shown in Figure 7 is a novel finding that requires further analysis.
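To make the construction of the curves in Fig. 7 concrete, the following Python sketch computes a daily mortality coefficient from daily counts and applies the 5-day moving average; the input arrays are invented for illustration and are not TARN data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: n_at_risk[t] patients alive at the start of day t+1,
# of whom deaths[t] die on that day (30 days). Not TARN data.
n_at_risk = np.linspace(28_000, 23_000, 30)
deaths = rng.poisson(15, size=30)

# Daily mortality coefficient: P(die on day t | survived days 1..t-1).
h = deaths / n_at_risk

# 5-day centred moving average; the first two and last two days are left
# unfiltered (Fig. 7 applies the filter from the 3rd day).
h_smooth = h.copy()
h_smooth[2:-2] = np.convolve(h, np.ones(5) / 5, mode="valid")

print(np.round(h_smooth[:10], 5))
```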
## 8 Discussion

Handling of data with missed outcomes is one of the first data cleaning tasks. For many healthcare datasets, the problem of lost patients and missed outcomes (in 30 days, in six months or any other period of interest) is important. There are two main approaches to solving this problem:

1. To find the lost patients in other national and international databases;
2. To recover the distribution of the missed outcomes and all their correlations using statistical methods, data mining and stochastic modelling.

Without any doubt the first approach is preferable if it is available: it is better to have complete information when it is possible. Nevertheless, there may be various organizational, economic and informational restrictions. It may be too costly to find the necessary information, or this information may be unavailable or may not even exist in databases. If there is only a small number of lost cases (dozens or even hundreds), then they may be sought individually. However, if there are thousands of losses, then we need either a data integration system with links to appropriate databases, like the whole NHS and ONS data stores (with the assumption that the majority of the missed data may be taken from these stores), or a system of models for the handling of missed data, or both, because we might not expect all missed data to be found in other databases.

In the TARN dataset, which we analyse in this paper, the outcome is unavailable for 19,289 patients. The available case study paradigm cannot be applied to deal with missed outcomes because they are not missed ‘completely at random’. Non-stationary Markov models of missed outcomes allow us to correct the fraction of death. Two naïve approaches give 7.20% (available case study) or 6.36% (if we assume that all unknown outcomes are ‘alive’). The corrected value is 6.78% (refined model with retarded transfer). The difference between the corrected and naïve models is significant, whereas the difference between the different Markov corrections is not significant despite the large dataset.

Non-stationary Markov models for unknown outcomes can utilize any scheme of predictive models using any set of available attributes. We demonstrate the construction of such models using the maximal severity model, the binned NISS model, and binned NISS supplemented by the age structure at low severities. We use weighting adjustment to compensate for the effect of unknown outcomes. The large TARN dataset allows us to use this method without significant damage to the statistical power.

Analysis of mortality for a combination of injuries gives an unexpected result. If $s_{1}\geq s_{2}\geq s_{3}$ are the three maximal severities of injury in a trauma case, then the expected mortality (FOD) is not a monotone function of $s_{2}$, $s_{3}$ for a given $s_{1}$. For example, for $s_{1}=4,5$ the expected FOD first decreases when $s_{2,3}$ grow from 0 to 1–2 and then increases when $s_{2}$ approaches $s_{1}$.

Following the seminal Trunkey paper [55], multimodality of the mortality curves is a widely discussed problem. For the complete TARN dataset the coefficient of mortality monotonically decreases in time, but a stratified analysis of the mortality gives a different result: for lower severities the FOD is a non-monotonic function of the time after injury and may have maxima in the second and third weeks after injury. Perhaps this effect is (partially) related to geriatric traumas. We found that the age distribution of trauma cases is strongly multimodal (Fig. 6). This is important for healthcare planning.
The next step should be the handling of missed values of input attributes in the TARN database. Firstly, we should follow the “Guidelines for reporting any analysis potentially affected by missing data” [51], report the number of missing values for each variable of interest, and try to “clarify whether there are important differences between individuals with complete and incomplete data”. A preliminary analysis of the patterns in the distribution of the missed input data in the TARN dataset already demonstrates that the gaps in the data are highly correlated and need further careful analysis. Secondly, we have to test and compare various methods of handling missing input attributes in the TARN database. It is not necessary to analyse all attributes in the database for mortality prediction and risk evaluation. It has been demonstrated that there may exist an optimal set of input attributes for mortality prediction in emergency medicine, and additional variables may even reduce the value of predictors [20]. Therefore, before the analysis of imputation efficiency, it is necessary to select the set of most relevant variables of interest.

The models developed in this case study can be generalized in several directions. Firstly, for trauma datasets, different attributes could be included in the ‘state’ $s$ for the non-stationary Markov models (Figs. 3, 4). We did not explore all such possibilities but have studied just simple models of the maximal severity and binned NISS. An example of model refinement with the inclusion of age in the state variable $s$ is presented in Section 5. Secondly, the ‘two stage lottery’ non-stationary Markov model could be used as a general solution applicable to any health dataset where ‘TRANSFER IN’ or ‘TRANSFER OUT’ is a feature. Transfer between hospitals is common in healthcare; therefore, we expect that models of this type will be useful for all large healthcare data repositories.

## 9 Summary

1. The Trauma Audit and Research Network (TARN) has collected the largest European trauma database. We have analysed 192,623 cases from the TARN database. We excluded from the analysis 16,693 patients (8.67%) who arrived into TARN hospitals later than 24 hours after injury. The other 146,270 patients (75.94%) approached TARN during the first day of injury and remained in TARN or were discharged to a final destination within 30 days of injury. 19,289 patients (13.19%) from this group were transferred from TARN to another hospital or institution (or an unknown destination) within 30 days of injury. For this subgroup the outcome is unknown.
2. Analysis of the missed outcomes demonstrated that they cannot be considered as missed ‘completely at random’. Therefore, the analysis of available cases is not applicable for the TARN database. Special efforts are needed to handle data with missed outcomes.
3. We have developed a system of non-stationary Markov models for the handling of missed outcomes and validated these models on the data arising from patients who moved to TARN (and were excluded from the model fitting). We have analysed mortality in the TARN database using the Markov models which we have developed and also validated.
4. The results of the analysis were used for weighting adjustment in the available cases database (reweighting of the death cases). The database with adjusted weights can be used for further data mining tasks and will keep the proper fraction of deaths.
5. The age distribution of trauma cases is essentially multimodal, which is important for healthcare planning.
6. Our analysis of the mortality coefficient in the TARN database demonstrates that (i) for complex traumas the fraction of death is not a monotone function of all severities of injuries and (ii) for lower severities the fraction of death is not a monotonically decreasing function of time after injury and may have intermediate peaks in the second and third weeks after injury.
7. The approach developed here can be applied to various healthcare datasets which have the problem of lost patients, inter-hospital transfers and missing outcomes.

## References

* [1] J. Adler-Milstein, A.K. Jha, Healthcare's “Big Data” challenge, The American Journal of Managed Care 19 (7) (2013), 537–538.
* [2] S. Aldrian, T. Nau, V. Vecsei, Trimodal temporal distribution of fatal trauma – Fact or fiction? Injury 39 (8) (2008), 961–962.
* [3] C.C. Baker, L. Oppenheimer, B. Stephens, F.R. Lewis, D.D. Trunkey, Epidemiology of trauma deaths, The American Journal of Surgery 140 (1) (1980), 144–150.
* [4] O. Bouamra, A. Wrotchford, S. Hollis, A. Vail, M. Woodford, F. Lecky, A new approach to outcome prediction in trauma: A comparison with the TRISS model, Journal of Trauma-Injury, Infection, and Critical Care 61 (3) (2006), 701–710.
* [5] T. Brockamp, M. Maegele, C. Gaarder, J.C. Goslings, M.J. Cohen, R. Lefering, P. Joosse, P.A. Naess, N.O. Skaga, T. Groat, S. Eaglestone, M.A. Borgman, P.C. Spinella, M.A. Schreiber, K. Brohi, Comparison of the predictive performance of the BIG, TRISS, and PS09 score in an adult trauma population derived from multiple international trauma registries, Critical Care 17 (2013), R134, http://ccforum.com/content/17/4/R134
* [6] D. Chalkley, G. Cheung, M. Walsh, N. Tai, Deaths from trauma in London – a single centre experience, Emergency Medicine Journal (2010), doi:10.1136/emj.2009.085613.
* [7] H.R. Champion, W.S. Copes, W.J. Sacco, C.F. Frey, J.W. Holcroft, D.B. Hoyt, J.A. Weigelt, Improved predictions from a severity characterization of trauma (ASCOT) over Trauma and Injury Severity Score (TRISS): results of an independent evaluation, Journal of Trauma-Injury, Infection, and Critical Care 40 (1) (1996), 42–49.
* [8] F. Cismondia, A.S. Fialho, S.M. Vieira, S.R. Reti, J.M.C. Sousa, S.N. Finkelstein, Missing data in medical databases: Impute, delete or classify? Artificial Intelligence in Medicine 58 (2013), 63–72.
* [9] D.E. Clark, K.L. Anderson, D.R. Hahn, Evaluating an inclusive trauma system using linked population-based data, Journal of Trauma-Injury, Infection, and Critical Care 57 (2004), 501–509.
* [10] D.E. Clark, J. Qian, K.C. Sihler, L.D. Hallagan, R.A. Betensky, The distribution of survival times after injury, World Journal of Surgery 36 (7) (2012), 1562–1570.
* [11] T.J. Coats, F. Lecky, M. Woodford, Beyond the trauma registry, Journal of the Royal Society of Medicine 102 (8) (2009), 308–309.
* [12] D. Demetriades, B. Kimbrell, A. Salim, G. Velmahos, P. Rhee, C. Preston, G. Gruzinski, L. Chan, Trauma deaths in a mature urban trauma system: is “trimodal” distribution a valid concept? Journal of the American College of Surgeons 201 (3) (2005), 343–348.
* [13] R.H. Dolin, L. Alschuler, S. Boyer, C. Beebe, F.M. Behlen, P.V. Biron, A. Shabo (Shvo), HL7 clinical document architecture, release 2, Journal of the American Medical Informatics Association 13 (1) (2006), 30–39.
* [14] A.R.T. Donders, G.J.M.G. van der Heijden, T. Stijnen, K.G.M. Moons, Review: a gentle introduction to imputation of missing values, Journal of Clinical Epidemiology 59 (10) (2006), 1087–1091.
* [15] G.
Fuller, O. Bouamra, M. Woodford, T. Jenks, H. Patel, T.J. Coats, P. Oakley, A.D. Mendelow, T. Pigott, P.J. Hutchinson, F. Lecky, The effect of specialist neurosciences care on outcome in adult severe head injury: a cohort study, Journal of Neurosurgical Anesthesiology 23 (3) (2011), 198–205.
* [16] G. Fuller, O. Bouamra, M. Woodford, T. Jenks, S. Stanworth, S. Allard, T.J. Coats, K. Brohi, F. Lecky, Recent massive blood transfusion practice in England and Wales: view from a trauma registry, Emergency Medicine Journal 29 (2) (2012), 118–123.
* [17] G. Fuller, O. Bouamra, M. Woodford, T. Jenks, H. Patel, T.J. Coats, P. Oakley, A.D. Mendelow, T. Pigott, P.J. Hutchinson, F. Lecky, Temporal trends in head injury outcomes from 2003 to 2009 in England and Wales, British Journal of Neurosurgery 25 (3) (2011), 414–421.
* [18] B.J. Gabbe, F.E. Lecky, O. Bouamra, M. Woodford, T. Jenks, T.J. Coats, P.A. Cameron, The effect of an organized trauma system on mortality in major trauma involving serious head injury: a comparison of the United Kingdom and Victoria, Australia, Annals of Surgery 253 (1) (2011), 138–143.
* [19] M.A. Goldfarb, W.J. Sacco, M.A. Weinstein, T.F. Ciurej, R.A. Cowley, H.R. Champion, W. Gill, W.B. Long, T.C. McAslan, Two prognostic indices for the trauma patient, Computers in Biology and Medicine 7 (1) (1977), 21–25.
* [20] S. Goodacre, J. Turner, J. Nicholl, Prediction of mortality among emergency medical admissions, Emergency Medicine Journal 23 (5) (2006), 372–375.
* [21] M.H. Gorelick, Bias arising from missing data in predictive models, Journal of Clinical Epidemiology 59 (10) (2006), 1115–1123.
* [22] J.W. Graham, Missing Data: Analysis and Design, Series: Statistics for Social and Behavioral Sciences, Springer, 2012.
* [23] J.W. Graham, A.E. Olchowski, T.D. Gilreath, How many imputations are really needed? Some practical clarifications of multiple imputation theory, Prevention Science 8 (3) (2007), 206–213.
* [24] H.R. Guly, O. Bouamra, M. Spiers, P. Dark, T. Coats, F.E. Lecky, Vital signs and estimated blood loss in patients with major trauma: testing the validity of the ATLS classification of hypovolaemic shock, Resuscitation 82 (5) (2011), 556–559.
* [25] J.M. Jones, A.D. Redmond, J. Templeton, Uses and abuses of statistical models for evaluating trauma care, Journal of Trauma-Injury, Infection, and Critical Care 38 (1) (1995), 89–93.
* [26] G. Kalton, I. Flores-Cervantes, Weighting methods, in New Methods for Survey Research, A. Westlake, J. Martin, M. Rigg, C. Skinner (Eds), Association for Survey Computing, Chesham, Bucks, 1998, 79–98.
* [27] C. de Knegt, S.A.G. Meylaerts, L.P.H. Leenen, Applicability of the trimodal distribution of trauma deaths in a Level I trauma centre in the Netherlands with a population of mainly blunt trauma, Injury 39 (9) (2008), 993–1000.
* [28] A.J. Krüger, K. Søreide, Trimodal temporal distribution of fatal trauma – Fact or fiction? Injury 39 (8) (2008), 960–961.
* [29] A. Lavoie, L. Moore, N. LeSage, M. Liberman, J.S. Sampalis, The New Injury Severity Score: a more accurate predictor of in-hospital mortality than the Injury Severity Score, Journal of Trauma-Injury, Infection, and Critical Care 56 (2004), 1312–1320.
* [30] R. Lefering, Trauma score systems for quality assessment, European Journal of Trauma 28 (2) (2002), 52–63.
* [31] T. Leckie, I. Roberts, F. Lecky, Timing of trauma deaths within UK hospitals, TARN e-print.
* [32] F. Lecky, M. Woodford, A. Edwards, O. Bouamra, T.
Coats, Trauma scoring systems and databases, British Journal of Anaesthesia 113 (2) (2014), 286–294.
* [33] R.J.A. Little, Missing-data adjustments in large surveys, Journal of Business & Economic Statistics 6 (3) (1988), 287–296.
* [34] R.J.A. Little, S. Vartivarian, On weighting the rates in non-response weights, Statistics in Medicine 22 (9) (2003), 1589–1599.
* [35] D.K. Lowe, H.L. Gately, J.R. Goss, C.L. Frey, C.G. Peterson, Patterns of death, complication, and error in the management of motor vehicle accident victims: implications for a regional system of trauma care, Journal of Trauma-Injury, Infection, and Critical Care 23 (6) (1983), 503–509.
* [36] T.D. Pigott, A review of methods for missing data, Educational Research and Evaluation 7 (4) (2001), 353–383.
* [37] K.G. Ringdal, T.J. Coats, R. Lefering, S. Di Bartolomeo, P.A. Steen, O. Røise, L. Handolin, H.M. Lossius, and the Utstein TCD expert panel, The Utstein template for uniform reporting of data following major trauma: a joint revision by SCANTEM, TARN, DGU-TR and RITG, Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine 16 (1) (2008), 7.
* [38] P. Royston, Multiple imputation of missing values, The Stata Journal 4 (3) (2004), 227–241.
* [39] D.B. Rubin, Inference and missing data, Biometrika 63 (1976), 581–592.
* [40] D.B. Rubin, Multiple Imputation for Nonresponse in Surveys, New York: Wiley, 1987.
* [41] D.B. Rubin, Multiple imputation after 18+ years, Journal of the American Statistical Association 91 (434) (1996), 473–489.
* [42] R. Rutledge, T. Osler, S. Emery, S. Kromhout-Schiro, The end of the Injury Severity Score (ISS) and the Trauma and Injury Severity Score (TRISS): ICISS, an International Classification of Diseases, ninth revision-based prediction tool, outperforms both ISS and TRISS as predictors of trauma patient survival, hospital charges, and hospital length of stay, Journal of Trauma-Injury, Infection, and Critical Care 44 (1) (1998), 41–49.
* [43] W.J. Sacco, J.W. Jameson, W.S. Copes, M.M. Lawnick, S.L. Keast, H.R. Champion, Progress toward a new injury severity characterization: Severity profiles, Computers in Biology and Medicine 18 (6) (1988), 419–429.
* [44] W.J. Sacco, A.V. Milholland, W.P. Ashman, C.L. Swann, L.M. Sturdivan, R.A. Cowley, H.R. Champion, W. Gill, W.B. Long, T.C. McAslan, Trauma indices, Computers in Biology and Medicine 7 (1) (1977), 9–20.
* [45] A. Sauaia, F.A. Moore, E.E. Moore, K.S. Moser, R. Brennan, R.A. Read, P.T. Pons, Epidemiology of trauma deaths: a reassessment, Journal of Trauma-Injury, Infection, and Critical Care 38 (2) (1995), 185–193.
* [46] J.L. Schafer, J.W. Graham, Missing data: our view of the state of the art, Psychological Methods 7 (2) (2002), 147–177.
* [47] W.C. Shoemaker, D.S. Bayard, C.C.J. Wo, A. Botnen, L.S. Chan, L-C. Chien, K. Lu, D. Demetriades, H. Belzberg, R.W. Jelliffe, Stochastic model for outcome prediction in acute illness, Computers in Biology and Medicine 36 (6) (2006), 585–600.
* [48] N.O. Skaga, T. Eken, J.M. Jones, P.A. Steen, Different definitions of patient outcome: Consequences for performance analysis in trauma, Injury 39 (5) (2008), 612–622.
* [49] J. Sobrino, S. Shafi, Timing and causes of death after injuries, Proceedings (Baylor University Medical Center) 26 (2) (2013), 120–123.
* [50] K. Søreide, A.J. Krüger, A. Line Vårdal, C.L. Ellingsen, E. Søreide, H.M. Lossius, Epidemiology and contemporary patterns of trauma deaths: changing place, similar pace, older face, World Journal of Surgery 31 (11) (2007), 2092–2103.
* [51] J.A.C. Sterne, I.R. White, J.B. Carlin, M. Spratt, P. Royston, M.G. Kenward, A.M. Wood, J.R. Carpenter, Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls, British Medical Journal 338 (2009), b2393.
* [52] T. Sullivan, A. Haider, S.M. DiRusso, P. Nealon, A. Shaukat, M. Slim, Prediction of mortality in pediatric trauma patients: new injury severity score outperforms injury severity score in the severely injured, Journal of Trauma-Injury, Infection, and Critical Care 55 (2003), 1083–1087.
* [53] S.Y. Tay, E.P. Sloan, L. Zun, P. Zaret, Comparison of the New Injury Severity Score and the Injury Severity Score, Journal of Trauma-Injury, Infection, and Critical Care 56 (2004), 162–164.
* [54] Trauma Audit and Research Network: TARN website, https://www.tarn.ac.uk/
* [55] D.D. Trunkey, Trauma, Scientific American 249 (2) (1983), 28–35.
* [56] E.B. Wilson, Probable inference, the law of succession, and statistical inference, Journal of the American Statistical Association 22 (158) (1927), 209–212.
* [57] J. Wyatt, D. Beard, A. Gray, A. Busuttil, C. Robertson, The time of death after trauma, British Medical Journal 310 (6993) (1995), 1502.

Evgeny Mirkes (Ph.D., Sc.D.) is a Research Fellow at the University of Leicester. He worked for the Russian Academy of Sciences, Siberian Branch, and Siberian Federal University (Krasnoyarsk, Russia). His main research interests are biomathematics, data mining and software engineering, neural networks and artificial intelligence. He has led and supervised many medium-sized projects in data analysis and the development of decision-support systems for computational diagnosis and treatment planning.

Timothy J. Coats (FRCS (Eng), MD, FCEM) is a Professor of Emergency Medicine at the University of Leicester. Chair of the FAEM Research Committee 2000–2009, Chair of the Trauma Audit and Research Network (TARN), Chair of the NIHR Injuries and Emergencies National Specialist Group. Research interests: diagnostics and monitoring in emergency care, coagulation following injury, predictive modelling of outcome following injury.

Jeremy Levesley (Ph.D., FIMA) is a Professor in the Department of Mathematics at the University of Leicester. His research area is kernel-based approximation methods in high dimensions, in Euclidean space and on manifolds. He is interested in developing research at the interface of mathematics and medicine, and sees interpretation of medical data sets as a key future challenge for mathematics.

Alexander N. Gorban (Ph.D., Sc.D., Professor) has held a Personal Chair in applied mathematics at the University of Leicester since 2004. He worked for the Russian Academy of Sciences, Siberian Branch (Krasnoyarsk, Russia), and ETH Zürich (Switzerland), and was a visiting Professor and Research Scholar at the Clay Mathematics Institute (Cambridge, MA), IHES (Bures-sur-Yvette, Île de France), the Courant Institute of Mathematical Sciences (New York), and the Isaac Newton Institute for Mathematical Sciences (Cambridge, UK). His main research interests are dynamics of systems of physical, chemical and biological kinetics; biomathematics; data mining and model reduction problems.
# Collisional effects on the electrostatic shock dynamics in thin-foil targets driven by an ultraintense short pulse laser

A Sundström1, L Gremillet2,3, E Siminos4 and I Pusztai1

1 Department of Physics, Chalmers University of Technology, SE-412 96 Göteborg, Sweden
2 CEA, DAM, DIF, F-91297 Arpajon, France
3 Université Paris-Saclay, CEA, LMCE, F-91680 Bruyères-le-Châtel, France
4 Department of Physics, Gothenburg University, SE-412 96 Göteborg, Sweden

<EMAIL_ADDRESS>

###### Abstract

We numerically investigate the impact of Coulomb collisions on the ion dynamics in high-$Z$, solid-density caesium hydride and copper targets, irradiated by high-intensity ($I\approx 2{-}5\times 10^{20}\,\mathrm{Wcm^{-2}}$), ultrashort (${\sim}10\,\mathrm{fs}$), circularly polarized laser pulses, using particle-in-cell simulations. Collisions significantly enhance electron heating, thereby strongly increasing the speed of a shock wave launched in the laser-plasma interaction. In the caesium hydride target, collisions between the two ion species heat the protons to ${\sim}100{-}1000\,\mathrm{eV}$ temperatures. However, in contrast to previous work (A.E. Turrell et al., 2015 _Nat. Commun._ 6 8905), this process happens in the upstream only, due to nearly total proton reflection. This difference is ascribed to the distinct models used to treat collisions in dense/cold plasmas. In the case of a copper target, ion reflection can start as a self-amplifying process, bootstrapping itself. Afterwards, collisions between the reflected and upstream ions heat these two populations significantly. When increasing the pulse duration to $60\,\mathrm{fs}$, the shock front more clearly decouples from the laser piston, and so can be studied without direct interference from the laser. The shock wave formed at early times exhibits properties typical of both hydrodynamic and electrostatic shocks, including ion reflection. At late times, the shock is seen to evolve into a hydrodynamic blast wave.

Originally published in Plasma Phys. Control. Fusion 62 085015, © The Authors, 2020. Original Content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. doi:10.1088/1361-6587/ab9a62

## 1 Introduction

The use of lasers to accelerate ions is a field of intense research [1], with many demonstrated or envisioned applications, such as imaging of electromagnetic fields in plasmas [2, 3], creation of warm dense matter [4, 5, 6], production of intense neutron sources [7], material testing [8, 9], laboratory astrophysics [10], and ion-beam therapy [11, 12]. Among the few laser-based ion acceleration mechanisms considered so far, including the extensively studied, and particularly robust, target normal sheath acceleration (TNSA), collisionless shock acceleration (CSA) is of particular interest due to its potential to produce a relatively narrowly peaked ion energy spectrum [13, 14, 15, 16, 17, 18]. Collisionless shocks also play a role in particle energization in astrophysical plasmas [19, 20]. As the shock front passes by, the plasma is rapidly compressed and directional kinetic energy is converted into thermal energy.
This can take place either through collisional processes, such as in hydrodynamic shocks – relevant in, e.g., inertial fusion plasmas [21, 22] and relativistic laser-plasma experiments [23] – or collisionless mechanisms, involving longitudinal electrostatic fields generated by space charge effects from shock compression [14]. Collisionless shocks can also hinge upon self-generated magnetic fields, such as those resulting from the Weibel instability [24, 25], yet such shocks, of turbulent character, develop at Mach numbers much larger than those of the laminar electrostatic shocks that we shall address here [26]. In relativistic laser-plasma interactions, electrostatic shocks can arise either from the forward push exerted by the laser’s ponderomotive force (or “laser piston”) [14] in the radiation pressure acceleration (RPA) regime, or from electron pressure gradients in nonuniform plasmas [17].

While “collisionless shocks”, as the name suggests, are sustained through collective collisionless plasma processes, Coulomb collisions may play a role in their dynamics. Indeed, a finite collisionality, while affecting the shock, does not necessarily disrupt it [27]. Although the effect of collisions is often deemed negligible in high-intensity laser-plasma interactions, due to the high particle energies at play, it can become important when using solid or near-solid density targets, especially if they contain elements of high atomic number. In this paper, we consider two scenarios where collisions play an important role: one has basic science interest while the other is relevant for high-energy-density applications. We also present cases with parameters in between, to clarify how changes in laser and target parameters affect the ion dynamics, and in particular the properties of the resulting electrostatic shocks. In all cases, we will consider a circularly polarized femtosecond ($10{-}60\,\mathrm{fs}$) laser pulse.

The first case we consider is motivated by the work by Turrell, Sherlock & Rose [28] (hereafter referred to as TSR), where it was reported that inter-species collisions in a caesium hydride (CsH) target induce ultrafast collisional ion heating and essentially affect the shock dynamics. We find significantly different results compared to what is reported by TSR, even though we study essentially the same physical setup. Importantly, we do not observe the occurrence of ultrafast proton heating downstream of the shock, as most of the protons are reflected, and as such, there is no appreciable inter-species friction in the downstream. As we will discuss, this discrepancy is likely due to a different behaviour, at the high densities considered, of the different collision algorithms employed by TSR and by us.

The other case we address was first considered in a recent study of ours [29] investigating ionization and collisional electron heating effects in solid copper targets, relevant for warm-dense-matter generation. Here, we focus on the ion dynamics and examine the impact on the generated shock of the increased electron density in copper compared to CsH. We also assess the sensitivity of the ion dynamics to the laser parameters and target thickness. When using circular laser polarization, collisions dominate the electron heating, which, in turn, results in the formation of a stronger electrostatic shock compared to a purely collisionless simulation.
In the scenarios with copper, the evolution of the shock is studied, from a hybrid hydrodynamical–electrostatic shock, through a gradual dissipation of its energy, to the transition to a hydrodynamical blast wave. In particular, the onset of shock ion-reflection is found to be self-amplifying. Collisional friction between the upstream and reflected ions heats the upstream ion population, which enhances the fraction of reflected ions.

## 2 Simulation study

In this paper, we investigate two different target materials, caesium hydride (CsH) and pure copper (Cu), both at their respective solid densities. We perform one-dimensional (1D) particle-in-cell (PIC) simulations with the Smilei PIC code [30] (version 4.1), which has a collision module that has been benchmarked [31] in the high-density/low-temperature regimes relevant for this paper. In all cases, we use a circularly polarized (CP), $\lambda=800\,\mathrm{nm}$ wavelength laser with a Gaussian temporal profile. The simulation box consists of 51200 cells over a length of $20\,\mathrm{\mu m}$ (resolution $\Delta{x}=0.39\,\mathrm{nm}$), and a $4^{\rm th}$ order interpolation shape function is employed. The use of a high-order shape function ensures good energy conservation despite the Debye length in our collisionless simulation being somewhat lower than the mesh size. The electrons are initialized at a temperature of $T_{\mathrm{e},0}=1{-}10\,\mathrm{eV}$ and the ions at a temperature of $0.1{-}1.0\,\mathrm{eV}$. Both target materials contain a highly charged ion species, with charge state $Z^{*}$, such that the effect of collisions is significant. This high collisionality turns out to be of crucial importance for the electron heating. Since CP is used, the target electrons are energized through inverse Bremsstrahlung rather than through the strongly inhibited $j{\times}B$ [32] or vacuum heating [33, 34] mechanisms. In our recent work [29], we showed that collisional electron heating produces well-thermalized electron populations with temperatures in the ${\sim}1{-}10\,\mathrm{keV}$ range.

The use of the CsH target was inspired by the work by TSR [28]. As a target material, CsH could be of interest for laser acceleration of protons since it contains hydrogen volumetrically, like a plastic target. An advantage of this material over plastic, though, is the much higher ionization degree ($Z^{*}$) that can be reached, hence enhancing collisional effects. Although practically challenging, due to the high chemical reactivity of CsH and difficulties in the target fabrication, it would, in principle, be possible to use this material in an experiment. The CsH target is composed of an equal-number mixture of protons and caesium ions. The charge state of the Cs ions is set to a fixed value of $Z^{*}=27$, corresponding to full ionization of the three outermost shells. The resulting quasi-neutral electron density is ${n_{\mathrm{e},0}=250\,n_{\mathrm{c}}}$, where $n_{\mathrm{c}}=\epsilon_{0}m_{\mathrm{e}}\omega^{2}/e^{2}\approx 1.7\times 10^{21}\,\mathrm{cm^{-3}}$ is the critical density ($\epsilon_{0}$ is the vacuum permittivity, $m_{\mathrm{e}}$ is the electron mass, $\omega$ is the laser frequency and $e$ is the elementary charge), corresponding to a collisionless skin depth of $l_{\mathrm{s}}=8.0\,\mathrm{nm}$, which is well resolved. The target thickness is $300\,\mathrm{nm}$, as in the simulations of TSR.
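As a quick numerical cross-check of the quoted parameters, the following short Python sketch (ours, not part of the simulation setup) evaluates the critical density, the collisionless skin depths for the CsH density and the copper density quoted below, the mesh size, and the commonly used estimate relating $a_{0}$ to intensity:

```python
import numpy as np

# Physical constants (SI)
c, e = 2.998e8, 1.602e-19          # speed of light [m/s], elementary charge [C]
m_e, eps0 = 9.109e-31, 8.854e-12   # electron mass [kg], vacuum permittivity [F/m]

lam = 800e-9                   # laser wavelength [m]
omega = 2 * np.pi * c / lam    # laser angular frequency [rad/s]

# Critical density n_c = eps0 m_e omega^2 / e^2 (expect ~1.7e21 cm^-3)
n_c = eps0 * m_e * omega**2 / e**2
print(f"n_c = {n_c * 1e-6:.2e} cm^-3")

# Collisionless skin depth l_s = c / omega_pe for n_e = 250 n_c (CsH)
# and 1307 n_c (Cu); expect 8.0 nm and 3.5 nm, respectively.
for n_e in (250 * n_c, 1307 * n_c):
    omega_pe = np.sqrt(n_e * e**2 / (eps0 * m_e))
    print(f"l_s = {c / omega_pe * 1e9:.1f} nm")

# Mesh size: 20 um over 51200 cells (expect ~0.39 nm)
print(f"dx = {20e-6 / 51200 * 1e9:.2f} nm")

# Common estimate a0 ~ 0.85 * lam[um] * sqrt(I / 1e18 W cm^-2):
# a0 = 15 and 10 correspond to I ~ 5e20 and 2e20 W/cm^2.
for a0 in (15, 10):
    I = (a0 / (0.85 * lam * 1e6))**2 * 1e18
    print(f"a0 = {a0}: I = {I:.1e} W/cm^2")
```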
Copper, on the other hand, lacks the embedded protons but is, from a practical standpoint, much more readily available as a target material. Copper is also relatively highly charged, and hence presents a collisionality comparable to CsH. The lack of embedded protons (the copper is also modelled without any proton contamination layer on the surfaces; while such a contamination layer would affect the TNSA process, and somewhat the laser absorption, it is not expected to have a significant impact on the shock dynamics, which is the focus of this paper) makes copper less suitable for volumetric proton acceleration, but its high collisionality could be beneficial for other applications, such as warm-dense-matter generation [29]. In the simulations, the copper ions are initialized with three fully ionized atomic shells ($Z^{*}=27$), and at solid density (corresponding to $n_{\mathrm{e},0}=1307\,n_{\mathrm{c}}$ and $l_{\mathrm{s}}=3.5\,\mathrm{nm}$). This choice is informed by simulation results for a copper target including field and collisional ionization processes, analyzed in Ref. [29], showing that the average $Z^{*}$ rapidly reaches this value and then stagnates, due to a significant jump in ionization energy beyond the three outermost atomic shells. We found that retaining the ionization dynamics has no significant impact on the ion dynamics.

With the copper targets, two different target thicknesses and two different laser parameters were considered. The thinner target is $300\,\mathrm{nm}$ thick, as in the CsH simulations, which has the advantage of quicker heating and homogenization compared to a thicker target. The thicker ($2.5\,\mathrm{\mu m}$) target, on the other hand, can be more suitable for warm-dense-matter applications: a high energy density will be maintained over a longer time, since hydrodynamic expansion takes longer to reach the interior of a thicker target. We note that at the high densities and ionization degrees considered here, the useful lifetime of the target can also be affected by radiative losses, dominantly through Bremsstrahlung at the temperatures of interest. We find, however, that for our parameters, the radiative cooling time is typically of several picoseconds, so that Bremsstrahlung losses should not greatly impact the plasma dynamics during the integration time (${\leq}1\,\mathrm{ps}$) of our simulations. For the same reason, internal radiative energy transport was also not modelled in the simulations.

We considered two different sets of laser parameters: an amplitude of $a_{0}=15$ ($I\approx 5\times 10^{20}\,\mathrm{Wcm^{-2}}$) and a full-width-at-half-maximum (FWHM) duration of $10\,\mathrm{fs}$, as well as $a_{0}=10$ ($I\approx 2\times 10^{20}\,\mathrm{Wcm^{-2}}$) and a FWHM duration of $60\,\mathrm{fs}$. The former is used with both the CsH and Cu thin targets, and the latter is used for both the thin and thick Cu targets. The use of thicker targets goes along with increased integration times, allowing a larger number of fast particles to reach the domain boundaries. To keep them inside the domain, the thicker target is initialized with its front at $x=7.5\,\mathrm{\mu m}$, compared to the other targets located at $1\,\mathrm{\mu m}$.

For an accurate modelling of Coulomb collisions, employing the relativistic PIC algorithm of [31] (to be further discussed in Sec. 3.2), a relatively high number of particles per cell is needed. In the thinner target, 500 macro-particles per species per cell were used, while in the thicker target, the particle number was reduced somewhat, to 400 macro-particles per species per cell.
Resolution tests, with halved particle number or halved spatial resolution (with the same total number of particles), for the Cu thin target simulation show that the simulations are numerically converged.

## 3 Ion dynamics in the CsH target

Motivated by the previous work by TSR, we performed a similar set of simulations in CsH. However, despite virtually identical setups, our results differ significantly from the ones by TSR.

### 3.1 Comparison of collisional and collisionless results

Figure 1: Electron distributions at (left) and after the laser peak intensity (middle and right), with (top) and without (bottom) collisions, using CP. Green curve: electron density. Note the different momentum scales for the collisional and collisionless simulations.

The primary effect of the strong target collisionality is to significantly enhance electron heating through inverse Bremsstrahlung [29]. As an illustration of the collisional electron heating, Fig. 1 shows the electron phase space of the collisional (top row) and collisionless (bottom row) CsH simulations at three successive times: during peak laser intensity at $t=21\,\mathrm{fs}$, right after the laser pulse has ended at $t=45\,\mathrm{fs}$, and even later at $t=70\,\mathrm{fs}$. In figures hereafter, the phase space distribution functions $f$ are normalized to the maximum value of each respective _initial_ Maxwellian distribution, $f_{\rm max}$. The electrons in the target front layer are energized in the transverse ($y$-$z$) plane by the laser electric field. Then collisions scatter their momentum into the longitudinal direction, as seen through the large spread in $p_{x}$ near the plasma front in the $t=21\,\mathrm{fs}$ frame of the collisional distribution. Collisions then entail a fast thermalization of the electrons to a Maxwellian distribution, yielding a bulk temperature of $T_{\mathrm{e}}\approx 10\,\mathrm{keV}$ that corresponds to an ion-acoustic speed of $c_{\mathrm{s}}\approx(Z^{*}_{{\mathrm{Cs}}}T_{\mathrm{e}}/m_{{\mathrm{Cs}}})^{1/2}\approx 1.5\times 10^{-3}c$, where $c$ is the speed of light in vacuum. The electron density is also indicated in Fig. 1 (green solid curve, right axis). Compared to the collisionless case, the collisional simulation shows smoother spatial structures, likely due to a combination of higher temperature, collisional dissipation and dispersion of non-linear waves. The Debye length is $\lambda_{\mathrm{D}}\approx 1\,\mathrm{nm}$ and $\lambda_{\mathrm{D}}\approx 0.1\,\mathrm{nm}$ in the collisional and collisionless cases, respectively.

The collisional electron density profile also shows signs of an electrostatic shock wave: a density jump moving away from the target front is visible in the $t=45\,\mathrm{fs}$ and $70\,\mathrm{fs}$ panels. In the collisionless case, the density profile exhibits two peaks in both time frames. The rightmost density jump is due to the leading edge of the radiation-pressure-accelerated Cs ions (Fig. 2), while the leftmost density peak corresponds to an electrostatic shock, which, due to the low electron temperature, is too slow for its propagation to be noticeable over the displayed time and length scales.

Figure 2: Proton (top frame) and caesium ion (bottom frame) distributions in the $300\,\mathrm{nm}$ CsH target at peak laser intensity ($t=21\,\mathrm{fs}$) and after the pulse has passed ($t=45\,\mathrm{fs}$ and $70\,\mathrm{fs}$), with (upper panels) and without (lower panels) collisions, using CP.
The longitudinal electric field is also plotted (turquoise solid line, right axes). Note the different electric field scales between the collisional and collisionless panels.

In Fig. 2, the evolution of the ion distributions in the collisional and collisionless CsH targets is shown. The top frame shows the proton, the lower one the Cs ion phase spaces, with the upper (lower) rows in both frames corresponding to the collisional (collisionless) simulations, at times $t=21\,\mathrm{fs}$, $t=45\,\mathrm{fs}$ and $t=70\,\mathrm{fs}$. At $t=21\,\mathrm{fs}$, the difference between the collisional and collisionless simulations is quite small; in both cases, the protons and Cs ions are pushed by the laser piston. However, due to the lower charge-to-mass ratio of the Cs ions compared to the protons, the Cs ions react more slowly to the radiation pressure (RP) induced electrostatic field (at $x\approx 1\,\mathrm{\mu m}$) than the protons, as seen by the almost four times higher velocity reached by the protons ($p_{x}/mc\approx 0.02$) at $t=21\,\mathrm{fs}$. Owing to the short pulse duration ($10\,\rm fs$), the Cs ions do not have enough time to react to RP before the pulse ends.

Also shown in Fig. 2 is the longitudinal electric field, $E_{x}$ (turquoise curve), normalized to $m_{\mathrm{e}}c\omega/e\approx 4.013\times 10^{12}\,\mathrm{V/m}$. The charge separation during the RPA phase creates a strong longitudinal electric field, visible as a positive spike in $E_{x}$ close to $x=1\,\mathrm{\mu m}$ in the $t=21\,\mathrm{fs}$ panels. Note that the peaks of the RP field are cut off in the display. The collisionless RP field reaches a normalized amplitude of $eE_{x}/(m_{\mathrm{e}}c\omega)=8.6$, while the field in the collisional simulation reaches only $5.6$. However, the RP field in the collisional simulation has a wider spatial extent. When the electric field is integrated, the potential drop across the RP field is $e\phi\approx 220\,\mathrm{keV}$ and $280\,\mathrm{keV}$ in the collisionless and collisional cases, respectively. Thus, collisions do not affect the RPA process significantly, as apparent from the comparison of the collisional and collisionless panels at $t=21\,\mathrm{fs}$ in Fig. 2.

With collisions, the electrostatic structure caused by RPA transforms into an electrostatic shock, as evidenced by the single strong oscillation of $E_{x}$ and modulations in the downstream ion distribution in the $t=45\,\mathrm{fs}$ and $70\,\mathrm{fs}$ frames of Fig. 2. A close inspection of the collisionless simulation reveals the same behaviour (although barely visible in Fig. 2), indicating that an electrostatic shock has also formed there. However, due to the high electron temperature from collisional heating, the shock is much stronger and faster in the collisional case. In absolute units, the average shock velocity between $t=45\,\mathrm{fs}$ and $70\,\mathrm{fs}$ was $v_{\mathrm{sh}}/c\approx 4.3\times 10^{-3}$ and $v_{\mathrm{sh}}/c\approx 0.9\times 10^{-3}$ in the collisional and collisionless simulations, respectively. Yet, the higher electron temperature in the collisional target ($T_{\mathrm{e}}\approx 10\,\mathrm{keV}$ vs. $T_{\mathrm{e}}\approx 0.2\,\mathrm{keV}$) leads to a lower Mach number ($\mathcal{M}\approx 2.9$ vs. $\mathcal{M}\approx 4$). The low shock speed in the collisionless simulation implies that the shock-reflected ions have a significantly lower energy compared to those originating from the initial burst of the RPA.
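The quoted Mach numbers follow directly from these values; a minimal Python sketch (ours) reproducing them:

```python
import numpy as np

def c_s_over_c(Z_star, T_e_keV, A):
    """Ion-acoustic speed c_s = sqrt(Z* T_e / m_i) in units of c,
    using the ion rest energy A * 931494 keV."""
    return np.sqrt(Z_star * T_e_keV / (A * 931494.0))

# Collisional CsH case: T_e ~ 10 keV, v_sh/c ~ 4.3e-3  ->  M ~ 2.9
cs = c_s_over_c(Z_star=27, T_e_keV=10.0, A=133)
print(f"c_s/c = {cs:.2e}, M = {4.3e-3 / cs:.1f}")

# Collisionless case: T_e ~ 0.2 keV, v_sh/c ~ 0.9e-3  ->  M ~ 4
cs0 = c_s_over_c(Z_star=27, T_e_keV=0.2, A=133)
print(f"c_s/c = {cs0:.2e}, M = {0.9e-3 / cs0:.1f}")
```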
In both the collisional and collisionless cases, given the limited energy reservoir provided by the ultrashort ($10\,\mathrm{fs}$) laser pulse, the shock wave steadily loses energy, as seen by the declining field amplitude and the sloped reflected-ion structure in the proton and Cs phase spaces (i.e. the shocks are losing speed). Another consequence of the efficient inverse Bremsstrahlung electron heating is that the collisional simulation displays TNSA at the target rear boundary, whereas it is virtually nonexistent in the collisionless simulation, as evident at $t=70\,\mathrm{fs}$ in Fig. 2. Due to the use of CP, the electrons are weakly energized in the collisionless case, hence quenching TNSA. In the collisional simulation, the TNSA protons attain energies slightly lower than the RPA protons at the final simulation time.

In the collisional case, we also see that the reflected and upstream proton and Cs ion populations are being significantly heated, in contrast to their collisionless counterparts. By fitting Maxwellians to the proton distribution in the range $x=1.15{-}1.18\,\mathrm{\mu m}$ (close to, but still beyond direct influence from, the shock front) at time $t=70\,\mathrm{fs}$, the upstream proton population is found to have already been heated to $T_{\mathrm{p}}^{\rm(u)}=120\,\mathrm{eV}$, while the reflected protons are at a temperature of $T_{\mathrm{p}}^{\rm(r)}=750\,\mathrm{eV}$. We recall that the initial ion temperature was $0.1\,\mathrm{eV}$. Simulations in which various types of collisions (e.g., proton–Cs or ion–electron) have been selectively switched off (not shown here) reveal that the heating of the reflected ions proceeds from their friction with the background Cs ions, while the upstream ions are mainly collisionally heated by the fast electrons.

Figure 3: Proton (top panel) and Cs ion (bottom panel) distributions in the shock frame of reference at $t=70\,\mathrm{fs}$, together with the shock electrostatic potential, $e\phi/T_{\mathrm{e}}$ (blue solid line, right axes), using $T_{\mathrm{e}}=10\,\mathrm{keV}$. Also shown are contours of constant energy, $\mathcal{E}=mv^{2}/2+eZ(\phi-\phi_{\max})$ (black, dashed or dotted lines). The black dashed line corresponds to $\mathcal{E}=0$ at the potential peak, $\phi_{\max}$.

To get a more detailed picture of the vicinity of the electrostatic shock front, close-ups of the proton (top) and Cs (bottom) distributions at $t=70\,\mathrm{fs}$ are displayed in Fig. 3. The distributions have been shifted to the shock rest frame (at velocity $v_{\mathrm{sh}}/c\approx 3.1\times 10^{-3}$), relative to the position of the potential maximum, $x_{\mathrm{sh}}$; the velocities are normalized to the ion-acoustic sound speed, $c_{\mathrm{s}}$. The electrostatic potential, $\phi(x)=-\int_{x_{0}}^{x}E_{x}(x^{\prime})\,\mathrm{d}{x^{\prime}}$, is also plotted (blue line), along with corresponding constant-energy contours (black dashed or dotted lines). Here $x_{0}$ is chosen such that $\phi$ averages to zero in the range $8\leq(x-x_{\mathrm{sh}})/\lambda_{\mathrm{D}}\leq 10$. (In an idealized electrostatic shock, $\phi\to 0$ as $x\to\infty$. In practice, however, the electrostatic potential presents spatial variation even well upstream of the shock front, from sources other than the shock, which motivates this averaging procedure. The choice of the $x$-range to average over is somewhat arbitrary, but it is chosen reasonably close to the shock front, while sufficiently outside the shock width.)
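In practice, $\phi(x)$ can be reconstructed from the PIC field output by a simple numerical integration; a minimal sketch of the averaging procedure described above (our own reconstruction, with array names invented for illustration):

```python
import numpy as np

def potential_from_field(x, E_x, x_sh, lambda_D):
    """phi(x) = -int E_x dx' (trapezoidal rule), with the zero level set
    so that phi averages to zero over 8 <= (x - x_sh)/lambda_D <= 10."""
    dphi = -0.5 * (E_x[1:] + E_x[:-1]) * np.diff(x)
    phi = np.concatenate(([0.0], np.cumsum(dphi)))
    window = ((x - x_sh) / lambda_D >= 8) & ((x - x_sh) / lambda_D <= 10)
    return phi - phi[window].mean()
```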
The black dashed line represents the constant-energy contour which has zero (shock-frame) kinetic energy at the peak of $\phi$. This line is an approximate boundary between the reflected and passing ions; in a steady state, this would be a separatrix. The top frame shows that almost all protons are located within the reflected region of phase space. Meanwhile, only around $5{-}10\%$ of the Cs ions are reflected, and accordingly, the upstream Cs distribution mostly lies below the passing–reflected boundary. The difference in ion reflection between the two ion species is due to their different charge-to-mass ratios [35].

The electrostatic potential is seen to oscillate downstream of the shock (left side in Fig. 3), which creates regions of ion trapping. In a perfectly steady-state and collisionless electrostatic shock, these regions would be empty, as there would be no means for the ions to cross the separatrix. However, due to the slowly decreasing amplitude and speed of the shock, the trapping regions experience a steady influx of Cs ions. These adiabatic effects are likely more important here than collisional scattering [27]. While the Cs ions mainly enter the trapping regions from the leftmost potential hump in Fig. 3, almost no protons pass the shock front and hence only a few protons ever enter the trapped region. The protons trapped in those regions are mostly remnants of the protons left behind by the main RPA (seen to the left of the shock front in the $t=45\,\mathrm{fs}$ frame of Fig. 2).

### 3.2 Ultrafast ion heating revisited

The theoretical study of TSR [28] predicts that an ultrafast collisional ion heating may take place in plasmas composed of light and heavy ion species. This result is borne out by 1D collisional PIC simulations performed with the Epoch [36] code, considering a CsH target almost identical to that in the current paper. The authors ascribe the observed ultrafast heating to collisional friction between the protons and Cs ions as they experience a differential acceleration in the electrostatic field of the shock.

The CsH setup presented in this paper is almost identical to that of TSR – apart from a mere $1\%$ difference in the electron density, the laser polarization and the increased resolution in our case. We have also run a Smilei simulation with exactly the same physical parameters (including linear polarization and numerical resolution) as TSR. As regards the ion dynamics, this simulation yields results virtually identical to the CsH simulation presented in Fig. 2 (therefore, they are not presented here separately). However, none of our simulations reproduce the main findings of TSR, namely, the collisional downstream proton heating ascribed to inter-species ion friction and the absence of ion reflection. By contrast, our simulations indicate that the collisional interaction between the ion species does not inhibit the proton reflection; in fact, as shown in e.g. Fig. 3, nearly all protons are reflected, and these are subsequently heated through collisional friction with the ambient (upstream) Cs ions. The proton heating is strongest in the _reflected_ ion population.

The PIC results of TSR are interpreted by a two-fluid model retaining the momentum and energy moments of the Fokker–Planck equation, assuming Maxwellian distributions. It provides steady-state expressions for the longitudinal derivatives of the temperatures and velocities of the two fluid species, which are then integrated over the spatial width of the shock front.
The energy input to the system comes from an electric field term representing the electrostatic shock field. Importantly, the possibility of ion reflection is ruled out by construction: protons are forced to pass through the barrier and gain all the available potential energy, which is consistent with their simulation results, but not with ours. In the collisionless case the protons should clearly be reflected due to their higher charge-to-mass ratio than that of Cs. The only way to avoid proton reflection is if a very strong friction between the two species pulls the ions across the potential barrier. This, however, requires a much stronger collisional coupling than what we observe. Thus, we believe that the difference between TSR’s results and ours is, at least partly, a consequence of the different collision algorithms used (modifications to the implementation of the Sentoku & Kemp [37] collision model in Epoch over time make a direct comparison to TSR difficult).

The version of the Epoch [36] code used by TSR was equipped with a collision module based on the algorithm proposed by Sentoku & Kemp [37] (SK), while Smilei employs the scheme developed by Pérez et al. [31] (NYP), which generalizes the Nanbu & Yonemura scheme [38, 39] to the relativistic regime. Both collision models are designed to reproduce the Fokker–Planck limit, where small-angle collision events dominate, which is relevant for high-temperature and/or low-density plasmas. However, at the high plasma densities considered here, which are susceptible to quantum degeneracy and coupled-plasma effects, corrections must be made to avoid unphysically high collision frequencies. (It should be emphasized, though, that these PIC simulations are intrinsically classical, and as such, a self-consistent treatment of quantum effects is clearly outside their scope. Thus the extensions of any binary collision model to dense/cold plasma regions are ad hoc models designed to reproduce plasma-averaged collisional properties expected from advanced warm-dense-matter or condensed-matter models [40].) This is also a major point where the SK and NYP algorithms differ.

In the high-density/low-temperature regime, the SK model forces the effective temperature of the interacting species to stay above the Fermi temperature, in order to emulate the Fermi-degenerate regime. This leads to the maximum collision frequency $\hat{\nu}_{\alpha\beta}^{\rm(SK)}=m_{\mathrm{e}}Z^{*}_{\beta}e^{4}\log\Lambda/(12\pi^{3}\epsilon_{0}^{2}\hbar^{3})$ between two particles of species $\alpha$ and $\beta$ [37, eq. (10)]. By contrast, drawing from the prescription of Ref. [41] for coupled plasmas, the NYP model applies a lower bound on the collisional mean free path, which can never get smaller than the mean inter-particle distance $r_{\beta}\sim(4\pi n_{\beta}/3)^{-1/3}$. This yields the saturated collision frequency $\hat{\nu}_{\alpha\beta}^{\rm(NYP)}=(4\pi n_{\beta}/3)^{1/3}(T_{\alpha}/2m_{\alpha})^{1/2}$ [31, sec. I-C]. At the considered ion density ($n_{{\mathrm{Cs}}}=1.5\times 10^{28}\,\mathrm{m^{-3}}$) and a representative ion temperature of $T_{\mathrm{i}}=100\,\mathrm{eV}$, we find $\nu_{\mathrm{p}\,{\mathrm{Cs}}}^{\rm(Spitzer)}\approx 1.5\times 10^{15}\,\mathrm{s^{-1}}$ and $\lambda_{\mathrm{p}\,{\mathrm{Cs}}}\approx 0.06\,\mathrm{nm}$, which is significantly smaller than the inter-atomic distance ${\sim}0.2\,\mathrm{nm}$. Thus the dense-plasma limit of NYP should hold under such conditions, but not the degenerate SK limit.
One thus obtains that $\nu_{\mathrm{p}\,{\mathrm{Cs}}}^{\rm(SK)}=\nu_{\mathrm{p}\,{\mathrm{Cs}}}^{\rm(Spitzer)}\approx 1.5\times 10^{15}\,\mathrm{s^{-1}}$ is more than $5$ times larger than the dense-plasma value $\nu_{\mathrm{p}\,{\mathrm{Cs}}}^{\rm(NYP)}\approx 2.7\times 10^{14}\,\mathrm{s^{-1}}$. This discrepancy is only strengthened when considering Cs–Cs collisions. Again at $n_{{\mathrm{Cs}}}=1.5\times 10^{28}\,\mathrm{m^{-3}}$ and $T_{\mathrm{i}}=100\,\mathrm{eV}$, one finds $\nu_{{\mathrm{Cs}}\,{\mathrm{Cs}}}^{\rm(Spitzer)}\approx 6.6\times 10^{16}\,\mathrm{s^{-1}}$, which is over three orders of magnitude larger than the dense-plasma NYP value, $\nu_{{\mathrm{Cs}}\,{\mathrm{Cs}}}^{\rm(NYP)}\approx 2.4\times 10^{13}\,\mathrm{s^{-1}}$. Regarding the electron–Cs ion collisions, one has $\nu_{\mathrm{e}\,{\mathrm{Cs}}}^{\rm(SK)}=\nu_{\mathrm{e}\,{\mathrm{Cs}}}^{\rm(Spitzer)}\approx 6.3\times 10^{16}\,\mathrm{s^{-1}}$ at $T_{\mathrm{e}}=100\,\mathrm{eV}$, but this corresponds to a mean free path $\lambda_{\mathrm{e}\,{\mathrm{Cs}}}\approx 0.06\,\mathrm{nm}\ll r_{{\mathrm{Cs}}}$, so again the dense-plasma limit applies, which gives $\nu_{\mathrm{e}\,{\mathrm{Cs}}}^{\rm(NYP)}\approx 1.2\times 10^{16}\,\mathrm{s^{-1}}$. The difference is even larger at the lower temperatures associated with the early-time interaction.

Moreover, the SK and NYP schemes handle colliding particles with non-equal statistical weights differently, which impacts the accuracy of energy conservation. However, that is likely not the cause of the diverging simulation results, since the number of computational particles is large in both cases, so as to limit statistical noise. A recent simulation study [42] of dense ($n_{\mathrm{e}}=60n_{\mathrm{c}}$) plasmas driven at relativistic laser intensities, comparing the results of the SK and NYP modules in Epoch (since version 4.17, released in June 2019, Epoch also implements the full NYP algorithm) and the NYP module of Smilei, confirms that the SK model indeed results in stronger effective collisionality. In addition, a good agreement between Epoch and Smilei was found when both employed the NYP algorithm.

Which of these two treatments of collisions in dense/cold plasmas is more physically correct is still a debated issue. Therefore, along with further numerical investigation, experimental verification should be sought in order to determine the parameter regions of validity, and accuracy, of the two collision algorithms. Our results suggest that such differentiation between the algorithms is possible using laser-plasma experiments in multi-species, dense plasmas, such as the CsH case presented here. A good benchmarking test would be to compare ion energy spectra in cases where collisions are sufficiently strong to suppress ion reflection according to SK but not according to NYP. Such experiments might need to control the target density profile on the rear side, e.g. through laser ablation, in order to suppress TNSA and make the shock-accelerated ion population clearer. A potentially suitable experiment has recently been performed [18], but it would require further investigation to see whether the accuracy of the two models can be assessed from the obtained data (which clearly showed ion reflection).

## 4 Ion dynamics in copper targets

Figure 4: Copper ion phase-space distribution in the $300\,\mathrm{nm}$ thick Cu target from the collisional simulation, at times $t=21\,\mathrm{fs}$, $t=45\,\mathrm{fs}$ and $t=70\,\mathrm{fs}$.
We will now turn to the pure copper simulations, first considering similar target and laser parameters to the CsH case, and subsequently changing these parameters one by one. The two main differences compared to CsH are the lack of multi-species effects and the ${\sim}5$ times higher electron density (assuming $Z^{*}=27$). However, just as in the CsH target, the primary effect of collisions in the Cu plasma is the inverse Bremsstrahlung-type electron heating. The bulk electrons are heated to $T_{\mathrm{e}}\approx 3.7\,\mathrm{keV}$, corresponding to a sound speed of $c_{\mathrm{s}}\approx 1.3\times 10^{-3}c$.

Figure 4 shows the collisional Cu ion phase-space distribution, at peak laser intensity ($t=21\,\mathrm{fs}$), shortly after the laser irradiation ($t=45\,\mathrm{fs}$), and even later in time ($t=70\,\mathrm{fs}$). Similarly to Cs, the Cu ions have a rather low charge-to-mass ratio ($Z^{*}/A=0.42$) and do not have time to fully respond to the laser piston during the short-pulse irradiation. Again, the initial perturbation from the laser piston transforms into an electrostatic shock, yet it loses energy faster than in the CsH target. Since the copper plasma does not contain any protons, all of the reflected charge, needed to sustain the shock, consists of Cu ions. Owing to their high charge ($Z^{*}=27$), the collisional interaction between the reflected and the upstream ions is stronger than in the collisional CsH case, resulting in a noticeable heating of these two populations, as seen in the collisional Cu ion distributions at $45\,\mathrm{fs}$ and $70\,\mathrm{fs}$ in Fig. 4. Some heating is observed in the collisional proton and Cs ion distributions of Fig. 2 as well, though significantly weaker than in the copper plasma.

Figure 5: Copper ion phase-space distribution in the $300\,\mathrm{nm}$ thick Cu target from the collisional simulation, with an $a_{0}=10$ and $60\,\mathrm{fs}$ duration laser pulse. The distribution is shown at times $t=90\,\mathrm{fs}$, $t=150\,\mathrm{fs}$ and $t=200\,\mathrm{fs}$.

Figure 6: Temporal evolution of the normalized potential drop across the shock front, $\hat{\phi}=e\phi/T_{\mathrm{e}}$, and shock Mach number, $\mathcal{M}$, for the thin copper target, long pulse collisional simulation. The blue line represents a moving average (over three data points) of $\hat{\phi}$. The vertical lines indicate the time of peak (solid) and half (dotted) laser intensity. For both the normalization of the potential and for the ion-acoustic sound speed, an electron temperature of $T_{\mathrm{e}}=4\,\mathrm{keV}$ was used as a representative value.

We have also studied a scenario wherein the copper plasma is illuminated by a laser pulse of longer duration ($60\,\mathrm{fs}$ FWHM) and lower intensity ($a_{0}=10$). The Cu ion phase-space distribution from the collisional simulation is displayed in Fig. 5, shown at times $t=90\,\mathrm{fs}$ (at peak laser intensity), $t=150\,\mathrm{fs}$ (close to the end of the pulse) and $t=200\,\mathrm{fs}$. As in the two previous setups, an electrostatic shock is generated. It forms out of a perturbation that detaches from the laser piston as early as $t\approx 60\,\mathrm{fs}$, before the pulse has reached half its maximum intensity. It displays electrostatic shock-like properties, such as a sharp rise in lab-frame ion velocity in conjunction with a steep electrostatic potential barrier, but it lacks any ion reflection, as seen in the $t=90\,\mathrm{fs}$ frame of the collisional simulation.
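For reference, the quoted sound speed follows from the standard cold-ion ion-acoustic expression $c_{\mathrm{s}}=\sqrt{Z^{*}T_{\mathrm{e}}/m_{\mathrm{i}}}$ (our assumption). A minimal sketch, using $Z^{*}=27$ and $A\approx 63.5$ for copper:

```python
import math

AMU_EV = 931.494e6  # atomic mass unit in eV/c^2

def cs_over_c(Z_eff, T_e_eV, A):
    """Ion-acoustic sound speed c_s = sqrt(Z* T_e / m_i), in units of c."""
    return math.sqrt(Z_eff * T_e_eV / (A * AMU_EV))

# Copper: Z* = 27, A ~ 63.5, bulk electron temperature T_e ~ 3.7 keV
print(f"c_s/c = {cs_over_c(27, 3.7e3, 63.5):.1e}")  # ~1.3e-3, as quoted above
```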
Figure 6 shows the evolution of the normalized potential jump $\hat{\phi}=e\phi/T_{\mathrm{e}}$ (using a fixed value of $T_{\mathrm{e}}=4\,\mathrm{keV}$, derived from Maxwellian fits to the electron energy spectrum of the whole plasma; the measured electron temperature stays fairly close to this value during the entire duration of the pre-shock and the electrostatic shock) and the Mach number $\mathcal{M}$ of the shock, from the time of its detachment from the laser piston to its demise. The transition to a fully developed, ion-reflecting, electrostatic shock occurs when $\hat{\phi}\gtrsim\mathcal{M}^{2}/2$, which happens at around $t\approx 90\,\mathrm{fs}$. The longer pulse duration and more gradual increase in intensity detach the onset of shock reflection from the RPA. The peaks in $\hat{\phi}$ and $\mathcal{M}$ are followed by a more gradual decrease in the Mach number and in the shock potential peak, starting at around $t\approx 110\,\mathrm{fs}$. The vertical lines in Fig. 6 represent the time of peak (solid) and half (dotted) laser intensity. The peaks thus occur before the laser intensity has halved. The delayed peaks in shock speed and potential relative to the peak laser intensity likely originate from the fact that the interaction has reached a stage where the laser is no longer able to supply more power than the energy dissipation rate of the shock.

The reflection of ions appears to be a self-bootstrapping process. After the first few ions have been reflected, collisional heating between the upstream and reflected ions causes a broadening of the longitudinal momentum distribution of the upstream ions, leading to more ions entering the reflected region of phase space. This upstream heating is seen in the collisional $t=150\,\mathrm{fs}$ frame of Fig. 5. Towards the end of the simulation, the upstream and reflected ion populations start to merge into each other, after which the determination of the shock speed relative to the upstream population becomes unreliable. The shock ends somewhat abruptly when it collides with the rarefaction wave emanating from the back of the target, which occurs at roughly $t\approx 250\,\mathrm{fs}$.

Figure 7: Distribution of copper ions in a $2.5\,\mathrm{\mu m}$ thick target, with collisions, at times $t=110\,\mathrm{fs}$, $t=150\,\mathrm{fs}$ and $t=500\,\mathrm{fs}$. Note that the initial position of the target front is at $x=7.5\,\mathrm{\mu m}$, different from the other simulations presented.

Figure 8: Spatial profiles of the copper ion temperature and density at $t=500\,\mathrm{fs}$ and $t=1000\,\mathrm{fs}$. Both profiles display a sharp jump; the temperature jumps by about a factor of $2.6$, while the density jumps by about a factor of $2.0$.

As our final setup, we switch to a $2.5\,\mathrm{\mu m}$ copper target, driven by an $a_{0}=10$ and $60\,\mathrm{fs}$ FWHM duration pulse. Those parameters may be of interest to warm-dense-matter studies [29]. The simulation results are shown in Fig. 7. Despite the significant increase in target areal density, the measured electron temperature still reaches $T_{\mathrm{e}}\approx 3.5\,\mathrm{keV}$, thus the initial evolution of the shock is very similar to that in the thin-target simulation. Indeed, the Mach number and shock potential evolve like those displayed in Fig. 6, both qualitatively and quantitatively (when accounting for a time shift of ${\simeq}15\,\mathrm{fs}$ corresponding to the different target position).
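Returning to the transition criterion $\hat{\phi}\gtrsim\mathcal{M}^{2}/2$ used above: it follows from equating the electrostatic potential energy $Z^{*}e\phi$ with the shock-frame kinetic energy $m_{\mathrm{i}}(\mathcal{M}c_{\mathrm{s}})^{2}/2$ of a cold upstream ion, with $c_{\mathrm{s}}^{2}=Z^{*}T_{\mathrm{e}}/m_{\mathrm{i}}$ so that the charge and mass dependences cancel. A schematic check, with purely illustrative numbers (not read off Fig. 6):

```python
def ion_reflected(phi_hat, mach):
    """A cold upstream ion is reflected iff phi_hat = e*phi/T_e > M^2/2,
    obtained by equating Z*e*phi with (1/2) m_i (M c_s)^2 and using
    c_s^2 = Z* T_e / m_i."""
    return phi_hat > 0.5 * mach**2

# Illustrative (hypothetical) values only:
for mach, phi_hat in [(1.2, 0.6), (1.2, 0.8)]:
    print(f"M = {mach}, phi_hat = {phi_hat}: reflected = {ion_reflected(phi_hat, mach)}")
```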
The initial shock wave displays characteristic features of an electrostatic shock, such as ion reflection and a velocity modulation in the downstream. However, it also shows signs of a collisional shock, such as isotropization of the downstream ion distribution (i.e. the longitudinal and transverse temperatures are comparable to each other). The shock can therefore be said to be in a hybrid regime between a collisionless electrostatic shock and a hydrodynamic shock. Since the target is now significantly thicker, the shock wave has time to further dissipate its energy, and the ion reflection terminates at $t\approx 300\,\mathrm{fs}$, well before the shock front encounters the rarefaction wave from the back of the target. As the shock steadily loses speed – and the electrostatic potential drop across the shock front decreases – a point is reached when the electrostatic potential barrier is too weak to cause ion reflection (in fact, the electric field reaches the level of statistical noise). However, even though the ion reflection is absent, the steady propagation of a well-defined shock front structure in phase space is clearly visible inside the target, in the $t=500\,\mathrm{fs}$ panel of Fig. 7. There are corresponding discontinuities in the ion temperature, $T_{{\mathrm{Cu}}}$, and density, $n_{{\mathrm{Cu}}}$, profiles: Figure 8 shows that $T_{{\mathrm{Cu}}}$ and $n_{{\mathrm{Cu}}}$ jump by a factor of $2.6$ and $2.0$, respectively. Since the laser no longer exerts radiation pressure on the target front side, the latter rapidly expands towards the vacuum as a rarefaction wave propagates into the shocked plasma. At $t=500\,\mathrm{fs}$, this rarefaction wave has caught up with the shock front to create a weakly supersonic ($\mathcal{M}\approx 1.3$), planar blast wave [43, Sec. 4.3], which slowly decays away (compare the ion temperature and density jumps at $t=500\,\mathrm{fs}$ and $t=1000\,\mathrm{fs}$).

To study how the qualitative features of the shock dynamics depend on laser parameters, further simulations have been performed, with $a_{0}$ ranging from $2$ to $14$ and the pulse FWHM duration ranging from $15\,\mathrm{fs}$ to $120\,\mathrm{fs}$. In addition, two simulations have been run with $a_{0}=7$ and $14$, with the respective pulse durations varied such that the pulse energy would stay the same as in the case presented above ($a_{0}=10$ and $60\,\mathrm{fs}$ FWHM duration). We could identify two qualitatively different regimes for the ion dynamics. At lower intensities, the pulse is not strong enough to initiate ion reflection at any point; instead, a shock-like structure similar to the one displayed at $t=110\,\mathrm{fs}$ in Fig. 7 is launched, and is sustained for several hundred femtoseconds, with its speed and amplitude decaying rather slowly. This behaviour is observed here for $a_{0}\leq 7$, and also in the simulation with $a_{0}=7$ and $120\,\mathrm{fs}$ FWHM duration. The latter indicates that both the laser intensity and energy are important for the onset of ion reflection. At higher intensities, the behaviour is qualitatively similar to the one shown in Fig. 7 – ion reflection is initiated, followed by a gradual loss of energy, until ion reflection no longer occurs, and the shock turns into a collisionally sustained blast wave. However, the time scale for this transition to happen depends on the laser parameters: both higher intensity and shorter pulses result in an earlier onset of the ion reflection, as well as a faster transition into a blast wave.
The reason for the faster demise of ion reflection may be linked to the rapid collisional heating of the upstream ions by the reflected ions. A hotter upstream favours ion reflection, thus hastening the shock dissipation. Remarkably, the ions in the downstream of the blast wave are heated to several tens of $\mathrm{keV}$ temperatures in the first ${\sim}100\,\mathrm{fs}$ after the ion reflection has ended. For instance, the temperature recorded in Fig. 8 is ${\sim}20\,\mathrm{keV}$ at $t=500\,\mathrm{fs}$. In the case of $a_{0}=14$ and FWHM duration of $30\,\mathrm{fs}$, the downstream ion temperature reaches ${\sim}60\,\mathrm{keV}$ at $t=300\,\mathrm{fs}$, before dropping to ${\sim}30\,\mathrm{keV}$ at $t=500\,\mathrm{fs}$. Unlike the heating scenario put forward by TSR [28], the heating process of the downstream heavy ions revealed by our simulations does not involve inter-species friction induced in the shock electrostatic potential.

Another trend observed in the scans is that shorter-duration pulses generate faster shock evolution, i.e. a faster onset of ion reflection, as well as a faster decay into a blast wave. This is also likely linked to the interaction between the laser piston and the plasma. Shorter laser pulses are quicker to reach their maximum intensity. The laser piston may therefore reach sufficient strength to reflect ions before any pre-shock perturbation (such as that in the $t=110\,\mathrm{fs}$ panel of Fig. 7) has time to form and overtake the piston. The early onset of ion reflection then leads to a rapid transition to a blast wave, as discussed in the previous paragraph.

In relation to the transition from hybrid shock to a blast wave, we note that the end of the ion reflection is accompanied by an increase in the width of the shock front, from $\Delta x_{\mathrm{sh}}\sim 1.6\,\mathrm{nm}$ (i.e., a few times the Debye length $\lambda_{\mathrm{D}}\approx 0.3\,\mathrm{nm}$) to $6{-}9\,\mathrm{nm}$. This width is about an order of magnitude larger than the collisional ion mean free path, here estimated as the inter-atomic distance, $\lambda_{\rm mfp}\approx 0.25\,\mathrm{nm}$. Our finding is consistent with previous estimates of the width of weakly supersonic ($\mathcal{M}\approx 2$) hydrodynamic shocks ($\Delta{x}_{\mathrm{sh}}\approx 20\lambda_{\rm mfp}$) [22, 44].

Finally, the robustness of the ion dynamics observed in the thick copper target has been tested against possible multidimensional effects on the laser-driven electron energization and subsequent ion dynamics, through a two-dimensional simulation of the thick copper target, detailed further in [29]. This simulation reveals that the situation studied here is sufficiently collisional that the shock does not suffer from transverse modulations.

## 5 Conclusions

Using particle-in-cell simulations, we have numerically investigated the impact of Coulomb collisions on the ion dynamics in high-$Z^{*}$, solid-density caesium hydride and copper targets, irradiated by high-intensity ($I\approx 2{-}5\times 10^{20}\,\mathrm{Wcm^{-2}}$), ultrashort ($10{-}60\,\mathrm{fs}$), circularly polarized laser pulses. In all cases collisional absorption through inverse Bremsstrahlung heats the electrons up to $3{-}10\,\mathrm{keV}$ temperatures throughout the target, while the use of CP reduces the creation of high-energy electrons. Subsequently, collisions quickly relax the electrons to a Maxwellian distribution. The impact of the laser pulse launches an electrostatic shock wave.
In all cases studied here, the collisionally enhanced electron heating results in faster shock waves, with higher potential drops across the shock front, than in the corresponding collisionless simulations. In the CsH target, the different charge-to-mass ratios of the hydrogen and caesium ions result in strong proton reflection. In contrast to the results of TSR [28], we do not observe a large number of protons passing through the shock front and getting heated via collisional friction with the Cs ions. Instead, inter-species friction results in the _reflected_ ions being heated up to ${\sim}\mathrm{keV}$ temperatures. The difference in proton reflection between our results and those of TSR appears to be a consequence of distinct collision models in the dense/cold plasmas where the Spitzer theory no longer applies. This suggests that laser-plasma experiments, using targets containing a highly charged species and protons volumetrically, may be utilized to differentiate between numerical collision models.

In pure Cu targets, the collisional coupling between the reflected and upstream ions is stronger, causing an appreciable heating of these two populations. Also, the higher density of both ions and electrons causes a faster decay of the shock than in the CsH target. When turning to a somewhat lower-intensity, but longer-duration laser pulse, the initial stages of the shock launching process become more decoupled from the laser pulse and the RPA. Here, the shock forms prior to the on-target laser peak. However, the shock front continues to accelerate until about ${\sim}20\,\mathrm{fs}$ after the on-target laser peak. Because of the quick launch of the electrostatic shock, the maximum energy of the accelerated ions shows a less sharp temporal variation, since there is no transition from the RPA ions to the CSA ions. Yet, the shock initially lacks ion reflection, the onset of which appears to bootstrap itself via heating of the upstream ions by the reflected ones.

Lastly, we increased the target thickness in order to follow the electrostatic shock evolution over a longer duration, and to make the setup more relevant to high-energy-density-physics applications. While the shock wave is at no point purely electrostatic, as it exhibits some features of hydrodynamic shocks, we observe the shock speed and potential drop to decay until the shock loses its capability to reflect ions. At this stage, the electrostatic potential drop across the shock front has also disappeared, and a rarefaction wave launched from the target front side has overtaken the shock front, turning it into a weakly supersonic ($\mathcal{M}\approx 1.3$) collisional blast wave. This formation is capable of locally heating the downstream ions to tens of $\mathrm{keV}$ temperatures for a duration of about $100\,\mathrm{fs}$.

The authors are grateful for fruitful discussions with L. Hesslow and T. Fülöp, as well as to M. Grech and F. Pérez for support with Smilei, and A. E. Turrell for providing inputs for simulations presented in [28]. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 647121, the Swedish Research Council (grant no. 2016-05012), and the Knut och Alice Wallenberg Foundation. The simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at Chalmers Centre for Computational Science and Engineering (C3SE) and High Performance Computing Center North (HPC2N).
## References

* [1] Macchi A, Borghesi M and Passoni M 2013 Rev. Mod. Phys. 85(2) 751–793
* [2] Borghesi M, Campbell D H, Schiavi A, Haines M G, Willi O, MacKinnon A J, Patel P, Gizzi L A, Galimberti M, Clarke R J, Pegoraro F, Ruhl H and Bulanov S 2002 Physics of Plasmas 9 2214–2220
* [3] Romagnani L, Fuchs J, Borghesi M, Antici P, Audebert P, Ceccherini F, Cowan T, Grismayer T, Kar S, Macchi A, Mora P, Pretzler G, Schiavi A, Toncian T and Willi O 2005 Phys. Rev. Lett. 95(19) 195001
* [4] Patel P K, Mackinnon A J, Key M H, Cowan T E, Foord M E, Allen M, Price D F, Ruhl H, Springer P T and Stephens R 2003 Phys. Rev. Lett. 91(12) 125004
* [5] Dyer G M, Bernstein A C, Cho B I, Osterholz J, Grigsby W, Dalton A, Shepherd R, Ping Y, Chen H, Widmann K and Ditmire T 2008 Phys. Rev. Lett. 101(1) 015002
* [6] Mančić A, Lévy A, Harmand M, Nakatsutsumi M, Antici P, Audebert P, Combis P, Fourmaux S, Mazevet S, Peyrusse O, Recoules V, Renaudin P, Robiche J, Dorchies F and Fuchs J 2010 Phys. Rev. Lett. 104(3) 035002
* [7] Roth M, Jung D, Falk K, Guler N, Deppert O, Devlin M, Favalli A, Fernandez J, Gautier D, Geissel M, Haight R, Hamilton C E, Hegelich B M, Johnson R P, Merrill F, Schaumann G, Schoenberg K, Schollmeier M, Shimada T, Taddeucci T, Tybo J L, Wagner F, Wender S A, Wilde C H and Wurden G A 2013 Phys. Rev. Lett. 110(4) 044802
* [8] Dromey B, Coughlan M, Senje L, Taylor M, Kuschel S, Villagomez-Bernabe B, Stefanuik R, Nersisyan G, Stella L, Kohanoff J et al. 2016 Nat. Commun. 7 10642
* [9] Barberio M, Scisciò M, Vallières S, Cardelli F, Chen S, Famulari G, Gangolf T, Revet G, Schiavi A, Senzacqua M et al. 2018 Nat. Commun. 9 372
* [10] Higginson D, Korneev P, Ruyer C, Riquier R, Moreno Q, Béard J, Chen S, Grassi A, Grech M, Gremillet L et al. 2019 Commun. Phys. 2 1–7
* [11] Bulanov S, Esirkepov T, Khoroshkov V, Kuznetsov A and Pegoraro F 2002 Physics Letters A 299 240–247
* [12] Linz U and Alonso J 2007 Phys. Rev. ST Accel. Beams 10(9) 094801
* [13] Denavit J 1992 Phys. Rev. Lett. 69 3052–3055
* [14] Silva L O, Marti M, Davies J R, Fonseca R A, Ren C, Tsung F S and Mori W B 2004 Phys. Rev. Lett. 92(1) 015002
* [15] Romagnani L, Bulanov S V, Borghesi M, Audebert P, Gauthier J C, Löwenbrück K, Mackinnon A J, Patel P, Pretzler G, Toncian T and Willi O 2008 Phys. Rev. Lett. 101 025004
* [16] Haberberger D, Tochitsky S, Fiuza F, Gong C, Fonseca R A, Silva L O, Mori W B and Joshi C 2012 Nat. Phys. 8 95–99
* [17] Fiuza F, Stockem A, Boella E, Fonseca R A, Silva L O, Haberberger D, Tochitsky S, Gong C, Mori W B and Joshi C 2012 Phys. Rev. Lett. 109(21) 215001
* [18] Pak A, Kerr S, Lemos N, Link A, Patel P, Albert F, Divol L, Pollock B B, Haberberger D, Froula D, Gauthier M, Glenzer S H, Longman A, Manzoor L, Fedosejevs R, Tochitsky S, Joshi C and Fiuza F 2018 Phys. Rev. Accel. Beams 21 103401
* [19] Karimabadi H, Roytershteyn V, Vu H X, Omelchenko Y A, Scudder J, Daughton W, Dimmock A, Nykyri K, Wan M, Sibeck D, Tatineni M, Majumdar A, Loring B and Geveci B 2014 Physics of Plasmas 21 062308
* [20] Dieckmann M E, Doria D, Sarri G, Romagnani L, Ahmed H, Folini D, Walder R, Bret A and Borghesi M 2017 Plasma Physics and Controlled Fusion 60 014014
* [21] Perkins L J, Betti R, LaFortune K N and Williams W H 2009 Phys. Rev. Lett.
103(4) 045004
* [22] Bellei C, Rinderknecht H, Zylstra A, Rosenberg M, Sio H, Li C K, Petrasso R, Wilks S C and Amendt P A 2014 Physics of Plasmas 21 056310
* [23] Santos J J, Vauzour B, Touati M, Gremillet L, Feugeas J L, Ceccotti T, Bouillaud R, Deneuville F, Floquet V, Fourment C, Hadj-Bachir M, Hulin S, Morace A, Nicolaï P, d'Oliveira P, Reau F, Samaké A, Tcherbakoff O, Tikhonchuk V T, Veltcheva M and Batani D 2017 New Journal of Physics 19 103005
* [24] Spitkovsky A 2008 The Astrophysical Journal 682 L5–L8
* [25] Lemoine M, Gremillet L, Pelletier G and Vanthieghem A 2019 Phys. Rev. Lett. 123(3) 035101
* [26] Stockem A, Fiuza F, Bret A, Fonseca R and Silva L 2014 Scientific Reports 4 3934
* [27] Sundström A, Juno J, TenBarge J M and Pusztai I 2019 J. Plasma Phys. 85(1) 905850108
* [28] Turrell A E, Sherlock M and Rose S J 2015 Nat. Commun. 6 8905
* [29] Sundström A, Siminos E, Gremillet L and Pusztai I 2020 Journal of Plasma Physics 86(2) 755860201
* [30] Derouillat J, Beck A, Pérez F, Vinci T, Chiaramello M, Grassi A, Flé M, Bouchard G, Plotnikov I, Aunai N, Dargent J, Riconda C and Grech M 2018 Comput. Phys. Commun. 222 351
* [31] Pérez F, Gremillet L, Decoster A, Drouin M and Lefebvre E 2012 Phys. of Plasmas 19 083104
* [32] Kruer W L and Estabrook K 1985 The Phys. of Fluids 28 430–432
* [33] Bauer D and Mulser P 2007 Physics of Plasmas 14 023301
* [34] May J, Tonge J, Fiuza F, Fonseca R A, Silva L O, Ren C and Mori W B 2011 Phys. Rev. E 84 025401
* [35] Pusztai I, TenBarge J M, Csapó A N, Juno J, Hakim A, Yi L and Fülöp T 2018 Plasma Physics and Controlled Fusion 60 035004
* [36] Arber T D, Bennett K, Brady C S, Lawrence-Douglas A, Ramsay M G, Sircombe N J, Gillies P, Evans R G, Schmitz H, Bell A R and Ridgers C P 2015 Plasma Physics and Controlled Fusion 57 113001
* [37] Sentoku Y and Kemp A 2008 J. of Comput. Phys. 227 6846–6861
* [38] Nanbu K 1997 Phys. Rev. E 55(4) 4642–4652
* [39] Nanbu K and Yonemura S 1998 Journal of Computational Physics 145 639–654 ISSN 0021-9991
* [40] Dharma-wardana M W C, Klug D D, Harbour L and Lewis L J 2017 Phys. Rev. E 96(5) 053206
* [41] Lee Y T and More R M 1984 The Physics of Fluids 27 1273–1286
* [42] Bhadoria S, Kumar N and Keitel C H 2019 Stable quasi-monoenergetic ion acceleration from the laser-driven shocks in a collisional plasma (preprint, https://arxiv.org/abs/1707.03309v2; version 2 contains a comparison between two PIC collision algorithms)
* [43] Drake R P 2006 High Energy Density Physics: Fundamentals, Inertial Fusion and Experimental Astrophysics (Berlin Heidelberg: Springer-Verlag) ISBN 978-3-540-29315-6
* [44] Vidal F, Matte J P, Casanova M and Larroche O 1993 Physics of Fluids B: Plasma Physics 5 3182–3190
# Measurement Error Mitigation via Truncated Neumann Series

Kun Wang<EMAIL_ADDRESS>Institute for Quantum Computing, Baidu Research, Beijing 100193, China

Yu-Ao Chen<EMAIL_ADDRESS>Institute for Quantum Computing, Baidu Research, Beijing 100193, China

Xin Wang<EMAIL_ADDRESS>Institute for Quantum Computing, Baidu Research, Beijing 100193, China

###### Abstract

Measurements on near-term quantum processors are inevitably subject to hardware imperfections that lead to readout errors. Mitigation of such unavoidable errors is crucial to better explore and extend the power of near-term quantum hardware. In this work, we propose a method to mitigate measurement errors in computing quantum expectation values using the truncated Neumann series. The essential idea is to cancel the errors by combining various noisy expectation values generated by sequential measurements determined by terms in the truncated series. We numerically test this method and find that the computation accuracy is substantially improved. Our method possesses several advantages: it does not assume any noise structure, it does not require the calibration procedure to learn the noise matrix a priori, and most importantly, the incurred error mitigation overhead is independent of system size, as long as the noise resistance of the measurement device is moderate. All these advantages empower our method as a practical measurement error mitigation method for near-term quantum devices.

## I Introduction

Quantum computers hold great promise for a variety of scientific and industrial applications McArdle _et al._ (2020); Cerezo _et al._ (2020); Bharti _et al._ (2021); Endo _et al._ (2021). However, at the current stage, noisy intermediate-scale quantum (NISQ) computers Preskill (2018) introduce significant errors that must be dealt with before performing any practically valuable tasks. Errors in a quantum computer are typically classified into quantum gate errors and measurement errors. For quantum gate errors, various quantum error mitigation techniques have been proposed to mitigate the damages caused by errors on near-term quantum devices Temme _et al._ (2017); Endo _et al._ (2018); Li and Benjamin (2017); McClean _et al._ (2017, 2020); McArdle _et al._ (2019); Bonet-Monroig _et al._ (2018); He _et al._ (2020); Giurgica-Tiron _et al._ (2020); Kandala _et al._ (2019); Endo _et al._ (2021); Sun _et al._ (2021); Czarnik _et al._ (2020). For measurement errors, experimental works have demonstrated that measurement errors in quantum devices can be well understood in terms of classical noise models Chow _et al._ (2012); Kandala _et al._ (2019); Chen _et al._ (2019), which has recently been rigorously justified Geller (2020). Specifically, an $n$-qubit noisy measurement device can be characterized by a noise matrix $A$ of size $2^{n}\times 2^{n}$. The element in the $\bm{x}$-th row and $\bm{y}$-th column, $A_{\bm{x}\bm{y}}$, is the probability of obtaining an outcome $\bm{x}$ provided that the true outcome is $\bm{y}$. If one has access to this stochastic matrix, it is straightforward to classically reverse the noise effects simply by multiplying the probability vector obtained from experimental statistics by this matrix's inverse. However, there are several limitations of this matrix inversion approach: (i) The complete characterization of $A$ requires $2^{n}$ calibration experiment setups and thus is not scalable. (ii) The matrix $A$ may be singular for large $n$, preventing direct inversion.
(iii) The inverse $A^{-1}$ is hard to compute and might not be a stochastic matrix, indicating that it can produce negative probabilities. Several approaches have been proposed to deal with these issues Maciejewski _et al._ (2020); Tannu and Qureshi (2019); Nachman _et al._ (2019); Hicks _et al._ (2021); Bravyi _et al._ (2020); Geller and Sun (2020); Murali _et al._ (2020); Kwon and Bae (2020); Funcke _et al._ (2020); Zheng _et al._ (2020); Maciejewski _et al._ (2021); Barron and Wood (2020). For example, Refs. Chen _et al._ (2019); Maciejewski _et al._ (2020) elucidated that the quality of the measurement calibration and the number of measurement samples affect the performance of measurement error mitigation methods dramatically. Motivated by the unfolding algorithms in high-energy physics, Refs. Nachman _et al._ (2019); Hicks _et al._ (2021) used the iterative Bayesian unfolding approach to avoid pathologies from the matrix inversion. Ref. Bravyi _et al._ (2020) introduced a new classical noise model based on continuous-time Markov processes and proposed an error mitigation approach that cancels errors using the quasiprobability decomposition technique Pashayan _et al._ (2015); Temme _et al._ (2017); Howard and Campbell (2017); Endo _et al._ (2018); Takagi (2020); Jiang _et al._ (2020); Regula _et al._ (2021). However, most of these works make an explicit assumption on the physical noise model and require the calibration procedure to learn the stochastic matrix $A$, and thus are not scalable in general. Recently, Ref. Berg _et al._ (2020) proposed a noise model-free measurement error mitigation method that forces the bias in the expectation value to appear as a multiplicative factor that can be removed.

In this work, we propose a measurement error mitigation method motivated by the Neumann series, applicable to any quantum algorithm where the measurement statistics are used for computing the expectation values of observables. The idea behind this method is to cancel the measurement errors by utilizing the noisy expectation values generated by sequential measurements, each determined by a term in the truncated Neumann series. The method is deliberately simple, does not make any assumption about the actual physical noise model, and does not require calibrating the stochastic matrix a priori.

The paper is organized as follows. Section II describes the quantum task of computing expectation values and explains how the noisy measurement biases the results. Section III presents the error mitigation technique via the truncated Neumann series. Section IV reports the experimental demonstration of our error mitigation method. The Appendices summarize technical details used in the main text.

## II Computing the expectation value

Let $\rho$ be an $n$-qubit quantum state generated by a quantum circuit. Most quantum computing tasks end with computing the expectation value $\operatorname{Tr}[O\rho]$ of a given observable $O$ within a prespecified precision $\varepsilon$, by post-processing the measurement outcomes of the quantum state. This task is an essential component of multifarious quantum algorithms, notable practical examples of which are variational quantum eigensolvers Peruzzo _et al._ (2014); McClean _et al._ (2016), the quantum approximate optimization algorithm Farhi _et al._ (2014), and quantum machine learning Biamonte _et al._ (2017); Havlíček _et al._ (2019).
For simplicity, we assume that the observable $O$ is diagonal in the computational basis and its elements take values in the range $[-1,1]$, i.e.,

$\displaystyle O=\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})|{\bm{x}}\rangle\\!\langle{\bm{x}}|,\quad|O(\bm{x})|\leq 1,$ (1)

where $O(\bm{x})$ is the $\bm{x}$-th diagonal element of $O$ and $|\alpha|$ is the absolute value of $\alpha$. Note that we adopt the convention that the diagonal elements are indexed from $0$. Consider $M$ independent experiments where in each round we prepare the state $\rho$ using the same quantum circuit and measure each qubit in the computational basis (see, e.g., Fig. 1). Let $\bm{s}^{m}\in\\{0,1\\}^{n}$ be the measurement outcome observed in the $m$-th round. We further define the empirical mean value

$\displaystyle\eta^{(0)}:=\frac{1}{M}\sum_{m=1}^{M}O(\bm{s}^{m}).$ (2)

Let $\operatorname{vec}(\rho)$ be the $2^{n}$-dimensional column vector formed by the diagonal elements of $\rho$. Then Bravyi _et al._ (2020)

$\displaystyle E^{(0)}:=\mathbb{E}[\eta^{(0)}]=\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\operatorname{vec}(\rho)=\operatorname{Tr}[O\rho],$ (3)

where $\mathbb{E}[X]$ is the expectation of the random variable $X$. Eq. (3) implies that $\eta^{(0)}$ is an unbiased estimator of $\operatorname{Tr}[O\rho]$. What's more, the standard deviation $\sigma(\eta^{(0)})\leq 1/\sqrt{M}$. By Hoeffding's inequality Hoeffding (1963), $M=2\log(2/\delta)/\varepsilon^{2}$ would guarantee that

$\displaystyle\operatorname{Pr}\\{|\eta^{(0)}-\operatorname{Tr}[O\rho]|\leq\varepsilon\\}\geq 1-\delta,$ (4)

where $\operatorname{Pr}\\{\cdot\\}$ is the event's probability, $\delta$ is the specified confidence, and all logarithms are in base $2$ throughout this paper.

Figure 1: Computing the expectation value $\operatorname{Tr}[O\rho]$ with the ideal measurement device (left) and the noisy measurement device (right).

However, measurement devices on current quantum hardware inevitably suffer from hardware imperfections that lead to readout errors, which are manifested as a bias in the expectation values we aim to compute (cf. the right side of Fig. 1). As previously mentioned, in the most general scenario, these errors are modeled by a $2^{n}\times 2^{n}$ noise matrix $A$. If there were no measurement error at all, $A$ would be the identity matrix $I$. The off-diagonal elements of $A$ completely characterize the readout errors. By definition, $A$ is column-stochastic in the sense that the elements of each column are non-negative and sum to $1$.

Suppose now that we adopt the same procedure for computing $\eta$ in (2), where we perform $M$ independent experiments and collect the measurement outcomes. Denote by $\bm{s}^{m,1}\in\\{0,1\\}^{n}$ the outcome observed in the $m$-th round, where the superscript $1$ indicates that the noisy measurement is applied. As in (2), we define

$\displaystyle\eta^{(1)}:=\frac{1}{M}\sum_{m=1}^{M}O(\bm{s}^{m,1}).$ (5)

We prove in Appendix A that

$\displaystyle E^{(1)}:=\mathbb{E}[\eta^{(1)}]=\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|A\operatorname{vec}(\rho),$ (6)

indicating that $\eta^{(1)}$ is no longer an unbiased estimator of $\operatorname{Tr}[O\rho]$. Comparing Eqs.
(3) and (6), we find that in the ideal case, the sampled probability distribution approximates $\operatorname{vec}(\rho)$ due to the weak law of large numbers, while in the noisy case, the sampled probability distribution approximates $A\operatorname{vec}(\rho)$, leading to a bias in the estimator.

## III Error mitigation via truncated Neumann series

A direct approach to eliminate the measurement errors from $A\operatorname{vec}(\rho)$ is to apply the inverse matrix $A^{-1}$. However, this approach is resource-consuming and only feasible when $n$ is small. To deal with this difficulty, we simulate the effect of $A^{-1}$ using a truncated Neumann series. That is, $A^{-1}$ is approximated by a linear combination of the terms $A^{k}$ for different $k$, with carefully chosen coefficients. This idea has previously been applied for linear data detection in massive multiuser multiple-input multiple-output wireless systems Wu _et al._ (2013).

Define the _noise resistance_ of the noise matrix $A$ as

$\displaystyle\xi:=2\left(1-\min_{\bm{x}\in\\{0,1\\}^{n}}\left\langle{\bm{x}}\right|A\left|{\bm{x}}\right\rangle\right).$ (7)

By definition, $1-\xi/2$ is the minimal diagonal element of $A$. Intuitively, $\xi/2$ characterizes the noisy measurement device's behavior in the worst-case scenario, since it is the maximal probability that the true outcome is $\bm{x}$ yet the observed outcome is not $\bm{x}$. In the following, we assume $\xi<1$, which is equivalent to the condition that the minimal diagonal element of $A$ is larger than $0.5$. This assumption is reasonable since otherwise the measurement device is too noisy to be useful in practice. Under this assumption, the stochastic matrix $A$ is nonsingular and the Neumann series implies that (Stewart, 1998, Theorem 4.20)

$\displaystyle A^{-1}$ $\displaystyle=\sum_{k=0}^{\infty}(I-A)^{k}$ (8a) $\displaystyle=\sum_{k=0}^{K}(I-A)^{k}+\mathcal{O}((I-A)^{K+1})$ (8b) $\displaystyle=\sum_{k=0}^{K}c_{K}(k)A^{k}+\mathcal{O}((I-A)^{K+1}),$ (8c)

where for arbitrary non-negative integers $0\leq k\leq K$, the coefficient function is defined as

$\displaystyle c_{K}(k):=(-1)^{k}\binom{K+1}{k+1},$ (9)

and $\binom{n}{k}$ is the binomial coefficient. Intuitively, Eq. (8) indicates that one may approximate the inverse matrix $A^{-1}$ using the first $K+1$ Neumann series terms, provided that the behavior of the remainder $\mathcal{O}((I-A)^{K+1})$ can be bounded. We show that this is indeed the case in the measurement error mitigation task. More specifically, using the first $K+1$ terms in the expansion (8) of $A^{-1}$, we obtain the following.

###### Theorem 1.

For an arbitrary positive integer $K$, it holds that

$\displaystyle\left|\operatorname{Tr}[O\rho]-\sum_{k=1}^{K+1}c_{K}(k-1)E^{(k)}\right|\leq\xi^{K+1},$ (10)

where

$\displaystyle E^{(k)}:=\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|A^{k}\operatorname{vec}(\rho).$ (11)

The proof is given in Appendix B. As evident from Theorem 1, the noise resistance $\xi$ of the noise matrix $A$ determines the number of terms required in the truncated Neumann series to approximate $A^{-1}$ to the desired precision. What is more, since $\xi<1$, the approximation error decays exponentially in $K$. By virtue of (6), $E^{(k)}$ can be viewed as the noisy expectation value generated by a noisy measurement device whose corresponding noise matrix is $A^{k}$.
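The equality between the binomial form (8c) and the partial sums of the Neumann series, as well as the exponential decay of the remainder, can be checked numerically. Below is a minimal NumPy sketch (our own illustration on a randomly generated column-stochastic matrix, not code from any experiment in this paper):

```python
import numpy as np
from math import comb

def c(K, k):
    """Coefficients c_K(k) = (-1)^k * binom(K+1, k+1), Eq. (9)."""
    return (-1) ** k * comb(K + 1, k + 1)

rng = np.random.default_rng(0)
d = 8  # dimension 2^n for n = 3

# Random column-stochastic matrix dominated by the identity, so that xi < 1
A = 0.8 * np.eye(d) + 0.2 * rng.dirichlet(np.ones(d), size=d).T
xi = 2 * (1 - A.diagonal().min())  # noise resistance, Eq. (7)

I = np.eye(d)
for K in range(1, 6):
    # Truncated Neumann approximation of A^{-1} in the form (8c)
    S_K = sum(c(K, k) * np.linalg.matrix_power(A, k) for k in range(K + 1))
    # I - A @ S_K equals (I - A)^{K+1}; its induced 1-norm obeys the xi^{K+1} bound
    err = np.abs(I - A @ S_K).sum(axis=0).max()
    print(f"K={K}: ||(I-A)^(K+1)||_1 = {err:.2e} <= xi^(K+1) = {xi ** (K + 1):.2e}")
```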
Let $\overline{E}:=\sum_{k=1}^{K+1}c_{K}(k-1)E^{(k)}$. Theorem 1 inspires a systematic way to estimate the expectation value $\operatorname{Tr}[O\rho]$ in two steps.

Figure 2: Experimental setup for estimating $E^{(4)}$, in which the noisy measurement device (box in blue) is executed $4$ times sequentially.

Firstly, we choose $K$ for which the RHS of (10) evaluates to $\varepsilon$, yielding the optimal truncation number

$\displaystyle K=\left\lceil\frac{\log\varepsilon}{\log\xi}-1\right\rceil.$ (12)

Such a choice guarantees that $\overline{E}$ is $\varepsilon$-close to the expectation value $\operatorname{Tr}[O\rho]$. Secondly, we compute the quantity $\overline{E}$ by estimating each term $E^{(k)}$ and computing the linear combination according to the coefficients $c_{K}$. Since $\overline{E}$ itself is only an $\varepsilon$-estimate of $\operatorname{Tr}[O\rho]$, it suffices to approximate $\overline{E}$ within an error $\varepsilon$. Motivated by the relation between $\eta^{(1)}$ and $E^{(1)}$ (see the discussion around (5) and (6)), each $E^{(k)}$ can be estimated via the following procedure:

1. Generate a quantum state $\rho$.
2. Using $\rho$ as input, execute the noisy measurement device $k$ times _sequentially_ and collect the outcome produced by the final measurement device, i.e., the $k$-th measurement device.
3. Repeat the above two steps for $M$ rounds and collect the measurement outcomes.
4. Compute an average analogous to (5) and output it as an estimate of $E^{(k)}$.

We elaborate thoroughly on the concept of sequential measurement in Appendix C and show that the classical noise model describing the sequential measurement repeated $k$ times is effectively characterized by the stochastic matrix $A^{k}$. For a sequential measurement repeated $k$ times, one can think of the rightmost $k-1$ measurements as implementing the calibration subroutine, since they accept the computational basis states as inputs. To some extent, this is a _dynamic_ calibration procedure where we do not statically enumerate all computational bases as input states but dynamically prepare the input states based on the output information of the target state from the first measurement device. For illustrative purposes, we demonstrate in Fig. 2 the experimental setup for estimating the noisy expectation value $E^{(4)}$, where the measurement device is repeated four times in each round. We summarize the whole procedure in Algorithm 1.

Algorithm 1 Error mitigation via truncated Neumann series

Input: Quantum circuit generating the $n$-qubit state $\rho$, the $n$-qubit quantum observable $O$, the $n$-qubit noisy measurement device, noise resistance $\xi$, probability tolerance $\delta$, precision parameter $\varepsilon$.
Output: $\eta$, as an estimate of $\operatorname{Tr}[O\rho]$.
1: Compute $K=\left\lceil\log\varepsilon/\log\xi-1\right\rceil$;
2: Compute $\Delta=\binom{2K+2}{K+1}-1$;
3: Compute $M=\lceil 2(K+1)\Delta\log(2/\delta)/\varepsilon^{2}\rceil$;
4: for $k=1,\cdots,K+1$ do
5: for $m=1,\cdots,M$ do
6: Run the quantum circuit to generate $\rho$;
7: Execute the measurement device $k$ times sequentially;
8: Obtain the measurement outcome $\bm{s}^{m,k}$;
9: end for
10: Compute $\eta^{(k)}=\frac{1}{M}\sum_{m=1}^{M}O(\bm{s}^{m,k})$;
11: end for
12: Compute $\eta=\sum_{k=1}^{K+1}c_{K}(k-1)\eta^{(k)}$, where $c_{K}$ is defined in (9);
13: Output $\eta$.
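To make the classical side of Algorithm 1 concrete, here is a minimal NumPy emulation (our own illustrative sketch: the quantum device is replaced by sampling from $A^{k}\operatorname{vec}(\rho)$, and the noise matrix, state, and observable are toy inputs, not those of Section IV):

```python
import numpy as np
from math import comb, ceil, log, log2

def mitigated_expectation(p_true, A, O_diag, xi, eps=0.1, delta=0.05, seed=0):
    """Classical emulation of Algorithm 1: a k-fold sequential measurement
    is emulated by sampling outcomes from the distribution A^k @ p_true."""
    rng = np.random.default_rng(seed)
    K = ceil(log(eps) / log(xi) - 1)                          # step 1, Eq. (12)
    Delta = comb(2 * K + 2, K + 1) - 1                        # step 2
    M = ceil(2 * (K + 1) * Delta * log2(2 / delta) / eps**2)  # step 3 (base-2 log)
    d = len(p_true)
    eta = 0.0
    for k in range(1, K + 2):
        p_k = np.linalg.matrix_power(A, k) @ p_true   # outcome distribution, cf. Eq. (6)
        samples = rng.choice(d, size=M, p=p_k)        # outcomes s^{m,k}
        eta_k = O_diag[samples].mean()                # step 10
        eta += (-1) ** (k - 1) * comb(K + 1, k) * eta_k  # c_K(k-1) * eta^{(k)}, step 12
    return eta

# Toy example: n = 2, O = Z (x) Z, rho the uniform superposition, Tr[O rho] = 0
O_diag = np.array([1.0, -1.0, -1.0, 1.0])
p_true = np.full(4, 0.25)
A = np.array([[0.90, 0.05, 0.04, 0.02],
              [0.04, 0.88, 0.02, 0.03],
              [0.03, 0.04, 0.90, 0.05],
              [0.03, 0.03, 0.04, 0.90]])  # toy column-stochastic noise matrix
xi = 2 * (1 - A.diagonal().min())         # = 0.24
print(mitigated_expectation(p_true, A, O_diag, xi))  # close to 0
```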
We claim that the output $\eta$ of Algorithm 1 approximates the expectation value $\operatorname{Tr}[O\rho]$ well, as captured by the following proposition.

###### Proposition 2.

The output $\eta$ of Algorithm 1 satisfies

$\displaystyle\operatorname{Pr}\left\\{|\operatorname{Tr}[O\rho]-\eta|\leq 2\varepsilon\right\\}\geq 1-\delta.$ (13)

Proof of the proposition is given in Appendix D. Intuitively, Eq. (13) says that the output $\eta$ of Algorithm 1 estimates the ideal expectation value $\operatorname{Tr}[O\rho]$ with error $2\varepsilon$ at a probability greater than $1-\delta$.

Analyzing Algorithm 1, we can see that we ought to expand the Neumann series to the $K$-th order, where $K$ is computed via (12), and estimate the $K+1$ noisy expectation values $E^{(1)},\cdots,E^{(K+1)}$ individually. For each expectation, we need $M$ copies of the quantum state. As such, the total number of quantum states consumed is given by

$\displaystyle M(K+1)$ $\displaystyle=2(K+1)^{2}\Delta\log(2/\delta)/\varepsilon^{2}$ $\displaystyle\approx 4^{K}\log(2/\delta)/\varepsilon^{2}.$ (14)

In other words, our error mitigation method increases the number of quantum states required to achieve the given precision $\varepsilon$ by a factor of $4^{K}$ compared with the case of the ideal measurement. In Fig. 3, we plot the optimal truncation number $K$ (12) as a function of the noise resistance $\xi$, where the error tolerance parameter is fixed as $\varepsilon=0.01$. One can check from the figure that $K\leq 10$ whenever the noise resistance satisfies $\xi\leq 0.657$ (equivalently, whenever the minimal diagonal element of $A$ is larger than $0.67$). That is to say, the incurred error mitigation overhead $4^{K}$ is independent of the system size, so long as the noise resistance $\xi$ is moderate, in the sense that it is below a certain threshold (say $0.657$). On the other hand, the number of noisy quantum measurements applied in Algorithm 1 is given by

$\displaystyle\left(\sum_{k=1}^{K+1}k\right)M\approx 2(K+1)^{3}\Delta\log(2/\delta)/\varepsilon^{2}.$ (15)

Compared to (14), our method uses a factor of $K+1$ more measurements than quantum states. We remark that both costs are roughly characterized by the dominant factor $4^{K}$.

Figure 3: The optimal truncation number $K$ (12) as a function of the noise resistance $\xi$, where the precision parameter is $\varepsilon=0.01$.

### III.1 Discussion on the noise resistance

In practical applications, $\xi$ can be obtained from the specifications of NISQ devices. For example, for IBM quantum devices, readout error rates of $2{-}10\%$ are often reported Kwon and Bae (2020). If such information is not available, we may perform calibration to obtain $A$ first and then compute $\xi$, which is still resource-efficient compared to computing the inverse matrix $A^{-1}$. When defining $\xi$ in (7), we do not consider any structure of $A$. If a certain noise model is assumed, the calculation of $\xi$ can be simplified. In the following, we consider the tensor product noise model and show that the noise resistance can be computed analytically. Assume $A$ is a tensor product of $n$ $2\times 2$ stochastic matrices, i.e.,

$\displaystyle A_{\operatorname{tp}}=\begin{bmatrix}1-\alpha_{1}&\beta_{1}\\\ \alpha_{1}&1-\beta_{1}\end{bmatrix}\otimes\cdots\otimes\begin{bmatrix}1-\alpha_{n}&\beta_{n}\\\ \alpha_{n}&1-\beta_{n}\end{bmatrix},$ (16)

where $\alpha_{i}$ and $\beta_{i}$ are error rates describing the $i$-th qubit's readout errors $0\to 1$ and $1\to 0$, respectively.
One can show that

$\displaystyle\xi(A_{\operatorname{tp}})=2\left(1-\prod_{i=1}^{n}\min\\{1-\alpha_{i},1-\beta_{i}\\}\right).$ (17)

Specially, if $\alpha_{i},\beta_{i}\ll 1$, then $\xi\approx 2(1-1/e^{\gamma})$, where $\gamma:=\sum_{i=1}^{n}\max\\{\alpha_{i},\beta_{i}\\}$ is called the noise strength in Bravyi _et al._ (2020).

## IV Experimental results

We apply the proposed error mitigation method to the following illustrative example and demonstrate its performance. Consider the input state $\rho=|{\Phi}\rangle\\!\langle{\Phi}|$, where

$\displaystyle\left|{\Phi}\right\rangle:=\frac{1}{\sqrt{2^{n}}}\sum_{i=0}^{2^{n}-1}\left|{i}\right\rangle,$ (18)

which is the maximal superposition state. The observable $O$ is a tensor product of Pauli $Z$ operators, i.e., $O=Z^{\otimes n}$. The ideal expectation value is $\operatorname{Tr}[O\rho]=0$. We choose $n=8$ and randomly generate a noise matrix $A^{\ast}$ whose noise resistance satisfies $\xi(A^{\ast})\approx 0.657$ (so that the noise matrix is moderate). We repeat the procedure for producing the noisy expectation value $\eta^{(1)}$ and Algorithm 1 for producing the mitigated expectation value $\eta$ a total of $1000$ times. Note that all these experiments assume the same noise matrix $A^{\ast}$, and the parameters are chosen as $\varepsilon=\delta=0.01$. The obtained expectation values are scattered in Fig. 4. It is easy to see from the figure that the noisy measurement device, characterized by the noise matrix $A^{\ast}$, incurs a bias of ${\approx}-0.007$ in the estimated expectation values. On the other hand, the error-mitigated expectation values are distributed evenly around the ideal value $0$, within a distance of $0.01$ with high probability. As evident from Fig. 4, several mitigated expectation values fall outside the expected region. These statistical outcomes match our conclusion in Proposition 2, validating the correctness and performance of the proposed error mitigation method.

Fig. 5 shows the (noisy and mitigated) expectation values estimated via the above procedure as a function of the number of qubits. In our numerical setup, for an experiment whose number of qubits $n$ is less than $8$, the corresponding noise matrix is obtained by partially tracing out the rightmost $8-n$ qubit systems from $A^{\ast}$. The entire experiment for each $n$ was repeated $1000$ times in order to estimate the error bars. The reason that the noisy estimates behave well for one and two qubits is that the underlying noise matrices are close to the identity in the total variation distance Maciejewski _et al._ (2020). It can be seen that the cross-talk noise present in the noisy measurement device severely distorts the estimated expectation value, while our error mitigation method is insensitive to this kind of error.

Figure 4: $1000$ noisy estimates $\eta^{(1)}$ (blue triangles) and error-mitigated estimates $\eta$ (red dots) for the ideal expectation value $\operatorname{Tr}[O\rho]=0$. Here, the number of qubits is $8$.

Figure 5: Average expectation values for $1\leq n\leq 8$ qubits obtained with (red dots) and without (blue triangles) the error mitigation method. Each error bar is estimated by repeating the experiment $1000$ times.

## V Conclusions

We have introduced a scalable method to mitigate measurement errors in computing expectation values of quantum observables, an essential building block of numerous quantum algorithms.
The idea behind this method is to approximate the inverse of the noise matrix determined by the noisy measurement device using a small number of Neumann series terms. Our method via the truncated Neumann series outperforms the exact matrix inversion method by significantly reducing the resource costs in time and samples of quantum states, while only slightly degrading the error mitigation performance. In particular, our method works for any classical noise model and does not require the calibration procedure to learn the noise matrix a priori. Most importantly, the incurred error mitigation overhead is independent of the system size, as long as the noise resistance of the noisy measurement device is moderate. This property is beneficial and will become increasingly important as quantum circuit sizes increase. We have numerically tested this method and found that the computation accuracy is substantially improved. We believe that the proposed method will be useful for experimental measurement error mitigation in NISQ quantum devices.

## Acknowledgements

We thank Runyao Duan for helpful suggestions. We would like to thank Zhixin Song for collecting the experiment data.

## References

* McArdle _et al._ (2020) S. McArdle, S. Endo, A. Aspuru-Guzik, S. C. Benjamin, and X. Yuan, Reviews of Modern Physics 92, 015003 (2020).
* Cerezo _et al._ (2020) M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, _et al._, arXiv preprint arXiv:2012.09265 (2020).
* Bharti _et al._ (2021) K. Bharti, A. Cervera-Lierta, T. H. Kyaw, T. Haug, S. Alperin-Lea, A. Anand, M. Degroote, H. Heimonen, J. S. Kottmann, T. Menke, _et al._, arXiv preprint arXiv:2101.08448 (2021).
* Endo _et al._ (2021) S. Endo, Z. Cai, S. C. Benjamin, and X. Yuan, Journal of the Physical Society of Japan 90, 032001 (2021).
* Preskill (2018) J. Preskill, Quantum 2, 79 (2018).
* Temme _et al._ (2017) K. Temme, S. Bravyi, and J. M. Gambetta, Physical Review Letters 119, 180509 (2017).
* Endo _et al._ (2018) S. Endo, S. C. Benjamin, and Y. Li, Physical Review X 8, 031027 (2018).
* Li and Benjamin (2017) Y. Li and S. C. Benjamin, Physical Review X 7, 021050 (2017).
* McClean _et al._ (2017) J. R. McClean, M. E. Kimchi-Schwartz, J. Carter, and W. A. De Jong, Physical Review A 95, 042308 (2017).
* McClean _et al._ (2020) J. R. McClean, Z. Jiang, N. Rubin, R. Babbush, and H. Neven, Nature Communications 11, 636 (2020).
* McArdle _et al._ (2019) S. McArdle, X. Yuan, and S. Benjamin, Physical Review Letters 122, 180501 (2019).
* Bonet-Monroig _et al._ (2018) X. Bonet-Monroig, R. Sagastizabal, M. Singh, and T. O'Brien, Physical Review A 98, 062339 (2018).
* He _et al._ (2020) A. He, B. Nachman, W. A. de Jong, and C. W. Bauer, Physical Review A 102, 012426 (2020).
* Giurgica-Tiron _et al._ (2020) T. Giurgica-Tiron, Y. Hindy, R. LaRose, A. Mari, and W. J. Zeng, arXiv preprint arXiv:2005.10921 (2020).
* Kandala _et al._ (2019) A. Kandala, K. Temme, A. D. Córcoles, A. Mezzacapo, J. M. Chow, and J. M. Gambetta, Nature 567, 491 (2019).
* Sun _et al._ (2021) J. Sun, X. Yuan, T. Tsunoda, V. Vedral, S. C. Benjamin, and S. Endo, Physical Review Applied 15, 034026 (2021).
* Czarnik _et al._ (2020) P. Czarnik, A. Arrasmith, P. J. Coles, and L. Cincio, arXiv preprint arXiv:2005.10189 (2020).
* Chow _et al._ (2012) J. M. Chow, J. M. Gambetta, A. D. Corcoles, S. T. Merkel, J. A. Smolin, C. Rigetti, S. Poletto, G. A. Keefe, M. B. Rothwell, J. R.
Rozen, _et al._, Physical Review Letters 109, 060501 (2012).
* Chen _et al._ (2019) Y. Chen, M. Farahzad, S. Yoo, and T.-C. Wei, Physical Review A 100, 052315 (2019).
* Geller (2020) M. R. Geller, Quantum Science and Technology 5, 03LT01 (2020).
* Maciejewski _et al._ (2020) F. B. Maciejewski, Z. Zimborás, and M. Oszmaniec, Quantum 4, 257 (2020).
* Tannu and Qureshi (2019) S. S. Tannu and M. K. Qureshi, in _Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture_ (2019) pp. 279–290.
* Nachman _et al._ (2019) B. Nachman, M. Urbanek, W. A. de Jong, and C. W. Bauer, arXiv preprint arXiv:1910.01969 (2019).
* Hicks _et al._ (2021) R. Hicks, C. W. Bauer, and B. Nachman, Physical Review A 103, 022407 (2021).
* Bravyi _et al._ (2020) S. Bravyi, S. Sheldon, A. Kandala, D. C. Mckay, and J. M. Gambetta, arXiv preprint arXiv:2006.14044 (2020).
* Geller and Sun (2020) M. R. Geller and M. Sun, arXiv preprint arXiv:2001.09980 (2020).
* Murali _et al._ (2020) P. Murali, D. C. McKay, M. Martonosi, and A. Javadi-Abhari, in _Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems_ (2020) pp. 1001–1016.
* Kwon and Bae (2020) H. Kwon and J. Bae, IEEE Transactions on Computers (2020).
* Funcke _et al._ (2020) L. Funcke, T. Hartung, K. Jansen, S. Kühn, P. Stornati, and X. Wang, arXiv preprint arXiv:2007.03663 (2020).
* Zheng _et al._ (2020) M. Zheng, A. Li, T. Terlaky, and X. Yang, arXiv preprint arXiv:2010.09188 (2020).
* Maciejewski _et al._ (2021) F. B. Maciejewski, F. Baccari, Z. Zimborás, and M. Oszmaniec, arXiv preprint arXiv:2101.02331 (2021).
* Barron and Wood (2020) G. S. Barron and C. J. Wood, arXiv preprint arXiv:2010.08520 (2020).
* Pashayan _et al._ (2015) H. Pashayan, J. J. Wallman, and S. D. Bartlett, Physical Review Letters 115, 070501 (2015).
* Howard and Campbell (2017) M. Howard and E. Campbell, Physical Review Letters 118, 090501 (2017).
* Takagi (2020) R. Takagi, arXiv preprint arXiv:2006.12509 (2020).
* Jiang _et al._ (2020) J. Jiang, K. Wang, and X. Wang, arXiv preprint arXiv:2012.10959 (2020).
* Regula _et al._ (2021) B. Regula, R. Takagi, and M. Gu, arXiv preprint arXiv:2102.07773 (2021).
* Berg _et al._ (2020) E. v. d. Berg, Z. K. Minev, and K. Temme, arXiv preprint arXiv:2012.09738 (2020).
* Peruzzo _et al._ (2014) A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O'brien, Nature Communications 5, 4213 (2014).
* McClean _et al._ (2016) J. R. McClean, J. Romero, R. Babbush, and A. Aspuru-Guzik, New Journal of Physics 18, 023023 (2016).
* Farhi _et al._ (2014) E. Farhi, J. Goldstone, and S. Gutmann, arXiv preprint arXiv:1411.4028 (2014).
* Biamonte _et al._ (2017) J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, Nature 549, 195 (2017).
* Havlíček _et al._ (2019) V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta, Nature 567, 209 (2019).
* Hoeffding (1963) W. Hoeffding, Journal of the American Statistical Association 58, 13 (1963).
* (45) The quantum circuit diagrams in this work are created using the quantikz package Kay (2018).
* Wu _et al._ (2013) M. Wu, B. Yin, A. Vosoughi, C. Studer, J. R. Cavallaro, and C. Dick, in _2013 IEEE International Symposium on Circuits and Systems (ISCAS)_ (IEEE, 2013) pp. 2155–2158.
* Stewart (1998) G. W. Stewart, _Matrix Algorithms: Volume 1: Basic Decompositions_ (SIAM, 1998).
* Kay (2018) A.
Kay, arXiv preprint arXiv:1809.03842 (2018), 10.17637/rh.7000520.v4. * Wilde (2016) M. M. Wilde, _Quantum Information Theory 2nd Edition_ (Cambridge University Press, 2016). * Naghiloo (2019) M. Naghiloo, arXiv preprint arXiv:1904.09291 (2019). * Egger _et al._ (2018) D. J. Egger, M. Werninghaus, M. Ganzhorn, G. Salis, A. Fuhrer, P. Mueller, and S. Filipp, Physical Review Applied 10, 044030 (2018). * Magnard _et al._ (2018) P. Magnard, P. Kurpiers, B. Royer, T. Walter, J.-C. Besse, S. Gasparinetti, M. Pechal, J. Heinsoo, S. Storz, A. Blais, _et al._ , Physical Review Letters 121, 060502 (2018). * Yirka and Subasi (2020) J. Yirka and Y. Subasi, arXiv preprint arXiv:2010.03080 (2020).

## Appendix A Proof of Eq. (6)

###### Proof.

By the definition of $\eta^{(1)}$, we have

$\displaystyle\eta^{(1)}=\frac{1}{M}\sum_{m=1}^{M}O(\bm{s}^{m,1})=\frac{1}{M}\sum_{m=1}^{M}\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\bm{s}^{m,1}\rangle=\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\left(\frac{1}{M}\sum_{m=1}^{M}|\bm{s}^{m,1}\rangle\right).$ (19)

The expectation value can be evaluated as

$\displaystyle E^{(1)}:=\mathbb{E}[\eta^{(1)}]$ $\displaystyle=\mathbb{E}\left[\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\left(\frac{1}{M}\sum_{m=1}^{M}|\bm{s}^{m,1}\rangle\right)\right]$ (20) $\displaystyle=\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\mathbb{E}\left[\frac{1}{M}\sum_{m=1}^{M}|\bm{s}^{m,1}\rangle\right]$ (21) $\displaystyle=\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|A\operatorname{vec}(\rho).$ (22)

∎

## Appendix B Proof of Theorem 1

###### Proof.

First of all, notice that

$\displaystyle\left|\operatorname{Tr}[O\rho]-\sum_{k=1}^{K+1}c_{K}(k-1)E^{(k)}\right|$ $\displaystyle=\left|\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\operatorname{vec}(\rho)-\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\left(\sum_{k=1}^{K+1}c_{K}(k-1)A^{k}\operatorname{vec}(\rho)\right)\right|$ (23) $\displaystyle=\left|\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\left(I-\sum_{k=1}^{K+1}c_{K}(k-1)A^{k}\right)\operatorname{vec}(\rho)\right|$ (24) $\displaystyle=\left|\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\left(I-A\left(\sum_{k=1}^{K+1}c_{K}(k-1)A^{k-1}\right)\right)\operatorname{vec}(\rho)\right|$ (25) $\displaystyle=\left|\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\left(I-A\left(\sum_{k=0}^{K}c_{K}(k)A^{k}\right)\right)\operatorname{vec}(\rho)\right|$ (26) $\displaystyle=\left|\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\left(I-A\left(\sum_{k=0}^{K}(I-A)^{k}\right)\right)\operatorname{vec}(\rho)\right|$ (27) $\displaystyle=\left|\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|\left(I-\left(I-(I-A)^{K+1}\right)\right)\operatorname{vec}(\rho)\right|$ (28) $\displaystyle=\left|\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|(I-A)^{K+1}\operatorname{vec}(\rho)\right|,$ (29)

where (27) follows from (8) and (28) follows from the closed-form formula of a geometric series. Now we show that the quantity in (29) can be bounded from above. Define the induced matrix $1$-norm of an $m\times n$ matrix $B$ as

$\displaystyle\left\lVert B\right\rVert_{1}:=\max_{1\leq j\leq n}\sum_{i=1}^{m}|B_{ij}|\equiv\max_{1\leq j\leq n}\sum_{i=1}^{m}|\left\langle{i}\right|B\left|{j}\right\rangle|,$ (30)

which is simply the maximum absolute column sum of the matrix. Let $\rho(\bm{y})$ be the $\bm{y}$-th diagonal element of the quantum state $\rho$.
Consider the following chain of inequalities:

$\displaystyle\left|\sum_{\bm{x}\in\\{0,1\\}^{n}}O(\bm{x})\langle\bm{x}|(I-A)^{K+1}\operatorname{vec}(\rho)\right|$ $\displaystyle=\left|\sum_{\bm{x}\in\\{0,1\\}^{n}}\sum_{\bm{y}\in\\{0,1\\}^{n}}O(\bm{x})\rho(\bm{y})\langle\bm{x}|(I-A)^{K+1}|\bm{y}\rangle\right|$ (31a) $\displaystyle\leq\sum_{\bm{x}\in\\{0,1\\}^{n}}\sum_{\bm{y}\in\\{0,1\\}^{n}}|O(\bm{x})|\cdot\rho(\bm{y})\cdot\left|\langle\bm{x}|(I-A)^{K+1}|\bm{y}\rangle\right|$ (31b) $\displaystyle\leq\sum_{\bm{x}\in\\{0,1\\}^{n}}\sum_{\bm{y}\in\\{0,1\\}^{n}}\rho(\bm{y})\left|\langle\bm{x}|(I-A)^{K+1}|\bm{y}\rangle\right|$ (31c) $\displaystyle=\sum_{\bm{y}\in\\{0,1\\}^{n}}\rho(\bm{y})\sum_{\bm{x}\in\\{0,1\\}^{n}}\left|\langle\bm{x}|(I-A)^{K+1}|\bm{y}\rangle\right|$ (31d) $\displaystyle\leq\sum_{\bm{y}\in\\{0,1\\}^{n}}\rho(\bm{y})\lVert(I-A)^{K+1}\rVert_{1}$ (31e) $\displaystyle=\lVert(I-A)^{K+1}\rVert_{1}$ (31f) $\displaystyle\leq\lVert I-A\rVert_{1}^{K+1}$ (31g) $\displaystyle=\xi^{K+1},$ (31h)

where (31c) follows from the assumption that $|O(\bm{x})|\leq 1$ (cf. Eq. (1)), (31e) follows from the definition of the induced matrix $1$-norm, (31f) follows from the fact that $\rho$ is a quantum state and thus $\sum_{\bm{y}}\rho(\bm{y})=1$, (31g) follows from the submultiplicativity property of the induced matrix norm, and (31h) follows from Lemma 3 stated below. We are done. ∎

###### Lemma 3.

Let $A$ be a column stochastic matrix of size $d\times d$. It holds that

$\displaystyle\left\lVert I-A\right\rVert_{1}=\xi(A),$ (32)

where $\xi(A)$ is defined in (7).

###### Proof.

Since $A$ is column stochastic, $I-A$ has non-negative diagonal elements and non-positive off-diagonal elements. Thus

$\displaystyle\left\lVert I-A\right\rVert_{1}$ $\displaystyle=\max_{1\leq j\leq d}\left(1-A_{jj}+\sum_{i\neq j}A_{ij}\right)$ (33) $\displaystyle=\max_{1\leq j\leq d}\left(1-A_{jj}+1-A_{jj}\right)$ (34) $\displaystyle=2\max_{1\leq j\leq d}\left(1-A_{jj}\right)$ (35) $\displaystyle=2-2\min_{1\leq j\leq d}A_{jj}$ (36) $\displaystyle=:\xi(A),$ (37)

where the second line follows from the fact that $A$ is column stochastic. ∎

## Appendix C Sequential measurements

In this appendix, we prove that the classical noise model describing the sequential measurement repeated $k$ times is effectively characterized by the stochastic matrix $A^{k}$. We begin with the simple case $k=2$. Since the noise model is classical and linear in the input, it suffices to consider the computational basis states as inputs. As shown in Fig. 6, we apply the noisy quantum measurement device two times sequentially on the input state $|{\bm{x}}\rangle\\!\langle{\bm{x}}|$ in the computational basis, where $\bm{x}\in\\{0,1\\}^{n}$. Assume the measurement outcome of the first measurement is $\bm{y}$ and the measurement outcome of the second measurement is $\bm{z}$, where $\bm{y},\bm{z}\in\\{0,1\\}^{n}$. Assume that the noise matrix associated with this sequential measurement is $A^{\prime}$. That is, the probability of obtaining the outcome $\bm{z}$ provided the true outcome is $\bm{x}$ is given by $A^{\prime}_{\bm{z}\bm{x}}$. Practically, we input $|{\bm{x}}\rangle\\!\langle{\bm{x}}|$ to the first noisy measurement device and obtain the outcome $\bm{y}$. The probability of this event is $A_{\bm{y}\bm{x}}$, by the definition of the noise matrix. Similarly, we input $|{\bm{y}}\rangle\\!\langle{\bm{y}}|$ to the second noisy measurement device and obtain the outcome $\bm{z}$. The probability of this event is $A_{\bm{z}\bm{y}}$.
Inspecting the chain $\bm{x}\to\bm{y}\to\bm{z}$, we have

$\displaystyle A^{\prime}_{\bm{z}\bm{x}}=\sum_{\bm{y}\in\\{0,1\\}^{n}}A_{\bm{z}\bm{y}}A_{\bm{y}\bm{x}}=(A^{2})_{\bm{z}\bm{x}}.$ (38)

The above analysis justifies that the classical noise model describing the sequential measurement repeated $2$ times is effectively characterized by the stochastic matrix $A^{2}$. The general case can be analyzed similarly.

Figure 6: Apply the noisy quantum measurement device two times sequentially on the input state $|{\bm{x}}\rangle\\!\langle{\bm{x}}|$ where $\bm{x}\in\\{0,1\\}^{n}$. The measurement outcome of the first measurement is $\bm{y}$ and the measurement outcome of the second measurement is $\bm{z}$.

Mathematically, quantum measurements can be modeled as quantum-to-classical channels (Wilde, 2016, Chapter 4.6.6), which take a quantum system to a classical one. Experimentally, the implementation of quantum measurement is platform-dependent and has different characterizations. For example, the fabrication and control of quantum coherent superconducting circuits have enabled experiments that implement quantum measurement Naghiloo (2019). Based on the outcome data, experimental measurements are typically categorized into two types: those that only output classical outcomes and those that output both classical outcomes and quantum states. That is, besides the usual classical outcome sequences, the latter type of measurement device will also output a quantum state on the computational basis corresponding to the classical outcome. For the former type, we can implement the sequential measurement via the _qubit reset_ Egger _et al._ (2018); Magnard _et al._ (2018); Yirka and Subasi (2020) approach, by which we mean the ability to re-initialize the qubits into a known state, usually a state in the computational basis, during the course of the computation. Technically, when the $i$-th noisy measurement outputs an outcome sequence $\bm{s}^{i}\in\\{0,1\\}^{n}$, we use the qubit reset technique to prepare the computational basis state $|\bm{s}^{i}\rangle\\!\langle\bm{s}^{i}|$ and feed it to the $(i+1)$-th noisy measurement (cf. Fig. 6). In this case, the noisy measurement device can be reused. For the latter type, the sequential measurement can be implemented efficiently: when the $i$-th noisy measurement outputs a classical sequence and a quantum state on the computational basis, we feed the quantum state to the $(i+1)$-th noisy measurement.

## Appendix D Proof of Proposition 2

###### Proof.

By definition,

$\displaystyle\eta=\sum_{k=1}^{K+1}c_{K}(k-1)\eta^{(k)}=\frac{1}{M}\sum_{k=1}^{K+1}\sum_{m=1}^{M}c_{K}(k-1)O(\bm{s}^{m,k})=\frac{1}{M(K+1)}\sum_{k=1}^{K+1}\sum_{m=1}^{M}(K+1)c_{K}(k-1)O(\bm{s}^{m,k}).$ (39)

Introducing the new random variables $X_{m,k}:=(K+1)c_{K}(k-1)O(\bm{s}^{m,k})$, we have

$\displaystyle\eta=\frac{1}{M(K+1)}\sum_{k=1}^{K+1}\sum_{m=1}^{M}X_{m,k}.$ (40)

Intuitively, Eq. (40) says that $\eta$ can be viewed as the empirical mean value of the set of random variables

$\displaystyle\left\\{X_{m,k}:m=1,\cdots,M;\,k=1,\cdots,K+1\right\\}.$ (41)

First, we show that the absolute value of each $X_{m,k}$ is upper bounded as

$\displaystyle|X_{m,k}|=|(K+1)c_{K}(k-1)O(\bm{s}^{m,k})|\leq(K+1)|c_{K}(k-1)||O(\bm{s}^{m,k})|\leq(K+1)|c_{K}(k-1)|,$ (42)

where the second inequality follows from the assumption on $O$ (cf. Eq. (1)).
Then, we show that $\eta$ is an unbiased estimator of the quantity $\sum_{k=1}^{K+1}c_{K}(k-1)E^{(k)}$:

$\displaystyle\mathbb{E}[\eta]$ $\displaystyle=\mathbb{E}\left[\frac{1}{M(K+1)}\sum_{k=1}^{K+1}\sum_{m=1}^{M}X_{m,k}\right]$ (43a) $\displaystyle=\mathbb{E}\left[\frac{1}{M}\sum_{k=1}^{K+1}\sum_{m=1}^{M}c_{K}(k-1)O(\bm{s}^{m,k})\right]$ (43b) $\displaystyle=\sum_{k=1}^{K+1}c_{K}(k-1)\left(\sum_{\bm{x}}O(\bm{x})\langle\bm{x}|\mathbb{E}_{M}\left[\frac{1}{M}\sum_{m=1}^{M}|\bm{s}^{m,k}\rangle\right]\right)$ (43c) $\displaystyle=\sum_{k=1}^{K+1}c_{K}(k-1)\left(\sum_{\bm{x}}O(\bm{x})\langle\bm{x}|A^{k}\operatorname{vec}(\rho)\right)$ (43d) $\displaystyle=\sum_{k=1}^{K+1}c_{K}(k-1)E^{(k)},$ (43e)

where the last equality follows from (11). Eqs. (42) and (43) together guarantee that the prerequisites of Hoeffding’s inequality hold. By Hoeffding’s inequality, we have

$\displaystyle\Pr\left\\{\left|\eta-\sum_{k=1}^{K+1}c_{K}(k-1)E^{(k)}\right|\geq\varepsilon\right\\}$ $\displaystyle\leq 2\exp\left(-\frac{2M^{2}(K+1)^{2}\varepsilon^{2}}{4\sum_{k=1}^{K+1}\sum_{m=1}^{M}((K+1)c_{K}(k-1))^{2}}\right)$ (44) $\displaystyle=2\exp\left(-\frac{2M^{2}(K+1)^{2}\varepsilon^{2}}{4M(K+1)^{3}\left(\sum_{k=0}^{K}[c_{K}(k)]^{2}\right)}\right)$ (45) $\displaystyle=2\exp\left(-\frac{M\varepsilon^{2}}{2(K+1)\Delta}\right),$ (46)

where $\Delta:=\sum_{k=0}^{K}[c_{K}(k)]^{2}=\binom{2K+2}{K+1}-1$. Solving

$\displaystyle 2\exp\left(-\frac{M\varepsilon^{2}}{2(K+1)\Delta}\right)\leq\delta$ (47)

gives

$\displaystyle M\geq 2(K+1)\Delta\log(2/\delta)/\varepsilon^{2}.$ (48)

To summarize, choosing $K=\left\lceil\log\varepsilon/\log\xi-1\right\rceil$ and $M=\lceil 2(K+1)\Delta\log(2/\delta)/\varepsilon^{2}\rceil$, we are able to obtain the following two statements

$\displaystyle\Pr\left\\{\left|\eta-\sum_{k=1}^{K+1}c_{K}(k-1)E^{(k)}\right|\geq\varepsilon\right\\}\leq\delta,$ (49) $\displaystyle\left|\operatorname{Tr}[O\rho]-\sum_{k=1}^{K+1}c_{K}(k-1)E^{(k)}\right|\leq\varepsilon,$ (50)

where the first one is shown above and the second one is proved in Theorem 1. Using the union bound and the triangle inequality, we conclude that $\eta$ can estimate the ideal expectation value $\operatorname{Tr}[O\rho]$ with error $2\varepsilon$ at a probability greater than $1-\delta$. ∎
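To make the classical post-processing concrete, the following minimal Python sketch (our own illustration, not code released with this paper; all helper names are ours) builds the truncated-Neumann-series coefficients $c_{K}(k)=(-1)^{k}\binom{K+1}{k+1}$, which satisfy $\sum_{k=0}^{K}c_{K}(k)A^{k}=\sum_{k=0}^{K}(I-A)^{k}$ as used in step (27) above, and numerically checks the bound $\lVert(I-A)^{K+1}\rVert_{1}\leq\xi^{K+1}$ of Theorem 1 on a toy readout-noise model.

```python
import numpy as np
from math import comb, ceil, log

def neumann_coefficients(K):
    # c_K(k) = (-1)^k * binom(K+1, k+1), so that
    # sum_k c_K(k) A^k = sum_{k=0}^K (I - A)^k  (truncated Neumann series).
    return [(-1) ** k * comb(K + 1, k + 1) for k in range(K + 1)]

def noise_resistance(A):
    # xi(A) = ||I - A||_1 = 2 (1 - min_j A_jj) for a column-stochastic A (Lemma 3).
    return 2 * (1 - np.min(np.diag(A)))

# Toy column-stochastic noise matrix on one qubit (symmetric bit-flip readout noise).
p = 0.05
A = np.array([[1 - p, p], [p, 1 - p]])

xi = noise_resistance(A)
eps = 1e-3
K = ceil(log(eps) / log(xi) - 1)  # truncation order suggested by Theorem 1

c = neumann_coefficients(K)
approx_inv = sum(ck * np.linalg.matrix_power(A, k) for k, ck in enumerate(c))

# Residual ||I - A * sum_k c_K(k) A^k||_1 = ||(I - A)^{K+1}||_1 <= xi^{K+1}.
residual = np.abs(np.eye(2) - A @ approx_inv).sum(axis=0).max()
assert residual <= xi ** (K + 1) + 1e-12

Delta = comb(2 * K + 2, K + 1) - 1  # = sum_{k=0}^K c_K(k)^2, the sampling overhead
```

For this toy model the residual equals $\xi^{K+1}$ exactly, since $I-A$ is proportional to a projector; in general it can only be smaller.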
# Stabilizer Tensor Networks: universal quantum simulator on a basis of stabilizer states

Sergi Masot-Llima
Barcelona Supercomputing Center, Barcelona 08034, Spain
Universitat de Barcelona, Barcelona 08007, Spain

Artur Garcia-Saez
Barcelona Supercomputing Center, Barcelona 08034, Spain
Qilimanjaro Quantum Tech, Barcelona 08007, Spain

###### Abstract

Efficient simulation of quantum computers relies on understanding and exploiting the properties of quantum states. This is the case for methods such as tensor networks, based on entanglement, and the tableau formalism, which represents stabilizer states. In this work, we integrate these two approaches to present a generalization of the tableau formalism used for Clifford circuit simulation. We explicitly prove how to update our formalism with Clifford gates, non-Clifford gates, and measurements, enabling universal circuit simulation. We also discuss how the framework allows for efficient simulation of more states, raising some interesting questions on the representation power of tensor networks and the quantum properties of resources such as entanglement and magic, and support our claims with simulations.

Simulation of quantum computing is crucial for two main reasons: driving science in fields like condensed matter physics [1, 2] or quantum chemistry [3, 4, 5], as long as we do not have large, error-corrected devices, and testing quantum advantage claims [6, 7, 8] made by cutting-edge devices [9, 10]. To simulate efficiently beyond a few dozen qubits, we must find alternative characterisations due to the exponential growth of brute force approaches. Thus, a large effort is put towards identifying which states are easy to simulate and why, given the absence of a universal description of simulability. Resource theories [11] are a useful tool for this task: they characterize the operations that are easy to do (free operations) in a certain framework. We are particularly interested in entanglement [12] and stabilizer rank [13, 14], for their relation to tensor networks (TN) and the stabilizer formalism, respectively. The interest in relating different resources, particularly these two, is not new. Previous research has found that some stabilizer states present maximal entanglement, at least in the bipartite sense [15], although most types of entangled states are not achievable with Clifford circuits. Recently, magic in Matrix Product States (MPS), a type of TN, has also been characterized and looked into [16, 17, 18], and it is noteworthy that separable states with a lot of magic are complex in the stabilizer formalism, even though they are trivial to simulate with resource theories of entanglement. This means that these resources are, in some sense, orthogonal, as depicted in Fig. 1a. In this article, we unify simulation strategies for entanglement and magic by using a special basis [19] in conjunction with tensor networks, as shown in Fig. 1b, and we focus on how the proposed method can simulate arbitrary circuits.

Figure 1: Showcase of the stabilizer formalism. In a), we classify states under two different resources. For the resource of entanglement, in blue (non-stabilizerness, in orange), the axis $y=0$ ($x=0$) represents the free states, while its adjacent region represents states with low amounts of entanglement (non-stabilizerness), which are classically simulatable with tensor networks (stabilizer tableaus). They are simultaneously simulatable with stabilizer tensor networks, in green, and can be characterized with a different resource $R$.
In b) we show how stabilizer TN joins the other methods: the tableau formalism (b1) encodes a stabilizer state and a set of destabilizer generators which are used to form a basis $\mathcal{B}(\mathcal{S},\mathcal{D})$ for the Hilbert space. The first $n$ rows of the tableau encode the decomposition of $n$ destabilizer generators, and rows $n+1$, … , $2n$ encode $n$ stabilizer generators. An extra column $r$ indicates the phase of each generator, since $S$ and $-S$ stabilize different states. The amplitudes of decomposing a state $\ket{\psi}$ into $\mathcal{B}(\mathcal{S},\mathcal{D})$ are stored in a tensor network (green, b2).

Entanglement as a resource for simulation is usually characterized by bipartite entanglement between sectors of the system. This applies to methods such as circuit cutting [20], entanglement forging [21] or tensor networks [22]. These simulations rely on limited entanglement, mostly between close neighbors [23], or on a hierarchical structure of entanglement [24, 25]. Free operations consist of single-qubit (local) gates and classical communication [26, 27]. In the extreme case of no entanglement, the simulation cost of a system grows linearly with its size; in more complex cases, systems can adhere to an area law [28] that allows TN methods like DMRG [29] to perform efficient simulations with great success. Tensor networks encode high-dimensional tensors into the product of smaller, low-dimensional ones, and are in general advantageous whenever long-range correlations are restricted. The tensors can be connected through bonds in various geometries, and networks with more complexity and expressive power will entail higher performance costs. We focus on a 1D MPS structure to encode the amplitudes of a quantum state:

$\mathcal{T}^{i_{1}i_{2}\dots i_{n}}=\sum_{k_{1}k_{2}\dots k_{n-1}}(T_{1})^{i_{1}}_{k_{1}}(T_{2})^{i_{2}k_{1}}_{k_{2}}(T_{3})^{i_{3}k_{2}}_{k_{3}}\dots(T_{n})^{i_{n}k_{n-1}}$ (1)

In this structure, the dimension $\chi$ of a given bond $k$ corresponds to the amount of entanglement between the two subsystems it connects, as measured by the Schmidt rank [30]. We call this $\chi$ the bond dimension. A separable state can be encoded into a TN with $\chi=1$, whereas a state with mostly local entanglement (the AKLT state [31]) needs $\chi=2$, and a maximally entangled state requires up to $\chi=2^{n/2}$. The bond dimension can also be artificially limited at the cost of precision.

The stabilizer tableau formalism [32], on the other hand, can efficiently simulate any circuit composed only of Clifford gates with a classical computer. The states that can be prepared under these constraints are known as stabilizer states. In this context, the set $\mathcal{C}$ of Clifford gates are the free operations, and non-Clifford gates increase the stabilizer rank [33], which constitutes the resource. However, resource theories of stabilizer states are typically studied with other measures such as magic [34], stabilizer Rényi entropy [35] or Wigner positivity [36] due to their interesting properties. This simulation formalism is based on a stabilizer set $\mathcal{S}$ of Pauli operators ($\mathcal{P}_{n}$). It uniquely defines a state that fulfills $S\ket{\psi_{\mathcal{S}}}=\ket{\psi_{\mathcal{S}}}$ for any $S\in\mathcal{S}$. A Pauli operator $P$ can be described with two boolean vectors and a phase, $P=\alpha\cdot(x_{1}x_{2}\dots x_{n})\cdot(z_{1}z_{2}\dots z_{n})$, meaning we only need $2n+1$ boolean values to represent one.
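To make this encoding concrete, here is a minimal Python sketch (our own illustration; the class and method names are hypothetical and not taken from the paper's released implementation) of an $n$-qubit Pauli operator stored as two boolean vectors plus a phase bit, together with the symplectic-inner-product commutation check that the tableau formalism relies on.

```python
import numpy as np

class Pauli:
    """n-qubit Pauli operator stored as two boolean vectors and a phase bit:
    P = alpha * (x_1 ... x_n) * (z_1 ... z_n), i.e. 2n+1 bits in total."""

    def __init__(self, x, z, r=0):
        self.x = np.asarray(x, dtype=np.uint8)  # X part, one bit per qubit
        self.z = np.asarray(z, dtype=np.uint8)  # Z part, one bit per qubit
        self.r = r                              # phase bit: 0 -> +1, 1 -> -1

    def commutes_with(self, other):
        # Two Pauli operators commute iff their symplectic inner product
        # x . z' + z . x' vanishes mod 2.
        return (int(self.x @ other.z) + int(self.z @ other.x)) % 2 == 0

# Example on n = 2: X on qubit 1 anticommutes with Z on qubit 1,
# but commutes with Z on qubit 2.
X1 = Pauli([1, 0], [0, 0])
Z1 = Pauli([0, 0], [1, 0])
Z2 = Pauli([0, 0], [0, 1])
assert not X1.commutes_with(Z1) and X1.commutes_with(Z2)
```

A full tableau simply stacks $2n$ such rows ($n$ destabilizer and $n$ stabilizer generators) plus the phase column $r$, as sketched in Fig. 1 (b1).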
Also, a set of $n$ generators of $\mathcal{S}$ is enough to fully define the group. This means a tableau of $n\times(2n+1)$ boolean entries stores all the information about $\mathcal{S}$ (see Fig. 1, b1). Since $\mathcal{P}_{n}$ is closed under the action of any gate $C\in\mathcal{C}$, a Clifford circuit can be simulated by finding the new tableau after applying each gate $C$ in it. An efficient ($O(n^{2})$ time) approach to update the tableaus is known [37], and it works by also storing the generators $d_{i}$ of the destabilizer group $\mathcal{D}$ – these operators fulfill $\\{d_{i},s_{i}\\}=0$, $[d_{i},d_{j}]=0$ and $[d_{i},s_{j}]=0$ for any $i\neq j$. In the following equations, we use the notation $d_{\hat{i}}$ for a generic destabilizer in $\mathcal{D}$, defined by $\hat{i}$ and the generators $d_{i}$ as $d_{\hat{i}}=d_{1}^{i_{1}}\dots d_{n}^{i_{n}}$; the same follows for stabilizers $s_{\hat{i}}$. Paired with the stabilizer state $\ket{\psi_{\mathcal{S}}}$, they define the set $\\{d_{\hat{i}}\ket{\psi_{\mathcal{S}}}\\}_{\hat{i}}$, which forms a basis $\mathcal{B}(\mathcal{S},\mathcal{D})$ [19] of the Hilbert space $\mathcal{H}^{n}$:

$\ket{\psi}=\sum_{i=0}^{2^{n}-1}\nu_{i}d_{\hat{i}}\ket{\psi_{\mathcal{S}}}.$ (2)

This is shown and proven in Lemma 1. We encode these amplitudes on an MPS using $\ket{\nu}=\sum_{i}\nu_{i}\ket{i}$, and show how they change when applying any unitary gate or measurement with the following update rules:

1. Clifford gate $G$: Update the stabilizer basis $\mathcal{B}(\mathcal{S},\mathcal{D})$ by conjugating with $G$, following the rules in the tableau formalism (see [37] or Appendix D) for the update $\ket{\psi_{\mathcal{S}}}\rightarrow G\ket{\psi_{\mathcal{S}}}=\ket{\psi_{\tilde{\mathcal{S}}}}$. This gives a new basis $\mathcal{B}(\tilde{\mathcal{S}},\tilde{\mathcal{D}})$: $G\ket{\psi}=\sum_{i}\nu_{i}Gd_{\hat{i}}\ket{\psi_{\mathcal{S}}}=\sum_{i}\nu_{i}\tilde{d}_{\hat{i}}\ket{\psi_{\tilde{\mathcal{S}}}},$ (3) and leaves the coefficient state $\ket{\nu}$ unchanged: $G\ket{\nu}=\ket{\nu}.$ (4)

2. Non-Clifford gate $\mathcal{U}$: Find the decomposition $\mathcal{U}=\sum_{i}\phi_{i}\delta_{\hat{d}_{i}}\sigma_{\hat{s}_{i}}$, then modify $\ket{\psi}$ as: $\mathcal{U}\ket{\psi}=\sum_{i,j}((-1)^{\hat{j}\cdot{\hat{s}_{i}}}\phi_{i}\nu_{\hat{j}})\>d_{\hat{j}+\hat{d}_{i}}\ket{\psi_{\mathcal{S}}}.$ (5) When the decomposition only has two terms, this is equivalent to a rotation on $\ket{\nu}$: $\ket{\nu^{\prime}}=\left(\cos(\theta)I-i\sin(\theta)X_{I_{x}}Y_{I_{y}}Z_{I_{z}}\right)\ket{\nu}.$ (6) The basis $\mathcal{B}(\mathcal{S},\mathcal{D})$ stays unchanged.

3. Measurement of observable $O$: Find the decomposition $O=\alpha\delta_{\hat{d}}\sigma_{\hat{s}}$ and the value of $\braket{\mathcal{O}}$ with: $\braket{\mathcal{O}}=\alpha\braket{\nu}{X_{\hat{d}}Z_{\hat{s}}}{\nu}.$ (7) Now choose an outcome $m\in\\{+,-\\}$ with probability $p_{+}=\frac{1+\braket{O}}{2}$, $p_{-}=1-p_{+}$, and let $k$ be the position of the first 1 in $\hat{d}$.
Then update the stabilizer basis to $\mathcal{B^{\prime}}(\mathcal{S^{\prime}},\mathcal{D^{\prime}})$ following the rules for a measurement in the original formalism, and find $\ket{\psi^{\prime}}=(I+m\mathcal{O})/2\ket{\psi}$ as: $\ket{\psi^{\prime}}=\sum_{i}\delta_{i_{k},0}\left(\frac{1}{\sqrt{2}}\nu_{\hat{i}}+m\frac{\alpha(-1)^{\hat{i}\cdot\hat{s}}}{\sqrt{2}}\nu_{\hat{i}+\hat{d}}\right)d_{\hat{i}}\ket{\psi_{\mathcal{S^{\prime}}}},$ (8) which corresponds to a projection and a rotation on $\ket{\nu}$: $\ket{\nu^{\prime}}=\ket{0}\bra{0}_{k}\left(\frac{1}{\sqrt{2}}I+m\frac{\alpha(-i)^{|I_{y}|}}{\sqrt{2}}X_{I_{x}}Y_{I_{y}}Z_{I_{z}}\right)\ket{\nu}.$ (9)

When simulating a circuit, we use a decomposition into CNOT and single-qubit rotations, as is usually done in real devices. This ensures that all non-Clifford gates conform to the particular case of Eq. 6. The measured observables $O$ are decomposed into (de)stabilizers, which we distinguish from the generators of the basis $d_{\hat{i}}$ by writing $\sigma_{\hat{s}}$ ($\delta_{\hat{d}}$) instead. In Annex A we explain the rules in more detail and prove two more lemmas that justify the equations shown here. To compute the change of basis for Clifford gates, we employ already known [37] efficient methods to update the tableau. Since the amplitudes do not change, they preserve $\chi$. Non-Clifford gates and measurements, on the other hand, can introduce correlations between amplitudes that increase $\chi$ and make calculations more expensive. Consequently, $\chi$ constitutes our resource and the free operations include all Clifford gates and also non-Clifford gates $\mathcal{U}$ such that Eq. 6 is a local rotation on $\ket{\nu}$. We prove this in Corollary 2.1, in the annex.

The key ingredient to stabilizer tensor networks is allowing the basis to change beyond local rotations. The tableau algorithm replaces the computational basis with a basis of stabilizer states, which can have some entanglement, and forgoes the correspondence between qubits and tensors. In a way, entanglement is transferred from the tensor network $\ket{\nu}$ representation into the basis, at the cost of single-qubit gates potentially becoming entangling on the amplitudes of $\ket{\nu}$. In general, this only happens if the circuit already contained entangling gates, and thus that part of the circuit was entangling to begin with. Therefore, we argue that the formalism does not generate fictitious entanglement. Instead, we say we store potential entanglement in the basis.

We mentioned several resources linked to non-stabilizerness. Among those, stabilizer rank has a direct link to our formalism. For an arbitrary state $\ket{\psi}$, its stabilizer rank is the smallest $\xi$ that allows a decomposition into stabilizer states $\ket{\psi_{S}}$:

$\ket{\psi}=\sum_{i=1}^{\xi}\alpha_{i}\ket{\psi_{S}^{i}}.$ (10)

Stabilizer states have $\xi=1$. The structure on the basis states we use does not mean that a low stabilizer rank translates into a simple $\ket{\nu}$, as the necessary stabilizer states might not be simultaneously in $\mathcal{B}(\mathcal{S},\mathcal{D})$. We can define a pseudo-stabilizer rank $\tilde{\xi}$ as the number of non-zero coefficients in $\ket{\nu}$. This is obviously an upper bound to $\xi$, but a thorough characterization of how these quantities relate is left for future work. Let us demonstrate that our formalism can efficiently simulate two different scenarios: low entanglement and low stabilizer rank.
In the original tableau formalism [37], it was already shown how we can simulate any circuit and encode any state in the $n$-qubit Hilbert space with a superposition of tableaus. However, this is akin to a brute-force simulation with the statevector approach, and grows exponentially with the number $t$ of non-Clifford gates. Instead, our formalism can take advantage of all the tools that have been developed for tensor network simulations. Consider the state $\ket{T}^{n}$, which can be prepared with:

$\ket{T}^{n}=\prod_{i=1}^{n}T_{i}\prod_{i=1}^{n}H_{i}\ket{0}^{\otimes n}.$ (11)

The first layer of Hadamards, which are Clifford gates and only update the tableau, sets the stabilizer basis to $s_{i}=X_{i},d_{i}=Z_{i}$. In this basis, each $T$-gate on qubit $i$

$T_{i}=\cos(\frac{\pi}{8})I-i\sin(\frac{\pi}{8})Z_{i}=\cos(\frac{\pi}{8})I-i\sin(\frac{\pi}{8})d_{i},$ (12)

fulfills the criteria for a free operation, so the resulting state is represented by a trivial MPS with $\chi=1$. Notice that, in this case, the pseudo-stabilizer rank is maximal, $\tilde{\xi}=2^{n}$. With the conventional generalization of tableaus, we would need $2^{n}$ copies, as each $T$-gate doubles the number of necessary tableaus. This is not the best that can be achieved: the stabilizer rank of $\ket{T}^{n}$ for small $n$ has been shown to be low [38], meaning an optimal decomposition requires fewer tableaus (and also $\xi\ll\tilde{\xi}$). However, a general method to find these decompositions is not known. Most importantly, the growth of $\xi$ with $n$ is expected to be exponential [39] unless quantum computing is completely simulatable (even though a superlinear lower bound has not been found [33]), whereas stabilizer tensor networks can represent these states efficiently for any $n$. On the other hand, some stabilizer states have been shown to have maximum bipartite entanglement [15], as mentioned earlier. These states can be prepared with a Clifford circuit, so in a simulation with stabilizer tensor networks they will be an element of the basis $\mathcal{B}(\mathcal{S},\mathcal{D})$, and therefore trivial to represent with $\xi=\tilde{\xi}=1$, despite being expensive with a regular MPS. In addition to these examples, there likely exists a different resource $R$ that captures the power of the approach, defining whether a state can be efficiently represented or not with a single metric, as illustrated in Fig. 1a. This resource $R$ must be related to the two discussed resources, in the sense that either low entanglement or low stabilizer rank implies low $R$, because we have seen that we can simulate either case. However, since stabilizer states can be very entangled, and separable states can have high non-stabilizerness, it follows that high entanglement does not imply high $R$, nor does high stabilizer rank imply high $R$, indicating that it isn’t trivially connected to these resources. Beyond the cases of complete stabilizerness or no entanglement, the efficiency should persist when these resources are present in a low amount for the formalism to be useful. Our discussion so far highlights two advantages in that regard. First, notice that we can always process Clifford gates directly on $\ket{\nu}$ instead of changing the basis so that the TN behaves traditionally. This means any state simulatable with tensor networks is also feasible in our approach.
Additionally, we have seen that Clifford gates don’t change $\ket{\nu}$, independently of its $\chi$, so we can always move freely in the space of states with fixed stabilizer rank.

Figure 2: Average and maximum increase of entanglement after applying a single T-gate on a random Clifford tableau, measured with $\log(\chi^{\prime})$ where $\chi^{\prime}$ is the maximum bond dimension of the MPS in the stabilizer TN. The average is done over $\sim n^{2}$ uniformly sampled Clifford circuits. In the inset, the distribution of $\log(\chi^{\prime})$ for $n=40$.

Nonetheless, a gate that entangles $\ket{\psi}$ by a certain amount could in principle become a more entangling gate on $\ket{\nu}$; alternatively, a gate that does not increase stabilizer rank by much could also add a lot of entanglement to $\ket{\nu}$. To show this is not the case, it suffices to look at single-qubit rotations $\mathcal{R}$ for both cases, due to the decomposition we employ. When $\theta$ is not a multiple of $\pi/4$, this rotation is also a good example of an operation that only increases stabilizer rank slightly (Eq. 12). We can bound the growth of $\chi$ in our MPS after applying $\mathcal{R}$ by using Eq. 6 (which is equivalent to a CNOT cascade [40]). The worst-case scenario for an MPS is $\chi^{\prime}=2^{4}\chi$, as proven in Annex B, although our simulations (Fig. 2) show that, on average, it is only $\sim 2^{2.46}$, and that it does not grow as $n\rightarrow\infty$. For other TN structures with more connectivity, this bound can decrease. Regardless, since it is bounded, we can ensure efficient simulation with a small number of non-Clifford single-qubit rotations.

Conclusions and Outlook: We have presented a new approach to circuit simulation, unifying two different frameworks, each with its characterizing resource. We have also shown in which instances it offers an advantage, and identified its free operations for the characterization of a resource theoretic description. In addition, developing the formalism has identified several interesting research directions. First, one could decide to not always use a Clifford gate to update the stabilizer basis of the TN, and apply it directly to $\ket{\nu}$ instead. The criteria used for this decision would directly affect the growth of the resource in the simulation. Also, the storage and retrieval of the potential entanglement in $\mathcal{B}(\mathcal{S},\mathcal{D})$ is possible with entangling Clifford gates, but changing the basis also changes $\ket{\nu}$. This means that, during a simulation, we cannot trivially decrease the cost of the next non-stabilizer gate without potentially increasing $\chi$ of the current TN. A thorough study of how to optimally allocate this resource to decrease the cost of the simulation is left for future work. Bounding $\chi$ of the MPS and checking the accuracy of results on states with different amounts of entanglement, magic, or other resources is a strong candidate to characterize the resource $R$, which relates non-trivially to both entanglement and stabilizer rank. In general, being able to relate $\chi$ to a magnitude other than bipartite entanglement also opens up the field of tensor networks to the use of other resources.
The evident locality of $\chi$ is an obstacle for which resources can be used in this way, making a link between magic and $\chi$ of our vector $\ket{\nu}$ an especially interesting objective. We can also use stabilizer TNs as a practical tool to bound the simulation hardness of a circuit based on the number of T-gates it contains, a question that is usually restricted to theoretical approaches [41]. The method described in this article is available in a Python implementation [42] (https://github.com/bsc-quantic/stabilizer-TN) that can simulate any circuit. Thus, one can look into improving the efficiency of the implementation. An obvious candidate is integration with the handling of tableaus as done by STIM [43], the most performant Python approach to stabilizer simulation.

Acknowledgements: We want to thank Ema Puljak, Berta Casas and Axel Pérez-Obiol for their comments on the manuscript, together with all members of BSC’s Quantic group for their suggestions and support.

## References

* Vidal [2008] G. Vidal, Class of Quantum Many-Body States That Can Be Efficiently Simulated, Physical Review Letters 101, 110501 (2008). * Frérot _et al._ [2023] I. Frérot, M. Fadel, and M. Lewenstein, Probing quantum correlations in many-body systems: A review of scalable methods, Reports on Progress in Physics 86, 114001 (2023). * Shang _et al._ [2022] H. Shang, L. Shen, Y. Fan, Z. Xu, C. Guo, J. Liu, W. Zhou, H. Ma, R. Lin, Y. Yang, F. Li, Z. Wang, Y. Zhang, and Z. Li, Large-scale simulation of quantum computational chemistry on a new sunway supercomputer, in _Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis_ , SC ’22 (IEEE Press, 2022) pp. 1–14. * Dalton _et al._ [2024] K. Dalton, C. K. Long, Y. S. Yordanov, C. G. Smith, C. H. W. Barnes, N. Mertig, and D. R. M. Arvidsson-Shukur, Quantifying the effect of gate errors on variational quantum eigensolvers for quantum chemistry, npj Quantum Information 10, 1 (2024). * Dalzell _et al._ [2023] A. M. Dalzell, S. McArdle, M. Berta, P. Bienias, C.-F. Chen, A. Gilyén, C. T. Hann, M. J. Kastoryano, E. T. Khabiboulline, A. Kubica, G. Salton, S. Wang, and F. G. S. L. Brandão, Quantum algorithms: A survey of applications and end-to-end complexities (2023), 2310.03011 . * Liu _et al._ [2022] Y. Liu, Y. Chen, C. Guo, J. Song, X. Shi, L. Gan, W. Wu, W. Wu, H. Fu, X. Liu, D. Chen, G. Yang, and J. Gao, Validating quantum-supremacy experiments with exact and fast tensor network contraction (2022), 2212.04749 . * Tindall _et al._ [2024] J. Tindall, M. Fishman, E. M. Stoudenmire, and D. Sels, Efficient tensor network simulation of IBM’s Eagle kicked Ising experiment, PRX Quantum 5, 010308 (2024), 2306.14887 . * Begušić _et al._ [2024] T. Begušić, J. Gray, and G. K.-L. Chan, Fast and converged classical simulations of evidence for the utility of quantum computing before fault tolerance, Science Advances 10, eadk4321 (2024), 2308.05077 . * Arute _et al._ [2019] F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, F. G. S. L. Brandao, D. A. Buell, B. Burkett, Y. Chen, Z. Chen, B. Chiaro, R. Collins, W. Courtney, A. Dunsworth, E. Farhi, B. Foxen, A. Fowler, C. Gidney, M. Giustina, R. Graff, K. Guerin, S. Habegger, M. P. Harrigan, M. J. Hartmann, A. Ho, M. Hoffmann, T. Huang, T. S. Humble, S. V. Isakov, E. Jeffrey, Z. Jiang, D. Kafri, K. Kechedzhi, J. Kelly, P. V. Klimov, S. Knysh, A. Korotkov, F. Kostritsa, D. Landhuis, M. Lindmark, E. Lucero, D. Lyakh, S. Mandrà, J. R. McClean, M.
McEwen, A. Megrant, X. Mi, K. Michielsen, M. Mohseni, J. Mutus, O. Naaman, M. Neeley, C. Neill, M. Y. Niu, E. Ostby, A. Petukhov, J. C. Platt, C. Quintana, E. G. Rieffel, P. Roushan, N. C. Rubin, D. Sank, K. J. Satzinger, V. Smelyanskiy, K. J. Sung, M. D. Trevithick, A. Vainsencher, B. Villalonga, T. White, Z. J. Yao, P. Yeh, A. Zalcman, H. Neven, and J. M. Martinis, Quantum supremacy using a programmable superconducting processor, Nature 574, 505 (2019). * Kim _et al._ [2023] Y. Kim, A. Eddins, S. Anand, K. Wei, E. Berg, S. Rosenblatt, H. Nayfeh, Y. Wu, M. Zaletel, K. Temme, and A. Kandala, Evidence for the utility of quantum computing before fault tolerance, Nature 618, 500 (2023). * Chitambar and Gour [2019] E. Chitambar and G. Gour, Quantum resource theories, Rev. Mod. Phys. 91, 025001 (2019). * Christandl _et al._ [2023] M. Christandl, V. Lysikov, V. Steffan, A. H. Werner, and F. Witteveen, The resource theory of tensor networks (2023), 2307.07394 . * Bravyi _et al._ [2019] S. Bravyi, D. Browne, P. Calpin, E. Campbell, D. Gosset, and M. Howard, Simulation of quantum circuits by low-rank stabilizer decompositions, Quantum 3, 181 (2019). * Veitch _et al._ [2014] V. Veitch, S. A. Hamed Mousavian, D. Gottesman, and J. Emerson, The resource theory of stabilizer quantum computation, New Journal of Physics 16, 013009 (2014). * Smith and Leung [2006] G. Smith and D. Leung, Typical entanglement of stabilizer states, Physical Review A 74, 062314 (2006), quant-ph/0510232 . * Haug and Piroli [2022] T. Haug and L. Piroli, Quantifying Nonstabilizerness of Matrix Product States (2022), 2207.13076 . * Tarabunga _et al._ [2024] P. S. Tarabunga, E. Tirrito, M. C. Banuls, and M. Dalmonte, Nonstabilizerness via matrix product states in the Pauli basis (2024), 2401.16498 . * Lami and Collura [2024] G. Lami and M. Collura, Learning the stabilizer group of a Matrix Product State (2024), 2401.16481 . * Yoder [2012] T. J. Yoder, A generalization of the stabilizer formalism for simulating arbitrary quantum circuits (2012). * Peng _et al._ [2020] T. Peng, A. Harrow, M. Ozols, and X. Wu, Simulating Large Quantum Circuits on a Small Quantum Computer, Physical Review Letters 125, 150504 (2020), 1904.00102 . * Eddins _et al._ [2022] A. Eddins, M. Motta, T. P. Gujarati, S. Bravyi, A. Mezzacapo, C. Hadfield, and S. Sheldon, Doubling the Size of Quantum Simulators by Entanglement Forging, PRX Quantum 3, 010309 (2022). * Orus [2014] R. Orus, A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States, Annals of Physics 349, 117 (2014), 1306.2164 . * Cirac _et al._ [2021] I. Cirac, D. Perez-Garcia, N. Schuch, and F. Verstraete, Matrix Product States and Projected Entangled Pair States: Concepts, Symmetries, and Theorems, Reviews of Modern Physics 93, 045003 (2021), 2011.12127 . * Vidal [2007] G. Vidal, Entanglement Renormalization, Physical Review Letters 99, 220405 (2007). * Evenbly and Vidal [2011] G. Evenbly and G. Vidal, Tensor Network States and Geometry, Journal of Statistical Physics 145, 891 (2011). * Bennett _et al._ [1999] C. H. Bennett, D. P. DiVincenzo, C. A. Fuchs, T. Mor, E. Rains, P. W. Shor, J. A. Smolin, and W. K. Wootters, Quantum Nonlocality without Entanglement, Physical Review A 59, 1070 (1999), quant-ph/9804053 . * Chitambar _et al._ [2014] E. Chitambar, D. Leung, L. Mančinska, M. Ozols, and A. Winter, Everything You Always Wanted to Know About LOCC (But Were Afraid to Ask), Communications in Mathematical Physics 328, 303 (2014). 
* Eisert _et al._ [2010] J. Eisert, M. Cramer, and M. B. Plenio, Colloquium: Area laws for the entanglement entropy, Reviews of Modern Physics 82, 277 (2010). * McCulloch [2008] I. P. McCulloch, Infinite size density matrix renormalization group, revisited (2008), 0804.2509 . * Nielsen and Chuang [2010] M. A. Nielsen and I. L. Chuang, Introduction to quantum mechanics, in _Quantum Computation and Quantum Information: 10th Anniversary Edition_ (Cambridge University Press, 2010) pp. 60–119. * Affleck _et al._ [1987] I. Affleck, T. Kennedy, E. H. Lieb, and H. Tasaki, Rigorous results on valence-bond ground states in antiferromagnets, Physical Review Letters 59, 799 (1987). * Gottesman [1997] D. Gottesman, Stabilizer Codes and Quantum Error Correction (1997), quant-ph/9705052 . * Peleg _et al._ [2022] S. Peleg, A. Shpilka, and B. L. Volk, Lower Bounds on Stabilizer Rank, Quantum 6, 652 (2022), 2106.03214 . * Haug and Piroli [2023] T. Haug and L. Piroli, Stabilizer entropies and nonstabilizerness monotones (2023), 2303.10152 . * Leone _et al._ [2022] L. Leone, S. F. E. Oliviero, and A. Hamma, Stabilizer Rényi Entropy, Physical Review Letters 128, 050402 (2022). * Mari and Eisert [2012] A. Mari and J. Eisert, Positive Wigner functions render classical simulation of quantum computation efficient, Physical Review Letters 109, 230503 (2012), 1208.3660 . * Aaronson and Gottesman [2004] S. Aaronson and D. Gottesman, Improved Simulation of Stabilizer Circuits, Physical Review A 70, 052328 (2004), quant-ph/0406196 . * Qassim _et al._ [2021] H. Qassim, H. Pashayan, and D. Gosset, Improved upper bounds on the stabilizer rank of magic states, Quantum 5, 606 (2021), 2106.07740 . * Bravyi _et al._ [2016] S. Bravyi, G. Smith, and J. A. Smolin, Trading Classical and Quantum Computational Resources, Physical Review X 6, 021043 (2016). * Mansky _et al._ [2023] M. B. Mansky, V. R. Puigvert, S. L. Castillo, and C. Linnhoff-Popien, Decomposition Algorithm of an Arbitrary Pauli Exponential through a Quantum Circuit (2023), 2305.04807 . * Ross and Selinger [2016] N. J. Ross and P. Selinger, Optimal ancilla-free Clifford+T approximation of z-rotations (2016), 1403.2975 . * Note [1] https://github.com/bsc-quantic/stabilizer-TN. * Gidney [2021] C. Gidney, Stim: A fast stabilizer circuit simulator, Quantum 5, 497 (2021), 2103.02202 . * Note [2] Hadamard multiplication is an element wise multiplication of two tensors $a$ and $b$ of the same shape, such that the entries of the result $c$ follow: $c_{i_{1}\dots i_{n}}=(a\circ_{\textit{h}}b)_{i_{1}\dots i_{n}}=a_{i_{1}\dots i_{n}}\cdot b_{i_{1}\dots i_{n}}$. * Zanardi _et al._ [2000] P. Zanardi, C. Zalka, and L. Faoro, Entangling power of quantum evolutions, Physical Review A 62, 030301 (2000). * Guan _et al._ [2014] Z. Guan, H. He, Y.-J. Han, C.-F. Li, F. Galve, and G.-C. Guo, Entangling power of two-qubit gates on mixed states, Physical Review A 89, 012324 (2014). * Mariën _et al._ [2016] M. Mariën, K. M. R. Audenaert, K. Van Acoleyen, and F. Verstraete, Entanglement Rates and the Stability of the Area Law for the Entanglement Entropy, Communications in Mathematical Physics 346, 35 (2016). * Eisert [2021] J. Eisert, Entangling Power and Quantum Circuit Complexity, Physical Review Letters 127, 020501 (2021). * Galve _et al._ [2013] F. Galve, F. Plastina, M. G. A. Paris, and R. Zambrini, Discording Power of Quantum Evolutions, Physical Review Letters 110, 010501 (2013). * Nielsen _et al._ [2003] M. A. Nielsen, C. M. Dawson, J. L. Dodd, A. Gilchrist, D. Mortimer, T. J. 
Osborne, M. J. Bremner, A. W. Harrow, and A. Hines, Quantum dynamics as a physical resource, Physical Review A 67, 052301 (2003), quant-ph/0208077 . * Jonnadula _et al._ [2020] B. Jonnadula, P. Mandayam, K. Życzkowski, and A. Lakshminarayan, Entanglement measures of bipartite quantum gates and their thermalization under arbitrary interaction strength, Physical Review Research 2, 043126 (2020). * Balakrishnan and Sankaranarayanan [2011] S. Balakrishnan and R. Sankaranarayanan, Operator-Schmidt decomposition and the geometrical edges of two-qubit gates, Quantum Information Processing 10, 449 (2011).

## Annex

## Appendix A Lemmas and proofs

Here we show and prove the lemmas that generate the update rules in the main text. We also include extra comments that make the notation and intuition behind the formalism clearer.

###### Lemma 1.

For a given stabilizer basis $\mathcal{B}(S,D)$, any state $\ket{\psi}$ in the $n$-qubit Hilbert space $\mathcal{H}^{n}$ can be described as $\ket{\psi}=\sum_{i}\nu_{i}d_{\hat{i}}\ket{\psi_{S}}$, where $\hat{i}=(i_{1}\dots i_{n})$, $\nu_{i}$ are complex coefficients fulfilling $\sum_{i}|\nu_{i}|^{2}=1$, and $d_{\hat{i}}=d_{1}^{i_{1}}\cdot d_{2}^{i_{2}}\cdot\dots\cdot d_{n}^{i_{n}}$ with respect to the destabilizer generators $d_{i}\in D$.

While this property underlies the use of stabilizers in error-correction, and thus can be deduced with their formalism, it can also be seen very concisely entirely within the formalism used in this paper.

Proof: We show that all $d_{\hat{i}}\ket{\psi_{S}}$ are i) normalized and ii) mutually orthogonal, so they form an orthonormal basis, then that iii) the space they generate has the same dimension as the full $n$-qubit Hilbert space.

* i) The basis states are normalized: $\bra{\psi_{S}}d_{\hat{i}}d_{\hat{i}}\ket{\psi_{S}}=\braket{\psi_{S}}{\psi_{S}}=1$, using that $d_{\hat{i}}\in\mathcal{P}^{n}$ implies $(d_{\hat{i}})^{2}=Id$.
* ii) The basis states are orthogonal: If we take two different states $d_{\hat{i}}\ket{\psi_{S}}$, $d_{\hat{j}}\ket{\psi_{S}}$, then there is a destabilizer generator $d_{k}$ from $D$ such that $i_{k}=1,j_{k}=0$ or $i_{k}=0,j_{k}=1$. Taking the first case without loss of generality, the stabilizer generator $s_{k}$ anticommutes with $d_{\hat{i}}$ and commutes with $d_{\hat{j}}$, so that $\bra{\psi_{S}}d_{\hat{i}}d_{\hat{j}}\ket{\psi_{S}}=\bra{\psi_{S}}d_{\hat{i}}d_{\hat{j}}s_{k}\ket{\psi_{S}}=\bra{\psi_{S}}d_{\hat{i}}s_{k}d_{\hat{j}}\ket{\psi_{S}}=-\bra{\psi_{S}}s_{k}d_{\hat{i}}d_{\hat{j}}\ket{\psi_{S}}=-\bra{\psi_{S}}d_{\hat{i}}d_{\hat{j}}\ket{\psi_{S}}$ (13) Therefore $\bra{\psi_{S}}d_{\hat{i}}d_{\hat{j}}\ket{\psi_{S}}=0$.
* iii) The basis generates a space of dimension $2^{n}$: since there are $n$ destabilizer generators $d_{i}$, $\hat{i}=(i_{1}\dots i_{n})$ can take $2^{n}$ different values, so the basis $\\{d_{\hat{i}}\ket{\psi_{S}}\\}_{\hat{i}}$ has that many elements. $\hfill\square$

Other than the basic structure, we need to understand how arbitrary gates modify $\ket{\psi}$ and $\ket{\nu}$. First, we check how different gates are decomposed into the gates of the basis $\mathcal{B}(\mathcal{S},\mathcal{D})$. Then, we find which operations they correspond to within this formalism. We can see from the definition of the stabilizer basis (Eq. 2) that a destabilizer $\delta_{\hat{j}}=d_{1}^{j_{1}}\dots d_{n}^{j_{n}}$ takes us from one element of the basis to another.
$\delta_{\hat{j}}\ket{\psi}=\delta_{\hat{j}}\sum_{i=0}^{2^{n}-1}\nu_{\hat{i}}d_{\hat{i}}\ket{\psi_{\mathcal{S}}}=\sum_{i=0}^{2^{n}-1}\nu_{\hat{i}}d_{\hat{i}+\hat{j}}\ket{\psi_{\mathcal{S}}}=\sum_{i=0}^{2^{n}-1}\nu_{\hat{i}+\hat{j}}d_{\hat{i}}\ket{\psi_{\mathcal{S}}}.$ (14)

On the other hand, the multiplication of a stabilizer $\sigma_{\hat{j}}=s_{1}^{j_{1}}\dots s_{n}^{j_{n}}$ introduces a sign depending on the element of the basis due to anticommutation. Notice that, because $d_{i}$ only anticommutes with $s_{i}$, checking for commutativity is as simple as doing the inner product of their boolean vectors $\hat{d}\cdot\hat{s}$.

$\sigma_{\hat{j}}\ket{\psi}=\sum_{i=0}^{2^{n}-1}\nu_{\hat{i}}\sigma_{\hat{j}}d_{\hat{i}}\ket{\psi_{\mathcal{S}}}=\sum_{i=0}^{2^{n}-1}\nu_{\hat{i}}(-1)^{\hat{i}\cdot\hat{j}}d_{\hat{i}}\sigma_{\hat{j}}\ket{\psi_{\mathcal{S}}}=\sum_{i=0}^{2^{n}-1}(-1)^{\hat{i}\cdot\hat{j}}\nu_{\hat{i}}d_{\hat{i}}\ket{\psi_{\mathcal{S}}}.$ (15)

These are equivalent to $X$ and $Z$ operations, respectively, on the computational basis. Therefore, on $\ket{\nu}$ we have:

$\delta_{\hat{d}}\ket{\psi}=X_{\hat{d}}\ket{\nu}\quad,\quad\sigma_{\hat{s}}\ket{\psi}=Z_{\hat{s}}\ket{\nu}.$ (16)

Since $\mathcal{S}\cup\mathcal{D}$ generates $\mathcal{P}^{n}$, we can decompose any operator as:

$\mathcal{U}=\sum_{i}\phi_{i}\delta_{\hat{d}_{i}}\sigma_{\hat{s}_{i}}.$ (17)

The previous observations tell us how to apply each factor individually, but any decomposition with more than one term is more complicated. Observe that the difference between the update on $\ket{\psi}$ and on $\ket{\nu}$ is only the changing basis, therefore the transformation on $\ket{\nu}$ must also be a unitary operation. This means that our tensor network representation can use the same tools as with circuit simulation, even if the equivalency is not trivial. The following lemma shows us one useful instance.

###### Lemma 2.

For a given stabilizer basis $\mathcal{B}(S,D)$, any unitary that can be decomposed in the form

$\mathcal{U}=\phi_{1}\delta_{\hat{d}_{1}}\sigma_{\hat{s}_{1}}+\phi_{2}\delta_{\hat{d}_{2}}\sigma_{\hat{s}_{2}},$ (18)

is equivalent, in the stabilizer tensor network formalism, to a change of basis with Clifford gates $\delta_{\hat{d}_{1}}\sigma_{\hat{s}_{1}}$ followed by a single multi-qubit rotation over the X, Y and Z axes on $\ket{\nu}$:

$\mathcal{R}_{X_{I_{x}}Y_{I_{y}}Z_{I_{z}}}(2\theta)=\cos(\theta)I-i\sin(\theta)X_{I_{x}}Y_{I_{y}}Z_{I_{z}},$ (19)

with $\theta=\arccos{(\mathrm{Re}(\phi_{1}))}$. Using $\circ_{h}$ for Hadamard multiplication (an element-wise multiplication of two tensors $a$ and $b$ of the same shape, such that the entries of the result $c$ follow $c_{i_{1}\dots i_{n}}=(a\circ_{h}b)_{i_{1}\dots i_{n}}=a_{i_{1}\dots i_{n}}\cdot b_{i_{1}\dots i_{n}}$), the chosen axes $I_{x}$, $I_{y}$ and $I_{z}$ relate to $\delta_{\hat{d}_{1}},\sigma_{\hat{s}_{1}},\delta_{\hat{d}_{2}},\sigma_{\hat{s}_{2}}$ as follows:

$I_{y}=(\hat{d}_{1}+\hat{d}_{2})\circ_{h}(\hat{s}_{1}+\hat{s}_{2})\quad,\quad I_{x}=(\hat{d}_{1}+\hat{d}_{2})+I_{y}\quad,\quad I_{z}=(\hat{s}_{1}+\hat{s}_{2})+I_{y}$ (20)

Proof: To make notation a bit easier to read, we refer to the operators $\delta_{\hat{d}_{i}}$, $\sigma_{\hat{s}_{j}}$ as $\delta_{i}$, $\sigma_{j}$, and use $\delta_{i}\cdot\sigma_{j}$ to mean $\hat{d}_{i}\cdot\hat{s}_{j}$, except where there might be ambiguity. This helps us keep track of the equations without remembering which operator carries which index.
Remember that $\delta\cdot\sigma=1$ when the operators anticommute and $0$ when they commute, and also that any (de)stabilizer is Hermitian. It can be checked that unitarity implies the following conditions: $\begin{split}\mathcal{U}^{\dagger}\mathcal{U}=I\iff&(\phi_{1}^{*}\sigma_{1}^{\dagger}\delta_{1}^{\dagger}+\phi_{2}^{*}\sigma_{2}^{\dagger}\delta_{2}^{\dagger})(\phi_{1}\delta_{1}\sigma_{1}+\phi_{2}\delta_{2}\sigma_{2})=\phi_{1}^{*}\phi_{1}I+\phi_{2}^{*}\phi_{2}I+\phi_{2}^{*}\phi_{1}\sigma_{2}\delta_{2}\delta_{1}\sigma_{1}+\phi_{1}^{*}\phi_{2}\sigma_{1}\delta_{1}\delta_{2}\sigma_{2}=\\\ &=(\phi_{1}^{*}\phi_{1}+\phi_{2}^{*}\phi_{2})I+(\phi_{2}^{*}\phi_{1}(-1)^{(\sigma_{2}\cdot\delta_{2}+\delta_{1}\cdot\sigma_{2}+\delta_{2}\cdot\sigma_{1}+\sigma_{1}\cdot\delta_{1})}+\phi_{1}^{*}\phi_{2})\sigma_{1}\delta_{1}\delta_{2}\sigma_{2}=I\\\ \iff&\begin{cases}\phi_{1}\phi_{1}^{*}+\phi_{2}\phi_{2}^{*}=1\\\ \phi_{2}^{*}\phi_{1}(-1)^{(\delta_{1}+\delta_{2})\cdot(\sigma_{1}+\sigma_{2})}=-\phi_{1}^{*}\phi_{2}\end{cases}.\end{split}$ (21) The first condition tells us that we can rewrite the coefficients with trigonometric functions. We can also factorize the complex phase like $\phi_{1}=e^{i\varphi_{1}}\cos(\theta)$ and $\phi_{2}=e^{i\varphi_{1}}e^{i\omega}\sin(\theta)$, and ignore the term $e^{i\varphi_{1}}$ as a global phase. Substituting this in the second condition: $\begin{split}e^{-i\omega}\sin(\theta)\cos(\theta)(-1)^{(\delta_{1}+\delta_{2})\cdot(\sigma_{1}+\sigma_{2})}&=-\cos(\theta)e^{i\omega}\sin(\theta)\rightarrow\\\ \rightarrow e^{2i\omega}=(-1)^{(\delta_{1}+\delta_{2})\cdot(\sigma_{1}+\sigma_{2})+1}\rightarrow e^{i\omega}&=\pm(-i)^{(\delta_{1}+\delta_{2})\cdot(\sigma_{1}+\sigma_{2})+1}.\end{split}$ (22) We can also rewrite the unitary as: $\mathcal{U}=(\cos(\theta)I+e^{i\omega}\sin(\theta)\delta_{2}\sigma_{2}\>\delta_{1}\sigma_{1})\>\delta_{1}\sigma_{1}=(\cos(\theta)I+e^{i\omega}\sin(\theta)(-1)^{\delta_{1}\cdot\sigma_{2}}\delta_{2}\delta_{1}\>\sigma_{2}\sigma_{1})\>\delta_{1}\sigma_{1}.$ (23) As we have seen in Eqs. 14, 15, we treat a (de)stabilizer operator as a set of Z (X) gates. Therefore, the Pauli operator $\delta_{1}\sigma_{1}$ to the right is strictly a Clifford update (in fact, with only X, Y and Z gates) that we can apply first. It remains to check that the remaining factor behaves like a rotation. $\tilde{\mathcal{U}}=\cos(\theta)I+e^{i\omega}\sin(\theta)(-1)^{\delta_{1}\cdot\sigma_{2}}\delta_{2}\delta_{1}\>\sigma_{2}\sigma_{1}$ (24) Similarly, applying $\sigma_{\hat{s}_{i}}$ ($\delta_{\hat{d}_{j}}$) is equivalent to the transformation $Z_{\hat{s}_{i}}$ ($X_{\hat{d}_{j}}$), and doing two such transformations consecutively is as simple as adding the boolean vectors: $Z_{\hat{s}_{j}}Z_{\hat{s}_{i}}=Z_{\hat{s}_{j}+\hat{s}_{i}}$. With $I_{x}$, $I_{y}$ and $I_{z}$ defined as above, one can check that $\begin{split}I_{x}+I_{y}=\hat{\delta}_{1}+\hat{\delta}_{2}&\rightarrow X_{I_{x}+I_{y}}=\delta_{2}\delta_{1}\\\ I_{y}+I_{z}=\hat{\sigma}_{1}+\hat{\sigma}_{2}&\rightarrow Z_{I_{y}+I_{z}}=\sigma_{2}\sigma_{1}\\\ I_{y}=(\hat{\delta}_{1}+\hat{\delta}_{2})\circ_{\textit{h}}(\hat{\sigma}_{1}+\hat{\sigma}_{2})&\rightarrow|I_{y}|=\sum_{a\in I_{y}}a=(\hat{\delta}_{1}+\hat{\delta}_{2})\cdot(\hat{\sigma}_{1}+\hat{\sigma}_{2})\end{split},$ (25) which is almost the form in Eq. 19.
Notice that $I_{x}$ has the unique 1s of $\hat{d}_{1}+\hat{d}_{2}$, with $0$s elsewhere, $I_{z}$ those of $\hat{s}_{1}+\hat{s}_{2}$, whereas $I_{y}$ has the common $1$s between $\hat{d}_{1}+\hat{d}_{2}$ and $\hat{s}_{1}+\hat{s}_{2}$, with the Hadamard product (element-wise multiplication) enabling the closed form description of $I_{y}$. Since $X\cdot Z=-iY$, we can rewrite Eq. 19 as

$\mathcal{R}_{X_{I_{x}}Y_{I_{y}}Z_{I_{z}}}(2\theta)=\cos(\theta)I+\sin(\theta)(-i)^{|I_{y}|+1}X_{I_{x}+I_{y}}Z_{I_{y}+I_{z}}.$ (26)

Putting Eq. 25 and Eq. 22 together means that the sine term has the correct phase to be a unitary, so the proposed transformation is indeed a rotation and its coefficients relate to the original unitary as stated. Since the sign can always be changed with the angle of rotation, we are done. $\hfill\square$

The values $v_{i}$ of the vectors $I_{x}$ ($I_{y}$, $I_{z}$) indicate that we rotate qubit $i$ over $X$ ($Y$, $Z$) if $v_{i}=1$, and we do nothing if $v_{i}=0$. Notice that this gate can be implemented with a cascade of CNOT gates on the affected qubits and a single-qubit rotation, plus the appropriate basis changes from $X$ to $Y,Z$ [40]. In particular, we implement the rotation on the innermost affected qubit and add CNOT cascades to each side. An example can be seen in Fig. 4. Lemma 2 is very useful because the basic $R_{X}$, $R_{Y}$ and $R_{Z}$ gates have this form, so we know how to update with any non-Clifford single-qubit gate. With the $\ket{\nu}$ notation, Lemma 2 can be summarized with:

$\mathcal{U}\ket{\psi}=(\phi_{1}\delta_{1}\sigma_{1}+\phi_{2}\delta_{2}\sigma_{2})\ket{\psi}=\mathcal{R}_{X_{I_{x}}Y_{I_{y}}Z_{I_{z}}}(2\theta)\ket{\nu}$ (27)

In the algorithm, we have to check the value of $\delta_{1}\cdot\sigma_{2}$ to get the sign of the transformation angle right. We also have a corollary that tells us what the free operations are, which is needed to identify the resource:

###### Corollary 2.1.

In the context of stabilizer simulation as a resource theory, the free operations depend on the basis $\mathcal{B}(S,D)$ and correspond to $\mathcal{U}=\cos(\theta/2)\alpha\delta\sigma+i\sin(\theta/2)d_{i}s_{i}\alpha\delta\sigma$, where $d_{i}\in D$, $s_{i}\in S$ are generators and $P=\alpha\delta\sigma$ is the decomposition of a Pauli matrix $P$ into $\mathcal{B}(S,D)$.

Proof: Since it fits the conditions of Lemma 2, we apply first the Pauli operator $P$, which consists only of Clifford gates. Then, since $d_{i}s_{i}\ket{\psi_{S}}=d_{i}\ket{\psi_{S}}$ is an element of the basis, using Eq. 20, we see that $\cos(\theta/2)+i\sin(\theta/2)d_{i}s_{i}$ is equivalent to a single-qubit rotation $R_{X}(-\theta)$ on $\ket{\nu}$, which does not increase its bond dimension. $\hfill\square$

For an observable $\mathcal{O}=\alpha\delta_{\hat{n}}\sigma_{\hat{m}}$ we have:

$\begin{split}\braket{\psi}{\mathcal{O}}{\psi}=\bra{\psi_{\mathcal{S}}}\sum_{j}\nu_{\hat{j}}^{*}d_{\hat{j}}^{*}\>\alpha\delta_{\hat{n}}\sigma_{\hat{m}}\sum_{i}\nu_{\hat{i}}d_{\hat{i}}\ket{\psi_{\mathcal{S}}}&=\sum_{i,j}\alpha\nu_{\hat{j}}^{*}\nu_{\hat{i}}(-1)^{{\hat{m}}\cdot{\hat{i}}}\braket{\psi_{\mathcal{S}}}{d_{\hat{j}}d_{\hat{n}}d_{\hat{i}}}{\psi_{\mathcal{S}}}=\\\ =\sum_{i,j}\alpha\nu_{\hat{j}}^{*}\nu_{\hat{i}}(-1)^{{\hat{m}}\cdot{\hat{i}}}\delta^{\prime}_{\hat{j},\hat{i}+\hat{n}}&=\alpha\sum_{i}(-1)^{{\hat{m}}\cdot{\hat{i}}}\nu^{*}_{\hat{i}+\hat{n}}\nu_{\hat{i}},\end{split}$ (28)

where $\delta^{\prime}$ is a Kronecker delta.
This is much simpler on $\ket{\nu}$: $\braket{\psi}{\mathcal{O}}{\psi}=\alpha\braket{\nu}{X_{\hat{n}}Z_{\hat{m}}}{\nu}.$ (29) Notice that the $\alpha$ phase comes from forcing a specific decomposition on $\mathcal{B}(\mathcal{S},\mathcal{D})$, which might mean we have to write $XZ=-iY$ instead of $Y$ directly; it does not mean we allow non-physical observables, that is, observables with non-real expected values. A measurement of this observable $\mathcal{O}$ projects the state $\ket{\psi}$: $\ket{\psi}\rightarrow\frac{I\pm\mathcal{O}}{2}\ket{\psi}.$ (30) The sign is $+$ ($-$) when projecting to the positive $\ket{\psi_{+}}$ (negative $\ket{\psi_{-}}$) eigenstate. We can calculate $\braket{\mathcal{O}}$ with Eq. 29 and randomly decide the output with probability $p=\frac{1+\braket{\mathcal{O}}}{2}$ for $\ket{\psi_{+}}$ and $1-p=\frac{1-\braket{\mathcal{O}}}{2}$ for $\ket{\psi_{-}}$. Then the following lemma shows how to update the coefficients. ###### Lemma 3. For a given stabilizer basis $\mathcal{B}(S,D)$ and an observable $\mathcal{O}$ that decomposes as $\mathcal{O}=\alpha\delta_{\hat{n}}\sigma_{\hat{m}},$ (31) the projection $\frac{I\pm\mathcal{O}}{2}$ onto the positive (negative) eigenstate is equivalent to the following non-unitary operation on $\ket{\nu}$: $P_{k}\cdot\tilde{\mathcal{R}}_{X_{I_{x}}Y_{I_{y}}Z_{I_{z}}}=P_{k}\cdot\left(\frac{1}{\sqrt{2}}I\pm\frac{\alpha(-i)^{|I_{y}|}}{\sqrt{2}}X_{I_{x}}Y_{I_{y}}Z_{I_{z}}\right),$ (32) where $k$ is the position of the first $1$ in $\hat{n}$, $P_{k}$ is the projector $\ket{0}\bra{0}$ on qubit $k$, and the choice of rotation axes is given by $\delta_{\hat{n}},\sigma_{\hat{m}}$ as $I_{y}=\hat{n}\circ_{h}\hat{m}\quad,\quad I_{x}=\hat{n}+I_{y}\quad,\quad I_{z}=\hat{m}+I_{y}$ (33) The resulting state $\ket{\psi^{\prime}}$ is a valid quantum state when renormalized as $\sqrt{\frac{2}{1\pm\braket{\psi}{\mathcal{O}}{\psi}}}\ket{\psi^{\prime}}$. Proof: We can expand the projection $\frac{I\pm\mathcal{O}}{2}\ket{\psi}$ as: $\frac{I\pm\mathcal{O}}{2}\ket{\psi}=\frac{1}{2}\sum_{\hat{i}}(I\pm\alpha\delta_{\hat{n}}\sigma_{\hat{m}})\nu_{\hat{i}}d_{\hat{i}}\ket{\psi_{\mathcal{S}}}=\frac{1}{2}\sum_{\hat{i}}(\nu_{\hat{i}}\pm\alpha(-1)^{\hat{i}\cdot\hat{m}}\nu_{\hat{i}+\hat{n}})d_{\hat{i}}\ket{\psi_{\mathcal{S}}}.$ (34) Then we must consider the update to the stabilizer basis. When $\hat{n}=0$, we are projecting onto a stabilizer of $\ket{\psi_{\mathcal{S}}}$ and there is no update to $\mathcal{B}(\mathcal{S},\mathcal{D})$. In this case we are directly left with: $\frac{I\pm\mathcal{O}}{2}\ket{\psi}=\frac{1}{2}\sum_{\hat{i}}\nu_{\hat{i}}(1\pm\alpha(-1)^{\hat{i}\cdot\hat{m}})d_{\hat{i}}\ket{\psi_{\mathcal{S}}}\quad\text{if }\hat{n}=0.$ (35) In the case $\hat{n}\neq 0$, it was shown in [19] how to update the basis in terms similar to our Eq. 34. Adapting those results to our notation for $\ket{\nu}$, we get the new basis $\mathcal{B^{\prime}}(\mathcal{S}^{\prime},\mathcal{D}^{\prime})$ and: $\frac{I\pm\mathcal{O}}{2}\ket{\psi}=\sum_{\hat{i}}\left[\frac{1}{2}(\pm\alpha(-1)^{\hat{i}\cdot\hat{m}})^{\hat{i}_{k}}\nu_{\hat{i}}\right]\>d_{\hat{i}+\hat{i}_{k}\cdot\hat{n}}^{\prime}\ket{\psi_{\mathcal{S}^{\prime}}}\quad\text{if }\hat{n}\neq 0,$ (36) where $k$ is the position of the first $1$ in $\hat{n}$ and $\hat{i}_{k}$ the $k$-th element of $\hat{i}$.
Notice that $\hat{i}_{k}=0$ implies $(\hat{i}+\hat{n})_{k}=1$, that is, $d_{\hat{i}+\hat{i}_{k}\cdot\hat{n}}=d_{\hat{i}}$ and $d_{(\hat{i}+\hat{n})+(\hat{i}+\hat{n})_{k}\cdot\hat{n}}=d_{(\hat{i}+\hat{n})+\hat{n}}$, so the coefficient $\frac{1}{2}(\alpha(-1)^{\hat{i}\cdot\hat{m}})^{\hat{i}_{k}}\nu_{\hat{i}}$ stays in $d_{\hat{i}}\ket{\psi_{\mathcal{S}}}$ and $\frac{1}{2}(\alpha(-1)^{(\hat{i}+\hat{n})\cdot\hat{m}})^{(\hat{i}+\hat{n})_{k}}\nu_{\hat{i}+\hat{n}}$ moves to $d_{(\hat{i}+\hat{n})+\hat{n}}\ket{\psi_{\mathcal{S}}}=d_{\hat{i}}\ket{\psi_{\mathcal{S}}}$, leaving $\nu_{\hat{i}+\hat{n}}$ empty; for $\hat{i}_{k}=1$ both coefficients concentrate on $\nu_{\hat{i}+\hat{n}}$ and leave $\nu_{\hat{i}}$ empty instead: the measurement halves the non-zero coefficients whenever $\hat{n}\neq 0$. This means we can rewrite the above as: $\frac{I\pm\mathcal{O}}{2}\ket{\psi}=\sum_{\hat{i}}\delta_{\hat{i}_{k},0}\left(\frac{1}{\sqrt{2}}\nu_{\hat{i}}\pm\frac{\alpha(-1)^{\hat{i}\cdot\hat{m}}}{\sqrt{2}}\nu_{\hat{i}+\hat{n}}\right)d_{\hat{i}}\ket{\psi_{\mathcal{S}}}\quad\text{if }\hat{n}\neq 0,$ (37) where the Kronecker delta filters the non-zero coefficients. Defining $\hat{i}_{k}\equiv 0$ when $\hat{n}=0$, we see the only difference between Eq. 35 and Eq. 37 is a factor of $\sqrt{2}$. Since we have to normalize at the end anyway, we can rejoin both cases and proceed with Eq. 37. In terms of $\ket{\nu}$, we can prepare the superposition on all states, which looks almost like a rotation: $\tilde{\mathcal{R}}\ket{\nu}=\tilde{\mathcal{R}}\sum_{\hat{i}}\nu_{\hat{i}}=\sum_{\hat{i}}\left(\frac{1}{\sqrt{2}}\nu_{\hat{i}}\pm\frac{\alpha(-1)^{\hat{i}\cdot\hat{m}}}{\sqrt{2}}\>\nu_{\hat{i}+\hat{n}}\right),$ (38) and remove the duplicate coefficients afterwards with a projection on the $\ket{0}$ state of qubit $k$: $P_{k}=I_{0}\otimes\dots I_{k-1}\otimes\begin{pmatrix}1&0\\\ 0&0\end{pmatrix}\otimes I_{k+1}\dots\otimes I_{n}\equiv\ket{0}\bra{0}_{k},$ (39) so that $\frac{I\pm\mathcal{O}}{2}\ket{\nu}=P_{k}\>\tilde{\mathcal{R}}\ket{\nu}$. This transformation is similar to the rotation in Lemma 2, but without the $i$ phase. We can reuse the reasoning there to find: $\tilde{\mathcal{R}}_{X_{I_{x}}Y_{I_{y}}Z_{I_{z}}}=\frac{1}{\sqrt{2}}I\pm\frac{\alpha(-i)^{|I_{y}|}}{\sqrt{2}}X_{I_{x}}Y_{I_{y}}Z_{I_{z}},$ (40) where $I_{x},I_{y},I_{z}$ are related to $\delta_{\hat{n}},\sigma_{\hat{m}}$ exactly as in Eq. 33. Now, $I_{x}$ has the unique 1s of $\hat{n}$, with $0$s elsewhere, $I_{z}$ those of $\hat{m}$, whereas $I_{y}$ has the common $1$s between $\hat{n}$ and $\hat{m}$. Although it is not a unitary operation, it is equivalent to a projection, so the output is not normalized but is otherwise a valid state. To find the normalization term, we reuse Eq. 28: $\braket{\psi}{\mathcal{O}}{\psi}=\alpha\sum_{\hat{i}}(-1)^{\hat{i}\cdot\hat{m}}\nu^{*}_{\hat{i}+\hat{n}}\nu_{\hat{i}}=\alpha^{*}\sum_{\hat{i}}(-1)^{\hat{i}\cdot\hat{m}}\nu^{*}_{\hat{i}}\nu_{\hat{i}+\hat{n}}.$ (41) The second equality is a consequence of $\braket{\psi}{\mathcal{O}}{\psi}$ being real.
Now: $\begin{split}\mathcal{N}^{2}&=\left(\bra{\psi}\frac{I\pm\mathcal{O}^{\dagger}}{2}\right)\left(\frac{I\pm\mathcal{O}}{2}\ket{\psi}\right)=\\\ &=\sum_{\hat{i},\hat{j}}\bra{\psi_{\mathcal{S}}}d_{\hat{j}}\left(\frac{1}{\sqrt{2}}\nu_{\hat{j}}^{*}\pm\frac{\alpha^{*}(-1)^{\hat{j}\cdot\hat{m}}}{\sqrt{2}}\nu_{\hat{j}+\hat{n}}^{*}\right)\delta_{\hat{j}_{k},0}\delta_{\hat{i}_{k},0}\left(\frac{1}{\sqrt{2}}\nu_{\hat{i}}\pm\frac{\alpha(-1)^{\hat{i}\cdot\hat{m}}}{\sqrt{2}}\nu_{\hat{i}+\hat{n}}\right)d_{\hat{i}}\ket{\psi_{\mathcal{S}}}=\\\ &=\sum_{\hat{i},\hat{j}}\delta_{\hat{j}_{k},0}\delta_{\hat{i}_{k},0}\left(\frac{1}{\sqrt{2}}\nu_{\hat{j}}^{*}\pm\frac{\alpha^{*}(-1)^{\hat{j}\cdot\hat{m}}}{\sqrt{2}}\nu_{\hat{j}+\hat{n}}^{*}\right)\left(\frac{1}{\sqrt{2}}\nu_{\hat{i}}\pm\frac{\alpha(-1)^{\hat{i}\cdot\hat{m}}}{\sqrt{2}}\nu_{\hat{i}+\hat{n}}\right)\braket{\psi_{\mathcal{S}}}{d_{\hat{j}}d_{\hat{i}}}{\psi_{\mathcal{S}}}=\\\ &=\sum_{\hat{i},\hat{j}}\delta_{\hat{j}_{k},0}\delta_{\hat{i}_{k},0}\left(\frac{1}{2}\nu_{\hat{j}}^{*}\nu_{\hat{i}}\pm\frac{\alpha^{*}(-1)^{\hat{j}\cdot\hat{m}}}{2}\nu_{\hat{j}+\hat{n}}^{*}\nu_{\hat{i}}\pm\frac{\alpha(-1)^{\hat{i}\cdot\hat{m}}}{2}\nu_{\hat{j}}^{*}\nu_{\hat{i}+\hat{n}}+\frac{|\alpha|^{2}}{2}\nu_{\hat{j}+\hat{n}}^{*}\nu_{\hat{i}+\hat{n}}\right)\delta_{\hat{i},\hat{j}}=\\\ &=\sum_{\hat{i}}\delta_{\hat{i}_{k},0}\left(\frac{1}{2}|\nu_{\hat{i}}|^{2}+\frac{|\alpha|^{2}}{2}\nu_{\hat{i}+\hat{n}}^{*}\nu_{\hat{i}+\hat{n}}\pm\frac{\alpha^{*}(-1)^{\hat{i}\cdot\hat{m}}}{2}\nu_{\hat{i}+\hat{n}}^{*}\nu_{\hat{i}}\pm\frac{\alpha(-1)^{\hat{i}\cdot\hat{m}}}{2}\nu_{\hat{i}}^{*}\nu_{\hat{i}+\hat{n}}\right)=\\\ &=\frac{1}{2}\sum_{\hat{i}}\delta_{\hat{i}_{k},0}|\nu_{\hat{i}}|^{2}+\frac{1}{2}\sum_{\hat{i}}\delta_{\hat{i}_{k},0}|\nu_{\hat{i}+\hat{n}}|^{2}\pm\frac{1}{2}\braket{\psi}{\mathcal{O}}{\psi}=\\\ &=\frac{1}{2}\sum_{\hat{i}}(\delta_{\hat{i}_{k},0}+\delta_{(\hat{i}+\hat{n})_{k},0})|\nu_{\hat{i}}|^{2}\pm\frac{1}{2}\braket{\psi}{\mathcal{O}}{\psi}=\frac{1\pm\braket{\psi}{\mathcal{O}}{\psi}}{2}.\end{split}$ (42) We used that $\hat{i}_{k}=0\leftrightarrow(\hat{i}+\hat{n})_{k}=1$ again, so the sum $(\delta_{\hat{i}_{k},0}+\delta_{(\hat{i}+\hat{n})_{k},0})$ is always $1$. Because $(\hat{i}+\hat{n})+\hat{n}=\hat{i}$, each cross-term contribution appears twice, so the Kronecker delta selects one of each pair and is thus equivalent to the factor $\frac{1}{2}$ in front of the expected value (cf. Eq. 41). This equation proves we have a valid quantum state in all cases with a renormalization term of: $\mathcal{N}=\sqrt{\frac{1\pm\braket{\psi}{\mathcal{O}}{\psi}}{2}}.$ (43) $\hfill\square$ We can implement the rotation we found with a CNOT cascade similarly to Eq. 19, but instead of a central $R_{X}$ rotation we need the following one-qubit (non-unitary) operation: $\tilde{\mathcal{R}}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&\pm\alpha(-i)^{|I_{y}|}\\\ \pm\alpha(-i)^{|I_{y}|}&1\end{pmatrix}.$ (44)
## Appendix B Entangling power of an arbitrary gate
There is not a unique way to define the entangling power of a gate. There are many notions of multipartite entanglement that one can use, and the entanglement of the final state depends on the initial state. This means we have to look at an average over all possible states or at a worst/best case, which can also complicate the calculation of the chosen metric.
One of the first proposals [45] used the average linear entropy over all possible product states $\ket{\psi_{A}}\otimes\ket{\phi_{B}}$ on a preset bipartition $A,B$ of the whole space: $e(U)\coloneqq\overline{E(U\ket{\psi_{A}}\otimes\ket{\phi_{B}})}^{\psi_{A},\phi_{B}}.$ (45) This approach has proved especially useful when studying entangling power on mixed states [46]. Others are based on unitary evolution and focus on the norm of an infinitesimal transformation, such as [47, 48]. Metrics other than entanglement have also been used, such as quantum discord [49]. Since we deal with an MPS, we focus on the bipartite entanglement case, and instead of the average, we find the maximum bond dimension $\chi^{\prime}$ needed in our MPS after applying a gate on an MPS that had maximum bond dimension $\chi$. We use the Schmidt decomposition of unitaries [50], which has been used previously to characterize arbitrary gates [51] and tells us we can decompose any unitary as $\mathcal{U}=\sum_{i=1}^{k}s_{i}A_{i}\otimes B_{i},$ (46) where $s_{i}\geq 0$, $\sum_{i}|s_{i}|^{2}=1$ and $A_{i},B_{i}$ are orthonormal operator bases [52]. When applying the rotation $\mathcal{R}$ in Eq. 6, we can focus on any arbitrary bond of our MPS by decomposing it as in Fig. 3. This way, we only need to check how the gates that cross the chosen bond affect $\chi$. Figure 3: Possible decomposition of the rotation in Eq. 6 showcasing how the gates affect $\chi$ for different bonds. All gates that can be grouped into a unitary on partition A (B) are irrelevant. The bond in blue starts with $\chi_{b}$ and each CNOT can increase it by at most a factor of $2$, reaching $\chi_{b}^{\prime}\leq 4\chi_{b}$ after the transformation. For the orange bond, starting with $\chi_{o}$, the implementation of a CNOT across far-away qubits on an MPS requires that we apply a SWAP gate at each line crossing, which increases $\chi$ by at most a factor of $4$, so that at the end we have $\chi_{o}^{\prime}\leq 16\chi_{o}$. This argument is independent of the initial bonds. With the Schmidt decomposition [30] of the initial state $\ket{\psi}$ as: $\ket{\psi}=\sum_{i=1}^{\chi}\lambda_{i}\ket{\psi_{A}^{i}}\otimes\ket{\psi_{B}^{i}},$ (47) limited to rank $\chi$, applying a two-qubit gate with Schmidt number $k$ means that $\chi^{\prime}$ is at most $k\chi$: $\mathcal{U}_{k}\ket{\psi}=\sum_{j=1}^{k}\sum_{i=1}^{\chi}(s_{j}\lambda_{i})A_{j}\ket{\psi_{A}^{i}}\otimes B_{j}\ket{\psi_{B}^{i}}=\sum_{i=1}^{k\chi}\tilde{\lambda}_{i}\ket{\phi_{A}^{i}}\otimes\ket{\phi_{B}^{i}}$ (48) The final form is still a valid Schmidt decomposition thanks to the orthonormality of $A_{i},B_{i}$ and $\sum_{i}s_{i}^{2}=1$. A CNOT gate has Schmidt number $k=2$ [52], so the pair of CNOTs that cross a particular bond can increase it by at most $\chi^{\prime}\leq 4\chi$. Counter to intuition, the worst-case scenario when using an MPS is not an update that affects all qubits, but one that affects qubits that are far apart: a CNOT gate over tensors that are not neighbours is implemented with SWAPs on our MPS. A SWAP gate has Schmidt number 4, so Eq. 48 gives the bound $\chi^{\prime}\leq 16\chi$ instead, as stated in the main text and fitting the simulations in Fig. 2. Since this maximum is a consequence of SWAP gates, TN geometries other than MPS that adapt to the connectivity of the simulated circuit can reduce the bound to $4\chi$; this entails a bigger complexity in the TN contraction, as is the case in general for higher-dimensional networks.
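The Schmidt numbers quoted above are easy to verify numerically: reshaping a two-qubit gate $U_{(ab),(a'b')}$ into the matrix $M_{(aa'),(bb')}$ and counting its nonzero singular values yields the $k$ of Eq. 46. A minimal sketch (our own illustrative code, not part of the reference implementation):

```python
import numpy as np

def operator_schmidt_rank(U):
    """Schmidt number k of a two-qubit gate U (Eq. 46): reshape U[(a,b),(a',b')]
    into M[(a,a'),(b,b')] across the A|B cut and count nonzero singular values."""
    M = U.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > 1e-12))

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

print(operator_schmidt_rank(CNOT))  # 2 -> factor of 2 per CNOT crossing a bond
print(operator_schmidt_rank(SWAP))  # 4 -> factor of 4 per SWAP crossing a bond
```

CNOT returns $k=2$ and SWAP returns $k=4$, reproducing the factors behind the $\chi^{\prime}\leq 4\chi$ and $\chi^{\prime}\leq 16\chi$ bounds above.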
## Appendix C Stabilizer TN update example
It is useful to illustrate an example of a coefficient update with a non-Clifford gate in terms of $\ket{\nu}$. We take an arbitrary basis $\mathcal{B}(\mathcal{S},\mathcal{D})$ for $5$ qubits and a unitary that decomposes as $\mathcal{U}=\frac{\sqrt{3}}{2}\delta_{\hat{d}_{1}}\sigma_{\hat{s}_{1}}+\frac{1}{2}\delta_{\hat{d}_{2}}\sigma_{\hat{s}_{2}}=\left(\cos(\pi/6)+\sin(\pi/6)\delta_{\hat{d}_{2}}\sigma_{\hat{s}_{2}}\delta_{\hat{d}_{1}}\sigma_{\hat{s}_{1}}\right)\delta_{\hat{d}_{1}}\sigma_{\hat{s}_{1}}=\left(\cos(\pi/6)+\sin(\pi/6)(\delta_{\hat{d}_{2}}\delta_{\hat{d}_{1}})(\sigma_{\hat{s}_{2}}\sigma_{\hat{s}_{1}})\right)\delta_{\hat{d}_{1}}\sigma_{\hat{s}_{1}}.$ (49) Let us assume that the vector representation of $\delta,\sigma$ is: $\begin{split}\left.\begin{array}[]{c}\hat{d}_{1}=(1,1,0,0,0)\\\ \hat{d}_{2}=(1,0,0,1,0)\end{array}\right\\}&\rightarrow\hat{d}_{2}+\hat{d}_{1}=(0,1,0,1,0)\\\ \left.\begin{array}[]{c}\hat{s}_{1}=(0,0,0,1,0)\\\ \hat{s}_{2}=(0,0,1,0,0)\end{array}\right\\}&\rightarrow\hat{s}_{2}+\hat{s}_{1}=(0,0,1,1,0)\end{split},$ (50) which also implies $\hat{d}_{1}\cdot\hat{s}_{2}=0$, so $\delta_{\hat{d}_{1}}$ and $\sigma_{\hat{s}_{2}}$ commute and no extra sign appears in the second equality of Eq. 49. Then using Eqs. 14, 15 and the $\ket{\nu}$ notation we can rewrite $\mathcal{U}\ket{\nu}=\left(\cos(\pi/6)+\sin(\pi/6)(X_{1}X_{3})(Z_{2}Z_{3})\right)X_{0}X_{1}Z_{3}\ket{\nu}=\left(\cos(\pi/6)-i\sin(\pi/6)X_{1}Y_{3}Z_{2}\right)X_{0}X_{1}Z_{3}\ket{\nu}$ (51) We see that this fits Eq. 6 (up to the sign of the rotation angle). Then the coefficients can be updated with the corresponding multiqubit rotation, which we show in Fig. 4. We also show the decomposition that we have used in our Python implementation, which uses two cascades centred on the middle qubit instead of a single CNOT cascade. Figure 4: Example of coefficient update for the unitary described in Eq. 49. Horizontal lines are “qubit” sites in the traditional MPS tensor network representation of a quantum state. Gates are applied from left to right. The resulting TN of this example is more entangled than the initial $\ket{\nu}$. Most circuit simulations compile an input gate set into a specific set of gates, since practical realizations of quantum computers are similarly bound by a limited set of native gates. Our simulation approach can handle any circuit with a $\\{CNOT,R_{X},R_{Y},R_{Z}\\}$ decomposition, so it is compatible with most circuits despite the limitations of Lemma 2. Other characterizations are possible and still compatible with the stabilizer TN framework, but the implementation of unitaries of arbitrary decomposition is left for future work.
## Appendix D Tableau update rules
Our formalism relies on the update rules for the original tableau: $\left(\begin{array}[]{ccc|ccc|c}x_{1,1}&\cdots&x_{1,n}&z_{1,1}&\cdots&z_{1,n}&r_{1}\\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots&\vdots\\\ x_{n,1}&\cdots&x_{n,n}&z_{n,1}&\cdots&z_{n,n}&r_{n}\\\ \hline\cr x_{n+1,1}&\cdots&x_{n+1,n}&z_{n+1,1}&\cdots&z_{n+1,n}&r_{n+1}\\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots&\vdots\\\ x_{2n,1}&\cdots&x_{2n,n}&z_{2n,1}&\cdots&z_{2n,n}&r_{2n}\\\ \end{array}\right).$ (52) This section follows [37] exactly. It is included to give a self-contained explanation of our formalism’s update rules, since the original update rules are also used. Considering that a measurement over the $X$ or $Y$ basis can be recast as a Clifford operator followed by a $Z$-basis measurement, the basic operations we need are the following (all arithmetic is mod 2): 1.
CNOT operator with control qubit $a$ and target qubit $b$: For every row $i$, update entries $x_{ib}$, $z_{ia}$, $r_{i}$ as $r_{i}:=r_{i}\oplus x_{ia}z_{ib}(x_{ib}\oplus z_{ia}\oplus 1)\quad;\quad x_{ib}:=x_{ib}\oplus x_{ia}\quad;\quad z_{ia}:=z_{ia}\oplus z_{ib}.$ (53) 2. Hadamard operator on qubit $a$: For every row $i$, update entries $x_{ia}$, $z_{ia}$, $r_{i}$ as $r_{i}:=r_{i}\oplus x_{ia}z_{ia}\quad;\quad x_{ia}:=z_{ia}\quad;\quad z_{ia}:=x_{ia},$ (54) where the last two assignments are performed simultaneously, i.e. the columns are swapped. 3. Phase operator on qubit $a$: For every row $i$, update entries $z_{ia}$, $r_{i}$ as $r_{i}:=r_{i}\oplus x_{ia}z_{ia}\>;\>z_{ia}:=z_{ia}\oplus x_{ia}.$ (55) 4. Measurement over the $Z$ basis on qubit $a$: The update depends on whether the measured operator commutes with all the current stabilizers. If it does, we do not need to update the tableau: for each destabilizer row $i\in\\{1\dots n\\}$ with $x_{ia}=1$, we do the operation rowsum($t$, $i+n$) on an auxiliary row $t$ that starts with all zeroes, and the phase of $t$ tells us if we measure 0 or 1. Otherwise, it must anticommute, so the outcome $r$ is random with equal probability, but we must change the tableau. To do so, we choose the row $i$ of one of the anticommuting stabilizers $H$ (those with $x_{ia}=1$), do the operation rowsum($h$, $i$) for all $h\in H\setminus\\{i\\}$, and finally we add the observable $Z_{a}$ to the list of stabilizers at $i$ and store the former stabilizer $i$ as a destabilizer at row $i-n$. While not technically an operation that appears in the circuit, we also need to define what this “rowsum” operation does: 5. rowsum(a,b): Sets generator $a$ to $a+b$, that is, $x_{aj}=x_{aj}\oplus x_{bj}$ and $z_{aj}=z_{aj}\oplus z_{bj}$ for all $j\in\\{1\dots n\\}$, while properly changing its phase too. To do so, we need a function $g(x_{1},z_{1},x_{2},z_{2})$ that returns the exponent of the phase we get from multiplying $x_{1}z_{1}\cdot x_{2}z_{2}$. That is: $\begin{split}x_{1}=z_{1}=0&\rightarrow g=0\\\ x_{1}=z_{1}=1&\rightarrow g=z_{2}-x_{2}\\\ x_{1}=1,z_{1}=0&\rightarrow g=z_{2}(2x_{2}-1)\\\ x_{1}=0,z_{1}=1&\rightarrow g=x_{2}(1-2z_{2})\end{split}.$ (56) Then the phase $r_{a}$ is: $\begin{split}r_{a}=0\text{ if }2r_{a}+2r_{b}+\sum_{j=1}^{n}g(x_{bj},z_{bj},x_{aj},z_{aj})\equiv 0\>(\text{mod }4)\\\ r_{a}=1\text{ if }2r_{a}+2r_{b}+\sum_{j=1}^{n}g(x_{bj},z_{bj},x_{aj},z_{aj})\equiv 2\>(\text{mod }4)\\\ \end{split}.$ (57)
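The Clifford rules in Eqs. 53–55 map directly onto bitwise operations on the tableau columns. The following is a minimal sketch (our own illustrative code; the measurement rule and rowsum of Eqs. 56–57 follow the same pattern):

```python
import numpy as np

def cnot(x, z, r, a, b):
    # Eq. 53: CNOT with control a and target b, applied to every tableau row
    r ^= x[:, a] & z[:, b] & (x[:, b] ^ z[:, a] ^ 1)
    x[:, b] ^= x[:, a]
    z[:, a] ^= z[:, b]

def hadamard(x, z, r, a):
    # Eq. 54: Hadamard on qubit a (the x and z columns are swapped simultaneously)
    r ^= x[:, a] & z[:, a]
    x[:, a], z[:, a] = z[:, a].copy(), x[:, a].copy()

def phase(x, z, r, a):
    # Eq. 55: phase gate S on qubit a
    r ^= x[:, a] & z[:, a]
    z[:, a] ^= x[:, a]

# Tableau for |0...0>: rows 1..n are the destabilizers X_i,
# rows n+1..2n the stabilizers Z_i (Eq. 52 layout)
n = 3
x = np.zeros((2 * n, n), dtype=np.uint8); x[:n] = np.eye(n, dtype=np.uint8)
z = np.zeros((2 * n, n), dtype=np.uint8); z[n:] = np.eye(n, dtype=np.uint8)
r = np.zeros(2 * n, dtype=np.uint8)

hadamard(x, z, r, 0)  # prepare a Bell pair on qubits 0 and 1:
cnot(x, z, r, 0, 1)   # the stabilizers become X0 X1 and Z0 Z1
```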
# Infrared and Optical Detectability of Dyson Spheres at White Dwarf Stars
B. Zuckerman Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1562, USA
###### Abstract
It has been hypothesized that advanced technological civilizations will construct giant space colonies and supporting infrastructures to orbit about their home stars. With data from recent satellites that operate at infrared and optical wavelengths (Spitzer, WISE, TESS, Kepler), in company with a few modest assumptions, it is now possible to begin to constrain observationally the frequency of such space-based civilizations in our Milky Way Galaxy. Key words: white dwarfs – infrared: stars – astrobiology
## 1 Introduction
The number of technological civilizations in our Milky Way Galaxy is a topic of endless debate and speculation. It is generally agreed that, if such civilizations are abundant, then they must be long lived. Thus, such civilizations will ultimately have to deal with the evolution of their stars, first into red giants and then into white dwarfs. Two possible paths would be mass migration to another star (Zuckerman 1985; Hansen & Zuckerman 2021) or remaining at the evolving star and accommodating to the changes in luminosity and stellar wind flux (Gertz 2020). These are not mutually exclusive and both paths could be taken by an advanced civilization. One plausible picture of an advanced civilization that accommodates stellar evolution – rather than entirely fleeing from it – would include numerous giant space colonies and other giant constructs in orbit in sphere- or ring-like configurations about a white dwarf or an old main sequence star. Such a collection of constructs is often referred to as a Dyson sphere or ring, after Freeman Dyson who first described it (Dyson 1960). One motivation for giant constructs would be to supply energy to the space colonies, especially those in orbit around white dwarfs, because white dwarfs are of low and declining luminosity. Whatever the detailed arrangement, one anticipates that energy collection devices would be situated close to stars so as to minimize their size. Their temperatures would probably lie in the range 300 to 1000 K and they would radiate primarily between a few and 10 $\mu$m wavelengths (Wright et al 2014). This emission would be in addition to (an excess above) emission from the star itself. It seems unlikely that carbon/water based life could ever exist at temperatures much above 300 K. But should such life evolve (redesign itself) into something very different, then perhaps the space colonies themselves would reside at temperatures substantially above 300 K. The focus of the present paper is the detectability of artificial constructs in orbit around white dwarf stars, although main sequence stars are also considered (Section 6). The past two decades have witnessed the birth of the study of planetary systems around white dwarfs via detection of elements heavier than helium in their photospheres, detection of infrared emission from orbiting dust particles and optical spectral lines from orbiting gas, and occasional discovery of orbiting planets and brown dwarfs via either their IR emission or a transit in front of a white dwarf. Photospheric heavy elements are seen in at least 25% of all white dwarfs and are a marker for the presence of orbiting dust particles and gas.
The dust can reveal itself directly via excess IR emission above that emitted by the white dwarf photosphere, but such excess is seen in only a few percent of all white dwarfs (Rocchetto et al. 2015, Farihi 2016, Wilson et al. 2019). Even less common is excess IR emission from a cool brown dwarf or cool white dwarf companion. Thus, in order to identify a Dyson sphere/ring at a white dwarf, one must eliminate dust particles and brown and white dwarfs as the carrier of excess IR emission. In addition to absorbing radiation emitted by the white dwarf, the space constructs will sometimes pass (transit) between the star and our telescopes, thus causing a dip in the received optical light. Arnold (2005) discussed the transit light-curve signatures of very large artificial objects. Agol (2011) considered transits of Earths in the habitable zones of white dwarfs. The first examples of cool objects transiting white dwarfs have recently been obtained with the Zwicky Transient Facility and NASA’s TESS satellite (Vanderbosch et al 2020 & 2021; Vanderburg et al 2020; van Roestel et al 2021). Zuckerman (1985) showed that a few percent of the technological civilizations in our Milky Way galaxy may already have experienced the evolution of their star from main sequence to white dwarf (see Section 4 below). Thus, if technological civilizations are common, as some believe, then finding one in orbit around a white dwarf could be quite plausible. In the following we refer to Dyson spheres/rings as DSRs. The primary purpose of the present paper is to consider and compare the observational limits that can currently be placed on the number (frequency) of DSRs at white dwarf stars via their infrared emission and/or transit frequency. Many guesstimates for the number of technological civilizations in the Milky Way can be found in the literature (see Section 2). But now, thanks to advances in discovery and analysis of extrasolar planetary systems, it is possible to frame the question in a more quantitative way: On what fraction of planets in the habitable zone of sun-like stars do long-lived technological civilizations arise and eventually build a DSR? In the following section we define what is meant by the number of technological civilizations in our Galaxy. In Section 3 we consider the carriers of excess IR emission at white dwarf stars. Section 4 is a discussion of how infrared bright a DSR would have to be to have already been detected at a white dwarf star and what constraints such measurements place on the frequency of DSRs. Section 5 is a consideration of transit detectability of DSRs and how it compares with detection via infrared emission. Section 6 describes the current observational situation at main sequence stars. Section 7 compares the advantages and disadvantages of white dwarfs compared to main sequence stars when one is searching for a DSR.
## 2 What is meant by the number of technological civilizations in the Galaxy?
In order to give context to limits on the frequency of DSRs in the Galaxy, one should have some idea of how many long-lived technological civilizations “N” might be anticipated. The range suggested in the literature is huge, extending from zero (Hart 1975) to as many as 10 billion (F. Drake as quoted in Tarter 2001). Other estimates include: 50,000 to 1,000,000 (Shklovskii & Sagan 1966), one billion (Oliver & Billingham 1971, Goldsmith & Owen 1992), and 10 million (Sagan 1980).
As a consequence of the discovery during the past few decades that there are more planets than stars in the Milky Way, many persons would likely expect a (very) large value for N. In each of the above examples, N refers to the number of independently arising technologies and does not include the possibility that a technological civilization might expand to occupy one or more additional star systems. The relevance for the present paper of any such migrations depends strongly on the motivations for migration. If the only motivation for interstellar travel is to escape from one’s home planetary system so as to avoid the evolution of the home star to red giant and white dwarf (Zuckerman 1985, Hansen & Zuckerman 2021), then one would never expect a technological civilization to build a DSR around any white dwarf other than the white dwarf that their home star evolves into. Because it is hard to imagine any reason why a technological civilization would want to migrate to a white dwarf and then construct a DSR, it is probably safe to assume that the frequency of DSRs at white dwarf stars will depend only on the number of independently arising technological civilizations. In any case, in the unlikely event that migration to white dwarfs does occur frequently, this would only serve to increase the likelihood of finding a DSR at a white dwarf. By contrast, interstellar travel and colonization might increase the number of DSRs that exist at main sequence stars. As one example, if curiosity about biology on other worlds is an important motivation for interstellar travel (e.g., Zuckerman 2019), then some DSRs at main sequence stars might be constructed by a technological civilization that first arose at a different star. In the discussion to follow we assume that N is equal to the number of independently arising technological civilizations in the Milky Way. We want to estimate what constraints existing and potential future infrared and optical observations of white dwarf and main sequence stars place on the frequency of DSRs and thus on N.
## 3 White dwarfs with excess infrared emission
A few percent of white dwarf stars display excess infrared emission – characteristic temperatures $\sim$1000 K – above their photospheres (e.g., Rocchetto et al 2015). From the first discovery, such emission has generally been attributed to either orbiting dust particles or an orbiting brown dwarf (Zuckerman & Becklin 1987). It is now understood that the carrier of the IR emission is almost always orbiting dust particles, with but few examples of (spatially unresolved) brown dwarf or cool white dwarf companions (e.g., see Introduction in van Roestel et al 2021). A summary of studies of IR emission from white dwarfs can be found in the Introduction and in Table 1 in Xu et al (2020). When dust is responsible for an IR excess, then, with no reliably known counter examples, “pollution” of the white dwarf photosphere by elements heavier than helium is also observed. The generally accepted model is that the material contained in the dust particles is accreted onto the white dwarf (Veras 2016 & 2021; Farihi 2016; Jura & Young 2014; Zuckerman & Young 2018). The resulting level of photospheric pollution for stars with excess IR emission is typically large with respect to the average pollution level seen in all polluted white dwarfs. The parent body for the dust is usually a rocky asteroid or asteroids (Jura 2003 & 2008) that have been torn apart by the strong gravitational field of the white dwarf.
Thus, if heavy elements are detected in the photosphere of a white dwarf with excess IR emission, then it is likely that the excess is due to dust particles and not to a brown dwarf or a DSR. Because excess IR emission carried by dust particles is less likely for cool (cooling times $>$1000 Myr) white dwarfs (e.g., Table 3 in Rocchetto et al 2015), statistically, white dwarfs with excess IR emission and relatively low temperatures $<$8000 K could more likely host a DSR. But some white dwarfs with temperatures as low as $\sim$6000 K (Debes et al 2019; Gentile Fusillo et al 2019) and 4200 K (Hollands et al 2021) appear to host excess IR emission likely due to dust. A typical blackbody temperature of the dust at white dwarfs is $\sim$1000 K, but a few are known with temperatures as low as 300-400 K (Table 3 in Rocchetto et al 2015). As mentioned above, these are plausible temperatures for a DSR. WD2328+107 probably has an IR excess (Wilson et al 2019) and has a progenitor main sequence mass similar to that of the Sun (Rocchetto et al 2015), but no evidence for heavy elements in its atmosphere. Still, the putative excess IR emission might be carried by a brown dwarf or a cool white dwarf companion (Wilson et al 2019). A JWST observation should be able to clarify the situation. Based on the above considerations, with the possible exception of WD2328+107, it seems unlikely that any currently known example of excess IR emission would be a good DSR candidate. Elimination of dust as the carrier of IR emission requires, at a minimum, the absence of photospheric pollution and of silicate emission features in the 10 $\mu$m spectrum; such features usually accompany dust (Jura et al 2009).
## 4 The Infrared Detectability of Dyson Spheres at white dwarf stars
What limits can currently be placed on the frequency of DSRs at white dwarfs? While it might be possible to discover a narrowband signal generated by an extraterrestrial intelligence (ETI) from a white dwarf system simply by accident, there have not been any published radio or optical searches of white dwarfs meant specifically to detect ETI. Thus, the limits on N described below are the first to be obtained for white dwarfs. We frame the following discussion in two ways. In the first, we estimate upper limits to N that are implied by Spitzer Space Telescope surveys of white dwarfs under two limiting assumptions: (1) all N technological civilizations in the Galaxy arise on stars with masses similar to that of the Sun (see Equation 2), and (2) technological civilizations arise with equal probability on stars with masses less than or about equal to that of the Sun. A second way of approaching the matter does not involve N directly, but rather asks the question: if a certain fraction, $\alpha$, of G-type stars in the Galaxy contain an appropriate rocky planet in the habitable zone such that life might originate, then what fraction of these potentially habitable planets eventually evolve a technological civilization that builds a DSR? Estimates for $\alpha$ have been derived from Kepler data (see below). Zuckerman (1985) calculated the number of technological civilizations that have originated on planets that orbit around main sequence stars that have evolved to white dwarfs in a time that is less than the age of our Galaxy, which he, and we, assume to be $10^{10}$ years. We redo these calculations here, but using more recent information. Sollima (2019) considers a variety of star formation histories.
For the purposes of the present paper, assumption of a constant birthrate during the past $10^{10}$ years is adequate. For stars with masses about equal to and greater than a solar mass, following Sollima, we assume that dQ/dt, the rate of formation of stars in the Galaxy with masses between M and M+dM, is given by: $\frac{dQ}{dt}\ =\ 1.55\frac{dM}{M^{2.55}}$ (1) In the above and below expressions, Q is simply a number, but we cannot label it “N” because, historically and as above, N has been reserved to denote the number of technological civilizations in the Galaxy. The exponent 2.55 is the mean of the range of power law slopes, 2.4-2.7, suggested by Sollima. And, following Iben (1984), we have normalized to a Galactic birthrate of one star per year with a mass greater than M⊙. Sollima gives a history of the evolution of the power law exponent going back to Salpeter (1955) who adopted 2.35 over the range of stellar masses of interest here. The white dwarf initial-final mass relation for progenitor stars with main sequence masses between 0.85 and 7.5 M⊙ is given in Cummings et al (2018). Mamajek (2021) (http://www.pas.rochester.edu/$\sim$emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt) has compiled an extensive list of the properties of main sequence stars, in particular, spectral type vs. stellar luminosity and mass. We assume that stars with main sequence lifetimes that are shorter than 4.5 x $10^{9}$ years (M $>$1.25 M⊙) do not last sufficiently long to permit a technological civilization to evolve on an orbiting planet. And we assume a minimum main sequence mass of 0.95 M⊙ to allow some time for early production of metals with which to make rocky planets. Therefore, the interesting range of stellar masses lies between 0.95 and 1.25 M⊙. According to Mamajek, these masses correspond to spectral types G7 and F6, respectively, with luminosities 0.74 and 2.69 L⊙. Thus, stars with nearly the mass of the Sun have main-sequence lifetimes that vary like $M^{-3.8}$. The number of stars in the Milky Way born in the mass range 0.95 to 1.25 M⊙ during the past $10^{10}$ years is: $Q_{\rm i}\ =\ 10^{10}\int_{0.95}^{1.25}\frac{1.55\,dM}{M^{2.55}}\ =\ 3.75\times 10^{9}\,{\rm stars.}$ (2) The number of these stars still on the main sequence is: $Q_{\rm a}\ =\ \int_{0.95}^{1.25}1.55\,\frac{dM}{M^{2.55}}\int_{0}^{10^{10}/M^{3.8}}dt\ =\ 2.95\times 10^{9}\,{\rm stars.}$ (3) So the number of stars that have “died” and become white dwarfs is: $Q_{\rm i}-Q_{\rm a}\ =\ 8\times 10^{8}$ (4) Now we want to estimate how many technological civilizations (N*) might have existed on planets orbiting these 8 x $10^{8}$ white dwarfs. Let’s first assume that all N technological civilizations in the Galaxy originated around stars in the main sequence mass range 0.95 to 1.25 M⊙. Then N*max = (8 x $10^{8}$)N/(3.75 x $10^{9}$) = 0.2 N. If, for example, N = $10^{9}$, then N*max = 2 x $10^{8}$ DSRs would be distributed among 8 x $10^{8}$ white dwarfs, and 25% of all white dwarfs could potentially display a DSR. How many white dwarfs have been searched for excess IR emission and to what level of sensitivity? And how many of these had masses on the main sequence in the range 0.95 to 1.25 M⊙? The relevant satellites are Spitzer and the Wide-field Infrared Survey Explorer (WISE). Various published papers describe Spitzer surveys for excess IR emission of white dwarfs. Table A1 in Rocchetto et al (2015) lists 134 white dwarfs of which 30 had progenitors with main sequence masses $\leq$1.25 M⊙. Wilson et al.
(2019) carried out a more extensive survey (of 217 white dwarfs) that included the Rocchetto stars as a subset. Assuming the same fraction of low mass progenitors among the 217 stars as among the 134 yields a total of about 50 white dwarfs in the relevant mass range. Figure 10 in Farihi (2016) shows 108 white dwarfs with photospheres that are polluted with heavy elements that were searched with Spitzer for IR excess emission during its first 7 cycles. Most of these stars are cooler than the stars in the Rocchetto/Wilson sample, so there is little overlap. Wilson et al note that only about one in 30 white dwarfs with polluted photospheres exhibits an IR excess in 3-4 $\mu$m IRAC photometry. Putting all of these considerations together suggests that the Farihi sample of 108 stars contains about 25 that lack detectable excess IR emission and had main sequence masses that fall in the relevant range (less than 1.25 M⊙). Yet another major white dwarf survey was by Mullally et al (2007), with $\sim$100 searched. Finally there were some smaller Spitzer surveys, for example by Farihi et al (2008) of 17 white dwarfs. In total, it appears that Spitzer surveyed at least 100 white dwarfs that had masses on the main sequence in the range 0.95 to 1.25 M⊙ and that show no evidence for excess IR emission. Table 3 in Rocchetto et al (2015) indicates that the typical search sensitivity – excess IR luminosity divided by bolometric luminosity – was about 0.1%, for excess emission at temperatures between about 300 and 1700 K. Thus, the considerations that follow apply only for DSRs of fractional IR luminosity greater than 0.1%. As noted just above, if all technological civilizations originated on stars with main sequence spectral types between G7 and F6, and if N is equal to one billion, then about 25% of the 100 Spitzer-observed white dwarfs with these spectral types and without orbiting dust or a cool companion could potentially display a DSR. Yet no IR excess has been reported in the literature that could not be attributed, definitely or probably, to dust or to a companion. If N = 100 million, then we might have expected to detect 2.5 DSRs among the 100 relevant Spitzer white dwarfs. Given statistics of small numbers, these few DSRs might not be included among the 100 observed white dwarfs. So, for the situation where all technological civilizations in the Milky Way originate on main sequence stars with spectral types between G7 and F6, and where a DSR around the white dwarf that these stars evolve into has a fractional excess IR luminosity of at least 0.1%, then the implication is that N is less than 100 million. Based on the IMF for stellar masses less than a solar mass deduced by Sollima (2019), G7 through F6 stars comprise about 10% of all stars with main sequence lifetimes that are long enough to allow evolution of life to technology. If we assume that the N technological civilizations arise with equal probability around all such stars with no preference for stellar mass, then the implication from white dwarfs observed with Spitzer is that N is less than $10^{9}$.
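The star counts that underpin these bounds (Eqs. 2–4) are easy to reproduce numerically. Here is a short sketch (illustrative code only, using the constant birthrate and $M^{-3.8}$ lifetime scaling adopted above):

```python
def power_int(p, lo, hi):
    """Integral of M**(-p) dM from lo to hi (valid for p != 1)."""
    return (lo ** (1 - p) - hi ** (1 - p)) / (p - 1)

AGE = 1.0e10   # assumed age of the Galaxy, yr

# Eq. 2: stars born with 0.95 < M < 1.25 Msun over the Galaxy's lifetime
Q_i = AGE * 1.55 * power_int(2.55, 0.95, 1.25)        # ~3.75e9

# Eq. 3: those still on the main sequence (lifetime 1e10/M**3.8 yr)
Q_a = AGE * 1.55 * power_int(2.55 + 3.8, 0.95, 1.25)  # ~2.9e9

print(f"born {Q_i:.2e}, alive {Q_a:.2e}, white dwarfs {Q_i - Q_a:.2e}")
# -> roughly 8e8 white dwarfs, as in Eq. 4
```

With N*max = 0.2 N from Eq. 4, the N $<$ $10^{8}$ and N $<$ $10^{9}$ bounds quoted above follow directly.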
In summary, Spitzer observations of white dwarfs indicate that the upper limit to N is about 300 million, plus or minus a few hundred million; the better ($<$$10^{8}$) and poorer ($<$$10^{9}$) bounds to N apply, respectively, in the cases where technological civilizations preferentially form on planets at G-type stars or, alternatively, if they form with equal probabilities on planets that orbit stars with masses less than or equal to that of the Sun. Again, the technological civilization must have constructed a DSR with fractional IR luminosity of at least 0.1% for these limits to apply (see Section 7 for consideration of whether this is a plausible DSR luminosity). From Equation 4 we see that about a billion F6 through G7 stars that were on the main sequence are now white dwarfs. Studies of Kepler and other data bases by Zink & Hansen (2019) and by Bryson et al. (2021) suggest that about 30% of G-type stars are orbited by a potentially habitable planet, or about 300 million such planets that orbit the white dwarfs of interest here. If as many as one in 30 of these planets spawns life that eventually evolves to a state where it constructs a DSR with luminosity at least 0.1% that of its host white dwarf, then in a sample of 100 white dwarfs we might have expected to see a DSR. Thus, fewer than 3% of the habitable planets that orbit sun-like stars host life that evolves to technology, survives to the white dwarf stage of stellar evolution, and builds a DSR with fractional IR luminosity of at least 0.1%. The WISE mid-infrared survey of the sky (Wright et al 2010) was sufficiently sensitive to yield significant limits for N for main sequence stars (see Section 6). But for the much fainter white dwarfs, clear identification of excess IR emission is plagued by confusion (e.g., Dennihy et al 2020; Xu et al 2020) with other sources of IR emission. Hopefully, future studies with the WISE database will enable these data to contribute to our understanding of the frequency of DSRs at white dwarfs.
## 5 Optical Detectability of Transits of Dyson Spheres at white dwarf stars
As noted in the Introduction, a DSR, or part of it, might be detected as it transits between its star and a space-based telescope such as TESS or Kepler. In the present section we consider the relative sensitivity of the optical transit and infrared excess detection techniques. As noted in Section 4, the current infrared sensitivity limit – excess IR luminosity due to a DSR divided by bolometric luminosity – is about 0.1%. This limit depends on the total solid angle blocked by the entire entourage of artificial constructs as seen from the surface of the white dwarf, but with no constraints on the size of individual structures. In contrast, the minimal requirement for detection by an optical transit depends on the size of the largest construct, while the total solid angle blocked by the DSR can be far less than 0.1%. We ask whether a DSR might be detected via transits with Kepler K2 (Howell et al 2014) or with TESS (Ricker et al. 2015). van Sluijs & Van Eylen (2018) investigated the sensitivity of K2 to substellar objects that transit white dwarfs. K2 targets were observed with either long (30 min) or short (1 min) exposure times; about 1000 non-composite, confirmed white dwarfs were observed with the long cadence mode and about 300 with the short. For a 7000 K white dwarf (cooling time to reach this temperature $\sim$1.5 Gyr; Dufour et al.
2017), a black body with an orbital semi-major axis equal to a solar radius would have a temperature of 500 K. The transit time of a small object with this semi-major axis is about 1 min. So, for a transiting object of given size the detection efficiency is larger in the short cadence mode (see Figure 2 in van Sluijs & Van Eylen 2018). The diameters of the smallest objects for which the detection efficiencies are as large as $\sim$10% are comparable to that of Ceres ($\sim$1000 km). If a structure has an orbital semi-major axis of a solar radius, then the probability that its orbit would give rise to a transit as seen from Earth would be $\sim$0.01. If there are “n” large structures that lie in sufficiently different planes, then the transit probability would increase by a factor of n. If a large construct were closer to the white dwarf than a solar radius, either because it is hotter or because it has a high albedo (light shields), then the transit probability could be a few percent. If the white dwarf were cooler than 7000 K then the orbital semi-major axes of constructs of a given temperature could be even smaller. But for a cooling time of $\sim$6 Gyr the white dwarf would still be $\sim$5000 K, and thus the transit probability would be only a factor of two larger for a structure of a given temperature. To have been detected with K2, a DSR would have to include a large structure (diameter $>$1000 km) with an appropriate orbital inclination with respect to our line of sight. No such transiting objects were found with K2. No summary of TESS studies of white dwarfs is yet available in the literature. TESS observes a given region of the sky for $\sim$27 days with a number of cadences. Andrew Vanderburg (personal communication, 2022) is leading a collaboration that should eventually observe, with 2 minute cadence, about 10,000 white dwarfs; to date about 5000 have been observed. Because of TESS’ large (21”) pixels, background objects in addition to the white dwarf of interest are often included in the white dwarf pixel. After correction for the presence of such objects in the white dwarf pixel, the scatter in the measured flux is typically about 10% for a white dwarf with Gaia G magnitude = 16 (A. Vanderburg, personal communication 2022). Based on statistics given in Gentile Fusillo et al (2021) and in Dufour et al (2017) one anticipates about 2500 white dwarfs will be this bright or brighter. With a representative 3 hour orbital period, a given structure will transit a few hundred times in 27 days, after which one can construct a final phase folded light curve with flux scatter of about 1% for each of the 2500 white dwarfs. Thus, a signal-to-noise ratio of, say, 10 in a 27 day average would require a transit depth of $\sim$10%. As with K2, such a deep transit requires DSR constructs with diameter of order 1000 km. The full TESS sample of 10,000 white dwarfs should contain $\sim$800 with G mag = 15 or brighter, so for these brightest stars a transit depth of a few % would suffice for detection. Ultimately, when the final TESS white dwarf sample is observed and analyzed, the detection limits quoted in this paragraph may turn out to be too conservative – because a significant fraction of the final sample should eventually be observed with 20 sec cadence, and/or for longer than 27 days (specifically, multiples of 27 days by a factor of 2 or more) (A. Vanderburg, personal communication 2022). As in Section 4, it is possible to derive constraints on the value of N with use of the Kepler K2 and TESS databases (when the latter becomes available).
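The representative numbers used in this section follow from simple orbital geometry. A back-of-envelope sketch (illustrative code; the 0.6 M⊙ mass, 0.0125 R⊙ radius, and 7000 K temperature of the white dwarf are assumed values):

```python
import numpy as np

G, Msun, Rsun = 6.674e-11, 1.989e30, 6.957e8   # SI units
M_wd, R_wd = 0.6 * Msun, 0.0125 * Rsun         # assumed white dwarf mass, radius
a, T_wd = 1.0 * Rsun, 7000.0                   # construct at one solar radius

P = 2 * np.pi * np.sqrt(a**3 / (G * M_wd))     # orbital period
T_eq = T_wd * np.sqrt(R_wd / (2 * a))          # blackbody equilibrium temperature
p_transit = R_wd / a                           # geometric transit probability
t_transit = (P / np.pi) * (R_wd / a)           # transit duration, small object

print(f"P ~ {P/3600:.1f} h, T_eq ~ {T_eq:.0f} K, "
      f"p ~ {p_transit:.3f}, duration ~ {t_transit:.0f} s")
# -> P ~ 3.6 h, T_eq ~ 550 K, p ~ 0.01, duration ~ 50 s, consistent with the
#    ~3 hour period, ~500 K, ~1% probability and ~1 min transit quoted above
```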
However, such constraints will apply only to DSRs that include at least one structure with diameter $\sim$1000 km and will be sensitive to the number of such constructs and the inclination of their orbital planes relative to our line of sight. Should transits with small fractional dips be detected, it might be possible to distinguish natural from artificial objects by study of the transit shape (e.g., Arnold 2005; Sandford & Kipping 2019). However, this would likely require an additional factor of $\sim$10 in signal-to-noise ratio (G. Marcy, personal communication 2022).
## 6 The frequency of technological civilizations at main sequence stars
Evidence of ETI at main sequence stars can be in the form of excess IR emission from a DSR or as radiation beamed toward Earth at radio, IR, or optical wavelengths for the purpose of communication. Each technique has its own advantages and disadvantages. Concerning beamed transmissions, at radio wavelengths one representative major project is “SETI Observations of Exoplanets with the Allen Telescope Array” (Harp et al 2016). A representative search at optical wavelengths would be “A Search for Laser Emission with Megawatt Thresholds from 5600 FGKM Stars” (Tellis & Marcy 2017). The website technosearch.seti.org gives a comprehensive list of (perhaps all) radio and optical search programs. Based on the discussion in Sections 4 and 7, perhaps it would be worthwhile to devote more time to white dwarfs as targets in such searches. Searches for beamed transmissions usually assume a signal that is so narrow in frequency space and/or has a timing profile such that it cannot be produced by a “natural” source. One huge advantage for a search for a beamed transmission, compared to an IR or transit search for a DSR, is that once one has ruled out terrestrial interference, there are no false positives to contend with. Mid-infrared satellites have observed numerous main sequence stars in both pointed (Spitzer) and scanning mode (WISE). In all cases two important questions are: (1) what is the minimum fractional IR luminosity (L$_{\rm IR}$/L$_{*}$) due to a DSR that can be detected, and (2) if an IR excess is detected, can it be shown with certainty that it must be coming from a DSR rather than from some “natural” source such as dust or a brown dwarf? Main sequence stars are much brighter than white dwarfs, so many more can be searched for a DSR in a brightness limited sample (see, e.g., Cotten & Song 2016). Nonetheless, as outlined in Section 7, in some ways white dwarfs are better targets. Wright et al (2014) outline a plan to search with Spitzer and WISE at mid-infrared wavelengths for the waste energy of advanced technological civilizations. They review the idea of Kardashev Type I, II, and III technological civilizations; these correspond, respectively, to technologies that command planetary, circumstellar and galactic energy sources. Homo sapiens are a Type I civilization and DSRs correspond to Type II. However, the focus of the Wright et al paper is on Type III civilizations that, they say, are “very rare in the local universe”. There are two published studies relevant to the frequency of Type II civilizations (= DSRs): Kennedy & Wyatt (2013) and Moor et al. (2021). Kennedy & Wyatt surveyed $\sim$24,000 A8 to K5 type main sequence stars for excess IR emission in the 12 $\mu$m (W3) channel of WISE. Their goal was to discover stars that are orbited by warm (T $>$ 200 K) dusty disks with ages of at least 8 Myr.
There is no mention in their paper of DSRs or Kardashev civilizations. Perhaps this is why the Kennedy & Wyatt paper is not cited by Wright et al (2014). Only about two dozen warm IR excess stars emerged from this extensive study; many of these were previously known (see Table 1 in Kennedy & Wyatt 2013). Important for the present study, none of the Table 1 stars satisfies both the age ($>$4.5 Gyr) and spectral type (F6-G7) of interest to us here, with the possible exception of HD 154593. However, from a Gaia proper motion anomaly, Kervella et al (2019) deduce that HD 154593 has a cool unseen companion. C. Melis (personal communication, 2021) has carried out an extensive search of the literature over a much broader range of stellar phase space than covered by Kennedy & Wyatt for stars that would satisfy our constraints for age and spectral type; Melis found no such stars with a reliable IR excess. Thus, none of the Kennedy & Wyatt sample of 24,000 stars shows evidence for a DSR; only a fraction of the 24,000 satisfy our age and spectral type constraints. Their Figure 1 of spectral types suggests that about 12,000 stars are in the range F6-G7. Their Figure 2 gives the age distribution; $\sim$20% are older than 4.6 Gyr. Thus we can say that none of the $\sim$2400 stars with the appropriate age and spectral type shows evidence of a DSR. Similar to the white dwarfs discussed in Section 4, the absence of a DSR at these 2400 main sequence stars is for fractional IR luminosities – excess IR luminosity divided by bolometric luminosity – of about 0.1% or greater. In both classes of target stars one is dealing with a brightness limited sample. But even though the white dwarfs are fainter than the main sequence stars, for the former sample the IR data are from Spitzer, in contrast to the less sensitive WISE data used for the main sequence sample. From Equation 3 we see that about 3 billion F6 through G7 stars are now on the main sequence. As noted in Section 4, studies of Kepler and other data bases by Zink & Hansen (2019) and by Bryson et al. (2021) suggest that about 30% of G-type stars have a potentially habitable planet, or about $10^{9}$ such planets. If a million ($10^{6}$) of these evolve to a technological civilization that constructs a DSR, then one should have been found in the Kennedy & Wyatt sample of 2400 stars. Thus, we can say that at most about one in a thousand G-type stars with habitable planets harbors a planet with life that eventually evolves to a state where it constructs a DSR with luminosity at least 0.1% that of its host star. It would be worthwhile to analyze the Moor et al. (2021) database for DSRs. But this is not possible to do without more detailed information on stellar spectral types and age than is presented in their paper. We can, however, say that none of the warm IR excess stars identified by Moor et al appears to be a good candidate for a DSR. The above discussion and that in Section 4 assume that all N civilizations arise independently and do not colonize other stars. Hansen & Zuckerman (2021) argue that many such migrations will be motivated by stellar evolution and will be to M-type stars. If so, then M stars would represent a plausible host for DSRs. However, the false positive rate for excess IR emission at M stars that are cross correlated with the WISE database is large (Silverberg et al 2018) and, at any rate, no one has ever confidently identified an M star of age $\sim$5 Gyr or greater that possesses an IR excess.
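Both the Spitzer and WISE limits above are phrased in terms of the fractional luminosity $\tau$. A back-of-envelope sketch shows why a $\tau$ = 0.1% DSR stands out at mid-IR wavelengths (illustrative code; the 10,000 K photosphere and the simple blackbody treatment are our own assumptions):

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23        # SI constants

def planck(lam, T):
    """Blackbody spectral radiance B_lambda(T)."""
    return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * k * T)) - 1)

for T in (300, 500, 1000):                     # Wien's law: peak wavelength
    print(f"{T} K blackbody peaks near {2.898e-3 / T * 1e6:.1f} micron")
# 1000 K -> 2.9 um, 300 K -> 9.7 um: the "few to 10 micron" range of interest

# fractional excess over an assumed 10,000 K photosphere at 4.5 um, tau = 0.1%
T_star, T_dsr, tau, lam = 1.0e4, 1.0e3, 1e-3, 4.5e-6
excess = tau * (T_star / T_dsr) ** 4 * planck(lam, T_dsr) / planck(lam, T_star)
print(f"excess/photosphere at 4.5 um: {excess:.0%}")   # roughly 16%
```

Even a 0.1% bolometric excess thus produces a mid-IR flux excess of order ten percent, which is why the surveys discussed above reach this $\tau$ level.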
## 7 Discussion
In Sections 4 and 6 we appraised the occurrence frequency of DSRs at white dwarfs and at main sequence stars based on various observations and assumptions. Here we compare the relative advantages and shortcomings involved when white dwarfs or main sequence stars are utilized as determinants of DSR frequency. The IR surveys referred to in Sections 4 and 6 have similar sensitivities to DSRs ($\sim$0.1%) when the luminosity of a DSR is measured as a fraction of the bolometric luminosity of a central star; we will call this fraction $\tau$. At this point in time, no evidence exists of a DSR of this fractional brightness or larger at any of the surveyed white dwarfs and main sequence stars. No doubt, motivation is a major factor that determines whether an advanced technological civilization would want to construct a DSR. As noted in the Introduction, coping with stellar evolution is one unavoidable motivation. Interior planets will be destroyed during the giant (AGB) phase of stellar evolution, while surviving outer planets will become exceedingly cold as the white dwarf cools. Thus, arguably, eventually all organisms will have migrated from planetary surfaces to artificial space colonies. If so, then any civilization that orbits a white dwarf must produce a DSR. A question then is, what is a likely value for $\tau$? One can envision reasons why $\tau$ at a white dwarf would be quite large. Far more organisms can live in an ensemble of space colonies than can live on a planet such as Earth. Thus, the value of $\tau$ depends on the numbers of organisms, how much space each one would desire, and how much energy would be required per capita. Needless to say, from our youthful, naive vantage point we simply don’t know the answers to such questions. About all that one might say is that most Homo sapiens would surely prefer to live in a huge space colony, the bigger the better. If this motivation is retained even in an advanced long-lived civilization, then $\tau$ could be quite large; easily 0.1%. And, as noted in Section 5, really large constructs might produce detectable transit signals. Thus, $\tau$ = 0.1% would seem to be a significant search sensitivity, and upper limits to the frequency of DSRs with this luminosity at white dwarfs are meaningful. Unfortunately, given their low apparent brightness, as noted in Section 4, only of order 100 white dwarfs with main sequence progenitors in the appropriate mass range were examined with Spitzer. Given this small sample size, we estimated that at least 3% of all habitable planets at G-type main sequence stars must spawn life that eventually evolves to technology in order for at least one to have been detected with Spitzer at a white dwarf. This seems a bit “optimistic”. Because a few thousand G-type main sequence stars with appropriate age and spectral type were examined with WISE, as noted in Section 6, only one in a thousand or so G-type stars needs to spawn life that eventually develops technology for a DSR with $\tau$$>$0.1% to have been detected in the Kennedy & Wyatt (2013) survey. However, main sequence stars suffer at least two disadvantages as target stars when compared to white dwarfs; one disadvantage would be less motivation to build a substantial DSR because one’s home planet remains a good abode for life. In our own solar system, if a sunshield is constructed and employed at the inner Earth-Sun Lagrange point – to counter the increasing solar luminosity – then Earth could remain quite habitable for a few Gyr more into the future.
Perhaps a more important consideration would be the greater luminosity, say about a factor of 1000, of the Sun compared to a typical white dwarf. For a DSR with the same $\tau$ and temperature around the Sun as one around a white dwarf, a DSR at the former would have to have 1000 times the area of one at the latter. While there is sufficient material in the asteroid belt to build such an extensive DSR, would the motivation to do so exist? For transits of main sequence stars by structures with temperatures in the range 300 to 1000 K, the orbital period would be much longer than around white dwarfs, thus yielding relatively few transits per year. For a given structure, the probability of proper alignment of orbital plane and line of sight to Earth would be small and its required cross section would be larger than that of Ceres.

## 8 Conclusions

We have considered the detection of a Dyson Sphere or Ring (DSR) at a white dwarf star via its infrared emission or via a transit between our telescopes and the star. Of order 100 white dwarfs of appropriate mass were observed in the infrared with the Spitzer Space Telescope; no plausible DSR candidate has been identified. We also considered DSRs at main sequence stars; a few thousand of appropriate age were observed with the Wide-field Infrared Survey Explorer (WISE) and no plausible DSR candidate was identified. These results, along with a few reasonable assumptions, can be used to place limits on the number of technological civilizations in the Galaxy or, alternatively, on the fraction of habitable planets that orbit solar type stars and on which a technological civilization eventually emerges and subsequently constructs a DSR.

We discussed transit surveys of white dwarfs with the Kepler K2 and TESS missions. These could, in principle, detect a DSR that includes at least one structure with diameter $\sim$1000 km. Detection of a DSR via its infrared excess emission depends on the total solid angle blocked by the entire entourage of artificial constructs as seen from the surface of the white dwarf, but does not require the existence of any really large individual structure. In contrast, the minimal requirement for detection by an optical transit depends on the size of the largest construct, while the total solid angle blocked by a DSR can be far less than is required for detection of an infrared excess.

Both the white dwarf and main sequence studies were sensitive to DSRs whose infrared luminosity “$\tau$” is about 0.1% of the bolometric luminosity of their central star. Regarding the number “N” of technological civilizations in the Milky Way Galaxy, based on the white dwarf studies, if all N originate at solar-type main sequence stars and construct DSRs with $\tau$ at least as large as 0.1%, then N $<$ 100 million. If technological civilizations emerge with equal probability around all stars of solar mass or less, then N $<10^{9}$.

An alternative way to interpret the observational results is to ask: what fraction of potentially habitable planets that orbit solar-type stars spawn living organisms that eventually evolve to a technology that then constructs a DSR with $\tau$ at least as large as 0.1%? From the white dwarf studies we estimate that this fraction is at most 3%. From the main sequence studies the upper limit to this fraction could be as small as 0.1%, but with a few potentially significant caveats. Additional examination of the existing databases would be worthwhile to ensure that no DSR was missed.
Substantial progress to improve the current limits, or to detect a DSR, will require a new mid-infrared space telescope. The diameters of WISE, Spitzer, and JWST are 40, 85, and 650 cm, respectively. JWST, with its long slew time and numerous Galactic and extragalactic objects waiting in line to be observed, is not the right telescope with which to search for DSRs. The median Gaia G-magnitude of the Rocchetto et al. (2015) white dwarf sample was 15.4. According to Gentile Fusillo et al. (2019), there are about 5000 white dwarfs within 200 pc of Earth and brighter than magnitude 17. Given that about 25% of white dwarfs have masses in the (low mass) range of relevance for the possible existence of a DSR (as discussed in Section 4), a scanning telescope operated in a similar way to WISE, but with the diameter of Spitzer, could improve by an order of magnitude the limits on DSR frequency derived in this paper. Such a telescope would yield much new science in general and would not be too costly. The 30-m class ground-based telescopes currently envisioned or in construction could be used to vet any potential DSR candidates.

Future telescopes, such as LSST, may be suitable for detection of transits of DSRs at white dwarfs. But individual constructs would have to have diameters $\sim$1000 km, i.e. comparable to that of Ceres (Cortés & Kipping 2019). ESA’s planned PLATO spacecraft, if successful, should be at least a few times more sensitive than TESS for detection of very shallow transits at white dwarfs and thus able to detect constructs with diameters a few times smaller than 1000 km.

The author is grateful to Andrew Vanderburg and Beth Klein for a variety of assistance, to referee Geoffrey Marcy for many useful suggestions, and to Carl Melis, Jay Farihi, Jill Tarter, and Brad Hansen for helpful advice. This research was supported in part by grants to UCLA from NASA and the NSF. We have made use of NASA’s Astrophysics Data System.

DATA AVAILABILITY

No new data were generated or analysed in support of this research.

## References

* Agol (2011) Agol, E. 2011, ApJL, 731, L31
* Arnold (2005) Arnold, L. 2005, ApJ, 627, 534
* Bryson et al. (2021) Bryson, S., Kunimoto, M., Kopparapu, R. K. et al. 2021, AJ, 161, 36
* Cortés & Kipping (2019) Cortés, J. & Kipping, D. 2019, MNRAS, 488, 1695
* Cotten & Song (2016) Cotten, T. H. & Song, I. 2016, ApJS, 225, 15
* Cummings et al. (2018) Cummings, J. D., Kalirai, J. S., Tremblay, P.-E. et al. 2018, ApJ, 866, 21
* Debes et al. (2019) Debes, J. H., Thevenot, M., Kuchner, M. et al. 2019, ApJL, 872, L25
* Dennihy et al. (2020) Dennihy, E., Farihi, J., Fusillo, N. P. G. & Debes, J. H. 2020, ApJ, 891, 97
* Dufour et al. (2017) Dufour, P., Blouin, S., Coutu, S. et al. 2017, ASPC, 509, 3
* Dyson (1960) Dyson, F. 1960, Science, 131, 1667
* Farihi (2016) Farihi, J. 2016, New Astronomy Reviews, 71, 9
* Farihi et al. (2008) Farihi, J., Zuckerman, B. & Becklin, E. E. 2008, ApJ, 674, 431
* Gentile Fusillo et al. (2021) Gentile Fusillo, N. P., Tremblay, P.-E., Cukanovaite, E. et al. 2021, MNRAS, 508, 3877
* Gentile Fusillo et al. (2019) Gentile Fusillo, N. P., Tremblay, P.-E., Gaensicke, B. T. et al. 2019, MNRAS, 482, 4570
* Gertz (2020) Gertz, J. 2020, arXiv e-prints, arXiv:2001.00673
* Goldsmith & Owen (1992) Goldsmith, D. & Owen, T. 1992, The Search For Life in the Universe, Second Edition (Reading, Massachusetts: Addison-Wesley)
* Hansen & Zuckerman (2021) Hansen, B. M. S. & Zuckerman, B. 2021, AJ, 161, 145
* Harp et al. (2016) Harp, G. R., Richards, J., Tarter, J. C. et al. 2016, AJ, 152, 181
* Hart (1975) Hart, M. H. 1975, QJRAS, 16, 128
* Hollands et al. (2021) Hollands, M. A., Tremblay, P.-E., Gaensicke, B. T. et al. 2021, Nature Astronomy, 5, 451
* Howell et al. (2014) Howell, S., Sobeck, C., Haas, M. et al. 2014, PASP, 126, 398
* Iben (1984) Iben, I. 1984, in IAU Symp. No. 105 (Dordrecht, Holland: Reidel), 3
* Jura (2003) Jura, M. 2003, ApJ, 584, L91
* Jura (2008) Jura, M. 2008, AJ, 135, 1785
* Jura et al. (2009) Jura, M., Farihi, J. & Zuckerman, B. 2009, AJ, 137, 3191
* Jura & Young (2014) Jura, M. & Young, E. 2014, AREPS, 42, 45
* Kennedy & Wyatt (2013) Kennedy, G. M. & Wyatt, M. C. 2013, MNRAS, 433, 2334
* Kervella et al. (2019) Kervella, P., Arenou, F., Mignard, F. & Thevenin, F. 2019, A&A, 623, 72
* Moor et al. (2021) Moor, A., Abraham, P., Szabo, G. et al. 2021, ApJ, 910, 27
* Mullally et al. (2007) Mullally, F., Kilic, M., Reach, W. et al. 2007, ApJS, 171, 206
* Oliver & Billingham (1971) Oliver, B. M. & Billingham, J. 1971, Project Cyclops: A Design Study of a System for Detecting Extraterrestrial Intelligent Life (NASA Ames Research Center, Moffett Field)
* Ricker et al. (2015) Ricker, G. R., Winn, J. N., Vanderspek, R. et al. 2015, JATIS, 1, 014003
* Rocchetto et al. (2015) Rocchetto, M., Farihi, J., Gaensicke, B. & Bergfors, C. 2015, MNRAS, 449, 574
* Sagan (1980) Sagan, C. 1980, Cosmos (New York City: Random House)
* Salpeter (1955) Salpeter, E. E. 1955, ApJ, 121, 161
* Sandford & Kipping (2019) Sandford, E. & Kipping, D. 2019, AJ, 157, 42
* Shklovskii & Sagan (1966) Shklovskii, I. S. & Sagan, C. 1966, Intelligent Life in the Universe (San Francisco: Holden-Day)
* Silverberg et al. (2018) Silverberg, S. M., Kuchner, M. J., Wisniewski, J. P. et al. 2018, ApJ, 868, 43
* Sollima (2019) Sollima, A. 2019, MNRAS, 489, 2377
* Tarter (2001) Tarter, J. 2001, ARA&A, 39, 511
* Tellis & Marcy (2017) Tellis, N. K. & Marcy, G. W. 2017, AJ, 153, 251
* Vanderbosch et al. (2020) Vanderbosch, Z., Hermes, J. J., Dennihy, E. et al. 2020, ApJ, 897, 171
* Vanderbosch et al. (2021) Vanderbosch, Z., Rappaport, S., Guidry, J. et al. 2021, ApJ, 917, 41
* Vanderburg et al. (2020) Vanderburg, A., Rappaport, S., Xu, S. et al. 2020, Nature, 585, 363
* van Roestel et al. (2021) van Roestel, J., Kupfer, T., Bell, K. J. et al. 2021, ApJ, 919, L26
* van Sluijs & Van Eylen (2018) van Sluijs, L. & Van Eylen, V. 2018, MNRAS, 474, 4603
* Veras (2016) Veras, D. 2016, Royal Society Open Science, Vol. 3, id.150571
* Veras (2021) Veras, D. 2021, orel.book, 1. doi:10.1093/acrefore/9780190647926.013.238
* Wilson et al. (2019) Wilson, T. G., Farihi, J., Gaensicke, B. & Swan, A. 2019, MNRAS, 487, 133
* Wright et al. (2010) Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K. et al. 2010, AJ, 140, 1868
* Wright et al. (2014) Wright, J. T., Griffith, R. L., Sigurdsson, S. et al. 2014, ApJ, 792, 27
* Xu et al. (2020) Xu, S., Lai, S. & Dennihy, E. 2020, ApJ, 902, 127
* Zink & Hansen (2019) Zink, J. K. & Hansen, B. M. S. 2019, MNRAS, 487, 246
* Zuckerman (1985) Zuckerman, B. 1985, QJRAS, 26, 56
* Zuckerman (2019) Zuckerman, B. 2019, arXiv:1912.08386
* Zuckerman & Becklin (1987) Zuckerman, B. & Becklin, E. E. 1987, Nature, 330, 138
* Zuckerman & Young (2018) Zuckerman, B. & Young, E. 2018, haex.book, 14. doi:10.1007/978-3-319-55333-7_14
# Anomalies and phases of strongly-coupled chiral gauge theories: recent developments

Stefano Bolognesi(1,2), Kenichi Konishi(1,2), Andrea Luzio(3,2)

(1)Department of Physics “E. Fermi”, University of Pisa, Largo Pontecorvo, 3, Ed. C, 56127 Pisa, Italy
(2)INFN, Sezione di Pisa, Largo Pontecorvo, 3, Ed. C, 56127 Pisa, Italy
(3)Scuola Normale Superiore, Piazza dei Cavalieri, 7, 56127 Pisa, Italy

<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

After many years of investigations, our understanding of the dynamics of strongly-coupled chiral gauge theories is still quite unsatisfactory today. Conventional wisdom about strongly-coupled gauge theories, successfully applied to QCD, is not always as useful in chiral gauge theories. Recently some new ideas and techniques have been developed, which involve the concepts of generalized symmetries, of gauging a discrete center symmetry, and of generalizing the ’t Hooft anomaly matching constraints to include certain mixed anomalies. This new development has been applied to chiral gauge theories, leading to many interesting, sometimes quite unexpected, results. For instance, in the context of the generalized Bars-Yankielowicz and generalized Georgi-Glashow models, these new types of anomalies give a rather clear indication in favor of the dynamical Higgs phase, against confining, flavor-symmetric vacua. Another closely related topic is the strong anomaly and the effective low-energy action representing it. It turns out that these have significant implications for the phases of chiral gauge theories, giving indications consistent with the findings based on the generalized anomalies. Some striking analogies and contrasts between massless QCD and chiral gauge theories seem to emerge from these discussions. The aim of this work is to review these developments.

###### Contents

1 Introduction
2 Computation of the mixed anomalies
  2.1 Gauging a 1-form ${\mathbbm{Z}}_{k}\subset{\mathbbm{Z}}_{N}$ center symmetry
  2.2 Color-flavor locked ${\mathbbm{Z}}_{N}$ center symmetry: Master formula
  2.3 Comments on the paper [78]
  2.4 Comments on the papers [79, 80]
  2.5 Higgs phase and anomaly-matching
3 Physics of models with ${\mathbbm{Z}}_{k}\subset{\mathbbm{Z}}_{N}$ center symmetry
  3.1 Self-adjoint antisymmetric tensor matter
    3.1.1 $SU(6)$ gauge group
    3.1.2 $SU(N)$ models
  3.2 Adjoint QCD
  3.3 QCD with “tensor quarks”
  3.4 Chiral theories with $\tfrac{N-4}{k}$ $\psi^{\\{ij\\}}$’s and $\tfrac{N+4}{k}$ ${\bar{\chi}_{[ij]}}$’s
4 Generalized anomalies and phases of the generalized BY and GG models
  4.1 Bars-Yankielowicz models
  4.2 Georgi-Glashow models
5 Strong anomaly and phases of chiral gauge theories
  5.1 $U(1)_{A}$ problem and the $\theta$ dependence in QCD
    5.1.1 ${\cal N}=1$ supersymmetric theories
  5.2 $\psi\eta$ model and strong anomaly
  5.3 Strong anomaly: the generalized BY models
  5.4 Strong anomaly and the $\chi\eta$ model
  5.5 Generalized GG models and strong anomaly
  5.6 Strong anomaly in the chiral gauge theories considered in Sec. 3
    5.6.1 $SU(6)$ model with a single fermion in a self-adjoint antisymmetric representation
    5.6.2 Adjoint QCD with $N_{\rm c}=N_{\rm f}=2$
6 Summary and discussion
A Chirally symmetric confining $\psi\eta$ model
B Higgs phase of the $\psi\eta$ model
C Symmetric confining phase for the $\chi\eta$ model
D Higgs phase of the $\chi\eta$ model
E Confining symmetric phase of the BY models
F Higgs phase in the BY models
G Confining symmetric phase of the GG models
H Higgs phase in the GG models

## 1 Introduction

One of the mysteries of the world we live in is the fact that it has a nontrivial chiral property. Macroscopic structures such as biological bodies often have approximately, but not exactly, left-right symmetric forms. At the molecular level, $O(10^{-6}\,{\rm cm})$, the structure of DNA possesses a definite chiral spiral form. At the microscopic scales of the fundamental interactions, $O(10^{-14}\,{\rm cm})$, the left-handed and right-handed quarks and leptons have distinct couplings to the $SU(3)\times SU(2)_{L}\times U(1)_{Y}$ gauge bosons.

The parity violation in the “weak interaction” processes, though it was first considered somewhat weird, later found a natural explanation [2]: the fundamental entities of our world are Weyl fermions in the $\big{(}\tfrac{1}{2},0\big{)}$ representation of the Lorentz group. If such fermions are in a generic complex representation of the gauge group, the resulting theory will break parity. In other words, the origin of parity violation is to be traced to the type of building blocks our world is made of, rather than to a peculiar property of the Fermi interactions. Most grand unification schemes, such as those based on the $SU(5)$, $SO(10)$ or $E_{7}$ groups, are also based on chiral gauge theories.

The Glashow-Weinberg-Salam (GWS) $SU(2)_{L}\times U(1)_{Y}$ theory (as well as its GUT generalizations) is a weakly coupled theory and, as such, is well understood within the framework of perturbation theory. But this also means that the theory should be regarded, at best, as a very good low-energy effective theory. In particular, it is unlikely that the gauge symmetry breaking sector described by a potential term for the Higgs scalar, though phenomenologically quite successful, is a self-consistent, fundamental description. Nevertheless, attempts to replace it by new, QCD-like strongly-coupled gauge theories (such as Technicolor, Extended Technicolor, Walking Technicolor, etc.) have not been entirely successful so far.

On the other hand, our understanding of strongly-coupled chiral gauge theories is today uncomfortably limited [3]-[22]. This is in striking contrast to the case of vector-like gauge theories, for which we have an extensive literature. The available tools include some general theorems [23], [24], lattice simulations [25]-[28], effective Lagrangians [29]-[37], the conventional ’t Hooft anomaly analysis [3], the powerful exact results in ${\cal N}=2$ supersymmetric theories [38]-[44], space compactification with semi-classical approximation [45]-[47], and so on. These theoretical tools are unfortunately unavailable for the analysis of strongly-coupled chiral gauge theories, with a few exceptions such as the large $N$ approximation, the ’t Hooft anomaly matching constraints, and some general considerations based on the renormalization group. Taken together, they yield some useful but not very detailed knowledge about the dynamics and phases of chiral gauge theories. A partial list of the papers on these efforts can be found in [3]-[22].

This situation severely limits our capability of finding the place for chiral gauge theories in the context of a realistic theory of the fundamental interactions beyond the standard model, e.g., in the context of composite models for the quarks and leptons, composite Higgs boson models, composite dynamical models for dark matter, and so on.
The need for new ideas to make true progress in this research field is badly felt. In our view, a little hope for a breakthrough comes from the very recent ideas involving the concept of generalized symmetries [48]-[51]. Pure Yang-Mills theories and QCD-like theories have been analyzed by using a new, stronger version of the ’t Hooft anomaly matching constraints, involving $0$-form and $1$-form symmetries together [52]-[70].

The generalized symmetries are symmetries which do not act on local field operators, as conventional symmetries do, but only on extended objects, such as closed lines and surfaces. As the corresponding gauge functions are now 1-form or 2-form fields (in the standard gauging of global symmetries the gauge transformation parameters which appear in the exponent are just 0-form functions of the spacetime), these new types of symmetries are sometimes called 1-form, 2-form, etc., symmetries. A familiar example of a 1-form symmetry is the ${\mathbbm{Z}}_{N}$ center symmetry in Euclidean $SU(N)$ Yang-Mills theory at finite temperature, which acts on the Polyakov loop. Whether the center symmetry is unbroken (or broken) by the vacuum expectation value (VEV) of the Polyakov loop is a valid criterion for the confinement (or deconfinement) phase. Similarly, the area law / perimeter law of the VEV of the Wilson loops can be considered as the criterion for the confinement / Higgs phase. Note however that the center symmetry ${\mathbbm{Z}}_{N}$ as used this way is a global 1-form symmetry.

The main input of the new development [52]-[70] is the idea of gauging 1-form symmetries. In fact, these generalized symmetries are symmetries of the systems being considered, even if they act in a way different from the conventional symmetry action. We are free to decide to “gauge” these new types of symmetries. Anomalies one encounters in doing so are obstructions to gauging a symmetry, which is by definition a ’t Hooft anomaly. And as in the usual requirement of ultraviolet (UV)-infrared (IR) “anomaly matching”, similar conditions arise in gauging the generalized (higher-form) symmetries together with some conventional (0-form) symmetries; these have come to be known in the recent literature as “mixed ’t Hooft anomalies”. As will be seen below, these constraints carry significant information on the dynamics of wide classes of chiral gauge theories as well, which is our main interest.

Very recently, the present authors have realized [96] that the strong anomaly, which plays a prominent role in the solution of the so-called $U(1)_{A}$ problem in QCD, can also be significant in the study of the phases of chiral gauge theories, such as those discussed in this review. A key observation is that the well-known strong-anomaly effective action for QCD, which reproduces the effects of the strong anomaly at low energies, has a nontrivial implication for the symmetry breaking pattern itself. For some reason these ideas have not been applied much to the study of strongly-interacting chiral gauge theories until now. It is found that, quite remarkably, the considerations based on the strong anomaly yield indications on the phases of the chiral gauge theories similar to those found by applying the generalized ’t Hooft anomalies. Even though the two arguments are fully independent of each other, the agreement of the results should probably not be considered entirely accidental.
Indeed, they both originate from the proper treatment of the strong anomaly in the various $U(1)$ symmetries present in the theory.

The rest of the work is organized as follows. In Sec. 2 the procedure for computing the anomalies associated with the gauging of certain 1-form discrete symmetries is discussed, as this constitutes one of the main theoretical tools of our analysis. The discussion is divided into two parts. The first part, Sec. 2.1, concerns models with a 1-form center symmetry ${\mathbbm{Z}}_{k}\subset{\mathbbm{Z}}_{N}\subset SU(N)$ which does not act on the matter fermions. These models also have ordinary (0-form) discrete symmetries, call them ${\mathbbm{Z}}_{\ell}$, which are nonanomalous remnants of anomalous $U(1)$ symmetries. The second class of models contains some matter fermions in the fundamental or antifundamental representation of $SU(N)$. Normally one would conclude that the 1-form center symmetry ${\mathbbm{Z}}_{N}$ is simply absent in such models. However, as explained in Sec. 2.2, it turns out that it is still possible to define a color-flavor locked 1-form center symmetry ${\mathbbm{Z}}_{N}$.

In Sec. 3 and Sec. 4, applications of these new ’t Hooft anomaly constraints to various chiral gauge theories are explored. In Sec. 3, the physics of various chiral gauge theories of the first kind is discussed, by using the general results of Sec. 2.1. Sec. 4 is dedicated to the applications of the formulas found in Sec. 2.2 to two large classes of chiral gauge theories, the so-called generalized Bars-Yankielowicz (BY) and Georgi-Glashow (GG) models. After discussing the new, generalized anomalies and their implications in various kinds of chiral gauge theories, we explore in Sec. 5 the implications of the strong anomaly on the phases of the same classes of chiral gauge theories studied in Secs. 2-4. We conclude in Sec. 6 by summarizing the results found, and by discussing interesting analogies and contrasts between the dynamics of massless QCD and chiral gauge theories. A clearer picture of the infrared dynamics of many strongly-coupled chiral gauge theories seems to emerge. Appendices A-H are a collection of tables summarizing the massless fermions and their quantum numbers in the various possible phases of the BY and GG models.

## 2 Computation of the mixed anomalies

Gauging of a discrete (1-form) center symmetry and calculating the anomalies it induces in some otherwise nonanomalous global discrete symmetry (a generalized ’t Hooft anomaly) is the central theme of the work reviewed here. Let us go through the basic elements of the analysis and list the main formulas needed to get to the physics results discussed in the subsequent sections. For a more general introduction and theoretical considerations on the generalized symmetries the reader can consult the original literature [50]-[71]. We need to distinguish two different classes of models: the first concerns systems where the fermions do not transform under a ${\mathbbm{Z}}_{k}$ ($k$ is a divisor of $N$) subgroup of the $SU(N)$ center. These systems possess a ${\mathbbm{Z}}_{k}$ 1-form symmetry (the “center symmetry”), which acts naturally on fundamental Wilson loops. Their analysis is relatively straightforward.
In the second class of systems the fundamental fermions transform non-trivially under the center of the gauge group, ${\mathbbm{Z}}^{\rm c}_{N}$, and only the diagonal combination ${\mathbbm{Z}}_{N}\subset{\mathbbm{Z}}^{\rm c}_{N}\times G_{\rm f}$ (where $G_{\rm f}$ is the flavor symmetry group) leaves them invariant. The study of these models requires a careful determination of the global structure of the symmetry group involved.

### 2.1 Gauging a 1-form ${\mathbbm{Z}}_{k}\subset{\mathbbm{Z}}_{N}$ center symmetry

First consider $SU(N)$ theories with an exact center ${\mathbbm{Z}}_{k}\subset{\mathbbm{Z}}_{N}$ symmetry, $k$ being a divisor of $N$, under which the matter fermions do not transform. Examples are pure $SU(N)$ YM theory and adjoint QCD, where $k=N$, or various models with fermions neutral with respect to some ${\mathbbm{Z}}_{k}$, see Sec. 3. The procedure was formulated in [51], building upon some earlier results [48]-[50], and used in [52] for $SU(N)$ Yang-Mills theory at $\theta=\pi$. The methods have been further developed and have found other areas of application [53]-[68].

The 1-form center symmetry can be simply understood in the formalism of principal bundles. Here the gauge and fermion fields are defined locally on open patches $U_{i}$ of our spacetime. These local definitions are glued together by $SU(N)$-valued transition functions, $g_{ij}:U_{i}\cap U_{j}\rightarrow SU(N)$. In particular, $\psi_{i}(x)=R(g_{ij}(x))\psi_{j}(x)\quad x\in U_{i}\cap U_{j}\;,$ (2.1) where $\psi_{i}$ and $\psi_{j}$ are the local expressions of the field $\psi$ (which transforms in the representation $R$) in the patches $U_{i}$ and $U_{j}$. Requiring that the theory is an $SU(N)$ theory (i.e. that the fundamental Wilson loops are meaningful) enforces the cocycle condition, $g_{ij}g_{jk}g_{ki}=\mathbbm{1}\;,$ (2.2) in the triple intersection, $U_{ijk}=U_{i}\cap U_{j}\cap U_{k}$.

In this language a global 1-form symmetry transformation multiplies the transition functions $g_{ij}$ by $\mathbbm{Z}_{k}$ elements, $z_{ij}$ (one for each simple intersection, $U_{ij}=U_{i}\cap U_{j}$), which satisfy their own consistency condition $z_{ij}z_{jk}z_{ki}=1\;.$ (2.3) This transformation is a symmetry of the system if it does not spoil equation (2.1), i.e. if $\mathbbm{Z}_{k}$ does not act on the fermions. However, it can act non-trivially on fundamental Wilson loops. (It acts on non-contractible Wilson loops; global 1-form gauge transformations are therefore indexed by elements of $H^{1}(\mathcal{M},\mathbbm{Z}_{k})$, and the $z_{ij}$ implement a Čech version of this cohomology group.) If one relaxes the cocycle consistency condition, allowing $z_{ij}z_{jk}z_{ki}=z_{ijk}\in\mathbbm{Z}_{k}\;,$ (2.4) one obtains a gauged 1-form symmetry transformation. In this case the condition (2.2) does not make sense, and must be replaced by $g_{ij}g_{jk}g_{ki}=B_{ijk}\in\mathbbm{Z}_{k}\;,$ (2.5) where the new data, $B_{ijk}$, are (a discretized version of) a 2-form connection. (Similarly, the $B_{ijk}$ are representatives of the second Čech cohomology group, $H^{2}(M,\mathbbm{Z}_{k})$; the closedness of $B$ can be seen on quadruple overlaps.) This construction defines an $\frac{SU(N)}{\mathbbm{Z}_{k}}$ gauge bundle. If one considers the $B_{ijk}$ data dynamical, summing over them in the functional integral, one obtains a $\frac{SU(N)}{\mathbbm{Z}_{k}}$ gauge theory.
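Though the original papers work abstractly, the two cocycle conditions above can be illustrated with a toy sketch. The following Python fragment (our own illustration, assuming a hypothetical cover with three patches and a single triple overlap) represents ${\mathbbm{Z}}_{k}$ elements additively as integers mod $k$:

```python
# Toy Cech picture: Z_k elements written additively as integers mod k.
k = 3

def cocycle_defect(z, k):
    # B_123 = z_12 + z_23 + z_31 (mod k): zero for a global 1-form symmetry
    # transformation, an arbitrary Z_k element once the symmetry is gauged.
    return (z[(1, 2)] + z[(2, 3)] + z[(3, 1)]) % k

global_transf = {(1, 2): 1, (2, 3): 2, (3, 1): 0}   # 1 + 2 + 0 = 0 mod 3
gauged_transf = {(1, 2): 1, (2, 3): 1, (3, 1): 0}   # defect 2 in Z_3

print(cocycle_defect(global_transf, k))   # -> 0: condition (2.3) holds
print(cocycle_defect(gauged_transf, k))   # -> 2: nontrivial 2-form data B_123
```

In this additive notation the relaxed condition (2.4)-(2.5) simply means that the defect is allowed to be any element of ${\mathbbm{Z}}_{k}$, which is the (discretized) 2-form connection.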
In [49] and [51] a useful construction is presented that reproduces this gauging in terms of continuous fields. We adopt this description, which is briefly reviewed below. The rough idea is to replace the discrete $\mathbbm{Z}_{k}$ 1-form symmetry with a $U(1)$ 1-form symmetry, at the price of introducing other new degrees of freedom. Gauge-fixing these new degrees of freedom, one can eliminate most of the continuous 1-form symmetry; what remains is the discrete 1-form symmetry.

As a first step, one must introduce a pair of $U(1)$ 2-form and 1-form gauge fields $\big{(}B_{\rm c}^{(2)},B_{\rm c}^{(1)}\big{)}$ (in most of this review a compact differential-form notation is used; for instance, $a\equiv T^{\rm c}A_{\mu}^{\rm c}(x)\,dx^{\mu}$; $F=da+a^{2}$; $F^{2}\equiv F\wedge F=\frac{1}{2}F^{\mu\nu}F^{\rho\sigma}dx_{\mu}dx_{\nu}dx_{\rho}dx_{\sigma}=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}F^{\mu\nu}F^{\rho\sigma}d^{4}x=F^{\mu\nu}{\tilde{F}}_{\mu\nu}d^{4}x$, and so on), such that [51] ${k}B^{(2)}_{\mathrm{c}}=dB^{(1)}_{\mathrm{c}}\;,$ (2.6) satisfying $\displaystyle B^{(2)}_{\mathrm{c}}\to B^{(2)}_{\mathrm{c}}+d\lambda_{\mathrm{c}}\;,\qquad B^{(1)}_{\mathrm{c}}\to B^{(1)}_{\mathrm{c}}+{k}\lambda_{\mathrm{c}}\;.$ (2.7) Here $\lambda_{\mathrm{c}}$ is a 1-form gauge function. The $SU(N)$ gauge field $a$ is embedded into a $U(N)$ field, $\widetilde{a}=a+\frac{1}{k}B^{(1)}_{\mathrm{c}},$ (2.8) and one requires invariance under $U(N)$ gauge transformations. The gauge field tensor $F(a)$ is replaced by $F(a)\to{\tilde{F}}({\tilde{a}})-B^{(2)}_{\mathrm{c}}\;.$ (2.9) This fixes the manner in which these $\mathbb{Z}_{k}^{\rm c}$ gauge fields are coupled to the standard gauge fields $a$.

The matter fields must also be coupled to the $U(N)$ gauge fields, so that the 1-form gauge invariance (2.7) is respected. For a Weyl fermion $\psi$ in the representation $R$ this can be done by taking the kinetic term as ${\bar{\psi}}\,\gamma^{\mu}\left(\partial+R({\tilde{a}})-\frac{{\cal N}(R)}{k}B_{\rm c}^{(1)}\right)_{\mu}P_{L}\,\psi\;,$ (2.10) where $R({\tilde{a}})$ is the appropriate matrix form for the representation and ${\cal N}(R)$ is the $N$-ality of $R$; $P_{L}$ is the projection operator on the left-handed fermions. We introduce an external $U(1)_{\psi}$ gauge field $A_{\psi}$ to study the anomaly, e.g., of a $U(1)_{\psi}$ symmetry $\psi\to e^{i\alpha}\psi$, or of a discrete subgroup of it, and couple it to the fermion as ${\bar{\psi}}\,\gamma^{\mu}\left(\partial+R({\tilde{a}})-\frac{{\cal N}(R)}{k}B_{\rm c}^{(1)}+A_{\psi}\right)_{\mu}P_{L}\,\psi\;.$ (2.11) The standard anomaly calculation for $\psi\to e^{i\alpha}\psi\simeq\psi+i\alpha\psi$ gives $\delta S_{\delta A_{\psi}^{(0)}}=\frac{2\,T(R)}{8\pi^{2}}\int{\mathrm{tr}}F^{2}\,\delta\alpha=2\,T(R)\,{\mathbbm{Z}}\,\delta\alpha\;.$ (2.12) Here ${\mathbbm{Z}}$ is the integer instanton number, and this leads to the well-known result that a discrete subgroup ${\mathbbm{Z}}_{2T(R)}\subset U(1)_{\psi}$ (2.13) remains. $T(R)$ is twice the Dynkin index, $\mathrm{tr}\,T^{a}T^{b}=\delta^{ab}D(R)\;,\qquad D\Big{(}\raisebox{-3.0pt}{\yng(1)}\Big{)}=\frac{1}{2}\;,\qquad T(R)\equiv 2\,D(R)\;.$ (2.14) With $\big{(}B_{\rm c}^{(2)},B_{\rm c}^{(1)}\big{)}$ fields in Eq.
(2.11), the $U(1)_{\psi}$ symmetry can be further broken due to the replacement ${\mathrm{tr}}F^{2}\to{\mathrm{tr}}\big{(}{\tilde{F}}-B^{(2)}_{\rm c}\big{)}^{2}\;.$ (2.15) Indeed, $\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}{\mathrm{tr}}\big{(}{\tilde{F}}-B^{(2)}_{\rm c}\big{)}^{2}=\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}\big{\\{}{\mathrm{tr}}{\tilde{F}}^{2}-N(B^{(2)}_{\rm c})^{2}\big{\\}}\;:$ (2.16) we recall that $B^{(2)}_{\rm c}$ is Abelian, $\propto{\mathbbm{1}}_{N}$, and that ${\mathrm{tr}}{\tilde{F}}=N\,B^{(2)}_{\rm c}$. The first term is an integer. The second is $-\frac{N}{8\pi^{2}}\int_{\Sigma_{4}}\big{(}B^{(2)}_{\rm c}\big{)}^{2}=-\frac{N}{8\pi^{2}k^{2}}\int_{\Sigma_{4}}dB^{(1)}_{\rm c}\wedge dB^{(1)}_{\rm c}=\frac{N}{k^{2}}\,{\mathbbm{Z}}\;,$ (2.17) which is generally fractional. This explains the origin of the various 0-form-1-form (mixed) anomalies and the consequent stronger anomaly conditions in many models discussed in Sec. 3.

### 2.2 Color-flavor locked ${\mathbbm{Z}}_{N}$ center symmetry: Master formula

Subtler situations present themselves when a gauge theory of our interest contains matter Weyl fermions in the fundamental, or antifundamental, representation of the gauge group $SU(N)$. Ordinarily, this means that the center symmetry is lost, leaving no possibility of gauging the 1-form ${\mathbbm{Z}}_{N}$ center symmetry. Actually, in order to consider the ’t Hooft anomalies one must also externally gauge the flavor symmetry group, $G_{\rm f}$. Having done so, in the systems of our interest $SU(N)_{\rm c}\times G_{\rm f}$ is found not to act faithfully. In particular, there is a $\mathbbm{Z}_{N}$ subgroup that leaves all the fields invariant. In other words, there is a $\mathbbm{Z}_{N}$ “color-flavor-locked” 1-form symmetry. (This hinges upon a quite remarkable property of the generalized symmetries, that they are all Abelian. This reflects the fact that it is not possible to define time ordering between two extended operators, hence impossible to define equal-time commutators between them. In the case of particles, how the (equal-time) commutators can arise in the operator formalism, as a limit of time-ordered products taken in different orders, is best explained in Feynman’s book on the path-integral formulation of quantum mechanics [72].)

Similarly to the previous case, gauging of this 1-form symmetry allows us to gauge the faithful symmetry group of the system, $\frac{SU(N)_{\rm c}\times G_{\rm f}}{\mathbbm{Z}_{N}}$. (One should keep in mind that there are “more” configurations in $\frac{SU(N)}{\mathbbm{Z}_{k}}$ ($\frac{SU(N)_{\rm c}\times G_{\rm f}}{\mathbbm{Z}_{N}}$) than in the $SU(N)$ ($SU(N)\times G_{\rm f}$) gauge bundles, i.e. any of the latter always belongs to the former, but not the other way around.)

To introduce this kind of system, and to discuss the method of analysis developed, we consider the concrete example of an $SU(N)$ gauge theory with matter left-handed fermions in the reducible, complex representation, $\yng(2)\oplus(N+4)\,{\bar{{\yng(1)}}}\,$ (2.18) that is, $\displaystyle\psi^{\\{ij\\}}\,,\quad\eta_{i}^{B}\,,\qquad{\footnotesize i,j=1,2,\ldots,N\;,\quad B=1,2,\ldots,N+4}\;,$ (2.19) (which is the simplest of the so-called Bars-Yankielowicz models). This model will be referred to as the “$\psi\eta$” model below. The symmetry group of the model is $G_{\rm f}=SU(N+4)\times U(1)_{\psi\eta}\;,$ (2.20) where $U(1)_{\psi\eta}$ indicates the anomaly-free combination of $U(1)_{\psi}$ and $U(1)_{\eta}$, associated with the two types of matter Weyl fermions of the theory.
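As a quick consistency check (our own sketch, not taken from [70, 71]), one can verify that $U(1)_{\psi\eta}$ is indeed anomaly-free with the charges $\frac{N+4}{2}$ for $\psi$ and $-\frac{N+2}{2}$ for $\eta$ that appear in the kinetic terms below (Eq. (2.35)), using the standard index normalization $T(\text{fund})=\frac{1}{2}$:

```python
from fractions import Fraction

def u1_gauge_anomaly(N):
    # Sum of (Dynkin index) x (U(1) charge) over the Weyl fermions of the
    # psi-eta model; zero means the U(1)_{psi eta} combination is anomaly-free.
    fermions = [
        # (index T(R) with T(fund) = 1/2, U(1) charge, multiplicity)
        (Fraction(N + 2, 2),  Fraction(N + 4, 2), 1),      # psi: symmetric tensor
        (Fraction(1, 2),     -Fraction(N + 2, 2), N + 4),  # eta: anti-fundamentals
    ]
    return sum(T * q * m for T, q, m in fermions)

print([u1_gauge_anomaly(N) for N in range(4, 12)])   # -> all zeros
```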
In this model, the conventional ’t Hooft anomaly matching discussion apparently allows a confining phase, with no condensates, with the full unbroken global symmetry, and with a simple set of massless composite fermions (“baryons”) saturating the anomaly matching equations, see Appendix A. Notably, the anomaly constraints are also consistent with a dynamical Higgs phase, in which the color and (part of) the flavor symmetry are dynamically broken by certain bi-fermion condensates, see Appendix B. The situation is analogous in a large class of chiral gauge theories, to be discussed in Sec. 4 below. Clearly, the conventional ’t Hooft anomaly matching requirement is not powerful enough to discriminate among the possible (confining or dynamical Higgs) vacua.

To go beyond the conventional (perturbative) ’t Hooft anomaly analyses, it is necessary to consider the global properties of the symmetry groups, not only the algebra. For even $N$ the true symmetry group of the model is found to be [70]: $SU(N)_{\rm color}\times G_{\rm f}\;,\qquad G_{\rm f}=\frac{SU(N+4)\times U(1)_{\psi\eta}\times(\mathbb{Z}_{2})_{F}}{\mathbb{Z}_{N}\times\mathbb{Z}_{N+4}}\;,$ (2.21) and not (2.20), where $(\mathbb{Z}_{2})_{F}$ is the fermion parity, $\psi,\eta\to-\psi,-\eta$. Indeed, as promised, there is a subgroup of $SU(N)_{\rm c}\times SU(N+4)\times U(1)_{\psi\eta}\times(\mathbbm{Z}_{2})_{F}$, ${\mathbbm{Z}}_{N}=SU(N)\cap\\{U(1)_{\psi\eta}\times(\mathbb{Z}_{2})_{F}\\}\;,$ (2.22) which leaves the matter fields invariant. (There is another independent subgroup, $\mathbbm{Z}_{N+4}$, which does not act on the matter fields, leading to another $\mathbbm{Z}_{N+4}$ 1-form center symmetry. In [70] the effects of gauging this flavor center symmetry and the resulting mixed anomalies in the $\psi\eta$ model have also been taken into account; none of the main results, however, were found to depend on it.) Here for simplicity we consider only the gauging of the color-flavor locked center symmetry ${\mathbbm{Z}}_{N}$, together with $U(1)_{\psi\eta}$ and $({\mathbbm{Z}}_{2})_{F}$.

The gauge transformation with $\mathrm{e}^{\frac{2\pi\mathrm{i}}{N}}\in\mathbb{Z}_{N}\subset SU(N)$, $\psi\to\mathrm{e}^{\frac{4\pi\mathrm{i}}{N}}\psi\;,\;\qquad\eta\to\mathrm{e}^{-\frac{2\pi\mathrm{i}}{N}}\eta\;,$ (2.23) can be undone by the following $(\mathbb{Z}_{2})_{F}\times U(1)_{\psi\eta}$ transformation: $\psi\to(-1)\,\mathrm{e}^{\mathrm{i}\frac{N+4}{2}\frac{2\pi}{N}}\psi=\mathrm{e}^{-\mathrm{i}\frac{N}{2}\frac{2\pi}{N}}\,\mathrm{e}^{\mathrm{i}\frac{N+4}{2}\frac{2\pi}{N}}\psi\;,\qquad\eta\to(-1)\,\mathrm{e}^{-\mathrm{i}\frac{N+2}{2}\frac{2\pi}{N}}\eta=\mathrm{e}^{\mathrm{i}\frac{N}{2}\frac{2\pi}{N}}\,\mathrm{e}^{-\mathrm{i}\frac{N+2}{2}\frac{2\pi}{N}}\eta\;.$ (2.24) A relevant fact is that the odd elements of $\mathbb{Z}_{N}$ belong to the disconnected component of $U(1)_{\psi\eta}\times(\mathbb{Z}_{2})_{F}$ whereas the even elements belong to the connected component of the identity.
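The statement that (2.23) can be undone by (2.24) is easily verified numerically; the following sketch (our own check, assuming an even $N$, here $N=8$) compares the two sets of phases:

```python
import cmath

def flavor_phases(N, alpha):
    # (Z_2)_F combined with a U(1)_{psi eta} rotation by alpha;
    # psi and eta carry charges (N+4)/2 and -(N+2)/2 respectively.
    return (-cmath.exp(1j * (N + 4) / 2 * alpha),
            -cmath.exp(-1j * (N + 2) / 2 * alpha))

N = 8                                  # any even N works
z_N = (cmath.exp(4j * cmath.pi / N),   # Z_N in SU(N): psi -> e^{4 pi i/N} psi
       cmath.exp(-2j * cmath.pi / N))  #               eta -> e^{-2 pi i/N} eta
print(all(abs(a - b) < 1e-12
          for a, b in zip(z_N, flavor_phases(N, 2 * cmath.pi / N))))   # -> True
```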
The presence of a subgroup which acts trivially means that there is a 1-form global symmetry. Again, in the discrete language introduced before, it acts on the transition functions. In particular, if $g^{\rm c}_{ij}$, $u_{ij}$ and $q_{ij}$ are the transition functions for $SU(N)$, $U(1)_{\psi\eta}$ and $(\mathbbm{Z}_{2})_{F}$, one may introduce some $\mathbbm{Z}_{N}$ transition functions (a $\mathbbm{Z}_{N}$ gauge field), $z_{ij}$, and transform $g_{ij}\rightarrow z_{ij}g_{ij}\;,\quad u_{ij}\rightarrow(z_{ij})^{-1}u_{ij}\;,\quad\text{and}\quad q_{ij}\rightarrow(z_{ij})^{-\frac{N}{2}}\;q_{ij}\;.$ (2.25) If one drops the cocycle condition for $z_{ij}$, one gauges the 1-form symmetry. In this case one must also introduce the 2-form connection (again, an element of $H^{2}(M,\mathbbm{Z}_{N})$), described by the new data $B_{ijk}\in\mathbbm{Z}_{N}$, which are read from the transition functions $g_{ij}g_{jk}g_{ki}=B_{ijk}\;,\quad u_{ij}u_{jk}u_{ki}=(B_{ijk})^{-1}\;,\quad q_{ij}q_{ji}q_{ki}=(B_{ijk})^{-\frac{N}{2}}\;.$ (2.26) This definition assures that all fields (matter, gauge) are well defined in the triple intersections.

Again, let us turn to the continuous language. As a first step, we have to gauge $U(1)_{\psi\eta}\times(\mathbbm{Z}_{2})_{F}$, introducing

1. $A$: the $U(1)_{\psi\eta}$ 1-form gauge field,
2. $A_{2}^{(1)}$: the $({\mathbbm{Z}}_{2})_{F}$ 1-form gauge field,

in addition to the dynamical color gauge $SU(N)$ field, $a$. The gauging of the 1-form discrete $\mathbb{Z}_{N}$ symmetry is done by introducing $NB_{\mathrm{c}}^{(2)}=dB_{\mathrm{c}}^{(1)}\;.$ (2.27) These 2-form gauge fields must be coupled appropriately to the $SU(N)$ gauge field $a$ and to the $U(1)_{\psi\eta}\times(\mathbb{Z}_{2})_{F}$ gauge fields ($A$ and $A_{2}^{(1)}$). We first embed $a$ in a $U(N)$ gauge field $\widetilde{a}$ as $\widetilde{a}=a+\frac{1}{N}B^{(1)}_{\mathrm{c}}$ (2.28) and require the invariance $\displaystyle B_{\mathrm{c}}^{(2)}$ $\displaystyle\to B_{\mathrm{c}}^{(2)}+\mathrm{d}\lambda_{\mathrm{c}}\;,\qquad B_{\mathrm{c}}^{(1)}\to B_{\mathrm{c}}^{(1)}+N\lambda_{\mathrm{c}}\;,$ $\displaystyle\widetilde{a}$ $\displaystyle\to\widetilde{a}+\lambda_{\mathrm{c}}\;.$ (2.29) In these equations $\lambda_{\mathrm{c}}$ is a properly normalized $U(1)$ gauge field, which satisfies its own Dirac quantization condition. To reproduce (2.25) correctly in this continuous language, the $U(1)_{\psi\eta}$ and $(\mathbb{Z}_{2})_{F}$ gauge fields must also transform, $A\to A-\lambda_{\mathrm{c}}\;,\qquad A_{2}^{(1)}\to A_{2}^{(1)}+\frac{N}{2}\lambda_{\mathrm{c}}\;.\;$ (2.30)

The last equation needs a comment, as $A^{(1)}_{2}$ is a $\mathbbm{Z}_{2}$ gauge field, while $\lambda_{\mathrm{c}}$ is a $U(1)$ gauge field. To be precise, it is more correct to proceed as was done with the $SU(N)$ gauge field. In particular one should write a $U(1)$ gauge connection $\tilde{A}_{2}=A_{2}+\frac{1}{2}B^{(1)}_{c},$ (2.31) and impose $\tilde{A}_{2}\rightarrow\tilde{A}_{2}+\frac{N}{2}\lambda_{\mathrm{c}}\;.$ (2.32) As before, $a$ is not a globally defined $SU(N)$ gauge field while $\tilde{a}$ is a correctly normalized $U(N)$ gauge field, and now $\tilde{A}_{2}$ is a correctly normalized $U(1)$ field. One now has an $\frac{SU(N)}{{\mathbbm{Z}}_{N}}$ connection rather than an $SU(N)$ one. It implies that $\frac{1}{2\pi}\int_{\Sigma_{2}}B_{\mathrm{c}}^{(2)}=\frac{n_{1}}{N}\;,\qquad n_{1}\in{\mathbbm{Z}}_{N}\;,$ (2.33) in a closed two-dimensional subspace, ${\Sigma_{2}}$.
On a topologically nontrivial four-dimensional spacetime of Euclidean signature which contains such subspaces one has $\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}(B_{\mathrm{c}}^{(2)})^{2}=\frac{n}{N^{2}}\;,$ (2.34) where $n\in{\mathbbm{Z}}_{N}$. The fermion kinetic term with the background gauge fields is determined by the minimal coupling procedure as $\displaystyle\overline{\psi}\gamma^{\mu}\left(\partial+\mathcal{R}_{\mathrm{S}}(\widetilde{a})+\frac{N+4}{2}A+\tilde{A}_{2}\right)_{\mu}P_{\mathrm{L}}\psi\;$ $\displaystyle+\,\overline{\eta}\gamma^{\mu}\left(\partial+\mathcal{R}_{\mathrm{F}^{*}}(\widetilde{a})-\frac{N+2}{2}A-\tilde{A}_{2}\right)_{\mu}P_{\mathrm{L}}\eta\;.$ (2.35) (with an obvious notation). Note that each of the kinetic terms is invariant under (2.29) and (2.30). The 1-form gauge invariance of our system can be made completely manifest by rewriting the above as $\displaystyle\overline{\psi}\gamma^{\mu}\left(\partial+[\mathcal{R}_{\mathrm{S}}(\widetilde{a})-\frac{2}{N}B_{\mathrm{c}}^{(1)}]+\frac{N+4}{2}[A+\frac{1}{N}B_{\mathrm{c}}^{(1)}]+[\tilde{A}_{2}-\frac{1}{2}B_{\mathrm{c}}^{(1)}]\right)_{\mu}P_{\mathrm{L}}\psi\;$ $\displaystyle+\,\overline{\eta}\gamma^{\mu}\left(\partial+[\mathcal{R}_{\mathrm{F}^{*}}(\widetilde{a})+\frac{1}{N}B_{\mathrm{c}}^{(1)}]-\frac{N+2}{2}[A+\frac{1}{N}B_{\mathrm{c}}^{(1)}]-[\tilde{A}_{2}-\frac{1}{2}B_{\mathrm{c}}^{(1)}]\right)_{\mu}P_{\mathrm{L}}\eta\;.$ (2.36) Written this way, the expression inside each square bracket is invariant under (2.29) and (2.30). This leads to the gauge field strengths for $\psi$ and $\eta$ in the form used in the analysis à la Stora-Zumino descent procedure [73, 74] in [70, 71]. The final answer of course does not depend on the rewriting of the kinetic terms as (2.36); the original form (2.35) is perfectly adequate for the calculation of the anomaly in the more straightforward approach explained below.

Under the fermion parity, such as $\psi\to-\psi$, $\eta\to-\eta$ in the $\psi\eta$ model, the contribution to the $(\mathbbm{Z}_{2})_{F}-[{\mathbbm{Z}}_{N}]^{2}$ anomaly from a fermion in the representation ${R}$ is given by the phase in the partition function, $c_{2}\,\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}{\mathrm{tr}}_{R}\left[\big{(}F(\tilde{a})\big{)}^{2}\right]\,(\pm\pi)$ (2.37) where $c_{2}$ is the ${\mathbbm{Z}}_{2}$ charge of the fermion. In the case of the $\psi\eta$ model, for instance, $c_{2}(\psi)=1$, $c_{2}(\eta)=-1$, see (2.35). Now $\displaystyle{\mathrm{tr}}_{R}\left[\big{(}F(\tilde{a})\big{)}^{2}\right]$ $\displaystyle=$ $\displaystyle{\mathrm{tr}}_{R}\left[\big{(}F(\tilde{a})-B^{(2)}_{\rm c}+B^{(2)}_{\rm c}\big{)}^{2}\right]$ (2.38) $\displaystyle=$ $\displaystyle{\mathrm{tr}}\left[\left(\mathcal{R}_{R}\big{(}F(\tilde{a})-B^{(2)}_{\rm c}\big{)}+{\cal N}(R)B^{(2)}_{\rm c}{\mathbbm{1}}_{d(R)}\right)^{2}\right]$ $\displaystyle=$ $\displaystyle{\mathrm{tr}}\left[\mathcal{R}_{R}\big{(}F(\tilde{a})-B^{(2)}_{\rm c})^{2}+{\cal N}(R)^{2}\big{(}B^{(2)}_{\rm c}\big{)}^{2}{\mathbbm{1}}_{d(R)}\right]\;.$ Here $\mathcal{R}_{R}$ is the matrix form of the representation $R$ and ${\cal N}(R)$ its $N$-ality, and we used the fact that ${\mathrm{tr}}_{R}\big{(}F(\tilde{a})-B^{(2)}_{\rm c}\big{)}=0\;,$ (2.39) valid for an $SU(N)$ element in a general representation. ${\mathbbm{1}}_{d(R)}$ is the $d(R)\times d(R)$ unit matrix ($d(R)$ is the dimension of the representation $R$).
One finds $\displaystyle{\mathrm{tr}}_{R}\left[\big{(}F(\tilde{a})\big{)}^{2}\right]$ $\displaystyle=$ $\displaystyle D(R)\,{\mathrm{tr}}_{F}\left[\big{(}F(\tilde{a})-B^{(2)}_{\rm c}\big{)}^{2}\right]+d(R){\cal N}(R)^{2}\big{(}B^{(2)}_{\rm c}\big{)}^{2}=$ (2.40) $\displaystyle=$ $\displaystyle D(R)\,{\mathrm{tr}}_{F}\left[F({\tilde{a}})\right]^{2}+\left[-D(R)\cdot N+d(R){\cal N}(R)^{2}\right]\big{(}B^{(2)}_{\rm c}\big{)}^{2}\;,$ where $D(R)$ denotes twice the Dynkin index, normalized so that $D=1$ for the fundamental representation (this is the quantity called $T(R)$ in (2.14)). Note that $\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}{\mathrm{tr}}_{F}\left[F({\tilde{a}})^{2}\right]\in{\mathbbm{Z}}\;:$ (2.41) the first term in Eq. (2.40) corresponds to the conventional instanton contribution to the $({\mathbbm{Z}}_{2})_{F}$ anomaly. In all models of interest here, however, the sum of the instanton contributions from the fermions is of the form $({\rm an}\,{\rm even}\,{\rm integer})\times\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}{\mathrm{tr}}_{F}\left[F({\tilde{a}})^{2}\right]\,\times({\pm\pi})=2\pi{\mathbbm{Z}}\;,$ (2.42) which is trivial.

The fact that the $({\mathbbm{Z}}_{2})_{F}$ anomaly is absent in the standard instanton analysis because of a (nonvanishing) even coefficient and the quantized instanton flux, and not because of an algebraic cancellation among different fermions, is of utmost importance. Indeed, the gauging of the 1-form ${\mathbbm{Z}}_{N}$ by the introduction of the 2-form gauge fields $B^{(2)}_{\rm c}$ basically amounts to the fractionalization of the instanton flux à la ’t Hooft, (2.34), and as a consequence, a nonvanishing mixed anomaly involving $({\mathbbm{Z}}_{2})_{F}$ can appear. Thus the non-vanishing mixed $({\mathbbm{Z}}_{2})_{F}-[{\mathbbm{Z}}_{N}]^{2}$ anomaly comes only from the second term of Eq. (2.40), containing the 2-form gauge field, $\boxed{\Delta S^{({\rm Mixed}\,{\rm anomaly})}=(\pm\pi)\cdot\sum_{\rm fermions}c_{2}\,\Big{(}d(R){\cal N}(R)^{2}-N\cdot D(R)\Big{)}\,\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}\big{(}B^{(2)}_{\rm c}\big{)}^{2}\;.}$ (2.43) This is the master formula. The formula (2.43) gives the result for the mixed anomaly in all models considered in [70, 71] at once. For instance, for the $\psi\eta$ model, one gets $\displaystyle\Delta S^{({\rm Mixed}\,{\rm anomaly})}$ (2.44) $\displaystyle=$ $\displaystyle\frac{\pm\pi}{N^{2}}\left[\left(\frac{N(N+1)}{2}\cdot 4-N(N+2)\right)-(N+4)\left(N\cdot 1-N\cdot 1\right)\right]=\pm\pi\;,$ which means that there is a $({\mathbbm{Z}}_{2})_{F}$ anomaly in the presence of the ${\mathbbm{Z}}_{N}$ gauging. More precisely, there is an obstruction to gauging simultaneously ${\mathbbm{Z}}_{N}$, $U(1)_{\psi\eta}$ and $(\mathbb{Z}_{2})_{F}$, while keeping the equivalence ${\mathbbm{Z}}_{N}\subset SU(N)\sim{\mathbbm{Z}}_{N}\subset\\{U(1)_{\psi\eta}\times(\mathbb{Z}_{2})_{F}\\}\;.$ (2.45)

One can repeat the same calculation in the possible confining phase without symmetry breaking, Appendix A. The result is very simple: there is no such anomaly. This can be traced back to the fact that the massless baryons are $SU(N)_{\rm c}$ singlets, and any appearance of $B^{(1)}_{\rm c}$ in their covariant derivatives simply cancels. Clearly this anomaly mismatch forbids confinement without symmetry breaking. The same cannot be said of the dynamical Higgs scenario, see Appendix B, as the color group is broken.
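Since the master formula is purely arithmetic, it is straightforward to check (2.44) numerically. The sketch below (our own) evaluates (2.43) for the $\psi\eta$ model at the minimal fractional flux $\frac{1}{8\pi^{2}}\int(B^{(2)}_{\rm c})^{2}=\frac{1}{N^{2}}$, with $D(R)$ normalized so that $D=1$ for the fundamental, as in (2.44):

```python
from fractions import Fraction

def psi_eta_mixed_anomaly(N):
    # (Z_2)_F - [Z_N]^2 anomaly of the psi-eta model, in units of (+-) pi:
    # sum of c_2 * (d(R) * Nality(R)**2 - N * D(R)) over the fermions,
    # times the minimal 't Hooft flux 1/N**2.
    fermions = [
        # (c_2, dimension d(R), N-ality, D(R) with D(fund) = 1, multiplicity)
        (+1, N * (N + 1) // 2, 2, N + 2, 1),       # psi: symmetric tensor
        (-1, N,                1, 1,     N + 4),   # eta: anti-fundamentals
    ]
    total = sum(c2 * m * (d * nal**2 - N * D)
                for c2, d, nal, D, m in fermions)
    return Fraction(total, N**2)

print([psi_eta_mixed_anomaly(N) for N in (6, 8, 10, 12)])   # -> [1, 1, 1, 1]
```

The result is $1$ (in units of $\pm\pi$) for every even $N$, reproducing (2.44).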
Even though, for concreteness, we discussed above the particularly simple $\psi\eta$ model, the master formula found above is actually applicable to any theory, once the correct symmetry group is identified and the fermion kinetic terms, invariant under the 1-form ${\mathbbm{Z}}_{N}$ gauge symmetry, are written down. The results for the $({\mathbbm{Z}}_{2})_{F}-[{\mathbbm{Z}}_{N}]^{2}$ anomaly in the $\chi\eta$ model as well as in all other generalized Bars-Yankielowicz and Georgi-Glashow models [70], [71] discussed below in Sec. 4 indeed follow straightforwardly from the master formula (2.43) in this way. See Sec. 4 below.

### 2.3 Comments on the paper [78]

In a recent paper [78] the $\psi\eta$ model and the $\chi\eta$ model (in our notation) are studied, and the authors claim that “there is no $({\mathbbm{Z}}_{2})_{F}$ anomaly” of the type discussed in the previous section. Such a statement is, however, intrinsically ambiguous. It is unclear whether the authors’ claim is that there is no $({\mathbbm{Z}}_{2})_{F}$ symmetry, or that there is one but it is nonanomalous. Their argument in their Sec. 2.1 seems to indicate the former; but as we have made explicit here in (2.21) [71], an independent $({\mathbbm{Z}}_{2})_{F}$ symmetry exists, but only in the $SU(N)/{\mathbbm{Z}}_{N}$ gauge theory, not in the original $SU(N)$ theory. Their comments in Sec. 2.5 also seem to be in line with the first reading. But the fact that the $({\mathbbm{Z}}_{2})_{F}$ symmetry we are interested in here coincides with the angle-$2\pi$ space rotation is well known, and it has been taken into account in our papers [70, 71]. Any $({\mathbbm{Z}}_{2})_{F}$ anomaly could be cancelled by a space rotation, so in that sense there would never be a $({\mathbbm{Z}}_{2})_{F}$ anomaly. But this is not the point. As there is no a priori guarantee that Lorentz invariance cannot be dynamically broken, a $({\mathbbm{Z}}_{2})_{F}$ anomaly arising from the gauge dynamics cannot be allowed, if Lorentz invariance is to be maintained. Their discussion in Sec. 3 about the Higgs phase in these models does not contain anything new compared to what we have discussed about the Higgs phase of the $\chi\eta$ and $\psi\eta$ models and of all other BY and GG models; see [71], as well as Sec. 2.5 and Appendices B, D, F, H of this work.

The main point of the paper [78] seems to be in their Sec. 2.2, which apparently leads to the second conclusion: that there is a $({\mathbbm{Z}}_{2})_{F}$ symmetry but that it is nonanomalous. They argue that, by choosing the normalization of the $\mathbbm{Z}_{2}$ gauge field as $\int_{\Sigma}dA_{2}^{(1)}=2\pi\,{\mathbbm{Z}}\;,$ (2.46) (a formula in the line below Eq. (2.13) of [78]), the relation $2A_{2}^{(1)}-B^{(1)}_{\mathrm{c}}-B^{(1)}_{\mathrm{f}}=dA_{2}^{(0)}\;,$ (2.47) leads, by taking the derivative of both sides, to $2\,\mathrm{d}A_{2}^{(1)}-{N}B^{(2)}_{\mathrm{c}}-(N+4)B^{(2)}_{\mathrm{f}}=0\;,$ (2.48) and hence to the constraint on the fluxes of the $\mathbbm{Z}_{2}$ and $\mathbbm{Z}_{N+4}$ gauge fields, $\int_{\Sigma_{2}}N\,B^{(2)}_{\mathrm{c}}+\int_{\Sigma_{2}}(N+4)\,B^{(2)}_{\mathrm{f}}=4\pi k\;,\qquad k\in{\mathbbm{Z}}\;.$ (2.49) If one chooses not to introduce the 1-form gauging $B^{(2)}_{\mathrm{f}}$ (as we did in [71]) one would simply get $\int_{\Sigma_{2}}N\,B^{(2)}_{\mathrm{c}}=4\pi k\;,\qquad k\in{\mathbbm{Z}}\;,$ (2.50) and our anomaly (2.43) would indeed disappear. The rest of Sec. 2 in [78] all follows from the normalization (2.46).
However, (2.46) means that their background $\mathbbm{Z}_{2}$ gauge field corresponds to $\psi\to\psi\;,\qquad\eta\to\eta\;,$ (2.51) i.e., to no transformation (the trivial element of $\mathbbm{Z}_{2}$). The fact that one finds no anomaly in such a background is certainly correct, but it is not what one is interested in. The correct normalization for a $\mathbbm{Z}_{2}$ gauge field is the one we have adopted, $\oint A_{2}^{(1)}=\frac{2\pi m}{2}\;,\qquad m\in{\mathbbm{Z}}\;,$ (2.52) which (for the nontrivial element) corresponds to the holonomy $\psi\to-\psi\;,\qquad\eta\to-\eta\;.$ (2.53) This leads to (we ignore the ${\mathbbm{Z}}_{N+4}$ gauge field) $\oint dx^{\mu}\big{(}2A_{2}^{(1)}-B^{(1)}_{\mathrm{c}}\big{)}_{\mu}=\oint dA_{2}^{(0)}=2\pi n\;,\qquad n\in{\mathbbm{Z}}\;,$ (2.54) and $\int_{\Sigma_{2}}N\,B^{(2)}_{\mathrm{c}}=2\pi k\;,\qquad k\in{\mathbbm{Z}}\;,$ (2.55) which produces the anomaly (2.43). In other words, our assumption is that it is possible to choose the smallest flux of $B^{(2)}$ compatible with the Dirac flux quantization for $B^{(1)}$, i.e. $\int_{\Sigma}B^{(2)}=\frac{1}{N}\int_{\Sigma}dB^{(1)}=\frac{2\pi}{N}\;,$ (2.56) without any topological obstruction. In the discrete language, the analogous assumption is to be able to choose $B_{ijk}$ to be any element in $H^{2}(\mathcal{M},\mathbbm{Z}_{N})$.

Actually, if one insists on working with the theory on a smooth manifold, without any topological defect for the $\mathbbm{Z}_{2}$ gauge field $A_{2}$, the assumption made above cannot be maintained, as pointed out by ourselves [70]. And this seems to be the point on which the authors of [78] are trying to make a clean mathematical statement. However, $A_{2}$ is not a proper $(\mathbbm{Z}_{2})_{F}$ gauge field, just as $a$ is not an $SU(N)$ gauge field. In particular its cocycle condition in triple overlaps might fail, leading to curvature-like insertions at discrete points. Moreover, the 1-form gauging of $\mathbbm{Z}_{N}$ invalidates the naive Dirac quantization condition for $A_{2}$, as can be checked directly: if one separates a 2D cycle $\Sigma$ through the curve $\gamma$ into $\Sigma_{1}\cup\Sigma_{2}$, one obtains $\displaystyle\int_{\Sigma}dA_{2}$ $\displaystyle=$ $\displaystyle\int_{\Sigma_{1}}dA_{2}+\int_{\Sigma_{2}}dA_{2}=\int_{\gamma}(A_{2})_{\Sigma_{1}}-\int_{\gamma}(A_{2})_{\Sigma_{2}}$ (2.57) $\displaystyle=$ $\displaystyle\frac{N}{2}\int_{\gamma}\lambda_{12}=k\pi\;,\qquad k\in{\mathbbm{Z}}$ (see (2.30)), since $\lambda_{12}$ (the 1-form gauge transition function) is a $\mathbbm{Z}_{N}$ 1-form gauge field, satisfying $\oint_{\gamma}\lambda=\frac{2\pi}{N}\;.$ (2.58) (The simple fact that the $\mathbbm{Z}_{2}$ gauge field transforms non-trivially and changes from one patch to another means that the gauging of the 1-form symmetry has been appropriately implemented. Indeed, thanks to (2.57), a Wilson loop of the form $e^{\oint A_{2}}$ is not 1-form gauge invariant, i.e., it is not a proper line operator. This is where we differ from part of the analysis of [78].) This leads to the more general flux quantization (2.56), and allows one to insert an odd number of flux insertions in the surface.

To recapitulate, in half of [78] the authors argue that there is no $(\mathbbm{Z}_{2})_{F}$ symmetry; in the other half, they discuss background $(\mathbbm{Z}_{2})_{F}$-${\mathbbm{Z}}_{N}$ 1-form gauge fields corresponding, however, to the trivial element of the $(\mathbbm{Z}_{2})_{F}$ transformation, finding no anomaly.
### 2.4 Comments on the papers [79, 80]

Two interesting papers appeared recently, which discuss the $\psi\eta$ and $\chi\eta$ models. The authors of [79, 80] start from the ${\cal N}=1$ supersymmetric version of the models, and introduce a particular (“anomaly-mediated”) supersymmetry-breaking perturbation. In the second paper (on the $\psi\eta$ model) this is done by making use of the known (Seiberg) duality for these systems, at the origin of the moduli space of the model. In the $\psi\eta$ model, with $SU(N)$ gauge group and with the global symmetry group $G_{\rm f}=SU(N+4)\times U(1)_{\psi\eta}\;,$ (2.59) the authors claim [80] that for $N\geq 21$ the global symmetry is broken to $SO(N)$, with no massless composite fermions, whereas for $N<21$ the system flows into a conformal fixed point in the IR. For the $\chi\eta$ model, with odd $N$, they argue [79] that the global symmetry $SU(N-4)$ is spontaneously broken to $Sp(N-5)$. For $N$ even the unbroken symmetry is claimed to be $Sp(N-4)$.

We shall not go into the details and merits of their analyses, but will make only a few general comments on their use of supersymmetric models as the starting point of the analysis. First of all, in the supersymmetric versions of these models there are often nontrivial quantum moduli spaces of vacua (vacuum degeneracies, or flat directions), whereas in the nonsupersymmetric chiral models we are studying here the vacuum is always unique and strongly coupled. It is a nontrivial question which point in the moduli space of the supersymmetric theory (apart from which perturbation to use) is the correct one to choose to start the analysis. (This subtle problem is discussed in [81] in a slightly different but basically similar context, that of perturbing an ${\cal N}=2$ supersymmetric model to an ${\cal N}=0$ (nonsupersymmetric) model, in the attempt to find the correct infrared dynamics of the nonsupersymmetric $SU(2)$ theories with different numbers of (adjoint) flavors.)

Secondly, all bifermion condensates such as $\langle\psi\eta\rangle$ (in the $\psi\eta$ model) and $\langle\chi\eta\rangle$ and $\langle\chi\chi\rangle$ (in the $\chi\eta$ model), which are analogues of the quark condensate in QCD and play central roles in the (candidate) Higgs vacua of these models, are forbidden in the supersymmetric versions of the models, as can easily be proven by use of supersymmetric Ward-Takahashi identities [82]. In other words, these condensates are absent unless supersymmetry is dynamically broken, which does not occur in general supersymmetric chiral gauge theories [82]. Also, in supersymmetric models, the global symmetry breaking occurs due to the condensation of scalar fields, which do not exist in nonsupersymmetric theories. Because of all this, the infrared dynamics of supersymmetric and nonsupersymmetric theories are usually very different, even though the gauge group and the global symmetry group are the same. Strong bifermion condensates such as $\langle\psi\eta\rangle\sim\Lambda^{3}$ or $\langle\chi\eta\rangle\sim\Lambda^{3}$ are intrinsically nonperturbative effects. They cannot be found via small perturbations in a theory in which they vanish by symmetries. The crucial question of whether or not a phase transition occurs when the supersymmetry-breaking mass parameters reach some critical values seems to be unanswered in [79, 80].

### 2.5 Higgs phase and anomaly-matching

As said above, it is possible to satisfy the standard ’t Hooft anomaly matching also in the Higgs phase, see Appendix B.
Even though these results are known from the earlier works [6]-[19] and [71], the remarkable way it works, as compared to the matching equations in the ”confining vacua”, is perhaps not generally known. The Higgs phase of these chiral theories is, in general, described by massless NG bosons together with some massless fermions. These fermions saturate the conventional ’t Hooft anomaly triangles with respect to the unbroken flavor symmetries. The way they do so is, however, quite remarkable, and in our view, truly significant. As can be seen from Table 3, Table 5, and the similar Tables 7, 8, 10 and 11 for the generalized BY and GG models (in Appendices B, D, F, H), the sets of fermions remaining massless in the UV and those in the IR are identical in their quantum numbers, charges, and multiplicities. Therefore, the matching of anomalies (in the unbroken global symmetries) is completely automatic, and natural. No arithmetic equations need be solved. We may further argue that in this way the system ”solves” ’t Hooft’s anomaly-matching conditions in the true sense. Note that this solution (Higgs-phase vacua, with given sets of condensates) is stable, in the sense that any extra (1-form) gauging or possible new mixed anomalies would not introduce any new constraints: the matching continues to be automatic.

## 3 Physics of models with ${\mathbbm{Z}}_{k}\subset{\mathbbm{Z}}_{N}$ center symmetry

In this section we review the study of symmetry breaking implied by the various mixed anomalies of the type ${\mathbbm{Z}}_{\ell}^{(0)}-\big{[}{\mathbbm{Z}}_{k}^{(1)}\big{]}^{2}$, where ${\mathbbm{Z}}_{\ell}^{(0)}$ is some 0-form (ordinary) discrete symmetry, and ${\mathbbm{Z}}_{k}^{(1)}$ is a 1-form symmetry based on a subgroup ${\mathbbm{Z}}_{k}\subset{\mathbbm{Z}}_{N}$ of the color $SU(N)$ center. The method of analysis has already been explained in Sec. 2.1. ${\mathbbm{Z}}_{\ell}$ and ${\mathbbm{Z}}_{k}$ depend on the particular model considered, but as we will see, many interesting models can be analyzed this way [69].

### 3.1 Self-adjoint antisymmetric tensor matter

We start with a class of $SU(N)$ gauge theories ($N$ even) where the matter fermions are in the rank-$\frac{N}{2}$, fully antisymmetric representation. There is a 1-form ${\mathbbm{Z}}_{\frac{N}{2}}^{\rm c}$ center symmetry present, and we wish to know if some mixed ’t Hooft anomaly with the 0-form (ordinary) symmetries might arise.

#### 3.1.1 $SU(6)$ gauge group

Our first example is an $N=6$ theory with $N_{\rm f}$ flavors of Weyl fermions in the representation ${\underline{20}}\,=\,\yng(1,1,1)\ .$ (3.1) This ($SU(6)$) is the simplest nontrivial case of interest, as we will see. Moreover, if $N_{\rm f}\leq 10$ the theory is asymptotically free, and discussions basically similar to those below can be worked out [69]. There is a $U(1)_{\psi}$ global symmetry, in all these cases, broken by the instantons to a global discrete ${\mathbbm{Z}}_{6N_{\rm f}}^{\psi}$ symmetry, which is further broken by the 1-form gauging to ${\mathbbm{Z}}_{2N_{\rm f}}^{\psi}$. This last step is due to a mixed ’t Hooft anomaly, that is, an obstruction to gauging such a ${\mathbbm{Z}}_{\frac{N}{2}}^{\rm c}$ discrete center symmetry together with the global ${\mathbbm{Z}}_{6N_{\rm f}}^{\psi}$ symmetry. Below, we will take $N_{\rm f}=1$. The model was studied first in [59] and then in [69].
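As a quick arithmetic check of the two statements above (the asymptotic-freedom bound and the instanton breaking $U(1)_{\psi}\to{\mathbbm{Z}}_{6N_{\rm f}}^{\psi}$), here is a minimal Python sketch, ours and purely illustrative; it uses the standard index formula, twice the Dynkin index of the rank-$k$ antisymmetric representation of $SU(N)$ being $\binom{N-2}{k-1}$, together with $b_{0}=\frac{11N-2N_{\rm f}T_{R}}{3}$ (cf. (3.13)-(3.14) below):

```python
from math import comb

N, k = 6, 3                  # SU(6), rank-3 antisymmetric representation (the 20)
twoT = comb(N - 2, k - 1)    # 2*T_R = binom(4, 2) = 6
print(twoT)                  # -> 6: instantons break U(1)_psi down to Z_{6 Nf}

# asymptotic freedom: b0 = (11*N - 2*Nf*T_R)/3 > 0, i.e. 11*N > Nf * 2*T_R
print(max(Nf for Nf in range(1, 20) if 11 * N > Nf * twoT))   # -> 10
```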
This model has a nonanomalous ${\mathbbm{Z}}_{6}^{\psi}$ symmetry, ${\mathbbm{Z}}_{6}^{\psi}\,:\quad\psi\to e^{\frac{2\pi i}{6}j}\psi\ ,\qquad j=1,2,\dots,6\ ,$ (3.2) which is a subgroup of $U(1)_{\psi}$. The system also has an exact center symmetry acting on Wilson loops as ${\mathbbm{Z}}_{3}^{\rm c}\,:\quad e^{i\oint A}\to e^{\frac{2\pi i}{6}k}e^{i\oint A}\ ,\qquad k=2,4,6\ ,$ (3.3) which does not act on $\psi$. By introducing the ${\mathbbm{Z}}_{3}^{\rm c}$ gauge fields, $3B^{(2)}_{\mathrm{c}}=dB^{(1)}_{\mathrm{c}}\;,$ (3.4) use of (2.15)-(2.17) gives, for the anomalous ${\mathbbm{Z}}_{6}^{\psi}$ transformation, $\left(\frac{6}{8\pi^{2}}\int{\mathrm{tr}}{\tilde{F}}^{2}-\frac{6N}{8\pi^{2}}\int(B_{\rm c}^{(2)})^{2}\right)\delta A_{\psi}^{(0)}\;,\qquad\delta A_{\psi}^{(0)}=\frac{2\pi{\mathbbm{Z}}_{6}^{\psi}}{6}\;.$ (3.5) The first term in (3.5) is trivial, as $\frac{1}{8\pi^{2}}\int{\mathrm{tr}}{\tilde{F}}^{2}\in{\mathbbm{Z}}\,,\qquad A_{\psi}=dA_{\psi}^{(0)}\;,$ (3.6) (this is the standard instanton anomaly, reducing $U(1)_{\psi}\longrightarrow{\mathbbm{Z}}_{6}^{\psi}$). Due to the second term in (3.5), $\delta A_{\psi}^{(0)}$ now gets multiplied by ($N=6$ here) $-\frac{6N}{8\pi^{2}}\int\big{(}B_{\rm c}^{(2)}\big{)}^{2}=-6N\Big{(}\frac{1}{3}\Big{)}^{2}{\mathbbm{Z}}=-6\,\frac{2}{3}{\mathbbm{Z}}\ .$ (3.7) We see the reduction of the global chiral ${\mathbbm{Z}}_{6}^{\psi}$ symmetry $\delta A_{\psi}^{(0)}=\frac{2\pi\ell}{6}\ ,\qquad\ell=1,2,\ldots,6$ (3.8) to its subgroup ($\ell=3,6$), ${\mathbbm{Z}}_{6}^{\psi}\longrightarrow{\mathbbm{Z}}_{2}^{\psi}\ .$ (3.9) Thus it is not possible that the vacuum of this system is confining, has a mass gap, and has no condensates breaking the ${\mathbbm{Z}}_{6}^{\psi}$ symmetry. What are the implications of (3.9) on the phase of the theory? First of all, it implies a threefold vacuum degeneracy, under the assumption that the system confines (with mass gap) and that no massless fermions are present in the infrared, on which $\frac{{\mathbbm{Z}}_{6}^{\psi}}{{\mathbbm{Z}}_{2}^{\psi}}$ can act. A natural assumption is that there are some condensates which ”explain” such a reduction of the symmetry in the infrared. However, physics depends on which condensates form. The simplest assumption is that a bi-fermion condensate $\langle\psi\psi\rangle\sim\Lambda^{3}\neq 0$ (3.10) forms. As $\psi\in{\underline{20}}$, a scalar bi-fermion composite may be in one of the irreducible representations of $SU(6)$ appearing on the right hand side of the decomposition $\yng(1,1,1)\otimes\yng(1,1,1)=\yng(1,1,1,1,1,1)\oplus\yng(2,1,1,1,1)\oplus\ldots\;.$ (3.11) The most attractive channel would be the first, the singlet ${\underline{1}}$, but it vanishes by Fermi statistics. This leaves us with the second best possibility, that $\psi\psi$ in the adjoint representation acquires a VEV (dynamical Higgs mechanism) [4, 18, 19]. Note that even though such a condensate should necessarily be understood as a gauge-dependent form of some gauge-invariant VEV, it unambiguously determines the breaking of the global discrete chiral symmetry as in (3.9), where the broken symmetry $\frac{{\mathbbm{Z}}^{\psi}_{6}}{{\mathbbm{Z}}^{\psi}_{2}}$ acts on the degenerate vacua, permuting them. The reason for this is that, as the global symmetry group ${\mathbbm{Z}}_{6}^{\psi}$ commutes with the color $SU(6)$, a gauge transformation cannot undo the nontrivial transformation of the condensate under ${\mathbbm{Z}}_{6}^{\psi}$.
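The reduction (3.9) can also be checked by brute force: by (3.7), the ${\mathbbm{Z}}_{6}^{\psi}$ element $\ell$ produces an anomalous phase $2\pi\cdot\frac{4\ell}{6}$ (times an integer), which is trivial mod $2\pi$ only for $\ell=3,6$. A one-line sketch, ours and purely illustrative:

```python
from fractions import Fraction

# phase of element l, in units of 2*pi: 4*l/6 (cf. (3.7)-(3.8));
# the element survives the gauging iff this phase is an integer
print([l for l in range(1, 7) if Fraction(4 * l, 6).denominator == 1])  # -> [3, 6]
```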
Of course, four-fermion, gauge-invariant condensates such as $\langle\psi\psi\psi\psi\rangle\neq 0\;,\qquad{\rm or}\qquad\langle{\bar{\psi}}{\bar{\psi}}\psi\psi\rangle\neq 0\;,$ (3.12) could also form, the first of which also breaks ${\mathbbm{Z}}^{\psi}_{6}$ as in (3.9). (We however do not share the view expressed in [59] that the gauge non-invariance of the bi-fermion composite $\psi\psi$ means $\langle\psi\psi\rangle=0$, $\langle\psi\psi\psi\psi\rangle\neq 0$.) The bi-fermion $\psi\psi$ condensate being in the adjoint representation, it is possible that physics in the infrared is described by full dynamical Abelianization [18, 19]. The low-energy theory could be an Abelian $U(1)^{5}$ theory. In such a case, although the infrared theory may look trivial, there is a remnant of the ${\mathbbm{Z}}_{6}$ symmetry of the UV theory. Domain walls connecting the three vacua would exist, and nontrivial infrared $3$D physics can appear there. $SU(6)$ models with $N_{\rm f}\geq 2$ have also been studied. Physics implications of the mixed anomaly turn out to depend quite nontrivially on the value of $N_{\rm f}$ [69].

#### 3.1.2 $SU(N)$ models

We next consider the $SU(N)$ ($N$ general, even) theory, with left-handed fermions $\psi$ in the self-adjoint, totally antisymmetric representation. It exhibits some interesting features of the generalized anomalies. The first coefficient of the beta function is $b_{0}=\frac{11N-2N_{\rm f}T_{R}}{3}\ .$ (3.13) Twice the Dynkin index is given by $2\,T_{R}={{N-2}\choose{\frac{N-2}{2}}}\ .$ (3.14) The table below gives $2T_{R}$ and the dimension $d(R)$ for some even values of $N$:

| $N$ | $4$ | $6$ | $8$ | $10$ | $12$ |
|---|---|---|---|---|---|
| $2\,T_{R}$ | $2$ | $6$ | $20$ | $70$ | $252$ |
| $d(R)$ | $6$ | $20$ | $70$ | $252$ | $924$ |

(3.15)

We limit ourselves to asymptotically free theories ($N\leq 10$). The system has an exact 1-form symmetry: ${\mathbbm{Z}}_{\frac{N}{2}}^{\rm c}\,:\quad e^{i\oint A}\to e^{\frac{2\pi i}{N}k}\,e^{i\oint A}\ ,\qquad k=2,4,\ldots N\ ,$ (3.16) as well as a global discrete symmetry: ${\mathbbm{Z}}_{2T_{R}}\,:\quad\psi\to e^{\frac{2\pi i}{2T_{R}}j}\,\psi\ ,\qquad j=1,2,\ldots 2T_{R}\,.$ (3.17) By introducing a pair of gauge fields $\big{(}B_{\rm c}^{(2)},B_{\rm c}^{(1)}\big{)}$, $\frac{N}{2}B_{\rm c}^{(2)}=dB_{\rm c}^{(1)}\ $ (3.18) (cf. Sec. 3.1.1), one arrives at the conclusion that the phase of the partition function is transformed by $-\frac{2\pi k}{2T_{R}}2T_{R}\frac{4}{N}{\mathbbm{Z}}=-{2\pi k}\frac{4}{N}{\mathbbm{Z}}\ ,\qquad k=1,2,\dots,2T_{R}\ ,$ (3.19) under ${\mathbbm{Z}}_{2T_{R}}$. In other words, ${\mathbbm{Z}}_{2T_{R}}$ becomes in general anomalous. The consequence of this mixed anomaly however depends on $N$ nontrivially: (i) For $N=4$, the mixed anomaly vanishes: $\frac{4}{N}=1\ .$ (3.20) (ii) For $N=4\ell$, $\ell\geq 2$, instead, $\frac{4}{N}=\frac{1}{\ell}\ ,$ (3.21) and the discrete symmetry breaking takes the form ${\mathbbm{Z}}_{2T_{R}}^{\psi}\longrightarrow{\mathbbm{Z}}_{\frac{2T_{R}}{\ell}}^{\psi}\;.$ (3.22) For $N=4\ell$, we note that $2T_{R}$ is an integer multiple of $\ell$. (iii) Finally, for $N=4\ell+2$, $2T_{R}\cdot\frac{4}{N}=2T_{R}\cdot\frac{2}{2\ell+1}\ ,$ (3.23) thus the breaking of the discrete symmetry is ${\mathbbm{Z}}_{2T_{R}}^{\psi}\longrightarrow{\mathbbm{Z}}_{\frac{2T_{R}}{2\ell+1}}^{\psi}\;.$ (3.24) As a check, for $N=4\ell+2$, $2\ell+1$ is a divisor of $2T_{R}$ (see the Appendix of [69]).
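A minimal Python sketch, ours and purely illustrative, reproduces the table (3.15) and verifies the pattern of cases (i)-(iii), including the divisibility statements needed for (3.22) and (3.24):

```python
from math import comb, gcd

for N in (4, 6, 8, 10, 12):
    print(N, comb(N - 2, (N - 2) // 2), comb(N, N // 2))   # rows of (3.15)

# elements k of Z_{2T_R} surviving the gauging: those with 4*k/N an integer
for N in (4, 8, 12, 6, 10):
    twoT = comb(N - 2, (N - 2) // 2)
    surviving = [k for k in range(1, twoT + 1) if (4 * k) % N == 0]
    # the unbroken subgroup has order 2T_R * gcd(4, N) / N, cf. cases (i)-(iii)
    assert len(surviving) == twoT * gcd(4, N) // N
    print(N, twoT, len(surviving))
```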
The systematics of the different cases (i)-(iii) above can be nicely understood in terms of the properties of the fractional instantons (torons) of this model. See [69].

### 3.2 Adjoint QCD

$SU(N)$ theories with $N_{\rm f}$ Weyl fermions $\lambda$ in the adjoint representation are sometimes called ”adjoint QCD”. These systems have been extensively studied by using different methods: by semi-classical analysis [83], by direct lattice simulations [84], and more recently, by studying the mixed anomalies [52, 53, 56]. See also [85], and the more recent work [60, 61]. Here the color ${\mathbbm{Z}}_{N}^{\rm c}$ center 1-form symmetry is exact, and can be fully gauged. The system also has a nonanomalous ordinary ($0$-form) discrete chiral symmetry, ${\mathbbm{Z}}_{2N_{\rm f}N}^{\lambda}:\quad\lambda\to e^{\frac{2\pi i}{2N_{\rm f}N}k}\lambda\ ,\qquad k=1,2,\ldots,2N_{\rm f}N\;,$ (3.25) as is well known. A set of gauge fields is introduced:

* $A_{\lambda}$: ${\mathbbm{Z}}_{2N_{\rm f}N}^{\lambda}$ 1-form gauge field, for (3.25);
* $B^{(2)}_{\rm c}$: $\mathbb{Z}_{N}^{\rm c}$ 2-form gauge field.

${\mathbbm{Z}}_{2N_{\rm f}N}^{\lambda}$ is found to induce a phase change in the partition function, $\left(\frac{2NN_{\rm f}}{8\pi^{2}}\int{\mathrm{tr}}{\tilde{F}}^{2}-\frac{2N^{2}N_{\rm f}}{8\pi^{2}}\int(B_{\rm c}^{(2)})^{2}\right)\delta A_{\lambda}^{(0)}\ ,$ (3.26) $\delta A_{\lambda}^{(0)}\in\frac{2\pi}{2NN_{\rm f}}{\mathbbm{Z}}\;.$ (3.27) The first term conserves ${\mathbbm{Z}}_{2NN_{\rm f}}^{\lambda}$; the second term, $\Delta S(\delta A_{\lambda}^{(0)})\in\frac{2\pi}{N}{\mathbbm{Z}}\ ,$ (3.28) breaks the chiral discrete symmetry further: ${\mathbbm{Z}}_{2NN_{\rm f}}^{\lambda}\longrightarrow{\mathbbm{Z}}_{2N_{\rm f}}^{\lambda}\ ,$ (3.29) as found in [52, 53, 56]. The case of $SU(2)$, $N_{\rm f}=2$, is of particular interest. In this case, the discrete chiral symmetry ${\mathbbm{Z}}_{8}^{\lambda}$ is broken by the 1-form ${\mathbbm{Z}}_{2}^{\rm c}$ gauging as ${\mathbbm{Z}}_{8}^{\lambda}\longrightarrow{\mathbbm{Z}}_{4}^{\lambda}\ .$ (3.30) The invariance of the standard $SU(2)$ theory under $\lambda\to e^{\pm\frac{2\pi i}{8}}\lambda\ ,$ (3.31) becomes anomalous. What is the implication of these results on the physics in the infrared? A familiar lore about the infrared dynamics of this system (see for instance [85]) is that a condensate $\langle\lambda^{\\{I}\lambda^{J\\}}\rangle\neq 0\ ,\qquad SU(2)_{\rm f}\longrightarrow SO(2)_{\rm f}\;$ (3.32) ($I,J=1,2$ being the flavor $SU(2)_{\rm f}$ indices) forms. That would lead to four-fold degenerate vacua, and in each of them, two NG bosons. In an interesting work [56] Anber and Poppitz have proposed that the system may instead develop a four-fermion condensate, $\langle\lambda\lambda\lambda\lambda\rangle\neq 0\ ,\quad{\rm with}\quad\langle\lambda\lambda\rangle=0\;.$ (3.33) Such a condensate breaks ${\mathbbm{Z}}_{8}^{\lambda}$ spontaneously to ${\mathbbm{Z}}_{4}^{\lambda}$, leaving only doubly degenerate $SU(2)_{\rm f}$ symmetric vacua (and no NG bosons). Massless baryons of spin $\tfrac{1}{2}$, $B\sim\lambda\lambda\lambda\;$ (3.34) (necessarily a doublet of the unbroken $SU(2)_{\rm f}$), should appear in the infrared spectrum to saturate all the conventional ’t Hooft and Witten anomaly matching conditions.
The action of the broken ${\mathbbm{Z}}_{8}^{\lambda}/{\mathbbm{Z}}_{4}^{\lambda}$ is a permutation of the two degenerate vacua, $\langle\lambda\lambda\lambda\lambda\rangle\to-\langle\lambda\lambda\lambda\lambda\rangle\ .$ (3.35) As usual in an anomaly-matching discussion, one can tell that some dynamical scenario is consistent, but not that such a vacuum is necessarily realized. It remains to establish which of the familiar $SO(2)_{\rm f}$ symmetric vacuum and the proposed $SU(2)_{\rm f}$ symmetric one is actually realized. The adjoint QCD with general $N$ reduces to ${\cal N}=1$ supersymmetric Yang-Mills theory in the special case of $N_{\rm f}=1$. A great number of results on nonperturbative aspects are known there [37, 86, 82, 87]. Note that in this case (3.29) leads to an $N$-fold vacuum degeneracy, in agreement with the well-known Witten index of pure ${\cal N}=1$ $SU(N)$ Yang-Mills theory. One may also start from the ${\cal N}=2$ supersymmetric $SU(2)$ Yang-Mills theory, where many exact results for the infrared physics are known [38, 39, 44]. It can be deformed to the ${\cal N}=1$ theory by an adjoint-scalar mass perturbation, which yields confining, chiral-symmetry-breaking vacua. Exact calculations of the gaugino condensates $\langle\lambda\lambda\rangle$ from this viewpoint can be found in [88, 89]. The pure ${\cal N}=2$ theory could also be perturbed directly to ${\cal N}=0$ [81]. In principle, such an approach can give hints about $N_{\rm f}=2$ adjoint QCD, even though it is not a simple task to identify correctly the vacuum which can be reached by such a deformation.

### 3.3 QCD with ”tensor quarks”

We now move to theories with matter fermions in $N_{\rm f}$ pairs of $\psi,{\tilde{\psi}}=\yng(2)\oplus{\bar{\yng(2)}}$ (3.36) or $\psi,{\tilde{\psi}}=\yng(1,1)\oplus{\bar{\yng(1,1)}}\;.$ (3.37) For reference, the standard QCD quarks are in $\yng(1)\oplus{\bar{\yng(1)}}$. The first beta-function coefficient is $b_{0}=\frac{11N-2N_{\rm f}(N\pm 2)}{3}\ .$ (3.38) As the $k=\frac{N}{2}$ element of the center ${\mathbbm{Z}}_{N}$ does not act on these fermions, there is a ${\mathbbm{Z}}_{2}^{\rm c}\subset{\mathbbm{Z}}_{N}^{\rm c}$ (3.39) center symmetry. (This model has been considered by Cohen [90], in particular in relation with such a center symmetry, and concerning the possible existence of an order parameter for confinement.) Also there is a discrete axial symmetry subgroup ${\mathbbm{Z}}_{2N_{\rm f}(N\pm 2)}^{\psi}\,:\quad\psi\to e^{\tfrac{2\pi i}{2N_{\rm f}(N\pm 2)}}\,\psi\ ,\quad{\tilde{\psi}}\to e^{\tfrac{2\pi i}{2N_{\rm f}(N\pm 2)}}\,{\tilde{\psi}}\,,$ (3.40) respected by instantons. The $\pm$ refer respectively to the two types of models, Eq. (3.36) and Eq. (3.37). Let us study for simplicity the $N_{\rm f}=1$ theory: the analysis is similar to the cases discussed in the preceding sections. The anomaly is given by $\left(-\frac{2(N\pm 2)}{8\pi^{2}}{\mathrm{tr}}{\tilde{F}}^{2}+\frac{2N(N\pm 2)}{8\pi^{2}}(B_{\rm c}^{(2)})^{2}\right)\delta A_{\psi}^{(0)}$ (3.41) where $\delta A_{\psi}^{(0)}\in\frac{2\pi}{2(N\pm 2)}{\mathbbm{Z}}_{2(N\pm 2)}\;.$ (3.42) Since $\frac{2(N\pm 2)}{8\pi^{2}}\int{\mathrm{tr}}{\tilde{F}}^{2}\in 2(N\pm 2){\mathbbm{Z}}\ ,$ (3.43) the first term of (3.41) is trivial.
By using $\frac{1}{8\pi^{2}}\int(B_{\rm c}^{(2)})^{2}\in\frac{1}{4}{\mathbbm{Z}}\ ,$ (3.44) the second term gives an anomaly $A=2\pi\frac{N}{4}\,{\mathbbm{Z}}\;.$ (3.45) We therefore find no anomaly for $N=4\ell$; for $N=4\ell+2$, the (1-form) gauging breaks the discrete symmetry as ${\mathbbm{Z}}_{2(N\pm 2)}^{\psi}\longrightarrow{\mathbbm{Z}}_{N\pm 2}^{\psi}\ .$ (3.46) The assumption of the ”quark condensate” $\langle\psi{\tilde{\psi}}\rangle\neq 0\;,$ (3.47) is consistent with (3.46). The bi-fermion condensate (3.47) however breaks the discrete symmetry as ${\mathbbm{Z}}_{2(N\pm 2)}^{\psi}\longrightarrow{\mathbbm{Z}}_{2}^{\psi}\ ,$ (3.48) i.e., more strongly than suggested by (3.46).

### 3.4 Chiral theories with $\tfrac{N-4}{k}$ $\psi^{\\{ij\\}}$’s and $\tfrac{N+4}{k}$ ${\bar{\chi}_{[ij]}}$’s

Our next theoretical laboratory is the class of chiral $SU(N)$ gauge theories with matter fermions in a complex representation, $\tfrac{N-4}{k}$ $\psi^{\\{ij\\}}$’s and $\tfrac{N+4}{k}$ ${\bar{\chi}_{[ij]}}$’s, or $\frac{N-4}{k}\,\,\yng(2)\oplus\frac{N+4}{k}\,\,{\bar{\yng(1,1)}}\ .$ (3.49) Here $k$ is a common divisor of $N-4$ and $N+4$, and $N\geq 5$. The asymptotic-freedom requirement $11N-\frac{2}{k}(N^{2}-8)>0\ ,$ (3.50) is compatible with various possible choices of $(N,k)$. We studied two simple models in [69]: (i) $(N,k)=(6,2)$: $SU(6)$ with $\yng(2)\oplus 5\,\,{\bar{\yng(1,1)}}\ ;$ (3.51) (ii) $(N,k)=(8,4)$: $SU(8)$ with $\yng(2)\oplus 3\,\,{\bar{\yng(1,1)}}\ .$ (3.52) Below we review the results of the analysis of the first model, the $SU(6)$ theory with matter fields in ${\underline{21}}\oplus 5\,{\underline{15}}^{*}$. The implications of the mixed anomalies turn out to be quite subtle even for this simple model, as will be seen. Classically the symmetry group is $SU(5)\times U(1)_{\psi}\times U(1)_{\chi}\ .$ (3.53) The anomalies $\displaystyle U(1)_{\psi}-[SU(6)]^{2}$ $\displaystyle=$ $\displaystyle\frac{T_{\tiny\yng(2)}}{T_{\tiny\yng(1)}}=N+2=8\ ,$ $\displaystyle U(1)_{\chi}-[SU(6)]^{2}$ $\displaystyle=$ $\displaystyle\frac{5T_{\tiny\bar{\yng(1,1)}}}{T_{\tiny\yng(1)}}=5(N-2)=20\ ,$ (3.54) fix the charges of the nonanomalous $U(1)_{\psi\chi}\subset U(1)_{\psi}\times U(1)_{\chi}$ symmetry: $(Q_{\psi},Q_{\chi})=(5,-2)\ .$ (3.55) There are also unbroken discrete groups: $U(1)_{\psi}\longrightarrow{\mathbbm{Z}}_{8}^{\psi}\ ,\qquad U(1)_{\chi}\longrightarrow{\mathbbm{Z}}_{20}^{\chi}\ .$ (3.56) By studying the overlap of ${\mathbbm{Z}}_{8}^{\psi}\times{\mathbbm{Z}}_{20}^{\chi}$ and $U(1)_{\psi\chi}$ one arrives at $\frac{U(1)_{\psi\chi}\times{\mathbbm{Z}}_{8}^{\psi}\times{\mathbbm{Z}}_{20}^{\chi}}{{\mathbbm{Z}}_{40}}\sim U(1)_{\psi\chi}\times{\mathbbm{Z}}_{4}\;$ (3.57) (see (3.63) and (3.64) below). Taking furthermore the color center and the $SU(5)_{\rm f}$ center into account, the true anomaly-free symmetry group is: $\frac{SU(5)\times U(1)_{\psi}\times U(1)_{\chi}}{{\mathbbm{Z}}_{6}^{\rm c}\times{\mathbbm{Z}}_{5}^{f}}\longrightarrow\frac{SU(5)\times U(1)_{\psi\chi}\times{\mathbbm{Z}}_{4}}{{\mathbbm{Z}}_{6}^{\rm c}\times{\mathbbm{Z}}_{5}^{f}}\ .$ (3.58) From the standard ’t Hooft anomaly analysis, and from the impossibility of finding an appropriate set of massless baryons [69], one concludes that if the system confines the global symmetry (3.57) must be broken spontaneously, at least partially. The question is therefore whether the 1-form gauging of a center symmetry can tell us anything useful.
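As an aside, the charge assignment (3.55) and the quotient structure in (3.57) can be verified arithmetically; a minimal Python sketch, ours and purely illustrative:

```python
from fractions import Fraction

Q_psi, Q_chi = 5, -2                 # cf. (3.55)
assert Q_psi * 8 + Q_chi * 20 == 0   # anomaly-free combination, cf. (3.54)

# an element alpha = 2*pi*k/40 of U(1)_{psi chi} acts as
# psi -> e^{5 i alpha} psi, chi -> e^{-2 i alpha} chi (cf. (3.61)-(3.62));
# in units of 2*pi these phases are k/8 and -k/20: Z_8 and Z_20 phases
for k in range(1, 41):
    assert 8 % Fraction(5 * k, 40).denominator == 0
    assert 20 % Fraction(2 * k, 40).denominator == 0

# hence Z_40 < U(1)_{psi chi} lies inside Z_8 x Z_20, and the quotient
# |Z_8 x Z_20| / |Z_40| = 160 / 40 = 4 is the Z_4 factor of (3.57)
print((8 * 20) // 40)                # -> 4
```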
First of all, one finds that both ${\mathbbm{Z}}_{8}^{\psi}$ and ${\mathbbm{Z}}_{20}^{\chi}$ are broken by the 1-form ${\mathbbm{Z}}_{2}^{\rm c}$ gauging, ${\mathbbm{Z}}_{8}^{\psi}\longrightarrow{\mathbbm{Z}}_{4}^{\psi}\ ,\qquad{\mathbbm{Z}}_{20}^{\chi}\longrightarrow{\mathbbm{Z}}_{10}^{\chi}\ ,$ (3.59) the surviving transformations being $\delta A_{\psi}^{(0)}=\frac{2\pi k}{8}\ ,\quad k=2,4,\ldots,8\ ,\qquad\delta A_{\chi}^{(0)}=-\frac{2\pi\ell}{20}\ ,\quad\ell=2,4,\ldots,20\ .$ (3.60) In order for the system to ”match” the reduction of the symmetry (3.59) in the infrared, some condensates are expected to form. A more careful analysis is, however, needed to find out which bifermion condensates actually occur in the infrared, in order to be consistent with the systematics of the mixed anomalies. The division by ${\mathbbm{Z}}_{40}$, $\psi\to e^{5i\alpha}\psi\ ,\qquad\chi\to e^{-2i\alpha}\chi\ ,$ (3.61) $\alpha=\frac{2\pi k}{40}\ ,\qquad k=1,2,\ldots,40\ ,$ (3.62) in the global symmetry group (3.57) is relevant here. The quotient ${\mathbbm{Z}}_{4}\sim\frac{{\mathbbm{Z}}_{20}\times{\mathbbm{Z}}_{8}}{{\mathbbm{Z}}_{40}}$ (3.63) also forms a subgroup, acting as $\psi\rightarrow e^{2\pi i\frac{2k}{8}}\psi=e^{2\pi i\frac{k}{4}}\psi\;,\qquad\chi\rightarrow e^{-2\pi i\frac{5k}{20}}\chi=e^{-2\pi i\frac{k}{4}}\chi\;,$ (3.64) or $\delta A_{\psi}^{(0)}=\frac{2\pi k}{4}\ ,\qquad\delta A_{\chi}^{(0)}=-\frac{2\pi k}{4}\ ,\qquad k=1,2,3,4\,.$ (3.65) The subtlety is that ${\mathbbm{Z}}_{40}$ remains nonanomalous, even after the 1-form gauging of ${\mathbbm{Z}}_{2}^{\rm c}$: $-\left(8\cdot\frac{2\pi k}{8}-20\cdot\frac{2\pi k}{20}\right)\,\frac{1}{8\pi^{2}}\left[\int{\mathrm{tr}}{\tilde{F}}^{2}-6\,(B_{\rm c}^{(2)})^{2}\right]=0\;.$ (3.66) At the same time, ${\mathbbm{Z}}_{4}$ itself is affected by the gauging of the center ${\mathbbm{Z}}_{2}^{\rm c}$ symmetry. We find [69] the generalized anomaly $-3\cdot 2\pi k\,\frac{1}{8\pi^{2}}\left[\int{\mathrm{tr}}{\tilde{F}}^{2}-6\,(B_{\rm c}^{(2)})^{2}\right]=2\pi k\cdot\left({\mathbbm{Z}}+3\cdot 6\cdot\frac{\mathbbm{Z}}{4}\right)\;.$ (3.67) Thus ${\mathbbm{Z}}_{4}$ is reduced to ${\mathbbm{Z}}_{2}$ ($k=2,4$). These are the fates of the discrete symmetries ${\mathbbm{Z}}_{20}\times{\mathbbm{Z}}_{8}\sim{\mathbbm{Z}}_{40}\times{\mathbbm{Z}}_{4}$ (3.68) under the gauged 1-form center symmetry ${\mathbbm{Z}}_{2}^{\rm c}$. What do they imply for possible condensates such as $\psi\chi\;,\qquad\psi\psi,\qquad\chi\chi\;?$ (3.69) The MAC (most attractive channel) criterion might suggest formation of condensates in one or more of the channels $\displaystyle A:\qquad\psi\left(\raisebox{-2.0pt}{\yng(2)}\right)\,\psi\left(\raisebox{-2.0pt}{\yng(2)}\right)\quad{\rm forming}\quad\,\raisebox{-6.0pt}{\yng(2,2)}\;;$ $\displaystyle B:\qquad\chi\left(\bar{\raisebox{-9.0pt}{\yng(1,1)}}\right)\,\chi\left(\bar{\raisebox{-9.0pt}{\yng(1,1)}}\right)\qquad\ \ {\rm forming}\quad\bar{\raisebox{-12.0pt}{\yng(1,1,1,1)}}\;;$ $\displaystyle C:\qquad\psi\left(\raisebox{-2.0pt}{\yng(2)}\right)\,\chi\left(\bar{\raisebox{-9.0pt}{\yng(1,1)}}\right)\quad\ \ {\rm forming~{}adjoint~{}representation}\,\,.$ (3.70) The one-gluon exchange strengths corresponding to these scalar composites are proportional to $\frac{16}{6},\frac{28}{6},\frac{32}{6}$, respectively; a sketch of this computation is given below. Of these, the last is the most attractive, and thus one might be tempted to assume that the only condensate in the system is $\langle(\psi\chi)_{\rm adj}\rangle\neq 0\;.$ (3.71) However, the mixed anomaly analysis as sketched here shows that at least two different types of condensates must form in the infrared.
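The strengths just quoted can be reproduced from quadratic Casimirs, the one-gluon-exchange attraction in a channel being proportional to $C_{2}(R_{1})+C_{2}(R_{2})-C_{2}(R_{\rm ch})$. Below is a minimal Python sketch, ours and purely illustrative, using the standard Young-diagram formula for $C_{2}$ of $SU(N)$ irreps (conjugate representations have equal Casimirs, so the ${\underline{15}}^{*}$'s can be treated via their conjugates):

```python
from fractions import Fraction

def casimir(rows, N):
    """Quadratic Casimir of the SU(N) irrep with Young-diagram row lengths
    `rows`, normalized so that C2(fundamental) = (N**2 - 1)/(2*N)."""
    n = sum(rows)
    cols = [sum(1 for r in rows if r > j) for j in range(max(rows))]
    return (Fraction(n * N, 2) - Fraction(n * n, 2 * N)
            + Fraction(sum(r * r for r in rows) - sum(c * c for c in cols), 2))

N = 6
sym, asym = [2], [1, 1]            # psi (the 21) and chi (the 15)
adjoint = [2] + [1] * (N - 2)      # adjoint of SU(6), C2 = N

def strength(R1, R2, Rch):
    return casimir(R1, N) + casimir(R2, N) - casimir(Rch, N)

print(strength(sym, sym, [2, 2]))          # channel A -> 8/3  = 16/6
print(strength(asym, asym, [1, 1, 1, 1]))  # channel B -> 14/3 = 28/6
print(strength(sym, asym, adjoint))        # channel C -> 16/3 = 32/6
```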
For instance, the ${\mathbbm{Z}}_{4}\to{\mathbbm{Z}}_{2}$ breaking would not be accounted for if (3.71) were the only condensate. See [69] for more details. One is led to conclude that two or all of the condensates (3.70) are generated by the system.

## 4 Generalized anomalies and phases of the generalized BY and GG models

In this section we review the generalized anomalies in a large class of chiral gauge theories, the BY (Bars-Yankielowicz) and GG (Georgi-Glashow) models. The procedure for computing the anomaly against gauging the color-flavor locked 1-form ${\mathbbm{Z}}_{N}$ symmetry has been exhibited in Sec. 2.2, using the simplest of this class of models, the $\psi\eta$ model. The master formula found there, however, can be applied to any of the BY and GG models.

### 4.1 Bars-Yankielowicz models

The BY models are $SU(N)$ gauge theories with Weyl fermions $\displaystyle\psi^{ij}\,,\quad\eta_{i}^{A}\,\,,\quad\xi^{i,a}$ (4.1) in $\yng(2)\oplus(N+4+p)\,{\bar{{\yng(1)}}}\;\oplus p\,{{{\yng(1)}}}\;.$ (4.2) The indices run as $\displaystyle\footnotesize i,j=1,\ldots,N\;,\quad A=1,\ldots,N+4+p\;,\quad a=1,\ldots,p\;.$ (4.3) The $\psi\eta$ model corresponds to $p=0$. The number of extra fundamental pairs is limited by $p<\frac{9}{2}N-3$, beyond which asymptotic freedom (AF) is lost. The classical symmetry group is $SU(N)_{\mathrm{c}}\times U(1)_{\psi}\times U(N+4+p)_{\eta}\times U(p)_{\xi}\;.$ (4.4) The strong anomaly breaks the symmetry group (4.4) to $\displaystyle p=0:\quad SU(N)_{\mathrm{c}}\times SU(N+4)_{\eta}\times U(1)_{\psi\eta}\;,$ $\displaystyle p=1:\quad SU(N)_{\mathrm{c}}\times SU(N+5)_{\eta}\times U(1)_{\psi\eta}\times U(1)_{\psi\xi}\;,$ $\displaystyle p>1:\quad SU(N)_{\mathrm{c}}\times SU(N+4+p)_{\eta}\times SU(p)_{\xi}\times U(1)_{\psi\eta}\times U(1)_{\psi\xi}\;,$ (4.5) where the anomaly-free combinations are: $U(1)_{\psi\eta}:\qquad\psi\to\mathrm{e}^{\mathrm{i}(N+4+p)\alpha}\psi\;,\quad\eta\to\mathrm{e}^{-\mathrm{i}(N+2)\alpha}\eta\;,$ (4.6) with $\alpha\in\mathbbm{R}$, and $U(1)_{\psi\xi}:\qquad\psi\to\mathrm{e}^{\mathrm{i}p\beta}\psi\;,\quad\xi\to\mathrm{e}^{-\mathrm{i}(N+2)\beta}\xi\;,$ (4.7) with $\beta\in\mathbbm{R}$. The choice of these two unbroken $U(1)$’s is arbitrary. For example, another combination, $U(1)_{\eta\xi}:\qquad\eta\to\mathrm{e}^{\mathrm{i}p\gamma}\eta\;,\quad\xi\to\mathrm{e}^{-\mathrm{i}(N+4+p)\gamma}\xi\;,$ (4.8) with $\gamma\in\mathbbm{R}$, may be chosen. Table 1 summarizes the charges.

|  | $SU(N)_{\rm c}$ | $SU(N+4+p)$ | $SU(p)$ | ${U}(1)_{\psi\eta}$ | ${U}(1)_{\psi\xi}$ |
|---|---|---|---|---|---|
| $\psi$ | ${\yng(2)}$ | $\frac{N(N+1)}{2}\cdot(\cdot)$ | $\frac{N(N+1)}{2}\cdot(\cdot)$ | $N+4+p$ | $p$ |
| $\eta$ | $(N+4+p)\cdot{\bar{\yng(1)}}$ | $N\cdot{\yng(1)}$ | $N(N+4+p)\,\cdot(\cdot)$ | $-(N+2)$ | $0$ |
| $\xi$ | $p\cdot{{\yng(1)}}$ | $Np\,\cdot(\cdot)$ | $N\cdot{{\yng(1)}}$ | $0$ | $-(N+2)$ |

Table 1: The multiplicities, charges and representations. $(\cdot)$ is a singlet representation.

The standard ’t Hooft anomaly matching study, based on the (perturbative) symmetry group (4.5), led to the observation that the anomaly triangles associated with (4.5) can all be matched in the infrared, assuming confinement, no condensate formation, and a simple set of massless composite fermions (baryons) saturating all the anomaly triangles. This is highly nontrivial, as seen in the summary given in Appendix E.
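As a consistency check, the anomaly-free charge assignments (4.6)-(4.7) and the AF bound can be verified in a few lines; a minimal Python sketch, ours and purely illustrative, with sample values of $(N,p)$:

```python
def u1_su2_anomaly(fermions):
    # fermions: list of (multiplicity, charge, twoT); sum of mult * q * 2T(R)
    return sum(m * q * t for m, q, t in fermions)

def by_checks(N, p):
    # U(1)_{psi eta}: psi has charge N+4+p (2T = N+2); each of the (N+4+p)
    # eta's has charge -(N+2) (2T = 1); xi is neutral  -- cf. Table 1
    psi_eta = u1_su2_anomaly([(1, N + 4 + p, N + 2), (N + 4 + p, -(N + 2), 1)])
    # U(1)_{psi xi}: psi has charge p; each of the p xi's has charge -(N+2)
    psi_xi = u1_su2_anomaly([(1, p, N + 2), (p, -(N + 2), 1)])
    af = 11 * N - (N + 2) - (N + 4 + p) - p > 0   # AF iff p < 9*N/2 - 3
    return psi_eta, psi_xi, af

for N, p in [(6, 0), (6, 2), (8, 4)]:
    print(by_checks(N, p))    # -> (0, 0, True) in each case
```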
At the same time, the anomaly-matching equations are also consistent with a dynamical Higgs phase, in which certain bi-fermion condensates form, breaking the color and part of the flavor symmetries dynamically, while leaving some nontrivial unbroken chiral symmetry in the infrared. See the review in Appendix F. It is thus important to find out whether or not new, mixed types of anomalies are present in these theories, and whether more stringent anomaly constraints arise, capable of discriminating between these two dynamical possibilities. As already emphasized, the study of the mixed anomalies requires clarifying the global structure of the symmetry group, beyond its local properties, (4.5). The result of a detailed analysis done in [71] is that the true symmetry group of the BY model is $SU(N)_{\rm c}\times\frac{SU(N+4+p)\times SU(p)\times{\cal H}}{\mathbb{Z}_{N}\times\mathbb{Z}_{N+4+p}\times\mathbb{Z}_{p}}\;,$ (4.9) where ${\cal H}=U(1)_{1}\times U(1)_{2}\times({\mathbbm{Z}}_{2})_{F}\;,$ (4.10) when $p$ and $N$ are both even; that is, ${\cal H}$ has two disconnected components, $U(1)_{1}$ and $U(1)_{2}$ being any two out of $U(1)_{\psi\eta}$, $U(1)_{\psi\xi}$, and $U(1)_{\eta\xi}$. When $p$ and/or $N$ is odd, instead, ${\cal H}=U(1)_{1}\times U(1)_{2}\;:$ (4.11) it has only one connected component. In these cases the symmetry group is connected, and the perturbative (algebra) aspects of the ’t Hooft anomaly triangles exhaust the UV-IR anomaly matching conditions. See [70]. Thus the most interesting BY models are those with $p$ and $N$ both even, to which we now turn. The fact that ${\mathbbm{Z}}_{N}\subset U(1)_{\psi\eta}\times U(1)_{\psi\xi}\times({\mathbbm{Z}}_{2})_{F}\;$ (4.12) for $N$, $p$ both even, can be shown explicitly by choosing ${\alpha}=\frac{2\pi}{N}\;,\quad{\beta}=-\frac{2\pi}{N}\;.$ (4.13) We couple the system to the appropriate background gauge fields:

* $A_{\psi\eta}$: $U(1)_{\psi\eta}$ 1-form gauge field,
* $A_{\psi\xi}$: $U(1)_{\psi\xi}$ 1-form gauge field,
* $A_{2}$: $(\mathbb{Z}_{2})_{F}$ 1-form gauge field,
* ${\tilde{a}}$: $U(N)_{\rm c}$ 1-form gauge field,
* $B^{(2)}_{\mathrm{c}}$: $\mathbb{Z}_{N}$ 2-form gauge field.
Under the 1-form gauge transformation the fields transform as $\displaystyle B^{(2)}_{\mathrm{c}}$ $\displaystyle\to B^{(2)}_{\mathrm{c}}+{\mathrm{d}}\lambda_{\mathrm{c}}\;,\qquad\ \,B^{(1)}_{\mathrm{c}}\to B^{(1)}_{\mathrm{c}}+{N}\lambda_{\mathrm{c}}\;,$ $\displaystyle{\tilde{a}}$ $\displaystyle\to{\tilde{a}}+\lambda_{\rm c}\;,\qquad\qquad\ {\tilde{F}}({\tilde{a}})\to{\tilde{F}}({\tilde{a}})+d\lambda_{\rm c}\;,$ $\displaystyle A_{\psi\eta}$ $\displaystyle\to A_{\psi\eta}-\lambda_{\rm c}\;,$ $\displaystyle A_{\psi\xi}$ $\displaystyle\to A_{\psi\xi}+\lambda_{\rm c}\;,$ $\displaystyle A_{2}$ $\displaystyle\to A_{2}+\frac{N}{2}\lambda_{\rm c}\;.$ (4.14) The fermion kinetic terms are: $\displaystyle\overline{\psi}\gamma^{\mu}\left(\partial+\mathcal{R}_{\mathrm{S}}(\widetilde{a})+\frac{N+4+p}{2}A_{\psi\eta}+\frac{p}{2}A_{\psi\xi}+A_{2}\right)_{\mu}P_{\mathrm{L}}\psi+$ $\displaystyle\overline{\eta}\gamma^{\mu}\left(\partial+\mathcal{R}_{\mathrm{F}^{*}}(\widetilde{a})-\frac{N+2}{2}A_{\psi\eta}-A_{2}\right)_{\mu}P_{\mathrm{L}}\eta+$ $\displaystyle\overline{\xi}\gamma^{\mu}\left(\partial+\mathcal{R}_{\mathrm{F}}(\widetilde{a})-\frac{N+2}{2}A_{\psi\xi}+A_{2}\right)_{\mu}P_{\mathrm{L}}\xi\;.$ (4.15) Knowing the $({\mathbbm{Z}}_{2})_{F}$ charges $+1$, $-1$, $+1$ of the fermions $\psi$, $\eta$ and $\xi$, respectively, and their representations under $SU(N)$, the master formula (2.43) straightforwardly gives the mixed anomaly: $N^{2}\,{1\over 8\pi^{2}}\int_{\Sigma^{4}}(B^{(2)}_{\mathrm{c}})^{2}\,\frac{1}{2}\delta A_{2}^{(0)}=N^{2}\times\frac{\mathbbm{Z}}{N^{2}}\,({\pm\pi})=\pm\pi\times{\mathbbm{Z}}\;,$ (4.16) i.e., a $({\mathbbm{Z}}_{2})_{F}-[{\mathbbm{Z}}_{N}]^{2}$ mixed anomaly. One finds, instead, no $({\mathbbm{Z}}_{2})_{F}$ anomaly in the IR if the system is in the confining symmetric vacuum of Appendix E: the UV and IR anomalies would not match. Such an inconsistency is avoided if one assumes instead that the system is in the dynamical Higgs phase (Appendix F), as the color-flavor locked 1-form symmetry would be spontaneously broken.

### 4.2 Georgi-Glashow models

The GG models have matter fermions $\displaystyle\chi^{ij}\,,\quad\eta_{i}^{A}\,\,,\quad\xi^{i,a}\;,$ (4.17) i.e., in $\yng(1,1)\oplus(N-4+p)\,{\bar{{\yng(1)}}}\;\oplus p\,{{{\yng(1)}}}\;,$ (4.18) where $\displaystyle\footnotesize i,j=1,\ldots,N\;,\quad A=1,\ldots,N-4+p\;,\quad a=1,\ldots,p\;.$ (4.19) The simplest of the GG models (with $p=0$), the $\chi\eta$ model, can be analyzed following the same steps taken in the case of the $\psi\eta$ model in Sec. 2.2. The (true) symmetry group of the $\chi\eta$ model is $SU(N)\times G_{\rm f}\;,\qquad G_{\rm f}=\frac{SU(N-4)\times U(1)_{\chi\eta}\times(\mathbb{Z}_{2})_{F}}{\mathbb{Z}_{N}\times\mathbb{Z}_{N-4}}\;$ (4.20) for even $N$. $U(1)_{\chi\eta}$ and $(\mathbb{Z}_{2})_{F}$ act as $U(1)_{\chi\eta}\;:\qquad\chi\to e^{i\tfrac{N-4}{2}\beta}\chi\;,\qquad\eta\to e^{-i\tfrac{N-2}{2}\beta}\eta\;;$ (4.21) $(\mathbb{Z}_{2})_{F}\;:\qquad\chi,\eta\to-\chi,-\eta\;.$ (4.22) The division by ${\mathbb{Z}}_{N}$ in (4.20) is due to the equivalence relation $(\mathrm{e}^{\mathrm{i}\beta},(-1)^{n})\sim(\mathrm{e}^{\mathrm{i}(\beta-\frac{2\pi}{N})},(-1)^{n}\mathrm{e}^{\mathrm{i}\frac{2\pi}{N}\frac{N}{2}})\,,$ (4.23) meaning that the $U(1)_{\chi\eta}$ gauge field $A$ and the $(\mathbb{Z}_{2})_{F}$ gauge field $A_{2}^{(1)}$ have charges $-1$ and $\frac{N}{2}$, respectively.
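The quotient structures, (4.12)-(4.13) for the BY models and (4.23) for the $\chi\eta$ model, can be checked arithmetically. A minimal Python sketch, ours and purely illustrative, with phases in units of $2\pi$, using the halved charge normalization of the kinetic terms (4.15) and of (4.21); $N=8$, $p=2$ are sample values:

```python
from fractions import Fraction as Fr

def mod1(x):                     # reduce a phase (in units of 2*pi) to [0, 1)
    return x - (x.numerator // x.denominator)

N, p = 8, 2                      # sample even values

# BY: alpha = 2*pi/N, beta = -2*pi/N, plus a (Z2)_F flip (an A_2 shift by pi)
a, b, z2 = Fr(1, N), Fr(-1, N), Fr(1, 2)
psi = Fr(N + 4 + p, 2) * a + Fr(p, 2) * b + z2    # two upper color indices
eta = Fr(-(N + 2), 2) * a - z2                    # one lower color index
xi = Fr(-(N + 2), 2) * b + z2                     # one upper color index
assert mod1(psi) == mod1(Fr(2, N))   # phase of the center element e^{2 pi i/N}
assert mod1(eta) == mod1(Fr(-1, N))
assert mod1(xi) == mod1(Fr(1, N))

# chi-eta: shift beta by -2*pi/N together with a (Z2)_F flip, cf. (4.23)
chi = Fr(N - 4, 2) * Fr(-1, N) + z2
eta2 = Fr(-(N - 2), 2) * Fr(-1, N) + z2
assert mod1(chi) == mod1(Fr(2, N))   # chi carries two upper color indices
assert mod1(eta2) == mod1(Fr(-1, N))
print("flavor transformations reproduce the color-center phases")
```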
By introducing the gauge fields

* $A_{2}$: $({\mathbbm{Z}}_{2})_{F}$ 1-form gauge field,
* $A$: $U(1)=U(1)_{\chi\eta}$ 1-form gauge field,
* $B^{(2)}_{\mathrm{c}}$: $\mathbb{Z}_{N}$ 2-form gauge field,

the analysis follows step by step that done in the $\psi\eta$ model. The result of the calculation, by use of the master formula (Sec. 2.2), is that there is a mixed $(\mathbbm{Z}_{2})_{F}-[{\mathbbm{Z}}_{N}]^{2}$ anomaly in the UV, $\Delta S^{(4)}_{\rm UV}=\pm i\pi{\mathbbm{Z}}\;:$ (4.24) the partition function changes sign under (4.22). Of course, the ”massless baryons” lead to no anomalies in the infrared. The conclusion is that in the confining phase (see Appendix G) the mixed $(\mathbbm{Z}_{2})_{F}-[{\mathbbm{Z}}_{N}]^{2}$ anomaly does not match between the UV and the IR. In other words, such a symmetric confining phase cannot be the correct vacuum of the system. There is no difficulty for the dynamical Higgs phase. The fact that in this particular (and only) case, the simplest of the GG models ($p=0$), the $\chi\eta$ model, the confining, no-condensate phase (Appendix C) and the dynamical Higgs phase (Appendix D) happen to have the same global symmetry in the massless sector does not necessarily imply that these phases are the same (see a more detailed discussion of this issue in [96]). More general GG models with $p\neq 0$ have also been analysed in detail in [70]. The analysis is similar to that done for the $\chi\eta$ model and for the general BY models reviewed above. The conclusion is that the confining, symmetric vacuum (Appendix G) is not consistent with the implications of the generalized anomalies. The Higgs phase (Appendix H) appears to be perfectly consistent with these new constraints.

## 5 Strong anomaly and phases of chiral gauge theories

Very recently the present authors have realized [96] that the strong anomaly, which plays a prominent role in the solution of the so-called $U(1)_{A}$ problem in QCD, can also be significant in the study of the phases of chiral gauge theories, such as those discussed so far in this review. More concretely, a key observation is that the well-known strong-anomaly term in the low-energy effective action of QCD, which reproduces the effects of the strong anomaly at low energies, has a nontrivial implication for the symmetry-breaking pattern itself. Note that there is a subtlety in this argument, as the well-known QCD effective sigma-model action already assumes the standard chiral symmetry breaking to vectorlike residual symmetries (see (5.2) below). The criterion we adopt to study chiral gauge theories of unknown low-energy symmetries and phases is that it should be possible to write a low-energy strong-anomaly local effective action by using the low-energy degrees of freedom (NG bosons and/or massless composite fermions) present in the assumed phase. The simple form of such a low-energy action which we will find in the dynamical Higgs phase, in contrast to the impossibility of writing analogous terms with massless baryons only (in the confining phase), provides another, independent indication that the first type of phase (the dynamical Higgs phase, characterized by certain bi-fermion condensates) represents the more plausible IR phase of this class of models. Before considering the chiral gauge theories discussed in the previous sections from this new angle, we first quickly review what is known about the $U(1)_{A}$ problem in standard QCD [91, 92].
### 5.1 $U(1)_{A}$ problem and the $\theta$ dependence in QCD

In QCD the $U(1)_{A}$ symmetry is broken by the strong anomaly, and also spontaneously broken by the quark condensate $\langle{\bar{\psi}_{L}}\psi_{R}\rangle\sim-\Lambda^{3}\neq 0\;,$ (5.1) which breaks the nonanomalous chiral symmetry to its vectorlike subgroup, $SU(N_{\rm f})_{L}\times SU(N_{\rm f})_{R}\to SU(N_{\rm f})_{V}\;.$ (5.2) At this point one expects, besides the NG bosons of the $SU(N_{\rm f})_{A}$ symmetry (the pions), another NG boson relative to $U(1)_{A}$ ($\eta$ or $\eta^{\prime}$), which would get a mass due to the strong anomaly. (Here $\eta,\eta^{\prime}$ are the $SU(2)$ or $SU(3)$ singlet pseudoscalar mesons of the real world, as in the Particle Data Booklet. The reader will not confuse them with the Weyl fermions of the chiral $\psi\eta$ or $\chi\eta$ models being studied here.) The chiral Lagrangian allows us to understand how this works, qualitatively, and quantitatively in the large $N$ limit. To reproduce the effects of the strong anomaly, the authors of [30]-[34] add to the standard chiral Lagrangian $L_{0}=\frac{F_{\pi}^{2}}{2}\mathop{Tr}\nolimits\,\partial_{\mu}U\partial^{\mu}U^{\dagger}+\mathop{Tr}\nolimits M\,U+{\rm h.c.}+\ldots\;,\qquad U\equiv{\bar{\psi}}_{R}\psi_{L}$ (5.3) ($F_{\pi}$ is the usual pion-decay constant) a new term ${\hat{L}}=\frac{i}{2}q(x)\,\log\det U/U^{\dagger}+\frac{N}{a_{0}F_{\pi}^{2}}q^{2}(x)-\theta\,q(x)\;,$ (5.4) where $q(x)$ is the topological density $q(x)=\frac{g^{2}}{32\pi^{2}}F_{\mu\nu}^{a}{\tilde{F}}^{a,\mu\nu}\;,$ (5.5) and $a_{0}$ is some constant of order unity. The variation of $\hat{L}$ under the action of $U(1)_{A}$, $\psi_{L}\to e^{i\alpha}\psi_{L}\;,\quad\psi_{R}\to e^{-i\alpha}\psi_{R}\;,\quad U\to e^{2i\alpha}\,U\;,$ (5.6) reproduces the variation of the phase of the partition function, $\Delta S=2N_{\rm f}\alpha\int d^{4}x\frac{g^{2}}{32\pi^{2}}F_{\mu\nu}^{a}{\tilde{F}}^{a,\mu\nu}\;,$ (5.7) due to the strong anomaly. The expression (5.4) can be manipulated as shown in [30]-[34]: integrating away the topological charge density $q(x)$ one obtains an equivalent expression ${\hat{L}}=-\frac{F_{\pi}^{2}\,a_{0}}{4N}(\theta-\frac{i}{2}\log\det U/U^{\dagger})^{2}\;,$ (5.8) which is well defined as $\langle U\rangle\propto{\mathbf{1}}\neq 0\;.$ (5.9) Expanding (5.8) around this VEV, $U\propto e^{i\tfrac{\pi^{a}t^{a}}{F_{\pi}}+i\tfrac{\eta\,t^{0}}{F_{\pi}^{(0)}}}={\mathbf{1}}+i\frac{\pi^{a}t^{a}}{F_{\pi}}+i\frac{\eta\,t^{0}}{F_{\pi}^{(0)}}+\ldots\;,$ (5.10) one finds the mass term for the would-be NG boson, $\eta$. The idea here is to reverse the logic: one can actually argue that the presence of such an effective action, needed for reproducing the strong anomaly, implies a nonvanishing condensate, $\langle U\rangle=\langle\bar{\psi}_{R}\psi_{L}\rangle\neq 0$, and hence, indirectly, also the spontaneous breaking of the nonanomalous chiral symmetry, (5.2), affecting the low-energy physics. Even if in QCD there is a powerful direct argument [23] for such a vectorlike symmetry-breaking pattern, it is interesting to note that the requirement of faithfully representing the $U(1)_{A}$ anomaly in the infrared seems to imply the same conclusion. It is this kind of consideration that has recently led the present authors to apply an analogous argument to chiral gauge theories [96], by requiring the effective low-energy theory to be able to express the strong anomaly appropriately. The following (Sec. 5.2-Sec. 5.5) is a review of some of the results found.
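To make the mass-generation step below (5.10) explicit, here is a schematic computation (ours; the singlet generator is normalized for illustration as $t^{0}={\mathbf{1}}$, so that $\det U=e^{iN_{\rm f}\eta/F_{\pi}^{(0)}}$ along the singlet direction, and we set $\theta=0$): $-\frac{i}{2}\log\det\frac{U}{U^{\dagger}}=\frac{N_{\rm f}\,\eta}{F_{\pi}^{(0)}}\qquad\Longrightarrow\qquad{\hat{L}}=-\frac{F_{\pi}^{2}a_{0}}{4N}\Big{(}\frac{N_{\rm f}\,\eta}{F_{\pi}^{(0)}}\Big{)}^{2}\;,\qquad m_{\eta}^{2}=\frac{a_{0}F_{\pi}^{2}N_{\rm f}^{2}}{2N\,(F_{\pi}^{(0)})^{2}}\;,$ a Witten-Veneziano-type mass, vanishing as $1/N$ in the large $N$ limit.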
#### 5.1.1 ${\cal N}=1$ supersymmetric theories

Before proceeding to the discussion of chiral gauge theories, let us make a brief comment on ${\cal N}=1$ supersymmetric models. In the context of ${\cal N}=1$ supersymmetric gauge theories, the strong-anomaly effective action is derived by using the so-called Veneziano-Yankielowicz (VY) and Affleck-Dine-Seiberg (ADS) superpotentials [35, 36, 37]. They correctly reproduce in the infrared effective theory the effects of instantons and the supersymmetric Ward-Takahashi identities, and embody the anomaly of [93, 94]. This last one, known as the Konishi anomaly, has direct implications for the vacuum properties of the theory under consideration. It is a straightforward consequence of the strong anomaly, via supersymmetry. (The Konishi anomaly can also be viewed as representing an anomalous supersymmetry transformation law for some composite fields [93, 94].) The VY and ADS superpotentials are indeed crucial in determining the infrared dynamics and phases of ${\cal N}=1$ supersymmetric gauge theories. For a review, see for instance [95].

### 5.2 $\psi\eta$ model and strong anomaly

Let us apply a similar idea, i.e., of writing an effective action which reproduces the strong anomaly of the UV theory in the low-energy theory, to one of the simplest chiral gauge theories, the $\psi\eta$ model (see Sec. 2.2). Let us briefly remind ourselves of the symmetries of the model. At the infinitesimal level the quantum symmetry group of the $\psi\eta$ model is $SU(N+4)_{\eta}\times U(1)_{\psi\eta}\;,$ (5.11) while any combination of $U(1)_{\psi}\times U(1)_{\eta}$ different from $U(1)_{\psi\eta}$ is broken by the strong anomaly. The low-energy effective action must capture this strong anomaly correctly. In Sec. 2.2 it was shown that the $\psi\eta$ model cannot confine while maintaining the full global symmetry unbroken. The system can instead break the gauge symmetry dynamically (as well as part of the global symmetry), and a color-flavor locking condensate forms: $\langle\psi^{ij}\eta^{A}_{j}\rangle=\begin{cases}c_{\psi\eta}\,\delta^{iA}\;,\qquad A=1,\dots N\;,\\\ 0\;,\qquad A=N+1,\dots N+4\;.\end{cases}$ (5.12) Unlike ${\bar{\psi}_{R}}\psi_{L}$ in QCD, ${\tilde{\phi}}=\sum_{k,j}^{N}\psi^{kj}\eta_{j}^{k}\;,$ is not gauge invariant. It is convenient to re-express this condensate in a gauge-invariant form, i.e., $\det U\;,\qquad U_{k\ell}\equiv\psi^{kj}\eta_{j}^{\ell}\;.$ (5.13) Such a gauge-invariant condensate is fully equivalent to (5.12). It causes the breaking $SU(N+4)\times U(1)\rightarrow SU(N)_{{}_{\rm cf}}\times SU(4)\times U(1)^{\prime}\;$ (5.14) (Appendix B). $U(1)^{\prime}$ is the unbroken combination of $U(1)_{\psi\eta}$ and $U(1)_{D}$, where $U(1)_{D}$ is the $U(1)\subset SU(N+4)$ generated by $T_{D}={\rm diag}(4\cdot\mathbbm{1}_{N\times N},-N\cdot\mathbbm{1}_{4\times 4})$. At this point it is useful to look into the NG boson sector of the theory, which leads to an apparent puzzle. From the symmetry breaking one expects to find $8N$ nonabelian NG bosons relative to $\frac{SU(N+4)}{SU(N)\times SU(4)\times U(1)_{D}}$, interpolated in a gauge-invariant fashion by $\phi^{A}=(\psi^{ij}\eta_{j}^{a})^{*}(T^{A})^{a}_{b}(\psi^{ik}\eta^{b}_{k})\;.$ (5.15) Here the $T^{A}$ are the $8N$ broken generators that connect the $N$-dimensional subspace (where $SU(N)$ acts) and the $4$-dimensional one (where $SU(4)$ acts). The problem emerges when one considers the $U(1)$ NG boson(s).
There is certainly a physical massless NG boson, living in $\frac{U(1)_{D}\times U(1)_{\psi\eta}}{U(1)^{\prime}}\;.$ (5.16) The gauge-invariant field that interpolates it can be taken as the (imaginary part of the) condensate $\det\;U$ itself. However, with the condensate (5.12) alone, there is no room for another possible NG boson, associated with the spontaneous breaking of an anomalous $U(1)$ symmetry (any generic combination of $U(1)_{\psi}$ and $U(1)_{\eta}$ other than $U(1)_{\psi\eta}$ is in fact spontaneously broken by ${\langle\psi\eta\rangle}$). This would-be NG boson would get a mass from the strong anomaly, but in any case it needs to be described by an interpolating field (which?). Another related fact is that there is actually a particular anomalous symmetry (a special combination of $U(1)_{\psi}$ and $U(1)_{\eta}$), $U(1)_{A}:\ \begin{cases}\psi\rightarrow e^{i\alpha}\psi\;,\\\ \eta\rightarrow e^{-i\alpha}\eta\;,\end{cases}$ (5.17) which is not spontaneously broken by the $\psi\eta$ condensate. How would such a symmetry manifest itself in the infrared? These are the first hints that the description in terms of the condensate $\psi\eta$ (or $\det\;U$) is not a complete one. Another reason to look for other condensates is that it is not possible to write an effective Lagrangian which realizes all the (nonanomalous) global symmetries with the composite field $\det U$ alone. For further details see [96]. With these considerations in mind, let us construct the correct form of the strong-anomaly effective action systematically. We start from the very beginning, ${\cal L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+{\cal L}^{\rm fermions}\;,$ (5.18) ${\cal L}^{\rm fermions}=-i\overline{\psi}{\bar{\sigma}}^{\mu}\left(\partial+\mathcal{R}_{\mathrm{S}}(a)\right)_{\mu}\psi\;-i\overline{\eta}{\bar{\sigma}}^{\mu}\left(\partial+\mathcal{R}_{\mathrm{F}^{*}}(a)\right)_{\mu}\eta\;,$ (5.19) where $a$ is the $SU(N)$ gauge field, and the matrix representations appropriate for the $\psi$ and $\eta$ fields are indicated by $\mathcal{R}_{\mathrm{S}}$ and $\mathcal{R}_{\mathrm{F}^{*}}$. We change variables, writing ${\cal L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+{\cal L}^{\rm fermions}+\mathop{Tr}\nolimits[(\psi\eta)^{*}U]+{\rm h.c.}+{B}\,(\psi\eta\eta)^{*}+{\rm h.c.}\;,$ (5.20) where $U$ is a composite scalar in the form of an $N\times(N+4)$ color-flavor mixed matrix, $\mathop{Tr}\nolimits[(\psi\eta)^{*}U]\equiv(\psi^{ij}\eta_{j}^{m})^{*}U^{im}\;,$ (5.21) and the ${B}$ are the baryons $B\sim\psi\eta\eta$, $B^{mn}=\psi^{ij}\eta_{i}^{m}\eta_{j}^{n}\;,\qquad m,n=1,2,\ldots,N+4\;,$ (5.22) antisymmetric in $m\leftrightarrow n$. In writing down the Lagrangian (5.20) we have anticipated the fact that these baryon-like composite fields, present in the Higgs phase together with the composite scalars $\psi\eta$ (see Appendix B), are also needed to write down the strong-anomaly effective action. This allows us to dodge the problem about the NG bosons (and to break $U(1)_{A}$), as we are going to explain.
Integrating $\psi$ and $\eta$ out, one gets ${\cal L}^{\rm eff}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\mathop{Tr}\nolimits({\cal D}U)^{\dagger}{\cal D}U-i\overline{B}\,{\bar{\sigma}}^{\mu}\partial_{\mu}{B}-V\;.$ (5.23) The potential $V$ is assumed to be such that its minimum is of the form $\langle U^{im}\rangle=\,c_{\psi\eta}\,\Lambda^{3}\delta^{im}\;,\qquad\quad\ i,m=1,2,\dots N\;,$ (5.24) and contains the strong-anomaly term, $V=V^{(0)}+{\hat{L}}_{\rm an}\;,$ (5.25) with ${\hat{L}}_{\rm an}$ of the form ${\hat{L}}_{\rm an}={{\rm const}}\,\left[\log\left(\epsilon\,{B}{B}\,{\det}U\right)-\log\left(\epsilon\,{B}{B}\det U\right)^{\dagger}\right]^{2}\;,$ (5.26) which is the analogue of (5.8) in QCD. The argument of the logarithm, $\epsilon\,{B}{B}\,\det U\equiv\epsilon^{m_{1},m_{2},\ldots,m_{N+4}}\epsilon^{i_{1},i_{2},\ldots,i_{N}}{B}_{m_{N+1},m_{N+2}}{B}_{m_{N+3},m_{N+4}}U_{i_{1}m_{1}}U_{i_{2}m_{2}}\ldots U_{i_{N}m_{N}}\;,$ (5.27) is invariant under the full (nonanomalous) symmetry, $SU(N)_{\rm c}\times SU(N+4)\times U(1)_{\psi\eta}\;,$ (5.28) as it should be. Moreover it contains $N+2$ $\psi$’s and $N+4$ $\eta$’s, the correct numbers of fermion zeromodes in the instanton background: it corresponds to a ’t Hooft instanton $n$-point function, e.g., $\langle\psi\eta\eta(x_{1})\psi\eta\eta(x_{2})\psi\eta(x_{3})\ldots\psi\eta(x_{N+2})\rangle\;.$ (5.29) This effective Lagrangian is well defined only if the argument of the logarithm acquires a VEV. In particular it is natural to assume $\langle\epsilon^{(4)}{B}{B}\rangle\neq 0\;,\qquad\langle\det U\rangle\neq 0\;,$ (5.30) where $\epsilon^{(4)}{B}{B}=\epsilon_{\ell_{1}\ell_{2}\ell_{3}\ell_{4}}{B}^{\ell_{1}\ell_{2}}{B}^{\ell_{3}\ell_{4}}\;,\qquad\ell_{i}=N+1,\ldots,N+4\;.$ (5.31) As $\langle\det U\rangle\propto{\mathbf{1}}_{N\times N}\;$ (5.32) takes up all flavors up to $N$ (the flavor $SU(N+4)$ symmetry can be used to orient the symmetry breaking this way), ${B}{B}$ must be made of the four remaining flavors, as in (5.31). These baryons were not among those considered in the earlier studies [12, 71], but they are assumed to be massless here, and are indicated as $B^{[A_{2}B_{2}]}$ in Table 3. This is possible because these fermions do not have any perturbative anomaly with respect to the unbroken symmetry group, $SU(N)\times SU(4)\times U(1)$: ’t Hooft anomaly matching considerations cannot tell whether or not they are massive; both options are possible. Now we see how the apparent puzzle about the NG bosons hinted at above is solved. We can define the interpolating fields of the two NG bosons by expanding the condensates, $\displaystyle\det U=\langle\det U\rangle+\ldots\propto{\mathbf{1}}+\frac{i}{F_{\pi}^{(0)}}\,\phi_{0}+\ldots\;;$ $\displaystyle\epsilon^{(4)}{B}{B}=\langle\epsilon^{(4)}{B}{B}\rangle+\ldots\propto{\mathbf{1}}+\frac{i}{F_{\pi}^{(1)}}\,\phi_{1}+\ldots\;,$ (5.33) (here $F_{\pi}^{(0)}$ and $F_{\pi}^{(1)}$ are some constants with the dimension of mass). Clearly, in general, the physical NG boson and the anomalous would-be NG boson will be interpolated by two linear combinations of $\phi_{0}$ and $\phi_{1}$. The effective Lagrangian allows us to fix these linear combinations.
Indeed, as the effective Lagrangian (5.26) is invariant under the nonanomalous symmetry group, and in particular $U(1)_{\psi\eta}$ and $U(1)_{D}$ do not act on $\epsilon\;BB\;\det U$, the NG boson which appears in the strong-anomaly effective action as the fluctuation of $\epsilon BB\;\det U$, ${\tilde{\phi}}\equiv N_{\pi}\left[\frac{1}{F_{\pi}^{(0)}}\,\phi_{0}+\frac{1}{F_{\pi}^{(1)}}\,\phi_{1}\right]\;,\qquad N_{\pi}=\frac{F_{\pi}^{(0)}F_{\pi}^{(1)}}{\sqrt{\big{(}F_{\pi}^{(0)}\big{)}^{2}+\big{(}F_{\pi}^{(1)}\big{)}^{2}}}\;,$ (5.34) cannot be the massless physical one: it is the would-be NG boson relative to the anomalous symmetry. Indeed the effective action provides a mass term for this NG boson. The orthogonal combination ${\phi}\equiv N_{\pi}\left[\frac{1}{F_{\pi}^{(1)}}\phi_{0}-\frac{1}{F_{\pi}^{(0)}}\phi_{1}\right]\;,$ (5.35) i.e., the interpolating field of the physical NG boson living in the coset (5.16), remains massless. Earlier we included in the low-energy description some massless baryons which are neither required nor excluded by the ’t Hooft anomaly matching. Now one can see their ultimate fate, using the strong-anomaly effective Lagrangian. In particular (5.26) contains a four-fermion coupling between these baryons which, upon plugging in the VEVs (5.30), provides a mass term for them. The last remark is that the strong-anomaly effective action does not depend on the absolute values of the condensates $BB$ and $\det U$ separately. This simply means that these symmetry considerations alone cannot determine the mechanism of condensation. In particular, even if $\langle\det U\rangle\neq 0$ is somehow expected, a VEV for $BB$ is more surprising, and is probably due to residual dipole interactions between the baryons. However, a more in-depth study of how these two flat directions are lifted by quantum effects is needed to understand precisely how these two condensates form.

### 5.3 Strong anomaly: the generalized BY models

As the solution given above for the $\psi\eta$ model is notably subtle, one might wonder whether a similar mechanism is at work in the generalized Bars-Yankielowicz models, $SU(N)$ gauge theories with Weyl fermions $\displaystyle\psi^{ij}\,,\quad\eta_{i}^{A}\,\,,\quad\xi^{i,a}$ (5.36) in the direct-sum representation $\yng(2)\oplus(N+4+p)\,{\bar{{\yng(1)}}}\;\oplus p\,{{{\yng(1)}}}\;.$ (5.37) Also in this case (Ref. [71] and Appendix C) the conventional ’t Hooft anomaly matching equations allow a chirally symmetric confining vacuum, with massless baryons ${({B}_{1})}^{[AB]}=\psi^{ij}\eta_{i}^{A}\eta_{j}^{B}\;,\qquad{({B}_{2})}^{a}_{A}=\bar{\psi}_{ij}\bar{\eta}^{i}_{A}\xi^{j,a}\;\qquad{({B}_{3})}_{\\{ab\\}}=\psi^{ij}{\bar{\xi}}_{i,a}{\bar{\xi}}_{j,b}\;,$ (5.38) (the first antisymmetric in $A\leftrightarrow B$ and the third symmetric in $a\leftrightarrow b$), saturating all the conventional ’t Hooft anomaly triangles. The study of the generalized anomaly in Sec. 4.1 has however shown that such a vacuum is not consistent.
A dynamical Higgs phase with condensates $\displaystyle\langle U^{iB}\rangle=\langle\psi^{ij}\eta_{i}^{B}\rangle=\,c_{\psi\eta}\,\Lambda^{3}\delta^{jB}\neq 0\;,\qquad j,B=1,\dots,N\;,$ $\displaystyle\langle V^{aA}\rangle=\langle\xi^{i,a}\eta_{i}^{A}\rangle=\,c_{\eta\xi}\,\Lambda^{3}\delta^{N+4+a,A}\neq 0\;,$ $\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad a=1,\dots,p\;,\quad A=N+5,\dots,N+4+p\;,$ (5.39) and with symmetry breaking $\displaystyle SU(N)_{\mathrm{c}}\times SU(N+4+p)_{\eta}\times SU(p)_{\xi}\times U(1)_{\psi\eta}\times U(1)_{\psi\xi}$ $\displaystyle\xrightarrow{\langle\xi\eta\rangle,\langle\psi\eta\rangle}SU(N)_{{\rm cf}_{\eta}}\times SU(4)_{\eta}\times SU(p)_{\eta\xi}\times U(1)_{\psi\eta}^{\prime}\times U(1)_{\psi\xi}^{\prime}\;,$ (5.40) is fully consistent with the gauging of the center symmetry (Ref. [71] and Sec. 4.1). A strong-anomaly effective action for the BY theories can be constructed in a way similar to that of the $\psi\eta$ model. Instead of (5.27), one now has $\displaystyle\epsilon\left({{B}_{1}}{{B}_{1}}\det U\det V\right)\equiv\epsilon^{m_{1},m_{2},\ldots,m_{N+4+p}}\epsilon^{i_{1},i_{2},\ldots,i_{N}}\epsilon^{k_{1},k_{2},\ldots,k_{p}}\times$ $\displaystyle\times$ $\displaystyle{B}_{1}^{[m_{N+1},m_{N+2}]}{B}_{1}^{[m_{N+3},m_{N+4}]}U^{i_{1}m_{1}}U^{i_{2}m_{2}}\ldots U^{i_{N}m_{N}}V^{m_{N+5}k_{1}}\ldots V^{m_{N+4+p}k_{p}}\;.$ (5.41) The rest of the analysis can be completed by closely following that of the $\psi\eta$ model discussed in Sec. 5.2. We skip the details of the analysis. Let us note only that the strong-anomaly effective action with such a logarithm is perfectly consistent with, and perhaps implies, the condensates (5.39): i.e., that the system is in the dynamical Higgs phase, (5.40). It is, instead, not possible to write a strong-anomaly effective action with the logarithmic argument (5.41) in terms of the massless composite fermions (5.38) alone.

### 5.4 Strong anomaly and the $\chi\eta$ model

It is an interesting exercise to apply the same reasoning about the strong anomaly to the $\chi\eta$ model. We will find that there are close analogies with the $\psi\eta$ case studied above, but also quite significant differences. In this model any combination of $U(1)_{\chi}$ and $U(1)_{\eta}$, except $U(1)_{\chi\eta}$ (see Table 5), is anomalous; therefore some term similar to (5.4) (in QCD) should appear. In the dynamical Higgs scenario for the $\chi\eta$ model, there are two bi-fermion condensates, $\langle\chi^{ij}\eta_{j}^{m}\rangle=c_{\chi\eta}\,\delta^{im}\,\Lambda^{3}\;,\qquad i,m=1,2,\ldots,N-4\;,$ (5.42) and $\langle\chi\chi\rangle\neq 0\;.$ (5.43) This implements a two-step breaking, $\displaystyle SU(N)\times SU(N-4)\times U(1)_{\chi\eta}$ $\displaystyle\xrightarrow{\langle\chi\eta\rangle}$ $\displaystyle SU(N-4)_{\rm cf}\times SU(4)_{\rm c}\times U(1)^{\prime}$ (5.44) $\displaystyle\xrightarrow{\langle\chi\chi\rangle}$ $\displaystyle SU(N-4)_{\rm cf}\times U(1)^{\prime}\;.$ As before, in order to construct a fully consistent effective action, one should keep the full invariance of the original theory, whether spontaneously broken or not. To do so, it is convenient to re-express the condensates (5.42) in a gauge-invariant way.
The answer is to write a single gauge-invariant condensate $\displaystyle U$ $\displaystyle=$ $\displaystyle\epsilon_{i_{1}i_{2}\ldots i_{N}}\epsilon_{m_{1}m_{2}\ldots m_{N-4}}(\chi\eta)^{i_{1}m_{1}}(\chi\eta)^{i_{2}m_{2}}\ldots(\chi\eta)^{i_{N-4}m_{N-4}}\chi^{i_{N-3}i_{N-2}}\chi^{i_{N-1}i_{N}}$ (5.45) $\displaystyle\sim$ $\displaystyle\epsilon\,(\chi\eta)^{N-4}(\chi\chi)\;,$ which encodes both of the two (gauge-dependent) ones. This choice suggests that the correct strong-anomaly effective action for the $\chi\eta$ model is $\frac{i}{2}q(x)\log\epsilon(\chi\eta)^{N-4}(\chi\chi)+{\rm h.c.}\;,$ (5.46) where, again, $q(x)$ is the topological density defined in Eq. (5.5). Clearly it is by construction invariant under the whole (nonanomalous) symmetry group $SU(N)_{\rm c}\times SU(N-4)\times U(1)_{\chi\eta}\;.$ (5.47) Let us comment briefly on some features suggesting that this is indeed the correct result.

* The argument of the logarithmic function here matches the correct number of the fermion zeromodes in the instanton background ($N_{\chi}=N-2$ and $N_{\eta}=N-4$);
* In contrast, there is no way of writing the strong-anomaly effective action (5.46) in terms of the ”baryons”, $B\sim\chi\eta\eta$, of the assumed confining, chirally symmetric phase (Appendix C). No combination of the baryons can saturate the correct number of the fermion zeromodes, cf. (5.45).

This anomaly effective action (5.46) agrees with the one proposed by Veneziano [7] for the case of $SU(5)$, and generalizes it to all $SU(N)$ $\chi\eta$ models. A key observation, which we share with [7] and generalize to models with any $N$, is that this strong-anomaly effective action, which should be present in the low-energy theory to reproduce correctly the (anomalous and nonanomalous) symmetries of the UV theory, implies nonvanishing condensates, $\langle\chi\eta\rangle\neq 0\;,\qquad\langle\chi\chi\rangle\neq 0\;,$ (5.48) i.e., that the system is in the dynamical Higgs phase, Appendix D. Up to now the story has been very similar to that of the $\psi\eta$ model. However there are some differences. Differently from the $\psi\eta$ model, where the baryon condensate must enter the strong-anomaly effective action, here the structure of the effective action simplifies, and no baryon is needed. Moreover, contrary to the $\psi\eta$ model, the $\chi\eta$ system has no physical $U(1)$ NG boson: it is eaten by a color $SU(N)$ gauge boson. However the counting of the broken and unbroken $U(1)$ symmetries is basically similar in the two models. Of the two nonanomalous symmetries ($U(1)_{\rm c}$ and $U(1)_{\chi\eta}$), one combination remains a manifest physical symmetry, and the other becomes the longitudinal part of a color gauge boson. Still another, anomalous, $U(1)$ symmetry exists, namely any combination of $U(1)_{\chi}$ and $U(1)_{\eta}$ other than $U(1)_{\chi\eta}$. This symmetry is also spontaneously broken, hence it must be associated with a NG boson, even though the latter gets a mass from the strong anomaly.
As in the $\psi\eta$ model, one can describe this situation explicitly, by expanding the composite $\chi\eta$ and $\chi\chi$ fields around their VEVs, $\displaystyle(\det U)^{\prime}=\langle(\det U)^{\prime}\rangle+\ldots\propto{\mathbf{1}}+i\frac{1}{F_{\pi}^{(0)}}\,\phi_{0}^{\prime}+\ldots\;;$ $\displaystyle\chi\chi=\langle\chi\chi\rangle+\ldots\propto{\mathbf{1}}+i\frac{1}{F_{\pi}^{(1)}}\,\phi_{1}^{\prime}+\ldots\;,$ (5.49) where $(\det U)^{\prime}$ is defined in the $(N-4)$-dimensional color-flavor mixed space, and $\chi\chi\equiv\epsilon_{i_{1},i_{2},i_{3},i_{4}}\chi^{i_{1}i_{2}}\chi^{i_{3}i_{4}}\;,\qquad N-3\leq i_{j}\leq N\;.$ (5.50) Now one can see that the strong-anomaly effective action (5.46) gives mass to ${\tilde{\phi}}^{\prime}\equiv N_{\pi}\left[\frac{1}{F_{\pi}^{(0)}}\phi_{0}^{\prime}+\frac{1}{F_{\pi}^{(1)}}\phi_{1}^{\prime}\right]\;,\qquad N_{\pi}=\frac{F_{\pi}^{(0)}F_{\pi}^{(1)}}{\sqrt{(F_{\pi}^{(0)})^{2}+(F_{\pi}^{(1)})^{2}}}\;,$ (5.51) whereas the orthogonal combination ${\phi}^{\prime}\equiv N_{\pi}\left[\frac{1}{F_{\pi}^{(1)}}\phi_{0}^{\prime}-\frac{1}{F_{\pi}^{(0)}}\phi_{1}^{\prime}\right]\;$ (5.52) remains massless. The latter corresponds to the potential NG boson which is absorbed by the color $T_{\rm c}$ gauge boson.

### 5.5 Generalized GG models and strong anomaly

Let us now turn to the generalized GG models [71], i.e. $SU(N)$ gauge theories with Weyl fermions $\displaystyle\chi^{[ij]}\,,\quad\eta_{i}^{A}\,\,,\quad\xi^{i,a}$ (5.53) in the direct-sum representation, $\yng(1,1)\oplus(N-4+p)\,{\bar{{\yng(1)}}}\;\oplus p\,{{{\yng(1)}}}\;.$ (5.54) It turns out that the simple structure of the strong-anomaly effective action (5.46), which does not need a bi-baryon condensate, works also in this case: $\frac{i}{2}q(x)\log\epsilon\chi\chi\det\,(\chi\eta)\det(\xi\eta)+{\rm h.c.}\;,$ (5.55) where the shorthand notation $\displaystyle\epsilon\chi\chi\det\,(\chi\eta)\det(\xi\eta)=$ $\displaystyle=$ $\displaystyle\epsilon_{i_{1}i_{2}\ldots i_{N}}\,\epsilon_{k_{1}k_{2}\ldots k_{p}}\,\epsilon_{m_{1}m_{2}\ldots m_{N-4+p}}$ $\displaystyle\times$ $\displaystyle(\chi\eta)^{i_{1}m_{1}}(\chi\eta)^{i_{2}m_{2}}\ldots(\chi\eta)^{i_{N-4}m_{N-4}}(\chi^{i_{N-3}i_{N-2}}\chi^{i_{N-1}i_{N}})(\xi\eta)^{m_{N-3}k_{1}}\ldots(\xi\eta)^{m_{N-4+p}k_{p}}\;$ has been used. The strong-anomaly action (5.55) requires the condensates $\displaystyle\langle\chi^{ij}\eta_{i}^{A}\rangle={\rm const}.\,\Lambda^{3}\delta^{jA}\neq 0\;,\qquad j=1,\dots,N-4\;,\quad A=1,\dots,N-4\;,$ $\displaystyle\langle\xi^{i,a}\eta_{i}^{B}\rangle={\rm const}.\,\Lambda^{3}\delta^{N-4+a,B}\neq 0\;,\qquad a=1,\dots,p\;,\quad B=N-3,\dots,N-4+p\;,$ and $\langle\chi^{j_{1},j_{2}}\chi^{j_{3},j_{4}}\rangle={\rm const}.\,\epsilon^{j_{1},j_{2},\ldots,j_{4}}\Lambda^{3}\neq 0\;,\qquad j_{1},\ldots,j_{4}=N-3,N-2,\ldots,N\;.$ (5.58) Hearteningly, this is exactly the set of condensates expected to occur in the dynamical Higgs phase of the GG models (Ref. [71] and Appendix H).

### 5.6 Strong anomaly in the chiral gauge theories considered in Sec. 3

Up to now we have analysed the implications of the strong anomaly in models discussed in Sec. 2.2 and in Sec. 4. In these models, being able to discriminate between different types of phases (confinement with unbroken global symmetry versus dynamical Higgs phase) was clearly very important, as the conventional ’t Hooft anomaly-matching algorithm could not tell us which type of vacuum is the correct one. We found indeed that both the recent generalized anomaly study (Sec. 2.2 and Sec.
4) and the strong-anomaly considerations (Secs. 5.2-5.5) seem to favor the dynamical Higgs phase in all these models. Of course, consideration of the strong-anomaly effective action is relevant also in other models. For illustration we discuss below a few models studied in Sec. 3.

#### 5.6.1 $SU(6)$ model with a single fermion in a self-adjoint antisymmetric representation

Consider an $SU(6)$ model with a single left-handed fermion in the representation, ${\underline{20}}\,=\,\yng(1,1,1)\,$ (5.59) which was studied in [59, 69] and reviewed in Sec. 3.1.1 above. The lessons learned from the gauging of the 1-form ${\mathbbm{Z}}_{3}^{\rm c}$ symmetry have been that the nonanomalous ${\mathbbm{Z}}_{6}^{\psi}\,$ symmetry must break spontaneously as ${\mathbbm{Z}}_{6}^{\psi}\longrightarrow{\mathbbm{Z}}_{2}^{\psi}\;,$ (5.60) implying a three-fold vacuum degeneracy [59, 69]. This could either be because of a four-fermion condensate [59] $\langle\psi\psi\psi\psi\rangle\sim\Lambda^{6}\neq 0\;,\qquad\langle\psi\psi\rangle=0\;,$ (5.61) or due to a gauge-symmetry breaking bi-fermion condensate [69] $\langle\psi\psi\rangle\sim\Lambda^{3}\neq 0\;,$ (5.62) with $\psi\psi$ in the adjoint representation of $SU(6)$. Both scenarios are consistent. Let us see if considerations on the strong anomaly can clarify which scenario is actually realized. A particularly simple representation of the strong anomaly is $\frac{i}{2}q(x)\log\psi\psi\psi\psi\psi\psi+{\rm h.c.}\;.$ (5.63) Based on our viewpoint that the argument of the logarithmic function acquires a nonvanishing VEV, the assumption of the four-fermion condensate (5.61) appears to lead to a difficulty: with $\langle\psi\psi\rangle=0$, the six-fermion argument cannot be saturated by the four-fermion condensate alone. In contrast, the assumption of bi-fermion condensates (5.62) looks perfectly consistent, with $\langle\psi\psi\psi\psi\psi\psi\rangle\sim\langle\psi\psi\rangle^{i}_{j}\langle\psi\psi\rangle^{j}_{k}\langle\psi\psi\rangle^{k}_{i}\neq 0\;.$ (5.64)

#### 5.6.2 Adjoint QCD with $N_{\rm c}=N_{\rm f}=2$

It is interesting to apply the same logic also to adjoint QCD, previously analyzed from the point of view of generalized symmetries and their anomalies. In particular let us focus on the $N_{\rm c}=2$, $N_{\rm f}=2$ case, because of the renewed interest in this particular model, raised by the work of Anber and Poppitz [56]. The conventional thinking holds that a gauge-invariant bi-fermion condensate $\langle\lambda\lambda\rangle\neq 0$ (5.65) forms, breaking the flavor symmetry as $SU(2)_{\rm f}\to SO(2)_{\rm f}$, leading to $2$ NG bosons, and reducing the discrete ${\mathbbm{Z}}_{8}$ symmetry to ${\mathbbm{Z}}_{2}$, resulting in four degenerate vacua. Anber and Poppitz’s proposal [56] is that, instead, the system might develop a four-fermion condensate but no bifermion condensate: $\langle\lambda\lambda\lambda\lambda\rangle\neq 0\;,\qquad\langle\lambda\lambda\rangle=0\;.$ (5.66) The discrete $\mathbbm{Z}_{8}$ symmetry is now broken to its $\mathbbm{Z}_{4}$ subgroup (therefore there are only two degenerate vacua). Massless baryons $\sim\lambda\lambda\lambda$ (5.67) (necessarily a doublet of $SU(2)_{\rm f}$) match the UV-IR Witten anomaly of $SU(2)_{\rm f}$. As said above, the two possibilities are both consistent with the generalized ’t Hooft anomaly matching; therefore an indication from the strong-anomaly effective action would be very welcome. The analogue of (5.4), (5.46) and (5.63) is in this case $\frac{i}{2}q(x)\log\lambda\lambda\ldots\lambda+{\rm h.c.}\;,$ (5.68) with eight $\lambda$’s inside the argument of the logarithmic function.
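A brief check of this counting, and of why it is inconclusive here (again a standard index-theorem count, added for convenience): an adjoint Weyl fermion has $2\,T({\rm adj})=2N_{\rm c}$ zero modes per flavor in a one-instanton background, giving $2N_{\rm c}N_{\rm f}=8$ $\lambda$'s for $N_{\rm c}=N_{\rm f}=2$. With eight fermions, however, both scenarios can give the argument of the logarithm a nonvanishing VEV,

$$\langle\lambda^{8}\rangle\sim\langle\lambda\lambda\rangle^{4}\neq 0\qquad{\rm or}\qquad\langle\lambda^{8}\rangle\sim\langle\lambda\lambda\lambda\lambda\rangle^{2}\neq 0\;,$$

unlike in the six-fermion case of Sec. 5.6.1.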
Therefore, in contrast to what we saw in the preceding model (Sec. 5.6.1), our strong-anomaly algorithm does not seem to be able to discriminate between the two dynamical possibilities, (5.65) and (5.66). Before concluding this section, we note that in the case with $N_{\rm f}=1$ and arbitrary $N_{\rm c}$, adjoint QCD becomes ${\cal N}=1$ supersymmetric Yang-Mills theory. The strong-anomaly effective action (5.68) with $2N_{\rm f}N_{\rm c}=2N_{\rm c}$ $\lambda$’s reduces precisely to the Veneziano-Yankielowicz effective potential [35], implying $\langle\lambda\lambda\rangle\neq 0$. In this case the assumption of the bi-fermion condensate, $\langle\lambda\lambda\rangle\neq 0$, the breaking of the discrete symmetry ${\mathbbm{Z}}_{2N_{\rm c}}\to{\mathbbm{Z}}_{2}$, and the resulting $N_{\rm c}$-fold degeneracy of the vacua (Witten’s index) are generally accepted as well-established facts.

## 6 Summary and discussion

We have reviewed in this article the first applications of generalized anomalies and discussed what the consequent stronger anomaly-matching conditions tell us about various chiral gauge theories based on the $SU(N)$ gauge group. Our discussion was divided into two classes of models. In the first, the system has a 1-form ${\mathbbm{Z}}_{k}$ symmetry ($k$ being a divisor of $N$) under which the matter fermions do not transform. The treatment in this case is relatively straightforward: certain discrete symmetries, respected by instantons, are often found to become anomalous, due to the fractional ’t Hooft fluxes accompanying the gauging of the discrete 1-form symmetries. The discussion of Sec. 3 has illustrated that their consequences depend nontrivially on the types of matter fermions present, and an interesting and rich variety of predictions on the possible condensates, symmetry breaking patterns and phases has been found. A second group of models (BY and GG models) has a color-flavor-locked ${\mathbbm{Z}}_{N}$ 1-form symmetry, in which the matter fermions transform together with the $SU(N)$ gauge field. A careful analysis of the global properties of the symmetry group is needed before actually introducing the gauging of this discrete 1-form symmetry. This has been worked out in detail in all of the generalized Bars-Yankielowicz and Georgi-Glashow models, and the results of the analysis were reviewed in Sec. 4. A surprising implication is that, at least for even-$N$ theories, the color-flavor-locked ${\mathbbm{Z}}_{2}-U(1)-{\mathbbm{Z}}_{N}$ 1-form symmetry does not allow its gauging (a new kind of ’t Hooft anomaly), and that this is consistent with the low-energy system being in a dynamical Higgs phase characterized by certain bifermion condensates. After the examination of the results from the generalized anomalies (involving the 1-form symmetries and the gauging of some discrete center symmetries), we changed topic and turned to the very recent observation [96] about effects associated with the strong anomaly. This idea was discussed in the context of the so-called $U(1)_{A}$ problem in QCD many years ago, but for some reason it was almost never applied to the discussion of the physics of strongly-coupled chiral gauge theories.
It is found that the requirement that the massless degrees of freedom of the hypothesized infrared phase should be able to describe appropriately the strong anomaly gives a rather clear indication of the physics of BY and GG models: the structure of the strong-anomaly effective action favors the dynamical Higgs vacua, against the confining, fully flavor-symmetric vacua, in agreement with the implications from the generalized anomaly-matching algorithm explored in the first part of this review. The fact that both mixed anomalies [70, 71] and the strong-anomaly effective action [96] imply a dynamical Higgs phase in chiral BY and GG models is certainly not accidental. Both arise from properly taking the strong chiral $U(1)$ anomalies into account. These discussions, rather unexpectedly, brought us to note certain analogies and contrasts between the strong-interaction dynamics of vector-like and chiral gauge theories [96]. Let us now compare standard QCD with $N_{\rm f}$ light flavors of quarks and antiquarks, and the $\psi\eta$, $\chi\eta$ models as well as the more general Bars-Yankielowicz and Georgi-Glashow models. In many senses, the bifermion condensates such as $U=\psi\eta$ in the $\psi\eta$ model (and the $\chi\eta$, $\chi\chi$ condensates in the $\chi\eta$ model) can be regarded as a perfect analogue of the quark condensate $U={\bar{\psi}}_{R}\psi_{L}$ in QCD. All of these composite scalars enter the strong-anomaly effective action in a similar way, as ${\hat{L}}=\frac{i}{2}q(x)\log\det U/U^{\dagger}\;,\qquad q(x)=\frac{g^{2}}{32\pi^{2}}F_{\mu\nu}^{a}{\tilde{F}}^{a,\mu\nu}\;.$ (6.1) (See Sec. 5 for more careful discussions.) And in all cases this implies condensation of $\langle U\rangle\propto{\mathbf{1}}$, i.e., the color-flavor-locked Higgs phase in the $\psi\eta$ or $\chi\eta$ models on the one hand, and the chiral-symmetry-broken vacuum in QCD, on the other. Another fact pointing to a similarity between massless QCD and the BY and GG models is the following. In general Bars-Yankielowicz models (with $p$ pairs of additional matter fermions), we saw that there are two natural bifermion condensate channels: $\displaystyle\psi\Big{(}\raisebox{-3.0pt}{\yng(2)}\Big{)}\,\eta\Big{(}\bar{\raisebox{-3.0pt}{\yng(1)}}\Big{)}\qquad\ $ $\displaystyle{\rm forming}\qquad\raisebox{-3.0pt}{\yng(1)}\;,$ $\displaystyle{\rm and}$ $\displaystyle\quad\xi\Big{(}{\raisebox{-3.0pt}{\yng(1)}}\Big{)}\,\eta\Big{(}\bar{\raisebox{-3.0pt}{\yng(1)}}\Big{)}\qquad\ $ $\displaystyle{\rm forming}\qquad(\cdot)\;:$ (6.2) the gluon-exchange strengths in the two channels are, respectively, proportional to $-\frac{(N+2)(N-1)}{N}$ and $-\frac{N^{2}-1}{N}$ (the standard one-gluon-exchange factors $C_{2}(r_{1})+C_{2}(r_{2})-C_{2}(r_{12})$ for the two channels). The $\psi\eta$ channel is slightly more attractive; the strengths become, however, identical in the large-$N$ limit. Note also that $\xi\eta$ has the same quantum numbers as ${\bar{\psi}}_{R}\psi_{L}$ in QCD. Similarly for the comparison between the condensates $\langle\chi\eta\rangle$ and $\langle\xi\eta\rangle$ in the Georgi-Glashow models. These considerations, based on the rather naïve MAC idea [4] and thus not rigorous, nevertheless support the idea that the quark condensates in QCD and the bifermion condensates in the chiral gauge theories under study in this review are really on a very similar footing. Of course, there are significant differences, or contrasts, between the vector-like and chiral gauge theories. The quark condensate $\langle{\bar{\psi}}_{R}\psi_{L}\rangle$ is a color singlet, $SU(N_{\rm f})_{L}\times SU(N_{\rm f})_{R}$ flavor matrix.
$\langle\psi\eta\rangle$ is instead in a color-flavor bifundamental form, which means that it breaks the color symmetry completely, and reduces (partially or totally) the unbroken flavor symmetry. The most important difference, however, is the existence of colored NG bosons in the $\psi\eta$ (or in the $\chi\eta$) models. This means that they couple linearly to the gauge boson fields, making the gauge bosons massive. These processes are absent in QCD, as all NG bosons are color singlets. It is in this sense that one talks about a confinement phase in QCD, in spite of the fact that the inter-quark confining strings can be broken by spontaneous quark-pair production from the vacuum. The mass spectra are also qualitatively different in QCD and in the chiral gauge theories discussed here. One difference is the presence of certain degenerate massive vector bosons (reflecting the color-flavor-locked $SU(N)_{\rm cf}$ symmetry) found in the chiral gauge theories in the Higgs phase. But especially the massless spectrum exhibits striking differences. In all chiral gauge theories studied here, it contains in general both composite fermions (baryons) and composite scalars (pions), a feature certainly not shared by massless QCD. In other words, the way the chiral symmetries of the theory are realized in the IR is notably different in vector-like and chiral gauge theories. It is possible to see a closer analogy, from a formal point of view, between the vector-like and chiral theories if one considers the color-superconductivity phase in the high-density region of QCD [100, 101]. The dynamics of QCD in that phase is believed to be such that some colored di-quark condensates $\langle\psi_{L}\psi_{L}\rangle\neq 0\;,\qquad\langle\psi_{R}\psi_{R}\rangle\neq 0\;.$ (6.3) form. In particular, in the case with $N_{\rm f}=3$ flavors these are condensates of color-flavor diagonal form, showing some similarity to $\langle\psi\eta\rangle$ or $\langle\chi\eta\rangle$ in the chiral theories discussed here. Of course, the details of the dynamics will be quite different. Summarizing, the implications of the new, mixed anomalies and the associated stricter anomaly-matching constraints reviewed in this article, together with the consideration of the strong-anomaly effective actions, seem to give us a clearer picture of the infrared dynamics of many strongly-coupled chiral gauge theories than was available before. It remains to be seen whether some of these developments will turn out to be useful in a future effort to construct a realistic theory of Nature beyond the standard Glashow-Weinberg-Salam-QCD model of the fundamental interactions.

## Acknowledgments

The work is supported by the INFN special research initiative grant, “GAST” (Gauge and String Theories).

## References

* [2] R. P. Feynman and M. Gell-Mann, “Theory of Fermi interaction,” Phys. Rev. 109, 193-198 (1958).
* [3] G. ’t Hooft, “Naturalness, Chiral Symmetry, and Spontaneous Chiral Symmetry Breaking,” in Recent Developments In Gauge Theories, Eds. G. ’t Hooft, C. Itzykson, A. Jaffe, H. Lehmann, P. K. Mitter, I. M. Singer and R. Stora, (Plenum Press, New York, 1980) [Reprinted in Dynamical Symmetry Breaking, Ed. E. Farhi et al. (World Scientific, Singapore, 1982) p. 345 and in G. ’t Hooft, Under the Spell of the Gauge Principle, (World Scientific, Singapore, 1994), p. 352].
* [4] S. Raby, S. Dimopoulos, and L. Susskind, “Tumbling Gauge Theories,” Nucl. Phys. B 169, 373 (1980).
* [5] S. Dimopoulos, S. Raby and L. Susskind, “Light Composite Fermions,” Nucl. Phys.
B 173, 208-228 (1980).
* [6] I. Bars and S. Yankielowicz, “Composite quarks and leptons as solutions of anomaly constraints,” Phys. Lett. B 101, 159 (1981).
* [7] G. Veneziano, “Tumbling and the Strong Anomaly,” Phys. Lett. B 102, 139-143 (1981).
* [8] J. Goity, R. D. Peccei and D. Zeppenfeld, “Tumbling and Complementarity in a Chiral Gauge Theory,” Nucl. Phys. B 262, 95 (1985).
* [9] E. Eichten, R. D. Peccei, J. Preskill and D. Zeppenfeld, “Chiral Gauge Theories in the 1/n Expansion,” Nucl. Phys. B 268, 161 (1986).
* [10] C. Q. Geng and R. E. Marshak, “Two Realistic Preon Models With SU($N$) Metacolor Satisfying Complementarity,” Phys. Rev. D 35, 2278 (1987).
* [11] T. Appelquist, A. G. Cohen, M. Schmaltz and R. Shrock, “New constraints on chiral gauge theories,” Phys. Lett. B 459, 235 (1999) [hep-th/9904172].
* [12] T. Appelquist, Z. y. Duan and F. Sannino, “Phases of chiral gauge theories,” Phys. Rev. D 61, 125009 (2000) [hep-ph/0001043].
* [13] M. Shifman and M. Ünsal, “On Yang-Mills Theories with Chiral Matter at Strong Coupling,” Phys. Rev. D 79, 105010 (2009) [arXiv:0808.2485 [hep-th]].
* [14] E. Poppitz and Y. Shang, “Chiral Lattice Gauge Theories Via Mirror-Fermion Decoupling: A Mission (im)Possible?,” Int. J. Mod. Phys. A 25, 2761 (2010) [arXiv:1003.5896 [hep-lat]].
* [15] A. Armoni and M. Shifman, “A Chiral SU(N) Gauge Theory Planar Equivalent to Super-Yang-Mills,” Phys. Rev. D 85, 105003 (2012) [arXiv:1202.1657 [hep-th]].
* [16] Y. L. Shi and R. Shrock, “$A_{k}\bar{F}$ chiral gauge theories,” Phys. Rev. D 92, 105032 (2015) [arXiv:1510.07663 [hep-th]].
* [17] Y. L. Shi and R. Shrock, “Renormalization-Group Evolution and Nonperturbative Behavior of Chiral Gauge Theories with Fermions in Higher-Dimensional Representations,” Phys. Rev. D 92, 125009 (2015) [arXiv:1509.08501 [hep-th]].
* [18] S. Bolognesi, K. Konishi and M. Shifman, “Patterns of symmetry breaking in chiral QCD,” Phys. Rev. D 97, no. 9, 094007 (2018) [arXiv:1712.04814 [hep-th]].
* [19] S. Bolognesi and K. Konishi, “Dynamics and symmetries in chiral $SU(N)$ gauge theories,” Phys. Rev. D 100, no. 11, 114008 (2019) [arXiv:1906.01485 [hep-th]].
* [20] L. E. Ibanez and G. G. Ross, “Discrete gauge symmetry anomalies,” Phys. Lett. B 260, 291 (1991).
* [21] C. Csaki and H. Murayama, “Discrete anomaly matching,” Nucl. Phys. B 515, 114 (1998) [hep-th/9710105].
* [22] G. Cacciapaglia, S. Vatani and Z. W. Wang, “Tumbling to the Top,” arXiv:1909.08628 [hep-ph].
* [23] C. Vafa and E. Witten, “Restrictions on Symmetry Breaking in Vector-Like Gauge Theories,” Nucl. Phys. B 234, 173-188 (1984).
* [24] C. Vafa and E. Witten, “Parity Conservation in QCD,” Phys. Rev. Lett. 53, 535 (1984).
* [25] L. Del Debbio, “Recent progress in simulations of gauge theories on the lattice,” J. Phys. Conf. Ser. 640, no. 1, 012049 (2015), and references therein.
* [26] C. Bonati, M. D’Elia, M. Mariti, M. Mesiti, F. Negro and F. Sanfilippo, “Curvature of the chiral pseudocritical line in QCD: Continuum extrapolated results,” Phys. Rev. D 92, no. 5, 054503 (2015) [arXiv:1507.03571 [hep-lat]].
* [27] F. Karsch and M. Lutgemeier, “Deconfinement and chiral symmetry restoration in an SU(3) gauge theory with adjoint fermions,” Nucl. Phys. B 550, 449-464 (1999) [arXiv:hep-lat/9812023 [hep-lat]].
* [28] A. Athenodorou, E. Bennett, G. Bergner and B. Lucini, “Infrared regime of SU(2) with one adjoint Dirac flavor,” Phys. Rev. D 91, no. 11, 114508 (2015) [arXiv:1412.5994 [hep-lat]].
# On the Nature of Spatial Universes in 3D Lorentzian Quantum Gravity

J. Brunekreef and R. Loll

Institute for Mathematics, Astrophysics and Particle Physics, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands.

email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>

###### Abstract

Three-dimensional Lorentzian quantum gravity, expressed as the continuum limit of a nonperturbative sum over spacetimes, is tantalizingly close to being amenable to analytical methods, and some of its properties have been described in terms of effective matrix and other models. To gain a more detailed understanding of three-dimensional quantum gravity, we perform a numerical investigation of the nature of spatial hypersurfaces in three-dimensional Causal Dynamical Triangulations (CDT). We measure and analyze several quantum observables of the spatial slices: the entropy exponent, the local and global Hausdorff dimensions, and the quantum Ricci curvature; we then try to match them with known continuum properties of systems of two-dimensional quantum geometry. Above the first-order phase transition of CDT quantum gravity, we find strong evidence that the spatial dynamics lies in the same universality class as two-dimensional Euclidean (Liouville) quantum gravity. Below the transition, the behaviour of the spatial slices does not match that of any known quantum gravity model. This may indicate the existence of a new type of two-dimensional quantum system, induced by the more complex nature of the embedding three-dimensional quantum geometry.

###### Contents

1. Introduction
2. Three-dimensional CDT quantum gravity
3. Implementation
4. Geometric observables on spatial hypersurfaces
   * 4.1 Vertex order
   * 4.2 Entropy exponent
   * 4.3 Hausdorff dimension
     * 4.3.1 Local Hausdorff dimension
     * 4.3.2 Global Hausdorff dimension
     * 4.3.3 Discussion of results
   * 4.4 Curvature profile
5. Summary and conclusion
* A Estimating the entropy exponent
* B Determining confidence intervals for best fit parameters
  * B.1 Best fit parameter estimation
  * B.2 Confidence intervals
  * B.3 Potential caveats

## 1 Introduction

A complete theory of quantum gravity may offer insights into how the spacetime we observe and inhabit can emerge from first principles. The nonperturbative gravitational path integral is a promising route toward such a theory, formulated within a purely quantum field-theoretic setting [1]. If one is interested in concrete Planckian or near-Planckian results in the full, four-dimensional theory, like information on the spectra of diffeomorphism-invariant observables, Causal Dynamical Triangulations or CDT quantum gravity [2, 3] is arguably the path integral approach that is furthest developed. Recall that the continuum path integral for pure gravity is given by $Z=\int\limits_{{\cal G}(M)}\\!\\!{\cal D}[g]\,{\rm e}^{\,iS^{\rm EH}[g]},\;\;\;\;\;\;S^{\rm EH}[g]=\frac{1}{16\pi G_{\rm N}}\,\int\limits_{M}d^{4}x\,\sqrt{-\det(g)}\,(R-2\Lambda),$ (1) where ${\cal G}(M)$ denotes the space of diffeomorphism-equivalence classes $[g]$ of Lorentzian metrics $g_{\mu\nu}(x)$ on the manifold $M$, and $S^{\rm EH}$ is the Einstein-Hilbert action. In the CDT set-up this formal expression is given a precise meaning, namely, as the continuum limit of a regularized version of (1), with ${\cal G}(M)$ approximated by a space of piecewise flat Lorentzian spacetimes. Although the primary physical interest is in spacetime dimension $D\\!=\\!4$, the CDT path integral has also been studied in two and three dimensions.
Hallmarks of this strictly nonperturbative approach are (i) the presence of a well-defined analytic continuation or “Wick rotation”, mapping the regularized path integral to a real partition function, which enables its analytical evaluation in $D\\!=\\!2$ [4] and numerical evaluation in $D\\!=\\!2$, 3 and 4 [5, 6], (ii) its formulation on a space of geometries, avoiding the need to gauge-fix the diffeomorphism symmetry and isolate its physical degrees of freedom, (iii) following the logic of critical phenomena, a high degree of uniqueness and universality if a continuum limit can be shown to exist, (iv) a nonperturbative cure of the conformal divergence, which by default renders Euclidean path integrals in $D\\!\geq\\!3$ ill defined [7], and (v) unitarity, in the form of reflection positivity of the regularized path integral, with respect to a notion of discrete proper time [6, 2]. In terms of results in $D\\!=\\!4$, in addition to the presence of second-order phase transitions [8, 9, 10], necessary for the existence of a continuum limit, an important finding of CDT is the emergence of an extended four-dimensional universe [11, 12]. With the standard choice $M\\!=\\!S^{1}\\!\times\\!S^{3}$ for the topology, in terms of the quantum observables measured so far (spectral and Hausdorff dimensions [13, 14], shape of the universe, including quantum fluctuations [15, 16], average Ricci curvature [17]), its behaviour on sufficiently coarse-grained scales is compatible with that of a de Sitter universe. This is remarkable because it represents nontrivial evidence of a classical limit, one of the high hurdles to clear for any nonperturbative and manifestly background-independent approach to quantum gravity. After Wick rotation, the gravitational path integrals of CDT become partition functions of statistical systems, whose elementary geometric building blocks (flat $D$-dimensional simplices, see Sec. 2 for further details) are assembled into piecewise flat manifolds $T$ – the triangulations – each one contributing with a Boltzmann weight $\exp(-S^{\rm EH}[T])$ (note that $S^{\rm EH}[T]$ is the so-called bare action of the regularized theory, depending on bare coupling constants, which in the continuum limit will typically undergo renormalization). There are a couple of reasons why such seemingly simple ingredients can give rise to interesting continuum theories of quantum gravity and quantum geometry. On the one hand, there is the highly nontrivial combinatorics of how the simplicial building blocks can be glued together to yield distinct curved spacetimes $T$. Especially in dimension $D\\!\geq\\!3$, this reflects the complexities of local geometry and curvature, already familiar from the classical theory. On the other hand, there is a complicated interplay between “energy” (the bare action) and “entropy” (the number of distinct triangulations for a given value of the bare action), which depends on the values of the bare coupling constants, i.e. the point in phase space at which the path integral is evaluated. An enormous amount has been learned about such nonperturbative systems of geometry over the last 35 years, beginning with the Euclidean analogue and precursor of the Lorentzian CDT theory, based on Euclidean dynamical triangulations or dynamical triangulations (DT) for short [18, 19, 20].
A crucial role in the exploration of these systems has been played by Monte Carlo methods, which are employed to numerically evaluate the path integral and expectation values of observables by importance sampling [21, 22]. This is also true in dimension $D\\!=\\!2$, where in addition a variety of nonperturbative analytical solution techniques are available, e.g. combinatorial, matrix model and transfer matrix methods [23, 4, 24], leading to compatible results. Monte Carlo simulations should be seen as numerical experiments, providing tests and feedback for the construction of the theory. For full quantum gravity, the quantitative information on the nonperturbative sector obtained from numerical analysis is extremely valuable, since it cannot currently be substituted by anything else. Although we do not know in detail what a theory of quantum gravity will eventually look like, it seems unlikely that it will come in closed analytic form. A potential scenario would be akin to QCD, where we manage to extract nonperturbative information about the theory’s spectrum (of suitable quantum-geometric observables) with ever greater accuracy, using a background-independent analogue of lattice gauge theory such as (C)DT. Despite its conventional, quantum field-theoretic setting and the absence of any exotic ingredients, this type of lattice gravity has already uncovered unexpected features of strongly quantum-fluctuating geometry, like the dynamical dimensional reduction of spacetime near the Planck scale [13], which is conjectured to be universal [25]. The focus of the present work will be the Lorentzian CDT path integral in three spacetime dimensions [26]. More specifically, as a stepping stone towards a more detailed geometric understanding of this quantum gravity model, we will investigate the geometry of its two-dimensional spatial hypersurfaces. A key question is whether in a continuum limit the behaviour of these surfaces falls into one of the known universality classes [27] of nonperturbative quantum gravity in two dimensions, or whether there is evidence for a different type of quantum dynamics. The two universality classes in question are that of (the scaling limit of) two-dimensional DT [18, 20], which also contains Liouville quantum gravity, and that of two-dimensional CDT quantum gravity [4, 28]. Our study will be numerical in nature, but – depending on the outcome – may well provide input for further analytical work, which could be technically feasible because of the effective two-dimensional character of the spatial slices. Note that we do not claim that there is a direct physical interpretation of the properties of these spatial geometries from a three-dimensional point of view (inasmuch as a lower-dimensional toy model of quantum gravity can be called “physical” at all). Although in our set-up a spatial slice at constant proper time is an invariantly defined concept (it is defined as the set of all points at a given proper-time distance to a given initial spatial surface or an initial singularity, in the spirit of similar constructions in the continuum [29]; note also that the proper-time slicing is not related to any gauge-fixing, since the CDT set-up is manifestly diffeomorphism-invariant, see e.g. [3] for a detailed discussion), it is not clear to what extent its properties can be thought of as “observable”, because of the highly nonlocal construction of the hypersurfaces and because of their singular nature (“moments in time”) from the point of view of the quantum theory.
To obtain true quantum observables in a three-dimensional, spacetime sense would presumably require some smearing in the time direction. Nevertheless, our measurements within the slices of constant time are perfectly well defined operationally and give us a quantitative handle on the influence of the three-dimensional quantum geometry in which the spatial slices are embedded, as we will demonstrate. We will start by investigating the distribution of the vertex order in the slices, which counts the number of spatial edges meeting at a vertex. This quantity is not per se related to a continuum observable, but can be compared with known exact results for the ensembles of two-dimensional DT and CDT geometries. The core of the paper consists of measuring and analyzing the following quantum observables: (i) the entropy exponent $\gamma$, also known as the string susceptibility, which determines the subexponential growth of the partition function at fixed two-volume $A$, as a function of $A$; (ii) the Hausdorff dimension $d_{H}$, obtained by comparing volumes with their linear extension, where we distinguish between a local and a global variant; (iii) the so-called curvature profile $\mathcal{R}(\delta)$ of the spatial slices, measuring the average quantum Ricci curvature [30] of the surfaces as a function of a linear coarse-graining scale $\delta$. We find convincing evidence that the effective dynamics of the spatial slices in the so-called degenerate phase of three-dimensional CDT quantum gravity is described by two-dimensional DT quantum gravity. However, we do not find a match with any known two-dimensional system of quantum geometry in the so-called de Sitter phase, where the dynamics of the hypersurfaces is much richer due to the nontrivial influence of the embedding three-geometry. Further research is needed to determine the continuum nature of the effective spatial dynamics in this phase. The remainder of the paper is structured as follows. In the next section, we recall the main ingredients of CDT quantum gravity in $D\\!=\\!3$ and review previous research on the subject, and what it has revealed about its phase structure and physical characteristics. In Sec. 3, we discuss the numerical implementation of the three-dimensional CDT path integral in terms of Markov chain Monte Carlo methods. Sec. 4 contains a detailed description of the properties of the spatial slices that we have studied numerically. We present the results of our measurements, and describe the overall picture that emerges from them. In Sec. 5 we summarize and discuss our findings. A couple of technical discussions have been relegated to appendices, to improve the readability of the main part of the paper.

## 2 Three-dimensional CDT quantum gravity

Quantum gravity in three spacetime dimensions [31] provides an interesting test case for the full gravitational path integral. Although the pure gravity theory does not have any local propagating degrees of freedom, the path integral has the same functional form in terms of the three-dimensional metric as its four-dimensional counterpart (1), and therefore looks equally ill-behaved with regard to its behaviour under renormalization.
How to reconcile the difficulties of solving this metric path integral with the “topological” nature of three-dimensional gravity (more precisely, the physical degrees of freedom of three-dimensional gravity are global modes of the metric, described by Teichmüller parameters, which are present when the genus of the spatial slices is larger than or equal to 1; the present work uses spherical slices, without such parameters), which leads to considerable simplifications in a first-order, Chern-Simons formulation, without any quantum field-theoretic divergences, is only partially understood (see [32] for a discussion). The CDT formulation has thrown some light on the nonperturbative aspects of this question, uncovering both similarities and differences between the three-dimensional and the physical, four-dimensional theory [26, 33, 34]. For a better understanding of the issues involved and to set the stage for the main part of the paper, let us briefly recall the set-up in three dimensions. After applying the Wick rotation mentioned in the previous section, the regularized CDT path integral in $D\\!=\\!3$ takes the form of a partition function $Z=\sum\limits_{\text{triang.}\,T}\frac{1}{C_{T}}\,{\rm e}^{-S^{\rm EH}[T]},\;\;\;\;\;\;\;S^{\rm EH}[T]=-k_{0}N_{0}(T)+k_{3}N_{3}(T),$ (2) where $S^{\rm EH}[T]$ denotes the Regge form of the Einstein-Hilbert action on the piecewise flat triangulation $T$, $N_{0}(T)$ and $N_{3}(T)$ are the numbers of vertices (“zero-simplices”) and tetrahedra (“three-simplices”), and $C_{T}$ is the order of the automorphism group of $T$. The coupling $k_{0}$ is proportional to the inverse bare Newton constant and $k_{3}$ depends linearly on the bare cosmological constant (see [26] for details). The sum is taken over simplicial manifolds of a given, fixed topology, which in our case will be $S^{1}\\!\times\\!S^{2}$, a periodically identified time interval times a two-sphere. The triangulated configurations $T$ of the Lorentzian CDT path integral have a discrete product structure, representing a simplicial version of global hyperbolicity, and are assembled from flat, Minkowskian tetrahedra [2, 3]. A given spacetime geometry can be thought of as a sequence of two-dimensional curved, spacelike triangulations, labeled by an integer proper time $t=1,2,3,\dots,t_{\rm tot}$, and made of equilateral triangles. The spacetime volume between each pair of adjacent constant-time slices is completely filled in with tetrahedra, resulting in a “sandwich” of simplicial three-dimensional spacetime with topology $[0,1]\times S^{2}$. The tetrahedral edges linking neighbouring spatial slices are _timelike_ (and all of equal length), while the edges lying within a spatial slice are of course _spacelike_ (and also of equal length). We can therefore classify the building blocks according to their constituent vertices. A tetrahedron of type $(p,q)$ is defined as having $p$ vertices in slice $t$ and $q$ vertices in slice $t\\!+\\!1$, giving rise to the types (1,3), (2,2) and (3,1), as illustrated by Fig. 1. Note that up to time reversal a (1,3)- and a (3,1)-tetrahedron are geometrically identical. Since the analytic continuation only affects the edge length assignments and not the topology of the triangulation, this characterization of the tetrahedra continues to be meaningful after the Wick rotation. Figure 1: Three types of tetrahedral building blocks of three-dimensional CDT: type (1,3) (left), type (2,2) (centre), and type (3,1) (right).
Note that the two-dimensional triangulations at times $t$ and $t\\!+\\!1$ are not drawn isometrically; they are in general curved surfaces. The key findings of the original, mostly numerical investigation of three-dimensional CDT quantum gravity on $S^{1}\\!\times\\!S^{2}$ inside the range $k_{0}\\!\in\\![3,7]$ were as follows [26, 33, 34]. After fine-tuning the bare “cosmological” constant $k_{3}$ to its critical value from inside the region of convergence of the partition function $Z$ (this region exists because the number of three-dimensional CDT configurations is exponentially bounded as a function of the discrete volume $N_{3}$ [35]), a two-phase structure was found, consisting of what we shall call a _degenerate phase_ for $k_{0}\\!\geq k_{0}^{\rm c}$ and a _de Sitter phase_ for $k_{0}\\!\leq k_{0}^{\rm c}$. These two phases are very reminiscent of corresponding phases in four-dimensional CDT quantum gravity [2] with regard to their volume profiles, i.e. the behaviour of their spatial volume $V_{2}$ as a function of the proper time $t$. In the degenerate phase, $V_{2}(t)$ oscillates wildly, indicating that spacetime disintegrates into a sequence of uncorrelated two-dimensional geometries (see also Fig. 4 below). By contrast, in the de Sitter phase a nontrivial “blob” forms when the time extension is chosen sufficiently large, whose shape matches that of a Euclidean de Sitter space (with $\langle V_{2}(t)\rangle\propto\cos^{2}(\mathit{c}\,t)$), analogous to what has been observed in $D\\!=\\!4$ (volume profiles for nonperiodic boundary conditions in time, with random, fixed two-spheres of various sizes as spatial boundaries, have been considered in [36, 37]). This constitutes nontrivial evidence that a well-defined and macroscopically three-dimensional ground state of geometry (in the sense of minimizing the effective Euclidean action governing the nonperturbative quantum dynamics) exists nonperturbatively. However, unlike in four dimensions the transition at the critical point $k_{0}^{c}$ appears to be a first- and not a second-order phase transition, and no fine-tuning of the inverse gravitational coupling $k_{0}$ is needed to obtain a continuum limit. This is in line with the expectation that no higher-order transitions are present, due to the absence of propagating degrees of freedom. Following these results, three-dimensional CDT quantum gravity has been studied from various perspectives. Considerable effort has been focused on the transfer matrix associated with a single time step $\Delta t\\!=\\!1$, which captures the amplitude of going from one spatial two-geometry to an adjacent one. More precisely, one usually considers a simpler, reduced transfer matrix, whose in- and out-states are labelled by the spatial two-volume (and possibly Teichmüller parameters). Given our knowledge about the physical degrees of freedom of three-dimensional gravity, these are the parameters that are _expected_ to be the only relevant ones in the continuum limit. The sandwich geometries contributing to the transfer matrix are closer to two-dimensional quantities and therefore potentially more amenable to an analytic treatment. In this spirit, a variant of the model was introduced in [38], in which the (1,3)- and (3,1)-building blocks are substituted by (1,4)- and (4,1)-pyramids, something that is not expected to affect the universal properties of the model.
The motivation for considering this variant is that taking a midsection of a sandwich geometry at half-integer time yields a quadrangulation, whose dual graph is a configuration described by a Hermitian two-matrix model with $ABAB$-interaction, for which analytical results are available. However, the bicoloured graph configurations generated by the matrix model form a much larger class than those coming from CDT sandwich geometries, and correspond to geometries that in general violate the simplicial manifold conditions of the two-dimensional slices and of the interpolating three-dimensional piecewise flat geometries in specific ways. Three of these four conditions (following the enumeration in [38]) are considered mild, in the sense that violating them is conjectured not to affect the universality class of the CDT model. As corroborating evidence, [38] cites new numerical simulations of the CDT model in $D\\!=\\!3$, where these conditions are relaxed, but which nevertheless reproduce the results found in the de Sitter phase of the earlier work that used strict simplicial manifolds [26]. Interestingly, they note that the degenerate phase completely disappears in the simulations of this generalized variant of CDT quantum gravity, and conjecture that the presence of this phase constitutes a discretization artefact. Although our present work does not directly address these various conjectures (and works with simplicial manifolds only), our results suggest that there may be more scope for different universality classes in three dimensions than has been considered up to now, and that it may be fruitful to re-examine the influence of regularity conditions on continuum results in greater detail. (Of course, not all universality classes may be associated with interesting models of quantum gravity.) A similar sentiment was expressed in [39], which gives a precise characterization of the bicoloured two-dimensional cell complexes associated with midsections of CDT geometries with spherical and disc-like spatial slices. The configurations described by the $ABAB$-matrix model violate also a fourth regularity condition [38], which is associated with a considerable enlargement of the space of three-geometries. It allows for the appearance of spatial wormholes and a new phase, not present in standard CDT quantum gravity, where these wormholes are abundant. Whether this phase is interesting from a physics point of view remains to be understood. The association of CDT quantum gravity with the $ABAB$-matrix model was also used to analyze the behaviour of the bare coupling constants of the former under renormalization [40]. An asymmetric version of the matrix model was studied in [41], motivated by the search for a Hamiltonian associated with the reduced transfer matrix. Without invoking matrix models, a continuum Hamiltonian of this kind was derived for the first time in [42], for spatial slices of cylinder topology, albeit for a sub-ensemble of CDT configurations with certain ordering restrictions. The effective action for the two-volumes of spatial slices with toroidal topology was investigated in [43], and the dynamics of its Teichmüller parameters in [44, 45]. The phase structure of a one-dimensional balls-in-boxes model, meant to capture the effective dynamics of the two-volume of CDT quantum gravity, was analyzed in [46], and shown to reproduce certain features of CDT as well as Hořava-Lifshitz-inspired gravity models (see also [47]). 
The spectral dimension of the CDT model in the de Sitter phase was measured and found to be compatible with the classical value of 3 on large scales and to exhibit a dynamical dimensional reduction to a value compatible with 2 on short scales, similar to what happens in CDT for $D\\!=\\!4$ [48]. Lastly, a generalized model of CDT quantum gravity with causally well-behaved configurations, but without a preferred proper-time slicing, was defined and investigated numerically, and found to reproduce the volume profile of a de Sitter space [49, 50].

## 3 Implementation

Expectation values of geometric observables $\mathcal{O}$ in CDT quantum gravity are computed as $\langle\mathcal{O}\rangle=\frac{1}{Z}\,\sum_{T}\frac{1}{C_{T}}\,\mathcal{O}[T]\,{\rm e}^{-S^{\textrm{EH}}[T]},$ (3) where $Z$ is the partition function defined in (2). As already mentioned, the focus of our present work is a set of observables pertaining to the two-dimensional spatial triangulations of constant integer time $t$ of the three-dimensional CDT configurations, which will be the subject of Sec. 4. Since it is not known how to compute $Z$ analytically in three dimensions, we will compute statistical estimates of the expectation value of an observable by sampling the CDT ensemble through Monte Carlo simulations (our implementation code can be found at [51]). In these simulations, we construct a random walk in the ensemble of CDT geometries by performing local updates (“moves”) on a triangulation. The basic set of moves we used is shown in Fig. 2, see also [26]. If we impose so-called detailed balance [52] on the updating procedure, by accepting or rejecting such moves with an appropriate probability, this random walk corresponds to a sample of the ensemble where geometries appear with a relative rate according to their Boltzmann weight (a minimal accept/reject sketch is given later in this section). Since subsequent geometries in a random walk are almost identical, we must perform a large number of local moves on a given geometry to obtain a new and sufficiently independent one. This procedure is iterated to obtain a sequence of independent triangulations, and an estimate of the expectation value of an observable is computed as the weighted average (3) over this sequence. To study the continuum properties of observables, the number of building blocks should be taken to infinity. Since this is impossible in practice, due to the finiteness of our computational resources, we use finite-size scaling methods [22] to estimate the behaviour of the system in the continuum limit. For more details on computer simulations of three-dimensional CDT we refer the interested reader to [33]. Figure 2: Three basic local moves in 3D CDT quantum gravity together with their inverses, and their effect on spatial slices. Spacelike edges are drawn in blue and timelike ones in red. Top left: subdivision of a spatial triangle into three; top right: flip of a spatial edge. The move at the bottom does not affect the spatial triangulations (time inverse not shown). For a given value of the gravitational coupling $k_{0}$, the cosmological coupling $k_{3}$ is always tuned to its $k_{0}$-dependent pseudocritical value (which in the limit $N_{3}\\!\rightarrow\\!\infty$ would become the critical value $k_{3}^{c}(k_{0})$), which means that we are investigating a one-dimensional phase space parametrized by $k_{0}$. The location of the critical point $k_{0}^{c}$ along this line, associated with the first-order transition mentioned in Sec. 2, is not a universal quantity.
For example, it depends on the regularity conditions imposed on the ensemble, and the time extension $t_{\rm tot}$ of the geometries [26]. In our analysis of the spatial slices we will use standard CDT simplicial manifolds (the effects of relaxing the local manifold constraints have been investigated further in [53], where it was found that the order of the phase transition is likely unchanged, although the location of the critical point shifts to a smaller value as the restrictions are loosened) and $t_{\rm tot}\\!=\\!3$ with periodic boundary conditions in time, for which we have found $k_{0}^{c}\\!\approx\\!6.24$. Figure 3: Expectation value of the order parameter $O_{2}\\!=\\!N_{22}/N_{31}$ as a function of the bare coupling $k_{0}$, exhibiting a first-order phase transition at $k_{0}^{c}\\!\approx\\!6.24$. To the left of the transition is the de Sitter phase, and to its right the degenerate phase. Our measurements on spatial slices are taken for the $k_{0}$-values $0.0$, $5.0$ and $8.0$, as indicated. A convenient order parameter to locate the phase transition is the ratio $O_{2}(T)=\frac{N_{22}(T)}{N_{31}(T)},$ (4) where $N_{22}(T)$ and $N_{31}(T)$ denote the numbers of (2,2)- and (3,1)-simplices of the triangulation $T$ respectively. Its expectation value $\left\langle O_{2}\right\rangle$ is nonvanishing for small $k_{0}$ and drops to zero rapidly as the transition point $k_{0}^{c}$ is approached, beyond which it remains zero for all values of $k_{0}\\!>\\!k_{0}^{c}$ we have investigated. The measured values of $\left\langle O_{2}\right\rangle$ as a function of $k_{0}$ are shown in Fig. 3, obtained in a system with $N_{3}\\!=\\!64.000$ and $t_{\rm tot}=3$. The regions to the left and right of the transition correspond to the de Sitter and degenerate phases introduced earlier. Snapshots of typical volume profiles $V_{2}(t)$, counting the number of triangles in the spatial slice at time $t$, are depicted in Fig. 4 for a system with $N_{31}\\!=\\!16.000$ and $t_{\rm tot}\\!=\\!32$. The volumes of neighbouring slices in the degenerate phase are largely uncorrelated, while they tend to align in the de Sitter phase. When taking an ensemble average of the latter with the “centres of volume” aligned, the expectation value $\langle V_{2}(t)\rangle$ matches that of a three-dimensional Euclidean de Sitter universe in a proper-time parametrization [26] (a minimal fit illustration is sketched below). Figure 4: Volume profiles $V_{2}(t)$ of typical configurations appearing in the de Sitter phase (left) and the degenerate phase (right) of the three-dimensional CDT model on $S^{1}\\!\times\\!S^{2}$. We will perform measurements of the spatial slices for three different $k_{0}$-values, two in the de Sitter phase at $k_{0}\\!=\\!0.0$ and $5.0$ and one in the degenerate phase at $k_{0}\\!=\\!8.0$, cf. Fig. 3. The volumes $N_{31}$ of the systems we investigate take values in the range $[1.000,96.000]$. In choosing these particular values of $k_{0}$, we are staying away from the direct vicinity of the first-order transition at $k_{0}^{c}\\!\approx\\!6.24$, to avoid the system jumping between the two phases during the simulation. The two points chosen in the de Sitter phase are spaced well apart, while taking into account that the Monte Carlo algorithm becomes increasingly inefficient as $k_{0}$ is lowered. It may be worth pointing out that the expression for the discretized, bare Einstein-Hilbert action in (2) is highly non-unique, and that the value $k_{0}\\!=\\!0.0$ is therefore in no way physically distinguished.
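As a minimal illustration of the de Sitter-profile comparison just mentioned (a sketch under our own assumptions, not the authors' analysis code; `t_data`, `V2_data` and the parameter names are ours), one can fit a measured, centre-aligned average volume profile to the $\cos^2$ form recalled in Sec. 2:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit <V2(t)> ∝ cos^2(c (t - t0)) inside the de Sitter "blob".
# t_data, V2_data stand for a measured, centre-aligned volume profile.
def de_sitter_profile(t, A, c, t0):
    return A * np.cos(c * (t - t0)) ** 2

# popt, _ = curve_fit(de_sitter_profile, t_data, V2_data,
#                     p0=[V2_data.max(), 0.1, t_data.mean()])
```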
The observed decoupling of neighbouring slices provides a strong argument for an effective slice behaviour that is characteristic of the universality class of Euclidean dynamical triangulations, something our data in Sec. 4 below will confirm. As a cross-check that the system’s behaviour stays the same throughout the degenerate phase, we have performed a short series of measurements for all observables (except the curvature profile) at the much higher value of $k_{0}\\!=\\!15.0$, which found no differences compared with $k_{0}\\!=\\!8.0$. Little is known about the slice behaviour in the de Sitter phase; a preliminary investigation of the Hausdorff dimension for small slice volumes $V_{2}(t)\sim 1.000$ was made in [26], resulting in the estimate $d_{H}\\!=\\!3.4\pm 0.4$. Furthermore, a measure of homogeneity for the spatial slices was formulated and implemented in [54], but with inconclusive results. A final technical issue to be discussed before turning to a description of the measurement results is that of volume-fixing. As usual, we perform all measurements at fixed spacetime volume; more precisely, since the Monte Carlo moves are not volume-preserving, the simulations are run in the vicinity of a target volume. The latter can be stated in terms of the total volume $N_{3}$ or in terms of $N_{31}$, which is what we will do below. Both prescriptions are essentially equivalent, since in the case of periodic boundary conditions in time we have the identity $N_{3}\\!=\\!2N_{31}+N_{22}$, and since for fixed $k_{0}$ the ratio between $N_{22}$ and $N_{31}$ is approximately constant, and can be read off the graph in Fig. 3. Note also that $N_{31}$ is equal to the total number of _spatial_ triangles in the triangulation, i.e. the sum over all $t$ of $V_{2}(t)$. The approximate volume-fixing is implemented by adding a term $S_{\textrm{fix}}[T]=\epsilon\,\big{(}N_{31}(T)-\tilde{N}_{31}\big{)}^{2}$ (5) to the bare action, where $\tilde{N}_{31}$ denotes the desired target volume and the value of the small, positive parameter $\epsilon$ determines the typical size of the fluctuations of $N_{31}$ around $\tilde{N}_{31}$. In the simulations performed for this work, we generally set $\epsilon$ to values on the order of $10^{-5}$. In addition, since we are interested in the intrinsic properties of the spatial slices and in extracting their continuum behaviour from finite-size scaling, we must collect measurement data at different, fixed _slice_ volumes. Instead of adding further volume-fixing terms for the individual slices to the action, which would run the risk of introducing an unwanted bias, we let the individual slices fluctuate freely, subject only to the total volume constraint (5), but take data only when a slice hits a precise desired value $\tilde{V}_{2}$. More concretely, in between two measurements we first perform a fixed number of attempted moves (accepted or rejected according to the detailed-balance condition mentioned earlier), collectively referred to as a _sweep_. Different observables can have different autocorrelation times (measured in Monte Carlo steps) and therefore require different sweep sizes. A typical sweep size is taken to be on the order of 1.000 times the target volume $\tilde{N}_{31}$. After completion of a sweep, we continue performing local updates until one of the slices hits the target two-volume $\tilde{V}_{2}$. We then perform a measurement of the observable under consideration on this slice, and subsequently start a new sweep.
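To make the accept/reject step announced above concrete, here is a minimal sketch (our own illustration in Python, not the actual implementation [51]; `state`, `propose_move` and `apply` are hypothetical interfaces, and symmetric proposal probabilities are assumed for simplicity, whereas the real CDT moves also require proposal-probability ratios in the acceptance test):

```python
import math
import random

def metropolis_accept(delta_S: float) -> bool:
    """Accept a proposed move with probability min(1, exp(-delta_S)),
    enforcing detailed balance with respect to the weight exp(-S)."""
    return delta_S <= 0.0 or random.random() < math.exp(-delta_S)

def sweep(state, k0, k3, eps, target_N31, n_attempts):
    """Schematic sweep of local updates at bare couplings (k0, k3)."""
    for _ in range(n_attempts):
        # hypothetical: a move and the changes (dN0, dN31, dN22) it would cause
        move, dN0, dN31, dN22 = state.propose_move()
        dN3 = 2 * dN31 + dN22                      # identity N3 = 2*N31 + N22
        dS = -k0 * dN0 + k3 * dN3                  # change of the bare action (2)
        dS += eps * ((state.N31 + dN31 - target_N31) ** 2
                     - (state.N31 - target_N31) ** 2)  # volume-fixing term (5)
        if metropolis_accept(dS):
            state.apply(move)                      # hypothetical state update
```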
The choice of the target three-volume $\tilde{N}_{31}$ that maximizes the probability of encountering slices with target two-volume $\tilde{V}_{2}$ depends on the phase. In the de Sitter phase and for the small time extension $t_{\rm tot}$ we have used, the total volume spreads roughly evenly over the available slices and an appropriate choice is $\tilde{N}_{31}\\!=\\!t_{\rm tot}\cdot\tilde{V}_{2}$. In the degenerate phase, the volume tends to concentrate on one of the slices, and a good choice is $\tilde{N}_{31}\\!=\\!\tilde{V}_{2}+m$, where for $t_{\rm tot}\\!=\\!3$ and $\tilde{V}_{2}\\!>\\!1.000$ we found that $m\\!=\\!100$ is a convenient choice. To maximize the volume of the spatial slices, we work with $t_{\rm tot}\\!=\\!3$, the minimal number allowed by our simulation code. Both this choice and our choice of periodic boundary conditions in time can in principle have an influence on the behaviour of observables, even if our measurements are confined to individual slices. Investigations of the transfer matrix in four-dimensional CDT quantum gravity have indicated that such a set-up can be appropriate, at least for selected observables [55]. As an extra check, we have performed a few measurements on systems with larger $t_{\rm tot}\\!\lesssim\\!32$, and found the same behaviour for the slice geometries. This is reassuring, but not a substitute for a more systematic investigation of the influence of these global choices, which goes beyond the scope of our present work. This proviso should be kept in mind when interpreting the outcomes of our research, which will be presented next.

## 4 Geometric observables on spatial hypersurfaces

The main objective of our work is a detailed measurement of the geometric properties of the two-dimensional spatial hypersurfaces of constant integer proper time $t$ in three-dimensional CDT quantum gravity, both in the de Sitter and the degenerate phase. We will compare our findings with known results for nonperturbative models of two-dimensional quantum gravity. There are two pure-gravity systems in $D\\!=\\!2$ available for reference, Euclidean DT and Lorentzian CDT quantum gravity. However, especially in the de Sitter phase there are no stringent reasons why the slice geometries should match these known systems, since they are part of a larger, three-dimensional geometry. For example, the ambient geometry may induce extrinsic curvature terms on the spatial slices, which are not present in intrinsically two-dimensional situations. The following subsections will deal in turn with the four quantities we have studied on the spatial slices: the vertex order, the entropy exponent, the Hausdorff dimension and the curvature profile.

### 4.1 Vertex order

We first examined the distribution of the vertex order, which counts the number $q(v)$ of spacelike edges meeting at a given vertex $v$. For the simplicial manifolds we use in the simulations, the possible vertex orders are $q(v)\\!=\\!3,4,5,\dots$. As already mentioned earlier, the distribution of the $q(v)$ is not strictly speaking an observable. It is a non-universal property of the discrete lattice, which depends on the details of the lattice discretization and does not have an obvious continuum counterpart. For example, using quadrangulations instead of triangulations leads to the same continuum theory of two-dimensional Euclidean DT quantum gravity [56], but the two models have different distributions of vertex orders.
We have studied this quantity nevertheless, since it is known analytically for both the DT and CDT ensembles and gives us a first idea of whether and how our system changes as a function of the bare coupling $k_{0}$. The normalized probability distribution $P(q)$ for the vertex order in two-dimensional DT quantum gravity in the thermodynamic limit has been determined analytically as [57] $P_{\textrm{DT}}(q)=16\left(\frac{3}{16}\right)^{q}\frac{(q-2)(2q-2)!}{q!(q-1)!},\quad\quad q\geq 2,$ (6) with a large-$q$ behaviour $\sim\\!\left(\frac{3}{4}\right)^{q}$. The corresponding probability distribution for two-dimensional CDT quantum gravity is given by [58] $P_{\textrm{CDT}}(q)=\frac{q-3}{2^{q-2}},\quad\quad q\geq 4,$ (7) with a fall-off behaviour $\sim\\!\left(\frac{1}{2}\right)^{q}$ for large $q$. Both distributions are shown in Fig. 5.

Figure 5: Probability distribution of the vertex order $q$ for the ensembles of two-dimensional Euclidean DT and Lorentzian CDT quantum gravity, in the limit of infinitely many triangles.

We took measurements at the three chosen phase space points $k_{0}=0.0,5.0$ and $8.0$, with a target volume $\tilde{V}_{2}\\!=\\!4000$ for the spatial slices, corresponding to a target three-volume $\tilde{N}_{31}\\!=\\!12000$ in the de Sitter phase, and $\tilde{N}_{31}\\!=\\!4400$ in the degenerate phase. The sweep size was set to $100\cdot\tilde{N}_{31}$ in each case. For each value of $k_{0}$, we collected measurements of $P(q)$ for 100k different slices, by recording for each slice the full set of vertex orders $q(v)$ for all vertices and normalizing the resulting histogram. We then approximated the expectation value $\langle P(q)\rangle$ by taking the ensemble average over this data set according to the prescription (3). The results of the measurements for $q\\!\in\\![3,20]$ are shown in Fig. 6, and are clearly distinct for the three $k_{0}$-values. While the distribution for $k_{0}\\!=\\!8.0$ in the degenerate phase is a very good match for the analytical result, the distributions for $k_{0}\\!=\\!0.0$ and $5.0$ in the de Sitter phase are not good matches and are also different from each other. This is confirmed when plotting the measurements on a logarithmic scale, taking into account a much larger range of $q\\!\leq\\!180$, as depicted in Fig. 7, which also includes the known distributions for Euclidean DT and CDT as thin straight lines.

Figure 6: Measured probability distribution of the vertex order $q$ on spatial slices of volume $V_{2}\\!=\\!4000$ in three-dimensional CDT, for $k_{0}\\!=\\!0.0$, $5.0$ and $8.0$ and $q\\!\leq\\!20$. For comparison, the line connecting the exact values for DT from Fig. 5 is included.

We see that within measurement accuracy, the distribution of vertex orders in the degenerate phase of the model is indistinguishable from (6), a result that is also compatible with the numerical simulations of pure Euclidean DT quantum gravity performed in [57]. By contrast, the distributions obtained in the de Sitter phase exhibit a very different behaviour, at least for large $q$. High-order vertices are relatively speaking more probable, and the data cannot be fitted to a single exponential over the $q$-range we have explored. Moreover, unlike what happens in the degenerate phase, the distributions within the de Sitter phase depend on the value of $k_{0}$.
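As a cross check, the two analytical distributions (6) and (7) are easily evaluated numerically. The following sketch uses the log-gamma function to avoid overflow of the factorials at large $q$; the cutoff of 400 in the sums is arbitrary but more than sufficient, given the exponential fall-off.

```python
import math

def p_dt(q):
    # Eq. (6): P_DT(q) = 16 (3/16)^q (q-2) (2q-2)! / (q! (q-1)!)
    if q < 3:
        return 0.0
    log_p = (math.log(16.0) + q * math.log(3.0 / 16.0) + math.log(q - 2)
             + math.lgamma(2 * q - 1) - math.lgamma(q + 1) - math.lgamma(q))
    return math.exp(log_p)

def p_cdt(q):
    # Eq. (7): P_CDT(q) = (q - 3) / 2^(q - 2)
    return (q - 3) / 2.0 ** (q - 2) if q >= 4 else 0.0

print(sum(p_dt(q) for q in range(3, 400)))   # ~1.0 (normalization)
print(sum(p_cdt(q) for q in range(4, 400)))  # 1.0
print(p_dt(101) / p_dt(100))                 # ~3/4, the large-q fall-off
```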
Figure 7: Measured probability distribution of the vertex order $q$ (logarithmic scale) on spatial slices of volume $V_{2}\\!=\\!4000$ in three-dimensional CDT, for $k_{0}\\!=\\!0.0$, $5.0$ and $8.0$. The unbroken straight line is the analytical result (6) for two-dimensional DT, and the dashed straight line the analytical result (7) for two-dimensional CDT.

To obtain a more detailed picture of the dependence on $k_{0}$, we performed a series of shorter simulation runs at additional points in the de Sitter phase. When entering the de Sitter phase from the degenerate phase by crossing the phase transition at $k_{0}^{c}$, the vertex order distribution jumps discontinuously to a shape similar to that for $k_{0}\\!=\\!5.0$. As $k_{0}$ is decreased further inside this phase, the distribution changes shape in a continuous way; the distributions we measured for points in the interval $0.0\\!<\\!k_{0}\\!<\\!5.0$ interpolate in a straightforward manner between the ones for $k_{0}=0.0$ and $k_{0}=5.0$ shown in Figs. 6 and 7. To summarize, the measurements of the vertex order distribution in the degenerate phase produce an excellent match with the known one for DT quantum gravity. By contrast, the distributions found in the de Sitter phase do not match those of the standard DT or CDT ensembles in $D\\!=\\!2$. As mentioned earlier, this does not necessarily mean that the slice geometries in the de Sitter phase do not lie in either of the associated universality classes, but it is a first indication that they may not. By looking at genuine observables next, we will be able to make more definite statements about the universal geometric properties of the slice geometries.

### 4.2 Entropy exponent

An important parameter characterizing two-dimensional systems of random geometry is the entropy exponent $\gamma$, which contains information about the behaviour of the partition function at fixed two-volume $N_{2}$ (number of triangles), in the limit as $N_{2}$ becomes large. (We use $N_{2}$ to denote two-volumes in two-dimensional models of quantum gravity, and $V_{2}$ to denote the two-volume of a spatial slice in three-dimensional quantum gravity.) Recall that the path integral of DT quantum gravity in $D\\!=\\!2$ with bare cosmological constant $\lambda$ can be written as the infinite sum $Z(\lambda)=\sum_{N_{2}}Z(N_{2})\,{\rm e}^{-\lambda N_{2}},$ (8) which is the (discrete) Laplace transform of the partition function $Z(N_{2})$ for fixed volume. For large $N_{2}$, $Z(N_{2})$ behaves like $Z(N_{2})\sim{\rm e}^{\lambda^{c}N_{2}}N_{2}^{\gamma-3}\left(1+{\mathcal{O}}(1/N_{2})\right),$ (9) whose leading exponential growth is governed by a (non-universal) critical cosmological constant $\lambda^{c}\\!>\\!0$ and whose subleading power-law behaviour defines the universal entropy exponent $\gamma$, which for DT quantum gravity is given by $\gamma\\!=\\!-1/2$. The asymptotic functional form (9) continues to hold when conformal matter of central charge $c\\!<\\!1$ is added to the Euclidean quantum gravity model, giving rise to the entropy exponents [59] $\gamma=\frac{1}{12}\left(c-1-\sqrt{(25-c)(1-c)}\right),$ (10) with $c\\!=\\!0$ corresponding to the pure-gravity case. Two-dimensional CDT quantum gravity, which is not described by formula (10), is characterized by $\gamma\\!=\\!1/2$ [4]. Computer simulations have demonstrated that adding matter with $c\\!=\\!4$ to the CDT system induces a phase transition in the geometry [60], but the corresponding entropy exponent is not known.
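As a quick numerical illustration of eq. (10), the entropy exponent can be tabulated for a few central charges (a minimal sketch):

```python
import math

def gamma_kpz(c):
    # Eq. (10): entropy exponent of DT gravity coupled to conformal
    # matter of central charge c < 1.
    return (c - 1.0 - math.sqrt((25.0 - c) * (1.0 - c))) / 12.0

print(gamma_kpz(0.0))   # -0.5, pure gravity
print(gamma_kpz(0.5))   # -1/3, Ising matter
```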
According to [61], the distribution of so-called baby universes in two-dimensional Euclidean quantum gravity – parts of a geometry that are connected to the larger bulk geometry via a thin neck – depends in a simple way on the entropy exponent $\gamma$. This insight was used subsequently to formulate a prescription for extracting $\gamma$ by measuring the distribution of minimal-neck baby universes (“minbus”) in the DT ensemble with the help of Monte Carlo simulations [62]. A minbu is a simply connected subset of disk topology of a two-dimensional triangulation $T$, which is connected to the rest of $T$ along a loop consisting of three edges, the minimal circumference of a neck allowed by the simplicial manifold conditions. As shown in [61, 62], it follows from relation (9) that the average number $\bar{n}_{N_{2}}(B)$ of minbus of volume $B$ (counting the number of triangles in the minbu) in a spherical triangulation of volume $N_{2}$ behaves, for sufficiently large $B$ and $N_{2}$, like $\bar{n}_{N_{2}}(B)\sim(N_{2}-B)^{\gamma-2}B^{\gamma-2}.$ (11) By measuring the distribution of minbus across a range of volumes $B$ for fixed $N_{2}$ in a DT ensemble and fitting the results to the function (11), the expected results $\gamma\\!=\\!-1/2$ for pure gravity and $\gamma\\!=\\!-1/3$ for gravity coupled to Ising spins ($c\\!=\\!1/2$) were reproduced within measuring accuracy [62].

We have carried out a similar analysis on the spatial slices of triangulations generated by Monte Carlo simulations of three-dimensional CDT quantum gravity. There is no obvious reason why (9) should hold for some “effective” fixed-volume partition function for the two-volume $N_{2}\\!=\\!V_{2}$ of a single spatial slice in this three-dimensional system. However, if the number of (2,2)-simplices drops essentially to zero _and_ neighbouring slices decouple, as is the case in the degenerate phase, the three-dimensional partition function at fixed volume will depend only on $N_{31}$ (equal to the total two-volume), which makes it plausible that (9) holds on individual spatial slices, with $N_{2}\\!=\\!V_{2}$. In the de Sitter phase, if the spatial geometries can be described in terms of two-dimensional DT quantum gravity, the minbu method will presumably also lead to $\gamma\\!=\\!-1/2$. We measured the distribution $\bar{n}_{V_{2}}(B)$ of minbu sizes $B$ for target slice volumes $\tilde{V}_{2}\\!=\\!1000$ and $2000$ at the three phase space points $k_{0}=0.0,5.0$ and $8.0$. The sweep size was set to $10^{4}\\!\cdot\\!\tilde{V}_{2}$ for measurements in the degenerate phase, and $10^{5}\\!\cdot\\!\tilde{V}_{2}$ in the de Sitter phase. We used longer sweeps in the de Sitter phase because the observed autocorrelations were much larger, especially at $k_{0}\\!=\\!0.0$, where the algorithm is much less efficient. We collected on the order of $5\\!\cdot\\!10^{4}$ minbu size histograms for $\tilde{V}_{2}\\!=\\!1000$ and $1.5\\!\cdot\\!10^{5}$ histograms for $\tilde{V}_{2}\\!=\\!2000$ in the de Sitter phase, and $4\cdot 10^{5}$ histograms for $\tilde{V}_{2}\\!=\\!1000$ and $2\cdot 10^{5}$ histograms for $\tilde{V}_{2}\\!=\\!2000$ in the degenerate phase. The resulting expectation values $\langle\bar{n}_{V_{2}}\rangle$ of the minbu size distribution as a function of the normalized ratio $B/V_{2}\in[0,1/2]$ are shown in Fig. 8, together with best fits of the form (11) for specific values of $\gamma$.
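Before turning to the refinements described below and in Appendix A, the basic fitting step can be illustrated as follows: taking the logarithm of (11) turns the fit into a problem that is linear in the unknown exponent, which standard least-squares routines handle directly. The data in this sketch are synthetic, generated from (11) with $\gamma=-1/2$ plus noise, purely to show the mechanics.

```python
import numpy as np
from scipy.optimize import curve_fit

V2 = 1000                                  # fixed slice volume (illustrative)

def log_minbu(B, a, gamma):
    # Logarithm of Eq. (11): n(B) ~ [B (V2 - B)]^(gamma - 2),
    # with exp(a) as an overall normalization.
    return a + (gamma - 2.0) * np.log(B * (V2 - B))

B = np.arange(20.0, V2 / 2)                # discard the smallest minbus
rng = np.random.default_rng(0)
log_n = log_minbu(B, 8.0, -0.5) + rng.normal(0.0, 0.05, B.size)

popt, pcov = curve_fit(log_minbu, B, log_n)
print("gamma =", popt[1], "+/-", np.sqrt(pcov[1, 1]))   # ~ -0.5
```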
The best fits were determined following the procedure used in [62], and involved a subleading correction term to the power law $B^{\gamma-2}$, as described in more detail in Appendix A below.

Figure 8: Expectation values of the distribution $\bar{n}_{V_{2}}$ of minbu sizes for spatial slices of volume $V_{2}\\!=\\!1000$ and $2000$ in three-dimensional CDT configurations, in the degenerate phase ($k_{0}\\!=\\!8.0$, top) and the de Sitter phase ($k_{0}\\!=\\!0.0$, bottom left, and $k_{0}\\!=\\!5.0$, bottom right), on a log-log scale. The continuous lines are best fits of the form (11) for specific values of $\gamma$. Error bars are smaller than the dot size.

In the degenerate phase (Fig. 8, top), the choice $\gamma\\!=\\!-1/2$ fits the data extremely well throughout the entire range of $B/V_{2}$, with the exception of the smallest minbu sizes. Our results coincide within error bars with those reported in Table 1 of [62]. This confirms that the spatial slices in the degenerate phase exhibit behaviour consistent with DT quantum gravity in two dimensions. In the de Sitter phase, there is no $\gamma$-value that leads to a good fit over the full range of $B/V_{2}$, even when we disregard the region of small minbu size $B$, where (11) is known to be inaccurate. The plots for $k_{0}\\!=\\!0.0$ and $k_{0}\\!=\\!5.0$ (Fig. 8, bottom) illustrate the best that can be achieved, namely, a fit that works reasonably well for an intermediate range of $B/V_{2}$, in this case a fit corresponding to $\gamma\\!=\\!-1$. However, this clearly does not fit the data for large minbus near $B/V_{2}\\!=\\!1/2$, especially not for the smaller value of $k_{0}$, and the discrepancy seems to get worse with increasing volume $V_{2}$. We conclude that the minbu distribution in the de Sitter phase does not follow the functional form of the right-hand side of relation (11), at least not for the slice volumes we have considered (and which seem to be sufficient in the degenerate phase). A possible explanation is that the “effective” partition function $Z(V_{2})$, which is obtained from the three-dimensional CDT partition function $Z(N_{3})$ for fixed three-volume by integrating out all degrees of freedom except for a single slice volume $V_{2}$, does not have the asymptotic form (9). This would imply that it is not in the universality class of a DT model with central charge $c\\!<\\!1$ or of CDT quantum gravity. A more subtle scenario would be that $Z(V_{2})$ does behave according to (9) (and perhaps does correspond to a known gravity model in $D\\!=\\!2$), but that the derivation of the minbu distribution (11) is invalidated by the presence of correlations between the bulk and a baby universe that exist because of the embedding of the spatial slice in a three-dimensional simplicial geometry. Such correlations are not present in the degenerate phase because of the absence of (2,2)-simplices. In the following two subsections, we will look at observables which also characterize the intrinsic geometry of the spatial slices, but whose determination is less subtle than that of the entropy exponent.

### 4.3 Hausdorff dimension

The Hausdorff dimension is a notion of fractal dimension that can be used to characterize a quantum geometry in an invariant manner. Broadly speaking, it is extracted by comparing volumes with their characteristic linear size, measured in terms of a geodesic distance.
There have been many studies of the Hausdorff dimension in the context of DT and CDT quantum gravity, including extensive investigations in two-dimensional Euclidean DT models with and without matter (see, for example, [63, 64, 20, 65] and references therein). Following [63], we will investigate a local and a global (“cosmological”) variant of this observable on the spatial slices. From analytical considerations, it is known that in two-dimensional Euclidean DT quantum gravity without matter both types of Hausdorff dimension are equal to four (i.e. different from the topological dimension of the triangular building blocks), while for two-dimensional CDT quantum gravity they are both equal to two [4, 66]. When measuring the Hausdorff dimension numerically, one can use either the link distance or the dual link distance as a discrete implementation of the geodesic distance. In known systems of pure gravity in $D\\!=\\!2$, they lead to equivalent notions of geodesic distance in the continuum limit, but for finite lattice sizes, one particular choice may be more convenient. This is true for our investigation below, where we will use the dual link distance, which is defined between dual vertices (equivalently, centres of triangles) and given by the length of the shortest path along dual links between the vertices.

When comparing our results with previous measurements of the Hausdorff dimension in the context of two-dimensional Euclidean DT [67, 64], one must take into account that the latter employed a larger ensemble of geometries. This generalized ensemble allows for local gluings of the equilateral triangles that violate the strict simplicial manifold conditions (two triangles cannot share more than one edge, and no two vertices can be connected by more than one edge). When characterizing the triangulations in terms of their dual, trivalent graphs, the generalization consists in allowing for tadpole and self-energy insertions. It has been demonstrated to reduce finite-size effects [68], and is justified by the fact that the model on the enlarged ensemble can be shown to lie in the same universality class (see e.g. [69] and references therein). Since no analogous result is available in three-dimensional CDT quantum gravity, it is prudent to use only simplicial manifolds, which implies that the spatial slices are simplicial manifolds too. This may affect the quality of our results, compared with the earlier, purely two-dimensional investigations. Note also that nonlocal minbu surgery moves were used in [64] to complement the standard local Monte Carlo moves and reduce autocorrelation times, something we cannot easily implement on two-dimensional embedded triangulations.

#### 4.3.1 Local Hausdorff dimension

A key quantity in determining the local Hausdorff dimension $d_{h}$ of a two-dimensional triangulation $T$ is the shell volume $S(r)$, which in our implementation counts the number of dual vertices (equivalently, triangles) at dual link distance $r$ from a given dual vertex. The corresponding observable is the quantity $\bar{S}(r)$, obtained by averaging $S(r)$ over all dual vertices of $T$. The reason for using the dual link distance is that shells with respect to the link distance quickly cover a large fraction of the geometry as $r$ grows. This implies that the average shell volumes $\bar{S}(r)$ cover only a small range of radii $r$ before dropping to zero, yielding too few data points to make reliable estimates of either Hausdorff dimension.
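Measuring $S(r)$ amounts to a breadth-first search on the dual graph of the slice. A minimal sketch, assuming the dual graph is given as adjacency lists (`neighbors[i]` holds the up to three triangles glued to triangle `i`):

```python
from collections import deque

def shell_volumes(neighbors, start, r_max):
    # Count dual vertices (triangles) at dual link distance r from `start`,
    # for r = 0, ..., r_max, via breadth-first search.
    dist = {start: 0}
    queue = deque([start])
    shells = [0] * (r_max + 1)
    shells[0] = 1
    while queue:
        v = queue.popleft()
        if dist[v] == r_max:
            continue
        for w in neighbors[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                shells[dist[w]] += 1
                queue.append(w)
    return shells

# Tiny example: the dual graph of the surface of a tetrahedron.
tetra = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(shell_volumes(tetra, 0, 2))   # [1, 3, 0]
```

Averaging the result over all starting triangles of the slice yields $\bar{S}(r)$.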
The local Hausdorff dimension is extracted from the expectation value $\bar{S}(r)$ in the ensemble at fixed two-volume $N_{2}$ according to $\langle\bar{S}(r)\rangle_{N_{2}}\sim r^{d_{h}-1},$ (12) for small $r$. In other words, $d_{h}$ captures the initial, volume-independent power-law growth of small geodesic spherical shells, where $r$ must be sufficiently large to avoid dominance by discretization artefacts and sufficiently small to avoid significant corrections to the simple power-law behaviour (12), if such a behaviour is indeed present. We have measured the expectation values of average shell volumes as a function of the dual link distance $r$, at slice volumes $V_{2}\\!=\\!16k$ and $32k$ and at the three chosen phase space points $k_{0}$, all of which is straightforward. The local Hausdorff dimension $d_{h}$ was extracted by fitting the measured data to the functional form $\langle\bar{S}(r)\rangle_{V_{2}}=c\cdot(r+a)^{d_{h}-1},$ (13) where – following [64] – we have introduced an offset $a$ to account for short-distance discretization artefacts, and $c$ is a multiplicative parameter. The dependence of the Hausdorff dimension on the chosen fitting range $r\\!\in\\![r_{\textrm{min}},r_{\textrm{max}}]$ will be analyzed in more detail below.

Figure 9: Expectation values $\langle\bar{S}(r)\rangle_{V_{2}}$ of average shell volumes in the degenerate phase $(k_{0}\\!=\\!8.0)$ for slice volumes $V_{2}\\!=\\!16k,32k$ (left), and for all three phase space points $k_{0}\\!=\\!0.0$, $5.0$ and $8.0$ for slice volume $V_{2}\\!=\\!32k$ (right). Error bars are smaller than dot size.

The measured expectation values of the average shell volume are shown in Fig. 9 for $0\\!\leq\\!r\\!\leq\\!40$. The plot on the left, describing the behaviour of the system in the degenerate phase, illustrates the difference between the data for slice volumes $16k$ and $32k$. The data points for the smaller volume start deviating from the common behaviour around $r\\!\approx\\!20$, indicating that a power-law fit becomes inadequate beyond this point. The plot on the right illustrates the dependence of the initial slope on the value of $k_{0}$. Note that although the curve for $k_{0}\\!=\\!0.0$ lies in between the two other curves, the corresponding Hausdorff dimension obtained from fitting to the functional form (13) comes out lower than that for $k_{0}\\!=\\!5.0$, see below. Since the criteria for fixing the fitting range $r\\!\in\\![r_{\textrm{min}},r_{\textrm{max}}]$ for eq. (13) are only approximate, it is important to understand which choice is most appropriate and how stable the results for $d_{h}$ are when the range is varied. Some earlier work used the range $r\\!\in\\![5,15]$ in terms of the dual link distance, for a system of volume $N_{2}\\!=\\!64k$ [70]. To investigate the influence of the fitting range in a systematic way, we have performed fits for a set of ranges $r\\!\in\\![r_{\textrm{min}},r_{\textrm{min}}\\!+\\!w]$ of varying width $w$, and with $r_{\textrm{min}}\\!\in\\![5,14]$. The resulting best fit values for the local Hausdorff dimension $d_{h}$ as a function of $r_{\textrm{min}}$ are shown in Fig. 10, for two different widths $w\\!=\\!10$ and $12$.
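The power-law fit (13) itself is a standard nonlinear least-squares problem. The sketch below generates synthetic data from the best-fit parameters quoted in eq. (14) below, simply to illustrate the fitting step; in practice the input is the measured $\langle\bar{S}(r)\rangle_{V_{2}}$ with its statistical errors.

```python
import numpy as np
from scipy.optimize import curve_fit

def shell_model(r, c, a, d_h):
    # Eq. (13): <S(r)> = c * (r + a)^(d_h - 1)
    return c * (r + a) ** (d_h - 1.0)

r = np.arange(5, 18)                       # fitting range [5, 17]
rng = np.random.default_rng(1)
S = shell_model(r, 0.08, 4.0, 3.31) + rng.normal(0.0, 0.5, r.size)

popt, pcov = curve_fit(shell_model, r, S, p0=(0.1, 1.0, 3.0))
print("d_h =", popt[2], "+/-", np.sqrt(pcov[2, 2]))
```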
Figure 10: Best fit values for $d_{h}$ from fitting (13) to the measured expectation values $\langle\bar{S}(r)\rangle_{V_{2}}$ in the range $r\\!\in\\![r_{\textrm{min}},r_{\textrm{min}}\\!+\\!w]$, for spatial slices in the degenerate phase ($k_{0}\\!=\\!8.0$, slice volumes $V_{2}\\!=\\!16k,32k$) and in the de Sitter phase ($k_{0}\\!=\\!0.0,5.0$, slice volume $V_{2}\\!=\\!32k$). The error bars correspond to the 95% confidence intervals of a $\chi^{2}$-test.

Starting our analysis in the degenerate phase, we observe good stability of the value of $d_{h}$ when $r_{\textrm{min}}$ is increased away from its lowest, “canonical” cutoff value of 5, which indicates that the region $r_{\textrm{min}}\\!\gtrsim\\!5$ is not affected by short-distance artefacts and that data in the corresponding interval $r\\!\in\\![r_{\textrm{min}},r_{\textrm{min}}\\!+\\!w]$ are well approximated by a pure power law. Shifting the fitting range to start beyond $r_{\textrm{min}}\\!=\\!7$ changes the extracted best-fit $d_{h}$, mildly for the larger volume $V_{2}\\!=\\!32k$ and more strongly for $V_{2}\\!=\\!16k$. Taking into account Fig. 9, left, this indicates that one is leaving the region where the functional form (13) is an appropriate fit. Lastly, setting $w\\!=\\!12$ instead of 10 reduces the error bars without appreciably changing $d_{h}$, and is therefore preferable. To conclude, the optimal choice of range among the possibilities we have investigated at $k_{0}\\!=\\!8.0$ appears to be $r\\!\in\\![5,17]$. The associated local Hausdorff dimension for $V_{2}\\!=\\!32k$ is given by $d_{h}=3.31(4),\quad\quad\textrm{degenerate phase }(k_{0}=8.0),$ (14) obtained from fitting to (13), with fit parameters $a\\!=\\!4.0(2)$ and $c\\!=\\!0.08(1)$. To illustrate the excellent quality of the fit, Fig. 11 shows the measured data together with the best-fit curve and the fits at the edges of the 95% confidence interval, which essentially fall on top of each other (see Appendix B for details on how to compute such confidence intervals). The fits for the data at $k_{0}\\!=\\!5.0$ and $k_{0}\\!=\\!0.0$ are of similar quality. We postpone a discussion of the compatibility of this result with the analytical value $d_{h}\\!=\\!4$ for DT quantum gravity until after we have investigated the global Hausdorff dimension.

Figure 11: Expectation values $\langle\bar{S}(r)\rangle_{V_{2}}$ of average shell volumes in the degenerate phase ($k_{0}\\!=\\!8.0$) and slice volume $V_{2}\\!=\\!32k$. The curves are plots of the fit function (13) for three different sets of fit parameters: the optimal fit which minimizes $\chi^{2}$, and the two fits at the boundaries of the 95% confidence interval. The fit is performed to the data points in the range $5\leq r\leq 17$, as indicated by the unshaded region.

Turning next to a discussion of the de Sitter phase, Fig. 10 shows the best fit values for $d_{h}$ we extracted from the shell volume data. They are consistently smaller than those in the degenerate phase, and within the de Sitter phase they decrease further with decreasing $k_{0}$. We observe only a short range of values $r_{\textrm{min}}\\!\gtrsim\\!5$ where the Hausdorff dimension is reasonably stable. Examining the corresponding curves in Fig. 9, right, we see that there is no very extended initial-growth regime before the curves straighten out to become approximately linear.
It is possible that finite-size effects affect the data points at the upper end of the chosen ranges $[r_{\textrm{min}},r_{\textrm{max}}]$ and cause the observed deviations from a simple power-law behaviour. Alternatively, this behaviour may be a bona fide feature of the embedded slices. The error bars for larger $r_{\textrm{min}}$ are significantly larger than in the degenerate phase, especially for $k_{0}\\!=\\!0.0$. At least in part, this appears to be due to the growing statistical uncertainty of the measured shell volumes with increasing $r$. As before, the error bars are smaller for $w=12$ than for $w=10$. For $k_{0}\\!=\\!5.0$, the local Hausdorff dimension seems to decrease slightly with growing $r_{\textrm{min}}$, but in view of the width of the 95% confidence interval this may not have any significance. From a best fit in the interval $r\\!\in\\![5,17]$, we find for the local Hausdorff dimension $\displaystyle d_{h}$ $\displaystyle=2.91(5),\quad\quad\textrm{de Sitter phase }(k_{0}=0.0),$ (15) $\displaystyle d_{h}$ $\displaystyle=3.10(4),\quad\quad\textrm{de Sitter phase }(k_{0}=5.0),$ (16) for the fit parameters $a\\!=\\!2.5(3)$, $c\\!=\\!0.26(5)$ and $a\\!=\\!3.9(3)$, $c\\!=\\!0.11(2)$ respectively. For the time being, we take note of these results and refer to Sec. 4.3.3 below for a summary and attempted interpretation of all Hausdorff dimension measurements.

#### 4.3.2 Global Hausdorff dimension

The global Hausdorff dimension of a two-dimensional triangulation describes its behaviour as a whole and can again be characterized by the average volume $\bar{S}_{N_{2}}(r)$ of spherical shells of radius $r$, where we have added an explicit subscript $N_{2}$ to emphasize that the total volume of the triangulation will now play an important role. Given an ensemble of geometries of volume $N_{2}$, a global Hausdorff dimension $d_{H}$ can be extracted if in the limit of large $N_{2}$ the expectation value of the distribution of shell volumes over the entire $r$-range can be described by the functional form $\langle\bar{S}_{N_{2}}(r)\rangle=N_{2}^{1-1/d_{H}}\mathcal{F}(x),\quad\quad x=\frac{r}{N_{2}^{1/d_{H}}},$ (17) where $\mathcal{F}$ is a universal function that depends on the rescaled geodesic distance $x$. This is known to be the case for DT quantum gravity in two dimensions, where $\mathcal{F}$ has been computed explicitly [64], but there is no guarantee that the scaling law (17) holds for general systems of geometries in $D\\!=\\!2$. Even when a global Hausdorff dimension $d_{H}$ can be assigned in this manner, it need not be equal to the local Hausdorff dimension $d_{h}$ [64, 71]. We will attempt to extract a global Hausdorff dimension for the spatial slices in three-dimensional CDT by performing a finite-size scaling analysis, where we collect data for $\langle\bar{S}_{V_{2}}(r)\rangle$ for the full range of radii $r$ and several slice sizes $V_{2}$, and then try to rescale the resulting distributions according to (17). If we can find a single value $d_{H}$ such that the rescaled distributions fall on top of each other for all volumes $V_{2}$, we define this $d_{H}$ to be the global Hausdorff dimension of the system. Following [64], we will work with the normalized shell volume distributions $n_{V_{2}}(r)\\!:=\\!\langle\bar{S}_{V_{2}}(r)\rangle/V_{2}$, for which the scaling law (17) assumes the form $n_{V_{2}}(r)=V_{2}^{-1/d_{H}}\mathcal{F}(x),\quad\quad x=\frac{r}{V_{2}^{1/d_{H}}}.$ (18) Note that the measured distributions $n_{V_{2}}(r)$ are functions of a discrete variable $r\in\mathbb{N}_{0}$.
To perform a smooth rescaling, we first construct continuous functions that interpolate between these discrete values, which by slight abuse of notation we continue to call $n_{V_{2}}(r)$. Following the methodology of [65], we then rescale, for each system volume $V_{2}$ separately, the corresponding distribution $n_{V_{2}}(r)$ such that it maximally overlaps with the normalized distribution $n_{V_{\textrm{max}}}(r)$ for the largest slice volume $V_{\textrm{max}}$ in the simulation, which we use as a reference distribution. We denote these rescaled distance profiles by $\tilde{n}_{V_{2}}(\tilde{r})$, where $\tilde{r}$ is a rescaled length variable. They take the form $\tilde{n}_{V_{2}}(\tilde{r}_{V_{2}})=\left(\frac{V_{2}}{V_{\textrm{max}}}\right)^{1/d}n_{V_{2}}(\tilde{r}_{V_{2}}),\quad\quad\tilde{r}_{V_{2}}=\left(\frac{V_{2}}{V_{\textrm{max}}}\right)^{1/d}(r+a)-a,$ (19) where the two fit parameters are a rescaling dimension $d$ and a phenomenological shift $a$ [64, 65] that corrects for discretization effects at small $r$, similar to the prescription (13) we used for the local Hausdorff dimension.

Figure 12: Distributions $\tilde{n}_{V_{2}}(\tilde{r}_{V_{2}})$ of shell volumes rescaled with the averaged parameters $(\bar{d},\bar{a})$ in the degenerate phase ($k_{0}\\!=\\!8.0$, top) and the de Sitter phase ($k_{0}\\!=\\!5.0$, bottom left, and $k_{0}\\!=\\!0.0$, bottom right).

Note that $\tilde{r}_{V_{\textrm{max}}}$ is equal to the original discrete length parameter $r$, so we can use them interchangeably, and that $\tilde{n}_{V_{\textrm{max}}}\\!=\\!n_{V_{\textrm{max}}}$. For each system size $V_{2}\\!<\\!V_{\textrm{max}}$ we determine the corresponding fit parameters $a$, $d$ from the condition that the sum of the squared differences $\left(\tilde{n}_{V_{2}}(\tilde{r}_{V_{2}})-\tilde{n}_{V_{\textrm{max}}}(r)\right)^{2}$ should be minimized, where $\tilde{r}_{V_{2}}$ depends implicitly on the discrete parameter $r$. We perform the fit in the range of integers $r$ where $\tilde{n}_{V_{\textrm{max}}}(r)>\frac{1}{5}\,\textrm{max}_{r}\,\tilde{n}_{V_{\textrm{max}}}(r)$, which means we disregard contributions from the tails of the distribution, where discretization effects are more likely to be present. If over a large range of $V_{2}$ the rescaled distributions collapse onto a common curve, or are reasonably close to doing so, we take the mean $\bar{d}$ of all the rescaling dimensions $d$ and define this to be the global Hausdorff dimension, $d_{H}\\!:=\\!\bar{d}$. We also average the shift parameters to obtain one optimal shift $\bar{a}$ for the system. If we do not find sufficient overlap, the method fails and we cannot assign a global Hausdorff dimension. We measured the expectation value of the shell volume distribution for eight different slice volumes $V_{2}\\!\in\\![1000,32000]$ in the degenerate phase, and for seven slice volumes $V_{2}\\!\in\\![1000,8000]$ in the de Sitter phase. Again, the autocorrelation times are much larger in the de Sitter phase, and the resulting uncertainties are especially large near the peaks of the distributions. Since the location and height of these peaks are important features for finding the appropriate rescaling dimension required for a collapse, large uncertainties in this region imply large error bars for the best fit parameters. This led to our choice of the smaller volume range in the de Sitter phase.
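The collapse procedure can be summarized in a short sketch: interpolate the measured profile for volume $V_{2}$, rescale it according to (19) for trial values of $(d,a)$, and minimize the summed squared difference with respect to the reference profile, restricted to radii where the reference exceeds one fifth of its peak value. The arrays are assumed to share a common grid of integer radii `r`; this illustrates the scheme, not the exact code used.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize

def collapse_fit(r, n_V2, n_max, V2, V_max):
    # Determine the rescaling dimension d and shift a of Eq. (19) by
    # matching the rescaled profile of volume V2 to the reference
    # profile of the largest volume V_max.
    f_V2 = interp1d(r, n_V2, kind="cubic", fill_value="extrapolate")
    # Fit only where the reference profile exceeds 1/5 of its peak.
    mask = n_max > n_max.max() / 5.0
    r_fit, n_fit = r[mask], n_max[mask]

    def mismatch(params):
        d, a = params
        s = (V2 / V_max) ** (1.0 / d)
        # Eq. (19): vertical rescaling by s, evaluated at the shifted
        # and rescaled radius s * (r + a) - a.
        return np.sum((s * f_V2(s * (r_fit + a) - a) - n_fit) ** 2)

    return minimize(mismatch, x0=(3.0, 0.0), method="Nelder-Mead").x
```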
Figure 13: Values for $d$ extracted by comparing measured distributions of shell volumes at a given volume $V_{2}$ with those at the top volume, for $k_{0}\\!=\\!0.0$, $5.0$ and $8.0$, as explained in the text.

For each of the three phase space points, Fig. 12 shows a collection of curves for a range of volumes $V_{2}$, which have been rescaled according to (19), using the averaged pair $(\bar{d},\bar{a})$ for all of them. The topmost graph, for $k_{0}\\!=\\!8.0$, makes a convincing case for the presence of finite-size scaling in the degenerate phase. Using $\bar{d}\\!=\\!3.30$ and $\bar{a}\\!=\\!2.64$ as the joint rescaling parameters leads to a curve collapse of good, though not perfect, quality for slice volumes up to $32k$. This is not the case for the rescaled curves in the de Sitter phase (Fig. 12, bottom), which were obtained for the averaged fit parameters $\bar{d}\\!=\\!2.68$ and $\bar{a}\\!=\\!-0.46$ at $k_{0}\\!=\\!5.0$, and $\bar{d}\\!=\\!3.02$ and $\bar{a}\\!=\\!2.08$ at $k_{0}\\!=\\!0.0$. Although the slice volumes span a significantly smaller range than in the degenerate phase, our rescaled data do not support the presence of finite-size scaling in this phase. In addition to the mismatch among the curves, we also note that the distributions in the two phases look different as a function of the rescaled radius $x\\!=\\!r/V_{2}^{1/\bar{d}}$. While in the degenerate phase its range extends to around 6.7 and the peak is located near 2.4, the slice geometries in the de Sitter phase have a smaller linear extension, with $x$ reaching on the order of 4 (5.7) and the peak located near 1.4 (2.0) for $k_{0}\\!=\\!5.0$ (0.0). One might be tempted to disregard the lack of overlap between the curves for different volumes and simply _define_ the global Hausdorff dimension to be equal to the average $\bar{d}\\!=\\!2.68$ for $k_{0}\\!=\\!5.0$ and $\bar{d}\\!=\\!3.02$ for $k_{0}\\!=\\!0.0$. However, Fig. 13 shows that this would be misguided, since the values for $d$ exhibit a strong dependence on the volume in the range we have investigated. The curve for $k_{0}\\!=\\!0.0$, in particular, shows a steep rise, with little indication of asymptoting to a constant value, very different from the curve for the degenerate phase, which we have included for comparison. Note also that the $d$-values extracted from measurements in the de Sitter phase are not in any obvious way related to the values we found for the local Hausdorff dimension, namely, $d_{h}\\!=\\!3.10(4)$ for $k_{0}\\!=\\!5.0$ and $d_{h}\\!=\\!2.91(5)$ for $k_{0}\\!=\\!0.0$. This appears to be yet another indication that the behaviour of the shell distributions is not governed by a single scale and that the scaling hypothesis (17) is simply not valid in the de Sitter phase. Based on these observations, we are unable to associate a global Hausdorff dimension with the phase space points in the de Sitter phase. By contrast, finite-size scaling is observed in the degenerate phase, and the associated global Hausdorff dimension is given by $d_{H}=3.30(2),\quad\quad\textrm{degenerate phase }(k_{0}=8.0),$ (20) which within error margins coincides with the local Hausdorff dimension $d_{h}\\!=\\!3.31(4)$ of eq. (14) we determined in the previous subsection.

#### 4.3.3 Discussion of results

Let us consider first the results we obtained in the degenerate phase.
We found clear evidence for the presence of finite-size scaling when analyzing the global Hausdorff dimension, and mutually consistent values for both Hausdorff dimensions, with $d_{h}\\!=\\!3.31(4)$ for the local and $d_{H}\\!=\\!3.30(2)$ for the global variant. Despite the large discrepancy with the analytical value $d_{h}\\!=\\!d_{H}\\!=\\!4$ for two-dimensional DT quantum gravity, we nevertheless believe that the observed Hausdorff dimension of the spatial slices is compatible with this model of quantum gravity. This interpretation is supported by known difficulties in numerically extracting the Hausdorff dimension in two-dimensional systems of random geometry (see [72] and references therein), with a tendency of the Hausdorff dimension measurements to underestimate its true value. (More precisely, this is true for DT models; numerical measurements of the Hausdorff dimension for two-dimensional CDT found $d_{H}\\!\approx\\!2$ [58] and $d_{H}\\!=\\!2.2(2)$ [72].) In previous works, these difficulties have motivated the use of a more general geometric ensemble, the introduction of additional minbu moves and of a phenomenological shift parameter, as well as refined fitting techniques [64, 67, 65]. As mentioned earlier, several of these improvements are unfortunately not directly applicable in our case, because our two-dimensional geometries are parts of larger, three-dimensional triangulations. The earlier numerical results for pure DT quantum gravity whose derivation most closely resembles our treatment are the finite-size scalings obtained for $N_{2}\\!\leq\\!32k$ from collapsing curves for the shell volume distributions in terms of the dual link distance in [67]. Depending on the fitting method, they yielded the values $d_{H}\\!=\\!3.150(31)$ and $d_{H}\\!=\\!3.411(89)$. Unlike ours, this work used a generalized ensemble, but the final results for the Hausdorff dimension are broadly in line with our findings. Another aspect well illustrated by [67] is the increase of the measured Hausdorff dimension with the system volume, something we also observed in the degenerate phase (Fig. 10).

Our results in the de Sitter phase are less clear-cut, but point to a system that is in a different universality class from DT quantum gravity. The values we determined for the local Hausdorff dimensions, $d_{h}\\!=\\!3.10(4)$ for $k_{0}\\!=\\!5.0$ and $d_{h}\\!=\\!2.91(5)$ for $k_{0}\\!=\\!0.0$, are even further removed from 4, but this might conceivably still be due to some even more serious underestimate than in the degenerate phase. More significant is the absence of finite-size scaling and the ensuing impossibility of associating a consistent global Hausdorff dimension with the system. The most likely explanation is the absence of a single scale governing the dynamics, which would imply a different universality class from that of the degenerate phase. Discarding an interpretation of the de Sitter phase in terms of DT quantum gravity leaves us without any obvious alternative candidate theory to explain these results. There are a number of two-dimensional CDT models with matter coupling that have a local and/or global Hausdorff dimension of or near 3, including CDT with eight Ising spins [60], several massless scalar fields [73], or restricted hard dimers [74]. Two pure-gravity models of so-called locally causal dynamical triangulations, which generalize the strict slicing of two-dimensional CDT, were also found to have Hausdorff dimensions near 3 [72].
There is nothing obvious that would link one of these models to the Euclidean slices we are dealing with here, but without further investigation we cannot exclude this possibility either; such an investigation would, however, take us beyond the scope of the present work.

### 4.4 Curvature profile

The last quantity we will measure on the spatial hypersurfaces is a curvature observable. It is based on the quantum Ricci curvature, a generalized notion of Ricci curvature introduced in [30]. The quantum Ricci curvature depends on a neighbourhood of linear size $\delta$ of a point $x$ and has the interpretation of a coarse-grained Ricci curvature associated with the scale $\delta$. It is defined on a range of metric spaces of Riemannian signature, including nonsmooth ones, and its introduction was motivated by the search for a notion of (renormalized) curvature suitable for nonperturbative quantum gravity. It can also be defined on classical, smooth Riemannian spaces, where in the limit $\delta\\!\rightarrow\\!0$ it reproduces the standard notion of Ricci curvature [30, 75]. The simplest way to turn this (quasi-)local notion of curvature into a quantum observable depending on the scale $\delta$, dubbed the _curvature profile_ [76], is by averaging it over all points of a given metric space and at each point over all directions, leading to a coarse-grained, averaged notion of a Ricci scalar. The quantum Ricci curvature and associated curvature profiles have been studied for a wide range of smooth and piecewise flat spaces [75, 76] and used to characterize the curvature properties of DT quantum gravity in $D\\!=\\!2$ [75], CDT quantum gravity in $D\\!=\\!2$ [77], and full, four-dimensional CDT quantum gravity, producing further evidence for the de Sitter nature of the quantum geometry in four dimensions [17]. In the following, we will briefly recall the ingredients and construction of the curvature profile, and refer to the literature [30, 76, 77] for more detailed discussions.

The main ingredient in determining the quantum Ricci curvature is the measurement of the _average sphere distance_ $\bar{d}(S_{p}^{\delta},S_{p^{\prime}}^{\delta})$ of two overlapping geodesic spheres $S_{p}^{\delta}$, $S_{p^{\prime}}^{\delta}$, each of radius $\delta$, whose centres $p$ and $p^{\prime}$ are also a geodesic distance $\delta$ apart. To compute the ($\delta$-dependent) quantity $\bar{d}$, we average the distance $d(q,q^{\prime})$ over all pairs of points $(q,q^{\prime})\in S_{p}^{\delta}\times S_{p^{\prime}}^{\delta}$ on the two spheres. The geometry of the situation is depicted in Fig. 14 for the two-dimensional case, where the spheres are given by circles. When applying the prescription to a smooth Riemannian manifold, one extracts the Ricci curvature $Ric(v,v)$ at $p$ in the limit $\delta\\!\rightarrow\\!0$, where $v$ is the tangent vector at $p$ in the direction of $p^{\prime}$ [30, 75]. However, we will implement the prescription on piecewise flat triangulations and for non-infinitesimal $\delta$, where distances are defined in terms of an integer-valued link distance or dual link distance, and the limit $\delta\\!\rightarrow\\!0$ is neither well defined nor physically interesting because of short-distance discretization artefacts.
Figure 14: The average sphere distance $\bar{d}$ of two geodesic circles $S_{p}^{\delta}$ and $S_{p^{\prime}}^{\delta}$, whose centres $p$ and $p^{\prime}$ are a distance $\delta$ apart, is obtained by averaging over the distance $d(q,q^{\prime})$ of all point pairs $(q,q^{\prime})$ along the two circles.

In what follows, we will use the link distance $d(q,q^{\prime})$ between pairs $q,q^{\prime}$ of vertices, to allow for a direct comparison with previous measurements of the quantum Ricci curvature in two-dimensional DT quantum gravity on the ensemble of regular, simplicial manifolds, which also used the link distance [75]. In the triangulated setting, a geodesic “sphere” $S_{p}^{\delta}$ centred at the vertex $p$ is defined as the set of vertices at link distance $\delta$ from $p$, $N_{0}(S_{p}^{\delta})$ counts the number of vertices in this set, and the average sphere distance takes the form of a normalized double sum $\bar{d}(S^{\delta}_{p},S^{\delta}_{p^{\prime}})=\frac{1}{N_{0}(S^{\delta}_{p})}\frac{1}{N_{0}(S^{\delta}_{p^{\prime}})}\sum_{q\in S^{\delta}_{p}}\sum_{q^{\prime}\in S^{\delta}_{p^{\prime}}}d(q,q^{\prime}),\;\;\;\;\;d(p,p^{\prime})=\delta.$ (21) We have used inverted commas since the vertices of a point set $S_{p}^{\delta}$ will in general not form a genuine sphere: it is not required that the vertices form a sequence of nearest neighbours which can be joined pairwise by $N_{0}(S_{p}^{\delta})$ edges, resulting in a unique one-dimensional simplicial submanifold with the topology of a circle. Rather, such a procedure generally yields multiple circles, and results in self-intersections and self-overlaps. The quantum Ricci curvature $K_{q}(p,p^{\prime})$ associated with the point pair $(p,p^{\prime})$ is defined in terms of the normalized average sphere distance as $\bar{d}(S_{p}^{\delta},S_{p^{\prime}}^{\delta})/\delta=:c_{q}\,(1-K_{q}(p,p^{\prime})),\quad\quad\delta=d(p,p^{\prime}).$ (22) The factor $c_{q}$ is a positive constant which describes the $\delta$-independent part of the average sphere distance. In the continuum, it can be defined by the limit $c_{q}\\!:=\\!\lim_{\delta\to 0}\bar{d}/\delta$ and depends only on the dimension of the manifold. The function $K_{q}(p,p^{\prime})$ captures the non-trivial dependence of the average sphere distance on the direction of the vector $\overline{pp^{\prime}}$ and the scale $\delta$. The most straightforward way to construct a genuine, diffeomorphism-invariant curvature observable is by taking an average $\bar{d}_{\rm av}$ of the average sphere distance $\bar{d}$ of eq. (22) over all pairs $(p,p^{\prime})$ of centre points at a fixed distance $\delta=d(p,p^{\prime})$ in the triangulation $T$, $\bar{d}_{\rm av}(\delta):=\frac{1}{{\cal N}_{\delta}}\sum_{p\in T}\sum_{p^{\prime}\in T}\bar{d}(S_{p}^{\delta},S_{p^{\prime}}^{\delta})\,\delta_{K}(d(p,p^{\prime}),\delta).$ (23) The symbol $\delta_{K}$ denotes a discrete Kronecker delta, implementing the distance constraint on $p,p^{\prime}$, and the normalization ${\cal N}_{\delta}$ is defined by the double sum ${\cal N}_{\delta}=\sum_{p\in T}\sum_{p^{\prime}\in T}\delta_{K}(d(p,p^{\prime}),\delta).$ (24) The so-called curvature profile, introduced in [76], is given by the quotient $\bar{d}_{\rm av}(\delta)/\delta$.
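Implementing (21) requires only link distances on the vertex graph of the slice. A minimal (unoptimized) sketch, with `adj` the vertex adjacency lists and the precondition $d(p,p^{\prime})=\delta$ assumed to hold:

```python
from collections import deque

def bfs_distances(adj, source):
    # Link distances from `source` to every vertex of the slice graph.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def average_sphere_distance(adj, p, p_prime, delta):
    # Eq. (21): average link distance between all pairs (q, q') on the
    # "spheres" of radius delta around p and p'.
    S_p = [v for v, d in bfs_distances(adj, p).items() if d == delta]
    S_pp = [v for v, d in bfs_distances(adj, p_prime).items() if d == delta]
    total = 0
    for q in S_p:
        dist_q = bfs_distances(adj, q)
        total += sum(dist_q[qp] for qp in S_pp)
    return total / (len(S_p) * len(S_pp))
```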
Since the double sum (23) includes an average over all directions, it allows us to extract a scale-dependent quantum Ricci scalar $K_{\rm av}(\delta)$ from the curvature profile via $\bar{d}_{\rm av}(\delta)/\delta=:c_{\rm av}(1-K_{\rm av}(\delta)).$ (25) Since in the simplicial setting the factor $c_{\textrm{av}}$ cannot be fixed through a limit $\delta\\!\to\\!0$, it is set to the expectation value of the normalized average sphere distance at the minimal value of $\delta$ for which discretization artefacts are no longer dominant. In [75], this value was taken to be $\delta\\!=\\!5$.

Figure 15: Expectation value $\langle\bar{d}_{\rm av}/\delta\rangle$ of the normalized average sphere distance, measured in two-dimensional DT quantum gravity (blue squares) [75], and for the spatial slices of three-dimensional CDT quantum gravity in the degenerate phase (yellow dots), both for volume $V_{2}\\!=\\!60k$. (Error bars are smaller than dot size.)

Continuing our investigation of three-dimensional CDT quantum gravity, we measured the expectation value $\langle\bar{d}_{\rm av}/\delta\rangle$ of the curvature profile of its spatial hypersurfaces at the phase space points $k_{0}\\!=\\!0.0$, 5.0 and 8.0. We used slice volumes in the range $V_{2}\\!\in\\![4k,60k]$ in the degenerate phase and $V_{2}\\!\in\\![4k,20k]$ in the de Sitter phase. For the volumes $V_{2}\\!=\\!20k,30k,40k$ and $60k$, we compared the curvature profiles in the degenerate phase at $k_{0}\\!=\\!8.0$ to those of two-dimensional DT quantum gravity [75], and in each case found agreement within statistical error bars. (We thank N. Klitgaard for making the original data available to us.) For illustration, the results for $V_{2}\\!=\\!60k$ are shown in Fig. 15. There is an excellent match of our present data, taken for $\delta\\!\in\\![1,25]$, with those of DT quantum gravity in the range where they overlap, except at $\delta\\!=\\!1$. The fact that this agreement extends even into most of the region of discretization artefacts at small $\delta$ provides additional evidence that the spatial hypersurfaces in the degenerate phase behave like two-dimensional DT geometries. The monotonically decreasing curvature profiles we found in both the degenerate and the de Sitter phase clearly indicate the presence of positive curvature, as can be seen from eq. (25). Motivated by the fact that curvature profiles of two-dimensional DT quantum gravity can best be fitted to those of a five-dimensional continuum sphere with some effective curvature radius $\rho_{\textrm{eff}}$ [75], we tried to do the same for our data. As expected from the match of the curvature profiles, our results for the effective curvature radii in the degenerate phase are close to the values listed in Table 1 of [75]. Note that a rough estimate for the onset of finite-size effects in measuring $\langle\bar{d}_{\rm av}/\delta\rangle$ on a sphere of curvature radius $\rho$ is $\delta\\!\approx\\!\rho$, for which the extension $3\delta$ of the double circle of Fig. 14 is approximately equal to $\pi\rho$, half of the circumference of the sphere. This is in good agreement with the findings in [75]. Consistent with this argument, we found that for $V_{2}\\!=\\!60k$ a fitting range $\delta\\!\in\\![5,15]$ is appropriate, while for the smaller slice volume $V_{2}\\!=\\!20k$, which is the maximal size available for measurements in the de Sitter phase, the smaller range $\delta\\!\in\\![5,10]$ should be used.
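Given the measured profile, extracting the coarse-grained quantum Ricci scalar via (25) is then immediate once $c_{\rm av}$ has been fixed at the reference scale $\delta\\!=\\!5$. A sketch, where `profile` is assumed to map $\delta$ to the measured $\langle\bar{d}_{\rm av}/\delta\rangle$ and the numbers below are made up for illustration:

```python
def quantum_ricci_scalar(profile, delta_ref=5):
    # Eq. (25): K_av(delta) = 1 - (d_av(delta)/delta) / c_av, with c_av
    # fixed to the profile value at the reference scale delta_ref.
    c_av = profile[delta_ref]
    return {d: 1.0 - v / c_av for d, v in profile.items()}

# A made-up, monotonically decreasing profile, i.e. positive curvature:
profile = {5: 1.40, 6: 1.38, 7: 1.35, 8: 1.31, 9: 1.27, 10: 1.22}
print(quantum_ricci_scalar(profile))
```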
The measured curvature profile for $V_{2}\\!=\\!20k$ at $k_{0}\\!=\\!5.0$ in the de Sitter phase is shown in Fig. 16, where for comparison we present it alongside the result for the same slice volume at $k_{0}\\!=\\!8.0$ in the degenerate phase. The continuous lines are best fits to a 5D continuum sphere. Following [75], an additive shift was used such that the data point at $\delta\\!=\\!5$ always lies on the continuum curve. Because of the small fitting range we cannot and do not claim that the data taken at this (or even smaller) volume represent convincing evidence for the curvature behaviour of a sphere, and the effective curvature radii extracted ($\rho_{\textrm{eff}}\\!=\\!13.5$ for $k_{0}\\!=\\!8.0$, $\rho_{\textrm{eff}}\\!=\\!11.1$ for $k_{0}\\!=\\!5.0$) should be taken with a large grain of salt. (The data at $k_{0}\\!=\\!0.0$ are somewhat similar to those at $k_{0}\\!=\\!5.0$, but their quality is even worse, and we do not show them here.) In the degenerate phase we can say a bit more, as we have seen, since we can go up to volume $V_{2}\\!=\\!60k$ and essentially reproduce the results of DT quantum gravity. With regard to the de Sitter phase, we can at this stage only conclude that the curvature profiles are not in contradiction with those of a 5D continuum sphere, but it is clear that the quality of our results is not sufficient to say definitively that the geometry is a sphere, let alone determine its dimension and curvature radius reliably. This would require us to probe much larger systems, which especially in the de Sitter phase was not feasible in our set-up.

Figure 16: Comparing measured curvature profiles in the range $\delta\\!\in\\![5,10]$ with those of five-dimensional continuum spheres, for slice volume $V_{2}\\!=\\!20k$. Left: fit to a sphere with $\rho_{\textrm{eff}}\\!=\\!13.5$ in the degenerate phase ($k_{0}\\!=\\!8.0$). Right: fit to a sphere with $\rho_{\textrm{eff}}\\!=\\!11.1$ in the de Sitter phase ($k_{0}\\!=\\!5.0$).

## 5 Summary and conclusion

We set out to gain a more detailed understanding of the properties of three-dimensional CDT quantum gravity by studying the intrinsic geometric properties of its spatial slices at integer proper time. We worked with the “classic” ensemble of three-dimensional simplicial manifolds, which implies that the spatial hypersurfaces under consideration also satisfy manifold conditions, and can be characterized by dual trivalent graphs without tadpoles or self-energy insertions. The original work on this quantum gravity model found two distinct phases on either side of a first-order transition as a function of the bare inverse gravitational coupling $k_{0}$ [26]: a de Sitter phase of extended geometry for $k_{0}\\!<\\!k_{0}^{c}$, and a degenerate phase for $k_{0}\\!>\\!k_{0}^{c}$, characterized by a strongly fluctuating volume profile, the almost complete absence of (2,2)-tetrahedra and an approximate decoupling of nearby spatial slices. We investigated the intrinsic geometric properties of the spatial slices in both phases, well away from the critical coupling $k_{0}^{c}\\!\approx\\!6.24$, at $k_{0}\\!=\\!0.0$ and $5.0$ in the de Sitter and at $k_{0}\\!=\\!8.0$ in the degenerate phase. The quantities considered were the expectation values of the average coordination number of vertices in the slices, of the entropy exponent extracted from the distribution of minimal-neck baby universes, of the local and global Hausdorff dimension, and of the curvature profile, obtained by averaging the quantum Ricci curvature.
They are for the most part well studied in two-dimensional DT and CDT quantum gravity, the primary systems of reference for our results on the spatial slices. One aim of our investigation, motivated by the observed decoupling behaviour of the spatial slices, was to verify that the behaviour of the slices in the degenerate phase lies in the same universality class as DT quantum gravity in $D\\!=\\!2$. What happens in the de Sitter phase, and to what extent the embedding three-dimensional geometry influences the effective dynamics of the hypersurfaces in this phase, is much less clear a priori. Summarizing our results, we found convincing evidence that the behaviour of the spatial slices in the degenerate phase is indeed compatible with that of two-dimensional DT quantum gravity. The measured distribution of the vertex order follows the analytical prediction almost perfectly (Fig. 6). The same is true for the distribution of (sufficiently large) minbu sizes (Fig. 8, top), yielding an entropy exponent $\gamma\\!=\\!-1/2$, the known value for pure Euclidean gravity in $D\\!=\\!2$. Measurement of the local and global Hausdorff dimension yielded mutually compatible results, with $d_{h}\\!=\\!3.31(4)$ and $d_{H}\\!=\\!3.30(2)$, the latter exhibiting finite-size scaling. We argued that the discrepancy between the measured values and the analytically known value $d_{h}\\!\equiv\\!d_{H}\\!=\\!4$ is in line with expectations for a simplicial manifold ensemble and the relatively small volumes under consideration here. Finally, the measured curvature profile matched very well that of a previous investigation of DT quantum gravity in $D\\!=\\!2$ (Fig. 15), at least up to the slice volume $V_{2}\\!=\\!60k$ we could investigate.

By contrast, in the de Sitter phase we could not establish a corresponding overall match of the behaviour of the measured observables with that of any known quantum gravity model in two dimensions. In particular, we did not find any evidence that the spatial slices behave according to DT quantum gravity in $D\\!=\\!2$, which one may argue is the most natural hypothesis, given the absence in the slices of a preferred direction or a time-space asymmetry, which characterizes two-dimensional CDT configurations. We also found that the behaviour within the de Sitter phase depends on the value of the bare coupling constant $k_{0}$. Since it was tangential to our main focus, we did not study the nature of this $k_{0}$-dependence more closely, which may be a worthwhile project in itself. Within the limited range of couplings $k_{0}\\!\in\\![3.0,6.0]$, earlier work found some evidence that results inside the de Sitter phase can be mapped onto each other by a $k_{0}$-dependent rescaling of the time- and space-like length units [26]. It would be interesting to understand whether this also extends to the value $k_{0}\\!=\\!0$ we have been using, or even to negative $k_{0}$. Returning to the specifics of our results, the vertex order in the de Sitter phase was found to obey a very different distribution from that in the degenerate phase, with large coordination numbers being more prevalent (Fig. 7). We also saw that the distribution for $k_{0}\\!=\\!0.0$ is even further removed from that for $k_{0}\\!=\\!8.0$ than the distribution for $k_{0}\\!=\\!5.0$.
The method of determining the entropy exponent $\gamma$ from the distribution of minbu sizes does not appear to be applicable in the de Sitter phase, which we conjectured to be due to the presence of correlations in the three-dimensional embedding triangulations. Although this does not necessarily disprove a DT-like behaviour of the spatial slices, it does not present any evidence in favour of it either. The measured local Hausdorff dimensions, given by $d_{h}\\!=\\!3.10(4)$ for $k_{0}\\!=\\!5.0$ and $d_{h}\\!=\\!2.91(5)$ for $k_{0}\\!=\\!0.0$, are significantly smaller than the value found in the degenerate phase, and point to a different continuum limit than that of two-dimensional DT quantum gravity. Of course, it could be the case that the de Sitter phase is subject to much larger discretization artefacts, because of the genuinely three-dimensional nature of the underlying geometries, and that one needs to go to larger slice volumes to get a better approximation of continuum behaviour. However, even taking this possibility into account, a yet stronger indication that the slices do not exhibit DT-like behaviour comes from the absence of the finite-size scaling at fixed $k_{0}$ needed to extract a global Hausdorff dimension, from which we deduced that the slice dynamics is likely governed by more than just one scale. Finally, the measurement of the curvature profiles showed the presence of a positive average quantum Ricci scalar. Matching with a continuum sphere was in principle possible, but should at this stage be regarded as inconclusive, since it was based on only a handful of measurement points, which could be compatible with other curvature profiles as well. It therefore cannot serve as evidence that the system is equivalent to two-dimensional DT quantum gravity.

Having largely dismissed an interpretation of the spatial slices in the de Sitter phase in terms of two-dimensional DT or CDT quantum gravity, we are left without any obvious alternatives for associating them with (the universality classes of) other known systems of random geometry. DT gravity coupled to matter with a central charge $c\\!<\\!1$ is disfavoured, because the apparent absence of finite-size scaling for the global Hausdorff dimension of the spatial slices contradicts the scale-invariance of these systems. On the one hand, the quality of our data is far removed from the precision measurements of the Hausdorff dimension of such systems [65], and we cannot entirely exclude that finite-size scaling will appear at much larger volumes than the ones we could probe here. On the other hand, we would urge caution when comparing to ensembles with less stringent regularity conditions, like those used in [65]: even if the relaxation of simplicial manifold conditions does not change the universality class in specific two-dimensional models, this may not hold in general in three-dimensional quantum gravity models, and may depend on the details of the regularity conditions. This is part of a more general question, namely, are there natural larger ensembles of three-dimensional triangulations which contain the simplicial manifold ensemble of CDT, but lie in the same universality class? Conversely, are there strictly smaller ensembles contained in that of standard CDT quantum gravity, which still belong to the same universality class?
Larger ensembles may facilitate numerical simulations and lead to faster convergence, while smaller ensembles may be easier to enumerate and handle analytically (see [78] for a recent example in three-dimensional DT quantum gravity). Of course, there may be more than one physically interesting universality class associated with three-dimensional Lorentzian random geometries, like the wormhole phase described by the ABAB-matrix model [38] already mentioned in the introduction. Returning to ensembles of simplicial manifolds, our results in the de Sitter phase may indicate the existence of another, new model of two-dimensional quantum geometry, where the embedding three-dimensional geometry induces some effective dynamics on the spatial slices, presumably through $k_{0}$-dependent extrinsic curvature contributions. Further research is needed to understand whether such an induced model exists and whether it can in turn be interpreted as a two-dimensional quantum field theory with properties like locality and unitarity, as is the case in the degenerate phase. Contrasting our relatively straightforward verification of the DT nature of the spatial dynamics in the degenerate phase with the difficulties we encountered when investigating the de Sitter phase highlights the fact that quantum geometry in three dimensions – here by leaving its imprint on the hypersurfaces – is significantly more complex than quantum geometry in two dimensions. Much remains to be done to illuminate its mathematical and physical properties. Acknowledgments. This work was partly supported by a Projectruimte grant of the Foundation for Fundamental Research on Matter (FOM, now defunct), financially supported by the Netherlands Organisation for Scientific Research (NWO). J.B. would like to thank Wouter van Amsterdam for useful comments on Appendix B. ## Appendix A Estimating the entropy exponent This appendix describes the procedure used in Sec. 2.1 of [62] to determine best fit values for the entropy exponent $\gamma$ with associated error bars, together with the results of this analysis applied to our data. One introduces a subleading correction to the right-hand side of (11) by replacing $B^{\gamma-2}\to B^{\gamma-2}\left(1+\frac{c}{B}+O\left(\frac{1}{B^{2}}\right)\right),$ (26) which allows for a better fit in the regime of small $B$, without significantly affecting the behaviour of the function at intermediate and large $B$. One then takes the logarithm on both sides of (11) to obtain $\log(\bar{n}_{N_{2}}(B))=a+(\gamma-2)\log\left(B(N_{2}-B)\right)+\frac{c}{B},$ (27) where the best fit parameters $a$, $\gamma$ and $c$ should now be determined from the observed baby universe distributions shown in Fig. 8. (Note that the argument in the logarithm on the right-hand side of this expression differs from eq. (2.2) in [62] by a multiplicative factor $N_{2}$, which is absorbed by the fit parameter $a$.) The goodness of fit is defined through the $\chi^{2}$-statistic, described in greater detail in App. B below. As mentioned before, the prediction (11) is only expected to hold for sufficiently large baby universes, where discretization effects are negligible. We therefore introduce a lower cut-off $B_{0}$ on $B$ in the data before extracting the best fit parameters. The resulting values of the entropy exponent $\gamma$ as a function of the cut-off $B_{0}$ are shown in Fig. 17 for $N_{2}\\!\equiv\\!V_{2}\\!=\\!1.000$, both with and without the correction term $c/B$ in (27). 
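For concreteness, the fitting procedure just described can be sketched in a few lines of Python. This is an illustrative reconstruction, not the code used for the analysis in the paper; the arrays `B`, `nbar` and `nbar_err` (the minbu sizes, their measured mean abundances and statistical errors) and the slice volume `N2` are assumed inputs.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_gamma(B, nbar, nbar_err, N2, B0):
    """Fit eq. (27) to all minbu sizes B >= B0; returns (gamma, gamma_error)."""
    def log_nbar(B, a, gamma, c):
        return a + (gamma - 2.0) * np.log(B * (N2 - B)) + c / B
    m = B >= B0
    sigma_log = nbar_err[m] / nbar[m]   # first-order error of log(nbar)
    popt, pcov = curve_fit(log_nbar, B[m], np.log(nbar[m]),
                           p0=(0.0, -0.5, 1.0), sigma=sigma_log,
                           absolute_sigma=True)
    return popt[1], np.sqrt(pcov[1, 1])

def extrapolate_gamma(B0s, gammas, gamma_errs):
    """Fit the exponential ansatz (28), gamma(B0) = gamma - c1 exp(-c2 B0)."""
    f = lambda B0, g, c1, c2: g - c1 * np.exp(-c2 * B0)
    popt, pcov = curve_fit(f, B0s, gammas, p0=(-0.5, 0.1, 0.1),
                           sigma=gamma_errs, absolute_sigma=True)
    return popt[0], np.sqrt(pcov[0, 0])
```

Scanning `B0` over the cut-offs shown in Fig. 17 and feeding the resulting `gamma(B0)` estimates to `extrapolate_gamma` mimics the extrapolation of (28) discussed next.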
Our figure resembles Fig. 2 of reference [62] extremely closely, including the magnitude of the error bars. The authors of [62] subsequently obtain an estimate for $\gamma$ by fitting an exponential of the form $\gamma(B_{0})=\gamma-c_{1}\,e^{-c_{2}\,B_{0}}.$ (28) The resulting values for $\gamma$ shown in Table 1 of [62] are (within statistical error) identical to the ones we found from our data, using the same method. Figure 17: Fitted values of the entropy exponent $\gamma$ in the degenerate phase for slice volume $N_{2}\\!=\\!1.000$ and different lower cut-offs $B_{0}$. We show the best fit values with and without the correction term appearing in (27). ## Appendix B Determining confidence intervals for best fit parameters While researching the literature on numerical estimates of quantities like the Hausdorff dimension and the entropy exponent, we found that previous work is often not very explicit about the methodology used to determine the uncertainty in the best fit parameters, if such margins of error are provided at all. Since our goal was to investigate whether the behaviour of the spatial slices is consistent with that of known models of two-dimensional random geometry, we considered it important to attach a degree of confidence to the results obtained. This led us to a more detailed study of the statistical methods available for performing such an analysis. Here we summarize our findings, in the hope that others may find them useful. We also comment on the assumptions required to make use of these methods, and to what extent they apply in the present context. Our main aim is to motivate the choices made in computing the confidence intervals; we do not aim for full mathematical or statistical rigour. The generic starting point consists of a set of $N$ data points $y_{n}\in\mathbb{R}$ measured at positions $\bm{x}_{n}$, to which we want to fit a function $f(\bm{x};\bm{\theta})$ with parameters $\bm{\theta}$. Firstly, we require a measure of the goodness-of-fit which allows us to find the set of best fit parameters $\bm{\theta}_{\textrm{min}}$ that minimizes this measure. Secondly, since the measurements are inherently noisy, we are interested in specifying a range for the fit parameters in which we expect to find the “true” values with a certain degree of confidence. In the method of least squares, the goodness-of-fit is defined through the sum of squares of the residuals, and the optimal fit is obtained when this sum is minimized. However, the standard unweighted sum of squares assumes equal variances on all the data points, which typically is not the case for the measurements we perform in lattice quantum gravity. In what follows, we will show how the $\chi^{2}$-distribution can be used in the context of a linear regression model to define a goodness-of-fit and corresponding confidence intervals for the parameters $\bm{\theta}$. The models we fit in the main text of this work do not fall into this class of linear models, so we subsequently discuss how the analysis is affected when the condition of linearity is relaxed. ### B.1 Best fit parameter estimation A linear model takes the form $f\left(\bm{x};\bm{\theta}\right)=\sum_{i=1}^{p}\theta_{i}f_{i}(\bm{x}),$ (29) where the $f_{i}$ are functions of the independent variables. These functions are allowed to be nonlinear in the $x_{i}$ — the linearity condition applies to the fit parameters $\theta_{i}$ only. Suppose the “correct” model has fit parameters $\bm{\theta}_{0}$. 
If the errors $\sigma_{n}$ on the $N$ measurement outcomes are Gaussian, we can consider the $y_{n}$ to be normally distributed random variables with mean $f(\bm{x}_{n};\bm{\theta}_{0})$ and standard deviation $\sigma_{n}$. We can turn these into $N$ standard normals $Z_{n}=\frac{y_{n}-f(\bm{x}_{n};\bm{\theta}_{0})}{\sigma_{n}}$. Let us define the _$\chi^{2}$-statistic_ of a choice of fit parameters $\bm{\theta}$ as the sum of the squares of the $Z_{n}$, $\chi^{2}\left(\bm{\theta}\right)=\sum_{n=1}^{N}\left(\frac{y_{n}-f\left(\bm{x}_{n};\bm{\theta}\right)}{\sigma_{n}}\right)^{2}.$ (30) The reason we call this the $\chi^{2}$-statistic is that the sum of squares of $k$ independent standard normals follows a so-called $\chi^{2}$-distribution with $k$ degrees of freedom. Such a distribution has expectation value $k$ and variance $2k$. The quantity $\chi^{2}\left(\bm{\theta}\right)$ should be minimized to find a _maximum likelihood estimator_ for $\bm{\theta}_{0}$, corresponding to the set of best fit parameters. In the current situation, where we are trying to fit a model to the data points, we must take into account that the individual $y_{n}$ are explicitly _not_ independent — after all, we have hypothesized the existence of a model $f(\bm{x};\bm{\theta})$ that can predict the outcomes of our measurements. The sum of the $N$ squared standard normals therefore follows a $\chi^{2}$-distribution with a certain number $k\\!<\\!N$ of degrees of freedom. For a linear model without priors (i.e. restrictions on the fit parameters), we have $k\\!=\\!N-p$, where $p$ is the number of fit parameters. Therefore, when fitting a linear model $f(\bm{x};\bm{\theta})$ with $p$ fit parameters to $N$ data points $y_{n}$ with Gaussian errors $\sigma_{n}$, we expect the $\chi^{2}$-statistic to follow a $\chi^{2}$-distribution with $k=N-p$ degrees of freedom. As mentioned earlier, the best fit parameters $\bm{\theta}_{\textrm{min}}$ are determined by finding the minimum possible value $\chi^{2}_{\textrm{min}}$ of the $\chi^{2}$-statistic. The fit is considered good when $\chi^{2}_{\textrm{min}}\\!\approx\\!k$, since this is the expectation value of the corresponding $\chi^{2}$-distribution. Significantly larger values of $\chi^{2}_{\textrm{min}}$ indicate that no reasonable fit could be found (e.g. our assumptions about the model may be wrong), whereas a $\chi^{2}_{\textrm{min}}$ significantly lower than $k$ could mean we are overfitting the data or overestimating the measurement errors $\sigma_{n}$. ### B.2 Confidence intervals With the best fit parameters $\bm{\theta}_{\textrm{min}}$ at our disposal, we can now turn to determining _confidence intervals_ on these parameters. After all, slightly varying the parameters around their best fit values should produce approximately equal $\chi^{2}$-statistics. Moreover, the measurement data we are using to compute $\chi^{2}$ is inherently noisy, which implies that we have merely obtained an estimate of the “true” best fits. To specify a range in which we expect to find the true values with a certain degree of confidence, we can use the properties of the $\chi^{2}$-distribution. Confidence intervals (CIs) are computed at a certain _confidence level_, specified by a percentage (a common choice is the 95% CI). Alternatively, we can specify a significance level $\alpha$, corresponding to a $(1\\!-\\!\alpha)\%$ confidence level. 
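Before turning to critical values, a minimal sketch may help make the fitting step of Sec. B.1 concrete. The model and data below are invented placeholders; the point is only the weighted $\chi^{2}$ of (30), its minimization, and the comparison of $\chi^{2}_{\textrm{min}}$ with $k=N-p$.

```python
import numpy as np
from scipy.optimize import minimize

def chi2_stat(theta, f, x, y, sigma):
    """Weighted sum of squared residuals, eq. (30)."""
    return np.sum(((y - f(x, *theta)) / sigma) ** 2)

# toy linear model of the form (29): f(x) = theta_1 + theta_2 * x
f = lambda x, a, b: a + b * x
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])      # invented measurements
sigma = np.array([0.2, 0.2, 0.2, 0.2, 0.2])  # per-point Gaussian errors

res = minimize(chi2_stat, x0=np.array([0.0, 1.0]), args=(f, x, y, sigma))
chi2_min, theta_min = res.fun, res.x
k = len(y) - len(theta_min)    # N - p degrees of freedom for a linear model
print(chi2_min, k)             # a good fit has chi2_min close to k
```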
Given $\alpha$, we can determine the _critical value_ $\chi^{2}_{\textrm{crit},\alpha}$ of the $\chi^{2}$-statistic for our fitted model containing $p$ parameters by solving $P(\chi^{2}\\!\leq\\!\chi^{2}_{\textrm{crit},\alpha})\\!=\\!(1\\!-\\!\alpha)$, where $P(\chi^{2}\\!\leq\\!x)$ is the cumulative distribution function for a $\chi^{2}$-distribution with $k\\!=\\!p$ degrees of freedom. (When estimating the best fit entropy exponent and local Hausdorff dimension, we were only interested in one out of the $p$ fit parameters; in this case, we should match to a $\chi^{2}$-distribution with one degree of freedom.) We can determine $\chi^{2}_{\textrm{crit},\alpha}$ by the use of quantile functions or lookup tables. The $(1\\!-\\!\alpha)\%$ confidence interval for the fit parameters $\bm{\theta}$ is then defined [79] as the region for which $\chi^{2}(\bm{\theta})-\chi^{2}_{\textrm{min}}<\chi^{2}_{\textrm{crit},\alpha}.$ (31) Typically, this region is an ellipsoid in $\bm{\theta}$-space, which can be determined numerically by performing a grid search around $\bm{\theta}_{\textrm{min}}$. As an example, when determining the 95% confidence intervals for a linear model with two parameters, we find $\chi^{2}_{\textrm{crit},0.05}\\!\approx\\!5.991$, and the joint confidence intervals of the two fit parameters are the region in $\mathbb{R}^{2}$ for which (31) holds. ### B.3 Potential caveats We have used the procedure just described to determine the confidence intervals for the best fit parameters in the main text of this work. However, as pointed out earlier, the analysis rests on several assumptions that do not necessarily apply to our models and measurements. An important prerequisite for using the $\chi^{2}$-distribution is that the measurement errors are Gaussian, otherwise the sum (30) is not a sum of squares of standard normals. We often found a slight degree of skewness in the distribution of our measurement results, potentially invalidating the use of $\chi^{2}$-methods. However, the skewness factors were always near zero, so that we may still consider the computed bounds of the confidence intervals to be good approximations to their true values. A second potential issue is that the models we fit in our work are not linear. Both for the minbu sizes and the microscopic Hausdorff dimension, one of the fit parameters appears in the exponent of an independent variable. Although the best fit parameters for such models can still be obtained by minimizing (30), determining the correct number of degrees of freedom is known to be difficult [80]. This means that we do not know the proper expectation value of (30), and therefore do not have a reference point to compare our $\chi^{2}_{\textrm{min}}$ to. However, not knowing the true number of degrees of freedom has more serious consequences for computing confidence intervals. The number $p$ of fit parameters in our models is small, $p\\!<\\!4$, and choosing a different number of degrees of freedom near zero has a strong effect on the resulting $\chi^{2}_{\textrm{crit},\alpha}$. This can significantly alter the width of the corresponding confidence interval. For our purposes, we do not consider this to be a major issue. The main goal of our analysis was to check whether our results are consistent with previously known results from the literature, requiring an order-of-magnitude estimate of the confidence interval. This order of magnitude is not affected if we over- or underestimate the degree-of-freedom count by a few units. 
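Continuing the toy sketch from Sec. B.1, the region (31) can be mapped out by a brute-force grid search; `scipy.stats.chi2.ppf` supplies the critical value (approximately 5.991 for a joint 95% interval on two parameters, as quoted above). The variables `theta_min`, `chi2_min`, `chi2_stat`, `f`, `x`, `y` and `sigma` are carried over from the earlier sketch.

```python
import numpy as np
from scipy.stats import chi2

alpha = 0.05
crit = chi2.ppf(1.0 - alpha, df=2)   # ~5.991 for two jointly fitted parameters

# grid search around the best fit point, eq. (31)
a_vals = np.linspace(theta_min[0] - 1.0, theta_min[0] + 1.0, 201)
b_vals = np.linspace(theta_min[1] - 1.0, theta_min[1] + 1.0, 201)
region = [(a, b) for a in a_vals for b in b_vals
          if chi2_stat((a, b), f, x, y, sigma) - chi2_min < crit]
# 'region' approximates the joint 95% confidence ellipse;
# projecting it onto each axis gives conservative per-parameter intervals
```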
Furthermore, since all confidence intervals in this work were obtained by using the same methods, any comparison among our confidence intervals is likely to still be meaningful. ## References * [1] R. Loll, G. Fabiano, D. Frattulillo and F. Wagner “Quantum Gravity in 30 Questions” arXiv, Proceedings of Science (to appear), 2022 arXiv:2206.06762 [hep-th] * [2] J. Ambjørn, A. Görlich, J. Jurkiewicz and R. Loll “Nonperturbative Quantum Gravity” In _Physics Reports_ 519.4-5, 2012, pp. 127–210 DOI: 10.1016/j.physrep.2012.03.007 * [3] R. Loll “Quantum Gravity from Causal Dynamical Triangulations: A Review” In _Classical and Quantum Gravity_ 37.1 IOP Publishing, 2019, pp. 013002 DOI: 10.1088/1361-6382/ab57c7 * [4] J. Ambjørn and R. Loll “Non-Perturbative Lorentzian Quantum Gravity, Causality and Topology Change” In _Nuclear Physics B_ 536.1, 1998, pp. 407–434 DOI: 10.1016/S0550-3213(98)00692-0 * [5] J. Ambjørn, J. Jurkiewicz and R. Loll “A Non-Perturbative Lorentzian Path Integral for Gravity” In _Physical Review Letters_ 85.5, 2000, pp. 924–927 DOI: 10.1103/PhysRevLett.85.924 * [6] J. Ambjørn, J. Jurkiewicz and R. Loll “Dynamically Triangulating Lorentzian Quantum Gravity” In _Nuclear Physics B_ 610.1, 2001, pp. 347–382 DOI: 10.1016/S0550-3213(01)00297-8 * [7] A. Dasgupta and R. Loll “A Proper-Time Cure for the Conformal Sickness in Quantum Gravity” In _Nuclear Physics B_ 606.1, 2001, pp. 357–379 DOI: 10.1016/S0550-3213(01)00227-9 * [8] J. Ambjørn, S. Jordan, J. Jurkiewicz and R. Loll “A Second-Order Phase Transition in CDT” In _Physical Review Letters_ 107.21, 2011, pp. 211303 DOI: 10.1103/PhysRevLett.107.211303 * [9] J. Ambjørn, S. Jordan, J. Jurkiewicz and R. Loll “Second- and First-Order Phase Transitions in CDT” In _Physical Review D_ 85.12, 2012, pp. 124044 DOI: 10.1103/PhysRevD.85.124044 * [10] D.. Coumbe, J. Gizbert-Studnicki and J. Jurkiewicz “Exploring the New Phase Transition of CDT” In _Journal of High Energy Physics_ 2016.2, 2016, pp. 144 DOI: 10.1007/JHEP02(2016)144 * [11] J. Ambjørn, J. Jurkiewicz and R. Loll “Emergence of a 4D World from Causal Quantum Gravity” In _Physical Review Letters_ 93.13 American Physical Society, 2004, pp. 131301 DOI: 10.1103/PhysRevLett.93.131301 * [12] Jan Ambjørn et al. “CDT Quantum Toroidal Spacetimes: An Overview” In _Universe_ 7.4 Multidisciplinary Digital Publishing Institute, 2021, pp. 79 DOI: 10.3390/universe7040079 * [13] J. Ambjørn, J. Jurkiewicz and R. Loll “The Spectral Dimension of the Universe Is Scale Dependent” In _Physical Review Letters_ 95.17 American Physical Society, 2005, pp. 171301 DOI: 10.1103/PhysRevLett.95.171301 * [14] J. Ambjørn, J. Jurkiewicz and R. Loll “Reconstructing the Universe” In _Physical Review D_ 72.6 American Physical Society, 2005, pp. 064014 DOI: 10.1103/PhysRevD.72.064014 * [15] J. Ambjørn, A. Görlich, J. Jurkiewicz and R. Loll “Planckian Birth of the Quantum de Sitter Universe” In _Physical Review Letters_ 100.9, 2008, pp. 091304 DOI: 10.1103/PhysRevLett.100.091304 * [16] J. Ambjørn, A. Görlich, J. Jurkiewicz and R. Loll “The Nonperturbative Quantum de Sitter Universe” In _Physical Review D_ 78.6, 2008, pp. 063544 DOI: 10.1103/PhysRevD.78.063544 * [17] N. Klitgaard and R. Loll “How Round Is the Quantum de Sitter Universe?” In _The European Physical Journal C_ 80.10, 2020, pp. 990 DOI: 10.1140/epjc/s10052-020-08569-5 * [18] F. David “Planar Diagrams, Two-Dimensional Lattice Gravity and Surface Models” In _Nuclear Physics B_ 257, 1985, pp. 45–58 DOI: 10.1016/0550-3213(85)90335-9 * [19] V.. Kazakov, I.. 
Kostov and A.. Migdal “Critical Properties of Randomly Triangulated Planar Random Surfaces” In _Physics Letters B_ 157.4, 1985, pp. 295–300 DOI: 10.1016/0370-2693(85)90669-0 * [20] Jan Ambjørn, Bergfinnur Durhuus and Thordur Jonsson “Quantum Geometry: A Statistical Field Theory Approach” Cambridge Univ. Press, 1997 DOI: 10.1017/CBO9780511524417 * [21] K. Binder “Applications of Monte Carlo Methods to Statistical Physics” In _Reports on Progress in Physics_ 60.5 IOP Publishing, 1997, pp. 487–559 DOI: 10.1088/0034-4885/60/5/001 * [22] M… Newman and G.. Barkema “Monte Carlo Methods in Statistical Physics” Clarendon Press, 1999 * [23] P Di Francesco, Paul Ginsparg and Jean Zinn-Justin “2D Gravity and Random Matrices”, 1995, pp. 1–133 arXiv:hep-th/9306153 * [24] J. Ambjørn, R. Loll, J.. Nielsen and J. Rolf “Euclidean and Lorentzian Quantum Gravity—Lessons from Two Dimensions” In _Chaos, Solitons & Fractals_ 10.2, 1999, pp. 177–195 DOI: 10.1016/S0960-0779(98)00197-0 * [25] S. Carlip “Dimension and Dimensional Reduction in Quantum Gravity” In _Classical and Quantum Gravity_ 34.19, 2017, pp. 193001 DOI: 10.1088/1361-6382/aa8535 * [26] J. Ambjørn, J. Jurkiewicz and R. Loll “Nonperturbative 3D Lorentzian Quantum Gravity” In _Physical Review D_ 64.4 American Physical Society, 2001, pp. 044011 DOI: 10.1103/PhysRevD.64.044011 * [27] Nigel Goldenfeld “Lectures on Phase Transitions and the Renormalization Group” Boca Raton: CRC Press, 2019 DOI: 10.1201/9780429493492 * [28] J. Ambjørn and A. Ipsen “Universality of 2D Causal Dynamical Triangulations” In _Physics Letters B_ 724.1, 2013, pp. 150–154 DOI: 10.1016/j.physletb.2013.06.005 * [29] L. Andersson, G.. Galloway and R. Howard “The Cosmological Time Function” In _Classical and Quantum Gravity_ 15.2, 1998, pp. 309–322 DOI: 10.1088/0264-9381/15/2/006 * [30] N. Klitgaard and R. Loll “Introducing Quantum Ricci Curvature” In _Physical Review D_ 97.4 American Physical Society, 2018, pp. 046008 DOI: 10.1103/PhysRevD.97.046008 * [31] Steven Carlip “Quantum Gravity in 2+1 Dimensions”, Cambridge Monographs on Mathematical Physics Cambridge: Cambridge University Press, 1998 DOI: 10/d43dgt * [32] S. Carlip “Quantum Gravity in 2+1 Dimensions: The Case of a Closed Universe” In _Living Reviews in Relativity_ 8.1, 2005, pp. 1 DOI: 10.12942/lrr-2005-1 * [33] J. Ambjørn, J. Jurkiewicz and R. Loll “Computer Simulations of 3-D Lorentzian Quantum Gravity” In _Nuclear Physics B - Proceedings Supplements_ 94.1-3 North-Holland, 2001, pp. 689–692 DOI: 10.1016/S0920-5632(01)00878-7 * [34] J. Ambjørn, J. Jurkiewicz and R. Loll “3D Lorentzian, Dynamically Triangulated Quantum Gravity” In _Nuclear Physics B - Proceedings Supplements_ 106–107, 2002, pp. 980–982 DOI: 10.1016/S0920-5632(01)01904-1 * [35] Bergfinnur Durhuus and Thordur Jonsson “Exponential Bounds on the Number of Causal Triangulations” In _Communications in Mathematical Physics_ 340.1, 2015, pp. 105–124 DOI: 10.1007/s00220-015-2453-2 * [36] Joshua H. Cooperman and Jonah Miller “A First Look at Transition Amplitudes in (2+1)-Dimensional Causal Dynamical Triangulations” In _Classical and Quantum Gravity_ 31.3, 2014, pp. 035012 DOI: 10.1088/0264-9381/31/3/035012 * [37] Joshua H. Cooperman, Kyle Lee and Jonah M. Miller “A Second Look at Transition Amplitudes in (2+1)-Dimensional Causal Dynamical Triangulations” In _Classical and Quantum Gravity_ 34.11, 2017, pp. 115008 DOI: 10.1088/1361-6382/aa6d38 * [38] J. Ambjørn, J. Jurkiewicz, R. Loll and G. 
Vernizzi “Lorentzian 3D Gravity with Wormholes via Matrix Models” In _Journal of High Energy Physics_ 2001.09, 2001, pp. 022–022 DOI: 10.1088/1126-6708/2001/09/022 * [39] Bergfinnur Durhuus and Thordur Jonsson “The Structure of Spatial Slices of 3-Dimensional Causal Triangulations” In _Annales de l’Institut Henri Poincaré D_ 7.3, 2020, pp. 365–393 DOI: 10.4171/aihpd/91 * [40] J. Ambjørn, J. Jurkiewicz and R. Loll “Renormalization of 3D Quantum Gravity from Matrix Models” In _Physics Letters B_ 581.3-4, 2004, pp. 255–262 DOI: 10.1016/j.physletb.2003.11.068 * [41] J. Ambjørn, J. Jurkiewicz, R. Loll and G. Vernizzi “3D Lorentzian Quantum Gravity from the Asymmetric ABAB Matrix Model” In _Acta Physica Polonica B_ 34, 2003, pp. 4667–4688 arXiv:hep-th/0311072 * [42] D. Benedetti, R. Loll and F. Zamponi “(2+1)-Dimensional Quantum Gravity as the Continuum Limit of Causal Dynamical Triangulations” In _Physical Review D_ 76.10 American Physical Society, 2007, pp. 104022 DOI: 10.1103/PhysRevD.76.104022 * [43] T.. Budd and R. Loll “Exploring Torus Universes in Causal Dynamical Triangulations” In _Physical Review D_ 88.2 American Physical Society, 2013, pp. 024015 DOI: 10.1103/PhysRevD.88.024015 * [44] T.. Budd “Non-Perturbative Quantum Gravity: A Conformal Perspective” In _Ph.D. Thesis_ , 2012 * [45] T.. Budd “The Effective Kinetic Term in CDT” In _Journal of Physics: Conference Series_ 360, 2012, pp. 012038 DOI: 10.1088/1742-6596/360/1/012038 * [46] Dario Benedetti and James P. Ryan “Capturing the Phase Diagram of (2+1)-Dimensional CDT Using a Balls-in-Boxes Model” In _Classical and Quantum Gravity_ 34.10, 2017, pp. 105012 DOI: 10.1088/1361-6382/aa6b5d * [47] Dario Benedetti and Joe Henson “Spacetime Condensation in (2+1)-Dimensional CDT from a Hořava-Lifshitz Minisuperspace Model” In _Classical and Quantum Gravity_ 32.21, 2015, pp. 215007 DOI: 10.1088/0264-9381/32/21/215007 * [48] Dario Benedetti and Joe Henson “Spectral Geometry as a Probe of Quantum Spacetime” In _Physical Review D_ 80.12, 2009, pp. 124036 DOI: 10.1103/PhysRevD.80.124036 * [49] S. Jordan and R. Loll “Causal Dynamical Triangulations without Preferred Foliation” In _Physics Letters B_ 724.1-3, 2013, pp. 155–159 DOI: 10.1016/j.physletb.2013.06.007 * [50] S. Jordan and R. Loll “De Sitter Universe from Causal Dynamical Triangulations without Preferred Foliation” In _Physical Review D_ 88.4, 2013, pp. 044055 DOI: 10.1103/PhysRevD.88.044055 * [51] Joren Brunekreef, Daniel Németh and Andrzej Görlich “JorenB/3d-Cdt: First Release”, Zenodo, 2022 DOI: 10.5281/zenodo.6628721 * [52] Nicholas Metropolis et al. “Equation of State Calculations by Fast Computing Machines” In _The Journal of Chemical Physics_ 21.6 American Institute of Physics, 1953, pp. 1087–1092 DOI: 10.1063/1.1699114 * [53] Joren Brunekreef and Daniel Németh “Lorentzian 3D Quantum Gravity at Higher Spatial Genus (to Appear)”, 2022 * [54] Joshua H. Cooperman “Scale-Dependent Homogeneity Measures for Causal Dynamical Triangulations” In _Physical Review D_ 90.12 American Physical Society, 2014, pp. 124053 DOI: 10.1103/PhysRevD.90.124053 * [55] J. Ambjørn et al. “The Transfer Matrix Method in Four-Dimensional Causal Dynamical Triangulations” In _AIP Conference Proceedings_ 1514.1, 2013, pp. 67–72 American Institute of Physics DOI: 10.1063/1.4791727 * [56] Grégory Miermont “The Brownian Map Is the Scaling Limit of Uniform Random Plane Quadrangulations” In _Acta Mathematica_ 210.2, 2013, pp. 319–401 arXiv:1104.1606 * [57] D.. Boulatov, V.. Kazakov, I.. Kostov and A.. 
Migdal “Analytical and Numerical Study of a Model of Dynamically Triangulated Random Surfaces” In _Nuclear Physics B_ 275.4 North-Holland, 1986, pp. 641–686 DOI: 10.1016/0550-3213(86)90578-X * [58] Jan Ambjørn, K.. Anagnostopoulos and R. Loll “A New Perspective on Matter Coupling in 2D Quantum Gravity” In _Physical Review D_ 60.10, 1999, pp. 104035 DOI: 10.1103/PhysRevD.60.104035 * [59] V.g. Knizhnik, A.m. Polyakov and A.b. Zamolodchikov “Fractal Structure of 2D Quantum Gravity” In _Modern Physics Letters A_ 03.08 World Scientific Publishing Co., 1988, pp. 819–826 DOI: 10.1142/S0217732388000982 * [60] Jan Ambjørn, K. Anagnostopoulos and R. Loll “Crossing the c=1 Barrier in 2D Lorentzian Quantum Gravity” In _Physical Review D_ 61.4 American Physical Society, 2000, pp. 044010 DOI: 10.1103/PhysRevD.61.044010 * [61] S. Jain and Samir D. Mathur “World-Sheet Geometry and Baby Universes in 2D Quantum Gravity” In _Physics Letters B_ 286.3-4 North-Holland, 1992, pp. 239–246 DOI: 10.1016/0370-2693(92)91769-6 * [62] J. Ambjørn, S. Jain and Gudmar Thorleifsson “Baby Universes in 2D Quantum Gravity” In _Physics Letters B_ 307.1-2 North-Holland, 1993, pp. 34–39 DOI: 10.1016/0370-2693(93)90188-N * [63] J. Ambjørn and Y. Watabiki “Scaling in Quantum Gravity” In _Nuclear Physics B_ 445.1, 1995, pp. 129–142 DOI: 10.1016/0550-3213(95)00154-K * [64] J. Ambjørn, J. Jurkiewicz and Y. Watabiki “On the Fractal Structure of Two-Dimensional Quantum Gravity” In _Nuclear Physics B_ 454.1, 1995, pp. 313–342 DOI: 10.1016/0550-3213(95)00468-8 * [65] Jerome Barkley and T.. Budd “Precision Measurements of Hausdorff Dimensions in Two-Dimensional Quantum Gravity” In _Classical and Quantum Gravity_ 36.24 IOP Publishing, 2019, pp. 244001 DOI: 10.1088/1361-6382/ab4f21 * [66] Bergfinnur Durhuus, Thordur Jonsson and John F. Wheater “On the Spectral Dimension of Causal Triangulations” In _Journal of Statistical Physics_ 139.5, 2010, pp. 859–881 DOI: 10.1007/s10955-010-9968-x * [67] S. Catterall, G. Thorleifsson, M. Bowick and V. John “Scaling and the Fractal Geometry of Two-Dimensional Quantum Gravity” In _Physics Letters B_ 354.1-2 North-Holland, 1995, pp. 58–68 DOI: 10.1016/0370-2693(95)00623-S * [68] Jan Ambjorn, Gudmar Thorleifsson and Mark Wexler “New Critical Phenomena in 2D Quantum Gravity” In _Nuclear Physics B_ 439.1-2, 1995, pp. 187–204 DOI: 10.1016/0550-3213(95)00014-J * [69] Antje Schneider and Thomas Filk “On the Universality of Matrix Models for Random Surfaces” In _The European Physical Journal C_ 8.3, 1999, pp. 523–526 DOI: 10.1007/s100529901092 * [70] Jan Ambjørn et al. “The Quantum Space-Time of C=-2 Gravity” In _Nuclear Physics B_ 511.3, 1998, pp. 673–710 DOI: 10.1016/S0550-3213(97)00659-7 * [71] Jan Ambjørn, Bergfinnur Durhuus and Thordur Jonsson “Summing over All Genera for d $>$ 1: A Toy Model” In _Physics Letters B_ 244.3, 1990, pp. 403–412 DOI: 10.1016/0370-2693(90)90337-6 * [72] R. Loll and B. Ruijl “Locally Causal Dynamical Triangulations in Two Dimensions” In _Physical Review D_ 92.8 American Physical Society, 2015, pp. 084002 DOI: 10.1103/PhysRevD.92.084002 * [73] J. Ambjørn, A.. Görlich, J. Jurkiewicz and H.-G. Zhang “Pseudo-Topological Transitions in 2D Gravity Models Coupled to Massless Scalar Fields” In _Nuclear Physics B_ 863.2, 2012, pp. 421–434 DOI: 10.1016/j.nuclphysb.2012.05.024 * [74] J. Ambjørn, B. Durhuus and J.. Wheater “A Restricted Dimer Model on a 2-Dimensional Random Causal Triangulation” In _Journal of Physics A: Mathematical and Theoretical_ 47.36, 2014, pp. 
365001 DOI: 10.1088/1751-8113/47/36/365001 * [75] N. Klitgaard and R. Loll “Implementing Quantum Ricci Curvature” In _Physical Review D_ 97.10 American Physical Society, 2018, pp. 106017 DOI: 10.1103/PhysRevD.97.106017 * [76] J. Brunekreef and R. Loll “Curvature Profiles for Quantum Gravity” In _Physical Review D_ 103.2 American Physical Society, 2021, pp. 026019 DOI: 10.1103/PhysRevD.103.026019 * [77] J. Brunekreef and R. Loll “Quantum Flatness in Two-Dimensional Quantum Gravity” In _Physical Review D_ 104.12 American Physical Society, 2021, pp. 126024 DOI: 10.1103/PhysRevD.104.126024 * [78] Timothy Budd and Luca Lionni “A Family of Triangulated 3-Spheres Constructed from Trees” arXiv, 2022 DOI: 10.48550/arXiv.2203.16105 * [79] Y. Avni “Energy Spectra of X-ray Clusters of Galaxies.” In _The Astrophysical Journal_ 210, 1976, pp. 642–646 DOI: 10.1086/154870 * [80] Rene Andrae, Tim Schulze-Hartung and Peter Melchior “Dos and Don’ts of Reduced Chi-Squared”, 2010 arXiv:1012.3754
# Detection of 49 Weak Dispersed Radio Pulses in a Parkes Observation of the X-ray Pulsar PSR J0537$-$6910 Fronefield Crawford Department of Physics and Astronomy, Franklin and Marshall College, P.O. Box 3003, Lancaster, PA 17604, USA ###### Abstract I conducted a new search for dispersed radio pulses from the X-ray pulsar PSR J0537$-$6910 in the Large Magellanic Cloud in a long (11.6 hr) archival 1.4 GHz Parkes search observation. I searched dispersion measures (DMs) between 0 and 10000 pc cm-3 and detected 49 pulses with a signal-to-noise ratio (S/N) greater than 7 at a wide range of DMs using the HEIMDALL and FETCH pulse detection and classification packages. All of the pulses were weak, with none having a S/N above 8.5. There was a significant excess of pulses observed in the DM range of the known pulsar population in the LMC, suggesting that these pulses may originate from LMC pulsars. Three repeat pulses, each having widths $\lesssim 1$ ms, were detected in a single DM trial of 103.412 pc cm-3, which is in the LMC DM range. This is unlikely to occur by chance in a single DM trial in this search at the (marginally significant) 4.3$\sigma$ level. It remains unclear whether any of the detected pulses in the sample are from PSR J0537$-$6910 itself. Keywords: Pulsars (1306); Radio Transient Sources (2008). Facilities: Parkes. Software: HEIMDALL (Barsdell, 2012; Barsdell et al., 2012), FETCH (Agarwal et al., 2020). ## 1 Introduction and Background PSR J0537$-$6910 is a young rotation-powered X-ray pulsar associated with the supernova remnant (SNR) N157B in the Large Magellanic Cloud (LMC). It was first discovered as a 16-ms pulsed X-ray source by Marshall et al. (1998), and it is the most rapidly rotating unrecycled pulsar currently known. Despite previous searches for both periodic radio emission and single radio pulses with the Parkes 64-m radio telescope (“Murriyang”) (Crawford et al., 1998, 2005), PSR J0537$-$6910 has not yet been detected as a radio emitter. PSR J0537$-$6910 is a particularly good candidate to search for giant radio pulses. Cognard et al. (1996) suggested a relationship between the giant pulse emission mechanism and the magnetic field strength at the light cylinder radius, defined as the equatorial radius at which co-rotation with the pulsar would equal the speed of light. This magnetic field strength can be computed from the spin parameters of the pulsar according to $B_{lc}=3\times 10^{8}\dot{P}^{1/2}P^{-5/2}$ G, where $P$ and $\dot{P}$ are the pulsar period in seconds and its time derivative, respectively. Along with two other young pulsars that are known emitters of giant radio pulses, the Crab pulsar (Staelin & Reifenstein, 1968; Lundgren et al., 1995) and PSR B0540$-$69, which also resides in the LMC (Johnston & Romani, 2003), PSR J0537$-$6910 has a large light-cylinder magnetic field strength. It is noteworthy that five millisecond pulsars (MSPs) with much different properties than these young pulsars but with large light-cylinder magnetic field values have also been observed to emit giant radio pulses (Cognard et al., 1996; Romani & Johnston, 2001; Johnston & Romani, 2003; Joshi et al., 2004; Knight et al., 2005, 2006). PSR J0537$-$6910 has a light-cylinder field strength that is more than twice as large as either of the next two highest light-cylinder-field pulsars (the Crab pulsar and the MSP PSR B1937+21), making it a good target to test this hypothesis. 
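As a rough numerical check of the $B_{lc}$ formula above, the ordering at the top of Table 1 can be reproduced with approximate spin parameters. The values of $P$ and $\dot{P}$ below are rounded catalog values quoted for illustration only, not taken from this paper.

```python
# B_lc = 3e8 * Pdot^(1/2) * P^(-5/2) gauss, with P in seconds
def b_lc(P, Pdot):
    return 3e8 * Pdot**0.5 * P**-2.5

pulsars = {                      # (P [s], Pdot), approximate catalog values
    "J0537-6910":      (0.01612,   5.2e-14),
    "B0531+21 (Crab)": (0.0334,    4.2e-13),
    "B1937+21":        (0.0015578, 1.05e-19),
}
for name, (P, Pdot) in pulsars.items():
    print(f"{name}: B_lc = {b_lc(P, Pdot):.2e} G")
# J0537-6910 comes out near 2.1e6 G, about twice the Crab and B1937+21
# values, consistent with the ranking in Table 1
```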
Table 1 presents a current listing of the pulsars with the largest light-cylinder field strengths, with the observed giant radio pulse emitters identified. This is also illustrated in Fig. 1, where pulsars with $B_{lc}>10^{5}$ G are plotted as a function of the spin period (see also Table 1 of both Crawford et al. (2005) and McLaughlin & Cordes (2003), and Fig. 4 of Cognard et al. (1996); however, these references do not have the more recently discovered pulsars shown in Table 1 and Fig. 1). ## 2 Radio Search History Prior searches for both periodic and single-pulse radio emissions from PSR J0537$-$6910 were conducted (unsuccessfully) using Parkes at several frequencies. McLaughlin & Cordes (2003) searched for the pulsar in a single 0.5 hr pointing at 435 MHz, while Crawford et al. (1998) searched two 4 hr pointings at 660 MHz and a single 6 hr pointing at 1374 MHz. Subsequently, Crawford et al. (2005) observed the pulsar with a longer 11.6 hr integration at a center frequency of 1390 MHz. This observation had a 256 MHz bandwidth split into 512 channels and was sampled at 80 $\mu$s. In all of these searches, no dispersion measures (DMs) above 300 pc cm-3 were searched. This encompassed the DMs of the known LMC pulsar population (which currently range from 45 to 273 pc cm-3; see, e.g., Hisano et al. 2022). No convincing astrophysical signals were seen in any of these prior searches. This last (and longest) observation is the one I have searched again with newer software packages over a wider range of DMs. ## 3 Data Analysis I reprocessed this single long Parkes observation with the HEIMDALL pulse detection package (Barsdell, 2012; https://sourceforge.net/projects/heimdall-astro) at trial DMs ranging from 0 to 10000 pc cm-3 in order to search for single-pulse events, including fast radio bursts (FRBs; Lorimer et al. 2007). A total of 1011 DM trials were produced and searched by HEIMDALL. Boxcar-matched filtering windows of $2^{n}$ samples, with $n$ ranging from 0 to 9, were applied to each dedispersed time series to maintain maximum sensitivity to pulses with widths up to $\sim 41$ ms. This is significantly larger than the widths expected for any pulses from PSR J0537$-$6910 given its 16 ms spin period. All of the pulses detected by HEIMDALL were then analyzed by FETCH (https://github.com/devanshkv/fetch). FETCH is a pulse classifier that assigns a probability of being real to each detected pulse based on its morphology and characteristics (Agarwal et al., 2020). FETCH rated each detected pulse using its Model A (see Table 4 of Agarwal et al. 2020) and assigned a probability of being a real, astrophysical pulse of between 0 and 1. I also searched the data for periodicities at DMs ranging from 0 to 5000 pc cm-3 using PRESTO tools (Ransom, 2001) in case other pulsars were present in the same beam. No promising periodicity signals were detected. ## 4 Results and Discussion A total of 49 single pulses were detected with a signal-to-noise ratio (S/N) above 7 which also had a probability assigned by FETCH that was greater than 0.5. All of these pulses were subsequently checked visually to ensure they were not obvious radio frequency interference (RFI) signals that had been misidentified by FETCH. All of the detected pulses were weak, with none having a S/N above 8.5. This corresponds to a fluence threshold of 0.6 Jy ms (for a putative pulse width of $W=1$ ms; this sensitivity limit scales as $\sqrt{W}$). Table 2 lists these 49 pulse detections with their characteristics. 
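The boxcar-matched filtering step of Sec. 3 can be illustrated schematically. The sketch below is not HEIMDALL's actual implementation; it assumes `ts` is a baseline-subtracted, unit-variance dedispersed time series for a single DM trial.

```python
import numpy as np

def boxcar_snr(ts, max_log2_width=9):
    """Best matched-filter S/N per sample over boxcars of 2^n samples.

    With 80 us sampling, n = 0..9 covers widths up to 512 * 80 us ~ 41 ms.
    """
    best = np.full_like(ts, -np.inf)
    for n in range(max_log2_width + 1):
        w = 2 ** n
        kernel = np.ones(w) / np.sqrt(w)  # unit-norm boxcar preserves S/N units
        snr = np.convolve(ts, kernel, mode="same")
        best = np.maximum(best, snr)
    return best

# a S/N > 7 event list would then be, e.g., np.flatnonzero(boxcar_snr(ts) > 7.0)
```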
### 4.1 Possible False Detections As a check to see if these pulses might have been artificially generated by the software during the detection and classification process, I repeated the search over the same range of negative DMs (0 to $-10000$ pc cm-3) with the same S/N threshold of 7 in order to see whether a significant number of spurious candidates would be produced. This exercise produced only one artificial negative-DM candidate that was classified as real (with S/N = 7.0). This is in contrast to the results of Perera et al. (2022) and Hisano et al. (2022) who each conducted a similar test on two different large-scale pulsar surveys and found more artificial, low-S/N candidates at negative DMs than corresponding positive-DM candidates. This indicated that their sample of detected low-S/N candidates might be largely artificial. Nimmo et al. (2023) reported several cases in which weak pulses detected from the repeating FRB 20200120E were not classified as real by FETCH. These pulses were ultimately determined to be real owing to their DM proximity to the FRB. In this case, real, weak pulses had been classified as RFI and missed (false negatives), but not the reverse (no false positives were reported). Thus, this same misclassification issue would not have produced spurious, false-positive detections in this sample of 49 pulses (although it could possibly have resulted in missing some real pulses). For comparison, in 36.6 hr of targeted observations of several SNRs that used a similar 1.4 GHz Parkes observing setup and the same analysis procedure used here, no pulses (spurious or otherwise) were detected and classified as real, apart from four pulses from a known bright pulsar in the vicinity (Crawford, 2023). Another indication that the pulses could be artificial would be if their measured pulse widths were smaller than the corresponding dispersive smearing time within the finite frequency channels at that DM. Such narrow intrinsic pulses would be expected to be broadened by dispersion if the pulses were real. This dispersion smearing scales linearly with DM and is determined by $\tau=(202/f_{c})^{3}\,\Delta f\,{\rm DM}$, where $\tau$ is the smearing in ms, $f_{c}$ and $\Delta f$ are the center observing frequency and the channel width in MHz (1390 and 0.5, respectively), and DM is in pc cm-3. Fig. 2 shows the measured pulse widths for the sample of 49 pulses from Table 2 plotted as a function of DM, with the expected width from dispersive broadening also plotted. As seen in the figure, none of the detected pulses lie below the minimum (dispersive) width, lending further support to the notion that the pulses are not obviously artificial or otherwise generated by the software. Note that although scattering contributions to the broadening are not considered here, the Galactic contribution to scattering along this line of sight is negligible (Cordes & Lazio, 2002). 
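A one-line helper reproduces the smearing values used in Fig. 2 (and quoted later in Sec. 4.4); this is an illustrative sketch, not the paper's analysis code.

```python
def smear_ms(dm, f_c=1390.0, delta_f=0.5):
    """Intra-channel dispersive smearing in ms; f_c and delta_f in MHz."""
    return (202.0 / f_c) ** 3 * delta_f * dm

print(smear_ms(103.412))   # ~0.16 ms, matching the ~0.15 ms quoted in Sec. 4.4
print(smear_ms(3463.76))   # ~5.3 ms at the highest detected DM (pulse 49)
```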
### 4.2 Pulse Detection Rate The average detection rate in this single observation was one pulse detected and classified as real above S/N of 7 for every 14 minutes of observing time. This rate is high compared to that of a similar search of a large-scale survey of the LMC with Parkes using HEIMDALL and FETCH (Hisano et al., 2022). That survey used a similar (though not identical) observing system that had a comparable raw sensitivity. They searched 702 beams totaling 1677 hr for single pulses out to a DM of 10000 pc cm-3, and they used a similar maximum boxcar width (33 ms). A total of 229 pulses were found in that survey with a S/N above 7, a DM above 50 pc cm-3, and a FETCH probability of being real above 0.9 (this included nine pulses detected from the giant pulse emitter PSR B0540$-$69). When using these same cutoffs and filtering criteria for the set of detections reported here, 33 of the 49 original pulses are retained. However, these were detected over the much smaller 11.6 hr of integration time. The pulse detection rate per unit of observing time is therefore $\sim 20$ times larger than in the large-scale LMC survey analysis of Hisano et al. (2022). Some of this difference may be attributable to the fact that the lone observation analyzed here targeted the 30 Doradus star formation region, where more pulsars may be present than on average in the LMC. However, this large difference remains difficult to reconcile completely if the pulses detected here are indeed real and coming from the LMC. This discrepancy becomes even more pronounced if some fraction of the weak pulses detected by Hisano et al. (2022) are not actually real (see the discussion above). As outlined in Crawford et al. (2005), if the Crab pulsar were located at the distance of the LMC, it would emit a giant pulse every 20 minutes that would be detectable with this observing system. This would lead to several dozen such detections in this single observation. Crawford et al. (2005) also indicate that a giant pulse from PSR B0540$-$69 in the LMC should be detectable every 0.5 hr with such an observing setup. This is broadly consistent with the detection of nine pulses from PSR B0540$-$69 (four of which were above S/N of 9) in a single 2.4 hr Parkes survey beam of the LMC which covered the location of that pulsar (Hisano et al. 2022; see their Table 1). The sample reported here clearly does not include any pulses from a single source (such as PSR J0537$-$6910) with this brightness or frequency of occurrence. ### 4.3 DM Distribution of Detected Pulses The largest DM of any currently known radio pulsar in the LMC is 273 pc cm-3 (Ridley et al., 2013), but the remainder of the known LMC population has DMs that lie between 45 and 147 pc cm-3 (Manchester et al., 2006; Ridley et al., 2013). Fig. 3 shows our 49 detected pulses with S/N plotted against DM. The majority of the detected pulses (34 out of 49) fall within the observed DM range of the currently known LMC pulsar population. However, only 26% of the DM trials in the search were in this range, and so we would expect only 13 events to occur here by chance. Given this, the likelihood of detecting 34 or more pulses by chance in the DM trials in this range is less than $10^{-6}$, suggesting that the observed excess is real. Therefore, it is possible that many of these pulses are from as-yet-unidentified pulsars in the LMC. The wide distribution of DMs of the detected pulses indicates that they cannot all be coming from the same object. Some may be coming from other, as-yet-unidentified LMC pulsars (possible, given the excess of pulses seen in the LMC DM range). The few pulses with very large DMs (4 of the 49 had DM $>800$ pc cm-3; see Table 2 and Fig. 3) could be FRBs originating from well beyond the LMC. 
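The quoted excess probability follows from simple binomial counting, as the sketch below illustrates (assuming, as the text does, that chance detections would be distributed uniformly across the DM trials):

```python
from scipy.stats import binom

n_pulses = 49
frac_lmc = 0.26                          # fraction of DM trials in the LMC range
print(n_pulses * frac_lmc)               # ~13 pulses expected by chance
print(binom.sf(33, n_pulses, frac_lmc))  # P(34 or more) is below 1e-6
```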
### 4.4 Repeat Pulses Three pulses were detected in a single DM trial (DM = 103.412 pc cm-3). These three pulses are shown in Fig. 4 and are identified in Table 2. The likelihood of three or more pulses occurring by chance in a single DM trial can be estimated using the total number of pulses detected (49) and the total number of DM trials in the search (1011). Following the analysis outlined in Section 4.2 of Paine et al. (2024) for a similar likelihood estimate for a single-pulse search of M82, I determined that this is unlikely to occur by chance at the (marginally significant) 4.3$\sigma$ level (a short sketch of this estimate appears after Sec. 5). No other DM trial had any repeat pulses (see Table 2). This DM value is well within the observed DM range for LMC pulsars, and all three pulses had comparable widths of between 0.3 and 1.0 ms, as determined from the HEIMDALL detections (see Table 2). Note that the dispersive smearing within frequency channels at this DM is 0.15 ms. This is significantly smaller than the measured pulse widths (see also Fig. 2), indicating that these measured widths are largely intrinsic to the pulsar. This also suggests that these are not micropulses or nanoshots like those seen in giant pulses from the Crab pulsar (Hankins & Eilek, 2007; Hankins et al., 2016). It is possible that these three pulses could all be coming from the same pulsar in the LMC. The pulse widths are much smaller than the 16 ms period of PSR J0537$-$6910, so they could be coming from PSR J0537$-$6910 specifically. However, the time separations between the three pulses are not close to an integer number of pulse periods from PSR J0537$-$6910. To determine this, the topocentric period and its uncertainty were determined for PSR J0537$-$6910 from the ATNF catalog parameters (Manchester et al., 2005; https://www.atnf.csiro.au/research/pulsar/psrcat/) using PRESTO tools. The topocentric period uncertainty was combined with the measured half-widths of the pulses to obtain an uncertainty in the time of each pulse separation. In all three cases, the uncertainty (which was 2%-3% of the pulse period of PSR J0537$-$6910) was much less than the remainder when the pulse time difference was divided by the pulsar period (these remainders were 41%, 18%, and 77%). This disfavors the conclusion that the pulses are from PSR J0537$-$6910. However, given the large and prolific timing glitches seen for the pulsar (e.g., Ho et al. 2020), this may not be definitive. I also dedispersed the raw data at this DM and folded the resulting time series using the ephemeris of PSR J0537$-$6910. No signal was detected in this fold. ## 5 Conclusions In a new analysis of an archival Parkes search observation of the LMC X-ray pulsar PSR J0537$-$6910, I detected 49 dispersed single radio pulses with a S/N greater than 7. All 49 pulses had a FETCH likelihood of being real that was greater than 0.5. None of the 49 detected pulses had a S/N above 8.5, corresponding to a fluence threshold of 0.6 Jy ms (for a putative 1 ms pulse width). A significant excess of the detected pulses (34 out of 49) occurred within the DM range of the known LMC pulsar population, suggesting that some of these pulses may be from as-yet-unidentified LMC pulsars or possibly from PSR J0537$-$6910 itself. Three pulses having widths $\lesssim 1$ ms were detected in a single DM trial (DM = 103.412 pc cm-3). This is unlikely to occur by chance at the 4.3$\sigma$ level. This DM value is within the observed range for LMC pulsars, suggesting that the pulses may originate from a pulsar in the LMC. Future observations with more sensitive, next-generation facilities may be useful for determining whether any of the pulses detected in the sample are from PSR J0537$-$6910. 
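As promised in Sec. 4.4, the quoted 4.3$\sigma$ figure can be reproduced approximately by the sketch below. It assumes that chance pulses land uniformly at random among the 1011 DM trials and that the per-trial binomial probability is converted to a two-sided Gaussian-equivalent significance; whether this matches the Paine et al. (2024) recipe in every detail is not guaranteed.

```python
from scipy.stats import binom, norm

n_pulses, n_trials = 49, 1011
# chance that one particular DM trial collects three or more of the 49 pulses
p_trial = binom.sf(2, n_pulses, 1.0 / n_trials)
z = norm.isf(p_trial / 2.0)   # two-sided Gaussian-equivalent significance
print(p_trial, z)             # ~1.7e-5 and ~4.3 sigma
```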
The Parkes radio telescope is part of the Australia Telescope National Facility (grid.421683.a), which is funded by the Australian Government for operation as a National Facility managed by CSIRO. This work was supported in part by National Science Foundation (NSF) Physics Frontiers Center award Nos. 1430284 and 2020265, and used the Franklin and Marshall College compute cluster, which was funded through NSF grant 1925192. ## References * Agarwal et al. (2020) Agarwal, D., Aggarwal, K., Burke-Spolaor, S., Lorimer, D. R., & Garver-Daniels, N. 2020, MNRAS, 497, 1661, doi: 10.1093/mnras/staa1856 * Barsdell (2012) Barsdell, B. R. 2012, PhD thesis, Swinburne University of Technology * Barsdell et al. (2012) Barsdell, B. R., Bailes, M., Barnes, D. G., & Fluke, C. J. 2012, MNRAS, 422, 379, doi: 10.1111/j.1365-2966.2012.20622.x * Cognard et al. (1996) Cognard, I., Shrauner, J. A., Taylor, J. H., & Thorsett, S. E. 1996, ApJ, 457, L81, doi: 10.1086/309894 * Cordes & Lazio (2002) Cordes, J. M., & Lazio, T. J. W. 2002, arXiv e-prints, astro, doi: 10.48550/arXiv.astro-ph/0207156 * Crawford (2023) Crawford, F. 2023, Research Notes of the American Astronomical Society, 7, 238, doi: 10.3847/2515-5172/ad09e0 * Crawford et al. (1998) Crawford, F., Kaspi, V. M., Manchester, R. N., et al. 1998, Mem. Soc. Astron. Italiana, 69, 951. https://arxiv.org/abs/astro-ph/9808358 * Crawford et al. (2005) Crawford, F., McLaughlin, M., Johnston, S., Romani, R., & Sorrelgreen, E. 2005, Advances in Space Research, 35, 1181, doi: 10.1016/j.asr.2005.03.074 * Hankins & Eilek (2007) Hankins, T. H., & Eilek, J. A. 2007, ApJ, 670, 693, doi: 10.1086/522362 * Hankins et al. (2016) Hankins, T. H., Eilek, J. A., & Jones, G. 2016, ApJ, 833, 47, doi: 10.3847/1538-4357/833/1/47 * Hisano et al. (2022) Hisano, S., Crawford, F., Bonidie, V., et al. 2022, ApJ, 928, 161, doi: 10.3847/1538-4357/ac5802 * Ho et al. (2020) Ho, W. C. G., Espinoza, C. M., Arzoumanian, Z., et al. 2020, MNRAS, 498, 4605, doi: 10.1093/mnras/staa2640 * Johnston & Romani (2003) Johnston, S., & Romani, R. W. 2003, ApJ, 590, L95, doi: 10.1086/376826 * Joshi et al. (2004) Joshi, B. C., Kramer, M., Lyne, A. G., McLaughlin, M. A., & Stairs, I. H. 2004, in Young Neutron Stars and Their Environments, ed. F. Camilo & B. M. Gaensler, Vol. 218, 319, doi: 10.48550/arXiv.astro-ph/0310285 * Knight et al. (2005) Knight, H. S., Bailes, M., Manchester, R. N., & Ord, S. M. 2005, ApJ, 625, 951, doi: 10.1086/429533 * Knight et al. (2006) Knight, H. S., Bailes, M., Manchester, R. N., Ord, S. M., & Jacoby, B. A. 2006, ApJ, 640, 941, doi: 10.1086/500292 * Lorimer et al. (2007) Lorimer, D. R., Bailes, M., McLaughlin, M. A., Narkevic, D. J., & Crawford, F. 2007, Science, 318, 777, doi: 10.1126/science.1147532 * Lundgren et al. (1995) Lundgren, S. C., Cordes, J. M., Ulmer, M., et al. 1995, ApJ, 453, 433, doi: 10.1086/176404 * Manchester et al. (2006) Manchester, R. N., Fan, G., Lyne, A. G., Kaspi, V. M., & Crawford, F. 2006, ApJ, 649, 235, doi: 10.1086/505461 * Manchester et al. (2005) Manchester, R. N., Hobbs, G. B., Teoh, A., & Hobbs, M. 2005, AJ, 129, 1993, doi: 10.1086/428488 * Marshall et al. (1998) Marshall, F. E., Gotthelf, E. V., Zhang, W., Middleditch, J., & Wang, Q. D. 1998, ApJ, 499, L179, doi: 10.1086/311381 * McLaughlin & Cordes (2003) McLaughlin, M. A., & Cordes, J. M. 2003, ApJ, 596, 982, doi: 10.1086/378232 * Nimmo et al. (2023) Nimmo, K., Hessels, J. W. T., Snelders, M. P., et al. 2023, MNRAS, 520, 2281, doi: 10.1093/mnras/stad269 * Paine et al. 
(2024) Paine, S., Hawkins, T., Lorimer, D. R., et al. 2024, MNRAS, 528, 6340, doi: 10.1093/mnras/stae344 * Perera et al. (2022) Perera, B. B. P., Smith, A. J., Vaddi, S., et al. 2022, MNRAS, 509, 1929, doi: 10.1093/mnras/stab3153 * Ransom (2001) Ransom, S. M. 2001, PhD thesis, Harvard University, Massachusetts * Ridley et al. (2013) Ridley, J. P., Crawford, F., Lorimer, D. R., et al. 2013, MNRAS, 433, 138, doi: 10.1093/mnras/stt709 * Romani & Johnston (2001) Romani, R. W., & Johnston, S. 2001, ApJ, 557, L93, doi: 10.1086/323415 * Staelin & Reifenstein (1968) Staelin, D. H., & Reifenstein, Edward C., I. 1968, Science, 162, 1481, doi: 10.1126/science.162.3861.1481

Table 1: Cataloged Pulsars with the Largest Light-Cylinder Magnetic Field Strengths

PSR | $B_{lc}$ ($10^{5}$ G) | Type | Giant Pulse References
---|---|---|---
J0537$-$6910 | 20.7 | young^a |
**B1937+21** | 10.2 | MSP | Cognard et al. (1996)
**B0531+21 (Crab)** | 9.6 | young | Staelin & Reifenstein (1968)
J1402+13 | 7.8 | MSP^a |
**B1821$-$24A** | 7.4 | MSP | Romani & Johnston (2001)
J0058$-$7218 | 7.3 | young^a |
J1748$-$2446ak | 5.8 | MSP |
J1701$-$3006F | 5.6 | MSP |
J1737$-$0314A | 5.3 | MSP |
J1555$-$2908 | 4.7 | MSP |
J1835$-$3259B | 4.4 | MSP |
**B1957+20** | 3.8 | MSP | Joshi et al. (2004)
**B0540$-$69** | 3.6 | young | Johnston & Romani (2003)
J1400$-$6325 | 3.5 | young |
**J0218+4232** | 3.2 | MSP | Knight et al. (2006)

Note. — ^a Not a radio pulsar. Data were taken from the ATNF Pulsar Catalog (version 1.70). The top fifteen pulsars ranked by $B_{lc}$ are listed. Bold entries indicate pulsars observed to emit giant radio pulses. Note that one other pulsar, PSR B1820$-$30A, is an MSP that emits observable giant radio pulses (Knight et al., 2005), but it is not listed here since it ranks 23rd on this list. See also Fig. 1. 
Table 2: List of Detected Single Pulses Ordered by DM

Pulse Number | Time of Pulse (s) | DM Trial (pc cm-3) | S/N | FETCH Likelihood | Pulse Width (ms)
---|---|---|---|---|---
1 | 6676.2417(3) | 22.161 | 7.5 | 99.997% | 0.6
2 | 2442.0319(6) | 35.742 | 7.5 | 99.985% | 1.3
3 | 8397.3183(2) | 38.948 | 7.4 | 99.841% | 0.3
4 | 5369.5697(2) | 44.762 | 7.8 | 99.515% | 0.3
5 | 20223.2706(2) | 50.421 | 7.3 | 71.246% | 0.4
6 | 40026.1995(1) | 56.835 | 7.3 | 99.988% | 0.2
7 | 3497.6510(1) | 59.695 | 7.1 | 99.438% | 0.2
8* | 5924.5415(5) | 103.412 | 7.2 | 99.970% | 1.0
9* | 6764.7114(2) | 103.412 | 7.2 | 98.097% | 0.3
10* | 40538.7033(2) | 103.412 | 7.0 | 62.850% | 0.4
11 | 26744.4141(3) | 109.731 | 7.1 | 99.983% | 0.6
12 | 5523.0110(2) | 111.903 | 7.0 | 99.886% | 0.5
13 | 2083.5429(2) | 117.103 | 7.0 | 99.990% | 0.3
14 | 24753.1893(2) | 118.623 | 7.6 | 52.817% | 0.4
15 | 11974.3024(6) | 120.160 | 7.1 | 99.945% | 1.3
16 | 8762.3732(3) | 122.494 | 7.6 | 99.989% | 0.6
17 | 18975.3036(5) | 124.071 | 7.0 | 77.577% | 1.0
18 | 22657.4292(8) | 128.902 | 7.6 | 66.141% | 1.7
19 | 5913.4912(8) | 139.035 | 8.4 | 99.758% | 1.5
20 | 8012.5940(8) | 143.452 | 7.2 | 99.981% | 1.6
21 | 27263.3490(8) | 153.597 | 7.8 | 96.984% | 1.5
22 | 36148.5644(10) | 155.507 | 7.4 | 99.468% | 2.0
23 | 1460.5734(8) | 161.362 | 7.2 | 99.995% | 1.7
24 | 12319.5048(7) | 167.412 | 7.4 | 99.998% | 1.4
25 | 26833.7275(12) | 169.473 | 7.2 | 89.701% | 2.5
26 | 18547.8670(9) | 172.608 | 8.2 | 98.323% | 1.8
27 | 9213.3246(5) | 174.726 | 7.0 | 96.529% | 1.0
28 | 14323.1635(13) | 177.947 | 7.3 | 59.221% | 2.6
29 | 15232.4035(6) | 179.033 | 7.5 | 94.064% | 1.1
30 | 26228.9132(15) | 185.673 | 7.6 | 99.998% | 3.0
31 | 40129.3022(3) | 189.077 | 7.4 | 90.696% | 0.6
32 | 34427.6883(2) | 194.876 | 7.1 | 79.459% | 0.3
33 | 38263.8594(9) | 199.633 | 7.5 | 93.066% | 1.8
34 | 21103.5138(10) | 206.969 | 7.4 | 99.992% | 1.9
35 | 6778.7168(5) | 210.730 | 7.0 | 61.309% | 1.0
36 | 20002.8843(3) | 222.397 | 7.4 | 88.357% | 0.6
37 | 1891.0116(2) | 230.508 | 7.3 | 97.514% | 0.5
38 | 22474.6054(4) | 236.069 | 7.1 | 99.995% | 0.8
39 | 32339.2015(2) | 311.730 | 7.4 | 85.183% | 0.5
40 | 5607.8609(6) | 315.418 | 7.2 | 94.921% | 1.1
41 | 12581.6178(15) | 365.243 | 7.1 | 99.765% | 3.0
42 | 7558.9811(13) | 401.071 | 7.3 | 94.033% | 2.6
43 | 39081.7106(21) | 403.421 | 7.7 | 99.410% | 4.2
44 | 12254.4407(46) | 405.784 | 7.0 | 52.500% | 9.1
45 | 33980.0101(6) | 536.747 | 7.3 | 99.978% | 1.3
46 | 18378.7853(37) | 810.519 | 8.5 | 99.847% | 7.4
47 | 16925.0990(32) | 839.202 | 7.3 | 99.502% | 6.4
48 | 21693.3493(138) | 1844.050 | 7.1 | 91.929% | 27.5
49 | 26754.9458(128) | 3463.760 | 7.6 | 99.987% | 25.6

Note. — The time of the pulse is relative to the start of the integration at MJD 52888.61267361. The figure in parentheses represents the uncertainty in the last digit of the time of the pulse as determined by HEIMDALL. The pulse widths were also measured from the HEIMDALL detections. Pulses 8, 9, and 10 (indicated with asterisks) are the three pulses that appeared in a single DM trial. See also Fig. 4. Figure 1: Light-cylinder magnetic field strength vs. spin period for pulsars in the ATNF pulsar catalog (version 1.70) (Manchester et al., 2005) that have $B_{lc}>10^{5}$ G. PSR J0537$-$6910 has the largest $B_{lc}$ by a factor of two and is indicated by the open circle. 
The seven pulsars observed to emit giant radio pulses are indicated by open stars; the two pulsars that have the next highest values of $B_{lc}$ after PSR J0537$-$6910 are labeled (PSR B1937+21 and the Crab pulsar). The population shown here can be divided into MSPs on the left and young pulsars on the right. See also Table 1. Figure 2: Pulse width vs. DM for 49 detected pulses. Also plotted is a dashed line indicating where the pulse width equals the dispersive smearing within the frequency channels. Real, astrophysical pulses would not be expected to lie below this line. Figure 3: Pulse S/N vs. DM for 49 single-pulse detections with S/N above 7 which had a FETCH-assigned probability of being real greater than 0.5. There is an excess of pulses in the observed DM range of the known LMC pulsar population (45 to 273 pc cm-3, indicated by the dashed vertical lines), suggesting that some of these pulses could be from LMC sources. Whether PSR J0537$-$6910 is the source of any of these pulses remains speculative. Figure 4: Three weak single pulse detections occurring in a single DM trial of 103.412 pc cm-3. The pulses are shown in the order given in Table 2, which provides further details. The top panel in each case shows flux vs. time, with the pulse centered at time zero. The middle panel shows frequency vs. time after dedispersion has been applied. A broadband signal (straight vertical line) would be expected in this panel for a typical pulse from a pulsar. The bottom panel shows DM vs. time. A localized signal appearing at a non-zero DM would be expected for an astrophysical signal. The likelihood of three or more of the 49 detected pulses appearing in a single DM trial by chance in this search is small (unlikely at the 4.3$\sigma$ level). All three pulses have comparable widths and could be coming from the same pulsar in the LMC, though probably not from PSR J0537$-$6910 (see the discussion in the main text).
# Quotients, inductive types, & quotient inductive types

Marcelo P. Fiore, Andrew M. Pitts and S. C. Steenkamp(🖂)
Department of Computer Science and Technology, University of Cambridge, United Kingdom
{marcelo.fiore, andrew.pitts<EMAIL_ADDRESS>

###### Abstract.

This paper introduces an expressive class of indexed quotient-inductive types, called QWI types, within the framework of constructive type theory. They are initial algebras for indexed families of equational theories with possibly infinitary operators and equations. We prove that QWI types can be derived from quotient types and inductive types in the type theory of toposes with natural number object and universes, provided those universes satisfy the Weakly Initial Set of Covers (WISC) axiom. We do so by constructing QWI types as colimits of a family of approximations to them defined by well-founded recursion over a suitable notion of size, whose definition involves the WISC axiom. We developed the proof and checked it using the Agda theorem prover.

###### Key words and phrases: dependent type theory, higher inductive types, quotient types, well-founded relation, weakly initial set of covers, topos theory

## 1. Introduction

Inductive types are an essential feature of many type theories and of the proof assistants and programming languages that are based upon them. Homotopy Type Theory [Uni13] introduces a powerful extension of the notion of inductive type: higher inductive types (HITs). To define an ordinary inductive type one declares the ways in which its elements are constructed, and these constructions may depend inductively on smaller elements. To define a HIT one not only declares element constructors, but also declares equality constructors valued in identity types (possibly iterated ones), specifying how the element constructors (and possibly the equality constructors) are related. In the language of homotopy theory, these correspond to points, paths, surfaces, and so on.

In this paper, rather than using HoTT, we work in the simpler setting of Extensional Type Theory [Mar82], where any propositional equality is reflected as a definitional equality. However, our results also hold for an intensional type theory with the uniqueness of identity proofs axiom (UIP), as demonstrated by our Agda development, which can be found at [FPS21]. In either case identity types are trivial in dimensions higher than one, so that any two proofs of identity are equal. Nevertheless, as [AK16] point out, HITs are still useful in such a one-dimensional setting. They introduced the term _quotient inductive type_ (QIT) for this truncated form of HIT.

One specific advantage of QITs over inductive types is that they provide a natural way to study universal algebra in type theory, since general algebraic theories cannot be represented in an equation-free way just using inductive notions [MM16]. As an example, the free commutative monoid on carrier $X$ is given by the quotient inductive type of finite multisets, $\mathsf{Bag}\,X$. (Other presentations of multisets are available besides the two given here. Incidentally, the algebra of monoids has an equation-free presentation [Kel80, section 23], [Kel92] – the algebra of Lists – and this is why no equation for associativity is needed in either definition of multisets.) It has constructors:

$[]:\mathsf{Bag}\,X$  (1)
$\_\mathrel{::}\_:X\rightarrow\mathsf{Bag}\,X\rightarrow\mathsf{Bag}\,X$
$\mathsf{swap}:\prod_{x,y:X}\prod_{\mathit{zs}:\mathsf{Bag}\,X}\;x\mathrel{::}y\mathrel{::}\mathit{zs}=y\mathrel{::}x\mathrel{::}\mathit{zs}$

This can be seen as the element constructors for lists ($[]$ and $\mathrel{::}$), along with an equality constructor $\mathsf{swap}$ that transposes the first two elements and (in combination with the congruence property of $\mathrel{::}$) thereby allows all reorderings of a list to be identified. Notice that the $\mathsf{swap}$ constructor lands in the identity type on $\mathsf{Bag}\,X$ (just written as $\_{=}\_$), making it an _equation_ constructor, in contrast to the _element_ constructors, $[]$ and $\_\mathrel{::}\_$, which are familiar from ordinary inductive type definitions. There is no need to include a constructor for the congruence property of $\_\mathrel{::}\_$, since that is automatic from the congruence properties of identity types.

An alternative presentation of multisets, whose equations may be easier to work with, is:

$[]:\mathsf{Bag}^{\prime}\,X$  (2)
$\_\mathrel{::}\_:X\rightarrow\mathsf{Bag}^{\prime}\,X\rightarrow\mathsf{Bag}^{\prime}\,X$
$\mathsf{comm}:\prod_{x,y:X}\prod_{\mathit{as},\mathit{bs},\mathit{cs}:\mathsf{Bag}^{\prime}\,X}\mathit{as}=y\mathrel{::}\mathit{cs}\rightarrow x\mathrel{::}\mathit{cs}=\mathit{bs}\rightarrow x\mathrel{::}\mathit{as}=y\mathrel{::}\mathit{bs}$

Notice that the $\mathsf{comm}$ equality constructor is _conditional_ on two equations, $\mathit{as}=y\mathrel{::}\mathit{cs}$ and $x\mathrel{::}\mathit{cs}=\mathit{bs}$, from which it deduces the final equation $x\mathrel{::}\mathit{as}=y\mathrel{::}\mathit{bs}$. The QWI-types introduced in this paper are not able to encode such conditional QITs (at least, not obviously so); see the discussion in the Conclusion.

Given the parameter $X$, the types $\mathsf{Bag}\,X$ and $\mathsf{Bag}^{\prime}\,X$ are single quotient-inductive types, rather than indexed families defined quotient-inductively. As an example of the latter, consider the go-to example of an inductively defined family, the vector type (length-indexed lists). This has a commutative variant, $\mathsf{AbVec}\,X\,n$ for $n:\mathbb{N}$, which makes it an example of an indexed QIT.
It has constructors: $\displaystyle{}[]$ $\displaystyle:\mathsf{AbVec}\,X\,0$ (3) $\displaystyle\\_{\mathrel{::}}\\_$ $\displaystyle:X\rightarrow\prod_{i:\mathbb{N}}\mathsf{AbVec}\,X\,i\rightarrow\mathsf{AbVec}\,X\,(i+1)$ $\displaystyle\mathsf{swap}$ $\displaystyle:\prod_{x,y:X}\prod_{i:\mathbb{N}}\prod_{\mathit{zs}:\mathsf{AbVec}\,X\,i}x\mathrel{::}y\mathrel{::}\mathit{zs}=y\mathrel{::}x\mathrel{::}\mathit{zs}$ Our final example of a QIT is unordered countably-branching trees over $X$, called $\mathsf{\omega Tree}\,X$, with constructors: $\displaystyle{}\mathsf{leaf}$ $\displaystyle:\mathsf{\omega Tree}\,X$ (4) $\displaystyle\mathsf{node}$ $\displaystyle:X\rightarrow(\mathbb{N}\rightarrow\mathsf{\omega Tree}\,X)\rightarrow\mathsf{\omega Tree}\,X$ $\displaystyle\mathsf{perm}$ $\displaystyle:\prod_{x:X}\prod_{b:\mathbb{N}\rightarrow\mathbb{N}}\prod_{b^{\prime}:\mathsf{isIso}\,b}\prod_{f:\mathbb{N}\rightarrow\mathsf{\omega Tree}\,X}\mathsf{node}\,x\,f=\mathsf{node}\,x\,(f\circ b)$ where elements of $\mathsf{isIso}\,b$ witness that $b$ is a bijection. (As with $\mathsf{AbVec}$, one could similarly consider a depth-indexed variant of $\mathsf{\omega Tree}$.) The significance of this example is that the $\mathsf{node}\,x\,\\_$ and $\mathsf{perm}\,x\,b\,b^{\prime}\,\\_$ constructors have infinite arity, making this an example of an _infinitary_ QIT. This type is one of the original motivations for considering QITs [AK16], since they can enable constructive versions of structures that classically use non-constructive choice principles, as we explain next. The examples of QITs in (1)–(3) only involve element constructors of _finite_ arity. For example in (1), $[]$ is nullary and for each $x,y:X$, $x\mathrel{::}\\_$ and $\mathsf{swap}\,x\,y\,\\_$ are unary. Consequently $\mathsf{Bag}\,X$ is isomorphic to the type obtained from the ordinary inductive type of finite lists over $X$ by quotienting by the congruence generated by $\mathsf{swap}$. By contrast, (4) involves element and equality constructors with countably infinite arity. So if one first forms the ordinary inductive type of _ordered_ (or planar) countably-branching trees (by dropping the equality constructor $\mathsf{perm}$ from the declaration) and then quotients by a suitable relation to get the equalities specified by $\mathsf{perm}$, one appears to need the axiom of countable choice to be able to lift the $\mathsf{node}$ element constructor to the quotient [AK16, Section 2.2]. The construction of the Cauchy reals as a higher inductive-inductive type [Uni13, Section 11.3] provides a similar, but more complicated example where use of countable choice is avoided. So it seems that without some form of choice principle, quotient inductive types are strictly more expressive than the combination of ordinary inductive types with quotient types. Indeed, [LS19, Section 9] turn an infinitary equational theory due to [Bla83, Section 9] into a higher-inductive type that cannot be proved to exist in ZF set theory without the Axiom of Choice. In this paper we show that in fact a much weaker and constructively acceptable form of choice is sufficient for constructing indexed quotient inductive types. This is the _Weakly Initial Set of Covers_ (WISC) axiom, originally called the “Type Theoretic Collection Axiom” TTCAf by [Str05] and (in the context of constructive set theory) the “Axiom of Multiple Choice” by [vdBM14]. 
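Returning to the finitary examples (1)–(3): the isomorphism between $\mathsf{Bag}\,X$ and lists quotiented by the congruence generated by $\mathsf{swap}$ can be made concrete in a prover with built-in quotient types. The following is a minimal Lean 4 sketch (the paper's own formalisation is in Agda; all names here, `BagRel`, `Bag`, `nil`, `cons`, are ours):

```lean
-- Lists quotiented by the congruence generated by swap give the finitary
-- QIT Bag. The explicit `cons` rule supplies the congruence that a QIT
-- gets for free from identity types.
inductive BagRel {X : Type} : List X → List X → Prop where
  | swap (x y : X) (zs : List X) : BagRel (x :: y :: zs) (y :: x :: zs)
  | cons (x : X) {as bs : List X} : BagRel as bs → BagRel (x :: as) (x :: bs)

def Bag (X : Type) : Type := Quot (@BagRel X)

def nil {X : Type} : Bag X := Quot.mk _ []

-- Lifting the list constructor to the quotient needs a proof that it
-- respects the relation; BagRel.cons provides it.
def cons {X : Type} (x : X) : Bag X → Bag X :=
  Quot.lift (fun zs => Quot.mk _ (x :: zs))
    (fun _ _ h => Quot.sound (BagRel.cons x h))

-- The swap equation of the QIT holds in the quotient:
example {X : Type} (x y : X) (zs : List X) :
    cons x (cons y (Quot.mk _ zs)) = cons y (cons x (Quot.mk _ zs)) :=
  Quot.sound (BagRel.swap x y zs)
```

It is exactly this lifting of element constructors to the quotient that becomes problematic without choice in the infinitary case of (4), which is where WISC enters.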
WISC is constructively acceptable in that it is known to hold in the internal logic of a wide range of toposes [vdBM14] (for example, all Grothendieck and all realizability toposes over the topos of classical sets), including those commonly used in the semantics of Type Theory. Section 4 reviews WISC and our motivation for using it herein. We make two contributions:

* First, we define a class of indexed quotient inductive types called _QWI-types_ and give elimination and computation rules for them (Section 3). The usual W-types of [Mar82] are inductive types giving the algebraic terms over a possibly infinitary signature. One specifies a QWI-type by giving a family of equations between such terms. So such types give initial algebras for (families of) possibly infinitary algebraic theories. They can encode a very wide range of examples of possibly infinitary, indexed quotient inductive types (Section 7). In classical set theory with the Axiom of Choice, QWI-types can be constructed simply as quotients of the underlying indexed W-type (hence the name).

* Second, our main result (Section 6) is that with only WISC it is still possible to construct QWI-types from inductive types and quotients in the constructive type theory of toposes, but not simply by quotienting a W-type. Instead, quotienting is interleaved with an inductive construction. To make sense of this and to prove that the resulting type has the required universal property, we construct the type as the colimit of a family of approximations to it defined by well-founded recursion over a suitable notion of _size_ (a constructive version of what classically would be accomplished with a sequence of transfinite ordinal length). Our notion of size is developed in Section 5. Sizes are elements of a W-type equipped with the _plump_ [Tay96] well-founded order; and WISC allows us to make the W-type big enough that a given polynomial endofunctor preserves size-indexed colimits (Section 5.1). This fact is then applied in Section 6 to form QWI-types as size-indexed colimits.

### Note

This paper is a greatly revised and expanded version of our earlier paper [FPS20], which introduced QW-types (see Section 3.2, which is the non-indexed version of Section 3.3). There we made use of _sized types_ [Abe12] as they are implemented in the Agda theorem prover [Agd20]. Although the results of that paper were formalised in Agda version 2.5.2, sized types are logically unsound in the current version of Agda, 2.6.2. In this paper, not only do we consider the more expressive indexed form of QW-types, namely QWI-types, we also put the construction of QW-types on a firm semantic footing by avoiding Agda-style sized types (and positivity assumptions on quotient types, see Section 6.1) in favour of a well-founded notion of size built from WISC. The Agda formalization of the results in this paper is available at [FPS21].

## 2. Type theory

The results in this paper are proved in a version of Extensional Type Theory [Mar82] that is an internal language for toposes [Joh02] with natural numbers object and universes [Str05a]. We use this type theory in an informal way, in the style and notation of the HoTT Book [Uni13]. Thus dependent function types are written $\prod_{x:A}B\,x$ and non-dependent ones as either $A\rightarrow B$ or $B^{A}$. Dependent product types are written $\sum_{x:A}B\,x$ and non-dependent ones as $A\times B$.
Coproduct types are written $A+B$ (with injections $\iota_{1}:A\rightarrow A+B$ and $\iota_{2}:B\rightarrow A+B$); and finite types are written $\mathbb{0}$ (with no elements), $\mathbb{1}$ (with a single element $0$), $\mathbb{2}$ (with two elements $0,1$), etc. The identity type for $x:A$ and $y:A$ is $x=_{A}y$, or just $x=y$ when the type $A$ is known from the context. When we need to refer to the judgement that $x$ and $y$ are definitionally equal, we write $x\equiv y$. Unlike in the HoTT Book [Uni13], the type $x=y$ is an extensional identity type and so it is inhabited iff $x\equiv y$ holds. In particular, proofs of identity are unique when they exist and so it makes sense to use McBride’s heterogeneous form of identity type [McB99] wherever possible. Given $x:A$ and $y:B$, we denote the heterogeneous identity type by $x\mathrel{{=}{=}}y$; thus this type is inhabited iff $A\equiv B$ and $x\equiv y$. The type theory has universes of types, $\mathcal{U}_{0}:\mathcal{U}_{1}:\mathcal{U}_{2},\ldots$ which we write just as $\mathcal{U}$ when the universe level is immaterial. We use Russell-style universes (there is no syntactic distinction between an element $A:\mathcal{U}$ and the corresponding type of its elements). Influenced by Agda (see below) and unlike the HoTT Book [Uni13], we do not assume universes are cumulative, but instead use Agda-style closure under $\prod$-types: if $A:\mathcal{U}_{i}$ and $B:A\rightarrow\mathcal{U}_{j}$, then $\prod_{x:A}B\,x:\mathcal{U}_{\max ij}$ (and similarly for $\sum$-types). Since we only consider toposes that have a natural numbers object, we can assume the universes are closed under forming not just W-types [MP00, Proposition 3.6], but also all inductively defined indexed families of types [GH04]. Indexed W-types are considered in more detail in the next section. The lowest universe contains an impredicative universe $\mathsf{Prop}$ of propositions, corresponding to the subobject classifier in a topos. $\mathsf{Prop}$ contains the identity types, is closed under intuitionistic connectives ($\wedge$, $\vee$, $\rightarrow$, $\leftrightarrow$, etc.) and quantifiers ($\forall(x:A).\phi(x)$ and $\exists(x:A).\phi(x)$, for $A$ in any universe), and satisfies propositional extensionality: $\mathsf{propext}:\forall(p,q:\mathsf{Prop}).\,(p\leftrightarrow q)\rightarrow{p=q}$ (5) Being an extensional type theory, we also have function extensionality: $\mathsf{funext}:\forall\left(f,g:\prod_{x:A}B\,x\right).\,(\forall(x:A).\,f\,x=g\,x)\rightarrow f=g$ (6) Note that like the universes $\mathcal{U}_{i}$, $\mathsf{Prop}$ is also a Russell-style universe, in that we do not make a notational distinction between a proposition $p:\mathsf{Prop}$ and the type of its proofs. Given $A:\mathcal{U}$ and $\phi:A\rightarrow\mathsf{Prop}$, we regard the comprehension type $\\{x:A\mid\phi\,x\\}$ in $\mathcal{U}$ as synonymous with the dependent product $\sum_{x:A}\,\phi\,x$. Toposes have coequalizers and effective epimorphisms [Joh02, A2.4]. Correspondingly the type theory contains quotient types, with notation as in Figure 1 (recall that $\mathrel{{=}{=}}$ stands for heterogeneous identity). These can be constructed in the usual way via equivalence classes, using $\mathsf{Prop}$’s impredicative quantifiers and the fact that toposes satisfy unique choice: $\mathsf{uniquechoice}:(\exists(x:A).\forall(y:A).x=y)\rightarrow A$ (7) Thus $\mathsf{uniquechoice}$ is a function mapping proofs that $A$ has exactly one element to a name for that one element. 
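As an aside, the Lean 4 prover exposes direct counterparts of (5) and (6) under the same names: `propext` is an axiom, while `funext` is derived from Lean's primitive quotient types (compare the analogous remark about Agda in the next subsection). A minimal illustration, with names exactly as in the Lean core library:

```lean
-- Counterparts of (5) and (6) in Lean 4:
#check @propext  -- ∀ {a b : Prop}, (a ↔ b) → a = b
#check @funext   -- (∀ x, f x = g x) → f = g, with f and g implicit

-- Propositional extensionality in use:
example (p q : Prop) (h : p ↔ q) : p = q := propext h
```

Unique choice (7), by contrast, has no named constructive counterpart in Lean's core; the classical axiom `Classical.choice` subsumes it but is much stronger.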
From $\mathsf{uniquechoice}$ it follows that given $A:\mathcal{U}$, $B:\mathcal{U}^{A}$ and $\phi:\prod_{x:A}(B\,x\rightarrow\mathsf{Prop})$, from a proof of $\forall(x:A)\exists!(y:B\,x).\;\phi\,x\,y$ we get there is a unique $f:\prod_{x:A}B\,x$ such that $\forall(x:A).\;\phi\,x\,(f\,x)$. • Given $A:\mathcal{U}$ and $R:A\rightarrow A\rightarrow\mathsf{Prop}$, we have: $\displaystyle A/R:\mathcal{U}$ $\displaystyle[\\_]_{R}:A\rightarrow A/R$ $\displaystyle\mathsf{qeq}_{R}:\forall(x,y:A)\mathbin{.}\,R\,x\,y\rightarrow[x]_{R}=[y]_{R}$ • Furthermore, given $B:A/R\rightarrow\mathcal{U}$, $f:\prod_{x:A}B([x]_{R})$, and $p:\forall(x,y:A)\mathbin{.}\,R\,x\,y\rightarrow f\,x\mathrel{{=}{=}}f\,y$, we have: $\displaystyle\mathsf{qelim}_{R}\,B\,f\,p:\prod_{z:A/R}B\,z$ $\displaystyle\mathsf{qcomp}_{R}\,B\,f\,p:\forall(x:A)\mathbin{.}\,\mathsf{qelim}\,B\,f\,[x]_{R}=f\,x$ • Quotients of equivalence relations are effective: writing $\mathsf{ER}_{R}$ for the proposition that $R$ is reflexive, symmetric and transitive, we have: $\displaystyle\mathsf{qeff}_{R}:\mathsf{ER}_{R}\rightarrow\forall(x,y:A)\mathbin{.}\,[x]_{R}=[y]_{R}\rightarrow R\,x\,y$ Figure 1. Quotient types ### Agda development We have developed and checked a version of the results in this paper using the Agda theorem prover [Agd20]. In particular our Agda development gives the full details of the construction of the indexed version of QW-types, whereas in the paper, for readability, we only give the construction in the non-indexed case (Section 6). Being intensional and predicative, the type theory provided by Agda is weaker than the one described above, but can be soundly interpreted in it. One has to work harder to establish the results with Agda, since two expressions that are definitionally equal in Extensional Type Theory may not be so in Agda and hence one has to produce a term in a corresponding identity type to prove them equal. On the other hand, when a candidate term is given, Agda can decide whether or not it is a correct proof, because validity of the judgements of the type theory it implements is decidable222though not always feasibly so, in contrast with the situation for Extensional Type Theory [Hof95]. Our development is also made easier by extensive use of the predicative universes of proof-irrelevant propositions that feature in recent versions of Agda. Not only are any two proofs of such propositions definitionally equal, but inductively defined propositions (such as the well-founded ordering on sizes used in Section 6) can be eliminated in proofs using dependent pattern matching, which is a very great convenience. We still need these propositions to satisfy the extensionality (5) and unique choice (7) properties, so we add them as postulates in Agda. Also, the impredicative construction of quotient types is not available, so we get quotients as in Figure 1 by postulating them and using a user-declared rewrite to make their computation rule a definitional equality. Although function extensionality (6) is not provable in Agda’s core type theory, it becomes so once one has such quotient types. Our Agda development is available at [FPS21]. ## 3\. Indexed polynomial functors and equational systems In this section we introduce a class of indexed quotient inductive types, called _QWI-types_ , which are indexed families of free algebras for possibly infinitary equational theories. To begin with we describe the simpler, non- indexed case (_QW-types_) and then treat the general, indexed case in Section 3.3. ### 3.1. 
W-types

We recall some facts about types of well-founded trees, the W-types of [Mar82]. We take _signatures_ (also known as _containers_ [AAG05]) to be elements of the dependent product

$\mathsf{Sig}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{A:\mathcal{U}}(A\rightarrow\mathcal{U})$ (8)

So a signature is given by a pair $\mathrm{\Sigma}=(A,B)$ consisting of a type $A:\mathcal{U}$ and a family of types $B:A\rightarrow\mathcal{U}$. Each such signature determines a polynomial endofunctor [GH04, AAG05] $S_{\mathrm{\Sigma}}:\mathcal{U}\rightarrow\mathcal{U}$ whose value at $X:\mathcal{U}$ is the following dependent product

$S_{\mathrm{\Sigma}}(X)\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{a:A}(B\,a\rightarrow X)$ (9)

and whose action on a function $f:X\rightarrow Y$ in $\mathcal{U}$ is the function $S_{\mathrm{\Sigma}}f:S_{\mathrm{\Sigma}}(X)\rightarrow S_{\mathrm{\Sigma}}(Y)$ where

$S_{\mathrm{\Sigma}}f\,(a,b)\mathrel{\smash{\overset{\text{\tiny def}}{=}}}(a,f\circ b)\qquad(a:A,\;b:B\,a\rightarrow X)$ (10)

An _$S_{\mathrm{\Sigma}}$-algebra_ is by definition an element of the dependent product

$\mathsf{Alg}_{\mathrm{\Sigma}}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{X:\mathcal{U}}(S_{\mathrm{\Sigma}}(X)\rightarrow X)$ (11)

$S_{\mathrm{\Sigma}}$-algebra morphisms $(X,\alpha)\rightarrow(X^{\prime},\alpha^{\prime})$ are given by functions $h:X\rightarrow X^{\prime}$ together with an element of the type

$\mathsf{isHom}_{\alpha,\alpha^{\prime}}\,h\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\prod_{a:A}\prod_{b:B\,a\rightarrow X}(\alpha^{\prime}(a,h\circ b)=h(\alpha(a,b)))$ (12)

Then the W-type $\mathrm{W}_{\mathrm{\Sigma}}$ determined by a signature $\mathrm{\Sigma}$ is the underlying type of an initial object in the evident category of $S_{\mathrm{\Sigma}}$-algebras. More generally, [Dyb97] shows that the initial algebra of any non-nested, strictly positive endofunctor on $\mathcal{U}$ is given by a W-type; and [AAG05] extend this to the case with nested uses of W-types as part of their work on containers. (These proofs take place in extensional type theory [Mar82], but work just as well in intensional type theory with uniqueness of identity proofs and function extensionality, which is what we use in Agda.)

More concretely, given a signature $\mathrm{\Sigma}=(A,B)$, if one thinks of elements $a:A$ as names of operation symbols whose (not necessarily finite) arity is given by the type $B\,a:\mathcal{U}$, then the elements of the W-type $\mathrm{W}_{\mathrm{\Sigma}}$ represent the closed algebraic terms (i.e. well-founded trees) over the signature. From this point of view it is natural to consider not only closed terms solely built up from operations, but also open terms additionally built up with variables drawn from some type $V$.
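These definitions are easy to render in a proof assistant. Here is a minimal Lean 4 sketch of (8)–(12) (the paper's development is in Agda; the names `Sig`, `S`, `Smap`, `Alg`, `W`, `fold` are ours, with `Op`/`Ar` standing for the $A$/$B$ of (8)):

```lean
-- A signature packages operation symbols with their arities, as in (8).
structure Sig where
  Op : Type           -- operation symbols (the A of (8))
  Ar : Op → Type      -- arity of each symbol (the B of (8))

-- The polynomial endofunctor (9) and its action on functions (10).
def S (sg : Sig) (X : Type) : Type := (a : sg.Op) × (sg.Ar a → X)

def Smap (sg : Sig) {X Y : Type} (f : X → Y) : S sg X → S sg Y :=
  fun ⟨a, b⟩ => ⟨a, f ∘ b⟩

-- An S_Σ-algebra (11): a carrier with a structure map.
structure Alg (sg : Sig) where
  carrier : Type
  act : S sg carrier → carrier

-- The W-type: well-founded trees over the signature.
inductive W (sg : Sig) : Type where
  | sup : (a : sg.Op) → (sg.Ar a → W sg) → W sg

-- The unique algebra morphism out of W sg (the fold witnessing
-- initiality; uniqueness would be proved by induction on trees).
def fold (sg : Sig) (alg : Alg sg) : W sg → alg.carrier
  | .sup a b => alg.act ⟨a, fun i => fold sg alg (b i)⟩
```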
As well as allowing operators of possibly infinite arity, we also allow terms involving possibly infinitely many variables (example (4) involves such terms). Categorically, the type $T_{\mathrm{\Sigma}}(V)$ of such open terms is the free $S_{\mathrm{\Sigma}}$-algebra on $V$ and is another W-type, for the signature obtained from $\mathrm{\Sigma}$ by adding the elements of $V$ as nullary operations. Nevertheless, it is convenient to give a direct inductive definition: the value of $T_{\mathrm{\Sigma}}:\mathcal{U}\rightarrow\mathcal{U}$ at some $V:\mathcal{U}$ is the inductively defined type with constructors

$\eta:V\rightarrow T_{\mathrm{\Sigma}}\,V$ (13)
$\sigma:S_{\mathrm{\Sigma}}(T_{\mathrm{\Sigma}}\,V)\rightarrow T_{\mathrm{\Sigma}}\,V$

Given an $S_{\mathrm{\Sigma}}$-algebra $(X,\alpha):\mathsf{Alg}_{\mathrm{\Sigma}}$ and a function $f:V\rightarrow X$, the unique morphism of $S_{\mathrm{\Sigma}}$-algebras from the free $S_{\mathrm{\Sigma}}$-algebra on $V$ to $(X,\alpha)$ that extends $f$, $\_\mathbin{\gg\!=}f:(T_{\mathrm{\Sigma}}(V),\sigma)\rightarrow(X,\alpha)$, has underlying function $T_{\mathrm{\Sigma}}(V)\rightarrow X$ mapping each $t:T_{\mathrm{\Sigma}}(V)$ to the element $t\mathbin{\gg\!=}f$ defined as follows by recursion on the structure of $t$:

$\eta\,x\mathbin{\gg\!=}f\mathrel{\smash{\overset{\text{\tiny def}}{=}}}f\,x$ (14)
$\sigma\,(a,b)\mathbin{\gg\!=}f\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\alpha\,(a,\lambda\,x\mathbin{.}b\,x\mathbin{\gg\!=}f)$

As the notation suggests, $\mathbin{\gg\!=}$ is the Kleisli lifting operation (“bind”) for a monad structure on $T_{\mathrm{\Sigma}}$. Indeed, $T_{\mathrm{\Sigma}}$ is the free monad on the endofunctor $S_{\mathrm{\Sigma}}$ and in particular the functorial action of $T_{\mathrm{\Sigma}}$ on $f:V\rightarrow V^{\prime}$ yields the function $T_{\mathrm{\Sigma}}f:T_{\mathrm{\Sigma}}(V)\rightarrow T_{\mathrm{\Sigma}}(V^{\prime})$ given by

$T_{\mathrm{\Sigma}}f\,t\mathrel{\smash{\overset{\text{\tiny def}}{=}}}t\mathbin{\gg\!=}(\eta\circ f)\qquad(t:T_{\mathrm{\Sigma}}(V))$ (15)

### 3.2. QW-types

The notion of _QW-type_ that we introduce in this section is obtained from that of a W-type by considering not only the algebraic terms over a given signature, but also equations between terms. To code equations we use a type-theoretic rendering of a categorical notion of equational system introduced by [FH09], referred to as a _term equational system_ [FH09, Section 2] and as a _monadic equational system_ [Fio13, Section 5], here instantiated to the special case of free monads on signatures.

###### Definition.

A _system of equations_ over a signature $\mathrm{\Sigma}:\mathsf{Sig}$ is specified by:

* A type $E:\mathcal{U}$, whose elements $e:E$ can be thought of as names for each equation.
* A function $V:E\rightarrow\mathcal{U}$. For each equation $e:E$, the type $V\,e:\mathcal{U}$ represents the variables used in the equation $e$.
* For each equation $e:E$, two elements called $l\,e$ and $r\,e$ of type $T_{\mathrm{\Sigma}}(V\,e)$, which is the free $S_{\mathrm{\Sigma}}$-algebra on the variables $V\,e$. So $l\,e$ and $r\,e$ can be thought of as the abstract syntax trees of terms with some leaves being free variables drawn from $V\,e$.
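Continuing the Lean 4 sketch, the free terms (13), the Kleisli lifting (14), and a satisfaction condition anticipating (16)–(17) just below look as follows (names are ours; the $\sigma$ constructor is given in curried form):

```lean
-- Free terms (13) over a signature, with variables drawn from V.
inductive T (sg : Sig) (V : Type) : Type where
  | η : V → T sg V
  | σ : (a : sg.Op) → (sg.Ar a → T sg V) → T sg V

-- The Kleisli lifting (14): interpret a term in the algebra alg,
-- sending variables through f.
def bind {sg : Sig} {V : Type} (alg : Alg sg)
    (f : V → alg.carrier) : T sg V → alg.carrier
  | .η x   => f x
  | .σ a b => alg.act ⟨a, fun i => bind alg f (b i)⟩

-- One equation of a system: a type of variables and two terms over them.
structure Eqn (sg : Sig) where
  V : Type
  l : T sg V
  r : T sg V

-- Satisfaction, cf. (17): every valuation of the variables equates
-- the two sides. A system of equations is a family of Eqn values.
def Sat {sg : Sig} (alg : Alg sg) (eq : Eqn sg) : Prop :=
  ∀ ρ : eq.V → alg.carrier, bind alg ρ eq.l = bind alg ρ eq.r
```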
So systems of equations are elements of the dependent product $\mathsf{Syseq}_{\mathrm{\Sigma}}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{E:\mathcal{U}}\sum_{V:E\rightarrow\mathcal{U}}\bigg{(}\prod_{e:E}T_{\mathrm{\Sigma}}(V\,e)\bigg{)}\times\bigg{(}\prod_{e:E}T_{\mathrm{\Sigma}}(V\,e)\bigg{)}$ (16) An $S_{\mathrm{\Sigma}}$-algebra $\alpha:S_{\mathrm{\Sigma}}(X)\rightarrow X$ _satisfies_ the system of equations $\varepsilon\equiv(E,V,l,r):\mathsf{Syseq}_{\mathrm{\Sigma}}$ if there is a proof of $\mathsf{Sat}_{\alpha,\varepsilon}(X)\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\forall(e:E).\forall(\rho:V\,e\rightarrow X).\;(l\,e\mathbin{\gg\\!=}\rho)=(r\,e\mathbin{\gg\\!=}\rho)$ (17) The category-theoretic view of QW-types is that they are simply $S_{\mathrm{\Sigma}}$-algebras that are initial among those satisfying a given system of equations: ###### Definition (QW-type). A _QW-type_ for a signature $\mathrm{\Sigma}\equiv(A,B):\mathsf{Sig}$ and a system of equations $\varepsilon\equiv(E,V,l,r):\mathsf{Syseq}_{\mathrm{\Sigma}}$ is given by a type $\mathsf{QW}:\mathcal{U}$ equipped with an $S_{\mathrm{\Sigma}}$-algebra structure and a proof that it satisfies the equations. Thus there are functions $\displaystyle\mathsf{qwintro}$ $\displaystyle:S_{\mathrm{\Sigma}}(\mathsf{QW})\rightarrow\mathsf{QW}$ (18) $\displaystyle\mathsf{qwequate}$ $\displaystyle:\mathsf{Sat}_{\mathsf{qwintro},\varepsilon}(\mathsf{QW})$ (19) together with functions that witness that it is an initial such algebra: $\displaystyle\mathsf{qwrec}$ $\displaystyle:\prod_{X:\mathcal{U}}\prod_{\alpha:S_{\mathrm{\Sigma}}\,X\rightarrow X}\mathsf{Sat}_{\alpha,\varepsilon}(X)\rightarrow\mathsf{QW}\rightarrow X$ (20) $\displaystyle\mathsf{qwrechom}$ $\displaystyle:\prod_{X:\mathcal{U}}\prod_{\alpha:S_{\mathrm{\Sigma}}\,X\rightarrow X}\prod_{p:\mathsf{Sat}_{\alpha,\varepsilon}(X)}\mathsf{isHom}\,(\mathsf{qwrec}\,X\,\alpha\,p)$ (21) $\displaystyle\mathsf{qwuniq}$ $\displaystyle:\prod_{X:\mathcal{U}}\prod_{\alpha:S_{\mathrm{\Sigma}}\,X\rightarrow X}\prod_{p:\mathsf{Sat}_{\alpha,\varepsilon}(X)}\prod_{f:\mathsf{QW}\rightarrow X}\mathsf{isHom}\,f\rightarrow\mathsf{qwrec}\,X\,\alpha\,p=f$ (22) ###### Remark (Free algebras). Section 3.2 defines QW-types as initial algebras for systems of equations $\varepsilon$ over signatures $\mathrm{\Sigma}$. More generally, the _free_ $(\mathrm{\Sigma},\varepsilon)$-algebra on a type $X:\mathcal{U}$ is a type $F_{\mathrm{\Sigma},\varepsilon}\,X:\mathcal{U}$ equipped with an $S_{\mathrm{\Sigma}}$-algebra structure $S_{\mathrm{\Sigma}}(F_{\mathrm{\Sigma},\varepsilon}\,X)\rightarrow F_{\mathrm{\Sigma},\varepsilon}\,X$ satisfying the system of equations $\varepsilon$ and an inclusion of generators $\eta_{X}:X\rightarrow F_{\mathrm{\Sigma},\varepsilon}\,X$ which is universal among such data. Initial algebras are the $X=\mathbb{0}$ special case of free algebras. But once one has them, one also has free algebras, by change of signature: $F_{\mathrm{\Sigma},\varepsilon}\,X$ is the QW-type for the signature $\mathrm{\Sigma}_{X}$ and system of equations $\varepsilon_{X}$ defined as follows. If $\mathrm{\Sigma}=(A,B)$, then $\mathrm{\Sigma}_{X}=(X+A,B_{X})$ where $B_{X}:X+A\rightarrow\mathcal{U}$ satisfies $B_{X}(\iota_{1}\,x)=\mathbb{0}$ and $B_{X}(\iota_{2}\,a)=B\,a$. 
And if $\varepsilon=(E,V,l,r)$, then $\varepsilon_{X}=(E,V,l_{X},r_{X})$ where for each $e:E$, $l_{X}\,e=(l\,e\mathbin{\gg\\!=}\eta)$ and $r_{X}\,e=(r\,e\mathbin{\gg\\!=}\eta)$ (using the $S_{\mathrm{\Sigma}}$-algebra structure $s$ on $T_{\mathrm{\Sigma}}(V\,e)$ given by $s(a,b)=\sigma(\iota_{2}\,a,b)$). The definitions of $S_{\mathrm{\Sigma}}$ in (9) and $\mathsf{Sat}_{\alpha,\varepsilon}$ in (17) suggest that QW-types are an instance of the notion of quotient-inductive type [AK16], with $\mathsf{qwintro}$ as the element constructor and $\mathsf{qwequate}$ as the equality constructor. To show that QW-types are indeed quotient-inductive types, they need to have the requisite dependently-typed elimination and computation properties333In this paper we work with extensional type theory, so the computation property, (26), is also a definitional equality. In our Agda development we work in intensional type theory, so there we only establish the computation property up to propositional equality; using the terminology of [Shu18], those are _typal_ indexed quotient-inductive types. for these elements and equality constructors. We show that these follow from (20)–(22) and function extensionality. To state this proposition we need a dependent version of the bind operation (14). For each “motive” $P$ and induction step $p$, $\displaystyle P$ $\displaystyle:\mathsf{QW}\rightarrow\mathcal{U}$ (23) $\displaystyle p$ $\displaystyle:\prod_{a:A}\prod_{b:B\,a\rightarrow\mathsf{QW}}\bigg{(}\prod_{x:B\,a}P(b\,x)\bigg{)}\rightarrow P(\mathsf{qwintro}(a,b))$ as well as a type $X:\mathcal{U}$, a substitution $\rho:\left(X\rightarrow\sum_{x:\mathsf{QW}}P\,x\right)$, and term $t:T_{\mathrm{\Sigma}}\,X$, we have an element $\mathsf{lift}_{P,X}\,p\,\rho\,t:P\,(t\mathbin{\gg\\!=}(\pi_{1}\circ\rho))$ defined by recursion on the structure of $t$: $\begin{array}[]{lcl}\mathsf{lift}_{P,X}\,p\,\rho\,(\eta\,x)&\mathrel{\smash{\overset{\text{\tiny def}}{=}}}&\pi_{2}\,(\rho\,x)\\\ \mathsf{lift}_{P,X}\,p\,\rho\,(\sigma\,(a,b))&\mathrel{\smash{\overset{\text{\tiny def}}{=}}}&p\,a\,(\lambda\,x\mathbin{.}b\,x\mathbin{\gg\\!=}(\pi_{1}\circ\rho))\,((\mathsf{lift}_{P,X}\,p\,\rho)\circ b)\end{array}$ (24) Note that the substitution $\rho$ gives terms in the “dependent telescope” $\sum_{x:\mathsf{QW}}P\,x$, which will be written as $P^{\prime}$. ###### Proposition . For a QWI-type as defined above, given $P$ and $p$ as in (23), and a term $p_{\mathsf{resp}}$ of type $\prod_{e:E}\prod_{\rho:V\,e\rightarrow\sum_{x:\mathsf{QW}}P\,x}\mathsf{lift}_{P,X}\,p\,\rho\,(l\,e)==\mathsf{lift}_{P,X}\,p\,\rho\,(r\,e)$ (25) there are elimination and computation terms $\displaystyle\mathsf{qwelim}$ $\displaystyle:\prod_{x:\mathsf{QW}}P\,x$ (26) $\displaystyle\mathsf{qwcomp}$ $\displaystyle:\prod_{a:A}\prod_{b:B\,a\rightarrow\mathsf{QW}}\mathsf{qwelim}\,(\mathsf{qwintro}\,(a,b))=p\,a\,b\,(\mathsf{qwelim}\circ b)$ (Note that (25) uses heterogeneous identity $\mathrel{{=}{=}}$, because $\mathsf{lift}_{P,X}\,p\,\rho\,(l\,e)$ and $\mathsf{lift}_{P,X}\,p\,\rho\,(r\,e)$ inhabit different types, namely $P\,(l\,e\mathbin{\gg\\!=}(\pi_{1}\circ\rho))$ and $P\,(r\,e\mathbin{\gg\\!=}(\pi_{1}\circ\rho))$ respectively.) ###### Proof. To define the eliminator, we must first use the algebra map on the more general type $P^{\prime}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{x:\mathsf{QW}}P\,x$. 
The algebra map, $\mathsf{qwrec}$, requires that this type $P^{\prime}$ has an algebra structure $\beta:S_{\mathrm{\Sigma}}(P^{\prime})\rightarrow P^{\prime}$, which is given by: $\beta\,(a,b)\mathrel{\smash{\overset{\text{\tiny def}}{=}}}(\mathsf{qwintro}(a,\pi_{1}\circ b),p\,a\,(\pi_{1}\circ b)\,(\pi_{2}\circ b))$ (27) Moreover, we must show that the recursor satisfies the equations by giving a proof $s:\mathsf{Sat}_{\beta,\varepsilon}(P^{\prime})$, which we do pointwise on the two elements of the dependent product $P^{\prime}$. Taking projections distributes (possibly dependently) over $\mathbin{\gg\\!=}$; so to construct $s$ it suffices that, given any $e:E$ and $\rho:V\,e\rightarrow P^{\prime}$, we have the two terms: $\begin{array}[]{rcrcl}\mathsf{qwequate}\,e\,\rho&:&l\,e\mathbin{\gg\\!=}(\pi_{1}\circ\rho)&=&r\,e\mathbin{\gg\\!=}(\pi_{1}\circ\rho)\\\ p_{\mathsf{resp}}\,e\,\rho&:&\mathsf{lift}\,p\,\rho\,(l\,e)&==&\mathsf{lift}\,p\,\rho\,(r\,e)\end{array}$ (28) Then the eliminator is defined by taking the second projection of this recursor. $\mathsf{qwelim}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\pi_{2}\circ\mathsf{qwrec}\,P^{\prime}\,\beta\,s$ (29) Given an element $(a,b):S_{\mathrm{\Sigma}}(\mathsf{QW})$, we can prove that applying the eliminator to it computes as expected, that is $\mathsf{qwelim}\,(\mathsf{qwintro}\,(a,b))=p\,a\,b\,(\mathsf{qwelim}\circ b)$, as follows, where we write $r^{\prime}$ for $\mathsf{qwrec}\,P^{\prime}\,\beta\,s$: $\displaystyle\mathsf{qwelim}\,(\mathsf{qwintro}\,(a,b))$ $\displaystyle=={}$ $\displaystyle\pi_{2}\,(r^{\prime}(\mathsf{qwintro}\,(a,b)))$ def. of $\mathsf{qwelim}$ $\displaystyle=={}$ $\displaystyle\pi_{2}\,(\beta\,(a,r^{\prime}\circ b))$ using $\mathsf{qwrechom}$ against $\mathsf{qwintro}\,(a,b)$ $\displaystyle=={}$ $\displaystyle p\,a\,(\pi_{1}\circ r^{\prime}\circ b)\,(\pi_{2}\circ r^{\prime}\circ b)$ def. of $\beta$ $\displaystyle=={}$ $\displaystyle p\,a\,b\,(\pi_{2}\circ r^{\prime}\circ b)$ $\displaystyle=={}$ $\displaystyle p\,a\,b\,(\mathsf{qwelim}\circ b)$ def. of $\mathsf{qwelim}$ ### 3.3. QWI-types In this section we consider _indexed_ quotient inductive types specified by families of (possibly infinitary) equational theories. As for ordinary W-types, the indexed version of QW-types gives convenient expressive power; see Section 7. To describe the indexed case we first introduce some notation for indexed families of types and functions. Given a type $I:\mathcal{U}$, when we map between two $I$-indexed types, say $A:\mathcal{U}^{I}$ to $B:\mathcal{U}^{I}$, we will write $A\rightharpoondown B\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\prod_{k:I}A_{k}\rightarrow B_{k}$ (30) for the function type. The composition of $f:A\rightharpoondown B$ and $g:B\rightharpoondown C$ will just be written as $g\circ f\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\lambda i.\lambda x.\,g_{i}(f_{i}\,x)$ (31) We also often need to define an indexed family of types $B_{i}\,a:\mathcal{U}^{I}$ which is dependent on some $i:I$ and element $a:A_{i}$ for $A:\mathcal{U}^{I}$. 
The type of such families will be written as $A\nrightharpoondown\mathcal{U}^{I}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\prod_{i:I}(A_{i}\rightarrow\mathcal{U}^{I})$ (32) We take our notion of an indexed signature for a WI-type (indexed W-type) to be a particular case of indexed containers [AAG05], namely an element of the type $\sum_{I:\mathcal{U}}\sum_{A:\mathcal{U}^{I}}\prod_{i:I}(A_{i}\rightarrow\mathcal{U}^{I})$, which with the above notational conventions we write as $\mathsf{Sig}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{I:\mathcal{U}}\sum_{A:\mathcal{U}^{I}}(A\nrightharpoondown\mathcal{U}^{I})$ (33) We only need indexed containers whose source and target indexing types are the same, since we only need to consider endofunctors on the category of $I$-indexed families in $\mathcal{U}$ and their algebras. Thus an indexed signature $\mathrm{\Sigma}\equiv(I,A,B)$ is a triple that can be thought of as a set of indices (or sorts) $I$, an $I$-indexed family of operators $A_{i}$, and a function mapping each operator $a:A_{i}$ to an $I$-indexed family of arities in $\mathcal{U}^{I}$. For instance, the type of vectors over some type $D$ has an indexed signature as follows (see Section 7.2): * • natural numbers as indices; * • for index $0$ a parameter-less operator for the empty vector, and for index $n+1$ an operator parametrised by $D$ for cons; and * • for the operator indexed by $0$, a family of empty types for arities, and for operators indexed by $n+1$, a family of arities which is the unit type just in the $n$th index, and empty otherwise. Each indexed signature $\mathrm{\Sigma}\equiv(I,A,B)$ determines a polynomial endofunctor $S_{\mathrm{\Sigma}}:\mathcal{U}^{I}\rightarrow\mathcal{U}^{I}$, which is defined at a family $X:\mathcal{U}^{I}$ by $(S_{\mathrm{\Sigma}}(X))_{i}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{a:A_{i}}(B_{i}\,a\rightharpoondown X)\quad\text{(for each $i:I$)}$ (34) An $S_{\mathrm{\Sigma}}$-_algebra_ is by definition an element of the dependent product $\mathsf{Alg}_{\mathrm{\Sigma}}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{X:\mathcal{U}^{I}}(S_{\mathrm{\Sigma}}(X)\rightharpoondown X)$ (35) (For example, when $\mathrm{\Sigma}$ is the signature for vectors over some type $D$, then an algebra $(X,\alpha):\mathsf{Alg}_{\mathrm{\Sigma}}$ amounts to $\alpha_{0}:\mathbb{1}\rightarrow X_{0}$ and $\alpha_{n+1}:D\times X_{n}\rightarrow X_{n+1}$.) 
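This indexed signature for vectors can be written out in the running Lean 4 sketch (names ours; `PUnit` and `PEmpty` play the roles of the unit and empty types):

```lean
-- Operators of the vector signature, indexed by length.
def VecOp (D : Type) : Nat → Type
  | 0     => PUnit   -- one operator at index 0: nil
  | _ + 1 => D       -- at index n+1, one operator per element of D: cons

-- Arity families: nil has no arguments; cons at index n+1 has exactly
-- one recursive argument, at index n.
def VecAr (D : Type) : (n : Nat) → VecOp D n → Nat → Type
  | 0,     _, _ => PEmpty
  | n + 1, _, m => if m = n then PUnit else PEmpty

-- The indexed polynomial endofunctor (34) at a family X : Nat → Type.
def SVec (D : Type) (X : Nat → Type) (i : Nat) : Type :=
  (a : VecOp D i) × ((j : Nat) → VecAr D i a j → X j)
```

Unfolding `SVec`, an algebra for it amounts to maps `PUnit → X 0` (nil) and, for each `n`, `D × X n → X (n+1)` (cons), as noted in the parenthetical example above.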
$S_{\mathrm{\Sigma}}$-algebra morphisms $(X,\alpha)\rightarrow(X^{\prime},\alpha^{\prime})$ are given by an $I$-indexed family of functions $h:X\rightharpoondown X^{\prime}$ together with a family of elements in the types

$(\mathsf{isHom}\,h)_{i}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\prod_{a:A_{i}}\prod_{b:B_{i}\,a\rightharpoondown X}(\alpha^{\prime}_{i}\,(a,h\circ b)=h_{i}(\alpha_{i}(a,b)))$ (36)

The WI-type $\mathrm{W}_{i:I,\,a:A_{i}}\,B_{i}\,a$, or $\mathrm{W}_{\mathrm{\Sigma}}$, determined by $\mathrm{\Sigma}\equiv(I,A,B)$ is the underlying type of an initial $S_{\mathrm{\Sigma}}$-algebra. The elements of $\mathrm{W}_{\mathrm{\Sigma}}$ represent an $I$-indexed family of _closed_ algebraic terms (i.e. well-founded trees) over the signature $\mathrm{\Sigma}$. Given a family $V:\mathcal{U}^{I}$, the family $T_{\mathrm{\Sigma}}(V):\mathcal{U}^{I}$ of _open_ terms with variables from $V$ is the inductively defined family with constructors

$\eta:V\rightharpoondown T_{\mathrm{\Sigma}}\,V$ (37)
$\sigma:S_{\mathrm{\Sigma}}(T_{\mathrm{\Sigma}}\,V)\rightharpoondown T_{\mathrm{\Sigma}}\,V$

Given an $S_{\mathrm{\Sigma}}$-algebra $(X,\alpha):\mathsf{Alg}_{\mathrm{\Sigma}}$ and an $I$-indexed family of functions $f:V\rightharpoondown X$, the unique morphism of $S_{\mathrm{\Sigma}}$-algebras from the $S_{\mathrm{\Sigma}}$-algebra $(T_{\mathrm{\Sigma}}(V),\sigma)$ to $(X,\alpha)$ that extends $f$ has an underlying family of functions $T_{\mathrm{\Sigma}}(V)\rightharpoondown X$ mapping each $t:(T_{\mathrm{\Sigma}}\,V)_{i}$ to the element $t\mathbin{\gg\!=}f$ defined analogously to (14).

Given an indexed signature $\mathrm{\Sigma}\equiv(I,A,B):\mathsf{Sig}$, a _system of equations_ for it is an element of the dependent product

$\mathsf{Syseq}_{\mathrm{\Sigma}}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{E:\mathcal{U}^{I}}\sum_{V:E\nrightharpoondown\mathcal{U}^{I}}\bigg{(}\prod_{i:I}\prod_{e:E_{i}}(T_{\mathrm{\Sigma}}(V_{i}\,e))_{i}\bigg{)}\times\bigg{(}\prod_{i:I}\prod_{e:E_{i}}(T_{\mathrm{\Sigma}}(V_{i}\,e))_{i}\bigg{)}$ (38)

An $S_{\mathrm{\Sigma}}$-algebra $\alpha:S_{\mathrm{\Sigma}}(X)\rightharpoondown X$ satisfies the system of equations $\varepsilon\equiv(E,V,l,r):\mathsf{Syseq}_{\mathrm{\Sigma}}$ if for each $i:I$ there is a proof of

$(\mathsf{Sat}_{\alpha,\varepsilon}(X))_{i}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\forall(e:E_{i}).\forall(\rho:V_{i}\,e\rightharpoondown X).\;((l_{i}\,e)\mathbin{\gg\!=}\rho)=((r_{i}\,e)\mathbin{\gg\!=}\rho)$ (39)

###### Definition (QWI-type).
A _QWI-type_ for an indexed signature $\mathrm{\Sigma}\equiv(I,A,B):\mathsf{Sig}$ and a system of equations $\varepsilon\equiv(E,V,l,r):\mathsf{Syseq}_{\mathrm{\Sigma}}$ is given by an $I$-indexed family of types $\mathsf{QW}:\mathcal{U}^{I}$ equipped with an $S_{\mathrm{\Sigma}}$-algebra structure and a proof that it satisfies the equations:

$\mathsf{qwintro}:S_{\mathrm{\Sigma}}(\mathsf{QW})\rightharpoondown\mathsf{QW}$ (40)
$\mathsf{qwequate}:\mathsf{Sat}_{\mathsf{qwintro},\varepsilon}(\mathsf{QW})$ (41)

together with functions that witness that it is an initial such algebra:

$\mathsf{qwrec}:\prod_{X:\mathcal{U}^{I}}\prod_{\alpha:S_{\mathrm{\Sigma}}(X)\rightharpoondown X}\left(\mathsf{Sat}_{\alpha,\varepsilon}(X)\rightarrow\left(\mathsf{QW}\rightharpoondown X\right)\right)$ (42)
$\mathsf{qwrechom}:\prod_{X:\mathcal{U}^{I}}\prod_{\alpha:S_{\mathrm{\Sigma}}(X)\rightharpoondown X}\prod_{p:\mathsf{Sat}_{\alpha,\varepsilon}(X)}\mathsf{isHom}\,(\mathsf{qwrec}\,X\,\alpha\,p)$ (43)
$\mathsf{qwuniq}:\prod_{X:\mathcal{U}^{I}}\prod_{\alpha:S_{\mathrm{\Sigma}}(X)\rightharpoondown X}\prod_{p:\mathsf{Sat}_{\alpha,\varepsilon}(X)}\prod_{f:\mathsf{QW}\rightharpoondown X}\mathsf{isHom}\,f\rightarrow\mathsf{qwrec}\,X\,\alpha\,p=f$ (44)

## 4. Weakly initial sets of covers

In Section 3 we defined a QWI-type in a topos to be an initial algebra for a given (possibly infinitary) indexed signature and system of equations (Section 3.3). If one interprets these notions in the topos of sets in classical Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC), one regains the usual notion from universal algebra of initial algebras for equational theories that are multi-sorted [BL70] and infinitary [Lin66]. Thus in ZFC, the QWI-type for a signature $\mathrm{\Sigma}=(I,A,B)$ and system of equations $\varepsilon=(E,V,l,r)$ can be constructed by first forming the $I$-indexed family $\mathrm{W}_{\mathrm{\Sigma}}$ of sets of well-founded trees over $\mathrm{\Sigma}$ and then quotienting by the congruence relation $\sim_{\varepsilon}$ on $\mathrm{W}_{\mathrm{\Sigma}}$ generated by $\varepsilon$.
The $I$-indexed family of quotient sets $\mathrm{W}_{\mathrm{\Sigma}}/{\sim_{\varepsilon}}$ yields the desired initial algebra for $(\mathrm{\Sigma},\varepsilon)$ provided the $S_{\mathrm{\Sigma}}$-algebra structure on $\mathrm{W}_{\mathrm{\Sigma}}$ induces one on the quotient sets. It does so, because for each operator in the signature, using the Axiom of Choice (AC) one can pick representatives of the (possibly infinitely many) equivalence classes that are the arguments of the operator, apply the interpretation of the operator in $\mathrm{W}_{\mathrm{\Sigma}}$, and then take the equivalence class of the result. So the topos of sets in ZFC has QWI-types.

Is this use of AC really necessary? [Bla83, Section 9] shows that if one drops AC and just works in ZF, then, provided a certain large cardinal axiom is consistent with ZFC, it is consistent with ZF that there is an infinitary equational theory with no initial algebra. He shows this by first exhibiting a countably presented equational theory whose initial algebra has to be an uncountable regular cardinal; and secondly appealing to the construction of [Git80] of a model of ZF with no uncountable regular cardinals (assuming a certain large cardinal axiom). [LS19] turn the infinitary equational theory of Blass into a higher-inductive type that cannot be proved to exist in ZF (and hence cannot be constructed in type theory just using pushouts and the natural numbers). We show in Section 7.2 that this higher inductive type can be presented as a QWI-type. So one cannot hope to construct QWI-types using a type theory which is interpretable in just ZF.

However, the type theory in which we work (Section 2) already requires going beyond ZF to be able to give a classical set-theoretic interpretation of its universes (by assuming the existence of enough strongly inaccessible cardinals, for example). So the above considerations about the non-existence of initial algebras for infinitary equational theories in ZF do not necessarily rule out the construction of QWI-types in other toposes with natural numbers object and universes. In Section 6, we show that QWI-types exist in a topos if it satisfies a weak choice principle that we call IWISC and that is known to hold for a wide range of toposes. IWISC is a version, for universes in the type theory in which we are working, of a property introduced by [vdBM14], who work in constructive set theory CZF and call it the _Axiom of Multiple Choice_ (building upon previous work by [MP02]); it is related to the _Type Theoretic Collection Axiom_ TTCAf of [Str05] (see Section 4).

###### Definition (Indexed WISC).
A _family_ in a universe $\mathcal{U}$ is just a pair $(A,B)$ where $A:\mathcal{U}$ and $B:A\rightarrow\mathcal{U}$. A family $(C,E)$ in $\mathcal{U}$ is a _wisc_ for $Y:\mathcal{U}$ if for all $X:\mathcal{U}$ and surjective functions $q:X\rightarrow Y$ in $\mathcal{U}$, there exists $c:C$ and $f:E\,c\rightarrow X$ for which $q\circ f:E\,c\rightarrow Y$ is surjective. A family $(C,E)$ is a wisc for a family $(A,B)$ if it is a wisc for each type $B\,a$ in the family. We say that _the universe $\mathcal{U}$ satisfies IWISC_ if for all families $(A,B)$ in $\mathcal{U}$, there exists a family in $\mathcal{U}$ which is a wisc for $(A,B)$. Following [https://ncatlab.org/nlab/show/WISC], “wisc” stands for “weakly initial set of covers”, with the terminology being justified as follows. A _cover_ of $Y:\mathcal{U}$ is just a surjective function $q:X\rightarrow Y$ for some $X:\mathcal{U}$. If $(C,E)$ is a wisc for $Y$ as in the above definition, then the family of all covers of the form $E\,c\rightarrow Y$ with $c:C$ is weakly initial, in the sense that for every cover $q:X\rightarrow Y$ in $\mathcal{U}$, there is a cover in the family that factors through $q$. Note that in classical set theory AC implies IWISC; this is because the Axiom of Choice implies that covers split and hence any family $(A,B)$ is a wisc for itself. IWISC is independent of ZF in classical logic [Kar14, Rob15]. The results of [vdBM14] imply that in constructive logic it is preserved by many ways of making new toposes from existing ones, in particular by the formation of sheaf toposes and realizability toposes over a given topos. Thus it holds in all the toposes commonly used for the semantics of Type Theory. It is in this sense that IWISC is a constructively acceptable form of the axiom of choice. ([Rob15] gives examples of toposes which fail to satisfy it.) ###### Remark . IWISC is a property of a single universe, in contrast to Streicher’s type theoretic formulation of a WISC axiom, TTCAf [Str05], which uses two universes (and which can be shown to imply IWISC). With a single universe it seems necessary to quantify over indexed families in the universe (hence our choice of terminology), rather than just over types in the universe as in _loc. cit._ ; see [Lev21, Section 5.1]. Note that IWISC asks that a wisc merely exists for each family in $\mathcal{U}$. A stronger requirement on $\mathcal{U}$ would be to ask for a function assigning a wisc to each family; and this is equivalent to asking for a function mapping each $B:\mathcal{U}$ to a wisc for $B$. (For note that, given a family $(A,B)$, if we have a function mapping each $a:A$ to a wisc $(C_{a},E_{a})$ for $B\,a$, then the single family $(C,E)$ with $C=\sum_{a:A}C_{a}$ and $E(a,c)=E_{a}\,c$ is a wisc for all the $B\,a$ simultaneously.) [Lev21] calls this property Global WISC. Although the topos of sets in ZFC with Grothendieck universes satisfies Global WISC, it is not known which other toposes do. The following lemma gives a convenient reformulation of the wisc property from Section 4 that we use in the next section. ###### Lemma . Suppose that $A:\mathcal{U}$ has a wisc $(C:\mathcal{U},E:C\rightarrow\mathcal{U})$. If $B:A\rightarrow\mathcal{U}$ and $\phi:\prod_{x:A}(B\,x\rightarrow\mathsf{Prop})$ satisfy $\forall(x:A).\exists(y:B\,x).\;\phi\,x\,y$, then there exist $c:C$, a surjection $p:E\,c\rightarrow A$ and a function $q:\prod_{z:E\,c}B(p\,z)$ satisfying $\forall(z:E\,c).\;\phi\,(p\,z)\,(q\,z)$. ###### Proof. 
Consider the type $\mathrm{\Phi}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{x:A}\sum_{y:B\,x}\,\phi\,x\,y$ in $\mathcal{U}$. By assumption $\pi_{1}:\mathrm{\Phi}\rightarrow A$ is a surjection, so since $(C,E)$ is a wisc for $A$, there exist $c:C$ and $e:E\,c\rightarrow\mathrm{\Phi}$ with $\pi_{1}\circ e$ a surjection. So we can take $p\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\pi_{1}\circ e$ and $q\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\pi_{1}\circ\pi_{2}\circ e$; and then $\pi_{2}(\pi_{2}(e\,z))$ is a proof that $\phi\,(p\,z)\,(q\,z)$ holds. ∎ ## 5\. Size [Swa18] uses IWISC to construct _W-types with reductions_ , which are a simple special case of QWI-types (see Section 7.2). We will see that in fact IWISC implies the existence of all QWI-types, but using a different approach from that of [Swa18]. We show in this section that, starting with a given signature $\mathrm{\Sigma}$ and system of equations $\varepsilon$ (Section 3.2), using IWISC one can construct a well-founded type of “sizes” with the property that colimits of diagrams indexed by such sizes are preserved when taking exponents by the arity types of $\mathrm{\Sigma}$ and $\varepsilon$. This will enable us to construct the QW-type for $(\mathrm{\Sigma},\varepsilon)$ as a colimit in Section 6. We consider the general case of indexed signatures in Section 5.1; and the construction of QWI-types as a sized-indexed colimit is detailed in the accompanying Agda formalization [FPS21]. Size is here playing a role in the constructive type theory of toposes that in classical set theory would be played by ordinal numbers. The next definition gives the properties of size that we need. ###### Definition (Size). A type $\mathsf{Size}$ in a universe $\mathcal{U}$ will be called a _type of sizes_ if it comes equipped with * • a binary relation $\\_<\\_:\mathsf{Size}\rightarrow\mathsf{Size}\rightarrow\mathsf{Prop}$ which is transitive $\forall i,j,k.\;i<j\rightarrow j<k\rightarrow i<k$ (45) and well-founded $\forall(\phi:\mathsf{Size}\rightarrow\mathsf{Prop}).(\forall i.(\forall j.\;{j<i}\rightarrow\phi\,j)\rightarrow\phi\,i)\rightarrow\forall i.\;\phi\,i$ (46) * • a distinguished element $0^{s}$ * • a binary operation $\\_\sqcup^{s}\\_:\mathsf{Size}\rightarrow\mathsf{Size}\rightarrow\mathsf{Size}$ which gives upper bounds with respect to $<$ for pairs of elements $\forall i,j.\;i<i\sqcup^{s}j\;\wedge\;j<i\sqcup^{s}j$ (47) Note that defining ${i}^{+}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}i\sqcup^{s}i$ we get a form of successor operation satisfying $\forall i.\;i<{i}^{+}$ (48) For the examples of types of sizes given in Section 5 this successor operation preserves $<$, but we do not need that property for the results in this paper.444In fact we do not need the distinguished element $0^{s}$ either, but it does no harm to require it and then sizes are directed with respect to $<$ in the usual sense of having upper bounds for all finite subsets, including the empty one. Such a type of sizes has many properties of the classic notion of limit ordinal, except that we do not require the order to be total ($\forall i,j.\;i<j\vee i=j\vee j<i$); that would be too strong in a constructive setting and indeed does not hold in the examples below. Nor do we have any need for an extensionality property ($\forall i,j.\;(\forall k.\;k<i\leftrightarrow k<j)\rightarrow i=j$). In the precursor to this paper [FPS20] we made use of Agda’s built in type $\mathsf{Size}$ and its version of sized types [Abe12]. 
This shares only some of the properties of the above definition. In particular, it has a “stationary” size $\infty$ satisfying $\infty<\infty$ and hence the relation $<$ for Agda’s notion of size is not well-founded.555Unfortunately in the current version of Agda (2.6.2) it is possible to prove well-foundedness of $<$ for its built in type of sizes $\mathsf{Size}$, and this leads immediately to logical unsoundness; see github.com/agda/agda/issues/3026. For this reason we avoid the use of Agda’s sized types in the current Agda development accompanying this paper. ###### Notation (Bounded quantification over sizes). We write $\prod_{j<i}\,\\_$ for $\prod_{j:(\downarrow i)}\,\\_$, where $\downarrow i\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\\{j:\mathsf{Size}\mid j<i\\}$ (49) is the type of sizes below $i:\mathsf{Size}$. Similarly for other binders involving sizes, such as $\forall{j<i}.\;\\_$ and $\lambda\,{j<i}\mathbin{.}\\_$. In the next section we need the fact that well-founded recursion (50) can be reduced to well-founded induction (46) by defining the graph of the recursive function and then appealing to unique choice (7) and function extensionality (6): ###### Proposition (Well-founded recursion). For any family of types $A:\mathsf{Size}\rightarrow\mathcal{U}$ and function $\alpha:\prod_{i}\left(\left(\prod_{j<i}\,A\,j\right)\rightarrow A\,i\right)$, there is a unique function $\mathop{\mathsf{rec}}A\,\alpha:\prod_{i}\,A\,i$ satisfying $\forall i.\;\mathop{\mathsf{rec}}A\,\alpha\,i=\alpha\,i\,(\lambda\,{j<i}\mathbin{.}\mathop{\mathsf{rec}}A\,\alpha\,j)$ (50) ###### Proof. Let $R:\prod_{i}(A\,i\rightarrow\mathsf{Prop})$ be the least666Instead of the impredicative definition of $R$ (and $<$ and $\leq$ in Section 5) as the least relation closed under some rules, in our Agda development such relations are constructed as inductively defined datatypes in the universe $\mathsf{Prop}$, allowing one to use dependent pattern-matching to simplify proofs about them. family of relations satisfying $\textstyle\forall i.\forall(f:\prod_{j<i}\,A\,j).\;(\forall{j<i}.\;R_{j}(f\,j))\rightarrow R_{i}(\alpha\,i\,f)$ (51) The fact that $R$ is least with this property allows one to prove $\forall i.\exists!(x:A_{i}).\;R_{i}\,x$ by applying (46) with $\phi\,i\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\exists!\,(x:A_{i}).\;R_{i}\,x$. So by unique choice (7) there is $r:\prod_{i}\,A\,i$ satisfying $\forall i.\;R_{i}(r\,i)$ and hence by (51) also satisfying $\forall i.\;R_{i}(\alpha\,i\,(\lambda\,{j<i}\mathbin{.}r\,j))$. Since for each $i$ there is a unique $x$ with $R_{i}\,x$, it follows that $r\,i=\alpha\,i\,(\lambda\,{j<i}\mathbin{.}r\,j)$. So we can take $\mathop{\mathsf{rec}}A\,\alpha$ to be $r$. If $r^{\prime}:\prod_{i}\,A\,i$ also satisfies $\forall i.\,r^{\prime}i=\alpha\,i\,(\lambda\,{j<i}\mathbin{.}r^{\prime}j)$, then $\forall i.\,r^{\prime}i=r\,i$ follows from another application of (46), so that $r^{\prime}=r$ by function extensionality (6). ∎ ###### Example (Plump order). Let $\mathsf{Size}:\mathcal{U}$ be the W-type determined by some type $\mathit{Op}:\mathcal{U}$ and family $\mathit{Ar}:\mathit{Op}\rightarrow\mathcal{U}$ (see Section 3.1). Thus $\mathsf{Size}$ is inductively defined with constructor $\sigma:(\sum_{a:\mathit{Op}}(\mathit{Ar}\,a\rightarrow\mathsf{Size}))\rightarrow\mathsf{Size}$. 
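In the running Lean 4 sketch, this W-type of sizes is just a two-parameter variant of the `W` type above, and the well-founded recursion principle of the preceding Proposition corresponds to what Lean's standard library provides as `WellFounded.fix` (the uniqueness part of the Proposition is extra). The plump order defined next can then be rendered as a pair of mutually inductive relations; here we only record the underlying trees (names ours):

```lean
-- The W-type of sizes determined by a signature (Op, Ar):
inductive SizeT (Op : Type) (Ar : Op → Type) : Type where
  | sup : (a : Op) → (Ar a → SizeT Op Ar) → SizeT Op Ar

-- Lean's library form of well-founded recursion, cf. (50):
--   WellFounded.fix : WellFounded r →
--     (∀ x, (∀ y, r y x → C y) → C x) → ∀ x, C x
#check @WellFounded.fix
```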
The _plump_ ordering [Tay96] on $\mathsf{Size}$ is given by the least relations ${\\_<\\_},{\\_\leq\\_}:\mathsf{Size}\rightarrow\mathsf{Size}\rightarrow\mathsf{Prop}$ satisfying for all $a:\mathit{Op}$, $b:\mathit{Ar}\,a\rightarrow\mathsf{Size}$ and $i:\mathsf{Size}$ $\displaystyle(\forall(x:\mathit{Ar}\,a).\;b\,x<i)\rightarrow\sigma\,a\,b\leq i$ (52) $\displaystyle(\exists(x:\mathit{Ar}\,a).\;i\leq b\,x)\rightarrow i<\sigma\,a\,b$ (53) It is not hard to see that the relation $<$ is transitive (45) and well-founded (46). Furthermore one can prove that $\leq$ is reflexive and hence from (53) we get $\forall(x:\mathit{Ar}\,a).\;b\,x<\sigma\,a\,b$ (54) So for each operation $a:\mathit{Op}$, every family $b:\mathit{Ar}\,a\rightarrow\mathsf{Size}$ of elements of $\mathsf{Size}$ indexed by its arity $\mathit{Ar}\,a$ has an upper bound with respect to $<$, namely $\sigma\,a\,b$. In particular, if the signature $(\mathit{Op},\mathit{Ar})$ contains an operation $a_{2}:\mathit{Op}$ whose arity is $\mathit{Ar}\,a_{2}=\mathbb{2}$, then an upper bound for any $i,j:\mathsf{Size}$ with respect to $<$ is given by $i\sqcup^{s}j\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sigma\,a_{2}\,b_{i,j}$ (55) with $b_{i,j}:\mathbb{2}\rightarrow\mathsf{Size}$ defined by $b_{i,j}\,0=i$ and $b_{i,j}\,1=j$. So (47) is satisfied in this case. Similarly, if the signature $(\mathit{Op},\mathit{Ar})$ contains an operation $a_{0}:\mathit{Op}$ whose arity is $\mathit{Ar}\,a_{0}=\mathbb{0}$, then $\mathsf{Size}$ contains a distinguished element $0^{s}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sigma\,a_{0}\,b$ (56) with $b:\mathbb{0}\rightarrow\mathsf{Size}$ uniquely determined. Therefore, we have:

###### Lemma .

If the signature $\mathit{Op}:\mathcal{U},\mathit{Ar}:\mathit{Op}\rightarrow\mathcal{U}$ contains a nullary operation ($a_{0}:\mathit{Op}$ with $\mathit{Ar}\,a_{0}=\mathbb{0}$) and a binary operation ($a_{2}:\mathit{Op}$ with $\mathit{Ar}\,a_{2}=\mathbb{2}$), then the corresponding W-type $\mathsf{Size}:\mathcal{U}$, equipped with the plump order $<$, is a type of sizes in the sense of Section 5. Furthermore for any $a:\mathit{Op}$, every $(\mathit{Ar}\,a)$-indexed family of sizes $b:\mathit{Ar}\,a\rightarrow\mathsf{Size}$ has an upper bound with respect to $<$ (namely $\sigma\,a\,b$). ∎

Given a signature $\mathrm{\Sigma}$ in a universe $\mathcal{U}$ satisfying IWISC, the lemma allows us to prove the existence of a type of sizes $\mathsf{Size}$ with enough upper bounds to be able to prove a cocontinuity property of the polynomial endofunctor associated with $\mathrm{\Sigma}$ (Section 5.1 below); we apply this in Section 6 to construct QW-types for a given signature and system of equations (and QWI-types for indexed signatures and systems of equations as defined in Section 3.3). The cocontinuity property has to do with colimits of $\mathsf{Size}$-indexed diagrams in $\mathcal{U}$. To state it, we first recall some semi-standard category-theoretic notions to do with diagrams and colimits. (They are only _semi_-standard because $<$ is not reflexive (indeed it is irreflexive, because of well-foundedness), so that $(\mathsf{Size},<)$ is only a (thin) _semi_-category.)
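Before moving on, we note that the plump ordering can be rendered as the mutually inductive $\mathsf{Prop}$-valued datatypes mentioned in the proof of well-founded recursion above (a sketch, continuing the Agda module from the previous listing; the existential premise of (53) becomes a constructor argument):

```agda
  -- Plump ordering (52)-(53) as least relations, defined mutually.
  data _<_ : Size → Size → Prop
  data _≤_ : Size → Size → Prop

  data _<_ where
    -- (53): a witness x : Ar a with i ≤ b x yields i < σ a b
    <-in : ∀ {i a b} (x : Ar a) → i ≤ b x → i < σ a b

  data _≤_ where
    -- (52): σ a b ≤ i when every immediate subtree is below i
    ≤-in : ∀ {i a b} → ((x : Ar a) → b x < i) → σ a b ≤ i
```

Transitivity (45) and well-foundedness (46) of $<$ are then provable by induction on $\mathsf{Size}$.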
### 5.1. Colimits of size-indexed diagrams

By definition, given a type of sizes $\mathsf{Size}$ in a universe $\mathcal{U}$ (Section 5), a _$\mathsf{Size}$-indexed diagram_ is given by a family of types $D:\mathsf{Size}\rightarrow\mathcal{U}$ equipped with functions $\delta_{i,j}:D_{i}\rightarrow D_{j}$ for all $i,j:\mathsf{Size}$ with $i<j$, satisfying $\forall(i,j,k:\mathsf{Size}).\;i<j<k\rightarrow\delta_{i,k}=\delta_{j,k}\circ\delta_{i,j}$ (57) As usual, the _colimit_ of such a diagram is a cocone of functions in $\mathcal{U}$ $(\nu_{D})_{i}:D_{i}\rightarrow\mathop{\mathsf{colim}}D\qquad\forall(i,j:\mathsf{Size}).\;i<j\rightarrow(\nu_{D})_{i}=(\nu_{D})_{j}\circ\delta_{i,j}$ (58) with the universal property that for any other cocone $f_{i}:D_{i}\rightarrow C\qquad\forall(i,j:\mathsf{Size}).\;i<j\rightarrow f_{i}=f_{j}\circ\delta_{i,j}$ there is a unique function $f:\mathop{\mathsf{colim}}D\rightarrow C$ satisfying $\forall i.\;f_{i}=f\circ(\nu_{D})_{i}$. Colimits can be constructed using quotient types: we define a binary relation $\\_{\sim}\\_$ on $\sum_{i}\,D_{i}$ by: $(i,x)\sim(j,y)\;\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\;\exists(k:\mathsf{Size}).\;{i<k}\;\wedge\;{j<k}\;\wedge\;{\delta_{i,k}\,x=\delta_{j,k}\,y}$ (59) This is an equivalence relation, because $(\mathsf{Size},{<})$ has property (47). Quotienting by it yields $\textstyle\mathop{\mathsf{colim}}D\;\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\;\left(\sum_{i}D_{i}\right)/{\sim}$ (60) with the universal cocone functions $(\nu_{D})_{i}$ given by mapping each $x:D_{i}$ to the equivalence class $[i,x]_{\sim}$.

###### Definition (Preservation of colimits by taking a power).

Given a $\mathsf{Size}$-indexed diagram $(D,\delta)$ in $\mathcal{U}$ and a type $X:\mathcal{U}$, we get a _power_ diagram $(D^{X},\delta^{X})$ in $\mathcal{U}$ with $\displaystyle(D^{X})_{i}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}X\rightarrow D_{i}$ $\displaystyle(i:\mathsf{Size})$ (61) $\displaystyle(\delta^{X})_{i,j}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\lambda\,(f:X\rightarrow D_{i})\mathbin{.}\delta_{i,j}\circ f$ $\displaystyle(i,j:\mathsf{Size})$ Post-composition with $(\nu_{D})_{i}$ gives a cocone under the diagram $D^{X}$ with vertex $X\rightarrow\mathop{\mathsf{colim}}D$ and by universality this induces a function $\displaystyle\kappa_{D,X}:\mathop{\mathsf{colim}}(D^{X})\rightarrow(X\rightarrow\mathop{\mathsf{colim}}D)$ (62) $\displaystyle\kappa_{D,X}\,[i,f]_{\sim}=(\nu_{D})_{i}\circ f$ One says that _taking a power by $X:\mathcal{U}$ preserves $\mathsf{Size}$-indexed colimits_ if for all $\mathsf{Size}$-indexed diagrams $(D,\delta)$ in $\mathcal{U}$ the function $\kappa_{D,X}$ is an isomorphism.

###### Theorem (Cocontinuity).

Suppose $\mathcal{U}$ is a universe satisfying IWISC in a topos with natural numbers object. Given $A:\mathcal{U}$ and $B:A\rightarrow\mathcal{U}$, there exists a type of sizes $\mathsf{Size}:\mathcal{U}$ with the property that for all $a:A$, taking a power by the type $B\,a:\mathcal{U}$ preserves $\mathsf{Size}$-indexed colimits.

###### Proof.

The proof is in three parts, (1)–(3). In part (1) we show how to find a suitable type of sizes $\mathsf{Size}$ using Section 5; so $\mathsf{Size}$ is a W-type for a suitable signature derived from $A:\mathcal{U}$ and $B:A\rightarrow\mathcal{U}$. The signature contains arities arising from a wisc whose existence is guaranteed by IWISC.
In fact we need to consider not only the covers in such a wisc, but also a wisc for the family of (subtypes of) covers in this wisc, for the same reason that Swan uses “2-cover bases” [Swa18, Section 3.2.2]. Next, we have to prove that (62) is an isomorphism when $X:\mathcal{U}$ is $B\,a$, for any $a:A$; and for this it suffices to prove that it is (2) injective and (3) surjective. For then we can apply unique choice (7) to construct a two-sided inverse for $\kappa_{D,X}$. By definition of $\sim$ (59), $\kappa_{D,X}$ is injective iff $\forall(i,j:\mathsf{Size}).\forall(f:X\rightarrow D_{i}).\forall(f^{\prime}:X\rightarrow D_{j}).\;(\nu_{D})_{i}\circ f=(\nu_{D})_{j}\circ f^{\prime}\rightarrow{}\\\ \exists(k:\mathsf{Size}).\;{i<k}\;\wedge\;{j<k}\;\wedge\;{\delta_{i,k}\circ f=\delta_{j,k}\circ f^{\prime}}$ (63) and surjective iff $\forall(f:X\rightarrow\mathop{\mathsf{colim}}D).\exists(i:\mathsf{Size}).\exists(f^{\prime}:X\rightarrow D_{i}).\;(\nu_{D})_{i}\circ f^{\prime}=f$ (64) 1. (1) _Construction of $\mathsf{Size}$_: Given $A:\mathcal{U}$ and $B:A\rightarrow\mathcal{U}$, using IWISC let $(C,F)$ be a wisc for the family $(A,B)$. For each $c:C$, $a:A$ and function $f:F\,c\rightarrow B\,a$ we can form the kernel of $f$ $\textstyle\mathsf{Ker}f\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{x,x^{\prime}:F\,c}(f\,x=f\,x^{\prime})$ (65) Applying IWISC again, let $(C^{\prime},F^{\prime})$ be a wisc for this family of kernels indexed by $(c,a,f):\sum_{(c,a):C\times A}(F\,c\rightarrow B\,a)$. Finally, consider the signature with $\mathit{Op}=\mathbb{1}+\mathbb{1}+A+C+C^{\prime}$ and $\mathit{Ar}:\mathit{Op}\rightarrow\mathcal{U}$ the function mapping $0:\mathbb{1}$ in the first summand of $\mathit{Op}$ to $\mathbb{0}$, $0:\mathbb{1}$ in the second summand to $\mathbb{2}$, $a:A$ in the third summand to $B\,a$, $c:C$ in the fourth summand to $F\,c$ and $c^{\prime}:C^{\prime}$ in the fifth summand to $F^{\prime}c^{\prime}$. Then as in Section 5, the W-type for the signature $(\mathit{Op},\mathit{Ar})$ is a type of sizes. Call it $\mathsf{Size}$. 2. (2) _Proof of ( 63)_: Recall that $X:\mathcal{U}$ is of the form $B\,a$ for some $a:A$. Suppose we have $f:X\rightarrow D_{i}$ and $f^{\prime}:X\rightarrow D_{j}$ satisfying $(\nu_{D})_{i}\circ f=(\nu_{D})_{j}\circ f^{\prime}$. So by definition of $\mathop{\mathsf{colim}}D$ (60) we have $\forall(x:X).\exists(k:\mathsf{Size}).\;i<k\;\wedge\;j<k\;\wedge\;\delta_{i,k}(f\,x)=\delta_{j,k}(f^{\prime}\,x)$ (66) Using the version of the wisc property of $(C,F)$ from Section 4, there is $c:C$, $p:F\,c\rightarrow X$ and $s:F\,c\rightarrow\mathsf{Size}$ with $p$ surjective and satisfying $\forall(z:F\,c).\;i<s\,z\;\wedge\;j<s\,z\;\wedge\;\delta_{i,s\,z}(f(p\,z))=\delta_{j,s\,z}(f^{\prime}(p\,z))$ (67) Since $F\,c$ is one of the arities in the signature of the W-type $\mathsf{Size}$, by Section 5 we have that $s:F\,c\rightarrow\mathsf{Size}$ has an upper bound, i.e. there is $k:\mathsf{Size}$ with $\forall(z:F\,c).\;s\,z<k$; and by (47) we can assume further that $i<k$ and $j<k$. Furthermore, from (67) and (57) we get $\forall(z:F\,c).\;\delta_{i,k}(f(p\,z))=\delta_{j,k}(f^{\prime}(p\,z))$; but $p$ is surjective, so $\delta_{i,k}\circ f=\delta_{j,k}\circ f^{\prime}$, as required for (63). 3. (3) _Proof of ( 64)_: Suppose we have $f:X\rightarrow\mathop{\mathsf{colim}}D$. 
Since the quotient function $[\\_]_{\sim}:D_{i}\rightarrow\mathop{\mathsf{colim}}D$ is surjective, using Section 4 again, there is $c:C$, $p:F\,c\rightarrow X$, $s:F\,c\rightarrow\mathsf{Size}$ and $g:\prod_{z:F\,c}D_{s\,z}$ with $p$ surjective and satisfying $\forall(z:F\,c).\;f(p\,z)=[s\,z,g\,z]_{\sim}$. As before, we have that $s:F\,c\rightarrow\mathsf{Size}$ has an upper bound, i.e. there is $j:\mathsf{Size}$ with $\forall(z:F\,c).\;s\,z<j$. So we get a function $g^{\prime}:F\,c\rightarrow D_{j}$ by defining $g^{\prime}\,z\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\delta_{s\,z,j}(g\,z)$ and hence $f(p\,z)=[s\,z,g\,z]_{\sim}=[j,\delta_{s\,z,j}(g\,z)]_{\sim}=(\nu_{D})_{j}(g^{\prime}\,z)$ (68) Let $Y\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathsf{Ker}\,p$ be the kernel of $p$ as in (65). Then for any $(z,z^{\prime},\\_):Y$, since $p\,z=p\,z^{\prime}$, we have $(\nu_{D})_{j}(g^{\prime}\,z)=f(p\,z)=f(p\,z^{\prime})=(\nu_{D})_{j}(g^{\prime}\,z^{\prime})$ and hence $\exists k.\;j<k\wedge{\delta_{j,k}(g^{\prime}\,z)=\delta_{j,k}(g^{\prime}\,z^{\prime})}$. So we can apply Section 4 again to deduce the existence of $c^{\prime}:C^{\prime}$, $\langle p_{1}^{\prime},p_{2}^{\prime},\\_\rangle:F^{\prime}c^{\prime}\rightarrow Y$ and $s^{\prime}:F^{\prime}c^{\prime}\rightarrow\mathsf{Size}$ with $\langle p_{1}^{\prime},p_{2}^{\prime}\rangle$ surjective and satisfying $\forall(z^{\prime}:F^{\prime}c^{\prime}).\;{j<s^{\prime}z^{\prime}}\;\wedge\;\delta_{j,s^{\prime}z^{\prime}}(g^{\prime}(p_{1}^{\prime}\,z^{\prime}))=\delta_{j,s^{\prime}z^{\prime}}(g^{\prime}(p_{2}^{\prime}\,z^{\prime}))$ (69) Since $F^{\prime}c^{\prime}$ is one of the arities in the signature of the W-type $\mathsf{Size}$, by Section 5 we have that the family of sizes $s^{\prime}:F^{\prime}c^{\prime}\rightarrow\mathsf{Size}$ has an upper bound, $i$ say; and by (47) we can assume $j<i$. Let $g^{\prime\prime}:F\,c\rightarrow D_{i}$ be $\delta_{j,i}\circ g^{\prime}$. Thus from (68) we have $f(p\,z)=(\nu_{D})_{i}(g^{\prime\prime}\,z)$ (70) and from (69) also $g^{\prime\prime}\circ p_{1}^{\prime}=g^{\prime\prime}\circ p_{2}^{\prime}$. So altogether we have functions $F^{\prime}c^{\prime}\xrightarrow{\;\langle p_{1}^{\prime},p_{2}^{\prime}\rangle\;}Y\overset{\pi_{1}}{\underset{\pi_{2}}{\rightrightarrows}}F\,c\xrightarrow{\;g^{\prime\prime}\;}D_{i}$ (71) with $g^{\prime\prime}\circ\pi_{1}\circ\langle p_{1}^{\prime},p_{2}^{\prime}\rangle=g^{\prime\prime}\circ p_{1}^{\prime}=g^{\prime\prime}\circ p_{2}^{\prime}=g^{\prime\prime}\circ\pi_{2}\circ\langle p_{1}^{\prime},p_{2}^{\prime}\rangle$. Since $\langle p_{1}^{\prime},p_{2}^{\prime}\rangle$ is surjective, this implies $g^{\prime\prime}\circ\pi_{1}=g^{\prime\prime}\circ\pi_{2}$. But since $Y=\mathsf{Ker}\,p$, $p$ is the coequalizer of $\pi_{1}$ and $\pi_{2}$ (since as we noted in Section 2, toposes have effective epimorphisms); therefore there is a unique function $f^{\prime}:X\rightarrow D_{i}$ satisfying $f^{\prime}\circ p=g^{\prime\prime}$. Since $(\nu_{D})_{i}\circ f^{\prime}\circ p=(\nu_{D})_{i}\circ g^{\prime\prime}=f\circ p$ by (70) and $p$ is surjective, we have $(\nu_{D})_{i}\circ f^{\prime}=f$, completing the proof of (64) and hence of Section 5.1. ∎
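For orientation, the signature for $\mathsf{Size}$ constructed in part (1) can be transcribed as follows (an Agda sketch with hypothetical module parameters standing for the given family and the two wiscs supplied by IWISC):

```agda
open import Data.Bool  using (Bool)
open import Data.Empty using (⊥)

module SizeSignature
  (A : Set)  (B : A → Set)    -- the given arity family
  (C : Set)  (F : C → Set)    -- a wisc for (A , B)
  (C′ : Set) (F′ : C′ → Set)  -- a wisc for the family of kernels (65)
  where

  data Op : Set where
    a₀   : Op        -- nullary operation, arity 𝟘: provides 0ˢ
    a₂   : Op        -- binary operation, arity 𝟚: provides _⊔ˢ_
    arg  : A → Op    -- arity B a
    cov  : C → Op    -- arity F c
    cov′ : C′ → Op   -- arity F′ c′

  Ar : Op → Set
  Ar a₀        = ⊥
  Ar a₂        = Bool
  Ar (arg a)   = B a
  Ar (cov c)   = F c
  Ar (cov′ c′) = F′ c′
```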
###### Remark .

Note that the type of sizes constructed in the proof of the theorem has the property that for any $a:A$, $\mathsf{Size}$ has upper bounds (with respect to $<$) for families of sizes indexed by $B\,a$ (because by construction $B\,a$ is one of the arities in the signature of the W-type $\mathsf{Size}$ and so Section 5 applies). Such upper bounds are not needed in the above proof. We included them because they are needed below in the proof of Section 6 when proving property (89).

###### Definition (Preservation of colimits by polynomial endofunctors).

If $(D,\delta)$ is a $\mathsf{Size}$-indexed diagram in $\mathcal{U}$, then we get another such, $(S_{\mathrm{\Sigma}}\circ D,S_{\mathrm{\Sigma}}\circ\delta)$, by composing with the polynomial endofunctor $S_{\mathrm{\Sigma}}:\mathcal{U}\rightarrow\mathcal{U}$ associated with the signature $\mathrm{\Sigma}$ as in (9): $\displaystyle(S_{\mathrm{\Sigma}}\circ D)_{i}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}S_{\mathrm{\Sigma}}(D_{i})$ $\displaystyle(i:\mathsf{Size})$ (72) $\displaystyle(S_{\mathrm{\Sigma}}\circ\delta)_{i,j}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}S_{\mathrm{\Sigma}}(\delta_{i,j})$ $\displaystyle(i,j:\mathsf{Size})$ Applying $S_{\mathrm{\Sigma}}$ to (58) gives a cocone under $(S_{\mathrm{\Sigma}}\circ D,S_{\mathrm{\Sigma}}\circ\delta)$ with vertex $S_{\mathrm{\Sigma}}(\mathop{\mathsf{colim}}D)$ and this induces a function in $\mathcal{U}$, $\kappa_{D,\mathrm{\Sigma}}:\mathop{\mathsf{colim}}(S_{\mathrm{\Sigma}}\circ D)\rightarrow S_{\mathrm{\Sigma}}(\mathop{\mathsf{colim}}D)$ (73) One says that _the polynomial endofunctor $S_{\mathrm{\Sigma}}$ preserves $\mathsf{Size}$-indexed colimits_ if $\kappa_{D,\mathrm{\Sigma}}$ is an isomorphism for all diagrams $(D,\delta)$.

###### Corollary (Cocontinuity of $S_{\mathrm{\Sigma}}$).

In a topos with natural numbers object, given a signature $\mathrm{\Sigma}=(A:\mathcal{U},B:A\rightarrow\mathcal{U})$ in a universe $\mathcal{U}$ satisfying IWISC, there exists a type of sizes $\mathsf{Size}$ with the property that the associated polynomial endofunctor $S_{\mathrm{\Sigma}}:\mathcal{U}\rightarrow\mathcal{U}$ preserves $\mathsf{Size}$-indexed colimits.

###### Proof.

We apply Section 5.1 to find a type of sizes $\mathsf{Size}$ for the given $A:\mathcal{U}$ and $B:A\rightarrow\mathcal{U}$. So for each diagram $(D,\delta)$ in $\mathcal{U}$, we have properties (63) and (64) when $X$ is $B\,a$, for any $a:A$. The function (73) satisfies for all $i:\mathsf{Size}$, $a:A$ and $b:B\,a\rightarrow D_{i}$ $\kappa_{D,\mathrm{\Sigma}}[i,(a,b)]_{\sim}=(a,(\nu_{D})_{i}\circ b)$ (74) We have that $\kappa_{D,\mathrm{\Sigma}}$ is injective, because if $(a,(\nu_{D})_{i}\circ b)=(a^{\prime},(\nu_{D})_{j}\circ b^{\prime})$, then $a=a^{\prime}$ and $(\nu_{D})_{i}\circ b=(\nu_{D})_{j}\circ b^{\prime}$; but then from (63) with $X=B\,a=B\,a^{\prime}$, there is some $k$ with $i,j<k$ and $\delta_{i,k}\circ b=\delta_{j,k}\circ b^{\prime}$, so that $[i,(a,b)]_{\sim}$ and $[j,(a^{\prime},b^{\prime})]_{\sim}$ are equal terms of $\mathop{\mathsf{colim}}(S_{\mathrm{\Sigma}}\circ D)$. Furthermore, $\kappa_{D,\mathrm{\Sigma}}$ is surjective because given $(a,b):S_{\mathrm{\Sigma}}(\mathop{\mathsf{colim}}D)$, from (64) with $X=B\,a$, there exist $i:\mathsf{Size}$ and $b^{\prime}:B\,a\rightarrow D_{i}$ with $(\nu_{D})_{i}\circ b^{\prime}=b$; so $[i,(a,b^{\prime})]_{\sim}$ in $\mathop{\mathsf{colim}}(S_{\mathrm{\Sigma}}\circ D)$ is mapped by $\kappa_{D,\mathrm{\Sigma}}$ to $(a,b)$.
Since $\kappa_{D,\mathrm{\Sigma}}$ is both injective and surjective, we can apply unique choice (7) to conclude that it is an isomorphism. ∎

###### Remark (Cocontinuity for polynomial endofunctors on $\mathcal{U}^{I}$).

There are indexed versions of the definitions and results of Section 5.1. To state them for an indexing type $I:\mathcal{U}$, we need to consider $\mathsf{Size}$-indexed diagrams in $\mathcal{U}^{I}$ and their colimits. Since the latter are given pointwise by colimits in $\mathcal{U}$, this makes what follows a simple extension of the previous development. Given a $\mathsf{Size}$-indexed diagram $(D,\delta)$ in $\mathcal{U}^{I}$ and a family $X:\mathcal{U}^{I}$, the indexed version of (61) is a _power_ diagram $(D^{X},\delta^{X})$ in $\mathcal{U}$ with $\displaystyle(D^{X})_{i}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}X\rightharpoondown D_{i}$ $\displaystyle(i:\mathsf{Size})$ (75) $\displaystyle(\delta^{X})_{i,j}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\lambda\,(f:X\rightharpoondown D_{i})\mathbin{.}\delta_{i,j}\circ f$ $\displaystyle(i,j:\mathsf{Size})$ Post-composition with $(\nu_{D})_{i}$ gives a cocone under the diagram $D^{X}$ with vertex $X\rightharpoondown\mathop{\mathsf{colim}}D$ and this induces a function which is the indexed version of (62): $\displaystyle\kappa_{D,X}:\mathop{\mathsf{colim}}(D^{X})\rightarrow(X\rightharpoondown\mathop{\mathsf{colim}}D)$ (76) $\displaystyle\kappa_{D,X}\,[i,f]_{\sim}=(\nu_{D})_{i}\circ f$ One says that _taking a power by $X:\mathcal{U}^{I}$ preserves $\mathsf{Size}$-indexed colimits_ if for all $\mathsf{Size}$-indexed diagrams $(D,\delta)$ in $\mathcal{U}^{I}$ the function $\kappa_{D,X}$ is an isomorphism. Then we have:

> _If $\mathcal{U}$ is a universe satisfying IWISC in a topos with natural numbers object, then given $A,I:\mathcal{U}$ and $B:A\rightarrow\mathcal{U}^{I}$, there exists a type of sizes $\mathsf{Size}:\mathcal{U}$ with the property that for all $a:A$, taking a power by a family $B\,a:\mathcal{U}^{I}$ preserves $\mathsf{Size}$-indexed colimits._

The proof of this is similar to the proof of Section 5.1 and can be found in the Agda development accompanying this paper [FPS21].
Given a $\mathsf{Size}$-indexed diagram $(D,\delta)$ in $\mathcal{U}^{I}$ and an $I$-indexed signature $\mathrm{\Sigma}$ as in (34), we get another $\mathsf{Size}$-indexed diagram, $(S_{\mathrm{\Sigma}}\circ D,S_{\mathrm{\Sigma}}\circ\delta)$, by composing with the polynomial endofunctor $S_{\mathrm{\Sigma}}:\mathcal{U}^{I}\rightarrow\mathcal{U}^{I}$: $\displaystyle(S_{\mathrm{\Sigma}}\circ D)_{i}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}S_{\mathrm{\Sigma}}(D_{i})$ $\displaystyle(i:\mathsf{Size})$ (77) $\displaystyle(S_{\mathrm{\Sigma}}\circ\delta)_{i,j}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}S_{\mathrm{\Sigma}}(\delta_{i,j})$ $\displaystyle(i,j:\mathsf{Size})$ Applying $S_{\mathrm{\Sigma}}$ to the $I$-indexed version of (58) gives a cocone under $(S_{\mathrm{\Sigma}}\circ D,S_{\mathrm{\Sigma}}\circ\delta)$ with vertex $S_{\mathrm{\Sigma}}(\mathop{\mathsf{colim}}D)$ and this induces a family of functions in $\mathcal{U}^{I}$ $\kappa_{D,\mathrm{\Sigma}}:\mathop{\mathsf{colim}}(S_{\mathrm{\Sigma}}\circ D)\rightharpoondown S_{\mathrm{\Sigma}}(\mathop{\mathsf{colim}}D)$ (78) One says that _the polynomial endofunctor $S_{\mathrm{\Sigma}}:\mathcal{U}^{I}\rightarrow\mathcal{U}^{I}$ preserves $\mathsf{Size}$-indexed colimits_ if $\kappa_{D,\mathrm{\Sigma}}$ is a family of isomorphisms, for all diagrams $(D,\delta)$. Then the indexed generalization of Section 5.1 is:

> _In a topos with natural numbers object, given an indexed signature $\mathrm{\Sigma}=(I,A,B)$ in a universe $\mathcal{U}$ satisfying IWISC, there exists a type of sizes $\mathsf{Size}$ with the property that the associated polynomial endofunctor $S_{\mathrm{\Sigma}}:\mathcal{U}^{I}\rightarrow\mathcal{U}^{I}$ preserves $\mathsf{Size}$-indexed colimits._

This is proved as a corollary of the indexed version of Section 5.1 given above, and can also be found in the accompanying Agda development [FPS21].

## 6\. Construction of QWI-types

We aim to prove the following theorem about existence of QWI-types (Section 3.3):

###### Theorem .

Suppose $\mathcal{U}$ is a universe satisfying IWISC in a topos with natural numbers object. Then for every indexed signature and system of equations in $\mathcal{U}$, there exists a QWI-type for it.

The proof follows from the cocontinuity results of the previous section (the indexed versions of the cocontinuity theorem and its corollary, given in the remark at the end of Section 5.1). For simplicity, here we only give the proof for QW-types (Section 3.2), that is, for the non-indexed $I=\mathbb{1}$ case of signatures. The general case is similar, but notationally more involved since there are indices ranging both over an index type and over sizes. The proof for the general, indexed case can be found in our Agda development [FPS21]. _So in this section we fix a signature $\mathrm{\Sigma}=(A:\mathcal{U},B:A\rightarrow\mathcal{U})$ and system of equations $\varepsilon=(E:\mathcal{U},V:E\rightarrow\mathcal{U},l,r:\prod_{e:E}T_{\mathrm{\Sigma}}(V\,e))$ over it in some universe $\mathcal{U}$ satisfying IWISC in a topos with natural numbers object._

### 6.1. Motivating the construction

We noted at the start of Section 4 that QW-types can be constructed in the category of ZFC sets as initial algebras for possibly infinitary equational theories by first forming the W-type of terms of the theory and then quotienting that by the congruence relation generated by the equations of the theory.
AC is used when constructing the algebra structure of the quotient, because the signature’s arities may be infinite. To avoid this use of AC, instead of forming all terms in one go and then quotienting, we consider interleaving quotient formation with the construction of terms of free algebras for equational systems (cf. the categorical construction by [FH09]). $\begin{array}[]{l}\mathtt{mutual}\\\ \quad\mathtt{data}\;W:\mathcal{U}\;\mathtt{where}\\\ \quad\quad q\tau:T_{\mathrm{\Sigma}}(W/{\sim})\rightarrow W\\\ \quad\mathtt{data}\;\\_\sim\\_:W\rightarrow W\rightarrow\mathsf{Prop}\;\mathtt{where}\\\ \quad\quad q\varepsilon:\forall(e:E).\forall(\rho:V\;e\rightarrow W/{\sim}).\;q\tau(T_{\mathrm{\Sigma}}\,\rho\,(l\,e))\sim q\tau(T_{\mathrm{\Sigma}}\,\rho\,(r\,e))\\\ \quad\quad q\eta:\forall(t:T_{\mathrm{\Sigma}}(W/{\sim})).\;q\tau(\eta\,[q\tau\,t]_{\sim})\sim q\tau\,t\\\ \quad\quad q\sigma:\forall(a:A).\forall(b:B\,a\rightarrow T_{\mathrm{\Sigma}}(W/{\sim})).\;q\tau(\sigma(a,b))\sim q\tau(\sigma(a,\eta\circ[\\_]_{\sim}\circ q\tau\circ b))\\\ \mathsf{QW}=W/{\sim}\end{array}$ Figure 2. First attempt at constructing QW- types Figure 2 gives the idea, using the constructors $\eta$ and $\sigma$ from (13) and Agda-like notation for inductive definitions. We would like to construct the QW-type for $(\mathrm{\Sigma},\varepsilon)$ as a quotient $\mathsf{QW}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}W/{\sim}$, but now the type $W:\mathcal{U}$ and the relation $\\_\sim\\_:W\rightarrow W\rightarrow\mathsf{Prop}$ are mutually inductively defined, with constructors as indicated in the figure. Note that whereas the construction in ZFC uses AC to get an $S_{\mathrm{\Sigma}}$-algebra structure for $\mathsf{QW}$, here we get one trivially from the constructor $q\tau:T_{\mathrm{\Sigma}}(W/{\sim})\rightarrow W$: $S_{\mathrm{\Sigma}}(\mathsf{QW})\equiv S_{\mathrm{\Sigma}}(W/{\sim})\xrightarrow{S_{\mathrm{\Sigma}}\,\eta}S_{\mathrm{\Sigma}}(T_{\mathrm{\Sigma}}(W/{\sim}))\xrightarrow{\sigma}T_{\mathrm{\Sigma}}(W/{\sim})\xrightarrow{q\tau}W\xrightarrow{[\\_]_{\sim}}W/{\sim}\equiv\mathsf{QW}$ (79) Furthermore the property $q\varepsilon$ of $\sim$ in Figure 2 ensures that the $S_{\mathrm{\Sigma}}$-algebra $W/{\sim}$ satisfies the equational system $\varepsilon$. The use of $T_{\mathrm{\Sigma}}$ rather than $S_{\mathrm{\Sigma}}$ in the domain of $q\tau$ seems necessary for this method of construction to go through (once we have fixed up the problems mentioned in the next paragraph); but it does mean that as well as $q\varepsilon$, we have to impose the conditions $q\eta$ and $q\sigma$ to ensure that $W/{\sim}$ has a $S_{\mathrm{\Sigma}}$-algebra structure, or equivalently, an algebra structure for the monad $T_{\mathrm{\Sigma}}$. Note that the domain of the constructor $q\tau$ combines $T_{\mathrm{\Sigma}}(\\_)$ with $\\_/\\_$. While the first is unproblematic for inductive definitions, the second is not: if one thinks of the semantics of inductively defined types in terms of initial algebras of endofunctors, it is not clear what endofunctor (in some class known to have initial algebras) is involved here, given that both arguments to $\\_/\\_$ are being defined simultaneously. Agda uses a notion of “strict positivity” as a conservative approximation for such a class of functors; and one can instruct Agda to regard quotienting as a strictly positive operation through the use of its POLARITY declarations. If one does so, then a definition like the one in Figure 2 is accepted by Agda. 
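For the record, the Agda idiom alluded to looks roughly as follows (a hypothetical sketch: `_/_` is a postulated quotient former, and the exact syntax of the POLARITY pragma should be checked against the Agda manual):

```agda
{-# OPTIONS --prop #-}

postulate
  _/_ : (A : Set) → (A → A → Prop) → Set

-- Assert, without semantic justification, that A / R is strictly
-- positive (++) in A, so that the mutual definition of W and _∼_ in
-- Figure 2 passes Agda's positivity checker.
{-# POLARITY _/_ ++ #-}
```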
The semantic justification for regarding quotients as strictly positive constructions needs further investigation. We avoid the need for that here and replace the attempt to define $W$ and $\sim$ inductively by a size-indexed version that uses definition by well-founded recursion over a type of sizes as in the previous section. This also avoids another difficulty with Figure 2: even if one can define $W$ and $\sim$ inductively, one still has to verify that $W/{\sim}$ has the universal property (20)–(22) required of a QW-type. In particular, there is an obvious recursive definition of $\mathsf{qwrec}$ following the shape of the inductive definition in Figure 2, but it is not at all clear why this recursive definition is terminating (i.e. gives a well-defined, total function). Well-founded recursion over sizes will solve this problem as well.

### 6.2. QW-type via sizes

We are given a signature $\mathrm{\Sigma}=(A:\mathcal{U},B:A\rightarrow\mathcal{U})$ and system of equations $\varepsilon=(E:\mathcal{U},V:E\rightarrow\mathcal{U},l,r:\prod_{e:E}T_{\mathrm{\Sigma}}(V\,e))$. Let $\mathsf{Size}:\mathcal{U}$ be the type of sizes whose existence is guaranteed by Section 5.1 when in the theorem we take $A$ to be $AE\mathrel{\smash{\overset{\text{\tiny def}}{=}}}A+E$ and $B$ to be the function $BV:AE\rightarrow\mathcal{U}$ mapping $a:A$ to $B\,a$ and $e:E$ to $V\,e$. It follows (as in the proof of Section 5.1) that $S_{\mathrm{\Sigma}}$ preserves $\mathsf{Size}$-indexed colimits.

###### Definition .

A _$\mathsf{Size}$-indexed $\mathrm{\Sigma}$-structure_ in $\mathcal{U}$ is specified by a family of types $D:\mathsf{Size}\rightarrow\mathcal{U}$ equipped with functions $\tau_{j,i}:T_{\mathrm{\Sigma}}(D_{j})\rightarrow D_{i}\quad\text{for all $i,j:\mathsf{Size}$ with $j<i$}$ (80) Similarly, for each $i:\mathsf{Size}$, a _$(\downarrow i)$-indexed $\mathrm{\Sigma}$-structure_ is the same thing, except with $D$ only defined on the subsemicategory $\downarrow i$ (49) rather than the whole of $\mathsf{Size}$. Clearly, given a $\mathsf{Size}$-indexed $\mathrm{\Sigma}$-structure $(D,\tau)$, for each $i:\mathsf{Size}$ we get a $(\downarrow i)$-indexed one $(D\;(\downarrow i),\tau\;(\downarrow i))$ by restriction. If $i:\mathsf{Size}$ and $(D,\tau)$ is a $(\downarrow i)$-indexed $\mathrm{\Sigma}$-structure, let $\Diamond_{i}D:\mathcal{U}$ be the quotient type (see Figure 1) $\Diamond_{i}D\;\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\;\left.\left(\sum_{j<i}T_{\mathrm{\Sigma}}\,D_{j}\right)\right/R_{i}$ (81) with the relation $R_{i}:(\sum_{j<i}T_{\mathrm{\Sigma}}\,D_{j})\rightarrow(\sum_{j<i}T_{\mathrm{\Sigma}}\,D_{j})\rightarrow\mathsf{Prop}$ defined by: $R_{i}\,(j,t)\,(k,t^{\prime})\;\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\;{}\\\ \begin{aligned} &&&({k=j}\;\wedge\;\exists(e:E).\exists(\rho:V\,e\rightarrow D_{j}).\;{t=T_{\mathrm{\Sigma}}\,\rho\,(l\,e)}\;\wedge\;{t^{\prime}=T_{\mathrm{\Sigma}}\,\rho\,(r\,e)})\\\ &\vee&&({k<j}\;\wedge\;t=\eta(\tau_{k,j}\,t^{\prime}))\\\ &\vee&&({k<j}\;\wedge\;\exists(a:A).\exists(b:B\,a\rightarrow T_{\mathrm{\Sigma}}D_{k}).\;{t=\sigma(a,\eta\circ\tau_{k,j}\circ b)}\;\wedge\;{t^{\prime}=\sigma(a,b)})\end{aligned}$ (82) (The three clauses in the above definition correspond to the three constructors $q\varepsilon$, $q\eta$ and $q\sigma$ in Figure 2.) We will see that the QW-type for $(\mathrm{\Sigma},\varepsilon)$ can be constructed from $\mathsf{Size}$-indexed $\mathrm{\Sigma}$-structures $(D,\tau)$ satisfying the following fixed-point property.

###### Definition .
A $\mathsf{Size}$-indexed $\mathrm{\Sigma}$-structure $(D,\tau)$ is a _$\Diamond$ -fixed point_ if for all $i:\mathsf{Size}$ $D_{i}=\Diamond_{i}(D\;(\downarrow i))$ (83) and for all $j<i$ and $t:T_{\mathrm{\Sigma}}D_{j}$ $\tau_{j,i}\,t=[j,t]_{R_{i}}$ (84) Suppose that $(D,\tau)$ is a $\Diamond$-fixed point. Because of (83) and (84), for $i,j:\mathsf{Size}$ with $i<j$ the functions $\delta_{i,j}:D_{i}\rightarrow D_{j}\qquad\delta_{i,j}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\tau_{i,j}\circ\eta$ (85) satisfy $\delta_{i,j}([k,t]_{R_{i}})=[k,t]_{R_{j}}\quad\text{(for all $k<i$ and $t:T_{\mathrm{\Sigma}}\,D_{k}$)}$ (86) In particular they satisfy composition (57) and so $(D,\delta)$ is a $\mathsf{Size}$-indexed diagram in $\mathcal{U}$ whose colimit we can form as in Section 5.1. We prove that this colimit $\mathsf{QW}\;\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\;\mathop{\mathsf{colim}}D$ (87) has the structure (18)–(22) of a QW-type for the signature $(\mathrm{\Sigma},\varepsilon)$. $\mathsf{qwintro}$: Given how we chose $\mathsf{Size}$ at the start of this section, from the (proof of) Section 5.1 we have that $S_{\mathrm{\Sigma}}:\mathcal{U}\rightarrow\mathcal{U}$ preserves $\mathsf{Size}$-indexed colimits. We claim that the functions $S_{\mathrm{\Sigma}}(D_{i})\mathbin{\smash{\xrightarrow{S_{\mathrm{\Sigma}}\eta}}}S_{\mathrm{\Sigma}}(T_{\mathrm{\Sigma}}\,D_{i})\mathbin{\smash{\xrightarrow{\sigma}}}T_{\mathrm{\Sigma}}\,D_{i}\mathbin{\smash{\xrightarrow{\tau_{i,{i}^{+}}}}}D_{{i}^{+}}\mathbin{\smash{\xrightarrow{(\nu_{D})_{{i}^{+}}}}}\mathop{\mathsf{colim}}D$ (88) form a cocone under the diagram $S_{\mathrm{\Sigma}}\circ D$ and hence induce a function $\mathop{\mathsf{colim}}(S_{\mathrm{\Sigma}}\circ D)\rightarrow\mathop{\mathsf{colim}}D$; then we obtain $\mathsf{qwintro}:S_{\mathrm{\Sigma}}(\mathop{\mathsf{colim}}D)\rightarrow\mathop{\mathsf{colim}}D$ by composing this with the isomorphism $S_{\mathrm{\Sigma}}(\mathop{\mathsf{colim}}D)\cong\mathop{\mathsf{colim}}(S_{\mathrm{\Sigma}}\circ D)$ from Section 5.1. That the functions in (88) form a cocone for $S_{\mathrm{\Sigma}}\circ D$ follows from the fact that the functions in (85) satisfy for all sizes $i<j$ $\forall(t:T_{\mathrm{\Sigma}}(D_{i})).\exists(k:\mathsf{Size}).\;j<k\;\wedge\;\tau_{j,k}(T_{\mathrm{\Sigma}}\,\delta_{i,j}(t))=\tau_{i,k}(t)$ (89) from which it follows that $(\nu_{D})_{{j}^{+}}\circ\tau_{j,{j}^{+}}\circ T_{\mathrm{\Sigma}}\,\delta_{i,j}=(\nu_{D})_{{i}^{+}}\circ\tau_{i,{i}^{+}}$ and hence also the cocone property. Property (89) can be proved by induction on the structure of $t:T_{\mathrm{\Sigma}}(D_{i})$, with the $t=\sigma(a,b)$ case of (13) proved using the fact that $\mathsf{Size}$ has $<$-upper bounds for families of sizes indexed by $B\,a$ (Section 5.1). $\mathsf{qwequate}$: Given $e:E$ and $\rho:V\,e\rightarrow\mathop{\mathsf{colim}}D$, by Section 5.1 there exist $i:\mathsf{Size}$ and $\rho^{\prime}:V\,e\rightarrow D_{i}$ with $\rho=(\nu_{D})_{i}\circ\rho^{\prime}$. 
By standard properties of the bind operation $\mathbin{\gg\\!=}$ (14) (which depends implicitly on the $S_{\mathrm{\Sigma}}$-algebra structure $\mathsf{qwintro}$ that we have just constructed), we have $(l\,e\mathbin{\gg\\!=}\rho)=(l\,e\mathbin{\gg\\!=}(\nu_{D})_{i}\circ\rho^{\prime})=(T_{\mathrm{\Sigma}}\,\rho^{\prime}\,(l\,e)\mathbin{\gg\\!=}(\nu_{D})_{i})\qquad(r\,e\mathbin{\gg\\!=}\rho)=(r\,e\mathbin{\gg\\!=}(\nu_{D})_{i}\circ\rho^{\prime})=(T_{\mathrm{\Sigma}}\,\rho^{\prime}\,(r\,e)\mathbin{\gg\\!=}(\nu_{D})_{i})$ (90) From the definition of $\mathsf{qwintro}$ and (84) it follows that for any $t:T_{\mathrm{\Sigma}}\,D_{i}$, there is a proof of $(t\mathbin{\gg\\!=}(\nu_{D})_{i})=(\nu_{D})_{{i}^{+}}(\tau_{i,{i}^{+}}\,t)$ (91) So from above we have that $(l\,e\mathbin{\gg\\!=}\rho)\;=\;(\nu_{D})_{{i}^{+}}(\tau_{i,{i}^{+}}(T_{\mathrm{\Sigma}}\rho^{\prime}(l\,e)))$ and $(r\,e\mathbin{\gg\\!=}\rho)\;=\;(\nu_{D})_{{i}^{+}}(\tau_{i,{i}^{+}}(T_{\mathrm{\Sigma}}\rho^{\prime}(r\,e)))$. Since from (84) and the first clause in the definition of $R_{i}$ (82) we also have $\tau_{i,{i}^{+}}(T_{\mathrm{\Sigma}}\rho^{\prime}(l\,e))=\tau_{i,{i}^{+}}(T_{\mathrm{\Sigma}}\rho^{\prime}(r\,e))$, it follows that there is a proof of $(l\,e\mathbin{\gg\\!=}\rho)=(r\,e\mathbin{\gg\\!=}\rho)$. $\mathsf{qwrec}$: Given an $S_{\mathrm{\Sigma}}$-algebra $(X,\alpha):\sum_{X:\mathcal{U}}(S_{\mathrm{\Sigma}}\,X\rightarrow X)$ satisfying the system of equations $\varepsilon$, the function $\mathsf{qwrec}:\mathop{\mathsf{colim}}D\rightarrow X$ is induced by a cocone of functions $r:\prod_{i}(D_{i}\rightarrow X)$ under the diagram $D$, defined by well-founded recursion (Section 5). More precisely, a strengthened “recursion hypothesis” is needed: instead of $\prod_{i}(D_{i}\rightarrow X)$ we use $\prod_{i}\,F_{i}$ where $F_{i}\;\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\;\\{f:D_{i}\rightarrow X\mid\forall j<i.\forall(t:T_{\mathrm{\Sigma}}\,D_{j}).\;(t\mathbin{\gg\\!=}(f\circ\delta_{j,i}))=f([j,t]_{R_{i}})\\}$ (92) (The definition uses $\alpha$ implicitly in $\mathbin{\gg\\!=}$ and relies on the fact (83) that $D_{i}=\Diamond_{i}(D\;(\downarrow i))$.) For each $i:\mathsf{Size}$, if we have $r_{j}:F_{j}$ for all $j<i$, then we get a function $r_{i}:F_{i}$ well-defined by $r_{i}([j,t]_{R_{i}})\mathrel{\smash{\overset{\text{\tiny def}}{=}}}t\mathbin{\gg\\!=}r_{j}\quad\text{for all $j<i$ and $t:T_{\mathrm{\Sigma}}\,D_{j}$}$ (93) (The defining property of $F_{i}$ is needed to see that the right-hand side of this definition respects the relation $R_{i}$.) Hence by well-founded recursion (Section 5) we get an element $r:\prod_{i}\,F_{i}$. One can prove $\forall i.\forall j<i.\;r_{j}=r_{i}\circ\delta_{j,i}$ by well-founded induction (46); so $r$ is a cocone and induces a function $\mathsf{qwrec}:\mathop{\mathsf{colim}}D\rightarrow X$. $\mathsf{qwrechom}$: We noted at the start of this section that by choice of $\mathsf{Size}$, the functor $S_{\mathrm{\Sigma}}$ preserves $\mathsf{Size}$-indexed colimits.
So to prove that the square

$\begin{array}[]{ccc}S_{\mathrm{\Sigma}}(\mathop{\mathsf{colim}}D)&\xrightarrow{\;\mathsf{qwintro}\;}&\mathop{\mathsf{colim}}D\\\ {\scriptstyle S_{\mathrm{\Sigma}}\,\mathsf{qwrec}}\downarrow&&\downarrow{\scriptstyle\mathsf{qwrec}}\\\ S_{\mathrm{\Sigma}}\,X&\xrightarrow{\;\alpha\;}&X\end{array}$ (94)

commutes, by the definitions of $\mathsf{qwintro}$ and $\mathsf{qwrec}$, it suffices to prove that the square

$\begin{array}[]{ccc}S_{\mathrm{\Sigma}}\,D_{i}&\xrightarrow{\;\tau_{i,{i}^{+}}\circ\,\sigma\,\circ\,S_{\mathrm{\Sigma}}\eta\;}&D_{{i}^{+}}\\\ {\scriptstyle S_{\mathrm{\Sigma}}\,r_{i}}\downarrow&&\downarrow{\scriptstyle r_{{i}^{+}}}\\\ S_{\mathrm{\Sigma}}\,X&\xrightarrow{\;\alpha\;}&X\end{array}$ (95)

does for each $i:\mathsf{Size}$. But each $(a,b):S_{\mathrm{\Sigma}}\,D_{i}$ is mapped by $\tau_{i,{i}^{+}}\circ\sigma\circ S_{\mathrm{\Sigma}}\eta$ to $[i,\sigma(a,\eta\circ b)]_{R_{{i}^{+}}}$ (using the fact that $D_{{i}^{+}}=\Diamond_{{i}^{+}}(D\;(\downarrow{i}^{+}))$); and by (93), that is mapped by $r_{{i}^{+}}$ to $\sigma(a,\eta\circ b)\mathbin{\gg\\!=}r_{i}$, which is indeed equal to $\alpha(S_{\mathrm{\Sigma}}\,r_{i}(a,b))$ by definition of $\mathbin{\gg\\!=}$ (14) and the action of $S_{\mathrm{\Sigma}}$ on functions (10). $\mathsf{qwuniq}$: If $h:\mathop{\mathsf{colim}}D\rightarrow X$ is a morphism of $S_{\mathrm{\Sigma}}$-algebras, then one can prove by well-founded induction for $<$ that $\forall i.\;h\circ(\nu_{D})_{i}=r_{i}$ holds: for if we have $h\circ(\nu_{D})_{j}=r_{j}$ for all $j<i$, then for any $[j,t]_{R_{i}}$ in $D_{i}=\Diamond_{i}(D\;(\downarrow i))$

$\begin{array}[]{r@{}c@{}l@{\quad}l}r_{i}([j,t]_{R_{i}})&{}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}{}&t\mathbin{\gg\\!=}r_{j}&\text{by (93)}\\\ &=&t\mathbin{\gg\\!=}(h\circ(\nu_{D})_{j})&\text{by the induction hypothesis}\\\ &=&h(t\mathbin{\gg\\!=}(\nu_{D})_{j})&\text{since $h$ is a morphism of $S_{\mathrm{\Sigma}}$-algebras}\\\ &=&h((\nu_{D})_{{j}^{+}}[j,t]_{R_{{j}^{+}}})&\text{by (91) and (84)}\\\ &=&h((\nu_{D})_{i}[j,t]_{R_{i}})&\text{since by (86), $\delta_{{j}^{+},{j}^{+}\sqcup^{s}i}([j,t]_{R_{{j}^{+}}})=[j,t]_{R_{{j}^{+}\sqcup^{s}i}}=\delta_{i,{j}^{+}\sqcup^{s}i}([j,t]_{R_{i}})$}\end{array}$

So by well-founded induction, $h\circ(\nu_{D})_{i}=r_{i}$ holds for all $i:\mathsf{Size}$, and hence by definition of $\mathsf{qwrec}$ and the uniqueness part of the universal property of colimits we have $h=\mathsf{qwrec}$. Thus we have proved:

###### Proposition .

Let $\mathsf{Size}$ be the type of sizes defined at the start of this section, whose existence is guaranteed by Section 5.1. If the $\mathsf{Size}$-indexed $\mathrm{\Sigma}$-structure $(D,\tau)$ is a $\Diamond$-fixed point, then $\mathop{\mathsf{colim}}D$ has the structure of a QW-type for the signature $(\mathrm{\Sigma},\varepsilon)$.
∎

Now we can complete the proof of the main theorem:

###### Proof of non-indexed version of Section 6.

In view of Section 6.2, it suffices to construct a $\mathsf{Size}$-indexed $\mathrm{\Sigma}$-structure which is a $\Diamond$-fixed point in the sense of Section 6.2. For each $i:\mathsf{Size}$, say that a $(\downarrow i)$-indexed $\mathrm{\Sigma}$-structure (Section 6.2) is an _upto-$i$_ $\Diamond$-fixed point if $\forall j<i.\;D_{j}=\Diamond_{j}(D\;(\downarrow j))\;\wedge\;\forall k<j.\forall t.\;\tau_{k,j}\,t\mathrel{{=}{=}}[k,t]_{R_{j}}$ (96) (cf. (83) and (84)). Note that:

* (A) _Given $j<i$, any upto-$i$ $\Diamond$-fixed point restricts to an upto-$j$ $\Diamond$-fixed point._
* (B) _For all $i$, any two upto-$i$ $\Diamond$-fixed points are equal_ (proof by well-founded induction (46)).

Using these two facts, it follows by well-founded recursion (Section 5) that there is an upto-$i$ $\Diamond$-fixed point for all $i:\mathsf{Size}$. For if $(D^{(j)},\tau^{(j)})$ is an upto-$j$ $\Diamond$-fixed point for all $j<i$, then we get $D^{(i)}:{\downarrow i}\rightarrow\mathcal{U}$ by defining for each $j<i$ $(D^{(i)})_{j}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\Diamond_{j}(D^{(j)})$ (97) If $k<j<i$, then by (A) we have that $D^{(j)}(\downarrow k)$ is an upto-$k$ $\Diamond$-fixed point and hence by (B) that $D^{(j)}(\downarrow k)=D^{(k)}$. So together with (96) this gives: $(D^{(i)})_{k}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\Diamond_{k}(D^{(k)})=\Diamond_{k}(D^{(j)}(\downarrow k))=(D^{(j)})_{k}$ (98) Hence we can define $(\tau^{(i)})_{k,j}:T_{\mathrm{\Sigma}}((D^{(i)})_{k})\rightarrow(D^{(i)})_{j}$ by $(\tau^{(i)})_{k,j}\,t\mathrel{\smash{\overset{\text{\tiny def}}{=}}}[k,t]_{R_{j}}$ and this makes $(D^{(i)},\tau^{(i)})$ into a $(\downarrow i)$-indexed $\mathrm{\Sigma}$-structure which by construction is an upto-$i$ $\Diamond$-fixed point. Thus by well-founded recursion (Section 5) we have an upto-$i$ $\Diamond$-fixed point $D^{(i)}$ for all $i:\mathsf{Size}$. Let $D:\mathsf{Size}\rightarrow\mathcal{U}$ be given by $D_{i}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\Diamond_{i}(D^{(i)})$. If $j<i$, then by (A) and (B) we have $D^{(i)}(\downarrow j)=D^{(j)}$ and together with (96) this gives: $D_{j}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\Diamond_{j}(D^{(j)})=\Diamond_{j}(D^{(i)}(\downarrow j))=(D^{(i)})_{j}$ (99) So we can define $\tau_{j,i}:T_{\mathrm{\Sigma}}\,D_{j}\rightarrow D_{i}$ by $\tau_{j,i}\,t\mathrel{\smash{\overset{\text{\tiny def}}{=}}}[j,t]_{R_{i}}$. This makes $(D,\tau)$ into a $\mathsf{Size}$-indexed $\mathrm{\Sigma}$-structure which by construction is a $\Diamond$-fixed point. ∎

## 7\. Encoding QITs as QWI-types

The general notion of indexed quotient inductive type (QIT) was discussed by example in the Introduction. We wish to show that a wide variety of QITs can be expressed as QWI-types, namely those which do not use conditional equality constructors (as in the example in (2)). We first introduce a schema for such QITs that combines desirable features of ones that occur in the literature. As in the previous section, for the sake of simplicity we will confine our attention to the non-indexed case of QITs and QW-types.

### 7.1. General QIT schemas and encodings

[BGvdW17] present a schema for infinitary QITs that do not support conditional path equations.
Constructors are defined by arbitrary polynomial endofunctors built up using (non-dependent) products and sums, which means in particular that parameters and arguments can occur in any order; however, they require constructors to be in uncurried form. They also construct a model of simple 1-cell complexes and other non-recursive QITs. [DM18, Sections 3.1 and 3.2] present a schema for finitary QITs that does allow curried constructors (and also supports _conditional_ path equations), but requires all parameters to appear before all arguments. This contrasts with the more convenient schema for regular inductive types in Agda, which allows parameters and arguments in any order. [KKA19] define an encoding of finitary quotient inductive-inductive (QIIT) types, called the _theory of signatures_ (ToS), which is a restriction of the _theory of codes_ for HIITs [KK18]. In the ToS a QI(I)T is encoded as a context of a small internal type theory. This encoding is not quite as convenient as the schema for regular inductive types in Agda, but, when using named variables, it is much closer to Agda than QWI-types are. They also construct a model for finitary QIITs (and [KK20] reduce the problem of modelling infinitary QIITs to just one “universal” QIIT, a generalised ToS). Building on the first two schemas mentioned above, we provide a schema for infinitary, non-conditional QITs combining the arbitrarily ordered parameters and arguments of the first [BGvdW17] with the curried constructors of the second [DM18]. In the following we fix the context $\mathrm{\Gamma}$ in which the QIT, $Q$, ($Q\notin\mathrm{\Gamma}$) is defined, whereas $\mathrm{\Delta}$, $A$, $B$, $H$, $K$, $a$, $b$, $x$, $y$, etc. are metavariables. When deriving an element or equality constructor, $\mathrm{\Delta}$ will always be equal to $\mathrm{\Gamma}$ or an extension of it. A type _strictly-positive in $Q$_ is built up from $\prod$-types and $\sum$-types and can use $Q$ and constants or variables in the context, provided that $Q$ never occurs on the LHS of a $\prod$-type, and that the RHS of a $\sum$-type never depends on $Q$, even if it occurs on the LHS. The judgement $\mathrm{\Delta}\vdash K\;\mathsf{StrPos}$ is derived by the following rules:

$\dfrac{\mathrm{\Delta}\vdash B:\mathcal{U}}{\mathrm{\Delta}\vdash B\;\mathsf{StrPos}}\;\text{[Param]}\qquad\dfrac{\mathrm{\Delta}\vdash B:\mathcal{U}\qquad\mathrm{\Delta},b:B\vdash K\;\mathsf{StrPos}}{\mathrm{\Delta}\vdash\textstyle\prod_{b:B}K\;\mathsf{StrPos}}\;\text{[StrPosFun]}$

$\dfrac{}{\mathrm{\Delta}\vdash Q\;\mathsf{StrPos}}\;\text{[IndArg]}\qquad\dfrac{\mathrm{\Delta}\vdash A\;\mathsf{StrPos}\qquad\mathrm{\Delta},a:A[\mathbb{1}/Q]\vdash K\;\mathsf{StrPos}}{\mathrm{\Delta}\vdash\textstyle\sum_{a:A}K^{\prime}\;\mathsf{StrPos}}\;\text{[Prod]}$

where $K^{\prime}$ is found from $K$ by recursively replacing each sub-term of type $\mathbb{1}$, $A\rightarrow\mathbb{1}$, etc. with the corresponding unique term $0$, $!$, etc. _Note: We can see that these rules give strictly-positive types because: (1) at this point $\mathrm{\Delta}\vdash Q:\mathcal{U}$ is not derivable since $Q\notin\mathrm{\Gamma}$ and no rule for $\mathsf{StrPos}$ adds $Q$ to the context, so $Q$ can never appear in the LHS of a $\prod$-type. Also (2) the Prod rule ensures that abstracting with $\sum$ cannot result in dependencies on $Q$ in the codomain._ The type of an element constructor is an (iterated) $\prod$-type with codomain $Q$, with zero or more strictly-positive arguments.
The judgement $\mathrm{\Delta}\vdash H\;\mathsf{ElConstr}$ is derived by the rules

$\dfrac{\mathrm{\Delta},Q:\mathcal{U}\vdash H\;\mathsf{ElType}}{\mathrm{\Delta}\vdash H\;\mathsf{ElConstr}}\;\text{[ElConstr]}\qquad\dfrac{}{\mathrm{\Delta}\vdash Q\;\mathsf{ElType}}\;\text{[ElCoDom]}\qquad\dfrac{\mathrm{\Delta}[\mathbb{1}/Q]\vdash K\;\mathsf{StrPos}\qquad\mathrm{\Delta},a:K^{\prime}\vdash H\;\mathsf{ElType}}{\mathrm{\Delta}\vdash\textstyle\prod_{a:K^{\prime}}H\;\mathsf{ElType}}\;\text{[ElArg]}$

where $\mathrm{\Delta}[\mathbb{1}/Q]$ is the context that results after replacing occurrences of $Q$ in $\mathrm{\Delta}$ by the type $\mathbb{1}$. _Note: Deriving $\mathrm{\Gamma}\vdash\prod_{a:K^{\prime}}Q\;\mathsf{ElConstr}$ via ElArg for some strictly-positive $K$ does not imply $\mathrm{\Gamma}\vdash\prod_{a:K^{\prime}}Q:\mathcal{U}$, since $\mathrm{\Gamma}\nvdash Q:\mathcal{U}$ (because $Q\notin\mathrm{\Gamma}$), and also $\mathrm{\Gamma}\nvdash K^{\prime}:\mathcal{U}$ in general, since $K^{\prime}$ may contain $Q$. This distinction also applies to the following judgements._ The type of an equality constructor is an (iterated) $\prod$-type with codomain $x=_{Q}y$, with zero or more strictly-positive arguments. Note that each of the element constructors is added to the context and can be used to derive $x:Q$ and $y:Q$. The judgement $\mathrm{\Delta}\vdash K\;\mathsf{EqConstr}$ is derived by the rules

$\dfrac{\mathrm{\Delta},Q:\mathcal{U},c_{1}:C_{1},\cdots,c_{m}:C_{m}\vdash K\;\mathsf{EqType}}{\mathrm{\Delta}\vdash K\;\mathsf{EqConstr}}\;\text{[EqConstr]}\qquad\dfrac{\mathrm{\Delta}\vdash x:Q\qquad\mathrm{\Delta}\vdash y:Q}{\mathrm{\Delta}\vdash x=y\;\mathsf{EqType}}\;\text{[EqCoDom]}\qquad\dfrac{\mathrm{\Delta}[\mathbb{1}/Q]\vdash K\;\mathsf{StrPos}\qquad\mathrm{\Delta},a:K^{\prime}\vdash H\;\mathsf{EqType}}{\mathrm{\Delta}\vdash\textstyle\prod_{a:K^{\prime}}H\;\mathsf{EqType}}\;\text{[EqArg]}$

Figure 3. Rules for QIT element and equality constructors.

The following definition should be read as an extension of a formalisation of the type theory described in Section 2 in terms of typing contexts ($\mathrm{\Gamma},\mathrm{\Delta},\ldots$) and various judgements-in-context (such as $\mathrm{\Gamma}\vdash a:A$, $\mathrm{\Gamma}\vdash a=a^{\prime}:A$, etc.); see the HoTT Book [Uni13, Appendix].

###### Definition .

A (_non-conditional_) _QIT_, $Q$, in a context $\mathrm{\Gamma}$ (with $Q\notin\mathrm{\Gamma}$) is specified by a list of element constructors ${c_{1}:C_{1},}\ldots,{c_{m}:C_{m}}$, followed by a list of equality constructors ${d_{1}:D_{1},}\ldots,{d_{n}:D_{n}}$, where the $C_{i}$s and $D_{j}$s are derived as $\mathrm{\Gamma}\vdash C_{i}\,\mathsf{ElConstr}$ and $\mathrm{\Gamma}\vdash D_{j}\,\mathsf{EqConstr}$ respectively, according to the rules in Figure 3. Given a list of element and equality constructors built up according to these rules, the newly-defined QIT, $Q$, has formation rule $\mathrm{\Gamma}\vdash Q:\mathcal{U}$ (with no premises) for the specific context $\mathrm{\Gamma}$ in which it was defined. And it has an introduction rule for each element constructor and each equality constructor: $\mathrm{\Gamma}\vdash c_{1}:C_{1}\quad\cdots\quad\mathrm{\Gamma}\vdash c_{m}:C_{m}\qquad\mathrm{\Gamma}\vdash d_{1}:D_{1}\quad\cdots\quad\mathrm{\Gamma}\vdash d_{n}:D_{n}$ We show how to derive elimination and computation rules for $Q$ from these formation and introduction rules (compare with [BGvdW17, Definition 10]) after first giving an example of how the schema is instantiated.
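For comparison, the multiset QIT of example (1) in the Introduction fits this schema; in a hypothetical Agda-style surface syntax (Agda has no native QITs, so this is notation only) its declaration would read:

```agda
data Bag (X : Set) : Set where
  []   : Bag X
  _::_ : X → Bag X → Bag X
  swap : (x y : X) (zs : Bag X) → x :: (y :: zs) ≡ y :: (x :: zs)
```

The example below derives the types of these constructors formally using the rules of Figure 3.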
###### Example .

Consider multisets as in example (1) with two element constructors $[]$ and $\\_\mathrel{::}\\_$, and one equality constructor $\mathsf{swap}$. Given the context containing parameter $X:\mathcal{U}$, we define the QIT $\cdot,X\mathbin{:}\mathcal{U}\vdash\mathsf{Bag}\mathbin{:}\mathcal{U}_{1}$ by providing those constructors along with their types, derived by the rules in Figure 3. Let $\mathrm{\Gamma}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}(\cdot,X:\mathcal{U})$ and $\mathrm{\Delta}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}(\mathrm{\Gamma},\mathsf{Bag}:\mathcal{U})=(\cdot,X:\mathcal{U},\mathsf{Bag}:\mathcal{U})$; so that, by definition, $\mathrm{\Delta}[\mathbb{1}/\mathsf{Bag}]=\mathrm{\Gamma}$.

* • $[]$ : by [ElCoDom], $\mathrm{\Delta}\vdash\mathsf{Bag}\;\mathsf{ElType}$, and hence by [ElConstr], $\mathrm{\Gamma}\vdash\mathsf{Bag}\;\mathsf{ElConstr}$.
* • $\\_\mathrel{::}\\_$ : from $\mathrm{\Gamma}\vdash X:\mathcal{U}$ we get $\mathrm{\Gamma}\vdash X\;\mathsf{StrPos}$ by [Param], and $\mathrm{\Gamma},a:X\vdash\mathsf{Bag}\;\mathsf{StrPos}$ by [IndArg]; by [ElCoDom], $\mathrm{\Delta},a:X,b:\mathsf{Bag}\vdash\mathsf{Bag}\;\mathsf{ElType}$; so two applications of [ElArg] give $\mathrm{\Delta},a:X\vdash(\mathsf{Bag}\rightarrow\mathsf{Bag})\;\mathsf{ElType}$ and then $\mathrm{\Delta}\vdash(X\rightarrow\mathsf{Bag}\rightarrow\mathsf{Bag})\;\mathsf{ElType}$; finally, by [ElConstr], $\mathrm{\Gamma}\vdash(X\rightarrow\mathsf{Bag}\rightarrow\mathsf{Bag})\;\mathsf{ElConstr}$.

Let $\displaystyle{}\mathrm{\Theta}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathrm{\Delta},[]:\mathsf{Bag},\\_\mathrel{::}\\_:X\rightarrow\mathsf{Bag}\rightarrow\mathsf{Bag}$ $\displaystyle\mathrm{\Xi}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathrm{\Theta}[\mathbb{1}/\mathsf{Bag}]=\mathrm{\Gamma},[]:\mathbb{1},\\_\mathrel{::}\\_:X\rightarrow\mathbb{1}\rightarrow\mathbb{1}$ $\displaystyle\mathrm{\Phi}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathrm{\Theta},x\;y:X,\mathit{zs}:\mathsf{Bag}$ $\displaystyle\mathit{eqn}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}x\mathrel{::}y\mathrel{::}\mathit{zs}=y\mathrel{::}x\mathrel{::}\mathit{zs}$

* • $\mathsf{swap}$ : by [Param], $\mathrm{\Xi}\vdash X\;\mathsf{StrPos}$ and $\mathrm{\Xi},x:X\vdash X\;\mathsf{StrPos}$, and by [IndArg], $\mathrm{\Xi},x\;y:X\vdash\mathsf{Bag}\;\mathsf{StrPos}$; using the element constructors in the context one derives $\mathrm{\Phi}\vdash x\mathrel{::}y\mathrel{::}\mathit{zs}:\mathsf{Bag}$ and $\mathrm{\Phi}\vdash y\mathrel{::}x\mathrel{::}\mathit{zs}:\mathsf{Bag}$, so by [EqCoDom], $\mathrm{\Phi}\vdash\mathit{eqn}\;\mathsf{EqType}$; three applications of [EqArg] then give $\mathrm{\Theta},x\;y:X\vdash\prod_{\mathit{zs}:\mathsf{Bag}}\mathit{eqn}\;\mathsf{EqType}$, $\mathrm{\Theta},x:X\vdash\prod_{y:X}\prod_{\mathit{zs}:\mathsf{Bag}}\mathit{eqn}\;\mathsf{EqType}$ and $\mathrm{\Theta}\vdash\prod_{x\;y:X}\prod_{\mathit{zs}:\mathsf{Bag}}\mathit{eqn}\;\mathsf{EqType}$; finally, by [EqConstr], $\mathrm{\Gamma}\vdash\prod_{x\;y:X}\prod_{\mathit{zs}:\mathsf{Bag}}\mathit{eqn}\;\mathsf{EqConstr}$.

###### Definition (Elimination and computation).

The _arguments_ of a constructor $C_{j}$, written $\mathsf{Arg}(C_{j})$, form the list of the types $K^{\prime}$ for all strictly-positive types $K$ introduced with the rule ElArg.
Given a QIT defined by constructors $c_{1},\ldots,c_{m},d_{1},\ldots,d_{n}$ as above, the _underlying inductive type_ $\lfloor Q\rfloor$ is the inductive type defined by only the element constructors $c_{1},\ldots,c_{m}$, ignoring the equalities; then the _underlying W-type_ is the W-type that encodes this inductive type. (Recall that every strictly-positive description of a polynomial endofunctor gives rise to a W-type with the same initial algebra; see [Dyb97], and for the more general indexed and nested case see [AGHMM15].) Define _leaf application_ on a strictly-positive term $a:A$ (that is, a term with a strictly-positive type) for a function $f:\prod_{q:Q}R\,q$ into some type $R$, written $f\mathbin{\mathdollar}a$, by induction on the $\mathsf{StrPos}$ structure of $A$: IndArg $\displaystyle f\mathbin{\mathdollar}x$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}f\,x$ (100) Param $\displaystyle f\mathbin{\mathdollar}b$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}b$ Prod $\displaystyle f\mathbin{\mathdollar}(a,b)$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}(f\mathbin{\mathdollar}a,f\mathbin{\mathdollar}b)$ StrPosFun $\displaystyle f\mathbin{\mathdollar}g$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}(f\mathbin{\mathdollar}\\_)\circ g$ In order to define the elimination and computation rules we must first define the induction step. First, given a “motive” $\mathrm{\Gamma}\vdash P:Q\rightarrow\mathcal{U}$, define the type $P^{\prime}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sum_{x:Q}P\,x$. Now given a strictly-positive argument $A$, define the type $A^{\prime}$ (it will be (one of) the induction hypotheses) by induction on the structure of $A$, replacing each occurrence of $Q$ (IndArg) with $P^{\prime}$. Terms $a^{\prime}:A^{\prime}$ are also strictly-positive and admit leaf application for functions of type $\prod_{p^{\prime}:P^{\prime}}S\,p^{\prime}$ for some type $S$. To find the induction step $\widehat{C_{j}}$ for each element constructor $c_{j}:C_{j}$, replace each of the strictly-positive arguments $\prod_{a_{1}:A_{1}}\,\cdots\,\prod_{a_{p}:A_{p}}$ with $\prod_{a_{1}:A_{1}^{\prime}}\,\cdots\,\prod_{a_{p}:A_{p}^{\prime}}$, as defined above, and replace the target $Q$ of the constructor with $P\,(c_{j}\,(\pi_{1}\mathbin{\mathdollar}a_{1})\,\cdots\,(\pi_{1}\mathbin{\mathdollar}a_{p}))$.
The induction cases for each of the element constructors are then $h_{1}:\widehat{C_{1}},\ldots,h_{m}:\widehat{C_{m}}$, and these provide an eliminator $\mathsf{elim}\,h_{1}\,\cdots\,h_{m}$ for the underlying inductive type $\lfloor Q\rfloor$. Given $h_{1},\ldots,h_{m}$, define $\widehat{D_{k}}$, for each equality constructor $d_{k}:D_{k}$, in the same way.
Replace the homogeneous equality type with a heterogeneous equality type and replace the endpoints $l,r$ of the equality with $\widehat{l},\widehat{r}$, inductively defined by cases depending on whether the abstracted variable is a constructor or not:

$\displaystyle\widehat{c_{j}}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}h_{j}$ (101)

$\displaystyle\widehat{x}\mathrel{\smash{\overset{\text{\tiny def}}{=}}}x:P^{\prime}_{i}$

The elimination rule is then:

$$\frac{\mathrm{\Gamma}\vdash P:Q\rightarrow\mathcal{U}\qquad\mathrm{\Gamma}\vdash h_{1}:\widehat{C_{1}}\;\cdots\;\mathrm{\Gamma}\vdash h_{m}:\widehat{C_{m}}\qquad\mathrm{\Gamma}\vdash p_{1}:\widehat{D_{1}}\;\cdots\;\mathrm{\Gamma}\vdash p_{n}:\widehat{D_{n}}}{\mathrm{\Gamma}\vdash\mathsf{qitelim}\,h_{1}\,\cdots\,h_{m}\,p_{1}\,\cdots\,p_{n}:\textstyle\prod_{x:Q}P\,x}\quad(\text{QITelim})$$

Finally the computation rules are, for each element constructor $c_{i}:C_{i}$:

$$\frac{\text{same hypotheses as QITelim}\qquad\mathrm{\Gamma}\vdash a_{1},\ldots,a_{p}:\mathsf{Arg}(C_{i})}{\mathsf{qitelim}\,h_{1}\,\cdots\,h_{m}\,p_{1}\,\cdots\,p_{n}\,(c_{i}\,a_{1}\,\cdots\,a_{p})\;=_{P\,(c_{i}\,a_{1}\,\cdots\,a_{p})}\;h_{i}\,(a_{1},(\mathsf{qitelim}\,\cdots)\mathbin{\mathdollar}a_{1})\,\cdots\,(a_{p},(\mathsf{qitelim}\,\cdots)\mathbin{\mathdollar}a_{p})}\quad(\text{QITcomp})$$
###### Example . The elimination rule for $X:\mathcal{U}\vdash\mathsf{Bag}$ is:

$$\frac{\begin{array}{l}\mathrm{\Gamma}\vdash P:\mathsf{Bag}\rightarrow\mathcal{U}\qquad\mathrm{\Gamma}\vdash\mathit{nil}:P\,[]\\ \mathrm{\Gamma}\vdash\mathit{cons}:\prod_{x:X}\prod_{\mathit{xs}^{\prime}:\sum_{\mathit{xs}:\mathsf{Bag}}P\,\mathit{xs}}P\,(x\mathrel{::}\pi_{1}\,\mathit{xs}^{\prime})\\ \mathrm{\Gamma}\vdash\mathit{resp}:\prod_{x,y:X}\prod_{\mathit{zs}^{\prime}:\sum_{\mathit{zs}:\mathsf{Bag}}P\,\mathit{zs}}\mathit{cons}\,x\,(y\mathrel{::}\pi_{1}\,\mathit{zs}^{\prime},\,\mathit{cons}\,y\,\mathit{zs}^{\prime})==\mathit{cons}\,y\,(x\mathrel{::}\pi_{1}\,\mathit{zs}^{\prime},\,\mathit{cons}\,x\,\mathit{zs}^{\prime})\end{array}}{\mathrm{\Gamma}\vdash\mathsf{Bagelim}\,\mathit{nil}\,\mathit{cons}\,\mathit{resp}:\textstyle\prod_{x:\mathsf{Bag}}P\,x}$$

And it computes:

$\displaystyle\mathsf{Bagelim}\,\mathit{nil}\,\mathit{cons}\,\mathit{resp}\,[]\;=_{P\,[]}\;\mathit{nil}$

$\displaystyle\mathsf{Bagelim}\,\mathit{nil}\,\mathit{cons}\,\mathit{resp}\,(x\mathrel{::}\mathit{xs})\;=_{P\,(x\mathrel{::}\mathit{xs})}\;\mathit{cons}\,x\,(\mathit{xs},\mathsf{Bagelim}\,\mathit{nil}\,\mathit{cons}\,\mathit{resp}\,\mathit{xs})$

### 7.2. From QIT to QWI-type

We claim that any QIT $Q$ in the sense of Section 7.1 can be constructed as the QW-type for a signature and equational system derived from the declaration of the QIT; and that the same is true for the indexed version of Section 7.1 and QWI-types. That is, QWI-types are universal for non-conditional QITs in the same sense that W-types are for inductive types in a sufficiently extensional type theory. We saw above that the data $c_{1}:C_{1},\ldots,c_{m}:C_{m}$ in Section 7.1 gives rise to an underlying inductive type $\lfloor Q\rfloor$ and hence to a W-type [AGHMM15], with signature $\mathrm{\Sigma}=(A,B)$. The parameters and arguments of the equality constructors $d_{1}:D_{1},\ldots,d_{n}:D_{n}$ are also encoded in the same way by a signature $(E,V)$. Then the endpoints of the equality constructors can be encoded by the $l$ and $r$ arguments of an equational system $\varepsilon=(E,V,l,r)$ in the sense of Section 3.2; the encoding follows the structure of $\mathsf{EqType}$ judgements in Figure 3, using the $\eta$ constructor of $T_{\mathrm{\Sigma}}$ (13) for variables and the $\sigma$ constructor for $c_{1},\ldots,c_{m}$ introduced by EqConstr in $\mathrm{\Theta}$. We illustrate the encoding by example, beginning with the three examples from the Introduction.

###### Example (Finite multisets). The element constructors of the QIT, $\mathsf{Bag}\,X$, of finite multisets over $X:\mathcal{U}$ in (1) are encoded exactly as the W-type for List over $X$: we take $A:\mathcal{U}$ to be $\mathbb{1}+X$, where $\iota_{1}\,0$ corresponds to $[]$ and $\iota_{2}\,x$ corresponds to $x\mathrel{::}\_$ for each $x:X$. The arity of $[]$ is zero, and the arity of each $x\mathrel{::}\_$ is one; so we take $B:A\rightarrow\mathcal{U}$ to be the function mapping $\iota_{1}\,0$ to $\mathbb{0}$ and each $\iota_{2}\,x$ to $\mathbb{1}$.
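On the underlying list representation, the computation rules for $\mathsf{Bagelim}$ shown above are just a fold, which can be mirrored in ordinary code. A minimal Python sketch, ours and purely illustrative: in the type theory $\mathit{resp}$ is a proof that $\mathit{cons}$ respects $\mathsf{swap}$ and has no computational content, so here it appears only as a comment.

```python
def bag_elim(nil, cons, xs):
    """Mirror of the computation rules for Bagelim on plain lists:
       bag_elim nil cons []        = nil
       bag_elim nil cons (x :: xs) = cons x (xs, bag_elim nil cons xs)
    Well-definedness on multisets needs the (omitted) `resp` coherence:
       cons x (y::zs, cons y (zs, p)) == cons y (x::zs, cons x (zs, p)).
    """
    if not xs:
        return nil
    x, rest = xs[0], xs[1:]
    return cons(x, (rest, bag_elim(nil, cons, rest)))

# Multiset cardinality: the step ignores the order of elements, so the
# swap coherence holds trivially and the result is a Bag-invariant.
size = lambda xs: bag_elim(0, lambda x, ih: 1 + ih[1], xs)
assert size([2, 1, 2]) == 3
```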
The $\mathsf{swap}$ equality constructor is parametrised by elements of $E\mathrel{\smash{\overset{\text{\tiny def}}{=}}}X\times X$ and for each $(x,y):E$, $\mathsf{swap}\,(x,y)$ yields an equation involving a single free variable (called $\mathit{zs}:\mathsf{Bag}\,X$ in (1)); so we define $V\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\lambda\,\_\mathbin{.}\mathbb{1}:E\rightarrow\mathcal{U}$. Each side of the equation named by $\mathsf{swap}\,(x,y)$ is coded by an element of $T_{\mathrm{\Sigma}}\,(V\,(x,y))=T_{\mathrm{\Sigma}}\,\mathbb{1}$. Recalling the definition of $T_{\mathrm{\Sigma}}$ from Section 3.1, the single free variable, $\mathit{zs}$, corresponds to $\eta\,0:T_{\mathrm{\Sigma}}\,\mathbb{1}$. Then the left-hand side of the equation, $x\mathrel{::}y\mathrel{::}\mathit{zs}$, is encoded as $\sigma\,(\iota_{2}\,x,(\lambda\,\_\mathbin{.}\sigma\,(\iota_{2}\,y,(\lambda\,\_\mathbin{.}\eta\,0))))$, and similarly the right-hand side, $y\mathrel{::}x\mathrel{::}\mathit{zs}$, is encoded as $\sigma\,(\iota_{2}\,y,(\lambda\,\_\mathbin{.}\sigma\,(\iota_{2}\,x,(\lambda\,\_\mathbin{.}\eta\,0))))$. So altogether, the encoding of $\mathsf{Bag}\,X$ as a QW-type uses the non-indexed signature $\mathrm{\Sigma}=(A,B)$ and equational system $\varepsilon=(E,V,l,r)$, where:

$\displaystyle A$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathbb{1}+X$ $\displaystyle\qquad E$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}X\times X$

$\displaystyle B(\iota_{1}\,0)$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathbb{0}$ $\displaystyle\qquad V(x,y)$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathbb{1}$

$\displaystyle B(\iota_{2}\,x)$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathbb{1}$ $\displaystyle\qquad l(x,y)$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sigma(\iota_{2}\,x,(\lambda\,\_\mathbin{.}\,\sigma(\iota_{2}\,y,(\lambda\,\_\mathbin{.}\,\eta\,0))))$

$\displaystyle r(x,y)$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\sigma(\iota_{2}\,y,(\lambda\,\_\mathbin{.}\,\sigma(\iota_{2}\,x,(\lambda\,\_\mathbin{.}\,\eta\,0))))$

###### Example (Length-indexed multisets). The QWI-type encoding the QIT $\mathsf{AbVec}\,X$ of length-indexed multisets in (3) is an indexed version of the previous example, using the index type $I\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathbb{N}$. The indexed signature $\mathrm{\Sigma}=(\mathbb{N},A,B)$ has $A:\mathcal{U}^{\mathbb{N}}$ and $B:\prod_{i:\mathbb{N}}(A_{i}\rightarrow\mathcal{U}^{\mathbb{N}})$ given by:

$\displaystyle A_{0}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathbb{1}$ $\displaystyle\qquad A_{i+1}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}X$ $\displaystyle\qquad B_{0}\,0\,j$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathbb{0}$ $\displaystyle\qquad B_{i+1}\,x\,j$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}(i=j)$

The indexed system of equations $\varepsilon=(\mathbb{N},E,V,l,r)$ over $\mathrm{\Sigma}$ has $E:\mathcal{U}^{\mathbb{N}}$, $V:\prod_{i:\mathbb{N}}(E_{i}\rightarrow\mathcal{U}^{\mathbb{N}})$ and $l,r:\prod_{i:\mathbb{N}}\prod_{e:E_{i}}(T_{\mathrm{\Sigma}}(V_{i}\,e))_{i}$ given by: $\displaystyle E_{0}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathbb{0}$ $\displaystyle E_{1}$ $\displaystyle\mathrel{\smash{\overset{\text{\tiny def}}{=}}}\mathbb{0}$
# A Clinical Dataset for the Evaluation of Motion Planners in Medical Applications

Inbar Fried1,2, Jason A. Akulian3, and Ron Alterovitz1

This research was supported by the U.S. National Institutes of Health (NIH) under awards R01EB024864 and F30CA265234, and by the National Science Foundation (NSF) under awards 2008475 and 2038855. The authors acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study. The MR brain images from healthy volunteers used in this paper were collected and made available by the CASILab at The University of North Carolina at Chapel Hill and were distributed by the MIDAS Data Server at Kitware, Inc.

1 I. Fried and R. Alterovitz are with the Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA. {ifried01<EMAIL_ADDRESS>2 I. Fried is also with the Medical Scientist Training Program, University of North Carolina School of Medicine, Chapel Hill, NC 27599, USA. 3 J. A. Akulian is with the Division of Pulmonary Diseases and Critical Care Medicine at the University of North Carolina at Chapel Hill, NC 27599, USA<EMAIL_ADDRESS>

###### Abstract

The prospect of using autonomous robots to enhance the capabilities of physicians and enable novel procedures has led to considerable efforts in developing medical robots and incorporating autonomous capabilities. Motion planning is a core component for any such system working in an environment that demands near-perfect levels of safety, reliability, and precision. Despite the extensive and promising work that has gone into developing motion planners for medical robots, a standardized and clinically-meaningful way to compare existing algorithms and evaluate novel planners and robots is not well established. We present the Medical Motion Planning Dataset (Med-MPD), a publicly-available dataset of real clinical scenarios in various organs for the purpose of evaluating motion planners for minimally-invasive medical robots. Our goal is that this dataset serve as a first step towards creating a larger robust medical motion planning benchmark framework, advance research into medical motion planners, and lift some of the burden of generating medical evaluation data.

## I Introduction

Automation of medical robots for clinical procedures or subtasks is increasingly being shown to be feasible. Achieving autonomy in interventional medical procedures offers substantial potential benefits for patient care and hospital efficiency. Much like teleoperated medical robots, such as the da Vinci (Intuitive Surgical Inc., Sunnyvale, CA), can compensate for physician fatigue and hand instability, autonomous medical robotics can further improve and standardize patient care by accounting for inter- and intra-physician variability while also focusing the physician’s time on sub-tasks that require their expertise. However, beyond the technical challenges that exist in hardware and software, a critical, if not the most important, challenge in these systems is making them safe and reliable. To address these challenges and still benefit from the advantages of automation, integrating motion planning into medical robots to ensure safe motions is essential. One class of medical robots that has been studied extensively over the past two decades is medical continuum robots, which include, for example, concentric tube robots and steerable needles [1].
Many mechanical designs have been proposed for these devices, but at their core, medical continuum robots can follow curvilinear trajectories in 3D, allowing them to curve around obstacles and access regions of the anatomy that are otherwise inaccessible when using straight rigid tools. The potential benefit of these devices has been explored in numerous organs and for various medical procedures. The complex kinematics of these devices, in conjunction with the precision required for safe medical procedures, make their manual operation unintuitive and impractical. To overcome this challenge, autonomous robots have been proposed that actuate the medical continuum robot following a planned trajectory. Despite the numerous motion planners that have been proposed for medical continuum robots, to the best of our knowledge, a benchmarking dataset to evaluate the performance of these algorithms does not exist. The lack of a shared benchmarking resource has led each research group to generate its own testing data, which is often a time-intensive effort. Since the motion planners have been tested in various organs, in different anatomical models of those organs, and likely with different obstacle resolutions, it is difficult to properly assess the benefits and drawbacks of each proposed motion planning approach and to compare motion planners. To help evaluate the benefits of robot automation in medicine, it is important to have benchmarks that can robustly and equitably evaluate the performance of algorithms in clinically relevant scenarios. In this work, we propose Med-MPD, a medical benchmarking dataset consisting of real clinical motion planning environments for assessing motion planners for medical continuum robots and related minimally-invasive medical robots. The data includes benchmark scenarios defined by the relevant anatomy and the clinical problem in the lungs, liver, and brain. We make Med-MPD publicly available at https://github.com/UNC-Robotics/Med-MPD.

## II Related Work

There are several robotics datasets and benchmarking suites that have been developed specifically to allow robust evaluation of motion planners [2, 3, 4, 5] (and citations within). These works focus on non-medical robots. Several medical robotics datasets have also been published, but these mostly focus on computer vision problems like tool or anatomy segmentation, physician training or assessment, and object manipulation, rather than on motion planning [6, 7, 8, 9]. Several works have proposed simulators for medical robots [10, 11, 12], but there is no set benchmarking dataset with which to compare different motion planning algorithms. A variety of algorithms for medical continuum robots have been proposed that encompass various organs and clinical applications. Within the broad class of medical continuum robots, there has been substantial work in developing motion planners for steerable needles in the lungs [13, 14, 15], liver [16, 17], prostate [18, 19, 20, 21], and brain [22, 23, 24]. It is difficult to effectively compare these motion planning algorithms since they span different organs and different instances of these organs.

## III Med-MPD

Med-MPD contains anatomical environments and specifications of clinically relevant scenarios in the lungs, liver, and brain. These three organs have received considerable research attention from the continuum medical robot motion planning community, especially for steerable needles.
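Before turning to the individual organs, the following Python sketch shows how the binary obstacle maps described below might be loaded and used to score a candidate plan against the path-length and clearance criteria of Sec. III-D. This is our own illustration, assuming NumPy/SciPy; the file name, array layout, and voxel spacing are hypothetical and not part of the dataset's actual API.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical environment: a 3D boolean array (True = obstacle voxel)
# saved from one of the segmentations, with per-axis voxel size in mm.
obstacles = np.load("lung_case01_obstacles.npy")   # shape (X, Y, Z), bool
spacing = np.array([0.7, 0.7, 1.0])                # mm per voxel (assumed)

# Euclidean distance (in mm) from every free voxel to the nearest obstacle.
clearance_map = distance_transform_edt(~obstacles, sampling=spacing)

def score_plan(path_mm):
    """Path length and obstacle-clearance statistics for a polyline plan.

    path_mm: (N, 3) array of waypoints in millimetres, assumed collision-free.
    """
    length = np.linalg.norm(np.diff(path_mm, axis=0), axis=1).sum()
    idx = np.round(path_mm / spacing).astype(int)
    idx = np.clip(idx, 0, np.array(obstacles.shape) - 1)
    c = clearance_map[idx[:, 0], idx[:, 1], idx[:, 2]]
    return {"path_length_mm": length,
            "clearance_min_mm": c.min(),
            "clearance_mean_mm": c.mean(),
            "clearance_median_mm": np.median(c)}
```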
The description of each environment, the clinical motivation, and several relevant evaluation criteria are presented below. Although each organ presents various pathologies, each defined by a different clinical objective, the planning problem we consider is reaching a target. This objective encompasses many clinical procedures, including biopsy, ablation, and drug delivery. Future iterations of the data could be adapted to evaluate motion planners for scenarios where the objective is different, such as manipulation at the target. We represent the environments as three-dimensional binary maps that indicate the presence or absence of obstacles at each corresponding voxel location in the original medical image. This environmental representation is used by many of the medical robot motion planners referenced above. We also provide a collection of clinically-motivated start poses in each environment, along with target points that correspond to true clinical targets.

### III-A Lungs

It is estimated that roughly one million pulmonary nodules are discovered every year in the United States. In order to get a definitive diagnosis for these nodules, a tissue biopsy is required. There are several methods to reach lung nodules, but the least invasive and safest approach is via bronchoscopy, in which a physician navigates a bronchoscope through the airways and inserts a needle into the lung tissue towards the target. Since physicians currently perform the biopsy with straight rigid tools, which are limited in reach and access, there have been efforts to use flexible steerable needles to overcome some of the existing challenges and increase the number of patients for which bronchoscopy can be used [25]. Delivery of the robot can be done through the working channel of a bronchoscope. At a high level, the anatomy of the lungs consists of the airways, major blood vessels, and the pleura (lung boundary) (see Figure 1). The remaining space inside the lung (known as the parenchyma) is composed of functional tissue and is the location where lung nodules that may be suspicious for cancer often present. We present 5 motion planning scenarios in the lungs that reflect real clinical scenarios of patients with lung nodules. The data is part of the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) image collection [26, 27] from The Cancer Imaging Archive (TCIA) [28]. Each environment consists of obstacles including the airways, major blood vessels, lung fissures, and pleura, and segmented nodules in the parenchyma. The fissures and nodules were manually segmented while all other objects were automatically segmented [29]. The start poses correspond to areas along the airway wall that are accessible with a bronchoscope through which a medical robot can be passed. An example plan for a flexible steerable needle is shown in Figure 1.

Figure 1: A representative lung environment consisting of blood vessels, lung fissures, bronchial tree, and pleural boundary. A nodule (target) is shown on the right along with a sample planned trajectory. The steerable needle starts at a valid point along the airway wall and can travel through the space within the pleural boundary that is not occupied by obstacles.

### III-B Liver

Currently, percutaneous liver biopsies, where a physician inserts a needle through the abdominal wall and into the liver, are most commonly performed with straight rigid tools.
The mechanical constraints of existing devices make it difficult to reach posterior sites that are obstructed by critical anatomy. Additionally, when multiple targets exist, a physician needs to re-insert the needle for each target. Medical devices such as steerable needles that are able to curve around obstacles and reach multiple sites from a single point-of-entry can alleviate some of these clinical challenges. Similar to the lungs, the liver motion planning environment consists of major blood vessels and the organ boundary. The remaining space within the liver (also referred to as the parenchyma) is traversable. We present 5 motion planning environments in the liver, in patients with hepatocellular carcinoma, with segmentations of relevant obstacles. The data is derived from the Hepatocellular Carcinoma Transarterial Chemoembolization Segmentation (HCC-TACE-Seg) dataset [30, 31] from TCIA [28]. A sample liver planning environment is shown in Figure 2.

Figure 2: A liver environment from the dataset showing the segmented hepatic arteries, hepatic veins, portal vein, liver boundary, nodule (target), and a sample trajectory. The three boxes on the bottom show the view in the CT slices (transverse planes).

### III-C Brain

The brain is one of the most complex organs in the body, with nearly every portion of tissue critical to some physiologic function. From a planning perspective, while obvious obstacles such as blood vessels and ventricles exist, there are many other regions of the brain and properties of the tissue that are important to consider. For example, the directionality of white matter fibers, which can be analyzed via tractography, can play an important role in evaluating trajectories through the brain. Given the density of critical regions in the brain and their fragility, medical robots and motion planning algorithms that consider and account for these constraints can have a large impact in this domain. We include 5 motion planning environments in the brain where the targets are the globi pallidi for deep brain stimulation. We consider blood vessels and ventricles as traditional obstacles, whereas all other segmentations of brain regions can be assigned a cost since some subset of them must be traversed. White matter fiber tracts are not currently included in the data. The data is part of the Healthy MR Database [32]. Blood vessels were manually segmented and all other structures were segmented using FastSurfer [33]. A sample environment is depicted in Figure 3.

Figure 3: A brain environment from the dataset showing the segmented blood vessels, lateral ventricles (without the temporal horn), globi pallidi, brain boundary, a sample trajectory, and various segmented regions of the brain (left: multiple colors).

### III-D Evaluation Criteria

In order to evaluate and compare the performance of different motion planning algorithms, we describe several criteria that are relevant to many medical applications. This is a general and non-exhaustive list of relevant criteria, and in many cases, each organ and clinical application has domain-specific considerations that would be valuable to use as evaluation metrics. The following metrics can be reported for a single clinical target or as a statistic across a collection of clinical targets in the data (see the scoring sketch in Sec. III above).

* • Path Length: the length of the collision-free motion plan from the start pose to the target.
* • Computation Time: the amount of computation time that the motion planner took to find a kinematically-feasible collision-free plan from the start pose to the target prior to the procedure.
* • Replanning Time: the amount of time that the motion planner took to find a kinematically-feasible collision-free plan from its current intraoperative pose to the target following a random deviation event.
* • Obstacle Clearance Statistics: the minimum, mean, and median of the Euclidean distances between every pose along the motion plan and its nearest obstacle.
* • Procedure Success (clinical targets): the percentage of clinical targets that the motion planner was able to successfully plan to.
* • Coverage of the Anatomy (random targets): the percentage of random goals that the motion planner was able to successfully plan to. This is an approximation for the motion planner’s ability to generalize to any target.

## IV Discussion

In this work, we proposed Med-MPD, a new medical benchmarking dataset for motion planners for medical continuum robots and related minimally invasive medical robots. In its current state, Med-MPD is a stand-alone collection of clinically-relevant motion planning scenarios. It is our hope to extend the benchmarking suite in the various ways described throughout this paper, as well as to integrate it directly into an existing motion planning framework. We also hope to expand the provided start poses to include start regions from which motion planning can begin. This would introduce an interesting and medically relevant problem where the choice of start pose is itself a motion planning challenge that can be optimized as part of the procedure. We also hope to introduce uncertainty into the data by implementing methods that allow for target and obstacle deviation during a medical procedure. Uncertainty is highly likely to have an impact during a procedure because of the deformable nature of organ tissue. Ideally, the environmental uncertainty would be incorporated into a simulator that would consider a robot’s model and differential constraints to enable more realistic evaluations. Since the data is not directly tied to medical continuum robots, the clinical environments can be used to evaluate other existing and novel medical robots. It is our intention that this dataset be used to advance research in motion planning for autonomous medical robots towards the ultimate goal of leveraging these systems to improve patient care.

## References

* [1] R. J. Webster III and B. A. Jones, “Design and kinematic modeling of constant curvature continuum robots: A review,” _The International Journal of Robotics Research_, vol. 29, no. 13, pp. 1661–1683, 2010.
* [2] C. Chamzas, C. Quintero-Pena, Z. Kingston, A. Orthey, D. Rakita, M. Gleicher, M. Toussaint, and L. E. Kavraki, “MotionBenchMaker: A tool to generate and benchmark motion planning datasets,” _IEEE Robotics and Automation Letters_, vol. 7, no. 2, pp. 882–889, 2021.
* [3] M. Moll, I. A. Sucan, and L. E. Kavraki, “Benchmarking motion planning algorithms: An extensible infrastructure for analysis and visualization,” _IEEE Robotics & Automation Magazine_, vol. 22, no. 3, pp. 96–102, 2015.
* [4] E. Heiden, L. Palmieri, L. Bruns, K. O. Arras, G. S. Sukhatme, and S. Koenig, “Bench-MR: A motion planning benchmark for wheeled mobile robots,” _IEEE Robotics and Automation Letters_, vol. 6, no. 3, pp. 4536–4543, 2021.
* [5] L. Kästner, T. Bhuiyan, T. A. Le, E. Treis, J. Cox, B. Meinardus, J. Kmiecik, R. Carstens, D. Pichel, B.
Fatloun, _et al._, “Arena-Bench: A benchmarking suite for obstacle avoidance approaches in highly dynamic environments,” _IEEE Robotics and Automation Letters_, vol. 7, no. 4, pp. 9477–9484, 2022.
* [6] Y. Gao, S. S. Vedula, C. E. Reiley, N. Ahmidi, B. Varadarajan, H. C. Lin, L. Tao, L. Zappella, B. Béjar, D. D. Yuh, _et al._, “JHU-ISI gesture and skill assessment working set (JIGSAWS): A surgical activity dataset for human motion modeling,” in _MICCAI Workshop: M2CAI_, vol. 3, no. 3, 2014.
* [7] P. Mountney, D. Stoyanov, and G.-Z. Yang, “Three-dimensional tissue deformation recovery and tracking,” _IEEE Signal Processing Magazine_, vol. 27, no. 4, pp. 14–24, 2010.
* [8] N. Ahmidi, L. Tao, S. Sefati, Y. Gao, C. Lea, B. B. Haro, L. Zappella, S. Khudanpur, R. Vidal, and G. D. Hager, “A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery,” _IEEE Transactions on Biomedical Engineering_, vol. 64, no. 9, pp. 2025–2041, 2017.
* [9] N. Madapana, M. M. Rahman, N. Sanchez-Tamayo, M. V. Balakuntala, G. Gonzalez, J. P. Bindu, L. V. Venkatesh, X. Zhang, J. B. Noguera, T. Low, _et al._, “DESK: A robotic activity dataset for dexterous surgical skills transfer to medical robots,” in _IEEE/RSJ International Conference on Intelligent Robots and Systems_, 2019, pp. 6928–6934.
* [10] N. Chentanez, R. Alterovitz, D. Ritchie, L. Cho, K. K. Hauser, K. Goldberg, J. R. Shewchuk, and J. F. O’Brien, “Interactive simulation of surgical needle insertion and steering,” in _ACM SIGGRAPH_, 2009, pp. 1–10.
* [11] T. Jianu, B. Huang, M. E. Abdelaziz, M. N. Vu, S. Fichera, C.-Y. Lee, P. Berthet-Rayne, A. Nguyen, _et al._, “CathSim: An open-source simulator for autonomous cannulation,” _arXiv preprint arXiv:2208.01455_, 2022.
* [12] R. Dreyfus, Q. Boehler, and B. J. Nelson, “A simulation framework for magnetic continuum robots,” _IEEE Robotics and Automation Letters_, vol. 7, no. 3, pp. 8370–8376, 2022.
* [13] A. Kuntz, L. G. Torres, R. H. Feins, R. J. Webster III, and R. Alterovitz, “Motion planning for a three-stage multilumen transoral lung access system,” _IEEE/RSJ International Conference on Intelligent Robots and Systems_, pp. 3255–3261, 2015.
* [14] J. Hoelscher, M. Fu, I. Fried, M. Emerson, T. E. Ertop, M. Rox, A. Kuntz, J. A. Akulian, R. J. Webster III, and R. Alterovitz, “Backward planning for a multi-stage steerable needle lung robot,” _IEEE Robotics and Automation Letters_, vol. 6, no. 2, pp. 3987–3994, 2021.
* [15] M. Fu, K. Solovey, O. Salzman, and R. Alterovitz, “Resolution-optimal motion planning for steerable needles,” in _IEEE International Conference on Robotics and Automation_, 2022, pp. 9652–9659.
* [16] T. K. Adebar, J. D. Greer, P. F. Laeseke, G. L. Hwang, and A. M. Okamura, “Methods for improving the curvature of steerable needles in biological tissue,” _IEEE Transactions on Biomedical Engineering_, vol. 63, no. 6, pp. 1167–1177, 2015.
* [17] F. Liu, A. Garriga-Casanovas, R. Secoli, and F. R. y Baena, “Fast and adaptive fractal tree-based path planning for programmable bevel tip steerable needles,” _IEEE Robotics and Automation Letters_, vol. 1, pp. 601–608, 2016.
* [18] J. Xu, V. Duindam, R. Alterovitz, and K. Goldberg, “Motion planning for steerable needles in 3D environments with obstacles using rapidly-exploring random trees and backchaining,” in _IEEE International Conference on Automation Science and Engineering_, 2008, pp. 41–46.
* [19] J. van den Berg, S. Patil, R. Alterovitz, P. Abbeel, and K.
Goldberg, “LQG-based planning, sensing, and control of steerable needles,” in _Algorithmic Foundations of Robotics IX_. Springer, 2010, pp. 373–389.
* [20] S. Patil and R. Alterovitz, “Interactive motion planning for steerable needles in 3D environments with obstacles,” in _IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics_, 2010, pp. 893–899.
* [21] M. C. Bernardes, B. V. Adorno, G. A. Borges, and P. Poignet, “3D robust online motion planning for steerable needles in dynamic workspaces using duty-cycled rotation,” _Journal of Control, Automation and Electrical Systems_, vol. 25, no. 2, pp. 216–227, 2014.
* [22] M. Pinzi, S. Galvan, and F. Rodriguez y Baena, “The adaptive hermite fractal tree (AHFT): a novel surgical 3D path planning approach with curvature and heading constraints,” _International Journal of Computer Assisted Radiology and Surgery_, vol. 14, no. 4, pp. 659–670, 2019.
* [23] A. Favaro, L. Cerri, S. Galvan, F. R. Y. Baena, and E. De Momi, “Automatic optimized 3D path planner for steerable catheters with heuristic search and uncertainty tolerance,” in _IEEE International Conference on Robotics and Automation_, 2018, pp. 9–16.
* [24] A. Segato, V. Pieri, A. Favaro, M. Riva, A. Falini, E. De Momi, and A. Castellano, “Automated steerable path planning for deep brain stimulation safeguarding fiber tracts and deep gray matter nuclei,” _Frontiers in Robotics and AI_, vol. 6, p. 70, 2019.
* [25] I. Fried, J. Hoelscher, M. Fu, M. Emerson, T. E. Ertop, M. Rox, J. Granna, A. Kuntz, J. A. Akulian, R. J. Webster III, and R. Alterovitz, “Design considerations for a steerable needle robot to maximize reachable lung volume,” in _IEEE International Conference on Robotics and Automation_, 2021.
* [26] S. G. Armato III, G. McLennan, L. Bidaut, M. F. McNitt-Gray, C. R. Meyer, A. P. Reeves, B. Zhao, D. R. Aberle, C. I. Henschke, E. A. Hoffman, _et al._, “The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans,” _Medical Physics_, vol. 38, no. 2, pp. 915–931, 2011.
* [27] ——, “Data from LIDC-IDRI,” _The Cancer Imaging Archive_, 2015.
* [28] K. Clark, B. Vendt, K. Smith, J. Freymann, J. Kirby, P. Koppel, S. Moore, S. Phillips, D. Maffitt, M. Pringle, _et al._, “The cancer imaging archive (TCIA): maintaining and operating a public information repository,” _Journal of Digital Imaging_, vol. 26, no. 6, pp. 1045–1057, 2013.
* [29] M. Fu, A. Kuntz, R. J. Webster, and R. Alterovitz, “Safe motion planning for steerable needles using cost maps automatically extracted from pulmonary images,” in _IEEE/RSJ International Conference on Intelligent Robots and Systems_, 2018, pp. 4942–4949.
* [30] A. Morshid, K. M. Elsayes, A. M. Khalaf, M. M. Elmohr, J. Yu, A. O. Kaseb, M. Hassan, A. Mahvash, Z. Wang, J. D. Hazle, _et al._, “A machine learning model to predict hepatocellular carcinoma response to transcatheter arterial chemoembolization,” _Radiology: Artificial Intelligence_, vol. 1, no. 5, 2019.
* [31] A. W. Moawad, D. Fuentes, A. Morshid, A. M. Khalaf, M. M. Elmohr, A. Abusaif, J. D. Hazle, A. O. Kaseb, M. Hassan, A. Mahvash, J. Szklaruk, A. Qayyom, and K. M. Elsayes, “Multimodality annotated HCC cases with and without advanced imaging segmentation [data set],” _The Cancer Imaging Archive_, 2021.
* [32] E. Bullitt, D. Zeng, G. Gerig, S. Aylward, S. Joshi, J. K. Smith, W. Lin, and M. G.
Ewend, “Vessel tortuosity and brain tumor malignancy: a blinded study,” _Academic Radiology_, vol. 12, no. 10, pp. 1232–1240, 2005.
* [33] L. Henschel, S. Conjeti, S. Estrada, K. Diers, B. Fischl, and M. Reuter, “FastSurfer – a fast and accurate deep learning based neuroimaging pipeline,” _NeuroImage_, vol. 219, p. 117012, 2020.
# Inflation and reheating in quadratic metric-affine gravity with derivative couplings

Ioannis D. Gialamas, Theodoros Katsoulas and Kyriakos Tamvakis

###### Abstract

Within the framework of metric-affine theories of gravity, where both the metric and connection are treated as independent variables, we consider actions quadratic in the Ricci scalar curvature coupled non-minimally to a scalar field through derivative couplings. Our analysis delves into the inflationary predictions, revealing their consistency with the latest observational constraints across a wide range of parameters. This compatibility permits adjustments such as an increase in the spectral index and a reduction in the tensor-to-scalar ratio. While we do not propose a specific reheating mechanism, our analysis demonstrates that within the quadratic model of inflation, the maximum reheating temperature can reach $\sim 3\times 10^{15}\;\mathrm{GeV}$.

## 1 Introduction

Cosmological inflation [1, 2, 3, 4] offers a natural explanation of how quantum fluctuations of gravitational and matter fields can be promoted to the cosmological perturbations [5, 6, 7, 8, 9, 10] that are the origin of the large-scale structure of the Universe. The standard way of realizing inflation is through the vacuum energy of a scalar degree of freedom (the inflaton), either introduced as a fundamental field or as part of gravity itself. In both cases non-minimal couplings of the inflaton to curvature are expected to be present as corrections to the classical general relativity (GR) action, arising from the quantum interactions of gravitating matter fields. Such are couplings of the scalar field to the Ricci scalar curvature $\sim f(\phi){R}$ [11] or derivative couplings to the Ricci tensor $\sim(\partial_{\mu}\phi\partial_{\nu}\phi)R^{\mu\nu}$ [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Quadratic terms of the curvature invariants [31] are also expected to modify the classical action, a particular case being that of a quadratic Ricci scalar term $\sim R^{2}$ giving rise to the Starobinsky model of inflation [32]. The standard metric formulation of GR, where the connection is given by the Levi-Civita relation in terms of the metric, is known to be equivalent to the so-called metric-affine formulation, where the connection is an independent variable. Nevertheless, this equivalence ceases to be true if we depart from the Einstein-Hilbert form of the action and consider modifications such as non-minimal couplings of scalar fields to the curvature or quadratic curvature terms. In the general framework of metric-affine theories of gravity, while general covariance is preserved, the connection is promoted to a variable independent of the metric. The difference of the independent connection and the Levi-Civita one is the so-called distortion tensor, which, in cases where it can be integrated out, leads to a resulting effective metric theory with or without additional dynamical degrees of freedom. A characteristic example is the case of the metric-affine (Palatini) version of the ${\cal{R}}^{2}$ model [33] which, in contrast to the analogous metric model, does not predict any propagating scalar mode, although, when coupled to a scalar field, it gives rise to a characteristic inflationary plateau independently of the scalar self-interaction potential.
This general feature of metric-affine ${\cal{R}}^{2}$ models [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76] is maintained in the presence of non-minimal $f(\phi){\cal{R}}$ couplings [77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120]. Nevertheless, the effect of derivative couplings $\sim(\partial\phi)^{2}{\cal{R}}$ or $\sim(\partial_{\mu}\phi\partial_{\nu}\phi)\mathcal{R}^{\mu\nu}$ on this behaviour [121, 122], being an open issue, is the main focus of the present paper. As working examples we consider the case of a scalar field with a quadratic potential and the case of a scalar field with a quartic self-interaction. In both models the inclusion of the derivative couplings can increase the value of the spectral index. In the absence of the $\mathcal{R}^{2}$ term the tensor-to-scalar ratio, $r$, is reduced, but the reduction is not enough to render the models compatible with the observational data. The largest reduction of $r$ comes from the inclusion of the $\mathcal{R}^{2}$ term [36, 38]. The effect of the derivative couplings in combination with the $\mathcal{R}^{2}$ term can rescue both models, aligning their inflationary predictions more closely with the latest observational constraints. We also analyze the issue of reheating in the simpler case of the quadratic model and, without adopting any specific reheating mechanism, we find that the maximum reheating temperature, $T_{\rm ins}$, is of order $\sim 3\times 10^{15}\;\mathrm{GeV}$, with small deviations as the derivative coupling parameter $\tilde{\alpha}$ varies, for small values of the ${\cal{R}}^{2}$-parameter $\beta$. For large $\beta$ the maximum reheating temperature is independent of $\tilde{\alpha}$ and behaves as $\beta^{-1/4}$. The outline of the paper is as follows. In section 2 we set up the theoretical framework of metric-affine gravity with derivative couplings and quadratic in curvature terms. Section 3 discusses the inflationary predictions of the quadratic and quartic models. The reheating temperature is computed in section 4. Finally, we summarize and conclude in section 5. Throughout the paper, we adopt the mostly plus signature for the metric and we use natural units, setting the reduced Planck mass $M_{\rm P}$ to one.

## 2 The model

Metric-affine theories of gravity treat the metric tensor $g_{\mu\nu}$ and the connection $\tensor{\Gamma}{{}^{\lambda}_{\mu}{}_{\nu}}$ as independent variables. The independent connection can be decomposed as $\tensor{\Gamma}{{}^{\lambda}_{\mu}{}_{\nu}}=\{\tensor{}{{}^{\lambda}_{\mu}{}_{\nu}}\}+\tensor{\mathcal{C}}{{}_{\mu}^{\lambda}{}_{\nu}}\,,$ (2.1) where $\{\tensor{}{{}^{\lambda}_{\mu}{}_{\nu}}\}$ is the usual Levi-Civita connection and $\tensor{\mathcal{C}}{{}^{\lambda}_{\mu}{}_{\nu}}$ is the so-called distortion tensor.
The Riemann tensor is given by $\displaystyle\tensor{\mathcal{R}}{{}^{\alpha}_{\beta}{}_{\gamma}{}_{\delta}}$ $\displaystyle=\partial_{\gamma}\tensor{\Gamma}{{}^{\alpha}_{\delta}{}_{\beta}}-\partial_{\delta}\tensor{\Gamma}{{}^{\alpha}_{\gamma}{}_{\beta}}+\tensor{\Gamma}{{}^{\alpha}_{\gamma}{}_{\mu}}\tensor{\Gamma}{{}^{\mu}_{\delta}{}_{\beta}}-\tensor{\Gamma}{{}^{\alpha}_{\delta}{}_{\mu}}\tensor{\Gamma}{{}^{\mu}_{\gamma}{}_{\beta}}$ $\displaystyle=\tensor{R}{{}^{\alpha}_{\beta}{}_{\gamma}{}_{\delta}}+\nabla_{\gamma}\tensor{\mathcal{C}}{{}_{\delta}^{\alpha}{}_{\beta}}-\nabla_{\delta}\tensor{\mathcal{C}}{{}_{\gamma}^{\alpha}{}_{\beta}}+\tensor{\mathcal{C}}{{}_{\gamma}^{\alpha}{}_{\mu}}\tensor{\mathcal{C}}{{}_{\delta}^{\mu}{}_{\beta}}-\tensor{\mathcal{C}}{{}_{\delta}^{\alpha}{}_{\mu}}\tensor{\mathcal{C}}{{}_{\gamma}^{\mu}{}_{\beta}}\,,$ (2.2) where $\tensor{R}{{}^{\alpha}_{\beta}{}_{\gamma}{}_{\delta}}$ is the metric Riemann tensor constructed from the metric and $\nabla$ is the covariant derivative in terms of the Levi-Civita connection. The only symmetry of $\tensor{\mathcal{R}}{{}^{\alpha}_{\beta}{}_{\gamma}{}_{\delta}}$ is the antisymmetry under the interchange of the last two indices. As a result, there are three non-zero contractions given by $\tensor{\mathcal{R}}{{}_{\mu}{}_{\nu}}=\tensor{\mathcal{R}}{{}^{\rho}_{\mu}{}_{\rho}{}_{\nu}}\,,\qquad\tensor{\widehat{\mathcal{R}}}{{}^{\mu}_{\nu}}=g^{\alpha\beta}\tensor{\mathcal{R}}{{}^{\mu}_{\alpha}{}_{\beta}{}_{\nu}}\,,\qquad\tensor{\tilde{\mathcal{R}}}{{}_{\mu}{}_{\nu}}=\tensor{\mathcal{R}}{{}^{\alpha}_{\alpha}{}_{\mu}{}_{\nu}}\,,$ (2.3) called the Ricci, co-Ricci, and homothetic curvature tensor, respectively. There is a single Ricci scalar determined through an additional contraction of either the Ricci tensor or the co-Ricci tensor, expressed as follows: $\mathcal{R}=g^{\mu\nu}\tensor{\mathcal{R}}{{}_{\mu}{}_{\nu}}=-\tensor{\widehat{\mathcal{R}}}{{}^{\mu}_{\mu}}\,.$ (2.4) We consider the following metric-affine action of a scalar field $\phi$ coupled non-minimally to the Ricci scalar as well as the Ricci tensors through derivative couplings: $\displaystyle\mathcal{S}=\int{\rm d}^{4}x\sqrt{-g}\bigg{(}\frac{1}{2}(f(\phi)+\alpha_{1}X)\mathcal{R}-\frac{1}{2}K(\phi)X+\alpha_{2}\mathcal{R}^{\mu\nu}X_{\mu\nu}+\alpha_{3}\tensor{\widehat{\mathcal{R}}}{{}^{\mu}{}^{\nu}}X_{\mu\nu}+\frac{\beta}{4}\mathcal{R}^{2}-V(\phi)\bigg{)}\,,$ (2.5) where $X_{\mu\nu}=\partial_{\mu}\phi\partial_{\nu}\phi\qquad\text{and}\qquad X=g^{\mu\nu}X_{\mu\nu}\,.$ (2.6) There is no coupling of the homothetic curvature tensor $\tensor{\tilde{\mathcal{R}}}{{}_{\mu}{}_{\nu}}$ due to the antisymmetry of its indices. Note that in the metric case (i.e. $\tensor{\mathcal{C}}{{}_{\mu}^{\lambda}{}_{\nu}}=0$) there is no separate $\alpha_{3}$ coupling since $\widehat{R}_{\mu\nu}=-R_{\mu\nu}$. The functions $f(\phi)$, $K(\phi)$, and $V(\phi)$ represent non-minimal couplings, non-canonical kinetic terms, and the potential term of the scalar field, respectively. As we have remarked in the introduction, quadratic terms of the curvature are bound to be generated from quantum corrections due to gravitating matter fields. Nevertheless, general quadratic terms of the Riemann and Ricci tensors [123, 124, 55] are known to be associated with unphysical degrees of freedom [125, 126, 127, 128, 129, 130], in contrast to quadratic terms of the Ricci scalar, which are renowned for their reliable inflationary predictions [36, 38] both in the metric as well as the metric-affine case.
Therefore, in the action (2.5), we have also incorporated a quadratic Ricci scalar term. Note that the $\mathcal{R}^{2}$ term is not the only possible safe quadratic term that can be added to the action. A quadratic term constructed from the Holst invariant $\tensor{\epsilon}{{}_{\alpha}^{\beta}{}^{\gamma}{}^{\delta}}\tensor{\mathcal{R}}{{}^{\alpha}_{\beta}{}_{\gamma}{}_{\delta}}$ can also be incorporated. The implications of such terms for inflation have been investigated in studies such as [64, 67, 74]. Nevertheless, in the present study we concentrate on the presence of $\mathcal{R}^{2}$. The $\mathcal{R}^{2}$ term can be written in terms of the auxiliary scalar field $\chi$ as $\mathcal{R}^{2}=2\chi\mathcal{R}-\chi^{2}$, so the action (2.5) takes the form $\displaystyle\mathcal{S}=\int{\rm d}^{4}x\sqrt{-g}\bigg{(}\frac{1}{2}(\mathcal{F}(\phi,\chi)+\alpha_{1}X)\mathcal{R}-\frac{1}{2}K(\phi)X+\alpha_{2}\mathcal{R}^{\mu\nu}X_{\mu\nu}+\alpha_{3}\tensor{\widehat{\mathcal{R}}}{{}^{\mu}{}^{\nu}}X_{\mu\nu}-U(\phi,\chi)\bigg{)}\,,$ (2.7) with $\mathcal{F}(\phi,\chi)=f(\phi)+\beta\chi\qquad\text{and}\qquad U(\phi,\chi)=V(\phi)+\frac{\beta}{4}\chi^{2}\,.$ (2.8) To delve into the dynamics of inflation within the theory, it becomes necessary to rephrase the action in what is known as the Einstein frame. Actions constructed solely from the Ricci scalar can be brought to this frame via a Weyl rescaling (or conformal transformation); however, since our action (2.5) involves derivative couplings of the scalar field to the Ricci tensors, we need to employ a broader set of transformations, namely the so-called disformal transformations. These transformations are defined as (we have closely followed the notation of [122]) $g_{\mu\nu}=\gamma_{1}(\phi,\tilde{X})\tilde{g}_{\mu\nu}+\gamma_{2}(\phi,\tilde{X})X_{\mu\nu}\,,$ (2.9) where $\tilde{X}=\tilde{g}^{\mu\nu}X_{\mu\nu}$. The inverse transformation is $\tilde{g}_{\mu\nu}=\tilde{\gamma}_{1}(\phi,X)g_{\mu\nu}+\tilde{\gamma}_{2}(\phi,X)X_{\mu\nu}\,,$ (2.10) with $\tilde{\gamma}_{1}=1/\gamma_{1}$ and $\tilde{\gamma}_{2}=-\gamma_{2}/\gamma_{1}$, while the determinants are related by the equation $g=\tilde{g}\gamma_{1}^{3}(\phi,\tilde{X})(\gamma_{1}(\phi,\tilde{X})+\gamma_{2}(\phi,\tilde{X})\tilde{X})$. Using the relations for the inverse metrics $g^{\mu\nu}$ and $\tilde{g}^{\mu\nu}$ we obtain also that $X=\frac{\tilde{X}}{\gamma_{1}(\phi,\tilde{X})+\gamma_{2}(\phi,\tilde{X})\tilde{X}}\qquad\text{and}\qquad\tilde{X}=\frac{X}{\tilde{\gamma}_{1}(\phi,X)+\tilde{\gamma}_{2}(\phi,X)X}\,.$ (2.11) Following [122] we may replace the co-Ricci tensor with the so-called average Ricci tensor, defined as $\overline{\mathcal{R}}_{\mu\nu}=(\mathcal{R}_{\mu\nu}+\widehat{\mathcal{R}}_{\mu\nu})/2$, which vanishes if the connection is metric compatible. In general, metric-affine theories have non-zero torsion $T_{\mu\lambda\nu}=2\mathcal{C}_{[\mu|\lambda|\nu]}$ (see [131] for a recent review on gravitational theories with torsion) and non-metricity $\nabla_{\rho}g_{\mu\nu}=-2\mathcal{C}_{(\mu\nu)\rho}$. In the Einstein-Cartan gravity framework, non-metricity is assumed to be zero, whereas in Palatini gravity, the torsion is required to vanish. In the subsequent discussion, we will initially maintain the coupling to the average Ricci tensor. However, ultimately, we will omit it, focusing our analysis solely on the Einstein-Cartan case.
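As a short consistency check of eqs. (2.11) (an intermediate step we add here; it is not spelled out in the text): since $X_{\mu\nu}=\partial_{\mu}\phi\,\partial_{\nu}\phi$ is rank one, the inverse of (2.10) follows from the Sherman–Morrison formula,

$\tilde{g}^{\mu\nu}=\frac{1}{\tilde{\gamma}_{1}}\left(g^{\mu\nu}-\frac{\tilde{\gamma}_{2}\,X^{\mu\nu}}{\tilde{\gamma}_{1}+\tilde{\gamma}_{2}X}\right)\,,\qquad X^{\mu\nu}\equiv g^{\mu\alpha}g^{\nu\beta}X_{\alpha\beta}\,,$

and contracting with $X_{\mu\nu}$ (using $X^{\mu\nu}X_{\mu\nu}=X^{2}$ for a rank-one $X_{\mu\nu}$) gives $\tilde{X}=\frac{1}{\tilde{\gamma}_{1}}\left(X-\frac{\tilde{\gamma}_{2}X^{2}}{\tilde{\gamma}_{1}+\tilde{\gamma}_{2}X}\right)=\frac{X}{\tilde{\gamma}_{1}+\tilde{\gamma}_{2}X}$, in agreement with the second relation in (2.11).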
By substituting the average Ricci tensor into equation (2.7) and applying the disformal transformation, we derive $\displaystyle\mathcal{S}=\int{\rm d}^{4}x\sqrt{-g}\bigg{[}$ $\displaystyle F_{1}(\phi,X,\chi)\frac{\mathcal{R}}{2}+F_{2}(\phi,X)\overline{\mathcal{R}}^{\mu\nu}X_{\mu\nu}+F_{3}(\phi,X,\chi)\mathcal{R}^{\mu\nu}X_{\mu\nu}$ $\displaystyle-F_{4}(\phi,X)X-F_{5}(\phi,X,\chi)U(\phi,\chi)\bigg{]}\,.$ (2.12) The functions $F_{i}$ are given by $\displaystyle F_{1}(\phi,X,\chi)$ $\displaystyle=(1+\gamma X)^{1/2}\left(\gamma_{1}\mathcal{F}(\phi,\chi)+\frac{\alpha_{1}X}{1+\gamma X}\right)\,,$ (2.13a) $\displaystyle F_{2}(\phi,X)$ $\displaystyle=\alpha_{3}(1+\gamma X)^{-1/2}\,,$ (2.13b) $\displaystyle F_{3}(\phi,X,\chi)$ $\displaystyle=\frac{1}{2}(1+\gamma X)^{-1/2}\left(-\gamma_{2}\mathcal{F}(\phi,\chi)+\frac{\alpha_{2}-\alpha_{3}-(\alpha_{1}+\alpha_{3})\gamma X}{(1+\gamma X)}\right)\,,$ (2.13c) $\displaystyle F_{4}(\phi,X)$ $\displaystyle=\frac{1}{2}\gamma_{1}(1+\gamma X)^{-1/2}K(\phi)\,,$ (2.13d) $\displaystyle F_{5}(\phi,X,\chi)$ $\displaystyle=\gamma_{1}^{2}(1+\gamma X)^{1/2}\,,$ (2.13e) where, for brevity, we have omitted the tildes from the rescaled quantities and the arguments from the functions $\gamma$ and $\gamma_{1}$. Our action (2.12) aligns with the one presented in [122], with the sole distinction being the replacement given by equation (2.8). Note however that the presence of the auxiliary scalar $\chi$ will turn out to have important effects on the inflationary behaviour. If we assume that $\alpha_{3}=0$ (i.e. the Einstein-Cartan case), to derive the action in the Einstein frame we essentially need to solve the system of equations $F_{1}(\phi,X,\chi)=1$ and $F_{3}(\phi,X,\chi)=0$. Solving this system is inherently challenging. However, we can approximate the solutions by assuming that, in the slow-roll approximation, the higher-order kinetic terms are negligible (i.e. $X\ll 1$), particularly during inflation [43], as well as during reheating [53]. An approximate solution under this assumption is $\gamma\simeq\alpha_{2}-\frac{\alpha_{2}^{2}}{2}X+\frac{5\alpha_{2}^{3}}{8}X^{2}\,,\quad\gamma_{1}\simeq\frac{1}{\mathcal{F}(\phi,\chi)}\left(1-(\alpha_{1}+\alpha_{2}/2)X+(\alpha_{1}\alpha_{2}+5\alpha_{2}^{2}/8)X^{2}\right)\,,$ (2.14) where we kept terms up to $\mathcal{O}(X^{2})$. (Note that the invertibility of the disformal transformation requires $\gamma_{1}>0$, $\gamma_{2}\geq 0$, $\gamma_{1}+\tilde{X}\gamma_{2}>0$, and $\tilde{\gamma}_{1}-X\partial\tilde{\gamma}_{1}/\partial X-X^{2}\partial\tilde{\gamma}_{2}/\partial X\neq 0$. Using the approximate solution (2.14) we see that this system of conditions is equivalent to $\alpha_{2}>0$, $\alpha_{1}+\alpha_{2}<0$ and $|X|<1/(-\alpha_{1}+\alpha_{2}/2)$. Additionally, we can infer that the combination $4\alpha_{1}+\alpha_{2}$ involved in the inflationary dynamics is negative, as required.) Substituting the solution back into the action we obtain $\displaystyle\mathcal{S}=\int{\rm d}^{4}x\sqrt{-g}\bigg{[}$ $\displaystyle\frac{R}{2}-\frac{K(\phi)X}{2\mathcal{F}(\phi,\chi)}(1-(\alpha_{1}+\alpha_{2})X)$ $\displaystyle-\frac{U(\phi,\chi)}{\mathcal{F}^{2}(\phi,\chi)}\left(1-(2\alpha_{1}+\alpha_{2}/2)X+(\alpha_{1}^{2}+2\alpha_{1}\alpha_{2}+5\alpha_{2}^{2}/8)X^{2}\right)\bigg{]}\,,$ (2.15) where now the Ricci scalar $R$ is the one constructed from the Levi-Civita connection. Note that for $\alpha_{1}=\alpha_{2}=0$ the above action reduces to the known Palatini-$\mathcal{R}^{2}$ action [36, 38].
The subsequent step involves varying the action with respect to the auxiliary field $\chi$. Upon doing so, the solution $\chi=\chi(\phi,X)$ must be re-expanded in powers of the kinetic term $X$ and then substituted back into the action. The resulting effective action will represent the final metric action of the scalar field $\phi$, featuring modified potential and kinetic terms. The variation gives $\frac{\delta\mathcal{S}}{\delta\chi}=0\quad\Rightarrow\quad\chi=\frac{4V(\phi)+A(\phi)X+B(\phi)X^{2}}{f(\phi)+C(\phi)X+D(\phi)X^{2}}\,,$ (2.16) with $\displaystyle A(\phi)=$ $\displaystyle K(\phi)f(\phi)-4V(\phi)(2\alpha_{1}+\alpha_{2}/2)\,,$ (2.17a) $\displaystyle B(\phi)=$ $\displaystyle 4V(\phi)(\alpha_{1}^{2}+2\alpha_{1}\alpha_{2}+5\alpha_{2}^{2}/8)-(\alpha_{1}+\alpha_{2})K(\phi)f(\phi)\,,$ (2.17b) $\displaystyle C(\phi)=$ $\displaystyle-\beta K(\phi)-f(\phi)(2\alpha_{1}+\alpha_{2}/2)\,,$ (2.17c) $\displaystyle D(\phi)=$ $\displaystyle\beta(\alpha_{1}+\alpha_{2})K(\phi)+(\alpha_{1}^{2}+2\alpha_{1}\alpha_{2}+5\alpha_{2}^{2}/8)f(\phi)\,.$ (2.17d) In the minimal case $f(\phi)=K(\phi)=1$ the auxiliary field reads $\chi\simeq 4V(\phi)+(1+4\beta V(\phi))X+(1+4\beta V(\phi))(\beta+\alpha_{1}-\alpha_{2}/2)X^{2}+\mathcal{O}(X^{3})\,.$ (2.18) More generally, expanding in powers of $X$ we obtain $\chi\simeq\frac{4V(\phi)}{f(\phi)}+XK(\phi)\left(1+4\beta\frac{V(\phi)}{f^{2}(\phi)}\right)+X^{2}\frac{K(\phi)}{2f(\phi)}\left(1+4\beta\frac{V(\phi)}{f^{2}(\phi)}\right)\left(\,(2\alpha_{1}-\alpha_{2})f(\phi)+2\beta K(\phi)\right)\,.$ (2.19) Substituting eq. (2.19) back into the action (2.15) and re-expanding in powers of $X$, the $\chi$-dependent part of the first line gives ${\cal{L}}_{1}\simeq-X\frac{K(\phi)}{2f(\phi)\left(1+4\beta\frac{V(\phi)}{f^{2}(\phi)}\right)}+X^{2}\frac{K(\phi)}{2f^{2}(\phi)\left(1+4\beta\frac{V(\phi)}{f^{2}(\phi)}\right)}\left((\alpha_{1}+\alpha_{2})f(\phi)+\beta K(\phi)\right)\,,$ (2.20) while the second line reads ${\cal{L}}_{2}\simeq\frac{V(\phi)}{f^{2}(\phi)+4\beta V(\phi)}-\frac{X}{2}\frac{(4\alpha_{1}+\alpha_{2})V(\phi)}{f^{2}(\phi)+4\beta V(\phi)}+\frac{X^{2}}{8}\frac{\left(2\beta K^{2}(\phi)+(8\alpha_{1}^{2}+5\alpha_{2}^{2}+16\alpha_{1}\alpha_{2})V(\phi)\right)}{f^{2}(\phi)+4\beta V(\phi)}\,.$ (2.21) Having done all the groundwork, we can substitute the last expansions into the action (2.15) to obtain $\mathcal{S}=\int{\rm d}^{4}x\sqrt{-g}\left(\frac{R}{2}-\bar{K}(\phi)\frac{X}{2}-\bar{U}(\phi)+\mathcal{O}(X^{2})\right)\,,$ (2.22) with $\bar{K}(\phi)=\frac{-\tilde{\alpha}V(\phi)+f(\phi)K(\phi)}{\left(f^{2}(\phi)+4\beta V(\phi)\right)}\qquad\text{and}\qquad\bar{U}(\phi)=\frac{V(\phi)}{f^{2}(\phi)+4\beta V(\phi)}\,,$ (2.23) where we have defined $4\alpha_{1}+\alpha_{2}=\tilde{\alpha}$. Although we have retained the functions $f(\phi)$ and $K(\phi)$ for generality in this discussion, moving forward we will simplify the analysis by considering the minimal case where $f(\phi)=K(\phi)=1$. Note that achieving canonical form for the kinetic term is possible through the field redefinition ${\rm d}\phi_{c}=\sqrt{\bar{K}(\phi)}{\rm d}\phi$. Consequently, the canonically normalized inflaton $\phi_{c}$ can, in principle, be expressed as a function of $\phi$. However, it is not obligatory to employ a canonical field for determining inflationary parameters. By directly working with $\phi$, this impediment can be bypassed.
## 3 Inflation

Regarding cosmological observables and assuming the slow-roll approximation, we initiate the discussion by introducing the scalar ($\mathcal{P}_{\zeta}$) and tensor ($\mathcal{P}_{T}$) power spectra, crucial elements in inflationary cosmology. By selecting an arbitrary pivot scale $k_{\star}$ that exited the horizon, the expressions for the scalar and tensor power spectra take the form: $\mathcal{P}_{\zeta}(k)=A_{s}\left(\frac{k}{k_{\star}}\right)^{n_{s}-1},\quad A_{s}\simeq\frac{1}{24\pi^{2}}\frac{\bar{U}(\phi_{\star})}{\epsilon_{\bar{U}}(\phi_{\star})}\qquad\text{and}\qquad\mathcal{P}_{T}(k)\simeq\frac{2\bar{U}(\phi_{\star})}{3\pi^{2}}\left(\frac{k}{k_{\star}}\right)^{n_{t}}\,,$ (3.1) where $A_{s}$ is the amplitude of the power spectrum of scalar perturbations. The scalar ($n_{s}$) and tensor ($n_{t}$) spectral indices, given by $n_{s}-1=\frac{{\rm d}\ln\mathcal{P}_{\zeta}(k)}{{\rm d}\ln k}\simeq-6\epsilon_{\bar{U}}+2\eta_{\bar{U}}\qquad\text{and}\qquad n_{t}=\frac{{\rm d}\ln\mathcal{P}_{T}(k)}{{\rm d}\ln k}\,,$ (3.2) characterize the scale-dependence of the power spectra (3.1). In the equations above we have used the potential slow-roll parameters $\epsilon_{\bar{U}}=\frac{1}{2\bar{K}(\phi)}\left(\frac{\bar{U}^{\prime}(\phi)}{\bar{U}(\phi)}\right)^{2}\qquad\text{and}\qquad\eta_{\bar{U}}=\frac{\left(\bar{K}^{-1/2}(\phi)\bar{U}^{\prime}(\phi)\right)^{\prime}}{\bar{K}^{1/2}(\phi)\bar{U}(\phi)}\,.$ (3.3) In these equations primes denote derivatives with respect to the scalar field, while both slow-roll parameters are small $(\ll 1)$ during inflation and one of them approaches unity near its end. The tensor-to-scalar ratio is defined as $r=\frac{\mathcal{P}_{T}(k)}{\mathcal{P}_{\zeta}(k)}\simeq 16\epsilon_{\bar{U}},$ (3.4) while the duration of inflation is measured by the number of $e$-folds $N_{\star}=\int^{\phi_{\star}}_{\phi_{\rm end}}\bar{K}(\phi)\frac{\bar{U}(\phi)}{\bar{U}^{\prime}(\phi)}{\rm d}\phi\,.$ (3.5) The inflationary predictions are significantly constrained by observations of the cosmic microwave background (CMB), as demonstrated in [132, 133]. The most recent combination of Planck, BICEP/Keck, and BAO data has established the following limits on the observable values at the pivot scale $k_{\star}=0.05\,{\rm Mpc}^{-1}$: $A_{s}=(2.10\pm 0.03)\times 10^{-9},\qquad n_{s}=0.9649\pm 0.0042\quad(1\sigma\mbox{ region}),\qquad r<0.036\,.$ (3.6) Subsequently, we examine particular models and delve into their predictions. We focus on an intriguing category of models where the potential, $V$, takes the form of a monomial in the field $\phi$, i.e. $V\sim\phi^{n}$, with $n$ an even integer. More precisely we will study the quadratic $(n=2)$ and quartic $(n=4)$ models of inflation.

### 3.1 The quadratic model

We first consider the simple case of the minimally coupled ($f(\phi)=1$) quadratic model with a potential $V(\phi)=\frac{m^{2}}{2}\phi^{2}\,,$ (3.7) where $m$ is a mass parameter.
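For this potential, eq. (2.23) with $f(\phi)=K(\phi)=1$ gives the Einstein-frame kinetic function and potential explicitly (an intermediate step we spell out here, from which the slow-roll parameters below follow directly):

$\bar{K}(\phi)=\frac{1-\tilde{\alpha}m^{2}\phi^{2}/2}{1+2\beta m^{2}\phi^{2}}\,,\qquad\bar{U}(\phi)=\frac{m^{2}\phi^{2}/2}{1+2\beta m^{2}\phi^{2}}\,.$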
Provided that the $\mathcal{O}(X^{2})$ terms are small, the first slow-roll parameters (3.3) are given by $\epsilon_{\bar{U}}=\frac{4}{\phi^{2}(2-\tilde{\alpha}m^{2}\phi^{2})(1+2\beta m^{2}\phi^{2})}\,,\qquad\eta_{\bar{U}}=\frac{8\left(1+\beta m^{2}\phi^{2}(3\tilde{\alpha}m^{2}\phi^{2}-4)\right)}{\phi^{2}(2-\tilde{\alpha}m^{2}\phi^{2})^{2}(1+2\beta m^{2}\phi^{2})}\,.$ (3.8) The number of $e$-folds (3.5) left until the end of inflation is $N_{\star}=\frac{1}{16}(\phi_{\rm end}^{2}-\phi_{\star}^{2})\left(\tilde{\alpha}m^{2}(\phi_{\rm end}^{2}+\phi_{\star}^{2})-4\right)\simeq\frac{1}{16}\phi_{\star}^{2}(4-\tilde{\alpha}m^{2}\phi_{\star}^{2})\,,$ (3.9) where the second equality holds for $\phi_{\rm end}^{2}\ll\phi_{\star}^{2}$. The above equation (without neglecting $\phi_{\rm end}$) is a quadratic equation for $\phi_{\star}^{2}$. The sole solution that restores the correct $\tilde{\alpha}=0$ limit is $\phi_{\star}^{2}=\frac{2-\sqrt{\tilde{\alpha}^{2}m^{4}\phi_{\rm end}^{4}-4\tilde{\alpha}m^{2}\phi_{\rm end}^{2}+4-16\tilde{\alpha}m^{2}N_{\star}}}{\tilde{\alpha}m^{2}}\xrightarrow{\tilde{\alpha}\rightarrow 0}\phi_{\rm end}^{2}+4N_{\star}\,.$ (3.10) The field value at the end of inflation is defined by the condition $\epsilon_{\bar{U}}(\phi_{\rm end})=1\Rightarrow$ $2\tilde{\alpha}\beta m^{4}\phi_{\rm end}^{6}+(\tilde{\alpha}m^{2}-4\beta m^{2})\phi_{\rm end}^{4}-2\phi_{\rm end}^{2}+4=0\,.$ (3.11) In [43], it has been shown that for $\tilde{\alpha}=0$ the field value at the end of inflation is bounded from above, namely $\phi_{\rm end}^{2}<2$. So in this case, the condition $\phi_{\rm end}^{2}\ll\phi_{\star}^{2}$ is indeed true. In our scenario we have seen numerically that such a bound for $\phi_{\rm end}^{2}$ also exists, thereby ensuring that the approximation $\phi_{\rm end}^{2}\ll\phi_{\star}^{2}$ holds true. Figure 1: Left: The mass parameter $m^{2}$ given by eq. (3.14) as a function of the parameter $\tilde{\alpha}$. Right: The spectral index, given by eq. (3.16), as a function of the parameter $\tilde{\alpha}$ for $\beta=0$. The shaded regions represent the permissible parameter space at confidence levels of $68\%$ (dark blue) and $95\%$ (light blue), as derived from the most recent combination of Planck, BICEP/Keck, and BAO data [132, 133]. In what follows, we safely omit the term $\phi_{\rm end}$. Under this approximation the field value at the horizon crossing is given by $\phi_{\star}^{2}\simeq\frac{2-2\sqrt{1-4\tilde{\alpha}m^{2}N_{\star}}}{\tilde{\alpha}m^{2}}\,.$ (3.12) Using $\phi_{\star}$, given above, $A_{s}$ of eq. (3.1) is written in terms of $N_{\star}$ as $A_{s}\simeq\frac{\sqrt{1-4\tilde{\alpha}m^{2}N_{\star}}\left(1-\sqrt{1-4\tilde{\alpha}m^{2}N_{\star}}\right)^{2}}{24\pi^{2}\tilde{\alpha}^{2}m^{2}}\,,$ (3.13) which under this approximation does not depend on the parameter $\beta$. The above equation can be used in order to fix one of the parameters.
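The cubic condition (3.11) is easy to examine numerically. Below is a small NumPy sketch; the parameter values are illustrative, and selecting the smallest positive root as the physical one is an assumption consistent with the $\tilde{\alpha}=0$ limit:

```python
import numpy as np

def phi_end_sq(m2, atil, beta):
    # Cubic (3.11) in y = phi_end^2:
    # 2*atil*beta*m^4 y^3 + (atil - 4*beta)*m^2 y^2 - 2 y + 4 = 0
    roots = np.roots([2*atil*beta*m2**2, (atil - 4*beta)*m2, -2.0, 4.0])
    real = roots[np.isreal(roots)].real
    return min(r for r in real if r > 0)  # smallest positive root

print(phi_end_sq(4.1e-11, 0.0, 0.0))    # -> 2.0, the alpha_tilde = 0 bound
print(phi_end_sq(4.1e-11, -1e8, 1e9))   # -> ~1.7, again below 2
```

Scanning over parameters in this way is how the bound on $\phi_{\rm end}^{2}$ can be confirmed numerically.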
If we solve with respect to $m^{2}$, the only solution that recovers the correct $\tilde{\alpha}=0$ limit, namely $m^{2}\simeq 6\pi^{2}A_{s}/N_{\star}^{2}$, is $m^{2}=\frac{3\pi^{2}A_{s}}{4\mathring{A}N_{\star}^{3}}\left[N_{\star}^{2}+4\mathring{A}N_{\star}-\mathring{A}^{2}-(N_{\star}-\mathring{A})\left(N_{\star}^{2}-6\mathring{A}N_{\star}+\mathring{A}^{2}\right)^{1/2}\right]\,,\quad\mathring{A}=6\pi^{2}A_{s}\tilde{\alpha}\,.$ (3.14) The impact of the parameter $\tilde{\alpha}$ on $m^{2}$ becomes visible once the parameter $\mathring{A}$ reaches a magnitude comparable to or larger than $N_{\star}$; for $N_{\star}\sim 50-60$ this occurs for $|\tilde{\alpha}|\gtrsim 10^{8}$. This can be easily observed by expanding around $\mathring{A}\sim 0$: for small $|\mathring{A}|$ the mass is given by $m^{2}\simeq\frac{6\pi^{2}A_{s}}{N_{\star}^{2}}\left[1+\left(\frac{\mathring{A}}{N_{\star}}\right)^{2}+4\left(\frac{\mathring{A}}{N_{\star}}\right)^{3}+\mathcal{O}(\mathring{A}^{4}/N_{\star}^{4})\right]\,.$ (3.15) The left panel of Fig. 1 illustrates the dependence of the parameter $m^{2}$ on the parameter $\tilde{\alpha}$ according to the aforementioned equation. It is evident that for “small” values of $|\tilde{\alpha}|$ there is no discernible effect on the value of the parameter $m^{2}$. However, for $|\tilde{\alpha}|\gg 10^{8}$ it exhibits a linear growth as $m^{2}\simeq-\frac{9\pi^{4}A_{s}^{2}\tilde{\alpha}}{N_{\star}^{3}}\left[1+\mathcal{O}(N_{\star}/\mathring{A})\right]$. The spectral index (3.2) is given by $n_{s}\simeq 1-\frac{1+\sqrt{1-4\tilde{\alpha}m^{2}N_{\star}}-6\tilde{\alpha}m^{2}N_{\star}}{N_{\star}(1-4\tilde{\alpha}m^{2}N_{\star})}\simeq\begin{cases}\displaystyle 1-\frac{2}{N_{\star}}\,,&\text{if }\,\,|\mathring{A}|/N_{\star}\ll 1\\\\[8.5359pt] \displaystyle 1-\frac{3}{2N_{\star}}\,,&\text{if }\,\,|\mathring{A}|/N_{\star}\gg 1\,.\end{cases}$ (3.16) As in the $\tilde{\alpha}=0$ case [36], the spectral index is $\beta$-independent to leading order in the slow-roll parameters, as are the scalar power spectrum and the number of $e$-folds. Therefore, for small $|\mathring{A}|$, always compared to $N_{\star}$, the prediction aligns with that of the simple quadratic model of inflation. As $|\mathring{A}|$ increases, the spectral index also increases, eventually reaching the asymptotic value $n_{s}\simeq 1-3/(2N_{\star})$. As is also depicted in the right panel of Fig. 1, the asymptotic region for large $|\mathring{A}|$ is marginally outside of the $2\sigma$ observational bounds, namely $N_{\star}$ is forced to be $\lesssim 56$. Finally, the tensor-to-scalar ratio (3.4) is given by $r\simeq\frac{16\tilde{\alpha}m^{2}}{\sqrt{1-4\tilde{\alpha}m^{2}N_{\star}}(\sqrt{1-4\tilde{\alpha}m^{2}N_{\star}}-1)\left[4\beta(\sqrt{1-4\tilde{\alpha}m^{2}N_{\star}}-1)-\tilde{\alpha}\right]}\,,$ (3.17) while its limiting cases are $r\simeq\begin{cases}\displaystyle\frac{8}{N_{\star}+48\pi^{2}A_{s}\beta}\,,&\text{if }\,\,|\mathring{A}|/N_{\star}\ll 1\\\\[8.5359pt] \displaystyle\frac{4}{N_{\star}+24\pi^{2}A_{s}\beta}\,,&\text{if }\,\,|\mathring{A}|/N_{\star}\gg 1\,.\end{cases}$ (3.18) As shown in Fig. 2, the decrease in the tensor-to-scalar ratio for large $|\mathring{A}|$ alone is insufficient to bring the value of $r$ within the observational limit of $r<0.036$. As highlighted in the right panel, the introduction of a substantial $\beta$ parameter ($\beta\gtrsim 10^{8}$) becomes necessary.
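The limiting behaviours of eq. (3.14) are simple to verify numerically. A short sketch follows (values illustrative; note that the linear-growth regime corresponds to large negative $\tilde{\alpha}$, and that for some positive $\tilde{\alpha}$ the square root in eq. (3.14) turns complex):

```python
import numpy as np

A_s, N = 2.1e-9, 55.0

def m2_of_alpha(atil):
    """Mass parameter from eq. (3.14), with Aring = 6 pi^2 A_s alpha_tilde."""
    Ar = 6*np.pi**2*A_s*atil
    return (3*np.pi**2*A_s/(4*Ar*N**3)) * (
        N**2 + 4*Ar*N - Ar**2 - (N - Ar)*np.sqrt(N**2 - 6*Ar*N + Ar**2))

# |Aring| << N_star: recovers m^2 ~ 6 pi^2 A_s / N_star^2
print(m2_of_alpha(1e3) / (6*np.pi**2*A_s/N**2))                # ~ 1.00
# |Aring| >> N_star (alpha_tilde < 0): linear growth quoted above
print(m2_of_alpha(-1e11) / (-9*np.pi**4*A_s**2*(-1e11)/N**3))  # ~ 1.02
```

Both ratios approach unity in their respective regimes, reproducing the small- and large-$|\mathring{A}|$ behaviour described above.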
Figure 2: The tensor-to-scalar ratio, given by eq. (3.17), as a function of the parameter $\tilde{\alpha}$ for $\beta=0$ (left) and $\beta=10^{9}$ (right). The latest observational data exclude the shaded region in the left panel, since $r_{0.05}<0.036$ at $95\%$ confidence [133]. ### 3.2 The quartic model The second model worth studying is the quartic model $V(\phi)=\frac{\lambda}{4}\phi^{4}\,.$ (3.19) Here, the dimensionless coupling parameter $\lambda$ is referred to as the quartic coupling. However, the inflationary predictions of this model do not align with observations, whether considering the pure $\phi^{4}$ model or including the $\mathcal{R}^{2}$ term. In the former case, both the tensor-to-scalar ratio and the spectral index deviate from the established bounds. In the latter case, the inclusion of the $\mathcal{R}^{2}$ term succeeds in reducing the value of $r$, while $n_{s}$ remains around $\sim 0.94-0.96$ for $N_{\star}=50-60$. Subsequently, we will explore how derivative couplings may enhance the value of $n_{s}$ to bring it within the observationally allowed range. (It is worth noting that the quartic model can be rescued if one assumes a non-minimal coupling of the form $\sim\xi\phi^{2}\mathcal{R}$, both in the metric [11] and the Palatini [77] formulation. Nevertheless, here we shall consider the minimally coupled case of $f(\phi)=1$ to isolate the effect of a non-minimal derivative coupling on the predictions.) As in the quadratic model, the first slow-roll parameters are easily computed to be $\epsilon_{\bar{U}}=\frac{32}{\phi^{2}(4-\tilde{\alpha}\lambda\phi^{4})(1+\beta\lambda\phi^{4})}\,,\qquad\eta_{\bar{U}}=\frac{16\left[12(1-\beta\lambda\phi^{4})+\tilde{\alpha}\lambda\phi^{4}(5\beta\lambda\phi^{4}-1)\right]}{\phi^{2}(4-\tilde{\alpha}\lambda\phi^{4})^{2}(1+\beta\lambda\phi^{4})}\,,$ (3.20) while the number of $e$-folds left until the end of inflation is $N_{\star}=\frac{1}{96}\left[12(\phi_{\star}^{2}-\phi_{\rm end}^{2})+\tilde{\alpha}\lambda(\phi_{\rm end}^{6}-\phi_{\star}^{6})\right]\,.$ (3.21) Using again the approximation $\phi_{\rm end}^{2}\ll\phi_{\star}^{2}$ we obtain that the field value at the horizon crossing is $\phi_{\star}^{2}\simeq\frac{-2(\tilde{\alpha}\lambda+Y^{2})}{\tilde{\alpha}\lambda Y}\,,\qquad\text{with}\qquad Y=\left(6\tilde{\alpha}^{2}\lambda^{2}N_{\star}^{2}+\sqrt{\tilde{\alpha}^{3}\lambda^{3}(36\tilde{\alpha}\lambda N_{\star}^{2}-1)}\right)^{1/3}\,.$ (3.22) Figure 3: Left: The quartic coupling $\lambda$ as a function of the parameter $\tilde{\alpha}$. Right: The spectral index, given by eq. (3.2), as a function of the parameter $\tilde{\alpha}$ for $\beta=0$. The shaded regions represent the permissible parameter space at confidence levels of $68\%$ (dark blue) and $95\%$ (light blue), as derived from the most recent combination of Planck, BICEP/Keck, and BAO data [132, 133]. In contrast to the quadratic model, the intricate expression of $\phi_{\star}^{2}$ prevents us from presenting a concise formula for the quartic coupling $\lambda$, akin to what was done for the mass parameter in equation (3.14). However, we can provide approximate limiting expressions using the fact that the value of the quartic coupling $\lambda$ is fixed by the observed value of the amplitude of the scalar power spectrum, $A_{s}\simeq 2.1\times 10^{-9}$ at the pivot scale $k_{\star}=0.05\,{\rm Mpc}^{-1}$. As depicted in the left panel of Fig. 3,
for $|\tilde{\alpha}|\ll 10^{8}$, we recover the familiar expression for the quartic model, specifically $\lambda\approx 3\pi^{2}A_{s}/(2N_{\star}^{3})$. On the other hand, for $|\tilde{\alpha}|\gg 10^{8}$, the quartic coupling increases as $\lambda\approx 32\pi^{6}A_{s}^{3}|\tilde{\alpha}|^{2}/(9N_{\star}^{5})$. Furthermore, since an analytic expression for the quartic coupling $\lambda$ is unavailable, the spectral index is expressed as a function of it, given by: $\displaystyle n_{s}\simeq$ $\displaystyle-\tilde{\alpha}\lambda\big{[}-\tilde{\alpha}\lambda Y^{2}+\tilde{\alpha}^{2}\lambda^{2}(1+N_{\star}(48N_{\star}Y^{2}+2Y-1))+12\tilde{\alpha}^{3}\lambda^{3}N_{\star}^{2}(3N_{\star}-5)$ $\displaystyle+(1-8N_{\star}Y)Y(Y^{3}-6\tilde{\alpha}^{2}\lambda^{2}N_{\star}^{2})\big{]}/(Y^{3}-6\tilde{\alpha}^{2}\lambda^{2}N_{\star}^{2})^{2}\,.$ (3.23) The limiting cases are given by $n_{s}\simeq\begin{cases}\displaystyle 1-\frac{3}{N_{\star}}\,,&\text{if }\,\,|\tilde{\alpha}|\ll 10^{8}\\\\[8.5359pt] \displaystyle 1-\frac{5}{3N_{\star}}\,,&\text{if }\,\,|\tilde{\alpha}|\gg 10^{8}\,.\end{cases}$ (3.24) The $|\tilde{\alpha}|\ll 10^{8}$ limit lies outside of the observational bounds, since it coincides with the pure $\phi^{4}$ model. In the $|\tilde{\alpha}|\gg 10^{8}$ limit, the spectral index takes the form $n_{s}\simeq 1-5/(3N_{\star})\simeq 0.967-0.972$ for $N_{\star}=50-60$ (see the right panel of Fig. 3). These values fall well within observational bounds, marking a significant achievement, as the derivative couplings, in conjunction with $\mathcal{R}^{2}$, rescue the quartic model of inflation. Figure 4: The tensor-to-scalar ratio, given by eq. (3.25), as a function of the parameter $\tilde{\alpha}$ for $\beta=0$ (left) and $\beta=10^{9}$ (right). The latest observational data exclude the shaded region in the left panel, since $r_{0.05}<0.036$ at $95\%$ confidence [133]. As previously indicated, the derivative couplings alone are insufficient to rescue this model. The tensor-to-scalar ratio is given by $r\simeq 64\tilde{\alpha}^{4}\lambda^{3}Y^{5}\left[(Y^{2}+\tilde{\alpha}\lambda)(Y^{4}+\tilde{\alpha}\lambda Y^{2}+\tilde{\alpha}^{2}\lambda^{2})(4\beta Y^{4}+\tilde{\alpha}\lambda(\tilde{\alpha}+8\beta)Y^{2}+4\tilde{\alpha}^{2}\lambda^{2}\beta)\right]^{-1}\,,$ (3.25) with the limits being $r\simeq\begin{cases}\displaystyle\frac{16}{N_{\star}+96\pi^{2}A_{s}\beta}\,,&\text{if }\,\,|\tilde{\alpha}|\ll 10^{8}\\\\[8.5359pt] \displaystyle\frac{16/3}{N_{\star}+32\pi^{2}A_{s}\beta}\,,&\text{if }\,\,|\tilde{\alpha}|\gg 10^{8}\,.\end{cases}$ (3.26) As depicted in the left panel of Fig. 4, the tensor-to-scalar ratio is reduced by approximately $67\%$, which is insufficient for agreement with the observational bound $r<0.036$. The inclusion of $\mathcal{R}^{2}$ is inevitable, and a value of $\beta>10^{8}$ (see the right panel of Fig. 4) is adequate to relocate the tensor-to-scalar ratio within the allowed region. ## 4 Reheating As an illustrative example within this section, we will compute the reheating temperature for the quadratic model discussed in Section 3.1, giving analytic approximations in regions of the parameter space.
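Before turning to the formulas, the headline temperature quoted below can be previewed with a minimal numerical sketch. It assumes the $\tilde{\alpha}=\beta=0$ limit (where $\bar{U}\simeq V$, $m^{2}\simeq 6\pi^{2}A_{s}/N_{\star}^{2}$ and $\phi_{\rm end}^{2}\simeq 2$), works in reduced Planck units, and uses the definitions of eqs. (4.1)–(4.2) that follow:

```python
import numpy as np

M_pl = 2.4e18        # reduced Planck mass in GeV
g_star = 106.75      # SM entropy d.o.f. at T >~ 1 TeV
A_s, N = 2.1e-9, 55.0

m2 = 6*np.pi**2*A_s/N**2    # alpha_tilde = 0 limit of eq. (3.14)
U_end = m2*2/2              # Ubar(phi_end) with phi_end^2 = 2
rho_end = 1.5*U_end         # rho_end ~ (3/2) Ubar(phi_end)
T_ins = (30*rho_end/(np.pi**2*g_star))**0.25 * M_pl
print(f"{T_ins:.2e} GeV")   # ~ 2.8e15 GeV, matching eq. (4.3)
```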
The number of $e$-folds during the reheating era is given by $N_{\rm reh}\equiv\ln\left[\frac{a_{\rm reh}}{a_{\rm end}}\right]=-\frac{1}{3(1+w)}\ln\left[\frac{\rho_{\rm reh}}{\rho_{\rm end}}\right]\,,$ (4.1) where the subscripts “reh” and “end” in the cosmic scale factor $a$ and the energy densities $\rho$ indicate that these quantities are evaluated at the end of the reheating period and inflation, respectively. The equation of state parameter $w$ is a free parameter, taking the value of $-1/3$ during inflation and $1/3$ during the radiation-dominated phase. In terms of the number of $e$-folds during the reheating era, the reheating temperature is given by $T_{\rm reh}=T_{\rm ins}\exp\left(-\frac{3(1+w)N_{\rm reh}}{4}\right)\,,\qquad T_{\rm ins}\equiv\left(\frac{30}{\pi^{2}}\frac{\rho_{\rm end}}{g_{\star}(T_{\rm reh})}\right)^{1/4}\,,$ (4.2) where $T_{\rm ins}$ is the maximum temperature, defined as the instantaneous reheating temperature obtained for $a_{\rm reh}=a_{\rm end}$ ($w=1/3$). (Regarding Palatini inflationary models, the authors of [43, 47, 54, 109, 66, 134] have investigated the inflationary predictions across a range of values for the equation of state parameter, spanning from $-1/3$ to $1$. The resulting reheating temperature exhibits significant variability, ranging from relatively low values near those during BBN, $\sim\mathrm{MeV}$, up to much higher values of around $\sim 10^{16}\,\mathrm{GeV}$.) It is worth noting that $g_{\star}(T_{\text{reh}})$ represents the effective degrees of freedom of the entropy density, which is approximately $106.75$ under the assumption of Standard Model particle content and temperatures around $\sim 1\;\mathrm{TeV}$ or higher. To estimate the instantaneous reheating temperature, we only require the value of the energy density at the end of inflation, which can be approximated as $\rho_{\rm end}\simeq\frac{3}{2}\bar{U}(\phi_{\rm end})\,.$ (This equality holds precisely when there are no higher-order kinetic terms. References [43, 66] have considered the inclusion of higher-order kinetic terms, revealing that they yield only a negligible correction, as demonstrated therein.) The field value at the end of inflation is determined by solving equation (3.11). Since this is a cubic equation for $\phi_{\rm end}^{2}$, its solution, and consequently $T_{\rm ins}$, is too complicated to be presented, although an analytic expression for the unique positive solution does exist. Therefore, our objective is to provide analytic expressions for the limits $|\mathring{A}|/N_{\star}\ll 1$ and $|\mathring{A}|/N_{\star}\gg 1$, as described in the preceding section, as well as for the cases $\beta\ll 10^{9}$ and $\beta\gg 10^{9}$. Figure 5: In the $\beta$, $|\tilde{\alpha}|$ plane we display the instantaneous reheating temperature, as given by eq. (4.2), for $N_{\star}=55$. For small values of $\beta$, i.e.
$\beta\ll 10^{9}$, the instantaneous reheating temperature can be approximated as $T_{\rm ins}\simeq\begin{cases}\displaystyle 0.38\times\left(\frac{1-\sqrt{1-4\mathring{A}/N_{\star}^{2}}}{\tilde{\alpha}}\right)^{1/4}\stackrel{{\scriptstyle|\tilde{\alpha}|\rightarrow 0}}{{\simeq}}2.8\times 10^{15}\left(\frac{55}{N_{\star}}\right)^{1/2}\;\mathrm{GeV}\,,&\text{if }\,\,|\mathring{A}|/N_{\star}\ll 1\\\\[8.5359pt] \displaystyle 0.38\times\left(\frac{1-\sqrt{1+\mathring{A}^{2}/N_{\star}^{3}}}{\tilde{\alpha}}\right)^{1/4}\stackrel{{\scriptstyle|\tilde{\alpha}|\rightarrow\infty}}{{\simeq}}3.9\times 10^{15}\left(\frac{55}{N_{\star}}\right)^{3/8}\;\mathrm{GeV}\,,&\text{if }\,\,|\mathring{A}|/N_{\star}\gg 1\,.\end{cases}$ (4.3) For large values of $\beta$, i.e. $\beta\gg 10^{9}$, in both $\tilde{\alpha}$ regimes we have that $T_{\rm ins}\simeq 7.8\times 10^{17}\beta^{-1/4}\;\mathrm{GeV}\,.$ (4.4) Note that we have reinstated the units by multiplying with $M_{\rm Pl}\simeq 2.4\times 10^{18}\;\mathrm{GeV}$. In Fig. 5 we display the instantaneous reheating temperature, as given by eq. (4.2), for $N_{\star}=55$. The light and dark blue regions represent the limits outlined by equation (4.3), with the white-dotted contour indicating the highest temperature $\sim 3.9\times 10^{15}\;\mathrm{GeV}$. As $\beta$ increases, the approximation (4.4) becomes more accurate, with the temperature decreasing as $\beta^{-1/4}$, in agreement with [43, 66]. In conclusion, as indicated by the figure and the approximate expressions, the maximum instantaneous reheating temperature is attained for small values of the parameter $\beta$, typically ranging from approximately $(2.8-3.9)\times 10^{15}\;\mathrm{GeV}$ (for $N_{\star}=55$) as $\tilde{\alpha}$ increases. Conversely, for large values of the parameter $\beta$, it diminishes as $\beta^{-1/4}$, regardless of the value of $\tilde{\alpha}$. ## 5 Summary In the present paper we considered theories of a scalar field, coupled to gravity through non-minimal couplings to curvature that include derivatives of the field, in the general framework of metric-affine theories incorporating quadratic Ricci scalar curvature terms. We employed disformal transformations of the metric to transform the action into the Einstein frame, focusing on the Einstein-Cartan case that allows derivative couplings to the Ricci tensor and the Ricci scalar. We proceeded to study the inflationary predictions in the simple cases of two models, namely that of a quadratic potential and that of a quartic self-interacting scalar. We find that in both models the effect of the derivative couplings in combination with the quadratic Ricci curvature term leads to predictions aligned with the latest observational data. Derivative couplings tend in general to increase the spectral index, while a reduction of the tensor-to-scalar ratio comes mostly from the quadratic Ricci scalar term. We have also studied reheating in the simpler case of the quadratic model that allows for approximate analytic treatment and found a maximum reheating temperature of $\mathcal{O}(3\times 10^{15})\;\mathrm{GeV}$ for small values of the quadratic Ricci scalar parameter. ## Acknowledgments The work of IDG was supported by the Estonian Research Council grants MOBJD1202, RVTT3, RVTT7, and by the CoE program TK202 “Fundamental Universe”. ## References * [1] D. Kazanas, Dynamics of the Universe and Spontaneous Symmetry Breaking, Astrophys. J. Lett. 241 (1980) L59–L63. * [2] K.
Sato, First Order Phase Transition of a Vacuum and Expansion of the Universe, Mon. Not. Roy. Astron. Soc. 195 (1981) 467–479. * [3] A. H. Guth, The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems, Phys. Rev. D 23 (1981) 347–356. * [4] A. D. Linde, A New Inflationary Universe Scenario: A Possible Solution of the Horizon, Flatness, Homogeneity, Isotropy and Primordial Monopole Problems, Phys. Lett. B 108 (1982) 389–393. * [5] A. A. Starobinsky, Spectrum of relict gravitational radiation and the early state of the universe, JETP Lett. 30 (1979) 682–685. * [6] V. F. Mukhanov and G. V. Chibisov, Quantum Fluctuations and a Nonsingular Universe, JETP Lett. 33 (1981) 532–535. * [7] S. W. Hawking, The Development of Irregularities in a Single Bubble Inflationary Universe, Phys. Lett. B 115 (1982) 295. * [8] A. A. Starobinsky, Dynamics of Phase Transition in the New Inflationary Universe Scenario and Generation of Perturbations, Phys. Lett. B 117 (1982) 175–178. * [9] A. H. Guth and S. Y. Pi, Fluctuations in the New Inflationary Universe, Phys. Rev. Lett. 49 (1982) 1110–1113. * [10] J. M. Bardeen, P. J. Steinhardt, and M. S. Turner, Spontaneous Creation of Almost Scale - Free Density Perturbations in an Inflationary Universe, Phys. Rev. D 28 (1983) 679. * [11] F. L. Bezrukov and M. Shaposhnikov, The Standard Model Higgs boson as the inflaton, Phys. Lett. B 659 (2008) 703–706, [arXiv:0710.3755]. * [12] L. Amendola, Cosmology with nonminimal derivative couplings, Phys. Lett. B 301 (1993) 175–182, [gr-qc/9302010]. * [13] S. Capozziello and G. Lambiase, Nonminimal derivative coupling and the recovering of cosmological constant, Gen. Rel. Grav. 31 (1999) 1005–1014, [gr-qc/9901051]. * [14] S. Capozziello, G. Lambiase, and H. J. Schmidt, Nonminimal derivative couplings and inflation in generalized theories of gravity, Annalen Phys. 9 (2000) 39–48, [gr-qc/9906051]. * [15] C. Germani and A. Kehagias, New Model of Inflation with Non-minimal Derivative Coupling of Standard Model Higgs Boson to Gravity, Phys. Rev. Lett. 105 (2010) 011302, [arXiv:1003.2635]. * [16] S. Tsujikawa, Observational tests of inflation with a field derivative coupling to gravity, Phys. Rev. D 85 (2012) 083518, [arXiv:1201.5926]. * [17] K. Kamada, T. Kobayashi, T. Takahashi, M. Yamaguchi, and J. Yokoyama, Generalized Higgs inflation, Phys. Rev. D 86 (2012) 023504, [arXiv:1203.4059]. * [18] H. M. Sadjadi and P. Goodarzi, Reheating in nonminimal derivative coupling model, JCAP 02 (2013) 038, [arXiv:1203.1580]. * [19] G. Koutsoumbas, K. Ntrekis, and E. Papantonopoulos, Gravitational Particle Production in Gravity Theories with Non-minimal Derivative Couplings, JCAP 08 (2013) 027, [arXiv:1305.5741]. * [20] Y. Ema, R. Jinno, K. Mukaida, and K. Nakayama, Particle Production after Inflation with Non-minimal Derivative Coupling to Gravity, JCAP 10 (2015) 020, [arXiv:1504.07119]. * [21] B. Gumjudpai and P. Rangdee, Non-minimal derivative coupling gravity in cosmology, Gen. Rel. Grav. 47 (2015), no. 11 140, [arXiv:1511.00491]. * [22] Y. Zhu and Y. Gong, PPN parameters in gravitational theory with nonminimally derivative coupling, Int. J. Mod. Phys. D 26 (2016), no. 02 1750005, [arXiv:1512.05555]. * [23] H. Sheikhahmadi, E. N. Saridakis, A. Aghamohammadi, and K. Saaidi, Hamilton-Jacobi formalism for inflation with non-minimal derivative coupling, JCAP 10 (2016) 021, [arXiv:1603.03883]. * [24] I. Dalianis, G. Koutsoumbas, K. Ntrekis, and E. 
Papantonopoulos, Reheating predictions in Gravity Theories with Derivative Coupling, JCAP 02 (2017) 027, [arXiv:1608.04543]. * [25] T. Harko, F. S. N. Lobo, E. N. Saridakis, and M. Tsoukalas, Cosmological models in modified gravity theories with extended nonminimal derivative couplings, Phys. Rev. D 95 (2017), no. 4 044019, [arXiv:1609.01503]. * [26] G. Tumurtushaa, Inflation with Derivative Self-interaction and Coupling to Gravity, Eur. Phys. J. C 79 (2019), no. 11 920, [arXiv:1903.05354]. * [27] C. Fu, P. Wu, and H. Yu, Primordial Black Holes from Inflation with Nonminimal Derivative Coupling, Phys. Rev. D 100 (2019), no. 6 063532, [arXiv:1907.05042]. * [28] I. Dalianis, S. Karydas, and E. Papantonopoulos, Generalized Non-Minimal Derivative Coupling: Application to Inflation and Primordial Black Hole Production, JCAP 06 (2020) 040, [arXiv:1910.00622]. * [29] S. Sato and K.-i. Maeda, Stability of hybrid Higgs inflation, Phys. Rev. D 101 (2020), no. 10 103520, [arXiv:2001.00154]. * [30] S. Karydas, E. Papantonopoulos, and E. N. Saridakis, Successful Higgs inflation from combined nonminimal and derivative couplings, Phys. Rev. D 104 (2021), no. 2 023530, [arXiv:2102.08450]. * [31] K. S. Stelle, Renormalization of Higher Derivative Quantum Gravity, Phys. Rev. D 16 (1977) 953–969. * [32] A. A. Starobinsky, A New Type of Isotropic Cosmological Models Without Singularity, Phys. Lett. 91B (1980) 99–102. * [33] X.-H. Meng and P. Wang, $R^{2}$ corrections to the cosmological dynamics of inflation in the Palatini formulation, Class. Quant. Grav. 21 (2004) 2029–2036, [gr-qc/0402011]. * [34] M. Borunda, B. Janssen, and M. Bastero-Gil, Palatini versus metric formulation in higher curvature gravity, JCAP 11 (2008) 008, [arXiv:0804.4440]. * [35] F. Bombacigno and G. Montani, Big bounce cosmology for Palatini $R^{2}$ gravity with a Nieh–Yan term, Eur. Phys. J. C 79 (2019), no. 5 405, [arXiv:1809.07563]. * [36] V.-M. Enckell, K. Enqvist, S. Rasanen, and L.-P. Wahlman, Inflation with $R^{2}$ term in the Palatini formalism, JCAP 02 (2019) 022, [arXiv:1810.05536]. * [37] D. Iosifidis, A. C. Petkou, and C. G. Tsagas, Torsion/non-metricity duality in f(R) gravity, Gen. Rel. Grav. 51 (2019), no. 5 66, [arXiv:1810.06602]. * [38] I. Antoniadis, A. Karam, A. Lykkas, and K. Tamvakis, Palatini inflation in models with an $R^{2}$ term, JCAP 11 (2018) 028, [arXiv:1810.10418]. * [39] I. Antoniadis, A. Karam, A. Lykkas, T. Pappas, and K. Tamvakis, Rescuing Quartic and Natural Inflation in the Palatini Formalism, JCAP 03 (2019) 005, [arXiv:1812.00847]. * [40] T. Tenkanen, Minimal Higgs inflation with an $R^{2}$ term in Palatini gravity, Phys. Rev. D 99 (2019), no. 6 063528, [arXiv:1901.01794]. * [41] A. Edery and Y. Nakayama, Palatini formulation of pure $R^{2}$ gravity yields Einstein gravity with no massless scalar, Phys. Rev. D 99 (2019), no. 12 124018, [arXiv:1902.07876]. * [42] M. Giovannini, Post-inflationary phases stiffer than radiation and Palatini formulation, Class. Quant. Grav. 36 (2019), no. 23 235017, [arXiv:1905.06182]. * [43] I. D. Gialamas and A. Lahanas, Reheating in $R^{2}$ Palatini inflationary models, Phys. Rev. D 101 (2020), no. 8 084007, [arXiv:1911.11513]. * [44] A. Lloyd-Stubbs and J. McDonald, Sub-Planckian $\phi^{2}$ inflation in the Palatini formulation of gravity with an $R^{2}$ term, Phys. Rev. D 101 (2020), no. 12 123515, [arXiv:2002.08324]. * [45] I. Antoniadis, A. Lykkas, and K. Tamvakis, Constant-roll in the Palatini-$R^{2}$ models, JCAP 04 (2020), no. 04 033, [arXiv:2002.12681]. 
* [46] D. M. Ghilencea, Palatini quadratic gravity: spontaneous breaking of gauged scale symmetry and inflation, Eur. Phys. J. C 80 (4, 2020) 1147, [arXiv:2003.08516]. * [47] N. Das and S. Panda, Inflation and Reheating in f(R,h) theory formulated in the Palatini formalism, JCAP 05 (2021) 019, [arXiv:2005.14054]. * [48] I. D. Gialamas, A. Karam, and A. Racioppi, Dynamically induced Planck scale and inflation in the Palatini formulation, JCAP 11 (2020) 014, [arXiv:2006.09124]. * [49] D. M. Ghilencea, Gauging scale symmetry and inflation: Weyl versus Palatini gravity, Eur. Phys. J. C 81 (2021), no. 6 510, [arXiv:2007.14733]. * [50] D. Iosifidis and L. Ravera, Parity Violating Metric-Affine Gravity Theories, Class. Quant. Grav. 38 (2021), no. 11 115003, [arXiv:2009.03328]. * [51] S. Bekov, K. Myrzakulov, R. Myrzakulov, and D. S.-C. Gómez, General slow-roll inflation in $f(R)$ gravity under the Palatini approach, Symmetry 12 (2020), no. 12 1958, [arXiv:2010.12360]. * [52] K. Dimopoulos and S. Sánchez López, Quintessential inflation in Palatini $f(R)$ gravity, Phys. Rev. D 103 (2021), no. 4 043533, [arXiv:2012.06831]. * [53] A. Karam, E. Tomberg, and H. Veermäe, Tachyonic preheating in Palatini $R^{2}$ inflation, JCAP 06 (2021) 023, [arXiv:2102.02712]. * [54] A. Lykkas and K. Tamvakis, Extended interactions in the Palatini-$R^{2}$ inflation, JCAP 08 (2021), no. 043 [arXiv:2103.10136]. * [55] I. D. Gialamas, A. Karam, T. D. Pappas, and V. C. Spanos, Scale-invariant quadratic gravity and inflation in the Palatini formalism, Phys. Rev. D 104 (2021), no. 2 023521, [arXiv:2104.04550]. * [56] I. Antoniadis, A. Guillen, and K. Tamvakis, Ultraviolet behaviour of Higgs inflation models, JHEP 08 (2021) 018, [arXiv:2106.09390]. [Addendum: JHEP 05, 074 (2022)]. * [57] I. D. Gialamas, A. Karam, T. D. Pappas, A. Racioppi, and V. C. Spanos, Scale-invariance, dynamically induced Planck scale and inflation in the Palatini formulation, J. Phys. Conf. Ser. 2105 (2021), no. 1 012005, [arXiv:2107.04408]. * [58] M. AlHallak, A. AlRakik, N. Chamoun, and M. S. El-Daher, Palatini f(R) Gravity and Variants of k-/Constant Roll/Warm Inflation within Variation of Strong Coupling Scenario, Universe 8 (2022), no. 2 126, [arXiv:2111.05075]. * [59] C. Dioguardi, A. Racioppi, and E. Tomberg, Slow-roll inflation in Palatini F(R) gravity, JHEP 06 (2022) 106, [arXiv:2112.12149]. * [60] K. Dimopoulos, A. Karam, S. Sánchez López, and E. Tomberg, Modelling Quintessential Inflation in Palatini-Modified Gravity, Galaxies 10 (2022), no. 2 57, [arXiv:2203.05424]. * [61] K. Dimopoulos, A. Karam, S. Sánchez López, and E. Tomberg, Palatini R 2 quintessential inflation, JCAP 10 (2022) 076, [arXiv:2206.14117]. * [62] G. Pradisi and A. Salvio, (In)equivalence of metric-affine and metric effective field theories, Eur. Phys. J. C 82 (2022), no. 9 840, [arXiv:2206.15041]. * [63] R. Durrer, O. Sobol, and S. Vilchinskii, Magnetogenesis in Higgs-Starobinsky inflation, Phys. Rev. D 106 (2022), no. 12 123520, [arXiv:2207.05030]. * [64] A. Salvio, Inflating and reheating the Universe with an independent affine connection, Phys. Rev. D 106 (2022), no. 10 103510, [arXiv:2207.08830]. * [65] I. Antoniadis, A. Guillen, and K. Tamvakis, Late time acceleration in Palatini gravity, JHEP 11 (2022) 144, [arXiv:2207.13732]. * [66] A. B. Lahanas, Issues in Palatini $R^{2}$ inflation: Bounds on the reheating temperature, Phys. Rev. D 106 (2022), no. 12 123530, [arXiv:2210.00837]. * [67] I. D. Gialamas and K. 
Tamvakis, Inflation in metric-affine quadratic gravity, JCAP 03 (2023) 042, [arXiv:2212.09896]. * [68] C. Dioguardi, A. Racioppi, and E. Tomberg, Inflation in Palatini quadratic gravity (and beyond), arXiv:2212.11869. * [69] D. Iosifidis, R. Myrzakulov, and L. Ravera, Cosmology of Metric-Affine R+$\beta$$R^{2}$ Gravity with Pure Shear Hypermomentum, Fortsch. Phys. 72 (2024), no. 1 2300003, [arXiv:2301.00669]. * [70] I. D. Gialamas and K. Tamvakis, Bimetric-affine quadratic gravity, Phys. Rev. D 107 (2023), no. 10 104012, [arXiv:2303.11353]. * [71] I. D. Gialamas, A. Karam, T. D. Pappas, and E. Tomberg, Implications of Palatini gravity for inflation and beyond, arXiv:2303.14148. * [72] S. Sánchez López, K. Dimopoulos, A. Karam, and E. Tomberg, Observable gravitational waves from hyperkination in Palatini gravity and beyond, Eur. Phys. J. C 83 (2023), no. 12 1152, [arXiv:2305.01399]. * [73] C. Dioguardi and A. Racioppi, Palatini $F(R,X)$: a new framework for inflationary attractors, arXiv:2307.02963. * [74] A. Di Marco, E. Orazi, and G. Pradisi, Einstein–Cartan pseudoscalaron inflation, Eur. Phys. J. C 84 (2024), no. 2 146, [arXiv:2309.11345]. * [75] D. A. Gomes, R. Briffa, A. Kozak, J. Levi Said, M. Saal, and A. Wojnar, Cosmological constraints of Palatini f(R) gravity, JCAP 01 (2024) 011, [arXiv:2310.17339]. * [76] W.-Y. Hu, Q.-Y. Wang, Y.-Q. Ma, and Y. Tang, Gravitational Waves from Preheating in Inflation with Weyl Symmetry, arXiv:2311.00239. * [77] F. Bauer and D. A. Demir, Inflation with Non-Minimal Coupling: Metric versus Palatini Formulations, Phys. Lett. B665 (2008) 222–226, [arXiv:0803.2664]. * [78] S. Rasanen and P. Wahlman, Higgs inflation with loop corrections in the Palatini formulation, JCAP 11 (2017) 047, [arXiv:1709.07853]. * [79] T. Tenkanen, Resurrecting Quadratic Inflation with a non-minimal coupling to gravity, JCAP 12 (2017) 001, [arXiv:1710.02758]. * [80] A. Racioppi, Coleman-Weinberg linear inflation: metric vs. Palatini formulation, JCAP 12 (2017) 041, [arXiv:1710.04853]. * [81] T. Markkanen, T. Tenkanen, V. Vaskonen, and H. Veermäe, Quantum corrections to quartic inflation with a non-minimal coupling: metric vs. Palatini, JCAP 03 (2018) 029, [arXiv:1712.04874]. * [82] L. Järv, A. Racioppi, and T. Tenkanen, Palatini side of inflationary attractors, Phys. Rev. D 97 (2018), no. 8 083513, [arXiv:1712.08471]. * [83] C. Fu, P. Wu, and H. Yu, Inflationary dynamics and preheating of the nonminimally coupled inflaton field in the metric and Palatini formalisms, Phys. Rev. D 96 (2017), no. 10 103542, [arXiv:1801.04089]. * [84] A. Racioppi, New universal attractor in nonminimally coupled gravity: Linear inflation, Phys. Rev. D 97 (2018), no. 12 123514, [arXiv:1801.08810]. * [85] A. Kozak and A. Borowiec, Palatini frames in scalar–tensor theories of gravity, Eur. Phys. J. C 79 (2019), no. 4 335, [arXiv:1808.05598]. * [86] S. Rasanen, Higgs inflation in the Palatini formulation with kinetic terms for the metric, Open J. Astrophys. 2 (2019), no. 1 1, [arXiv:1811.09514]. * [87] J. P. B. Almeida, N. Bernal, J. Rubio, and T. Tenkanen, Hidden Inflaton Dark Matter, JCAP 03 (2019) 012, [arXiv:1811.09640]. * [88] K. Shimada, K. Aoki, and K.-i. Maeda, Metric-affine Gravity and Inflation, Phys. Rev. D 99 (2019), no. 10 104020, [arXiv:1812.03420]. * [89] T. Takahashi and T. Tenkanen, Towards distinguishing variants of non-minimal inflation, JCAP 04 (2019) 035, [arXiv:1812.08492]. * [90] R. Jinno, K. Kaneta, K.-y. Oda, and S. C. 
Park, Hillclimbing inflation in metric and Palatini formulations, Phys. Lett. B 791 (2019) 396–402, [arXiv:1812.11077]. * [91] J. Rubio and E. S. Tomberg, Preheating in Palatini Higgs inflation, JCAP 1904 (2019), no. 04 021, [arXiv:1902.10148]. * [92] A. Racioppi, Non-Minimal (Self-)Running Inflation: Metric vs. Palatini Formulation, JHEP 21 (2020) 011, [arXiv:1912.10038]. * [93] M. Shaposhnikov, A. Shkerin, and S. Zell, Quantum Effects in Palatini Higgs Inflation, JCAP 07 (2020) 064, [arXiv:2002.07105]. * [94] A. Borowiec and A. Kozak, New class of hybrid metric-Palatini scalar-tensor theories of gravity, JCAP 07 (2020) 003, [arXiv:2003.02741]. * [95] L. Järv, A. Karam, A. Kozak, A. Lykkas, A. Racioppi, and M. Saal, Equivalence of inflationary models between the metric and Palatini formulation of scalar-tensor theories, Phys. Rev. D 102 (2020), no. 4 044029, [arXiv:2005.14571]. * [96] A. Karam, M. Raidal, and E. Tomberg, Gravitational dark matter production in Palatini preheating, JCAP 03 (2021) 064, [arXiv:2007.03484]. * [97] J. McDonald, Does Palatini Higgs Inflation Conserve Unitarity?, JCAP 04 (2021) 069, [arXiv:2007.04111]. * [98] M. Langvik, J.-M. Ojanperä, S. Raatikainen, and S. Rasanen, Higgs inflation with the Holst and the Nieh–Yan term, Phys. Rev. D 103 (2021), no. 8 083514, [arXiv:2007.12595]. * [99] M. Shaposhnikov, A. Shkerin, I. Timiryasov, and S. Zell, Higgs inflation in Einstein-Cartan gravity, JCAP 02 (2021) 008, [arXiv:2007.14978]. [Erratum: JCAP 10, E01 (2021)]. * [100] M. Shaposhnikov, A. Shkerin, I. Timiryasov, and S. Zell, Einstein-Cartan gravity, matter, and scale-invariant generalization, JHEP 10 (2020) 177, [arXiv:2007.16158]. * [101] Y. Mikura, Y. Tada, and S. Yokoyama, Conformal inflation in the metric-affine geometry, EPL 132 (2020), no. 3 39001, [arXiv:2008.00628]. * [102] S. Verner, Quintessential Inflation in Palatini Gravity, JCAP 04 (2021) [arXiv:2010.11201]. * [103] V.-M. Enckell, S. Nurmi, S. Räsänen, and E. Tomberg, Critical point Higgs inflation in the Palatini formulation, JHEP 04 (2021) 059, [arXiv:2012.03660]. * [104] Y. Reyimuaji and X. Zhang, Natural inflation with a nonminimal coupling to gravity, JCAP 03 (2021) 059, [arXiv:2012.14248]. * [105] A. Karam, S. Karamitsos, and M. Saal, $\beta$-function reconstruction of Palatini inflationary attractors, JCAP 10 (2021) 068, [arXiv:2103.01182]. * [106] Y. Mikura, Y. Tada, and S. Yokoyama, Minimal $k$-inflation in light of the conformal metric-affine geometry, Phys. Rev. D 103 (2021), no. 10 L101303, [arXiv:2103.13045]. * [107] A. Racioppi, J. Rajasalu, and K. Selke, Multiple point criticality principle and Coleman-Weinberg inflation, JHEP 06 (2022) 107, [arXiv:2109.03238]. * [108] Y. Mikura and Y. Tada, On UV-completion of Palatini-Higgs inflation, JCAP 05 (2022), no. 05 035, [arXiv:2110.03925]. * [109] D. Y. Cheong, S. M. Lee, and S. C. Park, Reheating in models with non-minimal coupling in metric and Palatini formalisms, JCAP 02 (2022), no. 02 029, [arXiv:2111.00825]. * [110] H. Azri, I. Bamwidhi, and S. Nasri, Isocurvature modes and non-Gaussianity in affine inflation, Phys. Rev. D 104 (2021), no. 10 104064, [arXiv:2111.03828]. * [111] A. Racioppi and M. Vasar, On the number of e-folds in the Jordan and Einstein frames, Eur. Phys. J. Plus 137 (2022), no. 5 637, [arXiv:2111.09677]. * [112] M. Piani and J. Rubio, Higgs-Dilaton inflation in Einstein-Cartan gravity, JCAP 05 (2022), no. 05 009, [arXiv:2202.04665]. * [113] G. K. Karananas, M. Shaposhnikov, and S. 
Zell, Field redefinitions, perturbative unitarity and Higgs inflation, JHEP 06 (2022) 132, [arXiv:2203.09534]. * [114] C. Rigouzzo and S. Zell, Coupling metric-affine gravity to a Higgs-like scalar field, Phys. Rev. D 106 (2022), no. 2 024015, [arXiv:2204.03003]. * [115] I. D. Gialamas, A. Karam, and T. D. Pappas, Gravitational corrections to electroweak vacuum decay: metric vs. Palatini, Phys. Lett. B 840 (2023) 137885, [arXiv:2212.03052]. * [116] S. C. Hyun, J. Kim, T. Kodama, S. C. Park, and T. Takahashi, Nonminimally assisted inflation: a general analysis, JCAP 05 (2023) 050, [arXiv:2302.05866]. * [117] M. Piani and J. Rubio, Preheating in Einstein-Cartan Higgs Inflation: oscillon formation, JCAP 12 (2023) 002, [arXiv:2304.13056]. * [118] I. D. Gialamas and H. Veermäe, Electroweak vacuum decay in metric-affine gravity, Phys. Lett. B 844 (2023) 138109, [arXiv:2305.07693]. * [119] C. Rigouzzo and S. Zell, Coupling metric-affine gravity to the standard model and dark matter fermions, Phys. Rev. D 108 (2023), no. 12 124067, [arXiv:2306.13134]. * [120] B. Barman, N. Bernal, and J. Rubio, Rescuing Gravitational-Reheating in Chaotic Inflation, arXiv:2310.06039. * [121] I. D. Gialamas, A. Karam, A. Lykkas, and T. D. Pappas, Palatini-Higgs inflation with nonminimal derivative coupling, Phys. Rev. D 102 (2020), no. 6 063522, [arXiv:2008.06371]. * [122] H. B. Nezhad and S. Rasanen, Scalar fields with derivative coupling to curvature in the Palatini and the metric formulation, JCAP 02 (2024) 009, [arXiv:2307.04618]. * [123] J. Annala, Higgs inflation and higher-order gravity in Palatini formulation, Master’s thesis, Helsinki U., 2020. * [124] J. Annala and S. Rasanen, Inflation with $R_{(\alpha\beta)}$ terms in the Palatini formulation, JCAP 09 (2021) 032, [arXiv:2106.12422]. * [125] J. Beltrán Jiménez and A. Delhom, Ghosts in metric-affine higher order curvature gravity, Eur. Phys. J. C 79 (2019), no. 8 656, [arXiv:1901.08988]. * [126] J. Beltrán Jiménez and A. Delhom, Instabilities in metric-affine theories of gravity with higher order curvature terms, Eur. Phys. J. C 80 (2020), no. 6 585, [arXiv:2004.11357]. * [127] C. Marzo, Radiatively stable ghost and tachyon freedom in metric affine gravity, Phys. Rev. D 106 (2022), no. 2 024045, [arXiv:2110.14788]. * [128] J. Annala and S. Rasanen, Stability of non-degenerate Ricci-type Palatini theories, JCAP 04 (2023) 014, [arXiv:2212.09820]. [Erratum: JCAP 08, E02 (2023)]. * [129] W. Barker and C. Marzo, Particle spectra of general Ricci-type Palatini or metric-affine theories, arXiv:2402.07641. * [130] W. Barker and S. Zell, Consistent particle physics in metric-affine gravity from extended projective symmetry, arXiv:2402.14917. * [131] N. E. Mavromatos, P. Pais, and A. Iorio, Torsion at Different Scales: From Materials to the Universe, Universe 9 (2023), no. 12 516, [arXiv:2310.13150]. * [132] Planck Collaboration, Y. Akrami et al., Planck 2018 results. X. Constraints on inflation, Astron. Astrophys. 641 (2020) A10, [arXiv:1807.06211]. * [133] BICEP, Keck Collaboration, P. A. R. Ade et al., Improved Constraints on Primordial Gravitational Waves using Planck, WMAP, and BICEP/Keck Observations through the 2018 Observing Season, Phys. Rev. Lett. 127 (2021), no. 15 151301, [arXiv:2110.00483]. * [134] F.-Y. Zhang, Reheating predictions in non-minimally coupled inflationary models with radiative corrections, Phys. Dark Univ. 39 (2023) 101169.
# Piloting Diversity and Inclusion Workshops in Artificial Intelligence and Robotics for Children A. Badillo-Perez, D. Badillo-Perez, D. Coyotzi-Molina, D. Cruz, R. Montenegro, L. Vazquez and M. Xochicale air4children: Artificial Intelligence and Robotics for Children Xicohtzinco, México <EMAIL_ADDRESS> ###### Abstract In this paper, we present preliminary work from a pilot workshop that aimed to promote diversity and inclusion in the fundamentals of Artificial Intelligence and Robotics for Children (air4children) in the context of developing countries. Considering the scarcity of funding and the little to no availability of specialised professionals to teach AI and robotics in developing countries, we present resources based on free open-source hardware and software, open educational resources, and alternative education programs. That said, the contribution of this work is a pilot workshop of four lessons that promotes diversity and inclusion in teaching AI and Robotics to a small, gender-balanced sample of 14 children with an average age of 7.64 years. We conclude that participants, instructors, coordinators and parents engaged well in the pilot workshop; we note the various challenges of securing the right resources for workshops in developing countries and pose future work. The resources to reproduce this work are available at https://github.com/air4children/hri2022. ###### Index Terms: Open Educational Resources, Educational robots, Child-Robot Interaction ## I Introduction Accessible and affordable technology in conjunction with open educational resources can promote equal opportunities for childhood education [1]. However, teaching state-of-the-art technologies such as Artificial Intelligence and Robotics (AIR) is a current challenge for low-income and often politically or culturally marginalized countries. Additionally, creating the right environment to promote inclusivity and diversity in teaching AIR has been little investigated. Astobiza et al. 2019, for instance, reported the need for collaborations between industry and a multidisciplinary group of researchers to address concerns on the paradigm of inclusivity in robotics [2]. In that sense, Astobiza et al. suggested that inclusive robotics should be based on two points: ”1) they should be easy to use, and 2) they must contribute to making accessibility easier in distinct environments” [2]. Peixoto et al. in 2018 reported the use of robots as a tool to promote diversity, leading to improved competences in communication, teamwork, leadership, problem solving, resilience and entrepreneurship [3, 4]. Recently, Pannier et al. in 2020 pointed out the challenges of increasing the participation of women and underrepresented minorities in the areas of Mechatronics and Robotics Engineering, as well as the creation of a community of educators to promote diversity and inclusion [5]. Similarly, Pannier et al. mentioned that the prevalence of free and open-source software and hardware has made mechatronics and robotics accessible to a more diverse population. Pannier et al. also touched on the importance of offering workshops to a wide range of underrepresented students, inspiring other programs and creating outreach activities for students, trainers and workshops [5].
In March 2021, we introduced air4children, Artificial Intelligence and Robotics for Children, as a way (a) to address aspects of inclusion, accessibility, equity and fairness and (b) to create affordable child-centred materials in AI and Robotics (AIR) in developing countries [6]. That said, in this work, we are addressing the challenges of piloting and organising workshops in the context of communities in developing countries where little to nothing is known about the demographics, education levels and socio-economic factors that impact the teaching of AIR. For instance, consider the town of Xicohtzinco, Tlaxcala, México as our case study: Xicohtzinco has a total population of 14,197 (6762 males and 7435 females) [7] and 19 schools, including seven kindergartens (3 public and 4 private), seven primaries (4 public and 3 private), four secondaries (2 public and 2 private) and one public high school [8]. However, neither the census [7] nor the education information site [8] provides further information about the teaching of technological subjects in AI and Robotics. That said, we hypothesised that piloting workshops of air4children in a town such as Xicohtzinco might lead us to a better understanding of the needs and challenges of promoting diversity and inclusion with state-of-the-art technologies and open educational resources. This short paper is organised as follows: Section II presents resources to promote diversity and inclusion in AI and Robotics for children. Section III presents the design of workshops for children from 6 to 8 years old. Section IV presents outcomes of a four-lesson pilot workshop for 14 children, including the engagement of instructors and coordinators. We present results of the workshops and finalise with conclusions, limitations and future work. ## II Resources to promote Diversity and Inclusion in AI and Robotics for children ### II-A Free and open-source software, open-source hardware and open educational resources In March 2021, we presented examples of how to create educational resources aimed to be ”affordable, educational and fun”: (a) Otto DIY, an educational open-source robot, and (b) the JetBot platform, an open-source educational robot for creating new AI projects [6]. Similarly, considering Open Educational Resources (OER), which aim to provide ”teaching and learning materials that are available without access fees”, seems to be a step in the right direction to afford innovation through OER-enabled pedagogy [9]. However, Wiley et al. in 2014 contrasted positives and negatives of OERs: among the benefits, OERs make the course development process quicker and easier; among the challenges are making OER materials easier for people to find and building financially self-sustainable programs, among many other difficulties [10]. Hence, in this work, we consider the Otto Humanoid a good option because of its affordability (a cost of 200 euros), its block-diagram programming interface, and its multiple sensors and actuators (servos and an LED matrix display), aligned with open-source software and hardware and OER principles [11]. ### II-B Ensuring education and Inclusive Learning Recently, Opertti et al. in 2021 discussed ideas in the forum ”Ensuring education and Inclusive Learning for Educational Recovery 2021” [12].
Such ideas to ensure education and inclusive learning are summarised as: (1) personalisation of education, including the recognition of specific learning expectations and needs, (2) designing an inclusive, empathic and participatory curriculum for a plural and open participation of a diversity of actors and institutions, (3) appropriation of technology as a community resource to strengthen ties between students, educators, families and communities, (4) empowering knowledge, learning, collaboration, trust and listening among peers, and (5) the visualisation of schools as lifelong learning spaces. Therefore, such ideas would help to promote diversity and inclusion in teaching AI and Robotics to children. ### II-C Alternative education programs with new technologies Alternative education programs such as Montessori, Waldorf and Reggio Emilia consider children as active authors of their own development [13, 14]. In the last 5 years such programs have started to include topics on AI, robotics and computational thinking in their curriculum [15, 16]. For instance, Aljabreen pointed out the adoption of new technologies and how early childhood education is being re-conceptualised [16]. Elkin et al. in 2014 explored how robots can be used in the Montessori curriculum [15]. Similarly, Elkin et al. noted that revised curriculums that include technology should not deviate from the main purpose of the Montessori classroom [15]. Drigas and Gkeka in 2016 reviewed the application of information and communication technologies in the Montessori Method, mentioning that the manipulatives (objects used to develop motor skills or to understand mathematical abstractions) are based on cultural areas, language, mathematics and the senses, but little to none on technological areas [17]. Drigas and Gkeka also reviewed Montessori materials of the 21st century, such as interactive systems with sounds and lights, touch applications to enhance visual literacy, and tools for the development of computational thinking and constructions of the physical world [17]. These works indicate that the incorporation of such manipulatives together with robotics might lead to scenarios that explore motor skill development, visualisation and computational thinking. Recently, Scippo and Ardolino reported a longitudinal study of the use of computational thinking over five years with primary school participants in a Montessori school [18]. Scippo and Ardolino pointed out the importance of aligning the Montessori material with the computational thinking activities. That said, previous authors stated various challenges regarding the incorporation of new technologies into their curriculum, posing further questions on creating curriculums that are more accessible to a diverse population, as has been done in other areas such as open educational resources. ## III Designing Diversity and Inclusion Workshops To design diverse and inclusive workshops we considered: (a) free and open-source software, open-source hardware and open educational resources, (b) ensuring education and inclusive learning, and (c) alternative education programs with new technologies.
That said, with the combination of such resources, we proposed a four-lesson workshop with three-fold aims: (a) to promote diversity and inclusion while teaching children AI and Robotics with recreational and engaging activities, (b) to encourage children to discover and increase their interest in AI and Robotics with open-source hardware and software, and (c) to develop the Montessori concept of ’concrete to abstract’ to make abstract concepts clearer with hands-on learning materials. Figure 1: Curriculum of the pilot workshop with four lessons. Lesson 01 introduces the course (L01), lesson 02 provides the basics of anatomy (L02), lesson 03 covers algorithms (L03), and lesson 04 wraps up and showcases the projects of the children (L04). The arrows in the figure illustrate the connection from the final to the initial part of each lesson and how all the lessons were connected to the final section of the workshop. #### Lesson 01: Breaking the ice and motivations The educational goal for this lesson was to develop the children’s curiosity about AI and Robotics while emphasizing the importance of interpersonal connections that would evolve into collaborative work in the following lessons. That said, this lesson started with a recreational activity where each student and teacher introduced themselves with their name, favorite food and a superpower or ability they would like to have that was related to robots. This lesson also covered basic concepts and examples of AI and Robotics in different fields and daily life, how the brain works, and how the human senses and body parts relate to the way a robot is built, works and performs activities. #### Lesson 02: Human senses and coding my first robot The main purpose of this lesson was to understand fundamentals of Robotics. Children began to work with more abstract concepts, developing problem-solving skills as well as cooperative working relationships. The first activity, outside of the classroom, was a true-or-false game, where the teacher read a sentence about AI and Robotics, and the children jumped in front of a rope if the sentence was true, or behind the rope if the sentence was false. In the second activity, the instructor explained the human senses and their relationship with inputs and outputs. After that, instructors explained examples of sequences and codes. In the last activity, participants were asked to solve a tangram in groups, in which a group leader provided instructions to the teammates as an analogy of coding a robot with algorithms. #### Lesson 03: Playing with reaction-action activities The educational goal of this lesson was to cover the concept of causes and consequences with daily life examples and the computational thinking of robots. Hence, this lesson started with a matching game consisting of figures and shadows of figures, where participants developed their comparison skills to match similar or different robots. This lesson covered points on how a robot works with sensors, processors, actuators and programming. The “find the effect” activity was also introduced, where participants had to relate pictures of causes and consequences, for example “the cause is the rain and the consequence is a rainbow”. Afterwards, we worked with the Otto humanoids, programming the presence sensor so that the robot moves, dances and displays text on the 8x8 matrix (a schematic sketch of this sensor-to-action loop is shown below).
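The cause-and-effect behaviour programmed in Lesson 03 can be summarised in a few lines. The sketch below is purely illustrative Python pseudo-behaviour: the workshop itself used the Otto block-programming interface, and `read_presence` and `act` are hypothetical stand-ins rather than the Otto API:

```python
import random
import time

def read_presence() -> bool:
    """Hypothetical stand-in for Otto's presence (ultrasonic) sensor."""
    return random.random() < 0.3

def act(action: str) -> None:
    """Hypothetical stand-in for blocks such as 'walk', 'dance' or 'show text'."""
    print(f"Otto: {action}")

# Lesson 03 loop: a detected presence (cause) triggers movement, a dance
# and a message on the 8x8 matrix (consequences).
for _ in range(5):
    if read_presence():
        act("walk forward")
        act("dance")
        act("show 'HOLA' on the 8x8 matrix")
    else:
        act("wait")
    time.sleep(1)
```

In the classroom the same logic was assembled from blocks, which kept the cause-consequence structure visible to the children.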
#### Lesson 04: Develop your own AIR The fourth lesson aimed to summarise what was covered in the previous lessons, emphasising the relationship of human body anatomy (brain, neurons and body parts) with humanoid robots (computer, sensors and actuators). This lesson covered real-world applications of AI and Robotics including medicine, space robotics and smart cities. Three projects were prepared to be introduced to the teams, in which every participant had a role. Each team prepared a short speech about their application of AI and Robotics. See Figure 1, which illustrates the four lessons of the workshop. ## IV Piloting Diversity and Inclusion Workshops To pilot the four-lesson workshop, we invited 14 participants (6 female and 8 male) with ages ranging from 6 to 11 years old (average age of 7.64) (Figure 2). For the workshop, three instructors with three years of experience in teaching and two coordinators with ten years of teaching experience volunteered to deliver four lessons of 90 minutes (as shown in the proposed curriculum, Fig. 1). During the initial three lessons of the workshop, children incorporated the gained knowledge to relate fundamental human body anatomy (brain, neurons, body and senses) to robot parts (microcontroller, motors, sensors) based on open-source hardware and software. In the final lesson, children showcased their final work, promoting a sense of achievement by working not only with their minds but also with their social-emotional well-being. In all lessons, instructors encouraged and engaged every participant in individual and group activities. Figure 2: Instructors demonstrating fundamentals of AI and Robotics (A and B). Children engaging with classmates, robots and instructors (B and C). We noted, however, that each lesson was originally planned to be 90 minutes long and we did not consider breaks nor the participants’ energy levels, so from the second lesson onwards a 15-minute break was incorporated. Additionally, we piloted surveys (a) for children, with ten questions about their understanding and feelings towards different types of robots, and (b) for parents, with 30 questions about their understanding of AI and robotics and their awareness of technological advances in AI and Robotics. The aim of the surveys was not to report results but only to understand how participants and parents felt about being surveyed and how the logistics of surveying would work with more participants. That said, we noticed that participants required support, as a few participants were not familiar with reading surveys, so the content of the ten questions was spread over five questions in each of two sessions. On the other hand, parents felt that the surveys were lengthy, taking more than 60 minutes, and we also realised that a paper-based survey requires more work, as scans and transcripts are needed. ## V Conclusions, limitations and future work In this paper, we posed the challenges of promoting Diversity and Inclusion in teaching ”Artificial Intelligence and Robotics for Children” in developing countries, considering resources of open-source hardware and software and principles of Montessori education. We think that the goals of each lesson go beyond the learning of a single concept and contribute to developing skills of inclusion and diversity that children can take and apply to other areas of their life.
That said, for the pilot of the workshop, we considered a small sample of 14 children with an average age of 7.64 years old from the town of Xicohtzinco, Tlaxcala, México. The workshops were free of cost as a way to encourage participation and the inclusion of anyone. During the pilot workshop, children were enthusiastic about learning the fundamentals of AIR by coding, designing and playing with open-sourced robots. The instructors embraced the different set of skills each child had by working in small groups and supporting the students during all the activities. However, we noted that grouping four participants with one instructor did not create an engaging experience, as each group had only one robot and one computer, and the space and number of participants sometimes left one participant outside of the reachable robot-computer setup. In terms of limitations, the pilot surveys only helped us to identify the gender and age of the participants, and no other insights such as the needs of the target group were considered. Similarly, this work did not consider metrics to quantify the impact of the workshop, but only aimed to identify the needs of the workshop that might be addressed in future work. The workshops were free of cost, but no sustainable model was considered for this pilot experiment. As future work, we are planning to run another pilot in late 2022 or early 2023 with more lessons and perhaps more participants, considering the addition of a study design and a pre-survey to identify the needs of the participants of the workshops. For the curriculum of the workshops, we are planning to improve the activities to be more engaging, diverse and inclusive, and to provide further evidence on how alternative education programs (e.g. Montessori, Waldorf, Reggio Emilia [13]; and the "synthesis program" [19]) with new technologies might lead to potential new avenues of inclusivity and diversity.

## Acknowledgment

To Marta Pérez and Donato Badillo for their support in organising the pilot of the workshops. To Rocio Montenegro for her contributions to the design of the Montessori curriculum for the workshops. To Donato Badillo Pérez, Antonio Badillo Pérez and Diego Coyotzi Molina for volunteering as instructors of the workshops. To Leticia Vázquez for her support with the logistics and feedback to improve the workshops. To Adriana Pérez Fortis for her contributions and discussion in preparing draft pilot surveys for the parents and children. To Elías Méndez Zapata for his support and feedback on the hardware design of the robot. To Dago Cruz for his feedback and discussions on the design of the workshops. To Angel Mandujano, Elva Corona and others who have contributed with feedback and support to keep the AIR4children project alive.

## Contributions

Antonio Badillo-Pérez: Contributing to the design and write-up of lesson 02. Donato Badillo-Pérez: Contributing to the design and write-up of lesson 01. Diego Coyotzi-Molina: Contributing to the design and write-up of lesson 03. Dago Cruz: Contributing to proofreading, editing and feedback. Rocio Montenegro: Contributing to the write-up of designing and piloting workshops. Leticia Vázquez: Write-up and refinement of the conclusions. Miguel Xochicale: Contributing to creating the open-source and reproducible workflow, drafting, write-up, editing, and submission of the paper.

## References

* [1] Y. Kaga and D. Sretenov, "Inclusion in early childhood care and education: Brief on inclusion in education," https://unesdoc.unesco.org/ark:/48223/pf0000379502, accessed: 29 January 2022.
* [2] A. Monasterio Astobiza, M. Toboso, M. Aparicio, T. Ausín, D. López, R. Morte, and J. L. Pons, "Bringing inclusivity to robotics with inbots," _Nature Machine Intelligence_, vol. 1, no. 4, pp. 164–164, Apr 2019. [Online]. Available: https://doi.org/10.1038/s42256-019-0040-5
* [3] A. Peixoto, M. Castro, M. Blazquez, S. Martin, E. Sancristobal, G. Carro, and P. Plaza, "Robotics tips and tricks for inclusion and integration of students," in _2018 IEEE Global Engineering Education Conference (EDUCON)_, 2018, pp. 2037–2041.
* [4] A. Peixoto, C. S. G. González, R. Strachan, P. Plaza, M. de los Angeles Martinez, M. Blazquez, and M. Castro, "Diversity and inclusion in engineering education: Looking through the gender question," in _2018 IEEE Global Engineering Education Conference (EDUCON)_, 2018, pp. 2071–2075.
* [5] C. Pannier, C. Berry, M. Morris, and X. Zhao, "Diversity and inclusion in mechatronics and robotics engineering education," _ASEE Annual Conference Exposition Proceedings_, 2020. [Online]. Available: https://par.nsf.gov/biblio/10184534
* [6] R. Montenegro, E. Corona, D. Badillo-Perez, A. Mandujano, L. Vazquez, D. Cruz, and M. Xochicale, "Air4children: Artificial intelligence and robotics for children," 2021. [Online]. Available: https://github.com/air4children/hri2021
* [7] "The national institute of statistics and geography (inegi)," https://en.www.inegi.org.mx/, accessed: 10 January 2022.
* [8] "Sistema de información y gestión educativa (siged)," https://www.siged.sep.gob.mx/SIGED/escuelas.html, accessed: 10 January 2022.
* [9] V. Clinton-Lisell, E. M. Legerski, B. Rhodes, and S. Gilpin, _Open Educational Resources as Tools to Foster Equity_. Cham: Springer International Publishing, 2021, pp. 317–337.
* [10] D. Wiley, T. J. Bliss, and M. McEwen, _Open Educational Resources: A Review of the Literature_. New York, NY: Springer New York, 2014, pp. 781–789.
* [11] C. Parra-Palacio, T. Svarcova, and E. Clime. (2016) Otto diy robots. [Online]. Available: https://www.ottodiy.com/
* [12] O. Renato, B. Carlos, and A. Perrine, "Thematic notes 1: Inclusion in education," https://unesdoc.unesco.org/ark:/48223/pf0000378427, accessed: 29 January 2022.
* [13] C. Edwards, "Three approaches from europe: Waldorf, montessori, and reggio emilia," _Early Childhood Research and Practice_, vol. 4, 03 2002.
* [14] M. Montessori, _The Absorbent Mind_. New York: Dell Pub, 1969.
* [15] M. Elkin, A. Sullivan, and M. Bers, "Implementing a robotics curriculum in an early childhood montessori classroom," _Journal of Information Technology Education: Innovations in Practice_, vol. 13, pp. 153–169, 01 2014.
* [16] H. Aljabreen, "Montessori, waldorf, and reggio emilia: A comparative analysis of alternative models of early childhood education," _International Journal of Early Childhood_, vol. 52, no. 3, pp. 337–353, Dec 2020. [Online]. Available: https://doi.org/10.1007/s13158-020-00277-1
* [17] A. Drigas and E. Gkeka, "Montessori method and icts," _International Journal of Recent Contributions from Engineering, Science & IT (iJES)_, vol. 4, no. 1, pp. 25–30, Mar. 2016. [Online]. Available: https://online-journals.org/index.php/i-jes/article/view/5481
* [18] S. Scippo and F. Ardolino, "Computational thinking in montessori primary school," _Ricerche di Pedagogia e Didattica. Journal of Theories and Research in Education_, vol. 16, no. 2, pp. 59–76, Jan. 2021. [Online].
Available: https://rpd.unibo.it/article/view/12163 * [19] “Synthesis: where kids learn how to think.” https://www.synthesis.is/, accessed: 16 Jan 2022.
# Learning Lie Group Symmetry Transformations with Neural Networks

Alex Gabel, Victoria Klein, Riccardo Valperga, Jeroen S. W. Lamb, Kevin Webster, Rick Quax, Efstratios Gavves

###### Abstract

The problem of detecting and quantifying the presence of symmetries in datasets is useful for model selection, generative modeling, and data analysis, amongst others. While existing methods for hard-coding transformations in neural networks require prior knowledge of the symmetries of the task at hand, this work focuses on discovering and characterizing unknown symmetries present in the dataset, namely, Lie group symmetry transformations beyond the traditional ones usually considered in the field (rotation, scaling, and translation). Specifically, we consider a scenario in which a dataset has been transformed by a one-parameter subgroup of transformations with different parameter values for each data point. Our goal is to characterize the transformation group and the distribution of the parameter values. The results showcase the effectiveness of the approach in both settings: identifying the transformation group and recovering the distribution of its parameter.

## 1 Introduction

It has been shown that restricting the hypothesis space of functions that neural networks are able to approximate using known properties of data improves performance in a variety of tasks (Worrall & Welling, 2019; Cohen et al., 2018; Weiler et al., 2018; Zaheer et al., 2017; Cohen & Welling, 2016). The field of Deep Learning has produced a prolific amount of work in this direction, providing practical parameterizations of function spaces with the desired properties that are also universal approximators of the target functions (Yarotsky, 2022). In physics and, more specifically, time-series forecasting of dynamical systems, symmetries are ubiquitous: laws of motion are often symmetric with respect to various transformations such as rotations and translations, while transformations that preserve solutions of equations of motion are in one way or another associated with conserved quantities (Noether, 1918). In computer vision, successful neural network architectures are often invariant with respect to transformations that preserve the perceived object identity as well as all pattern information, such as translation, rotation and scaling. Many of these transformations are smooth and differentiable, and thus belong to the family of Lie groups, which is the class of symmetries we deal with in this work.

Figure 1: The distribution of transformations in a toy dataset that correspond to the Lie groups of rotation and (isotropic) scaling, given in terms of the parameters degree and scaling factor respectively; crucially, these groups are differentiable and can be (locally) decomposed into one-parameter subgroups.

Although methods that hard-code transformations are capable of state-of-the-art performance in various tasks, they all require prior knowledge about symmetries in order to restrict the function space of a neural network. A broad class of a priori unknown transformations comes into play in the context of modelling dynamical systems and in applications to physics. On the other hand, in vision tasks, identity-preserving transformations are often known beforehand. Despite this, these transformations are expressed differently by different datasets.
As a result, algorithms for not only _discovering_ unknown symmetries but also _quantifying_ the presence of specific transformations in a given dataset may play a crucial role in informing model selection for scientific discovery or computer vision, by identifying and describing physical systems through their symmetries and selecting models that are invariant or equivariant with respect to only those symmetries that are _actually_ present in the dataset under consideration. In this work, we address the problem of qualitatively and quantitatively detecting the presence of symmetries with respect to one-parameter subgroups within a given dataset (see Figure 1). In particular, let $\phi(t)$ be a one-parameter subgroup of transformations. We consider the scenario in which a dataset $\{x_{i}\}_{i=1}^{N}$ has been acted on by $\phi(t)$, with a _different_ value of the parameter $t$ for every point $x_{i}$. Our goal is to characterise the group of transformations $\phi(t)$, as well as the _distribution_ from which the parameters $t$ have been sampled. We propose two models: a naive approach that successfully manages to identify the underlying one-parameter subgroup, and an autoencoder model that learns transformations of a one-parameter subgroup in the latent space and is capable of extracting the overall shape of the $t$-distributions. The cost of the latter is that the one-parameter subgroup in the latent space is not necessarily identical to that in pixel space. The work is structured as follows: Section 2 introduces some basic tools from Lie group theory; Section 3 outlines the method; Section 5 provides an overview of the existing methods that are related to our own; and lastly, results are shown in Section 4.

## 2 Background

The theoretical underpinnings of symmetries or invariance can be described using group theory (Fulton & Harris, 1991). In particular, we present the necessary theory of one-parameter subgroups (Olver, 1993) on which our method is based, following the logic of Oliveri (2010).

### 2.1 One-parameter subgroups

We focus on learning invariances with respect to one-parameter subgroups of a Lie group $G$, which offer a natural way to describe continuous symmetries or invariances of functions on vector spaces.

###### Definition 2.1.

A one-parameter subgroup of $G$ is a differentiable homomorphism $\phi:\mathbb{R}\to G$; that is, a differentiable map such that $\phi(t+s)=\phi(t)\phi(s)$ for all $t,s\in\mathbb{R}$.

Let the action of $\phi$ on the vector space $X\subset\mathbb{R}^{n}$ be a transformation $T:X\times\mathbb{R}\to X$ that is continuous in $x\in X$ and $t\in\mathbb{R}$. Because of continuity, for sufficiently small $t$ and some fixed $x\in X$, the action is given by

$T(x,t)\approx x+tA(x)\;\text{where}\;A(x):=\frac{\partial T(x,t)}{\partial t}\Bigg{|}_{t=0}.$ (1)

Note that this is equivalent to taking a first-order Taylor expansion in $t$ around $t=0$.
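To make Definition 2.1 and the linearisation in Eq. (1) concrete, the following minimal NumPy sketch (ours, not from the paper) checks both the homomorphism property and the $O(t^{2})$ error of the first-order action for the rotation subgroup of $SO(2)$:

```python
import numpy as np

# Generator of 2D rotations: A(x) = A @ x with A = [[0, -1], [1, 0]],
# so the exact action phi(t) is rotation by angle t.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def exact_action(x, t):
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])  # closed form of exp(t * A)
    return R @ x

def first_order_action(x, t):
    # Eq. (1): T(x, t) ~ x + t * A(x) for small t
    return x + t * (A @ x)

x = np.array([1.0, 0.0])
for t in (0.5, 0.1, 0.01):
    err = np.linalg.norm(exact_action(x, t) - first_order_action(x, t))
    print(f"t = {t:4.2f}, linearisation error = {err:.2e}")  # shrinks like t^2

# Homomorphism property of Definition 2.1: phi(t + s) = phi(t) phi(s)
s, t = 0.3, 0.7
assert np.allclose(exact_action(exact_action(x, s), t), exact_action(x, s + t))
```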
### 2.2 Generators

In general, we can use $A(x)$ in (1) to construct what is known as the generator of a one-parameter subgroup $\phi$ of a Lie group $G$, which in turn characterises an ordinary differential equation whose solution coincides with the action $T$ on $X$. Let $C^{\infty}(X)$ be the space of smooth functions from $X$ to $X$. The generator of $\phi$ is defined as a linear differential operator $L:C^{\infty}(X)\to C^{\infty}(X)$ such that

$L=\sum_{i=1}^{n}(A(x))_{i}\frac{\partial}{\partial x_{i}}$ (2)

describing the vector field of the infinitesimal increment $A(x)t$ in (1), where $\partial/\partial x_{i}$ are the unit vectors of $X$ in the coordinate directions for $i=1,\ldots,n$. It can be shown (Olver, 1993) that, for a fixed $x\in X$, $T(x,t)$ is the solution to the ordinary differential equation

$\frac{dT(x,t)}{dt}=LT(x,t)\quad\text{where}\quad T(x,0)=x.$ (3)

The solution to (3) is the exponential

$T(x,t)=e^{tL}x\quad\text{where}\quad e^{tL}:=\sum_{k=0}^{\infty}\frac{(tL)^{k}}{k!},$ (4)

where $L^{k}$ is the operator $L$ applied $k$ times iteratively. For a one-parameter subgroup $\phi$ of a matrix Lie group $G\subset GL(n,\mathbb{R})$ and a fixed $x\in X$, it can be shown (Olver, 1993) that there exists a unique matrix $A\in\mathbb{R}^{n\times n}$ such that $A(x)=Ax$. This is a more restrictive approach, as groups such as translations cannot be written as a matrix multiplication.

Figure 2: Model architecture.

## 3 Method

As in Rao & Ruderman (1998), Sanborn et al. (2022) and Dehmamy et al. (2021), the semi-supervised symmetry detection setting that we consider consists of learning the generator $L$ of a one-parameter subgroup $\phi$ from pairs of observations of the form $\left\{\left(x_{i},\bar{x}_{i}=T(x_{i},t_{i})\right)\right\}_{i=1}^{N}$, where $N$ is the number of observations and each $t_{i}\in\mathbb{R}$ is drawn from some unknown distribution $p(t)$. Not only do we attempt to learn the generator $L$, but also the unknown distribution $p(t)$ of the parameters $\{t_{i}\}_{i=1}^{N}$.

### 3.1 Parametrisation of the generator

Deciding how to parametrise $L$ has an effect on the structure of the model and ultimately on which one-parameter subgroups we are able to learn. For simplicity, consider one-parameter subgroups acting on $X\subset\mathbb{R}^{2}$, although the operator can be defined for higher-dimensional vector spaces. The generator $L$ of $\phi$ is given as in Eq. (2) and we parametrise $A(x,y)$ as a linear operator in the basis $\{1,x,y\}$ with a coefficient matrix $A=\alpha\in\mathbb{R}^{2\times 3}$, giving

$\displaystyle\begin{split}L^{\alpha}&:=(\alpha_{11}+\alpha_{12}x+\alpha_{13}y)\frac{\partial}{\partial x}\\ &+(\alpha_{21}+\alpha_{22}x+\alpha_{23}y)\frac{\partial}{\partial y}.\end{split}$ (5)

In this particular basis, for different values of $\alpha$, the generator $L^{\alpha}$ is able to express one-parameter subgroups of the affine group. This includes the "traditional" symmetries that are usually considered (translation, rotation, and isotropic scaling) and all other affine transformations (alternatively, the constant terms can be thought of as the drift terms, i.e. translation, and the four others can be arranged into a diffusion matrix). This can be generalized to any functional form of the generator by augmenting the basis accordingly.
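As an illustration of the parametrisation in Eq. (5), the sketch below (our own, not the paper's code) realises the affine flow $\dot{x}=A(x)$ with a homogeneous $3\times 3$ matrix exponential, and shows how particular choices of $\alpha$ recover rotation, translation, and isotropic scaling:

```python
import numpy as np
from scipy.linalg import expm

def affine_flow(alpha, xy, t):
    """Flow of dx/dt = A(x), with A(x) = alpha[:, 0] + alpha[:, 1:] @ x in
    the basis {1, x, y} of Eq. (5). Computed via a homogeneous 3x3 matrix
    exponential: d/dt [x, 1] = [[M, b], [0, 0]] [x, 1]."""
    b, M = alpha[:, 0], alpha[:, 1:]
    G = np.zeros((3, 3))
    G[:2, :2], G[:2, 2] = M, b
    return (expm(t * G) @ np.append(xy, 1.0))[:2]

rotation    = np.array([[0.0, 0.0, -1.0],
                        [0.0, 1.0,  0.0]])  # x' = -y, y' = x
translation = np.array([[1.0, 0.0,  0.0],
                        [0.0, 0.0,  0.0]])  # x' = 1,  y' = 0
scaling     = np.array([[0.0, 1.0,  0.0],
                        [0.0, 0.0,  1.0]])  # x' = x,  y' = y

p = np.array([1.0, 0.0])
print(affine_flow(rotation, p, np.pi / 2))   # ~ [0, 1]
print(affine_flow(translation, p, 2.0))      # ~ [3, 0]
print(affine_flow(scaling, p, np.log(2.0)))  # ~ [2, 0]
```

Note how translation requires the constant (drift) column and is not expressible as a $2\times 2$ matrix acting on $(x,y)$, matching the remark above about matrix Lie groups being more restrictive.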
### 3.2 Discretisation and interpolation

The generator $L^{\alpha}$ is constructed as an operator that acts on a function $f:\mathbb{R}^{2}\to\mathbb{R}$, given, in practice, by $I\in\mathbb{R}^{n\times n}$ such that $I_{ij}=f(i,j)$ are evaluations of $f$ on a regularly-sampled $n\times n$ grid $M$ of points $M_{ij}=(i,j)\in\mathbb{R}^{2}$. We then vectorise $I$, obtaining a point in a vector space $\tilde{I}\in\mathbb{R}^{n^{2}}$ such that $\tilde{I}_{in+j}:=I_{ij}$, and construct the matrix operator $L^{\alpha}\in\mathbb{R}^{n^{2}\times n^{2}}$ as

$\displaystyle\begin{split}L^{\alpha}&:=(\alpha_{11}+\alpha_{12}X_{x}+\alpha_{13}X_{y})\frac{\partial}{\partial X_{x}}\\ &+(\alpha_{21}+\alpha_{22}X_{x}+\alpha_{23}X_{y})\frac{\partial}{\partial X_{y}},\end{split}$ (6)

acting on $\tilde{I}$, where $X_{x}\in\mathbb{R}^{n^{2}\times n^{2}}$ and $X_{y}\in\mathbb{R}^{n^{2}\times n^{2}}$ are such that $(X_{x})_{ij}:=i$ and $(X_{y})_{ij}:=j$, while ${\partial}/{\partial X_{x}}$ and ${\partial}/{\partial X_{y}}$ are also matrix operators in $\mathbb{R}^{n^{2}\times n^{2}}$. The exponential in (4) and the action $T$ then coincide with the matrix exponential. In order to define ${\partial}/{\partial X_{x}}$ and ${\partial}/{\partial X_{y}}$ as operators that transform by infinitesimal amounts at discrete locations, we require an interpolation function. The Shannon-Whittaker theorem (Marks, 2012) states that any square-integrable, piecewise continuous function that is band-limited in the frequency domain can be reconstructed from its discrete samples if they are sufficiently close and equally spaced. For the sake of interpolation, we will also assume that the function is periodic.

##### Interpolation: 1D

In the case where $M$ is a discrete set of $n$ points in 1D, we have that $I(i+n)=I(i)$ for all $i=1,\ldots,n$ samples. Shannon-Whittaker interpolation reconstructs the signal for all $x\in\mathbb{R}$ as

$\begin{split}&I(x)=\sum_{i=0}^{n-1}I(i)Q(x-i),\quad\text{where}\\ &Q(x)=\frac{1}{n}\left[1+2\sum_{p=1}^{n/2-1}\cos\left(\frac{2\pi px}{n}\right)\right].\end{split}$ (7)

Differentiating $Q$ with respect to $x$ and evaluating it at every $x_{i}\in M$ gives an analytic expression for a vector field in $\mathbb{R}^{n}$, describing continuous changes in $x$ at all $n$ points (Rao & Ruderman, 1998). This is precisely what ${\partial}/{\partial x}$ and ${\partial}/{\partial y}$ in (5) are.

##### Interpolation: 2D

In the case where $M$ is a grid of $n\times n$ points in 2D, we construct the $n\times n$ matrices of the partial derivatives of $Q$ with respect to $x$ and $y$, analogously to the 1D case, stacking them to construct the ${n^{2}\times n^{2}}$ block-diagonal matrices ${\partial}/{\partial X_{x}}$ and ${\partial}/{\partial X_{y}}$. It is worth noting that alternative interpolation techniques could be used to obtain the operators; the method does not depend on any specific one.
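A minimal NumPy sketch of the 1D case follows (illustrative; the paper's implementation may differ). It builds the $n\times n$ differentiation matrix from the analytic derivative of $Q$ in Eq. (7) and checks it on a band-limited periodic signal:

```python
import numpy as np

n = 16                                     # number of grid points (even)
p = np.arange(1, n // 2)                   # frequencies 1 .. n/2 - 1

def dQ(x):
    # Analytic derivative of the periodic kernel Q in Eq. (7):
    # Q'(x) = -(4 pi / n^2) * sum_p p * sin(2 pi p x / n)
    return -(4.0 * np.pi / n ** 2) * np.sum(p * np.sin(2.0 * np.pi * p * x / n))

# n x n differentiation matrix: (dI/dx)(i) = sum_j dQ(i - j) * I(j)
D = np.array([[dQ(i - j) for j in range(n)] for i in range(n)])

# Sanity check on a band-limited periodic signal (one full period on the grid)
x = np.arange(n)
I = np.sin(2.0 * np.pi * x / n)
dI_true = (2.0 * np.pi / n) * np.cos(2.0 * np.pi * x / n)
print(np.max(np.abs(D @ I - dI_true)))     # ~ 1e-15: exact up to round-off
```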
Two different architectures, the naive model and the latent model, are proposed to learn $L^{\alpha}$ and, in doing so, the action $T$.

#### 3.2.1 Naive model

The coefficients $\alpha$ of $L^{\alpha}$ are approximated by fixed coefficients that are shared across the dataset, while the parameter $t_{i}$ is approximated by $\hat{t}_{i}$, which depends on the input pair $(x_{i},\bar{x}_{i})$. We learn:

1. the coefficients $\alpha\in\mathbb{R}^{2\times 3}$ of the generator $L^{\alpha}$, and

2. the parameters $\theta$ of an MLP $f_{\theta}$ that returns $f_{\theta}(x_{i},\bar{x}_{i})=:\hat{t}_{i}$ as a function of every input pair,

such that the solution to (3) for $L^{\alpha}$ is approximated by

$\hat{T}(x_{i},\bar{x}_{i}):=e^{f_{\theta}(x_{i},\bar{x}_{i})L^{\alpha}}\,x_{i}.$ (8)

The model objective is then given by the reconstruction loss

$\mathcal{L}_{T}(x_{i},\bar{x}_{i})=||\hat{T}(x_{i},\bar{x}_{i})-\bar{x}_{i}||^{2}.$ (9)

#### 3.2.2 Latent model

While the model described above will prove to work sufficiently well for learning the coefficients $\alpha$ of $L^{\alpha}$, the matrix exponential function in $\hat{T}$ in (8) can be costly to compute and difficult to optimise in high dimensions; consider that the cost of the matrix exponential in a single forward pass is roughly $O(n^{3})$ using the algorithm of Al-Mohy & Higham (2010). As a result, a different version of the model is proposed that incorporates an autoencoder for reducing dimension. The concept remains the same, but $x_{i}$ is now mapped to some latent space $Z\subset\mathbb{R}^{n_{Z}}$ with $n_{Z}\ll n$, such that the exponential is taken in a significantly lower dimension. This is done by an encoder $h_{\psi}:X\to Z$ and a decoder $d_{\psi}:Z\to X$ such that $z_{i}=h_{\psi}(x_{i})$ and $x_{i}\approx d_{\psi}(z_{i})$. We learn:

1. the parameters $\psi$ of an MLP autoencoder,

2. the coefficients $\tilde{\alpha}\in\mathbb{R}^{2\times 3}$ of the generator $L^{\tilde{\alpha}}$ for a one-parameter subgroup $\phi_{Z}$ acting on the latent space $Z$, and

3. the parameters $\theta$ of an MLP $f_{\theta}$ that returns $f_{\theta}(x_{i},\bar{x}_{i})=:\hat{t}_{i}$ as a function of every original input pair $(x_{i},\bar{x}_{i})$,

such that the solution to (3) for $L^{\alpha}$, the generator in the original space, is approximated by

$\hat{T}^{Z}(x_{i},\bar{x}_{i})=d_{\psi}(e^{f_{\theta}(x_{i},\bar{x}_{i})L^{\tilde{\alpha}}}h_{\psi}(x_{i})).$ (10)

It is important to note that enforcing good reconstruction of the autoencoder alone does not enforce the commutativity of the diagram in Figure 3. To make it commutative, we use an objective that is a weighted sum of multiple terms: a simple reconstruction term for the autoencoder on each input example,

$\mathcal{L}_{R}(x_{i}):=||d_{\psi}(h_{\psi}(x_{i}))-x_{i}||^{2},$ (11)

a transformation-reconstruction term in the original space,

$\mathcal{L}^{X}_{T}(x_{i},\bar{x}_{i}):=||\hat{T}^{Z}(x_{i},\bar{x}_{i})-\bar{x}_{i}||^{2},$ (12)

a transformation-reconstruction term in the latent space,

$\mathcal{L}^{Z}_{T}(x_{i},\bar{x}_{i}):=||e^{f_{\theta}(x_{i},\bar{x}_{i})L^{\tilde{\alpha}}}h_{\psi}(x_{i})-h_{\psi}(\bar{x}_{i})||^{2},$ (13)

and a penalty term on the generator coefficients $\tilde{\alpha}$. The overall loss of the latent model is

$\displaystyle\begin{split}\mathcal{L}(x_{i},\bar{x}_{i})&=\lambda_{R}(\mathcal{L}_{R}(x_{i})+\mathcal{L}_{R}(\bar{x}_{i}))\\ &+\,\lambda_{X}\mathcal{L}^{X}_{T}(x_{i},\bar{x}_{i})+\lambda_{Z}\mathcal{L}^{Z}_{T}(x_{i},\bar{x}_{i})\\ &+\lambda_{L}||\mathbf{\tilde{\alpha}}||^{2},\end{split}$ (14)

where $\lambda_{R},\lambda_{X},\lambda_{Z},\lambda_{L}\in\mathbb{R}$ are treated as hyperparameters.

Figure 3: The commuting diagram enforced by the objective function in the latent model: $T(x,t)\approx d_{\psi}(e^{f_{\theta}(x_{i},\bar{x}_{i})L^{\tilde{\alpha}}}h_{\psi}(x))$.
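A possible PyTorch sketch of the combined objective in Eqs. (11)-(14) is given below; the module names (`enc`, `dec`, `f_theta`) and the default weights are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def latent_model_loss(x, x_bar, enc, dec, f_theta, L_alpha,
                      lam_R=1.0, lam_X=1.0, lam_Z=1.0, lam_L=1e-3):
    """Weighted objective of Eq. (14). enc/dec/f_theta are MLPs; L_alpha is
    the (n_Z x n_Z) latent generator built from alpha-tilde (hypothetical
    names). x, x_bar have shape (B, n)."""
    z, z_bar = enc(x), enc(x_bar)
    t_hat = f_theta(torch.cat([x, x_bar], dim=-1))               # (B, 1)
    expo = torch.linalg.matrix_exp(t_hat[..., None] * L_alpha)   # (B, nZ, nZ)
    z_pred = torch.einsum('bij,bj->bi', expo, z)                 # exp(t L) z
    loss_R = F.mse_loss(dec(z), x) + F.mse_loss(dec(z_bar), x_bar)  # Eq. (11)
    loss_X = F.mse_loss(dec(z_pred), x_bar)                         # Eq. (12)
    loss_Z = F.mse_loss(z_pred, z_bar)                              # Eq. (13)
    return (lam_R * loss_R + lam_X * loss_X + lam_Z * loss_Z
            + lam_L * L_alpha.pow(2).sum())    # ||alpha-tilde||^2 penalty
```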
##### Recovering the group

It is important to note that the one-parameter subgroups corresponding to the generators $L^{\alpha}$ and $L^{\tilde{\alpha}}$ are _not_ necessarily the same: $L^{\alpha}$ is the generator corresponding to some action on $X$ of a one-parameter subgroup $\phi$, while $L^{\tilde{\alpha}}$ is a different generator corresponding to some action on $Z$ of a different one-parameter subgroup $\phi_{Z}$.

### 3.3 Uniqueness

For both the naive model in Section 3.2.1 and the latent model in Section 3.2.2, the approximations $\hat{t}_{i}$ of the parameters $t_{i}$ require interpretation. Both models parameterise $\hat{T}$ or $\hat{T}^{Z}$ with the products $\hat{t}_{i}L^{\alpha}$ or $\hat{t}_{i}L^{\tilde{\alpha}}$ respectively, where $\hat{t}_{i}=f_{\theta}(x_{i},\bar{x}_{i})$. While the values of $\hat{t}_{i}L^{\alpha}$ and $\hat{t}_{i}L^{\tilde{\alpha}}$ are unique for a given action on $X$ and $Z$ respectively, their decomposition is only unique up to a constant. Therefore, $L^{\alpha}$ or $L^{\tilde{\alpha}}$ and $\hat{t}$ approximate the generators and the parameter, respectively, only up to a constant. Consequently, the one-parameter subgroup $\phi$ can only be deduced from the values of the individual coefficients in $\alpha$ _relative to one another_, as opposed to in absolute terms; likewise for $\phi_{Z}$ and $\tilde{\alpha}$. We therefore recover a scaled approximation for the distribution of $\hat{t}_{i}$.
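This scale ambiguity is easy to verify numerically: the action depends only on the product $\hat{t}_{i}L$, so rescaling the generator by any constant $c$ while dividing the parameter by $c$ leaves the transformation unchanged (a tiny NumPy check of our own):

```python
import numpy as np
from scipy.linalg import expm

L = np.array([[0.0, -1.0], [1.0, 0.0]])   # some generator
t, c = 0.8, 3.7                            # any nonzero rescaling constant

# exp(tL) depends only on t * L, so the pairs (t, L) and (t / c, c * L)
# are indistinguishable from the action alone:
assert np.allclose(expm(t * L), expm((t / c) * (c * L)))
```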
### 3.4 The most general setting

Suppose we are given a labelled dataset $\mathcal{D}=\left\{(x_{i},c_{i})\right\}_{i=1}^{N}$ and a one-parameter subgroup $\phi$. Then we call $\mathcal{D}$ symmetric or invariant with respect to $\phi$ if the action of $\phi$ preserves the object identity of the data points, where by object identity we mean any property of the data that we might be interested in. For example, in the case of MNIST handwritten digits, rigid transformations preserve their labels (with the exception of the digit '9', which becomes a '6' when rotated 180 degrees) and can therefore be considered symmetries of the dataset. Now suppose that every $x_{i}$ in $\mathcal{D}$ is acted on with a one-parameter subgroup $\phi$ to get $T\mathcal{D}=\left\{(T(x_{i},t_{i}),c_{i})\right\}_{i=1}^{N}$. The most general, fully unsupervised symmetry detection setting consists of learning $\phi$ and characterising the distribution of the parameter $t$ from just $T\mathcal{D}$. The idea is that, under the assumption that points with the same label are sufficiently similar for the subgroup transformation to account for the important difference (keeping MNIST hand-written digits as our paradigmatic example, digits with the same label differ by small transformations that account for handwriting-style differences), we can use labels to group data points, and compare those data points using methods such as the one presented in this paper. We leave the fully unsupervised symmetry detection setting for future work, although we emphasize that the proposed method can, in principle, be used in such a setting without substantial changes to the architecture.

## 4 Experiments

### 4.1 Experiment setting

In practice, we experiment with a dataset of MNIST digits transformed with either 2D rotations or translations in one direction. To test the method's ability to learn distributions of these transformations, for each one-parameter subgroup $\phi\in\{SO(2),T(2)\}$ we construct a dataset $\left\{x_{i},T(x_{i},t_{i})\right\}_{i=1}^{N}$ by sampling the parameters $t_{i}\in\mathbb{R}$ from various _multimodal_ distributions. As in (Rao & Ruderman, 1998), the dataset is composed of signals $I:M\longrightarrow\mathbb{R}$ regularly-sampled on a discrete grid of $n^{2}$ points $(x,y)\in\mathbb{R}^{2}$ for $n=28$. The signals $I$ are vectorised into points in $\mathbb{R}^{784}$ as described in Section 3.2. The implementation of the naive model is available here.

### 4.2 Naive model experiments

The naive model architecture outlined in 3.2.1 consists of a fully-connected, 3-layer MLP for $f_{\theta}$ that was trained jointly with the coefficients $\alpha_{ij}$ using Adam (Kingma & Ba, 2014) with a learning rate of 0.001. Given the disproportionate number of trainable parameters in $f_{\theta}$ relative to the 6 coefficients in $\alpha$, updating $\alpha_{ij}$ roughly 10 times for every update of $\theta$ in $f_{\theta}$ was found to be beneficial during training.

##### Coefficients

Figure 4 shows the evolution of $\alpha_{ij}$ during training. It can be seen that after a few hundred steps, the coefficients $\alpha_{ij}$ that do not correspond to the infinitesimal generator of the symmetry expressed by the dataset drop to zero, while those that do settle to values compatible with those of the ground-truth generator $L$.

Figure 4: Training evolution of the coefficients $\alpha$ defining the generator $L^{\alpha}$ of the one-parameter subgroup, shown to converge to the ground-truth non-zero coefficients for (a) rotated ($-\alpha_{22}=\alpha_{13}=1$ and $0$ otherwise) and (b) translated (in $x$; $\alpha_{11}=1$ and $0$ otherwise) MNIST.

### 4.3 Latent model experiments

The latent model outlined in 3.2.2 consists of a fully-connected, 3-layer MLP $f_{\theta}$, as in (8), to approximate $\hat{t}$, and two fully-connected, 3-layer MLPs with decreasing/increasing hidden dimensions for the encoder $h_{\psi}$ and the decoder $d_{\psi}$. We set the latent dimension to $n_{Z}=25$. As in the naive model experiment above, $f_{\theta}$ was trained jointly with the coefficients $\tilde{\alpha}_{ij}$ using Adam (Kingma & Ba, 2014) with a learning rate of 0.001.

##### Parameters

After every epoch (roughly 500 steps), the outputs of $\hat{t}=f_{\theta}$ were collected in a histogram to show $p(\hat{t})$. Figure 5 shows how the distribution of $\hat{t}$ changes during training and how multimodal distributions are clearly recovered, showing the same number of modes as the ground-truth distribution from which the transformations were sampled.

Figure 5: Training evolution of the distributions $p(\hat{t})$ of the learned parameters $\hat{t}$ computed by $f_{\theta}$ for the validation set, for (a) unimodal, (b) bimodal, (c) 3-mode and (d) 5-mode ground-truth distributions. The figure shows that $p(\hat{t})$ resembles the original multi-modal distributions $p(t)$ of the transformations expressed by the dataset.
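The alternating update schedule described in Section 4.2 can be sketched as follows; this is a toy, self-contained PyTorch version of our own with stand-in basis operators, not the released training code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 8                                      # flattened signal dimension (toy)
alpha = nn.Parameter(torch.zeros(6))       # the 6 generator coefficients
f_theta = nn.Sequential(nn.Linear(2 * n, 64), nn.ReLU(), nn.Linear(64, 1))
L_basis = torch.randn(6, n, n)             # stand-in basis operators for L^alpha

def loss_fn(x, x_bar):
    t_hat = f_theta(torch.cat([x, x_bar], dim=-1))      # (B, 1)
    L = torch.einsum('k,kij->ij', alpha, L_basis)       # L^alpha
    pred = torch.einsum('bij,bj->bi',
                        torch.linalg.matrix_exp(t_hat[..., None] * L), x)
    return ((pred - x_bar) ** 2).mean()                 # Eq. (9)

opt_alpha = torch.optim.Adam([alpha], lr=1e-3)
opt_theta = torch.optim.Adam(f_theta.parameters(), lr=1e-3)
x, x_bar = torch.randn(4, n), torch.randn(4, n)         # dummy batch
for _ in range(10):   # ~10 updates of the few alpha coefficients ...
    opt_alpha.zero_grad(); loss_fn(x, x_bar).backward(); opt_alpha.step()
# ... for every single update of the much larger MLP f_theta
opt_theta.zero_grad(); loss_fn(x, x_bar).backward(); opt_theta.step()
```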
## 5 Related Work

##### Symmetries in Neural Networks

Numerous studies have tackled the challenges associated with designing neural network layers and/or models that are equivariant with respect to specific transformations (Finzi et al., 2021). These transformations include continuous symmetries such as scaling (Worrall & Welling, 2019), rotation on spheres (Cohen et al., 2018), local gauge transformations (Cohen et al., 2019) and general E(2) transformations on the Euclidean plane (Weiler & Cesa, 2019), as well as discrete transformations like permutations of sets (Zaheer et al., 2017) and reversing symmetries (Valperga et al., 2022). Another line of research focuses on establishing theoretical principles and practical techniques for constructing general group-equivariant neural networks. Research in these areas shows improved performance on tasks related to symmetries, but nonetheless requires prior knowledge about the symmetries themselves.

##### Symmetry Detection

Symmetry detection aims to discover symmetries from observations, a learning task that is of great importance in and of itself. Detecting symmetries in data not only lends itself to more efficient and effective machine learning models but also to discovering fundamental laws that govern data, a long-standing area of interest in the physical sciences. Learned symmetries can then be incorporated after training in equivariant models or used for data augmentation in downstream tasks. In physics and dynamical systems, the task of understanding and discovering symmetries is a crucial one; in classical mechanics and, more generally, Hamiltonian dynamics, continuous symmetries of the Hamiltonian are of great significance since they are associated, through Noether's theorem (Noether, 1918), with conservation laws such as conservation of angular momentum or conservation of charge. The first works on learning symmetries of one-parameter subgroups from observations were Rao & Ruderman (1998) and Miao & Rao (2007), which outlined MAP-inference methods for learning infinitesimally small transformations. Sohl-Dickstein et al. (2010) propose a transformation-specific smoothing operation of the transformation space to overcome the issue of a highly non-convex reconstruction objective that includes an exponential map. These methods are close to ours in that we also make use of the exponential map to obtain group elements from their Lie algebra. Despite this, Sohl-Dickstein et al. (2010) do not consider the task of characterizing the distribution of the parameter of the subgroup, nor do they consider the whole of pixel-space, using small patches instead. Cohen & Welling (2014) focus on disentangling and learning the distributions of multiple compact "toroidal" one-parameter subgroups in the data.

##### Neural Symmetry Detection

A completely different approach to symmetry discovery is that of Sanborn et al. (2022), whose model uses a group-invariant function known as the bispectrum to learn group-equivariant and group-invariant maps from observations. Benton et al. (2020) consider a task similar to ours, attempting to learn groups with respect to which the data is invariant; however, their objective places constraints directly on the network parameters as well as on the distribution of transformation parameters with which the data is augmented. Alternatively, Dehmamy et al. (2021) require knowledge of the specific transformation parameter for each input pair (differing by that transformation), unlike our model, where no knowledge of the one-parameter group is used in order to find the distribution of the transformation parameter.
##### Latent Transformations

Learning transformations of a one-parameter subgroup in latent space (whether that subgroup is identical to the one in pixel space or not) has been accomplished by Keurti et al. (2023) and Zhu et al. (2021). Nevertheless, other works either presuppose local structure in the data by using CNNs instead of fully-connected networks, or focus on disentangling interpretable features instead of directly learning generators that can be used as an inductive bias for a new model. In contrast to the other works mentioned above, we propose a promising framework in which we can simultaneously

* • perform symmetry detection in pixel-space, without assuming any inductive biases are present in the data a priori,

* • parametrize the generator such that non-compact groups (e.g. translation) can be naturally incorporated,

* • and learn both the generator and the parameter distributions.

## 6 Discussion

In this work we proposed a framework for learning one-parameter subgroups of Lie group symmetries from observations. Our method uses a neural network to predict the parameter of every transformation that has been applied to the data points, together with the coefficients of a linear combination of pre-specified generators. We show that our method can learn the correct generators for a variety of transformations as well as characterize the distribution of the parameter that has been used for transforming the dataset. While the goal of learning both the coefficients of the generator and the distribution of the transformation parameter has not been accomplished by a single model in this work, modifying our existing framework to do so is a priority for future work. In addition, the proposed method lends itself well to being composed to form multiple layers, which can then be applied to datasets that express multiple symmetries. By doing so, ideally, each layer would learn one individual symmetry. We leave this study, and the more general, fully unsupervised setting described in 3.4, for future work.

## Acknowledgements

This publication is based on work partially supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1) and the Dorris Chen Award granted by the Department of Mathematics, Imperial College London.

## References

* Al-Mohy & Higham (2010) Al-Mohy, A. H. and Higham, N. J. A new scaling and squaring algorithm for the matrix exponential. _SIAM Journal on Matrix Analysis and Applications_, 31(3):970–989, 2010.
* Benton et al. (2020) Benton, G., Finzi, M., Izmailov, P., and Wilson, A. G. Learning invariances in neural networks from training data. _Advances in Neural Information Processing Systems_, 33:17605–17616, 2020.
* Cohen & Welling (2014) Cohen, T. and Welling, M. Learning the irreducible representations of commutative Lie groups. In _International Conference on Machine Learning_, pp. 1755–1763. PMLR, 2014.
* Cohen & Welling (2016) Cohen, T. and Welling, M. Group equivariant convolutional networks. In _International Conference on Machine Learning_, pp. 2990–2999. PMLR, 2016.
* Cohen et al. (2019) Cohen, T., Weiler, M., Kicanaoglu, B., and Welling, M. Gauge equivariant convolutional networks and the icosahedral CNN. In _International Conference on Machine Learning_, pp. 1321–1330. PMLR, 2019.
* Cohen et al. (2018) Cohen, T. S., Geiger, M., Köhler, J., and Welling, M. Spherical CNNs. _arXiv preprint arXiv:1801.10130_, 2018.
* Dehmamy et al. (2021) Dehmamy, N., Walters, R., Liu, Y., Wang, D., and Yu, R.
Automatic symmetry discovery with Lie algebra convolutional network. _Advances in Neural Information Processing Systems_, 34:2503–2515, 2021.
* Finzi et al. (2021) Finzi, M., Welling, M., and Wilson, A. G. A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups, 2021.
* Fulton & Harris (1991) Fulton, W. and Harris, J. _Representation Theory: A First Course_. Graduate Texts in Mathematics. Springer New York, 1991. ISBN 9780387974958. URL https://books.google.nl/books?id=6GUH8ARxhp8C.
* Keurti et al. (2023) Keurti, H., Pan, H.-R., Besserve, M., Grewe, B. F., and Schölkopf, B. Homomorphism autoencoder – learning group structured representations from observed transitions, 2023.
* Kingma & Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* Langley (2000) Langley, P. Crafting papers on machine learning. In Langley, P. (ed.), _Proceedings of the 17th International Conference on Machine Learning (ICML 2000)_, pp. 1207–1216, Stanford, CA, 2000. Morgan Kaufmann.
* Marks (2012) Marks, R. J. I. _Introduction to Shannon Sampling and Interpolation Theory_. Springer Science & Business Media, 2012.
* Miao & Rao (2007) Miao, X. and Rao, R. P. Learning the Lie groups of visual invariance. _Neural Computation_, 19(10):2665–2693, 2007.
* Noether (1918) Noether, E. Invariante Variationsprobleme. _Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse_, 1918:235–257, 1918. URL http://eudml.org/doc/59024.
* Oliveri (2010) Oliveri, F. Lie symmetries of differential equations: Classical results and recent contributions. _Symmetry_, 2, 06 2010. doi: 10.3390/sym2020658.
* Olver (1993) Olver, P. _Applications of Lie Groups to Differential Equations_. Graduate Texts in Mathematics. Springer New York, 1993. ISBN 9780387950006. URL https://books.google.nl/books?id=sI2bAxgLMXYC.
* Rao & Ruderman (1998) Rao, R. and Ruderman, D. Learning Lie groups for invariant visual perception. _Advances in Neural Information Processing Systems_, 11, 1998.
* Sanborn et al. (2022) Sanborn, S., Shewmake, C., Olshausen, B., and Hillar, C. Bispectral neural networks. _arXiv preprint arXiv:2209.03416_, 2022.
* Sohl-Dickstein et al. (2010) Sohl-Dickstein, J., Wang, C. M., and Olshausen, B. A. An unsupervised algorithm for learning Lie group transformations. _arXiv preprint arXiv:1001.1027_, 2010.
* Valperga et al. (2022) Valperga, R., Webster, K., Turaev, D., Klein, V., and Lamb, J. Learning reversible symplectic dynamics. In _Learning for Dynamics and Control Conference_, pp. 906–916. PMLR, 2022.
* Weiler & Cesa (2019) Weiler, M. and Cesa, G. General E(2)-equivariant steerable CNNs. _Advances in Neural Information Processing Systems_, 32, 2019.
* Weiler et al. (2018) Weiler, M., Geiger, M., Welling, M., Boomsma, W., and Cohen, T. S. 3D steerable CNNs: Learning rotationally equivariant features in volumetric data. _Advances in Neural Information Processing Systems_, 31, 2018.
* Worrall & Welling (2019) Worrall, D. and Welling, M. Deep scale-spaces: Equivariance over scale. _Advances in Neural Information Processing Systems_, 32, 2019.
* Yarotsky (2022) Yarotsky, D. Universal approximations of invariant maps by neural networks. _Constructive Approximation_, 55(1):407–474, 2022.
* Zaheer et al. (2017) Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R. R., and Smola, A. J. Deep sets. _Advances in Neural Information Processing Systems_, 30, 2017.
* Zhu et al. (2021) Zhu, X., Xu, C., and Tao, D. Commutative lie group vae for disentanglement learning, 2021.
Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong SAR. Email: <EMAIL_ADDRESS>

# DiffRect: Latent Diffusion Label Rectification for Semi-supervised Medical Image Segmentation

Xinyu Liu, Wuyang Li, Yixuan Yuan

###### Abstract

Semi-supervised medical image segmentation aims to leverage limited annotated data and rich unlabeled data to perform accurate segmentation. However, existing semi-supervised methods are highly dependent on the quality of self-generated pseudo labels, which are prone to incorrect supervision and confirmation bias. Meanwhile, they are insufficient in capturing the label distributions in latent space and suffer from limited generalization to unlabeled data. To address these issues, we propose a Latent Diffusion Label Rectification Model (DiffRect) for semi-supervised medical image segmentation. DiffRect first utilizes a Label Context Calibration Module (LCC) to calibrate the biased relationship between classes by learning the category-wise correlation in pseudo labels, then applies a Latent Feature Rectification Module (LFR) on the latent space to formulate and align the pseudo label distributions of different levels via latent diffusion. It utilizes a denoising network to learn the consecutive coarse-to-fine and fine-to-precise distribution transportations. We evaluate DiffRect on three public datasets: ACDC, MS-CMRSEG 2019, and Decathlon Prostate. Experimental results demonstrate the effectiveness of DiffRect, e.g., it achieves 82.40% Dice score on ACDC with only 1% of labeled scans available, outperforms the previous state-of-the-art by 4.60% in Dice, and even rivals fully supervised performance. Code is released at https://github.com/CUHK-AIM-Group/DiffRect.

###### Keywords: Semi-supervised Medical Image Segmentation · Diffusion Models · Label Rectification.

## 1 Introduction

Medical image segmentation is crucial for clinical applications but often requires large amounts of pixel-wise or voxel-wise labeled data, which is tedious and time-consuming to obtain [17, 19, 18, 1, 28]. Such a heavy annotation cost has motivated the community to develop semi-supervised learning methods [10, 20, 14, 38]. Existing semi-supervised image segmentation methods can be generally categorized into self-training and consistency regularization. Self-training methods [1, 31, 4, 7, 34, 15, 40, 23] generate pseudo labels for unlabeled images, then use the pseudo-labeled images in conjunction with labeled images to update the segmentation model iteratively. This paradigm can effectively incorporate unlabeled data by minimizing its entropy. Consistency regularization methods [5, 27, 21, 30, 35, 9, 22, 37, 33] are designed based on the assumption that perturbations should not change the predictions of the model, and have achieved more promising performance recently. Perturbations are applied at the input or the network level, and the models are enforced to achieve an invariance of predictions. Despite the progress, semi-supervised medical image segmentation remains challenging due to the following factors. (1) Reliance Risk: Existing methods typically rely on self-generated pseudo labels to optimize the model [30, 37, 11, 35], which is ill-posed since errors in pseudo labels are preserved during iterative optimization. The overfitting to incorrect supervision can lead to severe confirmation bias [16] and considerable performance degradation.
Besides, they do not fully utilize the category-wise correlation in the pseudo labels, and the label quality is sensitive to the perturbation design and network structure. (2) Distribution Misalignment: Most methods only apply consistency regularization and auxiliary supervision at the output mask level to encourage the model to produce consistent mask predictions between different perturbations [27, 5]. However, these approaches are insufficient in capturing the semantics in the latent space and tend to overlook the underlying label distributions, resulting in limited generalization to unlabeled data.

To address the reliance risk issue, we first propose a Label Context Calibration Module (LCC). Different from methods that directly use the self-generated pseudo labels, LCC calibrates the biased semantic context, i.e., the relationships between different semantic categories, and reduces the errors in the pseudo labels. It starts with a semantic coloring scheme that encodes the one-hot pseudo labels and ground truth masks into the visual space, and subsequently feeds them into a semantic context embedding block to adjust the features of the pseudo labels in the latent space. Notably, LCC introduces explicit calibration guidance by encoding the dice score between the pseudo labels and the ground truth, thereby providing more reliable calibration directions for model optimization. To tackle the distribution misalignment problem, some previous works have proposed to model data distributions with a VAE [41] or GAN [42]. However, their adversarial training scheme can suffer from mode collapse and conflict between the generation and segmentation tasks, resulting in suboptimal performance. Different from them, the denoising diffusion probabilistic model (DDPM) is a new class of generative models trained using variational inference [8, 24, 12, 13], which alleviates the above problem by formulating the complex data distribution with probabilistic models. Therefore, we design a Latent Feature Rectification Module (LFR), which models the consecutive refinement between different latent distributions with a generative latent DDPM [25]. LFR leverages the power of DDPM to learn the latent structure of the semantic labels. Specifically, it first applies Gaussian noise on fine-grained label features with a diffusion schedule, then uses the coarse-grained label features as conditions to recover the clean feature. Through the denoising process, the consecutive transportations from coarse to fine and from fine to precise distributions of the pseudo labels are formulated and aligned, and the pseudo labels are progressively rectified for better supervision. Based on LCC and LFR, we construct a semi-supervised medical image segmentation framework named Latent Diffusion Label Rectification Model (DiffRect). Extensive experimental results show that our method outperforms prior methods by significant margins.

## 2 Methodology

### 2.1 Preliminary: Conditional DDPM

DDPM is a class of latent variable generative models that learns a data distribution by denoising noisy images [8]. The forward process diffuses the data samples with pre-defined noise schedules. Concretely, given clean data $z^{0}$, sampling of $z^{t}$ can be expressed in closed form:

$q(z^{t}\mid z^{0})=\mathcal{N}(z^{t};\sqrt{\overline{\alpha}_{t}}z^{0},(1-\overline{\alpha}_{t})\mathbf{I}),$ (1)

where $\overline{\alpha}_{t}$ is the noise schedule variable [24, 8].
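For concreteness, a minimal PyTorch sketch of the closed-form forward sampling in Eq. (1) is shown below; the linear $\beta$ schedule is an illustrative choice (the paper cites cosine-type schedules [24]):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)            # illustrative linear schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)    # \bar{alpha}_t

def q_sample(z0, t):
    """Draw z_t ~ q(z_t | z_0) in the closed form of Eq. (1)."""
    eta = torch.randn_like(z0)                   # eta ~ N(0, I)
    return alpha_bar[t].sqrt() * z0 + (1.0 - alpha_bar[t]).sqrt() * eta

z0 = torch.randn(1, 256, 16, 16)                 # e.g. an H/16 x W/16 latent
zt = q_sample(z0, t=500)                         # noisy feature at step 500
```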
During the reverse process, we are given an optional condition $\rho$ [6], and each step is expressed as a Gaussian transition with learned mean $\boldsymbol{\mu}_{\epsilon}$ and variance $\sigma_{\epsilon}$ from the denoising model $\epsilon$:

$p\left(z^{t-1}\mid z^{t},\rho\right):=\mathcal{N}\left(z^{t-1};\boldsymbol{\mu}_{\epsilon}\left(z^{t},t,\rho\right),\sigma_{\epsilon}\left(z^{t},t,\rho\right)\mathbf{I}\right).$ (2)

By decomposing the above equation, we have:

$z^{t-1}\leftarrow\frac{1}{\sqrt{\alpha_{t}}}(z^{t}-\frac{1-\alpha_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\epsilon(z^{t},t,\rho))+\sigma_{\epsilon}\eta,$ (3)

where $\eta\sim\mathcal{N}(\mathbf{0},\mathbf{I})$ is a sampled noise that ensures each step is stochastic. In this work, we extend the conditional DDPM to the latent space of pseudo labels, and model the distribution transportations for label rectification.

### 2.2 Label Context Calibration Module (LCC)

Existing semi-supervised training schemes that rely extensively on self-generated pseudo labels are often ill-posed, as errors in low-quality pseudo labels accumulate and degrade performance. To address this issue, we introduce LCC, which effectively captures and calibrates the semantic context within the visual space, thereby mitigating the impact of noisy labels. As in Fig. 1(a), given the one-hot pseudo labels $y_{s},y_{w}\in\mathbb{R}^{H\times W\times C}$ with height $H$ and width $W$ from the segmentation network, we encode them into semantic pseudo labels $m_{s}$ and $m_{w}$ with dimensions of $\mathbb{R}^{H\times W\times 3}$, using a proposed semantic coloring scheme (SCS). Concretely, for a dataset that contains $C$ different classes, we build a color set $M_{C}$ composed of $C$ RGB colors, where each color is represented by a tuple of three values within the range $[0,255]$. We maximize the color difference between the encoded categories to avoid semantic confusion. The scheme can therefore be represented by a functional mapping $f:C\to M_{C}$, defined as:

$m_{(h,w)}=f(y_{(h,w)}),\quad{\forall}h\in[1,2,...,H],w\in[1,2,...,W],$ (4)

where $m$ is the semantic pseudo label in the visual space, and $m_{(h,w)}$ represents the mapped RGB color of the pixel at location $(h,w)$ in $m$. The $y_{(h,w)}$ represents the class of the corresponding pixel in the one-hot mask $y$. The semantic coloring scheme can effectively incorporate color information into the segmentation task, which enables the model to exploit additional cues from the rich semantics of colors, and improves the discrimination ability [32, 3] as well as the interpretability of the model.

Figure 1: Overall framework of DiffRect. (a) Label Context Calibration Module (LCC). (b) Latent Feature Rectification Module (LFR). (c) Segmentation Network.
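A minimal sketch of the semantic coloring scheme in Eq. (4) follows; the palette $M_{C}$ below is an illustrative choice for $C=4$, as the exact color set is not specified here:

```python
import numpy as np

# Illustrative palette M_C for C = 4 classes, chosen for high colour
# contrast (the paper's exact colour set is not given).
M_C = np.array([[0, 0, 0],          # class 0 (background)
                [255, 0, 0],        # class 1
                [0, 255, 0],        # class 2
                [0, 0, 255]],       # class 3
               dtype=np.uint8)

def semantic_coloring(y_onehot):
    """Eq. (4): map a one-hot mask (H, W, C) to an RGB label (H, W, 3)."""
    classes = y_onehot.argmax(axis=-1)   # (H, W) class index per pixel
    return M_C[classes]                  # fancy indexing applies f: C -> M_C

H = W = 8
y = np.eye(4, dtype=np.float32)[np.random.randint(0, 4, size=(H, W))]
m = semantic_coloring(y)                 # semantic pseudo label, shape (8, 8, 3)
```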
To perform context calibration with the semantic labels, we design a semantic context embedding block $\mathbf{B}_{sem}$, which embeds the pseudo labels into the latent features $z_{s},z_{w},z_{l}$ with dimensions of $\mathbb{R}^{\frac{H}{16}\times\frac{W}{16}\times 256}$. Specifically, additional calibration guidance (CG) $\tau^{u}$ for unlabeled data and $\tau^{l}$ for labeled data is also encoded into the block using sinusoidal embeddings [8, 29]:

$\begin{aligned}\{z_{s},z_{w}\}&=\mathbf{B}_{\text{sem}}(m_{s},m_{w}\mid\tau^{u})\quad\text{for unlabeled data},\\ \{z_{w},z_{l}\}&=\mathbf{B}_{\text{sem}}(m_{w},m_{l}\mid\tau^{l})\quad\text{for labeled data},\end{aligned}$ (5)

where the $\tau^{u}$ and $\tau^{l}$ values for unlabeled and labeled data are computed using the dice coefficient between the one-hot segmentation masks of different qualities:

$\tau^{u}=\text{Dice}(y_{s},y_{w}),\quad\tau^{l}=\text{Dice}(y_{w},y_{l}).$ (6)

By using the dice coefficient as the calibration guidance factor, the model can simultaneously measure the quality of pseudo labels and integrate this information into the learning process. This enables the model to better capture the semantic context and refine the pseudo labels for both unlabeled and labeled data.
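The calibration guidance of Eq. (6) reduces to a Dice computation between one-hot masks; a small NumPy sketch of our own, with toy masks, is given below:

```python
import numpy as np

def dice(y_a, y_b, eps=1e-6):
    """Mean per-class Dice between two one-hot masks of shape (H, W, C)."""
    inter = (y_a * y_b).sum(axis=(0, 1))
    sizes = y_a.sum(axis=(0, 1)) + y_b.sum(axis=(0, 1))
    return float(((2.0 * inter + eps) / (sizes + eps)).mean())

H, W, C = 8, 8, 4
rng = np.random.default_rng(0)
y_s = np.eye(C)[rng.integers(0, C, (H, W))]    # strong pseudo label
y_w = np.eye(C)[rng.integers(0, C, (H, W))]    # weak pseudo label
y_l = np.eye(C)[rng.integers(0, C, (H, W))]    # ground truth
tau_u, tau_l = dice(y_s, y_w), dice(y_w, y_l)  # Eq. (6)
```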
### 2.3 Latent Feature Rectification Module (LFR)

To address the distribution misalignment issue between pseudo labels of different quality levels, we propose a Latent Feature Rectification Module (LFR), illustrated in Fig. 1(b). Concretely, LFR applies a latent diffusion process to model the transportation of label quality distributions. For each unlabeled datum $I_{u}$, the strong and weak semantic context embeddings $z_{s}$ and $z_{w}$ are first obtained with LCC. We then construct a diffusion process from $z_{w}$ to the diffused noisy feature $z^{T}_{w}$ with $T$ timestamps as follows:

$\begin{aligned}z^{T}_{w}&=\sqrt{\alpha_{T}}z^{T-1}_{w}+\sqrt{1-\alpha_{T}}\eta^{T-1}\\ &=\cdots=\sqrt{\overline{\alpha}_{T}}z_{w}+\sqrt{1-\overline{\alpha}_{T}}\eta,\end{aligned}$ (7)

where $\alpha_{T}$ and $\overline{\alpha}_{T}$ are the schedule variables in the diffusion forward process (e.g., cosine [24]), and $\overline{\alpha}_{T}=\prod^{T}_{i=1}\alpha_{i}$. The $\eta^{t}$ is the corresponding noise sampled from a Gaussian distribution at the $t$-th step. Then, we train a denoising U-Net $\epsilon$ to learn to reverse this process. Since the individual reverse diffusion process is unconditioned, we add $z_{s}$ as the conditional input and also feed it into the denoising model. Therefore, the model is encouraged to learn the distribution transportation from coarse-grained masks $p(z_{s})$ (strong pseudo labels) to the latent distributions of fine-grained masks $p(z_{w})$ (weak pseudo labels), which we denote as the strong-to-weak transportation (S2W). The reverse diffusion is formulated as the following Markov chain:

$\begin{aligned}&p_{\epsilon}\left(z_{w}^{0:T}\right):=p\left(z_{w}^{T}\right)\prod_{t=1}^{T}p_{\epsilon}\left(z_{w}^{t-1}\mid z_{w}^{t},z_{s}\right),\quad z_{w}^{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\\ &p_{\epsilon}\left(z_{w}^{t-1}\mid z_{w}^{t},z_{s}\right):=\mathcal{N}\left(z_{w}^{t-1};\boldsymbol{\mu}_{\epsilon}\left(z_{w}^{t},t,z_{s}\right),\sigma_{\epsilon}\left(z_{w}^{t},t,z_{s}\right)\mathbf{I}\right),\end{aligned}$ (8)

where $\boldsymbol{\mu}$ and $\sigma$ are the predicted data mean and variance from the denoising U-Net model. For training with unlabeled input, the latent loss for optimization is:

$\mathcal{L_{\text{Lat-U}}}=E_{z_{w},t}\left[\left\|z_{w}-r_{w}\right\|_{2}\right],$ (9)

where $r_{w}=\epsilon\left(z_{w}^{T},z_{s},t\right)$ is the reconstructed version of the weak semantic context embedding $z_{w}$. The objective minimizes the $\ell_{2}$ distance between the clean and denoised features and encourages the model to learn the distribution transportation from a coarse pseudo label to a fine pseudo label. Similarly, we can obtain the weak semantic context embedding $z_{w}$ of labeled data and the embedding $z_{l}$ of the ground truth. We then learn the reverse process that recovers $z_{l}$ from the $T$-timestamp diffused noisy feature $z_{l}^{T}$, with $z_{w}$ as condition:

$\begin{aligned}&p_{\epsilon}\left(z_{l}^{0:T}\right):=p\left(z_{l}^{T}\right)\prod_{t=1}^{T}p_{\epsilon}\left(z_{l}^{t-1}\mid z_{l}^{t},z_{w}\right),\quad z_{l}^{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\\ &p_{\epsilon}\left(z_{l}^{t-1}\mid z_{l}^{t},z_{w}\right):=\mathcal{N}\left(z_{l}^{t-1};\boldsymbol{\mu}_{\epsilon}\left(z_{l}^{t},t,z_{w}\right),\sigma_{\epsilon}\left(z_{l}^{t},t,z_{w}\right)\mathbf{I}\right),\end{aligned}$ (10)

and the training objective for the reconstructed feature $r_{l}=\epsilon\left(z_{l}^{T},z_{w},t\right)$ is:

$\mathcal{L_{\text{Lat-L}}}=E_{z_{l},t}\left[\left\|z_{l}-r_{l}\right\|_{2}\right].$ (11)

With the above latent diffusion process, the continual distribution transportation from the fine-grained mask distribution $p(z_{w})$ (weak pseudo labels) to the precise mask distribution $p(z_{l})$ (ground truth) is also formulated in the latent space, which is denoted as the weak-to-ground-truth transportation (W2G). The denoising U-Net is hence capable of achieving latent feature rectification. Afterwards, the weak pseudo labels of unlabeled data are fed into the denoising U-Net to obtain the rectified features with progressive denoising. Specifically, we randomly sample a Gaussian noise $r_{l}^{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$ as the input of the denoising U-Net, which simulates the $T$-timestamp noisy feature of the rectified pseudo label $y_{r}$. The rectified feature $r_{l}$ is generated via a progressive reverse diffusion process, with the weak pseudo label features $z_{w}$ as condition. Mathematically, a single denoising step from $t$ to $t-1$ is formulated as:

$r_{l}^{t-1}\leftarrow\frac{1}{\sqrt{\alpha_{t}}}(r_{l}^{t}-\frac{1-\alpha_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\epsilon(r_{l}^{t},t,z_{w}))+\sigma_{\epsilon}\eta,$ (12)

where $\eta\sim\mathcal{N}(\mathbf{0},\mathbf{I})$ ensures each step is stochastic, as in DDPM [8]. The rectified label is obtained by upsampling the feature $r_{l}$ to the input resolution, $y_{r}=Upsample(r_{l})$, and is utilized as a better and more precise supervision signal for the segmentation model.
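A compact PyTorch sketch of the progressive rectification in Eq. (12) follows; `eps_net` stands in for the denoising U-Net $\epsilon$, and the fixed variance $\sigma=\sqrt{1-\alpha_{t}}$ is a simplifying assumption of ours (the paper predicts $\sigma_{\epsilon}$):

```python
import torch

@torch.no_grad()
def rectify(z_w, eps_net, alphas):
    """Progressive reverse diffusion of Eq. (12): start from Gaussian noise
    and denoise conditioned on the weak pseudo-label feature z_w.
    eps_net(r, t, cond) is a placeholder for the denoising U-Net."""
    T = len(alphas)
    alpha_bar = torch.cumprod(alphas, dim=0)
    r = torch.randn_like(z_w)                    # r_l^T ~ N(0, I)
    for t in reversed(range(T)):
        coef = (1.0 - alphas[t]) / (1.0 - alpha_bar[t]).sqrt()
        mean = (r - coef * eps_net(r, t, z_w)) / alphas[t].sqrt()
        sigma = (1.0 - alphas[t]).sqrt()         # simplifying fixed variance
        r = mean + sigma * torch.randn_like(r) if t > 0 else mean
    return r                                     # rectified feature r_l
```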
### 2.4 Loss Function

The training of the DiffRect framework includes two parts: (1) the optimization of the segmentation U-Net $\theta$ (Seg Loss) and (2) the joint optimization of the rectification components $\mathbf{B}_{\text{sem}}$ and $\epsilon$ (Diff Loss). The overall loss is:

$\mathcal{L}_{\text{DiffRect}}=\underbrace{\mathcal{L}^{\text{Seg}}_{\text{Semi}}+\mathcal{L}_{\text{Rect}}}_{\text{Seg Loss}}+\underbrace{\mathcal{L}^{\text{Lat}}_{\text{Semi}}+\lambda_{1}\mathcal{L}_{\text{Lat-U}}+\lambda_{2}\mathcal{L}_{\text{Lat-L}}}_{\text{Diff Loss}},$ (13)

where $\mathcal{L}^{\text{Seg}}_{\text{Semi}}$ and $\mathcal{L}^{\text{Lat}}_{\text{Semi}}$ are the semi-supervised segmentation losses as in [27]. The $\lambda_{1}$ and $\lambda_{2}$ are trade-off factors balancing the contribution of each term. $\mathcal{L}_{\text{Rect}}$ is the rectified supervision loss between $y_{w}$ and the rectified pseudo label $y_{r}$, where the sum of cross-entropy and Dice losses is used:

$\mathcal{L}_{\text{Rect}}=\text{CE}(y_{w},y_{r})+\text{Dice}(y_{w},y_{r}).$ (14)

During inference, the input is fed directly into the segmentation network in Fig. 1(c) to produce the segmentation result, so no extra inference cost is incurred.

Table 1: Segmentation results on the ACDC validation and test sets.

| Method | Labeled Ratio | Val Dice$\uparrow$ | Val Jac$\uparrow$ | Val HD95$\downarrow$ | Val ASD$\downarrow$ | Test Dice$\uparrow$ | Test Jac$\uparrow$ | Test HD95$\downarrow$ | Test ASD$\downarrow$ |
|---|---|---|---|---|---|---|---|---|---|
| UAMT [39] | 1% | 42.28 | 32.21 | 40.74 | 18.58 | 43.86 | 33.36 | 38.60 | 18.33 |
| FixMatch [27] | 1% | 69.67 | 58.34 | 37.92 | 14.41 | 60.80 | 49.14 | 36.81 | 14.75 |
| CPS [5] | 1% | 56.70 | 44.31 | 24.97 | 10.48 | 52.28 | 41.68 | 20.38 | 7.35 |
| ICT [30] | 1% | 43.03 | 30.58 | 34.92 | 15.23 | 42.91 | 32.81 | 25.42 | 10.80 |
| MCNetV2 [35] | 1% | 57.49 | 43.29 | 31.31 | 10.97 | 49.92 | 39.16 | 24.64 | 8.47 |
| INCL [43] | 1% | 77.80 | 66.13 | 11.69 | 3.22 | 67.01 | 56.22 | 13.43 | 3.35 |
| DiffRect (Ours) | 1% | 82.40 | 71.96 | 10.04 | 2.90 | 71.85 | 61.53 | 5.79 | 2.12 |
| UAMT [39] | 5% | 72.71 | 60.89 | 21.48 | 7.15 | 69.93 | 58.45 | 17.01 | 5.25 |
| FixMatch [27] | 5% | 83.12 | 73.59 | 9.86 | 2.61 | 74.68 | 64.12 | 11.18 | 2.93 |
| CPS [5] | 5% | 75.24 | 64.67 | 10.93 | 2.98 | 74.67 | 63.51 | 9.37 | 2.55 |
| ICT [30] | 5% | 74.20 | 62.90 | 17.01 | 4.32 | 73.10 | 60.69 | 11.92 | 3.70 |
| MCNetV2 [35] | 5% | 78.96 | 68.15 | 12.13 | 3.91 | 75.86 | 65.20 | 9.85 | 2.88 |
| INCL [43] | 5% | 85.43 | 75.76 | 6.37 | 1.37 | 80.64 | 70.78 | 5.29 | 1.42 |
| DiffRect (Ours) | 5% | 86.95 | 78.08 | 4.07 | 1.23 | 82.46 | 71.76 | 7.18 | 1.94 |
| UAMT [39] | 10% | 85.14 | 75.90 | 6.25 | 1.80 | 86.23 | 76.72 | 9.40 | 2.56 |
| FixMatch [27] | 10% | 88.31 | 79.97 | 7.35 | 1.79 | 87.96 | 79.37 | 5.43 | 1.59 |
| CPS [5] | 10% | 84.63 | 75.20 | 7.57 | 2.27 | 85.61 | 75.76 | 9.29 | 3.00 |
| ICT [30] | 10% | 85.15 | 76.05 | 4.27 | 1.46 | 86.77 | 77.43 | 8.01 | 2.16 |
| MCNetV2 [35] | 10% | 85.97 | 77.21 | 7.55 | 2.11 | 88.75 | 80.28 | 6.16 | 1.64 |
| INCL [43] | 10% | 88.28 | 80.09 | 1.67 | 0.49 | 88.68 | 80.27 | 4.34 | 1.13 |
| DiffRect (Ours) | 10% | 90.18 | 82.72 | 1.38 | 0.48 | 89.27 | 81.13 | 3.85 | 1.00 |
| Supervised [26] | 100% | 91.48 | 84.87 | 1.12 | 0.34 | 91.65 | 84.95 | 1.14 | 0.50 |

Table 2: Segmentation results on MS-CMRSEG 2019 with 20% data labeled.

| Method | Dice$\uparrow$ | Jac$\uparrow$ | HD95$\downarrow$ | ASD$\downarrow$ |
|---|---|---|---|---|
| UAMT [39] | 84.27 | 73.69 | 12.15 | 4.18 |
| FixMatch [27] | 84.31 | 73.57 | 17.79 | 4.81 |
| CPS [5] | 83.66 | 73.03 | 15.01 | 4.30 |
| ICT [30] | 83.66 | 73.06 | 17.24 | 4.85 |
| MCNetV2 [35] | 83.93 | 73.45 | 13.10 | 3.39 |
| INCL [43] | 84.33 | 73.92 | 9.95 | 2.61 |
| DiffRect | 86.78 | 77.13 | 6.39 | 1.85 |
| Supervised [26] | 88.19 | 79.28 | 4.21 | 1.32 |

Table 3: Segmentation results on Decathlon Prostate with 10% data labeled.
| Method | Dice$\uparrow$ | Jac$\uparrow$ | HD95$\downarrow$ | ASD$\downarrow$ |
|---|---|---|---|---|
| UAMT [39] | 40.91 | 29.13 | 28.32 | 10.45 |
| FixMatch [27] | 54.70 | 41.07 | 16.82 | 5.24 |
| CPS [5] | 43.51 | 31.18 | 26.93 | 8.31 |
| ICT [30] | 39.91 | 28.95 | 24.73 | 7.59 |
| MCNetV2 [35] | 40.58 | 28.77 | 21.29 | 7.11 |
| INCL [43] | 55.67 | 41.91 | 31.09 | 15.78 |
| DiffRect | 62.23 | 48.64 | 10.36 | 3.41 |
| Supervised [26] | 73.81 | 61.25 | 7.28 | 1.94 |

## 3 Experiments

### 3.1 Experimental Setup

All methods are trained with identical settings for fair comparison, on an NVIDIA 4090 GPU for 30k iterations. For $\mathbf{B}_{\text{sem}}$, which downsamples the input to $\frac{H}{16}\times\frac{W}{16}$, we use two $3\times 3$ convolution layers followed by BN and LeakyReLU before the 2$\times$ downsampling in each stage, repeated for four stages. The denoising U-Net $\epsilon$ downsamples and upsamples the input by 4$\times$, and also uses two $3\times 3$ convolution layers per stage. The multi-scale image feature is embedded into the model via concatenation as in [36]. For the weak perturbation, we apply random flipping and rotation. For the strong perturbation, we apply random Gaussian blur and additional random image adjustments, including contrast, sharpness, and brightness enhancement. For ACDC, we test the 1%, 5%, and 10% labeling regimes following [20]. For MS-CMRSEG 2019, the 20% labeling regime is tested, while 10% labeled data is used for Decathlon Prostate.

### 3.2 Comparison with State-of-the-art Methods

We validate the effectiveness of the proposed approach on the ACDC dataset [2] in Tab. 1. Our method shows superior results under all labeling regimes. Compared with MCNetV2 [35], our method achieves increments of 24.91%, 7.99%, and 4.21% in Dice, and 28.67%, 9.93%, and 5.51% in Jaccard on the validation set with 1%, 5%, and 10% of scans available, respectively. DiffRect delivers strong segmentation performance even when labeled samples are extremely scarce (e.g., 82.40% Dice with 1% of scans available), suggesting it can model the transportation of the pseudo label distributions precisely and produce refined masks. Results on MS-CMRSEG 2019 are shown in Tab. 2. DiffRect shows consistent performance gains on all metrics, with 86.78% Dice, 77.13% Jaccard, 6.39mm HD95, and 1.85mm ASD, outperforming the state-of-the-art method INCL [43] by 2.45% in Dice, 3.21% in Jaccard, 3.56mm in HD95, and 0.76mm in ASD, respectively. On Decathlon Prostate in Tab. 3, DiffRect again shows compelling results, demonstrating its capability across modalities.

Table 4: Ablation study of the proposed modules.

| Method | w/o | Dice$\uparrow$ | Jac$\uparrow$ | HD95$\downarrow$ | ASD$\downarrow$ |
|---|---|---|---|---|---|
| Baseline | - | 69.67 | 58.34 | 37.92 | 14.41 |
| +LCC | SCS | 73.83 | 61.83 | 29.49 | 11.71 |
| +LCC | CG | 76.12 | 64.69 | 26.24 | 8.31 |
| +LCC | - | 78.28 | 66.97 | 20.46 | 5.60 |
| +LCC & LFR | S2W | 79.97 | 69.31 | 14.07 | 4.91 |
| +LCC & LFR | W2G | 78.57 | 66.38 | 21.07 | 5.91 |
| +LCC & LFR | - | 82.40 | 71.96 | 10.04 | 2.90 |

Table 5: Ablation study of different calibration guidance choices in LCC.

| Choice | Dice$\uparrow$ | Jac$\uparrow$ | HD95$\downarrow$ | ASD$\downarrow$ |
|---|---|---|---|---|
| Dice | 82.40 | 71.96 | 10.04 | 2.90 |
| Jaccard | 82.37 | 71.82 | 11.33 | 2.87 |
| Fixed | 80.34 | 69.67 | 14.97 | 4.47 |
| Random | 80.60 | 69.99 | 13.15 | 3.75 |
| Both | 81.67 | 71.45 | 10.28 | 2.47 |

### 3.3 Further Analysis

Ablation study of the proposed modules. We evaluate the effect of individual modules in DiffRect in Tab. 4.
Adopting LCC achieves 78.28% Dice and 66.97% Jaccard, gains of 8.61% and 8.63% over the FixMatch baseline [27]. Removing the semantic coloring scheme (SCS) causes a large performance drop (to 73.83% Dice and 61.83% Jaccard), showing the importance of exploiting the semantics in the visual domain. Removing the calibration guidance (CG) causes a 2.16% Dice drop due to the impact of noisy calibration directions. Adding LFR further improves Dice by 4.12% and HD95 by 10.42mm. Removing the strong-to-weak transportation (S2W) leads to a 2.43% Dice drop, while removing the weak-to-ground-truth transportation (W2G) causes a severe Dice drop to 78.57%. The results demonstrate the necessity of each sub-component.

Different Calibration Guidance Choices. To analyze the effectiveness and the optimal choice of calibration guidance, we compare models trained with different calibration guidance in Tab. 5, including Dice score, Jaccard score, Fixed (a fixed value of 0.5), Random (a value sampled uniformly in [0, 1]), and Both (the sum of Dice and Jaccard). Dice, Jaccard, and Both perform similarly and outperform the Fixed and Random strategies, validating that quality-aware guidance provides reliable directions for optimization.

## 4 Conclusion

In this paper, we identify the reliance risk and distribution misalignment issues in semi-supervised medical image segmentation, and propose DiffRect, a diffusion-based framework for this task. It comprises two modules: LCC calibrates the biased relationships between classes in pseudo labels by learning category-wise correlation, and LFR accurately models the consecutive transportations from coarse to fine and from fine to precise pseudo label distributions with latent diffusion. Extensive experiments on three datasets demonstrate that DiffRect outperforms existing methods by remarkable margins.

#### 4.0.1 Acknowledgements

This work was supported by Hong Kong Research Grants Council (RGC) General Research Fund 14204321.

## References

* [1] Bai, W., Oktay, O., Sinclair, M., Suzuki, H., Rajchl, M., Tarroni, G., Glocker, B., King, A., Matthews, P.M., Rueckert, D.: Semi-supervised learning for network-based cardiac mr image segmentation. In: MICCAI. pp. 253–260. Springer (2017) * [2] Bernard, O., Lalande, A., Zotti, C., Cervenansky, F., Yang, X., Heng, P.A., Cetin, I., Lekadir, K., Camara, O., Ballester, M.A.G., et al.: Deep learning techniques for automatic mri cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Trans. Med. Imaging 37(11), 2514–2525 (2018) * [3] Chen, J., Lu, J., Zhu, X., Zhang, L.: Generative semantic segmentation. In: CVPR. pp. 7111–7120 (2023) * [4] Chen, S., Bortsova, G., García-Uceda Juárez, A., Van Tulder, G., De Bruijne, M.: Multi-task attention-based semi-supervised learning for medical image segmentation. In: MICCAI. pp. 457–465. Springer (2019) * [5] Chen, X., Yuan, Y., Zeng, G., Wang, J.: Semi-supervised semantic segmentation with cross pseudo supervision. In: CVPR. pp. 2613–2622 (2021) * [6] Choi, J., Kim, S., Jeong, Y., Gwon, Y., Yoon, S.: Ilvr: Conditioning method for denoising diffusion probabilistic models. In: ICCV. pp. 14347–14356. IEEE (2021) * [7] Feng, Z., Zhou, Q., Cheng, G., Tan, X., Shi, J., Ma, L.: Semi-supervised semantic segmentation via dynamic self-training and classbalanced curriculum. arXiv preprint arXiv:2004.08514 1(2), 5 (2020) * [8] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models.
NeurIPS 33, 6840–6851 (2020) * [9] Hu, H., Wei, F., Hu, H., Ye, Q., Cui, J., Wang, L.: Semi-supervised semantic segmentation via adaptive equalization learning. NeurIPS 34, 22106–22118 (2021) * [10] Jiao, R., Zhang, Y., Ding, L., Cai, R., Zhang, J.: Learning with limited annotations: a survey on deep semi-supervised learning for medical image segmentation. arXiv preprint arXiv:2207.14191 (2022) * [11] Li, C., Lin, M., Ding, Z., Lin, N., Zhuang, Y., Huang, Y., Ding, X., Cao, L.: Knowledge condensation distillation. In: ECCV. pp. 19–35 (2022) * [12] Li, C., Liu, H., Liu, Y., Feng, B.Y., Li, W., Liu, X., Chen, Z., Shao, J., Yuan, Y.: Endora: Video generation models as endoscopy simulators. arXiv preprint arXiv:2403.11050 (2024) * [13] Li, C., Liu, X., Li, W., Wang, C., Liu, H., Yuan, Y.: U-kan makes strong backbone for medical image segmentation and generation. arXiv:2406.02918 (2024) * [14] Li, C., Ma, W., Sun, L., Ding, X., Huang, Y., Wang, G., Yu, Y.: Hierarchical deep network with uncertainty-aware semi-supervised learning for vessel segmentation. NCA pp. 1–14 (2022) * [15] Li, C., Zhang, Y., Liang, Z., Ma, W., Huang, Y., Ding, X.: Consistent posterior distributions under vessel-mixing: a regularization for cross-domain retinal artery/vein classification. In: ICIP. pp. 61–65. IEEE (2021) * [16] Li, J., Socher, R., Hoi, S.C.: Dividemix: Learning with noisy labels as semi-supervised learning. In: ICLR (2019) * [17] Liu, X., Guo, X., Liu, Y., Yuan, Y.: Consolidated domain adaptive detection and localization framework for cross-device colonoscopic images. Medical image analysis 71, 102052 (2021) * [18] Liu, X., Li, W., Yuan, Y.: Decoupled unbiased teacher for source-free domain adaptive medical object detection. IEEE Trans. Neural Netw. Learn. Syst. (2023) * [19] Liu, X., Yuan, Y.: A source-free domain adaptive polyp detection framework with style diversification flow. IEEE Transactions on Medical Imaging 41(7), 1897–1908 (2022) * [20] Luo, X.: SSL4MIS. https://github.com/HiLab-git/SSL4MIS (2020) * [21] Luo, X., Hu, M., Song, T., Wang, G., Zhang, S.: Semi-supervised medical image segmentation via cross teaching between cnn and transformer. In: MIDL. pp. 820–833. PMLR (2022) * [22] Luo, X., Wang, G., Liao, W., Chen, J., Song, T., Chen, Y., Zhang, S., Metaxas, D.N., Zhang, S.: Semi-supervised medical image segmentation via uncertainty rectified pyramid consistency. Med. Image Anal. 80, 102517 (2022) * [23] Mendel, R., Rauber, D., de Souza Jr, L.A., Papa, J.P., Palm, C.: Error-correcting mean-teacher: Corrections instead of consistency-targets applied to semi-supervised medical image segmentation. CIBM 154, 106585 (2023) * [24] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: ICML. pp. 8162–8171. PMLR (2021) * [25] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR. pp. 10684–10695 (2022) * [26] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: MICCAI. pp. 234–241. Springer (2015) * [27] Sohn, K., Berthelot, D., Carlini, N., Zhang, Z., Zhang, H., Raffel, C.A., Cubuk, E.D., Kurakin, A., Li, C.L.: Fixmatch: Simplifying semi-supervised learning with consistency and confidence. NeurIPS 33 (2020) * [28] Sun, L., Li, C., Ding, X., Huang, Y., Chen, Z., Wang, G., Yu, Y., Paisley, J.: Few-shot medical image segmentation using a global correlation network with discriminative embedding. 
CBM 140, 105067 (2022) * [29] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. NeurIPS 30 (2017) * [30] Verma, V., Kawaguchi, K., Lamb, A., Kannala, J., Solin, A., Bengio, Y., Lopez-Paz, D.: Interpolation consistency training for semi-supervised learning. Neural Netw. 145, 90–106 (2022) * [31] Vu, T.H., Jain, H., Bucher, M., Cord, M., Pérez, P.: Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation. In: CVPR. pp. 2517–2526 (2019) * [32] Wang, X., Wang, W., Cao, Y., Shen, C., Huang, T.: Images speak in images: A generalist painter for in-context visual learning. In: CVPR. pp. 6830–6839 (2023) * [33] Wang, Y., Xiao, B., Bi, X., Li, W., Gao, X.: Mcf: Mutual correction framework for semi-supervised medical image segmentation. In: CVPR. pp. 15651–15660 (2023) * [34] Wang, Y., Wang, H., Shen, Y., Fei, J., Li, W., Jin, G., Wu, L., Zhao, R., Le, X.: Semi-supervised semantic segmentation using unreliable pseudo-labels. In: CVPR. pp. 4248–4257 (2022) * [35] Wu, Y., Ge, Z., Zhang, D., Xu, M., Zhang, L., Xia, Y., Cai, J.: Mutual consistency learning for semi-supervised medical image segmentation. MIA 81, 102530 (2022) * [36] Xing, Z., Wan, L., Fu, H., Yang, G., Zhu, L.: Diff-unet: A diffusion embedded network for volumetric segmentation. arXiv preprint arXiv:2303.10326 (2023) * [37] Yang, L., Qi, L., Feng, L., Zhang, W., Shi, Y.: Revisiting weak-to-strong consistency in semi-supervised semantic segmentation. In: CVPR. pp. 7236–7246 (2023) * [38] Yang, Q., Liu, X., Chen, Z., Ibragimov, B., Yuan, Y.: Semi-supervised medical image classification with temporal knowledge-aware regularization. In: MICCAI. pp. 119–129. Springer (2022) * [39] Yu, L., Wang, S., Li, X., Fu, C.W., Heng, P.A.: Uncertainty-aware self-ensembling model for semi-supervised 3d left atrium segmentation. In: MICCAI. pp. 605–613. Springer (2019) * [40] Zhang, R., Liu, S., Yu, Y., Li, G.: Self-supervised correction learning for semi-supervised biomedical image segmentation. In: MICCAI. pp. 134–144. Springer (2021) * [41] Zhang, X., Yao, L., Yuan, F.: Adversarial variational embedding for robust semi-supervised learning. KDD (2019) * [42] Zhang, Y., Li, C., Lin, X., Sun, L., Zhuang, Y., Huang, Y., Ding, X., Liu, X., Yu, Y.: Generator versus segmentor: Pseudo-healthy synthesis. In: MICCAI. pp. 150–160 (2021) * [43] Zhu, Y., Yang, J., Liu, S., Zhang, R.: Inherent consistent learning for accurate semi-supervised medical image segmentation. In: MIDL (2023)
# RTFormer: Re-parameter TSBN Spiking Transformer

1st Hongzhi Wang School of Software Technology Zhejiang University Ningbo, China <EMAIL_ADDRESS>2nd Xiubo Liang *Corresponding author School of Software Technology Zhejiang University Ningbo, China <EMAIL_ADDRESS>3rd Mengjian Li Research Center for Data Hub and Security Zhejiang Lab Hangzhou, China <EMAIL_ADDRESS>4th Tao Zhang School of Software Technology Zhejiang University Ningbo, China <EMAIL_ADDRESS>

###### Abstract

Spiking Neural Networks (SNNs), renowned for their bio-inspired operational mechanism and energy efficiency, mirror the human brain's neural activity. Yet, SNNs face challenges in balancing energy efficiency with the computational demands of advanced tasks. Our research introduces RTFormer, a novel architecture that embeds Re-parameterized Temporal Sliding Batch Normalization (TSBN) within the Spiking Transformer framework. This innovation optimizes energy usage during inference while ensuring robust computational performance. The crux of RTFormer lies in its integration of re-parameterized convolutions and TSBN, achieving an equilibrium between computational prowess and energy conservation. Our experimental results highlight its effectiveness: RTFormer achieves notable accuracy on standard datasets such as ImageNet (80.54%), CIFAR-10 (96.27%), and CIFAR-100 (81.37%), and excels on neuromorphic datasets such as CIFAR10-DVS (83.6%) and DVS128 (98.61%). These achievements illustrate RTFormer's versatility and establish its potential in the realm of energy-efficient neural computing.

###### Index Terms:

SNNs, LIF, Transformer, Normalization

## I Introduction

Inspired by the human brain, deep Artificial Neural Networks (ANNs) have garnered significant success, particularly in areas such as computer vision [1, 2] and natural language processing [3, 4, 5, 6]. However, these accomplishments come at a considerable computational cost. ANNs consume approximately 12 times [7] more energy than the human brain, rendering high-energy models difficult to deploy on resource-constrained devices such as smartphones and IoT hardware. Utilising the brain's efficient computational paradigm to create low-energy neural networks on such platforms holds significant value.

Why SNN? Spiking Neural Networks (SNNs) present themselves as paragons of energy efficiency in the computational realm. While structurally echoing the design of traditional ANNs, SNNs diverge in their unique handling of data through discrete binary events. A zero denotes a dormant state, whereas a one signals the firing of a neuron, a spike that conveys information. This binary data processing leads to sparse activations within the network, ensuring that energy is expended only when necessary. Such efficiency is not merely incidental but a core feature of SNNs, enabling them to operate with a fraction of the power required by their ANN counterparts and thus answering the call for sustainable and energy-conscious computing.

How can we design a structure in which inference is more energy efficient? Addressing this question leads us to the inception of the Spatial-Temporal Core. The Spatial Core streamlines the convolutional process by employing structurally reparameterized convolutions, significantly reducing the computational burden during inference without compromising the integrity of learned features.
Simultaneously, the Temporal Core introduces the concept of Temporal Sliding Batch Normalization (TSBN), which tailors the batch normalization process to the temporal aspects of data, ensuring that the network remains responsive to the temporal dynamics inherent in real-world scenarios. Together, these cores form a robust framework that not only excels in energy efficiency but also maintains high fidelity in data processing, making it an ideal candidate for deployment in energy-constrained environments like neuromorphic hardware.

What is the significance of adopting the ST-Core? The Spatial-Temporal Core is not just a technological innovation; it is a conceptual shift towards creating neural networks that operate with a level of energy efficiency akin to the human brain. By drawing inspiration from nature's most sophisticated computing machine, we aim to bridge the gap between the computational prowess of deep learning models and the energy limitations of the devices they run on. This synergy of spatial efficiency and temporal precision paves the way for the next generation of neural network models that are both powerful and sustainable, ready for deployment in the increasingly connected and mobile world we live in.

Our contributions are summarized below:

* We introduce the Spatial-Temporal Core, a harmonious fusion of structurally reparameterized convolutions (Spatial) and dynamic temporal batch normalization (Temporal), crafted to deliver enhanced spatial efficiency and temporal adaptability, thereby elevating neuromorphic computing to new heights of processing excellence.
* We introduce TSBN, a mechanism that aligns batch normalization with the temporal dimension of data, allowing precise, context-sensitive normalization across sequential inputs, thereby bolstering the temporal coherence and predictive performance of neural networks.
* Extensive experiments confirm the proposed architecture's superiority over state-of-the-art SNNs on neuromorphic and non-neuromorphic datasets, underlining its practical significance in advancing spatial-temporal data processing.

## II Related Works

### II-A SNN Learning Methods

Spiking Neural Networks (SNNs) are heralded as the third generation of neural network models due to their biological fidelity, intrinsic event-driven computation, and energy efficiency on neuromorphic platforms [8]. These characteristics have catalyzed a surge in SNN research, positioning them as formidable contenders to their Artificial Neural Network (ANN) counterparts. The fundamental divergence between SNNs and ANNs stems from the employment of spiking neurons as the fundamental computational unit, which facilitates biological interpretability and adeptness in processing temporal information [9, 10, 11, 12, 13]. ANNs rely on robust gradient backpropagation training frameworks, whereas SNNs predominantly utilize two training paradigms: ANN-to-SNN conversion and direct training with surrogate gradients. The conversion approach [14, 15, 16] entails substituting a pre-trained ANN's ReLU layers with spiking neurons, necessitating fine-tuning of hyperparameters to retain accuracy. Nevertheless, this technique is limited by extended conversion time-steps and the architectural rigidity of the source ANN. To circumvent these limitations, [9] employs surrogate gradients to facilitate direct SNN training, yielding high accuracy within a small number of time-steps.
These methodologies have facilitated breakthroughs across domains, with Spiking-YOLO [17] and EMS-YOLO [18] paving the way in object detection, Spiking-UNet [19] advancing semantic segmentation, SpikeGPT [20] and SpikingBERT [21] emerging in language modeling, and Spiking-GAN [22] introducing generative capabilities. For graph-based learning, SpikingGCN [23] and spiking GATs [24] have shown promise. The advent of neuromorphic chips such as TrueNorth [25], Loihi [26], and Tianjic [27] further underscores the potential of SNNs to become prevalent in near-term computational ecosystems.

### II-B Transformer Architecture in SNNs

Thanks to the well-established and effective network architectures in ANNs, SNNs can utilise them to construct high-performance models, such as [28, 29, 7, 30]. The attention mechanism, currently the most effective method in ANNs, has also been integrated into SNNs, including the implementation of the Transformer, its most classic network architecture. Spikformer [31] is the first directly trained Transformer within SNNs, adopting a new spike-form self-attention named Spiking Self Attention (SSA). However, the current configuration of Spikformer, which includes residual [32] connections, still involves non-spike computation. The Spike-Driven Transformer [33] therefore introduces Spike-Driven Self-Attention (SDSA) and novel structures that preserve fully spike-based computation. The integration of attention mechanisms within SNNs has facilitated the adaptation of Transformer architectures to the SNN paradigm, but previous implementations have not adequately addressed constraints encountered during inference. To this end, we propose RTFormer, which leverages a reparameterization strategy to ensure that SNNs benefit from reduced parameter complexity during inference, thereby enhancing deployment efficiency.

### II-C Batch Normalization in SNNs

In the realm of SNNs, the integration of Batch Normalization (BN) techniques has been pivotal in mitigating challenges associated with training dynamics, such as the vanishing or exploding gradient problem. One innovative approach, termed Batch Normalization Through Time (BNTT), was proposed by [34]. BNTT calculates BN statistics and parameters independently at each time-step, enhancing the network's adaptability to instantaneous changes. However, this method may not fully account for the temporal correlations present in input spike sequences, potentially overlooking crucial sequential information. To address this limitation, [35] introduced the threshold-dependent Batch Normalization (tdBN) methodology. This technique consolidates BN statistics and parameters across the temporal dimension, maintaining the conventional BN's benefits while accommodating the temporal structure inherent in SNNs. By aggregating data temporally, tdBN circumvents the instability often encountered in gradients during SNN training. Further expanding upon these developments, the Temporal Effective Batch Normalization (TEBN) method of [36] merges data along the temporal axis for shared BN statistics and then introduces temporal dynamics into the BN process by applying distinct scaling weights. This captures the essential temporal dynamics, thereby providing a more nuanced normalization process that aligns with the temporal nature of SNNs.
We introduce TSBN to selectively leverage the accumulated pre-synaptic inputs in the temporal domain, consistent with the properties of LIF neurons.

## III Method

We introduce RTFormer, a novel fusion of the Transformer architecture with re-parameterization and Temporal Sliding Batch Normalization (TSBN). This section begins with a concise overview of the working principles of spiking neurons, followed by an in-depth exploration of the Spatial-Temporal Core and the Spiking Guided Attention (SGA) module. Finally, we discuss energy consumption.

### III-A Preliminaries

In SNNs, spiking neurons control the release of spikes based on a threshold. In this paper we use LIF [37] neurons, which operate as follows:

$\displaystyle U[t]=V[t-1]+\frac{1}{k_{\tau}}\left(X[t]-(V[t-1]-V_{reset})\right)$ (1) $\displaystyle S[t]=\mathcal{H}(U[t]-V_{th})$ (2) $\displaystyle V[t]=U[t](1-S[t])+V_{reset}S[t]$ (3)

where $k_{\tau}$, $V_{th}$, and $V_{reset}$ represent the decay factor, firing threshold, and reset membrane potential, respectively, which are pre-set to default values. The notation $X[t]$ refers to the input at time step $t$, while $U[t]$ denotes the membrane potential before spiking. The function $\mathcal{H}(\cdot)$ is the Heaviside step function. The spike output $S[t]$ is computed from the membrane potential and the threshold, while $V[t]$ and $V[t-1]$ denote the membrane potential after the reset at time steps $t$ and $t-1$, respectively. In our study, we employ the SpikingJelly framework [38], utilizing default values for $k_{\tau}$, $V_{th}$, and $V_{reset}$, specifically set to 2.0, 1.0, and 0, respectively.
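To make the LIF dynamics of Eqs. (1)-(3) concrete, here is a minimal NumPy sketch of a single neuron unrolled over time. This is our illustration, not the SpikingJelly implementation; the default parameter values follow those stated above.

```python
import numpy as np

def lif_forward(x, k_tau=2.0, v_th=1.0, v_reset=0.0):
    # x: pre-synaptic input sequence of shape (T,); returns the binary spike train.
    v = v_reset                                   # V[t-1], starting at rest
    spikes = np.zeros_like(x)
    for t in range(len(x)):
        u = v + (x[t] - (v - v_reset)) / k_tau    # Eq. (1): leaky integration
        s = float(u >= v_th)                      # Eq. (2): Heaviside firing condition
        v = u * (1.0 - s) + v_reset * s           # Eq. (3): hard reset upon a spike
        spikes[t] = s
    return spikes
```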
Figure 1: Illustration of the structural reparameterization and simplification from a complex multi-branch system to a streamlined model. The top dashed box represents the parameters of the TSBN incorporated into Conv3, and the bottom dashed box represents the parameters of the TSBN incorporated into the trainable threshold.

### III-B Spatial-Temporal Core

Our design, termed the "Spatial-Temporal Core" and shown in Fig. 1, addresses the nuanced realms of spatial and temporal processing in SNNs. The framework divides the architecture into two synergistic components, the "Spatial Core" and the "Temporal Core", each adapted to a distinct aspect of data processing: the former concentrates on spatial features and the latter on temporal dynamics.

Spatial Core. The Spatial Core echoes the principles of structural reparameterization seen in ANNs. We utilize depthwise convolutions (DW-Conv) that are strategically reparameterized to reduce model complexity during inference. This arrangement entails parallel DW-Conv layers with diverse kernel sizes, notably 1x1 and 3x3, enhancing spatial feature extraction efficiency. The transformation from four consecutive 3x3 layers to a trio of Spatial Core units improves spatial detail capture while maintaining a lean model structure. As depicted in Fig. 1, the STCore is structured with five concurrent branches, each contributing unique convolutional parameters to the network's composite function. Notably, the identity branch functions effectively as a 1x1 convolution, utilizing an identity matrix as its kernel. This integration ensures that each branch's convolutional characteristics are distinctly represented. The fusion of these convolutional branches is captured in Eq. 4:

$y=\sum_{i=1}^{n}TSBN_{i}(x*W^{(i)},\mu_{i},\sigma_{i},\gamma_{i},\beta_{i})$ (4)

Here, the index $i$ runs over $n$ distinct branches, with this work specifically using $n=5$. The variable $x$ denotes the input and $y$ the resultant output. The operator $*$ denotes convolution, and $W^{(i)}$ is the convolutional kernel associated with the $i$-th branch. The parameters $\mu_{i},\sigma_{i},\gamma_{i},\beta_{i}$ correspond to the accumulated mean and standard deviation, and the learned scaling factor and bias, derived from the BN layer that follows each convolutional operation. These parameters are crucial for the TSBN process, which refines the data at each branch, ensuring a harmonized and effective integration of the temporal dynamics into the network's overall learning process.

Temporal Core. Conversely, the Temporal Core integrates Temporal Sliding Batch Normalization (TSBN) with the adjustable thresholds of spiking neurons. This integration aligns with the innate dynamics of temporal information processing inherent in SNNs. By incorporating the TSBN parameters directly into the neurons' thresholding mechanism (denoted $V_{th}$), our model gains robustness in handling temporal sequences, which is essential for cognitive functions. In practice, a sliding window judiciously controls the extent of Batch Normalization across time, allowing refined data processing at each stage. Diverging from traditional BN methods like tdBN and BNTT, our focus is on data closer to the current timestep, ensuring a more context-sensitive normalization approach. This methodology enhances the network's capability to respond to dynamic temporal shifts, as captured in Eq. 5:

$y_{[t-w:t]}=\gamma(t)\frac{x_{[t-w:t]}-\mu_{[t-w:t]}}{\sqrt{\sigma_{[t-w:t]}^{2}+\epsilon}}+\beta(t)$ (5)

Here, $x_{[t-w:t]}$ and $y_{[t-w:t]}$ denote the input and output over a temporal window of width $w$, with the timestep $t$ anchoring the window's position within the sequence. $\gamma(t)$ and $\beta(t)$ serve as the scale and shift factors at timestep $t$; $\mu_{[t-w:t]}$ and $\sigma_{[t-w:t]}^{2}$ are the mean and variance computed over the inputs within the sliding window from $t-w$ to $t$; and $\epsilon$ is a small constant to avoid division by zero. Moreover, we fold the TSBN into the neuron's threshold, enabling an efficient integration of temporal normalization into the spiking mechanism of the neurons. This folding process, encapsulated in Eq. 7, transforms $V_{th}$ into a trainable parameter that dynamically adapts to the temporal normalization.

${s}^{(t)}=\begin{cases}1&\text{if }\gamma(t)\frac{x_{[t-w:t]}-\mu_{[t-w:t]}}{\sqrt{\sigma_{[t-w:t]}^{2}+\epsilon}}+\beta(t)>V_{\rm th}\\ 0&\text{otherwise}\end{cases}.$ (6)

${\tilde{V}}_{\rm th}=\frac{(V_{\rm th}-{\beta}(t)){\sqrt{{\sigma}_{[t-w:t]}^{2}}}}{{\gamma}(t)}+{\mu}_{[t-w:t]}$ (7)

Here, $V_{th}$ is the threshold of the spiking neuron, ${\tilde{V}}_{\rm th}$ is its trainable counterpart in this work, and $s^{(t)}$ is the spiking output matrix at timestep $t$.
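The following PyTorch-style sketch illustrates one plausible reading of Eqs. (5) and (7); the tensor layout, the application of window statistics to the current step, and the per-timestep parameter indexing are our assumptions rather than the authors' code.

```python
import torch

def tsbn_step(x, t, w, gamma, beta, eps=1e-5):
    # x: features over time, shape (T, B, C); gamma, beta: per-timestep (T, C).
    # Normalize the step-t features with statistics accumulated over the
    # sliding window [t - w, t], as in Eq. (5).
    lo = max(0, t - w)
    window = x[lo:t + 1]
    mu = window.mean(dim=(0, 1))
    var = window.var(dim=(0, 1), unbiased=False)
    return gamma[t] * (x[t] - mu) / torch.sqrt(var + eps) + beta[t]

def folded_threshold(v_th, t, mu, var, gamma, beta):
    # Eq. (7): fold the TSBN affine transform into the firing threshold, so that
    # comparing the raw input against the folded threshold reproduces the
    # comparison of the normalized value against V_th in Eq. (6).
    return (v_th - beta[t]) * torch.sqrt(var) / gamma[t] + mu
```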
The "Spatial-Temporal Core" stands as a hallmark of innovation in SNN architecture. Its ability to fluidly navigate both spatial and temporal dimensions positions it as a vital development towards emulating the sophisticated computational capabilities of the human brain. This structure not only ensures model efficiency but also accentuates the biological resemblance and pulse-like nature of SNNs, underscoring its potential for real-world applications where both performance and biologically-inspired functionality are paramount.

TABLE I: Experiments on ImageNet. 'T' denotes the timestep. The architecture abbreviations 'S-V', 'S-R', and 'S-T' correspond to Spiking VGG, Spiking ResNet, and Spiking Transformer, respectively.

| Dataset | Methods | Type | Architecture | T | Param(M) | Acc(%) | Energy(mJ) |
|---|---|---|---|---|---|---|---|
| ImageNet | RMP [39] | ANN2SNN | S-V-16 | 4096 | 138.4 | 73.09 | 49.86 |
| ImageNet | Calibration [40] | ANN2SNN | S-V-16 | 2048 | 138.4 | 75.32 | 25.98 |
| ImageNet | SEW-ResNet [29] | SNN training | S-R-152 | 4 | 60.19 | 69.26 | 12.89 |
| ImageNet | MS-ResNet [30] | SNN training | S-R-104 | 4 | 77.28 | 76.02 | 10.19 |
| ImageNet | Att-MS-ResNet [7] | SNN training | S-R-104 | 4 | 78.37 | 77.08 | 7.3 |
| ImageNet | tdBN [35] | SNN training | S-R-34 | 6 | 21.79 | 63.72 | 6.39 |
| ImageNet | TEBN [36] | SNN training | S-R-34 | 4 | 21.79 | 64.29 | 7.05 |
| ImageNet | MPBN [41] | SNN training | S-R-34 | 4 | 21.79 | 64.71 | 6.56 |
| ImageNet | Spikformer [31] | SNN training | S-T-8-768 | 4 | 66.34 | 74.81 | 21.47 |
| ImageNet | Spike-Driven [33] | SNN training | S-T-8-768* | 4 | 66.34 | 77.07 | 6.09 |
| ImageNet | This work | SNN training | S-T-8-768 | 4 | 58.86 | 80.54 | 5.59 |

### III-C Analysis of Energy Consumption

In ANNs, computational demands largely stem from floating-point operations (FLOPs), predominantly due to Multiply-and-Accumulate (MAC) operations. SNNs, however, rely primarily on Accumulate (AC) operations, which reduces the need for MAC operations. This shift not only cuts down on FLOPs but also aligns with the energy-efficient ethos of SNNs by reducing power usage. Yet MAC operations remain a factor in the initial stages of data processing, where raw images are converted into spike-encoded formats. To gauge energy use, an assessment of MAC and AC operations throughout the network's computational processes is essential. The total energy consumption ($E$) can be expressed as follows:

$E=E_{MAC}\times FL_{conv}+E_{AC}\times(FL_{STCore}+FL_{SGA}+FL_{STMLP})$ (8)

Here, $E_{MAC}$ and $E_{AC}$ represent the energy consumption associated with MAC and AC operations, respectively. Experimental measurements on 45nm technology [42] put $E_{MAC}$ at approximately 4.6 picojoules (pJ) and $E_{AC}$ at approximately 0.9 picojoules (pJ); $FL_{(\cdot)}$ denotes the number of floating-point operations of the corresponding layer. This calculation offers an accurate measure of the computational and energy efficiency enhancements delivered by our framework, considering the interplay of both MAC and AC operations across the processing pipeline.
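As a small worked sketch of Eq. (8) (our arithmetic; the operation counts $FL_{(\cdot)}$ must be measured from the actual network, and the per-operation energies come from the 45nm figures of [42]):

```python
E_MAC_PJ = 4.6   # energy per MAC operation in picojoules, 45nm process [42]
E_AC_PJ = 0.9    # energy per AC operation in picojoules, 45nm process [42]

def total_energy_mj(fl_conv, fl_stcore, fl_sga, fl_stmlp):
    # Eq. (8): MAC energy for the spike-encoding conv stage, AC energy for the
    # spike-driven STCore, SGA, and STMLP stages; converted from pJ to mJ (1e-9).
    pj = E_MAC_PJ * fl_conv + E_AC_PJ * (fl_stcore + fl_sga + fl_stmlp)
    return pj * 1e-9
```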
## IV Experiments

Our experimental evaluation encompasses both non-neuromorphic datasets (CIFAR10, CIFAR100, and ImageNet) and neuromorphic datasets (CIFAR10-DVS and DVS128 Gesture). Visualizations of these results are presented in Fig. 2, with ImageNet findings detailed in Tab. I. Results for other datasets are compiled in Tab. II, while the outcomes of our ablation studies are summarized in Tab. III and visualized in Fig. 3.

### IV-A Non-neuromorphic Datasets Classification

#### IV-A1 ImageNet

Dataset Description. The ImageNet dataset, a cornerstone in the field of computer vision, consists of approximately 1.3 million training images spanning 1,000 classes, alongside 50,000 validation images.

Figure 2: Comparative visualization across four columns for a series of images, comprising original images, Grad-CAM representations, attention maps, and Spiking Fire Rate (SFR) maps.

RTFormer's performance. As shown in Table I, the RTFormer model with TSBN and structurally reparameterized DW-Conv achieves remarkable accuracy. Specifically, the S-T-8-768 model, equipped with 58.86M parameters, attains a top-1 accuracy of 80.54%, a notable enhancement over previous SNN models like SEW-ResNet and MS-ResNet. This leap in performance is also accompanied by a reduction in energy consumption, underlining the efficiency of the model's transformer architecture and optimized components.

Comparison with BN methods in SNNs. RTFormer significantly surpasses the tdBN method, achieving an accuracy of 80.54% compared to tdBN's 63.72%. This stark difference in performance indicates the effectiveness of the architectural and BN improvements. Moreover, the energy consumption of this work is lower (5.59mJ) than that of tdBN (6.39mJ), highlighting improved efficiency. Similarly, RTFormer shows superior top-1 accuracy over TEBN (64.29%) and over the MPBN method (64.71%). The energy savings are also notable, with RTFormer consuming less energy, presenting a strong case for the enhancements brought by the TSBN and re-parameterized Transformer architecture.

Comparison with Transformers in SNNs. Spikformer shows competitive performance with a 74.81% accuracy rate; RTFormer outperforms it by a clear margin (80.54% vs. 74.81%), which is significant in the realm of deep learning models. The Spike-Driven Transformer, another variant of the Spiking Transformer, achieves a commendable 77.07%, yet RTFormer retains the edge at 80.54%. The energy efficiency of our architecture is notably better, with a consumption of 5.59mJ compared to the spike-driven model's 6.09mJ, showing that the model's improvements do not come at the cost of increased energy usage.

TABLE II: Experiments on both static and DVS datasets. 'Ts' is the timestep for static datasets, while 'Td' indicates the timestep for neuromorphic (DVS) datasets. The architecture abbreviations 'S-V', 'S-R', and 'S-T' correspond to Spiking VGG, Spiking ResNet, and Spiking Transformer, respectively. Asterisked results (*) represent outcomes from our implementations of these methods. Detailed hyperparameters used in these experiments are documented in the appendix.
| Methods | Architecture | Param(M) | Ts | CIFAR10 Acc | CIFAR100 Acc | Td | CIFAR10-DVS Acc | DVS128 Acc |
|---|---|---|---|---|---|---|---|---|
| RMP [39] | S-V-16 | 138.4 | 4096 | 93.63 | 70.93 | - | - | - |
| Calibration [40] | S-V-16 | 138.4 | 2048 | 95.79 | 77.87 | - | - | - |
| SEW-ResNet [29] | S-R-21 | 21.79 | 4 | 95.34* | 78.32* | 16 | 74.4 | 97.9 |
| MS-ResNet [30] | S-R-18 | 11.69 | 4 | 94.79* | 78.15* | 16 | 75.56 | 97.54* |
| Att-MS-ResNet [7] | S-R-18 | 11.87 | 4 | 95.07* | 77.89* | 20 | 77.35* | 98.23 |
| tdBN [35] | S-R-19 | 12.63 | 4 | 92.92 | 70.86 | 16 | 67.8 | 96.9 |
| TEBN [36] | S-R-19 | 12.63 | 4 | 94.7 | 76.13 | 10 | 83.3* | 97.95* |
| MPBN [41] | S-R-19 | 12.63 | 2 | 96.05* | 79.51 | 10 | 74.4 | 98.26* |
| Spikformer [31] | S-T-4-384 | 9.32 | 4 | 95.19 | 77.86 | 16 | 80.9 | 98.3 |
| Spike-Driven [33] | S-T-4-384 | 9.32 | 4 | 95.6 | 78.4 | 16 | 80 | 97.9* |
| This work | S-T-4-384 | 7.93 | 4 | 96.27 | 81.37 | 16 | 83.6 | 98.61 |
| This work | S-T-4-384 | 7.93 | 2 | 96.12 | 80.9 | 10 | 82.9 | 98.61 |

#### IV-A2 CIFAR

Dataset Description. The CIFAR-10 dataset is a well-known collection of 60,000 32x32 color images split into 10 classes, with each class represented by 6,000 images. The CIFAR-100 dataset, while similar in structure to CIFAR-10, offers a more challenging task with its 100 classes, each comprising 600 images, also totaling 60,000. Both datasets, developed by the Canadian Institute for Advanced Research, serve as fundamental benchmarks for machine learning and computer vision, facilitating the development and validation of innovative image classification models.

Comparison with previous works. Firstly, as shown in Tab. II, when benchmarked against previous SNN models like RMP and others, RTFormer demonstrates superior performance. While traditional SNNs like RMP and Calibration have laid the groundwork in the field, RTFormer capitalizes on their foundation and pushes the boundaries further. For instance, on CIFAR-100, RTFormer achieves an accuracy of 81.37%, a substantial improvement over the 70.93% and 77.87% accuracy rates achieved by RMP and Calibration, respectively. This leap in performance can be attributed to RTFormer's more sophisticated temporal dynamics capturing capabilities and its optimized training methodologies.

Comparison with BN methods in SNNs. Secondly, in comparison with other SNN models employing various BN techniques, RTFormer stands out for its effective use of TSBN. This technique provides RTFormer with an edge, allowing better normalization of the neuron's output across different timesteps, which is crucial for datasets with a high degree of intra-class variability like CIFAR-100. The improved normalization contributes to more stable and faster convergence during training, as evidenced by the higher accuracy rates when compared to the tdBN, TEBN, and MPBN methods.

Comparison with Transformers in SNNs. Thirdly, against the backdrop of Spiking Transformer architectures, RTFormer's refined approach shines through. RTFormer's innovative BN approach, coupled with structurally reparameterized depthwise convolutions (DW-Conv), significantly boosts its performance. While Spikformer and spike-driven models have shown the viability of Transformer architectures in SNNs, RTFormer optimizes these designs, achieving an impressive 96.27% on CIFAR-10 and 81.37% on CIFAR-100. This represents not only an improvement over the aforementioned models but also highlights RTFormer's architectural benefits, which are particularly advantageous for complex and nuanced datasets like CIFAR-100.
In conclusion, RTFormer, with its strategic modifications and enhancements, stands as a testament to the potential of SNNs, particularly in processing complex visual data, and sets a new benchmark for accuracy and efficiency in the field.

Figure 3: Two line graphs, where the blue line represents the baseline model and the green line the performance after incorporating TSBN. The left graph illustrates the results obtained on the CIFAR-10 dataset, while the right graph showcases the outcomes on the CIFAR10-DVS dataset.

### IV-B Neuromorphic Datasets Classification

Dataset Description. The CIFAR10-DVS dataset is a neuromorphic version of the well-known CIFAR-10 dataset, converted using a Dynamic Vision Sensor (DVS). It presents everyday objects in a format compatible with neuromorphic vision systems, capturing temporal changes in pixel intensity. DVS128 Gesture, on the other hand, is a gesture recognition dataset specifically designed for neuromorphic processing. It comprises hand gesture data from 29 individuals under various lighting conditions, captured through a DVS camera, making it ideal for developing and testing gesture recognition models on SNNs and neuromorphic hardware. As shown in Tab. II, RTFormer exhibits unique advantages on neuromorphic datasets such as CIFAR10-DVS and DVS128 Gesture, capitalizing on the intrinsic features of Spiking Neural Networks (SNNs) and the architectural innovations specific to RTFormer.

TABLE III: Ablation experiment for TSBN. In the "Architecture" column of the table, the abbreviation "S" stands for "Spiking," "R" represents "ResNet," and "T" denotes "Transformer."

| Dataset | Method | Architecture | Acc.(%) |
|---|---|---|---|
| CIFAR10 | Baseline | S-R-19 | 95.28 |
| CIFAR10 | w/ TSBN | S-R-19 | 96.04 |
| CIFAR10 | Baseline [33] | S-T-4-384 | 95.60 |
| CIFAR10 | w/ TSBN | S-T-4-384 | 96.27 |
| CIFAR10-DVS | Baseline | S-R-19 | 74.52 |
| CIFAR10-DVS | w/ TSBN | S-R-19 | 79.37 |
| CIFAR10-DVS | Baseline [31] | S-T-4-384 | 80.90 |
| CIFAR10-DVS | w/ TSBN | S-T-4-384 | 83.60 |

Compared to Previous Research. RTFormer outshines prior studies, notably on neuromorphic datasets like CIFAR10-DVS and DVS128 Gesture. It demonstrates superior accuracy, a clear advancement over earlier methods like RMP and Calibration, which do not report results for these DVS datasets.

Against Other BN Methods in SNNs. When compared to other batch normalization techniques such as tdBN, TEBN, and MPBN, RTFormer exhibits noteworthy accuracy improvements on neuromorphic datasets. This suggests that its approach to integrating batch normalization is more effective for handling the dynamic nature of these datasets.

Versus Other Spiking Transformer Architectures. RTFormer also excels in comparison to other spiking transformer architectures like Spikformer and Spike-Driven. It achieves higher accuracy rates, highlighting its effectiveness in processing the temporally rich data characteristic of neuromorphic datasets, thereby underscoring its advanced capability in handling spatiotemporal data complexities.

### IV-C Ablation Study

To verify the effectiveness of TSBN, extensive ablation studies using different architectures were conducted on the CIFAR10 and CIFAR10-DVS datasets. Table III clearly demonstrates that the integration of Temporal Sliding Batch Normalization (TSBN) leads to an enhancement in accuracy, regardless of whether the underlying backbone is a Spiking ResNet or a Spiking Transformer. This improvement is consistent across both static and neuromorphic datasets, a fact that is also visually evident in the accompanying Figure 3.
### IV-D Performance Insights of RTFormer

Architectural Synergy with Neuromorphic Data. Neuromorphic datasets inherently contain temporal information that traditional static datasets lack, and RTFormer is adept at leveraging this. The model's architecture, influenced by the Transformer design, is inherently suited for handling sequences, making it exceptionally well-aligned with the time-sensitive data in neuromorphic datasets. RTFormer uses Spiking Transformer blocks tailored to process the spatio-temporal dynamics present in such data, enabling it to capture the nuanced temporal patterns that are pivotal for recognition tasks in neuromorphic vision.

Effectiveness on Static Data. RTFormer's effectiveness in processing static datasets can be attributed to its novel integration of re-parameterized convolutions. The Spatial Core's re-parameterized convolutions adeptly capture complex spatial patterns in static data, while TSBN, even in a non-temporal context, provides adaptive normalization that enhances the network's ability to generalize from training to unseen data. This combination not only boosts computational efficiency but also ensures a high level of accuracy, making RTFormer a versatile tool in both dynamic and static data environments.

Efficient Temporal Encoding. RTFormer's use of Temporal Sliding Batch Normalization (TSBN) is particularly beneficial for neuromorphic datasets. This specialized BN method ensures that RTFormer's neurons maintain an optimal firing rate, preventing both the vanishing and exploding gradient problems that are common in SNNs. This allows RTFormer to efficiently encode temporal information, a critical aspect when dealing with datasets like CIFAR10-DVS, where each pixel's intensity changes over time are encoded into spike trains.

Robust to Diverse Visual Stimuli. RTFormer's robustness to diverse visual stimuli, stemming from its Transformer roots, is evident in its neuromorphic dataset performance. The attention mechanisms allow the model to focus on the most salient features within the spike trains, enhancing its ability to discern between different gestures and visual patterns with high accuracy.

## V Conclusion

To ensure efficient inference on spiking chips, all convolutional operations within our model have been reparameterized structurally. In conjunction, we have refined the subsequent Batch Normalization (BN) technique to align with the characteristics of Leaky Integrate-and-Fire (LIF) neurons, resulting in the introduction of Temporal Sliding Batch Normalization (TSBN). By embedding TSBN into the transformer architecture, we have crafted RTFormer, which achieves unprecedented results on both static and neuromorphic datasets, setting new benchmarks in performance.

## Acknowledgment

This work was partly supported by the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (2023C01045) and the Ningbo Leading Talents Training Project.

## References

* [1] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in _European conference on computer vision_. Springer, 2020, pp. 213–229. * [2] K. Kim and H. S. Lee, “Probabilistic anchor assignment with iou prediction for object detection,” in _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXV 16_. Springer, 2020, pp. 355–371. * [3] R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z.
# Using LLMs to discover emerging coded antisemitic hate-speech in extremist social media

(Research supported by American University's Signature Research Initiative program. We thank Jacob Levine for his inspiration in the initial steps of this project.)

Dhanush Kikkisetti, Raza Ul Mustafa, Wendy Melillo, Roberto Corizzo, Zois Boukouvalas, Jeff Gill, Nathalie Japkowicz
American University, 4400 Massachusetts Ave NW, Washington, DC 20016, USA <EMAIL_ADDRESS>

###### Abstract

The proliferation of online hate speech has created a difficult problem for social media platforms. A particular challenge relates to the use of coded language by groups interested both in creating a sense of belonging for their users and in evading detection. Coded language evolves quickly and its use varies over time. This paper proposes a methodology for detecting emerging coded hate-laden terminology. The methodology is tested in the context of online antisemitic discourse. The approach considers posts scraped from social media platforms often used by extremist users. The posts are scraped using seed expressions related to previously known discourse of hatred towards Jews. The method begins by identifying the expressions most representative of each post and calculating their frequency in the whole corpus. It filters out grammatically incoherent expressions as well as previously encountered ones so as to focus on emergent well-formed terminology. This is followed by an assessment of semantic similarity to known antisemitic terminology using a fine-tuned large language model, and subsequent filtering out of the expressions that are too distant from known expressions of hatred. Emergent antisemitic expressions containing terms clearly relating to Jewish topics are then removed to return only coded expressions of hatred.

###### Index Terms: hate speech, coded antisemitic terminology

## I Introduction

Online hate speech detection (warning: some of the paper's content may be disturbing to the reader) is a complex problem for social media platforms. A particular challenge, not much discussed in the literature, relates to the use of coded language. The following post illustrates the issue in the context of antisemitic hate speech:

> "Nope Globalist want us intertwined and run by the elites, Globalist don't lay tariffs on their friends you stupid fu****". [posted on Dec. 31, 2022, on the Disqus platform]

According to the American Jewish Committee (AJC) Translate Hate Glossary (https://www.ajc.org/translatehate/globalist), a globalist, in its unbiased definition, is "a person who advocates the interpretation or planning of economic and foreign policy in relation to events and developments throughout the world". According to this definition, the term is rather flattering. Indeed, that is the way it is intended in the Hyatt hotel's welcoming message to its club members seen in Figure 1 (photo by one of the authors on 11/3/23 at a Hyatt Texas property). In that commercial context a globalist refers to someone "who gets it!" and should feel good about it! The term does not, in any way, refer to Jews.
Figure 1: Non-antisemitic use of the term "globalist"

Yet, the AJC Translate Hate Glossary argues that the term has an antisemitic connotation when it "is used to promote the antisemitic conspiracy that Jewish people do not have allegiance to their countries of origin, like the United States, but to some worldwide order—like a global economy or international political system—that will enhance their control over the world's banks, governments, and media". In the above post, it is clear that the antisemitic connotation is implied. From this post, we surmise that

1. The globalists (a.k.a., the Jews) are distinct from "us", presumably, the good American citizens;
2. They control "our" fate to be run by the elites, a subset of these Jews ("elite" appears in the AJC Glossary in the context of "Cosmopolitan Elite": ""Cosmopolitan" and "elite" are terms that have separately incited antisemites across the political spectrum. Based on stereotypes of Jewish wealth and insularity, Jews have been accused of being part of an elite class for centuries.");
3. They help each other by not imposing the same tariffs on each other as those they impose on "us".

The above post thus has a double meaning. To a recipient who is unaware of its antisemitic connotation, some category of people, the globalists, do not seem to behave very nicely. Yet, to an informed audience, it is a very pointed post that reiterates old Nazi and Soviet antisemitic propaganda (cf. "Globalist" and "Cosmopolitan Elite" in the AJC Glossary) and propagates it further. Furthermore, on social media platforms, it does so without setting off any serious alerts since, except for the "stupid fu****" mention, which could raise a flag, no offensive terms are used. The usefulness and importance of catching such subtle posts, and their impact on society beyond the small extremist groups for which they are primarily intended, are subjects we debate elsewhere. This paper is concerned with the automatic discovery of "coded" terms similar to globalists and cosmopolitan elite, which carry both a "regular" and an antisemitic connotation depending on the context in which they are used. Such an automated process is necessary because coded terminology evolves rapidly online, and fixed glossaries such as the AJC glossary quickly become outdated. In addition, due to the large volume of posts appearing on social media, human monitoring cannot be performed without the assistance of automated tools pointing monitors in the right direction. The purpose of our approach is just that: to create an automated monitoring tool to assist human monitors by suggesting emerging, potentially coded, antisemitic terminology, along with the posts that use that terminology. Though the topic of hate speech is, unfortunately, quite vast, this study focuses on antisemitism. The choice of a particular category of hate speech comes from our belief that we can perform a more thorough analysis of the problem by remaining focused. Antisemitism was selected because of the reported increase in antisemitic incidents in the months preceding the beginning of this study, in 2022. The lessons learned from this particular study will apply to other categories of hatred, including hatred against Black, Muslim, Asian, and LGBTQ+ populations, amongst others.
The main contribution of this paper is a methodology for the novel problem of extracting emerging coded hate-laden terminology (antisemitism, in this paper) from extremist social posts, along with a practical pipeline to demonstrate its effectiveness. The methodology is based on the hypothesis that coded antisemitic terminology begets coded antisemitic terminology. In other words, those who use coded terminology to remain under the radar of social media monitors will, when not able to express new ideas with existing coded terms, derive or invent new ones. Based on this hypothesis, we harvest terminology used in contexts similar to those of known coded antisemitic terminology and propose it to human monitors as potential emerging antisemitic coded terminology, along with the context in which that terminology occurs. We propose four different versions of our pipeline and validate them using a quantitative approach. The most advanced version is also evaluated qualitatively. We conclude with a discussion of our approach's practical utility. The remainder of the paper is structured as follows: Section II presents background and related work. In Section III, we discuss data preparation matters. Next, the methodology and pipeline for extracting coded terminology are introduced in detail in Section IV. This is followed by a presentation and discussion of the results in Section V. Finally, Section VI concludes the paper and discusses future work.

## II Background and related work

With the advent of the internet and social media, technology has increased the speed at which language evolves. Propaganda in the form of hate speech now travels the world at such a fast pace that it is beyond human capacity to keep up. Harmful words take on new meanings in both direct and coded ways, inciting hatred in the minds of those only too willing to believe them as they reinforce and justify preexisting prejudices.

### II-A Machine Learning Methods for Hate Speech Detection

In recent times, there has been a notable rise in hate crimes across the United States (https://bjs.ojp.gov/library/publications/hate-crime-recorded-law-enforcement-2010-2019). While establishing a clear relationship between hate crimes and online content is not straightforward, a report by the US Department of Justice points to simultaneous purchases of Facebook ads containing divisive content and hate crimes. These two reports ((1) https://bit.ly/2xeeF5h; (2) https://www.ojp.gov/pdffiles1/nij/grants/304532.pdf) thus suggest that hate speech should not be considered harmless, and that coming up with methods to curb it is an important goal. Previous work aims to detect hate speech in social media using various Machine Learning (ML) methods, as documented by a number of surveys written in the last six years [1, 2, 3, 4]. One of the most recent surveys shows that while up to 2016 fewer than 10 papers were published on the topic each year, since then there has been a huge increase in interest, with over 150 papers published in 2020, the last year for which that survey had complete information [4]. Hate speech detection has been attempted using a wide variety of techniques and applied to many different problems. Founta et al. [5], for example, used Recurrent Neural Networks (RNNs) to classify racism and sexism. Serrà et al. [6] showed that character-level Long Short-Term Memory networks (LSTMs) could be useful for abusive language detection.
Similarly, Convolutional Neural Networks (CNNs) have also been shown to be successful in hate speech detection and classification [7]. More recently, large language models have been used for these tasks, as in the work of [8], who propose different fine-tuned and non-fine-tuned variations of pre-trained models such as BERT, RoBERTa, and ALBERT for offensive language detection. Most of these studies, however, consider hate speech as a whole and, typically, do not distinguish the community towards which it is directed. We feel that this generalized approach is too broad and decided, instead, to use a divide-and-conquer approach by focusing on particular communities separately. Our first attempt focused on the Jewish community and the problem of antisemitic speech in social media.

### II-B Antisemitism in Social Media and its Detection

Antisemitism specifically targets Jewish individuals or the Jewish community [9]. In [10], the authors use the outcomes of two surveys from the EU and the ADL to assess how the level of antisemitism relates to the perception of antisemitism by the Jewish community in eight different EU countries. A recent survey finds that 20% of American Jewish adults have experienced an act of antisemitism, such as an attack either online or on social media (https://bit.ly/41FV6ei). Another study addresses the challenges of quantifying and measuring online antisemitism. It raises the question of whether the number of antisemitic messages is increasing proportionally to other content or whether the share of antisemitic content is rising. Additionally, it aims to determine the extent of online Jew-hatred beyond well-known websites, forums, and closed social media groups [11]. (These studies preceded 10/7/23, when the situation worsened drastically.) A few studies have attempted to combat online antisemitism in a way similar to how generalized hate speech has been countered in the works discussed in the previous section. In [12], for example, the authors prepared a data set that includes both social posts and associated images, when available. They labeled the entries as antisemitic or not, and if antisemitic, indicated the kind of antisemitism: political, economic, religious, or racial. They used a bimodal deep learning approach for classifying the data into these categories. [13] considers a subset of the text-only part of this dataset in an attempt to classify antisemitic posts using a less computationally intensive approach. Focusing on the class imbalance problem in the data while taking advantage of OpenAI's GPT technology, they compared GPT-based resampling techniques against more traditional ones. Very recently, [14] proposed a new data set for antisemitism detection in social media posts that uses a strict annotation process. The data set is so recent, however, that it has not yet been used for classification, or the results of such efforts have not yet been published. There are other projects that consider the detection of online antisemitism using AI approaches as well. In particular, the project entitled "Decoding Antisemitism" (https://decoding-antisemitism.eu/) calls itself an "AI-driven Study on Hate Speech and Imagery Online", and has already produced five published reports on the subject. The project specifically aims at linking national or international events reported in the traditional media to antisemitic online social media discussions.
### II-C Alternatives to automated hate speech detection

In [15], the authors question whether the way in which hate speech detection has been handled by the machine learning community is the way forward, or whether hate speech detection is a lot more complex than previously assumed by the researchers who labeled data sets and applied classifiers to them. Furthermore, the authors note that some hateful content may occur without the use of well-known slurs and that, on top of it all, the nature of hate speech is constantly evolving. In contrast to previous studies, our work takes these observations into consideration and focuses on identifying emerging, potentially coded terms related to antisemitism using NLP methods. There has been a lack of rigorous research on finding emerging antisemitic coded terms, research that could lead to the detection of hate speech and, perhaps, subsequently, to the prevention of hate crimes. This paper aims to bridge this gap and provide an approach for the detection of emerging antisemitic, sometimes coded, terminology used on extremist social media platforms.

## III Data Preparation

This study is part of a large multi-disciplinary project sponsored by our institution which simultaneously collects and analyzes the use of coded language to express antisemitic sentiment on lightly moderated social media platforms typically preferred by individuals with extremist tendencies, and studies the migration of this language from these extremist platforms to the general population. The overall project includes a data team, a population team, and a software team, which collaborate closely and work in parallel. The pipeline illustrating our proposed methodology is shown in Figure 2.

Figure 2: Emergent Coded Antisemitic Terminology Extraction Pipeline

### III-A Dataset

The project is constantly evolving, though for this study we considered the first delivery of the data curated by the data team in June 2023. The data team's objective concerning this study was to analyze the usage of antisemitic terms. We describe the data gathering and cleanup methodologies summarized by the three leftmost components in Figure 2.

#### III-A1 Data Scraping and Labeling

To build the corpus, the data team analyzed antisemitic social media posts from various extremist social media platforms including Disqus, Telegram, Minds, and GETTR. It used antisemitic expressions obtained from the previously mentioned American Jewish Committee (AJC) Translate Hate Glossary as well as the Southern Poverty Law Center (SPLC) to collect social media posts. This collection effort was facilitated by Pyrra (https://www.pyrratech.com/), a private software company that allows its users to scrape posts from alt-social media platforms according to a list of seed terms. The data team considered the 46 seed expressions available from the AJC Glossary at the time, as well as the term "Cultural Marxism", discussed in an SPLC article (https://www.splcenter.org/fighting-hate/intelligence-report/2003/cultural-marxism-catching), and chose 16 of them to make the process tractable. It analyzed the 659 retrieved posts related to these seed expressions to determine whether each post was antisemitic or not. (A copy of the coding statement is available upon request.) The 16 terms used in the subset were selected based on their potential to reveal posts containing emerging new antisemitic terms.
The list of seed words used is: Cabal, Cosmopolitan Elite, Cultural Marxism, Deicide, The Goyim Know, Holocough, Jewish Capitalist, Jewish Communist, Jew Down, Jewish Lobby, New World Order, Not the Real Jews, Rothschild, Soros, Zionist, and Zionist Occupied Government. Because the distribution of posts across seed expressions was uneven, the software team used all the posts retrieved from the 16 seed expressions but restricted its analysis to the seed expressions with at least 5 related posts. The terms dropped from the list according to this criterion are Jew Down and Cosmopolitan Elite, leaving us with 14 seed words for the remainder of the study.

#### III-A2 Preprocessing

Text preprocessing is a critical step in Natural Language Processing (NLP). It involves transforming raw text data into a format that can be easily analyzed by machine learning algorithms. The preprocessing steps usually used involve several techniques, such as tokenization, stop word removal, stemming, and lemmatization [16]. During the first phase of cleaning the corpus, we removed the URLs and lower-cased all the posts to normalize them. This initial procedure was followed by stop-word removal. Then we lemmatized the text to get a single root form for each word prior to passing it on to the coded antisemitic terms extraction process, which will be discussed in the next section. Bigrams and trigrams were formed by running two- and three-word windows through all the posts. (We also considered unigrams but were not able to filter them effectively using our current methodology; their treatment was left for future work.) It was important to filter out badly-formed expressions obtained through that approach. In particular, we decided to keep only bigrams and trigrams that contain exclusively nouns, proper nouns, adjectives, and verbs, since others were judged less relevant to our quest. Since the emphasis of this study is on the novel extraction process discussed next, we did not experiment with the various pre-processing techniques suggested in the literature on hate speech for social media posts [17]. This, too, was left for future work.
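For concreteness, the following sketch illustrates one way the preprocessing and n-gram filtering just described could be implemented with NLTK, the toolkit also used in our standard Phase 1 solution. It is a minimal sketch under stated assumptions, not our exact implementation: the helper names are ours, and applying the POS tagger to already lemmatized, stop-word-free tokens is a simplifying assumption.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.util import ngrams

# Requires: nltk.download("punkt"), nltk.download("stopwords"),
#           nltk.download("wordnet"), nltk.download("averaged_perceptron_tagger")
STOP = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()
# Penn Treebank tag prefixes for the parts of speech we keep:
# nouns/proper nouns (NN*), adjectives (JJ*), and verbs (VB*).
KEPT_TAGS = ("NN", "JJ", "VB")

def preprocess(post):
    """Normalize one post: strip URLs, lower-case, drop stop words, lemmatize."""
    post = re.sub(r"https?://\S+", " ", post.lower())
    tokens = [t for t in nltk.word_tokenize(post) if t.isalpha() and t not in STOP]
    return [LEMMATIZER.lemmatize(t) for t in tokens]

def well_formed_ngrams(tokens):
    """Keep only bigrams and trigrams made exclusively of nouns, proper
    nouns, adjectives, and verbs; other combinations are filtered out."""
    tagged = nltk.pos_tag(tokens)
    grams = []
    for n in (2, 3):
        for gram in ngrams(tagged, n):
            if all(tag.startswith(KEPT_TAGS) for _, tag in gram):
                grams.append(" ".join(word for word, _ in gram))
    return grams
```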
## IV Coded Antisemitic Terms Extraction Approach

As previously mentioned, the purpose of this study is the extraction of emerging coded antisemitic terms. In order to carry out this goal, we designed a method for operationalizing each term of that expression. That operationalization, and the linking of its resulting components into a functional system, constitute the main contribution of this work. The purpose of this section is to discuss the process. To begin with, we consider each word in the expression "emerging coded antisemitic terms" and give it the specific meaning shown below.

* Terms: the extracted expressions are limited to grammatically consistent bigrams and trigrams; they have to be relevant enough to the documents in which they appear and appear frequently enough in the corpus.
* Antisemitic: the candidate expressions have to be semantically related to antisemitic discourse.
* Coded: antisemitic expressions that contain terms relating to obvious Jewish concepts are removed.
* Emerging: already known coded antisemitic expressions are removed in order to concentrate on new terminology.

These operations are divided into two phases. In Phase 1, we address the extraction of emerging coded terminology without worrying about its semantic relation to antisemitism. In Phase 2, we address semantics using large language models. Phase 1 is represented by the "Important Terms Extraction" component in Figure 2. Phase 2 is represented by the combination of the LLM Generation, Similarity Scoring, Antisemitic Terminology Extraction, and Monitoring components in Figure 2. Both phases of the pipeline are implemented using two approaches: a standard solution and an advanced solution. We subsequently test all four combinations, yielding a baseline approach composed of two standard solutions, two hybrid approaches composed of one standard and one advanced solution, and one advanced approach composed of two advanced solutions.

### IV-A Phase 1: Emerging Coded Trending Terms Extraction

For the first part of Phase 1, the extraction of trending terms, we explore the use of off-the-shelf NLP tools for our standard solution and then propose our advanced solution that combines TF-IDF and frequency. Once the trending terms are extracted, we propose a strategy to remove non-emerging and non-coded terms from the list of extracted terms. This strategy is applied to both the standard and advanced solutions.

#### IV-A1 Standard Solution: Trending Terms Extraction using Concordance and Collocation tools

In this first attempt at trending terms extraction, we use traditional NLP techniques to extract bigrams and trigrams using concordance and collocation algorithms from the NLTK toolkit [18]. Concordance is a technique that provides a comprehensive view of how a given term appears in a corpus. Using this approach, we use the 14 seed terms from Section III-A1 to analyze patterns and gain insights into language usage. For each occurrence of a seed term, this approach provides the surrounding word context. We use default settings for the extraction of context. Next, using collocation, we find the most frequent bigrams and trigrams in the collected contexts. Collocation is a technique that finds meaningful combinations of words in a corpus that are semantically coherent. Different statistical measures can be used to detect collocations, including frequency, pointwise mutual information (PMI), and log-likelihood ratio (LLR), among others. We use frequency here, since that is the measure also used in the advanced approach. In the future, we plan to experiment with other statistical measures for both approaches. The standard approach yielded 126 trending terms.

#### IV-A2 Advanced Solution: Trending Terms Extraction using TF-IDF and Frequency

Our proposed advanced approach is presented in Algorithm 1, which uses TF-IDF feature-weighting [19] and frequency to extract trending terms. In a nutshell, this was done by selecting all the terms that obtained a TF-IDF value greater than a self-set threshold, listing these terms in decreasing order of frequency, and selecting the top 200 terms from the list. (We assume that at least 200 terms had a TF-IDF value larger than the self-set threshold. When the same term appeared in several documents, the highest TF-IDF value it received was retained.) Algorithm 1 shows the approach that was followed in detail. The algorithm is explained line by line next.
Algorithm 1 Trending terms extraction

1: Initialize Trending_terms $\triangleright$ Stores top 200 trending terms
2: Initialize $D_{s}$ $\triangleright$ Stores terms' highest TF-IDF scores ($s$)
3: Initialize $D_{f}$ $\triangleright$ Stores terms' values and frequencies
4: Set $T$ $\triangleright$ Stores all the vocabulary terms (values)
5: Set $F$ $\triangleright$ Stores the frequency of each term in the corpus
6: Calculate the TF-IDF scores for each term in each document and store them in matrix $\mathbf{W}\in\mathbb{R}^{d\times v}$, where $d$ denotes the number of documents and $v$ denotes the vocabulary size.
7: for each term $t$ in $T$ do
8:  for each row in $\mathbf{W}$ do $\triangleright$ Each row is a document
9:   $D_{s}[t]\leftarrow\max(D_{s}[t],W[row,t])$ $\triangleright$ Finds $t$'s highest TF-IDF score, $s$, across all documents
10:  end for
11: end for
12: $\delta\leftarrow Average(D_{s})$ $\triangleright$ $\delta$ is the average of all $s$'s
13: $i=1$
14: for each $t$ in $D_{s}$ do
15:  if $D_{s}[t]\geq\delta$ then
16:   $D_{f}[i]\leftarrow(T[t],F[t])$ $\triangleright$ If $t$'s highest TF-IDF score is larger than threshold $\delta$, store $t$'s value and frequency in $D_{f}$
17:   $i=i+1$
18:  end if
19: end for
20: Sort $D_{f}$ in descending order of frequency
21: for $i=1,2,\ldots,200$ do
22:  $Trending\_terms\leftarrow D_{f}[i][T]$ $\triangleright$ Store the most frequent terms in Trending_terms (drop the frequencies)
23: end for

The algorithm begins by initializing the Trending_terms list, which will return the 200 bigrams and trigrams (terms) that received the highest combination of TF-IDF and frequency scores. Next, $D_{s}$ and $D_{f}$ are also initialized. $D_{s}$ will be used to store all the terms' highest TF-IDF scores, whereas $D_{f}$ will store all the terms and their associated frequencies. Since terms are subsequently referred to according to their indices, $T$, which is set next, serves as the reference vector that associates an index with the actual value of the term (i.e., the actual bigram or trigram). Next, the frequency of each term in the corpus is calculated and saved in vector $F$. The TF-IDF values obtained for each unique term and each document are then calculated and placed in the matrix $\mathbf{W}$ of size $d\times v$, where $d$ represents the number of documents and $v$ the number of terms. On lines 7-11, we take each term in matrix $\mathbf{W}$ and find its highest score across all the rows (documents) of the matrix and store it in $D_{s}$. To remove the less relevant terms, we compute the average of all the values in $D_{s}$ and use this value, $\delta$, as a threshold. This allows us to consider only the terms with TF-IDF values greater than the average of all the scores in $D_{s}$. $\delta$ is calculated on line 12. On lines 13-19, we check whether each term's highest TF-IDF value is greater than $\delta$. If so, we save the term's value and its frequency in $D_{f}$. Finally, on line 20, we sort the terms in $D_{f}$ in descending order of their frequency values and select the top 200 terms, storing them in Trending_terms.
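The following sketch shows how Algorithm 1 could be realized with scikit-learn. It is an illustration under stated assumptions rather than our exact implementation: TfidfVectorizer's default smoothing and normalization differ in detail from the textbook TF-IDF of [19], and the function name is ours.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

def trending_terms(docs, top_k=200):
    """Algorithm 1 sketch: threshold terms by their highest per-document
    TF-IDF score, then rank the survivors by corpus frequency."""
    vec = TfidfVectorizer(ngram_range=(2, 3))          # bigrams and trigrams
    W = vec.fit_transform(docs)                        # d x v TF-IDF matrix
    terms = vec.get_feature_names_out()                # reference vector T

    s = np.asarray(W.max(axis=0).todense()).ravel()    # D_s: max score per term
    delta = s.mean()                                   # threshold = average of the s's

    # Vector F: raw frequency of each term across the whole corpus,
    # computed over the same vocabulary as the TF-IDF matrix.
    counts = np.asarray(
        CountVectorizer(ngram_range=(2, 3), vocabulary=vec.vocabulary_)
        .fit_transform(docs).sum(axis=0)
    ).ravel()

    keep = np.where(s >= delta)[0]                     # terms passing the threshold
    ranked = keep[np.argsort(-counts[keep])]           # descending frequency
    return [terms[i] for i in ranked[:top_k]]
```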
#### IV-A3 Removal Strategy: Redundant, Non-Emergent and Non-Coded terms Removal

Once the list of most trending terms has been extracted using either the standard or advanced solution, three categories of terms are removed from it. First, as we consider both bigrams and trigrams, there is a possibility of encountering bigrams within trigrams. Such redundant bigrams are removed from the list of expressions. Next, we remove the terms that have occurred earlier. For now, these correspond to the original list of 16 seed words used to retrieve the posts. In the future, this list will grow, as we intend to use the system continuously, using newly discovered terms of interest as new seed terms. Lastly, the terms that are considered non-coded are removed. These correspond to terms that contain words that obviously pertain to Jewish themes. The list of words currently used includes jew, jewish, kike, and zionist. Expressions that include these words, either as stand-alone words or embedded within other words, are removed. After the removal phase is applied, we are left with 52 and 94 trending terms for the standard and advanced trending term extraction solutions, respectively.

### IV-B Phase 2: Embeddings and Comparisons

Though the bigrams and trigrams extracted in the previous section are known to be trending, their semantics are unknown and, in particular, there is no information as to whether or not these terms are antisemitic. To find out which of these trending expressions are antisemitic, we compare the contexts in which they are used to the contexts in which the known antisemitic expressions are used. If a trending term appears in contexts similar to those in which seed expressions occur, it will be deemed antisemitic. Otherwise, it will be discarded as non-antisemitic. To compute embeddings for the trending and seed terms, we begin by fine-tuning BERT. Since BERT was not specifically trained on instances of hate speech or antisemitism, we fine-tuned it with additional data collected using the same seed expressions as before (since time elapsed between the original collection and the new collection, more posts were available for this exercise). This fine-tuned version of BERT is then used to generate contextual embeddings for both the trending terms discovered in the last section and the seed terms used to extract posts. We present the details of BERT's fine-tuning followed by the standard and advanced embedding solutions we implemented.

#### IV-B1 Fine-tuning the BERT model

The generalized BERT model does not possess domain-specific vocabulary, and thus it is not capable of handling coded hate speech such as antisemitism. Indeed, when such out-of-vocabulary terms occur, they get broken down into smaller tokens for which embeddings are generated. These are treated as rare tokens, yielding unsatisfactory results. To avoid this issue, we fine-tune the BERT model using an additional 56K posts extracted from Pyrra using the same seed words as before. We thus extend BERT's vocabulary from 30k to 55k tokens and fine-tune it using the Masked Language Modeling (MLM) approach. MLM is a pre-training approach that masks a few tokens; the model is subsequently trained to predict the masked tokens from the words that surround them (see https://huggingface.co/learn/nlp-course/chapter7/3).
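A minimal sketch of this vocabulary extension and MLM fine-tuning, using the Hugging Face transformers and datasets libraries, is shown below. The variables posts (the 56K scraped posts) and new_tokens (the mined domain-specific tokens), as well as the hyperparameter values, are illustrative assumptions; the exact training configuration is not reproduced here.

```python
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Extend the vocabulary with domain-specific tokens mined from the scraped
# posts (new_tokens is assumed to be built elsewhere), then resize the
# embedding matrix; the new rows start out randomly initialized.
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = Dataset.from_dict({"text": posts}).map(
    tokenize, batched=True, remove_columns=["text"])

# Masked language modeling: the collator randomly masks tokens and the
# model is trained to reconstruct them from their surrounding context.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-mlm-finetuned",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```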
#### IV-B2 Comparing Trending Terms to Seed Terms

To differentiate between antisemitic and non-antisemitic terms during Phase 2, we compare the trending terms' embeddings to the seed terms' embeddings using Cosine Similarity. We generate two types of embeddings following i) the standard pre-truncate embedding method and ii) the advanced post-truncate embedding method.

Figure 3: Pre-truncate embedding approach for a window of size 5.

In pre-truncate embedding, we truncate the post containing the term to be embedded prior to embedding it. In post-truncate embedding, we embed the entire post containing the term of interest, and truncate the resulting embedding afterwards. (Since we cannot embed posts exceeding 512 tokens, we turned large posts into multiple ones.)

##### Standard Solution: Pre-truncate embeddings

In this approach, we consider context windows of 5 to 14 words, where the size of the window refers to the twin windows located before and after the term being embedded. We show an example of windows of size 5 in Figure 3. Since the same term may be found in more than one post, we collect all the embeddings extracted from fine-tuned BERT using the same window size and average them. Embedding, here, refers to the pooled layer obtained from the 12 layers of the BERT architecture. We follow the same procedure for all the trending terms we extracted and the 14 seed words retained in Section III-A1. Next, we determine the trending terms' antisemitic nature using Algorithm 2. After some initializations on lines 1-4, $S[tt]$, the "similarity to antisemitism" value for trending term $tt$, is computed as follows: $tt$'s embedding is compared to the embeddings of each of the 14 seed terms (the $st$'s) using Cosine Similarity, as described on lines 6-8. On line 9, the 14 resulting measurements are averaged and assigned to $S[tt]$. The process is repeated for each trending term (lines 5-10), and the median of all the $S[tt]$'s, $\gamma$, is calculated on line 11. $\gamma$ is then used as our threshold for potential antisemitism on lines 12-18: if $S[tt]$ for trending term $tt$ is greater than $\gamma$, $tt$ is given the partial label "potentially antisemitic" ($TT\_PL\_w[tt]=1$). Otherwise, it is given the partial label "probably not antisemitic" ($TT\_PL\_w[tt]=0$). (We used the median as it offered more flexibility than the mean.) Algorithm 2 is repeated 10 times, once for each window size $w$ considered. This yields 10 partial labels $TT\_PL\_w[tt]$ for each term $tt$, and the final labeling for $tt$ is "antisemitic" if $m$ out of the 10 partial labels are "potentially antisemitic". It is "not antisemitic" otherwise. The optimal value of $m$ was 7 for the pre-truncate case.

Algorithm 2 Comparing semantic similarity – window size $w$

1: $Embeddings\_tt\leftarrow\{et\_1,et\_2,\dots,et\_n\}$ $\triangleright$ $n$ pre- or post-truncate trending term embeddings at window size $w$
2: $Embeddings\_st\leftarrow\{es\_1,es\_2,\dots,es\_{14}\}$ $\triangleright$ 14 pre- or post-truncate seed word embeddings at window size $w$
3: Initialize $TT\_PL\_w$ $\triangleright$ Stores the $n$ trending terms and their predicted antisemitic labels for window size $w$
4: Initialize $S$ $\triangleright$ Stores the average semantic score for each trending term at window size $w$
5: for each $tt$ in $Embeddings\_tt$ do
6:  for each $st$ in $Embeddings\_st$ do
7:   $tt\_scores[tt]\leftarrow Sim(et\_tt,es\_st)$ $\triangleright$ Cosine similarity
8:  end for
9:  $S[tt]\leftarrow Average(tt\_scores[tt])$ $\triangleright$ Average the 14 semantic scores between $tt$ and all the $st$'s
10: end for
11: $\gamma\leftarrow Median(S)$ $\triangleright$ $\gamma$ is the median of all the scores
12: for each $tt$ in $S$ do
13:  if $S[tt]>\gamma$ then
14:   $TT\_PL\_w[tt]\leftarrow 1$ $\triangleright$ score greater than $\gamma$
15:  else
16:   $TT\_PL\_w[tt]\leftarrow 0$ $\triangleright$ score at most $\gamma$
17:  end if
18: end for
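In Python, the per-window labeling of Algorithm 2 amounts to only a few lines. The sketch below assumes the embeddings have already been computed and stored in dictionaries keyed by term; the function names are ours.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_window(tt_embs, seed_embs):
    """Algorithm 2 for one window size w: average each trending term's
    cosine similarity to the 14 seed terms, then threshold at the median."""
    S = {tt: np.mean([cosine(e, se) for se in seed_embs.values()])
         for tt, e in tt_embs.items()}
    gamma = np.median(list(S.values()))
    return {tt: int(score > gamma) for tt, score in S.items()}

def final_label(partial_labels, m):
    """A term is labeled antisemitic if at least m of the 10 window-level
    partial labels are positive (m = 7 pre-truncate, m = 9 post-truncate)."""
    return sum(partial_labels) >= m
```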
Figure 4: Post-truncate embedding approach for a window of size 5.

##### Advanced Solution: Post-truncate embeddings

In this approach, we begin by embedding each complete post using fine-tuned BERT. The approach is illustrated in Figure 4 for the 18-word post AND THE EVIL LYING DEEP STATE CABAL SATANIC SCUM BAGS ALL NEED TO BE ROUNDED UP AND EXECUTED. This yields an $18\times 12\times 768$ tensor representing the total number of words in the post, the total number of encoding layers, and their dimension. This embedding can be thought of as a word-embedding lookup table that provides complete context for each post. (We assume that each word in the post has a token id in BERT's vocabulary.) Once this embedding is constructed, we follow the same procedure described in Section IV-B2, except that we now extract word-level contextual embeddings from the lookup table (see Figure 4). The advantage of this approach over the previous one is that it builds more informed embeddings, given its use of a complete rather than partial context. Note that there are three additional differences between the standard and the advanced approach. First, in the standard approach we used context window sizes between 5 and 14, while in the advanced approach we used context window sizes between 1 and 10. That is because a window of 1 word does not convey much information in the standard approach, whereas it does in the advanced approach; as a result, we started at size 5 in the standard approach and at size 1 in the advanced approach, using 10 different window sizes in each case. Second, in the advanced approach, the embeddings are generated by averaging the final encoder layer of BERT rather than using the pooled layer, since that yielded better results. Finally, the optimal value for $m$ in the advanced approach was 9 rather than 7.
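The post-truncate extraction can be sketched as follows with a Hugging Face BERT model. The token indices of the term within the post (start, end) are assumed to be known; this is a simplified, hypothetical rendering of the lookup-table idea, averaging the final encoder layer over the term and its context window.

```python
import torch

@torch.no_grad()
def post_truncate_embedding(post, start, end, w, tokenizer, model):
    """Embed the full post once, then average the last encoder layer over
    the term's tokens plus a window of w tokens on each side."""
    enc = tokenizer(post, return_tensors="pt", truncation=True, max_length=512)
    out = model(**enc, output_hidden_states=True)
    last = out.hidden_states[-1][0]          # (num_tokens, 768), final layer
    lo, hi = max(0, start - w), min(last.size(0), end + w)
    return last[lo:hi].mean(dim=0)           # averaged contextual embedding
```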
## V Results and Discussion

The purpose of our study was to design a methodology for extracting emerging coded antisemitic terminology from online posts appearing on social media platforms often used by extremist groups. We proposed a pipeline to implement this methodology and instantiated this pipeline with standard and advanced components. The difficult part of our evaluation is the assessment of whether the approach yields a significant number of terms and whether these terms can, indeed, in some contexts, have an antisemitic connotation. In order to answer these questions, we created a gold standard and tested our results against it.

### V-A Construction of a gold standard

The gold standard we created uses two complementary methodologies: one for the terms already familiar to the community that fights antisemitism, and another for the terms unknown or not yet catalogued by that community. (In this paper, we created a prototype system based on the seed words provided to us by the data team. These seed words are only a small subset of the already known coded antisemitic terms. As a result, some of the emergent terms discovered by our system are emergent vis-a-vis the system's knowledge but not vis-a-vis the broader current knowledge. Discovering terms known to the community but not known by the system constitutes a useful proof of concept. The discovery of terms not currently known by the community constitutes an added demonstration of the worth of the approach.)

Known Terms. For the first category, we simply compiled a general glossary from three existing sources: the Institute for Curriculum Services' Glossary spanning the history of European Antisemitism, which we took in its entirety; the American Jewish Committee "Translate Hate" glossary, which we also used in its entirety (prior to its recent expansion from 46 to 70 terms); and portions of the Glossary of Terms and Acronyms constructed by the R2Pris project on Radicalization and violent extremism. Since this last source encompassed hatred of different types, for this specific study we restricted ourselves to the terms whose composition or definition included a known antisemitic term (e.g., nazi, Aryan, anti-semitic, Ku Klux Klan, SS, Swastika, Fascism, White Supremacist, etc.). (The sources we used can be found at the following websites: https://bit.ly/45kEtYB; https://bit.ly/3MIjKpt; and http://www.r2pris.org/glossary.html)

New Terms. The new terms are the terms that do not appear in the glossaries just mentioned and that need to be manually verified through an internet search. We used the following systematic procedure to assign ground-truth labels to new terms:

* Each extracted term not found in the glossary compilation was searched for on Google in two ways: the term alone, or together with "+ antisemitism" added to the search.
* The documents retrieved on the first page of the Google browser for both searches were examined for references to antisemitism.
* If, based on this analysis, the term was found to be associated with antisemitism (e.g., "deep state" was found to be associated with a conspiracy theory against the Jews), it was coded as antisemitic in our gold standard database. If, on the other hand, the term did not carry any clear meaning (e.g., "late 20th") or was not associated with antisemitism (e.g., "new york city"), it was coded as not antisemitic in our gold standard database.

Qualitative evaluation. We conducted two types of qualitative evaluation. The first one simply consisted of observing the terms extracted by the approach to assess whether they made sense when taken out of context. The second one can be thought of as a sanity check. For terms extracted and labeled as either antisemitic or not, we went back to the posts from which the term was extracted to assess whether, within the context of the post, it was used in an antisemitic way or not. Though we do not use these qualitative assessments in our quantitative evaluation, we show examples of the different situations that arose in terms of agreement or disagreement between our system and our gold standard.

### V-B Results

TABLE I: Accuracy, Precision, Recall and F-score using the four versions of our pipeline.

Model+Embedding | Approach Type | Accuracy | Precision | Recall | F-score
---|---|---|---|---|---
colloc-pretrunc | standard | 0.74 | 0.34 | 1 | 0.51
colloc-posttrunc | hybrid | 0.76 | 0.36 | 1 | 0.53
tfidf-pretrunc | hybrid | 0.67 | 0.47 | 0.55 | 0.51
tfidf-posttrunc | advanced | 0.80 | 0.63 | 0.83 | 0.72

Quantitative Results: We tested four different versions of our proposed pipeline, by combining the standard and advanced solutions proposed for trending term extraction with the standard and advanced solutions proposed for term embedding. These combinations resulted in one standard, two hybrid, and one advanced implementation.
Table I lists the results obtained by concordance + collocation followed by pre-truncation embedding (colloc-pretrunc) or post-truncation embedding (colloc-posttrunc), and those obtained by TF-IDF + frequency followed by pre-truncation embedding (tfidf-pretrunc) or post-truncation embedding (tfidf-posttrunc). The results were obtained using our gold standard labels. The approach using the two advanced components, tfidf-posttrunc, stands out as the clear winner, although the results for all four methods, including tfidf-posttrunc, show a higher level of recall than precision. Future work will attempt to improve all these metric scores, with a focus on precision, so as not to unduly label terms as antisemitic when they are, in fact, benign. When comparing the numbers in Table I, it is important to note that the number and type of terms retrieved differ between the two term extraction processes, colloc and tfidf. While colloc extracted 52 terms of which only 7 were truly antisemitic, tfidf extracted 94 of which 29 were truly antisemitic. The recall of 1 obtained by the two colloc-based methods thus means that both pretrunc and posttrunc were able to identify these 7 antisemitic terms. Their low level of precision, however, suggests that they are too liberal in their labeling of terms as antisemitic.

TABLE II: List of trending terms that are predicted antisemitic by the most advanced version of the pipeline.

False Positives | Known Terms | New Terms | Neutral
---|---|---|---
plain sight | white genocide | FEMA camps | end game
german people | interest groups | color revolution | world war
new york city | nostra aetate | central bank | western civilization
big part | federal reserve | critical race theory | democrat party

Qualitative Results: Our qualitative evaluation was applied to the version of our pipeline that obtained the best results: tfidf-posttrunc, i.e., the advanced version. Table II shows some of the terms extracted by that version. The terms in the False Positives column were incorrectly classified as antisemitic with no good explanation; those in the Known Terms column are correctly classified as antisemitic, as they correspond to our Known Terms; those in the New Terms column were verified to be antisemitic, as they correspond to our New Terms; and those in the Neutral column were incorrectly classified as antisemitic, although the context in which they arise is clearly antisemitic. As discussed below, we call these terms Neutral (in an antisemitic context).

Sanity Check: In Table III, we show a few sample posts containing the following trending antisemitic terms discovered by tfidf-posttrunc: Interest groups, White Genocide, Deep state. Each of these terms had an entry in the antisemitic glossary compilation described earlier. For instance, White Genocide refers to a conspiracy theory rooted in white supremacist ideology, claiming that there is an intentional effort by Jews to destroy the white race through immigration, mixed-racial marriage, LGBTQ+ identification, etc.

TABLE III: Posts on social media with automatically labeled antisemitic coded terms as per the most advanced version of the pipeline.

Coded Term | Status | Website | Post
---|---|---|---
Interest groups | Known Term | Minds | the united states government is controlled by interest groups that are only seeking to enlarge their own power. the us government does not represent the will of the citizenry, and condemning it is not a condemnation on the principles of freedom, democracy, etc. the usa is being set up to fail. the rootless cosmopolitan elite have been constructing elaborate safehouses for decades in preparation for this.
Deep State | Known Term | 4chan | US deep state MIGApede detected. The real deep state is the Jewish lobby.
White Genocide | Known Term | Truth Social | Rotten Eggs - Dr. Reiner Fuellmich and Whitney Webb! Vatican Pro-Abortion- False Prophet Francis Owned By New World Order! Jacob's Trouble = White Genocide! Pandemic Of The Double Dosed. Inflation Spiking, More Lockdowns, The Worst Is Yet To Come!
End Game/FEMA camps | Neutral/New Term | Truth Social | FEMA is not a good thing! FEMA camps are concentration camps. FEMA camps are the end game of the New World Order
Big Part | False Positive | 4chan | all turds need to be deported from the West. turds are brown MENA sunni muslim garbage. they are a big part of the non-white invasion. many of the turkish Iraqi and syrian immigrants who rape women and girls are actually ethnic turds. turds are also zionists and turdistan is a base for israeli ops. imagine sympathizing with these zio-muslim invaders.

Table III also shows an instance of a new term, FEMA camps. This corresponds to a conspiracy theory in which FEMA is believed to plan the incarceration and possible execution of US citizens in favor of the establishment of a New World Order, one of our seed words, which often refers to the establishment of a new form of government controlled by a Jewish elite. On the other hand, during the process of extracting coded antisemitic terms, some terms were labeled as antisemitic despite the fact that they do not appear in our gold standard. In certain cases, that represents an outright mistake, as with Big Part in Table III, where the context is certainly racist (and antisemitism is part of the post) but the term itself is not specifically antisemitic. In other situations, a case could be made for the antisemitic label. For example, our approach predicts End game as a coded antisemitic term, even though we did not find any justification for it in our glossary or internet search. A look at the post in which the term appeared (Table III) helps us understand how the antisemitic context of the post, which includes the terms "concentration camps" and "new world order", led the system to mislabel it. We conclude that, in such cases, our approach is extracting the right term according to the context, but the term should be considered Neutral (in an antisemitic context) rather than antisemitic.

### V-C Discussion

Though we assume that our approach could still be refined, we note that the results obtained by the most successful version of our system are encouraging, suggesting the viability of our hypothesis that emergent coded terms can be discovered automatically using distance measures in embedding spaces. The sanity checks suggest that the terms identified by our approach are usually warranted, as the context shown in the posts attests to the antisemitic nature of the way in which the identified terms are used. These checks also point to the errors made by the system and will help us improve our results. We also believe that our approach could have important practical uses.
After being vetted by a human team, the emergent coded terminology it discovers could be input to the moderating algorithms used by social media platforms to discover problematic discourse or users currently avoiding discovery.

## VI Conclusion

This paper proposes an approach for detecting the emergence of new antisemitic coded terminology, which offers a valuable resource in combating online antisemitism and contributes to the ongoing efforts to create safer and more inclusive online spaces. We achieve an accuracy of 80% and an F-score of 72% in extracting antisemitic terms using this approach, which relies on NLP techniques including POS tagging, TF-IDF, and fine-tuned large language models such as BERT. In the future, we intend to refine our semantic similarity technique by exploring other deep learning and large language model approaches and their various parameter combinations. Similarly, we will experiment with different types of text pre-processing approaches to deal specifically with hate speech and social media text. This will be done in the context of a lifelong-learning setting where the trending terms discovered will be used as input to the data scraping component in the following iteration. We also intend to create a more user-friendly version that will be convenient for people working in this space. Finally, our goal is to extend this study to hateful terminology against other minority groups.

## References

* [1] A. Schmidt and M. Wiegand, "A survey on hate speech detection using natural language processing," in SocialNLP@EACL, 2017.
* [2] P. Fortuna and S. Nunes, "A survey on automatic detection of hate speech in text," ACM Computing Surveys (CSUR), vol. 51, pp. 1–30, 2018.
* [3] F. Poletto, V. Basile, M. Sanguinetti, C. Bosco, and V. Patti, "Resources and benchmark corpora for hate speech detection: a systematic review," Language Resources and Evaluation, vol. 55, pp. 477–523, 2020.
* [4] M. S. Jahan and M. Oussalah, "A systematic review of hate speech automatic detection using natural language processing," Neurocomputing, vol. 546, p. 126232, 2021.
* [5] A. M. Founta, D. Chatzakou, N. Kourtellis, J. Blackburn, A. Vakali, and I. Leontiadis, "A unified deep learning architecture for abuse detection," in Proceedings of the 10th ACM Conference on Web Science, pp. 105–114, 2019.
* [6] J. Serra, I. Leontiadis, D. Spathis, G. Stringhini, J. Blackburn, and A. Vakali, "Class-based prediction errors to detect hate speech with out-of-vocabulary words," in Proceedings of the First Workshop on Abusive Language Online, pp. 36–40, 2017.
* [7] B. Gambäck and U. K. Sikdar, "Using convolutional neural networks to classify hate-speech," in Proceedings of the First Workshop on Abusive Language Online, pp. 85–90, 2017.
* [8] G. Wiedemann, S. M. Yimam, and C. Biemann, "UHH-LT & LT2 at SemEval-2020 task 12: Fine-tuning of pre-trained transformer networks for offensive language detection," ArXiv, vol. abs/2004.11493, 2020.
* [9] M. Schwarz-Friesel and J. Reinharz, Inside the Antisemitic Mind: The Language of Jew-Hatred in Contemporary Germany. Brandeis University Press, 2017.
* [10] S. Zannettou, J. Finkelstein, B. Bradlyn, and J. Blackburn, "A quantitative approach to understanding online antisemitism," in Proceedings of the International AAAI Conference on Web and Social Media, vol. 14, pp. 786–797, 2020.
* [11] G. Jikeli, D. Cavar, and D. Miehling, "Annotating antisemitic online content. Towards an applicable definition of antisemitism," arXiv preprint arXiv:1910.01214, 2019.
* [12] M. Chandra, D. R. Pailla, H. Bhatia, A. J. Sanchawala, M. Gupta, M. Shrivastava, and P. Kumaraguru, ""Subverting the Jewtocracy": Online antisemitism detection using multimodal deep learning," in Proceedings of the 13th ACM Web Science Conference, 2021.
* [13] N. A. Cloutier and N. Japkowicz, "Fine-tuned generative LLM oversampling can improve performance over traditional techniques on multiclass imbalanced text classification," IEEE Conference on Big Data, 2023.
* [14] G. Jikeli, S. Karali, D. Miehling, and K. Soemer, "Antisemitic messages? A guide to high-quality annotation and a labeled dataset of tweets," ArXiv, vol. abs/2304.14599, 2023.
* [15] S. Parker and D. Ruths, "Is hate speech detection the solution the world wants?," Proceedings of the National Academy of Sciences of the United States of America, vol. 120, 2023.
* [16] R. U. Mustafa, M. S. Nawaz, J. Farzund, M. Lali, B. Shahzad, and P. Viger, "Early detection of controversial Urdu speeches from social media," Data Sci. Pattern Recognit., vol. 1, no. 2, pp. 26–42, 2017.
* [17] A. Glazkova, "A comparison of text preprocessing techniques for hate and offensive speech detection in Twitter," Social Network Analysis and Mining, vol. 13, pp. 1–28, 2023.
* [18] E. Loper and S. Bird, "NLTK: The natural language toolkit," arXiv preprint cs/0205028, 2002.
* [19] J. Ramos et al., "Using TF-IDF to determine word relevance in document queries," in Proceedings of the First Instructional Conference on Machine Learning, vol. 242:1, pp. 29–48, Citeseer, 2003.
# Are Large Language Models a Threat to Digital Public Goods? Evidence from Activity on Stack Overflow

Maria del Rio-Chanona 1,2, Nadzeya Laurentsyeva 3, and Johannes Wachs 4,5,1,∗

1 Complexity Science Hub, Vienna. 2 Harvard Kennedy School 3 Faculty of Economics, LMU Munich 4 Corvinus University of Budapest 5 Centre for Economic and Regional Studies, Hungary

Direct correspondence to <EMAIL_ADDRESS> (2023-07-14)

###### Abstract

Large language models like ChatGPT efficiently provide users with information about various topics, presenting a potential substitute for searching the web and asking people for help online. But since users interact privately with the model, these models may drastically reduce the amount of publicly available human-generated data and knowledge resources. This substitution can present a significant problem in securing training data for future models. In this work, we investigate how the release of ChatGPT changed human-generated open data on the web by analyzing the activity on Stack Overflow, the leading online Q&A platform for computer programming. We find that relative to its Russian and Chinese counterparts, where access to ChatGPT is limited, and to similar forums for mathematics, where ChatGPT is less capable, activity on Stack Overflow significantly decreased. A difference-in-differences model estimates a 16% decrease in weekly posts on Stack Overflow. This effect increases in magnitude over time, and is larger for posts related to the most widely used programming languages. Posts made after the release of ChatGPT receive similar voting scores to those made before, suggesting that ChatGPT is not merely displacing duplicate or low-quality content. These results suggest that more users are adopting large language models to answer questions, and that these models are better substitutes for Stack Overflow for the languages for which they have more training data. Using models like ChatGPT may be more efficient for solving certain programming problems, but its widespread adoption and the resulting shift away from public exchange on the web will limit the open data people and models can learn from in the future.

## 1 Introduction

Over the last thirty years, humans have constructed a vast library of information on the web. Using powerful search engines, anyone with an internet connection can access valuable information from online knowledge repositories like Wikipedia, Stack Overflow, and Reddit. New content and discussions posted online are quickly integrated into this ever-growing ecosystem, becoming digital public goods used by people all around the world to learn new technologies and solve their problems (Hess & Ostrom, 2003; Henzinger & Lawrence, 2004; Lemmerich et al., 2019; Piccardi et al., 2021). More recently, these public goods have been used to train artificial intelligence (AI) systems, in particular, large language models (LLMs) (Vaswani et al., 2017). For example, the LLM ChatGPT (OpenAI, 2023) answers user questions by summarizing the information contained in these repositories. The remarkable effectiveness of ChatGPT is reflected in its quick adoption (Teubner et al., 2023) and application across diverse fields including auditing (Gu et al., 2023), astronomy (Smith & Geach, 2023), medicine (Kanjee et al., 2023), and chemistry (Guo et al., 2023). Randomized control trials show that using LLMs significantly boosts productivity in computer programming, professional writing, and customer support tasks (Peng et al., 2023; Noy & Zhang, 2023; Brynjolfsson et al., 2023).
Indeed, the widely reported successes of LLMs like ChatGPT suggest that we will observe a significant change in how people search for, create, and share information online. Ironically, if LLMs like ChatGPT become substitutes for traditional ways of searching and interrogating the web, then they will displace the very human behavior that generated their original training data. User interactions with ChatGPT are the exclusive property of OpenAI, its creator. Only OpenAI will be able to learn from the information contained in these interactions. As people begin to use LLMs instead of online knowledge repositories to find information, contributions to these repositories will likely decrease, diminishing the quantity and quality of these digital public goods. While such a shift would have significant social and economic implications, we have little evidence on whether people are actually substituting their consumption and creation of digital public goods with ChatGPT. The aim of this paper is to evaluate the impact of LLMs on the generation of open data on question-and-answer (Q&A) platforms. Since LLMs perform relatively well on software programming tasks (Peng et al., 2023), we study Stack Overflow, the largest online Q&A platform for software development and programming. We present three results. First, we examine whether the release of ChatGPT has decreased the volume of posts, i.e. questions and answers, posted on the platform. We measure the overall effect of ChatGPT's release on Stack Overflow activity using a difference-in-differences model. We compare the weekly posting activity on Stack Overflow against that of four comparable Q&A platforms. These counterfactual platforms are less likely to be affected by ChatGPT, either because their users are less able to access ChatGPT or because ChatGPT performs poorly on the questions discussed on those platforms. We find that posting activity on Stack Overflow decreased by about $16\%$ following the release of ChatGPT, increasing over time to around $25\%$ within six months. Second, we investigate whether ChatGPT is simply displacing simpler or lower-quality posts on Stack Overflow. To do so, we use data on up- and downvotes, simple forms of social feedback provided by other users to rate posts. We observe no change in the votes posts receive on Stack Overflow since the release of ChatGPT. This finding suggests that ChatGPT is displacing a wide variety of Stack Overflow posts, including high-quality content. Third, we study the heterogeneity of the impact of ChatGPT across different programming languages discussed on Stack Overflow. We test for these heterogeneities using an event study design. We observe that posting activity in some languages like Python and Javascript has decreased significantly more than the global site average. Using data on programming language popularity on GitHub, we find that the most widely used languages tend to have larger relative declines in posting activity. Our analysis points to several significant implications for the sustainability of the current AI ecosystem. The first is that the decreased production of open data will limit the training of future models (Villalobos et al., 2022). LLM-generated content itself is an ineffective substitute for training data generated by humans for the purpose of training new models (Gudibande et al., 2023; Shumailov et al., 2023; Alemohammad et al., 2023).
One analogy is that training an LLM on LLM-generated content is like making a photocopy of a photocopy, providing successively less satisfying results (Chiang, 2023). And while human feedback to LLMs may facilitate continued learning, such feedback remains private information. This suggests a second issue: ChatGPT’s initial advantage can compound if it effectively learns from its interactions with users while simultaneously crowding out the generation of new open data (Arthur, 1989). More broadly, a shift from open data to a more closed web will likely have significant second-order impacts on the digital economy and how we access and share information. The rest of the paper is organized as follows. We introduce our empirical set-up, including the data and models used in our analysis, in Section 2. Section 3 presents our results. In Section 4, we discuss their implications. We argue that our findings of a significant decline in activity on Stack Overflow following the release of ChatGPT have important implications for the training of future language models, competition in the artificial intelligence sector, the provision of digital public goods, and how humans seek and share information. ## 2 Data and Methods ### 2.1 Stack Exchange and Segmentfault data To understand the effect ChatGPT can have on digital public goods, we compare the change in Stack Overflow’s activity with the activity on a set of similar platforms. These platforms are similar to Stack Overflow in that they are technical Q&A platforms, but are less prone to substitution by ChatGPT given their focus or target group. Specifically, we focus on the Stack Exchange platforms Mathematics and Math Overflow and on the Russian-language version of Stack Overflow. We also examine a Chinese-language Q&A platform on computer programming called Segmentfault. Mathematics and Math Overflow focus on university- and research-level mathematics questions, respectively. We consider these sites to be less susceptible to replacement by ChatGPT given that, during our study’s period of observation, the free-tier version of ChatGPT performed poorly (0-20th percentile) on advanced high-school mathematics exams (OpenAI, 2023), and was therefore unlikely to serve as a suitable alternative to these platforms. The Russian Stack Overflow and the Chinese Segmentfault have the same scope as Stack Overflow, but target users located in Russia and China, respectively. We consider these platforms to be less affected by ChatGPT given that ChatGPT is officially unavailable in the Russian Federation, Belarus, Russian-occupied Ukrainian territory, and the People’s Republic of China. Although people in these places can and do access ChatGPT via VPNs (Kreitmeir & Raschky, 2023), such barriers still represent a hurdle to fast, widespread adoption. We extract all posts (questions or answers) on Stack Overflow, Mathematics, Math Overflow, and Russian Stack Overflow from their launch to early June 2023 using https://archive.org/details/stackexchange. We scraped the data from Segmentfault directly. Our dataset comprises 58 million posts on Stack Overflow, over 900 thousand posts for the Russian-language version of Stack Overflow, 3.5 million posts on Mathematics Stack Exchange, 300 thousand posts for Math Overflow, and about 300 thousand for Segmentfault. We focus our analysis on data from January 2019 to June 2023, noting that our findings are robust to alternative time windows.
For each post, our dataset includes the number of votes (up – positive feedback, or down – negative feedback) the post received, the author (user), and whether the post is a question or an answer. Furthermore, each post can have up to 5 tags – predefined labels that summarize the content of the post, for instance, an associated programming language. For more details on the data used, we refer the reader to Section 5. From this point forward, we will refer to Mathematics, Math Overflow, Russian Stack Overflow, and Segmentfault, along with their corresponding posts, as the counterfactual platforms and posts. ### 2.2 Models #### Difference-in-differences We estimate the effect of ChatGPT on posting activity on Stack Overflow using a difference-in-differences method with four counterfactual platforms. We aggregate posting data at the platform- and week-level and fit a regression model using ordinary least squares (OLS): $IHS(Posts_{p,t})=\alpha_{p}+\lambda_{t}+\beta\,Treated_{p,t}+\theta_{p}\,t+\epsilon_{p,t}$ (1) where $Posts_{p,t}$ is the number of posts on platform $p$ in a week $t$, which we transform using the inverse-hyperbolic sine function (IHS) (Burbidge et al., 1988). We prefer this transformation because the coefficient of interest can then be roughly interpreted as a percent change in posting activity: the IHS behaves similarly to a natural log transformation for positive values but remains defined for zeroes. Our estimates are qualitatively similar when using a log transformation, standardization, or raw data. $\alpha_{p}$ are platform fixed effects, $\lambda_{t}$ are time (week) fixed effects, $\theta_{p}$ are platform-specific linear time trends, and $\epsilon_{p,t}$ is the error term. The coefficient of interest is $\beta$, which captures the estimated effect of ChatGPT on posting activity on Stack Overflow relative to the less affected platforms: $Treated$ equals one for weeks after the release of ChatGPT (starting November 27, 2022) when the platform $p$ is Stack Overflow and zero otherwise. We report robust standard errors clustered at the monthly level. To check the dynamics of the effect and to examine pretrends, we employ a similar specification but instead of $\beta\times Treated_{p,t}$, we use $\sum_{t}\beta_{t}\times I(week=t)\times I(platform=StackOverflow)$. We standardize the effects to 0 in the week before the public release of ChatGPT by dropping the indicator for that week from the regression. Separate coefficients for the 25 weeks following the release of ChatGPT show how the effects of ChatGPT evolved over time. Separate coefficients for the first 100 weeks before the release allow us to verify that posts on Stack Overflow had evolved similarly to the activity on counterfactual platforms prior to the introduction of ChatGPT. The advantage of the difference-in-differences method compared to a simple event study with Stack Overflow data only is that we estimate ChatGPT effects net of possible weekly shocks that are common across the technical Q&A platforms. For the interpretation of the coefficient, we note that we estimate the relative change in posting activity on Stack Overflow compared to activity on other platforms before vs. after the release of ChatGPT. #### Event Study When analyzing the effect of ChatGPT on activity across programming languages, we can no longer compare data from Stack Overflow with the counterfactual platforms. This is because the tags annotating posts differ across Stack Exchange platforms.
Therefore, we study ChatGPT’s heterogeneous effects using an event-study specification. For each programming language $i$ (identified by a tag), we model the standardized number of posts in a week $t$ on Stack Overflow by fitting a simple linear time trend with seasonal effects: $\overline{Posts}_{i,t}=\beta_{0}+\beta_{1}\,t+\beta_{2}\,ChatGPT_{t}+\beta_{3}\,(t\times ChatGPT_{t})+\eta_{m(t)}+\epsilon_{i,t}$ (2) where $t$ is the linear time trend and $\eta_{m(t)}$ are seasonal (month-of-year) fixed effects. $ChatGPT_{t}$ equals one if the week $t$ is after the release of ChatGPT and zero otherwise. Coefficient $\beta_{2}$ captures the change in the intercept and coefficient $\beta_{3}$ reflects the change in the slope of the time trend following the release of ChatGPT. In the tag-level analysis, we standardize the dependent variable in order to be better able to compare effects across programming languages with different numbers of posts: we standardize the number of posts within each tag by subtracting the mean and dividing by the standard deviation, with both statistics calculated on data from before the release of ChatGPT. We report HAC standard errors. ## 3 Results Figure 1: A) Time series of weekly posts to Stack Overflow since early 2016. The number of weekly posts decreases at a rate of about 7,000 posts each year from 2016 to 2022. In the six months after the release of ChatGPT, the weekly posting rate decreases by around 20,000 posts. B) Comparing posts to Stack Overflow, its Russian- and Chinese-language counterparts, and mathematics Q&A platforms since early 2022. Post counts are standardized by the average and standard deviation of post counts within each platform prior to the release of ChatGPT. Posting activity on Stack Overflow falls significantly more relative to activity on other platforms. ### 3.1 Decrease in posting activity Figure 1A shows the evolution of activity on Stack Overflow from January 2016 to June 2023. Up to 2022 there was a gradual decrease in activity from roughly 110,000 to 60,000 posts per week, that is, roughly 7,000 fewer weekly posts each year. However, after the release of ChatGPT (November 30th, 2022) posting activity decreased sharply, with the weekly average falling from around 60,000 posts to 40,000 within six months. Compared to the pre-ChatGPT trend, this decrease represents more than five years’ worth of deceleration in just half a year. The decrease in activity on Stack Overflow is larger than for similar platforms for which we expect ChatGPT to be a less viable substitute. Figure 1B shows the standardized posting activity on Stack Overflow, the Russian- and Chinese-language counterparts of Stack Overflow, and two mathematics Q&A platforms. We standardize posting activity by the average and standard deviation of post counts within each platform prior to the release of ChatGPT. Figure 1B shows that Stack Overflow activity deviates markedly from activity on the other platforms after the release of ChatGPT. The plot visualizes the standardized posting activity within each platform since early 2022. Smoothed weekly activity varies between plus and minus two standard deviations for all platforms for most of 2022. Events such as the Chinese New Year, other holidays, and the start of the Russian invasion of Ukraine are visible. Following the release of ChatGPT, we observe a significant decline in activity on Stack Overflow.
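To make the specification in Eq. (1) concrete before turning to the estimates, the following is a minimal sketch of how such a model could be fit with standard OLS tooling. It is an illustration rather than our exact estimation code; the panel layout and column names (`posts`, `platform`, `week`, `month`, `t`, `treated`) are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_did(panel: pd.DataFrame):
    """Fit Eq. (1) on a platform-week panel of post counts."""
    df = panel.copy()
    # Inverse hyperbolic sine of the outcome: behaves like a natural log
    # for positive counts but remains defined at zero.
    df["ihs_posts"] = np.arcsinh(df["posts"])
    # Platform fixed effects, week fixed effects, platform-specific
    # linear trends, and the treatment dummy of interest.
    model = smf.ols(
        "ihs_posts ~ treated + C(platform) + C(week) + C(platform):t",
        data=df,
    )
    # Robust standard errors clustered at the monthly level.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["month"]})
```

The coefficient on `treated` then corresponds to $\beta$ in Eq. (1).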
We report the estimated effect of our difference-in-differences model in Table 1 and visualize the weekly estimates of the relative change in the Stack Overflow activity in Figure 2. Table 1 indicates that ChatGPT decreased posting activity on Stack Overflow by 15.6% ($1-e^{-0.17}$). These results are robust to changes in the controls and starting point of the data time series. We also tested for heterogeneity in subsets of the data: considering only questions (rather than counting both questions and answers) and posts on weekdays. In both subsets our estimates did not deviate significantly from the main result: we estimate a 12% relative decrease in questions and a 14% relative decrease in posts on weekdays.

| | (1) Number of posts | (2) Number of questions | (3) Weekday posts |
|---|---|---|---|
| Stack Overflow $\times$ Post-GPT | -0.170** | -0.112+ | -0.149* |
| | (0.0607) | (0.0619) | (0.0636) |
| Observations | 1,150 | 1,150 | 1,150 |
| R-squared | 0.995 | 0.994 | 0.993 |
| R-squared within | 0.290 | 0.315 | 0.232 |
| Outcome mean | 16363 | 7273 | 13191 |
| Outcome std. dev. | 29088 | 12661 | 23685 |

Table 1: Results of a difference-in-differences model, estimating the change in activity observed weekly on Stack Overflow following the release of ChatGPT, relative to activity on four other platforms less likely to have been impacted. All regressions include platform fixed effects, week fixed effects, and platform-specific linear time-trends. Standard errors of the estimates, clustered by month, are reported in parentheses. Significance codes: ***: $p<0.001$, **: $p<0.01$, *: $p<0.05$, +: $p<0.1$. Figure 2 shows that the impact of ChatGPT is increasing over time and is, by the end of our study, greater in magnitude than the average post-ChatGPT effect estimated in Table 1. By the end of April 2023, the estimated effect stabilizes at around 25%. Interestingly, overall ChatGPT use peaked around this time (see https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard/). Figure 2: Difference-in-differences analysis for posting activities. The dashed line marks November 30, 2022, the release date of ChatGPT. Eight weeks after its introduction, we observe a steady decline in activity on Stack Overflow. The plotted coefficients correspond to the interaction between a weekly dummy and posting on Stack Overflow. The coefficients are normalized to that in the week before the release of ChatGPT. The reported confidence intervals are at 95%. The regression includes platform fixed effects, week fixed effects, and platform-specific linear time-trends. #### Voting activity A decrease in overall activity on Stack Overflow does not necessarily signify a problem; it could indicate a beneficial shift toward fewer but higher-quality posts, as less valued or simplistic questions may be outsourced to ChatGPT. We investigate this possibility using data on voting activity but observe no significant change in the typical appreciation of posts after ChatGPT’s release. The time series of upvotes and downvotes, which we use as a proxy for the overall quality of posts, remain stable across the release of ChatGPT. Specifically, Figure 3 reports the average number of upvotes and downvotes that posts from a given week receive within five weeks of their creation. Upvotes are shown in grey and downvotes in blue; neither series changes significantly. Indeed, the relative stability of voting behavior suggests that the quality of posts on Stack Overflow has not meaningfully changed after the introduction of ChatGPT. Figure 3: The time series of upvotes and downvotes accruing to posts within five weeks of their appearance. We observe no significant change since the release of ChatGPT. The horizontal axis indicates the week of the post.
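Before examining tag-level heterogeneity, here is a minimal sketch of the event-study specification in Eq. (2) for a single tag; the column names (`posts`, `t`, `month`, `post_gpt`) and the HAC lag length are illustrative assumptions, not our exact code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_event_study(tag_df: pd.DataFrame, release_week: int):
    """Fit Eq. (2) on weekly post counts for one programming-language tag."""
    df = tag_df.copy()
    # Standardize posts by the pre-release mean and standard deviation so
    # slope changes are comparable across tags with different volumes.
    pre = df.loc[df["t"] < release_week, "posts"]
    df["z_posts"] = (df["posts"] - pre.mean()) / pre.std()
    # Linear trend (beta1), level shift (beta2), slope change (beta3),
    # and month-of-year fixed effects (eta).
    model = smf.ols("z_posts ~ t + post_gpt + t:post_gpt + C(month)", data=df)
    # HAC (Newey-West) standard errors; the lag length is an assumption.
    return model.fit(cov_type="HAC", cov_kwds={"maxlags": 4})
```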
### 3.2 Heterogeneities across tags Studying posts about different programming languages on Stack Overflow, we find significant heterogeneities in the impact of ChatGPT on posting behavior across languages. In Facet A of Figure 4, we plot the estimated effects (slope changes in the linear time trend after the introduction of ChatGPT) for those 69 tags that we connected to a programming language on GitHub. We estimate a negative effect of ChatGPT for most tags, but the estimates range from a 0.25 standard deviation decrease in slope (i.e., change per week following the ChatGPT release) to a 0.03 standard deviation increase. We observe that some of the widely used languages like Python and JavaScript are the most impacted by ChatGPT. Interestingly, the model estimates that posts about CUDA have increased (though not significantly) after ChatGPT was released. CUDA is an application programming interface created by Nvidia, a graphics card manufacturer, that facilitates the use of graphics cards for computational tasks, in particular for machine learning and artificial intelligence. This exception again demonstrates the impact of ChatGPT on the world of computer programming: people are increasingly interested in software relating to artificial intelligence. Figure 4: A) The event study estimates of the effect of ChatGPT’s release on activity on a selection of tags on Stack Overflow. We report HAC-corrected 95% confidence intervals. B) The relationship between estimated effects and salary data from the Stack Overflow developer survey. We find no significant relationship. C) The relationship between the number of GitHub repositories using a tag and the estimated effect of ChatGPT on that tag. In both B) and C) we plot a linear fit with bootstrapped 95% confidence intervals. The dashed line in B) indicates that the correlation is not significant. In Facet B, we compare the estimated impact of ChatGPT on different languages against salary data of developers using those languages. We source salary data from the 2022 Stack Overflow developer survey, focusing on US-based developers and calculating medians of reported salaries. We observe no clear relationship between the estimated labor market value of a specific language and changes in posting behavior in that language post-ChatGPT. To better understand the relationship between the size of the user base of a programming language and how it is impacted by ChatGPT, we compare our estimates with data from GitHub, the largest online platform for collaborative software development. Among other sources, ChatGPT was trained on data from GitHub. Because training data was collected up to September 2021, we use data on language use on GitHub up to June 2021. In Facet C of Figure 4, we visualize the relationship between the number of GitHub repositories (coding projects) in a specific language and the estimated impact of ChatGPT on that language. We observe that languages with more GitHub repositories tend to be more significantly impacted by the release of ChatGPT in terms of associated activity on Stack Overflow (Pearson’s $\rho=-0.45,p<.001$). ## 4 Discussion The rate at which people have adopted ChatGPT is one of the fastest in the history of technology (Teubner et al., 2023).
It is essential that we better understand what activities this new technology displaces and what second-order effects this substitution may have (Schumpeter, 1942; Aghion & Howitt, 1992). This paper shows that after the introduction of ChatGPT there was a sharp decrease in human content creation on Stack Overflow. We compare the decrease in activity on Stack Overflow with other Stack Exchange platforms where current LLMs are less likely to be used. Using a difference-in-differences model, we find an approximately $16\%$ relative decrease in posting activity on Stack Overflow, with a larger effect in later months. We observed no large change in social feedback on posts, measured using votes, following ChatGPT’s release, suggesting that average post quality has not changed. Posting activity related to more popular programming languages decreased more on average than that for more niche languages. These results suggest that users partially substituted Stack Overflow with ChatGPT. Consequently, the wide adoption of LLMs can decrease the provision of digital public goods, in particular, the open data previously generated by interactions on the web. Our results and data have some shortcomings that point to open questions about the use and impact of LLMs. First, while we can present strong evidence that ChatGPT decreased posting activity on Stack Overflow, we can only partially assess the quality of posting activity using data on upvotes and downvotes. Users may be posting more challenging questions, ones that LLMs cannot (yet) address, to Stack Overflow. Future work should examine whether continued activity on Stack Overflow is more complex or sophisticated on average than posts from prior to ChatGPT’s release. Similarly, ChatGPT may have reduced the volume of duplicate questions about simple topics, though this is unlikely to impact our main results as duplicates are estimated to account for only $3\%$ of posts (Correa & Sureka, 2013), and we do not observe significant changes in voting outcomes. A second limitation of our work is that we cannot observe the extent to which Russian- and Chinese-language users of the corresponding Q&A platforms are actually hindered from accessing ChatGPT; indeed, recent work has shown a spike in VPN and Tor activity following the blocking of ChatGPT in Italy (Kreitmeir & Raschky, 2023). Given the potential economic importance of ChatGPT and similar LLMs, it is nonetheless essential that we better understand how such bans and blocks impact the accessibility of these tools (Gaessler & Piezunka, 2023). Finally, we do not address the issue that ChatGPT may be used to generate Stack Overflow content. Stack Overflow policy effectively banned posts authored by ChatGPT within a week of its release. In any case, a significant amount of ChatGPT activity on Stack Overflow would mean that our measures underestimate the effect of ChatGPT. Despite these shortcomings, our results have important implications for the future of digital public goods. Before the introduction of ChatGPT, more human-generated content was posted to Stack Overflow, forming a collective digital public good due to its non-rivalrous and non-exclusionary nature – anyone with internet access can view, absorb, and extend this information without diminishing the value of the knowledge. Now, this information is instead fed into privately owned LLMs like ChatGPT. This represents a significant and growing shift of knowledge from the public domain to the private one.
This observed substitution effect poses several issues for the future of artificial intelligence in general. The first is that if language models crowd out open data creation, they will limit their own future training data and effectiveness. The second is that owners of the current leading models have exclusive access to user inputs and feedback, which, with a relatively smaller pool of open data, gives them a significant advantage over new competitors in training future models. Third, the decline of public resources on the web would reverse progress made by the web toward democratizing access to knowledge and information. Finally, the consolidation of humans searching for information around one or a few language models could narrow our explorations and focus our attention on mainstream topics. We briefly elaborate on these points, then conclude with a wider appeal for more research on the political economy of open data and AI, and how we can incentivize continued contributions to digital public goods. #### Training future models Our findings suggest that the widespread adoption of ChatGPT may make it difficult to train future model iterations (Taleb, 2012). Though researchers have already expressed concerns about running out of data for training AI models (Villalobos et al., 2022), our results show that the use of LLMs can slow down the creation of new data. Given the growing evidence that data generated by LLMs cannot effectively train new LLMs (Gudibande et al., 2023; Shumailov et al., 2023; Alemohammad et al., 2023), modelers face the real problem of running out of useful data. If ChatGPT truly is a ‘‘blurry JPEG’’ of the web (Chiang, 2023), then, in the long run, it cannot effectively replace its most important input: data derived from human activity. The proliferation of LLMs has already impacted other forms of data creation: many Amazon Mechanical Turk workers now generate content (e.g., respond to surveys, evaluate texts) using ChatGPT (Veselovsky et al., 2023). #### Competition in the artificial intelligence sector A firm’s early advantage in technological innovation often leads to significant market share (Arthur, 1989). In our case, ChatGPT is simultaneously decreasing the amount of open training data that competitors could use to build competing models, while capturing a valuable private source of user data. There is also a growing concentration in tech driven by a shift from companies going public to acquisitions (Ederer & Pellegrino, 2023) – indeed, OpenAI is partially owned by Microsoft. These forces may lead to a compounding advantage for OpenAI. Though firms have long used the massive amounts of open data created by users of platforms like Wikipedia, Stack Overflow, GitHub, OpenStreetMap or Reddit to create products and capture value (Henzinger & Lawrence, 2004; Vincent et al., 2018; Vincent & Hecht, 2021), these products have not generally replaced those platforms. #### Lost economic value Digital public goods generate value in many ways besides feeding LLMs and other algorithms. For instance, Wikipedia is an important source of information worldwide, but in developing countries, readers are more often motivated by intrinsic learning goals and tend to read articles in greater detail (Lemmerich et al., 2019). Unequal access to artificial intelligence may also compound inequalities in growth and innovation between countries (Gaessler & Piezunka, 2023).
Digital public goods also provide direct value to the many websites that draw on open data to complement their core services with extra information (Piccardi et al., 2021). For instance, there is substantial interdependence between sites like Wikipedia, Reddit, and Stack Overflow and the search engines that use them to enrich responses to user queries via infoboxes (McMahon et al., 2017; Vincent & Hecht, 2021). Contributors to digital public goods like Stack Overflow or Open Source Software (OSS) often enjoy indirect benefits (Lerner & Tirole, 2002). For instance, while OSS itself provides significant value in the global economy (Greenstein & Nagle, 2014), OSS contributions are valuable signals of a firm’s capabilities to investors (Conti et al., 2021). Individual contributions to Stack Overflow are used to signal ability on the labor market (Xu et al., 2020). Any general tendency of ChatGPT to crowd out contributions to digital public goods may limit these valuable signals that reduce economic frictions. On the other hand, such signaling activity may serve as a powerful incentive to keep people contributing. #### Narrowing of information seeking The substitution effect we report likely has important second-order effects on how people search for information and their exposure to new ideas. LLMs likely favor well-established perspectives and, owing to their efficiency, decrease the need for users to forage for information. These features of LLMs may reinforce a trend observed earlier in the context of the web. Specifically, internet search engines are thought to have pushed science toward consensus and narrower topics by improving the efficiency of information search and improving the visibility of mainstream information (Evans, 2008). LLMs may also disincentivize the use of new or niche tools because they most amplify our productivity with tools for which they have abundant training data. For instance, ChatGPT may not be able to help users of a new programming language that it has not seen many examples of. Given that LLMs are poised to change how we do research (Grossmann et al., 2023) and present a strong competitor to search engines (Xu et al., 2023), we need to understand what LLM efficiency implies for our contact with diverse sources of information and incentives to try new things. More generally, models like ChatGPT, like many previous breakthrough technologies, will generate political and economic winners and losers. While early evidence shows that these models enhance productivity especially among new and inexperienced workers (Noy & Zhang, 2023; Brynjolfsson et al., 2023), there are other ways in which they may contribute to inequality between people and firms (Rock, 2019), for instance via potential negative side effects of automation (Acemoglu & Restrepo, 2019; Eloundou et al., 2023). Our results suggest that the economics of data creation and ownership will become more salient: as data becomes more valuable, there will be growing interest in how creators of data can capture some of that value (Li et al., 2023). These multi-faceted aspects of the impact of LLMs suggest that the political economy of data and AI will be especially important in the coming years (Lehdonvirta, 2022; Johnson & Acemoglu, 2023). In this context, our work highlights the specific issue that valuable digital public goods may be under-produced as a result of the proliferation of AI. A natural follow-up question is how we can incentivize the creation of such goods.
While unemployment shocks are known to increase the provision of digital public goods (Kummer et al., 2020), it would be an unsatisfying solution to suggest that people put out of work by automation will fill this gap. In the case of platforms like Stack Overflow, active users are often motivated by social feedback and gamification (Anderson et al., 2012), but the continual onboarding of new users is what keeps these platforms relevant in the long run (Danescu-Niculescu-Mizil et al., 2013). For the sake of a sustainable open web and an AI ecosystem that draws on its data, we should think about how to keep people exchanging information and knowledge online. ## 5 Appendix ### Data #### Stack Exchange platform sites The raw dataset obtained from https://archive.org/details/stackexchange contains nearly all posting activity on the question and answer platforms hosted on the Stack Exchange network from its launch in 2008 to early June 2023. These include Stack Overflow, its Russian-language version, and Math Overflow and Math Stack Exchange. Stack Overflow is the largest online Q&A platform for topics relating to computer programming and software development. It provides a community-curated discussion of issues programmers face (Anderson et al., 2012). Questions have multiple answers, and users debate the relative merits of solutions and alternatives in comments. A track record on Stack Overflow has value on the labor market as a signal of an individual’s skills (Xu et al., 2020). The data contains over 58 million posts, including both questions and answers. Posts are linked to their posting users, from which we infer a poster’s previous activity and can identify posts made by new users. Questions are annotated with tags indicating the topic of the post, including the programming languages used. Users can give posts upvotes or downvotes, providing posting users with social feedback and reputation points. The Russian-language version of Stack Overflow (over 900 thousand posts) and the mathematics-oriented platforms Math Stack Exchange (over 3.5 million posts) and Math Overflow (over 300 thousand posts) have identically structured data dumps hosted in the same location. Registered users can upvote and downvote posts made on Stack Exchange platforms. These votes provide a valuable signal of the value of posts (Anderson et al., 2012; Mamykina et al., 2011). They are the primary way users earn reputation points and status on Stack Exchange platforms. Votes also influence the ranking of posts in user feeds and search engine results, facilitating information filtering. Downvotes are used to moderate content. The Stack Exchange data dump contains data on every vote cast, including the corresponding post, the date the vote was made, and whether it was an upvote or downvote. #### Segmentfault Segmentfault is a Chinese-language Q&A platform for developers that has many similarities with the Stack Exchange sites. Users post questions on programming language topics and other users post answers. Questions are tagged by relevant languages and technologies, and there are similar gamification elements on the platform. We scraped data on all posts as of early June 2023, gathering over 300 thousand posts in total. #### Selection of tags Stack Overflow posts are annotated by tags which describe the concepts and technologies used in the post. For example, many tags indicate programming languages, web frameworks, database technologies, or programming concepts like functions or algorithms.
Stack Overflow reconciles tags referring to the same things via a centralized synonym dictionary. We selected the 1,000 most used tags up to early June 2023, and focused on those 69 which could be directly linked to language statistics reported by GitHub, described next. #### GitHub data on programming language use We use data from the June 2021 GHTorrent data dump (Gousios & Spinellis, 2012) as a proxy measure for the amount of open data available for each programming language. The dataset reports which languages are used in each project or repository on GitHub. We simply count the number of repositories mentioning each language. We then link the languages with tags on Stack Overflow. As an alternative measure, we count the number of commits (elemental code contributions) made to repositories, and hence to each language. In the main paper we visualize the estimated effects of ChatGPT on specific tags that we can link to GitHub languages. We exclude some tags which refer to file formats or plain text, specifically: yaml, json, text, svg, markdown, and xml. ### Data and Code availability Data and code to reproduce our analyses will be made available in a subsequent draft. The Stack Overflow data dump is available here: https://archive.org/details/stackexchange. ### Acknowledgments We thank Frank Neffke, Gergő Tóth, Christoffer Koch, Sándor Juhász, Martin Allen, Manran Zhu, Karl Wachs, László Czaller, and Helene Strandt for helpful comments and discussions. ## References

* Acemoglu & Restrepo (2019) Acemoglu, D. & Restrepo, P. (2019), ‘Automation and new tasks: How technology displaces and reinstates labor’, Journal of Economic Perspectives 33(2), 3–30.
* Aghion & Howitt (1992) Aghion, P. & Howitt, P. (1992), ‘A model of growth through creative destruction’, Econometrica 60(2), 323–351.
* Alemohammad et al. (2023) Alemohammad, S., Casco-Rodriguez, J., Luzi, L., Humayun, A. I., Babaei, H., LeJeune, D., Siahkoohi, A. & Baraniuk, R. G. (2023), ‘Self-consuming generative models go MAD’.
* Anderson et al. (2012) Anderson, A., Huttenlocher, D., Kleinberg, J. & Leskovec, J. (2012), Discovering value from community activity on focused question answering sites: a case study of Stack Overflow, in ‘Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining’, pp. 850–858.
* Arthur (1989) Arthur, W. B. (1989), ‘Competing technologies, increasing returns, and lock-in by historical events’, The Economic Journal 99(394), 116–131.
* Brynjolfsson et al. (2023) Brynjolfsson, E., Li, D. & Raymond, L. R. (2023), Generative AI at Work, Technical report, National Bureau of Economic Research.
* Burbidge et al. (1988) Burbidge, J. B., Magee, L. & Robb, A. L. (1988), ‘Alternative transformations to handle extreme values of the dependent variable’, Journal of the American Statistical Association 83(401), 123–127.
* Chiang (2023) Chiang, T. (2023), ‘ChatGPT is a Blurry JPEG of the Web’, The New Yorker.
* Conti et al. (2021) Conti, A., Peukert, C. & Roche, M. P. (2021), ‘Beefing it up for your investor? Open sourcing and startup funding: Evidence from GitHub’ (August 25, 2021).
* Correa & Sureka (2013) Correa, D. & Sureka, A. (2013), Fit or unfit: analysis and prediction of ‘closed questions’ on Stack Overflow, in ‘Proceedings of the first ACM conference on Online Social Networks’, pp. 201–212.
* Danescu-Niculescu-Mizil et al. (2013) Danescu-Niculescu-Mizil, C., West, R., Jurafsky, D., Leskovec, J. & Potts, C. (2013), No country for old members: User lifecycle and linguistic change in online communities, in ‘Proceedings of the 22nd international conference on World Wide Web’, pp. 307–318.
* Ederer & Pellegrino (2023) Ederer, F. & Pellegrino, B. (2023), The great startup sellout and the rise of oligopoly, in ‘AEA Papers & Proceedings’, Vol. 113.
* Eloundou et al. (2023) Eloundou, T., Manning, S., Mishkin, P. & Rock, D. (2023), ‘GPTs are GPTs: An early look at the labor market impact potential of large language models’, arXiv preprint arXiv:2303.10130.
* Evans (2008) Evans, J. A. (2008), ‘Electronic publication and the narrowing of science and scholarship’, Science 321(5887), 395–399.
* Gaessler & Piezunka (2023) Gaessler, F. & Piezunka, H. (2023), ‘Training with AI: Evidence from chess computers’, Strategic Management Journal.
* Gousios & Spinellis (2012) Gousios, G. & Spinellis, D. (2012), GHTorrent: GitHub’s data from a firehose, in ‘2012 9th IEEE Working Conference on Mining Software Repositories (MSR)’, IEEE, pp. 12–21.
* Greenstein & Nagle (2014) Greenstein, S. & Nagle, F. (2014), ‘Digital dark matter and the economic contribution of Apache’, Research Policy 43(4), 623–631.
* Grossmann et al. (2023) Grossmann, I., Feinberg, M., Parker, D. C., Christakis, N. A., Tetlock, P. E. & Cunningham, W. A. (2023), ‘AI and the transformation of social science research’, Science 380(6650), 1108–1109.
* Gu et al. (2023) Gu, H., Schreyer, M., Moffitt, K. & Vasarhelyi, M. A. (2023), ‘Artificial intelligence co-piloted auditing’, Available at SSRN 4444763.
* Gudibande et al. (2023) Gudibande, A., Wallace, E., Snell, C., Geng, X., Liu, H., Abbeel, P., Levine, S. & Song, D. (2023), ‘The false promise of imitating proprietary LLMs’, arXiv preprint arXiv:2305.15717.
* Guo et al. (2023) Guo, T., Guo, K., Liang, Z., Guo, Z., Chawla, N., Wiest, O., Zhang, X. et al. (2023), ‘What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks’, arXiv preprint arXiv:2305.18365.
* Henzinger & Lawrence (2004) Henzinger, M. & Lawrence, S. (2004), ‘Extracting knowledge from the world wide web’, Proceedings of the National Academy of Sciences 101, 5186–5191.
* Hess & Ostrom (2003) Hess, C. & Ostrom, E. (2003), ‘Ideas, artifacts, and facilities: information as a common-pool resource’, Law and Contemporary Problems 66(1/2), 111–145.
* Johnson & Acemoglu (2023) Johnson, S. & Acemoglu, D. (2023), Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, Hachette UK.
* Kanjee et al. (2023) Kanjee, Z., Crowe, B. & Rodman, A. (2023), ‘Accuracy of a generative artificial intelligence model in a complex diagnostic challenge’, JAMA.
* Kreitmeir & Raschky (2023) Kreitmeir, D. H. & Raschky, P. A. (2023), ‘The Unintended Consequences of Censoring Digital Technology–Evidence from Italy’s ChatGPT Ban’, arXiv preprint arXiv:2304.09339.
* Kummer et al. (2020) Kummer, M., Slivko, O. & Zhang, X. (2020), ‘Unemployment and digital public goods contribution’, Information Systems Research 31(3), 801–819.
* Lehdonvirta (2022) Lehdonvirta, V. (2022), Cloud empires: How digital platforms are overtaking the state and how we can regain control.
* Lemmerich et al. (2019) Lemmerich, F., Sáez-Trumper, D., West, R. & Zia, L. (2019), Why the world reads Wikipedia: Beyond English speakers, in ‘Proceedings of the twelfth ACM international conference on web search and data mining’, pp. 618–626.
* Lerner & Tirole (2002) Lerner, J. & Tirole, J. (2002), ‘Some simple economics of open source’, The Journal of Industrial Economics 50(2), 197–234.
* Li et al. (2023) Li, H., Vincent, N., Chancellor, S. & Hecht, B. (2023), The dimensions of data labor: A road map for researchers, activists, and policymakers to empower data producers, in ‘Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency’, pp. 1151–1161.
* Mamykina et al. (2011) Mamykina, L., Manoim, B., Mittal, M., Hripcsak, G. & Hartmann, B. (2011), Design lessons from the fastest Q&A site in the west, in ‘Proceedings of the SIGCHI conference on Human factors in computing systems’, pp. 2857–2866.
* McMahon et al. (2017) McMahon, C., Johnson, I. & Hecht, B. (2017), The substantial interdependence of Wikipedia and Google: A case study on the relationship between peer production communities and information technologies, in ‘Proceedings of the International AAAI Conference on Web and Social Media’, Vol. 11, pp. 142–151.
* Noy & Zhang (2023) Noy, S. & Zhang, W. (2023), ‘Experimental evidence on the productivity effects of generative artificial intelligence’, Science 381(6654), 187–192.
* OpenAI (2023) OpenAI (2023), ‘GPT-4 Technical Report’.
* Peng et al. (2023) Peng, S., Kalliamvakou, E., Cihon, P. & Demirer, M. (2023), ‘The impact of AI on developer productivity: Evidence from GitHub Copilot’, arXiv preprint arXiv:2302.06590.
* Piccardi et al. (2021) Piccardi, T., Redi, M., Colavizza, G. & West, R. (2021), On the Value of Wikipedia as a Gateway to the Web, in ‘Proceedings of the Web Conference 2021’, pp. 249–260.
* Rock (2019) Rock, D. (2019), ‘Engineering value: The returns to technological talent and investments in artificial intelligence’, Available at SSRN 3427412.
* Schumpeter (1942) Schumpeter, J. A. (1942), Capitalism, socialism, and democracy, Routledge, New York.
* Shumailov et al. (2023) Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N. & Anderson, R. (2023), ‘The curse of recursion: Training on generated data makes models forget’, arXiv preprint arXiv:2305.17493v2.
* Smith & Geach (2023) Smith, M. J. & Geach, J. E. (2023), ‘Astronomia ex machina: a history, primer and outlook on neural networks in astronomy’, Royal Society Open Science 10(5), 221454.
* Taleb (2012) Taleb, N. N. (2012), Antifragile: How to live in a world we don’t understand, Vol. 3, Allen Lane, London.
* Teubner et al. (2023) Teubner, T., Flath, C. M., Weinhardt, C., van der Aalst, W. & Hinz, O. (2023), ‘Welcome to the era of ChatGPT et al.: The prospects of large language models’, Business & Information Systems Engineering 65(2), 95–101.
* Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł. & Polosukhin, I. (2017), ‘Attention is all you need’, Advances in Neural Information Processing Systems 30.
* Veselovsky et al. (2023) Veselovsky, V., Ribeiro, M. H. & West, R. (2023), ‘Artificial artificial artificial intelligence: Crowd workers widely use large language models for text production tasks’, arXiv preprint arXiv:2306.07899.
* Villalobos et al. (2022) Villalobos, P., Sevilla, J., Heim, L., Besiroglu, T., Hobbhahn, M. & Ho, A. (2022), ‘Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning’, arXiv preprint arXiv:2211.04325.
* Vincent & Hecht (2021) Vincent, N. & Hecht, B. (2021), ‘A deeper investigation of the importance of Wikipedia links to search engine results’, Proceedings of the ACM on Human-Computer Interaction 5(CSCW1), 1–15.
* Vincent et al. (2018) Vincent, N., Johnson, I. & Hecht, B. (2018), Examining Wikipedia with a broader lens: Quantifying the value of Wikipedia’s relationships with other large-scale online communities, in ‘Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems’, pp. 1–13.
* Xu et al. (2020) Xu, L., Nian, T. & Cabral, L. (2020), ‘What makes geeks tick? A study of Stack Overflow careers’, Management Science 66(2), 587–604.
* Xu et al. (2023) Xu, R., Feng, Y. & Chen, H. (2023), ‘ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience’, arXiv preprint arXiv:2307.01135.
# GraVAC: Adaptive Compression for Communication-Efficient Distributed DL Training Sahil Tyagi, Indiana University Bloomington, USA <EMAIL_ADDRESS> Martin Swany, Indiana University Bloomington, USA <EMAIL_ADDRESS> ###### Abstract Distributed data-parallel (DDP) training improves overall application throughput as multiple devices train on a subset of data and aggregate updates to produce a globally shared model. The periodic synchronization at each iteration incurs considerable overhead, exacerbated by the increasing size and complexity of state-of-the-art neural networks. Although many gradient compression techniques propose to reduce communication cost, the ideal compression factor that leads to maximum speedup or minimum data exchange remains an open-ended problem since it varies with the quality of compression, model size and structure, hardware, network topology and bandwidth. We propose _GraVAC_, a framework to dynamically adjust the compression factor throughout training by evaluating model progress and assessing gradient information loss associated with compression. _GraVAC_ works in an online, black-box manner without any prior assumptions about a model or its hyperparameters, while achieving the same or better accuracy than dense SGD (i.e., no compression) in the same number of iterations/epochs. As opposed to using a static compression factor, _GraVAC_ reduces end-to-end training time for ResNet101, VGG16 and LSTM by 4.32$\times$, 1.95$\times$ and 6.67$\times$, respectively. Compared to other adaptive schemes, our framework provides 1.94$\times$ to 5.63$\times$ overall speedup. ###### Index Terms: deep learning, data-parallel training, gradient compression, sparsification, adaptive systems ## I Introduction Deep Learning (DL) is a supervised machine learning approach that optimizes a loss function over a non-convex surface by comparing model predictions with ground truth. Each training iteration in DL involves a forward and a backward pass, i.e., generate predictions from input data, assess loss, compute gradients and update model parameters via an optimization method like gradient descent. Training is an iterative process, typically involving multiple passes over the entire dataset where each pass is called an _epoch_. DL is also heavily influenced by certain _hyperparameters_ that affect training speed, quality, or both. Commonly used hyperparameters are learning rate, momentum, batch size, weight decay, epochs, activation function, etc. Distributed data-parallel (DDP) methods further scale training across multiple nodes that train a globally shared model with I.I.D. data (independent and identically distributed) by periodically aggregating locally computed gradients at the end of each iteration. The compute requirements to train DL models double every 3.5 months [1], while the compute gains in chip design for ML accelerators and bandwidth gains in telecommunications networks double every 24 and 18 months, respectively [2, 3]. Thus, the infrastructure required to train state-of-the-art models tends to fall behind their compute and networking demands. Since upgrading the network stack in cloud, datacenter and HPC clusters can be infrequent compared to appending new accelerators to pre-existing systems, gradient communication tends to be the major bottleneck in distributed training [4]. Different compression techniques have been proposed in recent years to mitigate this synchronization overhead.
However, the optimal compression factor (CF) that minimizes data exchange or end-to-end training time depends on the model itself (i.e., its size, structure and depth), available network bandwidth and the compression overhead itself. Unlike traditional HPC and distributed computing applications that only measure parallel efficiency, DDP training has an additional statistical efficiency associated with it. Although the amount of computation performed on each iteration is the same, some iterations tend to be more crucial than others towards the overall learning of the model. Updates are especially sensitive in early stages and to hyperparameters like learning rate schedule, momentum and weight decay [5]. It would thus be intuitive to compare information loss in gradients on account of compression, and use a lower CF when considerably more information is lost and a higher CF when most information is preserved under compression. We can subsequently increase compression as training continues and gradients saturate, and decrease it back during the aforementioned critical stages. We take into account the parallel and statistical efficiency aspects of gradient compression in this work: a high CF improves overall throughput (i.e., number of samples processed per second) by reducing communication cost, but increases information loss in the gradients, resulting in either slower or insignificant updates. The two metrics in DDP compression are Pareto-related, as one improves at the expense of the other. We propose GraVAC: Gradient Variance-based Adaptive Compression, to dynamically adjust CF by comparing information loss from compression with that of the original gradients computed in backpropagation. _GraVAC_ evaluates different CFs in a given search space and determines the CF that best balances parallel and statistical efficiency in DDP training with compression. We validate our approach over a variety of DL models and directly compare with static CF on compressors like Top-$\mathit{k}$ [6], Deep Gradient Compression or DGC [7], Redsync [9] and Random-$\mathit{k}$ [6]. ## II Background and related work DDP training can be implemented either via MPI-based collectives (AllReduce) [10, 11, 12] or using one or more centralized parameter servers (PS) [13] to accumulate and distribute model updates among workers. ### II-A Scaling Efficiency of DDP Training DL training is an iterative process that involves parameter updates at each step via gradient descent (GD) [14]. Full GD uses the entire training data at every step, making the whole process slow and compute-intensive, while Stochastic GD processes a single sample and does not vectorize multiple samples on fast accelerators. Mini-batch GD is the optimal middle ground between Full and Stochastic GD where _b_ samples are randomly sampled from I.I.D. data. Eqn. (1) describes the update rule in mini-batch GD where parameters $\mathit{w}$ at the $(\mathit{i}+1)$-th iteration on $\mathit{N}$ workers minimize loss function $\mathit{\mathcal{L}(\cdot)}$ on input samples $\mathit{x_{j}}$ of size $\mathit{b}$ from distribution $\mathcal{X}_{j}$ with learning rate $\mathit{\eta}$. With weak scaling, we can increase the amount of per-iteration work by adding more workers and keeping per-worker batch-size $\mathit{b}$ the same. $w_{i+1}=w_{i}-\eta\dfrac{1}{N}\sum_{n=1}^{N}{\dfrac{1}{|b|}\sum_{j\in b}\dfrac{\partial}{\partial w_{i}}\mathcal{L}(x_{(j,n)},w_{i})}$ (1)
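As a concrete illustration of Eq. (1), below is a minimal sketch of one synchronous data-parallel step using PyTorch collectives. It assumes `torch.distributed` has already been initialized and that each worker holds its own mini-batch; the names `model`, `loss_fn`, `inputs` and `targets` are illustrative. In practice, DDP frameworks fuse and overlap these all-reduce calls rather than issuing one per parameter.

```python
import torch
import torch.distributed as dist

def ddp_sgd_step(model, loss_fn, inputs, targets, lr):
    """One mini-batch GD step of Eq. (1), synchronized across N workers."""
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)  # forward pass on the local shard
    loss.backward()                         # local gradients: (1/|b|) sum over b
    n_workers = dist.get_world_size()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            # Aggregate gradients across workers and average (the 1/N term).
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad.div_(n_workers)
            # w_{i+1} = w_i - eta * averaged gradient
            p.add_(p.grad, alpha=-lr)
```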
The _ideal_ throughput of a distributed application $\mathit{T_{N}}$ executed across $N$ workers is $N$ times the throughput of a single worker $\mathit{T_{1}}$. The deviation is measured via "scaling efficiency" in Eqn. (2a). Assuming negligible IO overhead, iteration time in dense SGD is bounded by computation and communication time (Eqn. (2b)). It may be possible to overlap communication with computation, but only partially, since the latter is comparatively much lower on modern GPUs and TPUs. Model communication has been shown to be hundreds or even thousands of times more expensive than gradient computation. Thus, frequent synchronization ($t_{sync}$) is the bottleneck that halts linear scaling in DDP. Table 1 describes the size, density and convergence target of ResNet101 [15], VGG16 [16] and LSTM [17] with dense SGD communication. Latency is further exacerbated on constrained networks with limited bandwidth as large volumes of data are exchanged by multiple workers simultaneously. $\eta_{scaling}=T_{N}/(N\cdot T_{1})$ (2a) $t_{iter}\approx t_{compute}+t_{sync}$ (2b)

TABLE 1: DL model description

| Model | Layers | Size (MB) | Dataset | Test target |
|---|---|---|---|---|
| ResNet101 | 101 | 170 | CIFAR10 | 80% Top-1 |
| LSTM | 2 | 252 | PTB | 22.0 PPL |
| VGG16 | 16 | 528 | CIFAR100 | 90% Top-5 |

Figure 1: Communication overhead and early critical period in DDP training. (a) Scaling efficiency in DDP. (b) Initial gradient sensitivity. For a DL model with a total of $M$ parameters, the time cost based on the $\alpha$-$\beta$ communication model (where $\alpha$ is the latency and $\beta$ is the inverse of bandwidth) for tree-based allreduce is $(2\alpha\log N+2M\beta\log N)$ [18]. For ring-based allreduce, this becomes $2(N-1)\alpha+2M\beta(N-1)/N$. Hence, communication cost increases as more workers are added to the mix in distributed training. Fig. 1a shows how overall throughput deviates from the ideal as cluster-size increases. The scaling efficiency is also influenced by the message size, i.e., total gradients/parameters to be communicated. In dense SGD, we observed scaling to be affected by the tensor-size distributions across the layers of a model as well. For example, LSTM has a better $\eta_{scaling}$ than ResNet101 despite being a larger model. This is because parameters in LSTM are spread across just 2 layers, compared to 101 in ResNet101. ### II-B Gradient Variance in Deep Learning Prior work has demonstrated that gradient information can help measure the statistical efficiency of distributed training [19, 20]. There is a strong correlation between changes in the eigenvalues of the second-order Hessian [21] and first-order gradient statistics (i.e., variance). [22, 23] explore how gradients behave in early stages of DL training and during certain critical periods, influenced by hyperparameters like learning rate schedule, gradient clipping and type of SGD used (e.g., zero, first or second-order moments). Fig. 1b attests to those findings: we plot variance over the starting iterations and observe how drastically the gradients change before saturating over training.
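To illustrate the kind of first-order signal discussed above, the following sketch tracks a moving average of a cheap gradient-variance proxy (the squared $\ell_2$-norm of the flattened gradients) across iterations. The window size and the choice of proxy are illustrative assumptions rather than a prescribed recipe.

```python
from collections import deque
import torch

class GradientVarianceTracker:
    """Moving average of a scalar gradient-variance proxy per iteration."""

    def __init__(self, window: int = 100):
        self.history = deque(maxlen=window)  # finite window: O(window) memory

    def update(self, grads) -> float:
        # Squared l2-norm of all gradients as a single scalar proxy.
        sq_norm = sum(g.detach().float().pow(2).sum().item() for g in grads)
        self.history.append(sq_norm)
        return sum(self.history) / len(self.history)

# Usage inside a training loop (illustrative):
# tracker = GradientVarianceTracker(window=100)
# avg = tracker.update(p.grad for p in model.parameters() if p.grad is not None)
```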
### II-C Gradient Compression Many lossy compression techniques have been proposed for DDP and federated learning in recent years. Lossy compression incurs a fundamental trade-off between data-size and information loss; one can either reduce message size by losing more information, or preserve data quality by keeping the majority of the original bits intact. In the context of DDP, a higher CF reduces communication time at the cost of accuracy degradation or more steps/epochs required for the same convergence. CF measures the size of the original gradients relative to the size of the compressed tensors. E.g., compressing to 10% of the gradients gives a CF of 10$\mathsf{x}$, while 1% gives 100$\mathsf{x}$. Lossy compression can be broadly classified into _quantization_, _sparsification_ or _low-rank approximations_. The bit-width of single-precision (32-bit) floats is reduced in gradient quantization. Techniques like automatic mixed precision (AMP) [24] reduce gradients to half-precision, resulting in 2$\mathsf{x}$ CF. QSGD [25] balances the trade-off between accuracy and quantization precision. 1-bit SGD [26] reduces 32-bit floats to 1-bit and propagates quantization error via error-feedback. Sparsification methods communicate only a fraction of the gradient values along with their indices and set everything else to 0 (a minimal sketch of such a compressor appears at the end of this subsection). Top-k sparsifies by extracting the top k% values while Random-k does so randomly with negligible compression overhead. DGC discards gradients below a certain threshold along with using momentum correction and gradient clipping. Methods like Redsync [9] combine quantization and sparsification, but the estimation quality is not accurate [27]. Approaches like PowerSGD [28] and Pufferfish [29] achieve compression via low-rank updates. The former can be viewed as adding regularization in DL, while the latter performs low-rank factorization on fully connected, convolutional and LSTM layers. _What should be the ideal CF in Compression-based DDP?_ The ideal CF is one that reduces communication time without trimming away too much gradient information, which can be detrimental to the final model. Compression has its own associated costs depending on the target CF and computational complexity of the mechanism itself. These factors affect both the parallel efficiency of distributed training and the statistical efficiency, due to information loss from compression. Fig. 2 aptly demonstrates this: the CF that gives maximum speedup varies for each model and compression technique employed. The models are trained to Table 1 targets. ResNet101 on Top-k achieves the most speedup at 100$\mathsf{x}$, while VGG16 and LSTM peak at CFs 1000$\mathsf{x}$ and 10$\mathsf{x}$ respectively. On the other hand, ResNet101 fails to converge for any CF with Random-k compression. VGG16 and LSTM converged with 10$\mathsf{x}$ and failed with other CFs. Although a typical ML practitioner may not necessarily need to think about a plethora of compression methods, choosing the right CF with any compressor and DL model that minimizes training time, or even converges successfully, presents a non-trivial challenge. Figure 2: CF with maximal speedup (to reach Table 1 targets) varies for each model and compression technique used. (a) Top-k. (b) Random-k. The results are normalized by the 10$\mathsf{x}$ CF; a speedup of 0.0 implies convergence failure. Dynamic compression mechanisms like AdaQS [30] perform quantization using the gradient mean-to-standard-deviation ratio (MSDR). Systems like Accordion [31] and ScaDLES [32] switch between low and high compression based on critical regime identification.
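To ground the sparsification methods discussed above, below is a minimal Top-k compressor and its decompression step. It is a generic sketch (not GraVAC's or DGC's exact implementation): gradients are flattened, the k largest-magnitude values and their indices are kept, and everything else is zeroed on reconstruction.

```python
import torch

def topk_compress(grad: torch.Tensor, cf: float):
    """Keep numel/cf values by magnitude; a CF of 100x keeps 1% of values."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() / cf))
    _, indices = torch.topk(flat.abs(), k)
    values = flat[indices]  # keep the signed values, not magnitudes
    return values, indices, flat.numel()

def topk_decompress(values, indices, numel, shape):
    """Scatter the kept values back into a dense, mostly-zero tensor."""
    out = torch.zeros(numel, dtype=values.dtype, device=values.device)
    out[indices] = values
    return out.view(shape)

# g = torch.randn(1_000_000)
# vals, idx, n = topk_compress(g, cf=100.0)   # ~10,000 values survive
# g_hat = topk_decompress(vals, idx, n, g.shape)
```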
We tackle the ideal CF exploration problem in _GraVAC_ in a gradient-driven manner by comparing the variance of prior- and post-compression gradients. For clarity, prior-compression gradients refer to the original tensors computed in the backward pass. By measuring the information lost in compression, we dynamically adjust CF over each iteration. Starting with a low CF initially, we gradually increase compression as training progresses. On encountering sensitive or critical regions, _GraVAC_ switches to a lower CF that least degrades convergence. ## III Design and implementation In this section, we first describe the trade-off between parallel and statistical efficiency of DDP training _with_ compression. Then we describe the metrics "compression gain" and "compression throughput" to combine the two, and explain _GraVAC_'s adaptive compression algorithm. ### III-A Parallel Efficiency of Gradient Compression The end goal of gradient compression is to improve DDP scaling efficiency. Application scaling is governed by the DDP mechanism (ring-based, tree-based allreduce or parameter servers), communication library used (MPI, NCCL [11], Gloo [10] or RPC) and available bandwidth. Keeping the latter and network infrastructure aside, speedup in any DL model depends on the target CF, quality of estimation and compression overhead. The overall iteration time in Eqn. (2b) is adjusted for compression as $t_{iter}^{(c)}\approx t_{compute}+t_{sync}^{(c)}+t_{compress}^{(c)}+t_{decompress}^{(c)}$, where it takes $t_{compress}^{(c)}$ time to reduce gradients to CF $c$ such that it reduces communication time to $t_{sync}^{(c)}$. $t_{decompress}^{(c)}$ is the time taken to reconstruct the compressed gradients to the same dimension as the original gradients. A viable compressor must have its compression time considerably lower than synchronization time. The parallel efficiency of a distributed application suffers with more workers due to higher synchronization costs. Improving the network bandwidth alleviates this to only a certain extent. [4] investigates how DDP throughput improves only marginally with higher bandwidth. They observed that ResNet50 peaks at 75% scale-out on a 25 Gbps network and remains the same even at 100 Gbps. This is because the network transport implementations of current DL frameworks cannot fully utilize the available network bandwidth. Thus, even though cloud providers like GCP provide anywhere from 10-32 Gbps bandwidth depending on the machine type and VM size, it may not be utilized to its full potential. Fig. 3 shows how the throughput increases and communication overhead decreases with compression. The results are relative to CF 10$\mathsf{x}$ for each model. We perform layerwise DGC compression over a 32 GPU cluster. System throughput is determined only by compression overhead and communication time, as the compute time in backpropagation stays the same across all CFs. Depending on the compressor used, compression latency may vary with the target CF. For example, it decreases with larger CF, as Top-k uses a max-heap and sorts the top k% elements in $O(N+\mathit{k}\log{}\mathit{k})$ time. Throughput for ResNet101 and VGG16 saturates at 500$\mathsf{x}$ and does not improve thereafter, while LSTM saturates at 1000$\mathsf{x}$ (Fig. 3a). Communication savings also diminish at higher CFs due to small message size and network saturation (Fig. 3b). Thus, the highest CF may not necessarily correspond to the largest throughput. Figure 3: Throughput and communication speedup for layerwise DGC compression, normalized by 10$\mathsf{x}$ CF. (a) Relative throughput. (b) Relative communication.
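A back-of-envelope sketch of the compressed iteration-time model above helps show when compression pays off; all timings below are illustrative placeholders, not measurements from our clusters.

```python
def iteration_time(t_compute, t_sync_dense, cf, t_compress, t_decompress,
                   sync_floor=0.0):
    """t_iter(c) ~= t_compute + t_sync(c) + t_compress(c) + t_decompress(c).

    Communication is modeled as shrinking roughly with CF until a latency
    floor dominates (messages become too small for bandwidth to matter).
    """
    t_sync = max(t_sync_dense / cf, sync_floor)
    return t_compute + t_sync + t_compress + t_decompress

# Compression helps only if its own overhead stays well below the saved
# communication time (illustrative numbers, in seconds):
dense = iteration_time(0.05, 0.20, cf=1, t_compress=0.0, t_decompress=0.0)
comp = iteration_time(0.05, 0.20, cf=100, t_compress=0.01, t_decompress=0.005,
                      sync_floor=0.002)
print(f"dense: {dense:.3f}s/iter, compressed: {comp:.3f}s/iter")
```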
(a) Relative throughput (b) Relative communication Figure 3: Throughput and communication speedup for layerwise DGC compression, normalized by 10$\mathsf{x}$ CF.

### III-B Statistical Inefficiency of Gradient Compression

Gradient compression mechanisms rely on _error-feedback_ [35, 36], which essentially acts as delayed updates, as commonly noted in asynchronous training. The gradients ineligible for communication in the current iteration are not discarded, but added to residual gradients which in turn are added to the gradients computed in the next iteration. Residual gradients and error-feedback help preserve important features and are critical to convergence [6, 7, 8]. Applying compression without error-feedback has been shown to achieve lower accuracy in deep learning models [35]. At the same time, residual gradients can sometimes degrade generalization performance due to stale updates. DDP training with very high CFs can negatively impact training time, convergence quality, or both if the compressed gradients are too sparse or too coarsely quantized to update the model in any significant way. _It is thus crucial to have an indicator that quantifies the information loss between the compressed and the original gradients._ We do so by comparing the variance of the original and compressed tensors on every iteration and seeing how it relates to actual model convergence.

Denoting the original gradients as _BC_ (Before-Compression) and compressed tensors as _AC_ (After-Compression), we compare BC and AC tensors in two separate configurations with CFs 10$\mathsf{x}$ and 1000$\mathsf{x}$ in Fig. 4, 5 and 6. We compare the convergence curves for the two CFs with _Dense SGD_ (i.e., no compression) to see how much accuracy degrades with compression. _AC_ 10$\mathsf{x}$ is nearly identical to its _BC_ counterpart in ResNet101 (Fig. 4a), while there is considerably more information loss between _BC_ and _AC_ at 1000$\mathsf{x}$ (Fig. 4b). This translates to the convergence curves in Fig. 4c as well, where the 10$\mathsf{x}$ and Dense SGD runs follow a similar trajectory while 1000$\mathsf{x}$ achieves considerably lower accuracy in the same iterations.

(a) 10$\mathsf{x}$ CF (b) 1000$\mathsf{x}$ CF (c) Convergence curve (d) Compression gain Figure 4: ResNet101: Prior- and post-compression gradients, test accuracy and compression gain for CFs 10$\mathsf{x}$ and 1000$\mathsf{x}$.

VGG16 follows a similar trend with 10$\mathsf{x}$ CF. The _BC_ and _AC_ gradient variance (Fig. 5a) is nearly identical, and so are the convergence curves for 10$\mathsf{x}$ and Dense SGD (Fig. 5c). We notice a slight deviation between _BC_ and _AC_ at 1000$\mathsf{x}$ initially in Fig. 5b, which correlates with slow convergence in the early iterations for 1000$\mathsf{x}$ in Fig. 5c. As the deviation between _BC_ and _AC_ decreases, both CFs converge to the same accuracy as Dense SGD in the same iterations.

(a) 10$\mathsf{x}$ CF (b) 1000$\mathsf{x}$ CF (c) Convergence curve (d) Compression gain Figure 5: VGG16: Prior- and post-compression gradients, test accuracy and compression gain for CFs 10$\mathsf{x}$ and 1000$\mathsf{x}$.

The _AC_ 10$\mathsf{x}$ and 1000$\mathsf{x}$ gradients lie on scales similar to _BC_ in LSTM, although the higher CF has slightly higher variance (Fig. 6a and 6b). As seen from Fig. 6c, Dense SGD has the lowest perplexity (thus, better model quality), followed by the 10$\mathsf{x}$ and 1000$\mathsf{x}$ CFs.
(a) 10$\mathsf{x}$ CF (b) 1000$\mathsf{x}$ CF (c) Convergence curve (d) Compression gain Figure 6: LSTM: Prior- and post-compression gradients, test perplexity (lower is better) and compression gain for CFs 10$\mathsf{x}$ and 1000$\mathsf{x}$.

To compare the information loss between the original gradients and gradients compressed to CF _c_, we define a simple metric called _Compression gain_. As part of error-feedback, we update the gradients as $\mathit{g_{ef}^{(i)}}=\mathit{g_{0}^{(i)}}+\;\mathsf{residual\_gradients}^{(i-1)}$ for $i\geq 1$. Here, $\mathit{g_{0}^{(i)}}$ are the original gradients calculated via backpropagation at iteration $i$, while $\mathsf{residual\_gradients}^{(i-1)}$ are the left-overs from iteration $(i-1)$ and before, which are added back as part of error-feedback to produce $\mathit{g_{ef}^{(i)}}$ for the current iteration. With compression operator $\mathcal{C}$, gradients are compressed as $\mathit{g_{c}^{(i)}}=\mathcal{C}[\mathit{g_{ef}^{(i)}}]$. Compression gain is then measured as the ratio of the expected variance of the compressed gradients $\mathit{g_{c}^{(i)}}$ to that of the original gradients modified with error-feedback, i.e., $\mathit{g_{ef}^{(i)}}$: $\mathsf{Compression\ gain}=\frac{\mathbb{E}[||g_{c}^{(i)}||^{2}]}{\mathbb{E}[||g_{ef}^{(i)}||^{2}]}$

Gradient noise has been well studied in the deep learning literature in the context of divergence between locally-computed and aggregated gradients in DDP [20, 37, 38]. Those works use gradient information to tweak the global batch-size in DDP to optimize job completion time or to allocate optimal resources for a job. Instead of looking at local and global gradients, _GraVAC_'s novelty comes from evaluating the noise between the original and compressed tensors. The gradients computed over each iteration can be noisy. Thus, we keep a moving average of the respective variances of the original and compressed gradients. The computation and memory footprint of this approach is low, since the window size of the moving average is finite and only a single-precision float is stored per iteration. Compression gain is bounded in $(0,1]$ and is low when $\mathcal{C}$ trims too much information. As a model trains, gradients saturate and higher compression becomes more viable in the later stages of training. Hence, compression gain increases over training as compressed tensors become more aligned with the original gradients.

We plot compression gains for the three models when training with fixed CFs 10$\mathsf{x}$ and 1000$\mathsf{x}$ in Fig. 4d, 5d and 6d. In each model, 10$\mathsf{x}$ has higher compression gain than 1000$\mathsf{x}$, since more information is preserved at the smaller CF. _It should also be apparent that Dense SGD training has a constant gain of 1.0._ For all models, the convergence curve of 10$\mathsf{x}$ follows a trajectory similar to Dense SGD. Correspondingly, the compression gain of 10$\mathsf{x}$ stays close to 1.0 throughout. In ResNet101, the gain of 1000$\mathsf{x}$ is low initially and grows in an oscillating manner, although it stays lower than the gains of 10$\mathsf{x}$ and Dense SGD. The low gains in the first 1000 iterations of CF 1000$\mathsf{x}$ correlate with the considerable gap between _BC_ and _AC_ gradients in Fig. 4b and the lower accuracy in Fig. 4c. VGG16 is more robust to higher CFs (Fig. 5c), as also seen from the high compression gains of CF 1000$\mathsf{x}$ in Fig. 5d.
For LSTM, the compression gain for 10$\mathsf{x}$ stays close to 1.0 and between 0.8-0.9 for 1000$\mathsf{x}$. The proximity of the two CFs to Dense SGD's gain of 1.0 mirrors their perplexity curves in Fig. 6c. From these results we see how compression gain serves as a viable indicator of the statistical efficiency of DDP with compression.

### III-C Combining System Throughput and Compression Gain

As described earlier in II-C as well as Fig. 2, choosing a high CF, perhaps unintuitively, does not necessarily improve training time and may even degrade final model quality. Thus, to account for both the parallel and statistical efficiency of DDP training _with_ gradient compression, we combine _system throughput_ ($\text{T}_{system}$) and _compression gain_ into a single metric called _Compression Throughput_: $\text{T}_{compression}=\;\text{T}_{system}\;\times\;\mathsf{Compression\ gain}$ If the CF is high, system throughput will be high as well, but compression gain will be relatively lower, decreasing the resulting $\text{T}_{compression}$. On the other hand, compression gain will be high for a low CF, but system throughput will be lower due to the relatively higher communication overhead. _With Compression Throughput, we capture this Pareto relationship between the parallel (system throughput) and statistical efficiency (compression gain) of gradient compression in DDP._

1 Input: $\theta_{min}$, $\theta_{max}$, $\epsilon$, $\theta_{s}$, $\omega$, $\mathsf{window}$, compressor $\mathcal{C}$
2 $w_{0}$: initial model state; N: total nodes; b: per-worker batch-size; residual = 0; $\text{T}_{sys},\text{T}_{compress}$ = empty()
3 for $i=1,2,3,\dotsc$ $\triangleright$ training iterations
4   $g_{o}^{(i)},t_{o}=\nabla f(x^{(i)},w_{i})$ $\triangleright$ backpropagation
5   $g_{o}^{(i)}=g_{o}^{(i)}+$ residual $\triangleright$ error-feedback
6   $g_{min}^{(i)},t_{min}=\mathcal{C}(g_{o}^{(i)},\theta_{min})$ $\triangleright$ compress to CF $\theta_{min}$
7   $\delta_{min}=\text{EWMA}\big(\frac{||g_{min}^{(i)}||^{2}}{||g_{o}^{(i)}||^{2}}\big)$ $\triangleright$ $\theta_{min}$ compression gain
8   $g_{c}^{(i)},t_{c}^{(i)}=\mathcal{C}(g_{min}^{(i)},\theta_{s})$ $\triangleright$ compress to CF $\theta_{s}\cdot\theta_{min}$
9   $\delta_{c}=\text{EWMA}\big(\frac{||g_{c}^{(i)}||^{2}}{||g_{o}^{(i)}||^{2}}\big)$ $\triangleright$ gain for CF $\theta_{s}\cdot\theta_{min}$
10   $t_{compress}=t_{min}+t_{c}$ $\triangleright$ total compression time
11   if $\delta_{c}\geq\epsilon$:
12     $\tilde{g}^{(i)}$, $t_{s}$ = Aggregate($g_{c}^{(i)}$) $\triangleright$ synchronize $g_{c}^{(i)}$
13     residual = $g_{o}^{(i)}-g_{c}^{(i)}$ $\triangleright$ update residual
14     $t_{iter}$ = $t_{o}$ + $t_{compress}$ + $t_{s}$ $\triangleright$ iteration time
15     UpdateStep($\theta_{s}\cdot\theta_{min},\delta_{c},t_{iter}$)
16   else if $\delta_{c}<\epsilon\;\text{and}\;\delta_{min}\geq\epsilon$:
17     $\tilde{g}^{(i)}$, $t_{s}$ = Aggregate($g_{min}^{(i)}$) $\triangleright$ synchronize $g_{min}^{(i)}$
18     residual = $g_{o}^{(i)}-g_{min}^{(i)}$ $\triangleright$ update residuals
19     $t_{iter}$ = $t_{o}$ + $t_{compress}$ + $t_{s}$ $\triangleright$ iteration time
20     UpdateStep($\theta_{min},\delta_{min},t_{iter}$)
21   else
22     $\tilde{g}^{(i)}$, $t_{s}$ = Aggregate($g_{o}^{(i)}$) $\triangleright$ synchronize $g_{o}^{(i)}$
23     residual = 0 $\triangleright$ no residual gradients
24     $t_{iter}$ = $t_{o}$ + $t_{s}$ $\triangleright$ iteration time
25     UpdateStep($1,1,t_{iter}$)
26   $w_{i+1}=w_{i}-\eta\cdot\tilde{g}^{(i)}$ $\triangleright$ apply SGD update
27   $\theta_{s}$ = CheckGraVAC($i,\theta_{s},\delta_{min},\delta_{c}$)
28 procedure UpdateStep($\theta,\delta,t_{iter}$):
29   $\text{T}_{sys}$ = $\text{N}\cdot\text{b}/t_{iter}$ $\triangleright$ system throughput
30   $\text{T}_{compress}[\theta]=\text{T}_{sys}\cdot\delta$ $\triangleright$ compression throughput
31 procedure CheckGraVAC($i,\theta_{s},\delta_{min},\delta_{c}$):
32   if $i$ % $\mathsf{window}==0$:
33     $\theta_{s}=\textsf{ScalingPolicy}(\theta_{s})$ $\triangleright$ compression scale-up
34     if $\omega\geq\frac{|\delta_{min}-\delta_{c}|}{\delta_{min}}$:
35       $\theta_{min}=\theta_{s}\cdot\theta_{min}$ $\triangleright$ scale up minimum CF
36     ct = sort($\text{T}_{compress}.\text{values}()$) $\triangleright$ $\text{T}_{compress}$ values
37     if $|\frac{\text{ct}[-1]\;-\;\text{ct}[-2]}{\text{ct}[-2]}|\leq\omega$:
38       $\theta_{ideal}=\text{T}_{compress}.get(\text{ct}[-2])$ $\triangleright$ ideal CF
39       return $\theta_{ideal}/\theta_{min}$ $\triangleright$ gives optimal $\theta_{s}$
40   else
41     return $\theta_{s}$ $\triangleright$ else use old scaling factor

Algorithm 1: _GraVAC_'s Adaptive Compression

We build _GraVAC_ as a modular extension on top of PyTorch's [33] DDP module [34], written in Python in about 3000 lines of code. A base $\mathsf{GravacOptimizer}$ wraps the common SGD optimizers implemented in PyTorch by extending the base $\mathsf{torch.optim.Optimizer}$ class. The optimizer takes an additional $\mathsf{Compressor}$ object that specifies the type of compression technique used. We implement four pre-existing techniques as compressor classes in this paper: Top-k, DGC, Redsync and Random-k. Compression for the appropriate CF and its gain are computed before the optimizer $\mathsf{step}$ function, which applies the aggregated gradient updates to the model parameters.

_GraVAC Algorithm:_ Alg. 1 describes _GraVAC_'s approach of using compressor $\mathcal{C}$ to scale CFs in the exploration space [$\theta_{min},\theta_{max}$], where each candidate CF is evaluated for $\mathsf{window}$ steps and incremented in step-size $\theta_{s}$ w.r.t. $\theta_{min}$. For example, scaling from CF 10$\mathsf{x}$ to CF 20$\mathsf{x}$ means $\theta_{s}=20/10=2\mathsf{x}$. The threshold $\epsilon$ denotes the minimum compression gain required for any CF to be eligible for communication in _GraVAC_, while the threshold $\omega$ is used to measure saturation in compression throughputs and for scaling up $\theta_{min}$. We explain this in more detail in the following sections. For every iteration, we compute gradients $g_{o}^{(i)}$ with model parameters $w_{i}$ on training sample $x^{(i)}$ in time $t_{o}$ (line 4). To incorporate error-feedback, residual holds the leftover gradients not communicated in previous iterations. The shape and memory size of the tensors in residual are the same as those of the gradients themselves. As shown in line 5, we add the residual gradients to the gradients computed in the current iteration. In the first stage, we compress the original gradients using $\mathcal{C}$ to compressed gradients $g_{min}^{(i)}$ corresponding to the minimum CF $\theta_{min}$ (line 6). We then compute the compression gain corresponding to $\theta_{min}$ (line 7), and smooth the inter-iteration gain through exponentially weighted moving average (EWMA) smoothing. In our evaluation, we set the EWMA smoothing factor to $N$/100, where $N$ is the number of participating workers.
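The gain computation of lines 5-7 is easy to sketch in PyTorch. The snippet below is illustrative rather than _GraVAC_'s actual code; in particular, reading the smoothing factor as $\alpha=N/100$ is our interpretation of the EWMA setting above, and Top-k stands in for the generic compressor $\mathcal{C}$.

```python
import torch

class GainTracker:
    """EWMA-smoothed compression gain: gain = ||g_c||^2 / ||g_ef||^2."""
    def __init__(self, alpha: float):
        self.alpha, self.ewma = alpha, None

    def update(self, g_ef: torch.Tensor, g_c_values: torch.Tensor) -> float:
        # For sparsification, the norm over the kept values equals the norm
        # of the compressed tensor, since dropped entries are zero.
        gain = (g_c_values.pow(2).sum() / g_ef.pow(2).sum()).item()
        self.ewma = gain if self.ewma is None else (
            self.alpha * gain + (1 - self.alpha) * self.ewma)
        return self.ewma

tracker = GainTracker(alpha=32 / 100)        # N/100 with N = 32 workers
g_o = torch.randn(10_000)                    # backprop gradients (line 4)
residual = torch.zeros_like(g_o)
g_ef = g_o + residual                        # error-feedback (line 5)
k = g_ef.numel() // 10                       # theta_min = 10x via Top-k
_, idx = torch.topk(g_ef.abs(), k)           # compress (line 6)
delta_min = tracker.update(g_ef, g_ef[idx])  # theta_min gain (line 7)
print(delta_min)
```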
We evaluate the next candidate CF by stepping up the previous $\theta_{min}$ and further compressing the already compressed gradients $g_{min}^{(i)}$ by step-size $\theta_{s}$ (line 8). Thus, the candidate CF evaluated in this case is $\theta_{s}\cdot\theta_{min}$. This is done as part of our multi-level compression strategy to avoid compressing the large, original tensors $g_{o}^{(i)}$ twice. We measure the time savings of our multi-level approach in section IV-C. Next, we compute the gradients and compression gain of candidate CF $\theta_{s}\cdot\theta_{min}$ (lines 8-9), and denote the total compression time $t_{compress}$ as the sum of the time to compress the original gradients to $g_{min}^{(i)}$ (line 6) and the time to further compress $g_{min}^{(i)}$ to $g_{c}^{(i)}$ (line 8).

Based on the compression gains obtained and the threshold $\epsilon$, we choose the appropriate gradients on which to call the collective operation. If the gain of our candidate CF meets $\epsilon$ (line 11), we go ahead and communicate the compressed gradients $g_{c}^{(i)}$ among workers. We update the residual gradients in accord with $g_{c}^{(i)}$ as well (line 13), calculate the total iteration time (line 14), and update the system throughput as well as the compression throughput for CF $\theta_{s}\cdot\theta_{min}$ via the $\mathsf{UpdateStep}$ function. $\text{T}_{compress}$ is a dictionary (hashmap) that stores the compression throughput of each candidate CF, the min-max CFs, as well as the dense SGD setting (i.e., CF 1$\mathsf{x}$). If the gain of $g_{c}^{(i)}$ does not meet the threshold, but the gain $\delta_{min}$ of $\theta_{min}$ does (line 16), we instead synchronize the compressed gradients $g_{min}^{(i)}$ corresponding to $\theta_{min}$. In a similar fashion as before, we update the residuals, this time with $g_{min}^{(i)}$ instead of $g_{c}^{(i)}$ (line 18), compute the iteration time and assess the compression throughput. _It is important to remember that the synchronization overhead to communicate $g_{min}^{(i)}$ is more than that of $g_{c}^{(i)}$ due to the former's lower CF. The trade-off we make in GraVAC is to incur higher communication latency for a more accurate representation of the original gradients (measured by compression gain), and vice-versa._ If both $\theta_{min}$ and the currently evaluated CF fail to meet the set threshold, we incur maximum communication latency by transmitting the original gradients via dense SGD (line 22). In this case, residual gradients are set to 0, and no compression overhead is included in the iteration time or in computing system/compression throughput. The CF and compression gain are both 1, as set in the $\mathsf{UpdateStep}$ call at line 25.

Following the SGD update (line 26), we evaluate _GraVAC_ to assess the performance of the CFs evaluated so far. This happens at a frequency determined by $\mathsf{window}$. Here, we adjust $\theta_{s}$ by a certain factor to scale up compression, determined by the chosen ScalingPolicy. The scaling policy tunes compression only up to the upper bound $\theta_{max}$. We explore two scaling policies in this paper, described in detail under section IV-B. After scaling $\theta_{s}$, we also assess whether the minimum CF, i.e., $\theta_{min}$, can be scaled up as well. The intuition is that as training progresses, the model gradually starts converging, and we can use higher compression even for the minimum CF later on. In addition to candidate CFs, we thus scale up the minimum CF as well.
The transition is made if the current gain $\delta_{c}$ is within $\omega$% of the gain of the previous $\theta_{min}$ (line 34). Once enough CFs are evaluated, we look at the two largest compression throughputs (line 36) and fetch the corresponding CF if they are within the bounds of $\omega$. We do this because it means the compression throughput has saturated; we thus pick the lower CF as $\theta_{ideal}$ (line 38) and return the appropriate step-size (line 39). If the threshold $\omega$ is not met, we use $\theta_{s}$ as is.

_When does compression scale up?_ As seen from Alg. 1, compression scale-up happens during _GraVAC_'s evaluation phase, where we scale the step-size $\theta_{s}$ in accordance with a specific scaling policy. At the same time, we escalate the minimum CF $\theta_{min}$ to the currently evaluated CF if the two compression gains are within $\omega$% of each other.

_When does compression scale down?_ Compression scale-down is determined by $\epsilon$ (shown via the conditional statements in lines 11-25). If the current CF loses considerably more information in the compressed gradients $g_{c}^{(i)}$, we use the lower CF $\theta_{min}$. If the latter fails to meet $\epsilon$ as well, we send uncompressed gradients $g_{o}^{(i)}$ as a last resort.

## IV Evaluation

### IV-A Cluster Setup and Training Hyperparameters

We evaluate _GraVAC_ on a 32 GPU setup on the Google Cloud Platform (GCP) across 8 VMs. Each VM is an $\mathsf{n1}$-$\mathsf{standard}$-$\mathsf{8}$ machine type with 8 vCPUs, 30 GB system memory and 4 NVIDIA V100 GPUs with 16 GB VRAM each. The machines are configured with PyTorch 1.10.1, CUDA 11.3, CUDA driver 465.19.01 and NCCL 2.10.3. We evaluate the three models described in Table 1. ResNet101 is trained with per-worker batch size 32, momentum 0.9, weight decay 0.0001 and the SGD optimizer with initial learning rate (lr) 0.1, decayed by a factor of 10 at 9K and 14K iterations respectively. VGG16 is also trained with per-worker batch size 32, weight decay 0.0005, momentum 0.9 and SGD with fixed lr 0.1. Lastly, LSTM is evaluated on test perplexity (i.e., the exponential of the test loss) with per-worker batch size 20, momentum 0.9, weight decay 0.0001 and SGD with fixed lr 0.1. The model is initialized with 1500 embedding dimensions and 2 hidden layers with 35 BPTT steps. We evaluate _GraVAC_ with different scaling policies and look at their convergence curves (i.e., test accuracy/perplexity vs. iterations), the average compression throughput of candidate CFs, and kernel density estimates (KDE) of training iterations using different CFs over the course of training. KDE gives the distribution of iterations over all CFs, plotted on the log-scale with a smoothing bandwidth of $0.1$ passed to the Gaussian KDE.

### IV-B _GraVAC_'s Adaptive Compression Policies

In this section, we look at how _GraVAC_ achieves the optimal CF for a given $\theta_{min}$, $\theta_{max}$, $\epsilon$, $\mathsf{window}$, $\omega$ and $\mathsf{stepsize}$. To see how a model converges and communication costs vary as different candidate CFs in the search space are evaluated, we employ an _Exponential_ policy that upscales CFs aggressively, and a relatively smoother _Geometric_ scaling policy that scales CFs as a geometric progression; a sketch of both follows below.

#### IV-B1 Exponential scaling policy

In this policy, we implement the ScalingPolicy function from Alg. 1 such that CFs are scaled up in exponents of 2 w.r.t. the first initialized $\theta_{min}$.
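Here is a minimal sketch of the two ScalingPolicy variants and the candidate CFs they enumerate. This is our reconstruction of the schedules described in this section (step-sizes squared for the exponential policy, doubled for the geometric one, capped at $\theta_{max}$), not the actual implementation.

```python
def exponential_policy(theta_s: float) -> float:
    # square the step-size: 2 -> 4 -> 16 -> 256 (exponents of 2 that double)
    return theta_s ** 2

def geometric_policy(theta_s: float) -> float:
    # geometric progression with common ratio 2: 2 -> 4 -> 8 -> 16 -> ...
    return 2 * theta_s

def candidate_cfs(policy, theta_min: float, theta_max: float):
    # CFs GraVAC would evaluate (one per window), capped at theta_max
    cfs, theta_s = [theta_min], 2.0
    while cfs[-1] < theta_max:
        cfs.append(min(theta_s * theta_min, theta_max))
        theta_s = policy(theta_s)
    return cfs

print(candidate_cfs(exponential_policy, 10, 1000))  # [10, 20, 40, 160, 1000]
print(candidate_cfs(geometric_policy, 10, 2000))    # [10, 20, ..., 1280, 2000]
```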
On top of DGC, we set $\theta_{min}$ and $\theta_{max}$ to 10$\mathsf{x}$ and 1000$\mathsf{x}$, $\mathsf{window}$=500 and $\omega$=1%. We thus scale up by factors of $2^{1}$, $2^{2}$, $2^{4}$, $2^{8}$ w.r.t. 10$\mathsf{x}$, up until 1000$\mathsf{x}$. The candidate CFs evaluated in this policy are therefore 10$\mathsf{x}$, 20$\mathsf{x}$, 40$\mathsf{x}$, 160$\mathsf{x}$ and 1000$\mathsf{x}$. We run _GraVAC_ in two configurations with different thresholds on compression gain, $\epsilon$ = 0.7 and 0.9. The lower $\epsilon$ relaxes the constraint on the gain for higher CFs to be eligible for communication, thus achieving higher compression. A large $\epsilon$ (i.e., close to 1) allows compression only if the compressed tensors are highly representative of the original gradients. First, we compare these two thresholds with Dense SGD, as the latter demonstrates the ideal convergence scenario. Then, we compare _GraVAC_ with different compression techniques at static CFs and look at final model accuracy, communication savings and overall speedup.

_ResNet101:_ Fig. 7 shows how _GraVAC_ achieves the same convergence as dense SGD in the same number of iterations. The low and high $\epsilon$ reduce the overall communication volume by 163$\times$ and 19$\times$ over dense SGD. _We measure communication volume as the ratio of cumulative single-precision floats exchanged among workers in GraVAC relative to dense SGD._ The training cycle is slightly more volatile with compression, as seen from the accuracy drop due to lr decay around the 9000th iteration. The drop is more apparent for $\epsilon$ = 0.7, as we continue to train with higher CFs on account of the lower threshold. Comparatively, $\epsilon$ = 0.9 is more robust to hyperparameter changes like lr decay, as we tend to train with a lower CF due to the higher threshold. This is corroborated by Fig. 7b, which shows the distribution of training iterations over the CFs. We train about equally with 10$\mathsf{x}$ and 1000$\mathsf{x}$ for $\epsilon$ = 0.9, while we mostly train with 1000$\mathsf{x}$ for $\epsilon$ of 0.7. Regarding the compression throughputs of $\epsilon$ = 0.9 in Fig. 7c, it might seem counterintuitive at first that although $T_{compression}$ is maximal for 1000$\mathsf{x}$ and minimal for 10$\mathsf{x}$, we still train evenly with the two CFs. This is on account of the high threshold, and because $\theta_{min}$ did not scale up and remained at 10$\mathsf{x}$ for ResNet101. Thus, whenever the compression gain of a candidate CF did not meet the threshold, we synchronized gradients compressed at 10$\mathsf{x}$. For $\epsilon$ of 0.7, compression throughput was maximal for 1000$\mathsf{x}$ and we trained at this CF for most iterations, as the corresponding gain easily met that threshold.

(a) Test accuracy (b) Iteration density (c) $T_{compression}$ Figure 7: ResNet101: _GraVAC_ with $\epsilon$ = [0.7, 0.9] and Dense SGD.

_VGG16:_ Like ResNet101, VGG16 also converges to the same accuracy as dense SGD within the same iterations, where $\epsilon$ = 0.7 and 0.9 reduce communication volume by 80$\times$ and 13.5$\times$ over dense SGD (Fig. 8). Although $T_{compression}$ is maximal at 1000$\mathsf{x}$ for $\epsilon$ = 0.9, the corresponding gain was _not_ high enough to meet the threshold. Because of this, we switch back to $\theta_{min}$ and thus train with 10$\mathsf{x}$ for the majority of iterations, as seen from the kernel density estimates in Fig. 8b. However, when $\epsilon$ was lower, the 40$\mathsf{x}$ CF was able to meet that threshold.
The $T_{compression}$ corresponding to this CF was the second largest in our exploration space. As candidate CFs are evaluated over the iterations, the model gradually converges and, as a result, compression gain improves even further at larger CFs as training progresses. Ultimately, we arrive at $\theta_{ideal}$ = 1000$\mathsf{x}$, corresponding to the maximum compression throughput (Fig. 8c).

(a) Test accuracy (b) Iteration density (c) $T_{compression}$ Figure 8: VGG16: _GraVAC_ with $\epsilon$ = [0.7, 0.9] and Dense SGD.

_LSTM:_ Like the models before, _GraVAC_ with either $\epsilon$ converged in the same iterations as dense SGD training, while reducing the communication volume by 279$\times$ and 289$\times$ for $\epsilon$ of 0.9 and 0.7 respectively. Given the dataset, model and training hyperparameters, we already saw from Fig. 6d that the compression gain for LSTM was high for both 10$\mathsf{x}$ and 1000$\mathsf{x}$. We observed a similar trend here, as the compression gain corresponding to 1000$\mathsf{x}$ easily satisfied both thresholds; thus, we train with the largest available CF for most iterations (Fig. 9b). Correspondingly, the compression throughput is maximal at this CF as well.

(a) Test accuracy (b) Iteration density (c) $T_{compression}$ Figure 9: LSTM: _GraVAC_ with $\epsilon$ = [0.7, 0.9] and Dense SGD.

TABLE 2: _GraVAC_'s model quality and speedup over static CFs

| Model | Compression | Acc./Ppl | Diff. | Speedup |
|---|---|---|---|---|
| ResNet101 | Top-k 10$\mathsf{x}$ | 80.14% | +0.14% | 1$\times$ |
| | Top-k 1000$\mathsf{x}$ | 76.4% | $-$3.6% | 3.02$\times$ |
| | DGC 10$\mathsf{x}$ | 80.4% | +0.4% | 1.23$\times$ |
| | DGC 1000$\mathsf{x}$ | 78.6% | $-$1.4% | 5.19$\times$ |
| | Redsync 10$\mathsf{x}$ | 79.4% | $-$0.6% | 1.2$\times$ |
| | Redsync 1000$\mathsf{x}$ | 77.4% | $-$2.6% | 6.94$\times$ |
| | Random-k 10$\mathsf{x}$ | - | - | - |
| | Random-k 1000$\mathsf{x}$ | - | - | - |
| | _GraVAC_ | 80.2% | +0.2% | 4.32$\times$ |
| VGG16 | Top-k 10$\mathsf{x}$ | 91.2% | +1.2% | 1$\times$ |
| | Top-k 1000$\mathsf{x}$ | 90.68% | +0.68% | 3.22$\times$ |
| | DGC 10$\mathsf{x}$ | 90.8% | +0.8% | 0.935$\times$ |
| | DGC 1000$\mathsf{x}$ | 90.4% | +0.4% | 3.35$\times$ |
| | Redsync 10$\mathsf{x}$ | 90.45% | +0.45% | 0.99$\times$ |
| | Redsync 1000$\mathsf{x}$ | 90.3% | +0.3% | 3.6$\times$ |
| | Random-k 10$\mathsf{x}$ | 87.8% | $-$2.2% | 0.7$\times$ |
| | Random-k 1000$\mathsf{x}$ | - | - | - |
| | _GraVAC_ | 90.48% | +0.48% | 1.95$\times$ |
| LSTM | Top-k 10$\mathsf{x}$ | 22.0 | +0.0 | 1$\times$ |
| | Top-k 1000$\mathsf{x}$ | 26.78 | $-$4.78 | 3.36$\times$ |
| | DGC 10$\mathsf{x}$ | 21.67 | +0.33 | 1.23$\times$ |
| | DGC 1000$\mathsf{x}$ | 25.14 | $-$3.14 | 6.25$\times$ |
| | Redsync 10$\mathsf{x}$ | 21.65 | +0.35 | 1.17$\times$ |
| | Redsync 1000$\mathsf{x}$ | 24.24 | $-$2.24 | 6.9$\times$ |
| | Random-k 10$\mathsf{x}$ | 24.15 | $-$2.15 | 1.3$\times$ |
| | Random-k 1000$\mathsf{x}$ | - | - | - |
| | _GraVAC_ | 21.25 | +0.75 | 6.67$\times$ |

Further, we compare _GraVAC_ with static CFs running on different compression techniques. In particular, we train our models with Top-k, DGC, Redsync and Random-k at CFs 10$\mathsf{x}$ and 1000$\mathsf{x}$. We run each compression technique until the final accuracy/perplexity does not improve any further, and report that final value, the difference in convergence compared to the dense SGD baseline from Table 1, and the relative training speedup over Top-k 10$\mathsf{x}$ for each model. The results are tabulated in Table 2.
We do not consider dense SGD training in this comparison, since we already established previously that _GraVAC_ is able to achieve the same convergence in the same iterations, and other compression techniques have already been compared to dense SGD in prior works. For ResNet101, 1000$\mathsf{x}$ CF on Redsync, DGC and Top-k has considerably higher speedups than 10$\mathsf{x}$ Top-k. However, these methods at 1000$\mathsf{x}$ CF achieve considerably lower accuracy than Top-k at 10$\mathsf{x}$. At 1000$\mathsf{x}$, Top-k, DGC and Redsync do not improve beyond 76.4%, 78.6% and 77.4% top-1 test accuracy. Random-k failed to converge at either CF, and its accuracy did not improve beyond 20%. Because of _GraVAC_'s adaptive scheme, we converge to 80.2% accuracy while still reducing training time by 4.32$\times$. For VGG16, we previously observed that the model is already quite robust to high compression (Fig. 5). We see that again here: Top-k, DGC and Redsync at 1000$\mathsf{x}$ all cross 90% accuracy with 3.22, 3.35 and 3.6$\times$ speedup over Top-k 10$\mathsf{x}$. Random-k at 10$\mathsf{x}$ also converged, albeit to a lower 87.8% accuracy and with slower convergence. Since _GraVAC_ attains 90.48% test accuracy with 1.95$\times$ training speedup, other compression schemes were more optimal in this case simply because they used high CFs. In LSTM, _GraVAC_ obtains the lowest perplexity of 21.25 while still providing the maximum speedup of 6.67$\times$ over Top-k 10$\mathsf{x}$. Random-k 10$\mathsf{x}$ converged to 24.15 perplexity and did not improve further, while Random-k 1000$\mathsf{x}$ failed here again. Of all the configurations, only Top-k, DGC and Redsync at 10$\mathsf{x}$ CF and _GraVAC_ achieved better perplexity than dense SGD. Thus, we see how _GraVAC_ is able to train models like ResNet101 and LSTM to high accuracy/perplexity and still reduce training time significantly. Static compression schemes achieve high accuracy at low CFs at the cost of high communication overhead, thus providing lower speedup. Large CFs considerably reduce communication, but the final model quality is not on par with _GraVAC_. On the flip side, some over-parameterized models like VGG16 can be robust to compression and still converge successfully at high static CFs.

#### IV-B2 Geometric scaling policy

We also propose a relatively smoother compression policy where ScalingPolicy increments CFs as a geometric progression with common ratio 2. We deploy _GraVAC_ with Redsync on ResNet101 and set $\theta_{min}$ = 10$\mathsf{x}$, $\theta_{max}$ = 2000$\mathsf{x}$, $\epsilon$ = 0.7, $\mathsf{window}$ = 2000 steps and $\omega$ = 1%. Thus, the candidate CFs are 10$\mathsf{x}$, 20$\mathsf{x}$, 40$\mathsf{x}$, 80$\mathsf{x}$, 160$\mathsf{x}$, 320$\mathsf{x}$, 640$\mathsf{x}$, 1280$\mathsf{x}$ and 2000$\mathsf{x}$. Fig. 10a shows the accuracy curve over the iterations. Compared to dense SGD (Fig. 7a), _GraVAC_ with geometric scaling converged _while reducing communication volume by 76$\times$_. In contrast to exponential scaling, convergence is relatively slower because we evaluate each candidate CF for a larger $\mathsf{window}$ size. As a result, gradients become even smaller as _GraVAC_ gradually arrives at larger CFs, and the compression gain increases beyond $\epsilon$. Thus, we see similar iteration densities from CF 10$\mathsf{x}$ to 640$\mathsf{x}$ (Fig. 10b). After the first 7 CFs are evaluated over 2000 steps each, we mostly train with CF 1280$\mathsf{x}$ from 16K iterations onward (because 8 $\times$ 2000 = 16000).
We did not scale to 2000$\mathsf{x}$ in our evaluation, since the compression throughputs for 1280$\mathsf{x}$ and 2000$\mathsf{x}$ were 1029.9 and 1035.4, which falls within $\omega$'s bound of 1%. _This case highlights the effectiveness of GraVAC: it does not scale the CF beyond the point where doing so stops improving the parallel or statistical efficiency of gradient compression._ In this case, _GraVAC_ does not compress beyond 1280$\mathsf{x}$, as that CF corresponds to the maximum compression throughput (and at a lower CF of 1280$\mathsf{x}$ compared to 2000$\mathsf{x}$).

(a) Convergence (b) $T_{compression}$ and KDE Figure 10: ResNet101: _GraVAC_ with the Geometric scaling policy.

### IV-C Gains of Multi-level Compression in _GraVAC_

Alg. 1 explains how, at each iteration, _GraVAC_ scales compression from the initial $\theta_{min}$ to the current CF being evaluated (i.e., $\theta_{c}$), up to the maximum allowed $\theta_{max}$. Compressing the original gradients (computed in the backward pass) twice, i.e., once for $\theta_{min}$ and then again for $\theta_{c}$, can incur significant overhead, especially on larger models. The latency of a compressor may vary with the size of the tensor to compress as well as the target CF. To reduce the cumulative overhead of compressing the original tensors multiple times, we apply a multi-level compression scheme as follows: given a compressor $\mathcal{C}$ and a tensor $\mathcal{X}$ to be compressed to CFs $\theta_{1}$ and $\theta_{2}$ such that $\theta_{2}>\theta_{1}$, rather than applying each CF to $\mathcal{X}$ as $\mathcal{X}_{1}=\mathcal{C}(\theta_{1},\mathcal{X})\;\text{and}\;\mathcal{X}_{2}=\mathcal{C}(\theta_{2},\mathcal{X})$ to produce compressed tensors with $|\mathcal{X}_{2}|<|\mathcal{X}_{1}|<|\mathcal{X}|$, in _GraVAC_ we first compute $\mathcal{X}_{1}$ and then compress this tensor to $\theta_{2}^{{}^{\prime}}$ to produce $\mathcal{X}_{2}^{{}^{\prime}}$: $\mathcal{X}_{1}=\mathcal{C}(\theta_{1},\mathcal{X})\;\Longrightarrow\mathcal{X}_{2}^{{}^{\prime}}=\mathcal{C}(\theta_{2}^{{}^{\prime}},\mathcal{X}_{1})\;:\;\theta_{2}^{{}^{\prime}}=\frac{\theta_{2}}{\theta_{1}}$ The resulting tensor $\mathcal{X}_{2}^{{}^{\prime}}$ is such that $\mathcal{X}_{2}^{{}^{\prime}}=\mathcal{X}_{2}$ for $\theta_{2}^{{}^{\prime}}=\theta_{2}/\theta_{1}$. The appeal of doing so is that the second compression operation is applied to the smaller tensor $\mathcal{X}_{1}$ instead of to $\mathcal{X}$ again. We tabulate the savings of multi-level compression in Table 3. Consider a scaling case of _GraVAC_ where $\theta_{min}=10\mathsf{x}$ and the current CF evaluated is 1000$\mathsf{x}$. Multi-level _GraVAC_ first compresses to 10$\mathsf{x}$ and then further compresses the reduced tensors to 100$\mathsf{x}$, i.e., $\theta_{1}=10\mathsf{x}$ and $\theta_{2}^{{}^{\prime}}=100\mathsf{x}$ so that $\theta_{2}=1000\mathsf{x}$. In the direct approach, we first compress the original gradients to 10$\mathsf{x}$, then compress the original gradients again to 1000$\mathsf{x}$. From our results, we see that multi-level compression is at least 1.1$\times$ and up to 1.83$\times$ faster than directly compressing the original tensors twice.
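A minimal sketch of the two-stage scheme, with Top-k standing in for $\mathcal{C}$ (our illustration, not the paper's code). For Top-k specifically the equality $\mathcal{X}_{2}^{\prime}=\mathcal{X}_{2}$ holds exactly, since the top 0.1% of entries by magnitude is a subset of the top 10%; for other compressors the two-stage result is generally an approximation.

```python
import torch

def topk(x: torch.Tensor, cf: float):
    k = max(1, int(x.numel() / cf))
    _, idx = torch.topk(x.abs(), k)
    return x[idx], idx

g = torch.randn(1_000_000)            # original tensor X

# direct: compress the large tensor twice
x1_vals, x1_idx = topk(g, 10.0)       # X1 = C(theta1, X)
x2_vals, x2_idx = topk(g, 1000.0)     # X2 = C(theta2, X)

# multi-level: the second pass runs on the 10x smaller X1
x2m_vals, sub = topk(x1_vals, 100.0)  # theta2' = theta2/theta1 = 100x
x2m_idx = x1_idx[sub]                 # map back to positions in X

# same selected coordinates either way (holds for Top-k, barring ties)
assert torch.equal(x2m_idx.sort().values, x2_idx.sort().values)
```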
TABLE 3: _GraVAC_'s multi-level (MTL) compression speedup

| Model | Method | Direct (ms) | MTL (ms) | Speedup |
|---|---|---|---|---|
| ResNet101 | Top-k | 606 | 332 | 1.83$\times$ |
| | DGC | 90 | 59 | 1.52$\times$ |
| | Redsync | 33 | 29.8 | 1.1$\times$ |
| | Random-k | 23 | 14 | 1.64$\times$ |
| VGG16 | Top-k | 181 | 121 | 1.49$\times$ |
| | DGC | 122 | 95.5 | 1.27$\times$ |
| | Redsync | 101.4 | 87.7 | 1.16$\times$ |
| | Random-k | 41.6 | 31 | 1.34$\times$ |
| LSTM | Top-k | 200 | 126 | 1.59$\times$ |
| | DGC | 88 | 63 | 1.4$\times$ |
| | Redsync | 69.4 | 46.4 | 1.5$\times$ |
| | Random-k | 56.4 | 37.4 | 1.5$\times$ |

### IV-D Comparing _GraVAC_ with Prior Art

In this section, we compare _GraVAC_ with another adaptive scheme called Accordion [31]. For the three models, we use bounds of Rank-1 and Rank-4 for compression in Accordion, as described in [31], and compare with _GraVAC_ in terms of communication and time savings (i.e., training speedup) to reach the same test accuracy/perplexity. The savings are normalized by Accordion's performance for each respective model, as shown in Table 4. For ResNet101, _GraVAC_ reduces the total communication volume by 44.5$\times$ and reduces training time by 1.94$\times$ over Accordion. _GraVAC_ speeds up training by 5.63$\times$ over Accordion for communication-heavy models like VGG16. In LSTM training, _GraVAC_ converges twice as fast by reducing communication volume up to 104.2$\times$. Accordion is based on detecting critical regions during training, i.e., when the inter-iteration gradients computed in the backward pass change significantly and cross a certain user-defined threshold. Accordion switches between 2 compression factors such that it uses the low CF in critical regions and the higher CF otherwise. On the other hand, _GraVAC_ looks at the information loss on account of compression (i.e., statistical efficiency), and not just the relative gradient change in sensitive regions of training. That is, _GraVAC_ looks at intra-iteration gradients as well (between the original gradients and gradients compressed at different CFs). Additionally, _GraVAC_ scales compression across a wider range and carefully inspects intermediary CFs as potential compression candidates. Thus, we obtain higher speedups when training with _GraVAC_.

TABLE 4: _GraVAC_ vs. Accordion: Communication and Time savings

| Model | Method | Floats sent | Comm. sav. | Time sav. |
|---|---|---|---|---|
| ResNet101 | Accordion | 4.17 $\times 10^{11}$ | 1$\times$ | 1$\times$ |
| | _GraVAC_ | $\mathbf{9.38\times 10^{9}}$ | $\mathbf{44.5\times}$ | $\mathbf{1.94\times}$ |
| VGG16 | Accordion | 3.83 $\times 10^{11}$ | 1$\times$ | 1$\times$ |
| | _GraVAC_ | $\mathbf{1.7\times 10^{10}}$ | $\mathbf{22.4\times}$ | $\mathbf{5.63\times}$ |
| LSTM | Accordion | 4.2 $\times 10^{11}$ | 1$\times$ | 1$\times$ |
| | _GraVAC_ | $\mathbf{4\times 10^{9}}$ | $\mathbf{104.2\times}$ | $\mathbf{2.06\times}$ |

#### IV-D1 _GraVAC_ vs. Accordion on Random-k Compression

We previously saw in Fig. 2b and Table 2 that ResNet101 failed to converge at any CF with Random-k compression. In this section, we present a special case of using Random-k under the hood with both _GraVAC_ and Accordion. Although the compression quality of Random-k is lower compared to other compressors, we present this as a special case to demonstrate how _GraVAC_ is more dynamic and operates at a finer granularity. We launch _GraVAC_ with Random-k on $\theta_{min}$ = 1.5$\mathsf{x}$, $\theta_{max}$ = 1000$\mathsf{x}$, $\mathsf{window}$ = 2000 and $\epsilon$ = 0.7. The CFs are scaled up via the _geometric scaling policy_.
Accordion was also deployed with the same min-max bounds on CF as _GraVAC_, i.e., low CF = 1.5$\mathsf{x}$ and high CF = 1000$\mathsf{x}$. The convergence curves comparing _GraVAC_ and Accordion are shown in Fig. 11a. Unlike static 10$\mathsf{x}$ Random-k compression (Fig. 2b), which failed to converge, we were able to reach 78% top-1 test accuracy for ResNet101 with _GraVAC_. The CFs used for training by _GraVAC_ were 1.5$\mathsf{x}$, 3$\mathsf{x}$, 6$\mathsf{x}$, 12$\mathsf{x}$, 24$\mathsf{x}$ and 48$\mathsf{x}$. All candidate CFs beyond these were ignored, as they did not meet the required threshold $\epsilon$. CF 12$\mathsf{x}$ has the highest density, implying most iterations used this CF for training (Fig. 11b). Correspondingly, compression throughput is maximal for this CF as well. Compared to dense SGD, we reduced the overall communication volume by 18$\times$. As for Accordion on Random-k, we see in Fig. 11a that training saturates at 20% accuracy. This is because Accordion does _not_ consider the efficacy of the compression technique itself, and only switches between a low and high CF when the uncompressed, inter-iteration gradients change beyond a certain measure. With the low CF of 1.5$\mathsf{x}$, the information loss of Random-k was too high to update ResNet101 in a meaningful way.

(a) Model convergence (b) _GraVAC_ $T_{comp.}$ and KDE Figure 11: _GraVAC_ and Accordion on Random-k compression.

## V Conclusion

Gradient noise has previously been used as a scalability indicator for batch- and cluster-size scaling in deep learning [39, 20, 19, 37, 38]. Adaptive compression schemes like Accordion [31] switch between two compression levels when the inter-iteration gradients change by some margin. _GraVAC_'s key insight is to tweak the compression factor over the course of training while balancing the Pareto relationship between parallel and statistical efficiency in gradient compression. We use "compression gain" to measure the information loss on account of compression and choose a CF appropriately. In our evaluation, we see that _GraVAC_ converges 1.95 to 6.67$\times$ faster than choosing a static CF, while converging in the same number of iterations as dense SGD. Compared to Accordion, we observed up to a 5.63$\times$ reduction in end-to-end training time. One should be mindful when training models with _GraVAC_, as it introduces parameters like the compression threshold ($\epsilon$) and $\mathsf{window}$ size that may affect overall training performance. Setting too small a $\mathsf{window}$ size may result in poor convergence, as all the candidate CFs may be exhausted while the model is still in the early training stages and gradients are still volatile. As for $\epsilon$, choosing a very small threshold may enable high compression, but it may also degrade the model by allowing high-CF gradients from the beginning that do not update the model in a significant way.

## References

* [1] AI and Compute, https://openai.com/blog/ai-and-compute/, 2018-05-16.
* [2] Cherry S., "Edholm's Law of Bandwidth", IEEE Spectrum, 2004. DOI 10.1109/MSPEC.2004.1309810.
* [3] Schaller R. R., "Moore's Law: Past, Present, and Future", IEEE Press, 1997. DOI 10.1109/6.591665.
* [4] Zhang Z., Chang C., Lin H., Wang Y., Arora R. and Jin X., "Is Network the Bottleneck of Distributed Training?", NetAI 2020. DOI 10.1145/3405671.3405810.
* [5] Alessandro A., Matteo R. and Stefano S., "Critical Learning Periods in Deep Neural Networks", 2019. arXiv 1711.08856.
* [6] Dan A., Torsten H., Mikael J., Sarit K., Nikola K. and Cédric R.,
"The Convergence of Sparsified Gradient Methods", 2018. arXiv 1809.10505.
* [7] Yujun L., Song H., Huizi M., Yu W. and Bill D., "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training", ICLR 2018.
* [8] Shi S., Chu X., Cheung K. C. and See S., "Understanding Top-k Sparsification in Distributed Deep Learning", 2019. arXiv 1911.08772.
* [9] Fang J., Fu H., Yang G. and Hsieh C. J., "RedSync: Reducing synchronization bandwidth for distributed deep learning training system", Journal of Parallel and Distributed Computing, 2019. DOI 10.1016/j.jpdc.2019.05.016.
* [10] Facebook Gloo, https://github.com/facebookincubator/gloo.
* [11] NVIDIA Collective Communication Library, https://developer.nvidia.com/nccl.
* [12] "MPI: A message passing interface", Supercomputing '93: Proceedings of the 1993 ACM/IEEE Conference on Supercomputing. DOI 10.1145/169627.169855.
* [13] Mu L., David G. A., Jun W. P., Alexander J. S., Amr A., Vanja J., James L., Eugene J. S. and Bor-Yiing S., "Scaling Distributed Machine Learning with the Parameter Server", 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14).
* [14] Ruder S., "An overview of gradient descent optimization algorithms". DOI 10.48550/ARXIV.1609.04747.
* [15] Kaiming H., Xiangyu Z., Shaoqing R. and Jian S., "Deep Residual Learning for Image Recognition", 2015. arXiv 1512.03385.
* [16] Karen S. and Andrew Z., "Very Deep Convolutional Networks for Large-Scale Image Recognition", 2015.
* [17] Hochreiter S. and Schmidhuber J., "Long Short-Term Memory", Neural Computation, 1997. DOI 10.1162/neco.1997.9.8.1735.
* [18] Agarwal S., Wang H., Venkataraman S. and Papailiopoulos D., "On the Utility of Gradient Compression in Distributed Training Systems", MLSys 2022.
* [19] Johnson T. B., Agrawal P., Gu H. and Guestrin C., "AdaScale SGD: A User-Friendly Algorithm for Distributed Training", https://arxiv.org/abs/2007.05105.
* [20] Luo M., Guo L., Marcel W., Konstantinos F., Andrei-Octavian B. and Peter P., "KungFu: Making Training in Distributed Machine Learning Adaptive", 14th USENIX OSDI 2020.
* [21] Sagun L., Bottou L. and LeCun Y., "Eigenvalues of the Hessian in Deep Learning: Singularity and Beyond", 2016. DOI 10.48550/ARXIV.1611.07476.
* [22] Jonathan F., David J. S. and Ari S. M., "The Early Phase of Neural Network Training", International Conference on Learning Representations, 2020.
* [23] Alessandro A., Matteo R. and Stefano S., "Critical Learning Periods in Deep Neural Networks", 2019. arXiv 1711.08856.
* [24] Paulius M., Sharan N., Jonah A., Gregory D., Erich E., David G., Boris G., Michael H., Oleksii K., Ganesh V. and Hao W., "Mixed Precision Training", 2018. arXiv 1710.03740.
* [25] Dan A., Demjan G., Jerry L., Ryota T. and Milan V., "QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding", 2017. arXiv 1610.02132.
* [26] Seide F., Fu H., Droppo J., Li G. and Yu D., "1-Bit Stochastic Gradient Descent and Application to Data-Parallel Distributed Training of Speech DNNs", Interspeech 2014.
* [27] Ahmed M. A., Ahmed E., Mohamed-Slim A. and Marco C., "An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems", MLSys 2021.
* [28] Thijs V., Sai P. K. and Martin J., "PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization", NeurIPS 2019.
* [29] Hongyi W., Saurabh A. and Dimitris P., "Pufferfish: Communication-efficient Models At No Extra Cost", MLSys 2021.
* [30] Guo J., Liu W., Wang W., Han J., Li R., Lu Y.
and Hu S., "Accelerating Distributed Deep Learning By Adaptive Gradient Quantization", ICASSP 2020.
* [31] Saurabh A., Hongyi W., Kangwook L., Shivaram V. and Dimitris P., "Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification", arXiv 2010.16248.
* [32] Tyagi S. and Swany M., "ScaDLES: Scalable Deep Learning over Streaming data at the Edge", IEEE Big Data 2022.
* [33] Adam P., Sam G., Francisco M., Adam L., James B., Gregory C., Trevor K., Zeming L., Natalia G., Luca A., Alban D., Andreas K., Edward Y., Zach D., Martin R., Alykhan T., Sasank C., Benoit S., Lu F., Junjie B. and Soumith C., "PyTorch: An Imperative Style, High-Performance Deep Learning Library", NeurIPS 2019.
* [34] Li S., Zhao Y., Varma R., Salpekar O., Noordhuis P., Li T., Paszke A., Smith J., Vaughan B., Damania P. and Chintala S., "PyTorch Distributed: Experiences on Accelerating Data Parallel Training", VLDB Endowment 2020.
* [35] Karimireddy S. P., Rebjock Q., Stich S. U. and Jaggi M., "Error Feedback Fixes SignSGD and other Gradient Compression Schemes", ICML 2019.
* [36] Zheng S., Huang Z. and Kwok J. T., "Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback", NeurIPS 2019.
* [37] Sam M., Jared K., Dario A. and OpenAI Dota Team, "An Empirical Model of Large-Batch Training", 2018. arXiv 1812.06162.
* [38] Aurick Q., Sang K. C., Suhas J. S., Willie N., Qirong H., Hao Z., Gregory R. G. and Eric P. X., "Pollux: Co-adaptive Cluster Scheduling for Goodput-Optimized Deep Learning", 15th USENIX OSDI 2021.
* [39] Tyagi S. and Sharma P., "Scavenger: A Cloud Service for Optimizing Cost and Performance of ML Training", IEEE/ACM Symposium on Cluster, Cloud and Grid Computing (CCGrid), 2023.
* [40] Fang J., Fu H., Yang G. and Hsieh C. J., "Accelerating Distributed Deep Learning Training with Gradient Compression", https://arxiv.org/pdf/1808.04357.pdf, 2018.
# Complete separation of variables in the geodesic Hamilton–Jacobi equation

M. O. Katanaev

Steklov Mathematical Institute, 119991, Moscow, ul. Gubkina, 8

E-mail<EMAIL_ADDRESS>

###### Abstract

We consider a (pseudo)Riemannian manifold of arbitrary dimension. The Hamilton–Jacobi equation for the geodesic Hamiltonian admits complete separation of variables for certain (separable) metrics in certain (separable) coordinate systems. Separable metrics are very important in mathematics and physics. The Stäckel problem is: "Which metrics admit complete separation of variables in the geodesic Hamilton–Jacobi equation?" This problem was solved long ago for inverse metrics with nonzero diagonal elements, in particular for positive definite Riemannian metrics. However, the question remains open for indefinite inverse metrics with zeroes on the diagonal. We propose a solution. Separable metrics are divided into equivalence classes characterised by the number of Killing vector fields, the number of quadratic indecomposable conservation laws for geodesics, and the number of coisotropic coordinates. The paper contains detailed proofs, sometimes new, of previous results as well as new cases. As an example, we list all canonical separable metrics in each equivalence class in two, three, and four dimensions. Thus the Stäckel problem is completely solved for metrics of any signature in any number of dimensions.

## 1 Introduction

Many important and interesting problems in mathematical physics are related to the analysis of geodesics on a (pseudo)Riemannian manifold. In turn, integration of the geodesic equations often reduces to solution of the corresponding Hamilton–Jacobi equation. Models admitting complete separation of variables in the Hamilton–Jacobi equation are of particular interest because, in this case, there are $n$ independent conservation laws in involution for an $n$-dimensional Hamiltonian system, and the geodesic equations are integrated in quadratures. Therefore finding metrics which admit complete separation of variables is of great importance. This problem is interesting both for Riemannian (positive definite) metrics and for metrics of Lorentzian signature, the latter case being important in gravity models. In 1891, Stäckel raised the question: "Which metrics admit complete separation of variables in the respective Hamilton–Jacobi equation?" [1, 2, 3, 4], and gave the answer in the case of diagonal (orthogonal) metrics in the presence of only quadratic indecomposable conservation laws. These metrics are called Stäckel or separable. The problem attracted much interest from mathematicians and physicists and became classical. It is clear that if the metric has enough symmetry then it may admit $n$ involutive conservation laws. At the same time, some Stäckel metrics admit complete separation of variables even without any symmetry. Many interesting and important results for orthogonal separable metrics were obtained in the papers [5, 6, 7, 8], but we focus our attention on nondiagonal metrics. Separating action functions were found in [9, 10, 11] for nondiagonal metrics of arbitrary signature, under the assumption that all diagonal elements differ from zero. This is always true for Riemannian positive definite metrics. The corresponding separable metrics were derived in [12, 13].
These results culminated in [14] (see also [15]), where necessary and sufficient conditions for complete separation of variables were found for more general Hamiltonians depending on time and containing a term linear in momenta and a potential, again under the assumption that all diagonal elements of separable metrics differ from zero. A different technique was used in [16, 17] (see also [18]) to obtain separable metrics, including the case when the diagonal inverse metric components contain zeroes (the corresponding coordinate lines were called there "essential coordinates of type I"; we call them "coisotropic coordinates" for brevity). However, the full list of separating action functions and conservation laws in the Hamiltonian formulation was not derived. This general problem was also attacked in [19, 20], and many important examples, especially with coisotropic coordinates, were considered in [21, 22]. In the present paper, we solve this problem using another technique which allows us to derive not only the separable metrics but, in addition, the full set of separating action functions and conservation laws in the Hamiltonian formulation. A coordinate-free formulation of the separability criteria in the general case was proposed in [23].

It turns out that Stäckel metrics and complete multiplicative separation of variables in the respective Laplace–Beltrami, Helmholtz, and Schrödinger equations are closely related. Namely, complete multiplicative separation of variables in the latter equations provides sufficient conditions for complete additive separation of variables in the Hamilton–Jacobi equation [12, 13]. This observation increases the importance of separable metrics. Separable metrics of Lorentzian signature are of great importance in gravity models. Usually, one assumes the existence of a large symmetry group for the metric to reduce the number of independent components, which allows one to obtain exact solutions of Einstein's equations in many cases. As a rule, such solutions admit complete separation of variables in the geodesic Hamilton–Jacobi equation. This approach can be inverted. One may assume complete separability of the metric, which also significantly reduces the number of independent metric components, yielding hope of obtaining exact solutions of the gravity equations. Such solutions are very attractive because the respective geodesic equations are Liouville integrable, and this helps us to understand the global structure of the respective space-times. This idea was successfully implemented in [24] for separable metrics admitting two commuting Killing vector fields and two indecomposable quadratic conservation laws. A large class of exact solutions was described in this way, including Schwarzschild, Reissner–Nordström, Kerr, and many others. It turns out that all nonzero components of separable metrics are given by fixed functions of a set of arbitrary functions of single coordinates. This means that the vacuum Einstein equations reduce to a system of nonlinear ordinary differential equations. This feature enhances the hope of obtaining exact solutions. Complete separation of variables for the geodesic Hamilton–Jacobi equation occurs in the presence of Killing vectors and second-rank Killing tensors. The importance of second-rank Killing tensors for linear Hamiltonian systems is discussed in [25]. Some topological properties of complete separation of variables for Killing tensors are discussed in [26].
In the present paper, we propose a complete solution of the Stäckel problem for metrics of any signature in any dimension, including the cases of zero diagonal inverse metric components. Our proofs in many cases differ from others. At the end of the paper, we give complete lists of canonical separable metrics in two, three, and four dimensions.

## 2 Separation of variables

We consider an $n$-dimensional topologically trivial manifold ${\mathbb{M}}\approx{\mathbb{R}}^{n}$ covered by a global coordinate system $x^{\alpha}$, $\alpha=1,\dotsc,n$. Let there be a geodesic Hamiltonian (a function on the phase space $(x,p)\in{\mathbb{T}}^{*}({\mathbb{M}})$) $H_{0}(x,p):=\frac{1}{2}g^{\alpha\beta}(x)p_{\alpha}p_{\beta},$ (1) where $g^{\alpha\beta}(x)$ is the inverse metric on ${\mathbb{M}}$ and $p_{\alpha}$ are momenta. It is well known that it yields the Hamiltonian equations for geodesics on a (pseudo)Riemannian manifold $({\mathbb{M}},g)$. All functions are assumed to be sufficiently smooth, and we shall not mention this in what follows. Moreover, we often say simply "metric" instead of "inverse metric". If the metric is positive definite, then Hamiltonian (1) describes the motion of a point particle on the Riemannian manifold $({\mathbb{M}},g)$. For a metric of Lorentzian signature, Hamiltonian (1) describes worldlines of point particles on the space-time $({\mathbb{M}},g)$. Both cases are of considerable interest from the mathematical and physical points of view, and many properties of such mechanical systems do not depend on the signature of the metric. We shall consider metrics of arbitrary signature, the transition to positive definite metrics usually being trivial. The Hamilton–Jacobi equation for the truncated action function (characteristic Hamilton function) $W(x)$ is $H_{0}\left(x,\frac{\partial W}{\partial x^{\alpha}}\right)=\frac{1}{2}g^{\alpha\beta}\partial_{\alpha}W\partial_{\beta}W=E,\qquad E={\sf\,const}.$ (2)

###### Definition.

A solution $W(x,c)$ of Hamilton–Jacobi equation (2), depending on $n$ independent parameters (integration constants) $(c_{a})\in{\mathbb{V}}\subset{\mathbb{R}}^{n}$, $a=1,\dotsc,n$, such that $\det\frac{\partial^{2}W}{\partial x^{\alpha}\partial c_{a}}\neq 0,$ (3) is called a complete integral. ∎

A complete integral is not the general solution of Eq. (2), which has functional arbitrariness. However, any solution of the Hamilton–Jacobi equation can be obtained from a complete integral by variation of parameters (see, e.g., [27], Ch. IX, §3). Therefore, if a complete integral is known, then the problem may be considered solved. Note that there are infinitely many complete integrals of the Hamilton–Jacobi equation, if they exist. Therefore our aim is not to find all complete integrals but only one. The domain $c\in{\mathbb{V}}$ depends on the metric and is therefore not specified. It should be found in every particular case, and this requires some investigation. In general, the constant $E(c)$ (the energy) in the Hamilton–Jacobi equation depends on the parameters. In particular, it can be considered as one of them. It is clear that any solution of the Hamilton–Jacobi equation is defined up to the addition of an arbitrary constant. This constant cannot be chosen as one of the parameters $c$ because condition (3) would be violated. Therefore all additive integration constants of $W$ will be dropped as inessential. We use Latin indices $a,b,\dotsc$ to enumerate the independent parameters $c$, though they run over the same values as the Greek ones. This is done to stress an important difference: tensor components with Greek indices transform under coordinate changes, whereas those with Latin indices do not. For example, the functions $\partial^{a}W:=\partial W/\partial c_{a}$ are scalars, while the partial derivatives $\partial_{\alpha}W:=\partial W/\partial x^{\alpha}$ are components of a covector.
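A minimal illustration of the definition (added here for orientation, not part of the subsequent analysis): for the flat metric $g^{\alpha\beta}=\eta^{\alpha\beta}$, the linear function $$W(x,c)=\sum_{\alpha=1}^{n}c_{\alpha}x^{\alpha},\qquad E(c)=\frac{1}{2}\,\eta^{ab}c_{a}c_{b},$$ is a complete integral: equation (2) holds identically, and $\det\big(\partial^{2}W/\partial x^{\alpha}\partial c_{a}\big)=\det(\delta^{a}_{\alpha})=1\neq 0$.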
This is done to stress an important difference: tensor components with Greek indices are transformed under coordinate transformations, whereas those with Latin indices are not. For example, the functions $\partial^{a}W:=\partial W/\partial c_{a}$ are scalars, while the partial derivatives $\partial_{\alpha}W:=\partial W/\partial x^{\alpha}$ are components of a covector.

###### Definition.
Coordinates $x^{\alpha}$, if they exist, are called separable if Hamilton–Jacobi equation (2) admits additive separation of variables in this coordinate system, i.e. the action function is given by the sum $W=\sum_{\alpha=1}^{n}W_{\alpha}(x^{\alpha},c)$ (4) where every summand $W_{\alpha}$ is a function of only one coordinate $x^{\alpha}$ and the parameters $c_{a}$, taking values in some domain ${\mathbb{V}}\subset{\mathbb{R}}^{n}$. We require $\det\frac{\partial^{2}W}{\partial x^{\alpha}\partial c_{a}}=\det\frac{\partial^{2}W_{\alpha}}{\partial x^{\alpha}\partial c_{a}}\neq 0$ (5) for all $x$ and $c$. A metric in a separable coordinate system is called separable. The functions $W_{\alpha}$ in the sum (4) are called separating. ∎

An individual summand in the action function (4) may depend on only part of the parameters, but the whole action function depends on all $n$ parameters. For brevity, we use the notation $\partial^{a}W:=\frac{\partial W}{\partial c_{a}},\qquad W^{\prime}_{\alpha}(x^{\alpha},c):=\partial_{\alpha}W_{\alpha}(x^{\alpha},c).$ Then requirement (5) is written as $\det(\partial^{a}W^{\prime}_{\alpha})\neq 0.$ (6) The problem of complete separation of variables in Hamilton–Jacobi equation (2) is as follows. We have to describe all metrics $g^{\alpha\beta}(x)$, depending only on the coordinates $x$, for which there exist a constant $E(c)\neq 0$ and functions $W^{\prime}_{\alpha}(x^{\alpha},c)$, each depending on only one coordinate $x^{\alpha}$ and the parameters $c$, such that $\det\partial^{a}W^{\prime}_{\alpha}\neq 0$ and the equation $g^{\alpha\beta}W^{\prime}_{\alpha}W^{\prime}_{\beta}=2E$ (7) holds. During the solution of this problem we find the admissible form of the separating functions $W^{\prime}_{\alpha}$ which, as we shall see, define independent involutive conservation laws in the corresponding Hamiltonian systems. Thus, we have to solve the functional (not differential) equation (7) with respect to $g^{\alpha\beta}(x)$ and $W^{\prime}_{\alpha}(x^{\alpha},c)$, which are supposed to be sufficiently smooth both in $x$ and in $c$. All functions $W_{\alpha}$ in the sum (4) are scalars with respect to coordinate transformations on ${\mathbb{M}}$, though they carry the coordinate index $\alpha$. A covector is defined by the partial derivatives $W^{\prime}_{\alpha}:=\partial_{\alpha}W_{\alpha}$. If the Hamilton–Jacobi equation admits complete separation of variables, then the corresponding Hamiltonian equations are Liouville integrable, the solution being given in quadratures. Requirement (6) means that the $n$ functions $W_{\alpha}$ are functionally independent and can be chosen as new coordinates. These are the action-angle coordinates in the corresponding Hamiltonian formulation. Consider an important example of a mechanical system in which the metric has no Killing vectors but admits complete separation of variables in the Hamilton–Jacobi equation.

###### Example 2.1 (The Liouville system).
Consider a conformally flat metric $g_{\alpha\beta}=\Phi^{2}\eta_{\alpha\beta}\quad\Leftrightarrow\quad g^{\alpha\beta}:=\frac{1}{\Phi^{2}}\eta^{\alpha\beta},\qquad\Phi^{2}:=\phi_{1}+\dotsc+\phi_{n}>0,$ (8) where every function $\phi_{\alpha}(x^{\alpha})$ (no summation) depends on a single coordinate, and $\eta_{\alpha\beta}$ denotes a (pseudo)Euclidean metric of arbitrary signature. This metric defines the Hamiltonian $H:=\frac{\eta^{\alpha\beta}p_{\alpha}p_{\beta}}{2(\phi_{1}+\dotsc+\phi_{n})}+U(x)=\frac{1}{\Phi^{2}}\left(\frac{1}{2}\eta^{\alpha\beta}p_{\alpha}p_{\beta}+\Theta^{2}\right).$ (9) Here we have added a potential energy $U$ of a special type for generality, $U:=\frac{\Theta^{2}}{\Phi^{2}}=\frac{\theta_{1}(x^{1})+\dotsc+\theta_{n}(x^{n})}{\phi_{1}(x^{1})+\dotsc+\phi_{n}(x^{n})},$ every function $\theta_{\alpha}(x^{\alpha})$ depending on a single coordinate. The Hamiltonian equations of motion are $\begin{split}\dot{x}^{\alpha}=&\frac{\eta^{\alpha\beta}p_{\beta}}{\Phi^{2}},\\\ \hphantom{\qquad{\sf\,ns\,}(\alpha)}\dot{p}_{\alpha}=&\frac{\eta^{\beta\gamma}p_{\beta}p_{\gamma}}{2\Phi^{4}}\partial_{\alpha}\phi_{\alpha}+\frac{\Theta^{2}\partial_{\alpha}\phi_{\alpha}-\Phi^{2}\partial_{\alpha}\theta_{\alpha}}{\Phi^{4}}=\frac{H}{\Phi^{2}}\partial_{\alpha}\phi_{\alpha}-\frac{\partial_{\alpha}\theta_{\alpha}}{\Phi^{2}},\qquad{\sf\,ns\,}(\alpha),\end{split}$ where the dot denotes differentiation with respect to the evolution parameter $\tau\in{\mathbb{R}}$ (time), and there is no summation over the lower index $\alpha$. Here and in what follows, we denote this circumstance for brevity by the special symbol ${\sf\,ns\,}(\alpha)$ on the right of equations. Summation over other repeated indices is performed as usual. The corresponding Lagrangian is $L:=p_{\alpha}\dot{x}^{\alpha}-H=\frac{\phi_{1}+\dotsc+\phi_{n}}{2}\eta_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}-\frac{\theta_{1}+\dotsc+\theta_{n}}{\phi_{1}+\dotsc+\phi_{n}}.$ Of course, the energy $E$ is conserved along every trajectory, $E:=H\big{|}_{\text{trajectory}}={\sf\,const}.$ (10) Multiply Hamilton–Jacobi equation (7) by the conformal factor: $\eta^{\alpha\beta}W^{\prime}_{\alpha}W^{\prime}_{\beta}+2\Theta^{2}=2E\Phi^{2}.$ Separation of variables for the Liouville system happens in the following way: $\hphantom{\qquad\qquad{\sf\,ns\,}(\alpha)}\eta^{\alpha\alpha}W^{\prime 2}_{\alpha}+2\theta_{\alpha}-2E\phi_{\alpha}=c_{\alpha},\qquad c_{\alpha}\in{\mathbb{R}},\quad\forall\alpha,\qquad\qquad{\sf\,ns\,}(\alpha).$ (11) The action function has the form $W=\sum_{\alpha=1}^{n}W_{\alpha},\qquad W_{\alpha}:=\int\\!\\!dx^{\alpha}\sqrt{\eta^{\alpha\alpha}\big{(}c_{\alpha}+2E\phi_{\alpha}-2\theta_{\alpha}\big{)}}.$ Certainly, the radicand is assumed to be positive. This complete integral satisfies the Hamilton–Jacobi equation multiplied by the conformal factor: $\delta^{\alpha\beta}\partial_{\alpha}W\partial_{\beta}W=2E\Phi^{2},$ (12) where the multipliers $\eta^{\alpha\alpha}$ are included in the action function $W$. In the Hamiltonian formalism, we have $n$ quadratic conservation laws $\eta^{\alpha\alpha}p_{\alpha}^{2}+2\theta_{\alpha}-2E\phi_{\alpha}=c_{\alpha}.$ (13) This can easily be checked by a straightforward computation: $\dot{c}_{\alpha}=2\eta^{\alpha\alpha}p_{\alpha}\dot{p}_{\alpha}+2\dot{\theta}_{\alpha}-2E\dot{\phi}_{\alpha}=\eta^{\alpha\alpha}p_{\alpha}\frac{2H\partial_{\alpha}\phi_{\alpha}}{\Phi^{2}}-2E\dot{x}^{\alpha}\partial_{\alpha}\phi_{\alpha}=\eta^{\alpha\alpha}p_{\alpha}\frac{2(H-E)}{\Phi^{2}}\partial_{\alpha}\phi_{\alpha}=0,$ because $H=E$ along every trajectory.
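The vanishing of $\dot{c}_{\alpha}$ can also be confirmed by computer algebra. Below is a minimal sympy sketch for $n=2$ and Euclidean $\eta$, in which the constant $E$ is replaced by the phase-space function $H$ (its on-shell value), so that the Poisson bracket vanishes identically; the Python names `phi1`, `theta1`, etc. are ours and not part of the notation above.

```python
import sympy as sp

x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2', real=True)
phi1, th1 = sp.Function('phi1')(x1), sp.Function('theta1')(x1)
phi2, th2 = sp.Function('phi2')(x2), sp.Function('theta2')(x2)

# Liouville Hamiltonian (9) for n = 2 with eta = diag(1, 1)
H = (sp.Rational(1, 2)*(p1**2 + p2**2) + th1 + th2)/(phi1 + phi2)

def pb(f, g):
    """Canonical Poisson bracket on T*R^2."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in ((x1, p1), (x2, p2)))

# Conservation law c_1 of Eq. (13) with E replaced by H:
F1 = p1**2 + 2*th1 - 2*H*phi1

print(sp.simplify(pb(F1, H)))   # prints 0, so c_1 is a first integral
```

The same computation goes through for arbitrary signature after flipping the signs of the corresponding kinetic terms.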
For nontrivial functions $\theta$ and $\phi$ these conservation laws are indecomposable, the parameters satisfying the condition $c_{1}+\dotsc+c_{n}=0$. Indeed, summing all conservation laws (13) and taking into account the definition of Hamiltonian (9), we obtain $c_{1}+\dotsc+c_{n}=(2H-2E)(\phi_{1}+\dotsc+\phi_{n})=0.$ (14) This is the sole relation between the parameters. It shows that only $n-1$ conservation laws among (13) are independent. The last independent conservation law is also quadratic: $\frac{1}{\Phi^{2}}\big{(}\eta^{\alpha\alpha}p_{\alpha}^{2}+2\Theta^{2}\big{)}=2E.$ (15) It depends on all momenta and coordinates. A complete set of independent parameters is, for example, $c_{1},\dotsc,c_{n-1},2E$. Thus, all canonical pairs of variables are separated. The particular feature of this separation is that the conformally flat metric (8) has no Killing vectors for nonconstant functions $\phi_{\alpha}(x^{\alpha})$. Besides, the variables were separated only after multiplication of the Hamilton–Jacobi equation by the conformal factor. Separation of variables for the Liouville system does not depend on the metric signature. ∎

## 3 Separation of variables and Hamiltonian formalism

In this section, we show what happens in the Hamiltonian formalism under complete separation of variables in the Hamilton–Jacobi equation. Recall that we are considering Hamiltonian (1), which yields the equations of motion $\begin{split}\dot{x}^{\alpha}=&[x^{\alpha},H_{0}]=g^{\alpha\beta}p_{\beta},\\\ \dot{p}_{\alpha}=&[p_{\alpha},H_{0}]=\frac{1}{2}\partial_{\alpha}g_{\beta\gamma}p^{\beta}p^{\gamma}.\end{split}$ (16) Suppose that the relations $\hphantom{\qquad\qquad{\sf\,ns\,}(\alpha)}G_{\alpha}(x^{\alpha},p_{\alpha},c):=p_{\alpha}-W^{\prime}_{\alpha}(x^{\alpha},c)=0,\qquad\forall\alpha,\qquad\qquad{\sf\,ns\,}(\alpha),$ (17) hold, where $(W^{\prime}_{\alpha})$ is an arbitrary solution of the Hamilton–Jacobi equation (7) depending on $n$ independent parameters. Due to inequality (6), these equations can be locally solved with respect to $c$: $c_{a}=F_{a}(x,p),$ (18) where $F_{a}$ are some functions of the canonical variables $x,p$ only.

###### Theorem 3.1.
Relations (18) hold along every trajectory of the Hamiltonian system. These first integrals of the Hamiltonian equations are in involution.

###### Proof.
Compute $\dot{c}_{a}=\frac{\partial F_{a}}{\partial x^{\alpha}}\dot{x}^{\alpha}+\frac{\partial F_{a}}{\partial p_{\alpha}}\dot{p}_{\alpha}.$ (19) To find the partial derivatives of the functions $F_{a}$, we do the following. Take the differential of Eq. (17): $\hphantom{\qquad\qquad{\sf\,ns\,}(\alpha)}dp_{\alpha}-\partial_{\alpha}W^{\prime}_{\alpha}dx^{\alpha}-\partial^{a}W^{\prime}_{\alpha}dc_{a}=0,\qquad\qquad{\sf\,ns\,}(\alpha).$ These equations can be solved with respect to $dc_{a}$: $dc_{a}=\sum_{\alpha=1}^{n}\left(\partial^{a}W^{\prime}_{\alpha}\right)^{-1}\big{(}dp_{\alpha}-\partial_{\alpha}W^{\prime}_{\alpha}dx^{\alpha}\big{)},$ (20) because of inequality (6).
On the other hand, the differential of Eq. (18) yields $dc_{a}=\frac{\partial F_{a}}{\partial x^{\alpha}}dx^{\alpha}+\frac{\partial F_{a}}{\partial p_{\alpha}}dp_{\alpha}.$ Comparing this equality with Eq. (20), we conclude that $\hphantom{\qquad\qquad{\sf\,ns\,}(\alpha)}\frac{\partial F_{a}}{\partial x^{\alpha}}=-(\partial^{a}W^{\prime}_{\alpha})^{-1}\partial_{\alpha}W^{\prime}_{\alpha},\qquad\frac{\partial F_{a}}{\partial p_{\alpha}}=(\partial^{a}W^{\prime}_{\alpha})^{-1},\qquad\qquad{\sf\,ns\,}(\alpha).$ Now Eq. (19) yields: $\dot{c}_{a}=\sum_{\alpha=1}^{n}(\partial^{a}W^{\prime}_{\alpha})^{-1}\left(-\partial_{\alpha}W^{\prime}_{\alpha}p^{\alpha}+\frac{1}{2}\partial_{\alpha}g_{\beta\gamma}p^{\beta}p^{\gamma}\right),$ where Hamiltonian equations (16) are used to eliminate the derivatives with respect to the evolution parameter $\tau$. On the other hand, differentiate Hamilton–Jacobi equation (7) with respect to $x^{\gamma}$: $\partial_{\gamma}g^{\alpha\beta}W^{\prime}_{\alpha}W^{\prime}_{\beta}+2g^{\alpha\beta}W^{\prime}_{\alpha}\partial_{\gamma}W^{\prime}_{\beta}=-g^{\alpha\delta}g^{\beta\epsilon}\partial_{\gamma}g_{\delta\epsilon}W^{\prime}_{\alpha}W^{\prime}_{\beta}+2W^{\prime\alpha}\partial_{\gamma}W^{\prime}_{\alpha}=\\\ =-\partial_{\gamma}g_{\delta\epsilon}W^{\prime\delta}W^{\prime\epsilon}+2W^{\prime\alpha}\partial_{\gamma}W^{\prime}_{\alpha}=-\partial_{\gamma}g_{\delta\epsilon}p^{\delta}p^{\epsilon}+2p^{\gamma}\partial_{\gamma}W^{\prime}_{\gamma}=0,$ where Eq. (17) is used. Therefore $\dot{c}_{a}=0$, and the first statement of the theorem is proved. Now compute the Poisson bracket of two conservation laws with indices $a\neq b$: $[F_{a},F_{b}]=\frac{\partial F_{a}}{\partial x^{\alpha}}\frac{\partial F_{b}}{\partial p_{\alpha}}-\frac{\partial F_{a}}{\partial p_{\alpha}}\frac{\partial F_{b}}{\partial x^{\alpha}}=\\\ =-\sum_{\alpha=1}^{n}\Big{(}(\partial^{a}W^{\prime}_{\alpha})^{-1}\partial_{\alpha}W^{\prime}_{\alpha}(\partial^{b}W^{\prime}_{\alpha})^{-1}-(\partial^{a}W^{\prime}_{\alpha})^{-1}(\partial^{b}W^{\prime}_{\alpha})^{-1}\partial_{\alpha}W^{\prime}_{\alpha}\Big{)}=0.$ This proves that the conservation laws are in involution. ∎

We see that if the variables are completely separated in the Hamilton–Jacobi equation, then the corresponding Hamiltonian system admits $n$ independent conservation laws (18) which are in involution and, consequently, it is Liouville integrable. This statement is proved here for the geodesic Hamiltonian $H_{0}$; moreover, it is valid in the general case. If the summands $W^{\prime}_{\alpha}(x^{\alpha},c)$ are found, then the conservation laws (18) are obtained in explicit form by solving Eqs. (17) with respect to the parameters. Luckily, this can always be done for the geodesic Hamiltonian. If a complete integral of the Hamilton–Jacobi equation is known, then one can always pass to the action-angle coordinates, at least implicitly.

###### Theorem 3.2.
Let us perform the canonical transformation $(x,p)\mapsto(X,P)$ with the generating function $S_{2}(x,P):=W(x,P)$, where momenta are substituted for the parameters: $(c_{a})\mapsto(P_{a})$. Then the Hamiltonian equations for the new variables take the form $\begin{split}\dot{X}^{a}=&\frac{\partial H}{\partial P_{a}}=\frac{\partial H}{\partial p_{\alpha}}\partial^{a}\partial_{\alpha}W,\\\ \dot{P}_{a}=&0,\qquad\Rightarrow\qquad P_{a}=c_{a}={\sf\,const},\end{split}$ (21) where the substitution $x=x(X,P)$ and $p=p(X,P)$ is performed on the right hand side.

###### Proof.
If the generating function of a canonical transformation depends on the old coordinates and new momenta, then the old momenta and new coordinates are given by the formulae (see, e.g., [28]) $p_{\alpha}=\partial_{\alpha}W,\qquad X^{a}=\partial^{a}W.$ (22) The last equality can be solved with respect to the old coordinates, at least locally, due to inequality (3), and we find $x=x(X,P)$. Substitution of this solution into the first equality defines $p=p(X,P)$. Take the differential of Eqs. (22): $\begin{split}dp_{\alpha}=&\partial^{2}_{\alpha\beta}Wdx^{\beta}+\partial^{a}\partial_{\alpha}WdP_{a},\\\ dX^{a}=&\partial^{a}\partial_{\alpha}Wdx^{\alpha}+\partial^{a}\partial^{b}WdP_{b}.\end{split}$ (23) This defines the differentials $\begin{split}dx^{\alpha}=&(\partial^{a}\partial_{\alpha}W)^{-1}(dX^{a}-\partial^{a}\partial^{b}WdP_{b}),\\\ dp_{\alpha}=&\partial^{2}_{\alpha\beta}W(\partial^{a}\partial_{\beta}W)^{-1}dX^{a}+\big{[}\partial^{a}\partial_{\alpha}W-\partial^{2}_{\alpha\beta}W(\partial^{b}\partial_{\beta}W)^{-1}\partial^{a}\partial^{b}W\big{]}dP_{a}.\end{split}$ (24) Consequently, the partial derivatives are $\begin{split}\frac{\partial x^{\alpha}}{\partial X^{a}}=&(\partial^{a}\partial_{\alpha}W)^{-1},\\\ \frac{\partial x^{\alpha}}{\partial P_{a}}=&-(\partial^{b}\partial_{\alpha}W)^{-1}\partial^{a}\partial^{b}W,\\\ \frac{\partial p_{\alpha}}{\partial X^{a}}=&\partial^{2}_{\alpha\beta}W(\partial^{a}\partial_{\beta}W)^{-1},\\\ \frac{\partial p_{\alpha}}{\partial P_{a}}=&\partial^{a}\partial_{\alpha}W-\partial^{2}_{\alpha\beta}W(\partial^{b}\partial_{\beta}W)^{-1}\partial^{a}\partial^{b}W.\end{split}$ (25) On the other hand, the Hamilton–Jacobi equation implies $H\left(x,\frac{\partial W(x,P)}{\partial x}\right)=E(P).$ Differentiating this equality with respect to $x^{\alpha}$ gives $\frac{\partial H}{\partial x^{\alpha}}+\frac{\partial H}{\partial p_{\beta}}\frac{\partial p_{\beta}}{\partial x^{\alpha}}=\frac{\partial H}{\partial x^{\alpha}}+\frac{\partial H}{\partial p_{\beta}}\partial^{2}_{\alpha\beta}W=0.$ (26) Now we see that the equalities $\frac{\partial H}{\partial X^{a}}\equiv 0,\qquad\frac{\partial H}{\partial P_{a}}=\frac{\partial H}{\partial p_{\alpha}}\partial^{a}\partial_{\alpha}W,$ hold, where $H=H\big{(}x(X,P),p(X,P)\big{)}$. ∎

Separable coordinates, when they exist, are not uniquely defined, and there is a functional arbitrariness related to coordinate transformations. To isolate it, we give

###### Definition.
Two separable coordinate systems $x$ and $X$ on ${\mathbb{M}}$ are equivalent if there is a canonical transformation $(x,p)\mapsto(X,P)$ of the respective Hamiltonian systems such that the new coordinates $X(x)$ depend only on the old ones but not on the momenta $p$. Moreover, separable coordinate systems are equivalent when the parameters are related by a nondegenerate transformation $c\mapsto\tilde{c}(c)$ which does not involve coordinates. A manifold ${\mathbb{M}}$ with metric $g$ which admits separable coordinates for Eq. (2) in some neighbourhood of each point, related by this equivalence relation in overlapping domains, is called Stäckel. ∎

The right hand sides of the conservation laws (18) are scalars under canonical transformations: $F_{a}(x,p)\mapsto\tilde{F}_{a}(X,P)=F_{a}\big{(}x(X),p(X,P)\big{)}.$ Conservation laws are transformed into conservation laws, and their involution survives under an arbitrary canonical transformation because the latter preserves the Poisson bracket. We shall use canonical transformations in what follows to bring separable metrics to their simplest (canonical) forms and to eliminate inessential functions.
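Since the equivalence of separable coordinate systems rests on canonical point transformations, it may be worth verifying explicitly, in the simplest case, that a point transformation $X(x)$ with momenta transformed as covector components preserves the fundamental Poisson brackets. The following sympy sketch does this for the polar-to-Cartesian transformation on the plane used in example 3.1 below; the bracket function `pb` is our own shorthand.

```python
import sympy as sp

r, ph, pr, pph = sp.symbols('r varphi p_r p_varphi', real=True)

def pb(f, g):
    """Canonical Poisson bracket in polar variables."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in ((r, pr), (ph, pph)))

# Point transformation to Cartesian coordinates; the momenta transform
# with the inverse transposed Jacobi matrix, as covector components do.
X, Y = r*sp.cos(ph), r*sp.sin(ph)
PX = sp.cos(ph)*pr - sp.sin(ph)/r*pph
PY = sp.sin(ph)*pr + sp.cos(ph)/r*pph

# All fundamental brackets keep their canonical values:
for f, g, val in ((X, PX, 1), (Y, PY, 1), (X, PY, 0), (Y, PX, 0),
                  (X, Y, 0), (PX, PY, 0)):
    assert sp.simplify(pb(f, g) - val) == 0
print('fundamental Poisson brackets are preserved')
```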
There may occur inequivalent separable coordinate systems for one and the same metric. In the next sections, we obtain necessary and sufficient conditions for the existence of separable coordinate systems and present the explicit form of canonical separable metrics in each class of equivalent separable metrics. Before solving the Stäckel problem, we consider a simple instructive example.

###### Example 3.1.
Consider Euclidean space ${\mathbb{R}}^{n}$ with metric $\delta^{ab}$, $a,b=1,\dotsc,n$, in Cartesian coordinates $y^{a}$. The Hamilton–Jacobi equation $\delta^{ab}W^{\prime}_{a}W^{\prime}_{b}=2E$ admits complete separation of variables $W^{\prime}_{a}=c_{a}={\sf\,const},$ the energy $2E=c^{a}c_{a}$ being a dependent parameter. The action function is $W=y^{a}c_{a},$ (27) where we have dropped the inessential additive integration constant. It depends on $n$ independent parameters $c_{1},\dotsc,c_{n}$. Consequently, action function (27) is a complete integral of the Hamilton–Jacobi equation, and the problem is solved. Complete separation of variables (27) implies $n$ conservation laws in the Hamiltonian formulation: $p_{a}=c_{a},$ which means conservation of all momenta. We see that the Cartesian coordinate system in Euclidean space is separable. The vector fields $\partial_{a}$ are Killing vector fields. At the same time, most curvilinear coordinates in ${\mathbb{R}}^{n}$ are not separable. Indeed, let us pass to curvilinear coordinates $y^{a}\mapsto x^{\alpha}(y)$, $\alpha=1,\dotsc,n$. Then the inverse functions $y^{a}(x)$ define the vielbein (Jacobi matrices), and the functions $W^{\prime}_{\alpha}(x,c)$ are $W^{\prime}_{\alpha}=e_{\alpha}{}^{a}c_{a},\qquad e_{\alpha}{}^{a}:=\partial_{\alpha}y^{a},$ because they are covector components. It is necessary and sufficient for the new coordinates to be separable that the vielbein components $e_{\alpha}{}^{a}(x^{\alpha})$ depend only on the one coordinate $x^{\alpha}$. It means that the transition functions must be of the specific form $y^{a}=\sum_{\alpha=1}^{n}k_{\alpha}{}^{a}(x^{\alpha}),$ where all elements of each row of the matrix $k_{\alpha}{}^{a}$ depend only on the one coordinate $x^{\alpha}$. Then the vielbein is $e_{\alpha}{}^{a}(x^{\alpha})=\partial_{\alpha}k_{\alpha}{}^{a}=:k^{\prime}_{\alpha}{}^{a}$ with the sole condition $\det k^{\prime}_{\alpha}{}^{a}\neq 0$. Now we can compute the metric $g^{\alpha\beta}$, and the variables in the Hamilton–Jacobi equation are separated: $\hphantom{\qquad\qquad{\sf\,ns\,}(\alpha)}W=\sum_{\alpha=1}^{n}W_{\alpha},\qquad W_{\alpha}:=\int\\!\\!dx^{\alpha}e_{\alpha}{}^{a}(x^{\alpha})c_{a},\qquad\qquad{\sf\,ns\,}(\alpha).$ In the Hamiltonian formulation, we have $n$ independent conservation laws: $p_{\alpha}e^{\alpha}{}_{a}(x)=c_{a}.$ Note that the inverse vielbein components $e^{\alpha}{}_{a}(x)$, in general, depend on all coordinates. After the inverse transformation $x^{\alpha}\mapsto y^{a}$, the conservation laws take the simple form $p_{a}=c_{a}$, which corresponds to $n$ commuting Killing vectors $\partial_{a}$ (translations along the Cartesian coordinates). All coordinate systems obtained in this way, and only they (this will be proved in section 4), are equivalent.
To see this, we make the canonical transformation $(x^{\alpha},p_{\alpha})\mapsto(y^{a},p_{a})$ with the generating function depending on old coordinates and new momenta, $S_{2}(x^{\alpha},p_{a}):=\sum_{\alpha=1}^{n}k_{\alpha}{}^{a}(x^{\alpha})p_{a}.$ Then $p_{\alpha}=\frac{\partial S_{2}}{\partial x^{\alpha}}=k^{\prime}_{\alpha}{}^{a}p_{a}=e_{\alpha}{}^{a}p_{a},\qquad y^{a}=\frac{\partial S_{2}}{\partial p_{a}}=\sum_{\alpha=1}^{n}k_{\alpha}{}^{a},$ (28) and we are led to the previous formulae. Thus, if we solve functional equation (7), then its general solution contains many arbitrary functions. In our case, there are $n^{2}$ inessential functions $k_{\alpha}{}^{a}(x^{\alpha})$ describing transformations between equivalent separable coordinate systems. We see that the problem can be essentially simplified by making canonical transformations. Therefore it is sufficient to choose the simplest separable metric in each class of equivalent metrics, which is called canonical. In the present case, it is the Euclidean metric; all other equivalent separable metrics are related to it by a suitable canonical transformation. We note that the transformation of momenta (28) is linear. Therefore linear and quadratic conservation laws are transformed into linear and quadratic ones, respectively. At the same time, separation of variables in Euclidean space may happen in a different way. Consider the Euclidean plane in polar coordinates $(y^{a})=(r,\varphi)\in{\mathbb{R}}^{2}$. In Cartesian coordinates, we have two linear conservation laws $p_{1}=c_{1},\qquad p_{2}=c_{2},$ corresponding to two commuting Killing vectors $\partial_{1}$ and $\partial_{2}$. In polar coordinates, $x:=y^{1}=r\cos\varphi,\qquad y:=y^{2}=r\sin\varphi,$ the metric and its inverse are $g_{\alpha\beta}=\begin{pmatrix}1&0\\\ 0&r^{2}\end{pmatrix},\qquad g^{\alpha\beta}=\begin{pmatrix}1&0\\\ 0&\frac{1}{r^{2}}\end{pmatrix}.$ The Hamilton–Jacobi equation is $W^{\prime 2}_{r}+\frac{1}{r^{2}}W^{\prime 2}_{\varphi}=\delta^{ab}c_{a}c_{b}=2E.$ Variables in this equation are separated as follows: $W^{\prime}_{\varphi}=c_{\varphi},\qquad W^{\prime 2}_{r}+\frac{c_{\varphi}^{2}}{r^{2}}=2E.$ (29) The first equality corresponds to rotational invariance (the Killing vector $\partial_{\varphi}$), and the second is the conservation of energy. The action function is now $W=\int\\!\\!dr\sqrt{2E-\frac{c_{\varphi}^{2}}{r^{2}}}+\varphi c_{\varphi}.$ It is a complete integral of the Hamilton–Jacobi equation. If $E>0$, then this expression makes sense only for $r^{2}>c_{\varphi}^{2}/2E$. In the Hamiltonian formulation, we have two conservation laws $p_{\varphi}=c_{\varphi},\qquad p_{r}^{2}+\frac{c_{\varphi}^{2}}{r^{2}}=2E.$ These are the conservation of angular momentum and of energy. Thus the variables are also separated in polar coordinates, but now we have one linear and one indecomposable quadratic conservation law. Though the variables are separated both in Cartesian and in polar coordinates, these coordinate systems are not equivalent in the sense of our definition. Indeed, the coordinate transformation does exist, but the conservation laws do not transform into conservation laws: the corresponding transformation of parameters necessarily involves the coordinates, as we now show. In polar and Cartesian coordinates, the parameters $(E,c_{\varphi})$ and $(c_{1},c_{2})$ constitute full sets of parameters, respectively, for the complete action functions. We have $2E=c_{1}^{2}+c_{2}^{2}$, but the parameter $c_{\varphi}$ cannot be expressed through $c_{1}$ and $c_{2}$ alone.
In fact, the momenta in polar and Cartesian coordinates are related by the transformation $p_{r}=\cos\varphi\,p_{1}+\sin\varphi\,p_{2},\qquad p_{\varphi}=-r\sin\varphi\,p_{1}+r\cos\varphi\,p_{2}.$ It implies the expression for the angular parameter $c_{\varphi}=-r\sin\varphi\,c_{1}+r\cos\varphi\,c_{2}=-yc_{1}+xc_{2}.$ That is, the transformation of parameters $(c_{1},c_{2})\mapsto(E,c_{\varphi})$ necessarily includes coordinates. We see that transformations between full sets of independent parameters may include coordinates, and separation of variables does depend on this choice. ∎

The examples considered teach us the following. First, a general solution of the Stäckel problem contains many functions parameterizing transformations between equivalent separable coordinate systems. In the configuration space we have ordinary coordinate transformations, which do not alter geometric invariants, e.g. the scalar curvature or squares of the curvature tensor. These functions are inessential and can be eliminated by a suitable canonical transformation. Note that these transformations do not exhaust the whole group of diffeomorphisms. Second, the transformation of one full set of parameters into another, $(c_{a})\mapsto(\tilde{c}_{a},x)$, may include coordinates. In this case, separation of variables proceeds differently, and the corresponding separable coordinates are not equivalent.

## 4 Linear conservation laws

Example 3.1 shows that separation of variables and conservation laws depend on the choice of the complete set of parameters. The right hand side of the Hamilton–Jacobi equation (7) depends on the one constant $E$, which is therefore distinguished. In this section, we choose a “symmetric” set of independent parameters which does not include $E$. This is done as follows. Fix a point ${\textsc{p}}\in{\mathbb{M}}$ and introduce the parameters $c_{a}:=W^{\prime}_{\alpha=a}\big{|}_{\textsc{p}}.$ Here we change the indices into Latin ones, $\alpha\mapsto a$, to stress that the set of parameters $(c_{a})$ is multiplied, after coordinate transformations, by the inverse Jacobi matrix at the fixed point p. The Hamilton–Jacobi equation at this point is $g^{ab}c_{a}c_{b}=2E,\qquad g^{ab}:=g^{\alpha\beta}\big{|}_{{\textsc{p}},\alpha=a,\beta=b},$ where the subscript p means restriction of the metric to this point. That is, $E(c)$ becomes the dependent parameter. Now we choose the collection $(c_{a})$, $a=1,\dotsc,n$, as the complete set of independent parameters. Moreover, the metric $g^{ab}$ at the point p can be transformed into diagonal form with $\pm 1$ on the diagonal, depending on the signature of the metric, by a suitable linear coordinate transformation. Denote this matrix by $\eta^{ab}:={\sf\,diag\,}(\underbrace{1,\dotsc,1,}_{r}\underbrace{-1,\dotsc,-1}_{s}),\qquad r+s=n,$ (30) where the pair $(r,s)$ designates the signature of the metric. Then the Hamilton–Jacobi equation takes the “symmetric” form $g^{\alpha\beta}W^{\prime}_{\alpha}W^{\prime}_{\beta}=\eta^{ab}c_{a}c_{b},$ (31) where each function $W^{\prime}_{\alpha}(x^{\alpha},c)$ depends on the single coordinate $x^{\alpha}$ and, possibly, on the full set of parameters $c$. The left hand side of this equation is a geometric invariant. Therefore we can regard the metric components $g^{\alpha\beta}$ and the functions $W^{\prime}_{\alpha}$ as transforming by the usual tensor transformation rules under coordinate changes, while all quantities with Latin indices remain unchanged. The geometric meaning of this set of parameters is as follows. One and only one geodesic goes through each point p in every direction.
Therefore the set of parameters $(x_{\textsc{p}},c)$ constitutes the Cauchy data and uniquely defines a geodesic line in some neighborhood of the point p. In the Hamiltonian formulation, the set $(c_{a})$ represents the momenta at the fixed point p. Now we derive necessary and sufficient conditions for complete separation of variables for the “symmetric” choice of parameters in Hamilton–Jacobi equation (31). There is a global vielbein $e^{\alpha}{}_{a}(x)$ on every topologically trivial manifold ${\mathbb{M}}\approx{\mathbb{R}}^{n}$: $g^{\alpha\beta}=e^{\alpha}{}_{a}e^{\beta}{}_{b}\eta^{ab},$ which is defined up to local ${\mathbb{O}}(r,s)$ (pseudo)rotations, which include coordinate reflections. The inverse vielbein is denoted by $e_{\alpha}{}^{a}$ ($e_{\alpha}{}^{b}e^{\beta}{}_{b}=\delta_{\alpha}^{\beta}$, $e_{\alpha}{}^{a}e^{\alpha}{}_{b}=\delta^{a}_{b}$). Then Eq. (31) is rewritten as $\eta^{ab}W^{\prime}_{a}W^{\prime}_{b}=\eta^{ab}c_{a}c_{b},\qquad W^{\prime}_{a}:=e^{\alpha}{}_{a}W^{\prime}_{\alpha}.$ It implies that the covectors $W^{\prime}_{a}$ and $c_{a}$ are necessarily related by some (pseudo)rotation matrix $S\in{\mathbb{O}}(r,s)$: $W^{\prime}_{a}(x)=S_{a}{}^{b}(x)c_{b}.$ Multiply this expression by the inverse vielbein $e_{\alpha}{}^{a}$ to obtain $W^{\prime}_{\alpha}=\tilde{e}_{\alpha}{}^{a}c_{a},\qquad\text{where}\qquad\tilde{e}_{\alpha}{}^{a}:=e_{\alpha}{}^{b}S_{b}{}^{a}.$ For complete separation of variables in the Hamilton–Jacobi equation, it is necessary and sufficient that the right hand side of this relation depends solely on $x^{\alpha}$. Drop the tilde sign and formulate the result.

###### Theorem 4.1.
Variables in Hamilton–Jacobi equation (31) with the “symmetric” set of independent parameters $(c_{a})$ are completely separated if and only if there exists a vielbein $e_{\alpha}{}^{a}(x^{\alpha})$ such that its components with fixed $\alpha$ depend solely on the one coordinate $x^{\alpha}$ for all values of the index $a$. Then every term in action function (4) is a primitive $\hphantom{\qquad\qquad{\sf\,ns\,}(\alpha)}W_{\alpha}(x_{\alpha},c):=\int\\!\\!dx^{\alpha}e_{\alpha}{}^{a}(x^{\alpha})c_{a},\qquad\qquad{\sf\,ns\,}(\alpha).$ (32) In the Hamiltonian formulation, there are $n$ conservation laws linear in momenta, $e^{\alpha}{}_{a}(x)p_{\alpha}=c_{a}$ (33) for all $a$.

###### Corollary.
Covariant components of the separable metric for Hamilton–Jacobi equation (31), $g_{\alpha\beta}(x^{\alpha},x^{\beta})=e_{\alpha}{}^{a}(x^{\alpha})e_{\beta}{}^{b}(x^{\beta})\eta_{ab},$ depend only on two coordinates. ∎

The proof is evident. Of course, the components of the inverse metric $g^{\alpha\beta}(x)$ and of the vielbein $e^{\alpha}{}_{a}(x)$ depend on all coordinates in general. That is, the conservation laws (33) are linear in momenta but depend on all coordinates.

###### Example 4.1.
Let us complicate the problem considered in example 3.1. Suppose that a rotationally symmetric metric is given in polar coordinates on a plane $(r,\varphi)\in{\mathbb{R}}^{2}$: $g_{\alpha\beta}=\begin{pmatrix}f^{2}&0\\\ 0&r^{2}\end{pmatrix},\qquad g^{\alpha\beta}=\begin{pmatrix}\frac{1}{f^{2}}&0\\\ 0&\frac{1}{r^{2}}\end{pmatrix},$ (34) where $f(r)>0$ is some function of the radius only.
Then the Hamilton–Jacobi equation in polar coordinates is $\frac{W^{\prime 2}_{r}}{f^{2}}+\frac{W^{\prime 2}_{\varphi}}{r^{2}}=2E,$ and the variables are completely separated: $W^{\prime}_{\varphi}=c_{\varphi},\qquad\frac{W^{\prime 2}_{r}}{f^{2}}+\frac{c_{\varphi}^{2}}{r^{2}}=2E.$ The parameters $c_{\varphi}$ and $E$, as before, correspond to conservation of the angular momentum and the energy. Let us look at what happens in Cartesian coordinates $(x,y)\in{\mathbb{R}}^{2}$. In polar coordinates the vielbein can be chosen in diagonal form $e_{\alpha}{}^{a}=\begin{pmatrix}f&0\\\ 0&r\end{pmatrix},\qquad e^{\alpha}{}_{a}=\begin{pmatrix}\frac{1}{f}&0\\\ 0&\frac{1}{r}\end{pmatrix}.$ In Cartesian coordinates it is written as $e_{x}{}^{a}=\left(\frac{x}{r}f,\,-\frac{y}{r}\right),\qquad e_{y}{}^{a}=\left(\frac{y}{r}f,\,\frac{x}{r}\right).$ The vielbein components $e_{x}{}^{a}$ explicitly depend on $y$, and the variables are not separated. According to theorem 4.1, separable coordinates exist if there is a vielbein $\tilde{e}_{\alpha}{}^{a}=e_{\alpha}{}^{b}S_{b}{}^{a},\qquad S_{b}{}^{a}=\begin{pmatrix}\cos\omega&-\sin\omega\\\ \sin\omega&~{}~{}\cos\omega\end{pmatrix}\in{\mathbb{S}}{\mathbb{O}}(2),$ where $\omega(x,y)$ is some rotation angle, such that the vielbein components $\tilde{e}_{x}{}^{a}(x)$ and $\tilde{e}_{y}{}^{a}(y)$ depend only on $x$ and $y$, respectively.

###### Proposition 4.1.
If $f^{2}\not\equiv 1$, then there is no rotation matrix $S$ such that the components of the first row of the vielbein $\tilde{e}_{x}{}^{a}$ depend solely on $x$ for all $a=1,2$, and the components of the second row $\tilde{e}_{y}{}^{a}$ depend solely on $y$.

###### Proof.
The two components of the vielbein in Cartesian coordinates after the rotation are $\begin{split}\tilde{e}_{x}{}^{1}=&~{}~{}\frac{x}{r}f\cos\omega-\frac{y}{r}\sin\omega,\\\ \tilde{e}_{x}{}^{2}=&-\frac{x}{r}f\sin\omega-\frac{y}{r}\cos\omega.\end{split}$ Their independence of $y$ is expressed by the equations $\displaystyle\partial_{y}\tilde{e}_{x}{}^{1}=$ $\displaystyle-\frac{xy}{r^{3}}f\cos\omega+\frac{xy}{r^{2}}f^{\prime}\cos\omega-\frac{x}{r}f\sin\omega\partial_{y}\omega-\frac{x^{2}}{r^{3}}\sin\omega-\frac{y}{r}\cos\omega\partial_{y}\omega=0,$ (35) $\displaystyle\partial_{y}\tilde{e}_{x}{}^{2}=$ $\displaystyle~{}~{}\frac{xy}{r^{3}}f\sin\omega-\frac{xy}{r^{2}}f^{\prime}\sin\omega-\frac{x}{r}f\cos\omega\partial_{y}\omega-\frac{x^{2}}{r^{3}}\cos\omega+\frac{y}{r}\sin\omega\partial_{y}\omega=0,$ (36) where $f^{\prime}:=df/dr$. Take the linear combination of these equations, $(35)\sin\omega+(36)\cos\omega=-\frac{x}{r}f\partial_{y}\omega-\frac{x^{2}}{r^{3}}=0\qquad\Rightarrow\qquad f\partial_{y}\omega=-\frac{x}{r^{2}},$ and substitute it in Eq. (35). Finally, we obtain an equation for $f$: $rff^{\prime}-f^{2}+1=0.$ Its general solution is $f^{2}=1+Cr^{2},\qquad C={\sf\,const}.$ Now we write down the independence of the second row of the vielbein of $x$, $\partial_{x}\tilde{e}_{y}{}^{a}=0$, and perform the same calculations as for Eqs. (35) and (36). Then we obtain $f\partial_{x}\omega=\frac{y}{r^{2}}$ with the same function $f$. Consequently, the rotation angle is defined by the system of equations $\partial_{x}\omega=\frac{y}{r^{2}f},\qquad\partial_{y}\omega=-\frac{x}{r^{2}f}.$ It is easily checked that the integrability conditions are not fulfilled for $f^{2}\neq 1$, and therefore the function $\omega(x,y)$ does not exist. Therefore the variables for metric (34) are separated in Cartesian coordinates only for $f^{2}\equiv 1$. ∎
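The last step of the proof is easy to double-check symbolically: for the general solution $f^{2}=1+Cr^{2}$, the mixed partial derivatives of $\omega$ disagree unless $C=0$. A minimal sympy sketch of this obstruction (all names are ours):

```python
import sympy as sp

x, y, C = sp.symbols('x y C', real=True)
r2 = x**2 + y**2
f = sp.sqrt(1 + C*r2)        # general solution of r*f*f' - f^2 + 1 = 0

wx = y/(r2*f)                # required value of partial_x omega
wy = -x/(r2*f)               # required value of partial_y omega

# Integrability demands equal mixed partials; the obstruction is:
obstruction = sp.simplify(sp.diff(wx, y) - sp.diff(wy, x))
print(obstruction)           # -C/(C*x**2 + C*y**2 + 1)**(3/2): zero only for C = 0
```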
Thus the variables in the Hamilton–Jacobi equation for metric (34) are completely separated in polar coordinates but not in Cartesian ones. This is a natural result because metric (34) is not translationally invariant. ∎

The moral of this example is that failure of separability for the “symmetric” choice of parameters does not mean that separation cannot happen for other choices of independent parameters. Consequently, theorem 4.1 does not exhaust all possibilities of variable separation. Conservation laws (33) can be significantly simplified by the canonical transformation $(x^{\alpha},p_{\alpha})\mapsto(X^{a},P_{a})$ with the generating function depending on old coordinates $(x^{\alpha})$ and new momenta $(P_{a})$, $\hphantom{\qquad\qquad{\sf\,ns\,}(\alpha)}S_{2}:=\sum_{\alpha=1}^{n}I_{\alpha},\qquad I_{\alpha}:=\int\\!\\!dx^{\alpha}\,e_{\alpha}{}^{a}(x^{\alpha})P_{a},\qquad\qquad{\sf\,ns\,}(\alpha),$ (37) where $I_{\alpha}$ are primitives. Then $p_{\alpha}=\frac{\partial S_{2}}{\partial x^{\alpha}}=e_{\alpha}{}^{a}P_{a},\qquad X^{a}=\frac{\partial S_{2}}{\partial P_{a}}=\sum_{\alpha=1}^{n}\int\\!\\!dx^{\alpha}e_{\alpha}{}^{a}.$ This is exactly the coordinate transformation $(x^{\alpha})\mapsto\big{(}X^{a}(x)\big{)}$. Indeed, the Jacobi matrix is $\partial_{\alpha}X^{a}=e_{\alpha}{}^{a},$ and the momenta components are transformed as a covariant vector. In the new coordinates, the Hamiltonian is $H_{0}=\frac{1}{2}\eta^{ab}P_{a}P_{b}.$ It implies $n$ independent linear conservation laws $P_{a}=c_{a}$ (38) (all momenta are conserved). Since the metric components are constant in these coordinates, there is the maximal number $n$ of linearly independent commuting Killing vectors $\partial_{a}$ on ${\mathbb{M}}$. In this way, we have proved

###### Theorem 4.2.
Variables in the Hamilton–Jacobi equation (31) with the “symmetric” choice of parameters are completely separable if and only if there exist $n$ commuting Killing vector fields on $({\mathbb{M}},g)$ that are linearly independent at each point. Then there is a coordinate system in which the metric components are constant, and the conservation laws have the form (38).

This result is not surprising. Indeed, if the vielbein components $e_{\alpha}{}^{a}(x^{\alpha})$ depend on the single coordinate $x^{\alpha}$ for all $a$, then the curvature tensor vanishes. These calculations are most easily performed in Cartan variables. The nonholonomicity components are identically zero: $c_{\alpha\beta}{}^{a}:=-\partial_{\alpha}e_{\beta}{}^{a}+\partial_{\beta}e_{\alpha}{}^{a}\equiv 0.$ Therefore the corresponding ${\mathbb{S}}{\mathbb{O}}(r,s)$ connection is also zero and, consequently, the curvature vanishes. Therefore the manifold ${\mathbb{M}}$ is locally (pseudo)Euclidean. This is enough for complete separation of variables in Cartesian coordinates. Thus, if complete separation of variables occurs for the “symmetric” choice of independent parameters, then the manifold ${\mathbb{M}}$ is necessarily locally flat.

## 5 Quadratic conservation laws and coisotropic coordinates

Now we consider other possibilities for complete separation of variables in the Hamilton–Jacobi equation. Remember that our aim is not to find all complete integrals, of which there are infinitely many, but to find at least one of them for a given type of separable metric. Therefore our strategy is the following.
1. Describe all possible types of separable metrics.
2. Choose independent parameters.
3. Specify the class of separating functions in each case.
4. Solve the functional Hamilton–Jacobi equation.
5. Choose the canonical separable metric in every equivalence class, find the complete integral and the full set of conservation laws.
6. Prove that any additional conservation law is functionally dependent on the full set of previously obtained conservation laws, at least locally.

First, we introduce notation, choose parameters, and prove a simple lemma which is important for the further analysis. Suppose that we have exactly $0\leq{\textsc{n}}\leq n$ commuting Killing vector fields and not more, and that the variables are nevertheless completely separated. We assume that the variables corresponding to the Killing vectors are separated and that they are the first n coordinates. There are only two possibilities left: the diagonal components of separable metrics can either differ from zero or vanish, the latter being possible only for indefinite metrics. Assume that the number of nonzero diagonal metric components is equal to $0\leq{\textsc{m}}\leq n$, and that they precede the zero components. We shall see in what follows that the inequality $n-{\textsc{n}}-{\textsc{m}}\leq{\textsc{n}}\qquad\Leftrightarrow\qquad 2{\textsc{n}}+{\textsc{m}}\leq n$ (39) must hold, otherwise the metric becomes degenerate. A curve $x^{\alpha}(\tau)$, $\tau\in{\mathbb{R}}$, is called isotropic if its tangent vector $\dot{x}^{\alpha}$ is null, i.e. $g_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}\equiv 0$. By analogy, we call a coordinate line coisotropic if $g^{\alpha\alpha}\equiv 0$ (no summation). Of course, coisotropic lines exist only on pseudo-Riemannian manifolds. Thus the last $n-{\textsc{n}}-{\textsc{m}}$ coordinates are coisotropic. We shall see in what follows that the nonzero diagonal metric components correspond to indecomposable quadratic conservation laws. Introduce notation. Divide all coordinates into three groups, $(x^{\alpha},y^{\mu},z^{\varphi})\in{\mathbb{M}}$, where indices from the beginning, middle, and end of the Greek alphabet take the following values: $\displaystyle\alpha,\beta,\dotsc=$ $\displaystyle 1,\dotsc,{\textsc{n}}$ $\displaystyle\text{(commuting Killing vectors)},$ (40) $\displaystyle\mu,\nu,\dotsc=$ $\displaystyle{\textsc{n}}+1,\dotsc,{\textsc{n}}+{\textsc{m}}$ $\displaystyle(\text{quadratic conservation laws},~{}g^{\mu\mu}\neq 0),$ $\displaystyle\varphi,\phi,\dotsc=$ $\displaystyle{\textsc{n}}+{\textsc{m}}+1,\dotsc,n$ $\displaystyle(\text{coisotropic coordinates},~{}g^{\varphi\varphi}\equiv 0).$ In this way we have described all possible types of separable metrics. Suppose that all variables corresponding to Killing vector fields are separated, and the first n coordinates are chosen cyclic. Then the corresponding separating functions and conservation laws are $W^{\prime}_{\alpha}=c_{\alpha}\qquad\text{and}\qquad p_{\alpha}=c_{\alpha}.$ All parameters are also divided into three groups $(c,d,a)$. The parameters $c$ have already been introduced, and the parameters $d$ and $a$ are defined as follows. At a fixed point ${\textsc{p}}\in{\mathbb{M}}$, we set $\displaystyle d_{ii}:=W^{\prime 2}_{\mu}\big{|}_{{\textsc{p}},\mu=i},$ $\displaystyle i={\textsc{n}}+1,\dotsc,{\textsc{n}}+{\textsc{m}},$ (41) $\displaystyle a_{r}:=W^{\prime}_{\varphi}\big{|}_{{\textsc{p}},\varphi=r},$ $\displaystyle r={\textsc{n}}+{\textsc{m}}+1,\dotsc,n,$ which is always possible. For convenience, we consider the parameters $d_{ii}$ as the diagonal elements of a matrix $d=(d_{ij})$, $d_{ij}=0$ for $i\neq j$ (to preserve the number of independent parameters). Of course, all diagonal elements are positive, $d_{ii}>0$.
The parameters $d$ for quadratic conservation laws and $a$ for coisotropic coordinates are enumerated by Latin indices from the middle and the end of the alphabet, respectively. At the fixed point p, the Hamilton–Jacobi equation becomes $g^{\alpha\beta}_{\textsc{p}}c_{\alpha}c_{\beta}+2g^{\alpha i}_{\textsc{p}}c_{\alpha}\sqrt{d_{ii}}+2g^{\alpha r}_{\textsc{p}}c_{\alpha}a_{r}+g^{ij}_{\textsc{p}}\sqrt{d_{ii}d_{jj}}+2g^{ir}_{\textsc{p}}\sqrt{d_{ii}}\,a_{r}+g^{rs}_{\textsc{p}}a_{r}a_{s}=2E,$ where $g^{rr}=0$. Therefore the parameters $(c,d,a)$ and $E$ are simply related. We choose the independent set of parameters in one of the following two ways. Case 1. The energy $E$ corresponds to a coisotropic coordinate. Redefine $a_{n}:=2E$. Then the set of independent parameters is $(c_{\alpha},d_{ij},a_{{\textsc{n}}+{\textsc{m}}+1},\dotsc,a_{n-1},a_{n}:=2E).$ (42) Case 2. The energy $E$ corresponds to an indecomposable quadratic conservation law. Redefine $d_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}:=2E$. Then the set of independent parameters is $(c_{\alpha},d_{{\textsc{n}}+1\,{\textsc{n}}+1},\dotsc,d_{{\textsc{n}}+{\textsc{m}}-1\,{\textsc{n}}+{\textsc{m}}-1},d_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}:=2E,a_{r}).$ (43) This is always possible because the redefinition of the parameters $a_{n}$ and $d_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}$ leads to multiplication of the separating functions $W^{\prime}_{n}$ and $W^{\prime}_{{\textsc{n}}+{\textsc{m}}}$ by some nonzero constants. We look for solutions of the Hamilton–Jacobi equation with separating functions $W^{\prime 2}_{\mu}$ and $W^{\prime}_{\varphi}$ within the class of functions linear in the parameters $d$ and $a$: $\displaystyle W^{\prime 2}_{\mu}:=$ $\displaystyle b_{\mu\mu}{}^{ij}(y^{\mu},c)d_{ij}+b_{\mu\mu}{}^{r}(y^{\mu},c)a_{r}+k_{\mu\mu}(y^{\mu},c)>0,$ (44) $\displaystyle W^{\prime}_{\varphi}:=$ $\displaystyle b_{\varphi}{}^{ij}(z^{\varphi},c)d_{ij}+b_{\varphi}{}^{r}(z^{\varphi},c)a_{r}+l_{\varphi}(z^{\varphi},c),$ (45) where $b_{\mu\mu}{}^{ii}(y^{\mu},c)$, $b_{\mu\mu}{}^{r}(y^{\mu},c)$, $b_{\varphi}{}^{ii}(z^{\varphi},c)$, and $b_{\varphi}{}^{r}(z^{\varphi},c)$ are some functions of a single coordinate and, possibly, of the first group of parameters $c$. We suppose that the matrices $b_{\mu\mu}{}^{ii}$, whose elements are enumerated by the pairs of indices $\mu\mu$ and $ii$, and $b_{\varphi}{}^{r}$ are nondegenerate. This choice is justified by the result: the functional Hamilton–Jacobi equation has a solution within this class of separating functions, and our aim is to find at least one solution. Then the Hamilton–Jacobi equation becomes $g^{\alpha\beta}c_{\alpha}c_{\beta}+2g^{\alpha\mu}c_{\alpha}W^{\prime}_{\mu}+2g^{\alpha\varphi}c_{\alpha}W^{\prime}_{\varphi}+g^{\mu\nu}W^{\prime}_{\mu}W^{\prime}_{\nu}+2g^{\mu\varphi}W^{\prime}_{\mu}W^{\prime}_{\varphi}+g^{\varphi\phi}W^{\prime}_{\varphi}W^{\prime}_{\phi}=2E.$ (46) This equality must hold for all values of coordinates and parameters. For the terms quadratic in $a$, we have $g^{\varphi\phi}b_{\varphi}{}^{r}b_{\phi}{}^{s}a_{r}a_{s}\equiv 0\qquad\Rightarrow\qquad g^{\varphi\phi}\equiv 0,$ because the matrix $b_{\varphi}{}^{r}$ is nondegenerate. That is, the whole metric block for coisotropic coordinates must vanish: $g^{\varphi\phi}\equiv 0$ for all $\varphi$ and $\phi$. There are irrational functions of $d$ on the left hand side of Eq. (46) which cannot cancel. Therefore the equalities $g^{\alpha\mu}\equiv 0$ and $g^{\mu\varphi}\equiv 0$ must hold. In addition, the block $g^{\mu\nu}$ must be diagonal.
Thus we have proved

###### Lemma 5.1.
If the independent parameters are chosen as in Eqs. (42) or (43) and the separating functions have the form (44), (45), then the separable metric has the block form $g^{**}=\begin{pmatrix}g^{\alpha\beta}(y,z)&0&g^{\alpha\phi}(y,z)\\\ 0&g^{\mu\nu}(y,z)&0\\\ g^{\varphi\beta}(y,z)&0&0\end{pmatrix},$ (47) where the block $g^{\mu\nu}$ is diagonal, and the star takes all values: $*:=(\alpha,\mu,\varphi)$.

In this way, the first three items of our strategy are fulfilled. We shall see in what follows that the variables are separated differently depending on which group of parameters contains the energy $E$: either $a_{n}=2E$ (case 1) or $d_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}=2E$ (case 2). Therefore each separable metric belongs to one of the equivalence classes $[{\textsc{n}},{\textsc{m}},n-{\textsc{n}}-{\textsc{m}}]_{1,2}$, where the index shows the location of $E$ if it is considered as an independent parameter, i.e. if ${\textsc{m}}\neq 0$ and/or $n-{\textsc{n}}-{\textsc{m}}\neq 0$. All Riemannian positive definite metrics belong to the classes $[{\textsc{n}},n-{\textsc{n}},0]_{2}$. One and the same metric in different separable coordinates can be a member of different classes. For example, the Euclidean metric on a plane in Cartesian and polar coordinates belongs to the classes $[2,0,0]$ and $[1,1,0]_{2}$, respectively (see example 3.1). In the Hamiltonian formalism, complete separation of variables leads to $n$ independent conservation laws in involution. To derive them in explicit form, we have to replace $W^{\prime}_{\mu}\mapsto p_{\mu}$, $W^{\prime}_{\varphi}\mapsto p_{\varphi}$ and solve equalities (44), (45) with respect to the parameters $d_{ii}$ and $a_{r}$. This imposes some restrictions on the matrix $b$ which will be formulated later. Now we solve the functional Hamilton–Jacobi equation with respect to the separating functions and metric components for different values of m and n.

### 5.1 Quadratic conservation laws

Suppose that the variables in the Hamilton–Jacobi equation are completely separated, but Killing vectors and coisotropic coordinates are absent. Then ${\textsc{n}}=0$, ${\textsc{m}}=n$, and the Hamilton–Jacobi equation is $g^{\mu\nu}W^{\prime}_{\mu}W^{\prime}_{\nu}=2E,$ (48) where the separating functions $W^{\prime}_{\mu}(y^{\mu},d)$ depend on a single coordinate $y^{\mu}$ and, in the general case, on all independent parameters $d:=(d_{ij})={\sf\,diag\,}(d_{11},\dotsc,d_{n-1\,n-1},d_{nn}:=2E).$ (49) Due to lemma 5.1 the separable metric must be diagonal. The separating functions have the form (44): $W^{\prime 2}_{\mu}(y^{\mu},d)=b_{\mu\mu}{}^{ij}(y^{\mu})d_{ij}>0,$ (50) where $b_{\mu\mu}{}^{ii}(y^{\mu})$ is some invertible $(n\times n)$-matrix whose rows depend on a single coordinate, and the inequalities restrict the form of the matrix $b$ for given parameters $d$. Without loss of generality, we assume $b_{\mu\nu}{}^{ij}\big{|}_{i\neq j}\equiv 0,\qquad b_{\mu\nu}{}^{ij}\big{|}_{\mu\neq\nu}\equiv 0,$ (51) because the parameter matrix $d_{ij}$ and the metric $g^{\mu\nu}$ are diagonal. Differentiate the Hamilton–Jacobi equation (48) with respect to all parameters.
As a result, we obtain a system of $n$ linear equations for the $n$ diagonal metric components: $\begin{split}\sum_{\mu=1}^{n}g^{\mu\mu}W^{\prime}_{\mu}\frac{\partial W^{\prime}_{\mu}}{\partial d_{ii}}=&0,\qquad i=1,\dotsc,n-1,\\\ \sum_{\mu=1}^{n}g^{\mu\mu}W^{\prime}_{\mu}\frac{\partial W^{\prime}_{\mu}}{\partial E}=&1.\end{split}$ (52) The determinant of this system of equations for the metric components differs from zero: $\hphantom{\qquad{\sf\,ns\,}(\alpha)}\det\left(W^{\prime}_{\mu}\frac{\partial W^{\prime}_{\mu}}{\partial d_{ii}}\right)=W^{\prime}_{1}\dotsc W^{\prime}_{n}\det\left(\frac{\partial W^{\prime}_{\mu}}{\partial d_{ii}}\right)\neq 0,\qquad{\sf\,ns\,}(\mu),$ where the matrix indices are enumerated by the pairs $\mu\mu$ and $ii$; the last determinant differs from zero due to condition (3), which in our case takes the form $\det\left(\frac{\partial W^{\prime}_{\mu}}{\partial d_{ii}}\right)\neq 0.$ (53) Consequently, the system of equations (52) has a unique solution. To find the metric components, we substitute Eqs. (50) into the left hand side of the Hamilton–Jacobi equation (48): $g^{\mu\nu}b_{\mu\nu}{}^{ij}d_{ij}=2E.$ (54) Since $d_{nn}=2E$, it implies that the diagonal metric components are $g^{\mu\mu}=b_{nn}{}^{\mu\mu},$ (55) where $b_{nn}{}^{\mu\mu}(y)$ is the last row of the inverse matrix $b_{ii}{}^{\mu\mu}$: $\sum_{i=1}^{n}b_{\nu\nu}{}^{ii}b_{ii}{}^{\mu\mu}=b_{\nu\nu}{}^{ij}b_{ij}{}^{\mu\mu}=\delta^{\mu}_{\nu},\qquad\sum_{\mu=1}^{n}b_{ii}{}^{\mu\mu}b_{\mu\mu}{}^{jj}=b_{ii}{}^{\mu\nu}b_{\mu\nu}{}^{jj}=\delta^{j}_{i},$ whose elements depend on all coordinates in general. In addition, we must require that all elements of the last row in (55) differ from zero, otherwise the metric becomes degenerate. Thus the problem is solved, and the complete integral of the Hamilton–Jacobi equation is $W(y,d)=\sum_{\mu=1}^{n}W_{\mu}(y^{\mu},d),$ where the right hand side contains the primitives $\hphantom{\qquad\qquad{\sf\,ns\,}(\alpha)}W_{\mu}(y^{\mu},d):=\int\\!\\!dy^{\mu}\sqrt{b_{\mu\mu}{}^{ij}d_{ij}},\qquad\qquad{\sf\,ns\,}(\mu),\quad\forall\mu.$ The integration constants in the last equalities are inessential because the action function is defined up to a constant. Equalities (50) imply $n$ quadratic conservation laws $b_{ii}{}^{\mu\nu}p_{\mu}p_{\nu}=d_{ii}$ (56) in the Hamiltonian formulation. In general, the left hand side depends on all coordinates and momenta. These conservation laws correspond to Killing tensors of second rank. Rewrite Eq. (56) in the form $p_{\mu}^{2}=b_{\mu\mu}{}^{ij}d_{ij}.$ It implies that the quadratic conservation laws are indecomposable if and only if each row of the matrix $b_{\mu\mu}{}^{ii}$ contains at least two nonzero, not proportional elements. Otherwise, the square root can be taken, and a linear conservation law appears, which contradicts the assumption on the absence of Killing vectors. Thus we have proved

###### Theorem 5.1.
If all diagonal inverse metric components differ from zero, $g^{\mu\mu}\neq 0$, and Killing vector fields and coisotropic coordinates are absent, then there exist independent parameters (49) and separating functions (50) such that all conservation laws are quadratic, and the Hamilton–Jacobi equation admits complete separation of variables if and only if the inverse metric $g^{\mu\nu}$ is diagonal with components (55), where $b_{ii}{}^{\mu\mu}$ is the inverse of an arbitrary nondegenerate matrix $b_{\mu\mu}{}^{ii}(y^{\mu})$ whose rows depend on a single coordinate $y^{\mu}$ and contain at least two nonzero, not proportional elements.
In addition, the inequalities $b_{nn}{}^{\mu\mu}\neq 0$ must hold for all values of the index $\mu$, and the arbitrary functions $b_{\mu\mu}{}^{ii}$ must be chosen in such a way that the system of equations for the separating functions $W^{\prime}_{\mu}$ (50) has real solutions for the given values of the parameters $d$.

Consequently, the diagonal metric components are parameterized by $n^{2}$ arbitrary functions of a single coordinate, $b_{\mu\mu}{}^{ii}(y^{\mu})$, satisfying three conditions: $\det b_{\mu\mu}{}^{ii}\neq 0$, $b_{nn}{}^{\mu\mu}\neq 0$ for all $\mu$, and each row of the matrix $b_{\mu\mu}{}^{ii}$ must contain at least two nonzero, not proportional elements. Note that the components of the inverse matrix $b_{ii}{}^{\mu\mu}(y)$ depend on all coordinates in general. In fact, we have proved the following. The right hand sides of Eqs. (50) contain arbitrary functions of single coordinates, linear and homogeneous in $d$. Therefore the separable metric is found for arbitrary indecomposable quadratic conservation laws. All restrictions on the arbitrary functions in the conservation laws follow from the nondegeneracy of the separable metric, the indecomposability of the quadratic Killing tensors, and the existence of solutions for the separating functions $W^{\prime}_{\mu}$ in Eqs. (50). In the Hamiltonian formulation, we have $n$ involutive quadratic conservation laws (56). Consequently, the Hamiltonian system is Liouville integrable.

###### Example 5.1.
Consider the two-dimensional case $(y^{1},y^{2})\in{\mathbb{R}}^{2}$ and choose the matrix $b$ in the general form $b_{\mu\mu}{}^{ii}:=\begin{pmatrix}\phi_{11}(y^{1})&\phi_{12}(y^{1})\\\ \phi_{21}(y^{2})&\phi_{22}(y^{2})\end{pmatrix}\qquad\Rightarrow\qquad b_{ii}{}^{\mu\mu}:=\frac{1}{\det b}\begin{pmatrix}~{}~{}\phi_{22}&-\phi_{12}\\\ -\phi_{21}&~{}~{}\phi_{11}\end{pmatrix}$ (57) and assume that $\det b=\phi_{11}\phi_{22}-\phi_{12}\phi_{21}\neq 0.$ (58) If all matrix elements differ from zero, then the conditions of theorem 5.1 hold: the elements of the first and second rows depend on $y^{1}$ and $y^{2}$, respectively, and each row contains two nonzero elements assumed to be not proportional. The separable metric corresponding to the matrix $b$ is parameterized by four functions of single coordinates: $g^{\mu\nu}=\frac{1}{\det b}\begin{pmatrix}-\phi_{21}&0\\\ ~{}~{}0&\phi_{11}\end{pmatrix}.$ (59) Now we consider particular cases. Let $\phi_{11}\equiv 1$ and $\phi_{21}\equiv-1$. Then $b_{\mu\mu}{}^{ii}:=\begin{pmatrix}~{}~{}1&\phi_{12}(y^{1})\\\ -1&\phi_{22}(y^{2})\end{pmatrix}\qquad\Rightarrow\qquad b_{ii}{}^{\mu\mu}:=\frac{1}{\phi_{12}+\phi_{22}}\begin{pmatrix}\phi_{22}&-\phi_{12}\\\ 1&1\end{pmatrix}.$ The respective inverse metric is conformally Euclidean: $g^{\mu\nu}=\frac{1}{\phi_{12}+\phi_{22}}\begin{pmatrix}1&0\\\ 0&1\end{pmatrix}.$ (60) The Hamilton–Jacobi equation becomes $W^{\prime 2}_{1}+W^{\prime 2}_{2}=2E(\phi_{12}+\phi_{22}).$ Complete separation of variables yields $W^{\prime 2}_{1}=d_{11}+2E\phi_{12},\qquad W^{\prime 2}_{2}=-d_{11}+2E\phi_{22},$ where $d_{11}$ and $E$ are two independent parameters. The conservation laws are quadratic: $\frac{1}{\phi_{12}+\phi_{22}}\big{(}\phi_{22}p_{1}^{2}-\phi_{12}p_{2}^{2}\big{)}=d_{11},\qquad\frac{1}{\phi_{12}+\phi_{22}}\big{(}p_{1}^{2}+p_{2}^{2}\big{)}=2E.$ (61) In general, they are indecomposable for $d_{11}\neq 0$, $E>0$ and nontrivial functions $\phi_{12}$, $\phi_{22}$. The parameter domain for $d_{11}$, $E$ and the acceptable form of the arbitrary functions are defined by the following inequalities: $d_{11}+2E\phi_{12}>0,\qquad-d_{11}+2E\phi_{22}>0.$ If, for example, $d_{11}>0$ and $E>0$, then $\phi_{12}>-\frac{d_{11}}{2E},\qquad\phi_{22}>\frac{d_{11}}{2E}.$ Consequently, this example is the particular two-dimensional case of the Liouville system for a Riemannian metric considered in example 2.1.
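As an illustration, the involution of the two quadratic conservation laws (61), viewed as functions on phase space, can be checked with sympy; note that the second law is just $2H$, so only the first bracket is nontrivial. The Python names are ours.

```python
import sympy as sp

y1, y2, p1, p2 = sp.symbols('y1 y2 p1 p2', real=True)
phi12 = sp.Function('phi12')(y1)
phi22 = sp.Function('phi22')(y2)

def pb(f, g):
    """Canonical Poisson bracket on T*R^2."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in ((y1, p1), (y2, p2)))

Phi = phi12 + phi22
H = (p1**2 + p2**2)/(2*Phi)                 # 2H is the second law in (61)
F = (phi22*p1**2 - phi12*p2**2)/Phi         # d_11, the first law in (61)

print(sp.simplify(pb(F, H)))                # prints 0: the laws are in involution
```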
Now put $\phi_{11}\equiv-1$ and $\phi_{21}\equiv-1$. Then $b_{\mu\mu}{}^{ii}:=\begin{pmatrix}-1&\phi_{12}(y^{1})\\\ -1&\phi_{22}(y^{2})\end{pmatrix}\qquad\Rightarrow\qquad b_{ii}{}^{\mu\mu}:=\frac{1}{\phi_{12}-\phi_{22}}\begin{pmatrix}\phi_{22}&-\phi_{12}\\\ 1&-1\end{pmatrix}.$ These matrices imply a conformally Lorentzian metric: $g^{\mu\nu}=\frac{1}{\phi_{12}-\phi_{22}}\begin{pmatrix}1&~{}~{}0\\\ 0&-1\end{pmatrix}.$ (62) The respective Hamilton–Jacobi equation is $W^{\prime 2}_{1}-W^{\prime 2}_{2}=2E(\phi_{12}-\phi_{22}),$ and the variables are separated: $W^{\prime 2}_{1}=-d_{11}+2E\phi_{12},\qquad W^{\prime 2}_{2}=-d_{11}+2E\phi_{22}.$ The conservation laws are indecomposable for $d_{11}\neq 0$, $E\neq 0$, and nonconstant functions $\phi_{12}$, $\phi_{22}$. They are quadratic: $\frac{1}{\phi_{12}-\phi_{22}}\big{(}\phi_{22}p_{1}^{2}-\phi_{12}p_{2}^{2}\big{)}=d_{11},\qquad\frac{1}{\phi_{12}-\phi_{22}}\big{(}p_{1}^{2}-p_{2}^{2}\big{)}=2E.$ (63) The parameter domain of definition and the form of the arbitrary functions are defined by the inequalities $-d_{11}+2E\phi_{12}>0,\qquad-d_{11}+2E\phi_{22}>0.$ If $E>0$, then there are two possibilities when $\phi_{12}-\phi_{22}\neq 0$: $\phi_{12}>\phi_{22}>\frac{d_{11}}{2E},\qquad\phi_{22}>\phi_{12}>\frac{d_{11}}{2E},$ for $g^{11}>0$ and $g^{11}<0$, respectively. This is also the two-dimensional Liouville system considered in example 2.1, but for a metric of Lorentzian signature. Let $\phi_{12}\equiv 0$ and $\phi_{21}\equiv-1$. Then $b_{\mu\mu}{}^{ii}:=\begin{pmatrix}~{}~{}\phi_{11}(y^{1})&0\\\ -1&\phi_{22}(y^{2})\end{pmatrix}\qquad\Rightarrow\qquad b_{ii}{}^{\mu\mu}:=\frac{1}{\phi_{11}\phi_{22}}\begin{pmatrix}\phi_{22}&0\\\ 1&\phi_{11}\end{pmatrix}.$ The conditions of theorem 5.1 are not fulfilled because the first row of the matrix $b_{\mu\mu}{}^{ii}$ contains only one nonzero element. The respective inverse metric is $g^{\mu\nu}=\frac{1}{\phi_{11}\phi_{22}}\begin{pmatrix}1&0\\\ 0&\phi_{11}\end{pmatrix}.$ (64) The Hamilton–Jacobi equation for this metric, $W^{\prime 2}_{1}+\phi_{11}W^{\prime 2}_{2}=2E\phi_{11}\phi_{22},$ admits complete separation of variables: $W^{\prime 2}_{1}=\phi_{11}d_{11},\qquad W^{\prime 2}_{2}=-d_{11}+2E\phi_{22}.$ The conservation laws are quadratic: $\frac{1}{\phi_{11}}p_{1}^{2}=d_{11},\qquad\frac{1}{\phi_{11}\phi_{22}}\big{(}p_{1}^{2}+\phi_{11}p_{2}^{2}\big{)}=2E.$ (65) The parameter domain of definition and the admissible form of the arbitrary functions are defined by the inequalities $\phi_{11}d_{11}>0,\qquad-d_{11}+2E\phi_{22}>0.$ For $d_{11}>0$ and $E>0$, for example, they restrict the arbitrary functions: $\phi_{11}>0,\qquad\phi_{22}>\frac{d_{11}}{2E}.$ In this case the first conservation law (65) is decomposable: it is the square of the linear conservation law $p_{1}/\sqrt{\phi_{11}}=\sqrt{d_{11}}$. Consequently, we have in fact one linear and one quadratic conservation law. ∎

Now we simplify the form of the matrix $b_{\mu\mu}{}^{ii}$ using a canonical transformation. It turns out that one of the nonzero elements in each row of the matrix $b_{\mu\mu}{}^{ii}$ can be transformed to unity.
For example, let $b_{\mu\mu}{}^{ii}\neq 0$ for given $\mu$ and $i$. For definiteness we assume that it is the diagonal element, $\mu=i$. Choose the generating function of the canonical transformation for one pair of canonically conjugate variables $(y^{\mu},p_{\mu})\mapsto(Y^{i},P_{i})$ as $S_{2}(y,P):=\int\!\!dy^{\mu}\sqrt{|b_{\mu\mu}{}^{ii}|}\,P_{i},\qquad{\sf\,ns\,}(\mu,i).$ (66) Note that it must be linear in $P$, because otherwise the coordinate transformation $y\mapsto Y(y)$ would depend on the momenta, which contradicts the equivalence relation. Then we get for fixed $\mu$: $p_{\mu}=\frac{\partial S_{2}}{\partial y^{\mu}}=\sqrt{|b_{\mu\mu}{}^{ii}|}\,P_{i},\qquad Y^{i}=\frac{\partial S_{2}}{\partial P_{i}}=\int\!\!dy^{\mu}\sqrt{|b_{\mu\mu}{}^{ii}|},\qquad{\sf\,ns\,}(\mu,i),$ and Eq. (50) becomes $|b_{\mu\mu}{}^{ii}|\,\tilde{W}^{\prime 2}_{i}=b_{\mu\mu}{}^{ii}\,d_{ii}+\sum_{j\neq i}b_{\mu\mu}{}^{jj}\,d_{jj}$ for the transformed separating function $\tilde{W}_{i}$. Dividing it by $|b_{\mu\mu}{}^{ii}|$, we obtain the quadratic equality $\tilde{W}^{\prime 2}_{i}=\pm d_{ii}+\sum_{j\neq i}\tilde{b}_{ii}{}^{jj}\,d_{jj}$ (67) with some new functions $\tilde{b}_{ii}{}^{jj}(Y^{i})$. A similar transformation can be performed for each row. So, without loss of generality, one of the nonzero elements in each row can be set to unity, because the signs $\pm 1$ can be attributed to the parameters $d$. It means that the canonical separable metric is parameterized by $n^{2}-n$ functions of single coordinates. In fact, this statement is evident. Indeed, each Eq. (50) depends only on a single coordinate, and one of the nonzero elements can be transformed to $\pm 1$ by the coordinate transformation $y^{\mu}\mapsto\tilde{y}^{\mu}(y^{\mu})$.
### 5.2 Linear and quadratic conservation laws
Assume that the metric admits exactly ${\textsc{n}}$ commuting Killing vector fields, $1\leq{\textsc{n}}<n$, and no more, and that coisotropic coordinates are absent, $n-{\textsc{n}}-{\textsc{m}}=0$. There is one linear conservation law for each Killing vector. Then we need additional $n-{\textsc{n}}$ independent involutive conservation laws to provide complete integrability. These conservation laws are quadratic, as was shown in the preceding section. Now we have two groups of coordinates $(x^{\alpha},y^{\mu})\in{\mathbb{M}}$ and two groups of independent parameters $c$ and $d$ (43). The separable metric (47) in this case is block diagonal, $g^{**}=\begin{pmatrix}g^{\alpha\beta}&0\\ 0&g^{\mu\nu}\end{pmatrix},$ (68) where the lower block $g^{\mu\nu}$ is diagonal. The functions $k$ in equality (44) must be quadratic in $c$ as a consequence of the Hamilton–Jacobi equation. Therefore the separating functions are $W^{\prime 2}_{\mu}=b_{\mu\mu}{}^{ij}d_{ij}+k^{\alpha\beta}_{\mu\mu}c_{\alpha}c_{\beta}>0,$ (69) where $b_{\mu\mu}{}^{ii}(y^{\mu})$ is some nondegenerate matrix whose elements in each row depend on the single coordinate corresponding to the number of the row, and the functions $k^{\alpha\beta}_{\mu\mu}(y^{\mu})$ are arbitrary and symmetric in the indices $\alpha$ and $\beta$ but can be nondiagonal in them. Their indices can be written one under another because they will never be lowered or raised. There is a new property here. We shall see that the term $k^{\alpha\beta}_{\mu\mu}c_{\alpha}c_{\beta}$ in Eq. (69) must not vanish if the metric is to be nondegenerate. Therefore the requirement that each row of the matrix $b_{\mu\mu}{}^{ii}$ contain at least two non-proportional elements is unnecessary.
After separating the first group of coordinates, the Hamilton–Jacobi equation becomes $g^{\alpha\beta}c_{\alpha}c_{\beta}+g^{\mu\nu}(b_{\mu\nu}{}^{ij}d_{ij}+k^{\alpha\beta}_{\mu\nu}c_{\alpha}c_{\beta})=2E,$ (70) where the matrix $g^{\mu\nu}$ is diagonal. We get the previous expression (55) for the elements of the second block because $d_{nn}:=2E$. Therefore $g^{\alpha\beta}c_{\alpha}c_{\beta}+g^{\mu\nu}k^{\alpha\beta}_{\mu\nu}c_{\alpha}c_{\beta}=0.$ (71) This equality must be fulfilled for all $c$, and it defines the upper block of the inverse separable metric (68): $g^{\alpha\beta}=-k^{\alpha\beta}_{\mu\nu}g^{\mu\nu},$ (72) where $k^{\alpha\beta}_{\mu\mu}(y^{\mu})$ are arbitrary functions of single coordinates providing nondegeneracy of $g^{\alpha\beta}$ and positivity of the right hand sides of Eqs. (69). Thus we have proved
###### Theorem 5.2.
If the separable metric admits exactly ${\textsc{n}}$ commuting Killing vector fields, $1\leq{\textsc{n}}<n$, and no more, and coisotropic coordinates are absent, $n-{\textsc{n}}-{\textsc{m}}=0$, then there exists a set of independent parameters (43) and separating functions (69) such that the inverse separable metric has the block form (68). The lower block is diagonal with elements (55), where $b_{ii}{}^{\mu\mu}(y)$ is the matrix inverse to the matrix $b_{\mu\mu}{}^{ii}(y^{\mu})$, whose elements in each row depend only on single coordinates, and all elements of its last row differ from zero, $b_{nn}{}^{\mu\mu}\neq 0$. The upper block has the form (72) with arbitrary functions $k^{\alpha\beta}_{\mu\mu}(y^{\mu})=k^{\beta\alpha}_{\mu\mu}(y^{\mu})$ depending on single coordinates. In addition, the upper block (72) must be nondegenerate and provide positivity of the right hand sides of Eqs. (69). In the Hamiltonian formulation, there are $n-{\textsc{n}}$ indecomposable quadratic conservation laws: $\begin{split}p_{\alpha}=&c_{\alpha},\\ b_{ii}{}^{\mu\nu}\big(p_{\mu}p_{\nu}-k^{\alpha\beta}_{\mu\nu}c_{\alpha}c_{\beta}\big)=&d_{ii}.\end{split}$ (73) The quadratic conservation laws (73) appear after contraction of Eqs. (69) with the inverse matrix $b_{ii}{}^{\mu\mu}$. Thus the canonical separable inverse metric of type $[{\textsc{n}},n-{\textsc{n}},0]_{2}$ has the block diagonal form $g^{**}=\begin{pmatrix}-k^{\alpha\beta}_{\mu\nu}g^{\mu\nu}&0\\ 0&g^{\mu\nu}\end{pmatrix},$ (74) where the lower block $g^{\mu\nu}$ is diagonal (55), and the stars denote all indices, $*=(\alpha,\mu)$. We can also simplify the matrix $b_{\mu\mu}{}^{ii}$ using the canonical transformation (66) from the previous section. Therefore one of the nonzero elements in each row of the matrix $b$ can be set to unity without loss of generality.
###### Example 5.2.
Consider the simplest example, when the separable metric admits only one indecomposable quadratic conservation law, i.e. ${\textsc{n}}=n-1$ and $d_{nn}=2E$. In this case, there is only one coordinate $y$, which need not be enumerated. The matrix $b_{\mu\mu}{}^{ii}$ consists of only one element $b(y)\neq 0$, and its inverse is $1/b$. Due to theorem 5.2, the most general canonical separable metric (74) is block diagonal: $g^{**}=\begin{pmatrix}g^{\alpha\beta}(y)&0\\ 0&\frac{1}{b(y)}\end{pmatrix}=\begin{pmatrix}-k^{\alpha\beta}(y)\frac{1}{b(y)}&0\\ 0&\frac{1}{b(y)}\end{pmatrix},\qquad\alpha,\beta=1,\dotsc,n-1,$ (75) where the $(n-1)\times(n-1)$ block $g^{\alpha\beta}(y)$ can be an arbitrary nondegenerate matrix if we include the arbitrary function $b(y)\neq 0$ in the definition of $k^{\alpha\beta}(y)$.
Variables are separated as $W^{\prime}_{\alpha}=c_{\alpha},\qquad W^{\prime 2}_{n}=2Eb+k^{\alpha\beta}c_{\alpha}c_{\beta}>0,$ where the inequality restricts the functions $b$ and $k$ for fixed $E$ and $c$. The respective conservation laws are $p_{\alpha}=c_{\alpha},\qquad\frac{1}{b}\big(p_{n}^{2}-k^{\alpha\beta}c_{\alpha}c_{\beta}\big)=2E.$ (76) To simplify the metric (75) we perform the canonical transformation $(y,p_{n})\mapsto(Y,P_{n})$ with the generating function $S_{2}:=\int\!\!dy\sqrt{|b(y)|}\,P_{n},$ leaving the remaining variables $x^{\alpha},p_{\alpha}$ untouched. Then $p_{n}=\frac{\partial S_{2}}{\partial y}=\sqrt{|b|}\,P_{n},\qquad Y=\frac{\partial S_{2}}{\partial P_{n}}=\int\!\!dy\sqrt{|b|},$ and the quadratic conservation law takes the form $\pm P^{2}_{n}+g^{\alpha\beta}c_{\alpha}c_{\beta}=2E.$ After this transformation of variables, the canonical separable metric becomes $g^{**}=\begin{pmatrix}g^{\alpha\beta}(y)&0\\ 0&\pm 1\end{pmatrix},$ (77) where the sign choice depends on the signature of the metric. ∎
### 5.3 Coisotropic coordinates
We showed in section 4 that linear conservation laws for the “symmetric” choice of independent parameters correspond to Killing vector fields. However, if the energy $E$ in the right hand side of the Hamilton–Jacobi equation is considered as an independent parameter, linear conservation laws not related to Killing vectors may appear. This possibility arises only for indefinite metrics having coisotropic coordinates. For positive definite Riemannian metrics, linear conservation laws are always related to Killing vector fields. Such linear conservation laws can exist only when a sufficient number of commuting Killing vectors are simultaneously present. Suppose that ${\textsc{m}}=0$, i.e. we have only Killing vector fields and coisotropic coordinates. Then we have two groups of coordinates $(x^{\alpha},z^{\varphi})\in{\mathbb{M}}$, where the indices take the following values: $\alpha,\beta,\dotsc=1,\dotsc,{\textsc{n}};\qquad\varphi,\phi,\dotsc={\textsc{n}}+1,\dotsc,n;\qquad\frac{n}{2}\leq{\textsc{n}}<n.$ (78) By assumption, there are independent parameters $\{c_{1},\dotsc,c_{\textsc{n}},a_{{\textsc{n}}+1},\dotsc,a_{n-1},a_{n}:=2E\}.$ (79) The separable metric of type $[{\textsc{n}},0,n-{\textsc{n}}]_{1}$ has the block form (47): $g^{**}=\begin{pmatrix}g^{\alpha\beta}(z)&g^{\alpha\phi}(z)\\ g^{\varphi\beta}(z)&0\end{pmatrix}.$ (80) This metric must be nondegenerate; therefore the number of Killing vectors cannot be less than the number of coisotropic coordinates: ${\textsc{n}}\geq n-{\textsc{n}}\qquad\Leftrightarrow\qquad{\textsc{n}}\geq\frac{n}{2},$ (81) which was assumed at the very beginning (78). In addition, the rank of the rectangular matrix $g^{\alpha\phi}$ must be equal to $n-{\textsc{n}}$; otherwise the separable metric is degenerate.
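For orientation, in the lowest-dimensional instance $n=2$, ${\textsc{n}}=1$ the block form (80) reduces to $g^{**}=\begin{pmatrix}g^{11}(z)&g^{12}(z)\\ g^{12}(z)&0\end{pmatrix},\qquad\det g^{**}=-(g^{12})^{2},$ so nondegeneracy is equivalent to the rank condition $g^{12}\neq 0$; this is the class $[1,0,1]_{1}$ treated explicitly in section 6.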
After separation of the cyclic coordinates, condition (6) becomes $\det\big(\partial^{r}W^{\prime}_{\varphi}\big)\neq 0,$ (82) and the Hamilton–Jacobi equation (7) for the metric (80) takes the form $g^{\alpha\beta}c_{\alpha}c_{\beta}+2g^{\alpha\varphi}c_{\alpha}W^{\prime}_{\varphi}=2E.$ (83) The separating functions $W^{\prime}_{\varphi}$ (45) are linear in the parameters $a$: $W^{\prime}_{\varphi}=b_{\varphi}{}^{r}(z^{\varphi},c)a_{r}+l_{\varphi}(z^{\varphi},c),$ (84) where $b_{\varphi}{}^{r}$ is some nondegenerate matrix whose elements in each row depend only on the single coordinate $z^{\varphi}$ and, possibly, on the first group of parameters $(c_{\alpha})$, and $l_{\varphi}$ are some functions, also of a single coordinate and the first group of parameters. Equating the terms with $E$ in Eq. (83), we obtain $2g^{\alpha\varphi}c_{\alpha}=b_{n}{}^{\varphi},$ (85) where $b_{n}{}^{\varphi}$ is the last row of the matrix $b_{r}{}^{\varphi}$, which is inverse to $b_{\varphi}{}^{r}$: $b_{r}{}^{\varphi}b_{\phi}{}^{r}=\delta_{\phi}^{\varphi}$. This equality implies that the elements of the last row $b_{n}{}^{\varphi}(z,c)$, which in general depend on all coordinates $z$, are linear in $c_{\alpha}$. Let $b_{n}{}^{\varphi}:=b_{n}{}^{\alpha\varphi}c_{\alpha},$ (86) where $b_{n}{}^{\alpha\varphi}(z)$ is a set of arbitrary functions of $z$ such that the matrix $(b_{r}{}^{\alpha\varphi}c_{\alpha})$ is nondegenerate. Equality (85) must hold for all values of $c_{\alpha}$, therefore $2g^{\alpha\varphi}=b_{n}{}^{\alpha\varphi}.$ (87) Now we substitute the expression (84) for the separating functions $W^{\prime}_{\varphi}$ and use Eq. (85) in the Hamilton–Jacobi equation (83): $g^{\alpha\beta}c_{\alpha}c_{\beta}+b_{n}{}^{\alpha\varphi}c_{\alpha}l_{\varphi}=0.$ (88) It implies that the functions $l_{\varphi}$ are linear and homogeneous in $c_{\alpha}$: $l_{\varphi}=l^{\alpha}_{\varphi}(z^{\varphi})c_{\alpha},$ (89) where $l^{\alpha}_{\varphi}(z^{\varphi})$ are some functions depending on a single coordinate. Indices here can be written one over the other because they will never be raised or lowered. Then Eq. (88) defines the square block corresponding to the Killing vectors: $2g^{\alpha\beta}=-b_{n}{}^{\alpha\varphi}l^{\beta}_{\varphi}-b_{n}{}^{\beta\varphi}l^{\alpha}_{\varphi}.$ (90) Thus the Hamilton–Jacobi equation (83) is solved, and the separable metric is $g^{**}=\frac{1}{2}\begin{pmatrix}-b_{n}{}^{\alpha\chi}l_{\chi}^{\beta}-b_{n}{}^{\beta\chi}l_{\chi}^{\alpha}&b_{n}{}^{\alpha\phi}\\[4.0pt] b_{n}{}^{\beta\varphi}&0\end{pmatrix},$ (91) where the star denotes all indices, $*=(\alpha,\varphi)$.
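Spelling out the step from (88) to (90): substituting (89) into (88), only the part of $b_{n}{}^{\alpha\varphi}l^{\beta}_{\varphi}$ symmetric in $(\alpha,\beta)$ contributes, so the requirement that the identity hold for all $c$ reads $\big(2g^{\alpha\beta}+b_{n}{}^{\alpha\varphi}l^{\beta}_{\varphi}+b_{n}{}^{\beta\varphi}l^{\alpha}_{\varphi}\big)c_{\alpha}c_{\beta}=0\qquad\text{for all}\quad c,$ which is exactly Eq. (90).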
To simplify the separable metric, we perform the canonical transformation with the generating function $S_{2}:=x^{\alpha}P_{\alpha}+z^{\varphi}P_{\varphi}+\sum_{\varphi={\textsc{n}}+1}^{n}\int\!\!dz^{\varphi}\,l^{\alpha}_{\varphi}(z^{\varphi})P_{\alpha}.$ (92) Then $p_{\alpha}=P_{\alpha},\qquad p_{\varphi}=P_{\varphi}+l^{\alpha}_{\varphi}P_{\alpha},\qquad X^{\alpha}=x^{\alpha}+\sum_{\varphi={\textsc{n}}+1}^{n}\int\!\!dz^{\varphi}\,l^{\alpha}_{\varphi},\qquad Z^{\varphi}=z^{\varphi},\qquad{\sf\,ns\,}(\varphi).$ The last two equalities define the coordinate transformation with the Jacobi matrix $\frac{\partial(X^{\beta},Z^{\phi})}{\partial(x^{\alpha},z^{\varphi})}=\begin{pmatrix}\delta_{\alpha}^{\beta}&0\\ l^{\beta}_{\varphi}&\delta_{\varphi}{}^{\phi}\end{pmatrix}.$ The metric transforms as $g^{**}\mapsto\tilde{g}^{**}$ with $\tilde{g}^{\alpha\beta}=0,\qquad\tilde{g}^{\alpha\phi}=g^{\alpha\phi},\qquad\tilde{g}^{\varphi\phi}=0,$ where the equality $g^{\varphi\phi}\equiv 0$ was used. After this transformation the separable metric is simplified: $g^{**}=\frac{1}{2}\begin{pmatrix}0&b_{n}{}^{\alpha\phi}\\[4.0pt] b_{n}{}^{\beta\varphi}&0\end{pmatrix}.$ (93) This metric is always degenerate except for $n=2{\textsc{n}}$. Then $\det g^{**}=\frac{(-1)^{\textsc{n}}}{2^{n}}\det{}^{2}(b_{n}{}^{\alpha\phi})\neq 0.$ Now we are left with the problem of finding a matrix $b_{\varphi}{}^{r}(z,c)$ in Eq. (84) such that equality (87) is satisfied for all coordinates $z$ and parameters $c$. In other words, we have to extract explicitly the dependence of the matrix elements $b_{\varphi}{}^{r}$ on the parameters, because Eq. (86) contains elements of the inverse matrix.
###### Proposition 5.1.
The matrix elements $(b_{\varphi}{}^{r})$ in Eq. (84) satisfying equality (87) must have the form $b_{\varphi}{}^{r}(z^{\varphi},c)=\frac{\phi_{\varphi}{}^{r}}{h_{\varphi}^{\alpha}c_{\alpha}},\qquad{\sf\,ns\,}(\varphi),$ (94) where $\phi_{\varphi}{}^{r}(z^{\varphi})$ is a nondegenerate matrix whose rows depend on single coordinates and whose diagonal elements equal unity, and $h_{\varphi}^{\alpha}(z^{\varphi})$ is a set of arbitrary nonzero functions depending on single coordinates.
###### Proof.
Parameterise the matrix $(b_{\varphi}{}^{r})$ as $b_{\varphi}{}^{r}(z^{\varphi},c):=\frac{\phi_{\varphi}{}^{r}(z^{\varphi},c)}{h_{\varphi}(z^{\varphi},c)},\qquad{\sf\,ns\,}(\varphi),$ where $h_{\varphi}$ are some nonzero functions, and $(\phi_{\varphi}{}^{r}):=\begin{pmatrix}1&\phi_{{\textsc{n}}+1}{}^{{\textsc{n}}+2}&\phi_{{\textsc{n}}+1}{}^{{\textsc{n}}+3}&\cdots&\phi_{{\textsc{n}}+1}{}^{n}\\[8.0pt] \phi_{{\textsc{n}}+2}{}^{{\textsc{n}}+1}&1&\phi_{{\textsc{n}}+2}{}^{{\textsc{n}}+3}&\cdots&\phi_{{\textsc{n}}+2}{}^{n}\\[8.0pt] \phi_{{\textsc{n}}+3}{}^{{\textsc{n}}+1}&\phi_{{\textsc{n}}+3}{}^{{\textsc{n}}+2}&1&\cdots&\phi_{{\textsc{n}}+3}{}^{n}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \phi_{n}{}^{{\textsc{n}}+1}&\phi_{n}{}^{{\textsc{n}}+2}&\phi_{n}{}^{{\textsc{n}}+3}&\cdots&1\end{pmatrix}.$ (95) This matrix has unities on the diagonal, and each row of the matrix $(b_{\varphi}{}^{r})$ is the quotient of the corresponding row of $(\phi_{\varphi}{}^{r})$ by $h_{\varphi}$. Multiply Eq.
(85) by the matrix $b_{\varphi}{}^{r}$ and sum over $\varphi$: $2g^{\alpha\varphi}c_{\alpha}b_{\varphi}{}^{r}=\sum_{\varphi={\textsc{n}}+1}^{n}2g^{\alpha\varphi}c_{\alpha}\frac{\phi_{\varphi}{}^{r}}{h_{\varphi}}=\delta_{n}^{r}.$ This equality must be fulfilled for all coordinates and parameters. Therefore $\phi_{\varphi}{}^{r}(z^{\varphi},c)=\phi_{\varphi}{}^{r}(z^{\varphi}),\qquad h_{\varphi}(z^{\varphi},c)=h_{\varphi}^{\alpha}(z^{\varphi})c_{\alpha},$ where $h_{\varphi}^{\alpha}(z^{\varphi})$ are some functions of a single coordinate. It means that the matrix (95) must not depend on $c$, and the functions $h_{\varphi}$ must be linear in $c$. ∎ It implies that the matrix $(b_{\varphi}{}^{r})$, and consequently the canonical separable metric, is parameterized by $2{\textsc{n}}^{2}-{\textsc{n}}$ arbitrary functions, and $2g^{\alpha\varphi}=b_{n}{}^{\alpha\varphi}=\phi_{n}{}^{\varphi}h_{\varphi}^{\alpha},\qquad{\sf\,ns\,}(\varphi),$ (96) where $(\phi_{r}{}^{\varphi})$ is the matrix inverse to $(\phi_{\varphi}{}^{r})$. Thus we have found the canonical separable metric in the case of only Killing vector fields and coisotropic coordinates.
###### Theorem 5.3.
If the separable metric admits exactly ${\textsc{n}}$ commuting Killing vector fields, and no more, and all other coordinates are coisotropic, then there exists a set of parameters (79) and separating functions (84) such that the Hamilton–Jacobi equation admits complete separation of variables if and only if the dimension of the manifold is even, $n=2{\textsc{n}}$, and the canonical separable metric has the block form (93). The off-diagonal blocks are given by Eqs. (96), where $(\phi_{n}{}^{\varphi})$ is the last row of the matrix inverse to an arbitrary nondegenerate matrix (95), whose rows depend on single coordinates and whose diagonal consists of unities. All conservation laws are linear: $\begin{split}p_{\alpha}=&c_{\alpha},\\ \sum_{\varphi}\phi_{r}{}^{\varphi}h_{\varphi}^{\alpha}c_{\alpha}p_{\varphi}=&a_{r}.\end{split}$ (97) The second group of conservation laws (97) is linear in the momenta only after separation of the cyclic coordinates. If this is not done, then they are quadratic: $\sum_{\varphi}\phi_{r}{}^{\varphi}h_{\varphi}^{\alpha}p_{\alpha}p_{\varphi}=a_{r}.$ The respective Killing tensors are indecomposable in general. The Hamilton–Jacobi equation for the canonical separable metric (93) is $\sum_{\varphi}\phi_{n}{}^{\varphi}h_{\varphi}^{\alpha}W^{\prime}_{\alpha}W^{\prime}_{\varphi}=2E,$ (98) and variables are completely separated: $W^{\prime}_{\alpha}=c_{\alpha},\qquad W^{\prime}_{\varphi}=\frac{\phi_{\varphi}{}^{r}}{h_{\varphi}^{\alpha}c_{\alpha}}a_{r},\qquad{\sf\,ns\,}(\varphi).$ The unusual feature of this separation is that the parameters $c$ appear in the denominators. Finally, using canonical transformations we simplify the form of the canonical separable metric by setting one of the nonzero elements in each row of the matrix $h_{\varphi}^{\alpha}$ to unity.
### 5.4 General separation of variables
If the separable metric admits simultaneously commuting Killing vector fields, indecomposable quadratic conservation laws, and coisotropic coordinates, then the coordinates are divided into three groups $(x,y,z)\in{\mathbb{M}}$ described in section 5. There are two possibilities for the full sets of independent parameters, (42) and (43), and the separable metric must have the block form (47). The separating functions are given by Eqs. (44) and (45).
First, we consider case 2, when the energy enters the parameters for the quadratic conservation laws, $d_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}:=2E$. Without loss of generality, we set $b_{\mu\nu}{}^{ij}\big|_{i\neq j}\equiv 0,\qquad b_{\mu\nu}{}^{ij}\big|_{\mu\neq\nu}\equiv 0,\qquad b_{\mu\nu}{}^{r}\big|_{\mu\neq\nu}\equiv 0,\qquad k_{\mu\nu}\big|_{\mu\neq\nu}\equiv 0,$ because the metric $g^{\mu\nu}$ and the matrix of parameters $d_{ij}$ are diagonal. The respective Hamilton–Jacobi equation is $g^{\alpha\beta}c_{\alpha}c_{\beta}+g^{\mu\nu}\big(b_{\mu\nu}{}^{ij}d_{ij}+b_{\mu\nu}{}^{r}a_{r}+k_{\mu\nu}\big)+2g^{\alpha\phi}c_{\alpha}\big(b_{\phi}{}^{ij}d_{ij}+b_{\phi}{}^{r}a_{r}+l_{\phi}\big)=2E.$ (99) Differentiate this equation successively with respect to the parameters $d$ and $a$: $\begin{split}g^{\mu\nu}\frac{\partial(W^{\prime}_{\mu}W^{\prime}_{\nu})}{\partial d_{ii}}+2g^{\alpha\varphi}c_{\alpha}\frac{\partial W^{\prime}_{\varphi}}{\partial d_{ii}}=&~0,\qquad i\neq{\textsc{n}}+{\textsc{m}},\\ g^{\mu\nu}\frac{\partial(W^{\prime}_{\mu}W^{\prime}_{\nu})}{\partial d_{ii}}+2g^{\alpha\varphi}c_{\alpha}\frac{\partial W^{\prime}_{\varphi}}{\partial d_{ii}}=&~1,\qquad i={\textsc{n}}+{\textsc{m}},\\ g^{\mu\nu}\frac{\partial(W^{\prime}_{\mu}W^{\prime}_{\nu})}{\partial a_{r}}+2g^{\alpha\varphi}c_{\alpha}\frac{\partial W^{\prime}_{\varphi}}{\partial a_{r}}=&~0.\end{split}$ (100) This system of equations is regarded as a system of linear algebraic equations for $g^{\mu\nu}$ and the linear combinations $g^{\alpha\varphi}c_{\alpha}$. Its determinant differs from zero, $\det\begin{pmatrix}\displaystyle\frac{\partial(W^{\prime 2}_{\mu})}{\partial d_{ii}}&\displaystyle\frac{\partial W^{\prime}_{\varphi}}{\partial d_{ii}}\\[8.0pt] \displaystyle\frac{\partial(W^{\prime 2}_{\mu})}{\partial a_{r}}&\displaystyle\frac{\partial W^{\prime}_{\varphi}}{\partial a_{r}}\end{pmatrix}=2^{{\textsc{m}}}W^{\prime}_{{\textsc{n}}+1}\dotsm W^{\prime}_{{\textsc{n}}+{\textsc{m}}}\det\begin{pmatrix}\displaystyle\frac{\partial W^{\prime}_{\mu}}{\partial d_{ii}}&\displaystyle\frac{\partial W^{\prime}_{\varphi}}{\partial d_{ii}}\\[8.0pt] \displaystyle\frac{\partial W^{\prime}_{\mu}}{\partial a_{r}}&\displaystyle\frac{\partial W^{\prime}_{\varphi}}{\partial a_{r}}\end{pmatrix}\neq 0,$ due to condition (6). Consider the $(n-{\textsc{n}})\times(n-{\textsc{n}})$ matrix $B:=\begin{pmatrix}b_{\mu\mu}{}^{ii}(y^{\mu},c)&b_{\mu\mu}{}^{r}(y^{\mu},c)\\ b_{\varphi}{}^{ii}(z^{\varphi},c)&b_{\varphi}{}^{r}(z^{\varphi},c)\end{pmatrix},$ (101) whose elements in each row depend on a single coordinate and, possibly, on the first group of parameters $c$. It must be nondegenerate, as we shall see.
The inverse matrix is $B^{-1}=\begin{pmatrix}b_{ii}{}^{\mu\mu}&b_{ii}{}^{\varphi}\\ b_{r}{}^{\nu\nu}&b_{r}{}^{\varphi}\end{pmatrix},$ where $\begin{split}b_{\mu\mu}{}^{ij}b_{ij}{}^{\nu\nu}+b_{\mu\mu}{}^{r}b_{r}{}^{\nu\nu}=&~\delta_{\mu}^{\nu},\qquad b_{ii}{}^{\mu\nu}b_{\mu\nu}{}^{jj}+b_{ii}{}^{\varphi}b_{\varphi}{}^{jj}=\delta_{i}^{j},\\ b_{\mu\mu}{}^{ij}b_{ij}{}^{\phi}+b_{\mu\mu}{}^{r}b_{r}{}^{\phi}=&~0,\qquad\quad b_{ii}{}^{\mu\nu}b_{\mu\nu}{}^{r}+b_{ii}{}^{\varphi}b_{\varphi}{}^{r}=0,\\ b_{\varphi}{}^{ij}b_{ij}{}^{\nu\nu}+b_{\varphi}{}^{r}b_{r}{}^{\nu\nu}=&~0,\qquad\quad b_{r}{}^{\mu\nu}b_{\mu\nu}{}^{jj}+b_{r}{}^{\varphi}b_{\varphi}{}^{jj}=0,\\ b_{\varphi}{}^{ij}b_{ij}{}^{\phi}+b_{\varphi}{}^{r}b_{r}{}^{\phi}=&~\delta_{\varphi}^{\phi},\qquad b_{r}{}^{\mu\nu}b_{\mu\nu}{}^{s}+b_{r}{}^{\varphi}b_{\varphi}{}^{s}=\delta_{r}^{s}.\end{split}$ Note that the elements of the inverse matrix depend in general on all coordinates $y$, $z$, and the parameters $c$. The matrix $B$ must be nondegenerate in order for Eq. (99) to have a unique solution, and $g^{\mu\mu}=b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\mu\mu}(y,z),\qquad 2g^{\alpha\phi}c_{\alpha}=b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\phi}(y,z,c),$ (102) because the constant $E$ enters the left hand side of Eq. (99) only through the parameter $d_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}$. This implies that the elements $b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\mu\mu}$ do not depend on $c$, and $b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\phi}$ are linear in $c$: $b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\phi}=b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\alpha\phi}c_{\alpha}.$ Substitution of the obtained expressions for $g^{\mu\nu}$ and $g^{\alpha\phi}c_{\alpha}$ into the Hamilton–Jacobi equation (99) yields $g^{\alpha\beta}c_{\alpha}c_{\beta}+b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\mu\nu}k_{\mu\nu}+b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\alpha\phi}c_{\alpha}l_{\phi}=0,$ (103) which must be fulfilled for all $c$. Therefore the functions $k_{\mu\mu}$ and $l_{\phi}$ must be quadratic and linear in $c$, respectively. It is sufficient to choose them homogeneous, $k_{\mu\mu}=k^{\alpha\beta}_{\mu\nu}c_{\alpha}c_{\beta},\qquad l_{\varphi}=l^{\alpha}_{\varphi}c_{\alpha},$ (104) where the functions $k^{\alpha\beta}_{\mu\nu}(y^{\mu})$ and $l^{\alpha}_{\varphi}(z^{\varphi})$ do not depend on the parameters $c$. The functions $k^{\alpha\beta}_{\mu\nu}=k^{\beta\alpha}_{\mu\nu}$ can differ from zero for $\alpha\neq\beta$. The indices of $k$ and $l$ are written one over the other because they will never be raised or lowered.
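Spelling out the intermediate step: inserting (104) into (103) and equating to zero the coefficient of $c_{\alpha}c_{\beta}$ gives the general form of the upper block, $g^{\alpha\beta}=-b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\mu\nu}k^{\alpha\beta}_{\mu\nu}-\frac{1}{2}\big(b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\alpha\phi}l^{\beta}_{\phi}+b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\beta\phi}l^{\alpha}_{\phi}\big),$ which reduces to the first of Eqs. (107) below once $l^{\alpha}_{\varphi}\equiv 0$.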
To simplify the separable metric, we perform the canonical transformation $(x^{\alpha},z^{\varphi},p_{\alpha},p_{\varphi})\mapsto(X^{\alpha},Z^{\varphi},P_{\alpha},P_{\varphi})$ with the generating function $S_{2}:=x^{\alpha}P_{\alpha}+z^{\varphi}P_{\varphi}+\int\!\!dz^{\varphi}\,l^{\alpha}_{\varphi}P_{\alpha},\qquad{\sf\,ns\,}(\varphi).$ (105) Then $\begin{split}p_{\alpha}=&\frac{\partial S_{2}}{\partial x^{\alpha}}=P_{\alpha},\qquad p_{\varphi}=\frac{\partial S_{2}}{\partial z^{\varphi}}=P_{\varphi}+l^{\alpha}_{\varphi}P_{\alpha},\\ X^{\alpha}=&\frac{\partial S_{2}}{\partial P_{\alpha}}=x^{\alpha}+\int\!\!dz^{\varphi}\,l^{\alpha}_{\varphi},\qquad Z^{\varphi}=\frac{\partial S_{2}}{\partial P_{\varphi}}=z^{\varphi},\qquad{\sf\,ns\,}(\varphi).\end{split}$ (106) Afterwards relation (45) in the Hamiltonian formulation becomes $P_{\varphi}=b_{\varphi}{}^{ij}d_{ij}+b_{\varphi}{}^{r}a_{r},$ where the equalities $p_{\alpha}=P_{\alpha}=c_{\alpha}$ are used. Therefore we can set $l^{\alpha}_{\varphi}\equiv 0$ without loss of generality. Now the nontrivial blocks of the inverse metric (102) take the form $\begin{split}g^{\alpha\beta}=&-b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\mu\nu}k^{\alpha\beta}_{\mu\nu},\\ g^{\mu\mu}=&~~b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\mu\mu},\\ 2g^{\alpha\phi}=&~~b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\alpha\varphi},\end{split}$ (107) where the functions $k^{\alpha\beta}_{\mu\nu}(y^{\mu})$ are arbitrary, and the functions $b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\mu\mu}$ and $b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\alpha\varphi}$ are defined by the matrix (101). Note that the block $g^{\alpha\beta}$ may be degenerate due to the presence of the rectangular block $g^{\alpha\varphi}$. In section 5.2 this was not allowed, because the metric (74) would become degenerate. Moreover, using a canonical transformation of type (66), one of the nonzero elements in each row of the matrix $b_{\mu\mu}{}^{ii}$ can be set to unity. Thus we have solved the Hamilton–Jacobi equation as a functional equation.
###### Theorem 5.4.
Let the separable metric be of type $[{\textsc{n}},{\textsc{m}},n-{\textsc{n}}-{\textsc{m}}]_{2}$. Then there is a set of parameters (43) and separating functions (44), (45) such that the canonical separable metric has the block form (47) with blocks (107), where the functions $k^{\alpha\beta}_{\mu\nu}(y^{\mu})=k^{\beta\alpha}_{\mu\nu}(y^{\mu})$ are arbitrary, and the functions $b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\mu\mu}$ and $b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\alpha\varphi}$ are elements of the $({\textsc{n}}+{\textsc{m}})$-th row of the matrix inverse to the matrix $B$ (101). The matrix $B$ must be nondegenerate, with arbitrary elements such that the elements of each row depend on single coordinates and the parameters $(c_{\alpha})$ in such a way that all elements of the $({\textsc{n}}+{\textsc{m}})$-th row of the matrix $B^{-1}$ are nonzero, the elements $b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\mu\mu}$ do not depend on $(c_{\alpha})$, and the elements $b_{{\textsc{n}}+{\textsc{m}}\,{\textsc{n}}+{\textsc{m}}}{}^{\phi}$ are linear in the parameters $(c_{\alpha})$. The arbitrary functions must also produce a nondegenerate separable metric and provide real solutions of Eqs. (44), (45) for $W^{\prime}_{\mu}$ and $W^{\prime}_{\varphi}$.
In addition, the conservation laws in the Hamiltonian formulation have the form $\begin{split}p_{\alpha}=&c_{\alpha},\\ b_{ii}{}^{\mu\nu}\big(p_{\mu}p_{\nu}-k^{\alpha\beta}_{\mu\nu}c_{\alpha}c_{\beta}\big)+b_{ii}{}^{\varphi}p_{\varphi}=&d_{ii},\\ b_{r}{}^{\mu\nu}\big(p_{\mu}p_{\nu}-k^{\alpha\beta}_{\mu\nu}c_{\alpha}c_{\beta}\big)+b_{r}{}^{\varphi}p_{\varphi}=&a_{r}.\end{split}$ (108) In general, there are ${\textsc{n}}$ linear and $n-{\textsc{n}}$ quadratic conservation laws for separable metrics of type $[{\textsc{n}},{\textsc{m}},n-{\textsc{n}}-{\textsc{m}}]_{2}$ with ${\textsc{m}}\geq 1$. Let us consider separable metrics of type $[{\textsc{n}},{\textsc{m}},n-{\textsc{n}}-{\textsc{m}}]_{1}$, i.e. the energy $E$ enters the group of parameters for coisotropic coordinates (42). Then the solution of the Hamilton–Jacobi equation repeats all the previous steps. The only difference is the change of relations (102). The blocks of the separable metric after the canonical transformation (105) become $g^{\mu\mu}=b_{n}{}^{\mu\mu}(y,z),\qquad 2g^{\alpha\phi}=b_{n}{}^{\alpha\phi}(y,z).$ (109) Therefore we only formulate the result.
###### Theorem 5.5.
Let the separable metric be of type $[{\textsc{n}},{\textsc{m}},n-{\textsc{n}}-{\textsc{m}}]_{1}$. Then there is a set of parameters (42) and separating functions (44), (45) such that the canonical separable metric has the block form (47) with blocks $\begin{split}g^{\alpha\beta}=&-b_{n}{}^{\mu\nu}k^{\alpha\beta}_{\mu\nu},\\ g^{\mu\mu}=&~~b_{n}{}^{\mu\mu},\\ 2g^{\alpha\varphi}=&~~b_{n}{}^{\alpha\varphi},\end{split}$ (110) where the functions $k^{\alpha\beta}_{\mu\nu}(y^{\mu})=k^{\beta\alpha}_{\mu\nu}(y^{\mu})$ are arbitrary, and the functions $b_{n}{}^{\mu\mu}$ and $b_{n}{}^{\alpha\varphi}$ are elements of the $n$-th row of the matrix inverse to the matrix $B$ (101). The matrix $B$ must be nondegenerate, with arbitrary elements such that the elements of each row depend on single coordinates and the parameters $(c_{\alpha})$ in such a way that all elements of the last row of the matrix $B^{-1}$ are nonzero, the elements $b_{n}{}^{\mu\mu}$ do not depend on $(c_{\alpha})$, and $b_{n}{}^{\phi}$ are linear in the parameters $(c_{\alpha})$. The arbitrary functions must also produce a nondegenerate separable metric and provide real solutions of Eqs. (44), (45) for $W^{\prime}_{\mu}$ and $W^{\prime}_{\varphi}$. In addition, the conservation laws in the Hamiltonian formulation have the form (108). Thus we have gone through all possible classes of separable metrics, introduced sets of independent parameters and separating functions, and explicitly separated all variables. The method used is constructive and allows us to write down all $n$ conservation laws (108) for geodesics. We showed that there exists a choice of independent parameters and separating functions in the complete integrals such that the conservation laws are at most quadratic in momenta: (i) there are linear conservation laws corresponding to commuting Killing vector fields; (ii) there are indecomposable quadratic conservation laws; (iii) there may exist linear conservation laws for coisotropic coordinates which are not related to Killing vectors. The last possibility arises only for indefinite metrics, which may have zeroes on the diagonal. We see that the constructed conservation laws are at most quadratic.
###### Theorem 5.6.
Let the Hamilton–Jacobi equation for geodesics (2) admit complete separation of variables. Then there exists a choice of independent parameters and separating functions in the complete integrals such that the corresponding conservation laws are at most quadratic in momenta.
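For instance, if a quadratic conservation law reads $F(q,p)=d$ for some value of a parameter $d$, then choosing the new parameter $\tilde{d}:=d^{2}$ in the complete integral produces the conserved quantity $F^{2}=\tilde{d},$ which is quartic in the momenta but functionally dependent on $F$.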
This statement is important because for another choice of parameters and separating functions we cannot guarantee the absence of higher order conservation laws. At the end we prove that any additional conservation law is a function of the conservation laws described in theorems 5.4 and 5.5. To this end we square the equalities $p_{\alpha}=c_{\alpha}$. Then all conservation laws become quadratic (recall that the functions $b_{ii}{}^{\varphi}$ and $b_{r}{}^{\varphi}$ are linear in $c_{\alpha}$ and, consequently, in the momenta $p_{\alpha}$). Part of these conservation laws may be decomposable, but this is not essential. We simplify the notation: $\begin{split}(x^{\alpha},y^{\mu},z^{\varphi})\quad\mapsto&\quad(q^{\alpha}),\\ (p_{\alpha},p_{\mu},p_{\varphi})\quad\mapsto&\quad(p_{\alpha}),\\ (c_{\alpha}^{2},d_{ij},a_{r})\quad\mapsto&\quad(c_{\textsc{a}}),\end{split}$ where the indices on the right hand sides take all values from $1$ to $n$: $\alpha=1,\dotsc,n$ and ${\textsc{a}}=1,\dotsc,n$. The conservation laws take the form $c_{\textsc{a}}=F_{\textsc{a}}(q,p)=\frac{1}{2}f_{\textsc{a}}^{\alpha\beta}p_{\alpha}p_{\beta},$ where $f_{\textsc{a}}^{\alpha\beta}(q)$ are some functions of the coordinates $q$ only. The Hamiltonian is among them: for separable metrics of types $1$ and $2$, it is $F_{n}$ and $F_{{\textsc{n}}+{\textsc{m}}}$, respectively. Perform the canonical transformation $(q^{\alpha},p_{\alpha})\mapsto(Q^{\textsc{a}},P_{\textsc{a}})$ with the generating function $S_{2}:=W(q,P)$ to action–angle variables. Then $F_{\textsc{a}}=P_{\textsc{a}},\qquad\forall{\textsc{a}}=1,\dotsc,n,$ in the new variables. Assume that there is an additional involutive conservation law $G:=\sum_{{\textsc{a}}=1}^{n}G^{\textsc{a}}(Q)P_{\textsc{a}}^{m}={\sf\,const},\qquad m\in{\mathbb{R}},$ with some differentiable functions $G^{\textsc{a}}$ of the new coordinates only, where $m$ is some real number. In particular, $G$ may be a homogeneous polynomial of any order $m$ in the new momenta. Involutivity means that $[G,F_{\textsc{a}}]=[G,P_{\textsc{a}}]=\sum_{{\textsc{b}}=1}^{n}\frac{\partial G^{\textsc{b}}}{\partial Q^{\textsc{a}}}P_{\textsc{b}}^{m}=0.$ This equality must hold for all momenta and all values of the index ${\textsc{a}}$. Therefore the functions $G^{\textsc{a}}$ do not depend on the coordinates and, consequently, there is a functional dependence between the integrals $(F_{\textsc{a}},G)$, because the number of functions $F_{\textsc{a}}$ is maximal and equals the number of momenta; i.e. any additional conservation law is some function of the integrals $(F_{\textsc{a}})$, at least locally, $G=G(F)$. Thus we have proved
###### Theorem 5.7.
Let the variables in the Hamilton–Jacobi equation for geodesics be completely separated, and let all conservation laws $F_{\textsc{a}}$ be found according to theorems 5.4 and 5.5. Then any additional involutive conservation law $G$ for geodesics is some function $G=G(F)$, at least locally. This completely solves the Stäckel problem for metrics of arbitrary signature on manifolds of any dimension. All separable metrics are divided into equivalence classes. Metrics in each class are related by canonical transformations and nondegenerate transformations of parameters which do not involve the coordinates. There is a canonical (the simplest) separable metric in each equivalence class. Its form is given by theorems 5.4 and 5.5. The corresponding conservation laws are at most quadratic in momenta.
For other choices of parameters the conservation laws may be of higher order, but they are functionally dependent on the conservation laws listed in theorems 5.4 and 5.5, due to theorem 5.7. The matrix $B$ in the general theorems is constructed like the matrix (94). The proved theorems are constructive. In the next sections we list all separable metrics in two, three, and four dimensions as examples.
## 6 Separation of variables in two dimensions
Two-dimensional manifolds provide the simplest examples of separable metrics for the Hamilton–Jacobi equation for geodesics, which have important features present in higher dimensions. Separation of variables in two dimensions is considered in example 5.1 in detail. Therefore we only formulate the results. There are only three different classes of separable metrics; the fourth case listed below turns out to be equivalent to the first one. 1) Class $[2,0,0]$. Two commuting Killing vectors. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(x^{1},x^{2}),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(c_{1},c_{2}).$ This case is described by theorem 4.2. In canonical form, the inverse separable metric is (pseudo)Euclidean: $g^{\alpha\beta}=\eta^{\alpha\beta},$ (111) where $\eta^{\alpha\beta}$ is either the Euclidean or the Lorentzian metric. The Hamilton–Jacobi equation has the form $\eta^{\alpha\beta}W^{\prime}_{\alpha}W^{\prime}_{\beta}=\eta^{\alpha\beta}c_{\alpha}c_{\beta}.$ Variables are separated as $W^{\prime}_{\alpha}=c_{\alpha}.$ The system has two linear conservation laws, $p_{\alpha}=c_{\alpha},$ (112) and the respective coordinates $x^{1},x^{2}$ are cyclic. The separable metric (111) has two independent commuting Killing vector fields $\partial_{1}$ and $\partial_{2}$. 2) Class $[1,1,0]_{2}$. One Killing vector and one indecomposable quadratic conservation law. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(x^{1}:=x,y^{2}:=y),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(c_{1}:=c,d_{22}:=2E).$ This case is considered in greater generality in example 5.2. The canonical separable (pseudo)Riemannian metric has the form (77): $g^{**}=\begin{pmatrix}-k(y)&0\\ 0&1\end{pmatrix},$ (113) where $k(y)\neq 0$ is an arbitrary function. For $k<0$ and $k>0$, we have Riemannian and Lorentzian metrics, respectively. The Hamilton–Jacobi equation is $-kW^{\prime 2}_{1}+W^{\prime 2}_{2}=2E.$ Variables are separated in the following way: $W^{\prime}_{1}=c,\qquad W^{\prime 2}_{2}=2E+kc^{2}>0.$ In the Hamiltonian formulation, we have one linear and one quadratic conservation law: $p_{1}=c,\qquad p_{2}^{2}-k(y)c^{2}=2E.$ The canonical separable metric (113) is parameterized by one arbitrary function $k(y)\neq 0$ satisfying the inequality $2E+kc^{2}>0$ for fixed $E$ and $c$. 3) Class $[0,2,0]_{2}$. Two quadratic conservation laws. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(y^{1},y^{2}),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(d_{11}:=d,d_{22}:=2E).$ This case is described by theorem 5.1. In canonical form, the separable metric is given by Eq. (59). Variables are completely separated according to (50), where the matrix $b$ is given by Eq. (57). The conservation laws are quadratic (56). The canonical separable metric is parameterized by four arbitrary functions $\phi_{11}(y^{1})$, $\phi_{12}(y^{1})$, $\phi_{21}(y^{2})$, and $\phi_{22}(y^{2})$, whose restrictions are described in example 5.1 in detail. This class of metrics contains the two-dimensional Liouville systems (8). 4) Class $[1,0,1]_{1}$. One Killing vector and one coisotropic coordinate.
Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(x^{1}:=x,z^{2}:=z),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(c_{1}:=c,a_{2}:=2E).$ The appearance of zeroes on the diagonal is possible only for metrics of Lorentzian signature with one Killing vector. This case is described by theorem 5.3. The canonical separable metric has the form (93), where the off-diagonal block $b_{n}{}^{\alpha\phi}$ reduces to one nonzero function $b(z)$, which can be transformed to unity by a canonical transformation, i.e. $g^{**}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}.$ (114) This metric produces two linear conservation laws: $p_{1}=c,\qquad cp_{2}=E.$ (115) We see that separation of variables in this case corresponds to two commuting Killing vectors $\partial_{x}$ and $\partial_{z}$, i.e. the coordinates $x$, $z$ are null. Therefore the separable metric (114) is equivalent to the Lorentz metric, and the respective classes are also equivalent, $[1,0,1]_{1}\sim[2,0,0]$. All three classes of separable metrics were already known to Stäckel [1]. However, case 4) with a coisotropic coordinate was not described. Moreover, we have proved that there are no other separable metrics.
## 7 Separation of variables in three dimensions
There are six different classes of separable metrics. 1) Class $[3,0,0]$. Three commuting Killing vectors. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(x^{1},x^{2},x^{3}),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(c_{1},c_{2},c_{3}).$ In canonical form, the separable metric is (pseudo)Euclidean. Separation of variables and conservation laws have the same form as in the two-dimensional case $[2,0,0]$, but now the indices $\alpha=1,2,3$ take three values, and all Cartesian coordinates $x^{\alpha}$ are cyclic. 2) Class $[2,1,0]_{2}$. Two commuting Killing vectors and one quadratic conservation law. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(x^{1},x^{2},y^{3}:=y),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(c_{1},c_{2},d_{33}:=2E).$ The canonical separable metric has the form (77): $g^{**}=\begin{pmatrix}g^{\alpha\beta}(y)&0\\ 0&1\end{pmatrix},\qquad\alpha,\beta=1,2,$ (116) where $g^{\alpha\beta}$ is an arbitrary symmetric nondegenerate matrix. The Hamilton–Jacobi equation and the separation of variables are as follows: $\begin{split}&g^{\alpha\beta}W^{\prime}_{\alpha}W^{\prime}_{\beta}+W^{\prime 2}_{3}=2E,\\ &W^{\prime}_{\alpha}=c_{\alpha},\qquad\qquad W^{\prime 2}_{3}=2E-g^{\alpha\beta}c_{\alpha}c_{\beta}.\end{split}$ The conservation laws are $p_{\alpha}=c_{\alpha},\qquad g^{\alpha\beta}(y)p_{\alpha}p_{\beta}+p_{3}^{2}=2E.$ (117) In this case, the canonical separable metric is parameterized by three arbitrary functions of a single argument in the matrix $g^{\alpha\beta}$, with the restriction $\det g^{\alpha\beta}\neq 0$. Moreover, the condition $W^{\prime 2}_{3}\geq 0$ also restricts the arbitrary functions for fixed $E$ and $c$. Depending on the matrix $g^{\alpha\beta}$, the separable metric (116) may have arbitrary signature. In particular, when the matrix $g^{\alpha\beta}$ is constant, we return to the class $[3,0,0]$. 3) Class $[1,2,0]_{2}$. One Killing vector and two quadratic conservation laws. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(x^{1},y^{2},y^{3}),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(c_{1}:=c,d_{22}:=d,d_{33}:=2E).$ This case is described by theorem 5.2. It is new, and we consider it in more detail.
First, we define the matrix $b$: $b_{\mu\mu}{}^{ii}=\begin{pmatrix}\phi_{22}(y^{2})&\phi_{23}(y^{2})\\ \phi_{32}(y^{3})&\phi_{33}(y^{3})\end{pmatrix},\qquad b_{ii}{}^{\mu\mu}=\frac{1}{\det b}\begin{pmatrix}\phi_{33}&-\phi_{23}\\ -\phi_{32}&\phi_{22}\end{pmatrix},$ where $\det b=\phi_{22}\phi_{33}-\phi_{23}\phi_{32}\neq 0$. After a canonical transformation of type (66), one element in each row of the matrix $b$ can be transformed to $\pm 1$. Let $\phi_{22}=1,\qquad\phi_{32}=-1.$ Then $b_{\mu\mu}{}^{ii}=\begin{pmatrix}~1&\phi_{23}\\ -1&\phi_{33}\end{pmatrix},\qquad b_{ii}{}^{\mu\mu}=\frac{1}{\det b}\begin{pmatrix}\phi_{33}&-\phi_{23}\\ 1&1\end{pmatrix},$ (118) where $\det b=\phi_{23}+\phi_{33}$. The diagonal metric elements corresponding to the quadratic conservation laws are $g^{22}=b_{33}{}^{22}=\frac{1}{\phi_{23}+\phi_{33}},\qquad g^{33}=b_{33}{}^{33}=\frac{1}{\phi_{23}+\phi_{33}}.$ (119) The element $g^{11}$ of the inverse metric has the form (74): $g^{11}=-b_{33}{}^{\mu\nu}k^{11}_{\mu\nu}=-\frac{1}{\phi_{23}+\phi_{33}}(k^{11}_{22}+k^{11}_{33}).$ We simplify the notation: $\phi_{23}(y^{2}):=\phi_{2}(y^{2}),\quad\phi_{33}(y^{3}):=\phi_{3}(y^{3}),\quad k^{11}_{22}(y^{2}):=-k_{2}(y^{2}),\quad k^{11}_{33}(y^{3}):=-k_{3}(y^{3}).$ Now the canonical separable metric takes the form $g^{**}=\frac{1}{\phi_{2}+\phi_{3}}\begin{pmatrix}k_{2}+k_{3}&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}.$ (120) Variables in the Hamilton–Jacobi equation $\frac{1}{\phi_{2}+\phi_{3}}\big[(k_{2}+k_{3})W^{\prime 2}_{1}+W^{\prime 2}_{2}+W^{\prime 2}_{3}\big]=2E$ are completely separated: $\begin{split}W^{\prime}_{1}=&~~c,\\ W^{\prime 2}_{2}=&~~d+2\phi_{2}E-k_{2}c^{2},\\ W^{\prime 2}_{3}=&-d+2\phi_{3}E-k_{3}c^{2}.\end{split}$ (121) These relations yield the conservation laws $\begin{split}p_{1}=&c,\\ \frac{1}{\phi_{2}+\phi_{3}}\left[\phi_{3}p_{2}^{2}-\phi_{2}p_{3}^{2}+\big(\phi_{3}k_{2}-\phi_{2}k_{3}\big)p_{1}^{2}\right]=&d,\\ \frac{1}{\phi_{2}+\phi_{3}}\left[p_{2}^{2}+p_{3}^{2}+\big(k_{2}+k_{3}\big)p_{1}^{2}\right]=&2E.\end{split}$ (122) Thus the canonical separable metric (120) is parameterized by four functions of a single argument, $\phi_{2,3}$ and $k_{2,3}$. They have to produce a nondegenerate metric, and Eqs. (121) must admit real solutions for the separating functions $W^{\prime}_{\mu}$. In general, there are two indecomposable quadratic conservation laws and one linear conservation law. 4) Class $[1,1,1]_{2}$. One Killing vector, one indecomposable quadratic conservation law, and one coisotropic coordinate. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(x^{1}:=x,y^{2}:=y,z^{3}:=z),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(c_{1}:=c,d_{22}:=2E,a_{3}:=a).$ This case is described by theorem 5.4. As always, we start with an arbitrary $(2\times 2)$ matrix (101). To simplify the notation, let $b_{22}{}^{22}\equiv 1,\qquad b_{22}{}^{3}\equiv\phi_{2}(y),\qquad b_{3}{}^{22}\equiv\phi_{3}/c,\qquad b_{3}{}^{3}\equiv 1/c.$ The first and the last equalities can always be achieved by a suitable canonical transformation. The dependence on $c$ follows from the independence of the metric components of the parameters. Then $B=\begin{pmatrix}1&\phi_{2}\\ \phi_{3}/c&1/c\end{pmatrix}\qquad\Rightarrow\qquad B^{-1}=\frac{1}{1-\phi_{2}\phi_{3}}\begin{pmatrix}1&-c\phi_{2}\\ -\phi_{3}&c\end{pmatrix},$ (123) where $\phi_{2}(y)$ and $\phi_{3}(z)$ are arbitrary functions of single coordinates.
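A direct check gives $\det B=(1-\phi_{2}\phi_{3})/c$, so nondegeneracy of $B$ already requires $\phi_{2}\phi_{3}\neq 1$, in agreement with the restrictions on the arbitrary functions obtained below.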
Equations (107) imply the expression for the canonical separable metric $g^{**}=\frac{1}{1-\phi_{2}\phi_{3}}\begin{pmatrix}-k_{2}&0&-\phi_{2}/2\\ 0&1&0\\ -\phi_{2}/2&0&0\end{pmatrix},$ (124) where $k_{2}(y)$ is an arbitrary function. The Hamilton–Jacobi equation after the substitution $W^{\prime}_{1}=c$ takes the form $\frac{1}{1-\phi_{2}\phi_{3}}\left(-k_{2}c^{2}+W^{\prime 2}_{2}-\phi_{2}cW^{\prime}_{3}\right)=2E.$ Variables are separated in the following form: $\begin{split}W^{\prime 2}_{2}=&2E+\phi_{2}a+k_{2}c^{2},\\ W^{\prime}_{3}=&\frac{1}{c}(2\phi_{3}E+a).\end{split}$ (125) The conservation laws (108) after the substitution $p_{1}=c$ are $\begin{split}\frac{1}{1-\phi_{2}\phi_{3}}\big(p_{2}^{2}-k_{2}c^{2}-\phi_{2}cp_{3}\big)=&2E,\\ \frac{1}{1-\phi_{2}\phi_{3}}\big[-\phi_{3}(p_{2}^{2}-k_{2}c^{2})+cp_{3}\big]=&a.\end{split}$ (126) The canonical separable metric (124) is parameterized by three arbitrary functions $\phi_{2}(y)$, $\phi_{3}(z)$, and $k_{2}(y)$. The determinant of the inverse metric (124) is $\det g^{**}=-\frac{\phi_{2}^{2}}{4(1-\phi_{2}\phi_{3})^{3}}\neq 0,\infty.$ The arbitrary functions must be restricted by $\phi_{2}\neq 0$ and $\phi_{2}\phi_{3}\neq 1$, because otherwise the metric is degenerate. Moreover, the functions $\phi_{2}$ and $k_{2}$ are to be chosen in such a way as to provide a real solution of the first equation (125) with respect to $W^{\prime}_{2}$. Depending on the arbitrary functions, the separable metric may have any signature. 5) Class $[1,1,1]_{1}$. One Killing vector, one indecomposable quadratic conservation law, and one coisotropic coordinate. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(x^{1}:=x,y^{2}:=y,z^{3}:=z),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(c_{1}:=c,d_{22}:=d,a_{3}:=2E).$ This case is given by theorem 5.5. The matrix $B$ has the same form (123) as in the case $[1,1,1]_{2}$; however, the canonical separable metric is defined by the last row of the matrix $B^{-1}$: $g^{**}=\frac{1}{1-\phi_{2}\phi_{3}}\begin{pmatrix}\phi_{3}k_{2}&0&1/2\\ 0&-\phi_{3}&0\\ 1/2&0&0\end{pmatrix}.$ (127) After the substitution $W^{\prime}_{1}\equiv c$, the Hamilton–Jacobi equation takes the form $\frac{1}{1-\phi_{2}\phi_{3}}\big(\phi_{3}k_{2}c^{2}-\phi_{3}W^{\prime 2}_{2}+cW^{\prime}_{3}\big)=2E.$ Variables are separated as $\begin{split}W^{\prime 2}_{2}=&d+2\phi_{2}E+k_{2}c^{2},\\ W^{\prime}_{3}=&\frac{1}{c}(\phi_{3}d+2E).\end{split}$ (128) The conservation laws (108) take the form $\begin{split}\frac{1}{1-\phi_{2}\phi_{3}}\big[p_{2}^{2}-k_{2}c^{2}-\phi_{2}cp_{3}\big]=&d,\\ \frac{1}{1-\phi_{2}\phi_{3}}\big[-\phi_{3}(p_{2}^{2}-k_{2}c^{2})+cp_{3}\big]=&2E.\end{split}$ (129) So, the canonical separable metric (127) is parameterized by three arbitrary functions $\phi_{2}(y)$, $\phi_{3}(z)$, and $k_{2}(y)$. They are restricted by the nondegeneracy of the determinant, $\det g^{**}=\frac{\phi_{3}}{4(1-\phi_{2}\phi_{3})^{3}}\neq 0,\infty,$ which implies $\phi_{3}\neq 0$ and $\phi_{2}\phi_{3}\neq 1$. Moreover, the functions $\phi_{2}$ and $k_{2}$ have to provide the existence of real solutions for $W^{\prime}_{2}$ in the first equation (128). Depending on the arbitrary functions, the separable metric may have any signature. 6) Class $[0,3,0]_{2}$. Absence of Killing vectors and coisotropic coordinates. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(y^{1},y^{2},y^{3}),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(d_{11}:=d_{1},d_{22}:=d_{2},d_{33}:=2E).$ This case is described by theorem 5.1.
After the canonical transformation with the generating function (66), the matrix $b$ and its inverse are $b_{\mu\mu}{}^{ii}=\begin{pmatrix}1&b_{12}(y^{1})&b_{13}(y^{1})\\ b_{21}(y^{2})&1&b_{23}(y^{2})\\ b_{31}(y^{3})&b_{32}(y^{3})&1\end{pmatrix},\qquad b_{ii}{}^{\mu\mu}=\frac{1}{\vartriangle}\begin{pmatrix}\vartriangle_{11}&\vartriangle_{21}&\vartriangle_{31}\\ \vartriangle_{12}&\vartriangle_{22}&\vartriangle_{32}\\ \vartriangle_{13}&\vartriangle_{23}&\vartriangle_{33}\end{pmatrix},$ (130) where $\vartriangle:=\det b_{\mu\mu}{}^{ii}$ and the symbols $\vartriangle_{\mu i}$ denote the cofactors of the elements $b_{\mu\mu}{}^{ii}$. This matrix produces the diagonal metric (55): $g^{**}=\frac{1}{\vartriangle}\begin{pmatrix}\vartriangle_{13}&0&0\\ 0&\vartriangle_{23}&0\\ 0&0&\vartriangle_{33}\end{pmatrix}.$ (131) The respective Hamilton–Jacobi equation becomes $\frac{1}{\vartriangle}\big(\vartriangle_{13}W^{\prime 2}_{1}+\vartriangle_{23}W^{\prime 2}_{2}+\vartriangle_{33}W^{\prime 2}_{3}\big)=2E.$ Variables are completely separated as follows: $\begin{split}W^{\prime 2}_{1}=&d_{1}+b_{12}d_{2}+2b_{13}E,\\ W^{\prime 2}_{2}=&b_{21}d_{1}+d_{2}+2b_{23}E,\\ W^{\prime 2}_{3}=&b_{31}d_{1}+b_{32}d_{2}+2E.\end{split}$ (132) All three conservation laws are quadratic in general: $\begin{split}\frac{1}{\vartriangle}\big(\vartriangle_{11}p^{2}_{1}+\vartriangle_{21}p^{2}_{2}+\vartriangle_{31}p^{2}_{3}\big)=&d_{1},\\ \frac{1}{\vartriangle}\big(\vartriangle_{12}p^{2}_{1}+\vartriangle_{22}p^{2}_{2}+\vartriangle_{32}p^{2}_{3}\big)=&d_{2},\\ \frac{1}{\vartriangle}\big(\vartriangle_{13}p^{2}_{1}+\vartriangle_{23}p^{2}_{2}+\vartriangle_{33}p^{2}_{3}\big)=&2E.\end{split}$ (133) This case includes the Liouville system (see example 2.1). Indeed, take the matrix $b$ in the form $b_{\mu\mu}{}^{ii}:=\begin{pmatrix}1&0&\phi_{1}(y^{1})\\ 0&1&\phi_{2}(y^{2})\\ -1&-1&\phi_{3}(y^{3})\end{pmatrix}\quad\Rightarrow\quad b_{ii}{}^{\mu\mu}=\frac{1}{\phi_{1}+\phi_{2}+\phi_{3}}\begin{pmatrix}\phi_{2}+\phi_{3}&-\phi_{1}&-\phi_{1}\\ -\phi_{2}&\phi_{1}+\phi_{3}&-\phi_{2}\\ 1&1&1\end{pmatrix}.$ Then the conservation laws are $\begin{split}p^{2}_{1}-2\phi_{1}E=&d_{1},\\ p^{2}_{2}-2\phi_{2}E=&d_{2},\\ \frac{1}{\phi_{1}+\phi_{2}+\phi_{3}}\big(p^{2}_{1}+p^{2}_{2}+p^{2}_{3}\big)=&2E,\end{split}$ (134) which coincide with Eqs. (13) and (15) for $\Theta\equiv 0$ up to notation. Five of the six separable metrics in three dimensions were listed in [29], where a different technique is used. The types of metrics coincide with respect to the number of Killing vectors, quadratic conservation laws and coisotropic coordinates. However, the computations of the present section allowed us to find explicitly the separating functions $W^{\prime}_{\alpha}$, $\alpha=1,\dotsc,3$. In addition, we give a more detailed classification, indicated by the indices $1,2$; the separable metric of class $[1,1,1]_{1}$ is not mentioned in paper [29].
## 8 Separation of variables in four dimensions
Separable metrics in four dimensions are of great importance in gravity models, in particular, in general relativity. These metrics have Lorentzian signature, and coisotropic coordinates may appear. There are ten classes of different separable metrics in four dimensions. 1) Class $[4,0,0]$. Four Killing vectors. Variables are separated in the same way as in lower dimensions. The canonical separable metric and conservation laws have the form (111) and (112), respectively, but now the indices run over all four values, $\alpha,\beta=1,2,3,4$. 2) Class $[3,1,0]_{2}$.
Three commuting Killing vectors and one indecomposable quadratic conservation law. As in three dimensions, the canonical separable metric and conservation laws have the form (116) and (117), but the indices $\alpha,\beta=1,2,3$ take more values. The canonical separable metric is parameterized by six arbitrary functions. The Kasner solution [30] lies in this class. 3) Class $[2,2,0]_{2}$. Two commuting Killing vector fields and two indecomposable quadratic conservation laws without coisotropic coordinates. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})\mapsto(x^{1},x^{2},y^{3},y^{4}),\qquad(c_{\alpha},d_{ij},a_{r})\mapsto(c_{1},c_{2},d_{33}:=d,d_{44}:=2E).$ This case is described by theorem 5.2. The matrix $b$ and the diagonal metric elements $g^{33}$ and $g^{44}$ have the form (118) and (119) with the replacement $(2,3)\mapsto(3,4)$. The block $g^{\alpha\beta}$, $\alpha,\beta=1,2$, of the inverse metric is $g^{\alpha\beta}=-k^{\alpha\beta}_{\mu\nu}g^{\mu\nu}=\frac{1}{\phi_{3}+\phi_{4}}\big(k^{\alpha\beta}_{3}+k^{\alpha\beta}_{4}\big),\qquad\phi_{3}+\phi_{4}\neq 0,$ where $\phi_{3}(y^{3})$, $\phi_{4}(y^{4})$, $k^{\alpha\beta}_{3}(y^{3})$, and $k^{\alpha\beta}_{4}(y^{4})$ are arbitrary functions of single coordinates. Thus the canonical separable metric has the form $g^{**}=\frac{1}{\phi_{3}+\phi_{4}}\begin{pmatrix}k^{\alpha\beta}_{3}+k^{\alpha\beta}_{4}&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}.$ (135) In addition, we have to require the fulfillment of the two conditions $\det g^{**}=\frac{(k^{11}_{3}+k^{11}_{4})(k^{22}_{3}+k^{22}_{4})-(k^{12}_{3}+k^{12}_{4})(k^{21}_{3}+k^{21}_{4})}{(\phi_{3}+\phi_{4})^{4}}\neq 0,\infty.$ (136) After separation of two coordinates, $W^{\prime}_{\alpha}\equiv c_{\alpha}$, the Hamilton–Jacobi equation becomes $\frac{1}{\phi_{3}+\phi_{4}}\big(k^{\alpha\beta}_{3}c_{\alpha}c_{\beta}+k^{\alpha\beta}_{4}c_{\alpha}c_{\beta}+W^{\prime 2}_{3}+W^{\prime 2}_{4}\big)=2E.$ (137) Variables are completely separated as $\begin{split}W^{\prime 2}_{3}=&~~d+2E\phi_{3}-k^{\alpha\beta}_{3}c_{\alpha}c_{\beta},\\ W^{\prime 2}_{4}=&-d+2E\phi_{4}-k^{\alpha\beta}_{4}c_{\alpha}c_{\beta}.\end{split}$ (138) The two conservation laws are quadratic in general: $\begin{split}\frac{1}{\phi_{3}+\phi_{4}}\big[\phi_{4}(p_{3}^{2}+k_{3}^{\alpha\beta}c_{\alpha}c_{\beta})-\phi_{3}(p_{4}^{2}+k_{4}^{\alpha\beta}c_{\alpha}c_{\beta})\big]=&d,\\ \frac{1}{\phi_{3}+\phi_{4}}\big[p_{3}^{2}+k_{3}^{\alpha\beta}c_{\alpha}c_{\beta}+p_{4}^{2}+k^{\alpha\beta}_{4}c_{\alpha}c_{\beta}\big]=&2E.\end{split}$ (139) In this case, the canonical separable metric (135) is parameterized by eight arbitrary functions, $\phi_{3}(y^{3})$, $\phi_{4}(y^{4})$, $k^{\alpha\beta}_{3}(y^{3})$, and $k^{\alpha\beta}_{4}(y^{4})$, satisfying the inequalities (136). Besides, they must be chosen in such a way that the equations (138) for $W^{\prime}$ have real solutions. This class of separable metrics includes the Schwarzschild, Reissner–Nordström, Kerr, and other famous solutions in general relativity [24]. 4) Class $[2,0,2]_{1}$. Two commuting Killing vectors and two coisotropic coordinates. Coordinates and parameters: $(x^{\alpha},z^{\varphi})=(x^{1},x^{2},z^{3},z^{4}),\qquad(c_{\alpha},d_{ii},a_{r})\mapsto(c_{1},c_{2},a_{3}:=a,a_{4}:=2E).$ This case is described by theorem 5.3. It is new and is considered in more detail.
The matrix $\phi$ (95) has the form $(\phi_{\varphi}{}^{r})=\begin{pmatrix}1&\phi_{3}\\ \phi_{4}&1\end{pmatrix}\qquad\Leftrightarrow\qquad(\phi_{\varphi}{}^{r})^{-1}=\frac{1}{1-\phi_{3}\phi_{4}}\begin{pmatrix}1&-\phi_{3}\\ -\phi_{4}&1\end{pmatrix},$ (140) where $\phi_{3}(z^{3})$ and $\phi_{4}(z^{4})$ are arbitrary functions of single variables. The functions $h_{\varphi}$ are $h_{3}(z^{3},c)=h_{3}^{\alpha}(z^{3})c_{\alpha},\qquad h_{4}(z^{4},c)=h_{4}^{\alpha}(z^{4})c_{\alpha},\qquad\alpha=1,2,$ where $h_{3}^{\alpha}(z^{3})$ and $h_{4}^{\alpha}(z^{4})$ are arbitrary nonzero functions of single coordinates. Then the matrix $b$ has the form $(b_{\varphi}{}^{r})=\begin{pmatrix}\displaystyle\frac{1}{h_{3}^{\alpha}c_{\alpha}}&\displaystyle\frac{\phi_{3}}{h_{3}^{\alpha}c_{\alpha}}\\[10.0pt] \displaystyle\frac{\phi_{4}}{h_{4}^{\alpha}c_{\alpha}}&\displaystyle\frac{1}{h_{4}^{\alpha}c_{\alpha}}\end{pmatrix}\qquad\Leftrightarrow\qquad(b_{r}{}^{\varphi})=\frac{1}{1-\phi_{3}\phi_{4}}\begin{pmatrix}h_{3}^{\alpha}c_{\alpha}&-\phi_{3}h_{4}^{\alpha}c_{\alpha}\\ -\phi_{4}h_{3}^{\alpha}c_{\alpha}&h_{4}^{\alpha}c_{\alpha}\end{pmatrix}.$ (141) It implies the separable metric $g^{**}=\frac{1}{1-\phi_{3}\phi_{4}}\begin{pmatrix}0&0&-\phi_{4}h_{3}^{1}&h_{4}^{1}\\ 0&0&-\phi_{4}h_{3}^{2}&h_{4}^{2}\\ -\phi_{4}h_{3}^{1}&-\phi_{4}h_{3}^{2}&0&0\\ h_{4}^{1}&h_{4}^{2}&0&0\end{pmatrix}.$ (142) The Hamilton–Jacobi equation after separation of the first two coordinates, $\frac{1}{1-\phi_{3}\phi_{4}}\big(-\phi_{4}h_{3}^{\alpha}c_{\alpha}W^{\prime}_{3}+h_{4}^{\alpha}c_{\alpha}W^{\prime}_{4}\big)=2E,$ (143) is completely separated as $\begin{split}W^{\prime}_{3}=&\frac{1}{h_{3}^{\alpha}c_{\alpha}}(a+2\phi_{3}E),\\ W^{\prime}_{4}=&\frac{1}{h_{4}^{\alpha}c_{\alpha}}(\phi_{4}a+2E).\end{split}$ (144) The conservation laws (97) are linear: $\begin{split}\frac{1}{1-\phi_{3}\phi_{4}}\big(h_{3}^{\alpha}c_{\alpha}p_{3}-\phi_{3}h_{4}^{\alpha}c_{\alpha}p_{4}\big)=&a,\\ \frac{1}{1-\phi_{3}\phi_{4}}\big(-\phi_{4}h_{3}^{\alpha}c_{\alpha}p_{3}+h^{\alpha}_{4}c_{\alpha}p_{4}\big)=&2E.\end{split}$ (145) The arbitrary functions are restricted by the inequality $\det g^{**}=\frac{(\phi_{4})^{2}\big(h_{3}^{2}h_{4}^{1}-h_{3}^{1}h_{4}^{2}\big)^{2}}{(1-\phi_{3}\phi_{4})^{4}}\neq 0,\infty.$ (146) Thus the canonical separable metric of type $[2,0,2]_{1}$ is parameterized by six arbitrary functions of single arguments satisfying requirement (146). Two nonzero functions, e.g. $h_{3}^{2}$ and $h_{4}^{1}$, can be set to unity. Note that $\det g^{**}$ is always positive. Therefore the signature of the separable metric of type $[2,0,2]_{1}$ can only be $(++--)$. 5) Class $[2,1,1]_{1}$. Two commuting Killing vectors, one indecomposable quadratic conservation law and one coisotropic coordinate. Coordinates and parameters: $(x^{\alpha},y^{\mu},z^{\varphi})=(x^{1},x^{2},y^{3}:=y,z^{4}:=z),\qquad(c_{\alpha},d_{ii},a_{r})\mapsto(c_{1},c_{2},d_{33}:=d,a_{4}:=2E).$ This case is described by theorem 5.5. The matrix $B$ is parameterized similarly to Eq. (123): $B=\begin{pmatrix}1&\phi_{3}\\ \displaystyle\frac{\phi_{4}}{h_{4}^{\alpha}c_{\alpha}}&\displaystyle\frac{1}{h_{4}^{\alpha}c_{\alpha}}\end{pmatrix}\qquad\Rightarrow\qquad B^{-1}=\frac{1}{1-\phi_{3}\phi_{4}}\begin{pmatrix}1&-\phi_{3}h_{4}^{\alpha}c_{\alpha}\\ -\phi_{4}&h_{4}^{\alpha}c_{\alpha}\end{pmatrix},$ where $\phi_{3}(y)$, $\phi_{4}(z)$, and $h_{4}^{\alpha}(z)$, $\alpha=1,2$, are arbitrary functions of single coordinates.
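As before, a direct check gives $\det B=(1-\phi_{3}\phi_{4})/(h_{4}^{\alpha}c_{\alpha})$, so nondegeneracy of $B$ requires $\phi_{3}\phi_{4}\neq 1$ and $h_{4}^{\alpha}c_{\alpha}\neq 0$.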
It yields the separable metric $g^{**}=\frac{1}{1-\phi_{3}\phi_{4}}\begin{pmatrix}\phi_{4}k_{3}^{\alpha\beta}&0&h_{4}^{\alpha}/2\\\ 0&-\phi_{4}&0\\\ h_{4}^{\beta}/2&0&0\end{pmatrix},$ (147) where the functions $k_{3}^{\alpha\beta}(y)=k_{3}^{\beta\alpha}(y)$ are arbitrary. After separation of the first two coordinates, $W^{\prime}_{\alpha}=c_{\alpha}$, the Hamilton–Jacobi equation takes the form
# The Sign of non-Gaussianity and the Primordial Black Holes Abundance Hassan Firouzjahi, School of Astronomy, Institute for Research in Fundamental Sciences (IPM), P. O. Box 19395-5531, Tehran, Iran Antonio Riotto, Département de Physique Théorique, Université de Genève, 24 quai E. Ansermet, CH-1211 Geneva, Switzerland; Gravitational Wave Science Center (GWSC), Université de Genève, 24 quai E. Ansermet, CH-1211 Geneva, Switzerland ###### Abstract The abundance of primordial black holes changes in the presence of local non-Gaussianity. A positive non-linear parameter $f_{NL}$ increases the abundance while a negative one reduces it. We show that in non-attractor single-field models of inflation which enhance the curvature power spectrum and may give rise to primordial black holes, $f_{NL}$ is always positive when computed at the peak of the curvature power spectrum, where the primordial black hole abundance has its maximum. This implies that the interpretation of the recent pulsar timing array data from scalar-induced gravitational waves generated at primordial black hole formation may not be supported by invoking non-Gaussianity within non-attractor single-field models. Introduction. Very recently the NANOGrav NANOGrav:2023gor ; NANOGrav:2023hde , EPTA EPTA:2023fyk ; EPTA:2023sfo ; EPTA:2023xxk , PPTA Reardon:2023gzh ; Zic:2023gta ; Reardon:2023zen and CPTA Xu:2023wog collaborations have provided evidence for a stochastic background of Gravitational Waves (GWs) detected through the pulsar timing arrays. One immediate question is under which circumstances such GWs can be associated with the formation of Primordial Black Holes (PBHs), during which GWs are inevitably generated at second order Sasaki:2018dmp . Their amount is proportional to the square of the amplitude of the dimensionless curvature perturbation power spectrum ${\cal P}_{\cal R}$, $\Omega_{\text{\tiny GW}}\sim{\cal P}_{\cal R}^{2}$. The abundance of PBHs is exponentially sensitive to the same amplitude, $f_{\text{\tiny PBH}}\sim{\rm exp}(-1/{\cal P}_{\cal R})$, where $f_{\text{\tiny PBH}}$ is the PBH abundance with respect to the total dark matter. The problem is that the observed stochastic GW background is explained by relatively large values of ${\cal P}_{\cal R}$, which have been claimed to lead to an overly large PBH abundance Franciolini:2023pbf ; Liu:2023ymk ; Cai:2023dls ; Inomata:2023zup ; Zhu:2023faa . While this negative conclusion may be invalidated by the recent observation that corrections from the non-linear radiation transfer function and the determination of the true physical horizon crossing decrease the PBH abundance DeLuca:2023tun , one can also rely on the introduction of some local Non-Gaussianity (NG) in the curvature perturbation ${\cal R}={\cal R}_{\rm g}+\frac{3}{5}f_{NL}\left({\cal R}^{2}_{\rm g}-\langle{\cal R}^{2}_{\rm g}\rangle\right),$ (1) where ${\cal R}_{\rm g}$ is the Gaussian component111We are adopting this quadratic expansion to be model independent, even though in general the exact relation between ${\cal R}$ and ${\cal R}_{\rm g}$ can be worked out model by model. However, since typically $f_{NL}{\cal R}_{\rm g}\lesssim 1$, the quadratic expansion is justified.. The short-scale power spectrum ${\cal P}_{S}$ responsible for the PBH formation is modulated by the presence of a long mode ${\cal R}_{L}$.
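A toy numerical sketch (ours, with unit-variance Gaussian samples and fiducial values $f_{NL}=\pm 1$; not a realistic curvature map) illustrates how the quadratic term in Eq. (1) skews the field: a positive $f_{NL}$ fattens the positive tail, which is what enhances the PBH abundance:

```python
# Illustrative sketch (assumed toy parameters): the quadratic term in
# Eq. (1) makes the field skewed, fattening (f_NL > 0) or thinning
# (f_NL < 0) the positive tail relevant for PBH formation.
import numpy as np

rng = np.random.default_rng(0)
Rg = rng.normal(0.0, 1.0, size=1_000_000)   # Gaussian component R_g

for fNL in (+1.0, -1.0):
    R = Rg + (3.0/5.0)*fNL*(Rg**2 - np.mean(Rg**2))   # Eq. (1)
    skew = np.mean(R**3) / np.mean(R**2)**1.5
    tail = np.mean(R > 3.0)                           # P(R > 3 sigma_g)
    print(f"f_NL={fNL:+.0f}:  skewness={skew:+.2f},  P(R>3)={tail:.2e}")
```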
The threshold ${\cal R}_{c}$ for the formation of the PBHs is shifted approximately by Young:2015kda ${\cal R}_{c}\simeq{\cal R}_{c}^{\rm g}\left(1-\frac{3}{5}f_{NL}{\cal R}_{c}^{\rm g}\right),$ (2) compared to the threshold ${\cal R}_{c}^{\rm g}$ in the Gaussian theory. Therefore, around peaks of the power spectrum of the curvature perturbation, a positive $f_{NL}$ increases the abundance of the PBHs, while a negative $f_{NL}$ has the opposite effect, thus helping the agreement with the recent pulsar timing array observations. This remains true even when calculating the abundance through a more appropriate variable, the averaged density contrast Kehagias:2019eil ; DeLuca:2022rfz . Under general assumptions, in this paper we will show that the sign of $f_{NL}$ at the peak scale of the power spectrum, where PBHs are mostly formed, is always positive in non-attractor single-field models. This no-go result is intimately related to the fact that $f_{NL}$ measures the response of the short-scale power spectrum ${\cal P}_{S}$ to the presence of a long mode and the sign of the NG is determined by the rate of growth of ${\cal P}_{S}$. The latter is positive if PBHs need to be produced, and this sets the sign of $f_{NL}$. Our findings automatically imply that NG may not help non-attractor single-field models to relax the tension between the observed stochastic GW background in pulsar timing arrays and the overproduction of PBHs. Non-attractor single-field models and the sign of NG. In attractor single-field models the curvature perturbation is constant on superhorizon scales and is equivalent in the spatially flat gauge to a field fluctuation ${\cal R}=-\delta\phi/\phi^{\prime}$, where primes denote derivatives with respect to the number of e-folds. The phase-space trajectory of the long mode perturbation follows that of the background itself. Short-scale modes evolving in a long mode perturbation then follow the phase-space trajectory of the background, with the only difference being the local number of e-folds, which determines the relation between the comoving and the physical wavenumbers. The NG is therefore proportional to the variation of the short-scale power spectrum due to the long-wavelength mode $\displaystyle{\cal P}_{S}(x)$ $\displaystyle=$ $\displaystyle{\cal P}_{S}\left[1-\frac{d\ln{\cal P}_{S}}{d\ln k_{S}}{\cal R}_{L}(x)\right]$ (3) $\displaystyle=$ $\displaystyle{\cal P}_{S}\left[1+\frac{12}{5}f_{NL}{\cal R}_{L}(x)\right].$ This modulation is zero at the peak of the short-scale power spectrum and corresponds to a dilation of scales rather than an amplitude enhancement. In non-attractor single-field models, the attractor condition $\delta\phi^{\prime}=(\phi^{\prime\prime}/\phi^{\prime})\delta\phi$ is violated. In fact, during an Ultra-Slow-Roll (USR) phase, the curvature perturbation grows like ${\cal R}\sim a^{3}$, with $a$ the scale factor, and therefore in the spatially flat gauge $\delta\phi=-\phi^{\prime}{\cal R}=$ constant, implying that $\delta\phi^{\prime}=0$. Because of the dependence of the background evolution on the initial kinetic energy, the perturbation may not be mapped into a change in the background clock along the same phase-space trajectory. The long mode perturbations carry no corresponding $\delta\phi^{\prime}$ and so they shift the USR trajectory to one with a different relationship between $\phi$ and $\phi^{\prime}$.
In other words, a local measurement is sensitive to $\phi^{\prime}$ as different observers provide different measurements of the short-scale power spectrum depending on their relative position in the long-wavelength mode. This implies that in USR models the corresponding value of $f_{NL}$ can be large, even at the peak of the short-scale power spectrum. We consider single-field models of inflation with the potential $V(\phi)$ for a canonically normalized scalar field with the sound speed of perturbations equal to the speed of the gravitational wave perturbations. To be general, we do not specify the form of the potential. We assume that inflation has multiple stages, containing at least three distinct phases. The first stage is a conventional slow-roll (SR) phase in which the observed large scales, such as the CMB scales, leave the horizon. The power spectrum of these perturbations is fixed by the CMB observations Planck to be ${\cal{P}}_{\cal{R}}\simeq 2\times 10^{-9}$ with ${\cal{R}}$ being the curvature perturbation. The second phase is when the power spectrum experiences a rapid growth, with a prime peak in the power spectrum to generate PBHs Byrnes:2018txb ; Cole:2022xqc ; Carrilho:2019oqg ; t ; Ozsoy:2021pws . A common mechanism for the enhancement of the power spectrum may be the USR setup where the potential is flat Kinney:2005vj ; Namjoo:2012aa . However, we consider a general case and for this purpose we may call this intermediate non-attractor phase a “USR-type” phase. All we require of the form of the potential is that the power spectrum increase monotonically during the second phase. The final phase is an attractor SR regime which extends to the end of inflation. The transitions between the stages can be either sharp or mild. We present our results for a three-phase setup SR $\rightarrow$ non-attractor $\rightarrow$ SR, and the extension of the results to setups with more phases is straightforward. We do not consider the stochastic motion of the background field, so the behaviour of $\phi$ is monotonic. The non-attractor phase is extended in the region $\phi_{e}<\phi<\phi_{s}$ during the time interval $t_{s}<t<t_{e}$ and we are interested in the growth of the power spectrum for the modes which leave the Hubble radius during the non-attractor phase. For PBH formation, we are interested in the short-scale power spectrum and in particular the PBH mass function will be dominated by the PBHs forming when the scale $k_{\rm pk}$ corresponding to the peak of the power spectrum re-enters the Hubble radius. Let us consider therefore the effect of the long mode $k_{L}\lesssim k_{\rm pk}\sim k_{S}$. Notice that the long mode itself undergoes a period of USR, but it has exited the Hubble radius earlier than the scale $k_{S}$. The measurements of the power spectrum and the bispectrum are made at the end of inflation $t=t_{f}$ when the modes are frozen. The effects of the long mode on the short modes can be viewed as the modulation of the background quantities at the end of the non-attractor phase $t=t_{e}$. As in the separate-universe approach, one can view the effects of the long mode as affecting nearby patches slightly differently. Consequently, different patches approach the final attractor phase with slightly different initial conditions modulated by the long mode at the end of the non-attractor phase.
With this picture in mind the bispectrum for two short modes under the modulation of a long mode can be written as $\displaystyle\Big{<}{\cal R}^{f}_{L}{\cal R}^{f}_{S}{\cal R}^{f}_{S}\Big{>}\simeq\Big{<}{\cal R}^{f}_{L}\,\Big{<}{\cal{R}}^{f}_{S}{\cal{R}}^{f}_{S}\Big{>}_{{\cal{R}}^{e}_{L}}\Big{>}$ (4) in which ${\cal{R}}_{S}$ and ${\cal{R}}_{L}$ represent the short and long modes while the superscripts $f$ and $e$ indicate the corresponding values at $t=t_{f}$ and $t=t_{e}$, respectively. The assumption of having a single-field setup is essential in writing the above relation. If there are extra light fields, then one has to include the modulations by them in the right-hand side of Eq. (4) as well. In non-attractor single-field models ${\cal R}_{L}$ and $\dot{\cal R}_{L}$ are to be treated as independent variables Namjoo:2013fka . Expanding $\big{<}{\cal{R}}^{f}_{S}{\cal{R}}^{f}_{S}\big{>}_{{\cal{R}}^{e}_{L}}$ to leading order yields $\displaystyle\Big{<}{\cal R}^{f}_{L}{\cal R}^{f}_{S}{\cal R}^{f}_{S}\Big{>}$ $\displaystyle\simeq$ $\displaystyle\Big{<}{\cal R}^{f}_{L}\left({\cal R}^{e}_{L}\frac{\partial}{\partial{\cal R}^{e}_{L}}\langle{\cal R}^{f}_{S}{\cal R}^{f}_{S}\rangle\right.$ (5) $\displaystyle+$ $\displaystyle\left.\dot{\cal R}^{e}_{L}\frac{\partial}{\partial\dot{\cal R}^{e}_{L}}\langle{\cal R}^{f}_{S}{\cal R}^{f}_{S}\rangle\right)\Big{>}.$ An implicit assumption in performing the above expansion is that ${\cal{R}}$ and $\dot{\cal{R}}$ be continuous across the transition. This is the usual assumption that one needs to impose for the continuity of the metric and the extrinsic curvature across the transition. Having said this, we do not impose any assumption on the potential $V(\phi)$ and its derivatives, as long as ${\cal{R}}$ and $\dot{\cal{R}}$ are continuous across the transition. Expressing the left-hand side of Eq. (5) in terms of the usual non-Gaussianity parameter $f_{NL}$ and defining the power spectrum in Fourier space as $\big{\langle}{\cal{R}}_{\bf k_{1}}{\cal{R}}_{\bf k_{2}}\big{\rangle}=(2\pi)^{3}\delta^{3}({\bf{k_{1}}}+{\bf{k_{2}}})P(k_{1})$ and discarding the trivial factors of $(2\pi)^{3}\delta^{3}(\bf{k})$ which match automatically by momentum conservation, we obtain $\displaystyle\frac{12}{5}f_{NL}{P}^{f}_{L}{P}^{f}_{S}$ $\displaystyle\simeq$ $\displaystyle\langle{\cal{R}}^{f}_{L}{\cal{R}}^{e}_{L}\big{\rangle}\frac{\partial{P}^{f}_{S}}{\partial{\cal R}^{e}_{L}}+\big{\langle}{\cal{R}}^{f}_{L}\dot{\cal{R}}^{e}_{L}\big{\rangle}\frac{\partial{P}^{f}_{S}}{\partial\dot{\cal R}^{e}_{L}},$ (6) in which $P^{f}_{S}$ and $P^{f}_{L}$ represent the power spectra at the end of inflation for the short and long modes, respectively. From the above expression we have to calculate correlations like $\big{\langle}{\cal{R}}^{f}_{L}{\cal{R}}^{e}_{L}\big{\rangle}$ for the long mode perturbations at two different times $t_{e}$ and $t_{f}$. As explained before, this is because the long mode at the end of the non-attractor phase modulates the power spectrum of the short modes which are measured at the end of inflation.
Since the long mode is far outside the horizon at the end of the non-attractor phase, we can treat it as classical and relate $\big{\langle}{\cal{R}}^{f}_{L}{\cal{R}}^{e}_{L}\big{\rangle}$ to $P^{f}_{L}$ via the ratio of the mode functions at these two times: $\displaystyle\big{\langle}{\cal{R}}^{f}_{L}{\cal{R}}^{e}_{L}\big{\rangle}=\left(\frac{{\cal{R}}^{e}_{L}}{{\cal{R}}^{f}_{L}}\right)P^{f}_{L},$ (7) and similarly $\displaystyle\big{\langle}{\cal{R}}^{f}_{L}\dot{\cal{R}}^{e}_{L}\big{\rangle}=\frac{1}{2}\left(\frac{{\cal{R}}^{f}_{L}}{{\cal{R}}^{e}_{L}}\right)\frac{dP^{e}_{L}}{dt}.$ (8) Plugging the above relations into Eq. (6) yields $\displaystyle\frac{12}{5}f_{NL}$ $\displaystyle=$ $\displaystyle\left(\frac{{\cal R}^{e}_{L}}{{\cal R}^{f}_{L}}\right)\frac{\partial\ln{\cal{P}}^{f}_{S}}{\partial{\cal{R}}^{e}_{L}}+\frac{1}{2}\left(\frac{{\cal R}^{f}_{L}}{{\cal R}^{e}_{L}}\right)\frac{\dot{\cal{P}}^{e}_{L}}{{\cal{P}}^{f}_{L}}\frac{\partial\ln{\cal{P}}^{f}_{S}}{\partial\dot{\cal{R}}^{e}_{L}},$ (9) in which the dimensionless power spectrum ${\cal{P}}_{\cal{R}}$ is related to the power spectrum via ${\cal{P}}_{\cal{R}}\equiv\frac{k^{3}}{2\pi^{2}}P_{\cal{R}}.$ (10) We should now trade the two independent variables $({\cal{R}}_{L},\dot{\cal{R}}_{L})$ for two other variables in which the partial derivatives have a more transparent meaning. From the point of view of a local observer within a region of size $\sim 1/k_{S}$, the long mode perturbation evolves with time, but with negligible spatial gradients, so the metric takes the following form $\displaystyle ds^{2}=-dt^{2}+a^{2}(t)e^{2{\cal{R}}_{L}(t)}d{\bf x}^{2}.$ (11) We can absorb the long mode into the scale factor via $\widetilde{a}\equiv ae^{{\cal{R}}_{L}}$ and the corresponding Hubble rate will change as $\widetilde{H}=H+\dot{\cal{R}}_{L}$. Consequently $\displaystyle d\ln\widetilde{a}=d{\cal{R}}_{L}$ (12) and $\displaystyle d\widetilde{H}=d\dot{\cal{R}}_{L}.$ (13) Eqs. (12) and (13) are two differential relations that can be used to relate $(d{\cal{R}},d\dot{\cal{R}})$ to $(d\ln\widetilde{a},d\widetilde{H})$. More specifically, we have $\displaystyle d\ln{\cal{P}}_{S}$ $\displaystyle=$ $\displaystyle\frac{\partial\ln{{\cal{P}}}_{S}}{\partial{\cal R}_{L}}d{\cal{R}}_{L}+\frac{\partial\ln{{\cal{P}}}_{S}}{\partial\dot{\cal R}_{L}}d\dot{\cal{R}}_{L}$ (14) $\displaystyle=$ $\displaystyle\frac{\partial\ln{{\cal{P}}}_{S}}{\partial\ln\widetilde{a}}d\ln\widetilde{a}+\frac{\partial\ln{{\cal{P}}}_{S}}{\partial\widetilde{H}}d\widetilde{H}.$ Using the relations between $(d\ln\widetilde{a},d\widetilde{H})$ and $(d{\cal{R}},d\dot{\cal{R}})$, from the second line of the above equation we obtain $\displaystyle d\ln{\cal{P}}_{S}=\frac{\partial\ln{{\cal{P}}}_{S}}{\partial\ln\widetilde{a}}d{\cal{R}}_{L}+\frac{\partial\ln{{\cal{P}}}_{S}}{\partial\widetilde{H}}{d\dot{\cal{R}}_{L}}.$ (15) Comparing this differential relation with the first line of Eq.
(14) we obtain $\displaystyle\frac{\partial\ln{{\cal{P}}}_{S}}{\partial{\cal R}_{L}}=\frac{\partial\ln{{\cal{P}}}_{S}}{\partial\ln\widetilde{a}}$ (16) and $\displaystyle\frac{\partial\ln{{\cal{P}}}_{S}}{\partial\dot{\cal R}_{L}}=\frac{\partial\ln{{\cal{P}}}_{S}}{\partial\widetilde{H}}.$ (17) Now, plugging the above relations into formula (9) and replacing $\widetilde{a}$ and $\widetilde{H}$ simply by $a$ and $H$ yields $\displaystyle\frac{12}{5}f_{NL}$ $\displaystyle=$ $\displaystyle\left(\frac{{\cal{R}}^{e}_{L}}{{\cal{R}}^{f}_{L}}\right)\frac{\partial\ln{{\cal{P}}}^{f}_{S}}{\partial\ln a_{e}}+\left(\frac{{\cal{R}}^{f}_{L}}{{\cal{R}}^{e}_{L}}\right)\frac{\dot{\cal{P}}^{e}_{L}}{2H^{2}_{e}{\cal{P}}^{f}_{L}}\frac{\partial\ln{\cal P}^{f}_{S}}{\partial\ln H_{e}}.$ (18) One can think of Eq. (18) as an extension of Maldacena’s consistency condition Maldacena:2002vr to the non-attractor setups (see also cr1 ; cr2 ). The importance of this consistency condition is that we can read off the value of $f_{NL}$ from the properties of the power spectrum, without the need to calculate the bispectrum using either the $\delta N$ or the in-in formalism at higher orders in perturbation theory. So far our analysis was general, relying only on the assumption of a single-field inflation model undergoing non-attractor phase(s) during inflation. The working assumption is that the power spectrum experiences rapid growth until it reaches a peak associated with the narrow range of scales where PBHs are formed. For the modes which leave the Hubble radius during the non-attractor phase and near the peak, the power spectrum locally has the following form in momentum space $\displaystyle{\cal{P}}_{S}=f(a_{e})\left(\frac{k_{S}}{a_{e}H_{e}}\right)^{n_{\cal{R}}-1},$ (19) in which $n_{\cal{R}}$ is the spectral index and $f(a)$ is a function of the background which controls the rapid growth of the power spectrum. Technically speaking, the factor $f(a)$ comes from the fact that the first slow-roll parameter $\epsilon\equiv-\dot{H}/H^{2}$ falls off rapidly during the non-attractor phase so the power spectrum ${\cal{P}}\propto\epsilon^{-1}$ experiences a rapid growth during the non-attractor phase. For example, in the conventional USR phase $\epsilon\propto a^{-6}$ and correspondingly $f(a)=a^{6}$. In our analysis, we do not rely on the particular type of transition or the form of $f(a)$; all we assume is that $f(a)$ is a growing function of $a$, to ensure the rapid growth of ${\cal{P}}_{\cal{R}}$ during the non-attractor phase. We emphasize again that the form of the power spectrum given in Eq. (19) is valid only locally near the peak, which follows the rapid increase in the power spectrum. The general form of the power spectrum in $k$-space is more complicated and may not even be described by a power-law behaviour. For example, it can have oscillatory features after the prime peak as in the conventional USR setup Byrnes:2018txb ; Cole:2022xqc ; Carrilho:2019oqg ; t ; Ozsoy:2021pws . However, since we are interested in the power spectrum slightly before and around the peak associated with the narrow scales where the PBHs are formed, the ansatz (19) is physically justified. From Eq.
(19) we infer $\displaystyle\frac{\partial\ln{\cal P}^{f}_{S}}{\partial\ln H_{e}}=-\frac{d\ln{\cal P}^{f}_{S}}{d\ln k_{S}}=1-n_{\cal R}.$ (20) Near the peak of the power spectrum, by definition, $(n_{\cal R}-1)\simeq 0$ and correspondingly we obtain $\displaystyle f_{NL}^{\rm pk}=\frac{5}{12}\left(\frac{{\cal{R}}^{e}_{L}}{{\cal{R}}^{f}_{L}}\right)\frac{\partial\ln{\cal P}^{f}_{S}}{\partial\ln a_{e}}.$ (21) We note that the prefactor $({\cal{R}}^{e}_{L}/{\cal{R}}^{f}_{L})$ appears because the mode function in general evolves after the non-attractor phase. This is because the transition from the non-attractor phase to the final attractor phase may be mild, so the mode keeps evolving in time until it reaches its final attractor value c . The long mode is far outside the horizon after the peak, evolving from its initial value ${\cal{R}}^{e}_{L}$ at $t=t_{e}$ to its final value ${\cal{R}}^{f}_{L}$ at $t=t_{f}$. Therefore, ${\cal{R}}^{f}_{L}$ is in phase with ${\cal{R}}^{e}_{L}$ in $k$-space. However, as the background quantities such as the slow-roll parameters are evolving during a mild transition, the mode function may change sign, so the ratio $({\cal{R}}^{e}_{L}/{\cal{R}}^{f}_{L})$ may become negative. On the other hand, if the transition is mild, then the peak in the power spectrum will not be significant, since the power spectrum keeps evolving afterwards, so it is not a viable model for PBH formation in the first place. Therefore, in what follows, we make an implicit assumption that the transition from the intermediate non-attractor phase to the final attractor phase is sharp enough such that $({\cal{R}}^{e}_{L}/{\cal{R}}^{f}_{L})$ remains positive. Since the power spectrum is an increasing function of time during the intermediate non-attractor phase, we conclude that $f_{NL}^{\rm pk}>0.$ (22) While our conclusion about the sign of $f_{NL}^{\rm pk}$ is general (with the implicit assumption of a sharp enough transition), let us examine it for some non-trivial examples. Consider first a setup in which a USR phase is followed by an attractor SR phase, where the transition to the final attractor phase can be either sharp or mild. Defining the slow-roll parameter associated with the derivative of the potential at the final attractor phase by $\sqrt{2\epsilon_{V}}\equiv V_{\phi}/V$, the sharpness of the transition from the intermediate USR phase to the final attractor phase is determined by the parameter $h$ given by c $\displaystyle h\equiv-6\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\,,$ (23) in which $\epsilon_{e}$ is the value of the slow-roll parameter at the end of the USR phase. Note that in this convention $h<0$. For a very sharp transition $|h|\gg 1$, while for a mild transition $h$ may be comparable to the slow-roll parameters. In order to have a sharp enough transition such that the ratio $({\cal{R}}^{e}_{L}/{\cal{R}}^{f}_{L})$ remains positive, we assume $\eta_{V}\rightarrow 0$, where $\eta_{V}$ is the second slow-roll parameter given by $\eta_{V}=V_{\phi\phi}/V$. The mode function for the modes which leave the horizon during the USR phase is given by c $\displaystyle{\cal{R}}^{f}_{k}=\left(1+\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\right)\frac{H}{\sqrt{4\epsilon_{V}k^{3}}}.$ (24) Since during the USR phase the slow-roll parameter falls off like $a^{-6}$, we have $\epsilon_{e}\propto a_{e}^{-6}$.
Taking the derivative with respect to $a_{e}$ we find $\displaystyle\frac{d\ln{\cal{P}}^{f}}{d\ln a_{e}}=6\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\left(1+\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\right)^{-1}=\frac{6h}{h-6}.$ (25) On the other hand, the ratio ${\cal{R}}^{e}_{L}/{\cal{R}}^{f}_{L}$ yields an additional factor $\displaystyle\frac{{\cal{R}}^{e}_{L}}{{\cal{R}}^{f}_{L}}=\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\left(1+\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\right)^{-1}=\frac{h}{h-6}.$ (26) We see that the ratio $({\cal{R}}^{e}_{L}/{\cal{R}}^{f}_{L})$ is positive as expected. Using Eqs. (25) and (26) in our formula (21) yields $\displaystyle f_{NL}^{\rm pk}=\frac{5h^{2}}{2(6-h)^{2}}>0.$ (27) For an infinitely sharp transition with $h\rightarrow-\infty$ in which the mode function is frozen immediately after the transition with ${\cal{R}}^{e}_{L}={\cal{R}}^{f}_{L}$, from Eq. (27) we obtain the expected result $f_{NL}^{\rm pk}=5/2$. The expression in Eq. (27) agrees with the result for $f_{NL}$ obtained in c where the power spectrum is scale-invariant as well. As a second example, now suppose we extend the above setup such that there is an upward shift $\Delta V$ in the potential at the end of the non-attractor phase, followed by the final SR phase. As in Ref. Cai:2022erk , suppose the upward step in the potential is instantaneous, yielding a sudden change in the inflaton's velocity. Imposing the conservation of energy, the inflaton velocity at the end of the upward transition, $\pi_{d}$, is related to the velocity at the end of the non-attractor phase, $\pi_{e}$, via $\displaystyle\pi_{d}=-\sqrt{\pi_{e}^{2}-6\frac{\Delta V}{V}}\,,$ (28) in which $\pi\equiv\phi^{\prime}$ with a prime denoting the derivative with respect to the number of e-folds. The linear mode function is given by Cai:2022erk $\displaystyle{\cal{R}}^{f}_{k}=\left(\frac{1}{g}+\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\right)\frac{H}{\sqrt{4\epsilon_{V}k^{3}}},$ (29) in which $g\equiv\pi_{d}/\pi_{e}$ with $0<g<1$. Correspondingly, this yields $\displaystyle\frac{d\ln{\cal{P}}^{f}}{d\ln a_{e}}=\frac{6hg^{4}+36g^{2}-36}{g^{2}(g^{2}h-6)},$ (30) in which the sharpness parameter $h$ is now defined as $h\equiv-(6/g)\sqrt{\epsilon_{V}/\epsilon_{e}}$. In addition, the ratio of the mode functions is given by $\displaystyle\frac{{\cal{R}}^{e}_{L}}{{\cal{R}}^{f}_{L}}=\frac{hg^{2}}{hg^{2}-6}>0.$ (31) Note that if we set $g=1$ so $\Delta V=0$, Eqs. (31) and (30) reduce to Eqs. (26) and (25), respectively. Now plugging Eqs. (31) and (30) into our master formula Eq. (21) yields $\displaystyle f_{NL}^{\rm pk}=\frac{5h(hg^{4}+6g^{2}-6)}{2(g^{2}h-6)^{2}},$ (32) in exact agreement with Cai:2022erk for a scale-invariant power spectrum. If we set $g=1$, corresponding to no bump in the potential, then Eq. (32) reduces to Eq. (27). Noting that $h<0$ and $0<g<1$, one can check that $f_{NL}^{\rm pk}>0$ for all allowed values of $(h,g)$ as our theorem predicts. Note that the above value of $f_{NL}$ was calculated in Cai:2022erk using the $\delta N$ formalism to second order in perturbation theory. However, in our approach based on the consistency condition, we only need to calculate the linear mode function without the need to go to higher orders in perturbation theory. As a corollary, our theorem implies that in setups where the power spectrum experiences a suppression, going through a minimum, $f_{NL}<0$ at the minimum, as was observed in a specific setup in Domenech:2023dxx .
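The algebra of these two examples is easy to verify symbolically; the sketch below (ours, not from the paper) checks that Eqs. (25)-(26) combine into Eq. (27) with the correct $h\to-\infty$ limit, that Eqs. (30)-(31) combine into Eq. (32) and reduce to Eq. (27) at $g=1$, and that $f_{NL}^{\rm pk}>0$ on a grid of allowed $(h,g)$ values:

```python
# Symbolic check (sympy) of Eqs. (25)-(27) and (30)-(32) and of the
# positivity claim for h < 0, 0 < g < 1.
import itertools
import sympy as sp

h, g = sp.symbols('h g', real=True)

# Sharp transition: Eqs. (25), (26) -> Eq. (27)
dlnP = 6*h/(h - 6)          # Eq. (25)
ratio = h/(h - 6)           # Eq. (26)
fNL = sp.Rational(5, 12)*ratio*dlnP
assert sp.simplify(fNL - 5*h**2/(2*(6 - h)**2)) == 0          # Eq. (27)
assert sp.limit(fNL, h, -sp.oo) == sp.Rational(5, 2)          # f_NL -> 5/2

# Upward step: Eqs. (30), (31) -> Eq. (32)
dlnP_g = (6*h*g**4 + 36*g**2 - 36)/(g**2*(g**2*h - 6))        # Eq. (30)
ratio_g = h*g**2/(h*g**2 - 6)                                 # Eq. (31)
fNL_g = sp.Rational(5, 12)*ratio_g*dlnP_g
target = 5*h*(h*g**4 + 6*g**2 - 6)/(2*(g**2*h - 6)**2)        # Eq. (32)
assert sp.simplify(fNL_g - target) == 0
assert sp.simplify(fNL_g.subs(g, 1) - fNL) == 0               # g = 1 limit

# Positivity scan over h < 0, 0 < g < 1
vals = [fNL_g.subs({h: hv, g: gv})
        for hv, gv in itertools.product([-0.1, -1, -10, -100],
                                        [0.1, 0.5, 0.9, 0.99])]
assert all(v > 0 for v in vals)
```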
Conclusions. In this note we have shown that the non-linear parameter $f_{NL}$ in single-field non-attractor models is always positive if calculated for the peak of the enhanced power spectrum. This result implies that the NG always increases the PBH abundance. The sign of the NG is fixed by the response of the short-scale power spectrum to the presence of a long mode. If PBHs are to be formed, the short-scale power spectrum needs to grow, and this sets the sign of $f_{NL}^{\rm pk}$ uniquely. This logic implies that our no-go result does not hold in the case in which the NG is generated after the inflationary phase, e.g. in the presence of a spectator field. Indeed, one can generate PBHs within a spiky model where the comoving curvature power spectrum is enhanced at small scales through a spectator isocurvature field Kawasaki:2012wr . This isocurvature perturbation will then subsequently decay into a radiation perturbation and become a curvature mode after inflation. In such a case the long mode cannot be reabsorbed by a redefinition of the scale factor and therefore the sign of the NG is not defined. As a consequence, $f_{NL}$ can be negative in models with extra fields. We comment that our conclusion about the sign of $f_{NL}^{\rm pk}$ requires an implicit assumption that the transition from the non-attractor phase to the final attractor phase be sharp enough so the mode function keeps its original sign. Physically, this is the relevant case for PBH formation since if the transition is not sharp enough, then the peak is not prominent and PBHs may not form in the first place. Acknowledgments. H.F. thanks the Department of Theoretical Physics at the University of Geneva for the kind hospitality when part of this work was done. We thank M. Sasaki and M. H. Namjoo for insightful discussions and comments. A.R. thanks the Boninchi Foundation for support. ## References * (1) G. Agazie et al. [NANOGrav], Astrophys. J. Lett. 951, no.1, L8 (2023) [astro-ph.HE/2306.16213]. * (2) G. Agazie et al. [NANOGrav], Astrophys. J. Lett. 951, no.1, L9 (2023) [astro-ph.HE/2306.16217]. * (3) J. Antoniadis et al. [EPTA], [astro-ph.HE/2306.16214]. * (4) J. Antoniadis et al. [EPTA], [astro-ph.HE/2306.16224]. * (5) J. Antoniadis et al. [EPTA], [astro-ph.CO/2306.16227]. * (6) D. J. Reardon, A. Zic, R. M. Shannon, G. B. Hobbs, M. Bailes, V. Di Marco, A. Kapur, A. F. Rogers, E. Thrane and J. Askew, et al. Astrophys. J. Lett. 951, no.1, L6 (2023) [astro-ph.HE/2306.16215]. * (7) A. Zic, D. J. Reardon, A. Kapur, G. Hobbs, R. Mandow, M. Curyło, R. M. Shannon, J. Askew, M. Bailes and N. D. R. Bhat, et al. [astro-ph.HE/2306.16230]. * (8) D. J. Reardon, A. Zic, R. M. Shannon, V. Di Marco, G. B. Hobbs, A. Kapur, M. E. Lower, R. Mandow, H. Middleton and M. T. Miles, et al. Astrophys. J. Lett. 951, no.1, L7 (2023) [astro-ph.HE/2306.16229]. * (9) H. Xu, S. Chen, Y. Guo, J. Jiang, B. Wang, J. Xu, Z. Xue, R. N. Caballero, J. Yuan and Y. Xu, et al. Res. Astron. Astrophys. 23, no.7, 075024 (2023) [astro-ph.HE/2306.16216]. * (10) M. Sasaki, T. Suyama, T. Tanaka and S. Yokoyama, Class. Quant. Grav. 35, no.6, 063001 (2018) [astro-ph.CO/1801.05235]. * (11) G. Franciolini, A. Iovino, Jr., V. Vaskonen and H. Veermae, [astro-ph.CO/2306.17149]. * (12) L. Liu, Z. C. Chen and Q. G. Huang, [astro-ph.CO/2307.01102]. * (13) Y. F. Cai, X. C. He, X. Ma, S. F. Yan and G. W. Yuan, [gr-qc/2306.17822]. * (14) K. Inomata, K. Kohri and T. Terada, [astro-ph.CO/2306.17834]. * (15) Q. H. Zhu, Z. C. Zhao and S.
Wang, [astro-ph.CO/2307.03095]. * (16) V. De Luca, A. Kehagias and A. Riotto, [astro-ph.CO/2307.13633]. * (17) S. Young and C. T. Byrnes, JCAP 04, 034 (2015) [astro-ph.CO/1503.01505]. * (18) A. Kehagias, I. Musco and A. Riotto, JCAP 12, 029 (2019) [astro-ph.CO/1906.07135]. * (19) V. De Luca and A. Riotto, Phys. Lett. B 828, 137035 (2022) [astro-ph.CO/2201.09008]. * (20) Y. Akrami et al. [Planck], Astron. Astrophys. 641 (2020), A10 [astro-ph.CO/1807.06211]. * (21) C. T. Byrnes, P. S. Cole and S. P. Patil, JCAP 06, 028 (2019) [astro-ph.CO/1811.11158]. * (22) P. S. Cole, A. D. Gow, C. T. Byrnes and S. P. Patil, [astro-ph.CO/2204.07573]. * (23) P. Carrilho, K. A. Malik and D. J. Mulryne, Phys. Rev. D 100, no.10, 103529 (2019) [astro-ph.CO/1907.05237]. * (24) O. Özsoy and G. Tasinato, Phys. Rev. D 105, no.2, 023524 (2022) [astro-ph.CO/2111.02432]. * (25) G. Tasinato, [hep-th/2305.11568]. * (26) W. H. Kinney, Phys. Rev. D 72, 023515 (2005) [gr-qc/0503017]. * (27) M. H. Namjoo, H. Firouzjahi and M. Sasaki, [astro-ph.CO/1210.3692]. * (28) M. H. Namjoo, S. Baghram and H. Firouzjahi, Phys. Rev. D 88, 083527 (2013) [astro-ph.CO/1305.0813]. * (29) J. M. Maldacena, JHEP 05, 013 (2003) [astro-ph/0210603]. * (30) R. Bravo, S. Mooij, G. A. Palma and B. Pradenas, JCAP 05, 024 (2018) [astro-ph.CO/1711.02680]. * (31) B. Finelli, G. Goon, E. Pajer and L. Santoni, Phys. Rev. D 97, no.6, 063531 (2018) [hep-th/1711.03737]. * (32) Y. F. Cai, X. Chen, M. H. Namjoo, M. Sasaki, D. G. Wang and Z. Wang, JCAP 05 (2018), 012 [astro-ph.CO/1712.09998]. * (33) Y. F. Cai, X. H. Ma, M. Sasaki, D. G. Wang and Z. Zhou, JCAP 12, 034 (2022) [astro-ph.CO/2207.11910]. * (34) G. Domènech, G. Vargas and T. Vargas, [astro-ph.CO/2309.05750]. * (35) M. Kawasaki, N. Kitajima and T. T. Yanagida, Phys. Rev. D 87, no.6, 063519 (2013) [hep-ph/1207.2550].
P. Carrara and M. Brioschi contributed equally. # Coherent and dissipative coupling in a magneto-mechanical system P. Carrara Dipartimento di Fisica, Università degli Studi di Milano, Via Celoria 16, 20133 Milano, Italy Istituto Officina dei Materiali, Consiglio Nazionale delle Ricerche, Strada Statale 14, km 163.5, 34149 Basovizza (TS), Italy M. Brioschi Dipartimento di Fisica, Università degli Studi di Milano, Via Celoria 16, 20133 Milano, Italy Istituto Officina dei Materiali, Consiglio Nazionale delle Ricerche, Strada Statale 14, km 163.5, 34149 Basovizza (TS), Italy R. Silvani Istituto Officina dei Materiali, Consiglio Nazionale delle Ricerche, c/o Dipartimento di Fisica e Geologia, Via A. Pascoli, 06123 Perugia, Italy A.O. Adeyeye Department of Physics, Durham University, South Rd, DH1 3LE Durham, United Kingdom Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117576 Singapore G. Panaccione Istituto Officina dei Materiali, Consiglio Nazionale delle Ricerche, Strada Statale 14, km 163.5, 34149 Basovizza (TS), Italy G. Gubbiotti Corresponding author, email: <EMAIL_ADDRESS>Istituto Officina dei Materiali, Consiglio Nazionale delle Ricerche, c/o Dipartimento di Fisica e Geologia, Via A. Pascoli, 06123 Perugia, Italy G. Rossi Corresponding author, email<EMAIL_ADDRESS>Dipartimento di Fisica, Università degli Studi di Milano, Via Celoria 16, 20133 Milano, Italy Istituto Officina dei Materiali, Consiglio Nazionale delle Ricerche, Strada Statale 14, km 163.5, 34149 Basovizza (TS), Italy R. Cucini Istituto Officina dei Materiali, Consiglio Nazionale delle Ricerche, Strada Statale 14, km 163.5, 34149 Basovizza (TS), Italy ###### Abstract Hybrid elastic and spin waves hold promise for energy-efficient and versatile generation and detection of magnetic signals, with potentially long coherence times. Here we report on the combined elastic and magnetic dynamics in a one-dimensional magneto-mechanical crystal composed of an array of magnetic nanowires. Phononic and magnonic modes are impulsively excited by an optical ultrafast trigger and their decay is monitored by time-resolved Magneto-Optical Kerr Effect, with complementary Brillouin Light Scattering measurements and micromagnetic simulations. The strength and degree of mixing of coherent and dissipative coupling of the quasi-particles are determined quantitatively. In hybrid magnonics, the coupling of magnons (the quanta of spin waves) to other degrees of freedom is explored to achieve enhanced functionalities in solid-state systems and devices [1]. Novel results have been achieved in coupling magnons to microwave [2] and optical photons [3], and/or to phonons [4, 5, 6, 7], or even to superconducting qubits [8]. This approach has found fertile ground in the field of quantum engineering [9], where hybridized quasi-particles can boost transduction and sensing capabilities [10, 11], down to single-quantum detection and manipulation [12], or allow novel computation, simulation and storage platforms [13]. Here we report the experimental observation of magnon-phonon mixed coupling in a 1D magnonic-phononic crystal via time-resolved Magneto-Optical Kerr Effect (tr-MOKE). The hybridized modes entangle the coherent and dissipative couplings of the quasi-particle subsystems, and indicate a novel energy exchange mechanism in magnonic-phononic crystals.
Experiments in the time domain are crucial to explore the weak-coupling regime: the strength of coherent and dissipative coupling is quantitatively assessed by employing a Hamiltonian model. Coherent and dissipative coupling [14] describe the energy exchange between two coupled systems occurring either directly [15] or via a common reservoir [16], respectively. Dissipative coupling is now boosting research for its close link to non-Hermitian physics [17, 18] and non-reciprocal transport [19]. Both coupling mechanisms can coexist in a single system, creating the possibility of tuning one into the other. Nonetheless, only a few papers report the coexistence of both coherent and dissipative coupling in hybrid systems [20, 16, 21, 19], a condition we dub mixed coupling. Figure 1: (a) SEM micrograph of the investigated sample. Inset: a cross-sectional sketch of the bilayered NWs; the inter-NW distance $d$, NW width $w$, and full periodicity $D$ are also indicated (not to scale). (b) Magnetic hysteresis loop obtained via static MOKE at $\varphi=60^{\circ}$; the coercive field $B_{c}=17$ mT is indicated. (c) Sketch of the experimental setup for tr-MOKE. (d-e) Time- and frequency-domain maps of the tr-MOKE signal at $\varphi=60^{\circ}$. The magnetic field is swept from positive to negative values. The vertical dashed lines at $B_{\text{ext}}=-22$ mT highlight the magnetization reversal, which results in phase anomalies in the time domain and in a discontinuity for the PM mode in the frequency domain. The enhancement of MOKE amplitude at late delays in panel (d) corresponds to the condition of PM-MEC1 coupling in panel (e). The sample employed in this study is an array of rectangular cross-sectioned bilayered nanowires (NWs) of Fe (10 nm thick) and Py (Ni80Fe20, 10 nm thick) on Si (001) substrate. Details on the fabrication process can be found in Ref.[22]. Each NW is $w=340$ nm wide, and the inter-NW spacing is $d=70$ nm; this gives an overall periodicity $D=w+d=410$ nm (Fig.1(a)). The coercive field is $B_{c}=18$ mT, as extracted from static MOKE hysteresis loops (Fig.1(b)). Such a system has been characterized as a magnonic crystal via Brillouin Light Scattering (BLS), ferromagnetic resonance and micromagnetic simulations [23, 24, 25, 26]: magnon bands form owing to the inter-NW magnetic dipolar interaction. Similarly, the spatial modulation of the elastic properties of the sample surface gives rise to standing surface phononic modes, together with acoustic modes localized in each NW [27, 28, 29]. The optical end-station employed for pump-probe spectroscopy is sketched in Fig.1(c): the sample is excited with an ultrafast near-infrared pulsed laser (pump), and the magnetic and magneto-elastic dynamics are probed via tr-MOKE. The incidence angles from the sample normal are $12^{\circ}$ and $6^{\circ}$ for the pump and probe beams, respectively; consequently, we are primarily sensitive to the out-of-plane component of the magnetization. Further details on the setup are reported in [30] and in the Supplemental Material [31]. The pump excitation triggers acoustic modes and, via inverse magnetostriction, the phononic strain fields interact with the magnetization of the NWs, leading to coupled magneto-mechanical dynamics [32]. Moreover, it is possible to excite pure magnetization dynamics via heat-induced magnetic anisotropy quenching upon ultrafast laser illumination.
This thermal mechanism, first described in Ref.[33] for an easy-plane magnetic anisotropy system and then extended to different systems [34, 35], requires an external magnetic field with strength comparable to the sample magnetic anisotropy field. The magnon modes excited with this mechanism rely only on the magnetostatic properties of the NWs (magnetization and shape anisotropy) and can be observed via complementary dynamic techniques like BLS, as well as simulated with micromagnetic models. In Fig.1(d) we report a wide-scan map of tr-MOKE results, for $B_{\text{ext}}$ swept from +90 to -90 mT in approximately 1.5 mT steps. For every magnetic field, we record the tr-MOKE signal up to a delay of 3.3 ns. An exponential plus linear background is subtracted from each trace. The map in Fig.1(e) is obtained by performing an FFT of each trace. Here, three features can be identified: two flat modes with different intensities, featuring no frequency dispersion with $B_{\text{ext}}$, and a third dispersive mode. We label the former two as magneto-elastic-coupling modes (MEC1 and MEC2) and the latter as a pure magnonic mode (PM). We now briefly discuss their excitation mechanism. The pump photons are absorbed by the metallic NWs, resulting in thermo-elastic expansion (the direct gap of Si is well above the pump photon energy). The periodic strain field stabilizes a standing surface acoustic wave with the wavelength matching the NW array periodicity ($D$). Pump-induced heating also generates localized breathing modes within each NW. These acoustic modes drive the NW magnetization, producing the time-modulated magnetic contrast we observe in tr-MOKE (flat modes in Fig.1(e)). Figure 2: (a) Time-domain trace (black circles) and fit (red line) of time-resolved reflectivity at $B_{\text{ext}}=0$ mT. The residuals (grey line) are rigidly shifted for clarity. Inset: FFT of the original trace (black line) and of the residuals (grey line). (b) The frequency of the PM mode as extracted from a time-domain fit of tr-MOKE traces (blue circles), together with the frequency of the lowest-order magnon mode as obtained from BLS measurements (orange circles) and micromagnetic simulations (black line). The azimuth is $\varphi=60^{\circ}$. The three vertical shaded bars indicate regions in which the extraction of frequency from tr-MOKE data is not possible. The attribution of such flat modes to magneto-elastic modes in similar systems is consistent with the literature [34, 32]. To further confirm the assignment, we perform time-resolved reflectivity measurements (tr-R) under the same experimental conditions as in the tr-MOKE measurements, while detecting the non-rotated component of the probe polarization. The time-domain trace after background removal is reported in Fig.2(a) (black circles), for $B_{\text{ext}}=0$ mT. The fit of the function $y=\sum_{i}A_{i}\sin{(\omega_{i}t+\phi_{i})}e^{-\gamma_{i}t}$ (1) is shown in red ($i=1,2$ denotes the lower- and upper-frequency phononic mode, respectively). The parameters $A_{i},~{}\omega_{i},~{}\phi_{i},~{}\gamma_{i}$ are the amplitude, angular frequency, phase and damping parameter for the $i$-th mode. The featureless residuals of the fitting are shown in grey and rigidly shifted for clarity.
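A minimal sketch of how the two-mode fit of Eq. (1) can be implemented with scipy (our illustration on a synthetic trace: the time grid and noise level are assumptions, and the initial guesses are taken from the frequencies and dampings quoted in the text):

```python
# Sketch: least-squares fit of Eq. (1) with i = 1, 2 on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def two_modes(t, A1, f1, p1, g1, A2, f2, p2, g2):
    """Eq. (1); t in ns, f in GHz (omega = 2*pi*f), gamma in rad/ns."""
    return (A1*np.sin(2*np.pi*f1*t + p1)*np.exp(-g1*t)
            + A2*np.sin(2*np.pi*f2*t + p2)*np.exp(-g2*t))

t = np.linspace(0.0, 3.3, 1100)                      # ns, as in the scans
rng = np.random.default_rng(1)
data = (two_modes(t, 1.0, 8.89, 0.3, 0.73, 0.5, 10.75, 1.2, 1.4)
        + 0.02*rng.normal(size=t.size))              # synthetic trace

p0 = [1, 8.9, 0, 1, 0.5, 10.8, 0, 1]                 # initial guesses
popt, pcov = curve_fit(two_modes, t, data, p0=p0)
perr = np.sqrt(np.diag(pcov))
print("f1 = %.2f(%.2f) GHz, gamma1 = %.2f(%.2f) rad/ns"
      % (popt[1], perr[1], popt[3], perr[3]))
```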
The frequencies ($f_{i}=\omega_{i}/2\pi$) and dampings obtained as best-fit parameters are $f_{\text{MEC1}}=(8.89\pm 0.05)$ GHz and $f_{\text{MEC2}}=(10.75\pm 0.05)$ GHz, and $\gamma_{\text{MEC1}}=(0.73\pm 0.09)$ rad/ns and $\gamma_{\text{MEC2}}=(1.4\pm 0.1)$ rad/ns, in agreement with the parameters for the flat modes in tr-MOKE (see below). The lower-frequency mode is assigned to the Rayleigh wave of the substrate covered with the NW array [34], and the higher-frequency mode to a localized (width) breathing mode of each NW. Let us now focus on the PM mode. The leading magnetic anisotropy in the NWs is shape anisotropy, which, for thin NWs, favors in-plane magnetization along the NW longitudinal axis, which is thus the magnetic easy axis (EA). If $B_{\text{ext}}$ is not aligned with the EA and comparable in strength to the magnetic anisotropy field, the pump absorption in NWs results in ultrafast quenching of the magnetization and concurrently in impulsive softening of the magnetic anisotropy, triggering magnetization precession; dynamics in the GHz range reflects the equilibrium eigenstates of the system [31, 33]. Furthermore, the same triggered dynamics in each NW result in synchronized precession, generating a measurable MOKE signal, i.e., a zero-wavevector magnonic mode. Note that the PM mode’s spectral weight diminishes to zero whenever the magnetization’s equilibrium axis aligns with the EA: i) as $B_{\text{ext}}$ weakens (see Fig.1(e)), or ii) at $\varphi=0^{\circ}$ for any $B_{\text{ext}}$ value [31]. To confirm the PM mode assignment as the lowest-order magnetization precession, we compare tr-MOKE with BLS and micromagnetic simulations (see [31] for details). In Fig.2(b), we present the PM mode frequency from tr-MOKE fits (blue circles) alongside the lowest-order magnonic frequency’s field dependence from BLS (orange circles) and micromagnetic simulations (black line). The datasets and simulations show good agreement, especially in the concavity change at positive field and in the value of the switching field. The grey shaded bars in Fig.2(b) highlight regions in which extraction of tr-MOKE values is not possible, either because of the absence of the PM mode or because of the mixing with the MEC modes. The systematic redshift of the tr-MOKE results is compatible with a few degrees of experimental mismatch in the azimuth angle $\varphi$. We now focus on the region between 50 and 80 mT, where the crossing of the PM and MEC1 modes (see Fig.1(e)) suggests the presence of coupling. To gain a deeper insight, we employ a Hamiltonian $\mathcal{H}$, modeling both coherent and dissipative coupling [16, 36]: $\mathcal{H}/\hbar=\tilde{\omega}_{A}a^{\dagger}a+\tilde{\omega}_{B}b^{\dagger}b+g\left(a^{\dagger}b+e^{i\Phi}ab^{\dagger}\right)~{}.$ (2) Here $a^{\dagger}$ and $b^{\dagger}$ ($a$ and $b$) are the creation (annihilation) operators for mode $A$ and $B$, respectively (in our experiment the modes are the phonon and the magnon); $\tilde{\omega}_{i}=\omega_{i}-i\gamma_{i}$ is the generalized angular frequency of uncoupled mode $i=(A,B)$, encompassing both the angular frequency $\omega_{i}$ and the intrinsic damping $\gamma_{i}$; $g$ is the strength of the coupling, whose nature depends on the value of the phase $\Phi$.
The coupled eigenvalues of $\mathcal{H}$ are $\tilde{\omega}_{\pm}=\left(\frac{\tilde{\omega}_{A}+\tilde{\omega}_{B}}{2}\right)\pm\sqrt{\left(\frac{\tilde{\omega}_{A}-\tilde{\omega}_{B}}{2}\right)^{2}+g^{2}e^{i\Phi}}~{},$ (3) where again the real part of $\tilde{\omega}_{\pm}$ gives the angular frequency and minus the imaginary part gives the damping parameter. In Fig.3 the coupled (red and blue lines) and uncoupled (black dashed lines) eigenvalues are plotted as a function of the uncoupled frequency detuning. At $\Phi=0$ (pure coherent coupling, panels (a) and (e)), frequency gapping at zero detuning results; this is often understood as the smoking gun for hybridization. The dampings, on the other hand, attract each other and are equal at zero detuning. At $\Phi=\pi$ (pure dissipative coupling, panels (c) and (g)) the deviations from the uncoupled values are opposite: frequency attraction extends the degeneracy to a finite region across zero detuning, while the dampings repel each other. The eigenvalue symmetry for $\Phi=0$ and $\pi$ is lost if both couplings are at play, i.e. for mixed coupling, as shown for $\Phi=\pi/2$ (panels (b) and (f)) and $\Phi=3\pi/2$ (panels (d) and (h)). The striking feature in mixed coupling is that the condition for degenerate dampings is shifted from zero detuning. We propose this shift as a phenomenological identifier of mixed coupling systems. Figure 3: Modification of the eigenvalues of $\mathcal{H}$ upon changing the coupling phase $\Phi$. (a-d) Frequency dispersions (real part). (e-h) Damping parameters (imaginary part). All plots are shown as a function of the uncoupled frequency detuning. The upper ($\tilde{\omega}_{+}$) and the lower branch ($\tilde{\omega}_{-}$) results are shown as red and blue lines, respectively. The uncoupled values for frequency and damping are shown in each plot as black dashed lines. For comparison to the experimental results shown later, in the calculations we assumed $\gamma_{A}=1.2$ rad/ns and $\gamma_{B}=0.8$ rad/ns; the coupling strength was kept at $g=0.5$ rad/ns. Figure 4: Analysis of frequency (a) and damping (b) as extracted from the time-domain fit of tr-MOKE traces acquired in a close-up of the PM-MEC1 crossing region. The experimental data for the hybridizing modes show deviations from the uncoupled modes, mixing phonon character (field-independent frequency, green circles) and magnon character (field-dependent frequency, yellow circles). The fit results for the upper hybridized branch (red line) and for the lower hybridized branch (blue line) are also shown, together with the values for the uncoupled modes (black dashed lines). The arrows highlight the mismatch between the field values for zero-detuning (upper panel) and for damping crossing (lower panel). The fit results for the non-hybridizing MEC2 mode are also reported (orange circles). The proposed Hamiltonian model requires a time-domain fit of the traces, acquired at finely sampled intervals (approximately 0.3 mT) of $B_{\text{ext}}$ in a close-up of the PM-MEC1 crossing region. In Fig.4 we report, as color-coded circles with error bars, the frequency (panel (a)) and damping (panel (b)) obtained as best-fit parameters. Note that the time-domain analysis reveals frequency gapping of the modes, a feature not visible in the FFT map (see also [31]). As highlighted by the black arrows, there is a mismatch between the damping crossing and the condition of zero detuning.
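The trends of Fig. 3, including the shift of the damping crossing away from zero detuning for mixed coupling, can be reproduced numerically from Eq. (3). In the sketch below (ours), the damping and coupling parameters follow the caption of Fig. 3, while the center frequency is an arbitrary placeholder:

```python
# Numerical sketch of the coupled eigenvalues, Eq. (3), for several
# coupling phases Phi; parameters as in the caption of Fig. 3.
import numpy as np

gA, gB, g = 1.2, 0.8, 0.5           # rad/ns
w0 = 2*np.pi*8.9                    # placeholder center frequency, rad/ns
detuning = np.linspace(-3, 3, 601)  # rad/ns

for Phi in (0.0, np.pi/2, np.pi, 3*np.pi/2):
    wA = (w0 + detuning/2) - 1j*gA  # generalized frequencies
    wB = (w0 - detuning/2) - 1j*gB
    root = np.sqrt(((wA - wB)/2)**2 + g**2*np.exp(1j*Phi))
    w_plus, w_minus = (wA + wB)/2 + root, (wA + wB)/2 - root
    gap = np.min(np.abs(w_plus.real - w_minus.real))
    # detuning at which the two dampings (minus imaginary parts) cross
    cross = detuning[np.argmin(np.abs(w_plus.imag - w_minus.imag))]
    print(f"Phi={Phi:.2f}: min frequency gap={gap:.2f} rad/ns, "
          f"damping crossing at detuning={cross:+.2f} rad/ns")
```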
We fit to the data in Fig.4 the eigenvalues of $\mathcal{H}$ (Eq.3): the best-fit curves are reported for the upper branch (red line) and for the lower branch (blue line). The PM mode is assumed to be linear in $B_{\text{ext}}$, a reasonable approximation of the actual non-linear dispersion, given the reduced detuning window. The result of the fitting gives $g=\left(0.55\pm 0.03\right)$ rad/ns and $\Phi=1.3\pm 0.1$. These values place the investigated system in the weak-coupling regime, since the strength $g$ is lower than the intrinsic damping of both hybridizing modes; as $\Phi$ is close to $\pi/2$, both coupling mechanisms are at play with comparable strength. To summarize, we derived from tr-MOKE experimental data the coupling of phononic and magnonic modes in a 1D magnonic-phononic crystal. The time-domain data give evidence of a mixed phonon-magnon coupling mechanism at play. By comparing the experimental data with the eigenvalues of a comprehensive Hamiltonian model we can derive the coupling strength and the competition between coherent and dissipative coupling. Such quantitative information is crucial for identifying the experimental parameters that continuously tune the coupling in a magneto-mechanical system, transitioning from purely coherent to purely dissipative coupling. Our results hint at the possibility of novel magnonic-phononic devices, in analogy to what was shown by cavity magnonics (magnon-photon hybridization) [16, 36]. Finally, we demonstrated time-domain spectroscopy as a key tool to perform a detailed investigation of weakly coupled hybrid systems, allowing for reliable quantitative analysis of the coupling even when the intrinsic damping dominates over the coupling strength, a regime challenging for frequency-domain techniques. P.C. and M.B. thank Vincent Polewczyk and Giacomo Jark for valuable discussions. P.C. is also grateful to Riccardo Panza and Alberto Scazzola for valuable discussions. This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 101007417. Research at IOM-CNR has been funded by the European Union - Next Generation EU under the Italian Ministry of University and Research (MUR) National Innovation Ecosystem grant ECS00000041 - VITALITY-CUP B43C22000470005. G.G. and G.P. acknowledge Università degli Studi di Perugia, CNR and MUR for support within the project Vitality. A.O.A. and G.G. acknowledge the funding from the Royal Society through the Wolfson Fellowship and International Exchanges IEC\R2\222074. ## References * Lachance-Quirion _et al._ [2019] D. Lachance-Quirion, Y. Tabuchi, A. Gloppe, K. Usami, and Y. Nakamura, Hybrid quantum systems based on magnonics, Applied Physics Express 12, 070101 (2019). * Zhang _et al._ [2014] X. Zhang, C.-L. Zou, L. Jiang, and H. X. Tang, Strongly coupled magnons and cavity microwave photons, Physical review letters 113, 156401 (2014). * Liu _et al._ [2016] T. Liu, X. Zhang, H. X. Tang, and M. E. Flatté, Optomagnonics in magnetic solids, Physical Review B 94, 060405 (2016). * Zhang _et al._ [2016] X. Zhang, C.-L. Zou, L. Jiang, and H. X. Tang, Cavity magnomechanics, Science advances 2, e1501286 (2016). * Berk _et al._ [2019] C. Berk, M. Jaris, W. Yang, S. Dhuey, S. Cabrini, and H. Schmidt, Strongly coupled magnon–phonon dynamics in a single nanomagnet, Nature communications 10, 2652 (2019). * Li _et al._ [2021] Y. Li, C. Zhao, W. Zhang, A. Hoffmann, and V.
Novosad, Advances in coherent coupling between magnons and acoustic phonons, APL Materials 9 (2021). * Carrara _et al._ [2022] P. Carrara, M. Brioschi, E. Longo, D. Dagur, V. Polewczyk, G. Vinai, R. Mantovan, M. Fanciulli, G. Rossi, G. Panaccione, _et al._ , All-optical generation and time-resolved polarimetry of magnetoacoustic resonances via transient grating spectroscopy, Physical Review Applied 18, 044009 (2022). * Lachance-Quirion _et al._ [2017] D. Lachance-Quirion, Y. Tabuchi, S. Ishino, A. Noguchi, T. Ishikawa, R. Yamazaki, and Y. Nakamura, Resolving quanta of collective spin excitations in a millimeter-sized ferromagnet, Science Advances 3, e1603150 (2017). * Clerk _et al._ [2020] A. Clerk, K. Lehnert, P. Bertet, J. Petta, and Y. Nakamura, Hybrid quantum systems with circuit quantum electrodynamics, Nature Physics 16, 257 (2020). * Verhagen _et al._ [2012] E. Verhagen, S. Deléglise, S. Weis, A. Schliesser, and T. J. Kippenberg, Quantum-coherent coupling of a mechanical oscillator to an optical cavity mode, Nature 482, 63 (2012). * Nair _et al._ [2021] J. M. Nair, D. Mukhopadhyay, and G. Agarwal, Enhanced sensing of weak anharmonicities through coherences in dissipatively coupled anti-$\mathcal{PT}$ symmetric systems, Physical Review Letters 126, 180401 (2021). * Tabuchi _et al._ [2014] Y. Tabuchi, S. Ishino, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Hybridizing ferromagnetic magnons and microwave photons in the quantum limit, Physical review letters 113, 083603 (2014). * Chumak _et al._ [2022] A. V. Chumak, P. Kabos, M. Wu, C. Abert, C. Adelmann, A. Adeyeye, J. Åkerman, F. G. Aliev, A. Anane, A. Awad, _et al._ , Advances in magnetics roadmap on spin-wave computing, IEEE Transactions on Magnetics 58, 1 (2022). * Harder _et al._ [2021] M. Harder, B. Yao, Y. Gui, and C.-M. Hu, Coherent and dissipative cavity magnonics, Journal of Applied Physics 129 (2021). * Hioki _et al._ [2022] T. Hioki, Y. Hashimoto, and E. Saitoh, Coherent oscillation between phonons and magnons, Communications Physics 5, 115 (2022). * Harder _et al._ [2018] M. Harder, Y. Yang, B. Yao, C. Yu, J. Rao, Y. Gui, R. Stamps, and C.-M. Hu, Level attraction due to dissipative magnon-photon coupling, Physical review letters 121, 137203 (2018). * Yang _et al._ [2020] Y. Yang, Y.-P. Wang, J. Rao, Y. Gui, B. Yao, W. Lu, and C.-M. Hu, Unconventional singularity in anti-parity-time symmetric cavity magnonics, Physical Review Letters 125, 147202 (2020). * Li _et al._ [2023] A. Li, H. Wei, M. Cotrufo, W. Chen, S. Mann, X. Ni, B. Xu, J. Chen, J. Wang, S. Fan, _et al._ , Exceptional points and non-Hermitian photonics at the nanoscale, Nature Nanotechnology , 1 (2023). * Wang _et al._ [2019] Y.-P. Wang, J. Rao, Y. Yang, P.-C. Xu, Y. Gui, B. Yao, J. You, and C.-M. Hu, Nonreciprocity and unidirectional invisibility in cavity magnonics, Physical review letters 123, 127202 (2019). * Wang _et al._ [2014] W. Wang, P. Vasa, R. Pomraenke, R. Vogelgesang, A. De Sio, E. Sommer, M. Maiuri, C. Manzoni, G. Cerullo, and C. Lienau, Interplay between strong coupling and radiative damping of excitons and surface plasmon polaritons in hybrid nanostructures, Acs Nano 8, 1056 (2014). * Zhang _et al._ [2018] S. Zhang, H. Zhang, T. Xu, W. Wang, Y. Zhu, D. Li, Z. Zhang, J. Yi, and W. Wang, Coherent and incoherent damping pathways mediated by strong coupling of two-dimensional atomic crystals with metallic nanogrooves, Physical Review B 97, 235401 (2018). * Adeyeye and Singh [2008] A. Adeyeye and N. 
Singh, Large area patterned magnetic nanostructures, Journal of Physics D: Applied Physics 41, 153001 (2008). * Gubbiotti _et al._ [2016] G. Gubbiotti, S. Tacchi, M. Madami, G. Carlotti, Z. Yang, J. Ding, A. Adeyeye, and M. Kostylev, Collective spin excitations in bicomponent magnonic crystals consisting of bilayer permalloy/Fe nanowires, Physical Review B 93, 184411 (2016). * Kostylev _et al._ [2016] M. Kostylev, Z. Yang, I. Maksymov, J. Ding, S. Samarin, and A. Adeyeye, Microwave magnetic dynamics in ferromagnetic metallic nanostructures lacking inversion symmetry, Journal of Applied Physics 119 (2016). * Silvani _et al._ [2018] R. Silvani, M. Kostylev, A. O. Adeyeye, and G. Gubbiotti, Spin wave filtering and guiding in Permalloy/iron nanowires, Journal of Magnetism and Magnetic Materials 450, 51 (2018). * Demand _et al._ [2002] M. Demand, A. Encinas-Oropesa, S. Kenane, U. Ebels, I. Huynen, and L. Piraux, Ferromagnetic resonance studies of nickel and permalloy nanowire arrays, Journal of Magnetism and Magnetic Materials 249, 228 (2002), international Workshop on Magnetic Wires. * Maznev _et al._ [2011] A. Maznev, O. Wright, and O. Matsuda, Mapping the band structure of a surface phononic crystal, New Journal of Physics 13, 013037 (2011). * Pan _et al._ [2013] H. Pan, V. L. Zhang, K. Di, M. H. Kuok, H. S. Lim, S. C. Ng, N. Singh, and A. O. Adeyeye, Phononic and magnonic dispersions of surface waves on a permalloy/BARC nanostructured array, Nanoscale research letters 8, 1 (2013). * Ma [2023] J. Ma, Phonon Engineering of Micro-and Nanophononic Crystals and Acoustic Metamaterials: A Review, Small Science 3, 2200052 (2023). * Brioschi _et al._ [2023] M. Brioschi, P. Carrara, V. Polewczyk, D. Dagur, G. Vinai, P. Parisse, S. Dal Zilio, G. Panaccione, G. Rossi, and R. Cucini, Multidetection scheme for transient-grating-based spectroscopy, Optics Letters 48, 167 (2023). * [31] see Supplemental Material at [url] for experimental details, micromagnetic simulations, pump fluence dependence and selection rules for excitation of the PM mode and time-domain fit. * Godejohann _et al._ [2020] F. Godejohann, A. V. Scherbakov, S. M. Kukhtaruk, A. N. Poddubny, D. D. Yaremkevich, M. Wang, A. Nadzeyka, D. R. Yakovlev, A. W. Rushforth, A. V. Akimov, _et al._ , Magnon polaron formed by selectively coupled coherent magnon and phonon modes of a surface patterned ferromagnet, Physical Review B 102, 144438 (2020). * Van Kampen _et al._ [2002] M. Van Kampen, C. Jozsa, J. Kohlhepp, P. LeClair, L. Lagae, W. De Jonge, and B. Koopmans, All-optical probe of coherent spin waves, Physical review letters 88, 227201 (2002). * Chang _et al._ [2018] C. L. Chang, R. R. Tamming, T. J. Broomhall, J. Janusonis, P. W. Fry, R. I. Tobey, and T. J. Hayward, Selective excitation of localized spin-wave modes by optically pumped surface acoustic waves, Physical Review Applied 10, 034068 (2018). * Blank _et al._ [2022] T. G. Blank, S. Hermanussen, T. Lichtenberg, T. Rasing, A. Kirilyuk, B. Koopmans, and A. V. Kimel, Laser-Induced Transient Anisotropy and Large Amplitude Magnetization Dynamics in a Gd/FeCo Multilayer, Advanced Materials Interfaces 9, 2201283 (2022). * Xu _et al._ [2019] P.-C. Xu, J. Rao, Y. Gui, X. Jin, and C.-M. Hu, Cavity-mediated dissipative coupling of distant magnetic moments: Theory and experiment, Physical Review B 100, 094415 (2019).
# MS-TP-21-12 Yukawa coupling unification in an $\mathsf{SO(10)}$ model consistent with Fermilab $(g-2)_{\mu}$ result

Amin Aboubrahim${}^{a}$ <EMAIL_ADDRESS>, Pran Nath${}^{b}$ <EMAIL_ADDRESS> and Raza M. Syed${}^{c}$ <EMAIL_ADDRESS>

${}^{a}$Institut für Theoretische Physik, Westfälische Wilhelms-Universität Münster, Wilhelm-Klemm-Straße 9, 48149 Münster, Germany ${}^{b}$Department of Physics, Northeastern University, Boston, MA 02115-5000, USA ${}^{c}$Department of Physics, American University of Sharjah, P.O. Box 26666, Sharjah, UAE (permanent address)

###### Abstract

We investigate Yukawa coupling unification for the third generation in a class of $\mathsf{SO(10)}$ unified models which are consistent with the $4.2\sigma$ deviation from the standard model of the muon $g-2$ seen by the Fermilab experiment E989. A recent analysis in supergravity grand unified models shows that such an effect can arise from supersymmetric loop corrections. Using a neural network, we further analyze regions of the parameter space where Yukawa coupling unification consistent with the Fermilab result can appear. In the analysis we take into account the contributions to the Yukawas from both the cubic and the quartic interactions. We test the model at the high-luminosity and high-energy LHC and estimate the integrated luminosities needed to discover the sparticles predicted by the model.

###### Contents

1 Introduction; 2 The model; 3 $\mathsf{SO(10)}$ SUGRA model with Yukawa unification consistent with Fermilab $(g-2)_{\mu}$; 4 Sparticle hierarchies and signal region analysis (4.1 Slepton pair production and event simulation at the LHC; 4.2 Event selection; 4.3 Results); 5 Conclusion; 6 Appendix: Contributions to Yukawas from higher dimensional operators

## 1 Introduction

Recently the Fermilab E989 experiment [1] has measured $a_{\mu}=(g-2)_{\mu}/2$ with significantly greater accuracy than the previous Brookhaven experiment [2, 3]. The combined Fermilab and Brookhaven experimental data give $a^{\rm exp}_{\mu}=116592061(41)\times 10^{-11}\,,$ (1.1) which is to be compared with the Standard Model (SM) prediction [4] $a^{\rm SM}_{\mu}=116591810(43)\times 10^{-11}.$ (1.2) The combined Fermilab and Brookhaven result shows an excess over the SM result by an amount $\Delta a^{\rm FB}_{\mu}$ given by $\Delta a^{\rm FB}_{\mu}=a_{\mu}^{\rm exp}-a_{\mu}^{\rm SM}=251(59)\times 10^{-11}.$ (1.3) Eq. (1.3) records a $4.2\sigma$ deviation from the SM, compared to $3.7\sigma$ for the Brookhaven result alone. Thus the Fermilab experiment further strengthens the Brookhaven evidence for the possible existence of new physics beyond the Standard Model.

Subsequent to the Fermilab result, an artificial neural network analysis was used to explore the parameter space of supergravity (SUGRA) unified models. It was seen that the regions of the parameter space where supersymmetric loops can give the desired correction consistent with the Fermilab result are those where gluino-driven radiative breaking of the electroweak symmetry occurs [5], a region referred to as $\tilde{g}$SUGRA [6, 7, 8]. Using a neural network, we investigate this region further to explore where Yukawa unification in an $\mathsf{SO(10)}$ model [8] can occur consistent with the Fermilab result.

The outline of the rest of the paper is as follows: In section 2 details of the $\mathsf{SO(10)}$ model are discussed. In section 3 an analysis of the parameter space of the SUGRA $\mathsf{SO(10)}$ model which gives Yukawa coupling unification consistent with the Fermilab $g-2$ result is given.
Here the light and the heavy sparticle spectra are also computed. In section 4, simulations for the observation of the sparticles predicted by the model at the HL-LHC and HE-LHC are given. Conclusions are given in section 5. Some further details of the model are given in the Appendix.

## 2 The model

The general class of $\mathsf{SO(10)}$ models we consider is that of [9, 10], with [8] being one of them; these models are similar in spirit to the missing partner $\mathsf{SU(5)}$ models [11, 12]. They involve large Higgs representations such as $\mathsf{126+\overline{126}}$, $\mathsf{210}$, $\mathsf{120}$ for the Yukawa couplings. Large Higgs representations have been used in several early works [13, 14, 15] and also more recently, e.g., [16, 17, 18, 19, 20, 21, 22] and the references therein (for a review of $\mathsf{SO(10)}$ models, see Ref. [23]). In the model we consider [10, 8], the missing partner mechanism comes about as follows: The Higgs sector consists of the $\mathsf{126+\overline{126}}$, $\mathsf{210}$, $2\times\mathsf{10+120}$ set of representations. The fields $\mathsf{126+\overline{126}}$ and $\mathsf{210}$ are heavy and break the GUT symmetry down to the SM gauge group, while the $2\times\mathsf{10+120}$ Higgs fields are light. The heavy fields contain three pairs of heavy Higgs doublets while the light fields contain four pairs of light Higgs doublets. When the light and heavy fields mix, three pairs of the light Higgs doublets become heavy while one combination of the light doublets remains light and is identified as the Higgs field of the MSSM. We give below further details of the model used in this analysis.

The superpotential of the $\mathsf{SO(10)}$ model is given by [8] $\displaystyle W=W_{\rm GUT}+W_{\rm DT}+W_{\rm Yuk},$ (2.1) where $\displaystyle W_{\textsc{gut}}=$ $\displaystyle~{}M^{126}\Delta_{\mu\nu\rho\sigma\lambda}\overline{\Delta}_{\mu\nu\rho\sigma\lambda}+M^{210}\Phi_{\mu\nu\rho\sigma}\Phi_{\mu\nu\rho\sigma}+\eta\Phi_{\mu\nu\rho\sigma}\Delta_{\mu\nu\lambda\tau\xi}\overline{\Delta}_{\rho\sigma\lambda\tau\xi}$ $\displaystyle+\lambda\Phi_{\mu\nu\rho\sigma}\Phi_{\rho\sigma\lambda\tau}\Phi_{\lambda\tau\mu\nu}\,,$ (2.2) $\displaystyle W_{\textsc{dt}}=~{}$ $\displaystyle a~{}{{}^{1}}\Omega_{\mu}\overline{\Delta}_{\mu\nu\rho\sigma\lambda}\Phi_{\nu\rho\sigma\lambda}+\sum_{r=1}^{2}b_{r}~{}{{}^{r}}\Omega_{\mu}{\Delta}_{\mu\nu\rho\sigma\lambda}\Phi_{\nu\rho\sigma\lambda}+c~{}\Sigma_{\mu\nu\rho}\Delta_{\nu\rho\sigma\lambda\tau}\Phi_{\mu\sigma\lambda\tau}$ $\displaystyle+\overline{c}~{}\Sigma_{\mu\nu\rho}\overline{\Delta}_{\nu\rho\sigma\lambda\tau}\Phi_{\mu\sigma\lambda\tau}\,.$ (2.3)

The notation used above is as follows: $\Delta_{\mu\nu\rho\sigma\lambda}$ and $\overline{\Delta}_{\mu\nu\rho\sigma\lambda}$ are the fields for the $\mathsf{126}$ and $\mathsf{\overline{126}}$ representations, $\Phi_{\mu\nu\rho\sigma}$ is the field for the $\mathsf{210}$ representation, ${{}^{r}}\Omega_{\mu}\,(r=1,2)$ are the fields for the two $\mathsf{10}$-plet Higgs representations and $\Sigma_{\mu\nu\rho}$ is the field for the $\mathsf{120}$-plet representation. In the above, $W_{\rm GUT}$ breaks the $\mathsf{SO(10)}$ GUT symmetry down to the standard model gauge group $\mathsf{SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}}$ through the formation of the VEVs $\mathcal{V}_{1_{126}}$ and $\mathcal{V}_{1_{\overline{126}}}$ and the VEVs $\mathcal{V}_{1_{210}}$, $\mathcal{V}_{24_{210}}$, $\mathcal{V}_{75_{210}}$. The equations that determine these VEVs are derived in [8].
Thus the $\mathsf{126+\overline{126}}$-plet VEVs $\mathcal{V}_{1_{126}}$ and $\mathcal{V}_{1_{\overline{126}}}$ break the $\mathsf{SO(10)}$ symmetry down to $\mathsf{SU(5)\times U(1)}$ and the $\mathsf{210}$-plet VEVs $\mathcal{V}_{1_{210}}$, $\mathcal{V}_{24_{210}}$, $\mathcal{V}_{75_{210}}$ further break the gauge symmetry down to $\mathsf{SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}}$. The notation for the VEVs is self-explanatory. Thus, for example, $\mathcal{V}_{1_{126}}$ stands for the VEV of the $\mathsf{SU(5)}$ singlet in the $\mathsf{SU(5)\times U(1)}$ decomposition of $\mathsf{126}$ and $\mathcal{V}_{24_{210}}$ stands for the VEV of the $\mathsf{24}$-plet of the $\mathsf{SU(5)}$ field in the $\mathsf{SU(5)\times U(1)}$ decomposition of $\mathsf{210}$. The doublet-triplet splitting is generated by $W_{\rm DT}$, which contains the $2\times\mathsf{10+120}$-plets of light fields. Thus the heavy fields, the $\mathsf{126+\overline{126}}$-plet and the $\mathsf{210}$-plet, contain three heavy $\mathsf{SU(2)}$ Higgs doublet pairs while the light fields, the $2\times\mathsf{10+120}$-plets, contain four light Higgs doublet pairs. After mixing of the light and heavy fields, three light Higgs doublet pairs become heavy, leaving one pair massless, which we identify as the standard model Higgs doublet.

The Yukawa couplings arise from cubic and quartic interactions. They are given by $\displaystyle W_{\rm Yuk}=W_{3}+W_{4},$ (2.4) where $W_{3}=\sum_{r=1}^{2}f^{10_{r}}~{}\langle\Psi_{(+)}^{*}|B\Gamma_{\mu}|\Psi_{(+)}\rangle~{}{{}^{r}}{\Omega_{\mu}}\,.$ (2.5) Here $B$ and the $\Gamma$’s are the $\mathsf{SO(10)}$ charge conjugation and gamma matrices [16] and $W_{4}$ contains the higher dimensional interactions discussed below. The Yukawa couplings arising from Eq. (2.5) are given by $\displaystyle\mathcal{L}_{\textnormal{Yuk}}=+h^{0}_{\tau}~{}\epsilon^{ab}{\mathbf{H_{d}}}_{a}\mathbf{{L}}_{b}{\mathbf{E}}^{\mathtt{c}}-h^{0}_{b}~{}{\mathbf{H_{d}}}_{a}\mathbf{{Q}}^{a\alpha}{\mathbf{D}}_{\alpha}^{\mathtt{c}}-h^{0}_{t}~{}\epsilon_{ab}{\mathbf{H_{u}}}^{a}\mathbf{{Q}}^{b\alpha}{\mathbf{U}}_{\alpha}^{\mathtt{c}}+\textnormal{h.c.},$ (2.6) where $\displaystyle h^{0}_{\tau}$ $\displaystyle=i2\sqrt{2}\sum_{r=1}^{2}f^{10_{r}}V_{d_{r1}},~{}~{}h^{0}_{b}=-i2\sqrt{2}\sum_{r=1}^{2}f^{10_{r}}V_{d_{r1}},~{}~{}h^{0}_{t}=-i2\sqrt{2}\sum_{r=1}^{2}f^{10_{r}}U_{d_{r1}},$ (2.7) and where $U_{d_{r1}}$ and $V_{d_{r1}}$ are defined by Eq. (2.14) and evaluated numerically in Tables 2 and 3.
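Eq. (2.7) can be checked directly against the tabulated eigenvector entries. Below is a minimal numerical sketch (Python with numpy) that reproduces the cubic Yukawa magnitudes of Table 5 for benchmark (a) from $f^{10_{r}}$ of Table 4 and the $U_{d_{r1}}$, $V_{d_{r1}}$ entries of Tables 2 and 3; the small residual differences come from the rounding of the tabulated inputs.

```python
import numpy as np

# Benchmark (a): f^{10_r} from Table 4, eigenvector entries from Tables 2 and 3
f10  = np.array([0.17, 0.23])                        # (f^{10_1}, f^{10_2})
U_r1 = np.array([-0.034 + 0.051j, 0.298 + 0.285j])   # (U_{d_11}, U_{d_21})
V_r1 = np.array([-0.273 + 0.000j, 0.411 + 0.083j])   # (V_{d_11}, V_{d_21})

# Eq. (2.7): |h_t^0| = |2 sqrt(2) sum_r f^{10_r} U_{d_r1}|, and similarly
# |h_b^0| = |h_tau^0| since both involve the same V_{d_r1} sum
h_t0 = abs(-2j * np.sqrt(2.0) * np.dot(f10, U_r1))
h_b0 = abs(-2j * np.sqrt(2.0) * np.dot(f10, V_r1))

print(round(h_t0, 3), round(h_b0, 3))  # ~0.275, ~0.146 vs 0.274, 0.148 in Table 5
```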
In addition to the Yukawa couplings arising from $W_{3}$, contributions arise from the higher dimensional operators in $W_{4}$, where $\displaystyle W_{4}=W^{(1)}_{4}+W^{(2)}_{4}+W^{(3)}_{4},$ (2.8) and where $\displaystyle W^{(1)}_{4}$ $\displaystyle=$ $\displaystyle-\frac{f^{(1)}}{5!M_{c}}b_{r}\langle\Psi_{(+)}^{*}|B\Gamma_{[\lambda}\Gamma_{\mu}\Gamma_{\nu}\Gamma_{\rho}\Gamma_{\sigma]}|\Psi_{(+)}\rangle~{}\left[{{}^{r}}\Omega_{\lambda}\Phi_{\mu\nu\rho\sigma}-{{}^{r}}\Omega_{\mu}\Phi_{\lambda\nu\rho\sigma}+{{}^{r}}\Omega_{\nu}\Phi_{\lambda\mu\rho\sigma}\right.$ $\displaystyle\left.\qquad-{{}^{r}}\Omega_{\rho}\Phi_{\lambda\mu\nu\sigma}+{{}^{r}}\Omega_{\sigma}\Phi_{\lambda\mu\nu\rho}\right]\,,$ (2.9) $\displaystyle W_{4}^{(2)}$ $\displaystyle=$ $\displaystyle-\frac{f^{(2)}}{5!M_{c}}\langle\Psi_{(+)}^{*}|B\Gamma_{[\lambda}\Gamma_{\mu}\Gamma_{\nu}\Gamma_{\rho}\Gamma_{\sigma]}|\Psi_{(+)}\rangle~{}\left[\Sigma_{\lambda\alpha\beta}\Phi_{\gamma\rho\sigma\lambda}-\Sigma_{\lambda\alpha\gamma}\Phi_{\beta\rho\sigma\lambda}+\Sigma_{\lambda\alpha\rho}\Phi_{\beta\gamma\sigma\lambda}\right.$ $\displaystyle\left.\qquad-\Sigma_{\lambda\alpha\sigma}\Phi_{\beta\gamma\rho\lambda}-\Sigma_{\lambda\gamma\beta}\Phi_{\alpha\rho\sigma\lambda}+\Sigma_{\lambda\rho\beta}\Phi_{\alpha\gamma\sigma\lambda}\right.$ $\displaystyle\left.\qquad-\Sigma_{\lambda\sigma\beta}\Phi_{\alpha\gamma\rho\lambda}-\Sigma_{\lambda\gamma\rho}\Phi_{\beta\alpha\sigma\lambda}+\Sigma_{\lambda\gamma\sigma}\Phi_{\beta\alpha\rho\lambda}\right.$ $\displaystyle\left.\qquad-\Sigma_{\lambda\rho\sigma}\Phi_{\beta\alpha\gamma\lambda}\right]\,,$ (2.10) $\displaystyle W_{4}^{(3)}$ $\displaystyle=$ $\displaystyle\frac{f^{(3)}}{M_{c}}\langle\Psi_{(+)}^{*}|B\Gamma_{\mu}|\Psi_{(+)}\rangle\Sigma_{\rho\sigma\lambda}\Phi_{\rho\sigma\lambda\mu}\,.$ (2.11)

Thus $W_{4}$ gives additional contributions to the Yukawa couplings of the third generation, which we denote by $\delta h_{t},~{}\delta h_{b},~{}\delta h_{\tau}$ and which are evaluated in the Appendix. The total Yukawa couplings arising from Eq. (2.4) are then given by $\displaystyle h_{t}=h^{0}_{t}+\delta h_{t},~{}~{}h_{b}=h^{0}_{b}+\delta h_{b},~{}~{}h_{\tau}=h^{0}_{\tau}+\delta h_{\tau}\,,$ (2.12) where $h_{b},~{}h_{t},~{}h_{\tau}$ act as boundary conditions on the Yukawas of $b,t,\tau$, which are evolved down to the electroweak scale $Q$ where they are related to the $b,t,\tau$ masses through $\displaystyle m_{t}(Q)=\frac{h_{t}(Q)v\sin\beta}{\sqrt{2}},~{}~{}m_{b}(Q)=\frac{h_{b}(Q)v\cos\beta}{\sqrt{2}},~{}~{}m_{\tau}(Q)=\frac{h_{\tau}(Q)v\cos\beta}{\sqrt{2}}.$ (2.13) Here we used the relations $\langle H_{d}\rangle=\frac{v}{\sqrt{2}}\cos\beta$ and $\langle H_{u}\rangle=\frac{v}{\sqrt{2}}\sin\beta$, where $v=246$ GeV. As noted above, there are seven Higgs doublet pairs, three of which are heavy and four of which are light; after the mixing of the light and heavy fields, three pairs of light Higgs doublets become heavy and one pair remains light.
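Eq. (2.13) is simple enough to use in either direction. The following minimal sketch (Python, with $v=246$ GeV from the text) converts running Yukawas at the scale $Q$ into masses, and inverts the top relation as a quick consistency check:

```python
import numpy as np

V_EW = 246.0  # GeV, electroweak VEV quoted in the text

def masses_from_yukawas(h_t, h_b, h_tau, tan_beta):
    # Eq. (2.13): tree-level mass/Yukawa relations at the scale Q
    beta = np.arctan(tan_beta)
    m_t   = h_t   * V_EW * np.sin(beta) / np.sqrt(2.0)
    m_b   = h_b   * V_EW * np.cos(beta) / np.sqrt(2.0)
    m_tau = h_tau * V_EW * np.cos(beta) / np.sqrt(2.0)
    return m_t, m_b, m_tau

# Inverting the top relation: the h_t(Q) needed for m_t ~ 172 GeV at tan(beta) = 14
tan_beta = 14.0
h_t_needed = 172.0 * np.sqrt(2.0) / (V_EW * np.sin(np.arctan(tan_beta)))
print(round(h_t_needed, 2))  # ~0.99
```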
To extract the light Higgs doublets we need to diagonalize the $7\times 7$ Higgs doublet mass matrix given in [8]. The Higgs doublet mass matrix is not symmetric and is diagonalized by two unitary matrices $U_{d}$ and $V_{d}$. Thus the down Higgs and the up Higgs doublet mass matrices are diagonalized by the transformation $\displaystyle{\cal H}_{d}=V_{d}{\cal H}_{d}^{\prime},~{}~{}{\cal H}_{u}=U_{d}{\cal H}_{u}^{\prime}\,,$ (2.14) where $\displaystyle{\cal H}_{d}^{T}$ $\displaystyle=$ $\displaystyle({}^{(\overline{5}_{10_{1}})}\!{\mathsf{D}}_{a},{}^{(\overline{5}_{10_{2}})}\!{\mathsf{D}}_{a},{}^{(\overline{5}_{120})}\!{\mathsf{D}}_{a},{}^{(\overline{5}_{{126}})}\!{\mathsf{D}}_{a},{}^{(\overline{5}_{{210}})}\!{\mathsf{D}}_{a},{}^{(\overline{45}_{120})}\!{\mathsf{D}}_{a},{}^{(\overline{45}_{\overline{126}})}\!{\mathsf{D}}_{a}),$ (2.15) $\displaystyle{\cal H}_{d}^{{}^{\prime}T}$ $\displaystyle=$ $\displaystyle({\mathbf{H_{d}}}_{a},{}^{2}\!{\mathsf{D}}_{a}^{\prime},{}^{3}\!{\mathsf{D}}_{a}^{\prime},{}^{4}\!{\mathsf{D}}_{a}^{\prime},{}^{5}\!{\mathsf{D}}_{a}^{\prime},{}^{6}\!{\mathsf{D}}_{a}^{\prime},{}^{7}\!{\mathsf{D}}_{a}^{\prime}),$ (2.16) $\displaystyle{\cal H}_{u}^{T}$ $\displaystyle=$ $\displaystyle({}^{(\overline{5}_{10_{1}})}\!{\mathsf{D}}^{a},{}^{(\overline{5}_{10_{2}})}\!{\mathsf{D}}^{a},{}^{(\overline{5}_{120})}\!{\mathsf{D}}^{a},{}^{(\overline{5}_{{126}})}\!{\mathsf{D}}^{a},{}^{(\overline{5}_{{210}})}\!{\mathsf{D}}^{a},{}^{(\overline{45}_{120})}\!{\mathsf{D}}^{a},{}^{(\overline{45}_{\overline{126}})}\!{\mathsf{D}}^{a}),$ (2.17) $\displaystyle{\cal H}_{u}^{{}^{\prime}T}$ $\displaystyle=$ $\displaystyle({\mathbf{H_{u}}}^{a},{}^{2}\!{\mathsf{D}}^{a\prime},{}^{3}\!{\mathsf{D}}^{a\prime},{}^{4}\!{\mathsf{D}}^{a\prime},{}^{5}\!{\mathsf{D}}^{a\prime},{}^{6}\!{\mathsf{D}}^{a\prime},{}^{7}\!{\mathsf{D}}^{a\prime}).$ (2.18)

In the above the notation is as follows: ${}^{(\overline{5}_{10_{1}})}$ stands for the down Higgs doublet in the $\overline{\mathsf{5}}$-plet of $\mathsf{SU(5)}$ within $\mathsf{10}_{1}$, which is one of the two $\mathsf{10}$-plets of light Higgs of $\mathsf{SO(10)}$. Further, the $\mathsf{D}$’s and the ${\mathsf{D}}^{\prime}$’s represent the normalized kinetic energy basis and the normalized kinetic and mass eigenbasis, respectively, of the Higgs doublet mass matrix. The pair of doublets $({\mathbf{H_{d}}}_{a},{\mathbf{H_{u}}}^{a})$ is identified as light and comprises the normalized electroweak Higgs doublets of the minimal supersymmetric standard model (MSSM). The matrix elements of $U_{d}$ and $V_{d}$ relevant in our analysis below are those that connect to the light doublets, i.e., $U_{d_{11}},U_{d_{21}},\cdots,U_{d_{71}}$ and $V_{d_{11}},V_{d_{21}},\cdots,V_{d_{71}}$. Other matrix elements of $U_{d}$ and $V_{d}$ do not contribute in the low energy theory. As noted above, the explicit form of the $7\times 7$ Higgs doublet mass matrix is given in [8]. The $U$ and the $V$ matrices are obtained by diagonalization of this matrix. Numerical values of the non-zero matrix elements of $U_{d}$ and $V_{d}$ relevant in the analysis are displayed in Tables 2 and 3 for the benchmarks of Table 1.

## 3 $\mathsf{SO(10)}$ SUGRA model with Yukawa unification consistent with Fermilab $(g-2)_{\mu}$

Since the muon $g-2$ is one of the most accurately determined quantities in physics, even a small deviation from the standard model prediction would be a significant indicator of new physics.
For example, it is known that supersymmetric loop corrections could be of the same size as the electroweak corrections in the SM [24, 25, 26, 27, 28, 29]. Indeed the Brookhaven result in 2001 [2] prompted several works pointing out the expected impact on physics at colliders and elsewhere [30, 31, 32, 33, 34, 35, 36, 37]. Thus the experiment became one of the important constraints on the parameter space of SUSY models. The discovery of the Higgs boson at 125 GeV further constrained the parameter space, implying that the weak SUSY scale could be large, lying in the TeV region [38, 39]. Since the Fermilab result points more strongly than the Brookhaven experiment to the existence of new physics, it is interesting to ask how $b-t-\tau$ unification is affected [1]. The early work of [40] pointed out that such a unification could occur in $\mathsf{SO(10)}$ with an appropriate choice of soft parameters. Such a unification has important effects on other phenomena such as dark matter (DM) [41]. Thus it is of interest to ask if $b-t-\tau$ unification can come about consistent with the Fermilab data. We investigate this question using a neural network, which is found to be useful in the analysis of large parameter spaces [42, 43]. The analysis is done within the framework of supergravity grand unified models [44] using non-universalities of gaugino masses [45, 46, 47, 48, 49].

The scan of the SUGRA parameter space is performed using an artificial neural network (ANN) implemented in xBIT [50]. The ANN has three layers with 25 neurons per layer. It constructs the likelihood of a point using the three constraints on the Higgs mass, the DM relic density and the muon $g-2$, i.e., $\displaystyle m_{h^{0}}=125\pm 2~{}\text{GeV},$ $\displaystyle\Omega h^{2}<0.126,$ $\displaystyle\Delta a_{\mu}=(2.87\pm 0.97)\times 10^{-9}.$ The ANN first generates a set of points using the SUGRA input parameters, which are used to train the neural network based on the constructed likelihood function. The input parameters are $m_{0}$, $A_{0}$, $m_{1}$, $m_{2}$, $m_{3}$ and $\tan\beta$, where $m_{0}$ is the universal scalar mass, $A_{0}$ is the universal trilinear coupling, $m_{1},m_{2},m_{3}$ are the $\mathsf{U(1),SU(2),SU(3)}$ gaugino masses, all at the GUT scale, and $\tan\beta=\langle H_{u}\rangle/\langle H_{d}\rangle$, where $H_{u}$ gives mass to the up quarks and $H_{d}$ gives mass to the down quarks and the charged leptons. We notice that the ANN predicts a particle spectrum consistent with $\tilde{g}$SUGRA, where the colored sparticles are heavy and the sleptons, staus and electroweakinos are lighter.

Generating the sparticle spectrum requires evolving the renormalization group equations (RGEs), and for this we use SPheno-4.0.4 [51, 52], which implements two-loop MSSM RGEs and three-loop SM RGEs while taking into account SUSY threshold effects at the one-loop level. The larger SUSY scale makes it necessary to employ a two-scale matching condition at the electroweak and SUSY scales [53], thereby improving the calculations of the Higgs boson mass and of the sparticle spectrum. The bottom quark mass and $\alpha_{S}$ (the $\mathsf{SU(3)_{C}}$ gauge coupling) are run up to the scale of the $Z$ boson mass, $M_{Z}$, using four-loop RGEs in the $\overline{\rm MS}$ scheme, while for the top quark the evolution starts at the pole mass and the $\overline{\rm MS}$ mass is computed by running down to the $M_{Z}$ scale including two-loop QCD corrections.
The tau mass is calculated at $M_{Z}$ including one-loop electroweak corrections. The calculation of the $\overline{\rm MS}$ Yukawas at the electroweak scale involves the first matching conditions to include the SM thresholds. Those couplings are then run using three-loop SM RGEs to $M_{\rm SUSY}$, where the second matching takes place to include the SUSY thresholds at the one-loop level and a shift is made to the $\overline{\rm DR}$ scheme. The two-loop MSSM RGEs of the $\overline{\rm DR}$ Yukawas and gauge couplings are then run to the GUT scale, where the soft SUSY breaking boundary conditions are applied. The obtained set of points is then passed to Lilith [54, 55], HiggsSignals [56] and HiggsBounds [57] to check the Higgs sector constraints, as well as to SModelS [58, 59, 60] to check the LHC constraints. Furthermore, micrOMEGAs-5.2.7 [61] has a module which we use to check the constraints from DM direct detection experiments.

We discuss now the results of our analysis. In Table 1 we give an analysis of the VEVs of the heavy fields that enter in the GUT symmetry breaking for a range of GUT parameters $\eta,\lambda$, $M^{126}$ and $M^{210}$; the VEVs are in general complex. The VEVs are obtained by solving the spontaneous symmetry breaking equations using $W_{\rm GUT}$. Using the VEVs of Table 1, one solves for the Higgs doublet mass matrix using a range of $a,b_{1},b_{2},c,\bar{c}$ that appear in $W_{\rm DT}$. The diagonalization of the Higgs mass matrix allows us to identify the linear combination of the Higgs doublet fields which is massless and corresponds to the pair of MSSM Higgs doublets.

Model | $\eta$ | $\lambda$ | $M^{126}$ | $M^{210}$ | $\mathcal{V}_{1_{{}_{{210}}}}$ | $\mathcal{V}_{24_{{}_{{210}}}}$ | $\mathcal{V}_{75_{{}_{{210}}}}$ | $\mathcal{V}_{1_{{}_{{126}}}}$
---|---|---|---|---|---|---|---|---
(a) | 2.22 | 1.96 | $5.73\times 10^{17}$ | $1.14\times 10^{15}$ | $2.00\times 10^{18}$ | $(-4.00+\imath 0.42)\times 10^{18}$ | $(-8.97+\imath 0.38)\times 10^{18}$ | $(2.82-\imath 0.09)\times 10^{17}$
(b) | 2.85 | 2.31 | $8.10\times 10^{17}$ | $2.00\times 10^{16}$ | $2.20\times 10^{18}$ | $-4.01\times 10^{18}$ | $-2.08\times 10^{18}$ | $\imath 2.93\times 10^{18}$
(c) | 3.00 | 2.88 | $4.27\times 10^{17}$ | $2.12\times 10^{15}$ | $1.10\times 10^{18}$ | $(-2.19+\imath 0.36)\times 10^{18}$ | $(-4.94+\imath 0.32)\times 10^{18}$ | $(2.45-\imath 0.12)\times 10^{17}$
(d) | 2.62 | 0.63 | $4.31\times 10^{17}$ | $4.01\times 10^{15}$ | $1.28\times 10^{18}$ | $-2.27\times 10^{18}$ | $-1.15\times 10^{18}$ | $\imath 9.20\times 10^{17}$
(e) | 1.37 | 2.61 | $5.67\times 10^{17}$ | $1.41\times 10^{16}$ | $3.20\times 10^{18}$ | $(-6.32+\imath 1.65)\times 10^{18}$ | $(-1.45+\imath 0.15)\times 10^{19}$ | $(1.60-\imath 0.12)\times 10^{18}$
(f) | 1.11 | 2.51 | $3.03\times 10^{17}$ | $1.44\times 10^{16}$ | $2.12\times 10^{18}$ | $(-4.16+\imath 1.39)\times 10^{18}$ | $(-9.65+\imath 1.27)\times 10^{18}$ | $(1.46-\imath 0.14)\times 10^{18}$
(g) | 2.24 | 0.90 | $4.04\times 10^{17}$ | $2.89\times 10^{15}$ | $1.40\times 10^{18}$ | $-2.65\times 10^{18}$ | $-1.42\times 10^{18}$ | $\imath 1.33\times 10^{18}$
(h) | 2.98 | 2.71 | $5.09\times 10^{17}$ | $1.79\times 10^{16}$ | $1.32\times 10^{18}$ | $(-2.54+\imath 1.19)\times 10^{18}$ | $(-6.09+\imath 1.11)\times 10^{18}$ | $(7.83-\imath 1.04)\times 10^{17}$
(i) | 2.07 | 1.13 | $3.11\times 10^{17}$ | $1.21\times 10^{16}$ | $1.16\times 10^{18}$ | $-1.86\times 10^{18}$ | $-8.71\times 10^{17}$ | $\imath 1.21\times 10^{18}$
(j) | 2.99 | 0.39 | $6.61\times 10^{17}$ | $1.88\times 10^{16}$ |
$1.71\times 10^{18}$ | $-1.58\times 10^{18}$ | $-4.80\times 10^{17}$ | $\imath 7.54\times 10^{17}$

Table 1: A numerical estimate of the VEVs of the Standard Model singlets in the $\mathsf{210}$, $\mathsf{126}$ and $\mathsf{\overline{126}}$-plets arising in the spontaneous breaking of the $\mathsf{SO(10)}$ GUT gauge symmetry, under the assumption $\mathcal{V}_{1_{{}_{{126}}}}=\mathcal{V}_{1_{{}_{\overline{126}}}}$. All VEVs and masses are in GeV.

The diagonalization also allows for the computation of the non-vanishing elements of the $U$ and $V$ matrices that connect to the light Higgs. These are the matrix elements $U_{d_{11}}$, $U_{d_{21}}$, $U_{d_{31}}$, $U_{d_{61}}$ and the matrix elements $V_{d_{11}}$, $V_{d_{21}}$, $V_{d_{31}}$, $V_{d_{61}}$. They are listed in Tables 2 and 3. In Table 4 we give a list of the parameters that enter in the cubic couplings $W_{3}$ and in the quartic couplings $W_{4}$. In Table 5 we give the computations of the contributions of the cubic couplings, the quartic couplings and their sum for $b,t,\tau$ for the model points of Table 1. Computation of the $b,t,\tau$ masses, using the analysis of Table 5 as boundary conditions at the GUT scale and RG evolution down to the electroweak scale, is given in Table 6. An analysis of the Higgs boson mass, the light sparticle masses, the dark matter relic density and the supersymmetric correction to the muon anomaly is given in Table 7. A comparison between Table 6 and Table 7 shows that one has a unification of the Yukawas and a $g-2$ anomaly consistent with the Fermilab result of Eq. (1.3). One may note that the dark matter relic density is not fully saturated by the model points of Table 7. This implies that the dark matter is likely multicomponent, including other forms of dark matter such as dark fermions of the hidden sector [62, 63, 64] or possibly a dark photon [65] or an axion [66, 67].

Model | $a$ | $b_{1}$ | $b_{2}$ | $c$ | $\bar{c}$ | $U_{d{{}_{11}}}$ | ${U_{d{{}_{21}}}}$ | ${U_{d{{}_{31}}}}$ | ${U_{d{{}_{61}}}}$
---|---|---|---|---|---|---|---|---|---
(a) | 0.22 | 1.86 | 1.14 | 1.46 | 0.18 | $-0.034+\imath 0.051$ | $0.298+\imath 0.285$ | $0.231+\imath 0.351$ | $-0.495-\imath 0.636$
(b) | 2.03 | 2.50 | 2.70 | 0.81 | 1.15 | $-0.040-\imath 0.015$ | $0.082+\imath 0.030$ | $0.185+\imath 0.067$ | $-0.917-\imath 0.333$
(c) | 1.70 | 2.95 | 1.21 | 0.21 | 2.74 | $-0.163-\imath 0.009$ | $0.381+\imath 0.080$ | $-0.098+\imath 0.409$ | $0.091-\imath 0.798$
(d) | 0.27 | 2.55 | 2.16 | 0.51 | 2.65 | $0.487+\imath 0.003$ | $-0.600-\imath 0.003$ | $-0.121-\imath 0.001$ | $0.623+\imath 0.003$
(e) | 2.33 | 1.65 | 1.04 | 0.08 | 2.95 | $-0.101+\imath 0.165$ | $0.185-\imath 0.249$ | $0.377+\imath 0.213$ | $-0.782-\imath 0.259$
(f) | 0.16 | 1.41 | 1.53 | 0.40 | 2.46 | $-0.715-\imath 0.001$ | $0.661+\imath 0.023$ | $0.015+\imath 0.106$ | $-0.075-\imath 0.187$
(g) | 0.51 | 2.90 | 1.08 | 0.19 | 1.37 | $0.096+\imath 0.138$ | $-0.272-\imath 0.390$ | $-0.103-\imath 0.148$ | $0.482+\imath 0.693$
(h) | 2.52 | 2.91 | 0.21 | 0.25 | 2.99 | $-0.067+\imath 0.047$ | $0.850-\imath 0.440$ | $0.101+\imath 0.085$ | $-0.230-\imath 0.087$
(i) | 1.57 | 1.38 | 2.45 | 0.75 | 1.41 | $0.067-\imath 0.043$ | $-0.072+\imath 0.047$ | $-0.137+\imath 0.089$ | $0.823-\imath 0.531$
(j) | 0.68 | 2.70 | 1.01 | 0.21 | 0.49 | $0.130-\imath 0.003$ | $-0.361+\imath 0.008$ | $-0.072+\imath 0.002$ | $0.920-\imath 0.021$

Table 2: A numerical estimate of the elements of the down Higgs zero mode eigenvector using the analysis of Table 1 and the couplings of Eq. (2.3).
Model | $a$ | $b_{1}$ | $b_{2}$ | $c$ | $\bar{c}$ | $V_{d{{}_{11}}}$ | ${V_{d{{}_{21}}}}$ | ${V_{d{{}_{31}}}}$ | ${V_{d{{}_{61}}}}$ ---|---|---|---|---|---|---|---|---|--- ​​(a) | 0.22 | 1.86 | 1.14 | 1.46 | 0.18 | $-0.273$ | $0.411+\imath 0.083$ | $-0.401$ | $0.765+\imath 0.062$ (b) | 2.03 | 2.50 | 2.70 | 0.81 | 1.15 | $-0.091$ | $0.107$ | $-0.196$ | $0.971$ (c) | 1.70 | 2.95 | 1.21 | 0.21 | 2.74 | $0.323$ | $-0.783-\imath 0.010$ | $0.246$ | $-0.467-\imath 0.057$ (d) | 0.27 | 2.55 | 2.16 | 0.51 | 2.65 | $-0.592$ | $0.708$ | $-0.074$ | $0.378$ (e) | 2.33 | 1.65 | 1.04 | 0.08 | 2.95 | $-0.357$ | $0.568+\imath 0.010$ | $-0.345$ | $0.644+\imath 0.127$ (f) | 0.16 | 1.41 | 1.53 | 0.40 | 2.46 | $0.730$ | $-0.673-\imath 0.006$ | $0.057$ | $-0.104-\imath 0.026$ (g) | 0.51 | 2.90 | 1.08 | 0.19 | 1.37 | $-0.276$ | $0.747$ | $-0.127$ | $0.591$ (h) | 2.52 | 2.91 | 0.21 | 0.25 | 2.99 | $0.086$ | $-0.977-\imath 0.055$ | $0.089$ | $-0.157-\imath 0.055$ (i) | 1.57 | 1.38 | 2.45 | 0.75 | 1.41 | $-0.120$ | 0.095 | $-0.163$ | 0.975 (j) | 0.68 | 2.70 | 1.01 | 0.21 | 0.49 | $-0.046$ | 0.163 | $-0.077$ | 0.983 Table 3: A numerical estimate of the elements of the up Higgs zero mode eigenvector using the analysis of Table 1 and the couplings of Eq. (2.3). Model | $f^{(1)}$ | $f^{(2)}$ | $f^{(3)}$ | $f^{10_{r}}$ ---|---|---|---|--- ​​(a) | 0.16 | 0.24 | 0.03 | (0.17, 0.23) (b) | 0.12 | 0.10 | 0.12 | (0.30, 1.04) (c) | 0.40 | 0.08 | 0.08 | (2.36, 1.06) (d) | 0.68 | 0.35 | 0.22 | (0.43, 0.44) (e) | 1.24 | 0.10 | 0.04 | (0.12, 0.25) (f) | 1.58 | 0.63 | 0.10 | (2.03, 2.26) (g) | 0.79 | 0.15 | 0.14 | (0.38, 0.24) (h) | 1.55 | 0.38 | 0.22 | (1.55, 0.21) (i) | 0.15 | 0.11 | 0.08 | (1.29, 2.30) (j) | 0.44 | 0.09 | 0.22 | (0.52, 0.49) Table 4: The GUT scale parameters in the cubic and quartic superpotentials $W_{3}$, $W_{4}^{(1)}$, $W_{4}^{(2)}$ and $W_{4}^{(3)}$ for the model points (a)$-$(j). The masses are in GeV. Model | $h_{t}^{0}$ | $h_{b}^{0}$ | $h_{\tau}^{0}$ | $\delta h_{t}^{\rm GUT}$ | $\delta h_{b}^{\rm GUT}$ | $\delta h_{\tau}^{\rm GUT}$ | $h_{t}^{\rm GUT}$ | $h_{b}^{\rm GUT}$ | $h_{\tau}^{\rm GUT}$ ---|---|---|---|---|---|---|---|---|--- ​​(a) | 0.274 | 0.148 | 0.148 | 0.204 | 0.201 | 0.063 | 0.478 | 0.073 | 0.088 (b) | 0.223 | 0.238 | 0.238 | 0.259 | 0.282 | 0.183 | 0.482 | 0.044 | 0.055 (c) | 0.190 | 0.193 | 0.193 | 0.319 | 0.190 | 0.163 | 0.501 | 0.029 | 0.036 (d) | 0.161 | 0.169 | 0.169 | 0.331 | 0.236 | 0.089 | 0.492 | 0.066 | 0.081 (e) | 0.153 | 0.278 | 0.278 | 0.400 | 0.341 | 0.231 | 0.486 | 0.062 | 0.074 (f) | 0.189 | 0.118 | 0.118 | 0.298 | 0.200 | 0.108 | 0.484 | 0.091 | 0.104 (g) | 0.149 | 0.222 | 0.222 | 0.348 | 0.272 | 0.159 | 0.497 | 0.051 | 0.062 (h) | 0.210 | 0.198 | 0.198 | 0.289 | 0.220 | 0.156 | 0.487 | 0.042 | 0.053 (i) | 0.268 | 0.179 | 0.179 | 0.216 | 0.248 | 0.094 | 0.483 | 0.068 | 0.085 (j) | 0.313 | 0.160 | 0.160 | 0.177 | 0.211 | 0.099 | 0.489 | 0.051 | 0.060 Table 5: The magnitude of the contributions to the top, bottom, and tau Yukawa couplings from cubic interactions (columns 2-4), from quartic interactions (columns 5-7) and the magnitude of their complex sum (columns 8-10) at the GUT scale for the parameter set of Table 4. The Yukawa couplings are in general complex and we add the contributions of the cubic and quartic interactions as complex numbers and exhibit only their magnitudes in the table. 
Model | $m_{0}$ | $A_{0}$ | $m_{1}$ | $m_{2}$ | $m_{3}$ | $\tan\beta$ | $m_{t}$ (pole) | $\overline{m}_{b}(\overline{m}_{b})$ | $m_{\tau}$ (pole)
---|---|---|---|---|---|---|---|---|---
(a) | 657 | -2228 | 661 | 526 | 7774 | 14.0 | 172.2 | 4.15 | 1.77682
(b) | 673 | 1127 | 939 | 570 | 8833 | 8.2 | 172.2 | 4.22 | 1.77682
(c) | 387 | 880 | 949 | 980 | 8118 | 5.3 | 172.8 | 4.19 | 1.77682
(d) | 164 | 197 | 632 | 1539 | 6171 | 12.2 | 172.9 | 4.20 | 1.77682
(e) | 416 | 339 | 740 | 416 | 4559 | 11.6 | 172.8 | 4.22 | 1.77682
(f) | 688 | 1450 | 852 | 634 | 8438 | 16.8 | 172.9 | 4.22 | 1.77682
(g) | 106 | 22.6 | 523 | 1309 | 5240 | 9.3 | 172.8 | 4.19 | 1.77682
(h) | 206 | 603 | 842 | 1298 | 7510 | 8.0 | 172.1 | 4.15 | 1.77682
(i) | 452 | 648 | 624 | 346 | 4843 | 13.1 | 172.8 | 4.20 | 1.77682
(j) | 196 | -803 | 828 | 1599 | 8929 | 9.4 | 172.6 | 4.22 | 1.77682

Table 6: The SUGRA parameter sets used for the RG analysis, where the boundary conditions for the top, bottom, and tau Yukawas are taken from Table 5. All masses are in GeV. In the analysis the GUT scale ranges from $8.6\times 10^{15}$ GeV to $2.0\times 10^{16}$ GeV.

Model | $h^{0}$ | $\tilde{\mu}$ | $\tilde{\nu}_{\mu}$ | $\tilde{\tau}$ | $\tilde{\chi}^{0}_{1}$ | $\tilde{\chi}^{\pm}_{1}$ | $\Omega h^{2}$ | $\Delta a_{\mu}(\times 10^{-9})$
---|---|---|---|---|---|---|---|---
(a) | 123.3 | 459.0 | 452.6 | 270.8 | 243.1 | 323.0 | 0.103 | 2.30
(b) | 125.3 | 422.8 | 415.7 | 370.4 | 337.3 | 337.6 | 0.003 | 2.14
(c) | 123.3 | 427.2 | 420.5 | 379.6 | 369.8 | 707.7 | 0.125 | 1.91
(d) | 123.9 | 856.4 | 852.4 | 243.5 | 240.1 | 1227 | 0.016 | 1.94
(e) | 123.8 | 361.0 | 352.6 | 282.0 | 272.7 | 272.9 | 0.002 | 1.98
(f) | 123.0 | 508.1 | 502.3 | 331.9 | 324.2 | 404.3 | 0.004 | 2.11
(g) | 123.4 | 722.8 | 718.2 | 206.5 | 195.5 | 1038.4 | 0.103 | 2.57
(h) | 124.5 | 628.7 | 623.6 | 338.3 | 326.8 | 998.4 | 0.082 | 1.94
(i) | 123.7 | 346.8 | 338.0 | 240.3 | 205.6 | 205.8 | 0.001 | 2.67
(j) | 123.5 | 774.1 | 769.8 | 319.1 | 314.7 | 1247 | 0.016 | 2.59

Table 7: The low scale SUSY mass spectrum showing the Higgs boson, the smuon, the muon sneutrino, the stau and the light electroweakino masses (in GeV), and the LSP relic density for the benchmarks of Table 6. Also shown is $\Delta a_{\mu}$.

A scan of the parameter space using the GUT scale input of $\mathsf{SO(10)}$ results in a larger set of points than those presented in Tables 1$-$5. The input parameters take values in the ranges: $0.5<\eta,\lambda<6.0$, $0.1<a,b_{1},b_{2},c,\bar{c}<3.0$, $1\times 10^{16}<M^{126}<9.5\times 10^{17}$, $1\times 10^{15}<M^{210}<3.5\times 10^{16}$, $0.01<f^{(1)},f^{(2)},f^{(3)}<4.0$ and $0.1<f^{10_{r}}<5.5$. The result of the scan is shown in Fig. 1. The left panel is a scatter plot in the variables $\eta$ and $\lambda$, with the muon $g-2$, consistent with $\Delta{a^{\rm FB}_{\mu}}$, shown on the color axis. The right panel shows a scatter plot in the top, bottom and tau Yukawa couplings at the GUT scale. The set of points in the scatter plot is consistent with the experimental constraints, and the evolution of the GUT scale Yukawas to the electroweak scale produces the correct top, bottom and tau masses within experimental uncertainties.

Figure 1: Scatter plots resulting from the scan of input parameters from $\mathsf{SO(10)}$. The left panel shows the parameters $\eta$ and $\lambda$ with the color axis being the muon $g-2$. The right panel shows the top, bottom and tau Yukawa couplings at the GUT scale.
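The scan logic of this section is easy to prototype. The sketch below (Python) draws GUT-scale inputs uniformly over the ranges stated above and scores the resulting observables with one plausible likelihood built from the three constraints used to train the ANN. The Gaussian/one-sided form and the relic-density penalty width are assumptions, since the text does not spell out the internal xBIT likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_gut_point():
    # Uniform draw over the stated GUT-scale input ranges of the scan
    return {
        "eta":  rng.uniform(0.5, 6.0),  "lam": rng.uniform(0.5, 6.0),
        "a":    rng.uniform(0.1, 3.0),  "b1":  rng.uniform(0.1, 3.0),
        "b2":   rng.uniform(0.1, 3.0),  "c":   rng.uniform(0.1, 3.0),
        "cbar": rng.uniform(0.1, 3.0),
        "M126": rng.uniform(1e16, 9.5e17),   # GeV
        "M210": rng.uniform(1e15, 3.5e16),   # GeV
        "f1":   rng.uniform(0.01, 4.0), "f2": rng.uniform(0.01, 4.0),
        "f3":   rng.uniform(0.01, 4.0),
        "f10":  rng.uniform(0.1, 5.5, size=2),
    }

def log_likelihood(m_h, omega_h2, delta_a_mu):
    # Plausible scoring of the three constraints (assumed form, not xBIT's):
    # Gaussian windows for m_h and Delta a_mu, one-sided penalty for the
    # relic density upper bound; the penalty width 0.01 is an assumption.
    ll = -0.5 * ((m_h - 125.0) / 2.0) ** 2
    ll += -0.5 * ((delta_a_mu - 2.87e-9) / 0.97e-9) ** 2
    if omega_h2 > 0.126:
        ll += -0.5 * ((omega_h2 - 0.126) / 0.01) ** 2
    return ll
```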
## 4 Sparticle hierarchies and signal region analysis

The set of data points retained after satisfying the constraints from the Higgs sector, the DM relic density, dark matter direct detection and the LHC is further processed, and points consistent with Yukawa coupling unification are kept. We observe that the spectrum consisting of light electroweakinos, sleptons (selectrons and smuons) and staus belongs to three cases of mass hierarchy.

#### Case 1: The electroweakinos $\tilde{\chi}^{0}_{2},\tilde{\chi}^{\pm}_{1}$ are almost degenerate, with the stau being the next-to-lightest supersymmetric particle (NLSP). The mass hierarchy here is $m_{\tilde{\tau}_{1}}<m_{\widetilde{\rm EW}}<m_{\tilde{\ell}},$ where $\widetilde{\rm EW}=(\tilde{\chi}^{0}_{2},\tilde{\chi}^{\pm}_{1})$ and $\tilde{\ell}$ represents the sleptons.

#### Case 2: In this category, one of the electroweakinos ($\tilde{\chi}^{0}_{2}$ or $\tilde{\chi}^{\pm}_{1}$) is the NLSP and the hierarchy reads $m_{\widetilde{\rm EW}}<m_{\tilde{\tau}_{1}}<m_{\tilde{\ell}}\,.$ Here we distinguish two subcategories (I) and (II) where $\displaystyle m_{\tilde{\chi}^{\pm}_{1}}<m_{\tilde{\chi}^{0}_{2}}<m_{\tilde{\tau}_{1}}~{}~{}~{}\text{(I)},$ $\displaystyle m_{\tilde{\chi}^{\pm}_{1}}<m_{\tilde{\tau}_{1}}<m_{\tilde{\chi}^{0}_{2}}~{}~{}~{}\text{(II)}.$

#### Case 3: The last category also has the stau as the NLSP, but the electroweakino and slepton hierarchy is inverted, i.e., $m_{\tilde{\tau}_{1}}<m_{\tilde{\ell}}<m_{\widetilde{\rm EW}}.$

Benchmarks (a) and (f) belong to Case 1, while (b), (e) and (i) belong to Case 2, and (c), (d), (g), (h) and (j) belong to Case 3 (a mechanical check of this classification is sketched below). Fig. 2 shows the obtained data set categorized according to the above three cases.

Figure 2: A scatter plot in the $M_{0}$-$A_{0}$ plane showing the three cases (and subcases) with the chargino-second neutralino mass gap shown on the color axis.

An illustration of such a complex spectrum is given in Fig. 3. The upper panels correspond to benchmark (a) while the lower ones are for (d). Cascade decays are common in high scale models and, unlike the simplified models considered by ATLAS and CMS, produce more complicated event topologies. Thus, for slepton pair production, analyses by ATLAS [68, 69] and CMS [70, 71] consider a 100% branching ratio of $\tilde{\ell}\to\ell\tilde{\chi}^{0}_{1}$, which can happen in spectra belonging to Case 3. However, Cases 1 and 2 do not necessarily abide by this, and one can get several decay channels, making the final states more complicated. In the next section, we select a set of benchmarks belonging to the three cases discussed above. We study slepton pair production and decay at the HL-LHC and HE-LHC. We design a set of signal regions to target the rich final states corresponding to the three cases of mass hierarchies. For earlier works on SUSY discovery at the HL-LHC and HE-LHC, see Refs. [72, 73] and the CERN yellow reports [74, 75].

Figure 3: A display of the particle spectrum using PySLHA [76] for benchmarks (a) (upper panels) and (d) (lower panels). The left panels represent the spectrum up to 13 TeV while the right panels give the low-lying masses of the spectrum.

### 4.1 Slepton pair production and event simulation at the LHC

The pair production cross section of sleptons (selectrons and smuons) is proportional to the electron and muon Yukawa couplings, which means that those cross sections are small compared to those of staus and electroweak gauginos.
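Returning to the three hierarchy cases defined above, the classification can be checked mechanically from the mass spectrum. A small sketch (Python) is given below; since Table 7 does not list $m_{\tilde{\chi}^{0}_{2}}$, the example call uses its near-degeneracy with $\tilde{\chi}^{\pm}_{1}$ (as stated for Case 1) as an assumption:

```python
def hierarchy_case(m_stau1, m_chi20, m_chipm1, m_slep):
    # Classify a spectrum into the hierarchy cases of Sec. 4;
    # m_ew is the lighter of the chi^0_2 / chi^pm_1 pair.
    m_ew = min(m_chi20, m_chipm1)
    if m_stau1 < m_ew < m_slep:
        return "Case 1"
    if m_ew < m_stau1 < m_slep:
        return "Case 2(I)" if m_chi20 < m_stau1 else "Case 2(II)"
    if m_stau1 < m_slep < m_ew:
        return "Case 3"
    return "other"

# Benchmark (a), masses in GeV from Table 7: stau 270.8, chargino 323.0,
# smuon 459.0; chi^0_2 taken ~ chi^pm_1 (near-degeneracy assumed)
print(hierarchy_case(270.8, 323.0, 323.0, 459.0))  # -> "Case 1"
```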
For our LHC analysis, we select six of the ten benchmarks shown in Table 6, corresponding to sleptons in the mass range of $\sim 350$ GeV to $\sim 850$ GeV. The production cross sections of the slepton pairs at 14 TeV and 27 TeV are calculated at aNNLO+NNLL accuracy using Resummino-3.0 [77, 78] and the five-flavor NNPDF23NLO PDF set. The results, arranged in decreasing order of cross section, are shown in Table 8. Also shown are the different branching ratios of the sleptons, but for brevity we do not exhibit the branching ratios of $\tilde{\chi}^{0}_{2}$ and $\tilde{\chi}^{\pm}_{1}$ for benchmarks (b), (f) and (i). To have an idea of the decay channels involved, one can examine the right panel of Fig. 3, which shows the low-lying spectrum of benchmark (a). Since (a) and (f) both belong to Case 1, one can get an idea of the different decay channels of $\tilde{\chi}^{0}_{2}$ and $\tilde{\chi}^{\pm}_{1}$, which involve the stau. This leads to a tau-enriched final state.

Model | $\sigma(pp\rightarrow\tilde{e}_{L}\,\tilde{e}_{L})$ | $\sigma(pp\rightarrow\tilde{\mu}_{L}\,\tilde{\mu}_{L})$ | Branching ratios
---|---|---
 | 14 TeV | 27 TeV | 14 TeV | 27 TeV | $\tilde{\ell}_{L}\to\ell\tilde{\chi}^{0}_{1}$ | $\tilde{\ell}_{L}\to\ell\tilde{\chi}^{0}_{2}$ | $\tilde{\ell}_{L}\to\nu_{\ell}\tilde{\chi}^{\pm}_{1}$
(i) | 2.896 | 9.633 | 2.909 | 9.673 | 31% | 6% | 63%
(b) | 1.242 | 4.590 | 1.244 | 4.598 | 31% | 6% | 63%
(f) | 0.541 | 2.252 | 0.543 | 2.262 | 22% | 26% | 52%
(h) | 0.194 | 0.958 | 0.194 | 0.957 | 100% | - | -
(g) | 0.094 | 0.533 | 0.094 | 0.533 | 100% | - | -
(d) | 0.037 | 0.253 | 0.037 | 0.253 | 100% | - | -

Table 8: The aNNLO+NNLL pair production cross sections, in fb, of sleptons at $\sqrt{s}=14$ TeV and at $\sqrt{s}=27$ TeV for benchmarks (b), (d) and (f)$-$(i) of Table 1, arranged in decreasing order of production cross section. Also shown are the slepton branching ratios to electroweakinos and leptons.

The final states which make up our signal region (SR) involve two same flavor and opposite sign (SFOS) leptons with missing transverse energy (MET). We also require at least two jets ($N_{j}\geq 2$), which can be used to form kinematic variables that are effective for jetty final states. We call the signal region SR-2$\ell$Nj. For such final states, the dominant SM backgrounds are from diboson production, $Z/\gamma+$jets, dilepton production from off-shell vector bosons ($V^{*}\rightarrow\ell\ell$), $t\bar{t}$ and $t+W/Z$. The subdominant backgrounds are Higgs production via gluon fusion ($ggF$ H) and vector boson fusion (VBF). The simulation of the signal and background events is performed at LO with MadGraph5_aMC@NLO interfaced to LHAPDF [79] using the NNPDF30LO PDF set. Up to two hard jets are added at generator level. The parton level events are passed to PYTHIA8 [80] for showering and hadronization using a five-flavor matching scheme in order to avoid double counting of jets. For the signal events, the matching/merging scale is set at one-fourth the mass of the pair-produced sleptons. Additional jets from ISR and FSR are added to the signal and background events. Jets are clustered with FastJet [81] using the anti-$k_{t}$ algorithm [82] with jet radius $R=0.4$. DELPHES-3.4.2 [83] is then employed for detector simulation and event reconstruction using the HL-LHC and HE-LHC cards. The SM backgrounds are scaled to their relevant NLO cross sections while the aNNLO+NNLL cross sections are used for the signal events.
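Before any cuts, the raw event counts implied by Table 8 follow from $N=\sigma\times\mathcal{L}$. A quick sketch (Python) for the combined selectron and smuon channels at 14 TeV, using the 3000 fb$^{-1}$ referenced in Sec. 4.3:

```python
# Raw slepton-pair yields before cuts, N = sigma * L, using the 14 TeV
# aNNLO+NNLL cross sections of Table 8 (selectron + smuon channels, in fb)
lumi = 3000.0  # fb^-1, the HL-LHC figure quoted in Sec. 4.3
sigma_14 = {"i": 2.896 + 2.909, "b": 1.242 + 1.244, "f": 0.541 + 0.543,
            "h": 0.194 + 0.194, "g": 0.094 + 0.094, "d": 0.037 + 0.037}
for bm, s in sigma_14.items():
    print(bm, round(s * lumi))  # e.g. benchmark (i): ~17,400 events before cuts
```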
### 4.2 Event selection

The selected SFOS leptons must have leading and subleading transverse momenta $p_{T}>15$ GeV for electrons and $p_{T}>10$ GeV for muons, with $|\eta|<2.5$. Each event should contain at least two non-b-tagged jets with the leading $p_{T}>20$ GeV in the $|\eta|<2.4$ region, and a missing transverse energy $E^{\rm miss}_{T}>70$ GeV. Despite the specific preselection criteria, the analysis cuts used for the six benchmarks cannot be the same. This is due to the rich final states involved. To help us discriminate the signal from the background events, we use a set of kinematic variables along with a deep neural network (DNN) which is trained and tested on two independent sets of signal and background samples. We list the kinematic variables that enter in the training of the DNN:

1. $E^{\rm miss}_{T}$: the missing transverse energy in the event. It is usually high for the signal due to the presence of neutralinos.
2. The transverse momentum of the leading non-b-tagged jet, $p_{T}(j_{1})$. Rejecting b-tagged jets reduces the $t\bar{t}$ background.
3. The transverse momentum of the leading lepton (electron or muon), $p_{T}(\ell_{1})$.
4. $M_{\rm T2}$, the stransverse mass [84, 85, 86] of the leading and subleading leptons (a brute-force evaluation is sketched at the end of this subsection), $M_{\rm T2}=\min\left[\max\left(m_{\rm T}(\mathbf{p}_{\rm T}^{\ell_{1}},\mathbf{q}_{\rm T}),m_{\rm T}(\mathbf{p}_{\rm T}^{\ell_{2}},\,\mathbf{p}_{\rm T}^{\text{miss}}-\mathbf{q}_{\rm T})\right)\right],$ (4.1) where $\mathbf{q}_{\rm T}$ is an arbitrary vector chosen to find the appropriate minimum and the transverse mass $m_{\rm T}$ is given by $m_{\rm T}(\mathbf{p}_{\rm T1},\mathbf{p}_{\rm T2})=\sqrt{2(p_{\rm T1}\,p_{\rm T2}-\mathbf{p}_{\rm T1}\cdot\mathbf{p}_{\rm T2})}.$ (4.2)
5. The quantity $M^{\rm min}_{\rm T}$ defined as $M^{\rm min}_{\rm T}=\text{min}[m_{\rm T}(\textbf{p}_{\rm T}^{\ell_{1}},\textbf{p}^{\rm miss}_{\rm T}),m_{\rm T}(\textbf{p}_{\rm T}^{\ell_{2}},\textbf{p}^{\rm miss}_{\rm T})]$. The variables $M_{\rm T2}$ and $M^{\rm min}_{\rm T}$ are effective when dealing with large MET in the final state.
6. The dilepton invariant mass, $m_{\ell\ell}$. It helps in rejecting the diboson background, which peaks near the $Z$ boson mass, by requiring $m_{\ell\ell}>100$ GeV.
7. The opening angle between the MET system and the dilepton system, $\Delta\phi(\textbf{p}_{\rm T}^{\ell},\textbf{p}^{\rm miss}_{\rm T})$, where $\textbf{p}_{\rm T}^{\ell}=\textbf{p}_{\rm T}^{\ell_{1}}+\textbf{p}_{\rm T}^{\ell_{2}}$.
8. The smallest opening angle between the first three leading jets in an event and the MET system, $\Delta\phi_{\rm min}(\textbf{p}_{\rm T}(j_{i}),\textbf{p}^{\rm miss}_{\rm T})$, where $i=1,2,3$.

We use the DNN implementation in the ‘Toolkit for Multivariate Analysis’ (TMVA) [87] framework within ROOT6 [88]. The DNN employed has three dense hidden layers with 128 neurons per layer and $\tanh$ as the activation function defining the output neurons given the input values. The DNN trains on the signal and background events using the above set of kinematic variables in three phases with a decreasing learning rate. After the ‘learning’ process is over, the DNN tests the predictions on another set of signal and background samples. Despite having one background set, the training and testing must be done every time a signal sample is used, i.e., six times in our case. During the testing stage, the DNN creates a new discriminator called the DNN response or the DNN score.
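As referenced in item 4 above, the stransverse mass of Eq. (4.1) can be evaluated numerically by scanning the splitting $\mathbf{q}_{\rm T}$ of the missing transverse momentum over a grid. A minimal brute-force sketch (Python; dedicated minimizers as in [84, 85, 86] are far more efficient, so this is illustrative only):

```python
import numpy as np

def m_T(p1, p2):
    # Massless transverse mass, Eq. (4.2); p1, p2 are 2-vectors (px, py)
    pt1, pt2 = np.linalg.norm(p1), np.linalg.norm(p2)
    return np.sqrt(max(2.0 * (pt1 * pt2 - np.dot(p1, p2)), 0.0))

def m_T2(l1, l2, met, n=201):
    # Eq. (4.1): minimize over trial splittings q_T of the missing momentum;
    # the grid span and resolution control the accuracy of the approximation
    best = np.inf
    span = 2.0 * max(np.linalg.norm(met), 1.0)
    for qx in np.linspace(-span, span, n):
        for qy in np.linspace(-span, span, n):
            q = np.array([qx, qy])
            best = min(best, max(m_T(l1, q), m_T(l2, met - q)))
    return best

# Example: two lepton transverse momenta plus MET, all in GeV
l1, l2 = np.array([60.0, 10.0]), np.array([-35.0, 20.0])
met = np.array([-10.0, -80.0])
print(round(m_T2(l1, l2, met), 1))
```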
Cuts on the DNN response maximize the signal ($S$) to background ($B$) ratio, $S/\sqrt{S+B}$. We give in Table 9 the set of analysis cuts on a select number of kinematic variables along with the new ‘DNN response’ variable. Variations in the cuts are used for our six benchmarks depending on the hierarchy of the spectrum, which allows us to put them in three categories: (b), (i) as the first, (f) as the second and (d), (g), (h) as the third. The values shown in parentheses are the modified cuts at 27 TeV, which are essential for improving the $S/\sqrt{S+B}$ ratio.

Variable | (b), (i) | (f) | (d), (g), (h)
---|---|---|---
$m_{\ell\ell}~\text{[GeV]}>$ | 136 (110) | 150 | 150 (110)
$E^{\rm miss}_{T}/\textbf{p}^{\ell}_{\rm T}>$ | 1.9 (2.8) | - | -
$\Delta\phi_{\rm min}(\textbf{p}_{\rm T}(j_{i}),\textbf{p}^{\rm miss}_{\rm T})~\text{[rad]}>$ | - | 0.85 (1.5) | -
$p_{T}^{\ell_{2}}~\text{[GeV]}>$ | - | - | 190 (370)
$M_{T2}~\text{[GeV]}>$ | - (140) | - (120) | 200 (300)
DNN response $>$ | 0.9 | 0.9 | 0.9
$\mathcal{L}$ at 14 TeV [fb$^{-1}$] | NV, 1887 | 1262 | NV, 2074, 1738
$\mathcal{L}$ at 27 TeV [fb$^{-1}$] | 2804, 1320 | 694 | 1031, 689, 1194

Table 9: The analysis cuts on a set of kinematic variables at 14 TeV (27 TeV), grouped by the benchmarks of Table 6. Notice that, with the exception of $m_{\ell\ell}$, harder cuts are applied at 27 TeV. Entries with a dash (-) mean that no requirement on the variable is imposed. Also shown at the bottom are the required integrated luminosities for discovery at 14 TeV and 27 TeV. Entries with ‘NV’ mean that the point is not visible at the corresponding center-of-mass energy.

### 4.3 Results

We begin by discussing the benchmarks (d), (g) and (h), which belong to Case 3. Here the mass splitting between the slepton and the neutralino is large, ranging from 300 GeV to 600 GeV, which produces very energetic leptons. For those benchmarks, the sleptons decay to a light lepton and a neutralino with a 100% branching ratio (see Table 8), which makes for a clean final state. The most effective kinematic variables for this case are $M_{T2}$ and $p_{T}^{\ell_{2}}$, where the latter is the transverse momentum of the subleading lepton. We present two-dimensional plots in these variables in the middle panels of Fig. 4. The left panel depicts point (d) and the right one the dominant diboson background. One can clearly see that the largest number of background events (color axis) is concentrated at small $M_{T2}$ and $p_{T}^{\ell_{2}}$, while for the signal larger values are highly populated as well, owing to the energetic final states. A hard cut on $M_{T2}$ and $p_{T}^{\ell_{2}}$, as well as on the ‘DNN response’, can reject most of the background events.

Next, we discuss benchmarks (b) and (i), which belong to Case 2. Here the branching ratios to a lepton and a neutralino are smaller, at 31%, and the slepton-neutralino mass gaps are 85 GeV and 140 GeV, respectively. Such a mass gap is not enough to allow harder cuts on $p_{T}^{\ell_{2}}$, which is why this variable is omitted in Table 9. For this reason, we make use of the leading and subleading transverse momenta of the leptons to reconstruct the total momentum of the system, $\textbf{p}^{\ell}_{\rm T}$, and to form the new variable $E^{\rm miss}_{T}/\textbf{p}^{\ell}_{\rm T}$. Two-dimensional plots in the $E^{\rm miss}_{T}/\textbf{p}^{\ell}_{\rm T}$ and dilepton invariant mass, $m_{\ell\ell}$, variables are shown in the top panels of Fig. 4.
The left panel shows the distributions for point (b) while the right one is for dilepton production from off-shell vector bosons. For the background, most of the events lie in the region $E^{\rm miss}_{T}/\textbf{p}^{\ell}_{\rm T}<2$ and $m_{\ell\ell}<100$ GeV, which is the reason for the choice of cuts in Table 9.

Figure 4: Two dimensional plots in select kinematic variables with the number of events on the color axis. Top panels: $E^{\rm miss}_{T}/\textbf{p}^{\ell}_{\rm T}$ vs the dilepton invariant mass for benchmark (b) (left) and the off-shell vector boson background (right). Middle panels: the subleading lepton transverse momentum vs $M_{T2}$ for benchmark (d) (left) and the diboson background (right). Bottom panels: $\Delta\phi_{\rm min}(\textbf{p}_{\rm T}(j_{i}),\textbf{p}^{\rm miss}_{\rm T})$ vs the dilepton invariant mass for benchmark (f) (left) and the $Z/\gamma+\text{jets}$ background (right).

Finally, for point (f), which belongs to Case 1, the branching fraction to a lepton and a neutralino is the smallest compared to its decays to a second neutralino and a chargino. The second neutralino and chargino decay predominantly to a stau, which in turn decays to a neutralino and a tau. Hence we are faced with a tau-enriched final state in which the taus can decay hadronically, forming jets. In our selection, we have rejected b-tagged jets but made no special requirements on tau-tagged jets. For this particular case, jets (tau-tagged or not) can be used to reject the SM background through the variable $\Delta\phi_{\rm min}(\textbf{p}_{\rm T}(j_{i}),\textbf{p}^{\rm miss}_{\rm T})$ defined above. In the bottom panels of Fig. 4 we show this variable plotted against $m_{\ell\ell}$ for point (f) (left panel) and the $Z/\gamma$+jets background (right panel). Excluding the region formed by $\Delta\phi_{\rm min}(\textbf{p}_{\rm T}(j_{i}),\textbf{p}^{\rm miss}_{\rm T})<1$ rad and $m_{\ell\ell}<100$ GeV is effective in reducing the SM background.

Figure 5: Distributions in the DNN response variable at 14 TeV (left) and 27 TeV (right) for benchmarks (b) (top panels) and (g) (bottom panels).

Along with cuts on the variables discussed thus far, the ‘DNN response’ plays an important role. We show in Fig. 5 distributions in this variable after the above cuts have been implemented. The top panels depict benchmark (b) and show clearly that at 14 TeV this point cannot be discovered with 3000 fb$^{-1}$, while at 27 TeV the signal is in excess over the background near a DNN response of 1 for 2800 fb$^{-1}$. The bottom panels show point (g), also at 14 TeV (left) and 27 TeV (right). This benchmark is discoverable at both the HL-LHC and HE-LHC but requires a smaller integrated luminosity for discovery at the HE-LHC (700 fb$^{-1}$) than at the HL-LHC (2100 fb$^{-1}$). The evaluated integrated luminosities for discovery at both machines are summarized in the lower part of Table 9. Entries with ‘NV’ indicate that the benchmark is not discoverable at the corresponding machine. Note that there is a modest improvement in the integrated luminosity at the HE-LHC in comparison to the HL-LHC, but the former is expected to gather data at the rate of $\sim 820$ fb$^{-1}$ per month, so most of those points will be discoverable within the first two to three months of running. Note that points (f), (g), (h) and (i) are discoverable at both machines while (b) and (d) can only be discovered at the HE-LHC. We note that recently several works have come out regarding a SUSY explanation of the Fermilab muon $g-2$ [89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103].
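The discovery luminosities quoted in Table 9 follow from the $S/\sqrt{S+B}$ criterion introduced above. A minimal sketch (Python) of the scaling, assuming the conventional $5\sigma$ discovery threshold and hypothetical post-cut yields per fb$^{-1}$ (the per-benchmark post-cut yields are not tabulated in the text):

```python
def lumi_for_discovery(s_per_fb, b_per_fb, z=5.0):
    # Solve S / sqrt(S + B) = Z with S = s*L and B = b*L:
    # L = Z^2 (s + b) / s^2, in fb^-1
    return z * z * (s_per_fb + b_per_fb) / s_per_fb ** 2

# Illustration with hypothetical post-cut yields of 0.12 (signal) and
# 0.25 (background) events per fb^-1:
print(round(lumi_for_discovery(0.12, 0.25)))  # ~642 fb^-1
```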
## 5 Conclusion

In this work we have investigated whether high scale models can produce Yukawa coupling unification consistent with the Fermilab muon $g-2$ result. We used a neural network to investigate the parameter space of a class of $\mathsf{SO(10)}$ models where the Yukawa couplings arise from the cubic as well as the quartic interactions. As in a recent work, it is found that the preferred parameter space lies in a region where gluino-driven radiative breaking of the electroweak symmetry occurs. The model produces a split spectrum consisting of a light sector and a heavy sector. The light sector contains light sleptons and light weakinos, and the heavy sector contains the gluino, the squarks and the heavy Higgs bosons. The masses of the light sparticles lie in the few hundred GeV range and are accessible at the LHC. With the help of a deep neural network, we carried out a dedicated search for sleptons in the two-lepton final state at the HL-LHC and HE-LHC. It is found that most of the considered benchmarks are discoverable within the nominal integrated luminosity of the HL-LHC, while all of them are discoverable at the HE-LHC with lower integrated luminosities.

Acknowledgments: The research of AA was supported by the BMBF under contract 05H18PMCC1, while the research of PN was supported in part by the NSF Grant PHY-1913328.

## 6 Appendix: Contributions to Yukawas from higher dimensional operators

In this appendix we give the contributions $\delta h_{b},\delta h_{t},\delta h_{\tau}$ to the Yukawas that arise from the higher dimensional operators, where $\displaystyle\delta h_{t}=\delta h_{t}^{(1)}+\delta h_{t}^{(2)}+\delta h_{t}^{(3)},~{}~{}\delta h_{b}=\delta h_{b}^{(1)}+\delta h_{b}^{(2)}+\delta h_{b}^{(3)},~{}~{}\delta h_{\tau}=\delta h_{\tau}^{(1)}+\delta h_{\tau}^{(2)}+\delta h_{\tau}^{(3)}\,.$ (6.1) Here $\delta h^{(1)}$ is the contribution arising from $W_{4}^{(1)}$, $\delta h^{(2)}$ is the contribution arising from $W_{4}^{(2)}$, and $\delta h^{(3)}$ is the contribution arising from $W_{4}^{(3)}$. The explicit forms of these are given below [8].
Thus $W_{4}^{(1)}$ gives the following contribution to the third generation Yukawas $\displaystyle\delta h^{(1)}_{t}$ $\displaystyle=$ $\displaystyle\frac{if^{(1)}}{60\sqrt{2}M_{c}}\left(\sum_{r=1}^{2}b_{r}U_{d_{r1}}\right)\left[\frac{5\sqrt{3}}{2}\mathcal{V}_{75_{{}_{{210}}}}-4\sqrt{15}\mathcal{V}_{24_{{}_{{210}}}}-8\sqrt{15}\mathcal{V}_{1_{{}_{{210}}}}\right],$ (6.2) $\displaystyle\delta h^{(1)}_{b}$ $\displaystyle=$ $\displaystyle\frac{if^{(1)}}{60\sqrt{2}M_{c}}\left(\sum_{r=1}^{2}b_{r}V_{d_{r1}}\right)\left[\frac{\sqrt{20}}{3}\mathcal{V}_{75_{{}_{{210}}}}-20\sqrt{\frac{5}{3}}\mathcal{V}_{24_{{}_{{210}}}}\right],$ (6.3) $\displaystyle\delta h^{(1)}_{\tau}$ $\displaystyle=$ $\displaystyle\frac{if^{(1)}}{60\sqrt{2}M_{c}}\left(\sum_{r=1}^{2}b_{r}V_{d_{r1}}\right)\left[20\sqrt{3}\mathcal{V}_{75_{{}_{{210}}}}-20\sqrt{15}\mathcal{V}_{24_{{}_{{210}}}}\right],$ (6.4) The contribution of $W_{4}^{(2)}$ to the third generation Yukawas is given by $\displaystyle\delta h^{(2)}_{t}=$ $\displaystyle-\frac{if^{(2)}}{120M_{c}}\left[\frac{10}{3}\sqrt{\frac{2}{3}}\mathcal{V}_{75_{{}_{{210}}}}U_{d_{61}}+\frac{5}{3}\sqrt{\frac{10}{3}}\mathcal{V}_{24_{{}_{{210}}}}U_{d_{61}}+6\sqrt{5}\mathcal{V}_{24_{{}_{{210}}}}U_{d_{31}}-8\sqrt{5}\mathcal{V}_{1_{{}_{{210}}}}U_{d_{31}}\right],$ (6.5) $\displaystyle\delta h^{(2)}_{b}=$ $\displaystyle-\frac{if^{(2)}}{120M_{c}}\Bigg{[}-\frac{20}{3}\sqrt{\frac{2}{3}}\mathcal{V}_{75_{{}_{{210}}}}V_{d_{61}}-\frac{20}{3}\mathcal{V}_{75_{{}_{{210}}}}V_{d_{31}}-\frac{1}{3}\sqrt{\frac{10}{3}}\mathcal{V}_{24_{{}_{{210}}}}V_{d_{61}}-\frac{10\sqrt{5}}{3}\mathcal{V}_{24_{{}_{{210}}}}V_{d_{31}}$ $\displaystyle\hskip 56.9055pt-4\sqrt{\frac{10}{3}}\mathcal{V}_{1_{{}_{{210}}}}V_{d_{61}}\Bigg{]},$ (6.6) $\displaystyle\delta h^{(2)}_{\tau}=$ $\displaystyle-\frac{if^{(2)}}{120M_{c}}\Bigg{[}-20\sqrt{\frac{2}{3}}\mathcal{V}_{75_{{}_{{210}}}}V_{d_{61}}-20\mathcal{V}_{75_{{}_{{210}}}}V_{d_{31}}-\sqrt{\frac{10}{3}}\mathcal{V}_{24_{{}_{{210}}}}V_{d_{61}}-10\sqrt{5}\mathcal{V}_{24_{{}_{{210}}}}V_{d_{31}}$ $\displaystyle\hskip 56.9055pt-4\sqrt{30}\mathcal{V}_{1_{{}_{{210}}}}V_{d_{61}}\Bigg{]}.$ (6.7) Finally, the contribution of $W_{4}^{(3)}$ to the third generation Yukawas is given by $\displaystyle\delta h_{t}^{(3)}$ $\displaystyle=-\frac{3i}{8}\frac{f^{(3)}}{M_{c}}\left[\frac{2}{3}\sqrt{\frac{2}{3}}\mathcal{V}_{75_{210}}U_{d_{61}}+\frac{1}{3}\sqrt{\frac{10}{3}}\mathcal{V}_{24_{210}}U_{d_{61}}-\frac{2}{\sqrt{5}}\mathcal{V}_{24_{210}}U_{d_{31}}+\frac{8}{3\sqrt{5}}\mathcal{V}_{1_{210}}U_{d_{31}}\right],~{}~{}~{}$ (6.8) $\displaystyle\delta h_{b}^{(3)}$ $\displaystyle=-\frac{3i}{8}\frac{f^{(3)}}{M_{c}}\left[\frac{2}{3}\sqrt{\frac{2}{3}}\mathcal{V}_{75_{210}}V_{d_{61}}+\frac{1}{3}\sqrt{\frac{10}{3}}\mathcal{V}_{24_{210}}V_{d_{61}}-\frac{2}{\sqrt{5}}\mathcal{V}_{24_{210}}V_{d_{31}}+\frac{8}{3\sqrt{5}}\mathcal{V}_{1_{210}}V_{d_{31}}\right],$ (6.9) $\displaystyle\delta h_{\tau}^{(3)}$ $\displaystyle=\frac{3i}{8}\frac{f^{(3)}}{M_{c}}\left[\frac{2}{3}\sqrt{\frac{2}{3}}\mathcal{V}_{75_{210}}V_{d_{61}}+\frac{1}{3}\sqrt{\frac{10}{3}}\mathcal{V}_{24_{210}}V_{d_{61}}-\frac{2}{\sqrt{5}}\mathcal{V}_{24_{210}}V_{d_{31}}+\frac{8}{3\sqrt{5}}\mathcal{V}_{1_{210}}V_{d_{31}}\right].$ (6.10) The total Yukawas are the sum of the contributions from the cubic and from the quartic terms at the GUT scale as given in Eq. (2.12). ## References * [1] B. Abi et al. [Muon g-2], Phys. Rev. Lett. 126, no.14, 141801 (2021) doi:10.1103/PhysRevLett.126.141801 [arXiv:2104.03281 [hep-ex]]. * [2] G. W. Bennett et al. [Muon g-2], Phys. Rev. 
Open-loop potential difference games with inequality constraints[This work is supported by the Science and Engineering Research Board (SERB), Government of India, Grant no. SERB–EMR/2017/001267.]

Aathira Prasad and Puduru Viswanadha Reddy
Department of Electrical Engineering, Indian Institute of Technology - Madras, India

Static potential games are non-cooperative games which admit a fictitious function, also referred to as a potential function, such that the minimizers of this function constitute a subset (or a refinement) of the Nash equilibrium strategies of the associated non-cooperative game. In this paper we study a class of $N$-player non-zero-sum difference games with inequality constraints which admit a potential game structure. In particular, we provide conditions for the existence of an optimal control problem (with inequality constraints) such that the solution of this problem yields an open-loop Nash equilibrium strategy of the corresponding dynamic non-cooperative game (with inequality constraints). Further, we provide a way to construct potential functions associated with this optimal control problem. We specialize our general results to a linear-quadratic setting and provide a linear complementarity problem based approach for computing the refinements of the open-loop Nash equilibria obtained in [Reddy and Zaccour, 2015]. We illustrate our results with an example inspired by energy storage incentives in a smart grid.

Keywords: Potential games; dynamic games with inequality constraints; open-loop Nash equilibrium; linear complementarity problem.

§ INTRODUCTION

Multi-agent control systems and related distributed architectures are becoming increasingly popular with emerging applications such as smart grids, Internet of Things (IoT) systems, intelligent traffic networks and cyber-security. These systems are characterized by the presence of interdependent multiple decision-making entities, which are large-scale, distributed, networked and heterogeneous in nature. Game theory has emerged as a powerful tool for analyzing multi-agent systems; see [Manshaei et al., 2013], [Saad et al., 2012] and [Zhu and Başar, 2015] for applications of game theory in the areas mentioned above. In particular, dynamic game theory provides a mathematical framework for modeling multi-agent interactions which evolve over time, and has been successfully used in analyzing a variety of decision problems arising in engineering, economics and management science; see [Başar and Olsder, 1999], [Dockner et al., 2000], [Başar et al., 2018]. A significant share of dynamic game models are formulated in an unconstrained setting, that is, where the state and control variables are unconstrained, except for the state equation, which captures the evolution of the interaction environment. Constraints appear naturally in real-world applications in the form of production-capacity, environmental, market and budget constraints. For example, electric vehicle charging stations with limited energy resources impose joint constraints on players. Further, decision problems related to network congestion, which are used in modeling traffic systems, inherently involve capacity constraints. Recently, [Reddy and Zaccour, 2015] and [Reddy and Zaccour, 2016] studied a class of non-zero-sum difference games with inequality constraints and provided conditions for the existence of Nash equilibria with open-loop and feedback information structures.
The novelty of this paper lies in studying a class of dynamic games with inequality constraints which admit a potential game structure. Static potential games were first introduced in [Rosenthal, 1973] and further studied in [Monderer and Shapley, 1996]. Loosely speaking, a potential game is a non-cooperative game with the property that a Nash equilibrium of the game is obtained when players jointly optimize a fictitious function, also referred to as a potential function. In other words, a Nash equilibrium of the game is obtained by solving an optimization problem as opposed to a fixed point problem. An important property of a potential game is that the strategy profile that optimizes the potential function is a pure strategy Nash equilibrium, and as a result, the existence of a pure strategy Nash equilibrium is guaranteed in a potential game. It is well known ([Quint and Shubik, 1997]) that a non-cooperative game can admit more than one Nash equilibrium. The multiplicity of equilibria naturally poses a selection problem, and to address this, certain refinements of Nash equilibria have been proposed; see [Myerson, 1997]. These refinements provide a way of selecting a subset of equilibria based on additional properties that a Nash equilibrium is required to satisfy. In a potential game, the Nash equilibrium is refined with the property that players jointly optimize the potential function in a cooperative manner, even though they act strategically, each optimizing an individual objective function. Due to this property, potential games find applications in the study of network congestion games <cit.>, decentralized learning algorithms ([Marden et al., 2009]) and utility function design ([Marden and Shamma, 2018]) for multi-agent systems. Further, in [Slade, 1994] this property of potential games has been explored in the context of oligopolistic markets. In summary, static potential games have been studied extensively in the literature. The objective of this paper is to extend the notion of a potential game to a dynamic setting, and in particular, to dynamic games where inequality constraints appear jointly in control and state variables. Besides providing conditions for the existence of this class of games, another important objective of our paper is the computation of refinements of the open-loop Nash equilibria. We consider a class of finite horizon $N$-player non-zero-sum nonlinear difference games with a constraint structure similar to [Reddy and Zaccour, 2015] and [Reddy and Zaccour, 2016]. Our contributions are summarized as follows. * We define the notion of an open-loop potential difference game and associate an inequality constrained optimal control problem with the dynamic non-cooperative game with inequality constraints. Further, in Lemma <ref> and Theorem <ref> we provide conditions under which the non-cooperative game admits a potential game structure. * When the potential functions associated with the optimal control problem are not specified, we provide, in Theorem <ref>, a method for constructing the potential functions from the objective functions of the players using the theory of conservative vector fields. * We specialize the obtained results to a linear quadratic setting and characterize in Theorem <ref> a class of linear quadratic potential difference games with inequality constraints.
Further, in Theorem <ref> we provide a linear complementarity problem based method for computing a refinement of the open-loop Nash equilibria. The paper is organized as follows. In section <ref>, we introduce the dynamic game model with a separable structure of the objective functions and inequality constraints jointly in state and decision variables. In section <ref>, we define open-loop dynamic potential games by introducing an optimal control problem associated with the dynamic potential game. Further, we provide the structure of the dynamic game for the existence of potential functions, and we demonstrate that the solution of the optimal control problem is an open-loop Nash equilibrium of the dynamic game. When the potential functions are not specified beforehand, we also illustrate a procedure for their construction. In section <ref>, we specialize these results to the linear quadratic setting. In section <ref> we illustrate our results with an application motivated by energy storage incentives in a smart grid. Finally, in section <ref> we provide conclusions and future work. §.§ Literature review The literature on dynamic potential games is limited compared to their static counterpart. In a static non-cooperative game the objective functions of the players are required to satisfy certain conditions to admit a potential function. Quite naturally, an extension of these conditions to a dynamic setting imposes structural constraints on the instantaneous and terminal objective functions. In [Slade, 1994], the author studies oligopolistic markets, and provides conditions under which a single optimization problem yields a Nash equilibrium, both in the static and dynamic settings. In that work, the objective function of this optimization problem is referred to as a fictitious function. In particular, in the dynamic case, the existence of these functions under the open-loop and feedback information structures has been explored. In [Dragone et al., 2015], the authors consider a non-cooperative differential game with open-loop information structure, and provide conditions for the existence of Hamiltonian potential functions. Potential differential games were studied in [Fonseca-Morales and Hernández-Lerma, 2018] in an open-loop setting; the authors primarily focused on a separable structure of the payoff functions from which the potential function can be derived. Unconstrained discrete-time games are considered in [González-Sánchez and Hernández-Lerma, 2014] in a stochastic setting. In [González-Sánchez and Hernández-Lerma, 2016], the authors provide a survey on static and dynamic potential games. All these works study dynamic potential games in discrete and continuous time settings without additional constraints on the state and control variables. In [Zazo et al., 2016], the authors consider discrete-time infinite-horizon non-cooperative difference games with inequality constraints and provide conditions for the existence of potential functions. They also mention a way of constructing potential functions using the theory of conservative vector fields, an approach followed in static games [Monderer and Shapley, 1996]. Our work is closer to [Zazo et al., 2016] in spirit, but differs considerably in the model and approach. The constraint structure in our paper is inspired by our previous works [Reddy and Zaccour, 2015] and [Reddy and Zaccour, 2016].
In particular, we assume that the players have two types of control variables, namely, 1) variables that (directly) affect the dynamics but do not enter the constraints, and 2) variables that enter the constraints jointly with the state variables but do not appear in the dynamics. Further, we assume that the objective functions are separable in the two types of control variables, that is, there are no cross-terms between them. As was shown in [Reddy and Zaccour, 2015] and [Reddy and Zaccour, 2016], and as we also demonstrate in this paper, these assumptions are crucial in obtaining a semi-analytical characterization of Nash equilibria, and in providing a linear complementarity problem based method for computing them. More importantly, the linear quadratic dynamic potential games analyzed in this paper provide refinements of the open-loop Nash equilibria obtained in [Reddy and Zaccour, 2015], and a procedure for computing them. §.§ Notation We shall use the following notation. The $n$-dimensional Euclidean space is denoted by $\mathbb R^n$, $n\geq 1$. $A^\prime$ denotes the transpose of a matrix or a vector $A$. $A_1\oplus A_2\oplus \cdots \oplus A_n$ represents the block diagonal matrix obtained by taking the matrices $A_1, A_2,\cdots,A_n$ as diagonal elements in this sequence. The matrix with all entries as zeros is denoted by $\mathbf{0}$, the matrix with all entries as one is denoted by $\mathbf{1}$, and the identity matrix is represented by $\mathbf{I}$. We call two vectors $x,y\in \mathbb R^n$ complementary if $x\geq 0$, $y\geq0$ and $x^\prime y=0$, and $0\leq x \perp y \geq 0$ denotes this condition. Let $A$ be a $n\times n$ matrix and $a$ be a $n\times 1$ vector. Let $n$ be partitioned as $n=n_1+n_2+\cdots+n_K$. We represent $[A]_{ij}$ as the $n_i\times n_j$ sub-matrix associated with indices $n_i$ (row) and $n_j$ (column), and $[a]_i$ as the $n_i\times 1$ sub-vector associated with indices $n_i$. $[A]_{i\bullet}$ and $[A]_{\bullet i }$ represent the $i^{th}$ row block matrix of dimension $n_i \times n $ and the $i^{th}$ column block matrix of dimension $n \times n_i$ of the matrix $A$, respectively. The notation $\mathbf{e}_i$ represents a column vector with the $i^{th}$ element as 1 and the rest of the elements as zero. For differentiation with respect to a vector or matrix variable we follow the convention from [Lütkepohl, 1996]. § DYNAMIC GAME MODEL In this section, we introduce a class of finite horizon discrete-time non-zero-sum games with inequality constraints. Let $\mathcal N=\{1,2,\cdots,N\}$ be the set of players and $\mathcal K=\{0,1,2,\cdots,K\}$ be the set of time periods. At each time instant, player $i \in \mathcal N$ chooses the following two types of actions (control variables): variables that enter the dynamics of the system, but not the constraints, denoted by $u^i_k\in U^i_k\subset \mathbb R^{m_i}$; and variables that do not affect the dynamics, but do constrain the decision making process, denoted by $v_k^i\in V_k^i \subset \mathbb R^{s_i}$. Here, $U_k^i$ and $V_k^i$ are the sets of admissible values for the two types of variables. We denote the vector of actions of all players at time period $k$ by $\mathbf u_k:=\begin{bmatrix} {u^1_k}^\prime& {u^2_k}^\prime&\cdots& {u^N_k}^\prime \end{bmatrix} ^\prime$ and $\mathbf{v}_k:=\begin{bmatrix} {v^1_k}^\prime& {v^2_k}^\prime&\cdots& {v^N_k}^\prime \end{bmatrix}^\prime$. The state $ x_k \in {X}_k \subset \mathbb R^n$ evolves according to the following discrete-time dynamics \begin{align} {x}_{k+1}=f_k( x_k ,\mathbf{u}_k),~k\in \mathcal K\backslash \{K\},~x_0 \text{ given}.
\label{eq:statedynamics} \end{align} Here, $X_k$ denotes the set of admissible state vectors at time period $k$. We consider the following joint inequality constraints (also called coupled constraints) associated with the players' strategies \begin{align} h_k( x_k ,\mathbf{v}_k)\geq 0,~\mathbf{v}_k\geq 0,~ k\in \mathcal K. \label{eq:constraints} \end{align} Notice that the decision variables $\{u^i_k,~k\in \mathcal K \backslash \{K\},~i\in \mathcal N\}$ do not enter the constraints directly but affect them indirectly through the state variables. Let the strategies of player $i$, that is, the plan of actions for the entire planning period, be denoted by $\tilde{u}^i:=\{u^i_k,~k\in \mathcal K\backslash \{K\}\}$ and $\tilde{v}^i:=\{v^i_k,~k\in \mathcal K \}$. The set of players excluding player $i$ is denoted by $-i:=\mathcal N\backslash \{i\}$. The joint strategy profiles of the players are denoted by $\tilde{\mathbf{u}}:=(\tilde{u}^i,\tilde{u}^{-i})$ and $\tilde{\mathbf{v}}:=(\tilde{v}^i,\tilde{v}^{-i})$, and the corresponding strategy sets are denoted by $\mathcal U$ and $\mathcal V$, respectively. Each player $i\in \mathcal N$ uses her strategies $\tilde{u}^i$ and $\tilde{v}^i$ to minimize the objective function given by \begin{align} J^{i}(x_0,(\tilde{u}^i,\tilde{u}^{-i}),(\tilde{v}^i,\tilde{v}^{-i}))& = g^i_K( x_K ,\mathbf{v}_K) + \sum_{k=0}^{K-1} g^i_k( x_k ,\mathbf{u}_k,\mathbf{v}_k). \label{eq:objectives} \end{align} We have the following assumptions related to the dynamic game (<ref>)-(<ref>). * The admissible action sets $\{U_k^i,~k\in \mathcal K\backslash \{K\},~i\in \mathcal N\}$ are such that the sets of state vectors $\{X_k,~k\in \mathcal K\}$ are convex, and the feasible action sets $\{V^i_k( x_k ,v_k^{-i})=\{v_k^i\in \mathbb R^{s_i}~|~ h_k( x_k ,\mathbf{v}_k)\geq 0,~ \mathbf{v}_k\geq 0\},~\forall x_k\in X_k\}$ are non-empty, convex and bounded for all $k\in \mathcal K$, $i\in \mathcal N$. A strategy pair $(\tilde{\bu},\tilde{\bv})$ associated with these action sets is an admissible strategy pair. * The matrices $\left\{\frac{\partial h_k}{\partial v_k^i},~k\in \mathcal K,~i\in \mathcal N\right \}$ have full rank, so as to satisfy constraint qualification conditions. * The instantaneous cost functions in (<ref>) admit a separable structure, that is, they do not contain cross terms involving the strategies $\mathbf{\tilde{u}}$ and $\mathbf{\tilde{v}}$, and are represented by $g_k^i( x_k ,\mathbf{u}_k,\mathbf{v}_k)= gu_k^i( x_k ,\mathbf{u}_k)+gv_k^i( x_k ,\mathbf{v}_k)$. * The functions $f_k$, $h_k$ and the players' cost functions $\{g_k^i,~k\in \mathcal K,~i\in \mathcal N\}$ are twice continuously differentiable in their arguments. Note that in (<ref>) the cost incurred by player $i$ depends not only on his own actions but also on the actions of the other players $-i$. So, (<ref>)-(<ref>) constitutes a dynamic game with inequality constraints, and we refer to it as NZDG from here on. The Nash equilibrium strategies of the players, denoted by $(\mathbf{\tilde{u}}^*,\mathbf{\tilde{v}}^*)$, for this class of games are defined as follows. The strategy profile $(\tilde{\mathbf{u}}^*,\tilde{\mathbf{v}}^*)$ is a Nash equilibrium if for every player $i\in \mathcal N$ the strategy $(\tilde{u}^{i*},\tilde{v}^{i*})$ solves \begin{align} \min_{(\tilde{u}^i,\tilde{v}^i)}J^{i}(x_0,(\tilde{u}^i,{\tilde{u}^{-i*}}),(\tilde{v}^i,{\tilde{v}^{-i*}})) \text{ subject to \eqref{eq:statedynamics} and \eqref{eq:constraints}}.
\end{align} From the above definition, a Nash equilibrium strategy is stable against a player's unilateral deviations. In multistage games the interaction environment is dynamic, and this is embedded in the state variables and their evolution. It is well known that the Nash equilibrium solution varies with the information used by the players during the (dynamic) decision making process. So, in a dynamic game an information structure must be specified when players design their strategies. In an open-loop information structure, the players design their strategies using only the knowledge of the time $k$ (and the initial state $ {x}_0$), whereas in a feedback information structure, players design their equilibrium strategies using the knowledge of the state variable. In this paper, we assume an open-loop information structure. This implies that the decision variables entering the dynamics are functions of time. Next, it is clear from the inequality constraints (<ref>) that the admissible action set of a player depends on the state variable. Therefore player $i$ takes action $v_k^i\in V^i_k( x_k ,v_k^{-i})$ as a function of time $k$, and the feasible set $V^i_k( x_k ,v_k^{-i})$ is parametrized by the state variable and the decision variables of the players excluding player $i$; see also [Reddy and Zaccour, 2015], which considers the open-loop information structure for this class of games. § OPEN-LOOP DYNAMIC POTENTIAL GAMES In this section we introduce the notion of a dynamic potential game and seek conditions under which NZDG is a dynamic potential game. It is well known that the classical potential game is a static concept ([Monderer and Shapley, 1996]). A (static) game is said to be a potential game if there exists a potential function such that the minimum of this function provides a pure strategy Nash equilibrium of the game, thereby providing a refinement (or selection) of Nash equilibria. In other words, when a potential function exists for a game, a Nash equilibrium can be obtained by solving a minimization problem. So, when we extend this notion of a potential game to a dynamic context, it is natural to associate an optimal control problem with NZDG. We consider the following optimal control problem with inequality constraints. \begin{align} \mathrm{OCP}:\quad \quad & \min_{\tilde{\mathbf{u}},\tilde{\mathbf{v}}} J( {x}_0,\tilde{\mathbf{u}},\tilde{\mathbf{v}})\label{eq:OCP1}\\ \text{subject to}~ & {x}_{k+1}=f_{k}( x_k ,\mathbf{u}_k),~ {x}_0 \text{ is given},\label{eq:OCP2}\\ & h_k( x_k ,\mathbf{v}_k)\geq 0,~\mathbf{v}_k\geq 0, ~ k\in \mathcal K,\label{eq:OCP3}\\ \text{where}~& J(x_0,\tilde{\mathbf{u}},\tilde{\mathbf{v}})=P_K( x_K ,\mathbf{v}_K)+\sum_{k=0}^{K-1}P_k( x_k ,\mathbf{u}_k,\mathbf{v}_k),\notag \end{align} where $P_k:\mathbb{R}^n \times \mathbb {R}^{m} \times \mathbb R^s \rightarrow \mathbb R$ and $P_K:\mathbb{R}^n \times \mathbb {R}^s\rightarrow \mathbb R$ are the instantaneous and terminal cost functions, which are twice continuously differentiable in their arguments. The dynamic game NZDG is referred to as an open-loop potential difference game (OLPDG) if there exist cost functions $\{P_k,~k\in \mathcal K\}$ such that the optimal solution of OCP provides an open-loop Nash equilibrium of NZDG. Whenever such cost functions exist, $\{P_k,~k\in \mathcal K\}$ are referred to as potential cost functions.
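As a concrete reading of NZDG and OCP, the sketch below (Python) rolls out the dynamics under fixed open-loop strategies, checks feasibility of the joint constraints, and accumulates a stage-additive cost with a terminal term. The model functions, horizon and dimensions are illustrative assumptions, not part of the formulation; replacing the player's costs $g^i_k, g^i_K$ with the potential costs $P_k, P_K$ evaluates the OCP objective along the same rollout.

```python
import numpy as np

# Minimal sketch (illustrative toy model): open-loop rollout of
# x_{k+1} = f_k(x_k, u_k), feasibility check h_k(x_k, v_k) >= 0, v_k >= 0,
# and evaluation of a stage-additive objective with a terminal term.
K = 5                                              # horizon, periods 0..K
f = lambda k, x, u: 0.9 * x + u.sum()              # scalar dynamics f_k
h = lambda k, x, v: np.array([1.0 - x - v.sum()])  # joint constraint h_k
g = lambda k, x, u, v: x**2 + (u**2).sum() + (v**2).sum()  # stage cost g_k^i
gK = lambda x, v: x**2 + (v**2).sum()              # terminal cost g_K^i

def rollout(x0, u_seq, v_seq):
    """u_seq: controls for k = 0..K-1; v_seq: constrained controls, k = 0..K."""
    x, J = x0, 0.0
    for k in range(K):
        assert (h(k, x, v_seq[k]) >= 0).all() and (v_seq[k] >= 0).all()
        J += g(k, x, u_seq[k], v_seq[k])
        x = f(k, x, u_seq[k])                      # state transition
    assert (h(K, x, v_seq[K]) >= 0).all() and (v_seq[K] >= 0).all()
    return J + gK(x, v_seq[K])

u = [np.zeros(2) for _ in range(K)]       # N = 2 players, scalar u^i each
v = [np.zeros(2) for _ in range(K + 1)]
print(rollout(0.5, u, v))                 # cost along the open-loop trajectory
```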
In the next two theorems we discuss necessary and sufficient conditions for an admissible pair $(\tilde{\bu}^*,\tilde{\bv}^*) \in \mathcal U \times \mathcal V$ to be an optimal solution of OCP. Towards this end, we define the instantaneous and terminal Lagrangian functions as follows \begin{align} &\cL_k(x_k,\bu_k,\bv_k,\lambda_{k+1},\mu_k) =P_k(x_k,\bu_k,\bv_k)+\lambda_{k+1}^\prime f_k(x_k,\bu_k)-\mu_k^\prime h_k(x_k,\bv_k),\\ &\cL_K(x_K,\bv_K,\mu_K)=P_K(x_K,\bv_K)-\mu_K^\prime h_K(x_K,\bv_K). \end{align} Let Assumption <ref> hold true. Let $(\tilde{\mathbf{u}}^*, \tilde{\mathbf{v}}^*)$ be an optimal admissible pair for OCP, and let $\{x_k^*,~k\in \mathcal K\}$ be the state trajectory generated by $\tilde{\bu}^*$; then there exist co-states $\{\lambda_k^*,~k\in \mathcal K\}$ and multipliers $\{\mu_k^*,~k\in \mathcal K\}$ such that the following conditions hold true: \begin{align} &\text{for $k\in \mathcal K\backslash \{K\}$}\notag \\ \bu_k^*&=\argmin_{\bu_k \in \bU_k}\mathcal L_k(x^*_k,\bu_k,\bv^*_k,\lambda_{k+1}^*,\mu_k^*),\label{eq:nceq1}\\ x^*_{k+1}&=\frac{\partial \mathcal L_k}{\partial \lambda_{k+1}}(x^*_k,\bu^*_k,\bv^*_k,\lambda_{k+1}^*,\mu_k^*),~ x^*_0=x_0, \label{eq:nceq2}\\ \lambda^*_k&=\frac{\partial \mathcal L_k}{\partial x_k}(x^*_k,\bu^*_k,\bv^*_k,\lambda_{k+1}^*,\mu_k^*),\label{eq:nceq3a}\\ \lambda^*_K&=\frac{\partial \cL_K}{\partial x_K}(x_K^*,\bv^*_K,\mu_K^*),\label{eq:nceq3b}\\ &\text{and for $k\in \mathcal K$}\notag \\ & 0\leq h_k(x_k^*,\bv_k^*) \perp \mu_k^* \geq 0, \label{eq:nceq4}\\ &0\leq \frac{\partial \mathcal L_k}{\partial \bv_k} (x^*_k,\bu^*_k,\bv^*_k,\lambda_{k+1}^*,\mu_k^*) \perp \bv^*_k \geq 0.\label{eq:nceq5} \end{align} The necessary conditions follow directly by applying the discrete-time maximum principle. The following theorem provides conditions under which (<ref>)-(<ref>) are also sufficient for the optimality of $(\tilde{\mathbf{u}}^*, \tilde{\mathbf{v}}^*)$. Let Assumption <ref> hold true. Let the pair of control strategies $( \tilde{\bu}^*,\tilde{\bv}^*)\in \mathcal U \times \mathcal V$ and the collection of trajectories $\{x_k^*,\lambda^*_k,\mu^*_k,~k \in \mathcal K\}$ satisfy (<ref>). Assume that the Lagrangian $\cL_k(x_k,\bu_k,\bv_k,\lambda_{k+1},\mu_k)$ has a minimum with respect to $(\bu_k,\bv_k)$ for all $k\in \mathcal K\backslash \{K\}$, and let the minimized Lagrangian be given by $\cL_k^*(x_k,\lambda_{k+1},\mu_k)=\min_{(\bu_k,\bv_k)}\cL_k(x_k,\bu_k,\bv_k,\lambda_{k+1},\mu_k)$ for $k\in \mathcal K\backslash \{K\}$. Assume that the terminal Lagrangian $\cL_K(x_K,\bv_K,\mu_K)$ has a minimum with respect to $\bv_K$, and denote the minimized terminal cost function by $ \cL^*_K(x_K,\mu_K)=\min_{\bv_K} \cL_K(x_K,\bv_K,\mu_K)$. If $\cL_k^*(x_k,\lambda_{k+1},\mu_k)$ is convex with respect to $x_k$ for all $k\in \mathcal K\backslash \{K\}$ and $\cL^*_K(x_K,\mu_K)$ is convex with respect to $x_K$, then the pair $(\tilde{\bu}^*,\tilde{\bv}^*)$ is optimal for OCP.
For any admissible $(\tilde{\bu},\tilde{\bv})\in \mathcal U\times \mathcal V$ we consider the difference \begin{align} J(x_0,\tilde{\bu},\tilde{\bv})-J(x_0,\tilde{\bu}^*,\tilde{\bv}^*)=&P_K( {x}_K,\mathbf{v}_K)+\sum_{k=0}^{K-1}P_k( {x}_k,\mathbf{u}_k,\mathbf{v}_k) -P_K( {x}^*_K,\mathbf{v}^*_K)-\sum_{k=0}^{K-1}P_k({x}_k^*,\mathbf{u}^*_k,\mathbf{v}^*_k)\notag\\ =& \cL_K(x_K,\bv_K,\mu_K^*)-\cL_K(x_K^*,\bv^*_K,\mu_K^*)+{\mu_K^*}^\prime\left(h_K(x_K,\bv_K)-h_K(x_K^*,\bv^*_K)\right)\notag \\ +& \sum_{k=0}^{K-1} \cL_k(x_k,\bu_k,\bv_k,\lambda^*_{k+1},\mu^*_k) -\cL_k(x^*_k,\bu^*_k,\bv^*_k,\lambda^*_{k+1},\mu^*_k)\notag \\+&\sum_{k=0}^{K-1} -{\lambda_{k+1}^*}^\prime(x_{k+1}-x^*_{k+1})+{\mu^*_k}^\prime\left(h_k(x_k,\bv_k)-h_k(x_k^*,\bv_k^*)\right)\notag \\ \geq &\cL_K^*(x_K,\mu_K^*)-\cL_K^*(x_K^*,\mu_K^*)+ \sum_{k=0}^{K-1}-{\lambda_{k+1}^*}^\prime (x_{k+1}-x^*_{k+1}) \notag \\ +&\sum_{k=0}^{K-1}\cL_k^*(x_k,\lambda_{k+1}^*,\mu_k^*)-\cL_k^*(x_k^*,\lambda_{k+1}^*,\mu_k^*)\notag\\ +&\sum_{k=0}^K {\mu_k^*}^\prime \left(h_k(x_k,\bv_k)-h_k(x_k^*,\bv_k^*)\right). \label{eq:objdiff} \end{align} From the envelope theorem we have $\frac{\partial \cL^*_k}{\partial x_k}(x_k^*,\lambda_{k+1}^*,\mu_k^*)=\frac{\partial \cL_k}{\partial x_k}(x_k^*,\bu_k^*,\bv_k^*,\lambda_{k+1}^*,\mu_k^*)= \lambda_k^*$ and $\frac{\partial \cL^*_K}{\partial x_K}(x_K^*,\mu_K^*)= \frac{\partial \cL_K}{\partial x_K}(x_K^*,\bv_K^*,\mu_K^*)=\lambda_K^*$. Using this, and from the convexity of the minimized Lagrangian and terminal Lagrangian with respect to the state variables, we get \begin{align} &\cL_k^*(x_k,\lambda_{k+1}^*,\mu_k^*)-\cL_k^*(x_k^*,\lambda_{k+1}^*,\mu_k^*)\geq \left(\frac{\partial \cL^*_k}{\partial x_k}(x_k^*,{\lambda_{k+1}}^*,\mu_k^*)\right)^\prime (x_k-x_k^*)={\lambda_k^*}^\prime(x_k-x_k^*), \\ &\cL_K^*(x_K,\mu_K^*)-\cL_K^*(x_K^*,\mu_K^*)\geq \left(\frac{\partial \cL^*_K}{\partial x_K}(x_K^*,\mu_K^*)\right)^\prime (x_K-x_K^*)={\lambda_K^*}^\prime(x_K-x_K^*). \end{align} Using (<ref>) in (<ref>) we get \begin{align*} J(x_0,\tilde{\bu},\tilde{\bv})-J(x_0,\tilde{\bu}^*,\tilde{\bv}^*)&\geq \sum_{k=0}^K {\lambda_{k}^*}^\prime(x_{k}-x^*_{k})-\sum_{k=0}^{K-1}{\lambda_{k+1}^*}^\prime (x_{k+1}-x^*_{k+1}) +\sum_{k=0}^K {\mu_k^*}^\prime \left(h_k(x_k,\bv_k)-h_k(x_k^*,\bv_k^*)\right)\\ &={\lambda_0^*}^\prime (x_0-x_0^*)+\sum_{k=0}^K {\mu_k^*}^\prime \left(h_k(x_k,\bv_k)-h_k(x_k^*,\bv_k^*)\right)\\ &=\sum_{k=0}^K {\mu_k^*}^\prime h_k(x_k,\bv_k) \geq 0. \end{align*} The penultimate equality follows from the complementarity condition ${\mu_k^*}^\prime h_k(x_k^*,\bv_k^*)=0$ for $k\in \mathcal K$ and the initial condition $x_0=x_0^*$. The last inequality follows as the multipliers and constraints satisfy $\mu_k^*\geq0$ and $h_k(x_k,\bv_k)\geq 0$ for all $k\in \mathcal K$. The sufficient condition provided in Theorem <ref> is an adaptation of the Arrow-type sufficient condition <cit.> to a discrete-time setting with inequality constraints, and differs from nonlinear programming based methods ([Pearson and Sridhar, 1966]). §.§ Structure of OLPDG In this subsection we provide conditions under which the optimal solution of OCP provides an open-loop Nash equilibrium of NZDG. Toward this end, we have the following assumption.
The cost functions $\{P_k,~k\in \mathcal K\}$ associated with OCP satisfy the following conditions for every $i\in \mathcal N$ \begin{align} &\frac{\partial P_k}{\partial u_{k}^{i}} = \frac{\partial gu_{k}^{i}}{\partial u_{k}^{i}},~ \frac{\partial P_k}{\partial v_{k}^{i}} = \frac{\partial gv_{k}^{i}}{\partial v_{k}^{i}},~\frac{\partial P_k}{\partial {x}_{k}} = \frac{\partial g_{k}^{i}}{\partial {x}_{k}},~k\in \mathcal K\backslash \{K\}, \label{eq:gradcond1}\\ &\frac{\partial P_K}{\partial {x}_{K}} = \frac{\partial g_{K}^{i}}{\partial {x}_{K}},~ \frac{\partial P_K}{\partial v_{K}^i} = \frac{\partial g_{K}^{i}}{\partial v_{K}^i}. \label{eq:gradcond2} \end{align} Let Assumption <ref> hold true. Then, the cost functions of the OCP and NZDG satisfy the following relation for each player $i\in \mathcal N$: \begin{align} J(x_0,(\tilde{u}^i,\tilde{u}^{-i}),(\tilde{v}^i,\tilde{v}^{-i})) - J(x_0,(\tilde{w}^i,\tilde{u}^{-i}),(\tilde{z}^i,\tilde{v}^{-i})) &=J^{i}(x_0,(\tilde{u}^i,\tilde{u}^{-i}),(\tilde{v}^i,\tilde{v}^{-i}))\notag \\ &- J^{i}(x_0,(\tilde{w}^i,\tilde{u}^{-i}),(\tilde{z}^i,\tilde{v}^{-i})) \label{eq:Difference Condition} \end{align} $\forall \tilde{u}^i:=\{u^i_k\in U_k^i,~k\in \mathcal{K}\backslash \{K\}\},~\forall\tilde{w}^i:=\{w^i_k\in U_k^i,~k\in \mathcal{K}\backslash \{K\}\},~ \forall \tilde{v}^i:=\{v^i_k \in V_k^i,~k\in \mathcal K \}$ and $\forall \tilde{z}^i:=\{z^i_k \in V_k^i,~k\in \mathcal K \}$. From (<ref>) and the separable structure of the cost functions assumed in Assumption <ref>, we have for every $k\in \mathcal K\backslash \{K\}$ \begin{align} &\frac{\partial }{\partial u_k^i}\left(P_k(x_k,\mathbf{u}_k,\mathbf{v}_k)-g_k^i(x_k,\mathbf{u}_k,\mathbf{v}_k)\right) = \frac{\partial }{\partial u_k^i}\left(P_k(x_k,\mathbf{u}_k,\mathbf{v}_k)-gu_k^i(x_k,\mathbf{u}_k)\right) = 0, \\ &\frac{\partial }{\partial v_k^i}\left(P_k(x_k,\mathbf{u}_k,\mathbf{v}_k)-g_k^i(x_k,\mathbf{u}_k,\mathbf{v}_k)\right)= \frac{\partial }{\partial v_k^i}\left(P_k(x_k,\mathbf{u}_k,\mathbf{v}_k)-gv_k^i(x_k,\mathbf{v}_k)\right) = 0, \\ &\frac{\partial }{\partial x_{k}}\left(P_k(x_k,\mathbf{u}_k,\mathbf{v}_k)-g_k^i(x_k,\mathbf{u}_k,\mathbf{v}_k) \right) = 0. \end{align} From (<ref>) we have \begin{align} &\frac{\partial }{\partial x_{K}}\left(P_K(x_K,\mathbf{v}_K)-g_K^i(x_K,\mathbf{v}_K) \right) = 0, \\ & \frac{\partial }{\partial v_{K}^i}\left(P_K(x_K,\mathbf{v}_K)-g_K^i(x_K,\mathbf{v}_K) \right) = 0. \end{align} Clearly, from (<ref>) it follows that the difference function $P_k(x_k,\bu_k,\bv_k)-g_k^i(x_k,\bu_k,\bv_k)$ is independent of (that is, does not contain) the variables $x_k$, $u_k^i$ and $v_k^i$. Similarly, from (<ref>) it follows that the difference function $P_K(x_K,\bv_K)-g_K^i(x_K,\bv_K)$ does not contain the variables $x_K$ and $v_K^i$. This implies that these difference functions can be expressed as \begin{align} & P_k( {x}_k,\mathbf{u}_k,\mathbf{v}_k)-g_k^i( {x}_k,\mathbf{u}_k,\mathbf{v}_k) = \Theta^i_k(u_k^{-i},v_{k}^{-i}),~ \forall k \in \mathcal K \backslash \{K\},\\ & P_K( {x}_K,\mathbf{v}_K)-g_K^i( {x}_K,\mathbf{v}_K) = \Theta^i_K(v_K^{-i}), \end{align} $\forall u_k^i \in U_k^i$ and $\forall v_k^i \in V_k^i$.
Since (<ref>) is satisfied by every $u_k^i \in U_k^i$ and $v_k^i \in V_k^i$, for any $u_k^i,w_k^i \in U_k^i$ and $v_k^i,z_k^i \in V_k^i$ we obtain \begin{align*} P_k(x_k,(u_k^i,u_k^{-i}),(v_k^i,v_k^{-i}))-g_k^i(x_k,(u_k^i,u_k^{-i}),(v_k^i,v_k^{-i})) &=P_k(x_k,(w_k^i,u_k^{-i}),(z_k^i,v_k^{-i}))\\&-g_k^i(x_k,(w_k^i,u_k^{-i}),(z_k^i,v_k^{-i})),\\ P_K(x_K,(v_K^i,v_K^{-i}))-g_K^i(x_K,(v_K^i,v_K^{-i})) &= P_K(x_K,(z_K^i,v_K^{-i}))-g_K^i(x_K,(z_K^i,v_K^{-i})). \end{align*} Upon rearranging the above equations we get \begin{align} P_k(x_k,(u_k^i,u_k^{-i}),(v_k^i,v_k^{-i}))- P_k(x_k,(w_k^i,u_k^{-i}),(z_k^i,v_k^{-i})) &=g_k^i(x_k,(u_k^i,u_k^{-i}),(v_k^i,v_k^{-i}))\notag\\&-g_k^i(x_k,(w_k^i,u_k^{-i}),(z_k^i,v_k^{-i})), \label{eq:instdiff}\\ P_K(x_K,(v_K^i,v_K^{-i}))- P_K(x_K,(z_K^i,v_K^{-i})) &=g_K^i(x_K,(v_K^i,v_K^{-i}))-g_K^i(x_K,(z_K^i,v_K^{-i})). \label{eq:termdiff} \end{align} Taking the summation of (<ref>) over all time steps $k \in \mathcal{K}\backslash \{K\}$ and using (<ref>), we obtain (<ref>). Lemma <ref> provides the dynamic counterpart of the principle of exact potential games introduced by Monderer and Shapley <cit.>. Let Assumptions <ref> and <ref> hold true. Let the admissible pair $(\tilde{\bu}^*,\tilde{\bv}^*)$ be the optimal solution of OCP. Then $(\tilde{\bu}^*,\tilde{\bv}^*)$ is an open-loop Nash equilibrium of NZDG, that is, NZDG is an OLPDG with potential functions $\{P_k,~k\in \mathcal K\}$. Let $\{x^*_k,~k\in \mathcal K\}$ be the state trajectory generated by $\tilde{\bu}^*$. As $(\tilde{\bu}^*,\tilde{\bv}^*)$ is optimal for OCP, there exist co-states $\{\lambda^*_k,~k\in \mathcal K\}$ and multipliers $\{\mu^*_k,~k\in \mathcal K\}$ such that the necessary conditions (<ref>) hold true. Expanding these conditions in terms of the cost functions we get \begin{align} \intertext{for $~k\in \mathcal K\backslash \{K\}$} \label{eq:OCP Cond1} & \frac{\partial P_k}{\partial u_{k}^i}(x^*_k,\bu_k^*,\bv_k^*)+\left(\frac{\partial f_k}{\partial u_k^i}(x^*_k,\bu_k^*)\right)^\prime \lambda_{k+1}^{*} = 0,~i\in \mathcal N, \\ \label{eq:OCP Cond2} & x_{k+1}^{*} = f_k(x^*_k,\bu_k^*),\quad x_0^*=x_0, \\ \label{eq:OCP Cond3} & \lambda_k^*=\frac{\partial P_k}{\partial x_{k}}(x^*_k,\bu_k^*,\bv_k^*)+\left(\frac{\partial f_k}{\partial x_k}(x^*_k,\bu_k^*)\right)^{\prime} \lambda_{k+1}^*-\left(\frac{\partial h_k}{\partial x_k}(x_k^*,\bv_k^*)\right)^\prime \mu_k^*, \\ \label{eq:OCP Cond4} & \lambda_{K}^{*}= \frac{\partial P_K}{\partial x_{K}}(x_K^*,\bv_K^*)-\left(\frac{\partial h_K}{\partial x_K}(x^*_K,\bv_K^*)\right)^\prime \mu_{K}^{*}, \\ \label{eq:OCP Cond5} & 0 \leq \left(\frac{\partial P_k}{\partial v_{k}^i}(x^*_k,\bu_k^*,\bv_k^*)-\left(\frac{\partial h_k}{\partial v_k^i}(x^*_k,\bv_k^*) \right)^\prime \mu_{k}^{*}\right) \perp v_k^{i^{*}} \geq 0,~~i\in \mathcal N,\\ \label{eq:OCP Cond6} & 0 \leq \left(\frac{\partial P_K}{\partial v_{K}^i}(x^*_K,\bv_K^*)-\left(\frac{\partial h_K}{\partial v_K^i}(x^*_K,\bv_K^*)\right)^\prime \mu_{K}^{*}\right) \perp v_K^{i^{*}} \geq 0,~i\in \mathcal N, \\ \text{for}& ~k\in \mathcal K, ~ 0\leq h_k(x^*_k,\bv_k^*)\perp \mu_k^* \geq 0. \label{eq:OCP Cond7} \end{align} Next, we write $\bu_k^*=(u_k^{i*},u_k^{-i*})$ and $\bv_k^*=(v_k^{i*},v_k^{-i*})$ and for each player $i\in \mathcal N$ we define the multipliers \begin{align} {\Lambda_k^{i*}}:=\lambda_k^*,~k\in \mathcal K,~\delta_k^{i*}:=\mu_k^*,~k\in \mathcal K. \label{eq:samemultipliers} \end{align} Now from Assumption <ref> and using the above notation we write (<ref>) as follows.
\begin{align} \text{for} &~k\in \mathcal K\backslash \{K\} \notag\\ &\frac{\partial g_k^i(x^*_{k},(u_{k}^{i},u_{k}^{-i*}),(v_{k}^{i},v_{k}^{-i*}))}{\partial u_k^i}\Big|_{u_k^i = u_k^{i*}}+\left(\frac{\partial f_k(x^*_{k},(u_{k}^{i},u_{k}^{-i*}))}{\partial u_k^i}\Big|_{u_k^i = u_k^{i*}}\right)^{\prime}\Lambda_{k+1}^{i*} = 0, \label{eq:OLNEcond1} \\ & x_{k+1}^* = f_k(x_{k}^{*},\mathbf{u}_{k}^{*}),~x_0^*=x_0, \label{eq:OLNEcond2} \\ \Lambda_k^{i*} &= \frac{\partial g_k^i(x_{k},(u_{k}^{i*},u_{k}^{-i*}),(v_{k}^{i*},v_{k}^{-i*}))}{\partial x_k}\Big|_{x_k = x_k^{*}}+\left(\frac{\partial f_k(x_{k},(u_{k}^{i*},u_{k}^{-i*}))}{\partial x_k}\Big|_{x_k = x_k^{*}}\right)^{\prime}\Lambda_{k+1}^{i*}\notag \\ &\qquad-\left(\frac{\partial h_k(x_{k},(v_{k}^{i*},v_{k}^{-i*}))}{\partial x_k}\Big|_{x_k = x_k^{*}}\right)^{\prime}\delta_{k}^{i*},\\ \Lambda_K^{i*}& = \frac{\partial g_K^i(x_{K},(v_{K}^{i*},v_{K}^{-i*}))}{\partial x_K}\Big|_{x_K = x_K^{*}} -\left(\frac{\partial h_K(x_{K},(v_{K}^{i*},v_{K}^{-i*}))}{\partial x_K}\Big|_{x_K = x_K^{*}}\right)^{\prime}\delta_K^{i*},\\ &0 \leq \frac{\partial g_k^i(x^*_{k},(v_{k}^{i},v_{k}^{-i*}))}{\partial v_k^i}\Big|_{v_k^i =v_k^{i*}} -\left(\frac{\partial h_k(x^*_{k},(v_{k}^{i},v_{k}^{-i*}))}{\partial v_k^i}\Big|_{v_k^i = v_k^{i*}}\right)^{\prime}\delta_k^{i*} \perp v_k^{i*} \geq 0,\\ & 0 \leq \frac{\partial g_K^i(x^*_{K},(v_{K}^{i},v_{K}^{-i*}))}{\partial v_K^i}\Big|_{v_K^i =v_K^{i*}}-\left(\frac{\partial h_K(x^*_{K},(v_{K}^{i},v_{K}^{-i*}))}{\partial v_K^i}\Big|_{v_K^i = v_K^{i*}}\right)^{\prime}\delta_K^{i*} \perp v_K^{i*} \geq 0,\\ \text{for} &~k\in \mathcal K, ~ 0 \leq h_{k}(x_{k}^*,\mathbf{v}_{k}^*) \perp \delta_k^{i*} \geq 0. \end{align} Next, we consider Player $i$'s optimal control problem (<ref>) when the players in $-i=\mathcal N\backslash \{i\}$ use the strategies $(\tilde{u}^{-i*},\tilde{v}^{-i*})$. Taking the co-state vectors as $\{\Lambda_k^i,~k\in \mathcal K\}$ and the Lagrange multipliers as $\{\delta_k^i,~k\in \mathcal K\}$, the instantaneous and terminal Lagrangian functions associated with Player $i$'s problem are given by \begin{align} &\mathcal L _k^i(x_k,(u_k^i,u_k^{-i*}),(v_k^{i},v_k^{-i*}),\Lambda_{k+1}^{i},\delta_k^{i})=g^i_k(x_k,(u_k^i,u_k^{-i*}),(v_k^{i},v_k^{-i*})) +{\Lambda^i_{k+1}}^\prime f_k(x_k,(u_k^i,u_k^{-i*}))-{\delta^i_k}^\prime h_k(x_k,(v_k^{i},v_k^{-i*})),\\ & \cL^i_K(x_K,(v_K^{i},v_K^{-i*}),\delta_K^{i})=g^i_K(x_K,(v_K^{i},v_K^{-i*}))-{\delta^i_K}^\prime h_K(x_K,(v_K^{i},v_K^{-i*})).
\end{align} Representing the equations (<ref>) in terms of the instantaneous and terminal Lagrangian functions (<ref>) associated with Player $i$'s optimal control problem (<ref>), we get \begin{align} \text{for } &k\in \mathcal K\backslash \{K\}\notag \\ &u_k^{i*}=\argmin_{u^i_k\in U^i_k}\mathcal L^i_k(x^*_k,(u_k^i,u_k^{-i*}),(v_k^{i*},v_k^{-i*}),\Lambda_{k+1}^{i*},\delta_k^{i*}), \\ &x^*_{k+1}=\frac{\partial \cL^i_k}{\partial \Lambda^{i}_{k+1}}(x^*_k,(u_k^{i*},u_k^{-i*}),(v_k^{i*},v_k^{-i*}),\Lambda_{k+1}^{i*},\delta_k^{i*}),\quad x^*_0=x_0,\\ &\Lambda^{i*}_k=\frac{\partial \mathcal L^i_k}{\partial x_k}(x^*_k,(u_k^{i*},u_k^{-i*}),(v_k^{i*},v_k^{-i*}),\Lambda_{k+1}^{i*},\delta_k^{i*}),\\& \Lambda^{i*}_K=\frac{\partial \cL^i_K}{\partial x_K}(x_K^*,(v_K^{i*},v_K^{-i*}),\delta_K^{i*}),\\ &0\leq \frac{\partial \mathcal L^i_k}{\partial v^i_k} (x^*_k,(u_k^{i*},u_k^{-i*}),(v_k^{i*},v_k^{-i*}),\Lambda_{k+1}^{i*},\delta_k^{i*}) \perp v^{i*}_k \geq 0,\\ &0\leq \frac{\partial \cL^i_K}{\partial v^i_K} (x^*_K,(v_K^{i*},v_K^{-i*}),\delta_K^{i*}) \perp v^{i*}_K \geq 0,\\ \text{for } &k\in \mathcal K,~ 0\leq h_k(x_k^*,(v_k^{i*},v_k^{-i*})) \perp \delta_k^{i*} \geq 0. \end{align} Clearly, (<ref>) constitute the necessary conditions for optimality associated with Player $i$'s optimal control problem (<ref>), where the strategies of the players in $-i$ are fixed at $(\tilde{u}^{-i*},\tilde{v}^{-i*})$. In other words, $(\tilde{u}^{i*},\tilde{v}^{i*})$ is a candidate best response to $(\tilde{u}^{-i*},\tilde{v}^{-i*})$. Next, we show that the strategy $(\tilde{u}^{i*},\tilde{v}^{i*})$ indeed minimizes the objective $J^i(x_0,(\tilde{u}^{i},\tilde{u}^{-i*}),(\tilde{v}^{i},\tilde{v}^{-i*}))$ subject to the dynamics $x_{k+1}=f_k(x_k,(u_k^i,u_k^{-i*}))$, $k\in \mathcal K \backslash \{K\}$, and the constraints $h_k(x_k,(v_k^i,v_k^{-i*}))\geq 0$, $v_k^i\geq 0$, $k\in \mathcal K$. Since $(\mathbf{\tilde{u}^*,\tilde{v}^*})$ is an optimal solution of the OCP, the following inequality holds true \begin{align} J(x_0,(\tilde{u}^{i*},\tilde{u}^{-i*}),(\tilde{v}^{i*},\tilde{v}^{-i*})) \leq J(x_0,(\tilde{u}^{i},\tilde{u}^{-i*}),(\tilde{v}^{i},\tilde{v}^{-i*})),~\forall (\tilde{u}^i,\tilde{v}^i) \label{eq:OCP max} \end{align} with $\tilde{v}^i$ satisfying the constraints $h_k( x_k ,({v}_k^i,{v}_k^{-i*}))\geq 0$ $\forall k \in \mathcal{K}$, where $ x_k $ is the state trajectory generated by the action profile $(\tilde{u}^i,\tilde{u}^{-i*})$. From Lemma <ref>, we have \begin{multline*} J(x_0,(\tilde{u}^{i*},\tilde{u}^{-i*}),(\tilde{v}^{i*},\tilde{v}^{-i*})) - J(x_0,(\tilde{u}^{i},\tilde{u}^{-i*}),(\tilde{v}^{i},\tilde{v}^{-i*}))\\ = J^i(x_0,(\tilde{u}^{i*},\tilde{u}^{-i*}),(\tilde{v}^{i*},\tilde{v}^{-i*}))- J^i(x_0,(\tilde{u}^{i},\tilde{u}^{-i*}),(\tilde{v}^{i},\tilde{v}^{-i*})). \end{multline*} Using the above observation in (<ref>) we get \begin{align} J^i(x_0,(\tilde{u}^{i*},\tilde{u}^{-i*}),(\tilde{v}^{i*},\tilde{v}^{-i*})) \leq J^i(x_0,(\tilde{u}^{i},\tilde{u}^{-i*}),(\tilde{v}^{i},\tilde{v}^{-i*})) \quad \forall ((\tilde{u}^{i},\tilde{u}^{-i*}),(\tilde{v}^{i},\tilde{v}^{-i*})) \end{align} with $\tilde{v}^i$ satisfying the constraints $h_k( x_k ,({v}_k^i,{v}_k^{-i*}))\geq 0$ $\forall k \in \mathcal{K}$. This implies that $(\tilde{u}^{i*},\tilde{v}^{i*})$ is indeed a best response to $(\tilde{u}^{-i*},\tilde{v}^{-i*})$. So, $(\mathbf{\tilde{u}^*,\tilde{v}^*})$ is an open-loop Nash equilibrium of NZDG.
Following Definition <ref>, NZDG is an OLPDG with potential functions $\{P_k,~k\in \mathcal K\}$. Nash equilibria correspond to fixed points of the best-response mapping defined over the joint strategy sets of the players, and as a result there can exist more than one equilibrium in a non-cooperative game. When the OCP associated with OLPDG has an optimal solution, then from Theorem <ref> this solution is an open-loop Nash equilibrium of NZDG, thereby providing a refinement of the open-loop Nash equilibria. This implies that the solution of the OCP provides a way of selecting one among possibly many open-loop Nash equilibria. Notice that the constraints in (<ref>) are coupled. Rosen in [Rosen, 1965] studied non-cooperative games with coupled constraints. The equilibria in these games are referred to as normalized Nash equilibria, and are characterized by the property that the multiplier vector associated with the constraints in each player's individual optimization problem is collinear with a common multiplier vector. In our work, we observe a similar feature by construction, that is, in (<ref>) the obtained open-loop Nash equilibrium has the property that the associated multipliers and co-state variables are the same for all players. §.§ Construction of potential cost functions In section <ref>, a sufficient condition is provided to verify if NZDG is an OLPDG given the potential cost functions $\{P_k,~k\in \mathcal K\}$ of the OCP. However, in practice these functions are not available beforehand. In these settings, it is desirable to construct potential functions using the players' cost functions. This construction procedure involves two steps: first, verify that NZDG is an OLPDG, and then construct the potential functions. Toward this end, we recall the following necessary and sufficient condition for the existence of conservative vector fields from multi-variable calculus ([Apostol, 1969]). Let $\Omega$ be a convex open subset of $\mathbb R^n$. Let $F:\Omega \rightarrow \mathbb R^n$ be a vector field with continuous derivatives defined over $\Omega$. The following conditions on $F$ are equivalent. a. There exists a scalar potential function $\Pi:\Omega\rightarrow \mathbb R$ such that $F(\omega)=\nabla \Pi(\omega)$ for all $\omega \in \Omega$, where $\nabla$ denotes the gradient operator. b. The partial derivatives satisfy \begin{align} \frac{\partial [F(\omega)]_i}{\partial [\omega]_j} = \frac{\partial [F(\omega)]_j}{\partial [\omega]_i},~\forall \omega \in \Omega,~i,j=1,2,\cdots,n. \label{eq:symcond} \end{align} c. Let $a$ be a fixed point in $\Omega$, and $\mathcal{C}\subset \Omega$ be a piecewise smooth curve joining $a$ with an arbitrary point $\omega \in \Omega$. Then the potential function $\Pi$ satisfies \begin{align} \Pi(\omega) -\Pi(a)=\int_\mathcal{C} F(\omega)\bigcdot d\omega = \int_0^1 F(\alpha(z)) \bigcdot \frac{\partial \alpha}{\partial z} ~dz, \end{align} where $\bigcdot$ is the dot product, and $\alpha: [0, 1] \rightarrow \mathcal{C}$ is a bijective parametrization of the curve $\mathcal{C}$ such that $\alpha(0)=a$ and $\alpha(1)=\omega$. A vector field $F:\Omega\rightarrow \mathbb R^n$ satisfying these conditions is called a conservative vector field. See <cit.>. A consequence of condition (<ref>) is that the Jacobian matrix of the vector field $F:\Omega \rightarrow \mathbb R^n$ evaluated at $\omega \in \Omega $ is symmetric for all $\omega\in \Omega$. Let Assumption <ref>.(1) hold true.
Let the players' cost functions satisfy the following conditions, for all $i,j\in \mathcal N$, \begin{align} & \frac{\partial^2 g_k^i}{\partial (u_k^j)^\prime \partial u_k^i} =\left( \frac{\partial^2 g_k^j}{\partial (u_k^i)^\prime \partial u_k^j}\right)^\prime,~k\in \mathcal K\backslash \{K\}, \label{eq:ux}\\ &\frac{\partial g_k^i}{\partial x_k} = \frac{\partial g_k^j}{\partial x_k} \label{eq: partial x},~k\in \mathcal K,\\ & \frac{\partial^2 g_k^i}{\partial (v_k^j)^\prime \partial v_k^i} = \left(\frac{\partial^2 g_k^j}{\partial (v_k^i)^\prime \partial v_k^j}\right)^\prime,~k\in \mathcal K; \label{eq:vxx} \end{align} then NZDG is an OLPDG. Let the vector fields $F_k({x}_k,\mathbf{u}_k,\mathbf{v}_k)$ at $(x_k,\bu_k,\bv_k)\in {\Omega}_k$, where ${\Omega}_k := {X}_k \times \prod_{i \in \mathcal{N}}U_k^i \times \prod_{i \in \mathcal{N}}V_k^i$, of dimension $(n+m+s) \times 1$, and $F_K({x}_K,\mathbf{v}_K)$ at $(x_K,\bv_K)\in {\Omega}_K$, where $\Omega_K:= {X}_K \times \prod_{i \in \mathcal{N}}V_K^i$, of dimension $(n+s) \times 1$, be defined by \begin{align*} &F_k({x}_k,\bu_k,\bv_k) = \begin{bmatrix}\frac{\partial g_k^1}{\partial {x}_k}^\prime & \frac{\partial g_k^1}{\partial u_k^1}^\prime & \cdots & \frac{\partial g_k^N}{\partial u_k^N}^\prime & \frac{\partial g_k^1}{\partial v_k^1}^\prime& \cdots & \frac{\partial g_k^N}{\partial v_k^N}^\prime \end{bmatrix}^\prime,~ k \in \mathcal{K}\backslash \{K\}, \\ &F_K(x_K,\bv_K) = \begin{bmatrix}\frac{\partial g_K^1}{\partial {x}_K}^\prime & \frac{\partial g_K^1}{\partial v_K^1}^\prime& \cdots & \frac{\partial g_K^N}{\partial v_K^N}^\prime \end{bmatrix}^\prime; \end{align*} then the instantaneous and terminal potential functions are given by \begin{align} &P_k(x_k,\bu_k,\bv_k) =c_k+\int_0^1 F_k(\alpha_k(z)) \bigcdot \frac{\partial \alpha_k(z)}{\partial z}dz,~k\in \mathcal K\backslash \{K\}, \label{eq:Pfk}\\ &P_K(x_K,\bv_K) =c_K+\int_0^{1} F_K(\alpha_K(z)) \bigcdot \frac{\partial \alpha_K(z)}{\partial z}dz, \label{eq:PfK} \end{align} where $\alpha_k:[0,1]\rightarrow \mathcal{C}_k$ ($k\in \mathcal K$) is a bijective parametrization of a piecewise smooth curve $\mathcal{C}_k\subset {\Omega}_k$ such that $\alpha_k(0)= (x_{0k},\bu_{0k},\bv_{0k})$, $\alpha_k(1)=(x_{k},\bu_{k},\bv_{k})$ for $k\in\mathcal K \backslash \{K\}$, $\alpha_K(0)= (x_{0K},\bv_{0K})$, $\alpha_K(1)=(x_{K},\bv_{K})$, and $\{c_k,~k\in \mathcal K\}$ are constants. We write the vector field $F_k$, $k \in \mathcal{K} \backslash \{K\}$, as $F_k = \begin{bmatrix}{F_k^x}^\prime & {F_k^u}^\prime & {F_k^v}^\prime \end{bmatrix}^\prime_{(n+m+s) \times 1} $ such that \begin{align*} & F_k^x = \left[\frac{\partial g_k^1}{\partial x_k }\right]_{n \times 1}, ~ F_k^u =\begin{bmatrix} \frac{\partial g_k^1}{\partial u_k^1}^\prime & \frac{\partial g_k^2}{\partial u_k^2}^\prime & \hdots & \frac{\partial g_k^N}{\partial u_k^N}^\prime \end{bmatrix}^\prime_{m \times 1},\\& F_k^v = \begin{bmatrix} \frac{\partial g_k^1}{\partial v_k^1}^\prime & \frac{\partial g_k^2}{\partial v_k^2}^\prime & \hdots & \frac{\partial g_k^N}{\partial v_k^N}^\prime \end{bmatrix}^\prime_{s \times 1}. \end{align*} Let $\mathbf{w}_k$, $k \in \mathcal{K} \backslash \{K\}$, be the $(n+m+s) \times 1$ vector given by $\mathbf{w}_k = \begin{bmatrix} x_k^\prime & \bu_k^\prime & \bv_k^\prime \end{bmatrix}^\prime$.
Therefore, we can write the Jacobian matrix $ \mathcal{J}_k$, $k \in \mathcal{K} \backslash\{K\}$, as \begin{align} \label{eq:Jacobiank} \mathcal{J}_k = \frac{\partial F_k}{\partial (\mathbf{w}_k)^\prime } & = \begin{bmatrix}\frac{\partial F_k^x}{\partial (x_k)^\prime} & \frac{\partial F_k^x}{\partial (\bu_k)^\prime} & \frac{\partial F_k^x}{\partial (\bv_k)^\prime} \\[0.2cm] \frac{\partial F_k^u}{\partial (x_k)^\prime} & \frac{\partial F_k^u}{\partial (\bu_k)^\prime} & \frac{\partial F_k^u}{\partial (\bv_k)^\prime} \\[0.2cm] \frac{\partial F_k^v}{\partial (x_k)^\prime} & \frac{\partial F_k^v}{\partial (\bu_k)^\prime} & \frac{\partial F_k^v}{\partial (\bv_k)^\prime} \end{bmatrix}_{(n+m+s) \times (n+m+s)}, \end{align} where \begin{align*} &\frac{\partial F_k^x}{\partial (x_k)^\prime } = \left[ \frac{\partial^2 g_k^1}{\partial (x_k)^\prime \partial x_k}\right]_{n \times n },~\frac{\partial F_k^u }{\partial (x_k)^\prime } = \begin{bmatrix} \frac{\partial^2 g_k^1}{\partial (x_k)^\prime \partial u_k^1} \\[0.2cm] \frac{\partial^2 g_k^2}{\partial (x_k)^\prime \partial u_k^2} \\ \vdots \\ \frac{\partial^2 g_k^N}{\partial (x_k)^\prime \partial u_k^N} \end{bmatrix}_{m \times n }, ~\frac{\partial F_k^v }{\partial (x_k)^\prime } = \begin{bmatrix} \frac{\partial^2 g_k^1}{\partial (x_k)^\prime \partial v_k^1} \\[0.2cm] \frac{\partial^2 g_k^2}{\partial (x_k)^\prime \partial v_k^2} \\ \vdots \\ \frac{\partial^2 g_k^N}{\partial (x_k)^\prime \partial v_k^N} \end{bmatrix}_{s \times n }, \end{align*} \begin{align*} & \frac{\partial F_k^x}{\partial (\bu_k)^\prime } = \begin{bmatrix} \frac{\partial^2 g_k^1}{\partial (u_k^1)^\prime \partial x_k }& \frac{\partial^2 g_k^1}{\partial (u_k^2)^\prime \partial x_k } & \hdots & \frac{\partial^2 g_k^1}{\partial (u_k^N)^\prime \partial x_k } \end{bmatrix}_{n \times m},~\frac{\partial F_k^x}{\partial (\bv_k)^\prime } = \begin{bmatrix} \frac{\partial^2 g_k^1}{\partial (v_k^1)^\prime \partial x_k }& \frac{\partial^2 g_k^1}{\partial (v_k^2)^\prime \partial x_k } & \hdots & \frac{\partial^2 g_k^1}{\partial (v_k^N)^\prime \partial x_k }\end{bmatrix}_{n \times s}, \end{align*} \begin{align*} & \frac{\partial F_k^u}{\partial (\bu_k)^\prime } = \begin{bmatrix} \frac{\partial^2 g_k^1}{\partial (u_k^1)^\prime \partial u_k^{1}} & \frac{\partial^2 g_k^1}{\partial (u_k^2)^\prime \partial u_k^{1}}& \hdots & \frac{\partial^2 g_k^1}{\partial (u_k^N)^\prime \partial u_k^{1}} \\[0.2cm] \frac{\partial^2 g_k^2}{\partial (u_k^{1})^\prime \partial u_k^2} & \frac{\partial^2 g_k^2}{\partial (u_k^2)^\prime \partial u_k^{2}}& \hdots & \frac{\partial^2 g_k^2}{\partial (u_k^N)^\prime \partial u_k^{2}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 g_k^N}{\partial (u_k^{1})^\prime \partial u_k^N} & \frac{\partial^2 g_k^N}{\partial (u_k^{2})^\prime \partial u_k^N}& \hdots & \frac{\partial^2 g_k^N}{\partial (u_k^{N})^\prime \partial u_k^N} \end{bmatrix}_{m \times m},~\frac{\partial F_k^v}{\partial (\bu_k)^\prime} = \begin{bmatrix} \frac{\partial^2 g_k^1}{\partial (u_k^{1})^\prime \partial v_k^1 }& \frac{\partial^2 g_k^1}{\partial (u_k^2)^\prime \partial v_k^{1}}& \hdots & \frac{\partial^2 g_k^1}{\partial (u_k^N)^\prime \partial v_k^{1}} \\[0.2cm] \frac{\partial^2 g_k^2}{\partial (u_k^{1})^\prime \partial v_k^2} & \frac{\partial^2 g_k^2}{\partial (u_k^{2})^\prime \partial v_k^2 }& \hdots & \frac{\partial^2 g_k^2}{\partial (u_k^N)^\prime \partial v_k^{2}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 g_k^N}{\partial (u_k^{1})^\prime \partial v_k^N} & \frac{\partial^2 g_k^N}{\partial (u_k^{2})^\prime \partial v_k^N}& \hdots & \frac{\partial^2 g_k^N}{\partial (u_k^{N})^\prime \partial v_k^N} \end{bmatrix}_{s \times m}, \end{align*}
(u_k^{2})^\prime \partial v_k^N}& \hdots & \frac{\partial^2 g_k^N}{\partial (u_k^{N})^\prime \partial v_k^N} \end{bmatrix}_{s \times m}, \end{align*} \begin{align*} &\frac{\partial F_k^u}{\partial (\bv_k)^\prime } = \begin{bmatrix} \frac{\partial^2 g_k^1}{\partial (v_k^{1})^\prime \partial u_k^1 }& \frac{\partial^2 g_k^1}{\partial (v_k^2)^\prime \partial u_k^{1}}& \hdots & \frac{\partial^2 g_k^1}{\partial (v_k^N)^\prime \partial u_k^{1}} \\[0.2cm] \frac{\partial^2 g_k^2}{\partial (v_k^{1})^\prime \partial u_k^2} & \frac{\partial^2 g_k^2}{\partial (v_k^{2})^\prime \partial u_k^2 }& \hdots & \frac{\partial^2 g_k^2}{\partial (v_k^N)^\prime \partial u_k^{2}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 g_k^N}{\partial (v_k^{1})^\prime\partial u_k^N} & \frac{\partial^2 g_k^N}{\partial (v_k^{2})^\prime \partial u_k^N}& \hdots & \frac{\partial^2 g_k^N}{\partial (v_k^{N})^\prime \partial u_k^N} \end{bmatrix}_{m \times s},~\frac{\partial F_k^v}{\partial (\bv_k)^\prime } = \begin{bmatrix} \frac{\partial^2 g_k^1}{\partial (v_k^1)^\prime \partial v_k^{1}} & \frac{\partial^2 g_k^1}{\partial (v_k^2)^\prime \partial v_k^{1}}& \hdots & \frac{\partial^2 g_k^1}{\partial (v_k^N)^\prime \partial v_k^{1}} \\ \frac{\partial^2 g_k^2}{\partial (v_k^{1})^\prime \partial v_k^2} & \frac{\partial^2 g_k^2}{\partial (v_k^2)^\prime \partial v_k^{2}}& \hdots & \frac{\partial^2 g_k^2}{\partial (v_k^N)^\prime \partial v_k^{2}} \\[0.2cm] \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 g_k^N}{\partial (v_k^{1})^\prime \partial v_k^N} & \frac{\partial^2 g_k^N}{\partial (v_k^{2})^\prime \partial v_k^N}& \hdots & \frac{\partial^2 g_k^N}{\partial (v_k^N)^\prime \partial v_k^{N}} \end{bmatrix}_{s \times s}. \end{align*} Firstly, as $\frac{\partial F_k^x}{\partial (x_k)^\prime } = \frac{\partial^2 g_k^1}{\partial (x_k)^\prime \partial x_k}$ we have that $\frac{\partial F_k^x}{\partial (x_k)^\prime }$ is a symmetric matrix. The off-diagonal block matrices in $\frac{\partial F_u}{\partial (\bu_k)^\prime}$ satisfy (<ref>) and this implies $\frac{\partial F_u}{\partial (\bu_k)^\prime}$ is symmetric matrix. Similarly, from (<ref>), we have that $\frac{\partial F_v}{\partial (\bv_k)^\prime}$ is also a symmetric matrix. Next, from (<ref>), $ \frac{\partial g_k^i}{\partial x_k} = \frac{\partial g_k^j}{\partial x_k}$, and as cost functions are twice continuously differentiable, we have $\frac{\partial^2 g_k^i}{\partial (u_k^j)^\prime \partial x_k} = \frac{\partial^2 g_k^j}{\partial (u_k^j)^\prime \partial x_k}$. Next, from the symmetry property of mixed partials we get $\frac{\partial^2 g_k^j}{\partial (u_k^j)^\prime \partial x_k} = \left(\frac{\partial^2 g_k^j}{\partial (x_k)^ \prime \partial u_k^j}\right)^\prime $, and then using (<ref>) we have $ \frac{\partial^2 g_k^i}{\partial (u_k^j)^\prime \partial x_k} = \left( \frac{\partial^2 g_k^j}{\partial (x_k)^\prime \partial u_k^j}\right)^\prime$, which implies $\frac{\partial F_k^x}{\partial (\bu_k)^\prime} = \left(\frac{\partial F_k^u}{\partial (x_k)^\prime} \right) ^\prime $. Again, using similar arguments we can show that $\frac{\partial F_k^x}{\partial (\bv_k)^\prime } = \left(\frac{\partial F_k^v}{\partial (x_k)^\prime } \right) ^\prime$. 
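The block-symmetry argument can also be verified numerically for a concrete specification. The following sketch is ours, not part of the paper's formal development; the two-player quadratic utilities and all variable names are illustrative assumptions. It assembles the stacked gradient field $F_k$ and checks that a finite-difference Jacobian is symmetric, the hallmark of a conservative field:

```python
import numpy as np

# Hypothetical 2-player quadratic utilities g^i = 0.5*x'Qx + 0.5*u'R^i u with
# matching off-diagonal blocks [R^1]_{12} = ([R^2]_{21})', so the cross
# conditions of the theorem hold.
n, m1, m2 = 2, 2, 2
Q = np.eye(n)
R1 = np.array([[2.0, 0.5, 0.3, 0.1],
               [0.5, 2.0, 0.2, 0.4],
               [0.3, 0.2, 1.0, 0.0],
               [0.1, 0.4, 0.0, 1.0]])  # symmetric
R2 = R1.copy()                          # shared cross blocks

def F(w):
    """Stacked player-wise gradients F = [dg1/dx; dg1/du1; dg2/du2]."""
    x, u = w[:n], w[n:]
    return np.concatenate([Q @ x, (R1 @ u)[:m1], (R2 @ u)[m1:]])

rng = np.random.default_rng(0)
w0 = rng.standard_normal(n + m1 + m2)
eps = 1e-6
J = np.column_stack([(F(w0 + eps * e) - F(w0 - eps * e)) / (2 * eps)
                     for e in np.eye(n + m1 + m2)])
assert np.allclose(J, J.T, atol=1e-6)  # symmetric Jacobian => conservative F
```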
From the separable structure of the cost functions we have $\frac{\partial^2 g_k^i}{\partial (v_k^j)^\prime \partial u_k^i} =\left( \frac{\partial^2 g_k^j}{\partial (u_k^i)^\prime \partial v_k^j}\right)^\prime = \mathbf{0}$, which implies $\frac{\partial F_k^u}{\partial (\bv_k)^\prime } = \left(\frac{\partial F_k^v}{\partial (\bu_k)^\prime } \right) ^\prime = \mathbf{0}$. From these observations we see that the Jacobian matrix (<ref>) is a symmetric matrix, and as a result $F_k$ is a conservative vector field for all $k\in \mathcal K \backslash \{K\}$. At the terminal instant we write $F_K = \begin{bmatrix}{F_K^x}^\prime & {F_K^v}^\prime \end{bmatrix}^\prime_{(n+s) \times 1}$ and define $\mathbf{w}_K = \begin{bmatrix} x_K^\prime & \bv_K^\prime \end{bmatrix}^\prime_{(n+s) \times 1}$. From (<ref>) and using the same reasoning as above, we can show that the Jacobian $ \mathcal{J}_K = \frac{\partial F_K}{\partial (\mathbf{w}_K)^\prime } = \begin{bmatrix}\frac{\partial F_K^x}{\partial (x_K)^\prime} & \frac{\partial F_K^x}{\partial (\bv_K)^\prime } \\[0.2cm] \frac{\partial F_K^v}{\partial (x_K)^\prime} & \frac{\partial F_K^v}{\partial (\bv_K)^\prime } \end{bmatrix}_{(n+s) \times (n+s)}$ is a symmetric matrix. This implies that $F_K$ is a conservative vector field. Since $\alpha_k:[0,1]\rightarrow \mathcal C_k \subset \Omega_k$ is a bijective parametrization of a piecewise smooth path connecting a fixed point $(x_{0k},\bu_{0k},\bv_{0k})\in \Omega_k$ to an arbitrary point $(x_k,\bu_k,\bv_k)\in \Omega_k$, it follows from Lemma <ref> that the instantaneous potential function satisfies \begin{align*} P_k(x_k,\bu_k,\bv_k)= c_k+\int_{0}^{1}F_k(\alpha_k(z)) \bigcdot \frac{\partial{\alpha_k(z)}}{\partial z}dz,~k \in \mathcal{K}\backslash \{K\}, \end{align*} where $c_{k}=P_k(x_{0k},\bu_{0k},\bv_{0k})$ is the value of the potential function at $(x_{0k},\bu_{0k},\bv_{0k})$. Similarly, the terminal potential function is given by \begin{align} P_K(x_K,\bv_K) = c_{K}+\int_{0}^{1}F_K(\alpha_K(z)) \bigcdot \frac{\partial{\alpha_K(z)}}{\partial z}dz, \notag \end{align} where $c_{K} = P_K(x_{0K},\bv_{0K})$. From (<ref>) and (<ref>), we note that the instantaneous and terminal potential functions at a given point are not unique, but are unique up to a constant, and depend upon the choice of the initial fixed points $\{\alpha_k(0),~k\in \mathcal K\}$. This implies that we obtain a family of potential functions and, as a result, several optimal control problems associated with the OLPDG. However, as the objective functions of these problems differ by a constant, they have the same optimal solution.

§ OPEN-LOOP LINEAR QUADRATIC POTENTIAL DIFFERENCE GAME

In this section, we specialize the results of the previous section to a linear quadratic setting and provide a numerical method for computing the open-loop Nash equilibrium associated with the OLPDG. Toward this end, we introduce the following $N$-player non-zero-sum finite-horizon linear quadratic difference game.
Each Player $i\in \mathcal N$ solves \begin{align} \text{NZDG1}:~& \min_{\tilde{u}^i,\tilde{v}^i} J^i(x_0,(\tilde{u}^i,\tilde{u}^{-i}),(\tilde{v}^{i},\tilde{v}^{-i})),\\ &\text{subject to}\notag \\ & x_{k+1}=A_kx_k +\sum_{i\in \mathcal N} B_k^i u_k^i,~k\in \mathcal K\backslash \{K\},~ x_0 \text{ (given)}, \label{eq:LQStatedynamics}\\ & M_kx_k+N_k \bv_k +r_k \geq 0,~\bv_k \geq 0,~ k\in \mathcal K, \label{eq:LQconstraints} \end{align} where \begin{align} J^i(x_0,(\tilde{u}^i,\tilde{u}^{-i}),(\tilde{v}^{i},\tilde{v}^{-i}))& = \frac{1}{2}x_K^{\prime}Q_K^ix_K+p_K^{i^\prime}x_K + \sum_{k=0}^{K-1}\left(\frac{1}{2}x_k^{\prime}Q^i_kx_k+{p_k^i}^\prime x_k+\frac{1}{2}\mathbf{u}^\prime_kR^i_k\mathbf{u}_k\right)\nonumber\\&+\sum_{k=0}^{K}\left(\frac{1}{2}\mathbf{v}_k^{\prime}D^i_k\mathbf{v}_k+{d^{i}_k}^\prime\mathbf{v}_k+x_k^{\prime}L^i_k\mathbf{v}_k \right), \label{eq:LQobjective} \end{align} where the matrices $Q^i_k\in \mathbb{R}^{n \times n}$, $i\in \mathcal N$, $k\in \mathcal K$, are symmetric, $R_k^i\in \mathbb R^{m\times m}$, $i\in \mathcal N$, $k\in \mathcal K\backslash \{K\}$, are symmetric and positive definite, $D_k^i\in \mathbb R^{s\times s}$, $i\in \mathcal N$, $k\in \mathcal K$, are symmetric, and $p_k^i\in \mathbb R^n$, $d_k^i\in \mathbb R^s$, $L_k^i\in \mathbb R^{n\times s}$, $i\in \mathcal N$, $k\in \mathcal K$. Associated with NZDG1 we introduce the following optimal control problem \begin{align} \mathrm{OCP1}:\quad \quad& \min_{\tilde{\mathbf{u}},\tilde{\mathbf{v}}} J(x_0,\tilde{\mathbf{u}},\tilde{\mathbf{v}}),\label{eq:LQOCP1}\\ &\text{subject to \eqref{eq:LQStatedynamics} and \eqref{eq:LQconstraints}},\notag \end{align} where \begin{align} J(x_0,\tilde{\mathbf{u}},\tilde{\mathbf{v}})&=\frac{1}{2} x_K ^{\prime}Q_K x_K +p_K^{\prime} x_K + \sum_{k=0}^{K-1}\left(\frac{1}{2} x_k ^{\prime}Q_k x_k +p^{\prime}_k x_k +\frac{1}{2}\mathbf{u}_k^{\prime}R_k \mathbf{u}_k\right)\nonumber\\& +\sum_{k=0}^{K}\left(\frac{1}{2}\mathbf{v}_k^{\prime}D_k\mathbf{v}_k+d^{\prime}_k\mathbf{v}_k+ x_k ^{\prime}L_k\mathbf{v}_k \right), \label{eq:OCP1obj} \end{align} with $Q_k\in \mathbb{R}^{n \times n}$, $p_k\in \mathbb R^n$, $D_k\in \mathbb R^{s\times s}$, $d_k\in \mathbb R^s$, $L_k\in \mathbb R^{n\times s}$, $k\in \mathcal K$, and $R_k\in \mathbb R^{m \times m}$, $k\in \mathcal K \backslash \{K\}$. The admissible action sets $\{U_k^i,~k\in \mathcal K\backslash \{K\},~i\in \mathcal N\}$ are such that the sets of state vectors $\{X_k,~k\in \mathcal K\}$, obtained from (<ref>), are convex, and the feasible action sets $\{V^i_k( x_k ,v_k^{-i})=\{v_k^i\in \mathbb R^{s_i}~|~ M_kx_k+N_k \bv_k+r_k\geq 0,~ \mathbf{v}_k\geq 0\},~\forall x_k\in X_k\}$ are non-empty, convex and bounded for all $k\in \mathcal K$, $i\in \mathcal N$. In the next theorem we provide conditions under which NZDG1 is an open-loop dynamic potential game and, using Theorem <ref>, we construct the potential functions of the associated optimal control problem (OCP1). Let Assumption <ref> hold true. Let the parameters associated with NZDG1 satisfy the following conditions \begin{align} &[R_{k}^i]_{ij}=[R_k^j]_{ij}, ~i,j\in \mathcal N,~i\neq j, ~k\in \mathcal K\backslash \{K\}, \label{eq:Ri Rj}\\ &Q_k^i=Q_k^j,~p_k^i=p_k^j,~L_k^i=L_k^j,~[D_k^i]_{ij}=[D_k^j]_{ij}, ~i,j\in \mathcal N,~i\neq j, ~k\in \mathcal K. \label{eq:Di Dj} \end{align} Then NZDG1 is an OLPDG.
Further, the objective function of the OCP associated with the related OLPDG is described by (<ref>) with parameters satisfying \begin{align} &Q_k=Q_k^i,~p_k=p_k^i,~[L_k]_{\bullet i}=[L_k^i]_{\bullet i},~[D_k]_{i\bullet}=[D_k^i]_{i\bullet},~[d_k]_i=[d_k^i]_i,~i\in \mathcal N,~k\in \mathcal K, \label{eq:PF payoff1}\\ &[R_k]_{i\bullet}=[R_k^i]_{i\bullet},~i\in \mathcal N,~k\in \mathcal K\backslash \{K\}. \label{eq:PF payoff2} \end{align} We first consider the conditions in (<ref>) and verify the condition (<ref>): \begin{align*} \frac{\partial g_k^i}{\partial x_k} = Q^i_kx_k+p^i_k+L^i_k\mathbf{v}_k = Q^j_kx_k+p^j_k+L^j_k\mathbf{v}_k = \frac{\partial g_k^j}{\partial x_k},~k\in \mathcal K. \end{align*} Since $Q_k^i=Q_k^j$, $p_k^i=p_k^j$ and $L_k^i=L_k^j$ for all $i,j \in \mathcal{N}$, the condition (<ref>) holds true. Next, by using (<ref>) and (<ref>) and the symmetric structure of $R_k^i$ and $D_k^i$, we verify the conditions (<ref>) and (<ref>) as follows: \begin{align*} &\frac{\partial^2 g_k^i}{\partial (u_k^j)^\prime \partial u_k^i} = [R^i_k]_{ij} = [R^j_k]_{ij} =\left([R^j_k]_{ji}\right)^\prime = \left(\frac{\partial^2 g_k^j}{\partial (u_k^i)^\prime \partial u_k^j}\right)^\prime,~k \in \mathcal{K}\backslash\{K\}, \\ &\frac{\partial^2 g_k^i}{\partial (v_k^j)^\prime \partial v_k^i} = [D^i_k]_{ij} = [D^j_k]_{ij} =\left([D^j_k]_{ji}\right)^\prime =\left(\frac{\partial^2 g_k^j}{\partial (v_k^i)^\prime \partial v_k^j}\right)^\prime,~k \in \mathcal{K}. \end{align*} So, NZDG1 is an OLPDG. Next, we proceed to construct the potential functions associated with the OLPDG. The gradient vector field $F_k=\begin{bmatrix} {F_k^x}^\prime& {F_k^u}^\prime& {F_k^v}^\prime \end{bmatrix}^\prime$ is calculated as \begin{align*} &F_k^x= Q_k^1x_k+p_k^1+L_k^1\mathbf{v}_k,~ F_k^u= \begin{bmatrix} [R_k^1]_{1\bullet}\\ [R_k^2]_{2\bullet}\\ \vdots\\ [R_k^N]_{N\bullet} \end{bmatrix} \bu_k,~F_k^v= \begin{bmatrix} [D_k^1]_{1\bullet}\\ [D_k^2]_{2\bullet}\\ \vdots\\ [D_k^N]_{N\bullet} \end{bmatrix} \bv_k + \begin{bmatrix}[d_k^1]_1\\ [d_k^2]_2\\ \vdots \\ [d_k^N]_N \end{bmatrix} + \begin{bmatrix} [L^1_k]^\prime_{\bullet 1}\\ [L^2_k]^\prime_{\bullet 2}\\ \vdots\\ [L^N_k]^\prime_{\bullet N} \end{bmatrix}x_k. \end{align*} Since $F_k$ is conservative, the instantaneous potential function (<ref>), evaluated as a line integral along an arbitrary piecewise smooth path in $\Omega_k$, depends only on the initial and final points. We consider a straight line connecting the origin in $\Omega_k$ and an arbitrary point $(x_k,\bu_k,\bv_k)\in \Omega_k$; the associated bijective parametrization of this line is given by $\alpha_k(z)=z\begin{bmatrix}x_k^\prime & \bu_k^\prime & \bv_k^\prime \end{bmatrix}^\prime$ with $z\in [0, 1]$. The instantaneous potential function is computed as \begin{align} P_k(x_k,\bu_k,\bv_k)&= c_k+\int_0^1 F_k (\alpha_k(z)) \bigcdot \frac{d \alpha_k(z)}{dz}~dz \notag \\ &=c_k+\int_0^1 \left(x_k^\prime ~F_k^x(\alpha_k(z)) + \bu_k^\prime~F_k^u(\alpha_k(z))+\bv_k^\prime~ F_k^v(\alpha_k(z))\right) dz. \label{eq:Pkeq} \end{align} We define matrices $R_k$, $D_k$, $Q_k$, $L_k$, $p_k$ and $d_k$ such that $[R_k]_{i\bullet}=[R_k^i]_{i\bullet}$, $[D_k]_{i\bullet}=[D_k^i]_{i\bullet}$, $Q_k=Q_k^i$, $[L_k]_{\bullet i}=[L_k^i]_{\bullet i}$, $p_k=p_k^i$, and $[d_k]_i=[d_k^i]_i$ for all $i\in \mathcal N$, $k\in \mathcal K\backslash \{K\}$. Then, from (<ref>) it follows that $R_k$ and $D_k$ are symmetric matrices and $Q_k=Q_k^i=Q_k^j$, $L_k=L_k^i=L_k^j$, $p_k=p_k^i=p_k^j$ for all $i,j\in \mathcal N$, $k\in \mathcal K \backslash \{K\}$.
Using this, (<ref>) can be written as \begin{multline} P_k(x_k,\bu_k,\bv_k)=c_k+ \int_0^1 \left(x_k^\prime \left[Q_k (zx_k)+p_k+L_k (z\bv_k)\right]+\bu_k^\prime R_k (z\bu_k)+ \bv_k^\prime \left[ D_k (z\bv_k) + d_k + L_k^\prime (zx_k)\right] \right) dz \\ =c_k+ \frac{1}{2}x_k^\prime Q_k x_k+p_k^\prime x_k +\frac{1}{2} \bu_k^\prime R_k \bu_k +\frac{1}{2} \bv_k^\prime D_k \bv_k + d_k^\prime \bv_k + x_k^\prime L_k \bv_k. \label{eq:potfn1} \end{multline} Similarly, the terminal vector field $F_K=\begin{bmatrix}{F_K^x}^\prime & {F_K^v}^\prime \end{bmatrix}^\prime$ is calculated as \begin{align*} F_K^x= Q_K^1x_K+p_K^1+L_K^1\mathbf{v}_K,~ F_K^v= \begin{bmatrix} [D_K^1]_{1\bullet}\\ [D_K^2]_{2\bullet}\\ \vdots\\ [D_K^N]_{N\bullet} \end{bmatrix} \bv_K + \begin{bmatrix}[d_K^1]_1\\ [d_K^2]_2\\ \vdots \\ [d_K^N]_N \end{bmatrix} + \begin{bmatrix} [L^1_K]^\prime_{\bullet 1}\\ [L^2_K]^\prime_{\bullet 2}\\ \vdots\\ [L^N_K]^\prime_{\bullet N} \end{bmatrix}x_K. \end{align*} We define matrices $D_K$, $Q_K$, $L_K$, $p_K$ and $d_K$ such that $[D_K]_{i\bullet}=[D_K^i]_{i\bullet}$, $Q_K=Q_K^i$, $[L_K]_{\bullet i}=[L_K^i]_{\bullet i}$, $p_K=p_K^i$, and $[d_K]_i=[d_K^i]_i$ for all $i\in \mathcal N$. Then, from (<ref>) it follows that $D_K$ is a symmetric matrix and $Q_K=Q_K^i=Q_K^j$, $L_K=L_K^i=L_K^j$, $p_K=p_K^i=p_K^j$ for all $i,j\in \mathcal N$. Using this and following the same procedure as before, the terminal potential function (<ref>) is calculated as \begin{align} P_K(x_K,\bv_K)&=c_K+\int_0^1 \left(x_K^\prime F_K^x(\alpha_K(z))+\bv_K^\prime F_K^v(\alpha_K(z)) \right) dz \notag\\ &=c_K+\frac{1}{2}x_K ^\prime Q_K x_K+ p_K^\prime x_K +\frac{1}{2}\bv_K^\prime D_K \bv_K + x_K^\prime L_K \bv_K+ d_K^\prime \bv_K. \label{eq:potfn2} \end{align} The instantaneous and terminal potential functions given by (<ref>) and (<ref>), respectively, constitute the objective function (<ref>) associated with OCP1. The parameters associated with this objective function satisfy the conditions (<ref>).

§.§ Computation of open-loop Nash equilibrium associated with OLPDG

In this section, under a few assumptions on the parameters, we transform the necessary conditions associated with OCP1 into a large-scale linear complementarity problem, thereby providing a way to compute the open-loop Nash equilibrium. Let $(\mathbf{\tilde{u}^{*},\tilde{v}^{*}})$ be the optimal solution of OCP1 and $\{x_k ^*,~k\in \mathcal K\}$ be the state trajectory generated by $\mathbf{\tilde{u}^{*}}$. The necessary conditions of optimality are then given by \begin{align} \text{for }& k\in \mathcal K\backslash \{K\}: \notag \\ \label{eq:LQOCP Cond1} & R_k\mathbf{u}_k^*+ \mathbf{B}_k^\prime\lambda^*_{k+1} = 0,~ \mathbf{B}_k=\begin{bmatrix}B_k^1 & B_k^2 & \cdots & B_k^N \end{bmatrix}, \\ \label{eq:LQOCP Cond2} & x_{k+1}^{*} = A_k x_k ^*+\sum_{l \in \mathcal N}B_k^lu_k^{l^*}, \quad x_{0}~ \text{is given},\\ \label{eq:LQOCP Cond3} & \lambda_k^* = Q_k x_k ^{*}+p_k+L_k\mathbf{v}_k^{*}+ A_k^\prime \lambda_{k+1}^{*}-M_k^{\prime}\mu_k^{*},\\ \label{eq:LQOCP Cond4} & \lambda_K^{*} = Q_K x_K ^*+p_K+L_K\mathbf{v}_K^*-M_K^{\prime}\mu_K^*,\\ \text{for }& k\in \mathcal K: \notag \\ \label{eq:LQOCP Cond5} & 0 \leq \left( D_k\mathbf{v}_k^{*}+d_k+L_k^\prime x_k ^* - N_k^\prime\mu_k^*\right) \perp \mathbf{v}_k^* \geq 0, \\ \label{eq:LQOCP Cond7} & 0 \leq \left( M_k x_k ^*+ N_k\mathbf{v}_k^*+r_k\right) \perp \mu_k^* \geq 0. \end{align} The above set of necessary conditions leads to a weakly coupled system of a parametric two-point boundary value problem (<ref>)-(<ref>) and a parametric linear complementarity problem (<ref>)-(<ref>). We have the following assumption.
The co-state variable $\lambda_k$ is assumed to be affine in the state variable $ x_k $ for $k \in \mathcal K$, i.e., $\lambda_k^* = H_k x_k ^*+\beta_k$, where $H_k \in \mathbb{R}^{n \times n}$ and $\beta_k \in \mathbb{R}^{n}$. Using Assumption <ref>, it can be shown that the two-point boundary value problem (<ref>)-(<ref>) can be solved if the following backward equations, for $k \in \mathcal K\backslash \{K\}$, have a solution: \begin{align} &\label{eq: Backward1} \Gamma_{k+1}^{k} = \mathbf{I}+S_k H_{k+1}, \\ & \label{eq: Backward2} H_{k}=Q_k+A_k^\prime H_{k+1}(\Gamma^{k}_{k+1})^{\mbox{-}1}A_k,\\ &\label{eq: Backward3} \beta_k =p_k-M_k^{\prime}\mu_k^*+L_k\mathbf{v}_k^*+A_k^{\prime}\beta_{k+1}-A_k^{\prime}H_{k+1}(\Gamma^{k}_{k+1})^{\mbox{-}1}S_k\beta_{k+1}, \end{align} where $S_k = \mathbf{B}_kR_k^{\mbox{-}1}\mathbf{B}_k^{\prime}$, $H_K = Q_K$ and $\beta_K = p_K+L_K\mathbf{v}_K^*-M_K^{\prime}\mu_K^*$. Assuming $\Gamma_{k+1}^{k}$ to be invertible for $k = K-1,\dots,0$, we obtain $H_k$ and $\beta_k$ for $k = K-1,\dots,0$, and the state vector $ x_k ^*$ and the joint control vector $\mathbf{u}_k^*$ are given by \begin{align} \label{eq:State DE} x_{k+1}^* & = \left(\Gamma^k_{k+1} \right)^{\mbox{-}1}\left(A_k x_k ^*-S_k\beta_{k+1} \right), ~k \in \mathcal K\backslash \{K\}, \\ \label{eq:decision Eq} \mathbf{u}_k^* & = -R_k^{\mbox{-}1}\mathbf{B}_k^{\prime}\left(H_{k+1}x_{k+1}^{*}+\beta_{k+1} \right). \end{align} Suppose that the set of backward equations (<ref>)-(<ref>) admits a solution, i.e., the matrix $\Gamma^k_{k+1}$ is invertible for all $k \in \mathcal K \backslash \{K\}$; then the two-point boundary value problem (<ref>)-(<ref>) has a unique solution. To show this, let $\lambda_k^*$ be any solution and set $\bar{\lambda}_k = \lambda_k^* - (H_k x_k ^*+\beta_k)$. Substituting $\lambda_k^* = \bar{\lambda}_k+ H_k x_k ^*+\beta_k$ in (<ref>) and (<ref>), we get a decoupled system of equations \begin{align} x_{k+1}& = (\Gamma^k_{k+1})^{\mbox{-}1}\left(A_k x_{k}^*-S_k\bar{\lambda}_{k+1}-S_k\beta_{k+1} \right),\\ \bar{\lambda}_{k} & = A_k^{\prime}\bar{\lambda}_{k+1}-A_k^{\prime}H_{k+1}(\Gamma^k_{k+1})^{\mbox{-}1}S_k\bar{\lambda}_{k+1}. \end{align} From the terminal condition, $\bar{\lambda}_{K}=0$, which results in $\bar{\lambda}_{k}=0$ for all $k \in \mathcal K$. This proves that the solution of the two-point boundary value problem described by (<ref>)-(<ref>) is unique. In view of Remark <ref>, we have the following standing assumption in the remaining part of the paper. The matrices $\{\Gamma^k_{k+1},~ k \in \mathcal K\backslash \{K\}\}$ are invertible. Next, we note that the backward equations (<ref>) and (<ref>) are coupled but evolve independently of (<ref>). Taking $G_{k+1} = A_k^{\prime}-A_k^{\prime}H_{k+1}(\Gamma^{k}_{k+1})^{\mbox{-}1}S_k$, (<ref>) can be represented in vector form as follows: \begin{align} \label{eq:Beta DE} \beta_k = \sum_{\tau = k}^{K}\psi(k,\tau)\left( p_\tau+\begin{bmatrix}L_\tau &-M_\tau^{\prime}\end{bmatrix} \begin{bmatrix}\mathbf{v}_\tau^{*^\prime} & \mu_\tau^{*^\prime}\end{bmatrix}^{\prime}\right),~\forall k \in \mathcal K, \end{align} where the transition matrix associated with (<ref>) is $\psi(k,\tau)=G_{k+1}\cdots G_{\tau-1}G_{\tau}$ if $\tau > k$, and $\psi(k,k) = \mathbf{I}$. Thus, (<ref>) represents a parametric linear backward difference equation parametrized by $\{\mathbf{v}_k^*,\mu_k^*,~k \in \mathcal K\}$.
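For concreteness, the backward sweep above is straightforward to implement. The following sketch is ours, with illustrative function and variable names; it computes $H_k$, $\Gamma^k_{k+1}$ and, given candidate multipliers, $\beta_k$:

```python
import numpy as np

def backward_sweep(A, B, R, Q, QK, q, betaK):
    """Backward recursions for H_k, Gamma_{k+1}^k and beta_k.

    A, B, R, Q: lists over k = 0..K-1 of A_k, the stacked B_k, the block
    matrix R_k, and Q_k; QK is the terminal weight Q_K.
    q[k] collects p_k + L_k v_k^* - M_k' mu_k^* for candidate multipliers,
    and betaK = p_K + L_K v_K^* - M_K' mu_K^*.
    """
    K = len(A)
    n = QK.shape[0]
    H, beta = [None] * (K + 1), [None] * (K + 1)
    Gam = [None] * K
    H[K], beta[K] = QK, betaK
    for k in range(K - 1, -1, -1):
        S = B[k] @ np.linalg.solve(R[k], B[k].T)   # S_k = B_k R_k^{-1} B_k'
        Gam[k] = np.eye(n) + S @ H[k + 1]          # Gamma_{k+1}^k
        H[k] = Q[k] + A[k].T @ H[k + 1] @ np.linalg.solve(Gam[k], A[k])
        G = A[k].T - A[k].T @ H[k + 1] @ np.linalg.solve(Gam[k], S)
        beta[k] = q[k] + G @ beta[k + 1]           # beta_k recursion
    return H, Gam, beta
```

Under the standing assumption each $\Gamma^k_{k+1}$ is invertible, so the linear solves above are well defined; the state and control trajectories then follow by a forward pass.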
Now, we analyse the forward difference equation for the state trajectory (<ref>), which we write as \begin{align*} x_{k+1} = \bar{A}_{k} x_k ^*+\bar{B}_{k}\beta_{k+1}, \quad x_0~ \text{is given}, \end{align*} where $\bar{A}_k = (\Gamma^k_{k+1})^{\mbox{-}1}A_k$ and $\bar{B}_k = -(\Gamma^k_{k+1})^{\mbox{-}1}S_k$ for all $k \in \mathcal K\backslash\{K\}$. Denoting the transition matrix as $\phi(\rho,k) = \bar{A}_{k-1}\bar{A}_{k-2}\cdots \bar{A}_{\rho}$ for $\rho < k$ and $\phi(k,k) = \mathbf{I}$, from (<ref>) we have, for $k \in \mathcal K\backslash\{0\}$, \begin{multline} x_k ^* = \phi(0,k)x_0+\sum_{\tau= 1}^{K}\left(\left(\sum_{\rho=1}^{\operatorname{min}(k,\tau)}\phi(\rho,k)\bar{B}_{\rho-1}\psi(\rho,\tau) \right)\left( p_\tau+\begin{bmatrix}L_\tau &-M_\tau^{\prime}\end{bmatrix} \begin{bmatrix}\mathbf{v}_\tau^{*^\prime} & \mu_\tau^{*^\prime}\end{bmatrix}^{\prime}\right)\right). \label{eq:State ForwardEq2} \end{multline} Further, we combine the variables in (<ref>) as $p_{\mathbf{K}} = \begin{bmatrix}p_{1}^{\prime}& \hdots & p_{K}^{\prime} \end{bmatrix}^{\prime} $, $\mathbf{x}_{\mathbf{K}}^* = \begin{bmatrix}x_{1}^{*^\prime}& \hdots & x_{K}^{*^\prime} \end{bmatrix}^{\prime} $ and $y_{\mathbf{K}}^{*} =\begin{bmatrix}\mathbf{v}_{1}^{*^\prime}&\mu_1^{*^\prime}& \hdots & \mathbf{v}_{K}^{*^\prime}&\mu_K^{*^\prime} \end{bmatrix}^{\prime}$. As a result, (<ref>) can be written as \begin{align}\label{eq:State ForwardEq3} \mathbf{x}_{\mathbf{K}}^* =\mathbf{\Phi}_{0}x_0+\mathbf{\Phi}_1p_{\mathbf{K}}+\mathbf{\Phi}_2y_{\mathbf{K}}^*, \end{align} where $[\mathbf{\Phi}_0]_k = \phi(0,k)$, $[\mathbf{\Phi}_1]_{k\tau} =\sum_{\rho=1}^{\operatorname{min}(k,\tau)}\phi(\rho,k)\bar{B}_{\rho-1}\psi(\rho,\tau) $ and $[\mathbf{\Phi}_2]_{k\tau} = \sum_{\rho=1}^{\operatorname{min}(k,\tau)}\phi(\rho,k)\bar{B}_{\rho-1}\psi(\rho,\tau) \begin{bmatrix}L_\tau &-M_\tau^{\prime}\end{bmatrix}$ for $k,\tau \in \mathcal K\backslash \{0\}$. Thus, we have expressed the state trajectory parametrized by $\{\mathbf{v}_k^*,\mu_k^*,~k \in \mathcal K\backslash \{0\}\}$. Next, we analyse the parametric linear complementarity problem in (<ref>)-(<ref>) in detail. First, the vector representation of these problems is given by \begin{align}\label{eq:pLCP1} \mathrm{pLCP}( x_k ^*):~0 \leq \begin{bmatrix}D_k & -N_k^{\prime} \\ N_k & 0 \end{bmatrix}\begin{bmatrix}\mathbf{v}_k^* \\ \mu_k^* \end{bmatrix}+\begin{bmatrix}L_k^{\prime} \\ M_k\end{bmatrix} x_k ^* + \begin{bmatrix}d_k \\ r_k \end{bmatrix} \perp \begin{bmatrix}\mathbf{v}_k^* \\ \mu_k^* \end{bmatrix} \geq 0. \end{align} Let $\tilde{\mathbf{M}} = \begin{bmatrix}D_1 & -N_1^{\prime} \\ N_1 & 0 \end{bmatrix}\oplus \cdots \oplus \begin{bmatrix}D_K & -N_K^{\prime} \\ N_K & 0 \end{bmatrix}$, $\tilde{\mathbf{q}} =\begin{bmatrix}L_1^{\prime} \\ M_1\end{bmatrix} \oplus \cdots \oplus \begin{bmatrix}L_K^{\prime} \\ M_K\end{bmatrix}$ and $\tilde{\mathbf{s}} = \begin{bmatrix}d_1^\prime &r_1^\prime &\cdots &d_K^\prime& r_K^\prime \end{bmatrix}^\prime$. Aggregating these parametric problems for all time steps $k \in \mathcal K \backslash \{0\}$, we obtain a single parametric linear complementarity problem \begin{align}\label{eq:pLCP2} \mathrm{pLCP}(\mathbf{x}_\mathbf{K}^*):\quad 0 \leq \tilde{\mathbf{M}}y_{\mathbf{K}}^*+\tilde{\mathbf{q}}\mathbf{x}_{\mathbf{K}}^* + \tilde{\mathbf{s}} \perp y_{\mathbf{K}}^* \geq 0. \end{align} Substituting (<ref>) in (<ref>) results in the following large-scale linear complementarity problem: \begin{align}\label{eq:LCP1} \mathrm{LCP}(x_0):\quad 0 \leq \mathbf{M}y_{\mathbf{K}}^*+\mathbf{q} \perp y_{\mathbf{K}}^* \geq 0, \end{align} where $\mathbf{M} = \tilde{\mathbf{M}} + \tilde{\mathbf{q}}\mathbf{\Phi}_2$ and $\mathbf{q} =\tilde{\mathbf{q}}(\mathbf{\Phi}_{0}x_0+\mathbf{\Phi}_1p_{\mathbf{K}})+\tilde{\mathbf{s}}$. Thus, with the aid of Assumptions <ref> and <ref>, we have formulated the necessary conditions (<ref>)-(<ref>) as a single large-scale linear complementarity problem. Solving (<ref>), we obtain a candidate optimal solution of OCP1 and thereby a candidate open-loop Nash equilibrium of NZDG1. Next, we study sufficient conditions under which the solutions of $\mathrm{LCP}(x_0)$ and $\mathrm{pLCP}(x_0)$ indeed minimize OCP1, and as a result provide an open-loop Nash equilibrium of NZDG1. The sufficient conditions provided in Theorem <ref> require both the minimized instantaneous and terminal Lagrangian functions to be convex in the state variables. Since the solutions $\bv_k^*$ are obtained from solving the parametric linear complementarity problem (<ref>), upon substitution, the minimized Lagrangian functions may not be convex in the state variables. So, to derive the required sufficient conditions, we transform OCP1 into a static optimization problem in the decision variables $(\tilde{\bu},\tilde{\bv})$. Toward this end, the objective function associated with OCP1 can be written as \begin{align} J(x_0,\tilde{\mathbf{u}},\tilde{\mathbf{v}})&=\sum_{k=0}^{K-1}\frac{1}{2}\bu_k^\prime R_k \bu_k +\sum_{k=0}^K\frac{1}{2}\left( x_k ^{\prime}Q_k x_k +\left(p_k+L_k\bv_k^{*} -M_k^\prime \mu_k^*\right)^\prime x_k \right) \notag\\ &+\sum_{k=0}^{K}\left(\frac{1}{2}\mathbf{v}_k^{\prime}D_k\mathbf{v}_k+d^{\prime}_k\mathbf{v}_k+ x_k ^{\prime}L_k (\bv_k-\bv_k^*)+{\mu_k^*}^\prime M_k x_k \right), \label{eq:objOCP} \end{align} where $\{(\mathbf{v}_k^*,\mu_k^*),~k \in \mathcal K\}$ is the solution of $\mathrm{LCP}(x_0)$ and $\mathrm{pLCP}(x_0)$. We define the value functions $W_k,~k\in \mathcal K$, as \begin{align*} & W_k=\frac{1}{2}x_k^\prime E_k x_k + e_k^\prime x_k + w_k, \end{align*} where $E_k \in \mathbb{R}^{n \times n}$, $e_k \in \mathbb{R}^n$ and $w_k \in \mathbb{R}$. Let the matrices $T_k:=R_k+ \mathbf{B}_k^\prime E_{k+1}\mathbf{B}_k$ be invertible for all $k\in \mathcal K\backslash \{K\}$, with the matrices $E_k,~ k\in \mathcal K$, computed as the solution of the following backward Riccati difference equations \begin{align} &E_k= A_k^\prime E_{k+1} A_k +Q_k -A_k^\prime E_{k+1}\mathbf{B}_k T_k^{-1} \mathbf{B}_k^\prime E_{k+1}A_k,~ E_K=Q_K,\label{eq:LQDGSC}\\ &e_k=A_k^\prime e_{k+1}-A_k^\prime E_{k+1}\mathbf{B}_k T_k ^{-1}\mathbf{B}_k^\prime e_{k+1}+p_k+L_k\bv_k^* -M_k^\prime \mu_k^*,~e_K=p_K+L_K\bv_K^*-M_K^\prime \mu_K^*, \label{eq:lqdge}\\ &w_k=w_{k+1}-e^\prime_{k+1} \mathbf{B}_kT_k^{-1}\mathbf{B}_k^\prime e_{k+1},~w_K=0.
\end{align} Then, denoting $\Delta_k = W_{k+1}-W_{k}$ and using the sum $\sum_{k=0}^{K-1}\Delta_k$ and (<ref>), we can write the objective function (<ref>) as \begin{align} J(x_0,\tilde{\mathbf{u}},\tilde{\mathbf{v}})& = W_0 +\sum_{k=0}^{K-1} \frac{1}{2}||\mathbf{u}_k +T_k^{-1} \mathbf{B}_k^\prime\left( E_{k+1}A_kx_k+ e_{k+1}\right) ||^2_{T_k}\notag \\ &+\sum_{k=0}^{K}\left(\frac{1}{2}\mathbf{v}_k^{\prime}D_k\mathbf{v}_k+d^{\prime}_k\mathbf{v}_k+ x_k ^{\prime}L_k (\bv_k-\bv_k^*)+{\mu_k^*}^\prime M_k x_k \right).\label{eq:static} \end{align} The next lemma shows how the candidate optimal control (<ref>), obtained by solving the two-point boundary value problem (<ref>)-(<ref>), is related to the minimizer of the objective function (<ref>). Let the set of matrices $\{T_k,~k\in \mathcal K\backslash \{K\}\}$ be invertible and the solutions $E_k$ of the symmetric matrix Riccati difference equation (<ref>) exist for all $k\in \mathcal K$. If the two-point boundary value problem \begin{align} \bar{\lambda}_k &= A^\prime_k \bar{\lambda}_{k+1}+Q_k \bar{x}_k+p_k+L_k\mathbf{v}_k^{*} -M^{\prime}_k\mu_k^{*},\label{eq:lbareq1}\\ \bar{\lambda}_K &= Q_K \bar{x}_K+p_K+L_K\mathbf{v}_K^*-M_K^{\prime}\mu_K^*,\label{eq:lbareq2}\\ \bar{x}_{k+1}&=A_k \bar{x}_k - \mathbf{B}_kR_k^{-1} \mathbf{B}_k^\prime \bar{\lambda}_{k+1},~\bar{x}_0=x_0 \end{align} has a unique solution, then, setting $\bar{\bu}_k=-{R}^{-1}_k \mathbf{B}_k^\prime \bar{\lambda}_{k+1}$ and $\bar{e}_k:=\bar{\lambda}_k-E_k \bar{x}_k$, the sequences $\{\bar{x}_k, \bar{e}_k\}$ satisfy $\bar{\mathbf{u}}_k+T_k^{-1}\mathbf{B}_k^\prime \left(E_{k+1}A_k \bar{x}_k+\bar{e}_{k+1}\right)=0$ and (<ref>). Firstly, we have \begin{align*} \bar{\bu}_k+T_k^{-1} \mathbf{B}_k^\prime\left( E_{k+1} A_k \bar{x}_k+\bar{e}_{k+1}\right) &=-R_k^{-1} \mathbf{B}_k^\prime\bar{\lambda}_{k+1}+ T_k^{-1}\mathbf{B}_k^\prime E_{k+1} \left(\bar{x}_{k+1}+\mathbf{B}_k R_k^{-1}\mathbf{B}_k^\prime \bar{\lambda}_{k+1}\right)+T_k^{-1}\mathbf{B}_k^\prime \bar{e}_{k+1}\\ &= \left(-R_k^{-1}+T_k^{-1}\left( R_k+\mathbf{B}_k ^\prime E_{k+1}\mathbf{B}_k\right)R_k^{-1} \right) \mathbf{B}_k^\prime \bar{\lambda}_{k+1}=0. \end{align*} To prove (<ref>) we have \begin{align*} & A^\prime_k \bar{e}_{k+1}-A^\prime_k E_{k+1}\mathbf{B}_k T_k^{-1}\mathbf{B}_k^\prime\bar{e}_{k+1}+p_k+L_k\bv_k^*-M_k^\prime \mu_k^*-\bar{e}_k \\ & \hspace{1.5in} =p_k+L_k\bv_k^*-M_k^\prime \mu_k^*+A_k^\prime \bar{\lambda}_{k+1}-\bar{e}_{k}-A_k^\prime E_{k+1} \mathbf{B}_k T_{k}^{-1}\mathbf{B}_k^\prime \bar{\lambda}_{k+1}\\& \hspace{1.5in}+ A_k^\prime E_{k+1}\left(\mathbf{B}_k T_k^{-1}\mathbf{B}_k^\prime E_{k+1}-\mathbf{I}\right) \left(A_k\bar{x}_k-\mathbf{B}_k R_k^{-1}\mathbf{B}_k^\prime \bar{\lambda}_{k+1}\right)\\ &\hspace{1.5in} =\left(p_k+L_k\bv_k^*-M_k^\prime \mu_k^*\right)+A_k^\prime \bar{\lambda}_{k+1}+Q_k\bar{x}_k-\bar{\lambda}_k \\ &\hspace{1.5in}-\left(A_k^\prime E_{k+1}A_k-A_k^\prime E_{k+1} \mathbf{B}_k T_k^{-1} \mathbf{B}_k^\prime E_{k+1}A_k+Q_k-E_k \right)\bar{x}_k\\ &\hspace{1.5in}+A_k^\prime E_{k+1}\mathbf{B}_k \left(R_k^{-1}-T_k^{-1} -T_k^{-1}\mathbf{B}_k^\prime E_{k+1} \mathbf{B}_k R_k^{-1}\right) \mathbf{B}_k^\prime \bar{\lambda}_{k+1}=0. \end{align*} The last step in the above expression is obtained by using (<ref>), (<ref>) and (<ref>). In the following lemma we provide conditions under which the objective function (<ref>) is a strictly convex function of $(\tilde{\bu},\tilde{\bv})$. Let the solution $E_k$ of the symmetric Riccati equation (<ref>) exist.
Let \begin{align} \Upsilon_{k}^{\tau} = \begin{cases} \mathbf{B}_\tau^\prime E_{\tau+1} A_{\tau}A_{\tau-1}\dots A_{k+1}, & 0 \leq k < K-1, \\ \mathbf{0}, & k = K-1, \end{cases} \end{align} and define the matrix \begin{align} \mathbf{H}&=\begin{bmatrix} \mathbf{Y} & \mathbf{C}^\prime \\ \mathbf{C} & \mathbf{D} \end{bmatrix}, \label{eq:Hessian} \end{align} where \begin{multline*} \mathbf{D}=\oplus_{k=0}^{K} D_k,\quad {\mathbf{C} = \begin{bmatrix} \mathbf{0} & \mathbf{B}_0^\prime L_1 & \mathbf{B}_0^\prime A_1^\prime L_2 & \hdots & \mathbf{B}_0^\prime A_{1}^\prime \cdots A_{K-2}^\prime A_{K-1}^\prime L_K\\ \mathbf{0} & \mathbf{0} & \mathbf{ B}_1^\prime L_2 & \hdots & \mathbf{B}_1^\prime A_{2}^\prime \cdots A_{K-2}^\prime A_{K-1}^\prime L_K \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \hdots & \mathbf{B}_2^\prime A_{3}^\prime \cdots A_{K-2}^\prime A_{K-1}^\prime L_K \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \hdots & \mathbf{B}_{K-1} ^\prime L_K \end{bmatrix}} \end{multline*} and $\mathbf{Y}$ is a $Km \times Km$ matrix where each block submatrix $[Y]_{lk}$ is an $m \times m$ matrix such that, for $k \in \mathcal{K}\backslash \{K\}$, \begin{align*} [Y]_{lk}& = \begin{cases} \mathbf{B}_k^\prime E_{k+1}A_k\dots A_{l+1}\mathbf{B}_l+ \mathbf{B}_k^\prime \left( \sum_{\tau = k+1}^{K-1}( \Upsilon_{k}^{\tau})^\prime T_{ \tau}^{-1}( \Upsilon_{k}^{\tau}) A_k\dots A_{l+1}\right)\mathbf{B}_{l},~0 \leq l < k, \\ T_k + \mathbf{B}_k^\prime \left[\sum_{\tau = k+1}^{K-1}( \Upsilon_{k}^{\tau})^\prime T_{ \tau}^{-1}( \Upsilon_{k}^{\tau}) \right]\mathbf{B}_k,~ l = k, \end{cases} \\ [Y]_{kl}& = [Y]_{lk}^\prime. \end{align*} If the matrix $\mathbf{H}$ is positive definite, then the objective function (<ref>) is a strictly convex function of $(\tilde{\bu},\tilde{\bv})$. We compute the Hessian matrix of the objective function (<ref>) with respect to the decision variables $(\tilde{\bu},\tilde{\bv})$ by eliminating the state variable $x_k$. Now, we have $\frac{\partial^2 J}{\partial (\bv_k)^\prime \partial \bv_k}=D_k$ and $\frac{\partial^2 J}{\partial (\bv_k)^\prime \partial \bv_l}=0$ for $k\neq l$, $k,l\in \mathcal K$. Calculating the remaining second-order partial derivatives, we get \begin{align*} [Y]_{lk} &= \frac{\partial^2 J}{\partial (\bu_l)^\prime \partial \bu_k} = \mathbf{B}_k^\prime E_{k+1}A_k\dots A_{l+1}\mathbf{B}_l + \mathbf{B}_k^\prime \left[ \sum_{\tau = k+1}^{K-1}( \Upsilon_{k}^{\tau})^\prime T_{ \tau}^{-1}( \Upsilon_{k}^{\tau}) \right] A_k\dots A_{l+1}\mathbf{B}_{l},\; 0 \leq l < k, \\ [Y]_{kk}& = \frac{\partial^2 J}{\partial (\bu_k)^\prime \partial \bu_k} = T_k + \mathbf{B}_k^\prime \left[\sum_{\tau = k+1}^{K-1}( \Upsilon_{k}^{\tau})^\prime T_{ \tau}^{-1}( \Upsilon_{k}^{\tau}) \right]\mathbf{B}_k, \\ [\mathbf{C}]_{lk}& = \frac{\partial^2 J}{\partial (\mathbf{u}_l)^\prime \partial \mathbf{v}_k} = \begin{cases} \mathbf{B}_l^\prime A_{l+1}^\prime \cdots A_{k-1}^\prime L_{k},~ l < k-1, \\ \mathbf{B}_{l}^\prime L_{k},~ l = k-1,\\ \mathbf{0},~ l \geq k. \end{cases} \end{align*} Then, if the Hessian matrix $\mathbf{H}$ is positive definite, the objective function (<ref>) is strictly convex in $(\tilde{\bu},\tilde{\bv})$. Next, using the above result, in the following theorem we show that the solutions of $\mathrm{LCP}(x_0)$ and $\mathrm{pLCP}(x_0)$ indeed provide the optimal solution of OCP1. Let Assumptions <ref>, <ref> and <ref> hold true. Let the Hessian matrix $\mathbf{H}$ given by (<ref>) be positive definite.
Then the solutions of $\mathrm{LCP}(x_0)$ and $\mathrm{pLCP}{(x_0)}$ constitute an open-loop Nash equilibrium of NZDG1. Let $\{(\mathbf{v}_k^*,\mu_k^*),~k \in \mathcal K\}$ be the solution of $\mathrm{LCP}(x_0)$ and $\mathrm{pLCP}(x_0)$. Then, transforming the objective function of OCP1 as in (<ref>), consider the minimization problem subject to the state dynamics $x_{k+1}=A_kx_k +\mathbf{B}_k \bu_k$ and the constraints $M_k x_k+N_k \bv_k+r_k \geq 0$, $\bv_k\geq 0$. Since $\mathbf{H}$ is positive definite, the objective function $J(x_0,\tilde{\bu},\tilde{\bv})$ is strictly convex in $(\tilde{\bu},\tilde{\bv})$ by Lemma <ref>. Also, from Assumption <ref>, the sets $\{U_k^i,~k\in \mathcal K \backslash \{K\}\}$ and $\{V_k^i,~k\in \mathcal K\}$ for all $i\in \mathcal N$ are non-empty, convex and bounded. Therefore, by solving the KKT conditions, we obtain the solution of this static optimization problem. The Lagrangian associated with this optimization problem is given by \begin{align} \mathcal L= J(x_0,\tilde{\bu},\tilde{\bv})-\sum_{k=0}^{K}\mu_k^\prime \left(M_k x_k +N_k \bv_k+r_k\right). \end{align} The KKT conditions are then given by \begin{align} &T_k \left(\bu_k+T_k^{-1}\mathbf{B}_k^\prime (E_{k+1}A_kx_k+e_{k+1})\right)+\mathbf{B}_k^\prime \sum_{\tau = k+1}^{K-1} \left(( \Upsilon_{k}^{\tau})^{\prime }\left(\bu_\tau + T_\tau^{-1} \mathbf{B}_\tau^\prime ( E_{\tau+1}A_\tau x_\tau +e_{\tau+1})\right)\right)\notag\\& +\mathbf{B}_k^\prime \left(L_{k+1} (\bv_{k+1}-\bv_{k+1}^*)-M_{k+1}^\prime (\mu_{k+1}-\mu_{k+1}^*) \right)\notag\\&+\sum_{\tau=k+1}^{K-1}\left(A_{\tau}A_{\tau-1}\cdots A_{k+1} \mathbf{B}_k\right)^\prime \left( L_{\tau+1}(\bv_{\tau+1}-\bv_{\tau+1}^*)-M^\prime_{\tau+1}(\mu_{\tau+1}-\mu_{\tau+1}^*)\right) =0, \label{eq:kkt1} \\ &0\leq D_k \bv_k -N_k^\prime \mu_k +L_k^\prime x_k +d_k \perp \bv_k\geq 0, \label{eq:kkt2}\\ &0\leq M_k x_k+N_k \bv_k +r_k \perp \mu_k \geq 0. \label{eq:kkt3} \end{align} Let $x_k^*,~k\in \mathcal K$, be the state trajectory generated by the solution $\{(\mathbf{v}_k^*,\mu_k^*),~k \in \mathcal K\}$ using (<ref>). Next, the co-state defined by $\lambda_k^*=H_kx_k^*+\beta_k$, along with the state vector $x_k^*$, solves the two-point boundary value problem (<ref>). Then, from Lemma <ref>, it follows that $ \bu_k^*+T_k^{-1}\mathbf{B}_k^\prime \left(E_{k+1} A_k x_k^*+e_{k+1}\right) =0$ for all $k\in \mathcal K \backslash \{K\}$. Using this in (<ref>) we obtain \begin{align*} \mathbf{B}_k^\prime \left(L_{k+1} (\bv_{k+1}-\bv_{k+1}^*)-M_{k+1}^\prime (\mu_{k+1}-\mu_{k+1}^*) \right)+ \sum_{\tau=k+1}^{K-1}\left(A_{\tau}A_{\tau-1}\cdots A_{k+1} \mathbf{B}_k\right)^\prime \left( L_{\tau+1}(\bv_{\tau+1}-\bv_{\tau+1}^*)-M^\prime_{\tau+1}(\mu_{\tau+1}-\mu_{\tau+1}^*)\right)=0. \end{align*} The above equation and the remaining conditions (<ref>) and (<ref>) are satisfied by $(\bv_k^*,\mu^*_k)$ as they are solutions of $\mathrm{pLCP}(x_k^*)$. Thus, we have shown that the solutions of $\mathrm{LCP}(x_0)$ along with $\mathrm{pLCP}(x_0)$ indeed provide the optimal solution of OCP1. Since OCP1 is associated with the OLPDG, we have that $(\tilde{\bu}^*,\tilde{\bv}^*)$, obtained from the solutions of $\mathrm{LCP}(x_0)$ and $\mathrm{pLCP}(x_0)$, provides an open-loop Nash equilibrium of NZDG1.

§ ILLUSTRATION: SMART GRID SYSTEM WITH ENERGY STORAGE

To illustrate our results, we consider a smart grid system with energy storage.
Smart grids provide opportunities for exploring distributed renewable energy sources. However, integrating solar- and wind-based sources has been a challenge in meeting the demand due to their intermittent nature. Energy storage systems become critical in providing continuous power in the case of interruption and are often used as an emergency power supply during unforeseen outages ([Oh, 2011]). Power utilities can cut their generation costs by storing energy during the off-peak hours and releasing it during the peak hours. So, installing energy storage systems is crucial for an efficient, reliable and resilient smart grid; see [Kolokotsa et al., 2019]. However, setting up a centralized energy storage unit for all the smart-grid users is not practical due to high setup and maintenance costs. One alternative would be to incentivize prosumers to install energy storage units at their homes; see Figure <ref> for an illustration. In this way, the smart grid storage system becomes decentralized. Additionally, the storage for each user is limited by the total resources available in the grid as well as the battery capacity. In [Zazo et al., 2016], the authors study an energy demand problem in smart grids without energy storage, and model the decision problem as a dynamic non-cooperative game without constraints. We build upon the model studied in [Zazo et al., 2016] by incorporating energy storage incentives for the prosumers, and model the decision problem as a dynamic game with inequality constraints.

[Figure: A smart grid system with energy storage.]

Consider a smart grid with $N$ users who utilize the smart grid resources for different activities like heating, lighting, and other home appliances. Let $\mathcal{N} = \{1,2,\cdots,N\}$ be the set of users and the total time period be $\mathcal{K} = \{0,1,\cdots,K\}$. The final time $K$ is determined by the uniform interval with which the electrical data is processed in the grid in a day. For instance, if the data is processed every two hours in a day, $K =12$. The smart grid consists of $S$ types of energy resources, such as solar, hydroelectric, coal, etc., and each user $i \in \mathcal{N}$ consumes energy for $m_i$ activities. All resources are shared by all users. The state of the game at each time step, $X_k \in \mathbb{R}^S$, is the total amount of consumable resources in the smart grid. In the smart grid, users act as prosumers, implying that they not only consume energy, but also contribute the excess energy produced by renewable resources back to the grid. Furthermore, the resources can be autonomously recharged. Therefore, the state of the game is governed by the discrete-time dynamics \begin{align} X_{k+1} = \tilde{A}_k X_k +\sum_{i=1}^{N}\tilde{B}_k^i I_k^i,~k \in \mathcal{K} \backslash \{K\}, \end{align} where, at time step $k$, $\tilde{A}_k \in \mathbb{R}^{S \times S} $ governs the energy which is autonomously depleted or replenished, $I_k^i \in \mathbb{R}^{m_i} $ denotes the amount of resources consumed or contributed by user $i$, and $\tilde{B}_k^i \in \mathbb{R}^{S\times m_i}$ is the weight associated with the resource expenditure or contribution. The smart grid authority has provided a battery storage unit for each user as a secondary storage unit. This is provided for the purpose of islanded-mode operation, i.e., operation under unforeseen disconnection of the smart grid from the main power grid; see [Ray and Biswal, 2020]. The amount of resource stored in each player's battery unit at time instant $k$ is $K_k^i \geq 0 $.
The total energy storage in batteries is limited to be $\epsilon_k >0$ units lower than the total resources of the grid, since excess battery storage only results in higher storage costs. The limiting factor $\epsilon_k$ is chosen in such a way that the maximum storage limit represents the maximum energy required for the users to perform the essential activities during islanded-mode operation. Further, the amount of energy stored in the battery is limited by its maximum capacity, denoted by $K_{max}^i$. Therefore, the constraints on the battery storage are given by \begin{align} & \sum_{i \in \mathcal{N}} K_k^i \leq \sum_{j=1}^{S}X_k^j - \epsilon_k, \label{eq:totalstorage}\\ & 0 \leq K_k^i \leq K_{max}^i, ~ i \in \mathcal{N}. \end{align} The costs incurred by users are attributed to unsatisfied demand, unbalanced resources and battery storage. Each user has a target demand to meet, which is given by $P_k^i X_k $, where $P_k^i \in \mathbb{R}^{m_i \times S}$ denotes the demand matrix. The cost associated with unsatisfied demand is characterized by the following quadratic function \begin{align} (\Pi_k^i)_{ud} = \frac{1}{2}\left(P_k^i X_k -I_k^i\right)^\prime \tilde{ R}_k^i\left(P_k^i X_k -I_k^i\right),~k \in \mathcal K \backslash \{K\}, \end{align} where $\tilde{R}_k^i ={ r}_k^i \mathbf{I} \in \mathbb{R}^{m_i \times m_i}$ with $r_k^i>0$. For higher values of the parameter $r_k^i$, the user prioritizes minimizing the unsatisfied-demand cost over the other costs. Next, if the resources at a time step are higher than at the previous step, then there is a cost associated with storage. Similarly, there are also costs associated with productivity loss between consecutive time periods. These costs are modeled as \begin{align} (\Pi_k^i)_{ur} = \frac{1}{2} \left( X_k -{X}_{k-1}\right)^\prime \tilde{Q}_k\left(X_k -{X}_{k-1}\right), \end{align} where $\tilde{Q}_k = q_k \mathbf{I} \in \mathbb{R}^{S \times S}$ with $q_k>0$. For higher values of the parameter $q_k$, the user prioritizes keeping the resources at steady-state levels without spikes. Each player incurs a battery storage cost, which is given by \begin{align} (\Pi_k^i)_{bs} = \frac{1}{2} b_k^i (K_k^{i})^2, \end{align} where $b_k^i$ represents the battery storage cost per energy unit for user $i$. Along with these costs, there is an incentive provided to users for energy storage, which has two components: a player-specific incentive, which depends only on the player's battery resource, and a common incentive, which depends on the total grid resource. The total incentive for Player $i$ is given by \begin{align} (\Pi_k^i)_{ic} = a_k^{i} K_k^i + X_k^\prime \tilde{L}_k\begin{bmatrix} {K}_k^1 & \hdots & K_k^N \end{bmatrix}^{\prime} , \end{align} where the parameter ${a}_k^i$ reflects the player-specific incentive and the parameter $\tilde{L}_k \in \mathbb{R}^{S \times N}$ the common incentive. The salvage cost incurred by Player $i$ for the amount of resources in the last stage $K$ is given by \begin{align} \Pi^i_K = \frac{1}{2} X_K ^\prime \tilde{Q}_K X_K,~\text{with}~\tilde{Q}_K = q_K \mathbf{I} \in \mathbb{R}^{S \times S}.
\end{align} Player $i$, using the consumption (or contribution) and storage schedules $\{I_k^i,~k\in \mathcal K \backslash \{K\},~K_k^i,~k\in \mathcal K\}$, seeks to minimize the total cost given by \begin{align*} J^i &= \Pi_K^i+\sum_{k=0}^{K-1} \left[(\Pi_k^i)_{ud} +(\Pi_k^i)_{ur}\right] +\sum_{k=0}^K \left[(\Pi_k^i)_{bs}-(\Pi_k^i)_{ic}\right]\\ &=\frac{1}{2}X_K ^\prime \tilde{Q}_K X_K + \sum_{k=0}^{K-1}\left(\frac{1}{2}\left(P_k^i X_k -I_k^i\right)^\prime \tilde{R}_k^i\left(P_k^i X_k -I_k^i\right)+\frac{1}{2}\left( X_k -{X}_{k-1}\right)^\prime \tilde{Q}_k\left( X_k -{X}_{k-1}\right)\right)\\ &+\sum_{k=0}^{K}\left(\frac{1}{2}b_k^i (K_k^{i})^2 - \left( a_k^iK_k^i +X_k ^\prime \tilde{L}_k^i \begin{bmatrix} {K}_k^1 & \hdots & K_k^N \end{bmatrix}^{\prime} \right)\right). \end{align*} We transform the above dynamic game problem with inequality constraints to the standard form NZDG1 as follows: \begin{align*} & x_k = \begin{bmatrix} X_k ^\prime & {X}_{k-1}^\prime\end{bmatrix}^\prime, ~\mathbf{v}_k = \begin{bmatrix} K_k^{1} & \dots & K_k^{N} \end{bmatrix}^\prime,\\ &{u}_k^i = P_k^i X_k - I_k^i,~ \mathbf{u}_k = \begin{bmatrix} u_k^{1^\prime} & \dots & {u}_k^{N^\prime} \end{bmatrix}^\prime. \end{align*} The smart grid resource allocation problem with energy storage is then modeled as NZDG1 with parameters defined as follows: \begin{align*} &A_k \triangleq \begin{bmatrix} \tilde{A}_k+ \sum_{i = 1}^{N} \tilde{B}_k^i P_k^i & \mathbf{0}_{S \times S} \\ \mathbf{I}_S & \mathbf{0}_{S \times S} \end{bmatrix}, \quad {B}_k^i \triangleq \begin{bmatrix} -\tilde{B}_k^i \\ \mathbf{0}_{S \times m_i}\end{bmatrix}, \\& {Q}_K = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \otimes \tilde{Q}_K ,\quad {Q}_k = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} \otimes \tilde{Q}_k , ~d_k^i = -a_k^i \mathbf{e}_i, \\ &D_k^i = b_k^i (\mathbf{e}_i \mathbf{e}_i^\prime),~R_k^i = \mathbf{0}_{m_1 \times m_1}\oplus \cdots \oplus \tilde{R}_k^i \oplus \mathbf{0}_{m_N \times m_N} ,\\ & M_k = \begin{bmatrix} \mathbf{1}_{1 \times S} & \mathbf{0}_{1 \times S} \\ \mathbf{0}_{N \times S} & \mathbf{0}_{N \times S} \end{bmatrix},~ N_k = -\begin{bmatrix} \mathbf{1}_{1 \times N}\\ \mathbf{I}_{N} \end{bmatrix}, \\&r_k=\begin{bmatrix} -\epsilon_k & K_{max}^1 & K_{max}^2 & \hdots & K_{max}^N \end{bmatrix}^\prime. \end{align*} Here, we observe that the cost matrices $Q_k^i,~D_k^i,~d_k^i,~L_k^i,~k \in \mathcal{K}$, and $R_k^i,~k \in \mathcal{K} \backslash \{K\}$, satisfy the conditions in (<ref>). Therefore, we can obtain the OCP1 associated with the related OLPDG via (<ref>) from Theorem <ref>. For illustration purposes, we assume $K = 12$, that is, the data is processed every two hours in a day. We assume $S=3$ resources, $N=2$ users and $m_{i} = 2$ activities for both users. The remaining parameters are taken as follows: \begin{align*} &q_K = 2.5 ,~q_k = 1 ,~ r_k^1 = r_k^2 = 0.7,~b_k^1 = b_k^2 = 1.6,\\ &a_k^1 = 3.4,~a_k^2 = 4,~\epsilon_k = 3.5,~ K_{max}^1 = 11.2,~ K_{max}^2 = 12.2,\\ &\tilde{ L}_k =0.5 \cdot\mathbf{1}_{3 \times 2},~ \tilde{A}_k = \mathbf{I}_S,~ P_k^1 = P_k ^2 =0.375\cdot \mathbf{1}_{2 \times 3},\\ &\tilde{B}_k^1 = \tilde{B}_k^2 = \begin{bmatrix} 0.75 & 0 \\ 0 & 0.375 \\ 0 & 0.75\end{bmatrix},~ X_0=\begin{bmatrix}4 \\ 4 \\ 4 \end{bmatrix} ,~X_{-1} = \begin{bmatrix}0 \\ 0\\ 0\end{bmatrix}. \end{align*} We assume that both users are identical in terms of energy demand; we can think of two households with similar energy requirements. (A numerical construction of these parameter matrices is sketched below.)
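For reference, the stacked NZDG1 matrices above can be assembled with Kronecker products and block stacking. The following sketch is ours; the variable names are illustrative and the construction assumes the two users share $\tilde{B}_k^i$ and $P_k^i$, as in the example:

```python
import numpy as np

S, N, m = 3, 2, 2                       # resources, users, activities per user
Qt_K, Qt_k = 2.5 * np.eye(S), 1.0 * np.eye(S)
At = np.eye(S)                          # \tilde A_k
Bt = np.array([[0.75, 0.0],
               [0.0, 0.375],
               [0.0, 0.75]])            # \tilde B_k^i (same for both users)
P = 0.375 * np.ones((m, S))             # demand matrix P_k^i (same for both)

# Stacked state x_k = [X_k; X_{k-1}] gives 2S-dimensional dynamics.
A = np.block([[At + N * (Bt @ P), np.zeros((S, S))],
              [np.eye(S),         np.zeros((S, S))]])
B_i = np.vstack([-Bt, np.zeros((S, m))])            # B_k^i
Q_K = np.kron(np.array([[1, 0], [0, 0]]), Qt_K)     # terminal state weight
Q_k = np.kron(np.array([[1, -1], [-1, 1]]), Qt_k)   # running state weight
M = np.block([[np.ones((1, S)), np.zeros((1, S))],
              [np.zeros((N, S)), np.zeros((N, S))]])
N_mat = -np.vstack([np.ones((1, N)), np.eye(N)])
r = np.array([-3.5, 11.2, 12.2])        # [-eps_k, K_max^1, K_max^2]
```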
However, we assume that user 2 has a higher user-specific incentive for storage, reflected in the parameter values $a_k^1 = 3.4$ and $a_k^2 = 4$. Accordingly, the storage capacity of user 1's battery is set to $K_{max}^{1} = 11.2$ units, and that of user 2 to $K_{max}^2 = 12.2$ units. It can be verified that the sufficient conditions in Lemma <ref> and Theorem <ref> are satisfied with the chosen parameters. We have used the freely available PATH solver (see <http://pages.cs.wisc.edu/~ferris/path.html>) for solving the linear complementarity problem (<ref>); a minimal iterative alternative for small instances is sketched after this discussion. Figure <ref> illustrates the evolution of resources in the grid. We have assumed three sources of energy in the grid. The consumption or contribution to the grid is represented in Figure <ref>. When a user consumes resources from the grid for an activity, the actual utility of the user corresponding to this activity is negative. Likewise, contribution to the grid results in positive utility. Here, $I_k^{i1},I_k^{i2}$ denote the consumption or contribution by user $i$ for the $m_i =2$ activities. We note that the consumption or contribution of both users is identical due to the identical demand costs. Further, we observe that both users switch from contribution to consumption for the first activity at time period $k = 8$ and for the second activity at time period $k = 6$. As the users start to consume from the grid, the grid resources decrease, as is evident from Figure <ref>. The evolution of the battery storage decisions for both users is shown in Figure <ref>. User 2 has a higher user-specific incentive as well as a higher maximum storage capacity in comparison with user 1. Consequently, user 2 stores a higher amount of energy in its battery. User 1 utilizes the full battery storage capacity from $k =3$ to $k =11$, and user 2 utilizes the full battery storage capacity from $k =4$ to $k =10$. At the final time step, $K =12$, the total grid resources fall to a lower value due to the salvage cost. Since the total battery storage is also limited by the total grid resources and $\epsilon_k$, the battery storage of both users also decreases at this stage.

[Figure: panels "Evolution of resources", "Consumption or contribution", "Battery storage" — state (grid resources) and decision (consumption/contribution and battery storage) variables for the smart grid system with energy storage.]

[Figure: panels "Battery storage: user 1", "Battery storage: user 2", and two panels with a 20% decrease in the player-specific incentive — panels (a) and (b) illustrate the variation in battery storage with a change in the storage incentive; panels (c) and (d) indicate the storage limit and total battery storage with a 20% variation in the incentive towards battery storage.]

Next, we analyze the effect of the incentive parameter $a_k^i$ on the energy storage behavior of the users by varying the player-specific incentive parameter $a_k^i$ for both users. Varying $a_k^i$ by 20% around the baseline values, while keeping the battery storage cost parameters fixed, we observe that both users store a higher amount of energy in their batteries for higher values of the incentive parameter. Figures <ref> and <ref> illustrate that the users utilize higher capacities for a longer time when the incentive parameter is higher. However, since the battery capacity of each player is limited, the players are unable to increase the storage continually as the incentive increases.
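For readers without access to PATH, a projected Gauss-Seidel iteration is a minimal, self-contained way to solve small instances of an LCP $0 \leq y \perp \mathbf{M} y + \mathbf{q} \geq 0$. The sketch below is ours and is not the solver used in our experiments; its convergence is guaranteed, e.g., for symmetric positive definite $\mathbf{M}$, which is an assumption and is not established here for the game LCP (<ref>):

```python
import numpy as np

def pgs_lcp(M, q, iters=5000, tol=1e-10):
    """Projected Gauss-Seidel for  0 <= y  perp  M y + q >= 0.

    Requires positive diagonal entries of M; converges, e.g., when M is
    symmetric positive definite. A sketch for small instances only.
    """
    y = np.zeros_like(q, dtype=float)
    for _ in range(iters):
        y_prev = y.copy()
        for i in range(len(q)):
            rest = q[i] + M[i] @ y - M[i, i] * y[i]   # excludes the i-th term
            y[i] = max(0.0, -rest / M[i, i])
        if np.max(np.abs(y - y_prev)) < tol:
            break
    return y

# Sanity check on a random symmetric positive definite instance.
rng = np.random.default_rng(1)
G = rng.standard_normal((6, 6))
M = G @ G.T + 6 * np.eye(6)
q = rng.standard_normal(6)
y = pgs_lcp(M, q)
w = M @ y + q
assert (y >= -1e-8).all() and (w >= -1e-8).all() and abs(y @ w) < 1e-6
```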
Besides, full capacity is utilized for a shorter period when we lower the incentive parameter. Finally, the constraint on total storage (<ref>) also makes sure that the total energy stored in the battery is lower than the total resources by at least $\epsilon_k$. This result is illustrated in Figures <ref> and <ref>. § CONCLUSIONS In this paper, we studied the conditions under which a class of $N$-player non-zero sum discrete time dynamic games with inequality constraints admits a potential game structure. Drawing motivations from the theory of static potential games, we associated an optimal control problem with inequality constraints, and derived conditions under which the solution of this optimal control problem provides a (constrained) open-loop Nash equilibrium. When the potential functions are not specified before hand, we derived conditions under which the potential functions can be constructed using the problem data. We specialized these results to a linear quadratic setting and provided a linear complementarity problem based approach for computing the open-loop Nash equilibrium. In particular, the computed equilibrium is a refinement of the open-loop Nash equilibria obtained in [Reddy and Zaccour, 2015]. We illustrated our results with an example inspired by resource allocation in a smart grid network with energy storage. For future work, we plan to investigate the existence of potential functions for this class of games under feedback information structure. [Apostol, 1969] T. M. Apostol. Calculus. Vol. II: Multi-variable Calculus and Linear Algebra, with Applications to Differential Equations and Probability. Blaisdell international textbook series. Xerox College Publ., 1969. ISBN 9780536000088. URL <https://books.google.co.in/books?id=iKC_PgAACAAJ>. [Başar and Olsder, 1999] T. Başar and G. J. Olsder. Dynamic Noncooperative Game Theory: Second Edition. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics, 1999. ISBN 9780898714296. URL <https://books.google.co.in/books?id=GDGW5mZUdIUC>. [Başar et al., 2018] T. Başar, G. Zaccour, and M. Breton. Handbook of Dynamic Game Theory. Number v. 1 in Springer Refernce. Springer International Publishing, ISBN 9783319273358. URL <https://books.google.co.in/books?id=nfp3zQEACAAJ>. [Dockner et al., 2000] E. J. Dockner, S. Jørgensen, N. V. Long, and G. Sorger. Differential Games in Economics and Management Science. Cambridge, Cambridge University Press, 2000. [Dragone et al., 2015] D. Dragone, L. Lambertini, G. Leitmann, and A. Palestini. Hamiltonian potential functions for differential games. Automatica, 62:0 134–138, 2015. [Fonseca-Morales and Hernández-Lerma, 2018] A. Fonseca-Morales and O. Hernández-Lerma. Potential differential games. Dynamic Games and Applications, 80 (2):0 254–279, 2018. [González-Sánchez and Hernández-Lerma, 2014] D. González-Sánchez and O. Hernández-Lerma. Dynamic potential games: The discrete-time stochastic case. Dynamic Games and Applications, 40 (3):0 309–328, 2014. [González-Sánchez and Hernández-Lerma, 2016] D. González-Sánchez and O. Hernández-Lerma. A survey of static and dynamic potential games. Science China Mathematics, 590 (11):0 2075–2102, Nov 2016. ISSN 1869-1862. URL <https://doi.org/10.1007/s11425-016-0264-6>. [Grass et al., 2008] D. Grass, J. P. Caulkins, G. Feichtinger, G. Tragler, and D. A. Behrens. Optimal Control of Nonlinear Processes: with Applications in Drugs, Corruption, and Terror. Springer, 2008. ISBN 9783540776468. [Kolokotsa et al., 2019] D. Kolokotsa, N. 
Kampelis, A. Mavrigiannaki, M. Gentilozzi, F. Paredes, F. Montagnino, and L. Venezia. On the integration of the energy storage in smart grids: Technologies and applications. Energy Storage, 2019. [Lütkepohl, 1996] H. Lütkepohl. Handbook of matrices, volume 1. Wiley Chichester, 1996. [Manshaei et al., 2013] M. H. Manshaei, Q. Zhu, T. Alpcan, T. Bacşar, and J. P. Hubaux. Game theory meets network security and privacy. ACM Comput. Surv., 450 (3), July 2013. ISSN 0360-0300. URL <https://doi.org/10.1145/2480741.2480742>. [Marden and Shamma, 2018] J. R. Marden and J. S. Shamma. Game-theoretic learning in distributed control. Handbook of dynamic game theory, pages 511–546, 2018. [Marden et al., 2009] J. R. Marden, G. Arslan, and J. S. Shamma. Joint strategy fictitious play with inertia for potential games. IEEE Transactions on Automatic Control, 540 (2):0 208–220, 2009. [Monderer and Shapley, 1996] D. Monderer and L. S. Shapley. Potential games. Games and economic behavior, 140 (1):0 124–143, 1996. [Myerson, 1997] R. B. Myerson. Game Theory: Analysis of Conflict. Harvard University Press, 1997. ISBN 9780674341166. URL <https://books.google.co.in/books?id=E8WQFRCsNr0C>. [Nisan et al., 2007] N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani. Algorithmic Game Theory. Cambridge University Press, 2007. ISBN 9781139466547. URL <https://books.google.co.in/books?id=YCu2alSw0w8C>. [Oh, 2011] H. Oh. Optimal planning to include storage devices in power systems. IEEE Transactions on Power Systems, 260 (3):0 1118–1128, 2011. [Pearson and Sridhar, 1966] J. Pearson and R. Sridhar. A discrete optimal control problem. IEEE Transactions on automatic control, 110 (2):0 171–174, 1966. [Quint and Shubik, 1997] T. Quint and M. Shubik. A theorem on the number of nash equilibria in a bimatrix game. International Journal of Game Theory, 260 (3):0 353–359, Oct 1997. ISSN 1432-1270. URL <https://doi.org/10.1007/BF01263276>. [Ray and Biswal, 2020] P. Ray and M. Biswal. Microgrid: Operation, Control, Monitoring and Protection. Springer, 2020. [Reddy and Zaccour, 2015] P. V. Reddy and G. Zaccour. Open-loop nash equilibria in a class of linear-quadratic difference games with constraints. IEEE Transactions on Automatic Control, 600 (9):0 2559–2564, 2015. [Reddy and Zaccour, 2016] P. V. Reddy and G. Zaccour. Feedback nash equilibria in linear-quadratic difference games with IEEE Transactions on Automatic Control, 620 (2):0 590–604, 2016. [Rosen, 1965] J. B. Rosen. Existence and uniqueness of equilibrium points for concave n-person Econometrica, pages 520–534, 1965. [Rosenthal, 1973] R. W. Rosenthal. A class of games possessing pure-strategy nash equilibria. International Journal of Game Theory, 20 (1):0 65–67, 1973. [Saad et al., 2012] W. Saad, Z. Han, H. V. Poor, and T. Başar. Game-theoretic methods for the smart grid: An overview of microgrid systems, demand-side management, and smart grid communications. IEEE Signal Processing Magazine, 290 (5):0 86–105, 2012. [Slade, 1994] M. E. Slade. What does an oligopoly maximize? The Journal of Industrial Economics, pages 45–61, 1994. [Zazo et al., 2016] S. Zazo, S. V. Macua, M. Sánchez-Fernández, and J. Zazo. Dynamic potential games with constraints: Fundamentals and applications in communications. IEEE Transactions on Signal Processing, 640 (14):0 3806–3821, 2016. [Zhu and Başar, 2015] Q. Zhu and T. Başar. Game-theoretic methods for robustness, security, and resilience of cyberphysical control systems: Games-in-games principle for optimal cross-layer resilient control systems. 
IEEE Control Systems Magazine, 35(1): 46–65, 2015.
# A New Error in Variables Model for Solving Positive Definite Linear System Using Orthogonal Matrix Decompositions Negin Bagherpour Faculty of Mathematical Sciences, Sharif University of Technology, Tehran, Iran, ([email protected]). Nezam Mahdavi-Amiri Faculty of Mathematical Sciences, Sharif University of Technology, Tehran, Iran, ([email protected]). ###### Abstract The need to estimate a positive definite solution to an overdetermined linear system of equations with multiple right hand side vectors arises in several process control contexts. The coefficient and the right hand side matrices are respectively named data and target matrices. A number of optimization methods have been proposed for solving such problems, in which the data matrix is unrealistically assumed to be error-free. Here, considering error in both the measured data and target matrices, we present an approach to solve a positive definite constrained linear system of equations based on the use of a newly defined error function. To minimize the defined error function, we derive necessary and sufficient optimality conditions and outline a direct algorithm to compute the solution. We provide a comparison of our proposed approach and two existing methods, the interior point method and a method based on quadratic programming. Two important characteristics of our proposed method as compared to the existing methods are computing the solution directly and considering error in both the data and target matrices. Moreover, numerical test results show that the new approach leads to smaller standard deviations of error entries and a smaller effective rank, as desired in control problems. Furthermore, in a comparative study using the Dolan-Moré performance profiles, we show the approach to be more efficient. ###### keywords: Error in variables models, positive definiteness constraints, overdetermined linear system of equations, multiple right hand side vectors ###### AMS: 65F05, 65F20, 49M05 ## 1 Introduction Computing a symmetric positive definite solution of an overdetermined linear system of equations arises in a number of physical problems such as estimating the mass inertia matrix in the design of controllers for solid structures and robots; see, e.g., [9], [17], [14]. Modeling a deformable structure also leads to such a mathematical problem; e.g., see [25]. The problem turns into finding an optimal solution of the system (1) $DX\simeq T,$ where $D,T\in{\mathbb{R}}^{m\times n}$, with $m\geq n$, are given and a symmetric positive definite matrix $X\in{\mathbb{R}}^{n\times n}$ is to be computed as a solution of (1). In some special applications, the data matrix $D$ has a simple structure, which may be taken into consideration for efficiently organized computations. Estimation of the covariance matrix and computation of the correlation matrix in finance are two such examples, where the data matrices are respectively block diagonal and the identity matrix; e.g., see [31]. A number of least squares formulations have been proposed for physical problems, which may be classified as ordinary and error in variables (EIV) models. Also, single or multiple right hand side least squares problems may arise. With a single right hand side, we have an overdetermined linear system of equations $Dx\simeq t$, where $D\in{\mathbb{R}}^{m\times n}$, $t\in{\mathbb{R}}^{m\times 1}$, with $m\geq n$, are known and the vector $x\in{\mathbb{R}}^{n\times 1}$ is to be computed. In an ordinary least squares formulation, the error is only attributed to $t$.
So, to minimize the corresponding error, the following mathematical problem is devised: (2) $\min\|\Delta t\|\quad s.t.\quad Dx=t+\Delta t.$ There are a number of methods for solving (2), identified as direct and iterative methods. A well known direct method is based on using the QR factorization of the matrix $D$ [27]. An iterative method has also been introduced in [7] for solving (2) using the GMRES algorithm. In an EIV model, however, errors in both $D$ and $t$ are considered; e.g., see [3]. The total least squares formulation is a well-known EIV model, where the goal is to solve the following mathematical problem (e.g., see [6] and [16]): (3) $\min\|[\Delta D,\Delta t]\|\quad s.t.\quad(D+\Delta D)x=t+\Delta t.$ We note that $\|\cdot\|$ in (2) and (3) respectively denote the vector 2-norm and the matrix Frobenius norm. Both direct [24] and iterative [12] methods have been presented for solving (3). Moreover, the scaled total least squares formulation has been considered to unify both the ordinary and total least squares formulations; e.g., see [24]. In a scaled total least squares formulation, the mathematical problem (4) $\min\|[\Delta D,\Delta t]\|\quad s.t.\quad(D+\Delta D)x=\lambda t+\Delta t$ is to be solved for an arbitrary scalar $\lambda$. Zhou et al. [19] studied the effect of perturbation and gave an error analysis of such a formulation. A least squares problem with multiple right hand side vectors can also be formulated as an overdetermined system of equations $DX\simeq T$, where $D\in{\mathbb{R}}^{m\times n}$, $T\in{\mathbb{R}}^{m\times k}$, with $m\geq n$, are given and the matrix $X\in{\mathbb{R}}^{n\times k}$ is to be computed. With ordinary and total least squares formulations, the respective mathematical problems are: (5) $\min\|\Delta T\|\quad s.t.\quad DX=T+\Delta T,\quad X\in{\mathbb{R}}^{n\times k}$ and (6) $\min\|[\Delta D,\Delta T]\|\quad s.t.\quad(D+\Delta D)X=T+\Delta T,\quad X\in{\mathbb{R}}^{n\times k}.$ Common methods for solving (5) are similar to the ones for (2); see, e.g., [7], [27]. Solving (6) is possible by using the method described in [8], based on the SVD factorization of the matrix $[D,\ T]$. Connections between ordinary least squares and total least squares formulations have been discussed in [11]. Here, we consider a newly defined EIV model for solving a positive definite linear problem. Our goal is to compute a symmetric positive definite solution $X\in{\mathbb{R}}^{n\times n}$ to the overdetermined system of equations $DX\simeq T$, where both matrices $D$ and $T$ may contain errors. In what follows, we refer to this problem as the positive definite linear system of equations. No EIV model, not even the well-known total least squares formulation, has been considered for solving the positive definite linear system of equations in the literature. Several approaches have been proposed for this problem, commonly considering the ordinary least squares formulation and minimizing the error ${\|\Delta T\|}_{F}$ over all $n\times n$ symmetric positive definite matrices, where ${\|.\|}_{F}$ is the Frobenius norm; see, e.g., [10, 23]. Larson [13] discussed a method for solving a positive definite least squares problem considering the corresponding normal system of equations.
He considered both symmetric and positive definite least squares problems. Krislock [25] proposed an interior point method for solving a variety of least squares problems with positive semi-definite constraints. Woodgate [18] described a new algorithm for solving a similar problem, in which a symmetric positive semi-definite matrix $P$ is computed to minimize $\|F-PG\|$, with known $F$ and $G$. Hu [10] presented a quadratic programming approach to handle the positive definite constraint. In her method, the upper and lower bounds for the entries of the target matrix can be given as extra constraints. In real measurements, however, both the data and target matrices may contain errors; hence, the total least squares formulation appears to be appropriate. The rest of our work is organized as follows. In Section 2, we define a new error function and discuss some of its characteristics. A method for solving the resulting optimization problem with the assumption that $D$ has full column rank is presented in Section 3. In Section 4, we generalize the method to the case of the data matrix having an arbitrary rank. In Section 5, a detailed discussion is made on the computational complexity of both methods. Computational results and comparisons with available methods are given in Section 6. Section 7 gives our concluding remarks. ## 2 Problem Formulation Consider a single equation $ax\simeq b$, where $a,b\in{\mathbb{R}}^{n}$ and $x\in{\mathbb{R}}^{+}$. Errors in the $i$th entries of $b$ and $a$ are respectively equal to $\mid a_{i}x-b_{i}\mid$ and $\mid a_{i}-\frac{b_{i}}{x}\mid$; e.g., see [24]. In [24], ${\sum}_{i=1}^{n}L_{i}$ was considered as a value representing the errors in both $a$ and $b$. As shown in Figure LABEL:f1, $L_{i}$ is the height of the triangle ABC, which turns out to be equal to $L_{i}=\frac{|b_{i}-a_{i}x|}{\sqrt{1+x^{2}}}$. Here, to represent the errors in both $a$ and $b$, we define the area error to be (7) ${\sum}_{i=1}^{n}|b_{i}-a_{i}x||a_{i}-\frac{b_{i}}{x}|,$ which is equal to ${\sum}_{i=1}^{n}(a_{i}x-b_{i})(a_{i}-\frac{b_{i}}{x}),$ for $x\in{\mathbb{R}}^{+}$. Considering the problem of finding a symmetric and positive definite solution to the overdetermined system of linear equations $DX\simeq T$, in which both $D$ and $T$ include error, the values $DX$ and $TX^{-1}$ are the predicted values for $T$ and $D$ from the model $DX\simeq T$; hence, the vectors ${\Delta T}_{j}={(DX-T)}_{j}$ and ${\Delta D}_{j}={(D-TX^{-1})}_{j}$ contain the errors in the $j$th columns of $T$ and $D$, respectively. Extending the error formulation (7), the value $E={\sum}_{j=1}^{n}{(DX-T)}_{j}^{T}{(D-TX^{-1})}_{j}$ seems to be an appropriate measure of error. We also have (8) $E={\sum}_{j=1}^{n}{\sum}_{i=1}^{m}{(DX-T)}_{ij}{(D-TX^{-1})}_{ij}=\mathop{\mathrm{tr}}((DX-T)^{T}(D-TX^{-1})),$ with $\mathop{\mathrm{tr}}(.)$ standing for the trace of a matrix. Therefore, the problem can be formulated as (9) $\min\limits_{X\succ 0}\mathop{\mathrm{tr}}((DX-T)^{T}(D-TX^{-1})),$ where $X$ is symmetric and by $X\succ 0$ we mean that $X$ is positive definite. Problem (9) poses a newly defined EIV model for solving the positive definite linear system of equations. In Lemma 2, we present an equivalent formulation for the error $E$. First, consider a well-known property of positive definite matrices. Note A matrix $X\in{\mathbb{R}}^{n\times n}$ is positive definite if and only if there exists a nonsingular matrix $Y\in{\mathbb{R}}^{n\times n}$ such that $X=YY^{T}$.
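Before proceeding, a quick numerical sanity check of (8) may be helpful (an illustrative NumPy sketch with assumed sizes, not part of the paper): for a symmetric positive definite $X$ built as $X=YY^{T}$ per the Note above, the error $E$ is nonnegative and vanishes when $DX=T$, a property formalized in a Note after Lemma 2 below.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3
D = rng.standard_normal((m, n))
Y = rng.standard_normal((n, n))   # nonsingular with probability one
X = Y @ Y.T                       # symmetric positive definite, per the Note above

def area_error(D, T, X):
    """E = tr((DX - T)^T (D - T X^{-1})), the error measure in (8)."""
    return np.trace((D @ X - T).T @ (D - T @ np.linalg.inv(X)))

T = rng.standard_normal((m, n))
print(area_error(D, T, X) >= 0)                # True for any SPD X
print(np.isclose(area_error(D, D @ X, X), 0))  # True: E = 0 when DX = T
```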
The following results about the trace operator are also well-known; e.g., see [21]. ###### Lemma 1. For a nonsingular matrix $P\in{\mathbb{R}}^{n\times n}$ and arbitrary matrices $Y\in{\mathbb{R}}^{n\times n}$, $A\in{\mathbb{R}}^{m\times n}$ and $B\in{\mathbb{R}}^{n\times m}$ we have (1) $\mathop{\mathrm{tr}}(Y)=\mathop{\mathrm{tr}}(P^{-1}YP)$; (2) $\mathop{\mathrm{tr}}(AB)=\mathop{\mathrm{tr}}(BA)$. ###### Lemma 2. The error $E$, defined by (8), is equal to (10) $E=\|DY-TY^{-T}\|_{F}^{2},$ where $X=YY^{T}$ and $\|.\|_{F}$ denotes the Frobenius norm of a matrix. ###### Proof. Substituting $X=YY^{T}$ in (8) and using Lemma 1, we get $E=\mathop{\mathrm{tr}}((DX-T)^{T}(D-TX^{-1}))=\mathop{\mathrm{tr}}((DX-T)^{T}(DX-T)X^{-1})=\mathop{\mathrm{tr}}((DX-T)^{T}(DX-T)Y^{-T}Y^{-1})=\mathop{\mathrm{tr}}(Y^{-1}(DX-T)^{T}(DX-T)Y^{-T})=\mathop{\mathrm{tr}}({(DXY^{-T}-TY^{-T})}^{T}(DXY^{-T}-TY^{-T}))=\mathop{\mathrm{tr}}({(DY-TY^{-T})}^{T}(DY-TY^{-T}))=\|DY-TY^{-T}\|_{F}^{2}.$ ∎ Considering this new formulation for $E$, it can be concluded that, by use of our newly defined EIV model, computing a symmetric and positive definite solution to the overdetermined system of equations $DX\simeq T$ is equivalent to computing a nonsingular matrix $Y\in{\mathbb{R}}^{n\times n}$ as the solution of $\min\|DY-TY^{-T}\|_{F}^{2},$ and letting $X=YY^{T}$. A similar result is obtained by considering the overdetermined system $DX\simeq T$ with $X=YY^{T}$ and multiplying both sides by $Y^{-T}$. We have $DYY^{T}\simeq T,$ or equivalently, (11) $DY\simeq TY^{-T}.$ Now, to assign a solution to (11), it makes sense to minimize the norm of the residual. Thus, to compute $X=YY^{T}$, it is sufficient to let $Y$ be the solution of $\min\|DY-TY^{-T}\|_{F}^{2}.$ Note An appropriate characteristic of the error formulation proposed by (8) is that, for a symmetric and positive definite matrix $X$, the value of $E$ is nonnegative and it is equal to zero if and only if $DX=T$. ## 3 Mathematical Solution: Full Rank Data Matrix Here, we develop an algorithm for solving (9) under the assumption that $D$ has full column rank. Using Lemma 1, with $X$ being symmetric, we have $\mathop{\mathrm{tr}}({(DX-T)}^{T}(D-TX^{-1}))=\mathop{\mathrm{tr}}(D^{T}DX+X^{-1}T^{T}T)-2\mathop{\mathrm{tr}}(T^{T}D).$ So, (9) can be written as (12) $\min\mathop{\mathrm{tr}}(AX+X^{-1}B),$ where $A=D^{T}D$ and $B=T^{T}T$, and the symmetric and positive definite matrix $X$ is to be computed. ###### Corollary 3. For each $X^{\ast}$ satisfying the first order necessary conditions of (12), the sufficient optimality conditions described in Theorem LABEL:14 are satisfied, and since $\Phi(X)=\mathop{\mathrm{tr}}(AX+X^{-1}B)$ is convex on the cone of symmetric positive definite matrices, we can confirm that the symmetric positive definite matrix satisfying the KKT necessary conditions mentioned in Theorem LABEL:13 is the unique global solution of (12). ### Computing the positive definite matrix satisfying the KKT conditions As mentioned in Theorem LABEL:13, the KKT conditions lead to the nonlinear matrix equation (13) $XAX=B.$ Note that (13) is a special case of the continuous-time Riccati equation (CARE) [22] (14) $A^{T}XE+E^{T}XA-(E^{T}XB+S)R^{-1}(B^{T}XE+S^{T})+Q=0,$ with $R=0$, $E=\frac{A}{2}$ and $Q=-B$.
There is a MATLAB routine to solve the CARE for arbitrary values of $A$, $E$, $B$, $S$, $R$ and $Q$. To use the routine, it is sufficient to type the command X=care(A,B,Q,R,S,E), with the input arguments as in (14). Higham [22] developed an effective method for computing the positive definite solution to this special CARE when $A$ and $B$ are symmetric and positive definite, using well-known decompositions. Lancaster and Rodman [28] also discussed solving different types of algebraic Riccati equations. Moreover, they derived a perturbation analysis for these matrix equations. Note (QR decomposition) The QR decomposition [27] of a matrix $A\in{\mathbb{R}}^{m\times n}$ with $m\geq n$ is a decomposition of the form $A=QR$, where $R$ is an $m\times n$ upper triangular matrix and $Q$ satisfies $QQ^{T}=Q^{T}Q=I$. Moreover, if $A$ has full column rank, then $R$ also has full column rank. Note (Cholesky decomposition) A Cholesky decomposition [27] of a symmetric positive definite matrix $A\in{\mathbb{R}}^{n\times n}$ is a decomposition of the form $A=R^{T}R$, where $R$, known as the Cholesky factor of $A$, is an $n\times n$ nonsingular upper triangular matrix. Note (Spectral decomposition) [27] All eigenvalues of a symmetric matrix $A\in{\mathbb{R}}^{n\times n}$ are real, and there exists an orthonormal matrix whose columns are the corresponding eigenvectors. Thus, there exist an orthonormal matrix $U$ with columns equal to the eigenvectors of $A$ and a diagonal matrix $D$ containing the eigenvalues such that $A=UDU^{T}$. Also, if $A$ is positive definite, then all of its eigenvalues are positive, and so we can set $D=S^{2}$. Thus, the spectral decomposition of a symmetric positive definite matrix $A$ is a decomposition of the form $A=US^{2}U^{T}$, where $U^{T}U=UU^{T}=I$ and $S$ is a diagonal matrix. ###### Theorem 4. [22] Assume $D,T\in{\mathbb{R}}^{m\times n}$ with $m\geq n$ are known and $rank(D)=rank(T)=n$. Let $D=QR$ be the QR factorization of $D$. Let $A=D^{T}D$ and $B=T^{T}T$. Define the matrix $\tilde{Q}=RBR^{T}$ and compute its spectral decomposition, that is, $\tilde{Q}=RBR^{T}=U{\tilde{S}}^{2}U^{T}$. Then, (12) has a unique solution, given by $X^{\ast}=R^{-1}U\tilde{S}U^{T}R^{-T}.$ ###### Proof. Based on Theorem LABEL:14 and the subsequent discussion, it is sufficient to show that $X^{\ast}$ satisfies the necessary optimality condition $X^{\ast}AX^{\ast}=B$. Note that from $D=QR$, we have $A=D^{T}D=R^{T}Q^{T}QR=R^{T}R.$ Substituting $X^{\ast}$, we have $X^{\ast}AX^{\ast}=R^{-1}U\tilde{S}U^{T}R^{-T}R^{T}RR^{-1}U\tilde{S}U^{T}R^{-T}=R^{-1}U{\tilde{S}}^{2}U^{T}R^{-T}=R^{-1}RBR^{T}R^{-T}=B.$ ∎ Note To compute $R$, it is also possible to first compute $A=D^{T}D$ and then calculate the Cholesky decomposition of $A$. However, for better numerical stability, the QR decomposition of $D$ is used in Theorem 4. We are now ready to outline the steps of our proposed algorithm.

Algorithm 1. Solving the EIV model for the positive definite linear system using QR decomposition.

1. Compute the QR decomposition of $D$ and let $D=QR$.
2. Let $\tilde{Q}=RBR^{T}$, where $B=T^{T}T$, and compute the spectral decomposition of $\tilde{Q}$, that is, $\tilde{Q}=U{\tilde{S}}^{2}U^{T}$.
3. Set $X^{\ast}=R^{-1}U\tilde{S}U^{T}R^{-T}$.
4. Set $E=\mathop{\mathrm{tr}}((DX^{\ast}-T)^{T}(D-T{X^{\ast}}^{-1}))$.

Note that Algorithm 1 computes the solution of (9) directly.
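For concreteness, the following NumPy sketch (an illustration under the stated full-column-rank assumptions, not the authors' code) carries out the steps of Algorithm 1 and verifies the optimality condition (13) numerically.

```python
import numpy as np

def pdeiv_qr(D, T):
    """Sketch of Algorithm 1: minimize tr(AX + X^{-1}B) over SPD X,
    with A = D^T D and B = T^T T, assuming D and T have full column rank."""
    Q, R = np.linalg.qr(D)            # step 1: D = QR, R upper triangular
    B = T.T @ T
    Qt = R @ B @ R.T                  # step 2: \tilde{Q} = R B R^T (SPD here)
    w, U = np.linalg.eigh(Qt)         # spectral decomposition, w > 0
    S = np.diag(np.sqrt(w))           # \tilde{S}
    Rinv = np.linalg.inv(R)
    X = Rinv @ U @ S @ U.T @ Rinv.T   # step 3: X^* = R^{-1} U \tilde{S} U^T R^{-T}
    E = np.trace((D @ X - T).T @ (D - T @ np.linalg.inv(X)))  # step 4
    return X, E

rng = np.random.default_rng(1)
D = rng.standard_normal((10, 4))
T = rng.standard_normal((10, 4))
X, E = pdeiv_qr(D, T)
A, B = D.T @ D, T.T @ T
print(np.allclose(X @ A @ X, B))                               # KKT condition (13)
print(np.allclose(X, X.T), np.all(np.linalg.eigvalsh(X) > 0))  # X is SPD
```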
The following theorem shows that, by use of the spectral decomposition of $A$, a method similar to the one introduced in [22] is at hand for solving this special continuous-time Riccati equation. ###### Theorem 5. Let $A=D^{T}D$ and $B=T^{T}T$ with $D,T\in{\mathbb{R}}^{m\times n}$, $m\geq n$ and $rank(D)=n.$ Let the spectral decomposition of $A$ be $A=US^{2}U^{T}$. Define the matrix $\tilde{Q}=SU^{T}BUS$ and compute its spectral decomposition, $\tilde{Q}=SU^{T}BUS=\bar{U}{\bar{S}}^{2}{\bar{U}}^{T}$. Then, the unique minimizer of (12) is $X^{\ast}=US^{-1}\bar{U}\bar{S}{\bar{U}}^{T}S^{-1}U^{T}.$ ###### Proof. Similar to the proof of Theorem 4, it is sufficient to show that the mentioned $X^{\ast}$ satisfies $X^{\ast}AX^{\ast}=B$. Substituting $X^{\ast}$, we have $X^{\ast}AX^{\ast}=US^{-1}\bar{U}\bar{S}{\bar{U}}^{T}S^{-1}U^{T}US^{2}U^{T}US^{-1}\bar{U}\bar{S}{\bar{U}}^{T}S^{-1}U^{T}=US^{-1}\bar{U}{\bar{S}}^{2}{\bar{U}}^{T}S^{-1}U^{T}=US^{-1}SU^{T}BUSS^{-1}U^{T}=B.$ ∎ Next, based on Theorem 5, we outline an algorithm for solving (9).

Algorithm 2. Solving the EIV model for the positive definite linear system using spectral decomposition.

1. Let $A=D^{T}D$ and compute its spectral decomposition: $A=US^{2}U^{T}$.
2. Let $\tilde{Q}=SU^{T}BUS$, where $B=T^{T}T$, and compute the spectral decomposition of $\tilde{Q}$, that is, $\tilde{Q}=\tilde{U}{\tilde{S}}^{2}{\tilde{U}}^{T}$.
3. Set $X^{\ast}=US^{-1}\tilde{U}\tilde{S}{\tilde{U}}^{T}S^{-1}U^{T}$.
4. Set $E=\mathop{\mathrm{tr}}((DX^{\ast}-T)^{T}(D-T{X^{\ast}}^{-1}))$.

In Section 4 we generalize our proposed method for solving the positive definite linear system of equations when the data matrix is rank deficient. ## 4 Mathematical Solution: Rank Deficient Data Matrix Since the data matrix $D$ is usually produced from experimental measurements, we may have $rank(D)<n$. Here, we generalize Algorithm 1 for solving (9), assuming that $rank(D)=r<n$. In Section 4.1 we outline two algorithms to compute the general solution of (9). It will be shown that, in general, (9) may not have a unique solution. Hence, in Section 4.2 we discuss how to find a particular solution of (9) having desirable characteristics for control problems. ### 4.1 General solution Based on Theorems LABEL:13 and LABEL:14, a symmetric positive definite matrix $X^{\ast}$ is a solution of (9) if and only if (15) $X^{\ast}AX^{\ast}=B.$ Therefore, in the following, we discuss how to find a symmetric positive definite matrix $X^{\ast}$ satisfying (15). First, we note that in case $D$ and $T$ are rank deficient, there might be no solution to (15), and if one exists, it is not necessarily unique; see, e.g., [22]. Higham [22] considered $X=B^{\frac{1}{2}}{(B^{\frac{1}{2}}AB^{\frac{1}{2}})}^{-\frac{1}{2}}B^{\frac{1}{2}}$ as a solution of (15), which is symmetric and positive semidefinite. However, we are interested in finding a symmetric positive definite solution to (15). Hence, in the following, first the necessary and sufficient conditions on $A$ and $B$ guaranteeing the existence of a positive definite solution to (15) are discussed. We then outline two algorithms to compute such a solution. Let the spectral decomposition of $A$ be $A=U\left(\begin{array}{cc}S^{2}&0\\ 0&0\end{array}\right)U^{T}$, where $S^{2}\in{\mathbb{R}}^{r\times r}$ is a diagonal matrix having the positive eigenvalues of $A$ as its diagonal entries.
Substituting the decomposition in (15), we get (16) $X^{\ast}U\left(\begin{array}{cc}S^{2}&0\\ 0&0\end{array}\right)U^{T}X^{\ast}=B.$ Since $U$ is orthonormal, (16) can be written as $U^{T}X^{\ast}U\left(\begin{array}{cc}S^{2}&0\\ 0&0\end{array}\right)U^{T}X^{\ast}U=U^{T}BU.$ Then, letting $\tilde{X}=U^{T}XU$ and $\tilde{B}=U^{T}BU$, we have (17) $\tilde{X}\left(\begin{array}{cc}S^{2}&0\\ 0&0\end{array}\right)\tilde{X}=\tilde{B}.$ Thus, the matrix $X=U\tilde{X}U^{T}$ is a solution of (9) if and only if $\tilde{X}$ is symmetric positive definite and satisfies (17). Substituting the block form $\tilde{X}=\left(\begin{array}{cc}{\tilde{X}}_{rr}&{\tilde{X}}_{r,n-r}\\ {\tilde{X}}_{n-r,r}&{\tilde{X}}_{n-r,n-r}\end{array}\right)$, where ${\tilde{X}}_{rr}\in{\mathbb{R}}^{r\times r}$, ${\tilde{X}}_{r,n-r}={\tilde{X}}_{n-r,r}^{T}\in{\mathbb{R}}^{r\times(n-r)}$ and ${\tilde{X}}_{n-r,n-r}\in{\mathbb{R}}^{(n-r)\times(n-r)}$, in (17) leads to $\left(\begin{array}{cc}{\tilde{X}}_{rr}S^{2}{\tilde{X}}_{rr}&{\tilde{X}}_{rr}S^{2}{\tilde{X}}_{r,n-r}\\ {\tilde{X}}_{n-r,r}S^{2}{\tilde{X}}_{rr}&{\tilde{X}}_{n-r,r}S^{2}{\tilde{X}}_{r,n-r}\end{array}\right)=\tilde{B}=\left(\begin{array}{cc}{\tilde{B}}_{rr}&{\tilde{B}}_{r,n-r}\\ {\tilde{B}}_{n-r,r}&{\tilde{B}}_{n-r,n-r}\end{array}\right),$ which is satisfied if and only if (18a) ${\tilde{X}}_{rr}S^{2}{\tilde{X}}_{rr}={\tilde{B}}_{rr},$ (18b) ${\tilde{X}}_{rr}S^{2}{\tilde{X}}_{r,n-r}={\tilde{B}}_{r,n-r},$ (18c) ${\tilde{X}}_{n-r,r}S^{2}{\tilde{X}}_{r,n-r}={\tilde{B}}_{n-r,n-r}.$ Before discussing how to compute $\tilde{X}$, we show that if (15) has a symmetric and positive definite solution, then ${\tilde{B}}_{rr}$ must be nonsingular. The matrix ${\tilde{X}}_{rr}$, being a principal submatrix of the positive definite matrix $\tilde{X}$, is nonsingular. The matrix $S$ is nonsingular by construction. Hence, it can be concluded from (18a) that ${\tilde{B}}_{rr}$ is nonsingular. Let $\bar{D}=S$ and suppose $\bar{T}$ satisfies ${\bar{T}}^{T}{\bar{T}}={\tilde{B}}_{rr}$. Consider problem (9) corresponding to the data and target matrices $\bar{D}$ and $\bar{T}$ as follows: (19) $\min\limits_{\bar{X}\succ 0}\mathop{\mathrm{tr}}((\bar{D}\bar{X}-\bar{T})^{T}(\bar{D}-\bar{T}{\bar{X}}^{-1})).$ We know from Theorems LABEL:13 and LABEL:14 that the necessary and sufficient optimality conditions for the unique solution of problem (19) imply (18a). Thus, ${\tilde{X}}_{rr}$ can be computed using Algorithm 1 with the input arguments $\bar{D}$ and $\bar{T}$. Substituting the computed ${\tilde{X}}_{rr}$ in (18b), the linear system of equations (20) ${\tilde{X}}_{rr}S^{2}{\tilde{X}}_{r,n-r}={\tilde{B}}_{r,n-r}$ arises, where ${\tilde{X}}_{rr},S^{2}\in{\mathbb{R}}^{r\times r}$ are known and ${\tilde{X}}_{r,n-r}\in{\mathbb{R}}^{r\times(n-r)}$ is to be computed. Since ${\tilde{X}}_{rr}$ is positive definite and $S^{2}$ is nonsingular, the coefficient matrix of the linear system (20) is nonsingular and ${\tilde{X}}_{r,n-r}$ can be computed uniquely. It is clear that since $\tilde{X}$ is symmetric, ${\tilde{X}}_{n-r,r}$ is the same as ${{\tilde{X}}_{r,n-r}}^{T}$. Now, we check whether the computed ${\tilde{X}}_{n-r,r}$ and ${\tilde{X}}_{r,n-r}$ satisfy (18c). Inconsistency of (18c) means that there is no symmetric positive definite matrix satisfying (18a)-(18c), and in that case, (9) has no solution.
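As an illustrative check (a NumPy sketch with assumed sizes, not from the paper), one can generate a consistent rank-deficient instance by choosing an SPD $X$ and setting $B=XAX$, so that (15) holds by construction, and then verify the compatibility among the blocks of $\tilde{B}$ implied by (18a)-(18c), namely ${\tilde{B}}_{n-r,r}{\tilde{B}}_{rr}^{-1}{\tilde{B}}_{r,n-r}={\tilde{B}}_{n-r,n-r}$.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 10, 5, 3

D = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank(D) = r < n
A = D.T @ D

Y = rng.standard_normal((n, n))
X = Y @ Y.T                      # a chosen SPD matrix
B = X @ A @ X                    # makes (15), X A X = B, consistent by design

w, U = np.linalg.eigh(A)
order = np.argsort(w)[::-1]      # put the r positive eigenvalues of A first
w, U = w[order], U[:, order]

Bt = U.T @ B @ U                 # \tilde{B} = U^T B U
Brr, Brn = Bt[:r, :r], Bt[:r, r:]
Bnr, Bnn = Bt[r:, :r], Bt[r:, r:]

# Block compatibility implied by (18a)-(18c); it fails for inconsistent data.
print(np.allclose(Bnr @ np.linalg.solve(Brr, Brn), Bnn))  # True here
```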
Thus, in solving a specific positive definite system with rank deficient data and target matrices using the presented EIV model, a straightforward way to investigate the existence of a solution is to check whether (18c) holds for the given data and target matrices. On the other hand, for numerical results, it is necessary to generate meaningful test problems. Hence, in the following two lemmas, we investigate the necessary and sufficient conditions for the satisfaction of (18c). ###### Lemma 6. Let the spectral decomposition of $A$ be determined as $A=U\left(\begin{array}{cc}S^{2}&0\\ 0&0\end{array}\right)U^{T},$ where $S^{2}\in{\mathbb{R}}^{r\times r}$ and $rank(A)=rank(B)=r$. The necessary and sufficient condition for the satisfaction of (18c) is $BU_{r}{({U_{r}}^{T}BU_{r})}^{-1}{U_{r}}^{T}B-B\in\mathop{\mathrm{Null}}({U_{n-r}}^{T}).$ ###### Proof. From (18a), we have (21) ${{\tilde{X}}_{rr}}^{-1}S^{-2}{{\tilde{X}}_{rr}}^{-1}={{\tilde{B}}_{rr}}^{-1},$ and from (18b), we get (22) ${\tilde{X}}_{r,n-r}=S^{-2}{{\tilde{X}}_{rr}}^{-1}{{\tilde{B}}_{r,n-r}},$ ${\tilde{X}}_{n-r,r}={\tilde{B}}_{n-r,r}{{\tilde{X}}_{rr}}^{-1}S^{-2}.$ Manipulating (18c) with (21) and (22), we get (23) ${\tilde{B}}_{n-r,r}{{\tilde{B}}_{rr}}^{-1}{\tilde{B}}_{r,n-r}={\tilde{B}}_{n-r,n-r}.$ Considering the block form $U=\left(\begin{array}{cc}U_{r}&U_{n-r}\end{array}\right)$, where $U_{r}\in{\mathbb{R}}^{n\times r}$ and $U_{n-r}\in{\mathbb{R}}^{n\times(n-r)}$, we have (27) $\tilde{B}=U^{T}BU=\left(\begin{array}{c}{U_{r}}^{T}\\ {U_{n-r}}^{T}\end{array}\right)B\left(\begin{array}{cc}U_{r}&U_{n-r}\end{array}\right)=\left(\begin{array}{cc}{U_{r}}^{T}BU_{r}&{U_{r}}^{T}BU_{n-r}\\ {U_{n-r}}^{T}BU_{r}&{U_{n-r}}^{T}BU_{n-r}\end{array}\right).$ Rewriting (23) results in (31) ${U_{n-r}}^{T}BU_{r}{({U_{r}}^{T}BU_{r})}^{-1}{U_{r}}^{T}BU_{n-r}={U_{n-r}}^{T}BU_{n-r},$ which is equivalent to (e.g., see [20]) (32) $BU_{r}{({U_{r}}^{T}BU_{r})}^{-1}{U_{r}}^{T}B=B+Z,$ where $Z\in{\mathbb{R}}^{n\times n}$ is in the null space of ${U_{n-r}}^{T}$. Thus, (15) has a positive definite solution if and only if (33) $BU_{r}{({U_{r}}^{T}BU_{r})}^{-1}{U_{r}}^{T}B-B\in\mathop{\mathrm{Null}}({U_{n-r}}^{T}).$ ∎ Note For real problems with arbitrary values of $D$ and $T$, the necessary and sufficient condition given in Lemma 6 may not be satisfied, in general. Hence, we propose a threshold to determine whether (34) $F={U_{n-r}}^{T}\left(BU_{r}{({U_{r}}^{T}BU_{r})}^{-1}{U_{r}}^{T}B-B\right)$ is close enough to zero. In the following, we show that if $\|F\|<\delta$, for a sufficiently small scalar $\delta$, then ${\tilde{X}}_{r,n-r}$ computed from (18b) is a proper approximation for the solution of (18c).
Substituting $F$ in (31), we have (35) ${\tilde{B}}_{n-r,r}{{\tilde{B}}_{rr}}^{-1}{\tilde{B}}_{r,n-r}-{\tilde{B}}_{n-r,n-r}=FU_{n-r},$ and (36) ${\tilde{X}}_{n-r,r}S^{2}{\tilde{X}}_{r,n-r}-{\tilde{B}}_{n-r,n-r}=FU_{n-r}.$ Let $X^{\ast}$ satisfy (18c), that is, (37) ${X^{\ast}}_{n-r,r}S^{2}{X^{\ast}}_{r,n-r}-{\tilde{B}}_{n-r,n-r}=0.$ Then, we have (38) ${\tilde{X}}_{n-r,r}S^{2}{\tilde{X}}_{r,n-r}-{X^{\ast}}_{n-r,r}S^{2}{X^{\ast}}_{r,n-r}=FU_{n-r}.$ Letting $\tilde{Y}=S{\tilde{X}}_{r,n-r}$ and $Y^{\ast}=S{X^{\ast}}_{r,n-r}$ in (38), we get (39) ${\tilde{Y}}^{T}\tilde{Y}-{Y^{\ast}}^{T}Y^{\ast}=FU_{n-r}$ and (40) ${\tilde{y}}_{i}^{T}{\tilde{y}}_{j}-{y_{i}^{\ast}}^{T}y_{j}^{\ast}=(FU_{n-r})_{ij},$ where ${\tilde{y}}_{i}$ and $y_{i}^{\ast}$ are the $i$th columns of $\tilde{Y}$ and $Y^{\ast}$, respectively. Now, since the 2-norm of each column of $U_{n-r}$ is equal to one, every entry of $U_{n-r}$ is at most one in absolute value. Moreover, under the assumption $\|F\|<\delta$, none of the entries of $F$ is greater than $\delta$ in absolute value. Hence, we have (41) $|{(FU_{n-r})}_{ij}|=|f_{i}^{T}u_{j}|\leq|f_{i1}|+\cdots+|f_{i(n-r)}|<(n-r)\delta,$ where $f_{i}^{T}$ and $u_{j}$ are the $i$th row of $F$ and the $j$th column of $U_{n-r}$, respectively. Now, (40) together with (41) gives (42) $|{\tilde{y}}_{i}^{T}{\tilde{y}}_{j}-{y_{i}^{\ast}}^{T}y_{j}^{\ast}|<{(n-r)}\delta.$ Hence, there is a constant $c_{ij}$ such that (43) $|{\tilde{y}}_{ij}-y_{ij}^{\ast}|<c_{ij},$ where ${\tilde{y}}_{ij}$ and $y_{ij}^{\ast}$ are the $(i,j)$th entries of $\tilde{Y}$ and $Y^{\ast}$, respectively. Letting $S=diag(s_{1},\cdots,s_{r})$, from (43) we get (44) $|s_{i}||({\tilde{X}}_{n-r,r})_{ij}-(X^{\ast}_{n-r,r})_{ij}|\leq c_{ij},$ for $i=1,\cdots,r$ and $j=1,\cdots,n-r$, and ${\|{\tilde{X}}_{r,n-r}-{X^{\ast}}_{r,n-r}\|}\leq C.$ Hence, assuming ${\tilde{X}}_{r,r}={X^{\ast}}_{r,r},$ we have $\|\tilde{X}-X^{\ast}\|<\alpha$ for some constant $\alpha$, which means that if $FU_{n-r}$ is close enough to zero, then the solution computed from the approximate satisfaction of (18c) is close enough to the exact solution. In the following lemma, we give a sufficient condition which guarantees the existence of a solution for (15). We later use this result to generate consistent test problems in Section 6. ###### Lemma 7. Let the spectral decomposition of $B$ be $B=V\left(\begin{array}{cc}{\sum}^{2}&0\\ 0&0\end{array}\right)V^{T},$ where ${\sum}^{2}\in{\mathbb{R}}^{r\times r}$ and $rank(A)=rank(B)=r$. A sufficient condition for the satisfaction of (18c) is that (45) $V=U\left(\begin{array}{cc}Q&0\\ 0&P\end{array}\right),$ where $Q\in{\mathbb{R}}^{r\times r}$ and $P\in{\mathbb{R}}^{(n-r)\times(n-r)}$ satisfy $QQ^{T}=Q^{T}Q=I$ and $PP^{T}=P^{T}P=I$. ###### Proof. A possible choice for $Z$ in (32) is zero, for which (32) is equivalent to (46) $U_{r}{({U_{r}}^{T}BU_{r})}^{-1}{U_{r}}^{T}=B^{+}+W,$ with $W\in{\mathbb{R}}^{n\times n}$ in the null space of $B$. To obtain a simplified sufficient condition for the existence of a positive definite solution to (15), we let $W=0$.
Multiplying (46) by ${U_{r}}^{T}$ and $U_{r}$ respectively on the left and right, and substituting the spectral decomposition of $B$, we get (47) ${({U_{r}}^{T}V_{r}{\sum}^{2}{V_{r}}^{T}U_{r})}^{-1}={U_{r}}^{T}B^{+}U_{r}={U_{r}}^{T}V_{r}{\sum}^{-2}{V_{r}}^{T}U_{r}.$ Letting $M={U_{r}}^{T}V_{r}$, we get ${(M{\sum}^{2}M^{T})}^{-1}=M{\sum}^{-2}M^{T}.$ Since $M$ has full rank, we get (48) $M^{-T}{\sum}^{-2}M^{-1}=M{\sum}^{-2}M^{T}.$ Now, since ${\sum}^{-2}$ is nonsingular, (48) holds if and only if (49) $M^{T}M=I.$ This leads to (50) ${({U_{r}}^{T}V_{r})}^{T}{U_{r}}^{T}V_{r}={V_{r}}^{T}U_{r}{U_{r}}^{T}V_{r}=I.$ Since $U$ is orthonormal, we have $UU^{T}=U_{r}{U_{r}}^{T}+U_{n-r}{U_{n-r}}^{T}=I$. Hence, we get (51) $U_{r}{U_{r}}^{T}=I-U_{n-r}{U_{n-r}}^{T}.$ Substituting (51) in (50), we get ${V_{r}}^{T}(I-U_{n-r}{U_{n-r}}^{T})V_{r}=I-{V_{r}}^{T}U_{n-r}{U_{n-r}}^{T}V_{r}=I,$ which is satisfied if and only if ${U_{n-r}}^{T}V_{r}=0$. Since the columns of $U_{r}$ form an orthogonal basis for the null space of ${U_{n-r}}^{T}$ [27], it can be concluded that each column of $V_{r}$ is a linear combination of the columns of $U_{r}$. Thus, (52) $V_{r}=U_{r}Q$ is a necessary condition for (49) to be satisfied, and since both $U_{r}$ and $V_{r}$ have orthonormal columns, $Q\in{\mathbb{R}}^{r\times r}$ satisfies $QQ^{T}=Q^{T}Q=I$. On the other hand, we know from the definition of the spectral decomposition that $VV^{T}=UU^{T}=I$. Thus, (53) $V_{r}{V_{r}}^{T}+V_{n-r}{V_{n-r}}^{T}=I,\quad U_{r}{U_{r}}^{T}+U_{n-r}{U_{n-r}}^{T}=I.$ Manipulating (52) with (53), we get (54) $V_{n-r}{V_{n-r}}^{T}=U_{n-r}{U_{n-r}}^{T},$ which holds if and only if there exists a matrix $P\in{\mathbb{R}}^{(n-r)\times(n-r)}$ such that $PP^{T}=P^{T}P=I$ and (55) $V_{n-r}=U_{n-r}P.$ It can be concluded from (52) and (55) that $V=U\left(\begin{array}{cc}Q&0\\ 0&P\end{array}\right)$, where $QQ^{T}=Q^{T}Q=I$ and $PP^{T}=P^{T}P=I$. ∎ ###### Corollary 8. The matrices $P$ and $Q$ defined in Lemma 7 can be set to be rotation matrices [27], which satisfy $PP^{T}=P^{T}P=I$ and $QQ^{T}=Q^{T}Q=I.$ Thus, to compute a target matrix $T$ satisfying the condition of Lemma 6, it is sufficient to first compute $V$ from (45), with $Q\in{\mathbb{R}}^{r\times r}$ and $P\in{\mathbb{R}}^{(n-r)\times(n-r)}$ arbitrary rotation matrices and $U$ as defined in Lemma 6, and then set $T=\bar{U}\left(\begin{array}{cc}\sum&0\\ 0&0\end{array}\right)V^{T}$, where $\bar{U}\in{\mathbb{R}}^{m\times m}$ and $\sum\in{\mathbb{R}}^{r\times r}$ are arbitrary orthonormal and diagonal matrices, respectively. Thus, problem (9) has a solution if and only if the data and target matrices satisfy (33). In this case, ${\tilde{X}}_{rr}$, ${\tilde{X}}_{r,n-r}$ and its transpose, ${\tilde{X}}_{n-r,r}$, are respectively computed from (18a) and (18b). Hence, the only remaining step is to compute ${\tilde{X}}_{n-r,n-r}$ so that $\tilde{X}$ is symmetric and positive definite. We know that $\tilde{X}$ is symmetric positive definite if and only if there exists a nonsingular lower triangular matrix $L\in{\mathbb{R}}^{n\times n}$ such that (56) $\tilde{X}=LL^{T}.$
Considering the block forms $\tilde{X}=\left(\begin{array}{cc}{\tilde{X}}_{rr}&{\tilde{X}}_{r,n-r}\\ {\tilde{X}}_{n-r,r}&{\tilde{X}}_{n-r,n-r}\end{array}\right)$ and $L=\left(\begin{array}{cc}L_{rr}&0\\ L_{n-r,r}&L_{n-r,n-r}\end{array}\right)$, where $L_{n-r,r}$ is an $(n-r)\times r$ matrix and $L_{rr}\in{\mathbb{R}}^{r\times r}$ and $L_{n-r,n-r}\in{\mathbb{R}}^{(n-r)\times(n-r)}$ are nonsingular lower triangular matrices, we get (64) $\left(\begin{array}{cc}{\tilde{X}}_{rr}&{\tilde{X}}_{r,n-r}\\ {\tilde{X}}_{n-r,r}&{\tilde{X}}_{n-r,n-r}\end{array}\right)=\left(\begin{array}{cc}L_{rr}&0\\ L_{n-r,r}&L_{n-r,n-r}\end{array}\right)\left(\begin{array}{cc}{L_{rr}}^{T}&{L_{n-r,r}}^{T}\\ 0&{L_{n-r,n-r}}^{T}\end{array}\right).$ Thus, (65a) ${\tilde{X}}_{rr}=L_{rr}{L_{rr}}^{T},$ (65b) ${\tilde{X}}_{r,n-r}=L_{rr}{L_{n-r,r}}^{T},$ (65c) ${\tilde{X}}_{n-r,r}=L_{n-r,r}{L_{rr}}^{T},$ (65d) ${\tilde{X}}_{n-r,n-r}=L_{n-r,r}{L_{n-r,r}}^{T}+L_{n-r,n-r}{L_{n-r,n-r}}^{T}.$ Therefore, to compute a symmetric positive definite $\tilde{X}$, (65a)–(65d) must be satisfied. Let ${\tilde{X}}_{rr}=\tilde{L}{\tilde{L}}^{T}$ be the Cholesky decomposition of ${\tilde{X}}_{rr}$; then $L_{rr}=\tilde{L}$ satisfies (65a). Substituting $L_{rr}$ in (65b), ${L_{n-r,r}}^{T}$ is computed uniquely by solving the resulting linear system. Since (65c) is the transpose of (65b), it does not give any additional information. Finally, to compute a matrix ${\tilde{X}}_{n-r,n-r}$ satisfying (65d), it is sufficient to choose an arbitrary lower triangular nonsingular matrix $L_{n-r,n-r}$ and substitute it in (65d). The resulting ${\tilde{X}}_{n-r,n-r}$ gives a symmetric positive definite $\tilde{X}$ as follows: $\tilde{X}=\left(\begin{array}{cc}{\tilde{X}}_{rr}&{\tilde{X}}_{r,n-r}\\ {\tilde{X}}_{n-r,r}&{\tilde{X}}_{n-r,n-r}\end{array}\right).$ Now, based on the above discussion, we outline the steps of our algorithm for solving (9) in the case $rank(D)=r<n.$

Algorithm 3. Solving the EIV model for the positive definite linear system with rank deficient data and target matrices using spectral decomposition. ($\delta$, the upper bound for the absolute error, is taken to be close to the machine (or user's) zero.)

1. Let $A=D^{T}D$ and compute its spectral decomposition: $A=U\left(\begin{array}{cc}S^{2}&0\\ 0&0\end{array}\right)U^{T}.$
2. Let $B=T^{T}T$ and $\tilde{B}=U^{T}BU$.
3. Compute $rank(D)=r$ and let ${\tilde{B}}_{rr}=\tilde{B}(1:r,1:r)$, ${\tilde{B}}_{r,n-r}=\tilde{B}(1:r,r+1:n)$, ${\tilde{B}}_{n-r,n-r}=\tilde{B}(r+1:n,r+1:n)$.
4. Let $\bar{D}=S$ and assume $\bar{T}$ satisfies ${\tilde{B}}_{rr}={\bar{T}}^{T}{\bar{T}}$. Perform Algorithm 1 with input parameters $D=\bar{D}$ and $T=\bar{T}$, and let ${\tilde{X}}_{rr}=X^{\ast}$.
5. Solve the linear system (18b) to compute ${\tilde{X}}_{r,n-r}$ and let ${\tilde{X}}_{n-r,r}={{\tilde{X}}_{r,n-r}}^{T}$.
6. Compute the spectral decomposition of $B$, that is, $B=V\left(\begin{array}{cc}D^{2}&0\\ 0&0\end{array}\right)V^{T},$ and compute $M={U_{r}}^{T}V_{r}$.
7. If $\|{U_{n-r}}^{T}(BU_{r}{({U_{r}}^{T}BU_{r})}^{-1}{U_{r}}^{T}B-B)\|\geq\delta$, stop ((9) has no solution). Else:
8. Let the Cholesky decomposition of ${\tilde{X}}_{rr}$ be ${\tilde{X}}_{rr}=\tilde{L}{\tilde{L}}^{T}$ and set $L_{rr}=\tilde{L}$.
9. Solve the lower triangular system (65b) to compute $L_{n-r,r}$.
10. Let $L_{n-r,n-r}\in{\mathbb{R}}^{(n-r)\times(n-r)}$ be an arbitrary nonsingular lower triangular matrix and compute ${\tilde{X}}_{n-r,n-r}$ using (65d).
11. Let $\tilde{X}=\left(\begin{array}{cc}{\tilde{X}}_{rr}&{\tilde{X}}_{r,n-r}\\ {\tilde{X}}_{n-r,r}&{\tilde{X}}_{n-r,n-r}\end{array}\right)$ and $X^{\ast}=U\tilde{X}U^{T}$.
12. Compute $E=\mathop{\mathrm{tr}}((DX^{\ast}-T)^{T}(D-T{X^{\ast}}^{-1})).$ EndIf.

Next, we show how to use the complete orthogonal decomposition of the data matrix $D$ instead of the spectral decomposition of $A$. Note (Complete orthogonal decomposition) [27] Let $A\in{\mathbb{R}}^{m\times n}$ be an arbitrary matrix with $rank(A)=r$. There exist an upper triangular matrix $R\in{\mathbb{R}}^{r\times r}$ and matrices $U\in{\mathbb{R}}^{m\times m}$ and $V\in{\mathbb{R}}^{n\times n}$ with $UU^{T}=U^{T}U=I$ and $VV^{T}=V^{T}V=I$ such that $A=U\left(\begin{array}{cc}R&0\\ 0&0\end{array}\right)V^{T}.$ Next, Algorithm 4 is presented using the complete orthogonal decomposition of $D$.

Algorithm 4. Solving the EIV model for the positive definite linear system with rank deficient data and target matrices using complete orthogonal decomposition. ($\delta$, the upper bound for the absolute error, is taken to be close to the machine (or user's) zero.)

1. Compute the complete orthogonal decomposition of $D$, that is, $D=U\left(\begin{array}{cc}R&0\\ 0&0\end{array}\right)V^{T}.$
2. Let $A=D^{T}D=V_{r}R^{T}R{V_{r}}^{T}$, $B=T^{T}T$ and $\tilde{B}=V^{T}BV$, where $V_{r}$ consists of the first $r$ columns of $V$.
3. Compute $rank(D)=r$ and let ${\tilde{B}}_{rr}=\tilde{B}(1:r,1:r)$, ${\tilde{B}}_{r,n-r}=\tilde{B}(1:r,r+1:n)$, ${\tilde{B}}_{n-r,n-r}=\tilde{B}(r+1:n,r+1:n)$.
4. Let $\bar{D}=R$ and assume $\bar{T}$ satisfies ${\tilde{B}}_{rr}={\bar{T}}^{T}{\bar{T}}$. Perform Algorithm 1 with input parameters $D=\bar{D}$ and $T=\bar{T}$, and let ${\tilde{X}}_{rr}=X^{\ast}$.
5. Solve the linear system (18b) to compute ${\tilde{X}}_{r,n-r}$ and let ${\tilde{X}}_{n-r,r}={{\tilde{X}}_{r,n-r}}^{T}$.
6. Compute the spectral decomposition of $B$, that is, $B=V\left(\begin{array}{cc}D^{2}&0\\ 0&0\end{array}\right)V^{T},$ and compute $M={U_{r}}^{T}V_{r}$.
7. If $\|{U_{n-r}}^{T}(BU_{r}{({U_{r}}^{T}BU_{r})}^{-1}{U_{r}}^{T}B-B)\|\geq\delta$, stop ((9) has no solution). Else:
8. Let the Cholesky decomposition of ${\tilde{X}}_{rr}$ be ${\tilde{X}}_{rr}=\tilde{L}{\tilde{L}}^{T}$ and set $L_{rr}=\tilde{L}$.
9. Solve the lower triangular system (65b) to compute $L_{n-r,r}$.
10. Let $L_{n-r,n-r}\in{\mathbb{R}}^{(n-r)\times(n-r)}$ be an arbitrary nonsingular lower triangular matrix and compute ${\tilde{X}}_{n-r,n-r}$ using (65d).
11. Let $\tilde{X}=\left(\begin{array}{cc}{\tilde{X}}_{rr}&{\tilde{X}}_{r,n-r}\\ {\tilde{X}}_{n-r,r}&{\tilde{X}}_{n-r,n-r}\end{array}\right)$ and $X^{\ast}=U\tilde{X}U^{T}$.
12. Compute $E=\mathop{\mathrm{tr}}((DX^{\ast}-T)^{T}(D-T{X^{\ast}}^{-1})).$ EndIf.

Thus, based on the above study, the computational complexity of PDEIV-QR is lower than that of PDEIV-Spec for all matrix sizes. But, for the case of a rank deficient data matrix, depending on the matrix size and rank, either of the algorithms PDEIV-RD-Spec and PDEIV-RD-COD may have a lower computational complexity. ## References * [1] Alizadeh F., Haeberly J.-P. A., Overton M. L.: Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results, SIAM J. Optim., 8, 746-768 (1998) * [2] Aubry A., Maio A. D., Pallotta L., Farina A.: Maximum likelihood estimation of a structured covariance matrix with a condition number constraint, IEEE Trans. On Signal Processing, 60(6), 3004-3021 (2012) * [3] Cheng C.
L., Kukush A., Mastronardi N., Paige C., Van Huffel S.: Total Least Squares and Errors-in-variables Modeling, Comput Stat Data An, 52, 1076-1079 (2007) * [4] Deng Y., Boley D.: On the Optimal Approximation for the Symmetric Procrustes Problems of the Matrix Equation AXB = C, Proceedings of the International Conference on Computational and Mathematical Methods in Science and Engineering, Chicago, 159-168 (2007) * [5] Dolan E. D., Moré J. J.: Benchmarking optimization software with performance profiles, Mathematical Programming, 91, 201-213 (2002) * [6] Golub G. H., Van Loan C. F.: An analysis of the total least squares problem, SIAM J. Numer. Anal., 17, 883-893 (1980) * [7] Hayami K., Yin J. F., Ito T.: GMRES method for least squares problems, SIAM. J. Matrix Anal. and Appl., 31(5), 2400-2430 (2010) * [8] Hnětynková I., Plešinger M., Sima D. M., Strakoš Z., Van Huffel S.: The total least squares problem in $AX\approx B$: a new classification with the relationship to the classical works, SIAM J. Matrix Anal. Appl., 32(3), 748-770 (2011) * [9] Hu H., Olkin I.: A numerical procedure for finding the positive definite matrix closest to a patterned matrix, Statistics and Probability Letters, 12, 511-515 (1991) * [10] Hu H.: Positive definite constrained least-squares estimation of matrices, Linear Algebra and its Applications, 229, 167-174 (1995) * [11] Van Huffel S., Vandewalle J.: Algebraic connections between the least squares and total least squares problems, Numer. Math., 55, 431-449 (1989) * [12] Kang B., Jung S., Park P.: A new iterative method for solving total least squares problem, Proceedings of the 8th Asian Control Conference (ASCC), Kaohsiung, Taiwan (2011) * [13] Larson H. J.: Least squares estimation of the components of a symmetric matrix, Technometrics, 8(2), 360-362 (1966) * [14] McInroy J., Hamann J. C.: Design and control of flexure jointed hexapods, IEEE Trans. Robotics and Automation, 16(4), 372-381 (2000) * [15] Moré J. J., Wild S. M.: Benchmarking derivative-free optimization algorithms, SIAM J. Optim., 20, 172-191 (2009) * [16] Paige C. C., Strakoš Z.: Scaled total least squares fundamentals, Numer. Math., 91, 117-146 (2000) * [17] Poignet P., Gautier M.: Comparison of Weighted Least Squares and Extended Kalman Filtering Methods for Dynamic Identification of Robots, Proceedings of the IEEE Conference on Robotics and Automation, San Francisco, CA, USA, 3622-3627 (2000) * [18] Woodgate K. G.: Least-squares solution of $F=PG$ over positive semidefinite symmetric P, Linear Algebra Appl., 245, 171-190 (1996) * [19] Zhou L., Lin L., Wei Y., Qiao S.: Perturbation analysis and condition numbers of scaled total least squares problems, Numer. Algorithms, 51, 381-399 (2009) * [20] Banerjee S., Roy A.: Quadratic Forms, Linear Algebra and Matrix Analysis for Statistics, Chapman Hall/CRC Texts in Statistical Sciences, 441-442 (2014) * [21] Gill P. E., Murray W., Wright M. H.: Numerical Linear Algebra and Optimization, Addison Wesley (1991) * [22] Higham N. J.: Functions of Matrices: Theory and Computation, SIAM, Philadelphia (2008) * [23] Horn R. A., Johnson C. R.: Topics in Matrix Analysis, Cambridge University Press (1991) * [24] Van Huffel S., Vandewalle J.: The Total Least Squares Problem: Computational Aspects and Analysis, SIAM, Philadelphia (1991) * [25] Krislock N. G.: Numerical Solution of Semidefinite Constrained Least Squares Problems, M. Sc. Thesis, University of British Columbia (2003) * [26] Demmel J.
W.: Applied Numerical Linear Algebra, 3rd edition, SIAM, Philadelphia (1996) * [27] Golub G. H., Van Loan C. F.: Matrix Computations, 4th edition, JHU Press (2012) * [28] Lancaster P., Rodman L.: Algebraic Riccati Equations, Clarendon Press (1995) * [29] Magnus J. R., Neudecker H.: Matrix Differential Calculus with Applications in Statistics and Econometrics, 2nd edition, John Wiley & Sons (1999) * [30] Nocedal J., Wright S. J.: Numerical Optimization, Springer, New York (1999) * [31] Higham N. J.: Computing the nearest correlation matrix (a problem from finance), MIMS EPrint: 2006.70, http://eprints.ma.man.ac.uk/, (2006). Accessed 26 June 2012 * [32] Petersen K. B., Pedersen M. S.: The Matrix Cookbook, http://orion.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf, (2008). Accessed 11 January 2013 * [33] Vershynin R.: Introduction to the non-asymptotic analysis of random matrices, http://arxiv.org/pdf/1011.3027v7.pdf, (2011). Accessed 01 February 2013 * [34] American Mathematical Society, Eigenvalues and sums of Hermitian matrices, http://www.ams.org/bookstore/pspdf/gsm132prev.pdf, (2009). Accessed 18 March 2013
# Dummy Prototypical Networks for Few-Shot Open-Set Keyword Spotting ###### Abstract Keyword spotting is the task of detecting a keyword in streaming audio. Conventional keyword spotting targets predefined keyword classification, but there is growing attention to few-shot (query-by-example) keyword spotting, e.g., $N$-way classification given $M$-shot support samples. Moreover, in real-world scenarios, there can be utterances from unexpected categories (open-set) which need to be rejected rather than classified as one of the $N$ classes. Combining the two needs, we tackle few-shot open-set keyword spotting with a new benchmark setting, named splitGSC. We propose episode-known dummy prototypes based on metric learning to better detect an open-set, and introduce a simple and powerful approach, Dummy Prototypical Networks (D-ProtoNets). Our D-ProtoNets shows clear margins compared to recent few-shot open-set recognition (FSOSR) approaches in the suggested splitGSC. We also verify our method on a standard benchmark, miniImageNet, and D-ProtoNets shows the state-of-the-art open-set detection rate in FSOSR. Index Terms: Few-shot learning, Open-set Recognition, Keyword Spotting, Dummy Prototype, Prototypical Networks ## 1 Introduction Keyword spotting (KWS) detects keywords like “Hey, Google” and “Hey, Siri” in streaming audio. KWS systems usually target edge devices such as mobile phones and smart speakers, and previous studies have concentrated on better network designs in terms of detection rate [1, 2] and computational cost [3, 4] while targeting multi-keyword classification with the Google speech commands dataset (GSC) versions 1 and 2 [5]. Recently, there has been growing attention to query-by-example (few-shot) keyword spotting systems [6, 7, 8, 9]. Few-shot learning (FSL) absorbs knowledge from a training dataset and leverages the knowledge to adapt to evaluation tasks of unseen categories using only a few labeled (support) samples. However, on top of FSL, real-world scenarios naturally meet utterances of unexpected categories without support examples, and neural networks tend to be over-confident [10] and can misclassify those unexpected samples as one of the FSL classes. Thus, there is a need to detect those unseen open-set classes (Open-Set Recognition (OSR) [11]). This work introduces OSR to few-shot keyword spotting, which yields a more challenging setting: few-shot open-set recognition (FSOSR) [12] for KWS. Figure 1 shows example episodes of FSL and FSOSR while using an FSL method, Prototypical Networks (ProtoNets) [13]. In an $N$-way $M$-shot episode, the goal of FSL is correctly classifying $N$ classes that are unseen during training but known using $M$ support samples for each. FSL does not consider open-set classes out of the $N$ classes. On the other hand, FSOSR needs to distinguish an unknown open-set from the known classes while still performing FSL. FSOSR is more challenging than conventional OSR because the open-set changes over episodes based on the choice of the $N$ classes (Figure 1, bottom). Thus, a desirable FSOSR method needs to adapt to the varying open-set. We predict episode-specific (episode-known) dummies based on the support examples in each episode and classify an open-set as the dummies. Using the episode-known dummies, we propose Dummy Prototypical Networks (D-ProtoNets). For few-shot open-set keyword spotting (FSOS-KWS), we introduce a benchmark setting named splitGSC, a subset of GSC ver2. Our D-ProtoNets achieves state-of-the-art (SOTA) performance in splitGSC.
We also verify D-ProtoNets on miniImageNet [14], a widely used FSL benchmark, and D-ProtoNets is better at detecting the open-set than the other baselines. Figure 1: Example episodes: In few-shot learning, decision boundaries classify classes given a few support samples. Few-shot open-set recognition is required to distinguish samples of unknown categories (open-set) from those of known classes. The open-set can vary over episodes in FSOSR. Figure 2: Dummy Prototypical Network. $f_{\phi}(\cdot)$ and $g_{\varphi}(\cdot)$ are the encoder and the dummy generator, respectively. ‘K’ and ‘U’ stand for the set of known classes having support samples and the unknown open-set, respectively, and $d(\cdot)$ is a distance metric. ## 2 Related Works The literature on few-shot learning and open-set recognition is vast; thus, we focus on the most relevant works. Few-Shot Learning. Few-shot learning has three popular branches: adaptation, hallucination, and metric-learning methods. The adaptation methods [15] make a model easy to fine-tune in the low-shot regime, and the hallucination methods [16] augment training examples for data-starved classes. Our approach aligns with the last one, metric-based learning [13, 14], which learns a metric space in which distance metrics can classify samples. In particular, our method is designed on top of Prototypical Networks (ProtoNets) [13]. Recently, FEAT [17] showed that it is helpful in FSL to make support samples task-specific using a set-to-set function, the Transformer [18]. Also, some approaches have addressed few-shot KWS [7, 8, 9], but to the best of our knowledge, this is the first work introducing few-shot open-set keyword spotting (FSOS-KWS). Open-Set Recognition. [11] introduced OSR to deep learning, and discriminative or generative approaches have been proposed by [19, 20, 21]. Recently, [22] suggested placeholders for both data and classifiers using manifold mixup [23] and learnable dummy classifiers, respectively. Their dummy classifiers are learnable but fixed, and thus suitable for OSR with a fixed closed-set. On the other hand, our D-ProtoNets suggests episode-known, varying dummies for FSOSR. Few-shot open-set recognition. There is growing attention to FSOSR due to its importance, and the previous studies [12, 24] concentrate on better OSR while preserving FSL performance. PEELER [12] suggests an entropy maximization loss for the open-set and Gaussian embedding for flexible decision boundaries. Based on a set-to-set transformation method [17], [24] introduces SnaTCHer, the SOTA FSOSR approach that compares the distance between transformed and modified prototypes and detects the open-set using transformation consistency. Those works try to detect the abnormality of the open-set, while our method directly learns a dummy class to detect it. ## 3 Method ### 3.1 Preliminaries Notations. An FSOSR setting consists of seen training data $\mathcal{D}_{\text{train}}$ and unseen evaluation data $\mathcal{D}_{\text{eval}}$ whose classes do not overlap. $\mathcal{D}_{\text{train}}$ is composed of labeled samples, $\{(\bm{x}_{i},y_{i})\}_{i=1}^{|\mathcal{D}_{\text{train}}|}$, where $\bm{x}_{i}$ is an input feature and $y_{i}$ is its corresponding label. During training, a model learns from $N$-way $M$-shot pseudo-FSOSR episodes, each of which has $N$ known classes offering $M$ support examples for each class and pseudo-unknown (pseudo-open-set) classes without any support examples. We denote a support set and a query set as $S$ and $Q$, respectively, in an episode.
$S$ contains samples from the $N$ classes, and $Q$ contains queries from both the $N$ known and $N_{U}$ unknown classes, where $|S|=NM$, $|Q|=(N+N_{U})M_{Q}$, and $M_{Q}$ is the number of queries for each class. At inference time, all classes of $\mathcal{D}_{\text{eval}}$ are unseen, and again an episode consists of $N$ known classes with support samples and $N_{U}$ unknown open-set classes. Prototypical Networks. Our work is in line with metric-learning-based approaches and is based on the representative work, Prototypical Networks (ProtoNets) [13]. In an $N$-way $M$-shot episode, ProtoNets gets $N$ prototypes, $\{\bm{c}_{n}\}_{n=1}^{N}$, using the average of the support samples of each class $n$, where (1) $\bm{c}_{n}=\frac{1}{M}\sum_{(\bm{x}_{i},y_{i})\in S_{n}}{f_{\phi}(\bm{x}_{i})},$ $S_{n}$ is the subset of $S$ whose labels are $n$, $|S_{n}|=M$, and $f_{\phi}$ is an encoder with parameters $\phi$. Based on the prototypes, ProtoNets gets a distribution over the $N$ classes, (2) $p_{\phi}(y=n|\bm{x})=\frac{\exp(-d(f_{\phi}(\bm{x}),\bm{c}_{n}))}{\sum_{n^{\prime}=1}^{N}{\exp(-d(f_{\phi}(\bm{x}),\bm{c}_{n^{\prime}})})},$ where $d(\cdot)$ is a distance metric, e.g., the Euclidean distance $d(z,z^{\prime})=||z-z^{\prime}||^{2}$. Using Eq. 2, ProtoNets minimizes the negative log-probability, $-\log p_{\phi}(y=n|\bm{x})$, of the true class $n$. ### 3.2 Few-shot Open-set Keyword Spotting Here, we introduce a benchmark setting for FSOS-KWS using the Google speech commands dataset (GSC) ver2 [5]. There are 35 keywords in total, and conventional KWS does 12-class classification following the settings of [5]. The 12 classes consist of 10 keywords (“Yes,” “No,” “Up,” “Down,” “Left,” “Right,” “On,” “Off,” “Stop,” and “go”) and two additional classes: “Unknown words,” which refers to the remaining 25 keywords, and “Silence” (background noise only). We split the dataset by class label, as in miniImageNet [14]. Our split has 15, 10, and 10 keywords for the train, validation, and test sets, respectively, as follows: * • Train keywords: “Happy,” “House,” “Bird,” “Bed,” “Backward,” “Sheila,” “Marvin,” “Wow,” “Tree,” “Follow,” “Dog,” “Visual,” “Forward,” “Learn,” and “Cat”. * • Validation keywords (numbers): “Zero,” “One,” “Two,” “Three,” “Four,” “Five,” “Six,” “Seven,” “Eight,” and “Nine”. * • Test keywords (the 10 keyword classes used in conventional 12-class KWS): “Yes,” “No,” “Up,” “Down,” “Left,” “Right,” “On,” “Off,” “Stop,” and “go”. These fixed keyword splits prevent possible performance variance from split changes over trials. On top of the split, we add the special class “Silence,” which can only be included in an open-set, as a background noise class. For example, in a 5-way 5-shot episode, we randomly choose five known classes without “Silence” and then choose the same number of open-set classes from the remaining classes, including “Silence”. We name this specific setting the split Google speech commands dataset (splitGSC). More details are available in Section 4.1. ### 3.3 Dummy Prototypical Network We suggest episode-known dummy prototypes and introduce a simple but powerful system named Dummy Prototypical Networks (D-ProtoNets) to handle varying open-sets over episodes. Episode-known Dummy Prototype. Let us consider the $N$ prototypes, $\{\bm{c}_{n}\}_{n=1}^{N}$, in an episode, where $\bm{c}_{n}\in R^{1\times D}$ and $D$ is the output dimension of $f_{\phi}$. In ProtoNets, the prototypes form an unordered set, i.e., they are mutually permutation invariant.
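To make Eqs. (1)-(2) and this permutation invariance concrete, here is a minimal NumPy sketch (shapes and values are illustrative assumptions, not the authors' code); it also checks that a pooled statistic over the prototype set is unchanged under reordering, which is exactly the property the dummy generator below exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, D = 5, 5, 64                        # 5-way, 5-shot, embedding dim (assumed)
support = rng.standard_normal((N, M, D))  # f_phi embeddings of the support set

# Eq. (1): prototype c_n is the mean of the M support embeddings of class n
protos = support.mean(axis=1)             # shape (N, D)

# Eq. (2): distribution over classes from negative squared Euclidean distances
def class_probs(query, protos):
    d = ((protos - query) ** 2).sum(axis=1)  # d(f_phi(x), c_n) for each n
    e = np.exp(-d + d.min())                 # shifted for numerical stability
    return e / e.sum()

query = rng.standard_normal(D)
p = class_probs(query, protos)               # sums to 1 over the N classes

# The prototype set is unordered: a permutation-invariant pooling (e.g., the
# max-pooling over N used in Eq. (3) below) ignores any reordering.
perm = rng.permutation(N)
assert np.allclose(protos.max(axis=0), protos[perm].max(axis=0))
```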
Thus, we suggest a dummy generator, $g_{\varphi}$, with parameters $\varphi$, based on DeepSets [25], which is inherently permutation invariant. In detail, $g_{\varphi}$ generates a dummy $\bm{c}_{d}$ using the given $N$ prototypes $C=[\bm{c}_{1};\bm{c}_{2};\cdots;\bm{c}_{N}]\in R^{N\times D}$ as input, and is hence episode-known: (3) $\bm{c}_{d}=g_{\varphi}(C)=\operatorname*{Maxpool}(g_{1}(C))W_{g},$ where $g_{1}$ consists of fully-connected (FC) layers with nonlinearity, and $g_{1}(C)\in R^{N\times H}$ with a hidden dimension $H$. $\operatorname*{Maxpool}$ operates over $N$ and outputs a feature in $R^{1\times H}$. $W_{g}$ is a learnable $H\times D$ matrix, and $\bm{c}_{d}\in R^{1\times D}$. Finally, we get an augmented prototype set $\{\bm{c}_{1},\cdots,\bm{c}_{N},\bm{c}_{d}\}$. We set the labels of open-set queries to the $(N+1)$-th label $y_{d}$, which corresponds to the dummy $\bm{c}_{d}$. Then, Eq. 2 becomes (4) $p_{\theta}(y=n|\bm{x})=\frac{\exp(-d(f_{\phi}(\bm{x}),\bm{c}_{n})/\tau_{n})}{\sum_{n^{\prime}=1}^{N+1}{\exp(-d(f_{\phi}(\bm{x}),\bm{c}_{n^{\prime}})/\tau_{n})}},$ where $\theta$ consists of the encoder parameters $\phi$ and the dummy generator parameters $\varphi$, and $\tau_{n}$ is a softmax temperature, which is usually the same over classes. Here we use a larger $\tau_{N+1}$ compared to the other $\tau_{n\neq N+1}$ to let the dummy easily reduce its loss, i.e., $\tau_{N+1}=\gamma\cdot\tau_{n\neq N+1}$, where $\gamma>1$. The $(N+1)$-way classification is learned by the cross-entropy loss as below: (5) $\mathcal{L}^{K}_{CE}=\sum_{(\bm{x}_{i},y_{i})\in Q_{K}}{-\log p_{\theta}(y=y_{i}|\bm{x}_{i})},\quad\mathcal{L}^{U}_{CE}=\sum_{(\bm{x}_{i},y_{d})\in Q_{U}}{-\log p_{\theta}(y=y_{d}|\bm{x}_{i})},$ where $Q_{K}$ and $Q_{U}$ are the known and unknown queries, respectively. We balance the two losses by $\lambda$, and the total loss is $\mathcal{L}_{CE}=\mathcal{L}^{K}_{CE}+\lambda\cdot\mathcal{L}^{U}_{CE}$. At test time, we classify the $N$ known classes by $\hat{y_{i}}=\operatorname*{arg\,max}_{n\in\{1,\cdots,N\}}p_{\theta}(y_{i}=n|\bm{x}_{i})$, and we determine whether an input $\bm{x}_{i}$ belongs to the open-set by comparing $p_{\theta}(y_{i}=y_{d}|\bm{x}_{i})$ to a threshold $\delta$. The overall system is described in Figure 2. Multiple Dummies. We expand D-ProtoNets to $L$ dummies by changing $W_{g}$ to an $H\times(L\cdot D)$ matrix. The model could naively choose the most probable dummy for an input $\bm{x}_{i}$ by $\operatorname*{arg\,max}_{l}(-d(\bm{x}_{i},\bm{c}_{l}))$. Instead, we use the Gumbel softmax [26] to replace the non-differentiable sample, $\operatorname*{arg\,max}$, with a differentiable one. The probability of choosing dummy $l$ is (6) $p(y^{L}=l|\bm{x})=\frac{\exp((-d(f_{\phi}(\bm{x}),\bm{c}_{l})+\epsilon_{l})/\tau)}{\sum_{l^{\prime}=1}^{L}{\exp((-d(f_{\phi}(\bm{x}),\bm{c}_{l^{\prime}})+\epsilon_{l^{\prime}})/\tau})},$ where $\epsilon_{1},\cdots,\epsilon_{L}$ are i.i.d. samples drawn from the standard Gumbel distribution with $\mu=0$ and $\beta=1$, following [26], and $y^{L}$ is a temporary dummy label among the $L$ dummies. During training, we get $\bm{c}_{d}=\sum_{l}{p(y^{L}=l|\bm{x})\cdot\bm{c}_{l}}$, and we choose $y^{L}$ by $\operatorname*{arg\,max}_{l}p(y^{L}=l|\bm{x})$ at inference time. ## 4 Experiments ### 4.1 Experimental Settings Dataset. We use the Google speech commands (GSC) dataset ver2 [5], containing 105,829 utterances from 2,618 speakers.
The dataset is first split into train, validation, and test sets with 84,843, 9,981, and 11,005 utterances, respectively, using the official split (based on a hash function on the name of each utterance file) [5]. Then, we select the samples according to our splitGSC split, obtaining 22,916, 3,643, and 4,074 samples for train, validation, and test, respectively. Then, following the settings of [5], we add as many “Silence” samples to each split as the average number of utterances per class of that split; finally, splitGSC has 24,444, 4,007, and 4,482 utterances for train, validation, and test, respectively. In particular, we use the official test set that [5] offers and apply our split to it. During training, we use minimal data augmentation, commonly used in GSC tasks [1, 4, 5]: adding the official background noise offered by GSC with a probability of 0.8. Backbones. We experiment with two backbones widely used in previous FSL benchmarks [13, 17], Conv4-64 [14] and ResNet-12 [27], and one backbone designed for KWS, BCResNet-8 [4]. Each corresponds to the encoder, $f_{\phi}$, and their output dimensions are 768, 512, and 256 for Conv4-64, ResNet-12, and BCResNet-8, respectively. Conv4-64 does not have global average pooling at the end, and thus we get 768 dimensions, larger than its number of channels, 64. Implementation Details. Each utterance in GSC is 1 sec long, and the sampling rate is 16 kHz. We use input features of 40-dimensional log Mel-spectrograms with a frame shift and window length of 10 and 30 ms, respectively, following [4]. We train a model for 100 epochs with the Adam optimizer [28] and an initial learning rate of 0.001. The learning rate is step-decayed by a factor of 0.5 every 20 epochs. Each epoch consists of 100 episodes, and an episode has 5 known (5-way) and 5 open-set classes. We use 5 support examples (5-shot), and all the classes have 5 and 15 queries each during training and test, respectively. We use early stopping based on few-shot validation accuracy and evaluate a trained model with 1,000 episodes. We design the dummy generator, $g$, to be as simple as possible. We use a $g_{1}$ of FC-ReLU-FC with a hidden dimension $H=32$. We use the Euclidean distance for $d$ and set the hyperparameter $\lambda=0.1$ as a default setting. During training, the softmax temperatures $\tau_{n\neq N+1}$ are fixed to 1, and $\gamma=3$, i.e., $\tau_{N+1}=3$, for $\mathcal{L}_{CE}$. The $\tau$ in the Gumbel softmax of Eq. 6 is cosine-annealed from 2 to 0.5 following [26]. We use $L=3$ dummies as default. Table 1: splitGSC: 5-way {1, 5}-shot FSOSR results. The numbers are mean (std) over 5 trials (%).
(bold: best)

Model | Backbone | 1-shot Accuracy | 1-shot AUROC | 5-shot Accuracy | 5-shot AUROC
---|---|---|---|---|---
ProtoNet | Conv4-64 | 43.2 (0.7) | 55.3 (0.5) | 67.6 (1.0) | 63.8 (0.6)
FEAT | Conv4-64 | 46.9 (1.2) | 61.1 (0.3) | 65.6 (1.3) | 65.5 (0.7)
PEELER | Conv4-64 | 42.8 (1.5) | 57.5 (1.0) | 66.3 (1.8) | 66.7 (1.0)
SnaTCHer-F | Conv4-64 | 47.3 (1.2) | 51.1 (0.7) | 67.0 (1.6) | 53.1 (1.4)
D-ProtoNet, L=3 | Conv4-64 | 45.3 (0.9) | 65.9 (0.3) | 69.6 (0.8) | 73.9 (0.7)
ProtoNet | ResNet-12 | 68.3 (1.0) | 60.7 (0.7) | 85.9 (0.7) | 68.4 (1.2)
FEAT | ResNet-12 | 68.8 (1.1) | 65.6 (0.7) | 84.0 (0.9) | 71.0 (0.7)
PEELER | ResNet-12 | 65.4 (1.5) | 66.2 (0.9) | 82.4 (1.1) | 72.1 (1.7)
SnaTCHer-F | ResNet-12 | 70.4 (1.0) | 73.2 (1.0) | 84.7 (0.6) | 83.3 (0.8)
D-ProtoNet, L=3 | ResNet-12 | 69.7 (1.0) | 78.8 (0.3) | 86.9 (0.3) | 86.7 (0.3)
ProtoNet | BCResNet-8 | 66.7 (0.7) | 60.9 (0.7) | 83.1 (0.5) | 68.8 (0.6)
D-ProtoNet, L=3 | BCResNet-8 | 65.2 (1.4) | 75.7 (0.9) | 81.9 (1.1) | 82.3 (1.5)

Baselines. We compare our method to other notable approaches: ProtoNet [13], FEAT [17], PEELER [12], and the SOTA FSOSR method SnaTCHer [24], based on their available official implementations. We make the following changes to carefully fit the baselines to our splitGSC. We set a hidden dimension of 16 for the Transformers in FEAT and SnaTCHer-F. Also, we set the two dropout rates in the Transformers to 0.5 and 0 for Conv4-64 and to 0.6 and 0.1 for ResNet-12 experiments. ### 4.2 Results Table 1 shows the overall results on splitGSC. ‘Accuracy’ stands for FSL accuracy, and we use the threshold-free area under the receiver-operating characteristic (AUROC) as an OSR measure, following previous FSOSR approaches [12, 24]. By introducing dummy prototypes and additional losses to various backbones, our D-ProtoNets significantly improves vanilla ProtoNet in AUROC and shows better FSL accuracies. Other baselines also improve on vanilla ProtoNet, but D-ProtoNets shows clear margins. SnaTCHer is a strong baseline; its authors suggest directly using the distance metric instead of the softmax output to detect open-set samples, i.e., $\bm{x}_{i}$ is an open-set sample if $\max_{n\in\{1,\cdots,N\}}\{-d(f_{\phi}(\bm{x}_{i}),\bm{c}_{n})\}<\delta$, while other approaches [12, 29, 30] usually use $\max_{n\in\{1,\cdots,N\}}p(y_{i}=n|\bm{x}_{i})$. However, we observed that the unnormalized distance metric does not always work for detecting open-set samples and shows poor AUROC with Conv4-64 under splitGSC and our training details. Table 2: miniImageNet: 5-way {1, 5}-shot FSOSR results. The numbers are mean (std) over 5 trials (%). *SnaTCHer results are quoted from the paper.

Model | Backbone | 1-shot Accuracy | 1-shot AUROC | 5-shot Accuracy | 5-shot AUROC
---|---|---|---|---|---
ProtoNet | ResNet-12 | 63.1 (0.4) | 55.3 (0.1) | 82.2 (0.2) | 61.0 (0.4)
FEAT | ResNet-12 | 67.1 (0.4) | 59.6 (0.5) | 82.3 (0.4) | 62.5 (0.2)
PEELER | ResNet-12 | 62.1 (0.2) | 59.2 (0.3) | 81.8 (0.2) | 67.4 (0.3)
SnaTCHer-F | ResNet-12 | 67.4 (0.4) | 69.4 (0.8) | 82.4 (0.1) | 76.4 (0.7)
*SnaTCHer-F [24] | ResNet-12 | 67.0 | 68.3 | 82.0 | 77.4
*SnaTCHer-T [24] | ResNet-12 | 66.6 | 70.2 | 81.8 | 76.7
*SnaTCHer-L [24] | ResNet-12 | 67.6 | 69.4 | 82.4 | 76.2
D-ProtoNet, L=1 | ResNet-12 | 63.2 (0.3) | 69.7 (0.3) | 82.1 (0.1) | 76.6 (0.5)
D-ProtoNet, L=3 | ResNet-12 | 63.4 (0.4) | 70.2 (0.4) | 81.8 (0.2) | 77.8 (0.3)
D-ProtoNet, L=3 + FEAT | ResNet-12 | 65.1 (0.2) | 70.6 (0.6) | 81.9 (0.1) | 78.5 (0.9)

### 4.3 miniImageNet We further verify D-ProtoNets on the widely used miniImageNet [14].
The dataset is a subset of ImageNet [31] and contains 100 classes with 600 images of size 84$\times$84 for each class. Following [32], we split the data into train, validation, and test sets with 64, 16, and 20 classes, respectively. We follow the training details of the official FEAT and SnaTCHer implementations for miniImageNet. The backbone is the ResNet-12 introduced by [33], whose output dimension is 640. Backbones are pretrained by adding a softmax layer to classify all 64 seen classes [17, 24]. Then, the model is trained for 200 epochs with 100 episodes each. In each 5-way episode, we randomly choose 5 open-set classes and use 15 queries for all classes. We use a softmax temperature of $\tau=64$ for FEAT and SnaTCHer, following their best settings. We optimized the $\tau$ of vanilla ProtoNet over $\{0.1,1,16,32,64,128\}$ following [17] and found that $\tau=16$ works best. D-ProtoNets uses $\tau_{n\neq N+1}=16$ and $\gamma=3$. Table 2 shows the comparison between D-ProtoNets and various baselines. Our dummy prototypes improve vanilla ProtoNets in detecting open-set samples while preserving FSL accuracies, and achieve SOTA-level open-set detection. FEAT and SnaTCHer show better FSL accuracies than ours, especially in 1-shot settings. We attribute this to their additional set-to-set transformation of the support examples, which makes the extreme low-shot regime more robust by exploiting the relations between the few support examples. However, both methods perform additional transformations, and SnaTCHer requires heavy computations due to the transformations of all the queries. We also applied the transformation method of FEAT to our D-ProtoNets to improve it further. At inference time, we observed that using the unnormalized distance metric, $-d(f_{\phi}(\bm{x}_{i}),\bm{c}_{N+1})$, instead of the softmax output [24] increases the 1-shot and 5-shot AUROC from 68.5 to 70.6 and from 78.2 to 78.5, respectively, for D-ProtoNets+FEAT. The result implies that D-ProtoNets can be combined with a complementary concept like FEAT. ### 4.4 Ablation Study Table 3: Ablations. splitGSC: 5-way {1, 5}-shot FSOSR results using ResNet-12. The numbers are mean (std) over 5 trials (%). ‘Gum.’ stands for Gumbel softmax.

L | $\mathcal{L}^{U}_{CE}$ | $\gamma$ | Gum. | 1-shot Accuracy | 1-shot AUROC | 5-shot Accuracy | 5-shot AUROC
---|---|---|---|---|---|---|---
- | | - | - | 68.3 (1.0) | 60.7 (0.7) | 85.9 (0.7) | 68.4 (1.2)
3 | ✓ | 1 | | 69.9 (1.1) | 76.4 (1.1) | 86.4 (1.1) | 82.7 (1.2)
3 | ✓ | 3 | | 69.2 (1.4) | 78.7 (0.9) | 86.5 (0.7) | 86.2 (0.9)
3 | ✓ | 3 | ✓ | 69.7 (1.0) | 78.8 (0.3) | 86.9 (0.3) | 86.7 (0.3)
1 | ✓ | 3 | - | 70.0 (0.5) | 78.7 (0.1) | 86.7 (0.3) | 85.9 (0.4)
2 | ✓ | 3 | ✓ | 69.1 (0.5) | 78.5 (0.4) | 86.4 (0.2) | 86.1 (0.6)
3 | ✓ | 3 | ✓ | 69.7 (1.0) | 78.8 (0.3) | 86.9 (0.3) | 86.7 (0.3)
4 | ✓ | 3 | ✓ | 70.3 (0.7) | 78.8 (0.6) | 86.9 (0.7) | 86.5 (1.2)
5 | ✓ | 3 | ✓ | 69.2 (0.5) | 78.4 (0.7) | 86.4 (0.4) | 86.7 (0.5)
3 | ✓ | 1 | ✓ | 70.1 (2.0) | 76.8 (0.8) | 86.4 (1.1) | 82.2 (0.6)
3 | ✓ | 2 | ✓ | 69.8 (1.0) | 78.3 (0.4) | 87.0 (0.7) | 85.9 (0.6)
3 | ✓ | 3 | ✓ | 69.7 (1.0) | 78.8 (0.3) | 86.9 (0.3) | 86.7 (0.3)
3 | ✓ | 5 | ✓ | 69.8 (1.1) | 78.7 (0.6) | 86.5 (0.6) | 86.4 (0.8)
3 | ✓ | 10 | ✓ | 70.3 (1.0) | 79.1 (0.5) | 86.3 (0.6) | 86.7 (0.3)

Ablations. We perform ablation studies of D-ProtoNets; Table 3 shows the effect of our loss terms, the number of dummies, $\tau_{N+1}$, and Gumbel softmax. The top row corresponds to the baseline, ProtoNet. By adding dummies, there are considerable improvements of more than 10% in AUROC.
We obtain further improvements by introducing Gumbel softmax and setting $\tau_{N+1}=3$ while $\tau_{n\neq N+1}$ is fixed to 1. We also examine how the number of dummies, $L$, affects D-ProtoNets. D-ProtoNets shows better results as $L$ increases from $1$ to $3$ and does not show further improvements with $L=4$ and $5$ (performance appears to converge at $L=3$). We further examine the effect of various $\gamma\in\{1,2,3,5,10\}$ for $\tau_{N+1}$. The AUROC increases as $\gamma$ increases from $1$ to $3$ and seems to converge near $\gamma=3$. Erasing undesirable instance discrepancy. Recently, [34] showed the importance of making few-shot models concentrate on foreground objects rather than on backgrounds in images. Motivated by [34], we use an explicit normalization along the frequency axis, named Relaxed instance Frequency-wise Normalization (RFN) [35, 36], to reduce undesirable instance discrepancy in audio features. The RFN module takes an input $\bm{x}$ and outputs $\lambda\cdot\text{LN}(\bm{x})+(1-\lambda)\cdot\text{IFN}(\bm{x})$, where IFN is instance normalization [37] along the frequency axis, and LN is layer normalization [38], which relaxes the effect of IFN (this $\lambda$ is the relaxation weight of RFN, distinct from the loss weight above). Here we use the relaxation $\lambda=0.5$ and apply RFN at the input of the encoder $f_{\phi}$. We expect RFN to make the model concentrate on keywords rather than on other discrepancies, e.g., speaker identity. Table 4 shows that RFN consistently improves ProtoNet and D-ProtoNet with various backbones. The results indicate that erasing undesirable instance discrepancy is highly important in FSOS-KWS. Table 4: Using RFN. splitGSC: 5-way {1, 5}-shot FSOSR results. The numbers are mean (std) over 5 trials (%).

Model | Backbone | 1-shot Accuracy | 1-shot AUROC | 5-shot Accuracy | 5-shot AUROC
---|---|---|---|---|---
ProtoNet | Conv4-64 | 43.2 (0.7) | 55.3 (0.5) | 67.6 (1.0) | 63.8 (0.6)
+ RFN | Conv4-64 | 46.3 (0.9) | 57.3 (0.7) | 69.9 (1.0) | 65.3 (0.6)
D-ProtoNet, L=3 | Conv4-64 | 45.3 (0.9) | 65.9 (0.3) | 69.6 (0.8) | 73.9 (0.7)
+ RFN | Conv4-64 | 48.8 (0.6) | 69.3 (0.5) | 71.9 (0.7) | 76.9 (0.4)
ProtoNet | ResNet-12 | 68.3 (1.0) | 60.7 (0.7) | 85.9 (0.7) | 68.4 (1.2)
+ RFN | ResNet-12 | 70.5 (1.0) | 62.9 (0.7) | 87.3 (0.8) | 70.9 (1.4)
D-ProtoNet, L=3 | ResNet-12 | 69.7 (1.0) | 78.8 (0.3) | 86.9 (0.3) | 86.7 (0.3)
+ RFN | ResNet-12 | 72.6 (0.5) | 80.3 (1.0) | 88.3 (0.6) | 87.8 (0.9)
ProtoNet | BCResNet-8 | 66.7 (0.7) | 60.9 (0.7) | 83.1 (0.5) | 68.8 (0.6)
+ RFN | BCResNet-8 | 71.1 (1.6) | 65.2 (1.0) | 86.3 (0.7) | 74.2 (1.0)
D-ProtoNet, L=3 | BCResNet-8 | 65.2 (1.4) | 75.7 (0.9) | 81.9 (1.1) | 82.3 (1.5)
+ RFN | BCResNet-8 | 69.7 (0.5) | 78.3 (0.3) | 85.5 (0.3) | 85.4 (0.6)

## 5 Conclusion This work tackles few-shot open-set recognition in keyword spotting (FSOS-KWS) and suggests a new benchmark, splitGSC. To adapt to varying open sets, we introduce episode-known dummies into Prototypical Networks (ProtoNets); the resulting model is named Dummy Prototypical Networks (D-ProtoNets). D-ProtoNets shows clear margins over recent baselines on splitGSC and achieves SOTA open-set detection on miniImageNet. We also suggest a direction of future research for FSOS-KWS: erasing undesirable instance discrepancy. ## References * [1] R. Tang and J. Lin, “Deep residual learning for small-footprint keyword spotting,” in _ICASSP_. IEEE, 2018, pp. 5484–5488. * [2] M. Lee, J. Lee, H. J. Jang, B. Kim, W. Chang, and K. Hwang, “Orthogonality constrained multi-head attention for keyword spotting,” in _ASRU_. IEEE, 2019, pp. 86–92. * [3] M. Xu and X.
Zhang, “Depthwise separable convolutional resnet with squeeze-and-excitation blocks for small-footprint keyword spotting,” in _INTERSPEECH_. ISCA, 2020, pp. 2547–2551. * [4] B. Kim, S. Chang, J. Lee, and D. Sung, “Broadcasted Residual Learning for Efficient Keyword Spotting,” in _Proc. Interspeech 2021_, 2021, pp. 4538–4542. * [5] P. Warden, “Speech commands: A dataset for limited-vocabulary speech recognition,” _arXiv preprint arXiv:1804.03209_, 2018. * [6] B. Kim, M. Lee, J. Lee, Y. Kim, and K. Hwang, “Query-by-example on-device keyword spotting,” in _ASRU_. IEEE, 2019, pp. 532–538. * [7] Y. Chen, T. Ko, L. Shang, X. Chen, X. Jiang, and Q. Li, “An investigation of few-shot learning in spoken term classification,” in _INTERSPEECH_. ISCA, 2020, pp. 2582–2586. * [8] A. Parnami and M. Lee, “Few-shot keyword spotting with prototypical networks,” _CoRR_, vol. abs/2007.14463, 2020. * [9] J. Huang, W. Gharbieh, H. S. Shim, and E. Kim, “Query-by-example keyword spotting system using multi-head attention and soft-triple loss,” in _ICASSP_. IEEE, 2021, pp. 6858–6862. * [10] A. M. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images,” in _CVPR_. IEEE Computer Society, 2015, pp. 427–436. * [11] W. J. Scheirer, A. de Rezende Rocha, A. Sapkota, and T. E. Boult, “Toward open set recognition,” _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 35, no. 7, pp. 1757–1772, 2013. * [12] B. Liu, H. Kang, H. Li, G. Hua, and N. Vasconcelos, “Few-shot open-set recognition using meta-learning,” in _CVPR_. Computer Vision Foundation / IEEE, 2020, pp. 8795–8804. * [13] J. Snell, K. Swersky, and R. S. Zemel, “Prototypical networks for few-shot learning,” in _NIPS_, 2017, pp. 4077–4087. * [14] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra, “Matching networks for one shot learning,” in _NIPS_, 2016, pp. 3630–3638. * [15] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in _ICML_, ser. Proceedings of Machine Learning Research, vol. 70. PMLR, 2017, pp. 1126–1135. * [16] B. Hariharan and R. B. Girshick, “Low-shot visual recognition by shrinking and hallucinating features,” in _ICCV_. IEEE Computer Society, 2017, pp. 3037–3046. * [17] H. Ye, H. Hu, D. Zhan, and F. Sha, “Few-shot learning via embedding adaptation with set-to-set functions,” in _CVPR_. Computer Vision Foundation / IEEE, 2020, pp. 8805–8814. * [18] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in _NIPS_, 2017, pp. 5998–6008. * [19] R. Yoshihashi, W. Shao, R. Kawakami, S. You, M. Iida, and T. Naemura, “Classification-reconstruction learning for open-set recognition,” in _CVPR_. Computer Vision Foundation / IEEE, 2019, pp. 4016–4025. * [20] Z. Ge, S. Demyanov, and R. Garnavi, “Generative openmax for multi-class open set classification,” in _BMVC_. BMVA Press, 2017. * [21] P. Perera, V. I. Morariu, R. Jain, V. Manjunatha, C. Wigington, V. Ordonez, and V. M. Patel, “Generative-discriminative feature representations for open-set recognition,” in _CVPR_. Computer Vision Foundation / IEEE, 2020, pp. 11811–11820. * [22] D. Zhou, H. Ye, and D. Zhan, “Learning placeholders for open-set recognition,” in _CVPR_. Computer Vision Foundation / IEEE, 2021, pp. 4401–4410. * [23] V. Verma, A. Lamb, C. Beckham, A. Najafi, I. Mitliagkas, D. Lopez-Paz, and Y.
Bengio, “Manifold mixup: Better representations by interpolating hidden states,” in _ICML_, ser. Proceedings of Machine Learning Research, vol. 97. PMLR, 2019, pp. 6438–6447. * [24] M. Jeong, S. Choi, and C. Kim, “Few-shot open-set recognition by transformation consistency,” in _CVPR_. Computer Vision Foundation / IEEE, 2021, pp. 12566–12575. * [25] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Póczos, R. Salakhutdinov, and A. J. Smola, “Deep sets,” in _NIPS_, 2017, pp. 3391–3401. * [26] E. Jang, S. Gu, and B. Poole, “Categorical reparameterization with gumbel-softmax,” in _ICLR (Poster)_. OpenReview.net, 2017. * [27] B. N. Oreshkin, P. R. López, and A. Lacoste, “TADAM: task dependent adaptive metric for improved few-shot learning,” in _NeurIPS_, 2018, pp. 719–729. * [28] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _ICLR (Poster)_, 2015. * [29] A. Bendale and T. E. Boult, “Towards open set deep networks,” in _CVPR_. IEEE Computer Society, 2016, pp. 1563–1572. * [30] L. Neal, M. L. Olson, X. Z. Fern, W. Wong, and F. Li, “Open set learning with counterfactual images,” in _ECCV (6)_, ser. Lecture Notes in Computer Science, vol. 11210. Springer, 2018, pp. 620–635. * [31] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” _International Journal of Computer Vision (IJCV)_, vol. 115, no. 3, pp. 211–252, 2015. * [32] S. Ravi and H. Larochelle, “Optimization as a model for few-shot learning,” in _ICLR_. OpenReview.net, 2017. * [33] K. Lee, S. Maji, A. Ravichandran, and S. Soatto, “Meta-learning with differentiable convex optimization,” in _CVPR_. Computer Vision Foundation / IEEE, 2019, pp. 10657–10665. * [34] X. Luo, L. Wei, L. Wen, J. Yang, L. Xie, Z. Xu, and Q. Tian, “Rectifying the shortcut learning of background: Shared object concentration for few-shot image recognition,” _CoRR_, vol. abs/2107.07746, 2021. * [35] B. Kim, S. Yang, J. Kim, H. Park, J. Lee, and S. Chang, “Domain generalization with relaxed instance frequency-wise normalization for multi-device acoustic scene classification,” _CoRR_, vol. abs/2206.12513, 2022. * [36] B. Kim, S. Yang, J. Kim, and S. Chang, “Domain generalization on efficient acoustic scene classification using residual normalization,” in _Proceedings of the 6th Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021)_, Barcelona, Spain, November 2021, pp. 21–25. * [37] D. Ulyanov, A. Vedaldi, and V. S. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” _CoRR_, vol. abs/1607.08022, 2016. * [38] L. J. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” _CoRR_, vol. abs/1607.06450, 2016. * [39] B. Kim, S. Yang, J. Kim, and S. Chang, “QTI submission to DCASE 2021: Residual normalization for device-imbalanced acoustic scene classification with efficient design,” DCASE2021 Challenge, Tech. Rep., June 2021.
# The effect of magnetic field on the inner Galactic rotation curve Man Ho Chan1, Antonino Del Popolo2,3 1Department of Science and Environmental Studies, The Education University of Hong Kong, Tai Po, Hong Kong 2Dipartimento di Fisica e Astronomia, University of Catania, Viale Andrea Doria 6, 95125, Catania, Italy 3Institute of Astronomy, Russian Academy of Sciences, Pyatnitskaya str. 48, 119017 Moscow, Russia<EMAIL_ADDRESS>(Accepted XXXX, Received XXXX) ###### Abstract In the past few decades, some studies pointed out that magnetic fields might affect the rotation curves in galaxies. However, the impact is relatively small compared with the effects of dark matter and the baryonic components. In this letter, we revisit the impact of magnetic field on the rotation curve of our Galaxy. We show that the inner Galactic rotation curve could be affected significantly by the magnetic field. The addition of the inner bulge component, which has been proposed previously to account for the inner rotation curve data, is not necessary. The magnetic field contribution can fully account for the excess of the inner rotation velocity between 5 pc and 50 pc from the Galactic Centre. Our analysis can also constrain the azimuthal component of the central regular magnetic field strength to $B_{0}\sim 50-60$ $\mu$G, which is consistent with the observed range. ###### keywords: Galaxy: centre; Galaxy: kinematics and dynamics ## 1 Introduction The rotation curves of galaxies are important indicators of the mass distribution in galaxies. Many rotation curves were revealed by gas tracers (e.g. HI, CO), which are believed to be very good tracers of the gravitational field (e.g. the Spitzer Photometry and Accurate Rotation Curves sample (Lelli, McGaugh & Schombert, 2016)). In particular, many past studies have shown that a large-scale magnetic field can affect the gas dynamics in spiral galaxies (Piddington, 1964; Nelson, 1988; Battaner et al., 1992; Battaner & Florido, 1995). Some studies even show that this magnetic field effect can explain the flatness of rotation curves in galaxies without the need for dark matter (Nelson, 1988; Battaner et al., 1992; Battaner & Florido, 1995). This is known as the ‘magnetic alternative to dark matter’ (Sánchez-Salcedo & Reyes-Ruiz, 2004). Nevertheless, later studies have shown that the boost of rotation curves due to the magnetic field contribution is less than 20 km/s at the outermost point of rotation curves (Sánchez-Salcedo & Reyes-Ruiz, 2004; Sánchez-Salcedo & Santillán A., 2013). Since then, the ‘magnetic alternative to dark matter’ has no longer been a popular model. In the past decade, the idea of the magnetic field effect on galactic rotation curves was revived. Some studies have shown that the magnetic field effect can explain why rotation curves in some galaxies start to rise again at the outer edges of the HI discs, such as in our Galaxy (Ruiz-Granados et al., 2012) and the M31 galaxy (Ruiz-Granados et al., 2010). Considering the effects of magnetic field can somewhat improve the fits of the outer part of galactic rotation curves. However, some other studies have argued that the effect in the outer rotation curve region is not very significant (Sánchez-Salcedo & Santillán A., 2013; Elstner, Beck & Gressel, 2014).
Although the effect of magnetic field in the outer rotation curve region has been greatly debated, such an effect in the inner region of a galaxy has not been discussed thoroughly. In this letter, we particularly investigate the magnetic field effect on the inner rotation curve of our Galaxy. There is a small rotation velocity excess range (between 5 pc and 50 pc from the Galactic Centre) which could not be accounted for by the contributions of the supermassive black hole and the central bulge component (Sofue, 2013). An extra inner bulge had to be added to account for this anomalous excess range. We show that this small excess range could be explained by the magnetic field effect, so that adding the extra inner bulge component is not necessary. ## 2 Magnetic field effect on rotation curve In a gaseous disc in equilibrium, magnetic field effects on the gas can be modelled as a pressure term in the asymmetric drift (Sánchez-Salcedo & Santillán A., 2013). Such asymmetric drift is a consequence of the support by thermal, turbulent, cosmic ray and magnetic pressures. The dynamical effects of the regular magnetic field can significantly boost the gravitational orbital velocity due to the magnetic tension (Nelson, 1988). The total magnetic field in a galaxy can be simply expressed as a sum of a regular field term (the azimuthal component) $B_{\phi}$ and a random field term (the turbulent magnetic field component) $B_{\rm ran}$ (Sánchez-Salcedo & Santillán A., 2013; Elstner, Beck & Gressel, 2014). In particular, the random field can be isotropic or anisotropic. The contribution of the regular magnetic field component to the circular velocity is given by (Ruiz-Granados et al., 2010) $v_{\rm B1}^{2}=\frac{r}{4\pi\rho_{g}}\left(\frac{B_{\phi}^{2}}{r}+\frac{1}{2}\frac{dB_{\phi}^{2}}{dr}\right),$ (1) where $\rho_{g}$ is the gas density and $r$ is the radial distance from the Galactic Centre. The random magnetic field component contributes to the circular velocity via the magnetic pressure term $P_{B}$ as (Sánchez-Salcedo & Santillán A., 2013) $v_{\rm B2}^{2}=\frac{r}{\rho_{g}}\frac{dP_{B}}{dr}=\frac{r}{\rho_{g}}\frac{d}{dr}\frac{\langle B_{\rm ran}^{2}\rangle}{8\pi},$ (2) where $\langle B_{\rm ran}^{2}\rangle$ is the mean-square value of the random magnetic field strength. Therefore, the total contribution of the magnetic field to the circular velocity is: $v_{\rm mag}^{2}=v_{\rm B1}^{2}+v_{\rm B2}^{2}=\frac{r}{8\pi\rho_{g}}\left[\frac{2B_{\phi}^{2}}{r}+\frac{d}{dr}(B_{\phi}^{2}+\langle B_{\rm ran}^{2}\rangle)\right].$ (3) In the inner Galactic Centre region ($r\leq 500$ pc), the rotation curve also receives contributions from the supermassive black hole, $v_{\rm BH}$, and the baryonic bulge, $v_{\rm bulge}$. Therefore, including the magnetic field contribution, the observed total rotation curve is $v^{2}=v_{\rm BH}^{2}+v_{\rm bulge}^{2}+v_{\rm mag}^{2}.$ (4) The supermassive black hole contribution is $v_{\rm BH}^{2}=\frac{GM_{\rm BH}}{r},$ (5) where $M_{\rm BH}=(4.154\pm 0.014)\times 10^{6}M_{\odot}$ is the mass of the supermassive black hole (Abuter et al., 2020). The bulge mass density can be modelled by the exponential spheroid model with scale radius $a$ as (Sofue, 2013): $\rho_{\rm bulge}(r)=\rho_{c}e^{-r/a},$ (6) where $\rho_{c}$ is the central bulge mass density. Therefore, the bulge contribution is given by $v_{\rm bulge}^{2}=\frac{GM_{0}}{r}F\left(\frac{r}{a}\right),$ (7) where $M_{0}=8\pi a^{3}\rho_{c}$ and $F(x)=1-e^{-x}(1+x+x^{2}/2)$ (Sofue, 2013).
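To make these components concrete, here is a minimal numerical sketch (ours, not from the paper); the profile functions `b_phi` and `rho_g` are placeholders to be filled with Eqs. (8)-(9) below, and the derivative in Eq. (3) is taken numerically.

```python
import numpy as np

G = 6.674e-8             # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33          # solar mass, g
PC = 3.086e18            # parsec, cm

def v_bh_sq(r, m_bh=4.154e6 * MSUN):
    """Eq. (5): Keplerian term of the central black hole, (cm/s)^2."""
    return G * m_bh / r

def v_bulge_sq(r, m0=8.4e9 * MSUN, a=0.12e3 * PC):
    """Eq. (7): exponential spheroid with F(x) = 1 - e^-x (1 + x + x^2/2)."""
    x = r / a
    return G * m0 * (1.0 - np.exp(-x) * (1.0 + x + 0.5 * x**2)) / r

def v_mag_sq(r, b_phi, rho_g, eta=0.65):
    """Eq. (3), with <B_ran^2> = (eta^-2 - 1) B_phi^2 so that
    B_phi^2 + <B_ran^2> = B_phi^2 / eta^2 (Gaussian cgs units)."""
    b2_tot = b_phi(r)**2 / eta**2
    db2_dr = np.gradient(b2_tot, r)          # numerical d/dr
    return r / (8.0 * np.pi * rho_g(r)) * (2.0 * b_phi(r)**2 / r + db2_dr)

r = np.linspace(1.0, 360.0, 400) * PC        # the fitted range, 1-360 pc
# Eq. (4): v_total = sqrt(v_bh_sq + v_bulge_sq + v_mag_sq), then convert to km/s.
```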
We take the values $M_{0}=8.4\times 10^{9}M_{\odot}$ and $a=0.12$ kpc obtained in Sofue (2013) to perform our analysis. To match the inner Galactic rotation curve data without considering the magnetic field contribution, two bulge components (an inner bulge and an outer bulge) were assumed in previous studies (Sofue, 2013). In what follows, however, we will investigate whether the magnetic field contribution can mimic the effect of the inner bulge component. We therefore assume that there is only one bulge component, to minimise the number of parameters in our analysis. Moreover, the dark matter contribution is not considered here because it is not significant in the inner Galactic Centre region ($r\leq 500$ pc). Assuming the Navarro-Frenk-White (NFW) profile (Navarro, Frenk & White, 1997) with the fitted parameters in Sofue (2015), the total mass of dark matter is less than 5% of the bulge mass inside 500 pc. Therefore, we neglect the contribution of dark matter for simplicity. We also neglect the gravitational contribution of gas in the inner Galactic Centre region, as the gas mass is less than 3% of the bulge mass inside 500 pc (Ferrière, Gillard & Jean, 2007). Over $\sim 300$ pc along the Galactic plane and $\sim 150$ pc in the vertical direction at the Galactic Centre, the magnetic field is approximately horizontal and very strong, ranging from $B\sim 0.01-1$ mG for the general intercloud medium (Ferrière, 2009). The equipartition between magnetic field energy and turbulent energy suggests that $B\propto\rho_{g}^{1/2}$ (Schleicher et al., 2010). This is also supported by the observed relation between magnetic field and star-formation rate (Heesen et al., 2014; Tabatabaei et al., 2017). Therefore, we assume that the regular magnetic field profile follows the exponential spheroid model as $B_{\phi}=B_{0}e^{-r/2a},$ (8) where $B_{0}$ is the central regular magnetic field strength. Note that Eq. (8) follows from Eq. (6) only if the gas density is assumed to be proportional to the baryonic bulge mass density $\rho_{\rm bulge}(r)$. The random magnetic field is an important component of the interstellar medium of galaxies (Beck et al., 2019). We define $\eta$ as the ratio of the regular magnetic field to the total magnetic field, so that the random magnetic field strength can be expressed as $\langle B_{\rm ran}^{2}\rangle=(\eta^{-2}-1)B_{\phi}^{2}$. Although some earlier studies found that the ratio is around $\eta\sim 0.6-0.7$ in some galaxies (Fletcher et al., 2004; Beck, 2007; Sánchez-Salcedo & Santillán A., 2013), recent studies have shown that these ordered fields are dominated by anisotropic random fields and that the ratio should be $\eta\sim 0.01-0.3$ in the galactic disk region (Beck et al., 2019). Nevertheless, the actual value of $\eta$ at the Galactic Centre is uncertain, and it may be larger. In what follows, we will first assume $\eta=0.65$; we will then also demonstrate the cases $\eta=0.3$ and $\eta=0.9$ for comparison. For the gas density, observational data show that the gas number density is close to $n_{H}\sim 10-100$ cm$^{-3}$ at the inner Galactic Centre and $n_{H}\sim 1$ cm$^{-3}$ out to $\sim 220$ pc along the Galactic plane (Ferrière, Gillard & Jean, 2007; Ferrière, 2009). We follow Ferrière, Gillard & Jean (2007) in assuming that the gas density decreases exponentially with $r$ for small $r$ and then approaches a constant value $\rho_{0}^{\prime}$ when $r$ becomes large.
Therefore, we write the gas density profile as $\rho_{g}=\rho_{0}\exp\left(-\frac{r}{1~{\rm pc}}\right)+\rho_{0}^{\prime}.$ (9) Putting Eq. (8) and Eq. (9) into Eq. (3), we get $v_{\rm mag}^{2}=\frac{v_{0}^{2}e^{-r/a}[1-\eta^{-2}(r/2a)]}{\exp(-r/1~{\rm pc})+y},$ (10) where $v_{0}^{2}=B_{0}^{2}/(4\pi\rho_{0})$ and $y=\rho_{0}^{\prime}/\rho_{0}$. Here, $v_{0}$ and $y$ are the free parameters of our model. ## 3 Data analysis The inner rotation curve data have been obtained in Sofue (2013). We focus on the region $r\leq 360$ pc because the rotation curve attains its maximum at around $r=360$ pc. At this position, the percentage contribution of the bulge component is maximised. Beyond $r=360$ pc, the dark matter and disc components start to contribute more to the Galactic rotation curve. We fit our predicted total rotation curve $v(r)$ to the observed data $v_{\rm obs}(r)$. To quantify the goodness of fit, we calculate the reduced $\chi^{2}$ value of the fits, defined as $\chi_{\rm red}^{2}=\frac{1}{N-M}\sum_{i=1}^{N}\frac{(v_{i}-v_{{\rm obs},i})^{2}}{\sigma_{i}^{2}},$ (11) where $\sigma_{i}$ is the uncertainty of the observed rotation curve data, $N$ is the total number of data points and $M$ is the number of free parameters. Here, we have $M=2$. In Fig. 1, we present the best fit of our model (with $\eta=0.65$) and show the corresponding rotation curve contributions separately. The best-fit values are $v_{0}=16.4$ km/s and $y=0.023$, with $\chi_{\rm red}^{2}=0.14$. We can see that including the magnetic field contribution provides an excellent fit to the observed inner rotation curve without introducing the inner bulge component suggested in Sofue (2013). The reduced $\chi^{2}$ value for adding an inner bulge component without the magnetic field contribution is $\chi_{\rm red}^{2}=0.12$, almost the same goodness of fit. Note that the contribution of the magnetic field effect to the rotation velocity could be slightly negative when $r>2\eta^{2}a\approx 0.1$ kpc. Fig. 2 compares the residuals of the two scenarios with respect to the Galactic total rotation curve data. No systematic trend of the residuals is seen for either scenario. At the Galactic Centre, the gas density is $\rho_{g}\approx 1.4m_{p}n_{H}$, where $m_{p}$ is the proton mass. If we take the asymptotic number density $n_{H}=1$ cm$^{-3}$ (Ferrière, Gillard & Jean, 2007), we have $\rho_{0}^{\prime}\approx 2.3\times 10^{-24}$ g/cm$^{3}$. Using the best-fit values $y=0.023$ and $v_{0}=16.4$ km/s, we get $\rho_{0}=1.0\times 10^{-22}$ g/cm$^{3}$ (i.e. $n_{H}\approx 43$ cm$^{-3}$) and $B_{0}=58$ $\mu$G. These values are consistent with the observed number density and the constrained central total magnetic field ($B\sim 10-100$ $\mu$G) (Ferrière, Gillard & Jean, 2007; Ferrière, 2009; Guenduez et al., 2020). As the actual value of $\eta$ at the Galactic Centre is uncertain, we also investigate the cases $\eta=0.3$ and $\eta=0.9$. For $\eta=0.9$, we also get a very good fit, with $v_{0}=15.2$ km/s ($B_{0}=54$ $\mu$G) and $y=0.023$ ($\chi_{\rm red}^{2}=0.11$). However, for $\eta=0.3$, a relatively poor fit is obtained ($\chi_{\rm red}^{2}=1.46$). We plot the corresponding components and the total rotation curves fitted in Fig. 3. Generally speaking, for $\eta>0.6$, a very good fit is obtained ($\chi_{\rm red}^{2}<0.17$). Figure 1: The black dots with error bars represent the inner Galactic rotation curve data for $r=1-360$ pc (Sofue, 2013).
The red, green, and blue solid lines indicate the best-fit rotation curve components of the supermassive black hole, the bulge, and the magnetic field contribution, respectively. The orange line is the best-fit total rotation curve of our model. The violet dotted line represents the inner bulge contribution assumed in Sofue (2013). Here, we have assumed $\eta=0.65$. Figure 2: The red circles and blue triangles represent the residuals of the best-fit total rotation curve due to the contributions of the magnetic field effect and the inner bulge component, respectively, compared with the inner Galactic rotation curve data. Figure 3: The black dots with error bars represent the inner Galactic rotation curve data for $r=1-360$ pc (Sofue, 2013). The red, green, and blue lines indicate the best-fit rotation curve components of the supermassive black hole, the bulge, and the magnetic field contribution, respectively (blue solid: $\eta=0.3$; blue dashed: $\eta=0.9$). The orange lines are the best-fit total rotation curves of our model (orange solid: $\eta=0.3$; orange dashed: $\eta=0.9$). ## 4 Discussion In this letter, we show that adding the magnetic field contribution can satisfactorily explain the inner Galactic rotation curve data without invoking an inner bulge component. The magnetic field effect on the rotation curve is an effect predicted by magneto-hydrodynamics (MHD). Our results provide indirect evidence that magnetic fields can affect the galactic rotation curve significantly. Previous studies have shown that the magnetic field contribution cannot boost the gas rotation speed by more than 20 km/s in the outermost region (Sánchez-Salcedo & Santillán A., 2013). Here we show that such an effect can be large in the central region of a galaxy: the maximum contribution of the magnetic field to the rotation curve is 97 km/s at $r\sim 12$ pc. In our analysis, we have assumed that the magnetic field strength traces the baryonic distribution (i.e. the bulge density), as predicted by theoretical models. For example, numerical simulations and the equipartition theory show that the magnetic field strength follows the baryonic density in galaxy clusters and galaxies (Dolag et al., 2001; Govoni et al., 2017; Schleicher et al., 2010), and this is supported by observational data (Heesen et al., 2014; Tabatabaei et al., 2017; van Weeren et al., 2019). Our constrained magnetic field strength at the Galactic Centre ($B_{0}\sim 50-60$ $\mu$G) is also consistent with the order of magnitude of the observed total magnetic field strength $B\sim 10-100$ $\mu$G (Ferrière, 2009; Guenduez et al., 2020). These results could be verified by future observational data on the magnetic field at the Galactic Centre. Moreover, we have first taken the value of $\eta$ to be a constant, $\eta=0.65$. Some other studies have found that the magnetic field strength might be dominated by the anisotropic random field rather than the regular field (Houde et al., 2013; Beck et al., 2019). The value of $\eta$ can be as small as $\sim 0.1$ for the disk region in spiral galaxies (Beck et al., 2019). However, some studies have revealed a very large regular field strength $\sim 1$ mG at the Galactic Centre (Eatough et al., 2013). Therefore, the actual value of $\eta$ in the Galactic Centre region is uncertain. We have particularly investigated the cases of $\eta=0.3$ and $\eta=0.9$ for comparison, and found that $\eta=0.9$ can also give a good fit to the data. Generally, $\eta>0.6$ provides good fits to the rotation curve data without invoking the inner bulge component.
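To make the fitting procedure explicit, the following minimal sketch (ours) minimises the reduced $\chi^{2}$ of Eq. (11) over the two free parameters $(v_{0},y)$ of Eq. (10) at fixed $\eta$; the arrays `r_obs`, `v_obs` and `sigma`, standing for the Sofue (2013) data, and the precomputed $v_{\rm BH}^{2}+v_{\rm bulge}^{2}$ term are assumed inputs.

```python
import numpy as np
from scipy.optimize import minimize

A = 120.0          # bulge scale radius a, pc

def v_mag_sq(r_pc, v0, y, eta):
    """Eq. (10): magnetic contribution to v^2 in (km/s)^2, r in pc."""
    return (v0**2 * np.exp(-r_pc / A) * (1.0 - (r_pc / (2.0 * A)) / eta**2)
            / (np.exp(-r_pc / 1.0) + y))

def chi2_red(params, eta, r_obs, v_obs, sigma, v_grav_sq):
    v0, y = params
    v2 = v_grav_sq + v_mag_sq(r_obs, v0, y, eta)
    v_model = np.sqrt(np.maximum(v2, 0.0))   # v_mag^2 < 0 is possible at r > 2 eta^2 a
    return np.sum((v_model - v_obs)**2 / sigma**2) / (len(r_obs) - 2)

# res = minimize(chi2_red, x0=[16.0, 0.02], method="Nelder-Mead",
#                args=(0.65, r_obs, v_obs, sigma, v_grav_sq))
# Expected best fit near v0 ~ 16.4 km/s and y ~ 0.023 for eta = 0.65.
```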
Further radio observations are definitely required to examine the value of $\eta$ as well as the model presented here. On the other hand, we have also assumed that the gas density follows the exponential density profile in the deep central region and approaches a constant value at a relatively large $r\sim 100$ pc. The constant gas density at $r\sim 100$ pc is supported by observational data (Ferrière, Gillard & Jean, 2007; Ferrière, 2009). We have also tried the isothermal density profile, which is commonly used as a model for the interstellar medium (Kalashnikov & Chechetkin, 2022), to model the gas density distribution. A good fit can still be obtained (with $B_{0}=62$ $\mu$G and $\chi_{\rm red}^{2}=0.12$). Therefore, our results are not very sensitive to the gas density profile assumed. To conclude, we have shown that magnetic fields can affect the inner rotation curve significantly. We anticipate that such an effect can also be seen in the inner regions of other galaxies. Future high-quality observations of inner galactic rotation curves could verify our suggestion. ## 5 Acknowledgements We thank the anonymous referee for useful constructive feedback and comments. This work was partially supported by the Seed Funding Grant (RG 68/2020-2021R) and the Dean’s Research Fund (activity code: 04628) from The Education University of Hong Kong. ## 6 Data availability statement The data underlying this article will be shared on reasonable request to the corresponding author. ## References * Abuter et al. (2020) Abuter R. et al., 2020, Astron. Astrophys. 636, L5. * Battaner et al. (1992) Battaner E., Garrido J. L., Membrado M. & Florido E., 1992, Nature 360, 652. * Battaner & Florido (1995) Battaner E. & Florido E., 1995, Mon. Not. R. Astron. Soc. 277, 1129. * Beck (2007) Beck R., 2007, Astron. Astrophys. 470, 539. * Beck et al. (2019) Beck R., Chamandy L., Elson E. & Blackman E. G., 2019, Galaxies 8, 4. * Dolag et al. (2001) Dolag K., Schindler S., Govoni F. & Feretti L., 2001, Astron. Astrophys. 378, 377. * Eatough et al. (2013) Eatough R. P. et al., 2013, Nature 501, 391. * Elstner, Beck & Gressel (2014) Elstner D., Beck R. & Gressel O., 2014, Astron. Astrophys. 568, A104. * Ferrière, Gillard & Jean (2007) Ferrière K., Gillard W. & Jean P., 2007, Astron. Astrophys. 467, 611. * Ferrière (2009) Ferrière K., 2009, Astron. Astrophys. 505, 1183. * Fletcher et al. (2004) Fletcher A., Berkhuijsen E. M., Beck R. & Shukurov A., 2004, Astron. Astrophys. 414, 53. * Govoni et al. (2017) Govoni F. et al., 2017, Astron. Astrophys. 603, A122. * Guenduez et al. (2020) Guenduez M., Tjus J. B., Ferrière K., Dettmar R.-J., 2020, Astron. Astrophys. 644, A71. * Heesen et al. (2014) Heesen V., Brinks E., Leroy A. K., Heald G., Braun R., Bigiel F. & Beck R., 2014, Astron. J. 147, 103. * Houde et al. (2013) Houde M., Fletcher A., Beck R., Hildebrand R. H., Vaillancourt J. E. & Stil J. M., 2013, Astrophys. J. 766, 49. * Kalashnikov & Chechetkin (2022) Kalashnikov I. Y. & Chechetkin V. M., 2022, Mon. Not. R. Astron. Soc. 514, 1351. * Lelli, McGaugh & Schombert (2016) Lelli F., McGaugh S. S. & Schombert J. M., 2016, Astron. J. 152, 157. * Navarro, Frenk & White (1997) Navarro J. F., Frenk C. S. & White, S. D. M., 1997, Astrophys. J. 490, 493. * Nelson (1988) Nelson A. H., 1988, Mon. Not. R. Astron. Soc. 233, 155. * Piddington (1964) Piddington J. H., 1964, Mon. Not. R. Astron. Soc. 128, 345. * Ruiz-Granados et al. (2010) Ruiz-Granados B., Rubiño-Martín J. A., Florido E. & Battaner E., 2010, Astrophys. J. 723, L44. * Ruiz-Granados et al.
(2012) Ruiz-Granados B., Battaner E., Calvo J., Florido E. & Rubiño-Martín J. A., 2012, Astrophys. J. 755, L23. * Sánchez-Salcedo & Reyes-Ruiz (2004) Sánchez-Salcedo F. J. & Reyes-Ruiz M., 2004, Astrophys. J. 607, 247. * Sánchez-Salcedo & Santillán A. (2013) Sánchez-Salcedo F. J. & Santillán A., 2013, Mon. Not. R. Astron. Soc. 433, 2172. * Schleicher et al. (2010) Schleicher D. R. G., Banerjee R., Sur S., Arshakian T. G., Klessen R. S., Beck R. & Spaans M., 2010, Astron. Astrophys. 522, A115. * Sofue (2013) Sofue Y., 2013, Pub. Astron. Soc. Jpn. 65, 118. * Sofue (2015) Sofue Y., 2015, Pub. Astron. Soc. Jpn. 67, 75. * Tabatabaei et al. (2017) Tabatabaei F. S. et al., 2017, Astrophys. J. 836, 185. * van Weeren et al. (2019) van Weeren R. J., de Gasperin F., Akamatsu H., Brüggen M., Feretti L., Kang H., Stroe A. & Zandanel F., 2019, Sp. Sci. Rev. 215, 16.
# Has Telescope Array Discovered Electroweak Monopole? Y. M. Cho<EMAIL_ADDRESS>School of Physics and Astronomy, Seoul National University, Seoul 08826, Korea Center for Quantum Spacetime, Sogang University, Seoul 04107, Korea Franklin H. Cho<EMAIL_ADDRESS>Center for Quantum Nano Science, Ewha Womans University, Seoul 03766, Korea ###### Abstract We propose that the ultra high energy cosmic ray recently detected by Telescope Array is the electroweak monopole, and present theoretical arguments which support this. This strongly motivates the necessity for the “cosmic” MoEDAL experiment which could back up our proposal. To confirm this we propose that Telescope Array measure the magnetic charge of the ultra high energy cosmic ray particles with SQUIDs. ultra high energy cosmic ray, Greisen-Zatsepin-Kuzmin (GZK) limit, electroweak monopole as ultra high energy cosmic ray, enhanced Cherenkov radiation of electroweak monopole, cosmological production of electroweak monopole, remnant electroweak monopole density Recently the Telescope Array Group (TAG) has announced the detection of an ultra high energy cosmic ray (UHECR) of energy 244 EeV ($244\times 10^{18}$ eV) [1]. This is the most recent confirmation of the existence of UHECR particles, following the 320 EeV particle in 1991, the 213 EeV particle in 1993, and the 280 EeV particle in 2001 [2]. This tells us that UHECR particles exist in nature. The TAG report (like the similar previous ones) is very interesting and remarkable in two respects. First, the energy of the cosmic ray exceeds the Greisen-Zatsepin-Kuzmin (GZK) energy limit [3]. Second, the arrival direction of the UHECR implies that it came from the Local Void. This is puzzling, because there are few known particles in nature which could produce such a UHECR. A natural candidate for the UHECR is the proton, but it is very difficult for a proton to reach such a high energy. A relativistic proton moving through the cosmic microwave background, after colliding with the 3 K microwave photons, becomes a $\Delta^{*}$ and decays to nucleons and pions, $\displaystyle p+\gamma\rightarrow\Delta^{*}\rightarrow N+\pi.$ (1) The mean free path for this process is known to be about 6 Mpc. This resonant scattering degrades the energy of relativistic protons and prevents them from acquiring energies above $5\times 10^{19}$ eV. This is the GZK limit [3]. This implies that, if the UHECR of TAG is a proton, it should have originated nearby, or should have energy far above the GZK limit. But these possibilities seem unlikely. The other point is that the UHECR observed by TAG appears to come from the void, which suggests that the origin of this UHECR is not the galactic center or other astronomical objects like neutron stars. This is another puzzling feature of this UHECR [1]. From this we may conclude that the UHECR of TAG is not likely to be an ultra relativistic proton, or any known particle produced by astronomical objects. This lack of a possible explanation for the UHECR is disappointing, and it has been suggested that this could be due to “an incomplete knowledge of particle physics” [1]. The purpose of this Letter is to argue that the UHECR observed by TA could be a remnant electroweak monopole produced in the early universe during the electroweak phase transition. We present theoretical reasons to support this, and discuss how to confirm this proposal experimentally. The proposition that monopoles could be the source of the UHECR is not new [4].
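As a back-of-the-envelope check (ours) of the GZK scale quoted above, the head-on threshold energy for $p+\gamma\rightarrow\Delta^{*}$ on a typical CMB photon can be evaluated as follows; using the mean CMB photon energy is an assumption of this estimate.

```python
M_P = 0.938e9             # proton mass, eV
M_DELTA = 1.232e9         # Delta(1232) resonance mass, eV
E_GAMMA = 2.7 * 2.35e-4   # mean CMB photon energy ~ 2.7 kT at T = 2.73 K, eV

# Head-on threshold of p + gamma -> Delta*:
#   s = m_p^2 + 4 E_p E_gamma = m_Delta^2  =>  E_p = (m_Delta^2 - m_p^2) / (4 E_gamma)
e_th = (M_DELTA**2 - M_P**2) / (4.0 * E_GAMMA)
print(f"{e_th:.1e} eV")   # ~ 2.5e20 eV; averaging over photon energies and
                          # collision angles brings the effective cutoff to ~ 5e19 eV
```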
Obviously the monopole, if it exists, is an ideal candidate for the UHECR particle. It has a strong magnetic interaction, stronger than the electric interaction of the proton by the factor $1/\alpha$. It has the absolute stability guaranteed by the $\pi_{2}(S^{2})$ monopole topology, which is required for the UHECR particles. Moreover, it may have a mass much heavier than that of the proton. For this reason it has been asserted that the grand unification monopole could generate the UHECR [5]. In this paper, however, we argue that the electroweak (“Cho-Maison”) monopole could be the UHECR particle. To do that we must understand why the electroweak monopole, and not the others, should be viewed as the UHECR of TAG. With the advent of the Dirac monopole, the magnetic monopole has become an obsession in physics [6, 7]. After the Dirac monopole we have had the Wu-Yang monopole [8], the ’tHooft-Polyakov monopole [9], the grand unification monopole [10], and the electroweak monopole [11, 12]. But the electroweak monopole stands out as the most realistic monopole that exists in nature and could actually be detected by experiment [13, 15, 16, 14, 17, 18]. This is because the Dirac monopole in electrodynamics transforms to the electroweak monopole after the unification of the electromagnetic and weak interactions, while the Wu-Yang monopole in QCD becomes unobservable after the monopole condensation which confines the color. Moreover, the ’tHooft-Polyakov monopole exists only in a hypothetical theory, and the grand unification monopole, which could have been amply produced at the grand unification scale in the early universe, has probably become completely irrelevant in the present universe after inflation. Unlike other monopoles the electroweak monopole has the following unique features [11]. First, the magnetic charge is not $2\pi/e$ but $4\pi/e$, twice that of the Dirac monopole. This is because the period of the electromagnetic U(1) subgroup of the standard model is $4\pi$, since the electromagnetic U(1) comes from the U(1) subgroup of SU(2). Second, the mass is of the order of several TeV, probably between 4 and 10 TeV. This is because the mass basically comes from the same Higgs mechanism which makes the W boson massive, except that here the coupling is magnetic (i.e., $4\pi/e$). This makes the monopole mass $1/\alpha$ times heavier than the W boson mass. In spite of this, the size of the monopole is set by the weak boson masses. This is because the monopole has a Higgs and W boson dressing which has an exponential damping fixed by the weak boson masses. Third, this is the monopole which exists within (not beyond) the standard model as the electroweak generalization of the Dirac monopole, a hybrid between the Dirac and ’tHooft-Polyakov monopoles. Finally, this monopole is absolutely stable. The topological stability is obvious, but it also has dynamical stability [18]. The importance of the electroweak monopole comes from the fact that it must exist if the standard model is correct. This means that the discovery of the monopole, not the Higgs particle, should be viewed as the final (and topological) test of the standard model. Moreover, if discovered, it will become the first topologically stable magnetically charged elementary particle in the history of physics [13]. Furthermore, the monopoles produced in the early universe could play important roles in physics, in particular in cosmology [14].
Indeed, when coupled to gravity, they automatically turn into primordial magnetic black holes which could account for the dark matter, become the seeds of stellar objects and galaxies, and create the large-scale structures of the universe. As importantly, they could generate the intergalactic magnetic field, and could be the source of the ultra high energy cosmic rays [14]. This makes the experimental detection of the electroweak monopole a most urgent issue after the discovery of the Higgs particle. For this reason MoEDAL and ATLAS at CERN are actively searching for the monopole [19, 20, 21]. Since the electroweak monopole has unique characteristics, they could detect the monopole without much difficulty, if LHC could produce them. If the monopole mass exceeds 7 TeV, however, the present 14 TeV LHC may not be able to produce the monopoles. In this case we may have to wait for the next upgrade of LHC, or else look for the remnant monopoles produced in the early universe during the electroweak phase transition. How can we detect the remnant electroweak monopoles? Obviously we could design a “cosmic” MoEDAL experiment located on high mountains to detect them, and this is now being planned. At this point one might wonder if underground experiments like IceCube or Antares could be helpful. Unfortunately the underground experiments may have difficulty detecting them, because the penetration length of the monopole in matter is expected to be very short (of the order of 10 meters in aluminum) owing to the strong magnetic interaction [22, 23]. Another way to detect the electroweak monopole is to detect the UHECR particles which could be generated by the remnant monopoles, using cosmic ray experiments. The primary purpose of this type of experiment, of course, is to detect high energy cosmic rays (not monopoles). Nevertheless such experiments could be used to detect the remnant monopoles, and in this connection the TA experiment could play an important role. The question here is why and how the remnant electroweak monopoles could be identified as the UHECR particles, and we are now ready to discuss how they can become the UHECR particles. To see this we first notice that the remnant electroweak monopoles could easily acquire energies above the GZK limit from the intergalactic magnetic field. Since the average intergalactic magnetic field $B$ is about $3\times 10^{-6}$ G with a coherence length $L$ of the order of 300 pc, we can estimate the monopole energy gain traveling through the intergalactic magnetic field to be [14] $\displaystyle\Delta E\simeq\frac{4\pi}{e}~BL\simeq 1.2\times 10^{20}~\text{eV}.$ (2) Moreover, we can easily show that the monopole energy loss due to the linear acceleration is completely negligible. This confirms that they could acquire energies above the GZK limit without any problem. Moreover, unlike the proton, the monopole retains its energy traveling through the cosmic microwave background. This is because the photon-monopole scattering cross section is given by the classical Thomson scattering cross section, which is many orders of magnitude below the photon-proton cross section described by (1). Notice that (2) is independent of the monopole mass. So, depending on the mass, the monopole (even with the above energy) could be relativistic or non-relativistic. For example, if it is the grand unification monopole of mass $10^{17}$ GeV, it becomes non-relativistic and thus cannot generate relativistic secondaries in the cosmic ray shower.
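A two-line numerical comparison (ours, with representative masses as assumptions) makes the contrast explicit for the energy gain (2):

```python
E_GAIN = 1.2e20   # eV, Eq. (2)
M_EW = 1.0e13     # eV, electroweak monopole, M ~ M_W / alpha ~ 10 TeV
M_GUT = 1.0e26    # eV, grand unification monopole, M ~ 1e17 GeV

print(E_GAIN / M_EW)    # ~ 1e7: Lorentz factor, extremely relativistic
print(E_GAIN / M_GUT)   # ~ 1e-6: kinetic energy << rest mass, non-relativistic
```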
In contrast, (2) makes the electroweak monopole with mass $M_{W}/\alpha$ extremely relativistic, so that it can easily generate relativistic showers. And in reality we do have relativistic showers in the UHECR. This strongly indicates that the UHECR could be the electroweak monopole. In the following we will assume the monopole mass to be $M=M_{W}/\alpha$ for simplicity. What does the electromagnetic shower of the electroweak monopole look like? The magnetic field of the monopole, $B=(4\pi/e)\hat{r}/r^{2}$, when boosted to the energy (2), generates the electric field $\vec{E}=\gamma\,\vec{\beta}\times\vec{B}$. So the electromagnetic energy loss of the relativistic electroweak monopole should be similar to that of a heavy charged particle of mass $M_{W}/\alpha$ with a similar $\gamma$ factor (for our monopole with energy (2), we have $\gamma\geq 10^{7}$) and charge $1/\alpha$. Moreover, the Cherenkov radiation of the monopole is enhanced by the factor $(1/\alpha)^{2}$ compared to that of the proton. This enhanced Cherenkov radiation is an important feature of the UHECR generated by the electroweak monopole, which could be useful in identifying the UHECR particle as the electroweak monopole. As for the hadronic shower of the electroweak monopole, notice that the maximum fraction of the energy transferred from our monopole of mass $M$ to target particles of mass $m$ is given by [5] $\displaystyle\frac{E^{\prime}_{m}}{E_{M}}=\frac{m}{E_{M}}\Big(\frac{4E_{M}^{2}-2M^{2}+m^{2}}{4mE_{M}+2M^{2}}\Big).$ (3) Now, for our case we have $E_{M}^{2}>>M^{2}>>m^{2}$, so that $\displaystyle\frac{E^{\prime}_{m}}{E_{M}}\simeq\frac{2mE_{M}}{2mE_{M}+M^{2}}\simeq 1.$ (4) This should be compared to the maximum energy transfer of the proton (with $M\simeq m$), $\displaystyle\frac{E^{\prime}_{m}}{E_{M}}=1-\frac{m}{2E_{M}}\simeq 1,$ (5) which is not so different from (4). From this we may conclude that our monopoles transfer most of the energy in the first forward hadronic scattering, and thus produce an air shower resembling a typical hadron-initiated shower. This implies that trying to identify the UHECR by the hadronic shower pattern may not be a wise strategy. Now, we have to discuss the remnant electroweak monopole density in the present universe. This is a complicated issue, but fortunately it has already been studied before [14]. To summarize the results we start from the temperature-dependent effective potential of the standard model, $\displaystyle V_{eff}(\rho)=V_{0}(\rho)-\frac{C_{1}}{12\pi}\rho^{3}T+\frac{C_{2}}{2}\rho^{2}T^{2}-\frac{\pi^{2}}{90}g_{*}T^{4}+\delta V_{T},$ $\displaystyle V_{0}(\rho)=\frac{\lambda}{8}(\rho^{2}-\rho_{0}^{2})^{2},$ $\displaystyle C_{1}=\frac{6M_{W}^{3}+3M_{Z}^{3}}{\rho_{0}^{3}}\simeq 0.36,$ $\displaystyle C_{2}=\frac{4M_{W}^{2}+2M_{Z}^{2}+M_{H}^{2}+4m_{t}^{2}}{8\rho_{0}^{2}}\simeq 0.37,$ (6) where $V_{0}$ is the zero-temperature potential, $g_{*}$ is the total number of distinct helicity states of the particles with mass smaller than $T$, $C_{1}$ and $C_{2}$ are the contributions from the weak bosons and fermions, $m_{t}$ is the top quark mass, and $\delta V_{T}$ denotes the slowly varying logarithmic corrections and the lighter quark contributions, which we will neglect from now on. Figure 1: The temperature-dependent effective potential (6) at various temperatures, where $T_{G}$ is the Ginzburg temperature. Notice that the potentials at $T_{1},~T_{c},~T_{2}$ are almost indistinguishable.
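The following sketch (ours) evaluates (6) with standard-model inputs as assumptions ($\rho_{0}\simeq 246$ GeV and $\lambda=(M_{H}/\rho_{0})^{2}$ from $M_{H}^{2}=\lambda\rho_{0}^{2}$), dropping the $\rho$-independent $g_{*}$ term and $\delta V_{T}$; with these inputs the transition temperatures land within about 0.5 GeV of the values quoted in (8) below, and the near-degeneracy of the potentials around the transition is easy to reproduce.

```python
import numpy as np

RHO0 = 246.2                  # GeV, Higgs vacuum expectation value
LAM = (125.2 / RHO0)**2       # ~ 0.26, from M_H^2 = lam * rho0^2
C1, C2 = 0.36, 0.37           # loop coefficients quoted in Eq. (6)

def v_eff(rho, T):
    """Eq. (6) without the rho-independent g* term and delta V_T."""
    v0 = LAM / 8.0 * (rho**2 - RHO0**2)**2
    return v0 - C1 / (12.0 * np.pi) * rho**3 * T + 0.5 * C2 * rho**2 * T**2

rho = np.linspace(0.0, 300.0, 3001)
for T in (150.0, 145.7, 140.0, 57.6):   # symmetric, near T_c, broken, T_G
    rho_min = rho[np.argmin(v_eff(rho, T))]
    print(T, rho_min)   # the global minimum moves from rho = 0 to rho_+(T)
```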
The potential has three local extrema, at $\displaystyle\rho_{s}=0,$ $\displaystyle\rho_{\pm}(T)=\Big\{\frac{C_{1}}{4\pi\lambda}\pm\sqrt{\Big(\frac{C_{1}}{4\pi\lambda}\Big)^{2}+\frac{\rho_{0}^{2}}{T^{2}}-\frac{2C_{2}}{\lambda}}\Big\}~T.$ (7) The first extremum, $\rho_{s}=0$, represents the Higgs vacuum of the symmetric (unbroken) phase, the second one, $\rho_{-}(T)$, represents the local maximum, and the third one, $\rho_{+}(T)$, represents the local minimum Higgs vacuum of the broken phase. The potential is characterized by three temperatures, $\displaystyle T_{1}=\frac{4\pi\lambda}{\sqrt{32\pi^{2}\lambda C_{2}-C_{1}^{2}}}~\rho_{0}\simeq 146.13~\text{GeV},$ $\displaystyle T_{c}=\sqrt{\frac{18}{36\pi^{2}\lambda C_{2}-C_{1}^{2}}}~\pi\lambda\rho_{0}\simeq 146.09~\text{GeV},$ $\displaystyle T_{2}=\sqrt{\frac{\lambda}{2C_{2}}}~\rho_{0}\simeq 145.82~\text{GeV}.$ (8) Above $T_{1}$ only $\rho_{s}=0$ is the true vacuum of the effective potential, and the electroweak symmetry remains unbroken. At the critical temperature $T_{c}$ the two vacua $\rho_{s}$ and $\rho_{+}$ are degenerate, and the electroweak phase transition starts. At $T_{2}$ we have only one vacuum, $\rho_{+}$, with $\rho_{s}=\rho_{-}$, and the phase transition ends. So, in principle the electroweak phase transition is first order. Since $T_{1}$, $T_{c}$, and $T_{2}$ are very close, however, the phase transition is practically second order [14]. The effective potential (6) is shown graphically in Fig. 1. The monopole production in a second order phase transition is supposed to be described by the Kibble-Zurek mechanism, so that the monopole production starts at $T_{c}$. However, the thermal fluctuations of the Higgs vacuum which create the seeds of the monopoles continue until the universe cools down to the Ginzburg temperature $T_{G}\simeq 57.6~\text{GeV}$, where the monopole production stops [14, 24]. The Ginzburg temperature is shown in Fig. 1. So, the electroweak monopole formation takes place between $T_{c}$ and $T_{G}$, or on average around $T_{i}$, $\displaystyle T_{i}=\frac{T_{c}+T_{G}}{2}\simeq 101.7~\text{GeV}.$ (9) In terms of time, the electroweak monopole production lasts from $1.8\times 10^{-11}$ sec to $1.2\times 10^{-10}$ sec after the big bang, a period of $10.3\times 10^{-11}$ sec, or, corresponding to $T_{i}$, around $3.5\times 10^{-11}$ sec after the big bang on average. Two important parameters of the electroweak phase transition are the temperature-dependent Higgs boson mass $\bar{M}_{H}$, which determines the correlation length $\xi=1/\bar{M}_{H}$, and the W-boson mass $\bar{M}_{W}$, which determines the monopole mass $\bar{M}\simeq\bar{M}_{W}/\alpha$. The Higgs boson acquires its minimum mass 5.54 GeV at $T=T_{c}$, which approaches the zero-temperature value 125.2 GeV as the universe cools down. The W-boson, which is massless before the symmetry breaking, becomes massive, reaching 6.76 GeV at $T_{c}$ and 73.2 GeV at $T_{G}$. This tells us that the infant monopole masses at $T_{c}$ and $T_{G}$ are around 1.4 TeV and 10 TeV (with the adolescent mass 10.7 TeV). According to the Kibble-Zurek mechanism the initial monopole density is determined by the mean value of the two correlation lengths at $T_{c}$ and $T_{G}$ [25, 26], $\displaystyle\xi_{i}=\frac{\xi(T_{c})+\xi(T_{G})}{2}\simeq 9.4\times 10^{-16}~\text{cm}.$ (10) From this we can estimate the initial density of the monopoles to be $n_{i}\simeq T_{i}^{3}/\xi_{i}^{3}\simeq 0.2~T_{i}^{3}$. This estimate, however, has the defect that the energy within one correlation volume is not enough to provide the monopole mass. This is because the monopole mass is $M_{W}/\alpha$, but the size is fixed by $M_{W}$. A natural way to cure this defect is to adopt a new correlation length $\bar{\xi}_{i}$ which satisfies the energy constraint, $\displaystyle\bar{\xi}_{i}=\Big(\frac{1}{\alpha}\Big)^{1/3}\xi_{i}\simeq 5.16\times\xi_{i}.$ (11) With this the initial monopole density is given by $\displaystyle n_{i}\simeq\frac{T_{i}^{3}}{\bar{\xi}_{i}^{3}}=\alpha\times\frac{T_{i}^{3}}{\xi_{i}^{3}}\simeq 1.5\times 10^{-3}~T_{i}^{3}.$ (12) This is smaller than the Kibble-Zurek estimate by the factor $\alpha$. Figure 2: The cosmic evolution of the electroweak monopole density $n_{m}/T^{3}$ against $\tau=M_{m}/T$. To determine the remnant monopole density in the present universe, however, we have to know how it evolves in cosmology.
The estimate $n_{i}\simeq 0.2~T_{i}^{3}$, however, has the defect that the energy within one correlation volume is not enough to provide the monopole mass. This is because the monopole mass is $M_{W}/\alpha$, but the size is fixed by $M_{W}$. A natural way to cure this defect is to adopt a new correlation length $\bar{\xi}_{i}$ which satisfies the energy constraint, $\displaystyle\bar{\xi}_{i}=\Big(\frac{1}{\alpha}\Big)^{1/3}\xi_{i}\simeq 5.16\times\xi_{i}.$ (11) With this the initial monopole density is given by $\displaystyle n_{i}\simeq\frac{T_{i}^{3}}{\bar{\xi}_{i}^{3}}=\alpha\times\frac{T_{i}^{3}}{\xi_{i}^{3}}\simeq 1.5\times 10^{-3}~T_{i}^{3}.$ (12) This is smaller than the Kibble-Zurek estimate by the factor $\alpha$.

Figure 2: The cosmic evolution of the electroweak monopole density $n_{m}/T^{3}$ against $\tau=M_{m}/T$.

To determine the remnant monopole density in the present universe, however, we have to know how it evolves in cosmology. The evolution of the monopoles is governed by the Boltzmann equation [27] $\displaystyle\frac{dn}{dt}+3Hn=-\sigma n^{2},$ (13) where $H$ is the Hubble parameter and $\sigma\simeq 1/3\alpha T^{2}$ is the monopole annihilation cross section. The solution of the evolution equation is shown in Fig. 2. Notice that the monopoles, as soon as they are produced, quickly annihilate each other. This is because at the initial stage of the monopole production, the capture radius of the monopole and anti-monopole is much bigger than the correlation length. This quickly reduces the initial monopole density by the factor $10^{-6}$. Moreover, the final value of the monopole density becomes independent of the initial value and approaches $n\rightarrow 18.25~(T/M_{P})\times T^{3}$, where $M_{P}$ is the Planck mass [27]. The monopole-antimonopole annihilation ceases at the temperature $T_{f}\simeq 60~\text{MeV}$. This is below the muon decoupling temperature, which means that the annihilation of the monopoles continues for a very long time.

Below $T_{f}$ the monopoles are free streaming, and we can estimate the remnant monopole density in the present universe. The monopole density at $T_{f}$ becomes $\displaystyle n_{f}\simeq 0.9\times 10^{-19}~T_{f}^{3},$ (14) which is much lower than the initial density given by (12). The number of monopoles within the comoving volume is conserved thereafter. But they still interact with the electron pairs in the hot plasma before they decouple around $T_{d}\simeq 0.5~\text{MeV}$, when the electron pairs disappear and the interaction rate becomes less than the Hubble expansion rate. Since the decoupling temperature of the electroweak monopole is much less than the monopole mass, the free streaming monopoles start out completely non-relativistic just after the decoupling. But eventually they become extremely relativistic through acceleration by the intergalactic magnetic field. Assuming that the expansion is adiabatic, the current number density and energy density of the monopole are given by [14] $\displaystyle n_{0}=\frac{g_{0}}{g_{f}}~\Big(\frac{T_{0}}{T_{f}}\Big)^{3}~n_{f},$ $\displaystyle\rho_{m}=n_{0}~M=\frac{g_{0}}{g_{f}}~\Big(\frac{T_{0}}{T_{f}}\Big)^{3}~n_{f}~M,$ (15) where $T_{0}=2.73~\text{K}=2.35\times 10^{-13}~\text{GeV}$ is the temperature of the universe today and $g$ is the effective number of degrees of freedom in entropy.
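A minimal sketch of this freeze-out is instructive. We rewrite Eq. (13) for the yield $Y=n/T^{3}$ in a radiation-dominated universe, assuming $H=1.66\sqrt{g_{*}}\,T^{2}/M_{P}$ with constant $g_{*}=100$, and reading the quoted cross section as $\sigma=1/(3\alpha T^{2})$; these groupings and parameter values are our assumptions, so the late-time coefficient differs from the detailed result $n\rightarrow 18.25\,(T/M_{P})T^{3}$ of [27], but the attractor behavior $Y\propto T/M_{P}$, independent of the initial density, comes out the same.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 1 / 137.036
M_P = 1.22e19            # Planck mass in GeV
g_star = 100.0           # relativistic degrees of freedom (assumed constant)

def dY_dT(T, Y):
    # From d(n a^3)/dt = -sigma n^2 a^3 with T ~ 1/a:  dY/dT = sigma Y^2 T^2 / H
    H = 1.66 * np.sqrt(g_star) * T**2 / M_P
    sigma = 1.0 / (3 * alpha * T**2)
    return sigma * Y[0] ** 2 * T**2 / H

T_i, T_f = 101.7, 0.06   # production and annihilation-freeze-out temperatures, GeV
Y_i = 1.5e-3             # initial yield n_i / T_i^3 from Eq. (12)

sol = solve_ivp(dY_dT, (T_i, T_f), [Y_i], method="LSODA", rtol=1e-8, atol=1e-30)
print(sol.y[0, -1])                                        # final yield at T_f
print(3 * alpha * 1.66 * np.sqrt(g_star) * T_f / M_P)      # analytic attractor ~ T/M_P
```

The rapid initial drop of $Y$, by many orders of magnitude within a small temperature interval, is the numerical counterpart of the statement that the monopoles annihilate quickly as soon as they are produced.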
So we have the current density of the monopoles $\displaystyle\Omega_{m}~h^{2}=\frac{\rho_{m}}{\rho_{c}}~h^{2}\simeq 1.2\times 10^{-10},$ (16) where $\rho_{c}$ is the critical density of the present universe and $h\simeq 0.678$ is the scaled Hubble constant in units of $H_{0}/(100~\text{km}~\text{s}^{-1}~\text{Mpc}^{-1})$. This is about $1.3\times 10^{-9}$ of the baryon density, which assures that the electroweak monopole does not alter the standard big bang cosmology in any significant way. In terms of the number density, this translates to about $6.1\times 10^{-5}/\text{km}^{3}$, or about $2.3\times 10^{-13}~n_{b}$, where $n_{b}$ is the number density of the baryons. This is roughly $10^{5}$ times bigger than the monopole density set by the Parker bound [28], which implies that (15) is an overestimate.

Actually there are reasons why the real remnant monopole density could be much less [14]. First, as the only heavy stable particles, with mass about $10^{4}$ times heavier than the proton, the monopoles can easily generate density perturbations and might have been buried in galactic centers. Second, they have a very short penetration length in matter, so that most of them could have been trapped and filtered out by the stellar objects after collision with them. Third (and most importantly), when coupled to gravity, they automatically turn into Reissner-Nordström type black holes and become primordial magnetic black holes. This strongly suggests that (15) could indeed be made consistent with the Parker bound.

With this remark we can confidently say that the UHECR particle observed by TA could be the remnant electroweak monopole produced in the early universe, as one of us has pointed out in an earlier work [14]. In particular, our estimate of the monopole density appears to be consistent with the UHECR event rate at TA, and it comes from the void as TA indicated. Unfortunately, this proposition could only be confirmed indirectly at TA at the moment, through the enhanced Cherenkov radiation of the monopole. To confirm it directly, TA should be able to measure the magnetic charge of the UHECR. In principle this could be done by installing a SQUID in each of the surface detectors at TA. We hope that TA could measure the magnetic charge of the UHECR with a SQUID in the near future. The details of our discussions will be published in a separate paper [29].

ACKNOWLEDGEMENT

The work is supported in part by the National Research Foundation funded by the Ministry of Education (Grant 2018-R1D1A1B0-7045163), the Ministry of Science and Technology (Grant 2022-R1A2C1006999), and by Center for Quantum Spacetime, Sogang University, Korea.

## References

* [1] Telescope Array Collaboration, Science 382, 903 (2023).
* [2] D. Bird et al., Astrophys. J. 441, 144 (1995); N. Hayashida et al., Phys. Rev. Lett. 73, 3491 (1994); M. Chikawa et al., Proceedings of the 27th Int'l Cosmic Ray Conference (Hamburg) 1, 333 (2001).
* [3] K. Greisen, Phys. Rev. Lett. 16, 748 (1966); G. Zatsepin, V. Kuzmin, Pisma Zh. Eksp. Teor. Fiz. 4, 114 (1966).
* [4] N. Porter, Nuovo Cim. 16, 958 (1960).
* [5] T. Kephart and T. Weiler, Astropart. Phys. 4, 271 (1996).
* [6] P. A. M. Dirac, Proc. R. Soc. Lond. A 133, 60 (1931); Phys. Rev. 74, 817 (1948).
* [7] B. Cabrera, Phys. Rev. Lett. 48, 1378 (1982).
* [8] T. T. Wu and C. N. Yang, Properties of Matter under Unusual Conditions, Interscience, New York (1969); Y. M. Cho, Phys. Rev. Lett. 44, 1115 (1980).
* [9] G. 't Hooft, Nucl. Phys. B 79, 276 (1974); A. Polyakov, JETP Lett.
20, 194 (1974).
* [10] C. Dokos and T. Tomaras, Phys. Rev. D 21, 2940 (1980).
* [11] Y. M. Cho and D. Maison, Phys. Lett. B 391, 360 (1997).
* [12] Y. Yang, Proc. R. Soc. Lond. A 454, 155 (1998); Solitons in Field Theory and Nonlinear Analysis, Springer-Verlag (2001).
* [13] Kyoungtae Kimm, J. H. Yoon, and Y. M. Cho, Eur. Phys. J. C 75, 67 (2015); Y. M. Cho, Kyoungtae Kimm, and J. H. Yoon, Phys. Lett. B 761, 203 (2016).
* [14] Y. M. Cho, Phil. Trans. R. Soc. A 377, 20190038 (2019).
* [15] J. Ellis, N. Mavromatos, and T. You, Phys. Lett. B 756, 29 (2016).
* [16] F. Blaschke and P. Beneš, Prog. Theor. Exp. Phys. 2018, 073B03 (2018).
* [17] P. Zhang, L. Zou, and Y. M. Cho, Eur. Phys. J. C 80, 280 (2020).
* [18] R. Gervalle and M. Volkov, Nucl. Phys. B 984, 115937 (2022); Nucl. Phys. B 987, 116112 (2023).
* [19] MoEDAL Collaboration, Phys. Rev. Lett. 118, 061801 (2017); Phys. Rev. Lett. 123, 021802 (2019).
* [20] MoEDAL Collaboration, Phys. Rev. Lett. 126, 071801 (2021); Nature 602, 63 (2022).
* [21] ATLAS Collaboration, Phys. Rev. Lett. 109, 261803 (2012); Phys. Rev. Lett. 124, 031802 (2020).
* [22] IceCube Collaboration, Phys. Rev. D 87, 022001 (2013); Eur. Phys. J. C 78, 924 (2018).
* [23] Antares Collaboration, Astropart. Phys. 35, 634 (2012); Pierre Auger Collaboration, Phys. Rev. D 94, 082002 (2016).
* [24] V. Ginzburg, Sov. Phys. Solid State 2, 1824 (1961).
* [25] T. W. B. Kibble, J. Phys. A 9, 1387 (1976).
* [26] W. Zurek, Phys. Rep. 276, 177 (1996).
* [27] J. Preskill, Phys. Rev. Lett. 43, 1365 (1979).
* [28] E. Parker, Astrophys. J. 162, 665 (1970).
* [29] Y. M. Cho and F. H. Cho, to be published.
# The Kohn-Luttinger effect in dense matter and its implications for neutron stars

Mia Kumamoto<EMAIL_ADDRESS>Institute for Nuclear Theory, University of Washington, Seattle, WA USA Department of Physics, University of Washington, Seattle, WA USA Sanjay Reddy<EMAIL_ADDRESS>Institute for Nuclear Theory, University of Washington, Seattle, WA USA

###### Abstract

Repulsive short-range interactions can induce p-wave attraction between fermions in dense matter and lead to Cooper pairing at the Fermi surface. We investigate this phenomenon, well known as the Kohn-Luttinger effect in condensed matter physics, in dense matter with strong short-range repulsive interactions. We find that repulsive interactions required to stabilize massive neutron stars can induce p-wave pairing in neutron and quark matter. When massive vector bosons mediate the interaction between fermions, the induced interaction favors Cooper pairing in the ${}^{3}P_{2}$ channel. For the typical strength of the interaction favored by massive neutron stars, the associated pairing gaps in neutrons can be in the range of 10 keV to 10 MeV. Strong and attractive spin-orbit and tensor forces between neutrons can result in repulsive induced interactions that greatly suppress the ${}^{3}P_{2}$ pairing gap in neutron matter. In quark matter, the induced interaction is too small to result in pairing gaps of phenomenological relevance.

## I Introduction

The discovery of massive neutron stars by radio observations of pulsars Cromartie _et al._ (2019); Antoniadis _et al._ (2013); Demorest _et al._ (2010) confirmed that the maximum mass of neutron stars satisfies $M_{\mathrm{max}}>2~M_{\odot}$, and gravitational wave and x-ray observations constrain the radius of a neutron star with mass $\simeq 1.4~M_{\odot}$ to the range $11-13$ km Abbott _et al._ (2018); De _et al._ (2018); Capano _et al._ (2020); Miller _et al._ (2019); Riley _et al._ (2019). These constraints and theoretical calculations of the equation of state (EOS) of neutron-rich matter at $n_{B}\lesssim 2n_{\mathrm{sat}}$ Hebeler and Schwenk (2010); Gandolfi _et al._ (2014); Tews _et al._ (2013); Lynn _et al._ (2016); Drischler _et al._ (2020), taken together, strongly suggest a rapid increase in the pressure and the speed of sound in the neutron star core Tews _et al._ (2018). This, in turn, implies that strong repulsive interactions are necessary for any putative phase of high-density matter in the core. This article addresses whether such repulsion can have other observable consequences. In particular, we investigate if such repulsion can lead to Cooper pairing between fermions with non-zero angular momentum due to the Kohn-Luttinger (KL) effect Kohn and Luttinger (1965) in the cores of neutron stars.

The KL effect, which arises because the interaction at the Fermi surface is modified due to screening in the medium, implies that the Cooper pairing instability in high angular momentum states is inevitable and occurs even when the bare interaction is repulsive Kohn and Luttinger (1965). The effect has been discussed extensively in condensed matter physics (for a recent pedagogic review, see Ref. Kagan (2013)). In the context of dense nuclear matter, early work Fay and Layzer (1968); Pines (1971); Clark _et al._ (1976) recognized that the interaction between nucleons induced by polarization effects in the medium would significantly alter the pairing gaps (for recent reviews, see Dean and Hjorth-Jensen (2003); Gezerlis _et al._ (2014)).
The induced interaction, typically calculated in second-order perturbation theory or Fermi liquid theory, naturally incorporates the KL effect. In dilute Fermi systems with attractive s-wave short-range interactions, it has been known since the work of Gor'kov and Melik-Barkhudarov that the induced interaction suppresses the s-wave pairing gap relative to the BCS prediction Gorkov and Melik-Barkhudarov (1961). In neutron matter, when the s-wave interaction is repulsive, the induced interaction was initially expected to increase the p-wave attraction between neutrons Fay and Layzer (1968). However, more recent work in Ref. Schwenk and Friman (2004) finds that the induced spin-orbit interaction can dominate and instead result in a net suppression at modest density. Here, we revisit the calculation of the induced interaction in high-density matter characterized by a large sound speed to study its implications for ${}^{3}P_{2}$ pairing. We consider short-range interactions that contain central and non-central components and study the competition between the attractive and repulsive components of the induced p-wave interaction and its density dependence.

In quark matter, when the Fermi surfaces of up, down, and strange quarks are split due to charge neutrality and a larger strange quark mass, the KL effect provides a mechanism to pair quarks of the same flavor and color. However, in this case, we find that the p-wave interaction induced by the short-range repulsion introduced to increase the pressure of quark matter is too small to be of phenomenological relevance.

Our study, which relies on extrapolating results derived from perturbation theory to strong coupling, provides order-of-magnitude estimates for the pairing gaps. Although the method we employ is inadequate to make quantitative predictions, it identifies a mechanism for ${}^{3}P_{2}$ pairing in dense Fermi systems with large repulsive short-range interactions mediated by heavy vector bosons. In section II, we review the KL mechanism for non-relativistic fermions. In section III, we derive the induced interaction between neutrons at high density by assuming that the bare interaction is due to the exchange of heavy vector mesons. In section IV, we consider the possible effects of the KL mechanism in quark matter. We discuss the implications for neutron star cooling in section V, summarize our main findings, and discuss open questions in section VI.

## II Kohn-Luttinger Mechanism

Kohn and Luttinger showed that a short-range repulsive potential can induce attraction in large odd partial waves due to medium effects that can overscreen the effective interaction between fermions at finite density Kohn and Luttinger (1965). There has been renewed interest in studying the KL effect in condensed matter systems because calculations suggest that the induced pairing gaps in $p$-wave and other low-order partial waves could be large enough to be realized in experiments (see, for example, Baranov _et al._ (1992); González (2008); Nandkishore _et al._ (2014); González and Stauber (2019)). KL's original calculation included terms at second order in the potential; a more recent analysis Efremov _et al._ (2000) extends this to fourth order in a constant potential characterized by a large scattering length, and also includes retardation effects, in which pairing occurs away from the Fermi surface, that likewise contribute at fourth order.
In weak coupling, the KL effect arises naturally at second order in the potential by evaluating the diagrams in Fig. 1. We refer to these diagrams, from left to right, as the screening, vertex, and exchange diagrams, respectively. The vertex diagram also has a mirror image, which must be included. We consider interactions that occur at the Fermi surface, so $|k|=|k^{\prime}|=k_{F}$. The momentum transfer is labeled $q=k^{\prime}-k$.

Figure 1: Irreducible second order diagrams for the Kohn-Luttinger mechanism: the screening, vertex, and exchange diagrams (left to right), with external momenta $\pm k$, $\pm k^{\prime}$ and loop momenta $\ell$, $\ell+q$.

For a short-range potential with zero range and strength denoted by $U_{0}$, the non-relativistic calculation of these diagrams is straightforward. In this case, the screening diagram cancels the contribution from the two vertex diagrams, and only the exchange diagram contributes. The exchange diagram also gets an overall sign since it is crossed and is given by $V_{KL}(q)=U_{0}^{2}\frac{1}{\beta}\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\frac{1}{\ell_{0}-\ell^{2}/2m}\frac{1}{\ell_{0}-(\ell+q)^{2}/2m}$ (1) Notice that since $|k|=|k^{\prime}|$, the frequency transfer $q_{0}$ is zero. Taking the Matsubara sum and simplifying gives a singular contribution to the potential. Since we are considering the effects of interactions with the medium, the loop integral carries a factor $n_{F}(\ell^{2}/2m)=1/(e^{\beta(\ell^{2}/2m-\mu)}+1)$ and does not need to be regulated. At the low temperatures we consider ($T\ll E_{F}=k_{F}^{2}/2m$) this simplifies to $n_{F}(\ell^{2}/2m)\approx\Theta(k_{F}-\ell)$. The momentum integral in Eq. 1 yields the Lindhard function defined by $\begin{split}U(q)=-\frac{m}{4\pi^{3}q}\int\ell\,d\ell\,d\Omega_{\ell}\frac{\Theta(k_{F}-\ell)}{\cos\theta_{q\ell}-q/2\ell}=\frac{mk_{F}}{4\pi^{2}}\left[1-\frac{1}{\overline{q}}\left(1-\frac{\overline{q}^{2}}{4}\right)\log\left|\frac{1-\overline{q}/2}{1+\overline{q}/2}\right|\right]\end{split}$ (2) where $\bar{q}=q/k_{\text{F}}$. Thus, Eq. 1 can be written as $V_{KL}=-U_{0}^{2}~U(q)$ (3) For any potential, the contribution to the induced potential from the singularity at $q=2k_{F}$ of the Lindhard function scales as $(-1)^{L}L^{-4}$ for large $L$ Kohn and Luttinger (1965); Maiti and Chubukov (2013), where $L$ is the angular momentum quantum number. Since the regular contributions to the total potential fall off exponentially with $L$, attraction is guaranteed for large odd partial waves. Although these results are only generically true for large $L$, they persist for relatively low partial waves for some potentials. It was shown by Baranov _et al._ (1992); Efremov _et al._ (2000) that the constant potential calculated above would result in p-wave attraction.
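The singular behavior at $q=2k_{F}$ is mild (it sits in the derivative, not in the function itself) and is easy to exhibit numerically. The following sketch evaluates the dimensionless bracket of Eq. (2) and a finite-difference slope that grows without bound as $\bar{q}\rightarrow 2$; names and parameter choices are ours.

```python
import math

def lindhard_shape(qbar):
    """Bracketed factor in Eq. (2): U(q) = (m k_F / 4 pi^2) * lindhard_shape(q/k_F)."""
    return 1 - (1 / qbar) * (1 - qbar**2 / 4) * math.log(abs((1 - qbar / 2) / (1 + qbar / 2)))

h = 1e-6  # finite-difference step
for q in (0.1, 1.0, 1.9, 1.99, 2.01, 2.1):
    slope = (lindhard_shape(q + h) - lindhard_shape(q - h)) / (2 * h)
    print(f"qbar = {q:5.2f}   shape = {lindhard_shape(q):.6f}   slope = {slope:+.3f}")
```

The function itself stays finite everywhere (it is $2$ at $\bar{q}=0$ and $1$ at $\bar{q}=2$), while the slope diverges logarithmically at $\bar{q}=2$; it is this non-analyticity that feeds the $(-1)^{L}L^{-4}$ tail in high partial waves.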
The p-wave contribution from the potential in Eq. 3 can be found easily by making a change of integration variable from $\int_{-1}^{1}d\cos\theta$ to $\int_{0}^{2}\overline{q}d\overline{q}$. The matrix element in the Born approximation should be doubled due to a diagram with outgoing momenta switched, but we will absorb this normalization into the gap equation to match the literature. The p-wave potential from the exchange diagram and its crossed counterpart is then given by $V_{\ell=1}=-U_{0}^{2}\frac{mk_{F}}{4\pi}\frac{4}{5\pi}(2\log 2-1)$ (4) The superfluid gap due to the induced attraction in p-waves was calculated several decades earlier in Refs. Fay and Layzer (1968); Kagan and Chubukov (1988). In the BCS approximation, the p-wave gap $\Delta_{p}\simeq\epsilon_{F}~\exp{\left(\frac{2}{N(0)V_{\ell=1}}\right)}=\epsilon_{F}~\exp{\left(\frac{-5\pi^{2}}{4(2\ln{2}-1)(ak_{\text{F}})^{2}}\right)}\,,$ (5) where $\epsilon_{F}=k_{\text{F}}^{2}/2m$ is the Fermi energy, $N(0)=mk_{\text{F}}/2\pi^{2}$ is the density of states at the Fermi surface for each spin, and the scattering length $a=mU_{0}/4\pi$ in weak coupling. In strong coupling, we cannot calculate the effective interaction at the Fermi surface reliably, and in the following, we shall assume that Eq. 5 provides a useful estimate. Further, we shall also assume that the s-wave scattering amplitude between quasi-particles at the Fermi surface, denoted by $f_{0}$, is directly related to the strength of the bare interaction $U_{0}$. In Fermi liquid theory (FLT), the sound speed $c_{s}=\frac{k_{F}}{\sqrt{3mm^{*}}}\sqrt{(1+F_{0})}\,,$ (6) where $F_{0}=N(0)f_{0}$ is a dimensionless measure of the quasi-particle interaction and $m^{*}$ is the fermion effective mass at the Fermi surface. Using this relation, we can estimate the interaction strength $U_{0}\approx f_{0}$ at a given density if $c_{s}$ and $m^{*}$ are known. If $m^{*}\approx m$ and $U_{0}=f_{0}$, then the induced p-wave gap in Eq. 5 can be rewritten as $\Delta_{p}\approx\epsilon_{F}~\exp{\left(-\frac{5}{(2\ln{(2)}-1)F_{0}^{2}}\right)}\,,$ (7) to illustrate its extreme sensitivity to $F_{0}$ and the sound speed through Eq. 6. For example, models of high-density neutron matter typically predict $F_{0}\gtrsim 2$ for $n_{B}\gtrsim 3~n_{\mathrm{sat}}$ Friman and Weise (2019). Under these conditions, Eq. 7 predicts robust p-wave pairing with gaps $\Delta_{p}\gtrsim 1$ MeV due to the induced interaction. In the next section, we will calculate the induced interaction between neutrons in more realistic scenarios where the bare potential is momentum-dependent and contains central and non-central components.

## III Induced p-wave pairing in dense neutron matter

The s-wave potential at the Fermi surface becomes repulsive in the neutron star core when $n_{B}\gtrsim n_{\mathrm{sat}}/2$. At these higher densities, ${}^{3}P_{2}$ pairing is favored because the bare potential in this channel remains attractive, and non-central components of the interaction, especially the spin-orbit interaction, favor the alignment of spin and orbital angular momentum. Calculations of the ${}^{3}P_{2}$ pairing gap in the BCS approximation reported in Refs. Baldo _et al._ (1998); Gezerlis _et al._ (2014) show that the pairing gaps are model dependent, especially for $n_{B}>2n_{\mathrm{sat}}$ because the nucleon-nucleon potentials at the relevant momenta are not well constrained by scattering data.
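Before specializing to realistic potentials, it is useful to have the constant-potential baseline of section II in numerical form. The sketch below first verifies the coefficient $4(2\ln 2-1)/5$ of Eq. (4) by projecting the Lindhard shape of Eq. (2) onto $P_{1}(\cos\theta)=1-\bar{q}^{2}/2$ with $d\cos\theta=-\bar{q}\,d\bar{q}$, and then evaluates the gap estimate of Eq. (7); the Fermi energy $\epsilon_{F}=60$ MeV is our illustrative assumption, not a value quoted in the text.

```python
import numpy as np
from scipy.integrate import quad

def lindhard_shape(q):
    return 1 - (1 / q) * (1 - q**2 / 4) * np.log(abs((1 - q / 2) / (1 + q / 2)))

# p-wave projection of the induced potential in units of U_0^2 m k_F / 4 pi^2
proj, _ = quad(lambda q: q * (1 - q**2 / 2) * lindhard_shape(q), 0, 2)
print(proj, 4 * (2 * np.log(2) - 1) / 5)   # both ~0.30903, cf. Eq. (4)

def gap_p(eps_F, F0):
    """Eq. (7): induced p-wave gap, assuming m* ~ m and U_0 ~ f_0."""
    return eps_F * np.exp(-5.0 / ((2 * np.log(2) - 1) * F0**2))

eps_F = 60.0   # MeV (assumed)
for F0 in (1.0, 2.0, 3.0):
    print(F0, gap_p(eps_F, F0))   # negligible at F0 ~ 1, MeV-scale by F0 >~ 2
```

The exponential sensitivity to $F_{0}$ is evident: a factor-of-two change in $F_{0}$ moves the gap by several orders of magnitude, which is why the strong-coupling extrapolation can only be trusted at the order-of-magnitude level.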
In these BCS calculations, the maximum value of the gap $\Delta_{{}^{3}P_{2}}\simeq 1-2$ MeV occurs between $2$ and $3~n_{\mathrm{sat}}$ and decreases rapidly with increasing density. At lower density, when the nucleon momenta $p\ll\Lambda_{\chi}$ where $\Lambda_{\chi}\simeq 500$ MeV is the breakdown scale of chiral EFT, a recent study used chiral EFT potentials and found that the maximum value $\Delta_{{}^{3}P_{2}}\simeq 0.4$ MeV was reached at $n_{B}\simeq 1.3~n_{\mathrm{sat}}$, and its decrease at higher density was found to be sensitive to the details of the short-distance physics Drischler _et al._ (2017). Together, these findings suggest that if neutron matter persists at the highest densities encountered in neutron stars, the bare ${}^{3}P_{2}$ potential could be small, and $\Delta_{{}^{3}P_{2}}$ depends on model assumptions about the nuclear interaction at short distances.

When the bare ${}^{3}P_{2}$ potential weakens, the gap is especially sensitive to corrections due to induced interactions in the medium (see, for example, the discussion in section 3.4 of Ref. Gezerlis _et al._ (2014)). In early work, the interaction induced by the central components of the nuclear force was found to increase the ${}^{3}P_{2}$ gap Fay and Layzer (1968); Pethick and Ravenhall (1991), as would be expected from the discussion of the KL mechanism in section II. However, as mentioned earlier, calculations that employed realistic low-energy nuclear potentials with significant non-central components found that the interference between the central and spin-orbit component of the nuclear force led to significant suppression of the ${}^{3}P_{2}$ gap for $n_{B}<2n_{\mathrm{sat}}$ Schwenk and Friman (2004). At $n_{B}\gtrsim 2n_{\mathrm{sat}}$, the description of nuclear interactions relies on model assumptions since the typical nucleon momenta $p\gtrsim\Lambda_{\chi}$. To investigate the competition between a strong and repulsive central force and the spin-orbit component of the nuclear force at high density, we revisit the calculation of the induced interaction in simple models.

In what follows, we shall assume that the dominant contribution to the nucleon-nucleon interaction at short distances is due to the exchange of heavy vector mesons such as the $\omega$ and $\rho$ mesons with masses $m_{\omega}\approx m_{\rho}\simeq 800$ MeV. For $n_{B}\lesssim 4n_{\mathrm{sat}}$, the ratio $k_{\text{F}}/\Lambda$, where $\Lambda\simeq m_{\omega}$, remains a useful expansion parameter. In this case, including terms up to ${\cal O}[(k_{\text{F}}/\Lambda)^{2}]$, the interaction can be described by the potential $\begin{split}V(q,q^{\prime})&=C_{0}+\tilde{C}_{0}\sigma_{1}\cdot\sigma_{2}+C_{2}(q^{2}+q^{\prime 2})+C^{\prime}_{2}(q^{\prime 2}-q^{2})\\ &+\left[\tilde{C}_{2}(q^{2}+q^{\prime 2})+\tilde{C}^{\prime}_{2}(q^{\prime 2}-q^{2})\right]\sigma_{1}\cdot\sigma_{2}+iV_{SO}~q\times q^{\prime}\cdot(\sigma_{1}+\sigma_{2})\\ &+V_{T}q\cdot\sigma_{1}q\cdot\sigma_{2}\,.\end{split}$ (8) In neutron matter, due to the Pauli principle, only the combinations $\bar{C}_{0}=C_{0}-3\tilde{C}_{0}$, $\bar{C}_{2}=C_{2}-3\tilde{C}_{2}$, and $\bar{C}^{\prime}_{2}=C^{\prime}_{2}+\tilde{C}^{\prime}_{2}$ are relevant. In the full expansion of the vector meson potential, there is also a term proportional to $(q\times q^{\prime}\cdot\sigma_{1})(q\times q^{\prime}\cdot\sigma_{2})$.
An exchange tensor term proportional to $q^{\prime}\cdot\sigma_{1}q^{\prime}\cdot\sigma_{2}$ is also allowed by the symmetries of the interaction but is not present in the vector exchange. In this exploratory study, we shall neglect the spin-orbit squared and tensor exchange components, and in this case, five LECs denoted by $\bar{C}_{0}$, $\bar{C}_{2}$, $\bar{C}^{\prime}_{2}$, $V_{SO}$, and $V_{T}$ are adequate. The large and attractive spin-orbit interaction, whose strength is set by $V_{SO}$, plays an important role, as discussed below. Here, $q=k_{1}-k_{3}$ and $q^{\prime}=k_{1}-k_{4}$, and the momenta of neutrons in the initial state are given by $k_{1}$ and $k_{2}$, and the final state momenta are $k_{3}$ and $k_{4}$.

It is straightforward to repeat the calculation of the diagrams described in the preceding section with the potential in Eq. 8. However, it is simpler to define the anti-symmetrized potential $\begin{split}V(q,q^{\prime})&=C_{0}(\delta_{13}\delta_{24}-\delta_{14}\delta_{23})+C_{2}(q^{2}+q^{\prime 2})(\delta_{13}\delta_{24}-\delta_{14}\delta_{23})\\ &+C^{\prime}_{2}(q^{\prime 2}-q^{2})(\delta_{13}\delta_{24}+\delta_{14}\delta_{23})+\tilde{C}_{0}(\sigma_{13}\cdot\sigma_{24}-\sigma_{14}\cdot\sigma_{23})\\ &+\tilde{C}_{2}(q^{2}+q^{\prime 2})(\sigma_{13}\cdot\sigma_{24}-\sigma_{14}\cdot\sigma_{23})+\tilde{C}^{\prime}_{2}(q^{\prime 2}-q^{2})(\sigma_{13}\cdot\sigma_{24}+\sigma_{14}\cdot\sigma_{23})\\ &+2iV_{SO}q\times q^{\prime}\cdot(\sigma_{13}\delta_{24}+\sigma_{24}\delta_{13})+V_{T}(q\cdot\sigma_{13}q\cdot\sigma_{24}-q^{\prime}\cdot\sigma_{14}q^{\prime}\cdot\sigma_{23})\end{split}$ (9) that includes the effect of the exchange processes. We use the notation $\delta_{ij}=\chi_{j}^{\dagger}\chi_{i}$ and $\sigma_{ij}=\chi_{j}^{\dagger}\sigma\chi_{i}$ for incoming and outgoing two-component spinors $\chi_{i}$ and $\chi_{j}^{\dagger}$. In this case, the induced interaction at second order is calculated by evaluating the diagrams depicted in Fig. 2.

Figure 2: ZS (left) and ZS' (right) diagrams. The hatched blobs represent the anti-symmetrized interaction defined in Eq. 9.

The diagram on the left is called the zero-sound diagram and denoted as ZS, and the diagram on the right is called the exchange zero-sound diagram and is denoted by the symbol ZS'. A detailed derivation of the total induced interaction $V_{ind}=V_{ind}^{ZS}-V_{ind}^{ZS^{\prime}}$ is presented in Appendix A. First, we present the result obtained by neglecting the momentum-dependent components of the bare central interaction.
In this case, the induced potential $\begin{split}V^{\rm ind}&=-(C_{0}^{2}+3\tilde{C}_{0}^{2})(U(q)\delta_{14}\delta_{23}-U(q^{\prime})\delta_{13}\delta_{24})+6C_{0}\tilde{C}_{0}(U(q)\delta_{13}\delta_{24}-U(q^{\prime})\delta_{14}\delta_{23})\\ &+(-\tilde{C}_{0}^{2}+2C_{0}\tilde{C}_{0})(\sigma_{13}\cdot\sigma_{24}-\sigma_{14}\cdot\sigma_{23})(U(q)+U(q^{\prime}))\\ &-3\tilde{C}_{0}^{2}(\sigma_{13}\cdot\sigma_{24}+\sigma_{14}\cdot\sigma_{23})(U(q)-U(q^{\prime}))\end{split}$ (10) The s- and p-wave potentials given by this interaction are $\begin{split}V^{\rm ind}_{{}^{1}S_{0}}(0)=\bar{C}^{2}_{0}\frac{mk_{F}}{3\pi^{2}}(2\log{2}+1)\\ V^{\rm ind}_{{}^{3}P_{J}}(0)=-\bar{C}^{2}_{0}\frac{mk_{F}}{5\pi^{2}}(2\log{2}-1)\,.\end{split}$ (11) where again $\bar{C}_{0}=C_{0}-3\tilde{C}_{0}$ is the momentum-independent bare ${}^{1}S_{0}$ potential.

The calculation of the induced potential, including the momentum-dependent central interactions, is tedious, and the analytic results contain a large number of terms. Details of the intermediate expressions can be found in Appendix A. We find analytic results for the second-order induced potentials in the $s$- and $p$-wave channels. The induced ${}^{1}S_{0}$ and ${}^{3}P_{J}$ potentials calculated to $\mathcal{O}(mk_{F}^{3})$ are given by $\begin{split}V^{\rm ind}_{{}^{1}S_{0}}&=mk_{F}\bar{C}_{0}^{2}\frac{1}{3\pi^{2}}(1+2\log{2})+mk_{F}^{3}[\bar{C}_{0}\bar{C}_{2}\frac{2}{3\pi^{2}}(5+4\log{2})\\ &+\bar{C}_{0}\bar{C}^{\prime}_{2}\frac{2}{5\pi^{2}}(7-4\log{2})-\bar{C}_{0}V_{T}\frac{16}{15\pi^{2}}(2+\log 2)]\,,\end{split}$ (12) and $\begin{split}V^{\rm ind}_{{}^{3}P_{J}}&=mk_{F}\bar{C}_{0}^{2}\frac{1}{5\pi^{2}}(1-2\log{2})+mk_{F}^{3}[\bar{C}_{0}\bar{C}_{2}\frac{2}{105\pi^{2}}(59-68\log{2})\\ &-\bar{C}_{0}\bar{C}^{\prime}_{2}\frac{2}{105\pi^{2}}(29+52\log{2})\\ &+\bar{C}_{0}V_{T}\frac{1}{105\pi^{2}}(91J^{2}-221J+50+(224J^{2}-544J+220)\log 2)]\,,\end{split}$ (13) respectively. When momentum dependence of the central interaction is neglected, the bare spin-orbit force does not contribute to the induced interaction, and the leading order dependence of the induced potential on $J$ is determined by the tensor interaction. See Appendix A for a detailed discussion. These contributions have the same behavior as the leading order KL result.

The contribution to the induced interaction in a particular partial wave arising from terms in the bare interaction that do not contribute to that partial wave is strongly influenced by the KL singularity at $q=2k_{\text{F}}$, and for this reason it is suppressed relative to other terms in the interaction at the same order in the expansion. Notice, for example, that in the p-wave induced potential, the term proportional to $\bar{C}_{0}\bar{C}^{\prime}_{2}$ has a numerical factor five times larger than $\bar{C}_{0}\bar{C}_{2}$ and fifteen times larger than $\bar{C}_{0}V_{T}$ for ${}^{3}P_{2}$. This implies that the singularity at $q=2k_{\text{F}}$ does not play an essential role when $\bar{C}^{\prime}_{2}$ is of modest size. By comparing the relevant terms in Eq. 13, we find that the KL singularity plays an essential role only when $\bar{C}^{\prime}_{2}\ll\bar{C}_{0}/(16k_{\text{F}}^{2})$. In what follows, we shall continue to use the term Kohn-Luttinger effect to refer to the induced interaction, but it should be borne in mind that the singularity at $q=2k_{\text{F}}$ does not play a dominant role for typical values of $\bar{C}^{\prime}_{2}$ we explore in this study.
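For use in the scenario studies below, Eqs. (12)-(13) are simple enough to transcribe directly. The sketch works in natural units (all dimensionful inputs in consistent powers of MeV); the sample call uses scenario-A-like LECs and a Fermi momentum $k_{F}\approx 478$ MeV (roughly $3n_{\mathrm{sat}}$ neutron matter), both of which are our illustrative choices.

```python
import numpy as np

log2 = np.log(2)

def V_ind_1S0(m, kF, C0, C2, C2p, VT):
    """Induced 1S0 potential, Eq. (12), in terms of the barred LECs."""
    return (m * kF * C0**2 * (1 + 2 * log2) / (3 * np.pi**2)
            + m * kF**3 * (C0 * C2 * 2 * (5 + 4 * log2) / (3 * np.pi**2)
                           + C0 * C2p * 2 * (7 - 4 * log2) / (5 * np.pi**2)
                           - C0 * VT * 16 * (2 + log2) / (15 * np.pi**2)))

def V_ind_3PJ(m, kF, C0, C2, C2p, VT, J):
    """Induced 3PJ potential, Eq. (13)."""
    return (m * kF * C0**2 * (1 - 2 * log2) / (5 * np.pi**2)
            + m * kF**3 * (C0 * C2 * 2 * (59 - 68 * log2) / (105 * np.pi**2)
                           - C0 * C2p * 2 * (29 + 52 * log2) / (105 * np.pi**2)
                           + C0 * VT * (91 * J**2 - 221 * J + 50
                                        + (224 * J**2 - 544 * J + 220) * log2)
                           / (105 * np.pi**2)))

m, kF = 939.0, 478.0   # MeV (illustrative)
G_V = 40e-6            # MeV^-2, i.e. 40 GeV^-2 (illustrative)
C0, C2, C2p, VT = G_V, 5 * G_V / (8 * m**2), 3 * G_V / (8 * m**2), G_V / (4 * m**2)
for J in (0, 1, 2):
    print(J, V_ind_3PJ(m, kF, C0, C2, C2p, VT, J))
```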
Since the spin-orbit force is strong and important in the ${}^{3}P_{2}$ channel, we expect that it may contribute to the induced interaction even though it enters at a higher order in the momentum expansion. To investigate the impact of the spin-orbit coupling, we calculate a subset of the terms that contain the central and spin-orbit interactions at $\mathcal{O}(mk_{F}^{5})$. These corrections to the induced potentials are given by: $\begin{split}V_{{}^{1}S_{0}}^{(5)}&=mk_{F}^{5}[\bar{C}_{2}^{2}\frac{8}{315\pi^{2}}(277+96\log{2})-\bar{C}_{2}^{\prime 2}\frac{8}{105\pi^{2}}(43+24\log{2})\\ &+\bar{C}_{2}\bar{C}^{\prime}_{2}\frac{32}{105\pi^{2}}(37+6\log{2})+V_{SO}^{2}\frac{8}{35\pi^{2}}(17+16\log 2)]\,,\end{split}$ (14) and $\begin{split}V_{{}^{3}P_{J}}^{(5)}&=mk_{F}^{5}[\bar{C}_{2}^{2}\frac{16}{567\pi^{2}}(83-24\log{2})+\bar{C}_{2}^{\prime 2}\frac{64}{567\pi^{2}}(34-3\log{2})\\ &-\bar{C}_{2}\bar{C}^{\prime}_{2}\frac{16}{2835\pi^{2}}(523+204\log{2})+\bar{C}^{\prime}_{2}V_{SO}[J(J+1)-4]\frac{32}{945\pi^{2}}(43+24\log{2})\\ &+V_{T}V_{SO}[J(J+1)-4]\frac{16}{945\pi^{2}}(43+24\log 2)\\ &+V_{SO}^{2}(7J^{2}-17J+10)\frac{32}{4725\pi^{2}}(43+24\log 2)]\,.\end{split}$ (15) These are not all of the terms that contribute at $\mathcal{O}(mk_{F}^{5})$. Not included are terms proportional to $\bar{C}_{2}V_{T}$, $\bar{C}^{\prime}_{2}V_{T}$, and $V_{T}^{2}$. From Eq. 15, we deduce that when the bare spin-orbit interaction is attractive, it could enhance ${}^{3}P_{2}$ pairing if $2\bar{C}^{\prime}_{2}+V_{T}+\frac{4}{5}V_{SO}>0\,.$ (16) The relation of this result to the suppression of ${}^{3}P_{2}$ pairing at low density due to spin-orbit interactions found in Ref. Schwenk and Friman (2004), which employed a realistic low-momentum nucleon-nucleon potential fit to scattering data, warrants further study.

We consider three scenarios to study the implications of these results for dense neutron matter. Each scenario is defined by a specification of the LECs that appear in the bare potential defined in Eq. 8. In scenario A, we shall assume that the exchange of heavy vector mesons mediates the interactions between neutrons. When the mass of the vector meson is large compared to the neutron Fermi momentum, the interaction is described by the current-current four-fermion Lagrangian $\mathcal{L}_{\rm int}=-\frac{G_{V}}{2}(\overline{n}\gamma_{\mu}n)(\overline{n}\gamma^{\mu}n)\,.$ (17) Retaining only the leading terms in the $p/m_{n}$ expansion, the LECs appearing in Eqs. 12 and 13 are given by $\bar{C}_{0}=G_{V},\quad\bar{C}_{2}=\frac{5G_{V}}{8m_{n}^{2}},\quad\bar{C}^{\prime}_{2}=\frac{3G_{V}}{8m_{n}^{2}},\quad V_{SO}=-\frac{3G_{V}}{8m_{n}^{2}},\quad V_{T}=\frac{G_{V}}{4m_{n}^{2}}$ (18) Although the simple vector interaction cannot capture the complex nature of interactions between neutrons, which could involve a richer operator structure due to pion exchange and many nucleon forces, it is able to describe the qualitative aspects of the nucleon-nucleon interaction at high momentum; it predicts negative phase shifts in the ${}^{1}S_{0}$, ${}^{3}P_{0}$, and ${}^{3}P_{1}$ channels. The phase shift in the ${}^{3}P_{2}$ channel vanishes because $V_{SO}=-\bar{C}^{\prime}_{2}$, and the spin-orbit interaction exactly cancels the contribution from the central force.
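Scenario A is easy to put in numbers. The sketch below builds the LECs of Eq. (18) for a given $G_{V}$ (note that `VSO = -C2p`, the cancellation just described) and inverts the mean-field relation $F_{0}=N(0)[\bar{C}_{0}+2k_{\text{F}}^{2}(\bar{C}_{2}+3\bar{C}^{\prime}_{2})]$, quoted in the next paragraph, for $G_{V}$ at a target $F_{0}$. The inputs $n_{\mathrm{sat}}=0.16\,\mathrm{fm}^{-3}$, $m_{n}=939$ MeV, and $\hbar c=197.327$ MeV fm are our assumptions, and the precise number depends on conventions for $N(0)$, so expect only the right order of magnitude.

```python
import numpy as np

hbarc, m_n = 197.327, 939.0   # MeV fm, MeV

def lecs_scenario_A(G_V):
    """Barred LECs of Eq. (18) for vector coupling G_V in MeV^-2."""
    return dict(C0=G_V, C2=5 * G_V / (8 * m_n**2), C2p=3 * G_V / (8 * m_n**2),
                VSO=-3 * G_V / (8 * m_n**2), VT=G_V / (4 * m_n**2))

def G_V_for_F0(F0_target, n_B):
    kF = (3 * np.pi**2 * n_B) ** (1 / 3) * hbarc           # neutron matter Fermi momentum
    N0 = np.sqrt(kF**2 + m_n**2) * kF / (2 * np.pi**2)     # per-spin density of states
    # in scenario A: Cbar_0 + 2 kF^2 (Cbar_2 + 3 Cbar'_2) = G_V (1 + 7 kF^2 / (2 m_n^2))
    return F0_target / (N0 * (1 + 7 * kF**2 / (2 * m_n**2)))

G_V = G_V_for_F0(3.0, 3 * 0.16)
print(G_V * 1e6, "GeV^-2")    # tens of GeV^-2, the scale of the couplings used below
print(lecs_scenario_A(G_V))
```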
This aspect of short-range vector interactions that leads to a vanishing bare potential in the ${}^{3}P_{2}$ channel is a generic feature of any four-fermion interaction without derivative couplings, since initial and final states constructed only from spin and helicity operators cannot be combined to form a tensor of rank greater than 1. As a result, such an interaction cannot generate potentials in channels with $J\geq 2$ at the tree level. Including momentum dependence in the meson propagator, explicit derivative couplings (i.e., momentum dependence beyond that found in the Dirac spinors), or momentum dependence from loops lifts this restriction.

Figure 3: Total potential for heavy vector boson exchange at $\mathcal{O}(mk_{F}^{3})$ without (left) and with (right) $V_{SO}$ corrections at $\mathcal{O}(mk_{F}^{5})$. The coupling constant is tuned to $F_{0}=3$ at $3n_{\mathrm{sat}}$.

The couplings are related to the Fermi liquid parameters $F_{0}$ and $G_{0}$. In mean-field theory, $F_{0}$ and $G_{0}$ depend only on the central components of the interaction and are given by $\displaystyle F_{0}=N(0)\left(\bar{C}_{0}+2k_{\text{F}}^{2}(\bar{C}_{2}+3\bar{C}^{\prime}_{2})\right)\,,$ (19) $\displaystyle G_{0}=-N(0)\left(\bar{C}_{0}+2k_{\text{F}}^{2}(\bar{C}_{2}-\bar{C}^{\prime}_{2})\right)\,,$ (20) where $N(0)=\sqrt{k_{\text{F}}^{2}+m_{n}^{2}}k_{\text{F}}/2\pi^{2}$ is the density of states for each spin at the Fermi surface. Thus, if $F_{0}$ and $G_{0}$ are specified, the strength of the s-wave component of the interaction is constrained by the equation $(2\bar{C}_{0}+4k_{F}^{2}\bar{C}_{2})=(F_{0}-3G_{0})/2N(0)$ and the p-wave component is obtained using the relation $\bar{C}^{\prime}_{2}=(F_{0}+G_{0})/(8N(0)k_{F}^{2})$. In scenario A, the interaction contains just one parameter, $G_{V}$. In this case, $F_{0}$ and $G_{0}$ are not independent and $G_{V}$ is determined by specifying $F_{0}$, which we take to be in the range $2-4$ at $n_{B}=3~n_{\mathrm{sat}}$.

Figure 4: The ${}^{3}P_{2}$ gap corresponding to the short-range vector interaction in Eq. 17. Results at $\mathcal{O}(mk_{F}^{3})$ are shown in the left panel and at $\mathcal{O}(mk_{F}^{5})$ in the right panel. The coupling constant is tuned to the given value of $F_{0}$ at $3n_{\mathrm{sat}}$. (color online)

The induced and total p-wave interactions at the Fermi surface are shown in Fig. 3 for $F_{0}=3$ at $n_{B}=3n_{\mathrm{sat}}$. The quantity shown, $VN(0)/2$, is what appears in the exponent of the BCS equation. The model naturally prefers ${}^{3}P_{2}$ pairing because, although the induced interaction at the Fermi surface is attractive for all values of the total angular momentum $J=0,1,2$, the sum $V^{\rm bare}+V^{\rm ind}$ is only attractive for $J=1,2$ and the net attraction is larger for $J=2$. In Fig. 4, we show the ${}^{3}P_{2}$ pairing gaps calculated using the BCS formula in Eq. 5. Results are shown for three choices of the coupling $G_{V}$ obtained by setting $F_{0}=2, 3$, and $4$ at $n_{B}=3~n_{\mathrm{sat}}$. To study the interplay between the central p-wave and the spin-orbit interactions, we consider scenario B, in which we introduce parameters $\xi_{\rm p}$ and $\xi_{\rm SO}$ to control the strength of the central p-wave interaction and the spin-orbit interaction, respectively.
In this case, we neglect the tensor coupling, and the LEC constants are given by: $\bar{C}_{0}=G_{V},\quad\bar{C}_{2}=\frac{G_{V}}{\Lambda^{2}},\quad\bar{C}^{\prime}_{2}=\xi_{\rm p}\frac{G_{V}}{\Lambda^{2}},\quad V_{SO}=-\xi_{SO}\frac{G_{V}}{\Lambda^{2}},\quad V_{T}=0$ (21) As a generic choice at the same order as the nucleon and meson masses, we take $\Lambda=1\,\mathrm{GeV}$. Fig. 5 shows curves of constant $VN(0)/2$ in the space of $\xi_{p}$ and $\xi_{SO}$ for $G_{V}=20\,\mathrm{GeV}^{-2}$ and $G_{V}=40\,\mathrm{GeV}^{-2}$. Corresponding values of $F_{0}$ for each value of $\xi_{p}$ are shown on the right axis.

Figure 5: Contours of constant ${}^{3}P_{2}$ potential in model B in the $\xi_{p}-\xi_{SO}$ plane for two choices of $G_{V}$ at $3n_{\mathrm{sat}}$. $F_{0}$, calculated using Eq. 19, is also shown. The black dashed line shows where the bare interaction vanishes.

The black line in Fig. 5 marks where the bare interaction is zero. Above this line the bare interaction is repulsive, and below it is attractive. The general behavior of the induced interaction is set by terms proportional to $\bar{C}^{\prime 2}_{2}$, $\bar{C}^{\prime}_{2}V_{SO}$, and $V_{SO}^{2}$, with some secondary effects from terms proportional to $\bar{C}_{0}\bar{C}^{\prime}_{2}$ and $\bar{C}_{2}\bar{C}^{\prime}_{2}$. Even though some of these terms enter at higher order in the gradient expansion, they are generally more important than lower-order terms in the induced interaction because they do not rely on the KL singularity to contribute to the ${}^{3}P_{2}$ potential. Of the five terms listed, the only ones that can be attractive are $\bar{C}_{0}\bar{C}^{\prime}_{2}$ and $\bar{C}_{2}\bar{C}^{\prime}_{2}$ when $\xi_{p}$ is positive, and $\bar{C}^{\prime}_{2}V_{SO}$ when $\xi_{SO}$ and $\xi_{p}$ are of the same sign. As a result, for stronger couplings, the overall interaction is repulsive when $\xi_{p}$ is negative and of reasonable size, even though that corresponds to a more attractive bare interaction. There is significant net attraction only when $\xi_{p}$ and $\xi_{SO}$ are both positive and $\xi_{p}$ is not much larger than $\xi_{SO}$; a much larger $\xi_{p}$ makes the bare interaction more repulsive without producing enough induced attraction to compensate. Fig. 6 shows the ${}^{3}P_{2}$ gap as a function of $\xi_{SO}$ for a few choices of $\xi_{p}$ and the same choices of $G_{V}$. The interplay between the relative size of $\xi_{p}$ and $\xi_{SO}$ sensitively determines the size of the gap. This model is not detailed enough to make quantitative predictions, but the general trend remains that high-order terms in the expansion play an important role in determining the size of the gap. An attractive spin-orbit interaction appears to be necessary to have gaps of a reasonable size. An attractive bare p-wave potential precludes pairing, even though the bare interaction is stronger, because of the repulsion due to the induced interaction.

Figure 6: ${}^{3}P_{2}$ gap as a function of $\xi_{SO}$ in model B for a few choices of $\xi_{p}$ and $G_{V}$ at $3n_{\mathrm{sat}}$.

To incorporate trends observed in the nucleon-nucleon phase shifts that have been measured up to $E_{\rm lab}\simeq 300$ MeV, which corresponds to $p_{\rm CM}=\sqrt{m_{n}E_{\rm lab}/2}\simeq 375$ MeV, we consider scenario C in which we incorporate a non-zero $V_{T}$ by introducing a parameter $\xi_{T}$ that sets the strength of the tensor interaction, and $V_{T}=-G_{V}\xi_{T}/\Lambda^{2}$.
This allows us to obtain any desired ordering of the p-wave phase shifts for $J=0,1,2$ and to match scattering data that require a weakly attractive bare ${}^{3}P_{2}$ interaction and repulsive interactions in the ${}^{3}P_{0}$ and ${}^{3}P_{1}$ channels. Since the induced interaction at ${\cal O}(mk_{\text{F}}^{5})$ in Eq. 15 neglected the part of the potential proportional to $\bar{C}^{\prime}_{2}V_{T}$, the results we present here are strictly only valid for $\xi_{T}\ll\xi_{SO}$. Thus, scenario C must be viewed as an initial exploration into the effects of $V_{T}$, to be continued in future work. To obtain the correct ordering of the p-wave interactions, the parameters must satisfy the following conditions. To have an attractive bare ${}^{3}P_{2}$ potential, we need $\xi_{SO}>\xi_{p}$, and to have a repulsive bare ${}^{3}P_{0}$, we need $\xi_{p}+2\xi_{SO}-3\xi_{T}/2>0$. The condition $2\xi_{SO}/5<\xi_{T}<2\xi_{SO}$ ensures that the ${}^{3}P_{1}$ potential is most repulsive and ${}^{3}P_{2}$ is most attractive. We define $\alpha$ and $\beta$ to be the ratios of the bare potentials, given by $\begin{split}\alpha\equiv\frac{V_{{}^{3}P_{2}}}{V_{{}^{3}P_{0}}}=\frac{\xi_{p}-\xi_{SO}}{\xi_{p}+2\xi_{SO}-3\xi_{T}/2}\\ \beta\equiv\frac{V_{{}^{3}P_{1}}}{V_{{}^{3}P_{0}}}=\frac{\xi_{p}+\xi_{SO}+\xi_{T}}{\xi_{p}+2\xi_{SO}-3\xi_{T}/2}\end{split}$ (22) Phase shifts for lab energies between $250\,\mathrm{MeV}$ and $350\,\mathrm{MeV}$ favor $\alpha$ between $-1$ and $-3$ and $\beta$ between $2$ and $4$. The blue and orange bands in Fig. 7 and Fig. 8 identify the regions $-3<\alpha<-1$ and $2<\beta<4$, respectively, with the correct sign for each bare potential. We only show regions with $\xi_{SO}>0$ and $\xi_{T}>0$ since this is required to obtain the correct ordering of p-wave phase shifts. These are also the signs favored by a tensor interaction arising from pion exchange and a spin-orbit force from heavy meson exchange.

Figure 7: The bare ${}^{3}P_{2}$ potential, the induced potential at $\mathcal{O}(mk_{F}^{3})$, and the induced $V_{SO}$ at $n_{B}=3n_{\mathrm{sat}}$. $G_{V}=40\,\mathrm{GeV}^{-2}$ in the upper panel and $G_{V}=20\,\mathrm{GeV}^{-2}$ in the lower panel.

To illustrate the relevance of the induced interaction, Fig. 7 shows the interaction for scenario C broken down into the bare potential, the $\mathcal{O}(mk_{F}^{3})$ induced potential, and the $\mathcal{O}(mk_{F}^{5})$ corrections to the induced potential for a few choices of $\xi_{SO}$ and $\xi_{T}$ and the same two values of $G_{V}$ used for scenario B. In the right panel, $\xi_{SO}$ is fixed to match the value of $\xi_{p}$ so that the bare interaction is always zero, as is found in the meson exchange model. The range of $\xi_{p}$ chosen is motivated by the observation that meson exchange models predict $0.5\lesssim\xi_{p}\lesssim 1.5$ and matching to phase shifts between 250 and 350 MeV predicts $-0.5\lesssim\xi_{p}\lesssim 0$. As seen before, the $\mathcal{O}(mk_{F}^{3})$ induced interaction is dominated by the term proportional to $\bar{C}_{0}\bar{C}^{\prime}_{2}$ and gives more attraction for positive $\xi_{p}$ and repulsion for negative $\xi_{p}$. The induced interaction does not exactly go through the origin for $\xi_{p}=0$, but the KL suppression of the terms that do not include $\bar{C}^{\prime}_{2}$ or $V_{SO}$ is sufficient that it is not visible on this scale. As expected from Eq. 23, negative values of $\xi_{p}$ lead to more repulsive $\mathcal{O}(mk_{F}^{5})$ corrections, with this effect being stronger for larger values of $\xi_{SO}$ and $\xi_{T}$.
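The constraints of Eq. (22) are simple to scan. The sketch below computes $\alpha$ and $\beta$ for a given $(\xi_{p},\xi_{SO},\xi_{T})$ and tests the ordering conditions together with the phase-shift windows quoted above; the sample point is an illustrative choice of ours that happens to satisfy all of them.

```python
def pwave_ratios(xi_p, xi_SO, xi_T):
    """Ratios alpha, beta of Eq. (22) from the scenario-C bare p-wave combinations."""
    V0 = xi_p + 2 * xi_SO - 1.5 * xi_T   # proportional to the bare 3P0 potential
    V1 = xi_p + xi_SO + xi_T             # proportional to the bare 3P1 potential
    V2 = xi_p - xi_SO                    # proportional to the bare 3P2 potential
    return V2 / V0, V1 / V0

def acceptable(xi_p, xi_SO, xi_T):
    alpha, beta = pwave_ratios(xi_p, xi_SO, xi_T)
    ordering = (xi_SO > xi_p                              # attractive bare 3P2
                and xi_p + 2 * xi_SO - 1.5 * xi_T > 0     # repulsive bare 3P0
                and 0.4 * xi_SO < xi_T < 2 * xi_SO)       # 3P1 most repulsive
    return ordering and (-3 < alpha < -1) and (2 < beta < 4)

print(pwave_ratios(-0.2, 1.0, 0.8))      # (-2.0, ~2.67), inside both windows
print(acceptable(-0.2, 1.0, 0.8))        # True
```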
Figure 8: Curves of constant total potential for different choices of $\xi_{p}$ at $3n_{\mathrm{sat}}$. The black dashed line shows where the spin-orbit correction vanishes. The coupling constant is $G_{V}=40\,\mathrm{GeV}^{-2}$. (color online)

Fig. 8 shows the contours of constant potential for scenario C. As in scenario B, negative or zero $\xi_{p}$ corresponds to repulsion for most of the parameter space. However, unlike scenario B, when $\xi_{T}$ is of reasonable size, increasing $\xi_{SO}$ results in repulsion much more quickly. This is a result of the term proportional to $V_{T}V_{SO}$. The contribution of the spin-orbit terms can be easily summarized in terms of the constants of this model: $V^{(5SO)}_{{}^{3}P_{2}}=\xi_{SO}\left(\xi_{T}+\frac{4\xi_{SO}}{5}-2\xi_{p}\right)\frac{G_{V}^{2}}{\Lambda^{4}}\frac{32mk_{F}^{5}}{945\pi^{2}}(43+24\log 2)$ (23) In scenario A, $\xi_{T}$ is negative and $\xi_{SO}=\xi_{p}$, so the term in parentheses is always negative, and the spin-orbit corrections give additional attraction. However, when we allow the constants to vary and take $\xi_{T}$ positive, as is favored by pion exchange and phase shift data, much of the favored region of the phase diagram shows suppression due to these corrections. The black dashed line in Fig. 8 shows where $\xi_{T}+4\xi_{SO}/5-2\xi_{p}=0$. The spin-orbit corrections suppress the potential above and to the right of this line, while the potential is enhanced below and to the left.

Figure 9: ${}^{3}P_{2}$ gap as a function of $\xi_{p}$ for a few choices of $G_{V}$, $\xi_{SO}$, and $\xi_{T}$ at $n_{B}=3n_{\mathrm{sat}}$.

Fig. 9 shows the ${}^{3}P_{2}$ gap at $3n_{\mathrm{sat}}$ as a function of $\xi_{p}$ for a few choices of $\xi_{SO}$ and $\xi_{T}$. For negative $\xi_{p}$, even though the bare interaction is more attractive, the induced interaction strongly suppresses the gap. The contribution of positive $\xi_{T}$ suppresses the gap. For positive $\xi_{p}$, MeV-scale gaps are possible. These results are sensitive to the value of $F_{0}$, as can be inferred by comparing the upper and lower panels of Fig. 7. For smaller $F_{0}$, the relative importance of the induced interaction is diminished, and for most of the parameter space, the induced interaction suppresses an attractive bare interaction. This still has a significant effect, as the gap is exponentially sensitive to the potential. For almost all of the explored parameter space, the induced interaction suppresses the gap by an order of magnitude or more.

## IV Induced p-wave pairing in quark matter

The inner cores of neutron stars may be made up of deconfined quark matter. Quarks are expected to form a color superconductor at asymptotic densities, with the dominant pairing being in the color antitriplet channel (antisymmetric). See Alford (2001) for a review. Color symmetric pairing is generally not considered since gluon exchange in the color sextet (symmetric) is repulsive, while the antitriplet channel is attractive. At asymptotic densities, pairing all three colors and flavors (up, down, and strange) in color and flavor antisymmetric pairs is expected (the CFL phase). At lower densities where the strange quark mass cannot be neglected, up and down quarks can pair in the color antisymmetric channel. This pairing involves two colors, typically denoted as red and green, and is called the 2SC phase. In this phase, the strange quarks and the blue up and down quarks are unpaired.
The possibility that these quarks could pair due to the KL mechanism in QCD was first studied in Ref. Schäfer (2006). This study investigated the KL effect in gauge theories where fermions interact via long-range forces that are dynamically screened due to Landau damping of the magnetic gauge bosons. Here, the energy dependence of the interaction plays a critical role, and the results of Ref. Schäfer (2006) indicate that a gap arises via a mechanism analogous to the Kohn-Luttinger effect but conclude that it is too small to be phenomenologically relevant.

To assess if the KL mechanism could be relevant in quark matter with short-range interactions that are independent of energy, we shall calculate the induced interaction in the ${}^{3}P_{2}$ channel between quarks of the same flavor and color due to the short-range, flavor- and color-independent, repulsive vector interaction. Such pairing would include the strange quarks and the blue up and down quarks. In what follows, we shall focus on the induced interaction between strange quarks of the same color at moderate density, when $m_{s}\gg k_{Fs}$. The specific question we address here is if repulsive short-range interactions introduced to stabilize quark matter inside neutron stars Baym _et al._ (2018) can lead to pairing gaps of phenomenological relevance. For concreteness, we consider a description of quark matter within the purview of the Nambu-Jona-Lasinio (NJL) models (see Buballa (2005) for a comprehensive review). These models are defined by the interaction Lagrangian Buballa (2005); Baym _et al._ (2018); Song _et al._ (2019) ${\cal L}_{V}=G(\bar{q}q)^{2}+H(\bar{q}\bar{q})(qq)-g_{V}~(\bar{q}\gamma_{\mu}q)^{2}\,,$ (24) where $G$ and $H$ are the four-fermion scalar quark-antiquark and diquark coupling strengths, and the vector coupling $g_{V}$ is introduced to generate higher pressures, as noted earlier. The scalar interaction between quarks and antiquarks leads to a non-trivial vacuum with $\langle\bar{q}q\rangle\neq 0$ that spontaneously breaks chiral symmetry, and the coefficient $G$ is determined by hadron masses and the pion decay constant in the vacuum. For a typical momentum cutoff $\Lambda_{\rm NJL}\approx 600$ MeV, $G\Lambda_{\rm NJL}^{2}\simeq 2$ Buballa (2005). At densities of interest to neutron stars, chiral symmetry remains broken. The constituent strange quark mass is expected to be $300-500$ MeV, while the up and down quark masses can be significantly smaller.

The diquark coupling $H$ and the vector coupling $g_{V}$ are expected to be of similar size because they can be thought of as arising from the same underlying high-energy color current-current interactions in QCD Song _et al._ (2019). Their values at the densities of interest to neutron stars are determined phenomenologically. The diquark coupling $H$, which encodes the attraction in the color antisymmetric channel, leads to s-wave pairing between quarks. For $H\simeq G$, the s-wave pairing gap between up and down quarks is about $50$ MeV and is typically inadequate to induce pairing between strange quarks and light quarks Alford _et al._ (2008), as mentioned above. The analysis of the quark matter EOS in Baym _et al._ (2018, 2019) concluded that the vector coupling needed to be of moderate size, with $g_{V}\simeq G$, to generate the large sound speed needed to support a two solar mass neutron star. First, we note that the contribution to the induced interaction from the closed fermion loop (the first diagram in Fig. 1) is enhanced by a factor $N_{f}N_{c}$.
This is because the bare vector interaction introduced to stiffen the quark matter EOS is independent of color and flavor. Thus, in contrast to the one-component Fermi system, where the contribution from the closed fermion loop was canceled by the diagram that encoded the vertex corrections, in quark matter with $N_{f}=N_{c}=3$, the first diagram in Fig. 1 makes the dominant contribution to the induced potential. In computing this diagram, the up and down quarks must be treated as relativistic particles, leading to a somewhat more complicated expression. After doing the Matsubara sum and noting that $\bar{u}_{3}\not{q}u_{1}=\bar{u}_{4}\not{q}u_{2}$ for $u_{1}$ and $u_{2}$ incoming and $\bar{u}_{3}$ and $\bar{u}_{4}$ outgoing spinors, the induced potential from the first diagram is given by: $V^{\rm ind}=g_{V}^{2}(\bar{u}_{3}\gamma_{\mu}u_{1})(\bar{u}_{4}\gamma_{\nu}u_{2})\left(\frac{E_{k}+m_{s}}{2m_{s}}\right)^{2}\sum_{f,c}\int\frac{\ell d\ell d\Omega_{\ell}}{4\pi^{3}qE_{\ell}}\frac{\Theta(k_{fc}-\ell)}{c_{q\ell}-q/2\ell}(2\ell^{\mu}\ell^{\nu}-g^{\mu\nu}\vec{\ell}\cdot\vec{q})$ (25) where $c_{q\ell}\equiv\cos\theta_{q\ell}$.

Figure 10: Induced p-wave potential between strange quarks for $g_{V}=2/\Lambda^{2}_{NJL}$, $\Lambda_{NJL}=600\,\mathrm{MeV}$, and $m_{s}=350\,\mathrm{MeV}$.

The calculation of the induced interaction, including both the electric ($\bar{u}\gamma^{\mu}u$ for $\mu=0$) and magnetic ($\bar{u}\gamma^{\mu}u$ for $\mu=1,2,3$) components, is unwieldy. In what follows, we shall focus on the electric component, as the magnetic component is suppressed by the strange quark mass. In this case, setting $\mu=\nu=0$ in Eq. 25 we find that $V^{\rm ind}=\frac{g_{V}^{2}\delta_{13}\delta_{24}}{2\pi^{2}q}\sum_{c}\int\frac{d\ell dc_{q\ell}}{c_{q\ell}-q/2\ell}\left[\sum_{f=u,d}(2\ell^{2}-\ell qc_{q\ell})\Theta(k_{fc}-\ell)+2\ell m_{s}\Theta(k_{sc}-\ell)\right]\,.$ (26) After performing the momentum integrals, we find that $V^{\rm ind}=g_{V}^{2}\delta_{13}\delta_{24}\sum_{c}\left[\sum_{f=u,d}\left(-\frac{k_{fc}^{2}}{2\pi^{2}}+\frac{q^{2}}{2}U_{0}^{\rm rel}\left(\frac{q}{k_{fc}}\right)-2k_{fc}^{2}U_{2}^{\rm rel}\left(\frac{q}{k_{fc}}\right)\right)-2U(q)\right]$ (27) where the relativistic Lindhard functions $U_{0}^{\rm rel}$ and $U_{2}^{\rm rel}$ are defined in Appendix B. Note that by not including the magnetic part of the vector interaction, explicit dependence on $J$ has been removed, and all the p-waves have the same potential. Nonetheless, the ${}^{3}P_{2}$ gap remains of primary interest because the bare interaction vanishes for $J=2$, while it is repulsive for $J=0,1$. Fig. 10 shows the induced p-wave potential and ${}^{3}P_{2}$ gap for strange quarks of mass $350$ MeV, for $g_{V}=2/\Lambda_{NJL}^{2}$ and $\Lambda_{NJL}=600\,\mathrm{MeV}$. The densities of each color and flavor are determined assuming 2SC pairing of up and down quarks, charge neutrality, and beta equilibrium. This value of $g_{V}$, suggested by comparison with NJL models, is smaller than the smallest coupling we considered for nucleons. However, it should be noted that even the smallest values of $G_{V}$ considered for nucleons would result in a sound speed greater than one for the lightest quarks, so this is a physically reasonable choice.

## V Implications for Neutron Stars

The thermal evolution of neutron stars, especially those that are reheated by accretion from a companion at late times, is sensitive to the heat capacity and neutrino emissivity in their cores Yakovlev and Pethick (2004); Page _et al._ (2009); Potekhin _et al._ (2015).
The neutrino emissivity and the specific heat of dense matter are both strongly modified by Cooper pairing. When the pairing gap is large compared to the temperature, the neutrino emissivity and the specific heat are exponentially suppressed by the factor $\exp{(-\Delta/kT)}$. Additionally, in the vicinity of the critical temperature, Cooper pair breaking and formation (PBF) processes enhance the neutrino emissivity, and this enhancement is especially important for neutron Cooper pairing in the ${}^{3}P_{2}$ channel in the core of the neutron star Page _et al._ (2009); Potekhin _et al._ (2015). Studies of isolated neutron star cooling reported in Ref. Page _et al._ (2009) that include the modified URCA ($nn\rightarrow npe^{-}\bar{\nu}_{e}$ and $e^{-}pn\rightarrow nn\nu_{e}$) reactions and the PBF process, but discount the possibility of other more rapid neutrino emission processes such as direct URCA Lattimer _et al._ (1991), find that a critical temperature for ${}^{3}P_{2}$ pairing, $T_{c}\approx\Delta_{{}^{3}P_{2}}/1.7$, larger than $5\times 10^{8}$ K ($\approx 50$ keV) throughout the inner core would be disfavored by observations. This favors a scenario in which $\Delta_{{}^{3}P_{2}}$ is suppressed at the modest density encountered in the outer core due to the competition between the interactions induced by the central and spin-orbit components of the nuclear forces Schwenk and Friman (2004), but it is insensitive to the behavior of the gap at higher density.

Accreting neutron stars exhibit a diversity of cooling behaviors, and a few neutron stars show behavior that requires rapid neutrino cooling Yakovlev and Pethick (2004); Brown _et al._ (2018). Such rapid neutrino cooling can be realized in dense nuclear matter when the proton fraction in the core exceeds about $11\%$ to lift kinematic restrictions on the direct URCA reactions $e^{-}+p\rightarrow n+\nu_{e}$ and $n\rightarrow e^{-}+p+\bar{\nu}_{e}$ Lattimer _et al._ (1991). Rapid cooling would also require ${}^{3}P_{2}$ pairing to be absent at high density. Our finding that the induced interaction disfavors ${}^{3}P_{2}$ pairing when the spin-orbit and tensor forces are strong and attractive provides some insight into the conditions necessary to realize unpaired neutron matter at high density characterized by a high sound speed. On the other hand, if the central component of the p-wave interaction is strongly repulsive and the non-central components are weak, the induced interaction favors ${}^{3}P_{2}$ pairing between neutrons, and rapid neutrino cooling cannot be realized in nuclear matter at high density. In this scenario, rapid cooling in neutron stars would require new ungapped fermionic excitations, such as hyperons or quarks, to enable the direct URCA reaction.

Figure 11: Contours for $\Delta_{{}^{3}P_{2}}=10\,\mathrm{keV}$ for $G_{V}=40\,\mathrm{GeV}^{-2}$ at $3n_{\mathrm{sat}}$ for a few choices of $\xi_{p}$. Below and to the right of the contour, the gap is larger than 10 keV.

In transiently accreting neutron stars, inference of the heat deposition due to deep crustal heating from observations of accretion outbursts, together with the inference of the core temperature from subsequent observations in quiescence, has been used to derive a lower limit to the neutron star core heat capacity Cumming _et al._ (2017); Brown _et al._ (2018).
Further, if the neutron star's cooling can be observed during quiescence, an upper limit on the core heat capacity can also be deduced from observations Cumming _et al._ (2017); Brown _et al._ (2018). For neutron stars in the low-mass x-ray binaries KS 1731-260, MXB 1659-29, and XTE J1701-462, with core temperatures in the range $10^{7}-10^{8}$ K, the lower limit was found to be a factor of a few below the core heat capacity expected if neutrons and protons in the core are paired. However, upper limits from future cooling observations in these systems could constrain the extent of neutron pairing in the neutron star core. For example, the analysis in Ref. Brown _et al._ (2018) suggests that if the neutron star in MXB 1659-29 cools by about 4% during a 10-year period, a very large fraction of the neutrons in the neutron star core must be superfluid with a gap that is much larger than a few keV. If observed, such cooling would disfavor a large attractive tensor interaction and would require an attractive spin-orbit interaction, as shown in Fig. 11. Repulsive bare p-wave interactions permit larger regions of parameter space, while attractive bare p-wave interactions are more restrictive.

## VI Conclusion

We have calculated the induced potential between fermions at the Fermi surface to study the role of polarization effects in the dense medium. We find that short-range repulsive interactions due to the exchange of heavy vector mesons between neutrons, whose strength is related to the Fermi liquid parameter $F_{0}$ and the sound speed at high density, induce an attractive p-wave potential. Using a model that allows us to independently vary the strength of central and non-central p-wave interactions, we have investigated the competition between the bare and induced interactions to determine the conditions necessary to realize ${}^{3}P_{2}$ pairing in neutron matter at high density. When neutron matter is characterized by a large speed of sound $c_{s}^{2}>1/3$ and $F_{0}\gtrsim 2$, the induced interaction plays an important role. We find that

* The contribution to the induced interaction in a particular partial wave arising from terms in the bare interaction that do not contribute to that partial wave is suppressed, because their contribution is strongly influenced by the KL singularity at $q=2k_{\text{F}}$. For this reason, the bare p-wave and the spin-orbit interactions are generally more important than lower-order terms for the induced ${}^{3}P_{2}$ potential.

* The induced interaction favors ${}^{3}P_{2}$ pairing if the central components of the s-wave and p-wave interactions are strongly repulsive and the non-central components are small. The resulting gap, $\Delta_{{}^{3}P_{2}}$, can be in the range $0.1-10$ MeV in the neutron star core and is exponentially sensitive to the induced potential.

* When the central p-wave and the spin-orbit interaction are both strong and attractive, the induced interaction is repulsive. Although the bare interaction is strongly attractive, the induced repulsion can preclude pairing or suppress $\Delta_{{}^{3}P_{2}}$ by orders of magnitude.

* In the presence of a strongly attractive spin-orbit interaction, the induced interaction favors ${}^{3}P_{2}$ pairing when the central p-wave is repulsive. Pairing persists even when the strength of the central p-wave repulsion is greater than the attractive spin-orbit interaction.

An important caveat to these findings is our assumption that the bare interaction at the Fermi surface is well represented by Eq. 8.
Further, as noted earlier, at the high momenta of relevance when $n_{B}>2n_{\mathrm{sat}}$, the nucleon-nucleon potential, and thereby the parameters of our model, are not well constrained by scattering data. Nonetheless, results obtained within the purview of the model allowed us to explore the connection between pairing and the strong repulsive central interactions needed to generate a high sound speed and large $F_{0}$ at densities expected in the cores of massive neutron stars. As noted earlier, our calculation, which includes the effect due to strong spin-orbit forces, provides simple formulae to gauge the interplay between repulsive central interactions and attractive spin-orbit interactions. However, the role of strong tensor interactions warrants further study. Another aspect that warrants mention is the role of many-body forces. Although we have not explicitly accounted for them in our study here, earlier work has demonstrated that three-body forces can be incorporated through a density-dependent two-body potential that can be constructed by normal ordering the three-body force with respect to a convenient reference state, such as the ground state of the non-interacting many-body system Hebeler and Schwenk (2010); Holt _et al._ (2020). Including the three-body force would thereby introduce a density dependence to the parameters of our model that set the strength of the two-body s-wave and p-wave interactions in dense matter. We believe the large range of parameter values we explored should be sufficient to partially account for corrections due to many-body forces. The density dependence of the two-nucleon partial-wave matrix elements at the Fermi surface and the correlation between the parameters induced by the three-body forces will be explored in future work.

## VII Acknowledgements

The U.S. DOE supported the work of M. K. and S. R. under Grant No. DE-FG02-00ER41132. We thank Silas Beane, Roland Farrell, Bengt Friman, Yuki Fujimoto, and Achim Schwenk for their suggestions and helpful discussions. S. R. also thanks the members of the N3AS Physics Frontier Center, funded by the NSF Grant No. PHY-2020275, for useful conversations.

## Appendix A Induced interactions

In this appendix, we derive analytic results for the induced interaction in two steps. First, for the sake of simplicity and clarity, we assume that the bare potential contains only a momentum-independent s-wave interaction, characterized by the couplings $C_{0}$ and $\tilde{C}_{0}$, and a spin-orbit force with strength $V_{SO}$.
In this case, the ZS diagram involves the product $V_{L}\times V_{R}$, where $\begin{split}V_{L}&=C_{0}(\delta_{13}\delta_{ab}-\delta_{1b}\delta_{a3})+\tilde{C}_{0}(\sigma_{13}\cdot\sigma_{ab}-\sigma_{1b}\cdot\sigma_{a3})-V_{SO}2iq\times(\ell+k^{\prime})\cdot(\sigma_{13}\delta_{ab}+\sigma_{ab}\delta_{13})\\\ V_{R}&=C_{0}(\delta_{24}\delta_{ba}-\delta_{2a}\delta_{b4})+\tilde{C}_{0}(\sigma_{24}\cdot\sigma_{ba}-\sigma_{2a}\cdot\sigma_{b4})+V_{SO}2iq\times(\ell-k)\cdot(\sigma_{24}\delta_{ba}+\sigma_{ba}\delta_{24})\,.\end{split}$ (28) Evaluating term by term, we find that the $C_{0}^{2}$ contribution is given by $\begin{split}C_{0}^{2}\sum_{ab=\\{\uparrow,\downarrow\\}}(\delta_{13}\delta_{ab}-\delta_{1b}\delta_{a3})(\delta_{24}\delta_{ba}-\delta_{2a}\delta_{b4})=C_{0}^{2}\delta_{14}\delta_{23}\end{split}$ (29) To calculate the $\tilde{C}^{2}_{0}$ contribution, we use the following identities: $\begin{split}\sum_{ab=\\{\uparrow,\downarrow\\}}\sigma_{ab}^{i}\sigma_{ba}^{j}&=2\delta^{ij}\\\ \sum_{b=\\{\uparrow,\downarrow\\}}\sigma_{bc}^{j}\sigma_{ab}^{i}&=\chi^{\dagger}_{c}\sigma^{j}\sigma^{i}\chi_{a}=\delta_{ac}\delta^{ij}-i\varepsilon^{ijk}\sigma_{ac}^{k}\\\ \sum_{bc=\\{\uparrow,\downarrow\\}}\sum_{i=1}^{3}\sigma_{cd}^{i}\sigma_{bc}^{j}\sigma_{ab}^{i}&=\sum_{i=1}^{3}\chi_{d}^{\dagger}\sigma^{i}(2\delta^{ij}-\sigma^{i}\sigma^{j})\chi_{a}=-\sigma_{ad}^{j}\end{split}$ (30) to find that $\begin{split}\tilde{C}^{2}_{0}\sum_{ab=\\{\uparrow,\downarrow\\}}(\sigma_{13}\cdot\sigma_{ab}-\sigma_{1b}\cdot\sigma_{a3})(\sigma_{24}\cdot\sigma_{ba}-\sigma_{2a}\cdot\sigma_{b4})=\tilde{C}_{0}^{2}(4\sigma_{13}\cdot\sigma_{24}+3\delta_{14}\delta_{23}+2\sigma_{14}\cdot\sigma_{23})\end{split}$ (31) The $C_{0}\tilde{C}_{0}$ contribution is calculated by noting that $\sum_{ab}\delta_{ab}\sigma_{ba}=\mbox{Tr}[\sigma]=0$ and $\sum_{b}\sigma_{bc}\cdot\sigma_{ab}=3\delta_{ac}$. Explicitly, $\begin{split}C_{0}\tilde{C}_{0}&\sum_{ab=\\{\uparrow,\downarrow\\}}[(\delta_{13}\delta_{ab}-\delta_{1b}\delta_{a3})(\sigma_{24}\cdot\sigma_{ba}-\sigma_{2a}\cdot\sigma_{b4})+(\sigma_{13}\cdot\sigma_{ab}-\sigma_{1b}\cdot\sigma_{a3})(\delta_{24}\delta_{ba}-\delta_{2a}\delta_{b4})]\\\ &=C_{0}\tilde{C}_{0}[-2(3\delta_{13}\delta_{24}+\sigma_{13}\cdot\sigma_{24})+2\sigma_{14}\cdot\sigma_{23}]\end{split}$ (32) We have calculated the leading-order contributions from the spin-orbit interaction, proportional to $C_{0}V_{SO}$ and $\tilde{C}_{0}V_{SO}$, and find that they vanish. First, consider the $C_{0}V_{SO}$ term $\begin{split}C_{0}V_{SO}&\sum_{ab=\\{\uparrow,\downarrow\\}}[(\delta_{13}\delta_{ab}-\delta_{1b}\delta_{a3})2iq\times(\ell-k)\cdot(\sigma_{24}\delta_{ba}+\sigma_{ba}\delta_{24})\\\ &+(\delta_{24}\delta_{ba}-\delta_{2a}\delta_{b4})2iq\times(-\ell-k^{\prime})\cdot(\sigma_{13}\delta_{ab}+\sigma_{ab}\delta_{13})]\\\ &=2iC_{0}V_{SO}[q\times(\ell-k)\cdot(2\delta_{13}\sigma_{24}-\sigma_{24}\delta_{13}-\sigma_{13}\delta_{24})\\\ &+q\times(-\ell-k^{\prime})\cdot(2\delta_{24}\sigma_{13}-\sigma_{13}\delta_{24}-\sigma_{24}\delta_{13})]\,.\end{split}$ (33) Eq. 33 can be simplified further by noting that terms proportional to $q\times\ell$ vanish upon integrating over the angle $\theta_{q\ell}$ and using the fact that $q\times k=q\times k^{\prime}=-q\times q^{\prime}/2$.
We find that the induced interaction proportional to $C_{0}V_{SO}$ vanishes: $\begin{split}iC_{0}V_{SO}[q\times q^{\prime}\cdot(2\delta_{13}\sigma_{24}-\sigma_{24}\delta_{13}-\sigma_{13}\delta_{24}+2\delta_{24}\sigma_{13}-\sigma_{13}\delta_{24}-\sigma_{24}\delta_{13})]=0\end{split}$ (34) To see that the $q\times\ell$ terms vanish, notice that the only angular dependence of the loop integral is on the angle between $q$ and $\ell$. Consider the integral $\int d\Omega_{\ell}\,\hat{\ell}\cdot\hat{u}\,f(\hat{\ell}\cdot\hat{q})$, where $f(\hat{\ell}\cdot\hat{q})$ contains the angular dependence of the loop integral and $\hat{\ell}\cdot\hat{u}$ corresponds to terms like $q\times\ell\cdot\sigma$. Rotate $\Omega_{\ell}$ so that $\hat{q}=\hat{z}$ and $\phi_{\ell}=0$ corresponds to the azimuthal angle of $\hat{u}$, calling these angles $\theta_{q\ell}$ and $\phi_{u\ell}$. Also define the polar angle of $\hat{u}$ as $\theta_{uq}$. Now $\hat{\ell}\cdot\hat{u}=\sin{\theta}_{q\ell}\cos{\phi}_{u\ell}\sin{\theta}_{uq}+\cos{\theta}_{q\ell}\cos{\theta}_{uq}$. Since $\int d\phi_{u\ell}\cos{\phi}_{u\ell}=0$, the only term that survives is proportional to $\cos{\theta}_{uq}$. In the spin-orbit terms, $\ell$ always enters as $q\times\ell\cdot\sigma=\ell\cdot(\sigma\times q)$ with $\sigma\times q$ orthogonal to $q$, so this contribution always vanishes. Similarly, the contribution proportional to $\tilde{C}_{0}V_{SO}$ can also be simplified by making the substitutions $2iq\times(\ell-k)\rightarrow iq\times q^{\prime}$ and $2iq\times(-\ell-k^{\prime})\rightarrow iq\times q^{\prime}$ and using the identities in Eq. 30. We find that $\begin{split}\tilde{C}_{0}V_{SO}&\sum_{ab=\\{\uparrow,\downarrow\\}}[(\sigma_{13}\cdot\sigma_{ab}-\sigma_{1b}\cdot\sigma_{a3})iq\times q^{\prime}\cdot(\sigma_{24}\delta_{ba}+\sigma_{ba}\delta_{24})\\\ &+(\sigma_{24}\cdot\sigma_{ba}-\sigma_{2a}\cdot\sigma_{b4})iq\times q^{\prime}\cdot(\sigma_{13}\delta_{ab}+\sigma_{ab}\delta_{13})]\\\ &=\tilde{C}_{0}V_{SO}~{}iq\times q^{\prime}\cdot(2\sigma_{13}\delta_{24}-3\sigma_{24}\delta_{13}+\sigma_{13}\delta_{24}\\\ &+2\sigma_{24}\delta_{13}-3\sigma_{13}\delta_{24}+\sigma_{24}\delta_{13})\\\ &=0\,.\end{split}$ (35) Thus, spin-orbit terms do not contribute to the induced interaction at leading order in $V_{SO}$. Up to this order, including all of the non-zero terms associated with the product $V_{L}\times V_{R}$ and performing the particle-hole loop integration, we find that the induced interaction due to the ZS diagram is given by $\begin{split}V^{\rm ind}_{ZS}&=-U(q)\left[(C_{0}^{2}+3\tilde{C}_{0}^{2})\delta_{14}\delta_{23}-6C_{0}\tilde{C}_{0}\delta_{13}\delta_{24}\right]\\\ &-U(q)\left[(4\tilde{C}_{0}^{2}-2C_{0}\tilde{C}_{0})\sigma_{13}\cdot\sigma_{24}+(2\tilde{C}_{0}^{2}+2C_{0}\tilde{C}_{0})\sigma_{14}\cdot\sigma_{23}\right]\end{split}$ (36) where $\begin{split}U(q)&=-\frac{1}{\beta}\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\frac{1}{\ell_{0}-\ell^{2}/2m}\frac{1}{\ell_{0}-(\ell+q)^{2}/2m}\\\ &=-\frac{m}{2\pi^{2}q}\int_{0}^{k_{F}}\ell d\ell\int_{-1}^{1}\frac{d\cos{\theta}_{q\ell}}{\cos{\theta}_{q\ell}-q/2\ell}\\\ &=-\frac{mk_{F}^{2}}{2\pi^{2}q}\left[-\frac{q}{2k_{F}}+\frac{1}{2}\left(1-\frac{q^{2}}{4k_{F}^{2}}\right)\mbox{log}\left|\frac{1-q/2k_{F}}{1+q/2k_{F}}\right|\right]\end{split}$ (37) is the positive Lindhard function. The contribution from the ZS’ diagram is obtained by switching indices 3 and 4 and by replacing $q$ by $q^{\prime}$ in the loop integral.
Explicitly, $\begin{split}V^{\rm ind}_{ZS^{\prime}}&=-U(q^{\prime})\left[(C_{0}^{2}+3\tilde{C}_{0}^{2})\delta_{13}\delta_{24}-6C_{0}\tilde{C}_{0}\delta_{14}\delta_{23}\right]\\\ &-U(q^{\prime})\left[(4\tilde{C}_{0}^{2}-2C_{0}\tilde{C}_{0})\sigma_{14}\cdot\sigma_{23}+(2\tilde{C}_{0}^{2}+2C_{0}\tilde{C}_{0})\sigma_{13}\cdot\sigma_{24}\right]\,.\end{split}$ (38) The calculation of the momentum-dependent part of the induced potential is similar but a bit more tedious, and the analytic results involve a large number of terms. To obtain useful formulae with fewer terms, we present results for the spin-singlet and spin-triplet contributions. These require the second and fourth moments of the Lindhard function, denoted $U_{2}$ and $U_{4}$. $U_{2}$ is defined as follows: $\begin{split}U_{2}(q)&=-\frac{m}{2\pi^{2}q}\int_{0}^{k_{F}}\ell^{3}d\ell\int_{-1}^{1}\frac{d\cos{\theta}_{q\ell}}{\cos{\theta}_{q\ell}-q/2\ell}\\\ &=-\frac{mk_{F}^{4}}{2\pi^{2}q}\left[-\frac{q}{12k_{F}}-\frac{q^{3}}{16k_{F}^{3}}+\frac{1}{4}\left(1-\frac{q^{4}}{16k_{F}^{4}}\right)\log\left|\frac{1-q/2k_{F}}{1+q/2k_{F}}\right|\right]\end{split}$ (39) $U_{4}$ is defined analogously and is given by: $U_{4}(q)=-\frac{mk_{F}^{6}}{2\pi^{2}q}\left[-\frac{q}{30k_{F}}-\frac{q^{3}}{72k_{F}^{3}}-\frac{q^{5}}{96k_{F}^{5}}+\frac{1}{6}\left(1-\frac{q^{6}}{64k_{F}^{6}}\right)\log\left|\frac{1-q/2k_{F}}{1+q/2k_{F}}\right|\right]$ (40) Five momentum structures appear, corresponding to the five pairings of the combinations of constants given above. The momentum-dependent parts of $V_{L}$ and $V_{R}$ take the following form for the spin-independent terms; the spin-dependent terms are analogous. $\begin{split}V_{L}&\supset C_{2}((-\ell-k^{\prime})^{2}+q^{2})(\delta_{13}\delta_{ab}-\delta_{1b}\delta_{a3})+C^{\prime}_{2}((-\ell-k^{\prime})^{2}-q^{2})(\delta_{13}\delta_{ab}+\delta_{1b}\delta_{a3})\\\ V_{R}&\supset C_{2}((\ell-k)^{2}+q^{2})(\delta_{24}\delta_{ba}-\delta_{2a}\delta_{b4})+C^{\prime}_{2}((\ell-k)^{2}-q^{2})(\delta_{24}\delta_{ba}+\delta_{2a}\delta_{b4})\end{split}$ (41) The contributions of the momentum dependence to the induced potential are: $\begin{split}\xi_{a}(q)&=\frac{1}{\beta}\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\Delta(\ell)\Delta(\ell+q)[2q^{2}+(-\ell-k^{\prime})^{2}+(\ell-k)^{2}]\\\ \xi_{b}(q)&=\frac{1}{\beta}\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\Delta(\ell)\Delta(\ell+q)[-2q^{2}+(-\ell-k^{\prime})^{2}+(\ell-k)^{2}]\\\ \xi_{c}(q)&=\frac{1}{\beta}\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\Delta(\ell)\Delta(\ell+q)[(-\ell-k^{\prime})^{2}+q^{2}][(\ell-k)^{2}+q^{2}]\\\ \xi_{d}(q)&=\frac{1}{\beta}\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\Delta(\ell)\Delta(\ell+q)[(-\ell-k^{\prime})^{2}-q^{2}][(\ell-k)^{2}-q^{2}]\\\ \xi_{e}(q)&=\frac{1}{\beta}\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\Delta(\ell)\Delta(\ell+q)[((-\ell-k^{\prime})^{2}-q^{2})((\ell-k)^{2}+q^{2})\\\ &+((-\ell-k^{\prime})^{2}+q^{2})((\ell-k)^{2}-q^{2})]\end{split}$ (42) where $\Delta(\ell)=(\ell_{0}-\ell^{2}/2m)^{-1}$ is the fermion propagator.
After doing the loop integral, these give: $\begin{split}\xi_{a}(q)&=-\frac{2mk_{F}^{3}}{3\pi^{2}}-(q^{2}+2k_{F}^{2})U(q)-2U_{2}(q)\\\ \xi_{b}(q)&=-\frac{2mk_{F}^{3}}{3\pi^{2}}+(3q^{2}-2k_{F}^{2})U(q)-2U_{2}(q)\\\ \xi_{c}(q)&=-\frac{mk_{F}^{3}}{\pi^{2}}\left(\frac{11}{15}k_{F}^{2}+\frac{7}{12}q^{2}\right)-\left(k_{F}^{4}+\frac{3}{2}q^{2}k_{F}^{2}+\frac{q^{4}}{8}\right)U(q)\\\ &-\frac{3}{2}q^{2}U_{2}(q)-U_{4}(q)\\\ \xi_{d}(q)&=-\frac{mk_{F}^{3}}{\pi^{2}}\left(\frac{11}{15}k_{F}^{2}-\frac{3}{4}q^{2}\right)-\left(k_{F}^{4}-\frac{5}{2}q^{2}k_{F}^{2}+\frac{17}{8}q^{4}\right)U(q)\\\ &+\frac{5}{2}q^{2}U_{2}(q)-U_{4}(q)\\\ \xi_{e}(q)&=-\frac{mk_{F}^{3}}{\pi^{2}}\left(\frac{22}{15}k_{F}^{2}-\frac{q^{2}}{6}\right)-\left(2k_{F}^{4}-q^{2}k_{F}^{2}-\frac{7}{4}q^{4}\right)U(q)+q^{2}U_{2}(q)-2U_{4}(q)\end{split}$ (43) The total central induced potential in the spin-triplet channel is: $\begin{split}V^{\rm ind}_{S=1}&=-\bar{C}_{0}^{2}(U(q)-U(q^{\prime}))+\bar{C}_{0}\bar{C}_{2}(\xi_{a}(q)-\xi_{a}(q^{\prime}))+\bar{C}_{0}\bar{C}^{\prime}_{2}(\xi_{b}(q)-\xi_{b}(q^{\prime}))\\\ &+\bar{C}_{2}^{2}(\xi_{c}(q)-\xi_{c}(q^{\prime}))+5\bar{C}_{2}^{\prime 2}(\xi_{d}(q)-\xi_{d}(q^{\prime}))+\bar{C}_{2}\bar{C}^{\prime}_{2}(\xi_{e}(q)-\xi_{e}(q^{\prime}))\end{split}$ (44) The total central induced potential in the spin-singlet channel is: $\begin{split}V^{\rm ind}_{S=0}&=\bar{C}_{0}^{2}\left(U(q)+U(q^{\prime})\right)-\bar{C}_{0}\bar{C}_{2}(\xi_{a}(q)+\xi_{a}(q^{\prime}))+3\bar{C}_{0}\bar{C}^{\prime}_{2}(\xi_{b}(q)+\xi_{b}(q^{\prime}))\\\ &-\bar{C}_{2}^{2}\left(\xi_{c}(q)+\xi_{c}(q^{\prime})\right)+3\left(\bar{C}_{2}^{\prime 2}(\xi_{d}(q)+\xi_{d}(q^{\prime}))+\bar{C}_{2}\bar{C}^{\prime}_{2}(\xi_{e}(q)+\xi_{e}(q^{\prime}))\right)\,.\end{split}$ (45) This gives the s- and p-wave central potentials: $\begin{split}{}^{1}S_{0}&:\bar{C}_{0}^{2}\frac{mk_{F}}{3\pi^{2}}(1+2\log 2)+mk_{F}^{3}[\bar{C}_{0}\bar{C}_{2}\frac{2}{3\pi^{2}}(5+4\log{2})\\\ &+\bar{C}_{0}\bar{C}^{\prime}_{2}\frac{2}{5\pi^{2}}(7-4\log{2})]+mk_{F}^{5}[\bar{C}_{2}^{2}\frac{8}{315\pi^{2}}(277+96\log{2})\\\ &-\bar{C}_{2}^{\prime 2}\frac{8}{105\pi^{2}}(43+24\log{2})+\bar{C}_{2}\bar{C}^{\prime}_{2}\frac{32}{105\pi^{2}}(37+6\log{2})]\end{split}$ (46) $\begin{split}{}^{3}P_{J}&:\bar{C}_{0}^{2}\frac{mk_{F}}{5\pi^{2}}(1-2\log 2)+mk_{F}^{3}[\bar{C}_{0}\bar{C}_{2}\frac{2}{105\pi^{2}}(59-68\log{2})\\\ &-\bar{C}_{0}\bar{C}^{\prime}_{2}\frac{2}{105\pi^{2}}(29+52\log{2})]+mk_{F}^{5}[\bar{C}_{2}^{2}\frac{16}{567\pi^{2}}(83-24\log{2})\\\ &+\bar{C}_{2}^{\prime 2}\frac{64}{567\pi^{2}}(34-3\log{2})-\bar{C}_{2}\bar{C}^{\prime}_{2}\frac{16}{2835\pi^{2}}(523+204\log{2})]\end{split}$ (47) The spin-orbit potential gives an additional contribution to the p-waves: $\begin{split}2\bar{C}^{\prime}_{2}V_{SO}\frac{1}{\beta}&\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\Delta(\ell)\Delta(\ell+q)[((-\ell-k^{\prime})^{2}-q^{2})iq\times(\ell-k)\cdot(3\delta_{13}\sigma_{24}+\delta_{24}\sigma_{13})\\\ &+((\ell-k)^{2}-q^{2})iq\times(-\ell-k^{\prime})\cdot(3\delta_{24}\sigma_{13}+\delta_{13}\sigma_{24})]\\\ &=\bar{C}^{\prime}_{2}V_{SO}iq\times q^{\prime}\cdot(\sigma_{13}\delta_{24}+\sigma_{24}\delta_{13})\xi_{f}(q)\end{split}$ (48) The function $\xi_{f}(q)$ is given by: $\xi_{f}(q)=-\frac{2mk_{F}^{3}}{3\pi^{2}}+(5q^{2}-4k_{F}^{2})U(q)$ (49) This gives a contribution to the p-waves after including the ZS’ diagram: ${}^{3}P_{J}:[J(J+1)-4]\bar{C}^{\prime}_{2}V_{SO}\frac{32mk_{F}^{5}}{945\pi^{2}}(43+24\log{2})$ (50) We calculate the contribution of the tensor interaction only to $\mathcal{O}(mk_{F}^{3})$.
The term proportional to $C_{0}V_{T}$ gives: $\begin{split}C_{0}V_{T}\frac{1}{\beta}&\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\Delta(\ell)\Delta(\ell+q)[-\delta_{13}\delta_{24}((-\ell-k^{\prime})^{2}+(\ell-k)^{2})-2q\cdot\sigma_{13}q\cdot\sigma_{24}\\\ &+(-\ell-k^{\prime})\cdot\sigma_{23}(-\ell-k^{\prime})\cdot\sigma_{14}+(\ell-k)\cdot\sigma_{23}(\ell-k)\cdot\sigma_{14}]\end{split}$ (51) Doing the calculation for the $\tilde{C}_{0}V_{T}$ term gives $-3$ times the result for the term proportional to $C_{0}V_{T}$ after reducing to the spin singlet or triplet. The potentials in these channels after including the ZS’ diagram are given by: $V^{\rm ind}_{S=0}=-\bar{C}_{0}V_{T}(2q^{2}U(q)+2q^{\prime 2}U(q^{\prime}))$ (52) $\begin{split}V^{\rm ind}_{S=1}&=\bar{C}_{0}V_{T}\Bigl{[}\frac{mk_{F}^{3}}{6\pi^{2}}(-\hat{q}\cdot\sigma_{13}\hat{q}\cdot\sigma_{24}+\hat{q}^{\prime}\cdot\sigma_{13}\hat{q}^{\prime}\cdot\sigma_{24})+U(q)\left(\frac{3}{4}q\cdot\sigma_{13}q\cdot\sigma_{24}\right.\\\ &-\left.\frac{1}{2}q^{\prime}\cdot\sigma_{13}q^{\prime}\cdot\sigma_{24}+\frac{q^{2}}{4}\right)-U(q^{\prime})\left(\frac{3}{4}q^{\prime}\cdot\sigma_{13}q^{\prime}\cdot\sigma_{24}-\frac{1}{2}q\cdot\sigma_{13}q\cdot\sigma_{24}+\frac{q^{\prime 2}}{4}\right)\\\ &+\bigl{.}U_{2}(q)(1+\hat{q}\cdot\sigma_{13}\hat{q}\cdot\sigma_{24})-U_{2}(q^{\prime})(1+\hat{q}^{\prime}\cdot\sigma_{13}\hat{q}^{\prime}\cdot\sigma_{24})\Bigr{]}\end{split}$ (53) where we define the unit vector $\hat{q}=\vec{q}/|\vec{q}|$. For the spin triplet, outgoing spin indices are exchanged on some terms to simplify the equations. Doing the integrals gives: $\begin{split}{}^{1}S_{0}&:-\bar{C}_{0}V_{T}\frac{16mk_{F}^{3}}{15\pi^{2}}(2+\log 2)\\\ {}^{3}P_{2}&:-\bar{C}_{0}V_{T}\frac{4mk_{F}^{3}}{15\pi^{2}}(1-\log 2)\\\ {}^{3}P_{1}&:-\bar{C}_{0}V_{T}\frac{4mk_{F}^{3}}{21\pi^{2}}(4+5\log 2)\\\ {}^{3}P_{0}&:\bar{C}_{0}V_{T}\frac{2mk_{F}^{3}}{21\pi^{2}}(5+22\log 2)\end{split}$ (54) The part of the interaction proportional to $V_{SO}^{2}$ takes the form: $\begin{split}-8V_{SO}^{2}\frac{1}{\beta}&\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\Delta(\ell)\Delta(\ell+q)[(q\times(-\ell-k^{\prime})\cdot\sigma_{13})(q\times(\ell-k)\cdot\sigma_{24})\\\ &+\delta_{13}\delta_{24}(q\times(-\ell-k^{\prime}))\cdot(q\times(\ell-k))]\\\ &=-\frac{2mk_{F}^{3}}{3\pi^{2}}(q^{2}(\sigma_{13}\cdot\sigma_{24}+2\delta_{13}\delta_{24})-(q\cdot\sigma_{13})(q\cdot\sigma_{24}))+U(q)[q^{4}\sigma_{13}\cdot\sigma_{24}\\\ &-q^{2}(q\cdot\sigma_{13})(q\cdot\sigma_{24})+2(q\times q^{\prime}\cdot\sigma_{13})(q\times q^{\prime}\cdot\sigma_{24})+8\delta_{13}\delta_{24}q^{2}k_{F}^{2}]\\\ &-U_{2}(q)[4(q^{2}\sigma_{13}\cdot\sigma_{24}-(q\cdot\sigma_{13})(q\cdot\sigma_{24}))+8\delta_{13}\delta_{24}q^{2}]\end{split}$ (55) A tedious calculation gives the following contributions: $\begin{split}{}^{1}S_{0}&:V_{SO}^{2}\frac{8mk_{F}^{5}}{35\pi^{2}}(17+16\log 2)\\\ {}^{3}P_{2}&:V_{SO}^{2}\frac{128mk_{F}^{5}}{4725\pi^{2}}(43+24\log 2)\\\ {}^{3}P_{1}&:0\\\ {}^{3}P_{0}&:V_{SO}^{2}\frac{64mk_{F}^{5}}{945\pi^{2}}(43+24\log 2)\end{split}$ (56)

## Appendix B Induced interaction between quarks

Treating the quarks relativistically, the screening diagram is given by $\begin{split}V^{\rm ind}&=g_{V}^{2}(\bar{u}_{3}\gamma_{\mu}u_{1})(\bar{u}_{4}\gamma_{\nu}u_{2})\left(\frac{E_{k}+m_{s}}{2m_{s}}\right)^{2}\sum_{f,c}\frac{1}{\beta}\sum_{\ell_{0}}\int\frac{d^{3}\ell}{(2\pi)^{3}}\\\
&\times\frac{\mathrm{Tr}[\gamma^{\mu}(\not{\ell}+\not{q}+m_{f})\gamma^{\nu}(\not{\ell}+m_{f})]}{(\ell_{0}^{2}-\ell^{2}-m_{f}^{2})(\ell_{0}^{2}-(\ell+q)^{2}-m_{f}^{2})}\end{split}$ (57) where $f=u,d,s$ and $c=r,g,b$ denote the flavor and color of the quarks that appear in the particle-hole loop. Since the screening diagram is enhanced by the number of flavors and colors and the other diagrams are not, we calculate only this part of the potential. We neglect anti-particle contributions by discarding the Matsubara sums that produce terms proportional to $(\exp[\beta(E+\mu)]+1)^{-1}$, which are negligible at small temperatures. Doing the trace and noticing that $\bar{u}_{3}\not{q}u_{1}=\bar{u}_{4}\not{q}u_{2}=0$, Eq. 57 can be written as $V^{\rm ind}=g_{V}^{2}(\bar{u}_{3}\gamma_{\mu}u_{1})(\bar{u}_{4}\gamma_{\nu}u_{2})\left(\frac{E_{k}+m_{s}}{2m_{s}}\right)^{2}\sum_{f,c}\int\frac{\ell d\ell d\Omega_{\ell}}{4\pi^{3}qE_{\ell}}\frac{\Theta(k_{fc}-\ell)}{\cos\theta_{q\ell}-q/2\ell}(2\ell^{\mu}\ell^{\nu}-g^{\mu\nu}\vec{\ell}\cdot\vec{q})$ (58) Here, $k_{fc}$ is the Fermi momentum of flavor $f$ and color $c$, with the usual subscript $F$ suppressed for readability. Since the Fermi momentum of strange quarks is approximately the same for all colors, we also suppress the color label on $k_{s}$. Expanding to zeroth order in $k_{s}/m_{s}$, only the $\mu=\nu=0$ components contribute and we get: $V^{\rm ind}=g_{V}^{2}\delta_{13}\delta_{24}\frac{1}{2\pi^{2}q}\sum_{f,c}\int\frac{\ell d\ell}{\sqrt{\ell^{2}+m_{f}^{2}}}\frac{d\cos\theta_{q\ell}}{\cos\theta_{q\ell}-q/2\ell}\Theta(k_{fc}-\ell)(2\ell^{2}+2m_{f}^{2}-2\ell q\cos\theta_{q\ell})$ (59) Setting $m_{u}=m_{d}=0$ and discarding components from the strange quark that are not proportional to $m_{s}$ gives: $V^{\rm ind}=g_{V}^{2}\delta_{13}\delta_{24}\sum_{c}\left[\sum_{f=u,d}\left(-\frac{k_{fc}^{2}}{2\pi^{2}}+\frac{q^{2}}{2}U_{0}^{\rm rel}\left(\frac{q}{k_{fc}}\right)-2k_{fc}^{2}U_{2}^{\rm rel}\left(\frac{q}{k_{fc}}\right)\right)-2U(q)\right]$ (60) where the mass in the Lindhard function $U(q)$ is the strange quark mass, and we define dimensionless relativistic Lindhard functions in analogy with the non-relativistic ones (defining $\tilde{q}=q/k_{fc}$, to be distinguished from $\bar{q}=q/k_{s}$): $\begin{split}U_{0}^{\rm rel}(\tilde{q}=q/k_{fc})&=-\frac{1}{2\pi^{2}\tilde{q}}\int d\bar{\ell}\log\left|\frac{1-\tilde{q}/2\bar{\ell}}{1+\tilde{q}/2\bar{\ell}}\right|\\\ &=\frac{1}{2\pi^{2}\tilde{q}}\left[\log\left|1+\frac{2}{\tilde{q}}\right|-\left(1-\frac{\tilde{q}}{2}\right)\log\left|\frac{1-\tilde{q}/2}{1+\tilde{q}/2}\right|\right]\\\ U_{2}^{\rm rel}(\tilde{q})&=-\frac{1}{2\pi^{2}\tilde{q}}\int\bar{\ell}^{2}d\bar{\ell}\log\left|\frac{1-\tilde{q}/2\bar{\ell}}{1+\tilde{q}/2\bar{\ell}}\right|\\\ &=\frac{1}{6\pi^{2}\tilde{q}}\left[\frac{\tilde{q}}{2}+\frac{\tilde{q}^{3}}{4}\log\left|1+\frac{2}{\tilde{q}}\right|-\left(1-\frac{\tilde{q}^{3}}{8}\right)\log\left|\frac{1-\tilde{q}/2}{1+\tilde{q}/2}\right|\right]\end{split}$ (61) Analytical expressions for the s- and p-wave potentials can easily be found with Mathematica or equivalent, but they are long and unenlightening, so we do not reproduce them here.

## References

* Cromartie _et al._ (2019) H. T. Cromartie _et al._ (NANOGrav), Nature Astron. 4, 72 (2019), arXiv:1904.06759 [astro-ph.HE] . * Antoniadis _et al._ (2013) J. Antoniadis _et al._ , Science 340, 6131 (2013), arXiv:1304.6875 [astro-ph.HE] . * Demorest _et al._ (2010) P. Demorest, T. Pennucci, S. Ransom, M. Roberts, and J. Hessels, Nature 467, 1081 (2010), arXiv:1010.5788 [astro-ph.HE] .
* Abbott _et al._ (2018) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 121, 161101 (2018), arXiv:1805.11581 [gr-qc] . * De _et al._ (2018) S. De, D. Finstad, J. M. Lattimer, D. A. Brown, E. Berger, and C. M. Biwer, Phys. Rev. Lett. 121, 091102 (2018), [Erratum: Phys.Rev.Lett. 121, 259902 (2018)], arXiv:1804.08583 [astro-ph.HE] . * Capano _et al._ (2020) C. D. Capano, I. Tews, S. M. Brown, B. Margalit, S. De, S. Kumar, D. A. Brown, B. Krishnan, and S. Reddy, Nature Astron. 4, 625 (2020), arXiv:1908.10352 [astro-ph.HE] . * Miller _et al._ (2019) M. C. Miller _et al._ , Astrophys. J. Lett. 887, L24 (2019), arXiv:1912.05705 [astro-ph.HE] . * Riley _et al._ (2019) T. E. Riley _et al._ , Astrophys. J. Lett. 887, L21 (2019), arXiv:1912.05702 [astro-ph.HE] . * Hebeler and Schwenk (2010) K. Hebeler and A. Schwenk, Phys. Rev. C 82, 014314 (2010), arXiv:0911.0483 [nucl-th] . * Gandolfi _et al._ (2014) S. Gandolfi, J. Carlson, S. Reddy, A. W. Steiner, and R. B. Wiringa, Eur. Phys. J. A 50, 10 (2014), arXiv:1307.5815 [nucl-th] . * Tews _et al._ (2013) I. Tews, T. Krüger, K. Hebeler, and A. Schwenk, Phys. Rev. Lett. 110, 032504 (2013), arXiv:1206.0025 [nucl-th] . * Lynn _et al._ (2016) J. E. Lynn, I. Tews, J. Carlson, S. Gandolfi, A. Gezerlis, K. E. Schmidt, and A. Schwenk, Phys. Rev. Lett. 116, 062501 (2016), arXiv:1509.03470 [nucl-th] . * Drischler _et al._ (2020) C. Drischler, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips, Phys. Rev. Lett. 125, 202702 (2020), arXiv:2004.07232 [nucl-th] . * Tews _et al._ (2018) I. Tews, J. Carlson, S. Gandolfi, and S. Reddy, Astrophys. J. 860, 149 (2018), arXiv:1801.01923 [nucl-th] . * Kohn and Luttinger (1965) W. Kohn and J. M. Luttinger, Phys. Rev. Lett. 15, 524 (1965). * Kagan (2013) M. Y. Kagan, _Modern trends in Superconductivity and Superfluidity_ (Springer Netherlands, 2013). * Fay and Layzer (1968) D. Fay and A. Layzer, Phys. Rev. Lett. 20, 187 (1968). * Pines (1971) D. Pines, in _Proc. XIth Intern. Conf. on Low temperature physics_ (Academic Press of Japan, Tokyo, 1971) p. 10. * Clark _et al._ (1976) J. Clark, C.-G. Källman, C.-H. Yang, and D. Chakkalakal, Physics Letters B 61, 331 (1976). * Dean and Hjorth-Jensen (2003) D. J. Dean and M. Hjorth-Jensen, Rev. Mod. Phys. 75, 607 (2003), arXiv:nucl-th/0210033 . * Gezerlis _et al._ (2014) A. Gezerlis, C. J. Pethick, and A. Schwenk, (2014), arXiv:1406.6109 [nucl-th] . * Gorkov and Melik-Barkhudarov (1961) L. P. Gorkov and T. K. Melik-Barkhudarov, Sov. Phys. JETP 13, 1018 (1961). * Schwenk and Friman (2004) A. Schwenk and B. Friman, Phys. Rev. Lett. 92, 082501 (2004), arXiv:nucl-th/0307089 . * Baranov _et al._ (1992) M. A. Baranov, A. V. Chubukov, and M. Y. Kagan, Int. J. Mod. Phys. B 6, 2471 (1992). * González (2008) J. González, Phys. Rev. B 78, 205431 (2008). * Nandkishore _et al._ (2014) R. Nandkishore, R. Thomale, and A. V. Chubukov, Phys. Rev. B 89, 144501 (2014). * González and Stauber (2019) J. González and T. Stauber, Phys. Rev. Lett. 122, 026801 (2019). * Efremov _et al._ (2000) D. V. Efremov, M. S. Mar’enko, M. A. Baranov, and M. Y. Kagan, J. Exp. Theor. Phys. 90, 861 (2000), arXiv:cond-mat/0007334 . * Maiti and Chubukov (2013) S. Maiti and A. V. Chubukov, AIP Conference Proceedings 1550, 3 (2013). * Kagan and Chubukov (1988) M. Y. Kagan and A. Chubukov, JETP Lett 47, 614 (1988). * Friman and Weise (2019) B. Friman and W. Weise, Phys. Rev. C 100, 065807 (2019), arXiv:1908.09722 [nucl-th] . * Baldo _et al._ (1998) M. Baldo, O. Elgarøy, L. Engvik, M. Hjorth-Jensen, and H.-J. 
Schulze, Phys. Rev. C 58, 1921 (1998). * Drischler _et al._ (2017) C. Drischler, T. Krüger, K. Hebeler, and A. Schwenk, Phys. Rev. C 95, 024302 (2017), arXiv:1610.05213 [nucl-th] . * Pethick and Ravenhall (1991) C. J. Pethick and D. G. Ravenhall, Annals of the New York Academy of Sciences 647, 503 (1991), https://nyaspubs.onlinelibrary.wiley.com/doi/pdf/10.1111/j.1749-6632.1991.tb32200.x . * Alford (2001) M. Alford, Annual Review of Nuclear and Particle Science 51, 131 (2001). * Schäfer (2006) T. Schäfer, Phys. Rev. D 74, 054009 (2006), arXiv:hep-ph/0606026 . * Baym _et al._ (2018) G. Baym, T. Hatsuda, T. Kojo, P. D. Powell, Y. Song, and T. Takatsuka, Reports on Progress in Physics 81, 056902 (2018). * Buballa (2005) M. Buballa, Physics Reports 407, 205 (2005), arXiv:hep-ph/0402234 . * Song _et al._ (2019) Y. Song, G. Baym, T. Hatsuda, and T. Kojo, Phys. Rev. D 100, 034018 (2019). * Alford _et al._ (2008) M. G. Alford, A. Schmitt, K. Rajagopal, and T. Schäfer, Rev. Mod. Phys. 80, 1455 (2008), arXiv:0709.4635 [hep-ph] . * Baym _et al._ (2019) G. Baym, S. Furusawa, T. Hatsuda, T. Kojo, and H. Togashi, The Astrophysical Journal 885, 42 (2019). * Yakovlev and Pethick (2004) D. G. Yakovlev and C. J. Pethick, Ann. Rev. Astron. Astrophys. 42, 169 (2004), arXiv:astro-ph/0402143 . * Page _et al._ (2009) D. Page, J. M. Lattimer, M. Prakash, and A. W. Steiner, Astrophys. J. 707, 1131 (2009), arXiv:0906.1621 [astro-ph.SR] . * Potekhin _et al._ (2015) A. Y. Potekhin, J. A. Pons, and D. Page, Space Sci. Rev. 191, 239 (2015), arXiv:1507.06186 [astro-ph.HE] . * Lattimer _et al._ (1991) J. M. Lattimer, M. Prakash, C. J. Pethick, and P. Haensel, Phys. Rev. Lett. 66, 2701 (1991). * Brown _et al._ (2018) E. F. Brown, A. Cumming, F. J. Fattoyev, C. J. Horowitz, D. Page, and S. Reddy, Phys. Rev. Lett. 120, 182701 (2018), arXiv:1801.00041 [astro-ph.HE] . * Cumming _et al._ (2017) A. Cumming, E. F. Brown, F. J. Fattoyev, C. J. Horowitz, D. Page, and S. Reddy, Phys. Rev. C 95, 025806 (2017), arXiv:1608.07532 [astro-ph.HE] . * Holt _et al._ (2020) J. W. Holt, M. Kawaguchi, and N. Kaiser, Front. in Phys. 8, 100 (2020), arXiv:1912.06055 [nucl-th] .
# Simulation of Attacker Defender Interaction in a Noisy Security Game Erick Galinkin, Emmanouil Pountourakis, John Carter, Spiros Mancoridis ###### Abstract In the cybersecurity setting, defenders are often at the mercy of their detection technologies and subject to the information and experiences that individual analysts have. In order to give defenders an advantage, it is important to understand an attacker’s motivation and their likely next best action. As a first step in modeling this behavior, we introduce a security game framework that simulates interplay between attackers and defenders in a noisy environment, focusing on the factors that drive decision making for attackers and defenders in the variants of the game with full knowledge and observability, knowledge of the parameters but no observability of the state (“partial knowledge”), and zero knowledge or observability (“zero knowledge”). We demonstrate the importance of making the right assumptions about attackers, given significant differences in outcomes. Furthermore, there is a measurable trade-off between false-positives and true-positives in terms of attacker outcomes, suggesting that a more false-positive prone environment may be acceptable under conditions where true-positives are also higher. ## Introduction The use of artificial intelligence in cybersecurity holds tremendous promise, as time is of the essence in dealing with an intrusion. However, while tasks like malware detection (Ucci, Aniello, and Baldoni 2019; Raff et al. 2017) and network anomaly detection (Fernandes et al. 2019; Carter and Mancoridis 2021) have seen substantial progress, the automation of mitigation and response remains a challenge. Part of this challenge lies in the fact that there are many possible ways to represent highly heterogeneous security data, while other challenges involve finding appropriate responses to particular events. In general, defenders seek to find a response that has a minimal effect on the legitimate use of the information systems they are tasked with defending, while being maximally impactful to potential attackers. This sort of min-max problem can be viewed through a game theoretic lens. In the development of game theoretic models, there is a fundamental tension between constructing a model that is both simple enough to be tractable and descriptive enough to be useful. In cybersecurity, this tension is exacerbated due to the high-stakes associated with decisions in conjunction with the fact that the generation, transmission, and response to information happens rapidly and with little human intervention such that on-system activity is effectively instantaneous. In these games, a common simplifying assumption is that defenders are operating against some adversary that operates according to a fixed strategy (Liang and Xiao 2013; Manshaei et al. 2013), often represented using the SIRE model from epidemiology (Khouzani, Sarkar, and Altman 2012; Liu, Peng, and Zhong 2020; Tambe 2011). In practice, these sorts of adversaries are rare, and there are human decision-making aspects to how an attacker behaves within a victim network. Understanding attacker decision-making is crucial for the mitigation of attacks. Ideally, defenders desire near real-time responses to attacker activity as a way to minimize an attacker’s ability to achieve their goals. In order to generate these responses, it is beneficial to predict the next best action for the attacker and prevent that action from being taken. 
This necessitates modeling both the attacker’s uncertainty about the defender – not knowing what technologies are available to the defender or what actions they are likely to take – and the defender’s uncertainty about the attacker. To the best of our knowledge, ours is the first work that prioritizes the attacker’s decision-making process and places it within a game theoretic framework. We leverage the framework of Stochastic Bayesian Games (Albrecht and Ramamoorthy 2013) (SBG) as our foundation and consider three different models of attacker knowledge: full knowledge about the state, actions, history, and parameter space; partial knowledge, where the state is hidden but the parameters of the environment are known; and zero knowledge, where the attacker knows nothing but the actions they have taken and what actions they can take. In prior work, the full state and action space are always known to all players, as well as the history of play – only the strategies and utility function, which are determined by the type of the attacker, are unknown. The modification to include hidden information necessitates some adaptation of the most common solution method for Stochastic Bayesian Games: Harsanyi-Bellman ad hoc coordination (HBA), which assumes that information about the states and actions is public knowledge. A key contribution of this work is the development of direct solutions to a partially observable SBG with limited state and action spaces given some conditions on the environment. In order to study the impact of attacker knowledge, we begin by considering our model with a highly restricted state and action space and simulate attacker interactions with the model. These scenarios show that a full-knowledge assumption yields much higher expected outcomes for attackers – an assumption that demands a more aggressive response from defenders. However, in limited-knowledge scenarios, attackers must conduct some level of reconnaissance or training against a target to estimate their ability to operate; otherwise they perform very poorly. Moreover, we find that attacker outcomes are largely indifferent to overall alert rates when the rate of detecting malicious activity is high, suggesting that there is a quantifiable trade-off against alerts unrelated to malicious activity. ## Related Work Recent challenges like CAGE (TTCP 2021) have encouraged the development of models like CybORG (Foley et al. 2022) that use reinforcement learning to train autonomous agents that defend against cyber attacks. The Ph.D. thesis of Campbell (Campbell 2022) also considers a very similar problem space and solution. These models approach the same problem we wish to consider – the development of a defensive agent that disrupts an adversary while minimizing impact to network users. Our work approaches a similar concept from first principles and contributes observations and insights that emerge even in simple models, particularly from the attacker’s perspective. This paper builds primarily on prior work by Albrecht and Ramamoorthy (Albrecht and Ramamoorthy 2013) on Stochastic Bayesian Games (SBG), and the Ph.D. dissertation of Maccarone (Maccarone 2021) on applications of SBG to nuclear power plant cybersecurity. While many security games use Nash equilibria or some comparable equilibrium concept, the complexity of cybersecurity means that often there are many equilibria and, due to hidden information, it is unlikely that all players will identify the same equilibrium.
HBA relies on players selecting actions according to the observed history of the game and some set of decision criteria as the game is played, and it is not “predictive” in the sense of, e.g., a Nash equilibrium. We expand the scope of Maccarone’s work using the HBA concept in a more general cybersecurity landscape. The partial observability of the proposed SBG draws on the work of Tomášek, Bošanský, and Nguyen (Tomášek, Bošanskỳ, and Nguyen 2020) on one-sided partially observable stochastic games. Their work considers sequential attacks on some target and develops scalable algorithms for solving zero-sum security games in this setting. These algorithms consider upper and lower value bounds on a subgame, and we couple this partial observability with the aforementioned SBG to more closely approach a real-world setting.

## Game Model

###### Definition 1.

A Partially Observable Stochastic Bayesian security game is defined as:

* $S$, the state space, with initial state $s^{0}$ and terminal states $\overline{S}\subseteq S$
* $N=\\{\alpha,\delta\\}$, the players of the game: in our case, the attacker $\alpha$ and the defender $\delta$. For each player $i$, we define:
  – action space $A_{i}$
  – type space $\Theta_{i}$
  – utility function $u_{i}:S\times A\times S\rightarrow\mathbb{R}$
  – strategy $\pi_{i}:\mathbb{H}\times A_{i}\times\Theta_{i}\rightarrow[0,1]$
* $T:S\times A\times S\rightarrow[0,1]$, the state transition function
* $\Delta:\mathbb{N}_{0}\times\Theta\rightarrow[0,1]$, the distribution of types
* $p\coloneqq[0,1]\times A_{\alpha}$, the alert probability vector

The element $\mathbb{H}$ contains the history of the game, a sequence of prior states and actions (an illustrative encoding of these components is sketched below, after the remarks). Given the speed at which actions occur, we assume that the attacker and defender move simultaneously at each time step, and let the action $a^{t}=(a_{\alpha}^{t},a_{\delta}^{t})$ denote the joint action taken at time $t$. The history $H^{t}=\langle s^{0},a^{0},s^{1},a^{1},...,s^{t}\rangle$ is a concatenation of all states and actions taken from time $0$ until time $t$. Our type distributions $\Delta$ are static in the sense that each player has the same type throughout all time steps of the game. In our game, the transition function $T$ is not known by the players, though in some of the discussed cases, parameters of $T$ may be known. The type space $\Theta$ is the type space of Harsanyi (Harsanyi 1967), meaning that a player’s utility and strategies are determined by their type. We assume that strategies for particular types are known, despite a player’s type being known only to themselves. In alignment with Albrecht and Ramamoorthy (Albrecht and Ramamoorthy 2013), a type in this context can be thought of as a “program” that governs a player’s behavior. In the cybersecurity context, we can also contextualize the player’s type as the sort of adversary we are playing against. This can be defined at a high level, e.g., $\Theta_{\alpha}=\\{$nation-state, cybercriminal, insider threat$\\}$, or at a low level, e.g., $\Theta_{\alpha}=\\{$APT28, LockBit, Greg, …$\\}$. A highly granular $\Theta$ will offer more specificity, but increases the complexity of playing the game.

###### Remark 2.

Although the model presented in Definition 1 contains a single attacker and a single defender, the model can support multiple attackers at the cost of additional state space complexity, since each attacker has their own unique values for the hidden information. Throughout this paper, all examples assume there is only a single attacker.
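To make Definition 1 concrete, the sketch below collects the components of the game into a minimal Python container. The concrete representations (strings for types and actions, callables for $u_i$, $\pi_i$, and $T$) are our own illustrative choices and are not prescribed by the definition.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

State = Tuple[int, int]             # (alert count s_A, infected count s_I)
Action = str                        # e.g. "pass", "shutdown", "infect", "end"
JointAction = Tuple[Action, Action]

@dataclass
class Player:
    actions: List[Action]                                    # A_i
    types: List[str]                                         # Theta_i
    utility: Callable[[State, JointAction, State], float]    # u_i(s, a, s')
    strategy: Callable[[list, Action, str], float]           # pi_i(H, a_i, theta_i)

@dataclass
class POSBGame:
    # Illustrative container for the game of Definition 1.
    initial_state: State                                     # s^0
    terminal_states: set                                     # S-bar
    attacker: Player                                         # alpha
    defender: Player                                         # delta
    transition: Callable[[State, JointAction, State], float]  # T(s, a, s')
    type_dist: Dict[str, Dict[str, float]]                   # Delta, per player
    alert_prob: Dict[Action, float]                          # p, per attacker action
    history: List = field(default_factory=list)              # H^t = <s^0, a^0, ...>
```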
###### Remark 3.

We use the term “know” to describe information that a player believes with probability 1. Thus, when considering $H$, each player may have beliefs – even very strong ones – about the other player’s actions, but knows only the actions they have taken and the elements of $s^{t}$ that they are able to directly observe. It is this feature that makes the game “partially observable,” in contrast to conventional SBG where it is assumed that states and actions are common knowledge.

### Harsanyi-Bellman Ad Hoc Coordination

Ad hoc coordination is derived from the notion of private information in Bayesian games (Harsanyi 1967), coupled with the inclusion of state, probabilistic state transitions, and time from stochastic games (Shapley 1953). If every player knows the type distribution $\Delta$ exactly, then the game admits a Bayesian Nash equilibrium (Harsanyi 1968). Ad hoc coordination, then, is based on the assertion that each player does not know the type space $\Theta_{j}$ of the other players and is thus ignorant of $\Delta$. Since the players are ignorant of $\Delta$, they may identify different posterior distributions and cannot guarantee convergence to a Nash equilibrium (Dekel, Fudenberg, and Levine 2004), even a sub-optimal one. Thus, Albrecht and Ramamoorthy (Albrecht and Ramamoorthy 2013) use a best-response rule that maximizes expected utility with respect to each player’s belief about the types of the other players.

###### Definition 4.

Harsanyi-Bellman Ad Hoc Coordination (HBA) is defined as $a_{i}^{t}\sim\operatorname*{arg\,max}_{a_{i}}E_{s^{t}}^{a_{i}}\left[H^{t}\right]$ where: $\begin{split}E_{s}^{a_{i}}\left[\hat{H}\right]&=\sum_{\theta_{-i}^{*}\in\Theta_{-i}^{*}}Pr(\theta_{-i}^{*}|H^{t})\\\ &\quad\sum_{a_{-i}\in A_{-i}}Q_{s}^{a_{i,-i}}(\hat{H})\\\ &\quad\prod_{j\neq i}\pi_{j}(\hat{H},a_{j},\theta_{j}^{*})\end{split}$ (1) is the expected long-term payoff for player $i$ taking action $a_{i}$ in state $s$, given history $H^{t}$ up to time $t$, and $\begin{split}Q_{s}^{a}(\hat{H})&=\sum_{s^{\prime}\in S}T(s,a,s^{\prime})(u_{i}(s,a,s^{\prime})\\\ &+\gamma\max_{a_{i}}E_{s^{\prime}}^{a_{i}}\left[\langle\hat{H},a,s^{\prime}\rangle\right])\end{split}$ (2) is the expected long-term payoff for player $i$ when the joint action $a$ is taken in state $s$, with discount factor $\gamma$. We denote the “projected history” of the game, $\hat{H}$, to be the entire history of the game for $0\leq\tau\leq t$, including the current time step $t$, assuming that a particular action is taken and concatenated to the history. This solution concept is very effective in conventional SBG settings, but under our restrictions of partial observability, there are components of $s$ and $a$ that are not seen by both players, creating uncertainty about $H$ for players at any time step. We propose non-HBA solutions for our simplified settings in their respective sections.

## Simplified Setting

In our simplified game, we assume an attacking agent and a defending agent who choose their moves simultaneously. The game is played on an infinite graph, and the attacker’s objective is to control as many nodes as possible. In this game, our state space $S$ is a tuple of two integers: one that serves as a count of “alerts” in the environment and a second that counts the number of nodes the attacker has infected. The action space for defenders consists of two actions: a pass action, where they do nothing, and a shutdown action that ends the game.
The attacker’s action space consists of two actions: infecting an adjacent node and ending the game. The game is characterized by:

* $p$, the probability that an alert occurs given that the attacking player has taken an action.
* $q$, the probability that an alert occurs whether or not the attacking player has taken an action.
* $Th$, the defender-selected threshold of alerts that ends the game, selected before play begins.

Let $p$ and $q$ parameterize two Bernoulli random variables, $X_{m}\sim\text{Bernoulli}(p)$, $X_{n}\sim\text{Bernoulli}(q)$. We define the indicator function $\mathbf{1}_{X}=\begin{cases}1~{}&{\text{ if }}X_{m}=1\text{ or }X_{n}=1\\\ 0~{}&{\text{ if }}X_{m}=0\text{ and }X_{n}=0\end{cases}$ Let $s_{A}$ be the total number of alerts that have occurred. Formally, $s_{A}=\sum_{i=1}^{t}\mathbf{1}_{X}$. This random variable, parameterized by $p$ and $q$, also represents part of our transition function $T$, in that the resultant state $s^{\prime}=(s_{A},s_{I})$ follows from the change in $s_{A}$. If the attacking player ends the game before the defending player, they get a reward equal to $s_{I}$, the number of nodes they have infected, and the defender loses exactly this amount. If the defending player ends the game and the attacking player has gained control of at least one node, then both players get zero reward. Clearly, in this setting, the optimal first move for the defender is to end the game immediately and set $Th=0$. Thus, in our simplified environment, we also assume that if the defender ends the game when the attacker is not present, they incur a very large cost that demands $Th>0$. We analyze three cases of this game:

1. All parameters are common knowledge and $s_{A}$ is observable to both players
2. All parameters are common knowledge but $s_{A}$ is not observable to the attacker
3. Parameters are hidden from the attacker and $s_{A}$ is not observable to the attacker

In all cases of the game, the actions taken by the attacking player and $s_{I}$ are unknown to the defender. This means that in the full knowledge setting, the parameters $p$ and $q$ of $T$ are known to all players, $s_{A}$ but not $s_{I}$ is observable by the defender, and the full state at each time step is observed by the attacker. In the partial knowledge setting, $p$ and $q$ are known by all players, but only the defender can observe $s_{A}$ and only the attacker can observe $s_{I}$. In the zero knowledge setting, only the defender knows $p$ and $q$ and can observe $s_{A}$, and the attacker can observe only $s_{I}$.

### Full knowledge

In the full knowledge setting, the attacker and defender can both observe $s_{A}$, and know $p$, $q$, and $Th$. Since the cost to the defender of ending the game when they are not under attack is so large, they must choose $Th$ such that the probability that an attacker is present is close to 1. Since $Th$ is known and $s_{A}$ is observable, the attacker-optimal strategy is to end the game when $s_{A}=Th$.

###### Proposition 5.

The defender-optimal threshold $Th$ is given by $\left\lfloor\left\lceil\frac{1}{p}\right\rceil(p+q-pq)\right\rfloor$.

###### Proof.

At each time step from 1 to $t$, there is probability $q$ that an alert fires independent of any other action. If the attacking player takes an action, there is probability $p$ that an alert fires. This lets us treat $p$ as the rate of arrival for alerts and view $X_{m}$ as a Bernoulli process.
We wish to find the number of trials required until the first success, which follows the geometric distribution; consequently, the expected number of trials before the first success is $\frac{1}{p}$. However, $t$, $X_{n}$, and $X_{m}$ cannot be directly observed. Since only $s_{A}$ can be observed, we must find the expected number of alerts that occur at time $t=\frac{1}{p}$. As our events are discrete, we take the ceiling of $t$. The four possible outcomes of $(X_{m},X_{n})$ are: $\displaystyle(X_{m}=0,X_{n}=0),$ $\displaystyle(X_{m}=1,X_{n}=0),$ $\displaystyle(X_{m}=0,X_{n}=1),$ $\displaystyle(X_{m}=1,X_{n}=1)$ and all but the first outcome yield an alert. We can therefore find the probability that an alert occurs at some time $\tau$ by taking the complement of the probability of the first event. Since $X_{m}$ and $X_{n}$ are independent, that probability is $(1-p)(1-q)$, so an alert occurs with probability $1-(1-p)(1-q)$. Our expected number of alerts at time $t=\left\lceil\frac{1}{p}\right\rceil$ is therefore: $\displaystyle(1-(1-p)(1-q))\,t$ $\displaystyle=(1-(1-p)(1-q))\left\lceil\frac{1}{p}\right\rceil$ $\displaystyle=\left\lceil\frac{1}{p}\right\rceil(1-(pq-p-q+1))$ $\displaystyle=\left\lceil\frac{1}{p}\right\rceil(1-pq+p+q-1)$ $\displaystyle=\left\lceil\frac{1}{p}\right\rceil(p+q-pq)$ We cannot guarantee that this number is an integer, and since the defender wants the minimal threshold, we take the floor of the expected number of alerts and set: $Th=\left\lfloor\left\lceil\frac{1}{p}\right\rceil(p+q-pq)\right\rfloor$ (3) ∎ The attacker-optimal strategy, then, is dependent upon how ties are broken. If the attacker always wins ties, they should act until $s_{A}=Th$. If the defender always wins ties or if ties are broken randomly, the attacker should act until $Th-1<s_{A}\leq Th$. For simplicity, we assume that the attacker always wins ties throughout this work.

### Zero Observability

In the second case, the parameters of the game are known to both players – that is, $p,q$ are common knowledge. Since $Th$ is chosen a priori and known by the defender, the attacking player can deduce the value of $Th$ for a rational opponent by Proposition 5. However, $s_{A}$ cannot be observed by the attacker. The defending player sets $Th=\left\lfloor\left\lceil\frac{1}{p}\right\rceil(p+q-pq)\right\rfloor$ as in Equation 3. The attacking player must therefore determine the maximum number of actions they can expect to take before $Th$ is reached.

###### Proposition 6.

The optimal number of actions for the attacker to take when $s_{A}$ cannot be observed is $\frac{1-p-q+pq}{p}$.

###### Proof.

With $p$ and $q$ known, the attacker seeks the expected arrival time of the $Th$th alert generated by the Bernoulli process described by the joint distribution of $X_{m}$ and $X_{n}$. Since the arrival rate for the joint process is $p+q-pq$, as demonstrated in Proposition 5, the attacker seeks the $t$ that maximizes their number of actions. We want to know the number of trials, or actions, required before $Th$ alerts occur, which can be represented by the negative binomial distribution (Casella and Berger 2021) with probability mass function: $\binom{k+r-1}{r-1}(1-p)^{k}p^{r}$ where $r$ is the number of successes, $k$ is the number of failures, $p$ is the probability of success, and $\binom{k+r-1}{r-1}$ is the binomial coefficient. We call this random variable $X\sim NB(k;r,p)$.
The expected value of $X$ is given by $\frac{r(1-p)}{p}$, so the expected number of actions that can be taken by an attacker before the threshold is reached is given by: $\displaystyle t$ $\displaystyle=E[X]$ $\displaystyle=\frac{r(1-p)}{p}$ $\displaystyle=\frac{Th\,(1-(p+q-pq))}{p+q-pq}$ $\displaystyle=Th\frac{1-p-q+pq}{p+q-pq}$ $\displaystyle\approx\frac{1}{p}(p+q-pq)\frac{1-p-q+pq}{p+q-pq}$ $\displaystyle=\frac{1-p-q+pq}{p}$ (4) ∎ The attacker, not knowing the value of $s_{A}$, rationally draws the same conclusion as the defender – since the threshold is defined as a function of the Bernoulli processes by Equation 3, they can operate until the expected number of alerts generated equals the expected threshold, given by Equation 4. In our simulation, this value is chosen by the attacker a priori. Once the attacker has taken all their allocated actions, they end the game – if they choose to take another action when the defender decides to end the game, they receive no reward.

### Unknown parameters

In the setting where $p$ and $q$ are unknown, the attacker cannot make a rational assessment about $Th$. Since the attacker is still utility maximizing, we must establish some criteria by which the attacker decides whether or not to stop. In the zero-shot setting, where both the count of alerts and the environment parameters are unknown, the attacking agent has no way to reasonably choose a number of actions to take. Attacker-defender interaction here would demand a more complex model, an example of which we discuss in the conclusion. If the attacker is able to conduct multiple trials against a defender, they can use the number of actions $t$ they have taken along with a Bayesian updating rule to estimate $Th$ as a function of $t$. Over time, this attacker should optimize to approach both the partial knowledge and full knowledge attackers. However, our goal here is to understand the impact of limited information on the agents, and so we choose an arbitrary small number of attempts for the experimental setting. Our simple Bayesian learning agent is given 10 attempts to interact with the defender, and for each experiment, we return their maximum score and average win rate.

## Simulation and Discussion

We evaluate the three models described over 20 evenly spaced values of $p$ and $q$ between 0.01 and 0.99, for a total of 400 combinations of parameters. In each model, play proceeds as follows:

1. The defender calculates $Th$ according to Equation 3.
2. Players take an action, and $s_{I}$ is incremented unless either player ends the game.
3. Bernoulli trials are run for $X_{m}$ and $X_{n}$, returning 1 with probability $p$ and $q$, respectively.
4. If either trial returns 1, increment $s_{A}$.
5. The defender and attacker choose their next action, and steps 2-5 repeat until the game ends.

The defender’s choice of action is always to pass unless $s_{A}=Th$, in which case they end the game. The attacker’s choice of action varies between models. In the full knowledge model, the attacker ends the game if $s_{A}=Th$; otherwise they attack another node. In the partial knowledge model, the attacker chooses a lifespan based on Equation 4. If $s_{I}$ is less than that lifespan, they attack another node; otherwise, they end the game (a minimal sketch of this simulation loop is given below).
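The following minimal Python sketch implements the play loop above for the full and partial knowledge attackers, with the threshold and lifespan taken from Equations 3 and 4. The tie-breaking rule (attacker wins ties) follows the convention adopted earlier; the parameter values and trial count in the usage lines are arbitrary.

```python
import math
import random

def threshold(p, q):
    # Defender-optimal threshold, Eq. (3).
    return math.floor(math.ceil(1.0 / p) * (p + q - p * q))

def lifespan(p, q):
    # Expected number of attacker actions before Th alerts arrive, Eq. (4).
    return (1.0 - p - q + p * q) / p

def play(p, q, attacker="full", rng=random):
    """Simulate one game; returns (attacker score, attacker won)."""
    Th = threshold(p, q)
    s_A, s_I = 0, 0                        # alert count, infected-node count
    n_actions = lifespan(p, q)             # used only by the partial attacker
    while True:
        # Moves are simultaneous; the attacker wins ties, so check them first.
        if attacker == "full" and s_A == Th:
            return s_I, True               # attacker ends at the threshold
        if attacker == "partial" and s_I >= n_actions:
            return s_I, True               # attacker ends after the lifespan
        if s_A >= Th:
            return 0, False                # defender shuts down: zero reward
        s_I += 1                           # attacker infects another node
        # Bernoulli trials for X_m (attacker-driven) and X_n (background).
        if rng.random() < p or rng.random() < q:
            s_A += 1

# Arbitrary illustrative parameters and trial count.
scores = [play(0.2, 0.2, attacker="partial")[0] for _ in range(100000)]
print(sum(scores) / len(scores))
```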
In the zero knowledge model, the attacker follows the same rules as in the partial knowledge model, but tries to find the optimal lifespan – at the end of each of their attempts, they update their expected value of the joint distribution of $p$ and $q$, determine a new optimal lifespan, and try again until they have exhausted their 10 total attempts. The average win rate and average score of those 10 attempts are then taken as one trial. Each combination of $p$ and $q$ was run over 1 million trials, and the results of those trials – the attacker score and wins – were averaged. Figures 1, 2, and 3 show the average score of each model over 1 million trials as a function of $p$ and $q$. Within these figures, we can see two emergent phenomena. First, the general shape of the surface is the same across all three models, peaking, as we would expect, when both $p$ and $q$ are very low. Second, looking specifically at the range of the Z-axis, we see that the highest attained value decreases monotonically as more information is removed from the attacker.

Figure 1: Average score over 1 million trials for the full knowledge model

Figure 2: Average score over 1 million trials for the partial knowledge model

In the partial knowledge trials shown in Figure 2, there is a curious dip in the score achieved when both $p$ and $q$ are very low. Looking closely at both the scores and win rates shows that this is because, while the win rate is very high in the partial knowledge setting in general, there is a win-rate decrease at very low detection rates. Despite boasting an average win rate of 0.9823 across all parameter settings and trials, the partial knowledge attacker’s win rate at the lowest rate of alerting, $p=q=0.01$, is only 0.41012. This occurs due to the high number of actions that the attacker expects to achieve in the average case, combined with the large variance that is observed when $p$ and $q$ are low.

Figure 3: Average score over 1 million trials for the zero knowledge model

The first phenomenon, that the achieved score is a relatively direct function of the alert rates, follows directly from our assumptions about the threshold and how attackers gain rewards: the longer an attacker is present, the higher their potential reward; if the rate of alerts is higher, then the attacker has less time to accomplish whatever task they aim to. Although the values of the attacker’s score where $p$ and $q$ are larger are still usually greater than zero, they are dwarfed by the values achieved when the probability of alerts is very low. We can also observe, by looking at the X-axis of the figures, that the value of $p$ is much more meaningful than the value of $q$. This makes sense, since $p$ is the parameter with the largest effect in both Equations 3 and 4. More interesting is the fact that for most values of $p$, the impact of $q$ is meaningful only when the attacker’s knowledge is limited. Specifically, in the full knowledge scenario, the difference between some fixed $p$ across all values of $q$ is small. However, as is most obvious in Figure 2, the impact of a high $q$ when $p$ is low significantly changes the expectations of an attacker to persist in an environment – information that a savvy defender could potentially exploit. The second phenomenon is most pronounced when looking at the Z-axis across the three figures, which differs meaningfully across them.
Examining the values, we find that across all parameters and trials, the full knowledge attacker attains a mean score of 9.4701, the partial knowledge attacker attains a mean score of 3.8224, and the zero knowledge attacker attains a mean score of 2.3751. At the high end, the full knowledge attacker attains a maximum score of 113.0872, the partial knowledge attacker attains a maximum score of 72.0665, and the zero knowledge attacker attains a maximum score of 16.2920. The value of additional information to the attacker is therefore quite substantial.

## Conclusion and Future Work

This work considers a partially observable stochastic Bayesian game that offers useful insights even under strong assumptions about the state and action space. In particular, we can quantify the value of the total alert rate versus the true positive rate, finding that the value of an alert, from a defender’s perspective, is almost entirely about how effective it is at detecting attacker activity, while the overall alert rate is essentially meaningless to the attacker, except at the extremes where the overall alert rate far exceeds the true positive rate. This is most pronounced in the scenarios where an attacker’s knowledge about the parameters and the state of the world is limited, scenarios that map more closely to real life. An extension of this work with expanded state and action spaces would offer more useful parallels and insights about these phenomena.

In this work, we have found optimal solutions for our particular limited state and action spaces. Harsanyi-Bellman ad hoc coordination depends heavily on observing the history of play, and there is substantial uncertainty about how to choose a strategy or predict what actions an opponent may have taken; in particular, any attacker action, once taken, would demand a defender response, and any defender response would end the game. A generalization of this solution concept to the partially observable case is therefore difficult to assess in our limited game. Future work in this area will generalize the HBA solution concept to arbitrary partially observable SBGs.

This work has also demonstrated that an attacker’s potential success in an environment is monotonically linked to their knowledge of the environment. Crucially, this means that when we are emulating attackers or building game theoretic models of attack and defense, the assumptions about what an attacker knows are nontrivial, and making incorrect assumptions about attacker knowledge and behavior can have negative impacts on defender optimization: if we assume that attackers are omniscient, then our optimal defensive strategy is very aggressive, while an optimal defensive strategy under better assumptions may yield more uptime for users. Here, our defender admits only a fixed strategy, with no options for adapting to the attacker. However, a more dynamic defender would be able to anticipate an attacker’s actions given their knowledge, and optimize against that.

Finally, our zero knowledge attacker is given a small, arbitrary number of opportunities to learn the environment and optimize their score. In order to conduct a more reasonable few-shot approach to the zero knowledge environment, we should consider a model with a larger state and action space. In order for the attacker to be a learning agent, some local feedback (feedback aside from winning or losing the game) would be needed; this would enable a more robust interplay and the incorporation of attacker learning dynamics.
Given the extant literature on the use of reinforcement learning for analogous problems, we would seek to apply that same technique in few-shot settings of a stochastic Bayesian game.

## References

* Albrecht and Ramamoorthy (2013) Albrecht, S. V.; and Ramamoorthy, S. 2013. A game-theoretic model and best-response learning method for ad hoc coordination in multiagent systems. In _Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems_, 1155–1156.
* Campbell (2022) Campbell, R. G. 2022. _Autonomous Network Defense Using Multi-Agent Reinforcement Learning and Self-Play_. Ph.D. thesis.
* Carter and Mancoridis (2021) Carter, J.; and Mancoridis, S. 2021. Evaluation of an Anomaly Detector for Routers using Parameterizable Malware in an IoT Ecosystem. In _International Conference on Ubiquitous Security_, 53–65. Springer.
* Casella and Berger (2021) Casella, G.; and Berger, R. L. 2021. _Statistical inference_. Cengage Learning.
* Dekel, Fudenberg, and Levine (2004) Dekel, E.; Fudenberg, D.; and Levine, D. K. 2004. Learning to play Bayesian games. _Games and economic behavior_, 46(2): 282–303.
* Fernandes et al. (2019) Fernandes, G.; Rodrigues, J. J.; Carvalho, L. F.; Al-Muhtadi, J. F.; and Proença, M. L. 2019. A comprehensive survey on network anomaly detection. _Telecommunication Systems_, 70(3): 447–489.
* Foley et al. (2022) Foley, M.; Hicks, C.; Highnam, K.; and Mavroudis, V. 2022. Autonomous Network Defence using Reinforcement Learning. In _Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security_, 1252–1254.
* Harsanyi (1967) Harsanyi, J. C. 1967. Games with incomplete information played by “Bayesian” players, I–III Part I. The basic model. _Management science_, 14(3): 159–182.
* Harsanyi (1968) Harsanyi, J. C. 1968. Games with incomplete information played by “Bayesian” players part II. Bayesian equilibrium points. _Management Science_, 14(5): 320–334.
* Khouzani, Sarkar, and Altman (2012) Khouzani, M. H.; Sarkar, S.; and Altman, E. 2012. Saddle-point strategies in malware attack. _IEEE Journal on Selected Areas in Communications_, 30(1): 31–43.
* Liang and Xiao (2013) Liang, X.; and Xiao, Y. 2013. Game theory for network security. _IEEE Communications Surveys and Tutorials_, 15(1): 472–486.
* Liu, Peng, and Zhong (2020) Liu, G.; Peng, B.; and Zhong, X. 2020. A novel epidemic model for wireless rechargeable sensor network security. _Sensors_, 21(1): 123.
* Maccarone (2021) Maccarone, L. T. 2021. _Stochastic Bayesian Games for the Cybersecurity of Nuclear Power Plants_. Ph.D. thesis, University of Pittsburgh.
* Manshaei et al. (2013) Manshaei, M. H.; Zhu, Q.; Alpcan, T.; Başar, T.; and Hubaux, J.-P. 2013. Game theory meets network security and privacy. _ACM Computing Surveys (CSUR)_, 45(3): 1–39.
* Raff et al. (2017) Raff, E.; Barker, J.; Sylvester, J.; Brandon, R.; Catanzaro, B.; and Nicholas, C. 2017. Malware Detection by Eating a Whole EXE.
* Shapley (1953) Shapley, L. 1953. Stochastic Games. _Proceedings of the National Academy of Sciences_.
* Tambe (2011) Tambe, M. 2011. _Security and game theory: algorithms, deployed systems, lessons learned_. Cambridge university press.
* Tomášek, Bošanskỳ, and Nguyen (2020) Tomášek, P.; Bošanskỳ, B.; and Nguyen, T. H. 2020. Using one-sided partially observable stochastic games for solving zero-sum security games with sequential attacks. In _International Conference on Decision and Game Theory for Security_, 385–404. Springer.
* TTCP (2021) TTCP. 2021. cage-challenge.
https://github.com/cage-challenge. * Ucci, Aniello, and Baldoni (2019) Ucci, D.; Aniello, L.; and Baldoni, R. 2019. Survey of machine learning techniques for malware analysis. _Computers & Security_, 81: 123–147.
# Cool Gaseous Exoplanets: surveying the new frontier with Twinkle

Luke Booth,1 Subhajit Sarkar,1 Matt Griffin,1 and Billy Edwards2

1Cardiff Hub for Astrophysics Research and Technology (CHART), School of Physics and Astronomy, Cardiff University, 5 The Parade, CF24 3AA, United Kingdom
2SRON, Netherlands Institute for Space Research, Niels Bohrweg 4, NL-2333 CA, Leiden, The Netherlands

E-mail: <EMAIL_ADDRESS>

(Accepted XXX. Received YYY; in original form ZZZ)

###### Abstract

Cool gaseous exoplanets ($1.75\ R_{\oplus}<R_{\text{p}}<3\ R_{\text{J}}$, $200$ K $<T_{\text{eq}}<1000$ K) are an as-yet understudied population, with great potential to expand our understanding of planetary atmospheres and formation mechanisms. In this paper, we outline the basis for a homogeneous survey of cool gaseous planets with Twinkle, a 0.45-m diameter space telescope with simultaneous spectral coverage from 0.5-4.5 $\mu$m, set to launch in 2025. We find that Twinkle has the potential to characterise the atmospheres of 36 known cool gaseous exoplanets (11 sub-Neptunian, 11 Neptunian, 14 Jovian) at an SNR $\geq$ 5 during its 3-year primary mission, with the capability of detecting most major molecules predicted by equilibrium chemistry to > $5\sigma$ significance. We find that an injected mass-metallicity trend is well-recovered, demonstrating Twinkle’s ability to elucidate this fundamental relationship into the cool regime. We also find Twinkle will be able to detect cloud layers at 3$\sigma$ or greater in all cool gaseous planets for clouds at $\leq$ 10 Pa pressure level, but will be insensitive to clouds deeper than $10^{4}$ Pa in all cases. With these results we demonstrate the capability of the Twinkle mission to greatly expand the current knowledge of cool gaseous planets, enabling key insights and constraints to be obtained for this poorly-charted region of exoplanet parameter space.

###### keywords: exoplanets – planets and satellites: gaseous planets – planets and satellites: atmospheres – techniques: spectroscopic – instrumentation: spectrographs

## 1 Introduction

Over the past three decades, the field of exoplanet science has progressed rapidly, from the first detections in the 1990s (Wolszczan, 2012; Mayor & Queloz, 1995) and the first atmospheric spectrum in 2002 (Charbonneau et al., 2002), to the revolution in exoplanet demographics resulting from Kepler (Lissauer et al., 2014; Fulton et al., 2017) and over a decade of transmission spectra from the Hubble Space Telescope (HST) (Edwards et al., 2022). Most recently, the James Webb Space Telescope (JWST) is now returning high precision transmission spectra, resulting in new discoveries such as the first detection of sulphur-bearing species in an atmosphere (Rustamkulov et al., 2023; Alderson et al., 2023; Tsai et al., 2023) and the first detection of carbon-bearing molecules in a habitable zone planet (Madhusudhan et al., 2023). Today over 5,500 confirmed exoplanets are known (NASA Archive Planetary Systems Composite Table, https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=PSCompPars, accessed 27/09/2023), the majority of which have been detected by transit photometry using large dedicated space-based surveys such as Kepler, K2, CHEOPS and TESS or ground-based surveys such as WASP, HATNet and NGTS. Such transiting planets provide potential targets for atmospheric characterization through transmission and/or eclipse spectroscopy.
Using these techniques, in the next decade, the upcoming Ariel and Twinkle missions will perform the first dedicated population-level surveys of exoplanet atmospheres (Tinetti et al., 2018; Edwards et al., 2019a). The currently known transiting exoplanet population is highly diverse in both radius and temperature, containing planets with radii that vary from less than that of Mercury to several times the size of Jupiter and equilibrium temperatures, $T_{\text{eq}}$, that span the range from less than $200$ to over $4000$ K. Selection effects of the two most prolific detection methods (transit and radial velocity) bias the currently discovered planetary population towards shorter period planets. Super-Earths and sub-Neptunes are the most frequent type of planet, often found in closely packed multi-planet systems, e.g. the TRAPPIST-1 (Gillon et al., 2017), Kepler-296 (Barclay et al., 2015), Kepler-32 (Swift et al., 2013) and K2-384 (Christiansen et al., 2022) systems. A large population of "hot Jupiters" is also known, their large size, short periods and high transit probabilities positively biasing their transit detectability. Notably, some planet populations appear more sparse. These include the "hot Neptune desert" (Szabó & Kiss, 2011; Mazeh et al., 2016; Edwards et al., 2023) as well as colder gas giants. Wittenmyer et al. (2020), relying on long-duration radial velocity data, determined that the occurrence rates of giant planets around solar-type stars were fairly constant for periods below 300 days and increased at longer periods. The occurrence rates of hot Jupiters and temperate gas giants would thus be similar ($\sim$ 1%) but very cold giants, analogous to Jupiter or Saturn, are more frequent ($\sim$ 7%). Despite this, in practice there is a dearth of known transiting gas giants at cooler temperatures, with about half as many transiting giants having $T_{\text{eq}}$ less than 1000 K as those with $T_{\text{eq}}$ greater than 1000 K (around all types of star) (Table 1). To date only a small fraction of all known transiting exoplanets, $\sim$ 180 planets (ExoAtmospheres community database, http://research.iac.es/proyecto/exoatmospheres/index.php, accessed 13/11/2023), have had their atmospheres characterised through a combination of transmission, emission, cross-correlation and direct imaging spectroscopy, with specific molecular detections reported in about half these cases (defined as cases where one or more molecules have definitive detections reported in the ExoAtmospheres community database, excluding cases where only upper limits are given). The most successful method applied to date has been transmission spectroscopy. This technique is most sensitive to high scale height atmospheres and large planetary radii, which tend to augment spectral feature amplitudes. The amplitude of spectral features in transmission, $A_{\text{p}}$, can be approximated by:

$A_{\text{p}}=\frac{2R_{\text{p}}\cdot nH}{R_{\text{s}}^{2}}$ (1)

where $R_{\text{p}}$ is the radius of the planet, $R_{\text{s}}$ is the radius of the host star and $H$ is the pressure scale height. $n$ is commonly taken to have a value of 5, and gives the number of scale heights spanned by a typical spectral feature.
The scale height is given by:

$H=\frac{k_{\text{B}}T_{\text{eq}}}{\mu g}$ (2)

where $\mu$ is the mean molecular weight of the atmosphere, commonly taken to be $\sim 2.3$ for hydrogen/helium-dominated (H2-He) atmospheres, $g$ is the surface gravity of the planet, $k_{\text{B}}$ is Boltzmann’s constant and $T_{\text{eq}}$ is the equilibrium temperature of the planet. As well as making the best targets for transmission spectroscopy, large, very hot planets also make better targets for day-side emission spectroscopy due to their higher thermal flux. As a result, currently almost two thirds of all planets that have had their atmospheres analysed have $T_{\text{eq}}$ > $1000$ K, with this population accounting for $\sim$70% of planets that have molecular detections. Early searches for trends in exoplanet atmospheres (Sing et al., 2016; Tsiaras et al., 2018; Edwards et al., 2022) have also typically been dominated by hot Jupiters and warm Neptunes. Trends reported include temperature vs cloud/hazes (Crossfield & Kreidberg, 2017; Libby-Roberts et al., 2020; Guilluy et al., 2021; Estrela et al., 2022) and planet mass vs metallicity (e.g. Wakeford et al., 2018; Welbanks et al., 2019). Color-magnitude diagrams and trends regarding phase-curve properties and day-night temperature variations with equilibrium temperature have also been reported (e.g. Zhang, 2020). Such population level studies are a start to understanding how atmospheric properties and planet composition relate to fundamental initial conditions, and are key to a full understanding of planet formation and evolution. However, the lower temperature planets are poorly represented due to a paucity of atmospheric spectra in this regime. This in turn is due to a combination of few strong candidates for spectroscopic follow-up and the intrinsic challenge in obtaining transmission spectra for cooler atmospheres (which will have smaller scale heights) at sufficient signal-to-noise ratio (SNR). In this paper, we examine the capability of the upcoming Twinkle space mission to advance the understanding of gaseous planets in the "cool" regime (which we define as being between 200 and 1000 K), through a dedicated spectroscopic survey consisting of a statistically meaningful sample of such planets. The sample will cover a wide range of temperatures, and include planet sizes ranging from sub-Neptunes to Jovian planets. Twinkle will obtain transmission spectra in the $0.5$-$4.5\mu$m wavelength range. Such a survey would have the potential to verify and extend trends into the cool regime, providing key observational constraints for atmospheric models and planet formation theories. The paper is structured as follows: Section 2 outlines the scientific case for studying cool gaseous exoplanets, which is followed by a brief description of the Twinkle Space Telescope in Section 3. Section 4 describes the construction of a preliminary candidate list based on known exoplanets spanning the parameter space of the cool gaseous planet population. Sections 5 and 6 describe two simulated studies performed on planets from the preliminary candidate list. Section 5 explores the ability of Twinkle to constrain atmospheric metallicity and recover an injected metallicity trend, whilst Section 6 examines Twinkle’s sensitivity to detecting clouds in gaseous planets over a range of sizes and temperatures.
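As a quick reference for the detectability arguments above, Equations 1 and 2 can be combined in a few lines; the following minimal sketch uses illustrative planet parameters (not drawn from the survey sample) and SI units.

```python
# Sketch of Equations 1 and 2: pressure scale height and the approximate
# amplitude of transmission spectral features. Constants in SI units.
K_B = 1.380649e-23        # Boltzmann constant [J/K]
AMU = 1.66053907e-27      # atomic mass unit [kg]
R_JUP = 6.9911e7          # Jupiter radius [m]
R_SUN = 6.957e8           # solar radius [m]

def scale_height(T_eq, mu, g):
    """Equation 2: H = k_B * T_eq / (mu * g), with mu in amu and g in m/s^2."""
    return K_B * T_eq / (mu * AMU * g)

def feature_amplitude(R_p, R_s, H, n=5):
    """Equation 1: A_p = 2 * R_p * n * H / R_s^2, the fractional transit-depth
    change of a typical feature spanning n ~ 5 scale heights."""
    return 2.0 * R_p * n * H / R_s**2

# Illustrative cool sub-Saturn: T_eq = 600 K, H2/He atmosphere (mu = 2.3),
# g = 10 m/s^2, R_p = 0.5 R_J, around a Sun-like star.
H = scale_height(T_eq=600.0, mu=2.3, g=10.0)
A_p = feature_amplitude(R_p=0.5 * R_JUP, R_s=R_SUN, H=H)
print(f"H = {H/1e3:.0f} km, feature amplitude = {A_p*1e6:.0f} ppm")
```

For these illustrative inputs the sketch gives $H \approx 220$ km and a feature amplitude of roughly 150 ppm, consistent with the statement that cooler (smaller-$H$) atmospheres yield weaker transmission features.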
## 2 Cool gaseous planets

Spectroscopic observations of “cool” gaseous planets provide the opportunity to shed light on the physical and chemical processes that govern H2-He dominated atmospheres at low temperatures, and on the formation histories of this population.

### 2.1 Categorisation

In this paper we sub-categorise the cool gaseous planets into nine sub-groups based on size and temperature. A survey of cool gaseous planets across the full parameter space of size and temperature will allow the possibility of trends to be elucidated with respect to both parameters. We limit the lower bound of temperature we consider to 200 K. Below this level, the temperatures are generally beyond the outer limits of the "habitable zone", corresponding more closely to those of the cold gas and ice giants of our solar system; in practice, there are hardly any known transiting planets below this lower limit. Our upper temperature bound is 1000 K, which previous studies have used to define the boundary between hot Jupiters and cool giants (Thorngren et al., 2019; Wallack et al., 2019). Avoiding terms like "temperate" or "warm", which have no consensus definitions, we call all planets in this temperature range "cool". We choose to further sub-divide the large cool temperature regime into three distinct temperature brackets, C1 (200-500 K), C2 (500-750 K) and C3 (750-1000 K), as distinctive patterns of chemical and physical processes are likely to vary with temperature. The C1 category would encompass planets in the "habitable zone". In terms of size, we include established planetary classes with radii above the Kepler radius valley mid-point ($1.75~{}R_{\oplus}$): sub-Neptunes ($1.75~{}R_{\oplus}-3~{}R_{\oplus}$), Neptunes ($3~{}R_{\oplus}-0.5~{}R_{\text{J}}$) and Jovians ($0.5~{}R_{\text{J}}-3~{}R_{\text{J}}$). These are planets where primary H2-He dominated atmospheres are likely to be the norm (Fulton et al., 2017; Gupta & Schlichting, 2019). Their low mean molecular weights will mitigate some of the challenges associated with observing cool atmospheres, and hence they are favoured over rocky planets in this temperature regime for transmission spectroscopy.

### 2.2 The workings of cool atmospheres

In the low temperature regime we would expect molecules such as NH3 and CH4 to dominate over N2, CO2 and CO in thermochemical equilibrium, with final abundances modulated by bulk elemental composition (higher metallicities tend to favour CO and particularly CO2 over CH4, and N2 over NH3) or the C/O ratio (Madhusudhan, 2012; Moses, 2014). However, at lower temperatures reaction rates slow, such that the timescale for reaching chemical equilibrium increases compared to hotter atmospheres. Competing disequilibrium processes such as transport processes (e.g. convection and eddy diffusion) and photochemistry would be expected to have stronger effects than in hotter atmospheres (Prinn & Barshay, 1977; Zahnle & Marley, 2014). Molecules that tend to occur at low temperatures, like CH4 and NH3, are also more sensitive to photochemistry than their hot counterparts (CO and N2) (Moses, 2014). Such processes can change the composition, radiative balance and temperature-pressure profiles (Moses, 2014). HCN may be a significant molecule in cooler atmospheres as a result of coupled NH3-CH4 photochemistry, and CO may occur in the IR photosphere through CH4-H2O photochemistry or transport-induced quenching, but always at lower abundances than CH4 (Moses, 2014).
The quench level is the atmospheric level where the chemical equilibrium timescale for a given reaction first matches that for vertical (or horizontal) mixing; the molecular abundances at this level are then transported to higher altitudes, resulting in a complex disequilibrium composition (also modulated by photochemistry) in the upper layers potentially probed in transmission. In these upper layers, photochemical timescales may be expected to be shorter than chemical equilibrium timescales, so we might expect to see the byproducts of photochemistry at greater levels than in hotter atmospheres. From the breakdown of the photosensitive molecules CH4 and NH3, complex hydrocarbons and nitriles are more likely to occur in colder than in hotter atmospheres (Moses, 2014). This could include high-altitude photochemically-produced hazes which may modulate the energy balance of the planet. Photochemical reactions can also result in species that cause atmospheric warming and inversions, or conversely could act as coolants (e.g. cooling by C2H2 and C2H6, byproducts of methane photolysis in the atmosphere of Jupiter). Cloud condensates may form, reflecting the condensation profiles of low temperature molecules, giving rise to water, methane or ammonia clouds, which will also impact albedo and energy balance. The presence of clouds will also impact the measured quantities of the condensed species at the higher altitudes probed by the spectrum. The exact mechanisms and chemical pathways for disequilibrium chemistry remain an area of active research, as highlighted by the detection and subsequent interpretation of SO2 in the atmosphere of the hot Saturn WASP-39 b by the JWST Early Release Science (ERS) team (Tsai et al., 2023; Rustamkulov et al., 2023; Alderson et al., 2023). Therefore, while disequilibrium processes are expected, and sophisticated atmospheric models exist to simulate transport-induced quenching (e.g. Moses et al., 2011; Drummond et al., 2020; Zamyatina et al., 2023) and photochemistry (e.g. Tsai et al., 2017), there are few if any observational constraints on the chemical kinetics from exoplanet observations. Further to this, there is a present lack of data on how planet size, and therefore gaseous envelope size, would affect these processes. A large and diverse spectroscopic survey of cool gaseous planets is ideally placed to find such "smoking guns".

### 2.3 Planet formation quandaries

Clues to the formation history of an exoplanet are encoded in its composition and therefore its atmospheric spectrum. Elemental ratios, such as the C/O ratio, may be able to locate a planet's origin relative to different "ice lines" in the protoplanetary disk (e.g. Öberg et al., 2011) and could be measurable in an exoplanet atmosphere. The C/O ratio may be complicated by planet migration and/or planetesimal pollution. It may be possible to disentangle such pollution effects from origin location effects by examining a range of element ratios (e.g. Turrini et al., 2021; Pacetti et al., 2022). Measurement of the atmospheric metallicity may also provide evidence for the formation mechanism. In core-accretion scenarios, the atmospheric metallicity is expected to be increased compared to the host star (Thorngren et al., 2016), whereas in gravitational instability scenarios we would expect a near-stellar metallicity.
Current characterisations of the mass-metallicity relationship (e.g. Wakeford et al., 2018; Welbanks et al., 2019) are supportive of core-accretion, but have large uncertainties and are derived mostly from hot giant exoplanets. Theoretical structural evolution models (Thorngren et al., 2019) indicate that the mass-metallicity trend should continue in planets $<$ 1000 K; however, there is currently limited data to test and confirm this. Cool gaseous planets may present a challenge to planet formation theories. To hold onto an H2-He atmosphere requires rapid formation of a massive core $\geq$ 10 $M_{\oplus}$ (e.g. Pollack et al., 1996). Traditional core accretion theory holds that such cores are more likely to form beyond the ice line, where water ice adds to bulk and adhesion. However, many gaseous planets ranging from sub-Neptunes, through Neptune-sizes to Jupiter-sizes are found with equilibrium temperatures that would put them within the ice line. Indeed, Hill et al. (2018) found $>$ 70 planets of size $>$ 3 $R_{\oplus}$ in the habitable zones of G, K and M-type stars, with occurrence rates ranging from 6 to 11.5% depending on the stellar class. A planet formation quandary therefore exists in explaining the formation of these planets, requiring some modifications to basic core accretion models. Another problem is the presence of gas giants around M-dwarf stars (many of which are in the cool regime) (e.g. Kanodia et al., 2023), where core accretion models predict slower accretion rates (e.g. Ida & Lin, 2005; Burn et al., 2021) that would make large core formation challenging on the timescale of the disc lifetime. Gravitational instability is also a potential pathway to forming giant exoplanets (Boss, 1997) and has seen renewed interest due to its ability to explain the existence of the growing number of M-dwarf gas giant exoplanets. For gaseous planets within the water ice-line, formation scenarios include core-accretion beyond the water-ice line followed by disc-migration (Paardekooper & Johansen, 2018), or core formation interior to the water-ice line, followed by in-situ enrichment via gas, dust and pebble accretion close to the host star (Knierim et al., 2022), along with other variations of the above models proposed in recent years. Compositional information including atmospheric metallicity and elemental ratios such as the carbon-to-oxygen ratio (C/O) may therefore shed light on planetary formation and evolution processes in this regime.

### 2.4 Moons and habitability

Giant planets have long been postulated to be likely exomoon hosts (Heller & Pudritz, 2015; Spalding et al., 2016; Hill et al., 2019; Saillenfest et al., 2023), and although multiple exomoon candidates have been identified, there has yet to be a definitive detection (Sucerquia et al., 2020; Heller, 2020; Rovira-Navarro et al., 2021; Kipping & Yahalomi, 2023). Though moons can form around planets of any size, cool gaseous planets may have a higher probability of hosting exomoons, owing to their having probably migrated inwards a shorter distance than hotter planets of a similar size, and therefore being less likely to have undergone disruption of the orbits of, or ejection of, their moons (Spalding et al., 2016). Furthermore, if such moons are sufficiently large, they themselves may have atmospheres, generated through mass transfer from their parent planet or, more likely, outgassing or volcanism as a result of tidal heating.
Transmission spectroscopy may therefore provide a method by which exomoons could be detected around cool gaseous planets, with typical products of volcanism (including sodium- and potassium-bearing species) (Rovira-Navarro et al., 2021) unlikely to feature in cooler gaseous atmospheres. Thus cool giant spectroscopy may potentially yield evidence for the presence of exomoons. However, investigating this fascinating possibility requires a determination of the abundance of potential volcanic tracers and subsequently their detection feasibility with current and future instrumentation (e.g. Twinkle, Ariel, JWST and ELTs). This is beyond the scope of the current paper and is left to future work. We also note that such exomoons could be in the habitable zone in some cases, and therefore could be locations for potential habitability. The habitability of cold giants themselves is an unexplored topic, and while liquid water layers or oceans such as those postulated for Hycean sub-Neptunes (Madhusudhan et al., 2021) can be ruled out on the basis of pressure and temperature, the possibility of aerial biospheres could be explored. This has previously been raised in the context of Jupiter (Sagan & Salpeter, 1976), and more recently in brown dwarfs (Yates et al., 2017; Lingam & Loeb, 2019), sub-Neptunes (Seager et al., 2021) and Venus (Greaves et al., 2021).

### 2.5 Previous observations

Relatively few cool gaseous planets have been studied spectroscopically to date. Sub-Neptunes are ubiquitous but small in size (reducing the $A_{\text{p}}$ factor in Equation 1), so despite the large number, there are few targets suitable for spectroscopic follow-up. Nonetheless, several sub-Neptunes in the "cool" regime have been studied spectroscopically. A non-exhaustive list includes: K2-18 b (Tsiaras et al., 2019; Benneke et al., 2019b; Bézard et al., 2020; Madhusudhan et al., 2023), GJ 1214 b (Kreidberg et al., 2014; Gao et al., 2023; Kempton et al., 2023), GJ 9827 d (Roy et al., 2023), HD 3167 c (Guilluy et al., 2021), HD 97658 b (Knutson et al., 2014b) and TOI-270 d (Mikal-Evans et al., 2023). Spectra of these planets have revealed greatly contrasting atmospheres. Observations conducted by HST, and more recently JWST, have shown the spectrum of GJ 1214 b to be flat well into the mid-infrared (Kreidberg et al., 2014; Gao et al., 2023), requiring significant cloud/haze production to explain, whilst HST spectra of K2-18 b showed a clear absorption feature at 1.4 $\mu$m relatively unimpeded by the presence of cloud. This feature has been inferred to be due to the presence of H2O (Tsiaras et al., 2019; Benneke et al., 2019b) or CH4 (Bézard et al., 2020; Blain et al., 2021), with recent JWST/NIRISS and NIRSpec observations strongly detecting CH4 (Madhusudhan et al., 2023). Absorption features at $\sim$1.4 $\mu$m, interpreted as due to H2O, have also been seen in HST spectra of GJ 9827 d, HD 3167 c and TOI-270 d, though observations spanning broader wavelengths are required to fully resolve the known degeneracy between H2O and CH4. Neptune to Jupiter-sized giant planets in the cool regime are also poorly characterised spectroscopically. In terms of temperature, such planets provide a "missing link" between the two well-studied planetary populations of hot Jupiter exoplanets and the giants of our own Solar System.
Previous spectra of such planets include those for GJ 436 b (Knutson et al., 2014a; Hu et al., 2015), GJ 3470 b (Benneke et al., 2019a), HD 106315 c (Guilluy et al., 2021; Kreidberg et al., 2022), HIP 41378 f (Alam et al., 2022), Kepler-51 b, d (Libby-Roberts et al., 2020), K2-33 b (Thao et al., 2023), WASP-29 b (Wong et al., 2022), WASP-80 b (Bell et al., 2023a), and WASP-107 b (Kreidberg et al., 2018; Piaulet et al., 2019; Spake et al., 2021), though many additional planets in this size regime have been the target of ground-based searches for metastable helium absorption (Vissapragada et al., 2022; Allart et al., 2023). Despite the increased number of spectra in the cool giant regime, these atmospheres remain poorly understood. Multiple planets, including HIP-41378 f, Kepler-51 b, d, K2-33 b and WASP-29 b, exhibit spectra that are flat and featureless across the HST/WFC3 wavelength range, while others exhibit clear or muted absorption features at $\sim$1.4 $\mu$m.

### 2.6 Surveys of cool gaseous planets

There is a scarcity of systematic surveys of cool gaseous planets with high precision spectra. Kammer et al. (2015) obtained Spitzer secondary eclipse measurements at 3.6 and 4.5 $\mu$m for five gas giants in the temperature range 980-1184 K. They used the atmospheric CH4/CO ratio as a marker of atmospheric metallicity, with results somewhat supportive of increased metallicity at lower masses. Wallack et al. (2019) performed a similar Spitzer study on five further gas giants with $T_{\text{eq}}<1000$ K. They found no evidence for a solar-system-like mass-metallicity relationship, but did find a relationship between inferred CH4/(CO+CO2) and stellar metallicity. More recently, Baxter et al. (2021) performed transmission photometry of 33 gaseous planets at 3.6 and 4.5 $\mu$m using Spitzer, of which 13 had temperatures between 500 and 1000 K. There was some evidence of a mass-metallicity relation: the cool planets ($<$ 1000 K) were generally biased towards lower mass and appeared to have higher metallicity, as well as lower eddy diffusion coefficients and a lack of methane compared to expectations. A lack of methane had previously been noted in a number of cool planets compared to equilibrium expectations, constituting the so-called “missing methane problem”. Methane has now been detected in two “cool” gaseous planets with JWST: K2-18 b and WASP-80 b (Madhusudhan et al., 2023; Bell et al., 2023b). More recently, a Cycle 2 JWST survey of seven giant planets in the “cool” regime orbiting M-dwarf stars has been planned (JWST Proposal 3171, PI: S. Kanodia). There is therefore a need for a homogeneous cool gaseous planet survey with wide wavelength coverage to further explore and constrain the relationship between planet mass and atmospheric metallicity, and open up this field of study. Cooler gaseous planets may provide more robust metallicity measurements than hotter gaseous planets, as they will be significantly less affected by the degeneracy in radius between poorly-understood radius inflationary effects and increasing metallicity (and thus mean molecular weight), which acts to suppress the atmospheric extent (Thorngren et al., 2016). Improved metallicity measurements may also help to validate and extend reported mass-metallicity trends (Wakeford et al., 2018; Welbanks et al., 2019) or support the absence of a trend (Edwards et al., 2022).
Furthermore, the temperature range spanned by cool gaseous planets is likely to aid in the exploration of cloud and haze coverage trends predicted from theoretical and laboratory work, hints of which have been seen in early HST observations (Dymont et al., 2022; Estrela et al., 2022; McGruder et al., 2023). Whilst the number of known cool giant planets is comparatively small, it has been steadily growing. In the last 18 months alone, TESS photometry has led to the discovery of intriguing cool giants such as the low-density warm Jovian TOI-1420 b (Yoshida et al., 2023), a warm Saturn TOI-199 b (Hobson et al., 2023) and several around M-dwarf stars (e.g. Powers et al., 2023; Harris et al., 2023; Han et al., 2023). Re-observation of TESS sectors during the second extended mission has enabled single transit and “duotransit” planetary candidates (Hawthorn et al., 2023), many with orbital periods longer than $\sim$15 days, to be confirmed, with follow-up efforts by ground-based surveys such as the Next Generation Transit Survey (NGTS) and the CHEOPS space telescope subsequently hunting down and constraining the true orbital periods of these candidates. Examples of “cool” planets found in this manner include the sub-Neptunes HD 22946 d (Garai et al., 2023) and HD 15906 b and c (Tuson et al., 2023), the Neptunes HD 9618 b and c (Osborn et al., 2023) and TOI-5678 b (Ulmer-Moll et al., 2023), and the Jupiters TOI-4600 b and c (Mireles et al., 2023) (planet c having a $T_{\text{eq}}$ of 191 K). However, compared to planets with hot atmospheres, cooler gaseous planets will have smaller scale heights and thus spectral features will give lower SNRs. Co-adding of multiple transit observations is frequently used to improve SNR, but increases the total observing time required. This is even more problematic for planets orbiting Sun-like stars, where orbital distances are greater and periods longer. As such, it is observationally more intensive to characterise cool gaseous planets (especially around Sun-like stars), and this is one reason why they are a challenging population. Dedicated spectroscopic surveys permitting repeat observations over several years would be key to opening up this population to detailed characterisation and obtaining sufficient homogenized samples for population-level studies. Two such dedicated surveys are planned in the coming decade: the ESA Ariel mission, due for launch in 2029, and the Twinkle mission, which will precede it with a planned launch in 2025.

## 3 Twinkle

Developed commercially by Blue Skies Space Ltd. (BSSL), Twinkle is a fast-tracked satellite based on heritage components that is expected to characterise many exoplanetary and solar system targets during its nominal 7-year lifetime (Edwards et al., 2019a; Stotesbury et al., 2022). Expected to launch in late 2025, the spacecraft will carry a 0.45-m diameter primary mirror, with an inner sanctum that is actively cooled to < 90 K. Twinkle has a spectrometer with two simultaneously operating channels across the visible and infrared: CH0 covering 0.5-2.4 $\mu$m at $R\leq 70$ and CH1 covering 2.4-4.5 $\mu$m at $R\leq 50$, each channel having its own grism element (Stotesbury et al., 2022). Twinkle will therefore expand on the total spectral wavelength coverage of the HST WFC3 G102 (0.8-1.15 $\mu$m) and G141 (1.075-1.7 $\mu$m) gratings by a factor of $\sim 5$, whilst retaining similar spectral resolution.
This will open up the opportunity to potentially break degeneracies between H2O and CH4 that are known to exist in WFC3 observations, whilst also enabling the detection of strong absorption features from molecules such as CO2 (2.0, 2.7, 4.3 $\mu$m), CO (2.34, 4.67 $\mu$m) and NH3 (1.5, 2.0, 2.3, 3.0 $\mu$m) which lie outside the wavelength range covered by WFC3. Twinkle has a field of regard centred on the anti-Sun vector encompassing $\pm 40^{\circ}$ about the ecliptic (Edwards et al., 2019a). The primary exoplanet survey mission will take place in the first 3 years of operation. Twinkle will therefore be uniquely positioned to provide homogeneous spectroscopic characterisation of a large number of exoplanetary atmospheres, something that will be challenging to achieve with JWST due to competition with other astrophysical disciplines for valuable telescope time. Launching several years prior to the European Space Agency’s M4 mission, Ariel, which has a 1-m class primary mirror and wavelength coverage from 0.5-7.8 $\mu$m (Tinetti et al., 2022), Twinkle will additionally act as a useful precursor, observing many targets that fall within the current realisation of the Ariel target list over a substantially shared region of wavelength space. Consequently, insights gained from Twinkle may be useful for informing future iterations of the Ariel target list, allowing the combined science output of the two missions to be optimised. With the capability to provide the first large homogeneous survey of exoplanet atmospheres, we seek to explore Twinkle’s ability to identify key molecules and elucidate trends in a population level study of cool gaseous planets, which we propose to integrate into the science plan of the Twinkle Extrasolar Survey (Twinkle collaboration, in prep.).

## 4 Establishing a candidate list for the Twinkle Cool Gaseous Planet Survey

Candidate list construction for the proposed Twinkle cool gaseous planet survey uses the database of confirmed planets from the NASA Exoplanet Archive (Planetary Systems Composite Table, accessed 22/07/2022) to establish a preliminary target list of known planets. Transiting planets are selected based on three criteria:

* 1) the existence of "transit" listed as the discovery method;
* 2) the presence of a non-zero transit depth;
* 3) the presence of a transit duration value,

with any planet not meeting one or more of these criteria being filtered out. Planets with radii < 1.75 $R_{\oplus}$ are also removed, resulting in a list of transiting sub-Neptunian, Neptunian and Jovian class planets (see Table 1). To obtain the sample observable with Twinkle, we perform a cut that eliminates planets with host stars outside $\pm 40^{\circ}$ of the ecliptic. The remaining 383 planets with radii between $1.75~{}R_{\oplus}$ and 3 $R_{\text{J}}$ and $T_{\text{eq}}$ between 200 and 1000 K form an initial Twinkle cool gaseous planet candidate list. The candidate list will be modified prior to launch as new discoveries are made. The initial candidate list is subjected to an SNR study, used to identify the number of transits required for each planet to achieve atmospheric detectability with Twinkle. The findings of this study are then used to further filter the candidate list, leaving only planets that can be observed at or above the target SNR threshold during Twinkle’s mission lifetime.
This includes a cautious estimate of Twinkle’s observing efficiency, scaling up the number of transits required to meet the SNR threshold by a uniform factor for each planet and resulting in our final lists of suitable and preferred candidates for the primary 3-year and extended 7-year Twinkle exoplanet surveys.

Table 1: Population statistics for transiting exoplanets [derived from NASA Exoplanet Archive Planetary Systems Composite Table accessed 22/07/2022]. Total numbers are shown first, and the corresponding numbers of cool gaseous planets accessible within the field-of-regard (FOR) of the Twinkle space telescope are shown in brackets. The equilibrium temperatures for this table were obtained from the NASA archive where available or otherwise calculated from stellar and orbital parameters (assuming an albedo of 0.3). At the time of submission, the number of cool gaseous planets in the Twinkle FOR had increased from the 383 shown to 416.

| | Sub-Neptunian ($1.75~{}R_{\oplus}\leq R_{\text{p}}<3~{}R_{\oplus}$) | Neptunian ($3~{}R_{\oplus}\leq R_{\text{p}}<0.5~{}R_{\text{J}}$) | Jovian ($0.5~{}R_{\text{J}}\leq R_{\text{p}}<3~{}R_{\text{J}}$) |
|---|---|---|---|
| Hot ($T_{\text{eq}}\geq 1000$ K) | 222 | 78 | 499 |
| C3 ($750<T_{\text{eq}}<1000$ K) | 388 (75) | 117 (30) | 84 (40) |
| C2 ($500<T_{\text{eq}}\leq 750$ K) | 528 (109) | 166 (32) | 54 (22) |
| C1 ($200<T_{\text{eq}}\leq 500$ K) | 309 (59) | 118 (7) | 75 (9) |
| Cold ($T_{\text{eq}}\leq 200$ K) | 1 | 3 | 6 |

### 4.1 Establishing candidate planet SNR

Before examining the number of transits needed for each planet, we need to decide on a threshold SNR for spectral feature detection, where the SNR is the ratio of the amplitude of a typical spectral feature $A_{\text{p}}$ to the noise on the transit depth (1$\sigma$ error bar) $\sigma_{\text{p}}(\lambda)$ at a given spectral binning. If we assume a typical spectral feature corresponds to 5 scale heights, then we can use Equation 1 to find $A_{\text{p}}$ with $n=5$, and $H$ for a given planet is obtained from Equation 2. The error bar for a given target SNR is given by

$|\text{error bar}|=\frac{2\cdot 5\cdot H\cdot R_{\text{p}}}{R_{\text{s}}^{2}}\cdot\frac{1}{\text{SNR}_{\text{target}}}$ (3)

We wished to verify that SNRs calculated this way corresponded to detectability of prominent molecules at high significance when simulated using an atmospheric radiative transfer code with parameters retrieved via Bayesian parameter estimation ("spectral retrieval"). The latter reflects the method of analysis that would be applied to a real observed transmission spectrum. We decided to investigate nominal SNRs of 3 and 5. To this end a subset of the Twinkle cool gaseous candidates (listed in Table 2) have their atmospheres simulated using TauREx3 (hereafter TauREx) (Waldmann et al., 2015; Al-Refaie et al., 2021) as described further below. For a given planet, in the calculation of $H$, $T_{\text{eq}}$ and $g$ are obtained and derived respectively from system values used by the Twinkle radiometric tool, TwinkleRad (Stotesbury et al., 2022), based on ExoRad (Mugnai et al., 2022), whilst $\mu$ is obtained directly from the TauREx atmospheric model. This allows calculation of $A_{\text{p}}$ for each planet, with error bars for SNRs of 3 and 5 obtained using Equation 3. For each planet, the atmospheric model and resulting model transmission spectrum are obtained as follows.
TauREx was run initially using a model with 100 plane-parallel atmospheric layers spanning pressures from $10^{-6}$ to $10^{5}$ Pa, an isothermal temperature-pressure (T-P) profile at equilibrium temperature, $T_{\text{eq}}$, and equilibrium chemistry set by the taurex_ace plugin based on the ACE equilibrium chemistry regime of Agúndez et al. (2012, 2020), generated using a solar C/O ratio and metallicity values obtained for each planet using the trend found in Welbanks et al. (2019). Altitude-dependent volume mixing ratios (VMRs) obtained this way for each molecule were then simplified to a single VMR value by taking the average across the profile (Figure 1), with this single value subsequently being used to set the free chemistry in the final model. The model was thus run again with the same initial conditions except this time with free chemistry, i.e. the fixed VMRs for the most abundant molecules (VMR $>$ $10^{-8}$), to give a final transmission spectrum. We use opacity cross-sections from ExoMol (Tennyson & Yurchenko, 2016) that include H2O, CH4, CO2, CO and NH3. In addition to molecular absorption, contributions from Rayleigh scattering and collision-induced absorption (CIA) between H2-H2 and H2-He were included. (This work utilizes molecular cross-sections from 0.3-50 $\mu$m sampled at R=15000, which can be found at https://www.dropbox.com/sh/13y33d02vh56jh2/AACh03L5h1QEbDYN7_-jMjBza/xsec/xsec_sampled_R15000_0.3-50?dl=0&subfolder_nav_tracking=1. Rayleigh scattering data for all included atmospheric molecules and CIA files from HITRAN 2011, available at the above link, are also used.) To simulate an observed spectrum, the resulting near-local thermal equilibrium (near-LTE) cloud-free atmospheric spectra were then binned across the wavelength range covered by both Twinkle spectroscopic channels to a fixed spectral resolution of $R$=50, approximating the performance of the instrument as detailed above. To these binned points error bars were then added according to Equation 3, for the SNR = 3 and SNR = 5 cases. An example of such a simulated observed spectrum is shown in Figure 2 together with the different contributions to the spectrum.

Figure 1: Modelled VMRs for HD 106315 c. Solid lines denote altitude-dependent chemical profiles under equilibrium conditions, whilst dashed vertical lines denote profile-averaged VMRs.

Figure 2: Forward modelled spectrum for candidate HD 106315 c, binned to R=50 from 0.5-4.5 $\mu$m. Molecular absorption components are shown at the binned resolution of the final spectrum. Also shown are the total contributions to the final spectrum from CIA (between H2-H2 and H2-He) and Rayleigh scattering (all atmospheric molecules), with the final spectrum shown in black.

We note here that although disequilibrium chemistry processes and clouds and hazes are expected to be present in sub-$1000$ K planetary atmospheres (Fortney et al., 2020; Dymont et al., 2022; Fleury et al., 2023), such processes are poorly constrained at present, and hence the extent to which they may impact an individual planet cannot be evaluated with accuracy at this time. We have therefore not attempted to include these processes in the atmospheric models created for this study, which should suffice for the purpose of establishing detectability of general spectral features. Bayesian spectral retrievals are then conducted on the binned forward-modelled spectra using the nested sampling algorithm nestle (https://github.com/kbarbary/nestle), initiated in ‘single’ mode with 150 live points.
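As an illustration of this retrieval machinery, a minimal runnable sketch of the nestle call is given below. The Gaussian toy likelihood and data are placeholders for the TauREx-backed likelihood and the binned spectra, and the two parameters stand in for, e.g., log10 VMRs of the retrieved molecules, with illustrative prior bounds.

```python
import numpy as np
import nestle   # https://github.com/kbarbary/nestle

# Minimal sketch of the nested-sampling call described above:
# 'single' ellipsoid mode with 150 live points.
rng = np.random.default_rng(0)
observed = -3.0 + rng.normal(0.0, 0.2, size=30)   # toy "data": true log-VMR = -3
sigma = 0.2

def prior_transform(u):
    # Map the unit hypercube to uniform priors on [-12, -1] per parameter.
    return -12.0 + u * 11.0

def loglike(theta):
    # Toy Gaussian log-likelihood; in the study this wraps a TauREx model.
    return -0.5 * np.sum(((observed - theta[0]) / sigma) ** 2)

result = nestle.sample(loglike, prior_transform, ndim=2,
                       method='single', npoints=150)
print(result.logz)   # log Bayesian evidence, compared across models below
```

The difference in `result.logz` between two such runs is the log Bayes factor used for the detection significances that follow.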
For each planet (with error bars corresponding to SNR = 3 or SNR = 5), the following retrievals are performed. Firstly, a baseline retrieval is performed with the “full atmospheric” model, containing all atmospheric constituents with VMRs $>$ $10^{-8}$ (which generally included H2O, CH4, CO2, CO and NH3). The remaining three retrieval models are each initiated in the same manner as the full atmospheric model, but with one of H2O, CH4 or CO omitted. We select these molecules as they are found to consistently have the highest VMRs in our forward models. By comparing the Bayesian evidence obtained to that of the full atmospheric model, detection significances could be ascertained for each molecule. The detection significance of the three molecules is obtained via the log Bayes factor (Trotta, 2008; Benneke & Seager, 2013), which is the difference in the log Bayesian evidence between the full atmospheric model and the model with the molecule omitted. We find that for all the selected planets (Table 2), CH4 is detected at > 5$\sigma$ in all cases, whether the error bars are derived from an SNR of 5 or 3. With SNR = 5, H2O is detected to $\geq$ 5$\sigma$ in all cases, but at SNR = 3, water detection significance falls to a minimum of $\sim 3\sigma$ (Table 2). Our molecular detectability study also reveals that for the eight planets studied, CO is never detected above $2\sigma$, irrespective of SNR. Given the high VMR of this molecule in the forward models (log[VMR${}_{\text{CO}}$] = -3 to -4), we attribute the weakness of detection to two factors. As can be seen in Figure 2, CO features at shorter wavelengths ($\sim$1.6 and 2.34 $\mu$m) are masked by spectral features from CH4 and H2O when all three molecules have similar atmospheric abundances, owing to CH4 and H2O having larger cross-sections. This challenge in robustly detecting CO is further compounded by the fact that the strongest observable band peaks at 4.7 $\mu$m, just beyond the wavelength range covered by Twinkle (Edwards et al., 2019a). Given that when we utilize error bars derived from an assumed SNR of 5 we obtain excellent detection significances for H2O and CH4 in all cases studied, we proceed by adopting SNR = 5 as the threshold to attain for all planets in the Twinkle initial candidate list.

### 4.2 Preliminary candidate list

We next take the initial 383 candidates and estimate the number of transits needed in each case to reach an SNR of 5. To calculate SNR we again find $A_{\text{p}}$ for each planet using Equation 1, but this time we obtain the transit depth errors from the radiometric tool, TwinkleRad (Stotesbury et al., 2022) [via B. Edwards, private communication]. TwinkleRad gives the 1$\sigma$ error bar values on the transit depth for a single transit. These values account for photon noise and instrumental effects and assume 100% observing efficiency. The error bars from TwinkleRad are given at the “native” wavelength grid of its Twinkle model, which has a median resolving power of 42 (ranging from 18-70). Model transmission spectra are thus binned to this native grid.
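The conversion from a log Bayes factor to the equivalent "sigma" values quoted in Table 2 below is not spelled out in the text; a common calibration (the one tabulated by Benneke & Seager 2013, based on the Sellke et al. p-value bound) can be sketched as follows, assuming SciPy is available.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcinv

def sigma_from_log_bayes(delta_lnZ):
    """Convert a log Bayes factor (difference in log-evidence between the
    full model and the model with a molecule removed) to an equivalent
    two-sided Gaussian significance. Valid for Bayes factors B > 1."""
    B = np.exp(delta_lnZ)
    # Invert the bound B = -1 / (e * p * ln p) for the p-value (p < 1/e).
    f = lambda p: -1.0 / (np.e * p * np.log(p)) - B
    p = brentq(f, 1e-300, np.exp(-1.0) - 1e-12)
    return np.sqrt(2.0) * erfcinv(p)

print(sigma_from_log_bayes(np.log(150.0)))   # a Bayes factor of 150 -> ~3.6 sigma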
Table 2: Detection significances (in $\sigma$) of H2O and CO obtained from retrievals conducted on simulated atmospheres for eight planets spanning the cool gaseous planets parameter space. Detection significance of CH4 is >5$\sigma$ in all cases.

| Planet name | H2O (SNR $\geq$ 3) | H2O (SNR $\geq$ 5) | CO (SNR $\geq$ 3) | CO (SNR $\geq$ 5) |
|---|---|---|---|---|
| WASP-11 b | 4.4 | >5.0 | 1.8 | 1.7 |
| HD 63935 b | >5.0 | >5.0 | 1.7 | 1.7 |
| HD 106315 c | >5.0 | >5.0 | 1.8 | 1.3 |
| TOI-1130 c | 3.5 | >5.0 | 1.8 | 1.7 |
| GJ 436 b | 4.2 | >5.0 | 1.7 | 1.9 |
| HD 136352 c | 4.5 | >5.0 | 1.6 | 2.0 |
| AU Mic c | 3.1 | 5.0 | 1.8 | 1.9 |
| TOI-178 g | 4.3 | >5.0 | 2.0 | 2.0 |

Since the error on the transit depth is in reality wavelength dependent, so is the SNR. However, a single representative value was needed for a given planet for calculation of the number of transits. For this we use the lower quartile value for SNR across the full wavelength range covered by both channels. This ensures that $75\%$ or more of the spectrum achieves or exceeds the target SNR of the observation and that individual planets are not negatively biased by a single low-impact data point. In order to account for loss of data during Earth-occultation events that will arise due to Twinkle’s low-Earth orbit, we scale the TwinkleRad error bars by a factor of 1/$\sqrt{0.75}$. This is done to simulate a conservative observing efficiency of 75%, assumed to be the case for all planets within our observing sample. Consequently, we re-calculate the representative single transit SNR of each planet, then obtain the number of transits, $N_{\text{t}}$, required to reach a threshold SNR of 5, rounded up to the nearest integer:

$N_{t}=\left(\frac{5}{\text{SNR}_{1}}\right)^{2}$ (4)

where $\text{SNR}_{1}$ is the lower quartile SNR for a single transit. We combine this information with the orbital period for each planet to compute whether the required number of transits could be observed within 3 or 7 years. This way we obtain a final candidate list of 36 and 57 planets for the primary (3-year) and extended (7-year) missions respectively. We show the distribution in the radius-temperature plane of the 57 candidates in the Twinkle 7-year extended mission list in Figure 3. The final candidates are further separated into five distinct tiers using the following assigned criteria based on $N_{\text{t}}$ and the total integration time, $T_{\text{int}}$. The latter is calculated assuming a single transit observation lasts 3 $\times T_{\text{14}}$, where $T_{\text{14}}$ is the transit duration. The expectation is that a subset of higher tier candidates will be observed by Twinkle during its lifetime.

* Tier 1: $N_{\text{t}}$ < 25 and $T_{\text{int}}\leq 25$ days
* Tier 2: $N_{\text{t}}$ < 50
* Tier 3: $N_{\text{t}}$ < 100
* Tier 4: $N_{\text{t}}$ < 150
* Tier 5: any other candidates

The candidates are listed with their $N_{\text{t}}$, $T_{\text{int}}$ and tier assignments in Table 5 and Table 6. These planets therefore give the preliminary candidate list for the Twinkle cool gaseous survey and for the studies in this paper. We note that 26/36 planets of the 3-year survey and 46/57 for the 7-year survey do not have any current transmission spectra. However, since this analysis was completed, the number of cool gaseous targets in the Twinkle FOR has increased by 8% (Table 1), and will continue to do so up to the time of the mission. Our sample size thus represents a conservative value, with the final target list likely to be somewhat larger. While it is possible that some of these planets will be observed with JWST prior to the launch of Twinkle, the Twinkle survey being a homogeneous survey, inclusion of JWST-observed planets in the sample would still be important.
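To make the transit-count and tier assignment concrete, a minimal sketch following Equation 4 and the criteria above is given below; input values are illustrative, and the feasibility check is a simple necessary condition (the required transits must at least occur within the mission).

```python
import math

def transits_and_tier(snr1_raw, period_days, t14_hours,
                      efficiency=0.75, mission_years=3):
    """snr1_raw: lower-quartile single-transit SNR assuming 100% observing
    efficiency (from the TwinkleRad error bars). Returns the number of
    transits N_t, integration time T_int [days], tier, and feasibility."""
    snr1 = snr1_raw * math.sqrt(efficiency)   # error bars scale by 1/sqrt(eff)
    n_t = math.ceil((5.0 / snr1) ** 2)        # Equation 4, rounded up
    t_int = n_t * 3.0 * t14_hours / 24.0      # each visit lasts 3 x T14
    feasible = n_t * period_days <= mission_years * 365.25
    if n_t < 25 and t_int <= 25.0:
        tier = 1
    elif n_t < 50:
        tier = 2
    elif n_t < 100:
        tier = 3
    elif n_t < 150:
        tier = 4
    else:
        tier = 5
    return n_t, t_int, tier, feasible

print(transits_and_tier(snr1_raw=1.4, period_days=12.0, t14_hours=3.5))
```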
We note that large numbers of transits are required for many planets in the sample, which may ultimately not be practical in the real mission. However, we expect Tier 1 planets (which range in $N_{\text{t}}$ from 1 to 22 transits) would be practical. Tier 2 planets may also be quite possible. Thus while the full sample shown here may not be ultimately adopted, a significant sub-sample of high-tier planets exists, covering a wide range of size and temperature, which would make a sizeable survey.

Figure 3: Population of known exoplanets [NASA Exoplanet Archive, accessed 18/05/23], overlain by candidates targeted by this study for the primary 3-year and extended 7-year Twinkle exoplanet surveys. The different temperature regimes are indicated: Hot, C3, C2, C1 and Cold. C1, C2 and C3 are three sub-categories of the "cool" regime. The bounded region indicates the parameter space of cool gaseous planets as defined in this paper. The equilibrium temperatures shown for Twinkle 3- and 7-year mission candidates are obtained from methodologies outlined in Edwards et al. (2019b) and used in the TwinkleRad database. For all other planets, the equilibrium temperatures shown are obtained from the NASA archive where available or otherwise calculated from stellar and orbital parameters (assuming an albedo of 0.3).

## 5 Metallicity trend detection study

One of the key properties for planet characterisation is metallicity, which is commonly split into two regimes: bulk, describing the heavy-element content of the planet as a whole, and atmospheric, pertaining to the atmosphere alone. As mentioned previously, atmospheric metallicity and elemental ratios can provide clues to the formation mechanism, location and migration history of the planet. Atmospheric metallicity and elemental ratios, e.g. the C/O ratio, also set the initial elemental mix that controls the abundances of molecular species seen in thermochemical equilibrium. The possible inverse relationship between planet mass and metallicity is consistent with core accretion scenarios. In this study we take the Twinkle cool gaseous planet preliminary sample and investigate Twinkle’s ability to elucidate a mass-metallicity trend in this sample, if one exists.

### 5.1 Elucidation of a mass-metallicity trend

We explore here whether an injected atmospheric metallicity trend can be recovered from a simulated atmospheric survey of cool gaseous planets using the Twinkle 7-year candidate list (Table 5 and Table 6). The candidate list is well-suited for this investigation, spanning two orders of magnitude in mass. We construct atmospheric forward models for each planet on the list, using TauREx, binning these to the native Twinkle wavelength grid and adding re-scaled wavelength-dependent error bars obtained from TwinkleRad to the resulting spectrum. The scaling accounts for the assumed observing efficiency of Twinkle (75%) and the unique number of transits, $N_{\text{t}}$, required for each planet in the sample to reach the desired SNR threshold of 5. Our forward models have molecular abundances dictated solely by ACE equilibrium chemistry initialised with C/O = 0.54 (solar) and a unique metallicity value for each planet obtained using the H2O abundance mass-metallicity trend (including WASP-39 b) of Wakeford et al. (2018). We use this reference since the metallicity values are given in units of solar metallicity, and the retrieved metallicities from TauREx are given in the same units.
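As an illustration of how the injected values might be assigned, the sketch below draws metallicities from a power law of the generic form used in such studies. ALPHA and BETA are placeholder coefficients for illustration only, not the fitted values of Wakeford et al. (2018), which are not reproduced here.

```python
import numpy as np

# Hypothetical mass-metallicity power law:
#   log10(Z / Z_sun) = ALPHA + BETA * log10(M_p / M_J)
# ALPHA and BETA are placeholders; the study uses the fitted H2O-abundance
# trend of Wakeford et al. (2018).
ALPHA, BETA = 0.8, -1.0

def injected_metallicity(mass_mjup):
    """Metallicity in solar units (the convention shared by TauREx)."""
    return 10.0 ** (ALPHA + BETA * np.log10(mass_mjup))

masses = np.array([0.02, 0.1, 0.5, 2.0])   # illustrative planet masses [M_J]
print(injected_metallicity(masses))        # lower mass -> higher metallicity
```

Each forward model is then initialised with its planet's trend value, and the retrieval's task is to recover it.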
Ten planets in the sample do not have currently measured masses; for these, the estimated masses given in the NASA Exoplanet Archive are used (these are the planets for which no errors are given in Table 5 and Table 6). Model atmospheres are again generated using a simple isothermal T-P profile spanning 100 plane-parallel layers from $10^{-6}$ to $10^{5}$ Pa ($10^{-11}$ bar to 1 bar) and assume cloud-free conditions; however, unlike the final models used in subsection 4.1, we retain the full, altitude-dependent VMR profiles for each chemical species. Transmission spectra are then generated from this atmospheric model by including molecular absorption, CIA from both H2-H2 and H2-He, and Rayleigh scattering. We subsequently perform Bayesian spectral self-retrievals to retrieve for atmospheric metallicity and C/O ratio. Each retrieval is initiated with the parameters and priors listed below in Table 3, with atmospheric metallicity and C/O retaining the same initialisation value and prior range for the full sample.

Results are obtained for all candidates in the Twinkle 7-year candidate list, with the exception of K2-138 f, which is subsequently excluded from any analysis completed on the sample thereafter. We found persistent errors halting the retrieval of K2-138 f. While the exact cause of the error was not established, we note that K2-138 f has a very low mass, low gravity and an extremely large scale height of 1000 km, which could possibly be related to the computational failure. We plot the results obtained, along with their 1$\sigma$ error bars (mass from the literature, metallicity from the retrieval results), in Figure 4, together with the injected mass-metallicity trend. In this figure the central points are the medians of the posterior distributions and the error bars encompass the 16th-84th percentile ranges. Although the retrieved metallicity is found to be over-estimated in all but four cases across the sample, this is a small effect, with 51/56 planets (91% of the population) having the truth value within the $1\sigma$ confidence interval.

This study indicates that Twinkle has the capability of recovering a mass-metallicity trend in cloud-free cool gaseous planet atmospheres. However, we acknowledge that some planets may have hazes and clouds. Also, our assumption of equilibrium chemistry is likely a simplification given the unknown nature of such atmospheres and the likelihood of disequilibrium processes. Further studies could therefore examine the robustness of recovering a mass-metallicity trend under a more complex variety of atmospheric scenarios, including a mix of cloudy, cloudless and disequilibrium-chemistry effects.

Table 3: Initial values and prior ranges of free parameters in the retrieval models used in the mass-metallicity study. Where factor bounds are used, the values specified are multiplied by the truth value.

Parameter | Input | Prior Type | Bounds
---|---|---|---
Metallicity, Z | 50 | Uniform, Linear | [0.01, 750]
C/O ratio | 0.54 | Uniform, Linear | [0.1, 5.0]
Radius | $R_{\text{p}}$ | Uniform, Factor | [0.8 $R_{\text{p}}$, 1.2 $R_{\text{p}}$]
Temperature | $T_{\text{eq}}$ | Uniform, Factor | [0.8 $T_{\text{eq}}$, 1.2 $T_{\text{eq}}$]

Figure 4: Retrieved metallicity plotted against literature mass. The inset shows retrieved C/O ratio against literature mass. Central points are the medians of the posterior distributions, while error bars denote the 1$\sigma$ confidence interval (16th-84th percentile ranges).
Here we plot planets with currently measured masses as solid dots, whilst planets with masses estimated from M-R relations are plotted as crosses. Residuals with respect to the input trend are shown in panel 2, whilst the residuals normalised by the truth value are shown in panel 3.

### 5.2 C/O ratio retrieval

In addition to retrieving for atmospheric metallicity, our study also retrieves for the carbon-to-oxygen ratio (C/O). Retrievals use the truth value as the input value in all cases, but implement a broad, uninformative prior as shown in Table 3. We find that the truth value is recovered within the 1$\sigma$ confidence interval in all cases (as shown in the inset of Figure 4). We do note that for 7 planets the retrieved median values deviate strongly (by $\geq$ 0.135) from the injected truth, with large error bars. We find that in the majority of these deviant cases the posterior distributions were asymmetric with an extended positive tail, which may in part explain the over-estimation given by the median.

### 5.3 Exploring sources of bias in metallicity retrievals

To explore the slight over-estimation in retrieved metallicity seen for the bulk of the Twinkle 7-year candidate population, we first examine the posterior distributions generated from the nested-sampling retrieval results. Although there is some evidence of non-Gaussianity in the posterior distributions, as shown for TOI-1130 c in Figure 5, we find no evidence for systematic over-estimation of the median due to effects of sampling an asymmetric distribution. We therefore elect to investigate the effect, if any, that the modelling and retrieval process has in creating this bias, by varying combinations of the wavelength grid and spectral resolution of the modelled spectra. This approach is taken because atmospheres are initially generated in TauREx using cross-sections that span the wavelength range 0.3-50 $\mu$m at a spectral resolution of $R=15000$, resulting in a substantial loss of information from the model inputs when spectra are binned down to the specifications of the observing instrument. Three additional sets of models (Table 4, cases 2, 3 and 4) are generated using the wavelength ranges and spectral resolutions listed, for each planet in the Twinkle 7-year candidate list.

Figure 5: Retrieved joint posterior distribution for the Tier 1 candidate TOI-1130 c.

Table 4: Metallicity bias study test cases.

Case Number & Name | Wavelength Range | Wavelength Grid
---|---|---
1 Twinkle native | 0.5-4.5 $\mu$m | Twinkle native (average $R$ $\sim$ 42)
2 Twinkle - HST WFC3/G141 range | 1.1-1.7 $\mu$m | Twinkle native (average $R$ $\sim$ 44)
3 Twinkle $R=70$ | 0.5-4.5 $\mu$m | fixed $R=70$
4 Ariel $R=70$ | 0.5-7.8 $\mu$m | fixed $R=70$

Figure 6: The upper plot shows the statistical significance of cloud detection, derived from the Bayes-factor preference for a model with clouds over a model without clouds, for eight planets. The lower plot shows the temperature-radius plane for those eight planets, with red circles indicating cases where the detection significance is $>3\sigma$ while black circles denote cases at $<3\sigma$.

Here, case 1 is considered to be our baseline case, representing the performance of Twinkle based on current knowledge and modelling.
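Cases 3 and 4 rely on a fixed-resolving-power wavelength grid. The sketch below shows one way such a grid and a simple bin-average of a native-resolution spectrum might be constructed; it is not the TauREx binning routine itself, and the mock spectrum is purely illustrative.

```python
import numpy as np

def fixed_r_grid(lam_min: float, lam_max: float, R: float) -> np.ndarray:
    """Bin edges of a constant-resolving-power grid: dlam/lam = 1/R,
    i.e. edges spaced logarithmically, lam_{i+1} = lam_i * (1 + 1/R)."""
    n = int(np.ceil(np.log(lam_max / lam_min) / np.log(1.0 + 1.0 / R)))
    return lam_min * (1.0 + 1.0 / R) ** np.arange(n + 1)

def bin_spectrum(lam, depth, edges):
    """Average a native-resolution transit spectrum into the coarse bins."""
    idx = np.digitize(lam, edges) - 1
    keep = (idx >= 0) & (idx < len(edges) - 1)
    out = np.full(len(edges) - 1, np.nan)
    for i in np.unique(idx[keep]):
        out[i] = depth[keep][idx[keep] == i].mean()
    return out

edges = fixed_r_grid(0.5, 7.8, 70.0)       # case 4: Ariel-like range at R = 70
lam = np.linspace(0.5, 7.8, 20000)         # mock native-grid wavelengths [um]
depth = 1e-2 * (1 + 0.01 * np.sin(lam))    # mock transit-depth spectrum
print(len(edges) - 1, bin_spectrum(lam, depth, edges)[:3])
```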
We base our case 2 model on the widely-used HST WFC3/G141 instrument configuration to examine the effect of reducing the wavelength coverage compared to case 1 on the retrieved metallicity, retaining the wavelength-resolution grid of Twinkle such that any changes in retrieved metallicity can be attributed solely to the reduction in wavelength coverage. We use cases 3 and 4 to see the effect of increasing the wavelength coverage from that of Twinkle to that of Ariel (approximately doubling the wavelength range). For consistency, cases 3 and 4 utilise a fixed $R$ of 70. This $R$ value does not reflect the true $R$ of Twinkle or Ariel (which will vary with wavelength), but is chosen as a nominal value for the purposes of this comparison. Increased wavelength coverage would be expected to boost the sensitivity of the retrieval to molecules such as CO (see subsection 4.1). This approach is taken rather than accurately modelling an observation of these targets with Ariel because of the differences in native spectral resolution between Twinkle and Ariel, which would inhibit the ability to isolate the dependence of any systematic findings on wavelength coverage. We keep the error bars on the spectra the same for each individual planet across all cases, and perform self-retrievals on the semi-physically-motivated models of cases 2, 3 and 4 in the same manner as described in section 5.

Our findings are as follows: for case 2, the average retrieved 1$\sigma$ error ranges for atmospheric metallicity are just over a factor of 3 higher than in case 1; yet despite this, values are typically found to be proportionately more over-estimated, with only 35/56 planets (62.5% of the population) having the truth value within the $1\sigma$ confidence interval. Comparisons between the baseline case (case 1) and case 3 show that the average retrieval error bars are comparable; the average 1$\sigma$ confidence interval is 5% greater in case 3 than in case 1. The number of planets with truth values within the 1$\sigma$ range is the same. Similar results are seen when comparing cases 3 and 4: in both cases 51/56 planets have truth values within 1$\sigma$ of the retrieved atmospheric metallicity values. However, the average retrieved 1$\sigma$ range is 10% smaller in case 4 compared to case 3. Importantly, we find that in a little over two thirds of the sample (38/56 planets), case 4 spectra, with broader wavelength coverage, yield metallicity values that lie closer to the truth than in case 3. While this suggests that increased wavelength coverage leads to an improvement in the ability to recover atmospheric metallicity from spectra, further investigation shows that typically the absolute difference between the retrieved metallicity and the truth value, $|Z_{\text{ret}}-Z_{\text{truth}}|$, is only marginally different between cases 3 and 4. We calculate this difference for each of the 56 planets in our sample using:

$\delta=|Z_{\text{ret}}-Z_{\text{truth}}|_{3}-|Z_{\text{ret}}-Z_{\text{truth}}|_{4}$ (5)

The average value of $\delta$ is 1.3, which is well within the quadrature sum of the 1$\sigma$ errors of the retrieved metallicities for each case (where the 1$\sigma$ error is taken to be half of the 1$\sigma$ confidence interval, which is itself the 16th-84th percentile range). This suggests that the differences between cases 4 and 3 are not significant.
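To make the quadrature-sum consistency check above concrete, a minimal sketch of the Eq. (5) comparison is given below. The arrays are mock data with hypothetical error levels, used only to illustrate the arithmetic; they are not the retrieval results themselves.

```python
import numpy as np

def delta_case_comparison(z_truth, z_ret3, z_ret4, sig3, sig4):
    """Eq. (5): per-planet change in |Z_ret - Z_truth| between cases 3 and 4,
    compared against the quadrature sum of the retrieval errors."""
    delta = np.abs(z_ret3 - z_truth) - np.abs(z_ret4 - z_truth)
    quad_err = np.sqrt(sig3**2 + sig4**2)
    return delta.mean(), (np.abs(delta) < quad_err).mean()

# Mock arrays for a 56-planet sample (illustrative values only)
rng = np.random.default_rng(0)
z_truth = 10 ** rng.uniform(0, 2, 56)     # injected metallicities [x solar]
sig3 = 0.3 * z_truth                      # hypothetical case-3 1-sigma errors
sig4 = 0.9 * sig3                         # case 4 assumed ~10% tighter
z3 = z_truth + rng.normal(0, sig3)
z4 = z_truth + rng.normal(0, sig4)
mean_d, frac = delta_case_comparison(z_truth, z3, z4, sig3, sig4)
print(f"<delta> = {mean_d:.2f}; fraction within quadrature error: {frac:.2f}")
```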
We conclude that the slight bias seen in retrieved metallicity is likely at least in part due to information loss from reduced wavelength coverage, although other possible causes may exist.

## 6 Cloud Detection Study

Clouds are thought to be present to some extent in the atmospheres of all planets (Helling, 2021) and are known to make a significant contribution to the overall planetary albedo, a key parameter in the determination of equilibrium temperature (Estrela et al., 2022). Capable of influencing temperature, and hence the energy budget of an atmosphere, clouds are intrinsically coupled to the T-P structure of the atmosphere and its chemical composition (Madhusudhan et al., 2016), with clouds of varying compositions expected to form as atmospheric species cross condensation fronts. Although the basic mechanisms by which cloud formation occurs are well formulated, the precise details and balance of pathways are currently debated (Helling, 2019, 2021). Consequently, the detection of clouds in cool gaseous planets, and the subsequent inference of the altitudes at which they are present, has the potential to provide insight into the atmospheric processes governing cloud formation in this regime.

We therefore investigate Twinkle’s capability to detect global cloud layers for a sub-sample of planets taken from the candidate list (Table 5 and Table 6). We selected one representative planet from each of the nine major sub-divisions shown in Figure 3, with the exception of C1-Jovian planets, for which no examples exist in our list. The selected planets are indicated by name in Figure 3. Our goal here is to find constraints on the pressure levels at which cloudy atmospheres can be distinguished from cloud-free ones, functioning also as a top-level search for potential trends in cloud detectability across the different planetary types.

Forward models are constructed using TauREx for each of the eight planets selected. Models are generated as previously described with isothermal profiles. ACE equilibrium chemistry is initialised with C/O=0.54 (solar) and unique metallicity values for each planet from the H2O mass-metallicity trend (including WASP-39 b) of Wakeford et al. (2018). The resulting altitude-dependent chemical profiles are then simplified to individual altitude-independent VMRs, as described in subsection 4.1, for H2O, CH4, CO2, CO, NH3 and N2, and the models are run again with fixed VMRs to generate the final transmission spectra. We include opacity contributions from molecules, Rayleigh scattering and CIA. In addition, an optically thick grey cloud layer is modelled at a given pressure level, which is varied between one of four pressures in different runs: $10^{4}$, $10^{3}$, $10^{2}$ and 10 Pa. In each case, the resulting forward-model spectra are then subjected to two Bayesian spectral retrievals. Both retrievals are conducted with identical initial input parameters and bounds, retrieving for all molecule VMRs, temperature and planet radius, with one retrieval model retrieving for the cloud pressure level while the other does not include clouds. Planetary radius and equilibrium temperature are fit with the same priors as in Table 3, whilst log-uniform priors with bounds [$10^{-12}$, $10^{-1}$] are used for all molecule VMRs (H2O, CH4, CO2, CO, NH3 and N2). Where clouds are included in the retrieval model, log-uniform priors with bounds [$10^{-6}$, $10^{5}$] Pa are used.
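The two retrievals per model are then compared through their Bayesian evidences, as described next. As a sketch of how a log-evidence difference can be translated into an equivalent Gaussian detection significance, the snippet below implements the Bayes-factor-to-p-value calibration described in Benneke & Seager (2013), which traces back to Sellke, Bayarri & Berger (2001); whether this exact mapping matches the one used in our pipeline is an assumption of the sketch.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcinv

def sigma_from_delta_lnZ(delta_lnZ: float) -> float:
    """Map delta_lnZ = lnZ(cloudy) - lnZ(clear) to an equivalent Gaussian
    significance via the calibration B = -1/(e*p*ln p) of Sellke et al.
    (2001), as used by Benneke & Seager (2013)."""
    B = np.exp(delta_lnZ)
    if B <= 1.0:
        return 0.0                       # cloudy model not preferred
    # invert B(p) = -1/(e*p*ln p) for the p-value on p in (0, 1/e)
    f = lambda p: -1.0 / (np.e * p * np.log(p)) - B
    p = brentq(f, 1e-300, 1.0 / np.e - 1e-12)
    return np.sqrt(2.0) * erfcinv(p)     # two-sided Gaussian equivalent

for dlnz in (1.0, 2.5, 5.0):             # example evidence differences
    print(f"Delta lnZ = {dlnz:3.1f} -> {sigma_from_delta_lnZ(dlnz):.2f} sigma")
```

For reference, a Bayes factor of ~150 (delta_lnZ of ~5) maps to roughly 3.6 sigma under this calibration, consistent with the commonly quoted thresholds.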
We use the difference in the log Bayesian evidence from each pair of retrievals to determine a detection significance (Trotta, 2008; Benneke & Seager, 2013) for clouds (at the given pressure level) in each case. The results are illustrated in Figure 6. We find that high-altitude clouds at 10 Pa are detectable above $3\sigma$ irrespective of the planetary or temperature regime they are in. Deeper clouds at 100 Pa can be detected above $3\sigma$ across all planet sizes, but only in C3 (500-1000 K) planetary atmospheres. At a cloud pressure of 1000 Pa, we obtain $3\sigma$ detections only in the two Jovian planets, suggesting it may be possible to probe physical processes and T-P structure in the deeper layers down to 0.01 bar. At $10^{4}$ Pa (0.1 bar), we are unable to distinguish cloudy and cloud-free atmospheres at $\geq$ 3$\sigma$ in any of the cases. These results indicate that Twinkle will be unlikely to detect clouds deeper than 0.1 bar in any cool gaseous planets, and should be able to detect clouds at 0.0001 bar (10 Pa) or higher in all cases. We find a rough indication of a trend of improved detectability with the temperature and size of the planet. We also observe a tentative trend of decreasing sensitivity with planetary size across the sub-Neptune group and the sub-Neptune-Neptune boundary; however, this may also be explained by the lower temperatures of the planets with weaker detections. The robustness of these conclusions is limited by the small sub-sample size used here. These initial results can be built on by further investigation using a larger sub-sample or a large grid of completely simulated planets (providing exact temperature and radius controls).

## 7 Conclusions

In summary, we present a first realisation of a tiered candidate list for the proposed Twinkle cool gaseous planet survey, based on currently confirmed exoplanets. We find that Twinkle has the potential to characterise the atmospheres of up to 36 and 57 planets in the primary 3-year and extended 7-year surveys respectively. The candidates identified include 27 and 46 planets respectively that, to the best of the authors’ knowledge at the time of submission, do not have precise transmission spectra. A survey using just Tier 1 and Tier 2 planets would yield 20 planets ranging from 0.0079 to 0.97 $M_{\text{J}}$ in mass and from 480 to 941 K in temperature. Due to growing numbers of discoveries in the cool gaseous regime, e.g. by TESS, these sample sizes can be considered conservative and will be greater by the time of Twinkle’s launch. The final candidate list used for the Twinkle mission will be updated to include new discoveries.

Twinkle is well-positioned to provide the first opportunity to garner insights into cool gaseous planets at a population level. The 3-year baseline survey includes all 15 Tier 1 candidates (7 Jovians, 4 Neptunians, 4 sub-Neptunians, collectively spanning $\sim$480-934 K) and all 5 Tier 2 candidates. These planets will be the highest-priority candidates for the survey. If planets up to Tier 3 are included, a 3-year survey would have 31 planets, and a 7-year survey would have 35 planets. Our study predicts that the major molecular species expected to be present under cloud-free near-equilibrium conditions in sub-1000 K atmospheres can be detected to high significance, and that, if present, trends in atmospheric metallicity can be reliably identified across the sample of surveyed planets. The C/O ratio was also recovered to within 1$\sigma$.
These studies show that Twinkle will provide valuable data capable of contributing towards the understanding of trends within the cool gaseous planet demographic and beyond, as shown in Figure 4. Such data have the potential to inform atmospheric models and further planet-formation theories for giant planets in this understudied temperature regime. We also predict that, based on a simple grey cloud model, Twinkle has the potential to detect high-altitude cloud decks at or above 10 Pa in all atmospheric spectra in the cool regime, but would be insensitive to clouds at pressures of $10^{4}$ Pa or greater in all cases. We find a rough and tentative trend in cloud detectability with planetary temperature and size.

The cool gaseous planets represent a new frontier in exoplanet science, promising to expand our understanding of atmospheric physics and chemistry, as well as planet formation and evolution. The Twinkle cool gaseous planet survey has the potential to open up this uncharted territory of exoplanet parameter space with paradigm-shifting results. With 20 additional survey candidates present across Tiers 2 and 3, and with many new candidates (typically discovered around bright stars, hence amenable to transmission spectroscopy) confirmed and being actively refined by TESS and other facilities, including 32 new cool gaseous planets in the Twinkle field-of-regard, this proposed survey to be conducted by Twinkle promises to deliver a small but statistically meaningful sample of homogeneous spectra.

## Acknowledgements

We thank Ben Wilcock of Blue Skies Space Ltd. for his comments on the manuscript. L.B. is supported by a STFC doctoral training grant. Additionally, we thank the anonymous reviewer for their helpful comments.

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.

## References

* Agúndez et al. (2012) Agúndez M., Venot O., Iro N., Selsis F., Hersant F., Hébrard E., Dobrijevic M., 2012, A&A, 548, A73
* Agúndez et al. (2020) Agúndez M., Martínez J. I., de Andres P. L., Cernicharo J., Martín-Gago J. A., 2020, A&A, 637, A59
* Al-Refaie et al. (2021) Al-Refaie A. F., Changeat Q., Waldmann I. P., Tinetti G., 2021, ApJ, 917, 37
* Alam et al. (2022) Alam M. K., et al., 2022, ApJ, 927, L5
* Alderson et al. (2023) Alderson L., et al., 2023, Nature, 614, 664
* Allart et al. (2023) Allart R., et al., 2023, A&A, 677, A164
* Barclay et al. (2015) Barclay T., Quintana E. V., Adams F. C., Ciardi D. R., Huber D., Foreman-Mackey D., Montet B. T., Caldwell D., 2015, ApJ, 809, 7
* Baxter et al. (2021) Baxter C., et al., 2021, A&A, 648, A127
* Bell et al. (2023a) Bell T. J., et al., 2023a, arXiv e-prints, p. arXiv:2309.04042
* Bell et al. (2023b) Bell T. J., et al., 2023b, arXiv e-prints, p. arXiv:2309.04042
* Benneke & Seager (2013) Benneke B., Seager S., 2013, ApJ, 778, 153
* Benneke et al. (2019a) Benneke B., et al., 2019a, Nature Astronomy, 3, 813
* Benneke et al. (2019b) Benneke B., et al., 2019b, ApJ, 887, L14
* Bézard et al. (2020) Bézard B., Charnay B., Blain D., 2020, arXiv e-prints, p. arXiv:2011.10424
* Blain et al. (2021) Blain D., Charnay B., Bézard B., 2021, A&A, 646, A15
* Boss (1997) Boss A. P., 1997, Science, 276, 1836
* Burn et al. (2021) Burn R., Schlecker M., Mordasini C., Emsenhuber A., Alibert Y., Henning T., Klahr H., Benz W., 2021, A&A, 656, A72
* Charbonneau et al. (2002) Charbonneau D., Brown T. M., Noyes R. W., Gilliland R. L., 2002, ApJ, 568, 377
* Christiansen et al. (2022) Christiansen J.
L., et al., 2022, AJ, 163, 244 * Crossfield & Kreidberg (2017) Crossfield I. J. M., Kreidberg L., 2017, AJ, 154, 261 * Drummond et al. (2020) Drummond B., et al., 2020, A&A, 636, A68 * Dymont et al. (2022) Dymont A. H., Yu X., Ohno K., Zhang X., Fortney J. J., Thorngren D., Dickinson C., 2022, ApJ, 937, 90 * Edwards et al. (2019a) Edwards B., et al., 2019a, Experimental Astronomy, 47, 29 * Edwards et al. (2019b) Edwards B., Mugnai L., Tinetti G., Pascale E., Sarkar S., 2019b, AJ, 157, 242 * Edwards et al. (2022) Edwards B., et al., 2022, arXiv e-prints, p. arXiv:2211.00649 * Edwards et al. (2023) Edwards B., et al., 2023, arXiv e-prints, p. arXiv:2306.13645 * Estrela et al. (2022) Estrela R., Swain M. R., Roudier G. M., 2022, ApJ, 941, L5 * Fleury et al. (2023) Fleury B., Benilan Y., Venot O., Henderson B. L., Swain M., Gudipati M. S., 2023, ApJ, 956, 134 * Fortney et al. (2020) Fortney J. J., Visscher C., Marley M. S., Hood C. E., Line M. R., Thorngren D. P., Freedman R. S., Lupu R., 2020, AJ, 160, 288 * Fulton et al. (2017) Fulton B. J., et al., 2017, AJ, 154, 109 * Gao et al. (2023) Gao P., et al., 2023, ApJ, 951, 96 * Garai et al. (2023) Garai Z., et al., 2023, arXiv e-prints, p. arXiv:2306.04468 * Gillon et al. (2017) Gillon M., et al., 2017, Nature, 542, 456 * Greaves et al. (2021) Greaves J. S., et al., 2021, Nature Astronomy, 5, 655 * Guilluy et al. (2021) Guilluy G., et al., 2021, AJ, 161, 19 * Gupta & Schlichting (2019) Gupta A., Schlichting H. E., 2019, MNRAS, 487, 24 * Han et al. (2023) Han T., et al., 2023, arXiv e-prints, p. arXiv:2310.20634 * Harris et al. (2023) Harris M., et al., 2023, arXiv e-prints, p. arXiv:2310.15118 * Hawthorn et al. (2023) Hawthorn F., et al., 2023, arXiv e-prints, p. arXiv:2310.17268 * Heller (2020) Heller R., 2020, arXiv e-prints, p. arXiv:2009.01881 * Heller & Pudritz (2015) Heller R., Pudritz R., 2015, ApJ, 806, 181 * Helling (2019) Helling C., 2019, Annual Review of Earth and Planetary Sciences, 47, 583 * Helling (2021) Helling C., 2021, in Madhusudhan N., ed., , ExoFrontiers; Big Questions in Exoplanetary Science. pp 20–1, doi:10.1088/2514-3433/abfa8fch20 * Hill et al. (2018) Hill M. L., Kane S. R., Seperuelo Duarte E., Kopparapu R. K., Gelino D. M., Wittenmyer R. A., 2018, ApJ, 860, 67 * Hill et al. (2019) Hill M., Kane S., Seperuelo Duarte E., Kopparapu R., Gelino D., Wittenmyer R., 2019, in American Astronomical Society Meeting Abstracts #233. p. 404.04 * Hobson et al. (2023) Hobson M. J., et al., 2023, AJ, 166, 201 * Hu et al. (2015) Hu R., Seager S., Yung Y. L., 2015, ApJ, 807, 8 * Ida & Lin (2005) Ida S., Lin D. N. C., 2005, ApJ, 626, 1045 * Kammer et al. (2015) Kammer J. A., et al., 2015, ApJ, 810, 118 * Kanodia et al. (2023) Kanodia S., et al., 2023, AJ, 165, 120 * Kempton et al. (2023) Kempton E. M. R., et al., 2023, arXiv e-prints, p. arXiv:2305.06240 * Kipping & Yahalomi (2023) Kipping D., Yahalomi D. A., 2023, MNRAS, 518, 3482 * Knierim et al. (2022) Knierim H., Shibata S., Helled R., 2022, A&A, 665, L5 * Knutson et al. (2014a) Knutson H. A., Benneke B., Deming D., Homeier D., 2014a, Nature, 505, 66 * Knutson et al. (2014b) Knutson H. A., et al., 2014b, ApJ, 794, 155 * Kreidberg et al. (2014) Kreidberg L., et al., 2014, Nature, 505, 69 * Kreidberg et al. (2018) Kreidberg L., Line M. R., Thorngren D., Morley C. V., Stevenson K. B., 2018, ApJ, 858, L6 * Kreidberg et al. (2022) Kreidberg L., et al., 2022, AJ, 164, 124 * Libby-Roberts et al. (2020) Libby-Roberts J. 
E., et al., 2020, AJ, 159, 57 * Lingam & Loeb (2019) Lingam M., Loeb A., 2019, ApJ, 883, 143 * Lissauer et al. (2014) Lissauer J. J., Dawson R. I., Tremaine S., 2014, Nature, 513, 336 * Madhusudhan (2012) Madhusudhan N., 2012, ApJ, 758, 36 * Madhusudhan et al. (2016) Madhusudhan N., Agúndez M., Moses J. I., Hu Y., 2016, Space Sci. Rev., 205, 285 * Madhusudhan et al. (2021) Madhusudhan N., Piette A. A. A., Constantinou S., 2021, ApJ, 918, 1 * Madhusudhan et al. (2023) Madhusudhan N., Sarkar S., Constantinou S., Holmberg M., Piette A. A. A., Moses J. I., 2023, arXiv e-prints, p. arXiv:2309.05566 * Mayor & Queloz (1995) Mayor M., Queloz D., 1995, Nature, 378, 355 * Mazeh et al. (2016) Mazeh T., Holczer T., Faigler S., 2016, A&A, 589, A75 * McGruder et al. (2023) McGruder C. D., López-Morales M., Brahm R., Jordán A., 2023, ApJ, 944, L56 * Mikal-Evans et al. (2023) Mikal-Evans T., et al., 2023, AJ, 165, 84 * Mireles et al. (2023) Mireles I., et al., 2023, ApJ, 954, L15 * Moses (2014) Moses J. I., 2014, Philosophical Transactions of the Royal Society of London Series A, 372, 20130073 * Moses et al. (2011) Moses J. I., et al., 2011, ApJ, 737, 15 * Mugnai et al. (2022) Mugnai L. V., Pascale E., Edwards B., Papageorgiou A., Sarkar S., 2022, ExoRad2: Generic point source radiometric model, Astrophysics Source Code Library, record ascl:2210.006 (ascl:2210.006) * Öberg et al. (2011) Öberg K. I., Murray-Clay R., Bergin E. A., 2011, ApJ, 743, L16 * Osborn et al. (2023) Osborn H. P., et al., 2023, MNRAS, 523, 3069 * Paardekooper & Johansen (2018) Paardekooper S.-J., Johansen A., 2018, Space Sci. Rev., 214, 38 * Pacetti et al. (2022) Pacetti E., et al., 2022, ApJ, 937, 36 * Piaulet et al. (2019) Piaulet C., Benneke B., Rubenzahl R., Howard A., Kreidberg L., Werner M. W., Crossfield I., Sinukoff E., 2019, in AAS/Division for Extreme Solar Systems Abstracts. p. 102.04 * Pollack et al. (1996) Pollack J. B., Hubickyj O., Bodenheimer P., Lissauer J. J., Podolak M., Greenzweig Y., 1996, Icarus, 124, 62 * Powers et al. (2023) Powers L. C., et al., 2023, AJ, 166, 44 * Prinn & Barshay (1977) Prinn R. G., Barshay S. S., 1977, Science, 198, 1031 * Rovira-Navarro et al. (2021) Rovira-Navarro M., van der Wal W., Steinke T., Dirkx D., 2021, Planetary Science Journal, 2, 119 * Roy et al. (2023) Roy P.-A., et al., 2023, ApJ, 954, L52 * Rustamkulov et al. (2023) Rustamkulov Z., et al., 2023, Nature, 614, 659 * Sagan & Salpeter (1976) Sagan C., Salpeter E. E., 1976, ApJS, 32, 737 * Saillenfest et al. (2023) Saillenfest M., Sulis S., Charpentier P., Santerne A., 2023, arXiv e-prints, p. arXiv:2306.07348 * Seager et al. (2021) Seager S., Petkowski J. J., Günther M. N., Bains W., Mikal-Evans T., Deming D., 2021, Universe, 7, 172 * Sing et al. (2016) Sing D. K., et al., 2016, Nature, 529, 59 * Spake et al. (2021) Spake J. J., Oklopčić A., Hillenbrand L. A., 2021, AJ, 162, 284 * Spalding et al. (2016) Spalding C., Batygin K., Adams F. C., 2016, ApJ, 817, 18 * Stotesbury et al. (2022) Stotesbury I., et al., 2022, in Coyle L. E., Matsuura S., Perrin M. D., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 12180, Space Telescopes and Instrumentation 2022: Optical, Infrared, and Millimeter Wave. p. 1218033 (arXiv:2209.03337), doi:10.1117/12.2641373 * Sucerquia et al. (2020) Sucerquia M., Ramírez V., Alvarado-Montes J. A., Zuluaga J. I., 2020, MNRAS, 492, 3499 * Swift et al. (2013) Swift J. J., Johnson J. A., Morton T. D., Crepp J. R., Montet B. T., Fabrycky D. C., Muirhead P. 
S., 2013, ApJ, 764, 105 * Szabó & Kiss (2011) Szabó G. M., Kiss L. L., 2011, ApJ, 727, L44 * Tennyson & Yurchenko (2016) Tennyson J., Yurchenko S. N., 2016, International Journal of Quantum Chemistry, 117, 92 * Thao et al. (2023) Thao P. C., et al., 2023, AJ, 165, 23 * Thorngren et al. (2016) Thorngren D. P., Fortney J. J., Murray-Clay R. A., Lopez E. D., 2016, ApJ, 831, 64 * Thorngren et al. (2019) Thorngren D. P., Marley M. S., Fortney J. J., 2019, Research Notes of the American Astronomical Society, 3, 128 * Tinetti et al. (2018) Tinetti G., Drossart P., Eccleston P., et al. 2018, Experimental Astronomy, 46, 135 * Tinetti et al. (2022) Tinetti G., Eccleston P., Lueftinger T., Salvignol J.-C., Fahmy S., Alves de Oliveira C., 2022, in European Planetary Science Congress. pp EPSC2022–1114, doi:10.5194/epsc2022-1114 * Trotta (2008) Trotta R., 2008, Contemporary Physics, 49, 71 * Tsai et al. (2017) Tsai S.-M., Lyons J. R., Grosheintz L., Rimmer P. B., Kitzmann D., Heng K., 2017, ApJS, 228, 20 * Tsai et al. (2023) Tsai S.-M., et al., 2023, Nature, 617, 483 * Tsiaras et al. (2018) Tsiaras A., et al., 2018, AJ, 155, 156 * Tsiaras et al. (2019) Tsiaras A., Waldmann I. P., Tinetti G., Tennyson J., Yurchenko S. N., 2019, Nature Astronomy, 3, 1086 * Turrini et al. (2021) Turrini D., et al., 2021, ApJ, 909, 40 * Tuson et al. (2023) Tuson A., et al., 2023, arXiv e-prints, p. arXiv:2306.04511 * Ulmer-Moll et al. (2023) Ulmer-Moll S., et al., 2023, A&A, 674, A43 * Vissapragada et al. (2022) Vissapragada S., et al., 2022, AJ, 164, 234 * Wakeford et al. (2018) Wakeford H. R., et al., 2018, AJ, 155, 29 * Waldmann et al. (2015) Waldmann I. P., Tinetti G., Rocchetto M., Barton E. J., Yurchenko S. N., Tennyson J., 2015, ApJ, 802, 107 * Wallack et al. (2019) Wallack N. L., et al., 2019, AJ, 158, 217 * Welbanks et al. (2019) Welbanks L., Madhusudhan N., Allard N. F., Hubeny I., Spiegelman F., Leininger T., 2019, ApJ, 887, L20 * Wittenmyer et al. (2020) Wittenmyer R. A., et al., 2020, MNRAS, 492, 377 * Wolszczan (2012) Wolszczan A., 2012, New Astronomy Reviews, 56, 2 * Wong et al. (2022) Wong I., et al., 2022, AJ, 164, 30 * Yates et al. (2017) Yates J. S., Palmer P. I., Biller B., Cockell C. S., 2017, ApJ, 836, 184 * Yoshida et al. (2023) Yoshida S., et al., 2023, AJ, 166, 181 * Zahnle & Marley (2014) Zahnle K. J., Marley M. S., 2014, ApJ, 797, 41 * Zamyatina et al. (2023) Zamyatina M., et al., 2023, MNRAS, 519, 3129 * Zhang (2020) Zhang X., 2020, Research in Astronomy and Astrophysics, 20, 099 ## Appendix A Candidate Lists Table 5: Candidate planets in Tiers $1$, $2$ and $3$ for the Twinkle cool gaseous planet survey. Planetary and stellar parameters listed are used throughout this work and were obtained from the NASA Exoplanet Archive or calculated based on assumptions presented in Edwards et al. (2019b). Where unavailable in the archive, host star spectral type uses SIMBAD values (‡) or estimation based on comparable stars (†). 
Planet name | $\bm{N_{\text{t}}}$ | $\bm{T_{\text{int}}}$ [days] | $\bm{R_{\text{p}}}$ [$R_{\text{J}}$] | $\bm{M_{\text{p}}}$ [$M_{\text{J}}$] | $\bm{T_{\text{eq}}}$ [K] | In 3yr List | In 7yr List | Tier | Transmission Spectra | $\bm{R_{\text{s}}}$ [$\bm{R_{\odot}}$] | $\bm{T_{\text{s}}}$ [K] | Spectral Type
---|---|---|---|---|---|---|---|---|---|---|---|---
AU Mic b | 2 | 0.904 | 0.363 | $0.0368_{-0.0157}^{+0.0157}$ | 626.428 | Y | Y | 1 | False | 0.75 | 3700 | M1
AU Mic c | 20 | 11.359 | 0.289 | $0.0699_{-0.0211}^{+0.0211}$ | 479.593 | Y | Y | 1 | False | 0.75 | 3700 | M1
GJ 1214 b | 15 | 1.581 | 0.245 | $0.0257_{-0.0014}^{+0.0014}$ | 603.949 | Y | Y | 1 | True | 0.21 | 3250 | M4
GJ 3470 b | 7 | 1.602 | 0.408 | $0.0396_{-0.0040}^{+0.0040}$ | 702.732 | Y | Y | 1 | True | 0.55 | 3600 | M1.5
GJ 436 b | 4 | 0.590 | 0.372 | $0.0799_{-0.0063}^{+0.0066}$ | 708.023 | Y | Y | 1 | True | 0.46 | 3586 | M2.5
HD 136352 c | 17 | 6.975 | 0.260 | $0.0354_{-0.0020}^{+0.0021}$ | 701.029 | Y | Y | 1 | False | 1.06 | 5564 | G4
HD 63433 b | 20 | 8.014 | 0.192 | $0.0166$ | 934.262 | Y | Y | 1 | False | 0.91 | 5640 | G5
HD 63433 c | 17 | 8.584 | 0.238 | $0.0239$ | 698.391 | Y | Y | 1 | False | 0.91 | 5640 | G5
K2-141 c | 1 | 0.333 | 0.624 | $0.0233$ | 720.185 | Y | Y | 1 | False | 0.68 | 4599 | K7
K2-24 c | 22 | 19.008 | 0.669 | $0.0485_{-0.0057}^{+0.0060}$ | 610.027 | Y | Y | 1 | False | 1.16 | 5625 | G9
TOI-1130 c | 8 | 2.056 | 1.500 | $0.9740_{-0.0440}^{+0.0430}$ | 658.620 | Y | Y | 1 | False | 0.69 | 4250 | K7
V1298 Tau b | 22 | 18.094 | 0.916 | $0.2360$ | 695.412 | Y | Y | 1 | False | 1.34 | 4970 | K0
WASP-107 b | 1 | 0.351 | 0.940 | $0.0960_{-0.0050}^{+0.0050}$ | 719.812 | Y | Y | 1 | True | 0.67 | 4425 | K6
WASP-69 b | 2 | 0.556 | 1.110 | $0.2600_{-0.0185}^{+0.0185}$ | 928.605 | Y | Y | 1 | True | 0.86 | 4700 | K5
WASP-80 b | 10 | 2.695 | 0.999 | $0.5400_{-0.0350}^{+0.0360}$ | 799.369 | Y | Y | 1 | True | 0.59 | 4143 | K7-M0
HD 183579 b | 42 | 23.804 | 0.317 | $0.0352_{-0.0170}^{+0.0170}$ | 736.129 | Y | Y | 2 | False | 0.99 | 5788 | G2
TOI-1064 c | 36 | 10.697 | 0.237 | $0.0079_{-0.0057}^{+0.0063}$ | 653.762 | Y | Y | 2 | False | 0.73 | 4734 | K3-K5†
TOI-178 d | 46 | 13.424 | 0.229 | $0.0095_{-0.0032}^{+0.0025}$ | 708.942 | Y | Y | 2 | False | 0.65 | 4316 | K7†
TOI-421 c | 27 | 10.674 | 0.454 | $0.0517_{-0.0033}^{+0.0033}$ | 717.787 | Y | Y | 2 | False | 0.87 | 5325 | G7
WASP-29 b | 29 | 9.770 | 0.770 | $0.2450_{-0.0220}^{+0.0230}$ | 941.816 | Y | Y | 2 | True | 0.79 | 4800 | K4‡
GJ 9827 d | 79 | 12.006 | 0.180 | $0.0127_{-0.0026}^{+0.0026}$ | 705.214 | Y | Y | 3 | False | 0.60 | 4340 | K6
HATS-72 b | 57 | 21.946 | 0.722 | $0.1254_{-0.0039}^{+0.0039}$ | 714.490 | Y | Y | 3 | False | 0.72 | 4656 | K5†
HD 106315 c | 96 | 67.606 | 0.388 | $0.0478_{-0.0116}^{+0.0116}$ | 858.332 | N | Y | 3 | False | 1.30 | 6327 | F5
HD 63935 b | 71 | 29.872 | 0.267 | $0.0340_{-0.0057}^{+0.0057}$ | 877.525 | Y | Y | 3 | False | 0.96 | 5534 | G5‡
HD 63935 c | 97 | 59.066 | 0.259 | $0.0349_{-0.0076}^{+0.0076}$ | 701.589 | N | Y | 3 | False | 0.96 | 5534 | G5‡
HD 73583 b | 54 | 14.314 | 0.249 | $0.0321_{-0.0098}^{+0.0107}$ | 688.149 | Y | Y | 3 | False | 0.65 | 4511 | K4
HD 97658 b | 62 | 21.795 | 0.189 | $0.0261_{-0.0035}^{+0.0035}$ | 720.334 | Y | Y | 3 | True | 0.73 | 5212 | K1
K2-406 b | 71 | 41.930 | 0.411 | $0.0603$ | 693.947 | N | Y | 3 | False | 0.96 | 5784 | G4
TOI-1130 b | 83 | 21.998 | 0.326 | $0.0407$ | 786.160 | Y | Y | 3 | False | 0.69 | 4250 | K7
TOI-178 g | 94 | 25.339 | 0.256 | $0.0124_{-0.0051}^{+0.0041}$ | 483.212 | N | Y | 3 | False | 0.65 | 4316 | K7†
TOI-620 b | 61 | 9.190 | 0.335 | $0.0428$ | 620.967 | Y | Y | 3 | False | 0.55 | 3708 | M2.5
TOI-674 b | 68 | 10.035 | 0.468 | $0.0743_{-0.0104}^{+0.0104}$ | 698.867 | Y | Y | 3 | False | 0.42 | 3514 | M2
V1298 Tau c | 83 | 48.488 | 0.499 | $0.0839$ | 932.639 | Y | Y | 3 | False | 1.34 | 4962 | K0
V1298 Tau d | 65 | 44.450 | 0.572 | $0.1060$ | 814.103 | Y | Y | 3 | False | 1.34 | 4962 | K0
WASP-11 b | 57 | 17.901 | 1.110 | $0.5320_{-0.0200}^{+0.0210}$ | 918.615 | Y | Y | 3 | False | 0.89 | 4800 | K3

Table 6: Candidate planets in Tiers $4$ and $5$ for the Twinkle cool gaseous planet survey. Planetary and stellar parameters listed are used throughout this work and were obtained from the NASA Exoplanet Archive or calculated based on assumptions presented in Edwards et al. (2019b). Where unavailable in the archive, host star spectral type uses SIMBAD values (‡) or estimation based on comparable stars (†).

Planet name | $\bm{N_{\text{t}}}$ | $\bm{T_{\text{int}}}$ [days] | $\bm{R_{\text{p}}}$ [$R_{\text{J}}$] | $\bm{M_{\text{p}}}$ [$M_{\text{J}}$] | $\bm{T_{\text{eq}}}$ [K] | In 3yr List | In 7yr List | Tier | Transmission Spectra | $\bm{R_{\text{s}}}$ [$\bm{R_{\odot}}$] | $\bm{T_{\text{s}}}$ [K] | Spectral Type
---|---|---|---|---|---|---|---|---|---|---|---|---
HD 73583 c | 136 | 60.023 | 0.213 | $0.0305_{-0.0054}^{+0.0057}$ | 510.880 | N | Y | 4 | False | 0.65 | 4511 | K4
K2-287 b | 143 | 80.315 | 0.847 | $0.3150_{-0.0270}^{+0.0270}$ | 790.363 | N | Y | 4 | False | 1.07 | 5695 | G8
K2-32 b | 126 | 54.426 | 0.473 | $0.0472_{-0.0054}^{+0.0057}$ | 808.226 | N | Y | 4 | False | 0.86 | 5271 | G9
LP 714-47 b | 143 | 27.265 | 0.419 | $0.0969_{-0.0047}^{+0.0047}$ | 686.753 | Y | Y | 4 | False | 0.58 | 3950 | M0
TOI-561 c | 142 | 66.441 | 0.257 | $0.0220_{-0.0072}^{+0.0072}$ | 789.277 | N | Y | 4 | False | 0.85 | 5455 | G9
TOI-776 b | 140 | 42.583 | 0.165 | $0.0126_{-0.0028}^{+0.0028}$ | 530.605 | N | Y | 4 | False | 0.54 | 3709 | M1
WASP-132 b | 134 | 53.011 | 0.897 | $0.4100_{-0.0300}^{+0.0300}$ | 736.759 | Y | Y | 4 | False | 0.75 | 4714 | K4
WASP-84 b | 123 | 42.392 | 0.942 | $0.6940_{-0.0470}^{+0.0490}$ | 773.076 | Y | Y | 4 | False | 0.75 | 5314 | K0
G 9-40 b | 375 | 58.498 | 0.181 | $0.0150$ | 454.032 | N | Y | 5 | False | 0.31 | 3348 | M2.5
K2-121 b | 309 | 96.569 | 0.671 | $0.1390$ | 786.419 | N | Y | 5 | False | 0.67 | 4690 | K5‡
K2-138 f | 181 | 72.914 | 0.259 | $0.0051_{-0.0037}^{+0.0067}$ | 715.911 | N | Y | 5 | False | 0.86 | 5356 | G8
LTT 3780 c | 174 | 38.081 | 0.204 | $0.0198_{-0.0019}^{+0.0020}$ | 363.418 | N | Y | 5 | False | 0.37 | 3331 | M4
TOI-1201 b | 234 | 44.232 | 0.215 | $0.0198_{-0.0028}^{+0.0026}$ | 682.774 | Y | Y | 5 | False | 0.51 | 3476 | M2
TOI-1422 b | 173 | 96.480 | 0.353 | $0.0283_{-0.0063}^{+0.0072}$ | 838.973 | N | Y | 5 | False | 1.02 | 5840 | G2
TOI-1478 b | 208 | 108.544 | 1.060 | $0.8510_{-0.0470}^{+0.0520}$ | 889.609 | N | Y | 5 | False | 1.05 | 5597 | G8
TOI-1634 b | 1888 | 244.729 | 0.160 | $0.0319_{-0.0030}^{+0.0030}$ | 894.156 | N | Y | 5 | False | 0.45 | 3550 | M2
TOI-178 e | 231 | 71.965 | 0.197 | $0.0121_{-0.0030}^{+0.0039}$ | 616.710 | N | Y | 5 | False | 0.65 | 4316 | K7†
TOI-3714 b | 693 | 150.617 | 1.010 | $0.7000_{-0.0300}^{+0.0300}$ | 749.785 | N | Y | 5 | False | 0.51 | 3660 | M2
TOI-421 b | 322 | 50.217 | 0.239 | $0.0226_{-0.0021}^{+0.0021}$ | 982.024 | N | Y | 5 | False | 0.87 | 5325 | G9
WASP-156 b | 280 | 84.476 | 0.510 | $0.1280_{-0.0090}^{+0.0100}$ | 938.278 | Y | Y | 5 | False | 0.76 | 4910 | K3
WASP-8 b | 236 | 94.124 | 1.130 | $2.1320_{-0.0810}^{+0.0800}$ | 896.199 | N | Y | 5 | False | 1.03 | 5600 | G8
Wolf 503 b | 271 | 69.390 | 0.182 | $0.0197_{-0.0022}^{+0.0022}$ | 764.360 | N | Y | 5 | False | 0.69 | 4716 | K3.5
Medium-induced radiative kernel with the Improved Opacity Expansion

João Barata$^{a}$, Yacine Mehtar-Tani$^{b,c}$, Alba Soto-Ontoso$^{d}$ and Konrad Tywoniuk$^{e}$

$^{a}$Instituto Galego de Fisica de Altas Enerxias (IGFAE), Universidade de Santiago de Compostela, E-15782 Galicia, Spain
$^{b}$Physics Department, Brookhaven National Laboratory, Upton, NY 11973, USA
$^{c}$RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973, USA
$^{d}$Institut de Physique Théorique, Université Paris-Saclay, CNRS, CEA, F-91191, Gif-sur-Yvette, France
$^{e}$Department of Physics and Technology, University of Bergen, 5007 Bergen, Norway

We calculate the fully differential medium-induced radiative spectrum at next-to-leading order (NLO) accuracy within the Improved Opacity Expansion (IOE) framework. This scheme allows us to gain analytical control of the radiative spectrum at low and high gluon frequencies simultaneously. The high-frequency regime can be obtained in the standard opacity expansion framework, in which the resulting power series diverges at the characteristic frequency $\omega_c\sim \hat q L^2$. In the IOE, all orders in opacity are resummed systematically below $\omega_c$, yielding an asymptotic series controlled by logarithmically suppressed remainders down to the thermal scale $T \ll \omega_c$, while matching the opacity expansion at high frequency. Furthermore, we demonstrate that the IOE at NLO accuracy reproduces the characteristic Coulomb tail of the single hard scattering contribution as well as the Gaussian distribution resulting from multiple soft momentum exchanges. Finally, we compare our analytic scheme with a recent numerical solution, which includes a full resummation of multiple scatterings, for LHC-inspired medium parameters. We find very good agreement both at low and high frequencies, showcasing the performance of the IOE, which provides for the first time accurate analytic formulas for radiative energy loss in the relevant perturbative kinematic regimes for dense media.

\section{Introduction}

High-energy collisions of heavy nuclei provide the necessary conditions for creating an extended medium of hot and dense nuclear matter, referred to as quark-gluon plasma (QGP). The appearance of a short-lived stage of this exotic state of matter leaves a strong imprint on particle production at all momentum scales and is quantified by high-precision experimental measurements. In this work, particular attention is devoted to the high-energy particles that traverse the medium and can be used as perturbatively well controlled probes of the microscopic properties of the QGP <cit.>. In terms of experimental observables, these objects emerge in the detectors as collimated sprays of particles and energy, colloquially referred to as jets. Jet modifications, quantified with respect to a baseline obtained in proton-proton collisions (or “vacuum”), have been intensively studied at RHIC <cit.> and the LHC <cit.> for more than a decade.

The suppression and modification of jets produced in heavy-ion collisions, commonly known as jet quenching, is driven by two main phenomena: transverse momentum broadening and energy loss. The former refers to the acquisition of transverse momentum by the highly energetic partons that make up the jet through elastic interactions with the medium, following Brownian motion. This diffusion in momentum space is characterized by the transport coefficient
\beq
\hat q \equiv \frac{\langle k_\perp^2 \rangle_{\rm typ}}{t}\,,
\eeq
where $\langle k_\perp^2 \rangle_{\rm typ}$ is the typical squared transverse momentum transfer.
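To get a feel for the scales involved, a one-line estimate of the accumulated broadening $\langle k_\perp^2\rangle \approx \hat q L$ is sketched below; the values of $\hat q$ and $L$ are illustrative ballpark choices for a QGP-like medium, not parameters extracted in this work.

```python
# Illustrative estimate of accumulated transverse momentum broadening,
# <k_perp^2> ~ qhat * L. The inputs are typical ballpark values for a
# QGP-like medium and are NOT fitted parameters from this work.
qhat = 1.5   # jet quenching parameter [GeV^2/fm] (assumed)
L = 4.0      # medium length [fm] (assumed)
k2 = qhat * L                      # accumulated <k_perp^2> [GeV^2]
print(f"<k_perp^2> ~ {k2:.1f} GeV^2, k_perp ~ {k2 ** 0.5:.1f} GeV")
```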
An important role is also played by induced energy loss, such as that caused by drag and inelastic, or radiative, processes. The latter component is a result of bremsstrahlung radiation triggered by collisions with medium constituents. The associated mean energy loss was found to scale as $L^2$, where $L$ is the length of the plasma, and thus constitutes an important, if not the dominant, source of jet quenching for large media, even though it is “naively” suppressed by a power of the coupling constant <cit.>.

The above physical picture applies in the regime of multiple scattering during the passage through the medium. This is the case when the opacity $\chi = L/\ell_{\rm mfp}$, defined as the ratio between the medium length, $L$, and the mean free path, $\ell_{\rm mfp}=(\rho\sigma_{\rm el})^{-1}$ (where $\rho$ is the density of scattering centers and $\sigma_{\rm el}$ the total elastic cross section), is of order one or larger, i.e. $\chi \gtrsim 1$. In effect, many interactions, i.e. those that occur within the formation time, coherently participate in inducing gluon radiation, in an analogous way to the well-known Landau-Pomeranchuk-Migdal (LPM) effect in QED <cit.>. It was soon understood that neglecting these interference effects fails to adequately describe the radiative spectrum in a substantial region of phase space in such media. Focusing on the diffusion approximation, and thereby neglecting the Coulomb tail that captures the physics of rare hard momentum transfers, the radiative spectrum could be computed analytically, albeit with an ambiguity in setting the upper bound of the Coulomb logarithm in $\hat q$ <cit.>. More precise calculations can be performed in the dilute regime, where at most a few scatterings contribute and the full parton-medium interaction potential can be used <cit.>. However, their domain of applicability is limited to low opacity $\chi \ll 1$ or large momentum transfer. Due to the finite radius of convergence of the opacity expansion, such an approach diverges <cit.> and thus cannot be applied in the case of a dense medium.

As we have mentioned, in both the low and high opacity regimes, under a set of physically motivated assumptions that will be revisited in what follows, the medium-induced spectrum can be computed analytically. However, the scenario explored in current experimental facilities, such as the LHC or RHIC, is expected to be one where the jet undergoes a handful of scatterings, $\mathcal{O}(1-50)$, with the medium <cit.>. As such, neither of the above limiting forms is in its exact domain of applicability, thus hindering quantitative theory-to-data comparisons and the extraction of the medium parameters through the jet physics program at current colliders. These limitations on the analytic front have motivated the investigation of the radiative spectrum numerically <cit.>. However, the proposed approaches are in general less transparent and potentially more computationally costly than their analytic counterparts for computing jet observables where multiple gluon radiation is to be resummed to all orders <cit.>, for applications to jet suppression <cit.>, or as a building block for Monte Carlo event generators <cit.>.

A first step towards more precise control over the accuracy of analytic calculations was taken in Ref. <cit.>, where the next-to-leading logarithmic corrections to $\hat q$ were computed in the multiple soft scattering regime or, equivalently, in the infinite-length medium limit.
More recently, substantial progress was made in unifying the low and large opacity regimes while recovering the results of Ref. <cit.> in the soft regime. This new scheme, dubbed the Improved Opacity Expansion (IOE), has been shown to successfully meet this goal when applied to the description of the single-particle broadening probability <cit.> and to the medium-induced gluon energy spectrum <cit.>, which constitute the major tools used in a multitude of well-established phenomenological models of jet quenching <cit.>. In a nutshell, this framework is built as a series expansion of the in-medium scattering cross section where the zeroth-order term encodes the multiple scattering solution and the higher N$^n$LO orders\footnote{The nomenclature used here to denote the orders in the IOE should not be confused with the more familiar perturbative expansion in powers of the coupling constant.} in the series account for $n$ hard scatterings with the medium.

This paper aims at computing the fully differential medium-induced spectrum at NLO accuracy in the IOE framework. For different values of the gluon energy and transverse momentum, ($\omega,\k$),\footnote{Bold letters denote 2D transverse vectors in this paper, while their modulus is written as $|\k|\equiv k_\perp$.} we provide an analytic formula for an arbitrary medium profile that requires numerical integrations of the same order in computational complexity as the ones encountered in the multiple soft scattering limit~\cite{ASW1,ASW2}. In the simplified scenario in which the medium is treated as a brick of constant density, we have obtained closed analytic formulas for the asymptotic behavior of the spectrum computed in the three different setups considered in this work: the IOE at NLO, the single hard scattering approximation ($SH$) and the multiple soft scattering regime ($MS$). In addition, we make a phenomenologically oriented comparison with the all-orders numerical spectrum presented in Ref.~\cite{CarlotaFabioLiliana}. The numerical routines used in this publication are provided as ancillary files.

The remainder of this paper is structured as follows. Section~\ref{sec:gluon_generic_IOE} revisits the Improved Opacity Expansion framework in full generality, including its application to the single-particle momentum broadening and the medium-induced energy spectrum calculations. The core of this paper is Section~\ref{sec:IOE_radiation_spectrum}, where the fully differential spectrum is calculated in a high level of detail. Those readers more interested in the final result and not so much in the technicalities can find a summary of the ready-to-use formulas in Section~\ref{sec:summary}. Finally, numerical results for LHC-motivated medium parameters are presented in Section~\ref{sec:numerics}, including a comparison to the BDMPS-Z, GLV and resummed-to-all-orders spectra. Further details on the analytic calculations can be found in Appendices~\ref{app:cK_appendix}, \ref{app:Q}, \ref{app:vacuum-derivation} and \ref{app:In_Out_example}.

\section{The Improved Opacity Expansion: an overview}
\label{sec:gluon_generic_IOE}

The Improved Opacity Expansion draws on the seminal 1948 work by Moli\`ere~\cite{Moliere}, where the transverse momentum broadening of charged particles in QED was described in such a way that the multiple soft scattering solution, well described by a Gaussian distribution, and the Coulomb power-law tail were reproduced in the appropriate limits.
The IOE program consists in extending Moli\`ere's original approach to QCD and to more complex observables. So far, this strategy has been successfully applied to compute the medium-induced gluon energy spectrum~\cite{IOE1,IOE2,IOE3} and the transverse momentum broadening distribution of an energetic parton propagating through a dense QCD plasma~\cite{broadening_paper}. As an introduction to the IOE, it will be instructive to first revisit how it applies to the two aforementioned observables: transverse momentum broadening and the medium-induced gluon radiative spectrum. This will also serve us to lay the basis of the calculation of the fully differential spectrum.

\subsection{Transverse momentum broadening}
\label{sec:momentum-broadening}

The elementary in-medium process that underlies the observables that we discuss in this work is the elastic collision rate $\gamma_{\rm el}\equiv \rmd\sigma_{\rm el}/\rmd^2\q$, where $\q\equiv(q_1,q_2)$ corresponds to the transverse momentum transfer in the $t$-channel between the hard probe and the medium. At leading order in the coupling the rate reads $\gamma_{\rm el} \sim g^4 n/q_\perp^4$, where $n$ corresponds to the density of scattering centers in the medium and the $1/q_\perp^4$ dependence denotes that, at short distances, the interaction is Coulomb-like. On the other hand, when $q_\perp\to 0$, the power-law divergence should be screened by the medium at, roughly speaking, the Debye mass $m_D$ in the plasma.

Equipped with $\gamma_{\rm el}$, we can readily write a rate equation for the transverse momentum broadening distribution $\cP(\k,t)$, which gives the probability for a parton in color representation $R$ to acquire transverse momentum $\k$ due to in-medium propagation during a time $t$,
\beq\label{eq:rate-eq}
\frac{\del \cP(\k,t)}{\del t} = C_R\int_\q \gamma_{\rm el}(\q) \big[\cP(\k-\q,t) -\cP(\k,t) \big]\,,
\eeq
where the final time corresponds to the length of the medium, $t=L$, and $C_R$ is the color factor associated to a representation $R$ of SU$(3)$.\footnote{In this paper we use the notation $\int_\q = \int \frac{\rmd^2 \q}{(2\pi)^2}$ to describe transverse momentum space integrals and $\int_\x= \int \rmd^2 \x$ for integration in position space.} The boundary condition at initial time $t=0$ is simply $\cP(\k,0) = (2\pi)^2 \delta^{(2)}(\k)$. The first term in Eq.~\eqref{eq:rate-eq} accounts for the gain in transverse momentum of the initial parton while the second term reflects the loss of probability for finding said parton with the measured momentum $\k$. Notice that due to rotational symmetry the broadening probability is a function of the modulus of the transverse momentum vector, i.e. $k_\perp\equiv|\k|$. The integral of the collision rate yields the inverse mean free path between two collisions, i.e. $\ell^{-1}_{\rm mfp} \equiv\int_\q \gamma_{\rm el}(\q)$.

At low opacity, $\chi\sim L/\ell_{\rm mfp} \ll 1$, the distribution is dominated by at most a single hard scattering (SH) and one finds
\beq\label{eq:oe-lo-br}
\cP^{\rm SH}(\k,L)= C_R\gamma_{\rm el}(\k)\, L \sim (4\pi)^2\frac{\alpha_s^2C_R nL}{k_\perp^4}\, .
\eeq
Conversely, at high opacity, multiple (soft) scatterings occur with order-one probability and Eq.~\eqref{eq:rate-eq} can be approximated by a diffusion equation for which analytic solutions exist. This is done by expanding in gradients for $q_\perp\ll k_\perp$.
The first non-vanishing contribution involves the jet quenching parameter $\hat q$,
\beq
\hat q = C_R\, \int^{q_{\rm max}}\frac{\rmd^2 \q }{(2\pi)^2} \, q_\perp^2 \, \frac{\rmd \sigma_{\rm el}}{\rmd^2 \q } \approx 4\pi \alpha_s^2 C_R n(t)\log\frac{q_{\rm max}^2}{\mu_\ast^2} \,,
\eeq
where the integral over $\q$ is divergent in the ultraviolet and thus must be regulated, giving rise to the standard Coulomb logarithm, while the infrared region is cut off by the screening mass that we denote by $\mu^2_\ast$. Assuming $\hat q$ to be constant in time, the solution to the diffusion equation is a Gaussian~\cite{broadening_paper} and the associated broadening distribution reads
\beq\label{eq:gaussian}
\cP^{\rm MS}(\k,L) =\frac{4\pi }{\hat q L } \rme^{-\frac{k_\perp^2}{ \hat q L }} \, .
\eeq
Although this result describes the physics of multiple soft scattering (MS) of the probe in the medium, the diffusion approximation has two major drawbacks: (i) it misses the heavy $1/q_\perp^4$ tail associated with large momentum exchanges and (ii) the transport coefficient depends, logarithmically, on an undetermined ultraviolet cutoff scale.

The IOE overcomes these two limitations by shifting the expansion point of the opacity scheme from the vacuum to the harmonic oscillator potential, resulting in the Gaussian distribution presented in Eq.~\eqref{eq:gaussian}. This shift in the expansion is easily performed in position space and thus we should consider the Fourier pair of $\cP(\k,t)$,
\beq\label{eq:rate-momentum}
\cP(\x,t) = \int_\k \cP(\k,t) \, \rme^{i\x\cdot \k }\, .
\eeq
In position space, \eqn{eq:rate-eq} becomes local
\beq\label{eq:rate-position}
\frac{\del \cP(\x,t)}{\del t } = - \, v(\x) \cP(\x,t) \,,
\eeq
implying $\cP(\x,t)=\rme^{-v(\x) t}$, where the scattering potential $v(\x)$ combines the gain and loss terms and is thus ultraviolet finite
\beq \label{eq:v-llog}
v(\x) = C_R\int_\q \, \gamma_{\rm el}(\q) \left(1- \rme^{i\q\cdot \x}\right) \propto x_\perp^2 \log \frac{1}{x_\perp^2 \mu_\ast^2 } \, .
\eeq
In this example, \eqn{eq:rate-position} can be directly integrated, but this is not generally possible, as we shall see in the case of the radiative spectrum. Furthermore, one still needs to invert the Fourier transform, and this is where the IOE scheme will be particularly useful as it allows us to reduce the Fourier transform to a sum of standard integrals.

Let us recall the main difference between the Improved Opacity Expansion procedure and the usual Opacity Expansion (OE) strategy~\cite{Gyulassy:2000fs,Wiedemann,Guo:2000nz}. The latter performs an expansion of \eqn{eq:rate-position} directly in powers of $v(\x)$, yielding a series in powers of $q_\perp^{-2}$ once introduced in \eqn{eq:rate-momentum}, with the leading contribution given by \eqn{eq:oe-lo-br}. In the IOE, one shifts the expansion point to be a solution to Eq.~\eqref{eq:rate-position} with the harmonic oscillator potential $v = v_{\rm HO}\equiv \hat q\, x_\perp^2/4$, whose Fourier transform can be carried out to yield \eqn{eq:gaussian}. If we denote such a solution by $\cP^{\rm LO}$, then the aforementioned shift of the expansion point leads to
\begin{equation}
\cP(\x,L) =\Big[ 1- \int_0^L \rmd t\, \delta v(\x,t) \Big]\cP^{\rm LO}(\x,L) +\mathcal{O}(\delta v^2) \, .
\end{equation}
Here the scattering potential is split into two terms, i.e. $v= v_{\rm HO}+\delta v$, such that $|\delta v| \ll |v_{\rm HO}|$, in which case $\delta v$ can be regarded as a perturbation around the potential $v_{\rm HO}$. In doing so we aim to tame the divergence of the plain Opacity Expansion series at low enough transverse momenta, typically when $q_\perp^2 < \hat q L$.
This separation of $v$ into $v_{\rm HO}$ and $\delta v$ is in general arbitrary and requires the introduction of a matching scale $Q$. Clearly, truncating the IOE series at a fixed order introduces a residual dependence on the separation scale that is of the order of the remainder and thus can be safely neglected. It can nevertheless be used to gauge the uncertainty associated with the fixed-order calculation, very much like the scale dependence encountered in \emph{standard} perturbation theory calculations. To illustrate this point consider the leading logarithmic form given in \eqn{eq:v-llog}. One would trivially write
\begin{equation}\label{eq:ppp}
x_\perp^2 \log \frac{1}{x_\perp^2 \mu_\ast^2 } = x_\perp^2 \left[ \log \frac{Q^2}{ \mu_\ast^2 } + \log \frac{1}{ x_\perp^2 Q^2 } \right]
\end{equation}
and define $v_{\rm HO} \propto x_\perp^2 \log (Q^2/ \mu_\ast^2)$ and $\delta v \propto x_\perp^2 \log 1/(x_\perp^2 Q^2)$, up to overall time-dependent factors. A natural candidate for the separation scale in the case of momentum broadening is $Q^2 \sim \hat q\, t$, corresponding to the average momentum squared accumulated by the probe due to multiple soft momentum exchanges with the medium. In general, when considering other observables, the LO provides guidance as to what scale should be chosen for $Q^2$. We shall see below how to make this observation more precise, in particular regarding the ultraviolet behavior of $\hat q$.

Not only does the IOE fix the divergent behavior of the Opacity Expansion, it also provides a good approximation to the exact result at low transverse momentum provided the following hierarchy of scales is met
\beq
Q^2 \gg \mu_\ast^2 \,.
\eeq
This ensures that the Coulomb logarithm is large, i.e. $\log(Q^2/\mu_\ast^2) \gg 1$, and since at low $k_\perp$ the $\x$ integral is dominated by the region $x_\perp^2 \sim 1/Q^2$, we also have that $\log 1/(x_\perp^2 Q^2) \sim 1$. On the other hand, at large $k_\perp$ the rapidly oscillating Fourier phase implies that $x_\perp\ll k_\perp^{-1} \ll Q^{-1}$, which flips the relative order of the LO and its correction, i.e. $|\delta v| \gg |v_{\rm HO}|$, and thus the logarithmic function in $v(\x)$ can no longer be neglected. Since large momentum transfers are associated with steeply falling cross sections, such a case is associated with rare hard scatterings in the medium, and perturbation theory is applicable, recovering the standard Opacity Expansion.

Following this more qualitative discussion that highlights the strengths of the IOE approach, let us make the discussion more quantitative and rigorous by recalling some of the results presented in Ref.~\cite{broadening_paper}. First, in jet quenching phenomenology two models for the in-medium scattering rate are typically considered. One option, referred to as the Gyulassy-Wang (GW) model~\cite{GW}, is to describe the medium as an ensemble of static scattering centers with Yukawa-like potentials, with the in-medium rate given by
\beq\label{eq:GW}
\gamma_{\rm el}^{\rm GW} (\q,t)= \frac{g^4 n(t)}{(q_\perp^2+\mu^2)^2} \, ,
\eeq
where $\mu$ is the GW screening mass and $n$ the density of scattering centers in the medium. This leads, see Eq.~\eqref{eq:v-llog}, to a scattering potential of the form
\begin{align} \label{eq:v_GW_text}
v^{\rm GW}(\x,t)&=\frac{\hat{q}_0(t)}{\mu^2} \big[ 1-\mu x_\perp K_1(\mu x_\perp)\big] \, ,
\end{align}
where we have introduced the \textit{bare} jet quenching parameter $\hat q_0(t)=4\pi\alpha_s^2 C_A n(t)$ and $K_1$ is the modified Bessel function of the second kind of order $1$.
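A minimal numerical sketch (in dimensionless units, and not part of the ancillary routines of this paper) can verify that the GW potential of Eq.~\eqref{eq:v_GW_text} reduces at small $x_\perp$ to the universal leading-log form of Eq.~\eqref{eq:v_LL} below, using the mapping $4\mu_\ast^2=\mu^2 \rme^{-1+2\gamma_E}$ quoted in the footnote following that equation:

```python
import numpy as np
from scipy.special import k1

# Check that v_GW(x) -> (qhat0/4) x^2 log(1/(x^2 mu_*^2)) as x -> 0,
# with 4*mu_*^2 = mu^2 * exp(-1 + 2*gamma_E). Units: qhat0 = mu = 1.
qhat0, mu = 1.0, 1.0
gamma_E = 0.5772156649
mu_star2 = mu**2 * np.exp(-1.0 + 2.0 * gamma_E) / 4.0

def v_gw(x):
    return qhat0 / mu**2 * (1.0 - mu * x * k1(mu * x))

def v_ll(x):
    return 0.25 * qhat0 * x**2 * np.log(1.0 / (x**2 * mu_star2))

for x in (0.3, 0.1, 0.03):     # x_perp << 1/mu: ratio should approach 1
    print(f"x = {x:5.2f}: v_GW/v_LL = {v_gw(x) / v_ll(x):.4f}")
```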
Another popular choice is to describe the medium as a thermal bath, so that the scattering potential can be perturbatively computed using Hard Thermal Loop (HTL) effective theory~\cite{HTL}, with
\beq\label{eq:HTL}
\gamma_{\rm el}^{\rm HTL} (\q,t)= \frac{g^2m_D^2(t)T}{q_\perp^2\,\big[q_\perp^2+m_D^2(t) \big]} \,.
\eeq
Here $T$ is the temperature of the medium and $m_D$ the Debye screening mass. The corresponding scattering potential reads
\begin{align} \label{eq:v_HTL_text}
v^{\rm HTL}(\x,t)= \frac{2\hat{q}_0(t)}{m_D^2(t)}\left[ K_0(m_D(t)x_\perp)+\log\left(\frac{m_D(t)x_\perp}{2}\right)+\gamma_E \right] \, ,
\end{align}
where now $\hat{q}_0(t)=\alpha_s C_A m_D^2(t) T$, $\gamma_E = 0.577216\ldots$ is the Euler-Mascheroni constant and $K_0$ is the modified Bessel function of the second kind of order $0$.\footnote{Here we defined the jet quenching parameter for gluons, i.e. $C_R=C_A$.} The differences and similarities between these two models have been extensively discussed in Refs.~\cite{IOE3,broadening_paper}. To leading logarithmic order, they can be unified in a universal form, in accordance with \eqn{eq:v-llog},
\begin{equation}\label{eq:v_LL}
v(\x,t)\equiv \frac{1}{4} \hat{q}_0(t) x_\perp^2 \log \frac{1}{x_\perp^2\mu_\ast^2} +\cO(x_\perp^4\mu_\ast^2)\, ,
\end{equation}
where $\mu_\ast$ is a universal screening mass that can be mapped to the masses of both models considered above.\footnote{The GW mass $\mu$ is related to the universal mass $\mu_\ast$ by $4\mu_\ast^2=\mu^2 \rme^{-1+2\gamma_E}$, and the Debye mass $m_D$ in HTL corresponds to $4\mu_\ast^2= m_D^2 \rme^{-2+2\gamma_E} $ \cite{broadening_paper}.} Applying the IOE prescription to split the potential as $v=\vLO+\delta v$, see \eqn{eq:ppp}, and inserting it back into \eqn{eq:rate-position}, we obtain, after expanding in powers of the perturbative potential $\delta v$,
\begin{equation}\label{eq:expli_cP_series}
\begin{split}
\cP(\k,L)&=\int_\x \, \rme^{-i \x \cdot \k }\rme^{-\frac{1}{4}x_\perp^2Q^2}\sum_{n=0}^{n_{\rm max}} \, \frac{(-1)^n Q_{s0}^{2n}}{4^nn!} \, x_\perp^{2n} \log^{n}\frac{1}{x_\perp^2Q^2} \\
&\equiv \cP^{\rm LO}(\k,L) + \cP^{\rm NLO}(\k,L) + \cP^{\rm NNLO}(\k,L) + \ldots \, ,
\end{split}
\end{equation}
where we identify the next-to-leading order (NLO) term with the contribution $\cO(\delta v)$, the next-to-next-to-leading order (NNLO) with the $\cO(\delta v^2)$ term, and so on.\footnote{The series is truncated at $n_{\rm max}\sim Q_{s0}^2/\mu_\ast^2$ since formally this is a divergent asymptotic series; the divergence is physically associated to the fact that $x_\perp$ can not be smaller than $1/\mu_\ast$ --- see Ref.~\cite{Iancu:2004bx} for a further discussion on this truncation.} In Eq.~\eqref{eq:expli_cP_series}, we have introduced the \emph{bare} saturation scale
\begin{align}\label{eq:old_Qs0}
&Q_{s0}^2(L) = \int_{0}^{L} \rmd t\, \hat q_0(t) \, ,
\end{align}
where we allow the \textit{bare} jet quenching parameter to vary in time. In addition, we define the \textit{effective} jet quenching parameter $\hat q(t)=\hat q_0(t) \log \frac{Q^2}{\mu_\ast^2}$, where the logarithmic dependence appears naturally from the splitting of $v(\x)$. As discussed above, the definition of the matching scale $Q$ cannot be cast in a closed form, since $Q$ enters the definition of $\hat q$ while also depending on it directly.
In turn, it is obtained by solving the transcendental equation
\begin{align}\label{eq:old_Qb}
&Q_b^2\equiv Q_s^2(L) = \int_0^{ L} \rmd t\, \hat q_0(t) \,\log \frac{Q_b^2( L)}{\mu_\ast^2}\, ,
\end{align}
where, following our previous reasoning, we have identified $Q^2 \equiv Q^2_b$ with the \emph{effective} saturation scale $Q_s^2$. We truncate \eqn{eq:expli_cP_series} at NLO accuracy, since already at this order both the hard and soft regimes should be well described. The resulting broadening distribution reads~\cite{broadening_paper}
\begin{equation}\label{eq:golden}
\cP^{\rm{LO+NLO}}(\k,L)= \frac{4\pi}{Q_s^2} \rme^{-x} - \frac{4\pi}{Q_s^2} \lambdaq \left\{1-2 \rme^{-x} + \left(1-x\right) \left[{\rm Ei}\left( 4x\right)-\log 4x\right] \right\}\, ,
\end{equation}
where $x = k_\perp^2/Q_s^2$, and
\beq
\lambdaq \equiv \frac{\hat{q}_0}{\hat{q}}=\frac{1}{\log \frac{Q^2}{\mu_\ast^{ 2}}} \ll 1\,,
\eeq
is the expansion parameter of the series in the regime $k_\perp^2 \lesssim Q_s^2$.\footnote{The exponential integral function is defined as ${\rm {Ei}}(x)=\displaystyle\int_{-\infty}^x \rmd t \, \frac{\rme^{t}}{t}$.} At large momentum exchanges, $k_\perp^2 \gg Q_s^2$, one obtains from the NLO term, and in accordance with \eqn{eq:oe-lo-br}, that
\begin{align}\label{eq:plot_ppp}
\cP(\k,L)^{\rm{NLO}}\Big\vert_{k_\perp^2\gg Q_{s}^2}=4\pi\frac{Q_{s0}^2}{k_\perp^4}+\mathcal{O}\left(\frac{Q_{s0}^4}{k_\perp^6}\right) \, ,
\end{align}
while the LO term is exponentially suppressed. In this high momentum limit, we recover the Coulomb tail encoded in a single scattering in the medium. On the other end, when $k_\perp^2 \ll Q_s^2$, we find
\beq \label{eq:assist_1}
\cP(\k,L)^{\rm{LO+NLO}}\Big\vert_{k_\perp^2\ll Q_{s}^2} = \frac{4\pi}{Q_s^2}\left( 1+ \lambdaq\log 4 \rme^{1-\gamma_E} \right)+ \cO\left( \lambdaq^2\right)\, .
\eeq
The first term corresponds to the LO contribution. Thus, the NLO term, up to a small constant logarithm, is of the same functional form as the LO but power suppressed by $\lambdaq \ll 1$. In fact, one can show that, in this regime, perturbative corrections in the IOE scale as the LO term, with each increasing order suppressed by an extra power of $\lambdaq=\hat q_0/\hat q$. Hence, in this limit the LO term dominates and one recovers the multiple soft solution, which correctly describes the physics at play. In Fig.~\ref{fig:broad} we numerically compare the broadening distribution $\cP(\k,L)$, for a medium with constant $\hat q_0$, computed up to LO and NLO in the IOE, with the full $\cP$ obtained using Eqs.~\eqref{eq:rate-momentum}, \eqref{eq:rate-position} and the GW potential in \eqn{eq:v_GW_text}. The result follows the above discussion: at large momentum transfers, $k_\perp^2\gg \hat q L$, the NLO term dominates and converges to the full result, which is dominated by the single hard scattering contribution ($k_\perp^{-4}$). On the other hand, at low momentum transfers the LO and LO+NLO become comparable, reproducing the full result within an uncertainty band associated to the remaining freedom in the definition of $Q_b^2$. The biggest mismatch between the LO+NLO result and the full distribution happens near the peak of the distribution and could eventually be improved by adding more orders in the series. Nonetheless, it is clear that the IOE approach provides a neat interpolation between the soft and hard regimes, instead of properly describing just one of them.
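To make the above formulas concrete, the short Python sketch below (again with the illustrative brick parameters of Fig.~\ref{fig:broad} and a constant $\hat q_0$) solves the transcendental equation \eqn{eq:old_Qb} by fixed-point iteration and evaluates the LO+NLO distribution of \eqn{eq:golden}; all names and values are assumptions for the purpose of illustration.
\begin{verbatim}
import numpy as np
from scipy.special import expi

qhat0, L, mustar = 0.16, 6.0*5.068, 0.355   # GeV^3, GeV^-1, GeV

# Fixed-point iteration for Q_b^2 = qhat0 L log(Q_b^2/mustar^2), Eq. (old_Qb);
# the map is contracting since qhat0*L/Q_b^2 < 1 at the solution
Qb2 = 10.0 * mustar**2                      # starting guess above mustar^2
for _ in range(100):
    Qb2 = qhat0 * L * np.log(Qb2 / mustar**2)

Qs2 = Qb2                                   # effective saturation scale
lam = 1.0 / np.log(Qb2 / mustar**2)         # lambda_q = qhat0/qhat << 1

def P_lo_nlo(kt):
    # LO+NLO broadening distribution, Eq. (golden); valid for kt > 0
    x = kt**2 / Qs2
    lo  = 4.0*np.pi/Qs2 * np.exp(-x)
    nlo = -4.0*np.pi/Qs2 * lam * (1.0 - 2.0*np.exp(-x)
          + (1.0 - x)*(expi(4.0*x) - np.log(4.0*x)))
    return lo + nlo

print(f"Q_s^2 = {Qs2:.2f} GeV^2,  lambda_q = {lam:.2f}")
for kt in (1.0, 3.0, 6.0):
    print(f"P({kt:3.1f} GeV) = {P_lo_nlo(kt):.5f} GeV^-2")
\end{verbatim}
Multiplying the converged $Q_b^2$ by factors of $2$ and $1/2$ generates the kind of matching-scale uncertainty band displayed in Fig.~\ref{fig:broad}.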
\begin{figure}[t!]
\centering
\includegraphics[scale=.8]{plot-broadening.pdf}
\caption{Comparison between the broadening probability distribution for the IOE at LO (dashed, green), at LO+NLO (solid, red) and the exact GW model result (solid, navy). In addition, we provide the single hard scattering solution given by \eqn{eq:plot_ppp}, which we denote by $k^{-4}_\perp$ (dotted, purple). The ratio to the full solution is presented in the bottom panels. The uncertainty band arises from variations in the matching scale by factors of $2$ and $1/2$. The medium parameters are $\hat q_0\!=\!0.16$~GeV$^3$, $L=6$~fm and $\mu_\ast=0.355$~GeV. They are identical to the ones used in Section~\ref{sec:numerics}.}
\label{fig:broad}
\end{figure}

\subsection{The energy spectrum}
\label{sec:energy-spectrum}

As a second illustrative example, we consider the application of the IOE to compute the medium-induced gluon energy spectrum. The in-medium emission spectrum of a soft gluon with energy $\omega$ from a hard parton with energy $E\gg\omega$ in color representation $R$ can be compactly cast as \cite{Blaizot:2015lma}
\begin{align} \label{eq:BDMPS_spec_energy}
\omega\frac{\rmd I}{\rmd \omega } =\frac{2\bar{\alpha} \pi}{\omega^2} \Re \int_0^\infty \rmd t_2 \int_0^{t_2} \rmd t_1 \, \bdel_\y\cdot \bdel_\x \big[\cK(\x,t_2;\y,t_1) - \cK_0(\x,t_2;\y,t_1)\big]_{\x=\y=0} \, .
\end{align}
Here $\bar\alpha=\alpha_sC_R/\pi$ and $\cK(\x,t_2;\y,t_1)$ is an effective emission kernel describing the broadening of the emitted gluon during its formation. It corresponds to the evolution operator of a quantum particle immersed in the imaginary potential $iv(\x)$ in 2+1 dimensions and obeys the Schr\"odinger equation
\begin{equation} \label{eq:cK_Sch}
\left[i\frac{\partial}{\partial t}+\frac{\bdel^2_\x}{2\omega}+iv(\x,t)\right]\cK(\x,t;\y,t_1)=i\delta^{(2)}(\x-\y)\delta(t-t_1) \,,
\end{equation}
which resums multiple scatterings of the radiated gluon with the medium between the emission times $t_1$ and $t_2$ in the amplitude and its complex conjugate. For a general potential $v(\x,t)$ that includes the Coulomb tail at large momentum transfers, a closed form solution to Eq.~\eqref{eq:cK_Sch} is not known. An analytical solution can nevertheless be obtained for two special choices of the potential: vacuum and harmonic oscillator. In the vacuum case, setting $v(\x,t)=0$ leads to the following solution of Eq.~\eqref{eq:cK_Sch}
\begin{align}\label{eq:cK_vac}
\cK_0(\Delta\x,\Delta t) = \frac{\omega }{2\pi i \Delta t} \exp\left(i\frac{\omega\Delta\x^2}{2 \Delta t}\right) \, ,
\end{align}
where $\Delta\x = \x-\y$ and $\Delta t = t_2-t_1$. Note that this contribution is explicitly removed in Eq.~\eqref{eq:BDMPS_spec_energy} so that the result is only sensitive to the purely medium-induced contribution. In fact, we can also express the resummed propagator, given by the solution of Eq.~\eqref{eq:cK_Sch}, as a Dyson-like iterative equation that resums multiple interactions around the vacuum solution, namely
\begin{align} \label{eq:cK-opacity-expansion}
\cK(\x,t_2;\y,t_1) &= \cK_0(\x-\y, t_2-t_1) - \int_{t_1}^{t_2} \rmd s \int_{\z} \, \cK_0(\x-\z,t_2 -s) v(\z,s) \cK(\z,s; \y,t_1) \,.
\end{align}
From the structure of the equation, we immediately see that, for a time independent rate $v(\x,t) = v(\x)$, the function $\cK$ only depends on $\tau\equiv t_2-t_1$. This equation is equivalent to an expansion in medium opacity $\chi$, defined as $\chi= L/\ell_{\rm mfp}$.
Computing the radiative spectrum by truncating the expansion in \eqn{eq:cK-opacity-expansion} at a fixed order in $v(\x,t)$, or $\chi$, corresponds to the Opacity Expansion introduced in the previous section. Consistently, the $n$-th term in the OE scales as $\omega\,\rmd I^{(n)}/\rmd\omega \sim \cO(\chi^n)$. The single scattering solution corresponds to the $n=1$ truncation of the expansion and is often referred to as the GLV spectrum~\cite{GLV,Wiedemann}. The other special case where Eq.~\eqref{eq:cK_Sch} is analytically solvable is when $v(\x,t) = \vLO(\x,t) = \frac{1}{4}\hat q(t)\, x_\perp^2$, that is, when the potential reduces to that of an harmonic oscillator. We recall that the IOE splits the leading logarithmic potential given in \eqn{eq:v_LL} as
\begin{equation}\label{eq:v_IOE}
v(\x,t)\equiv \vLO+\delta v =\frac{1}{4} \hat{q}_0(t) x_\perp^2 \log \frac{Q^2}{\mu_\ast^2}+\frac{1}{4} \hat{q}_0(t)x_\perp^2 \log \frac{1}{x_\perp^2Q^2} \, ,
\end{equation}
where $Q$ is for now an undetermined matching scale, different from the one used for the broadening case; as before, the \textit{effective} jet quenching parameter is $\hat q(t) = \hat q_0(t) \log \frac{Q^2}{\mu_\ast^2}$. Thus, similarly to the transverse momentum broadening discussed in the previous section, the solution to Eq.~\eqref{eq:cK_Sch} with a quadratic potential, which we denote as $\cK = \cK^{\rm LO}$, corresponds to the leading order (LO) term in the Improved Opacity Expansion. It reads
\begin{align} \label{eq:cK_BDMPS_2}
\cK^{\rm LO}(\x,t_2;\y,t_1)&= \frac{\omega}{2\pi i S(t_2,t_1)}\exp\left( \frac{i\omega}{2S(t_2,t_1)} \left[ C(t_1,t_2)\,\x^2+C(t_2,t_1)\,\y^2-2 \x\cdot\y\right] \right) \,.
\end{align}
Here, $C(t_2,t_1)$ and $S(t_2,t_1)$ are purely time dependent functions which are solutions to the initial condition problems~\cite{Arnold_simple}
\begin{equation} \label{eq:cs-ho-equations}
\begin{split}
&\left[\frac{\rmd^2}{\rmd t^2}+\Omega^2(t)\right]S(t,t_0)=0 \, ,\quad S(t_0,t_0)=0 \,,\quad \partial_t S(t,t_0)_{t=t_0}=1 \, , \\
&\left[\frac{\rmd^2}{\rmd t^2}+\Omega^2(t)\right]C(t,t_0)=0 \, ,\quad C(t_0,t_0)=1 \,,\quad \partial_t C(t,t_0)_{t=t_0}=0\, ,
\end{split}
\end{equation}
with the complex harmonic oscillator frequency $\Omega(t)$ given by
\begin{equation}
\Omega(t)=\frac{1-i}{2}\sqrt{\frac{\hat{q}(t)}{\omega}} \, .
\end{equation}
More details on the properties of these functions can be found in Appendix~\ref{app:cK_appendix}. Inserting \eqn{eq:cK_BDMPS_2} back into Eq.~\eqref{eq:BDMPS_spec_energy} and performing the time integrals, one obtains the spectrum at leading order in the IOE (or equivalently in the harmonic approximation). The final expression reads
\begin{equation} \label{eq:dIdw_BDMPS}
\omega\frac{\rmd I^{\rm LO}}{\rmd \omega} = 2\Bar{\alpha}\log \big\vert C(0,L) \big\vert\,,
\end{equation}
and is often referred to as the BDMPS-Z spectrum \cite{BDMPS3,BDMPS2}. The LO contribution to the IOE spectrum takes a particularly simple form in the case where the medium has an extension $L$ with a constant density $n$; we refer to this simple medium model as the plasma brick model. In the brick model one can simply define the jet quenching parameter as $\hat q(t) =\hat q\, \Theta(L-t) $, which allows one to write the $C$ and $S$ functions as
\beq \label{eq:S-and-C-functions}
S(t_2,t_1) = \frac{1}{\Omega} \sin \Omega (t_2-t_1) \,, \qquad \text{and} \qquad C(t_2,t_1)= \cos \Omega (t_2-t_1) \,.
\eeq
In this case, the well-known behavior of the spectrum at asymptotically low and high frequencies is
\beq \label{eq:dIdw_BDMPS_cases}
\omega \frac{\rmd I^{\rm LO}}{\rmd \omega} \simeq 2\bar \alpha \begin{dcases} \,\,\sqrt{\frac{\omega_c}{2\omega}} \quad\qquad\text{for}\quad \omega \ll \omega_c\\ \,\,\frac{1}{12} \left(\frac{\omega_c}{\omega} \right)^2\quad\text{for} \quad \omega \gg \omega_c \,,\\ \end{dcases}
\eeq
where the characteristic gluon energy $\omega_c = \hat q L^2/2$ corresponds to gluons with maximal formation time, i.e. $t_f = L$. The behaviour in the soft limit highlights the Landau-Pomeranchuk-Migdal (LPM) interference~\cite{LPM1,LPM2} that occurs since the gluon is formed over timescales involving multiple interactions with the medium. The strong suppression at high gluon energies follows directly from the approximation of multiple soft interactions, implicit in the harmonic form. At these frequencies, i.e. $\omega> \omega_c$, the contribution from a single, hard scattering can be shown to dominate, as we will discuss below. Let us now construct the contributions to the IOE beyond the LO term. Adopting the decomposition provided by \eqn{eq:v_IOE}, which allows us to separate the harmonic part from the $\x$-dependent Coulomb logarithm, and in analogy to the resummation around the vacuum solution given by Eq.~\eqref{eq:cK-opacity-expansion}, the full kernel can be written as
\begin{equation} \label{eq:cK_ful_IOE}
\cK(\x,t_2;\y,t_1) = \cK^{\rm LO}(\x,t_2;\y,t_1)- \int_{t_1}^{t_2} \rmd s \int_\z \, \cK^{\rm LO}(\x,t_2;\z,s)\, \delta v(\z,s)\cK(\z,s;\y,t_1) \,.
\end{equation}
Truncating this relation at $\cO(\delta v^2)$, it is easily seen that the LO kernel is given by $\cK^{\rm LO}$ in Eq.~\eqref{eq:cK_BDMPS_2}. The NLO kernel reads
\begin{align}
\cK^{\rm NLO}(\x,t_2;\y,t_1) &= - \int_{t_1}^{t_2} \rmd s \int_\z \, \cK^{\rm LO}(\x,t_2;\z,s) \delta v(\z,s) \cK^{\rm LO}(\z,s;\y,t_1) \, ,
\end{align}
which can be used in Eq.~\eqref{eq:BDMPS_spec_energy} to compute the NLO contribution to the IOE spectrum, as was done for the LO term. As in the broadening case, we do not consider higher order terms, since truncating the series at NLO is enough to reproduce the single hard and multiple soft regimes. At this order, the spectrum reads~\cite{IOE1,IOE2,IOE3}
\beq \label{eq:dIdw_LO_p_NLO}
\omega\frac{\rmd I^{{\rm LO+NLO}}}{\rmd \omega} = 2\bar{\alpha}\log \big\vert C(0,L) \big\vert +\frac{1}{2}\bar{\alpha}\hat{q}_0 \,\Re \int_0^L \rmd s\, \frac{-1}{k^2(s)} \log\frac{-k^2(s)}{Q^2 \rme^{-\gamma_E}} \,,
\eeq
where
\begin{equation}\label{eq:k_NLO}
k^2(s)= -\frac{i\omega}{2} \left[{\rm Cot}(s,\infty)+{\rm Cot}(0,s)\right] \, ,
\end{equation}
and we have defined the ratio ${\rm Cot}(t_2,t_1) \equiv C(t_1,t_2)/S(t_2,t_1)$. We now analyze the asymptotic forms of the spectrum by considering the brick model. In this case \eqn{eq:k_NLO} reduces to
\begin{equation} \label{eq:k_NLO_brick}
k^2(s) = \frac{i\omega \Omega}{2} \big[ {\cot}\,\Omega s - \tan\Omega(L-s) \big] \, .
\end{equation}
At high frequencies, that is $\omega\gg\omega_c$ or $\Omega L\ll1$, one finds that Eq.~\eqref{eq:k_NLO_brick} leads to $k^2(s)\simeq i\omega/(2s)$. The high-frequency behavior of the NLO term is given by
\begin{equation} \label{eq:NLO_scaling_high_energy}
\omega \frac{\rmd I^{\rm NLO}}{\rmd \omega}\simeq \Bar{\alpha}\hat{q}_0\frac{\pi}{4}\frac{L^2}{2\omega} =\frac{ \Bar{\alpha} \pi}{4} \chi\,\frac{\bar{\omega}_c}{\omega} \, .
\end{equation}
It dominates the spectrum, given the quadratic $\omega$ suppression of the LO term, see Eq.~\eqref{eq:dIdw_BDMPS_cases}.
In this last equation, we recall that the medium opacity parameter is $\chi\equiv \hat q_0 L / \mu_\ast^2 \sim L/\ell_{\rm mfp}$ and we have introduced the frequency $\bar\omega_c=\frac{1}{2}\mu_\ast^2 L$. Higher-order terms are all suppressed by at least one additional power of $1/\omega$ as well~\cite{IOE3}. Thus, similar to the discussion for the broadening distribution $\cP(\k)$, one observes that the dominant term, given in Eq.~\eqref{eq:NLO_scaling_high_energy}, comes solely from the NLO contribution, and it can be shown to exactly match the medium-induced spectrum obtained by considering a single hard scattering in the medium~\cite{IOE1,GLV}, i.e. $n=1$ in the traditional Opacity Expansion. Furthermore, Eq.~\eqref{eq:NLO_scaling_high_energy} is independent of the matching scale, analogous to what was observed for $\cP(\k)$ in \eqn{eq:plot_ppp}. At low frequencies, i.e. for $\omega\ll\omega_c$ or $\Omega L \gg1$, the NLO term, containing the single hard scattering physics, can be simplified by noticing that $k^2(s) \simeq -\omega\Omega$, leading to~\cite{IOE1,IOE2,IOE3}
\begin{equation} \label{eq:NLO_smallw_maineq}
\omega \frac{\rmd I^{\rm NLO}}{\rmd\omega} \simeq \Bar{\alpha} \lambdaq \sqrt{\frac{\omega_c}{2\omega}}\left[\gamma_E+\log\left(\frac{\sqrt{\omega \hat{q}}}{\sqrt{2}Q^2}\right)+\frac{\pi}{4}\right] \,,
\end{equation}
which is equivalent to the next-to-leading logarithmic result derived in Ref.~\cite{Arnold:2008zu}. Again, as observed for the broadening distribution, in the soft regime higher order terms in the IOE scale as the LO contribution, see \eqn{eq:dIdw_BDMPS_cases}, with increasing power suppression by $\lambdaq = (\log \frac{Q^2}{\mu_\ast^2} )^{-1} \ll1$. In fact, one can show that
\begin{equation} \label{eq:expansion_IOE_small_frequency}
\left.\frac{\rmd I/\rmd \omega}{\rmd I^{\rm LO}/\rmd \omega} \right|_{\omega \ll \omega_c} = 1+\lambdaq \left(a_0+a_1\log\frac{\sqrt{\omega \hat{q}}}{Q^2} \right) + \lambdaq^2 \left(b_0 + b_1 \log\frac{\sqrt{\omega \hat{q}}}{Q^2} + b_2\log^2\frac{\sqrt{\omega \hat{q}}}{Q^2}\right) +\ldots \, ,
\end{equation}
where $\{a_i, b_i\}$ are purely numerical coefficients \cite{IOE3}. This result implies that, in the soft limit, the full spectrum can be written in terms of the LO result with an effective jet quenching coefficient $\hat q_{\rm eff}$ that absorbs the additional logarithmic dependencies. More importantly, Eq.~\eqref{eq:expansion_IOE_small_frequency} imposes further constraints on the matching scale $Q^2$. To see this, let us first assume that the matching scale associated to the radiation spectrum, $Q^2 = Q_r^2$, is independent of $\omega$. Then, in \eqn{eq:expansion_IOE_small_frequency} the logarithms entering $\lambdaq$ would be frozen. However, the logarithms in the numerators would evolve quite rapidly for $\mu_\ast^4 \ll \omega\hat q \ll Q_r^4$, leading to a divergent series (notice that the LO contribution would be negligible in this case). Thus, one concludes that $Q_r = Q_r(\omega)$ in order for the spectrum to be free of unphysical divergences. In addition, one sees that the natural way to regulate the numerators is to take\footnote{Here we assume that $\hat{q}$ is time independent to simplify the discussion. The generic form for \eqn{eq:Q_r} is discussed in the next section.}
\begin{equation}\label{eq:Q_r}
Q_r^2=\sqrt{\hat{q}_0\omega\log{\frac{Q_r^2}{\mu_\ast^2}}} \, .
\end{equation}
Moreover, it can be shown~\cite{IOE3} that this form follows directly from the fact that, once all orders in the IOE are resummed, the spectrum takes the functional form of the LO term.
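As an illustration of how \eqn{eq:Q_r} is used in practice, the sketch below solves it by fixed-point iteration and evaluates the LO energy spectrum of \eqn{eq:dIdw_BDMPS} for the brick model. The coupling is a hypothetical choice, and the iteration is abandoned whenever it leaves the physical domain, anticipating the absence of a solution at low $\omega$ discussed next.
\begin{verbatim}
import numpy as np

qhat0, L, mustar = 0.16, 6.0*5.068, 0.355   # GeV^3, GeV^-1, GeV (brick)
alphabar = 0.3 * 3.0 / np.pi                # hypothetical: alpha_s=0.3, C_R=C_A=3

def Qr2(w):
    # Fixed point of Qr^2 = sqrt(qhat0 w log(Qr^2/mustar^2)), Eq. (Q_r);
    # returns None when no solution with Qr^2 > mustar^2 exists
    q2 = 10.0 * mustar**2
    for _ in range(200):
        arg = np.log(q2 / mustar**2)
        if arg <= 0.0:
            return None
        q2 = np.sqrt(qhat0 * w * arg)
    return q2 if q2 > mustar**2 else None

def wdIdw_LO(w):
    # LO (BDMPS-Z) energy spectrum for the brick, Eq. (dIdw_BDMPS):
    # w dI/dw = 2 alphabar log|cos(Omega L)|, with Omega evaluated at Qr(w)
    q2 = Qr2(w)
    if q2 is None:
        return float("nan")                 # Bethe-Heitler region, IOE invalid
    qhat  = qhat0 * np.log(q2 / mustar**2)
    Omega = (1.0 - 1.0j)/2.0 * np.sqrt(qhat / w)
    return 2.0 * alphabar * np.log(abs(np.cos(Omega * L)))

for w in (0.5, 2.0, 10.0, 50.0):            # gluon energies in GeV
    print(f"w = {w:5.1f} GeV :  w dI/dw (LO) = {wdIdw_LO(w):.4f}")
\end{verbatim}
For soft gluons the output reproduces the $\sqrt{\omega_c/2\omega}$ growth of \eqn{eq:dIdw_BDMPS_cases}, while at large $\omega$ the LO result dies off quadratically.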
For the present paper and the following calculations, the main message is that Eq.~\eqref{eq:Q_r} ensures that at low energies the spectrum is well behaved and non-physical divergences are absent. Also, and again in analogy to the broadening case, at leading logarithmic order $Q_r^2 \sim\sqrt{\hat q_0\omega}$, which, using the above relations for the gluon formation time and the average accumulated momentum, can be translated into the typical momentum acquired by a gluon with frequency $\omega\ll\omega_c$. The solutions of Eq.~\eqref{eq:Q_r} are discussed in Appendix~\ref{app:Q}. Finally, we still need to ensure that $Q_r^2\gg\mu_\ast^2$ in order to justify the expansion. Ignoring the logarithmic dependence in the matching scale, we observe that the IOE approach only works if
\begin{equation}
\omega_{\rm BH} \ll \omega \,,
\end{equation}
where we defined the characteristic Bethe-Heitler (BH) frequency as $\omega_{\rm BH} = \mu_\ast^4/\hat q_0$. This condition means that the current scheme is not valid in the BH regime~\cite{Bethe:1953va}, see Ref.~\cite{Andres:2020kfg} for a similar conclusion and further discussion regarding the analytic treatment of the BH region. This regime is characterized by gluons with a formation time of the order of the mean free path in the medium, acquiring a momentum $k_\perp^2\sim\hat q_0 \ell_{\rm mfp} \sim\mu_\ast^2$ and with a typical energy $\omega_{\rm BH} \sim T$ of the order of the medium temperature. At this scale, non-linear dissipation effects, such as gluon absorption, take place \cite{Baier:2000sb}. However, in the case of large or dense enough media (such that $Q^2 \gg m_D^2$) the BH regime is power suppressed, and radiative energy loss in the calculation of inclusive jet observables is dominated by frequencies in the deep LPM regime \cite{Baier:2001yt}.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.8]{plot-integrated-spectrum.pdf}
\caption{Comparison between the energy spectrum computed with GLV (dotted, purple), the IOE at LO (dashed, green), at LO+NLO (solid, red) and the all-order spectrum (solid, navy) as computed in~\cite{CarlotaFabioLiliana}. The ratio to the full solution is presented in the bottom panels. The uncertainty band arises from variations in the matching scale and the gray region indicates the regime in which Eq.~\eqref{eq:Q_r} does not have a solution. The parameters used are identical to those of Fig.~\ref{fig:broad} and $\omega_{c0}\!\equiv\!\hat q_0 L^2$.}
\label{fig:spectrum}
\end{figure}
In Fig.~\ref{fig:spectrum}, we show the medium-induced single gluon spectrum computed up to NLO in the IOE, compared with a full numerical solution of Eq.~\eqref{eq:BDMPS_spec_energy}~\cite{CarlotaFabioLiliana} and with the GLV spectrum, corresponding to the limit of single scattering in the medium. The gray band indicates the region in which Eq.~\eqref{eq:Q_r} does not have a valid solution, i.e. where the IOE approach is not valid. A similar numerical comparison was previously carried out in Ref.~\cite{Andres:2020kfg}. As discussed above, we numerically observe that in the soft sector, $\omega_{\rm BH} \ll\omega\ll\omega_c$, the difference between the full result and the LO contribution is small, and including the NLO provides a very good approximation. In addition, the IOE has no divergences, since the matching scale is chosen for each $\omega$ by solving Eq.~\eqref{eq:Q_r}. At frequencies $\omega\gg\omega_c$, we observe that the LO is power suppressed, while the NLO term converges to the full result even faster than the GLV approximation. Overall, the agreement between the LO+NLO result and the full numerical solution is outstanding.
\section{The medium-induced radiative kernel with the IOE}\label{sec:IOE_radiation_spectrum}

After having revised the building blocks of the IOE, we proceed to compute the fully differential medium-induced spectrum for a gluon with energy $\omega$ and transverse momentum $\k$. We assume that the emitted gluon is soft, $\omega \ll E$, and collinear, $\theta^2\sim k_\perp^2/\omega^2\ll 1$, with $E$ being the energy of the emitter. The emitter follows an eikonal trajectory and its kinematics are frozen. Regarding the medium properties, which are encapsulated by the jet quenching parameter $\hat{q}$, we assume that it has a smooth time profile almost everywhere and that, at large distances, the system reaches the vacuum sufficiently fast, i.e. $\lim\limits_{t\to \infty}\hat{q}(t)= 0$. Then, we study a particular scenario where the medium has a simple time dependence: up to a distance $L$ the jet quenching parameter is positive and constant, while for times larger than $L$, $\hat{q}\!=\!0$. This corresponds to the previously mentioned plasma brick model, where the medium is a slab with longitudinal size $L$, after which there is vacuum; mathematically it corresponds to defining the jet quenching parameter as $\hat{q}(t) \!=\! \hat{q}\,\Theta(L-t) $. Under these assumptions, the purely medium-induced spectrum can be expressed as a convolution between the broadening probability distribution, $\cP$, and the splitting kernel, $\cK$, that we have introduced in the previous section. It reads
\begin{align}\label{eq:spectrum}
(2\pi)^2\omega\frac{\rmd I}{\rmd \omega \rmd^2 \k}&=\lim_{\epsilon\to 0}\frac{2\bar{\alpha}\pi}{\omega^2} \Re \int_0^\infty \rmd t_2 \, \rme^{- \epsilon (t_2+t_1)} \int_0^{t_2} \rmd t_1 \int_\x \, \rme^{-i \k \cdot \x}\, \cP(\x,\infty;t_2) \nn
& \bdel_\x\cdot \bdel_\y \cK(\x,t_2;\y,t_1)_{\y=0} - (2\pi)^2\omega\frac{ \rmd I^{\rm vac}}{\rmd\omega \rmd^2\k} \, ,
\end{align}
where $t_1$ and $t_2$ correspond to the gluon splitting light-cone times in the amplitude and conjugate amplitude, respectively, and span from the creation point inside the medium at $t_1\!=\!t_2\!=\!0$ up to any possible in-vacuum or in-medium splitting time. In \eqn{eq:spectrum}, we explicitly denote the starting time $t_2$ of the broadening distribution, so that, compared to our formulas in the previous section, $\cP(\x,L) \equiv \cP(\x,L;0)$. Also, in \eqn{eq:spectrum} we employ the adiabatic turn-off prescription, which prevents the emission of purely vacuum-like radiation at asymptotically large times, with the $\epsilon\to 0$ limit being implicit for the rest of the paper. The last term in the formula subtracts a contribution corresponding to purely vacuum radiation off the hard emitter, given by (see the appendix)
\beq\label{eq:vac-spec}
(2\pi)^2\omega\frac{\rmd I^{\rm vac}}{\rmd \omega \rmd^2\k} = \frac{4\bar{\alpha}\pi}{k_\perp^2} \, .
\eeq
Before proceeding further with the explicit analytic evaluation of \eqn{eq:spectrum}, we anticipate a subtlety when carrying out the time integrals with the adiabatic turn-off prescription. Ignoring the $\rme^{-\epsilon t_1}$ suppression factor, the $t_1$ integral can be performed using \eqn{eq:id_1}.
Keeping the prescription yields only an additional negative vacuum-like term, $-(2\pi)^2\omega\frac{ \rmd I^{\rm vac}}{\rmd\omega \rmd^2\k}$, so that \eqn{eq:spectrum} can be expressed in the more convenient form
\begin{align}\label{eq:spectrum-2}
(2\pi)^2\omega\frac{\rmd I}{\rmd \omega \rmd^2 \k}&=\frac{2\bar{\alpha}\pi}{\omega^2} \Re \int_0^\infty \rmd t_2 \, \rme^{- \epsilon t_2} \int_0^{t_2} \rmd t_1 \int_\x \, \rme^{-i \k \cdot \x}\, \cP(\x,\infty;t_2) \nn
& \bdel_\x\cdot \bdel_\y \cK(\x,t_2;\y,t_1)_{\y=0} - \frac{8\bar{\alpha}\pi}{k_\perp^2} \,,
\end{align}
where the $\epsilon$ prescription for the $t_1$ integral has been removed at the cost of a factor of $2$ multiplying the vacuum term. The limit $\epsilon \to 0$ has to be taken after the integral over $t_2$. The details regarding the treatment of the adiabatic prescription are discussed in the appendix. In what follows, we will compute \eqn{eq:spectrum-2} in the IOE approach, including all terms up to $\mathcal{O}(\delta v)$ (NLO). For that, we extend \eqn{eq:rate-position} to a generic medium and express the broadening distribution $\cP$ as
\begin{align}\label{eq:p-expansion}
\cP(\x,t;t_0) = \rme^{-\int_{t_0}^t \rmd s \, \vLO(\x,s)}\, \rme^{-\int_{t_0}^t \rmd s \, \delta v(\x,s)} =\cP^{\rm LO}(\x,t;t_0)\, \rme^{-\int_{t_0}^t \rmd s \, \delta v(\x,s)} \, .
\end{align}
Similarly, the emission kernel $\cK$ can be expanded as in \eqn{eq:cK_ful_IOE}:
\begin{align} \label{eq:k-expansion}
\cK(\x,t_2;\y,t_1) = \cK^{\rm LO}(\x,t_2;\y,t_1)-\int_\z \int_{t_1}^{t_2} \rmd s \, \cK^{\rm LO}(\x,t_2;\z,s)\delta v(\z,s)\cK(\z,s;\y,t_1) \,.
\end{align}
Truncating these relations at NLO accuracy and inserting them into \eqn{eq:spectrum-2}, we obtain the spectrum, which we write as
\begin{align}
\frac{\rmd I}{\rmd \omega \rmd^2 \k} = \frac{\rmd I^{\rm LO}}{\rmd \omega \rmd^2 \k} + \frac{\rmd I^{\rm NLO}}{\rmd \omega \rmd^2 \k} + \cO(\delta v^2) \, .
\end{align}
To reiterate, the LO and NLO terms resum an arbitrary number of soft medium interactions, encoded in $\vLO$, and contain a fixed number (zero for LO, one for NLO) of hard interactions with the medium, through the potential $\delta v$. While the vacuum spectrum is already given in \eqn{eq:vac-spec}, the medium-induced contributions read
\begin{align} \label{eq:spectrum-ioe-lo}
(2\pi)^2\omega\frac{\rmd I^{\rm LO}}{ \rmd \omega \rmd^2 \k}& = \frac{2\bar{\alpha}\pi}{\omega^2} \Re \int_0^\infty \rmd t_2\, \rme^{-\epsilon t_2} \int_0^{t_2} \rmd t_1 \int_\x\, \rme^{-i \k \cdot \x} \nn
& \times \cP^{\rm LO}(\x,\infty;t_2) \bdel_\x\cdot \bdel_\y \cK^{\rm {LO}}(\x,t_2;\y,t_1)_{\y=0} - \frac{8 \bar \alpha \pi}{k_\perp^2} \,, \\
\label{eq:spectrum-ioe-nlo}
(2\pi)^2\omega\frac{\rmd I^{\rm NLO}}{ \rmd \omega \rmd^2 \k}& = \frac{2\bar{\alpha}\pi}{\omega^2} \Re \int_0^\infty \rmd t_2\, \rme^{-\epsilon t_2} \int_0^{t_2} \rmd t_1 \int_\x\, \rme^{-i \k \cdot \x} \nn
&\times \Big[ \cP^{\rm LO}(\x,\infty;t_2) \bdel_\x\cdot \bdel_\y \cK^{\rm {NLO}}(\x,t_2;\y,t_1)_{\y=0} \nn
&+ \cP^{\rm NLO}(\x,\infty;t_2) \bdel_\x\cdot \bdel_\y \cK^{\rm {LO}}(\x,t_2;\y,t_1)_{\y=0} \Big]\,,
\end{align}
where
\begin{align}\label{eq:mmmm_1}
\cP^{\rm NLO}(\x, \infty;t) &= - \cP^{\rm LO}(\x,\infty;t) \int_t^\infty \rmd s \, \delta v(\x,s) \, ,
\end{align}
and
\begin{align}\label{eq:mmmm_2}
\cK^{\rm NLO}(\x,t_2;\y,t_1) &= - \int_\z \int_{t_1}^{t_2} \rmd s\, \cK^{\rm LO}(\x,t_2;\z,s) \delta v(\z,s) \cK^{\rm LO}(\z,s;\y,t_1) \,.
\end{align}
The LO term captures the physics associated with the production of gluon radiation due to multiple soft scattering in the medium, thus recovering the BDMPS-Z solution.
The first term in \eqn{eq:spectrum-ioe-nlo} includes the possibility of producing the gluon via a hard scattering in the medium and, when integrated over $\k$, gives the NLO contribution to the integrated spectrum studied in the previous section, Eq.~\eqref{eq:dIdw_LO_p_NLO} \cite{IOE1}. Finally, the last term in \eqn{eq:spectrum-ioe-nlo} arises from expanding the final state broadening distribution $\cP$. Thus, it only affects the redistribution of the radiated gluon transverse momentum and it vanishes upon integration over $\k$. In the following sections, we proceed to explicitly compute Eqs.~\eqref{eq:spectrum-ioe-lo} and \eqref{eq:spectrum-ioe-nlo}.

\subsection{Leading order contribution}

The leading order contribution to the spectrum is captured by \eqn{eq:spectrum-ioe-lo}. The broadening distribution $\cP^{\rm LO}$, implicitly given in \eqn{eq:p-expansion}, reads
\begin{align} \label{eq:P_LO_x}
\cP^{\rm LO}(\x,t;t_0) = \exp\left[ -\frac{1}{4}Q_{s0}^2(t,t_0) \log \frac{Q_b^2}{\mu_\ast^2}\, x_\perp^2 \right]\, ,
\end{align}
where we define the bare saturation scale as a slight generalization of \eqn{eq:old_Qs0}, reading
\begin{align}
Q_{s0}^2(t,t_0) = \int_{t_0}^{t} \rmd s\, \hat q_0(s) \, ,
\end{align}
and the matching scale, $Q_b$, satisfies (following \eqn{eq:old_Qb})
\begin{align}\label{eq:Qb_new}
Q_b^2\equiv \int_0^{ \infty} \rmd t\, \hat q_0(t) \,\log \frac{Q_b^2}{\mu_\ast^2}\, .
\end{align}
Furthermore, the kernel $\cK^{ \rm LO}$ can be found in Eq.~\eqref{eq:cK_BDMPS_2} but, for consistency, we also repeat it here in a slightly different form, namely
\begin{align}\label{eq:cK_BDMPS_3}
\cK^{\rm LO}(\x,t_2;\y,t_1) = \frac{\omega}{2\pi i S(t_2,t_1)} \exp \left[i\frac{\omega}{2} \left( {\rm Cot}(t_2,t_1) \, \x^2 - {\rm Cot}(t_1,t_2) \, \y^2 - \frac{2}{S(t_2,t_1) }\, \x\cdot \y \right) \right] \,,
\end{align}
where we recall that (see Appendix~\ref{app:cK_appendix} for further details)
\begin{align}
{\rm Cot}(t_2,t_1) = \frac{C(t_1,t_2)}{S(t_2,t_1)} \, .
\end{align}
Also, in \eqn{eq:cK_BDMPS_3} we have taken advantage of the anti-symmetry of $S$, i.e. $S(t_2,t_1) = - S(t_1,t_2)$. In all these functions, the value of $\hat q$ enters as an argument, and thus they are sensitive to the definition of the matching scale for radiation, $Q_r$, which, as discussed in Section~\ref{sec:energy-spectrum}, is obtained by solving the transcendental equation (see \eqn{eq:Q_r})
\beq
Q^2_r(t)=\sqrt{\hat q(t)\, \omega}=\sqrt{\hat q_0(t)\, \omega\log\frac{Q^2_r(t)}{\mu_\ast^2}} \, .
\eeq
At this point, an important remark is in order. The functional form of the matching scale is constrained by making the spectrum finite in the infrared. This leads to the $\omega$-dependence in \eqn{eq:Q_r}, which is constrained in this way up to an overall numerical coefficient. As was shown in Ref.~\cite{IOE3}, the dependence on such a factor is sub-leading for a fixed order calculation in the IOE. As such, and since all the time dependence of $Q_r^2$ emerges from $\hat{q}_0$, it is more convenient to define $Q_r^2$ as a static scale. This simplifies the time integrations needed for the spectrum calculation without downgrading its accuracy. In order to further simplify \eqn{eq:spectrum-ioe-lo}, we make use of a series of identities satisfied by the functions $C(t_2,t_1)$ and $S(t_2,t_1)$ that enter in the definition of $\cK^{ \rm LO}$, see \eqn{eq:cs-ho-equations}. In particular, these identities allow us to obtain
\begin{align}\label{eq:id_1}
\int_0^{t_2} \rmd t_1 \, \partial_\y \cK^{\rm LO}(\x,t_2;\y,t_1)_{\y=0} &= -\frac{\omega^2}{2\pi}\int_0^{t_2} \rmd t_1\, \frac{\x}{S^2(t_2,t_1)}\rme^{\frac{i\omega}{2} {\rm Cot}(t_2,t_1) \x^2} \nn
&= \frac{\omega}{\pi i} \frac{\x}{\x^2} \rme^{\frac{i\omega }{2} {\rm Cot}( t_2,0) \x^2} \, ,
\end{align}
where in the second step we dropped an infinite phase which has already been accounted for in the vacuum subtraction term in \eqn{eq:spectrum-2}. A careful treatment of this technical point is presented in the appendix, see \eqn{eq:t1-int} and the related discussion. The identity \eqn{eq:id_1}, together with \eqn{eq:spectrum-ioe-lo}, leads to the following expression for the leading-order spectrum
\begin{align} \label{eq:HO_any_med_1a}
(2\pi)^2\omega\frac{\rmd I^{\rm LO}}{\rmd \omega \rmd^2 \k} &=2\bar{\alpha} \Re \int_0^\infty \rmd t_2 \, \rme^{-\epsilon t_2}\, {\rm Cot}(t_2,0)\int_\x \rme^{-i \k \cdot \x} \, \cP^{\rm LO}(\x,\infty;t_2) \, \rme^{\frac{i\omega }{2} {\rm Cot}(t_2,0)\, \x^2} \nn
&-\frac{8\pi \abar}{k_\perp^2} \, ,
\end{align}
where we used that $\partial_\x \cdot \frac{\x}{\x^2}=0$. The result obtained in \eqn{eq:HO_any_med_1a}, although compact, is somewhat obscure from a physical perspective. A more intuitive description of the in-medium emission can be achieved by using a momentum space representation
\begin{align}\label{eq:HO_any_med_2a}
(2\pi)^2\omega\frac{\rmd I^{\rm LO}}{\rmd \omega \rmd^2 \k} &=\frac{4\bar{\alpha}\pi}{\omega} \Re\, i\int_0^\infty \rmd t_2 \, \rme^{-\epsilon t_2}\, \int_{\p} \, \cP^{\rm LO}(\k-\p,\infty;t_2) \rme^{-i\frac{\p^2 }{2 \omega {\rm Cot}(t_2,0)} }-\frac{8\pi \abar}{k_\perp^2} \, .
\end{align}
In this form, we can interpret the first term in \eqn{eq:HO_any_med_2a} as describing the emission of a gluon via some effective kernel at time $t_2$, followed by final state broadening. The second term corresponds to a vacuum-like subtraction contribution. Furthermore, since $\cP^{\rm LO}(\p)$ is Gaussian, the remaining momentum integral can be performed,
\begin{align}
\int_\p \, \cP^{\rm LO}(\k-\p,\infty;t)\, \rme^{-i\frac{\p^2}{2\omega {\rm Cot}(t,0)}} = -2i\omega\, {\rm Cot}(t,0)\, \frac{\rme^{-\frac{k_\perp^2}{\khat^2(t,0)}}}{\khat^2(t,0)} \, ,
\end{align}
where we introduced the function
\begin{align}\label{eq:Lambda21}
\khat^2(t_2,t_1) = Q^2_s(\infty,t_2) -2i\omega\, {\rm Cot}(t_2,t_1) \, ,
\end{align}
and the effective saturation scale is defined as $Q_s^2(t_2,t_1) = \int_{t_1}^{t_2} \rmd t \, \hat{q}_0(t) \log \frac{Q_b^2}{\mu_\ast^2}$, with the logarithmic dependence in $\hat{q}$ determined by $Q_b$. Inserting this result into \eqn{eq:HO_any_med_2a}, we finally obtain
\begin{align}\label{eq:HO_any_med_3}
(2\pi)^2\omega\frac{\rmd I^{\rm LO}}{\rmd \omega \rmd^2 \k}=8\Bar{\alpha} \pi \, \Re \int_0^\infty \rmd t \, \rme^{-\epsilon t}\, \frac{{\rm Cot}(t,0)}{\khat^2(t,0)} \rme^{-\frac{k_\perp^2}{\khat^2(t,0)}} -\frac{8\pi \abar}{k_\perp^2} \,.
\end{align}
This compact form of the spectrum has been derived previously in the literature. Integrating over $\k$ and using $\int_\k \cP(\k)=1$, we recover the LO contribution to the energy spectrum discussed in the previous section \cite{IOE1,Arnold_simple}, see Eq.~\eqref{eq:dIdw_BDMPS}. As a sanity check, one can verify that \eqn{eq:HO_any_med_3} vanishes in the vacuum, i.e. in the limit $\Omega\to0$, so that
\beq
{\rm Cot}(t,0) \to \frac{1}{t} \, \quad \text{and} \quad \khat^2(t_2,t_1) \to -\frac{2i\omega}{ t_2-t_1}\,.
\eeq
Thus, we obtain
\begin{align}\label{eq:HO_any_vac_3}
(2\pi)^2\omega\frac{\rmd I^{\rm LO}}{\rmd \omega \rmd^2 \k}=8\Bar{\alpha} \pi \, \Re \int_0^\infty \rmd t \, \rme^{-\epsilon t} \,\frac{1}{2i \omega } \rme^{-i \frac{k_\perp^2}{2\omega}t} -\frac{8\pi \abar}{k_\perp^2}=0 \,.
\end{align}

\paragraph*{Plasma brick model.} We proceed to evaluate the previous expressions for a concrete medium model, namely the brick, where $\hat q(t)= \hat q\, \Theta(L-t)$. Inside the medium, i.e. when both $t_1<L$ and $t_2<L$, the $C$ and $S$ functions take simple forms (see Appendix~\ref{app:cK_appendix})
\begin{align}\label{eq:SC-brick}
S(t_2,t_1)=\frac{\sin(\Omega(t_2-t_1))}{\Omega} \, , \quad C(t_2,t_1)=\cos(\Omega(t_2-t_1)) \, ,
\end{align}
and ${\rm Cot}(t_2,t_1) = \Omega\cot(\Omega(t_2-t_1))$, where $\Omega=\frac{1-i}{2}\sqrt{\frac{\hat q_0}{\omega}\log\frac{Q_r^2}{\mu_\ast^2}}$. On the other hand, for both $t_1>L$ and $t_2 >L$, the system evolves as in vacuum and the $C$ and $S$ are obtained by setting $\Omega\to0$ in the previous equations, such that
\begin{align}\label{eq:SC-vacuum}
S(t_2,t_1) = t_2-t_1 \, , \quad C(t_2,t_1)=1 \,.
\end{align}
This sharp separation of the problem into processes happening inside the medium and outside of it suggests that an efficient way to evaluate Eq.~\eqref{eq:HO_any_med_3} consists in splitting the time integral into two regions: one where $0<t<L$ and a vacuum-like region where $t>L$. We refer to the first region as \textbf{in-in}, since the gluon emission occurs inside the medium in both the amplitude and its conjugate. It can be easily obtained by replacing the upper limit of the integral in Eq.~\eqref{eq:HO_any_med_3} by $L$ and employing Eq.~\eqref{eq:SC-brick}. Since the integral over $t_2$ has a finite extension, we can safely neglect the adiabatic suppression factor. This contribution to the total medium-induced LO spectrum reads
\begin{align}\label{eq:HO_brick_InIn}
(2\pi)^2\omega\frac{\rmd I^{\rm LO}_{\text{in-in}}}{\rmd \omega \rmd^2 \k}&=8\Bar{\alpha} \pi \, \Re \int_0^L \rmd t \, \Omega {\rm cot}(\Omega t) \frac{ \rme^{-\frac{k_\perp^2}{\hat{q}(L-t) -2i \omega \Omega \cot(\Omega t)} }}{\hat{q}(L-t) - 2i \omega \Omega {\rm cot}(\Omega t)}\, ,
\end{align}
where we have used that the saturation scale reduces to $Q_s^2(\infty,t)=\hat q\,(L-t)$. The remaining region of phase space is obtained by imposing $t>L$ in Eq.~\eqref{eq:HO_any_med_3}. In this case, there are two types of contributions: (i) a purely vacuum term, corresponding to the scenario where the gluon is outside the medium both in the amplitude and its conjugate; in this situation the first and second terms in Eq.~\eqref{eq:HO_any_med_3} cancel by construction; and (ii) an interference term, where the amplitude gluon is emitted inside the medium while its conjugate counterpart is emitted in the vacuum (or vice-versa). The latter contribution, which we shall refer to as \textbf{in-out}, requires further manipulations. We begin by constructing the $C$ and $S$ functions which have support inside and outside the medium. This is done by using the decomposition of $C$ and $S$ given in Eq.~\eqref{eq:linear_rel_C_S}~\cite{Arnold_simple}
\begin{equation}
\begin{split}
&S(t_2,t_1)=C(t_1,t_0)S(t_2,t_0)-S(t_1,t_0)C(t_2,t_0) \, ,\\
& C(t_2,t_1)=-\partial_{t_1}C(t_1,t_0)S(t_2,t_0)+\partial_{t_1}S(t_1,t_0)C(t_2,t_0) \, .
\end{split}
\end{equation}
Taking $t_1=0$, $t_2=t>L$, $t_0=L$ and using the appropriate form for the $C$ and $S$ in each region (see Eqs.~\eqref{eq:SC-brick} and \eqref{eq:SC-vacuum}), we obtain
\begin{equation}
\begin{split}
&S(t,0)=(t-L) \cos \Omega L + \frac{\sin \Omega L}{\Omega }\, ,\\
&C(0,t)=\cos(\Omega L)\, ,
\end{split}
\end{equation}
which yields
\beq
{\rm Cot} (t,0)= \frac{ \Omega \cot\Omega L}{\Omega (t-L) \cot \Omega L +1 }\, .
\eeq
In addition, for $t>L$ the broadening term (encapsulated in $Q_s^2$) would accumulate only over the interval $(t,\infty)$, which lies entirely in the vacuum region. Then, there is no final state broadening and one can take
\beq
\cP^{\rm LO}(\k-\p,\infty;t) \big|_{t>L}= (2\pi)^2 \delta^{(2)}(\k) \, ,
\eeq
or, equivalently, $Q_s^2=0$ in \eqn{eq:Lambda21}. Combining all these results, we find that
\begin{align}\label{eq:HO_brick_out_2}
(2\pi)^2\omega\frac{\rmd I_{\text{in-out}}^{\rm LO}}{\rmd \omega \rmd^2 \k} &=\frac{4\bar{\alpha}\pi}{\omega} \Re \,i \int_L^\infty \rmd t \, \rme^{-i\frac{k_\perp^2 }{2\omega }(t-L)- i \frac{k_\perp^2}{2\omega \Omega \, \cot\Omega L } } -\frac{8\pi \abar}{k_\perp^2} \, \nn
&= \frac{8\bar{\alpha}\pi }{k_\perp^2}\Re\left(\rme^{-i\frac{k_\perp^2}{2\omega \Omega \, \cot\Omega L } }- 1\right) \, ,
\end{align}
where we have included the vacuum subtraction term, and where the implicit adiabatic prescription $\sim\rme^{-\epsilon t}$ in the first line allowed us to drop the contribution from $t\to\infty$. This contribution, together with Eq.~\eqref{eq:HO_brick_InIn}, constitutes the medium-induced leading order spectrum, analogous to the BDMPS-Z result. The medium-induced spectrum at LO in the IOE then reads
\beq
\frac{\rmd I^{\rm LO}}{\rmd \omega \rmd^2 \k} = \frac{\rmd I^{\rm LO}_{\text{in-in}}}{\rmd \omega \rmd^2 \k} + \frac{\rmd I^{\rm LO}_{\text{in-out}}}{\rmd \omega \rmd^2 \k} \,.
\eeq
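Before constructing the NLO pieces, we note that the two LO terms above are straightforward to evaluate numerically. The Python sketch below is a minimal illustration, assuming the same brick parameters and hypothetical coupling as before, with $Q_r^2$ taken as an external input that would come from solving \eqn{eq:Q_r}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

qhat0, L, mustar = 0.16, 6.0*5.068, 0.355   # GeV^3, GeV^-1, GeV (brick)
alphabar = 0.3 * 3.0 / np.pi                # hypothetical alpha_s=0.3, C_R=C_A

def spectrum_LO(w, kt, Qr2):
    # (2 pi)^2 w dI^LO/(dw d^2k): in-in plus in-out terms,
    # Eqs. (HO_brick_InIn) and (HO_brick_out_2)
    qhat  = qhat0 * np.log(Qr2 / mustar**2) # effective qhat entering Omega
    Omega = (1.0 - 1.0j)/2.0 * np.sqrt(qhat / w)

    def inin(t):                            # real part of the in-in integrand
        cot   = Omega / np.tan(Omega * t)
        khat2 = qhat*(L - t) - 2.0j*w*cot   # Q_s^2(inf,t) - 2 i w Omega cot
        return (cot * np.exp(-kt**2 / khat2) / khat2).real

    in_in, _ = quad(inin, 1e-8, L, limit=300)
    cotL   = Omega / np.tan(Omega * L)
    in_out = (np.exp(-1.0j * kt**2 / (2.0*w*cotL)) - 1.0).real / kt**2
    return 8.0 * np.pi * alphabar * (in_in + in_out)

# Example: a 2 GeV gluon with an assumed matching scale Qr^2 = 2 GeV^2
for kt in (0.5, 1.0, 2.0, 4.0):
    print(f"kt = {kt:3.1f} GeV :  {spectrum_LO(2.0, kt, Qr2=2.0):+.4f}")
\end{verbatim}
Integrating this distribution over $\k$ recovers, within numerical accuracy, the LO energy spectrum of \eqn{eq:dIdw_BDMPS}, as noted above.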
\subsection{Next-to-leading order contribution}\label{sec:nlo}

The computation of the next-to-leading order contribution to the spectrum can be done using manipulations similar to the ones performed for the LO term in the previous section. The NLO spectrum is defined in Eq.~\eqref{eq:spectrum-ioe-nlo}. The first term corresponds to a genuine correction to the emission kernel, referred to below as the \textbf{in} contribution, while the second term, which we shall refer to as the \textbf{broad} contribution, introduces the possibility of a hard scattering in the final state broadening process. Also, the vacuum-like subtraction terms that appeared in the LO contribution are absent at $\cO(\delta v)$. To summarize, the two contributions to the NLO spectrum read
\begin{align} \label{eq:spectrum-ioe-nlo-in}
(2\pi)^2\omega\frac{\rmd I^{\rm NLO}_{\rm in}}{\rmd \omega \rmd^2 \k} &= -\frac{2\bar{\alpha}\pi}{\omega^2} \Re \int_0^\infty \rmd t_2 \int_0^{t_2} \rmd t_1 \int_0^{t_1} \rmd s \int_{\x,\z}\, \rme^{-i \k \cdot \x} \, \cP^{\rm LO}(\x,\infty ;t_2) \nn
& \times \delta v(\z,t_1) \bdel_\x \cK^{\rm {LO}}(\x,t_2;\z,t_1) \cdot \bdel_\y \cK^{\rm {LO}}(\z,t_1;\y,s) \,,\\
\label{eq:spectrum-ioe-nlo-broad}
(2\pi)^2\omega\frac{\rmd I^{\rm NLO}_{\rm broad}}{\rmd \omega \rmd^2 \k} &= -\frac{2\bar{\alpha}\pi}{\omega^2} \Re \int_0^\infty \rmd s \int_0^s \rmd t_2 \int_0^{t_2} \rmd t_1 \int_\x \rme^{-i \k \cdot \x} \, \cP^{\rm LO}(\x,\infty,t_2) \nn
& \times \delta v(\x,s) \bdel_\y \cdot \bdel_\x \cK^{\rm LO}(\x,t_2;\y,t_1)_{\y=0} \,,
\end{align}
where we have rearranged the time integrations.\footnote{Note that in \eqn{eq:spectrum-ioe-nlo-in} $t_1$ is the time at which the hard interaction occurs, while we have labelled this time as $s$ in other equations.} Let us begin by considering Eq.~\eqref{eq:spectrum-ioe-nlo-in}. Note that both the $t_1$ and $s$ integrals have support only inside the medium, hence the naming as the \textbf{in} contribution. By using Eq.~\eqref{eq:id_1}, we can directly perform the $s$-integral, such that
\begin{align}\label{eq:IOE_med_2}
(2\pi)^2\omega\frac{ \rmd I_{\text{in}}^{\rm NLO}}{\rmd\omega \rmd^2 \k}&=\frac{2\bar{\alpha}}{\omega} \,\Re\, i \int_0^\infty \rmd t_2 \int_\x \,\rme^{-i \k \cdot \x} \, \cP^{\rm LO}(\x,\infty; t_2) \nn
&\times \int_0^{t_2} \rmd t_1 \int_\z \,\delta v(\z,t_1) \frac{\z}{\z^2}\cdot \partial_\x \cK^{\rm LO}(\x,t_2;\z,t_1) \rme^{\frac{i\omega}{2}{\rm Cot}(t_1,0)\z^2} \, .
\end{align}
The remaining derivative operator gives
\begin{align}
\partial_\x \cK^{\rm LO}(\x,t_2;\z,t_1) &= \frac{\omega^2}{2\pi S^2(t_2,t_1)} \big(\x C(t_1,t_2)-\z \big) \nn
&\times \exp\left[\frac{i\omega}{2S(t_2,t_1)} \big( C(t_1,t_2) \x^2 + C(t_2,t_1)\z^2 - 2 \x\cdot\z \big)\right] \, ,
\end{align}
so that the spectrum further simplifies to (adopting hereafter the more compact notation ${\rm Cot}_{21} \equiv{\rm Cot}(t_2,t_1)$, $C_{12}\equiv C(t_1,t_2)$ and $S_{21}\equiv S(t_2,t_1)$)
\begin{align} \label{eq:IOE_med_3}
(2\pi)^2\omega\frac{\rmd I_{\text{in}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} & =\frac{\bar{\alpha}\omega}{\pi}\Re\, i \int_0^\infty \rmd t_2\int_0^{t_2} \rmd t_1 \int_{\x,\z}\, \rme^{-i \k \cdot \x}\rme^{-\frac{1}{4}Q_s^2(\infty,t_2) \x^2} \nn
&\times \frac{1}{S^2_{21}} \delta v(\z,t_1) \frac{\z}{\z^2}\cdot (\x C_{12}-\z) \nn
&\times \rme^{\frac{i\omega}{2S_{21}} \left(C_{12}\x^2+C_{21}\z^2-2\x\cdot\z\right)} \rme^{\frac{i\omega}{2}{\rm Cot}_{10} \z^2} \, .
\end{align}
The remaining integration in $\x$ is Gaussian, and can be carried out to obtain
\begin{align}
\int_\x \rme^{-i \k \cdot \x}\rme^{-\frac{1}{4}Q_s^2(\infty,t_2) \x^2} & \rme^{\frac{i\omega}{2S_{21}} \left(C_{12}\x^2 - 2\x\cdot\z \right)}(\x C_{12}-\z) \nn
=&-\frac{4\pi}{\khat^2_{21}} \rme^{-\frac{\left(\k-\frac{\omega }{S_{12}} \z\right)^2}{\khat^2_{21}}}\left[\z+\frac{2iC_{12}}{\khat^2_{21}}\left(\k-\frac{\omega}{S_{12}} \z\right)\right] \, .
\end{align}
Let us re-emphasize that, in the previous expression, the matching scale associated with $Q_s$ in $\khat^2_{21}$ is $Q_b$. Replacing $\delta v$ by its explicit definition, which depends on $Q_r$, the \textbf{in} contribution reads
\begin{align} \label{eq:IOE_med_4}
&(2\pi)^2\omega\frac{\rmd I_\text{in}^{\rm NLO}}{\rmd \omega \rmd^2 \k} =-\bar{\alpha}\omega \Re\, i\int_0^\infty \rmd t_2 \int_0^{t_2} \rmd t_1 \, \hat{q}_0(t_1) \frac{\rme^{-\frac{k_\perp^2}{\khat^2_{21}}}}{S^2_{21}\khat^4_{21}} \nn
&\times \int_\z \, \rme^{\frac{i\omega}{2}(-{\rm Cot}_{12}+{\rm Cot}_{10}+\frac{2i\omega}{S^2_{21}\khat^2_{21}}) \z^2} \rme^{-\frac{2\omega}{\khat^2_{21}S_{21}} \k\cdot \z} \log \frac{1}{Q_r^2 \z^2} \left(Q^2_s(\infty,t_2) \z^2+2iC_{12}\k\cdot \z\right) \, .
\end{align}
Notice that we tend to write the largest time as the first argument in these functions. Nonetheless, this is not always possible, since in general the $C$ function has no definite parity under the exchange of its arguments, unlike the $S$ function, which is always odd. Comparing Eq.~\eqref{eq:IOE_med_4} to the LO contribution given by Eq.~\eqref{eq:HO_any_med_1a}, we observe an additional transverse integral in the $\z$ variable, which is no longer Gaussian due to the logarithmic dependence in $\delta v$. Nevertheless, this integration can also be performed analytically.
The angular part can be performed by recalling the definitions of the Bessel functions of the first kind,
\begin{align}
\int_0^{2\pi} \frac{\rmd \theta}{2\pi } \rme^{-i z \cos\theta} = J_0(z) \, , \quad \int_0^{2\pi} \frac{\rmd \theta}{2\pi } \cos\theta \, \rme^{-i z \cos\theta} = -i J_1(z) \,,
\end{align}
which lead to
\begin{align} \label{eq:nlo-in}
&(2\pi)^2\omega\frac{\rmd I_\text{in}^{\rm NLO}}{\rmd \omega \rmd^2 \k}=\frac{\bar{\alpha}\pi}{2\omega k_\perp^4} \Re \, i\int_0^\infty \rmd t_2 \,\int_0^{t_2} \rmd t_1 \, \frac{\hat{q}_0(t_1)}{\rhat_{21}^2} \rme^{-\frac{k_\perp^2}{\khat^2_{21}}} \nn
&\times \int_0^\infty \rmd z_\perp \, z_\perp \, \rme^{-\jhat_{21}\frac{z_\perp^2}{4 k_\perp^2} } \log \left(\frac{k_\perp^2\rhat^2_{21}}{Q_r^2z_\perp^2}\right) \left[Q^2_s(\infty,t_2) z_\perp^2J_0(z_\perp)+2 C_{12}\rhat_{21} k_\perp^2 z_\perp J_1( z_\perp) \right] \, ,
\end{align}
where we have introduced the auxiliary functions
\begin{align}
&\jhat(t_2,t_1)\equiv \jhat_{21}=\frac{i}{2\omega}\left(-{\rm Cot}_{12}+{\rm Cot}_{10}+\frac{2i\omega}{S^2_{21}\khat^2_{21}} \right)S_{21}^2 \khat_{21}^4\, , \nn
&\rhat(t_2,t_1)\equiv\rhat_{21} =-\frac{2i\omega}{\khat^2_{21}S_{21}} \, .
\end{align}
The more challenging radial integral can be solved by using a convenient decomposition of the logarithmic function. Namely, the relation
\begin{equation}\label{eq:Moliere_log}
\log\frac{1}{u^2}=\lim_{\epsilon\to0} \int_\epsilon^\infty \frac{\rmd t}{t} \, \left(\rme^{-u^2t}-\rme^{-t}\right) \, ,
\end{equation}
allows us to transform the original integral into a sum of Gaussian integrations that can be readily performed. In particular, the $z$-integrals in Eq.~\eqref{eq:nlo-in} can be compactly expressed as
\beq
I_a(x,y ) = \int_0^\infty \rmd z\, J_0(z)\, z^3 \,\log\frac{y}{z^2} \, \rme^{ - \frac{z^2}{4x}}\, ,
\eeq
\begin{align}
I_b(x,y )=\int_0^\infty \rmd z \, z^2 \log \frac{y}{z^2}\, J_1\left(z \right)\rme^{-\frac{z^2}{4x}} \, .
\end{align}
Then, replacing the logarithms in the previous equations by the decomposition in Eq.~\eqref{eq:Moliere_log} allows one to write $I_a$ and $I_b$ in terms of the exponential integral function ${\rm Ei}$, leading to
\beq\label{eq:bbS_alpha}
I_a\left(x,y \right) = 8 x^2\, \rme^{-x}(-2+\rme^{x}) + 8 x^2 \, \rme^{-x}(1-x)\left[{\rm Ei}\left(x\right)-\log\frac{4x^2 }{y}\right]\, ,
\eeq
\begin{align}\label{eq:bbS_beta}
&I_b\left(x,y\right)=-4x\left(1-\rme^{-x}\right)+4x^2\rme^{-x}\left[{\rm Ei}(x)-\log \frac{4x^2}{y}\right]\, .
\end{align}
Taking advantage of these simplifications, the \textbf{in} contribution to the NLO gluon spectrum can be compactly written as
\begin{align}\label{eq:IOE_NLO_IN}
(2\pi)^2\omega\frac{\rmd I_{\text{in}}^{\rm NLO}}{\rmd \omega \rmd^2 \k}&=\frac{\bar{\alpha}\pi}{2\omega k_\perp^4} \Re\, i\int_0^\infty \rmd t_2 \int_0^{t_2} \rmd t_1 \, \frac{\hat{q}_0(t_1)}{\rhat_{21}^2} \rme^{-\frac{ k_\perp^2}{\khat^2_{21}}} \nn
&\times \left[Q^2_s(\infty,t_2) I_a\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2\rhat^2_{21}}{Q_r^2}\right)+2 C_{12}\rhat_{21} k_\perp^2 I_b\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2\rhat^2_{21}}{Q_r^2}\right)\right] \,.
\end{align}
Hence, we have managed to reduce the number of integrals over transverse positions and times down to two time-integrations.
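Since $I_a$ and $I_b$ are the basic building blocks of all NLO formulas below, it is worth cross-checking the closed forms in Eqs.~\eqref{eq:bbS_alpha} and \eqref{eq:bbS_beta} against direct quadrature. A minimal Python sketch for real, positive $x$ follows; the complex arguments appearing in the spectrum require the analytic continuation of ${\rm Ei}$, available for instance through \texttt{mpmath.ei}.
\begin{verbatim}
import numpy as np
from scipy.special import j0, j1, expi
from scipy.integrate import quad

def Ia_closed(x, y):
    # Closed form of I_a, Eq. (bbS_alpha)
    return (8*x**2*np.exp(-x)*(np.exp(x) - 2)
            + 8*x**2*np.exp(-x)*(1 - x)*(expi(x) - np.log(4*x**2/y)))

def Ib_closed(x, y):
    # Closed form of I_b, Eq. (bbS_beta)
    return (-4*x*(1 - np.exp(-x))
            + 4*x**2*np.exp(-x)*(expi(x) - np.log(4*x**2/y)))

def Ia_quad(x, y):
    f = lambda z: j0(z) * z**3 * np.log(y/z**2) * np.exp(-z**2/(4*x))
    return quad(f, 0.0, np.inf, limit=400)[0]

def Ib_quad(x, y):
    f = lambda z: j1(z) * z**2 * np.log(y/z**2) * np.exp(-z**2/(4*x))
    return quad(f, 0.0, np.inf, limit=400)[0]

for x, y in [(0.5, 2.0), (2.0, 10.0), (5.0, 1.0)]:
    print(f"x={x}, y={y}: Ia {Ia_closed(x,y):+.6f} vs {Ia_quad(x,y):+.6f}",
          f"| Ib {Ib_closed(x,y):+.6f} vs {Ib_quad(x,y):+.6f}")
\end{verbatim}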
Turning now to the \textbf{broad} contribution, given in \eqn{eq:spectrum-ioe-nlo-broad}, we can perform the derivatives on $\cK^{\rm LO}$ and integrate over $t_1$ using \eqn{eq:id_1}. It then reads
\begin{align}\label{eq:IOE_med_broad1}
(2\pi)^2\omega\frac{\rmd I_{\text{broad}}^{\rm NLO}}{\rmd \omega \rmd^2\k}&=-2\bar{\alpha} \Re \int_0^\infty \rmd t_2 \int_{t_2}^\infty \rmd s \int_\x \rme^{-i \k \cdot \x} \, \cP^{\rm LO}(\x,\infty;t_2)\nn
&\times \delta v(\x,s)\, {\rm Cot}_{20} \, \rme^{\frac{i\omega}{2}{\rm Cot}_{20}\x^2} \, ,
\end{align}
with $\cP^{\rm LO}(\x,\infty;t_2) $ introduced in Eq.~\eqref{eq:P_LO_x}. Using the definition of the function $I_a(x,y)$ in \eqn{eq:bbS_alpha}, the \textbf{broad} contribution to the spectrum can finally be written as
\begin{align}\label{eq:IOE_med_broad2}
(2\pi)^2\omega\frac{\rmd I_{\text{broad}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} &= -\frac{\pi \bar{\alpha}}{k_\perp^4} \Re \int_0^\infty \rmd t_2 \, {\rm Cot}_{20}\, Q^2_{s0}(\infty,t_2) \, I_a\left(\frac{k_\perp^2}{\khat^2_{20}},\frac{k_\perp^2}{Q_b^2}\right)\, ,
\end{align}
where $\khat^2_{20} \equiv \khat^2(t_2,0)$ is given by \eqn{eq:Lambda21}.

\paragraph*{Plasma brick model.} So far, we have made no approximation regarding the time profile of the medium. As for the LO contribution, we now assume the plasma brick model, i.e. $\hat q(t)=\hat q\, \Theta(L-t)$. As in the previous case, this simple model allows one to further simplify Eqs.~\eqref{eq:IOE_NLO_IN} and \eqref{eq:IOE_med_broad2} by splitting the time integrations appropriately. We start with the \textbf{broad} term, since it has only one time integral left. In addition, the spectrum is proportional to $Q_{s0}^2 $, and thus this term only has support inside the medium, $t_2<L$. As a consequence, the $C$ and $S$ functions are given directly by Eq.~\eqref{eq:SC-brick}, and Eq.~\eqref{eq:IOE_med_broad2} reduces to
\begin{align}\label{eq:IOE_med_broad_brick}
(2\pi)^2\omega\frac{\rmd I_{\text{broad}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} &= -\frac{\hat{q}_0\pi \bar{\alpha}}{k_\perp^4} \Re\int_0^L \rmd t_2 \, \Omega \, {\rm cot}(\Omega t_2)\, (L-t_2) \,I_a\left(\frac{k_\perp^2}{\khat^2(t_2,0)},\frac{k_\perp^2}{Q_b^2}\right) \, ,
\end{align}
where
\beq\label{eq:help_1}
\khat^2(t_2,0)= Q^2_s(L,t_2) -2i\omega\Omega{\rm cot}(\Omega t_2)\, , \quad Q^2_s(L,t_2)=\hat{q}_0(L-t_2) \log \frac{Q_b^2}{\mu_\ast^2} \, .
\eeq
Next, let us analyze the \textbf{in} contribution given by Eq.~\eqref{eq:IOE_NLO_IN}. It has support both inside and outside of the medium, and thus two contributions appear. One option is that the gluon is emitted inside the medium in both the amplitude and its conjugate, which we identify as the \textbf{in-in} contribution. The second term refers to the case in which one of the emissions happens outside of the medium, which we denote as \textbf{in-out}. The \textbf{in-in} term obeys the time ordering $\int_0^L \rmd t_2 \int_0^{t_2}\rmd t_1$ in Eq.~\eqref{eq:IOE_NLO_IN}, with the $C$ and $S$ functions having support only inside the medium, and thus given by Eq.~\eqref{eq:SC-brick}.
Therefore, this contribution reads
\begin{align}\label{eq:IOE_NLO_ININ}
(2\pi)^2\omega\frac{\rmd I_{\text{in-in}}^{\rm NLO}}{\rmd \omega \rmd^2 \k}&=\frac{\bar{\alpha}\pi\hat{q}_0}{2\omega k_\perp^4} \Re\, i\int_0^L \rmd t_2 \,\int_0^{t_2} \rmd t_1 \, \frac{\rme^{-\frac{k_\perp^2}{\khat^2_{21}}}}{\rhat_{21}^2} \nn
&\times \left[Q^2_s(L,t_2) I_a\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2\rhat^2_{21}}{Q_r^2}\right)+2 C_{12}\rhat_{21} k_\perp^2 I_b\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2\rhat^2_{21}}{Q_r^2}\right)\right] \,,
\end{align}
where again $Q^2_s(L,t_2)=\hat q_0(L-t_2)\log \frac{Q_b^2}{\mu_\ast^2} $, and the auxiliary functions reduce to
\begin{align}\label{eq:help_2}
& \khat^2_{21} = Q^2_s(L,t_2) -2i\omega \Omega \, {\rm cot}(\Omega (t_2-t_1))\, , \nn
& \jhat_{21}=\frac{i }{2\omega}\left(\Omega{\rm cot}(\Omega(t_2-t_1))+\Omega{\rm cot}(\Omega t_1)+\frac{2i\omega}{S^2_{21}\khat^2_{21}} \right)S_{21}^2 \khat_{21}^4\, , \nn
&\rhat_{21} =-\frac{2i\omega}{\khat^2_{21}S_{21}} \, .
\end{align}
The \textbf{in-out} contribution can be further simplified following similar steps as in the LO case. Now, the time integrals read $\int_L^\infty \rmd t_2 \int_0^L\rmd t_1 $, and one can set $Q_s^2=0$ everywhere in Eq.~\eqref{eq:IOE_NLO_IN}, since $Q_s^2(\infty,t_2)$ only has support outside of the medium. Then, the spectrum reads
\begin{align}\label{eq:IOE_NLO_INPOUT_1}
&(2\pi)^2\omega\frac{\rmd I_{\text{in-out}}^{\rm NLO}}{\rmd \omega \rmd^2 \k}=\frac{\bar{\alpha}\hat{q}_0\pi}{\omega k_\perp^2} \Re\, i\int_L^\infty \rmd t_2 \,\int_0^{L} \rmd t_1 \, \frac{C_{12}}{\rhat_{21}} \rme^{-\frac{ k_\perp^2}{\khat^2_{21}}} I_b\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2\rhat^2_{21}}{Q_r^2}\right) \, ,
\end{align}
where we recall that the $C$ and $S$ functions are distinct from the ones used in the \textbf{in-in} term, since they have support both inside and outside of the medium. Nonetheless, as for the LO case, they can be written in terms of the purely in-medium and in-vacuum $C$ and $S$ functions by using Eq.~\eqref{eq:linear_rel_C_S}. Taking $t_0=L$, we find that for the above time ordering
\begin{align}
& S_{21}=C_{1L}S_{2L}-S_{1L}C_{2L}=\cos(\Omega (L-t_1))(t_2-L)+\frac{\sin(\Omega(L-t_1))}{\Omega}\, , \nn
& C_{12}=-\partial_2 S_{12}=\partial_2 S_{21}=\cos(\Omega(L-t_1)) \, ,
\end{align}
such that
\begin{align}
{\rm Cot}_{21}&=\frac{C_{12}}{S_{21}}=\frac{\Omega}{\Omega (t_2-L) + \tan (\Omega(L-t_1))} \, .
\end{align}
For the reversed time ordering, we find
\begin{align}
S_{12}&=-S_{21}=-\cos(\Omega (L-t_1))(t_2-L)-\frac{\sin(\Omega(L-t_1))}{\Omega} \, , \nn
C_{21}&=-\partial_1 S_{21}=\cos(\Omega(L - t_1)) -\Omega (t_2-L) \sin(\Omega(L - t_1)) \, ,
\end{align}
leading to
\begin{equation}
{\rm Cot}_{12}=\frac{C_{21}}{S_{12}}=-\frac{\Omega - \Omega^2(t_2-L)\tan \Omega (L-t_1)}{\Omega(t_2-L)+\tan \Omega (L-t_1)} \, .
\end{equation}
Combining all these results, the auxiliary functions now read
\begin{align}
\khat^2_{21} &= -2i\omega {\rm Cot}_{21}=\frac{-2i\omega \Omega}{\Omega (t_2-L) + \tan (\Omega(L-t_1))} \, ,\\
\jhat_{21} &= 2i\omega \Omega \cos^2(\Omega(L-t_1)) \big[\tan(\Omega(L-t_1))-\cot(\Omega t_1) \big] \, , \\
\rhat_{21} &= \frac{2i\omega }{ \khat^2_{21}S_{12}} =\frac{1}{\cos (\Omega(L-t_1))} \,.
\end{align}
Inserting these expressions into Eq.~\eqref{eq:IOE_NLO_INPOUT_1}, one realizes that the remaining $t_2$ integral can be carried out,
\begin{align}
\int_L^\infty \rmd t_2 \,\rme^{-\frac{k_\perp^2}{\khat^2_{21}}} =\int_L^\infty \rmd t_2 \, \rme^{-i\frac{k_\perp^2}{2\omega}\left[(t_2-L)+\frac{\tan(\Omega(L-t_1))}{\Omega}\right]} =\frac{2\omega}{ik_\perp^2} \rme^{-i\frac{k_\perp^2}{2\omega\Omega}\tan(\Omega(L-t_1))} \, .
\end{align}
As a consequence, the \textbf{in-out} contribution to the NLO spectrum can finally be written as
\begin{align}\label{eq:In_out_NLO_final}
(2\pi)^2\omega\frac{\rmd I_{\text{in-out}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} &= \frac{2\bar{\alpha}\hat{q}_0\pi}{k_\perp^4} \Re \int_0^{L} \rmd t_1 \, \cos^2(\Omega(L-t_1)) \nn
&\times I_b\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2}{Q_r^2 \cos^2(\Omega(L-t_1))}\right) \rme^{-i \frac{k_\perp^2}{2\omega \Omega}\tan(\Omega(L-t_1))} \, .
\end{align}

\subsection{Final formulas}\label{sec:summary}

At this point, we summarize the main results obtained in the two previous sections. Our aim is to provide a set of compact equations which can be directly used in phenomenological studies or implemented in jet quenching Monte-Carlo codes. In what follows, we first present the results for a generic medium profile and then take the brick limit.

\subsubsection{Spectrum at LO+NLO for a generic medium profile}

Up to NLO in the IOE, the purely medium-induced gluon spectrum, i.e. after subtracting vacuum radiation, can be written as
\begin{align}\label{eq:spec_res_generic_1}
\frac{\rmd I^{\rm LO+NLO}}{\rmd \omega \rmd^2 \k} = \frac{\rmd I^{\rm LO}}{\rmd\omega \rmd^2 \k}+\frac{\rmd I_{\text{in}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} +\frac{\rmd I_{\text{broad}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} \, .
\end{align}
The LO term reads
\begin{align}\label{eq:spec_res_generic_2}
(2\pi)^2\omega\frac{\rmd I^{\rm LO}}{\rmd \omega \rmd^2 \k}&=8\Bar{\alpha} \pi \, \Re \int_0^\infty \rmd t \, {\rm Cot}(t,0) \frac{\rme^{-\frac{k_\perp^2}{\khat^2(t,0)} } }{\khat^2(t,0)} -\frac{8\pi \abar}{k_\perp^2} \, .
\end{align}
In addition, the NLO contributions are given by
\begin{align}\label{eq:spec_res_generic_3}
(2\pi)^2\omega\frac{\rmd I_{\text{in}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} &=\frac{\bar{\alpha}\pi}{2\omega k_\perp^4} \Re\, i\int_0^\infty \rmd t_2 \,\int_0^{t_2} \rmd t_1 \, \frac{\hat{q}_0(t_1)}{\rhat_{21}^2} \rme^{-\frac{k_\perp^2}{\khat^2_{21}}} \nn
&\times \left[Q^2_s(\infty,t_2) I_a\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2\rhat^2_{21}}{Q_r^2}\right)+2 C_{12}\rhat_{21} k_\perp^2 I_b\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2\rhat^2_{21}}{Q_r^2}\right)\right] \, ,
\end{align}
and
\begin{align}\label{eq:spec_res_generic_4}
(2\pi)^2\omega\frac{\rmd I_{\text{broad}}^{\rm NLO}}{\rmd \omega \rmd^2 \k}&= -\frac{\pi \bar{\alpha}}{k_\perp^4} \Re\int_0^\infty \rmd t_1 \, {\rm Cot}(t_1,0)\, Q^2_{s0}(\infty,t_1) \, I_a\left(\frac{k_\perp^2}{\khat^2(t_1,0)},\frac{k_\perp^2}{Q_b^2}\right)\, .
\end{align}
In the above equations we introduced
\begin{align}
{\rm Cot}(t_2,t_1)= {\rm Cot}_{21}\equiv \frac{C(t_1,t_2)}{S(t_2,t_1)}=\frac{C_{12}}{S_{21}} \,,
\end{align}
where the $C$ and $S$ functions are described in \eqn{eq:Abel}, and the functions $I_a$ and $I_b$ are given in Eqs.~\eqref{eq:bbS_alpha} and \eqref{eq:bbS_beta}, respectively. Further, the accumulated transverse momentum scales are $Q_{s0}^2(t_2,t_1)=\int_{t_1}^{t_2} \rmd t\, \hat q_0(t) $ and $Q_s^2(t_2,t_1) = Q_{s0}^2(t_2,t_1) \log \frac{Q_b^2}{\mu^2_\ast}$.
The remaining functions are defined as follows,
\begin{align}
\khat^2_{21} &= Q^2_s(\infty,t_2) -2i\omega{\rm Cot}_{21} \, ,\nn
\jhat_{21} &=\frac{i}{2\omega}\left(-{\rm Cot}_{12}+{\rm Cot}_{10}+\frac{2i\omega}{S^2_{21}\khat^2_{21}} \right)S_{21}^2 \khat_{21}^4\, , \nn
\rhat_{21} & =-\frac{2i\omega}{\khat^2_{21}S_{21}} \, .
\end{align}
The matching scale $Q_b$, which enters everywhere in the \textbf{broad} term and only in the $Q_s$ definition for the \textbf{in} case, is obtained by solving the transcendental equation
\beq\label{eq:Qb_final}
Q^2_b=\displaystyle\int_0^{\tilde L} \rmd t \, \hat q_0(t)\log\frac{Q^2_b}{\mu^2_\ast} \, ,
\eeq
with $\tilde L$ some effective medium length. The exact value of $\tilde L$ is not important, as long as it is taken such that the relevant support for the integration of $\hat q$ is covered. The radiative matching scale, $Q_r$, appears in all other terms related to the kernel expansion and is the solution of
\beq\label{eq:Qr_final}
Q^2_r=\sqrt{\hat q_0(t)\omega \log\frac{Q^2_r}{\mu^2_\ast}} \, .
\eeq

\subsubsection{Spectrum at LO+NLO for the brick model}\label{subsubsec:LO_NLO_brick}
The previous results are simplified when the medium is modelled as a plasma brick of length $L$ with
\begin{equation}
\hat{q}(t)=\hat{q} \, \Theta(L-t) \, .
\end{equation}
In this case, the full medium-induced spectrum at NLO in the IOE can be written as
\begin{align}\label{eq:spec_res_brick_1}
\frac{\rmd I^{\rm LO+NLO}}{\rmd \omega \rmd^2\k} = \frac{\rmd I_{\text{in-in}}^{\rm LO}}{\rmd \omega \rmd^2 \k}+ \frac{\rmd I_{\text{in-out}}^{\rm LO}}{\rmd \omega \rmd^2 \k}+\frac{\rmd I_{\text{in-in}}^{\rm NLO}}{\rmd \omega \rmd^2 \k}+\frac{\rmd I_{\text{in-out}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} +\frac{ \rmd I_{\text{broad}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} \, .
\end{align}
The leading order terms read
\begin{align}\label{eq:spec_res_brick_2}
(2\pi)^2\omega\frac{\rmd I^{\rm LO}_{\text{in-in}}}{\rmd \omega \rmd^2 \k}&=8\bar{\alpha} \pi \, \Re \int_0^L \rmd t \, \Omega \, {\rm cot}(\Omega t) \frac{\rme^{-\frac{k_\perp^2}{Q_s^2(L,t) -2i \omega \Omega {\rm cot}(\Omega t)} }}{Q_s^2(L,t) - 2i \omega \Omega {\rm cot}(\Omega t)} \,,
\end{align}
\begin{align}\label{eq:spec_res_brick_3}
(2\pi)^2\omega\frac{\rmd I_{\text{in-out}}^{\rm LO}}{\rmd \omega \rmd^2 \k} &= \frac{8\bar{\alpha}\pi }{k_\perp^2}\Re \left(\rme^{-\frac{ik_\perp^2}{2\omega \Omega \, \cot\Omega L } } -1\right) \, .
\end{align}
Here we used
\begin{align}
&\Omega = \frac{1-i}{2}\sqrt{\frac{\hat{q}_0}{\omega} \log\frac{Q_r^2}{\mu^2_\ast}} \, , \\
&Q_s^2(L,t) = \hat{q}_0 \log\frac{Q_b^2}{\mu^2_\ast} (L-t) \, .
\end{align}
The NLO \textbf{in-in} term can be written as
\begin{align}\label{eq:spec_res_brick_4}
(2\pi)^2\omega\frac{\rmd I_{\text{in-in}}^{\rm NLO}}{\rmd \omega \rmd^2 \k}&=\frac{\bar{\alpha}\pi\hat{q}_0}{2\omega k_\perp^4} \Re\, i\int_0^L \rmd t_2 \,\int_0^{t_2} \rmd t_1 \, \frac{\rme^{-\frac{k_\perp^2}{\khat^2_{21}}}}{\rhat_{21}^2} \nn
&\times \left[ Q^2_s(L,t_2) I_a\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2\rhat^2_{21}}{Q_r^2}\right)+2 C_{12}\rhat_{21} k_\perp^2 I_b\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2\rhat^2_{21}}{Q_r^2}\right)\right] \, ,
\end{align}
where $Q^2_s(L,t_2)$ is defined as above, the $C$ and $S$ functions are given in Eq.~\eqref{eq:SC-brick}, and
\begin{align}
& \khat^2_{21} = Q^2_s(L,t_2) -2i\omega \Omega \, {\rm cot}(\Omega (t_2-t_1))\, , \nn
& \jhat_{21}=\frac{i }{2\omega}\left(\Omega{\rm cot}(\Omega(t_2-t_1))+\Omega{\rm cot}(\Omega t_1)+\frac{2i\omega}{S^2_{21}\khat^2_{21}} \right)S_{21}^2 \khat_{21}^4\, , \nn
&\rhat_{21} =-\frac{2i\omega}{\khat^2_{21}S_{21}} \, .
\end{align}
The NLO \textbf{in-out} piece is
\begin{align}
(2\pi)^2\omega\frac{\rmd I_{\text{in-out}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} &= \frac{2\bar{\alpha}\hat{q}_0\pi}{k_\perp^4} \Re \int_0^{L} \rmd t_1 \, \cos^2(\Omega(L-t_1)) \nn
&\times I_b\left(\frac{k_\perp^2}{\jhat_{21}},\frac{k_\perp^2}{Q_r^2 \cos^2(\Omega(L-t_1))}\right) \rme^{-i \frac{k_\perp^2}{2\omega \Omega}\tan(\Omega(L-t_1))} \, ,
\end{align}
with the auxiliary functions
\begin{align}
&\khat^2_{21}=\frac{-2i\omega \Omega}{\Omega (t_2-L) + \tan (\Omega(L-t_1))} \, ,\nn
&\jhat_{21}=2i\omega \Omega \cos^2(\Omega(L-t_1)) \big[\tan(\Omega(L-t_1))-\cot(\Omega t_1) \big]\, , \nn
&\rhat_{21} = \frac{1}{\cos (\Omega(L-t_1))} \, .
\end{align}
Finally, the NLO \textbf{broad} contribution is given by
\begin{align}
(2\pi)^2\omega\frac{\rmd I_{\text{broad}}^{\rm NLO}}{\rmd \omega \rmd^2 \k} &= -\frac{\hat{q}_0\pi \bar{\alpha}}{k_\perp^4} \Re \int_0^L \rmd t_2 \, \Omega \, {\rm cot}(\Omega t_2)\, (L-t_2) \, I_a\left(\frac{k_\perp^2}{\khat^2(t_2,0)},\frac{k_\perp^2}{Q_b^2}\right) \, ,
\end{align}
where
\beq
\khat^2(t_2,0)= \hat{q}_0 \log\frac{Q_b^2}{\mu^2_\ast}(L-t_2) -2i\omega\Omega{\rm cot}(\Omega t_2)\, .
\eeq

\subsection{Asymptotic behavior}\label{sec:asymp_beahvior}
The complete expressions for the in-medium branching kernel, which we have summarized in the previous section, are written in terms of a few integrations that we did not manage to solve analytically. Before presenting their numerical implementation, we would like to give further analytical insight into the discussion. To that end, we analyze the behavior of the IOE spectrum up to NLO in two physically relevant asymptotic regimes: when the emitted gluon is either (i) soft ($\omega\ll\omega_c$) and collinear ($k_\perp^2\ll \hat{q} L$) or (ii) hard ($\omega\gg\omega_c$) and wide-angled ($k_\perp^2\gg \hat{q} L$). These are the regimes of validity of the BDMPS-Z and GLV approaches, respectively. Our results below are obtained by taking the brick limit, although similar conclusions are obtained for other choices of medium profile. In addition, we neglect the purely vacuum radiation as it is completely irrelevant for this discussion.

\subsubsection{Multiple soft scattering regime}\label{sec:MS_IOE}
We begin by analyzing the regime in which the emitted gluon is soft, i.e. $\omega\ll\omega_c$, and its typical formation time is much shorter than the medium length, $t_f\sim\sqrt{\omega/\hat{q}}\ll L$.
The latter condition can be translated into a constraint on the transverse momentum of the emission, $q_\perp^2\sim \hat{q}\, t_f \sim\sqrt{\hat{q}\omega}\ll \hat{q}L$,\footnote{Notice that this condition refers to the momentum of the in-medium vertex, rather than the final momentum of the gluon. Even for soft gluon emissions, final state broadening can lead to a final momentum $k_\perp^2\sim \hat{q}L$.} which implies that short formation time gluons typically acquire most of their transverse momentum due to final state broadening. Under these conditions, the IOE spectrum simplifies significantly.

The general formula for the medium-induced spectrum given by Eq.~\eqref{eq:spectrum} can be re-written in momentum space as\footnote{Strictly speaking, the upper bound of the integrals should be $L$. We take $L\to \infty$ to facilitate analytical manipulations.}
\begin{align} \label{eq:didwdkt-ms}
(2\pi)^2\omega\frac{\rmd I}{\rmd \omega \rmd^2 \k}&=\frac{2\bar{\alpha}\pi}{\omega^2} \Re\bigg[\int_0^\infty \rmd t_2 \int_0^{t_2} \rmd \tau \int_{\x,\q} \rme^{-i \q\cdot \x}\cP(\k-\q;L-t_2) \nn
&\times\bdel_\y\cdot \bdel_\x \, \cK(\x,t_2;\y,t_2-\tau)_{\y=0} \bigg] \, ,
\end{align}
where we have exploited that $\cP$ and $\cK$ are invariant under time translations, i.e. depend only on time differences, when the plasma is homogeneous. Eq.~\eqref{eq:didwdkt-ms} can be simplified noting that $\tau\sim t_f \ll t_2$ and thus one can set $t_2\to\infty$ in the $\tau$ integration upper limit. That is, the two time integrations decouple. In addition, we note that $q_\perp\sim 1/x_\perp$ corresponds to the transverse momentum acquired in the branching process, $q_\perp^2\sim\sqrt{\hat{q}\omega}$, which, as we have anticipated, is small with respect to the characteristic broadening momentum, i.e. $q_\perp^2\ll \hat{q}L$. As a consequence, we neglect $\q$ with respect to $\k$ inside $\cP$. Then, the $\q$ integral acts solely on $\cK$, and the $\x$ and $\q$ integrals yield
\begin{align} \label{eq:factorized}
(2\pi)^2\omega\frac{\rmd I}{\rmd \omega \rmd^2 \k}&=\frac{2\bar{\alpha}\pi}{\omega^2} \Re\bigg[\int_0^\infty \rmd t_2 \int_0^{\infty} \rmd \tau \, \cP(\k;L-t_2) \bdel_\y\cdot \bdel_\x \cK(\x,t_2;\y,t_2-\tau)_{\x=\y=0} \bigg] \, .
\end{align}
A familiar element in the previous equation is the integrated spectrum, defined as
\begin{align}
\omega\frac{\rmd I}{\rmd \omega} = \frac{2\bar{\alpha}\pi}{\omega^2}\, \Re\bigg[ \int_0^\infty \rmd \tau \, \bdel_\y\cdot \bdel_\x \, \cK(\x,t_2;\y,t_2-\tau)_{\x=\y=0} \bigg] \, .
\end{align}
Then, Eq.~\eqref{eq:factorized} can be finally written as
\begin{align}\label{eq:ll_2}
(2\pi)^2\omega\frac{\rmd I}{\rmd \omega \rmd^2 \k} = \int_0^\infty \rmd t_2 \, \cP(\k; L-t_2)\, \omega\frac{\rmd I}{\rmd \omega} \, .
\end{align}
That is, in the soft and collinear limit, the spectrum is given by the product of the time integral of the broadening distribution and the energy spectrum. This result is not tied to the IOE approach and has been previously obtained in the literature in the context of BDMPS-Z calculations <cit.> and exploited in Monte-Carlo simulations <cit.>.

We proceed to compute Eq.~\eqref{eq:ll_2} in the IOE. The soft limit of the IOE energy spectrum at all orders was computed in Refs. <cit.> and reads
\begin{align}\label{eq:ll_1}
\omega \frac{ \rmd I^{\rm IOE}}{\rmd \omega}= \omega \frac{\rmd I^{\rm LO}}{\rmd\omega}(\hat{q}\to \hat{q}_{\rm eff}) \, ,
\end{align}
where $\omega \frac{\rmd I^{\rm LO}}{\rmd \omega}= \bar{\alpha}\sqrt{\frac{\hat{q}L^2}{\omega}}$ corresponds to the well known BDMPS-Z result.
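Eq.~\eqref{eq:ll_2} is also simple to evaluate numerically. As an illustration, the following Python sketch (a minimal illustration, not our ancillary code) computes the factorized soft spectrum assuming the standard Gaussian form of the LO broadening distribution, $\cP^{\rm LO}(\k;\Delta L)=\frac{4\pi}{\hat{q}\Delta L}\,\rme^{-k_\perp^2/(\hat{q}\Delta L)}$, together with the BDMPS-Z energy spectrum quoted above; all parameter values and the normalization $\bar{\alpha}=\alpha_s C_F/\pi$ are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the factorized soft spectrum, Eq. (ll_2); the Gaussian
# form of P^LO and all parameter values are illustrative assumptions.
import numpy as np

qhat = 0.9               # effective jet quenching parameter [GeV^3], illustrative
L    = 6.0 * 5.068       # medium length: 6 fm in GeV^-1
abar = 0.28 * (4.0 / 3.0) / np.pi   # alpha_s C_F / pi, assumed normalization

def P_LO(kt2, dL):
    """Gaussian LO broadening distribution, unit-normalized in d^2k/(2pi)^2."""
    return 4.0 * np.pi / (qhat * dL) * np.exp(-kt2 / (qhat * dL))

def soft_spectrum(omega, kt2, n=2000):
    """(2pi)^2 w dI/(dw d^2k) = int_0^L dt2 P(k; L - t2) * w dI/dw."""
    w_dIdw = abar * np.sqrt(qhat * L**2 / omega)   # BDMPS-Z energy spectrum
    t2 = (np.arange(n) + 0.5) * L / n              # midpoint grid on [0, L]
    return np.sum(P_LO(kt2, L - t2)) * (L / n) * w_dIdw

print(soft_spectrum(omega=5.0, kt2=1.0))           # soft, collinear kinematics
\end{verbatim}
The midpoint grid avoids the integrable endpoint at $t_2=L$, where the Gaussian width vanishes.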
The effective jet quenching parameter is given at leading-logarithmic order by <cit.>\footnote{As shown in Ref. <cit.>, if all terms in $\hat{q}_{\rm eff}$ are resummed, Eq.~\eqref{eq:ll_1} gives the full energy spectrum for $\omega \ll \omega_c$.}
\begin{align}\label{eq:ll_3}
\hat{q}_{\rm eff}=\hat{q}_0 \log\left(\frac{Q_r^2}{\mu_\ast^2}\right) \, \left(1+\frac{1.016}{\log\left(\frac{Q_r^2}{\mu_\ast^2}\right)}+\mathcal{O}\left(\frac{1}{\log^2\left(\frac{Q_r^2}{\mu_\ast^2}\right)}\right)\right) \, .
\end{align}
Eqs.~\eqref{eq:ll_1} and \eqref{eq:ll_3} show that the IOE energy spectrum is governed by the LO result, with higher orders suppressed by logarithmic powers which can be written in terms of the ratio $\hat{q}_0/\hat{q}$. A similar conclusion can be reached regarding the broadening distribution in the kinematical limit $k_\perp^2\ll \hat{q}L$, see for example Eq.~\eqref{eq:assist_1} for the result up to NLO. Combining these two results, one concludes that the soft and collinear limit of the fully differential spectrum in Eq.~\eqref{eq:ll_2} also obeys this functional form. Note that the spectrum will consist of terms where the matching scale is given by $Q_r$ and others where it is $Q_b$, depending on whether the terms come from the expansion of the kernel or of the broadening distribution. In Appendix <ref> we explicitly show that the \textbf{in-out} NLO contribution in the IOE spectrum scales as the LO term multiplied by a logarithm that arises from the ratio $\hat{q}_0/\hat{q}$.

\subsubsection{Rare, hard scattering regime}
Let us now consider the regime orthogonal to the one studied in the previous section. Here, the gluon is hard, $\omega\gg \omega_c$, and carries a large transverse momentum, $k_\perp^2\gg \hat{q} L$, i.e. the intrinsic momentum of the gluon is significantly larger than what it typically acquires through broadening in the medium. In this case, the multiple soft scattering contribution is suppressed by the LPM effect and the emission spectrum is dominated by a single hard scattering in the medium. This corresponds to the truncation of the opacity expansion, considered by GLV, at first order <cit.>, leading to the emission spectrum
\begin{equation}\label{eq:glv}
\begin{split}
(2\pi)^2\omega\frac{\rmd I^{\rm GLV}}{\rmd \omega \rmd^2 \k} &= \frac{2\bar{\alpha}\hat{q}_0L^3\pi}{\omega^2}\int_0^\infty \rmd x \, \frac{x-\sin(x)}{x^2} \frac{\gamma+u-x}{(u^2+2u(\gamma-x)+(\gamma+x)^2)^{3/2}} \, ,
\end{split}
\end{equation}
where $u=\frac{L}{2\omega}k_\perp^2$ and $\gamma=\frac{\mu^2L}{2\omega}$. The medium potential was taken to be the GW model and thus $\mu$ is its infrared regulator. It can be related to the universal physical mass $\mu_\ast$, as mentioned in Section <ref> and detailed in Refs. <cit.>. Note that the conditions $\omega \gg \omega_c$ and $k_\perp^2\gg \hat q L$ correspond to $u\gg 1 \gg \gamma$ in the previous equation. To take this limit in Eq.~\eqref{eq:glv}, let us consider the integral
\begin{equation}\label{eq:glv_app1}
\begin{split}
I &\equiv \int_0^\infty \rmd x \, \frac{x-\sin(x)}{x^2} \frac{\gamma+u-x}{(u^2+2u(\gamma-x)+(\gamma+x)^2)^{3/2}} \, ,
\end{split}
\end{equation}
in two regions: (i) $u\gg x\gg \gamma$ (denoted by $<$) and (ii) $u\sim x\gg 1$, but $u-x\gg \gamma$ (denoted by $>$). First, we split $I$ using
\begin{equation}
\int_0^\infty \to \lim_{\epsilon \to 0} \, \int_0^{\epsilon u} + \int_{\epsilon u}^\infty \equiv I_< + I_> \,,
\end{equation}
with $\epsilon u$ held constant.
The contribution from the first region can be easily computed to leading-logarithmic accuracy,
\begin{equation}
\begin{split}
I_<&= \lim_{\epsilon \to 0} \, \int_0^{\epsilon u} \rmd x \, \frac{x-\sin(x)}{x^2} \frac{1}{u^{2}} + \mathcal{O}\left(\frac{x}{u}\right) =\lim_{\epsilon \to 0} \, \frac{1}{u^{2}} G(\epsilon u)\approx \frac{1}{u^2}\left[\log(\epsilon u)-1+\gamma_E\right] \, ,
\end{split}
\end{equation}
where we introduced\footnote{${\rm Ci}(x)=-\int_x^\infty \rmd t \, \frac{\cos (t)}{t}$.}
\begin{align}\label{eq:G}
G(a)\equiv \int_0^{a} \rmd x \, \frac{x-\sin(x)}{x^2}=\log(a)-1+\gamma_E- {\rm Ci}(a)+\frac{\sin(a)}{a} \, .
\end{align}
In the case of $I_>$, we first notice that $x\gg1$, so we can drop the $\sin(x)$ term. Defining $a\equiv \gamma/u\ll 1$ and changing variables to $z=x/u$, we have
\begin{equation}
\begin{split}
I_>&\approx \frac{1}{u^2}\int_{\epsilon}^\infty \frac{\rmd z}{z} \frac{1-z}{((1-z)^2+4za)^{\frac{3}{2}}} \\
&=\frac{1}{u^2}\left[\frac{2(4a-3+z)}{4(a-1)\sqrt{z^2+1+(4a-2)z}}-\arctan\left(\frac{1+(2a-1)z}{\sqrt{z^2+(4a-2)z+1}}\right)\right]_{\epsilon}^\infty\\
&=\frac{1}{u^2}\left[-2+\frac{1}{2}\log \left(\frac{1-a}{a}\frac{-1}{a(a-1)\epsilon^2}\right)\right]=\frac{1}{u^2}\left(-2+\log\left(\frac{u}{\gamma}\right)-\log\epsilon\right) \, .
\end{split}
\end{equation}
Combining the results from the two different regions, the $I$ integral gives
\begin{equation}
I=\frac{1}{u^2}\left(-3+\gamma_E+\log\left(\frac{u^2}{\gamma}\right)\right)+ \mathcal{O}\left(\frac{1}{u^3}\right) \, .
\end{equation}
Inserting this result into the GLV spectrum given in Eq.~\eqref{eq:glv} yields
\begin{equation}\label{eq:glv_largekt}
\begin{split}
(2\pi)^2\omega\frac{\rmd I^{\rm GLV}}{\rmd \omega \rmd^2 \k} &\approx \frac{2\bar{\alpha}\hat{q}_0L^3\pi}{\omega^2}\frac{1}{u^2}\left(\log\left(\frac{u^2}{\gamma}\right)+\gamma_E-3 \right)=\frac{8\bar{\alpha}\hat{q}_0L\pi}{k_\perp^4}\log\left(\frac{k_\perp^4L \rme^{\gamma_E-3}}{2\omega \mu^2}\right)\, .
\end{split}
\end{equation}
The expected $1/k_\perp^4$ power tail naturally arises from a Coulomb-like single hard scattering in the medium. Counterintuitively, even though we are considering here the high energy limit, the spectrum is still sensitive to the infrared details of the in-medium scattering potential via the thermal mass $\mu$. For the sake of comparing with the IOE in what follows, a couple of manipulations are required. First, we re-write the resulting logarithm as
\begin{equation}
\log\left(\frac{k_\perp^4L \rme^{\gamma_E-3}}{2\omega \mu^2}\right)=\left(\gamma_E-3+\log\left(\frac{k_\perp^2}{\mu^2}\right)+\log\left(\frac{k_\perp^2L}{2\omega}\right)\right)\, ,
\end{equation}
and then replace the GW mass by the universal infrared scale $\mu_\ast$ through the leading-logarithmic prescription $4\mu_\ast^2=\mu^2 \rme^{-1+2\gamma_E}$ <cit.>. This leads to our final expression for the GLV spectrum:
\begin{equation}
\begin{split}
(2\pi)^2\omega\frac{\rmd I^{\rm GLV}} {\rmd \omega \rmd^2 \k}=\frac{8\bar{\alpha}\pi\hat{q}_0L}{ k_\perp^4}\left(3\gamma_E-4+\log\left(\frac{k_\perp^2}{4\mu_\ast^2}\right)+\log\left(\frac{k_\perp^2L}{2\omega}\right)\right)\, .
\end{split}
\end{equation}
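As a quick cross-check, the exact integral in Eq.~\eqref{eq:glv_app1} can be compared numerically against this asymptotic formula. A minimal Python sketch (the values of $u$ and $\gamma$ are illustrative; this is not part of our ancillary files) is:
\begin{verbatim}
# Numerical check of I(u, gamma), Eq. (glv_app1), against its
# u >> 1 >> gamma asymptotics; parameter values are illustrative.
import numpy as np
from scipy.integrate import quad

def I_exact(u, gamma):
    def f(x):
        return (x - np.sin(x)) / x**2 * (gamma + u - x) \
               / (u**2 + 2 * u * (gamma - x) + (gamma + x)**2)**1.5
    # the integrand is peaked near x = u, so treat that point explicitly
    a, _ = quad(f, 0.0, 2 * u, points=[u], limit=2000)
    b, _ = quad(f, 2 * u, np.inf, limit=200)
    return a + b

def I_asym(u, gamma):
    return (np.log(u**2 / gamma) + np.euler_gamma - 3.0) / u**2

for u, gamma in [(50.0, 1e-2), (100.0, 1e-2), (200.0, 1e-3)]:
    print(u, gamma, I_exact(u, gamma), I_asym(u, gamma))
\end{verbatim}
The two columns approach each other as $u$ grows and $\gamma$ shrinks, as expected from the $\mathcal{O}(1/u^3)$ remainder.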
Let us now take the $\omega\gg \omega_c$ and $k_\perp^2\gg \hat{q}L$ limits in the IOE spectrum. At leading order, since $k_\perp^2\gg \hat{q}L \sim Q_s^2$, broadening contributions are sub-leading and thus can be ignored in Eq.~\eqref{eq:spec_res_brick_2}. Further, $\omega \gg \omega_c$ is equivalent to $\Omega L \ll 1$. Combining these two observations yields the LO contributions at $\mathcal O(\k^2)$:
\begin{align}\label{eq:oo_1}
(2\pi)^2\omega\frac{\rmd I^{\rm LO}_{\rm \text{in-in}}}{\rmd \omega \rmd^2 \k}&\approx\frac{4\bar{\alpha} \pi}{\omega} \, \Re \bigg[ i\, \int_0^L \rmd t \, \exp\left[-\frac{ik_\perp^2}{ 2 \omega } t\right] \bigg]=\frac{8\bar{\alpha}\pi}{k_\perp^2}\left(1-\cos \frac{k_\perp^2L}{2\omega }\right)\,,
\end{align}
\begin{align}
(2\pi)^2\omega\frac{\rmd I_{\rm \text{in-out}}^{\rm LO}}{\rmd \omega \rmd^2 \k} &\approx -\frac{8\bar{\alpha}\pi}{k_\perp^2}\left(1-\cos \frac{k_\perp^2L}{2\omega }\right)\,.
\end{align}
Adding the two components results in a vanishing spectrum at this order in $\k$ and indicates the need to go to higher orders which, as we will see, affect the \textbf{in-in} and \textbf{in-out} terms rather differently. The latter is exponentially suppressed when including higher orders, as can be derived from Eq.~\eqref{eq:HO_brick_out_2}. In turn, the \textbf{in-in} term follows a power-law suppression. To see this, we perform a second order gradient expansion of the broadening distribution in the $Q_s^2\ll k_\perp^2$ limit as
\begin{align}\label{eq:grad_exp}
\int_\p \cP^{\rm LO}(\k-\p)\, u(\p) &= \int_\q \cP^{\rm LO}(\q)\, u(\k-\q) \nn
&\approx \int_\q \cP^{\rm LO}(\q) \left[ 1+ \q^i \nabla^i _\k +\frac{1}{2} \q^i\q^j \nabla^i _\k \nabla^j _\k \right] u(\k)\nn
&= \Bigg[ 1+ \frac{1}{4} \underbrace{\int_\q q_\perp^2\, \cP^{\rm LO}(\q) }_{\hat{q}L} \nabla^2 _\k \Bigg] u(\k) \, ,
\end{align}
where $u(\k)$ is a test function, and we have used unitarity in the first line and rotational symmetry to drop the linear term. The first term in brackets corresponds to the result already obtained in Eq.~\eqref{eq:oo_1}, where the broadening distribution was replaced by a Dirac $\delta$-function. Keeping only the second term in Eq.~\eqref{eq:grad_exp} and plugging it into Eq.~\eqref{eq:HO_brick_InIn} we obtain
\begin{align}
(2\pi)^2\omega\frac{\rmd I_{\rm \text{in-in}}^{\rm LO}}{\rmd \omega \rmd^2 \k}&\approx\frac{\bar{\alpha}\pi\hat{q}L}{\omega} \nabla^2_\k \Re \bigg[i \, \int_0^L \rmd t_2 \, \rme^{-\frac{i \k^2 \tan(\Omega t_2) }{2 \omega \Omega } }\bigg] \, .
\end{align}
Now, because $k_\perp$ is large, the phase oscillates rapidly unless $t_2$ is small enough. To estimate the support of the $t_2$ integral, we exploit the fact that in the high energy regime $\Omega L \ll 1$, and thus the dominant contribution to the integral comes from the region where $t_2\ll \frac{2\omega}{k_\perp^2}\ll \Omega^{-1}$. As a consequence, one can replace the integration limit $L\to \infty$ and linearize the tangent, to obtain the leading asymptotic behavior of the spectrum
\begin{align} \label{eq:lo-highenergy}
(2\pi)^2\omega\frac{\rmd I_{\rm \text{in-in}}^{\rm LO}}{\rmd \omega \rmd^2 \k}&\approx\frac{\bar{\alpha}\pi\hat{q}L}{\omega} \nabla^2_\k\,\Re \bigg[i \, \int_0^\infty \rmd t_2 \, \rme^{-i \frac{\k^2 t_2}{2\omega } } \bigg] = \frac{8\bar{\alpha} \pi \hat{q}_0L}{k_\perp^4}\log \frac{Q_b^2}{\mu_\ast^2} \, .
\end{align}
Notice that the logarithm depends on the broadening matching scale since it originates from the last line in Eq.~\eqref{eq:grad_exp}. Interestingly, the leading order contribution exhibits a $1/k_\perp^4$ tail, physically corresponding to early-time hard emissions, which then suffer multiple scatterings in the medium, acquiring a momentum $\hat{q}L$ much smaller than the momentum off the emission vertex.
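The unitarity and second-moment relations used in Eq.~\eqref{eq:grad_exp} are straightforward to verify explicitly once a form for $\cP^{\rm LO}$ is assumed; a small symbolic sketch, taking the Gaussian (harmonic) LO broadening distribution as the assumed input, reads:
\begin{verbatim}
# Symbolic check of the moments entering Eq. (grad_exp), assuming the
# Gaussian (harmonic) LO broadening distribution P(q) = 4 pi/Qs2 exp(-q^2/Qs2)
# with Qs2 = qhat * L; int_q denotes the measure d^2q/(2 pi)^2.
import sympy as sp

q = sp.symbols('q', positive=True)
Qs2 = sp.symbols('Q_s2', positive=True)
P = 4 * sp.pi / Qs2 * sp.exp(-q**2 / Qs2)

# angular integration already done: int d^2q/(2pi)^2 -> int_0^oo q dq/(2 pi)
norm = sp.integrate(P * q / (2 * sp.pi), (q, 0, sp.oo))          # unitarity
mom2 = sp.integrate(q**2 * P * q / (2 * sp.pi), (q, 0, sp.oo))   # 2nd moment

print(norm)   # 1
print(mom2)   # Q_s2, i.e. qhat*L, as in the underbrace of Eq. (grad_exp)
\end{verbatim}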
Compared to the vacuum-like emission in Eq. (<ref>), we observe that although final state broadening does not change the power-law dependence on the transverse momentum, it suppresses this second-order contribution by a factor $\frac{\hat{q}L}{k_\perp^2}\ll 1$.

At NLO, we need to analyze individually each of the three identified terms in this asymptotic regime. Starting with the \textbf{broad} term, we note that in the high energy limit
\begin{align}
\khat^2(t,0) \simeq \hat{q}\,(L-t) -\frac{2i\omega}{t} \simeq -\frac{2i\omega}{t} \, ,
\end{align}
so that
\begin{align}\label{eq:IOE_med_broad_app_3}
(2\pi)^2\omega\frac{\rmd I_{\rm \text{broad}}^{\rm NLO}}{\rmd \omega \rmd^2 \k}= -\frac{\pi \bar{\alpha}\hat q_0}{k_\perp^4} \Re\bigg[\,\int_0^L \frac{\rmd t }{t}\, (L-t) \, I_a\left(i\frac{k_\perp^2}{ 2\omega}t,\frac{k_\perp^2}{Q_b^2}\right)\bigg]\,.
\end{align}
It is convenient to write the remaining integral as
\begin{align}
\int_0^L \frac{\rmd t}{t}\,(L-t)\, I_a\left(i\frac{k_\perp^2}{2\omega}t,\frac{k_\perp^2}{Q_b^2}\right)= \frac{2\omega}{ik_\perp^2}\int_0^{x_{\rm max}} \frac{\rmd x}{x}\,(x_{\rm max}-x)\, I_a(x,y) \, ,
\end{align}
where we used $x = i\frac{k_\perp^2}{2\omega}t$, $y= \frac{k_\perp^2}{Q_b^2}$ and $x_{\rm max}= i\frac{k_\perp^2}{2\omega}L\gg1$. This simplified integral is analytically solvable,
\begin{align}
&\int_0^{x_{\rm max}} \frac{\rmd x}{x}(x_{\rm max}-x)I_a(x,y)=-8\Bigg(-x^2+2\rme^{-x}(x_{\rm max}-6-2x)+(x_{\rm max}-4)\left[\log(x)\right. \nn
&\left.+x-2{\rm Ei}(-x)\right]+\rme^{-x}\left[4(1+x)+2x^2+x^3-x_{\rm max}(1+x+x^2)\right]\left[{\rm Ei}(x)-\log\frac{4x^2}{y}\right] \Bigg) \, .
\end{align}
Truncating the previous exact result to leading order in $x_{\rm max}$, we obtain
\begin{align}
\int_0^{x_{\rm max}} \frac{\rmd x}{x}\,(x_{\rm max}-x)\, I_a(x,y) \approx 8\left[\log\frac{4}{y\, x_{\rm max}} + 5 - 3\gamma_E\right] x_{\rm max} \, .
\end{align}
Plugging this last result into Eq.~\eqref{eq:IOE_med_broad_app_3} yields
\begin{align}\label{eq:IOE_med_broad_app_4}
(2\pi)^2\omega\frac{\rmd I_{\rm \text{broad}}^{\rm NLO}}{\rmd \omega \rmd^2 \k}\approx \frac{8\pi \bar{\alpha}\hat q_0 L}{k_\perp^4} \left[ \log \frac{k_\perp^2}{4Q_b^2}+ \log \frac{k_\perp^2 L}{2\omega} - 5 +3 \gamma_E \right] \,.
\end{align}
We refrain from giving any physical interpretation of this result at this stage and proceed to compute the \textbf{in-in} contribution. In this case, the lack of a vacuum-like ($\rmd t/t$) divergence simplifies the calculation. First, we use the asymptotic form of the exponential integral function,
\begin{align}
{\rm Ei}(x)= \frac{\rme^x}{x}\sum_{n=0}^{N-1} \frac{n!}{x^n}+\mathcal{O}\left(\frac{N!}{x^N}\right) \, ,
\end{align}
in Eqs.~\eqref{eq:bbS_alpha} and \eqref{eq:bbS_beta}, to obtain the leading asymptotic forms of $I_a$ and $I_b$,
\begin{align}
I_a \approx -8 \, , \qquad I_b \approx 4 \, .
\end{align}
Moreover, since $\Omega L\ll 1$ we can simplify the auxiliary functions given in Section~\ref{subsubsec:LO_NLO_brick} down to
\begin{align}
\rhat_{21} \approx C_{12}\approx 1 \, , \qquad \khat^2_{21}\approx-\frac{2i\omega}{t_2-t_1} \, .
\end{align}
Consequently, the \textbf{in-in} spectrum reduces to
\begin{align}
(2\pi)^2\omega\frac{\rmd I^{\rm NLO}_{\rm \text{in-in}}}{\rmd \omega \rmd^2 \k}&\approx \frac{\pi \bar{\alpha}\hat q_0}{2\omega k_\perp^4} \Re\bigg[ i\int_0^L \rmd t_2 \,\int_0^{t_2} \rmd t_1 \, \rme^{-i\frac{ k_\perp^2}{2\omega}(t_2-t_1)} \left(-8\hat q (L-t_2) +8 k_\perp^2\right) \bigg] \nn
&\approx \frac{4\pi \bar{\alpha}\hat q_0}{\omega k_\perp^2} \Re\bigg[ i\int_0^L \rmd t_2 \,\int_0^{t_2} \rmd t_1 \, \rme^{-i\frac{k_\perp^2}{2\omega}(t_2-t_1)} \bigg]\, \nn
& = \frac{8\pi \bar{\alpha}\hat q_0L}{k_\perp^4} +\cO(k_\perp^{-6})\,.
\end{align}
Finally, for the \textbf{in-out} term we obtain
\begin{align}
&(2\pi)^2\omega\frac{\rmd I_{\rm \text{in-out}}^{\rm NLO}}{\rmd \omega \rmd^2 \k}\approx\frac{8\bar{\alpha}\hat{q}_0\pi}{k_\perp^4} \Re\bigg[ \int_0^{L} \rmd t_1 \, \rme^{-i \frac{k_\perp^2}{2\omega }t_1} \bigg]=\frac{16\bar{\alpha}\hat{q}_0\pi \omega}{k_\perp^6}\sin\left(\frac{k_\perp^2L}{2\omega}\right) \, ,
\end{align}
which is power-suppressed with respect to the \textbf{broad} and \textbf{in-in} terms and can be ignored. Then, the NLO contribution to the IOE spectrum is given by
\begin{align}\label{eq:IOE_large_kt}
&(2\pi)^2\omega\frac{\rmd I^{\rm NLO}}{\rmd \omega \rmd^2 \k}\approx\frac{8\pi \bar{\alpha}\hat q_0L}{k_\perp^4}\left[3 \gamma_E-4+\log\left(\frac{k_\perp^2}{4Q_b^2}\right)+\log\left(\frac{k_\perp^2L}{2\omega}\right)\right]\,.
\end{align}
Finally, combining the LO and NLO results we obtain that the IOE spectrum at high energies reduces to
\begin{align} \label{eq:final-highenergy}
(2\pi)^2\omega\frac{\rmd I^{\rm LO+NLO}}{\rmd \omega \rmd^2 \k}&=\frac{8\pi \bar{\alpha}\hat q_0L}{k_\perp^4}\left[3 \gamma_E-4+\log\left(\frac{k_\perp^2}{4Q_b^2}\right)+\log\left(\frac{k_\perp^2L}{2\omega}\right)+\log\left( \frac{Q_b^2}{\mu_\ast^2} \right)\right] \nn
&= \frac{8\pi\bar{\alpha}\hat{q}_0L}{ k_\perp^4}\left[3\gamma_E-4+\log\left(\frac{k_\perp^2}{4\mu_\ast^2}\right)+\log\left(\frac{k_\perp^2L}{2\omega}\right)\right]\nn
&= (2\pi)^2\omega\frac{\rmd I^{\rm GLV}} {\rmd \omega \rmd^2 \k}\,.
\end{align}
A few important remarks are in order at this point. The most obvious one is that the LO+NLO result exactly matches the GLV result in the large $\omega$, large $\k$ regime. This was somewhat expected, but not trivial to confirm explicitly. Among other technicalities, a second order gradient expansion in transverse momentum for the LO term was essential. Another remarkable and related feature of the final result is that both the LO and NLO terms depend on $Q_b$, while their sum does not. This was already encountered in the energy spectrum calculation, as discussed in Ref.~\cite{IOE3}, and constitutes a sanity check of the Improved Opacity Expansion framework. We note again that, unlike the soft limit considered before, this regime, although exhibiting a non-trivial cancellation of the matching-scale dependence between different orders, does not provide any constraint on the functional form of $Q_b$, as can be observed from Eq.~\eqref{eq:final-highenergy}. This is unlike, for example, the result in Eq.~\eqref{eq:IOE_smallkt_final}, which forbids the matching scale from being a numerical constant. From a more pragmatic point of view, the exact matching between the IOE and GLV provides a non-trivial check on the computations performed in the previous sections.

\section{Numerical results}\label{sec:numerics}
In this section, we numerically explore the IOE spectrum in the brick model. We compare our results for the IOE spectrum truncated at LO and at LO+NLO (see Section~\ref{subsubsec:LO_NLO_brick}) to (i) the single, hard scattering limit encompassed in the GLV spectrum (see Eq.~\eqref{eq:glv}) and (ii) an all-orders resummation of the spectrum presented in~\cite{CarlotaFabioLiliana}. Notice that the LO result can be considered as the BDMPS-Z solution with the ultraviolet regulator taken to be the radiative matching scale $Q_r$. These comparisons should be regarded, at this point, as merely a theoretical exercise. However, we choose the medium parameters to be in the ballpark of LHC conditions. More concretely, the LHC-inspired medium has $\hat{q}_0=0.156$~GeV$^3$, length $L=6$~fm and infrared regulator $\mu_\ast=0.355$~GeV. Further, we take a fixed value of the strong coupling constant, $\alpha_s=0.28$, and consider radiation off a hard quark such that $C_R=C_F=4/3$. This set of parameters leads to a critical frequency scale $\omega_{c0}\equiv \hat{q}_0 L^2=140$~GeV and a saturation scale $Q^2_{s0}\equiv \hat{q}_0 L=4.68$~GeV$^2$.
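For orientation, the two transcendental matching-scale equations, Eqs.~\eqref{eq:Qb_final} and \eqref{eq:Qr_final}, can be solved by simple fixed-point iteration, after which closed-form pieces such as the LO \textbf{in-out} term, Eq.~\eqref{eq:spec_res_brick_3}, are immediate to evaluate. The following Python sketch illustrates this with the brick parameters above; it is a minimal illustration rather than our ancillary code, and the normalization $\bar{\alpha}=\alpha_s C_R/\pi$ is an assumption of the sketch.
\begin{verbatim}
# Minimal sketch (not the ancillary code): fixed-point solution of the
# matching scales and evaluation of the LO in-out term for the brick.
import numpy as np

qhat0 = 0.156              # GeV^3
L     = 6.0 * 5.068        # 6 fm in GeV^-1
mu2   = 0.355**2           # mu_*^2 in GeV^2
abar  = 0.28 * (4.0/3.0) / np.pi   # alpha_s C_R / pi (assumed normalization)

def fixed_point(f, x0=10.0 * mu2, n=200, tol=1e-12):
    """Iterate x -> f(x) until convergence; x0 starts above mu_*^2."""
    x = x0
    for _ in range(n):
        xn = f(x)
        if abs(xn - x) < tol:
            break
        x = xn
    return x

Qb2 = fixed_point(lambda q2: qhat0 * L * np.log(q2 / mu2))   # Eq. (Qb_final)
Qr2 = lambda w: fixed_point(                                  # Eq. (Qr_final)
    lambda q2: np.sqrt(qhat0 * w * np.log(q2 / mu2)))

def lo_in_out(w, kt2):
    """(2pi)^2 w dI^LO_in-out/(dw d^2k), Eq. (spec_res_brick_3)."""
    Om = 0.5 * (1 - 1j) * np.sqrt(qhat0 / w * np.log(Qr2(w) / mu2))
    cot = 1.0 / np.tan(Om * L)
    return (8 * abar * np.pi / kt2) * \
        (np.exp(-1j * kt2 / (2 * w * Om * cot)) - 1).real

print(Qb2, Qr2(5.0), lo_in_out(5.0, 2.0))
\end{verbatim}
The fixed-point iteration converges quickly here because the maps have derivatives of magnitude below unity near the solution for these parameter values.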
Regarding our numerical routines, they run on a regular laptop with an average computing time of $\mathcal{O}(1)$ seconds for each pair of $(\omega,\k)$ values, considering the above set of parameters. The computing time is significantly smaller, $\mathcal{O}(10^{-2})$ seconds, if not too extreme values of the kinematic variables are chosen.

Before comparing our result to other approaches, we first address a natural question: what is the dependence of the IOE spectrum on the matching scales $Q_r$ and $Q_b$? In Fig.~\ref{fig:qs-variations} we plot the medium-induced spectrum as a function of $k_\perp/Q_{s0}$ for two different gluon frequencies, $\omega\!=\!5, 100$ GeV, truncating the spectrum at LO (left) or at LO+NLO (right). The central curves are obtained by solving Eqs.~\eqref{eq:Qb_final} and \eqref{eq:Qr_final}. Then, we perform an independent variation of the matching scales by a factor of $2$ ($1/2$), which leads to the uncertainty bands around the central value. We recall that this variation is associated with the uncertainty in the definition of such scales, which are constrained only up to an overall constant factor.

Let us discuss first the large $\omega$, large $k_\perp$ regime, i.e. the inset in the bottom row plots. Analytically, we have shown that, in the asymptotic kinematical region, the dependence on the matching scale vanishes when one considers the LO+NLO contribution, see Eq.~\eqref{eq:final-highenergy}. This is exactly what we observe numerically. Although this conclusion was reached for highly energetic gluons, the numerical results indicate that it holds reasonably well in the case of soft gluons too. Notice that when analyzing only the LO, a bigger, but still weak, dependence on the matching scale $Q_b$ is observed. This corresponds to Eq.~\eqref{eq:lo-highenergy}, where a logarithmic dependence on $Q_b$ appears. Then, we reemphasize that only when considering LO+NLO is the dependence on the matching scale residual, due to the cancellation occurring between these two terms.

The small $\omega$, small $\k$ scenario is represented by the top row plots in Fig.~\ref{fig:qs-variations}. In this region, we argued in the previous section that all orders scale as the LO term, with logarithmic power corrections as one goes higher in the IOE, see Eqs.~\eqref{eq:ll_2} and \eqref{eq:ll_1}. Numerically, we observe that there is a large uncertainty due to the variation of the matching scales at LO, mainly from $Q_r$. However, if one includes the NLO term (top right), the dependence on the matching scales almost disappears. Regarding $Q_b$, we note that higher orders in the broadening factor $\cP$ also enter through logarithmic power corrections on top of the LO term. Thus, the weaker dependence of LO+NLO on $Q_b$ as compared to LO is to be expected. These findings are in line with~\cite{IOE3}, where it was observed that, for the energy spectrum, although variations of the matching scale $Q_r$ could drastically change the LO and the NLO terms separately, the sum LO+NLO was only sensitive to these variations at NNLO. This is a consequence of the fact that once all orders in Eq.~\eqref{eq:ll_3} are considered, the spectrum becomes independent of $Q_r$.
Since in Fig.~\ref{fig:qs-variations} the largest uncertainty comes from the scale $Q_r$, we argue that the result obtained here is a manifestation of the findings of~\cite{IOE3}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.45,page=2]{plot-Qs-variations.pdf}
\includegraphics[scale=0.45,page=1]{plot-Qs-variations.pdf} \\
\includegraphics[scale=0.45,page=4]{plot-Qs-variations.pdf}
\includegraphics[scale=0.45,page=3]{plot-Qs-variations.pdf}
\caption{Impact of variations by a factor of 2 in the two matching scales, $Q_b$ and $Q_r$, on the LO (left column) and the LO+NLO (right column) at two different frequencies: $\omega\!=\!5, 100$ GeV on the top and bottom rows, respectively.}
\label{fig:qs-variations}
\end{figure}

In Fig.~\ref{fig:lhc} we present the final comparison between the IOE spectrum and the other approaches mentioned above. We consider two gluon frequencies, $\omega= 0.05\, \omega_{c0}$ and $\omega= 2\, \omega_{c0}$, corresponding to the cases of a soft and a hard gluon, and we use the GW mass $\mu^2=0.43$~GeV$^2$. Again, we vary the matching scales of the IOE spectrum up and down by a factor of two, independently, and then take the envelope to build the uncertainty band. The overall conclusion from the two plots in Fig.~\ref{fig:lhc}, and the most important result of this paper, is that the IOE spectrum, up to NLO, already does a reasonable job at capturing the full solution (less than 25\% deviations), with the advantage that it requires considerably less computational power. The observed deviation from the full numerical result reflects the sensitivity of the transverse momentum distribution to the infrared. A wider separation between $\mu^2$ and $Q_b^2$ or $Q_r^2$ would yield a better agreement.

Let us split the discussion of Fig.~\ref{fig:lhc} into small and large transverse momentum. The small $k_\perp$ and small $\omega$ regime is dominated by multiple scattering contributions. Then, it is natural that the GLV spectrum fails to capture the full solution. The LO term of the IOE (related to the BDMPS-Z solution) already does a good job at describing the full result. Nonetheless, including the NLO term improves not only the overall agreement with the full result, but also reduces the uncertainty band associated with the variation of the matching scales, as discussed in the previous section. When increasing the gluon's frequency, and still at small $k_\perp$, we observe that the LO+NLO result remarkably captures the full solution up to $5\%$ deviations. Regarding GLV, its agreement with the full solution is improved with respect to the small frequency case and, curiously, coincides with the LO term. We do not expect this to be a systematic result for other choices of the medium parameters.

The large $k_\perp$ tail is generated by rare, hard scatterings in the medium. It is well known that the BDMPS-Z approximation does not correctly capture such contributions and, therefore, fails to describe the full solution. At small frequencies, we observe that the LO+NLO result approaches the full solution much faster than GLV. This is to be expected since, at large but not infinite $k_\perp$, multiple scatterings still play a role, despite being sub-leading. GLV lacks those effects and thus needs an asymptotically large value of $k_\perp$ to reproduce the full solution, while the LO+NLO result works even far from the asymptotic regime. At large frequencies, the spectrum is truly dominated by a single hard scattering in the medium and thus the LO+NLO, full and GLV results rapidly converge.
\begin{figure}[h!]
\centering
\includegraphics[scale=.78]{plot-wx005-new.pdf}
\includegraphics[scale=.78]{plot-wx2-new.pdf}
\caption{Comparison between the GLV spectrum (dotted, purple), the LO result (dashed, green), the IOE at LO+NLO (solid, red) and the all-order spectrum (solid, navy) as computed in Ref.~\cite{CarlotaFabioLiliana} for two gluon frequencies: $\omega= 0.05\, \omega_{c0}$ (left) and $\omega= 2\, \omega_{c0}$ (right). The ratio to the full solution is presented in the bottom panels. The uncertainty band arises from variations in the matching scales.}
\label{fig:lhc}
\end{figure}

\section{Conclusion and Outlook}\label{sec:conclusion}
This work constitutes a natural extension of the recent studies of the medium-induced energy spectrum and broadening distribution using the Improved Opacity Expansion~\cite{IOE1,IOE2,IOE3,broadening_paper}. We have computed the in-medium radiative kernel using the IOE up to next-to-leading order accuracy, in the soft gluon approximation, for a generic medium profile as well as for a brick plasma, for which we have performed numerical computations that we compare to the full numerical results from \cite{CarlotaFabioLiliana}. We observe a very good agreement for an LHC-motivated choice of medium parameters.

From a theoretical viewpoint, the differential spectrum calculation highlights the role played by the matching scales that enter the definition of $\hat{q}$, given that the spectrum convolves contributions due to final state broadening with in-medium radiative terms. Each of these physical processes enters the IOE expansion with its own matching scale, which we denote by $Q_b$, associated with final state broadening terms, and $Q_r$, related to the radiative kernel. These two scales are obtained by solving their corresponding transcendental equations that are given, in full generality, by Eqs.~\eqref{eq:Qb_final} and \eqref{eq:Qr_final}. We emphasize that these two scales have to be treated separately in order for the expansion to be consistent and well defined. Taking this into account, we derive the medium-induced spectrum for a smooth medium time profile and also in the brick limit. The final formulas are given in Sec.~\ref{sec:summary}.

Besides the master formulas, we analytically study the IOE for the plasma brick model in two asymptotic kinematical regimes. Firstly, we consider the regime where multiple soft exchanges with the medium constitute the dominant contribution, i.e. $\omega\ll\omega_c$, with the further assumption that the gluon is collinear, i.e. $k_\perp^2\ll\hat{q}L$. In this case, we recover the well-known factorization formula given by Eq.~\eqref{eq:ll_2}, often used in jet quenching phenomenology~\cite{Saclay,BDIM1,Blanco:2020uzy,Kutak1}. In particular, this result implies that in the soft limit the differential spectrum can be written as the product between the LO term and powers of $\hat{q}_0/\hat{q}$ that correspond to higher order contributions. This result agrees with what was observed in the energy spectrum calculation, as detailed in~\cite{IOE3}. Secondly, we study the physically opposite regime, where a single hard scattering with the medium governs the dynamics of the medium-induced spectrum. This corresponds to a region of phase space where $\omega\gg\omega_c$ and the final momentum of the gluon satisfies $k_\perp^2\gg\hat{q}L$. In this regime, it is expected that the exact spectrum is given by the GLV result, and thus should be reproduced by the IOE approach.
Indeed, we confirm that after considering both the LO and NLO contributions, non-trivial cancellations between these two orders occur such that one recovers the GLV spectrum. Not only is the GLV result reproduced, but the dependence of the spectrum on the matching scales also disappears order by order in $1/k^2_\perp$. Again, these cancellations resemble the situation in the energy spectrum calculation~\cite{IOE3}. Additionally, the explicit and detailed calculation carried out in order to check that the IOE recovers the GLV result provides a non-trivial cross-check on the main formulas derived in this paper.

Regarding the final numerical evaluation, we find that, for the plasma brick model with LHC-inspired parameters, the computing time is in the ballpark of the LO/BDMPS-Z result~\cite{ASW2}. More concretely, we have evaluated the code's performance and found that the small $\omega$ and small $\k$ regime requires more computational power due to the oscillating phases in the integrands. We provide ancillary Python files with the IOE spectrum together with the GLV expressions. The comparison with an all-orders resummed spectrum reveals a globally good agreement between the NLO spectrum from the Improved Opacity Expansion and the full solution for this set of medium parameters. In particular, the agreement improves at high frequencies, where the deviations between the two approaches are below $10\%$. This is a remarkable result given the relative simplicity of the approach presented in this paper as compared to the larger computational cost needed to resum the spectrum to all orders. It is indicative of the power of the IOE approach to capture the correct dynamics at small and large frequencies simultaneously. A more thorough comparison with the full numerical result, including a scan of the parameter space, is left for future work. We expect the agreement to systematically improve for denser or larger plasmas, for which the scale separation between $Q_r$ or $Q_b$ and the infrared scale $\mu^2$ is larger. Obviously, the IOE scheme is exact asymptotically.

Our results provide for the first time a unified analytic framework for the fully differential medium-induced radiation spectrum that accounts for both the GLV and BDMPS-Z limits. We expect that adopting our radiative kernel in future phenomenological studies would substantially reduce the model dependence of jet quenching observables as well as the theoretical uncertainties on the extraction of medium transport properties such as the jet quenching parameter $\hat q$, which is a function of the typical scale of the process. Two phenomenological applications concerning quenching effects on the jet spectrum have already been proposed in the literature <cit.>. A natural continuation of this work is to use the in-medium radiative kernel derived in this paper to analytically compute observables where the gluon transverse momentum information is not integrated out, namely jet substructure observables. In particular, on-going measurements of the $k_\perp$-distribution of the hardest splitting <cit.> would benefit from a theoretical calculation in which both multiple soft scatterings and hard momentum exchanges are correctly incorporated. This study would also open up a theoretical window onto extending the IOE framework to describe the energy loss of a quark-antiquark antenna. In parallel, we would like to implement the formulas derived in this paper in a suitable Monte Carlo framework such as Ref. <cit.>.
\section*{Note}
While this manuscript was being produced, an independent derivation of the differential spectrum (for a massive quark) using the IOE approach was presented in Ref. <cit.>. Although performing a numerical comparison is beyond the scope of this work, we would like to point out a couple of differences. Firstly, in comparison with <cit.>, we have presented results for a generic medium profile, for which we were able to further reduce the number of integration variables. Secondly, in Ref. <cit.> a single matching scale was used for the radiative and broadening parts, i.e. $Q_b=Q_r$, which we found leads to an incorrect description of the spectrum.

\section*{Acknowledgements}
We are grateful to the authors of Ref. <cit.> for providing the numerical results from their study. In particular, we wish to thank Carlota Andrés and Fabio Dominguez for clarifying some important details in their work and for the careful reading of the present manuscript. We wish to thank Carlos Salgado for helpful discussions on related problems and Liliana Apolinário for providing clarifications regarding the GLV result obtained in Ref. <cit.>. The project that gave rise to these results received the support of a fellowship from ``la Caixa'' Foundation (ID 100010434). The fellowship code is LCF/BQ/DI18/11660057. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 713673. J.B. is supported by Ministerio de Ciencia e Innovacion of Spain under project FPA2017-83814-P; Unidad de Excelencia Maria de Maetzu under project MDM-2016-0692; European Research Council project ERC-2018-ADG-835105 YoctoLHC; and Xunta de Galicia (Conselleria de Educacion) and FEDER. The work of Y. M.-T. was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract No. DE-SC0012704. K. T. is supported by a Starting Grant from Trond Mohn Foundation (BFS2018REK01) and the University of Bergen. Y. M.-T. acknowledges support from the RHIC Physics Fellow Program of the RIKEN BNL Research Center. A.S.O.'s work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 788223, PanScales).

\appendix
\section{The analytic solutions of the emission kernel $\cK$}
In this appendix, we discuss two analytic solutions for the emission kernel $\cK$ satisfying
\begin{equation}\label{eq:cK_Sch_app}
\left[i\partial_{t_2}+\frac{\bdel^2_\x}{2\omega}+iv(\x,t_2)\right]\cK(\x,t_2;\y,t_1)=i\delta^{(2)}(\x-\y)\delta(t_2-t_1) \, .
\end{equation}
This propagator obeys a Dyson-like relation reading <cit.>
\begin{equation}\label{eq:Dyson_GLV_app}
\cK(\x,t_2;\y,t_1) = \cK_0(\x,t_2;\y,t_1)-\int_\z \int_{t_1}^{t_2} \rmd s \, \cK_0(\x,t_2;\z,s)v(\z,s)\cK(\z,s;\y,t_1) \, ,
\end{equation}
where $\cK_0$ corresponds to the vacuum solution of Eq.~\eqref{eq:cK_Sch_app} with $v=0$. Alternatively, and as discussed in the main text, one can write a relation equivalent to Eq.~\eqref{eq:Dyson_GLV_app}, but expanding around the solution to Eq.~\eqref{eq:cK_Sch_app} with $v\to \vLO$ and using the decomposition $v=\vLO+\delta v$ (see Eq. (<ref>)),
\begin{equation}\label{eq:cK_ful_IOE_app}
\cK(\x,t_2;\y,t_1) = \cK^{\rm LO}(\x,t_2;\y,t_1)-\int_\z \int_{t_1}^{t_2} \rmd s \, \cK^{\rm LO}(\x,t_2;\z,s)\delta v(\z,s)\cK(\z,s;\y,t_1) \, .
\end{equation}
Eqs.~\eqref{eq:Dyson_GLV_app} and \eqref{eq:cK_ful_IOE_app} are particularly useful since $\cK_0$ and $\cK^{\rm LO}$ admit a closed form, which can be obtained by directly solving
Eq.~\eqref{eq:cK_Sch_app}, and thus they can be easily applied in a perturbative framework. In the first case, where $v=0$, $\cK_0$ is the Green's function of a Schrödinger equation describing the motion of a non-relativistic free particle in two dimensions, and thus reads
\begin{align}\label{eq:cK_vac_app}
\cK_0(\x,t_2;\y,t_1)= \frac{\omega }{2\pi i (t_2-t_1) } \exp\left(i\frac{\omega(\x-\y)^2}{2(t_2-t_1)}\right) \, .
\end{align}
The case where $v(\x)=\vLO(\x)$ in Eq.~\eqref{eq:cK_Sch_app} is also easily solved, since in this case $\cK$ is the Green's function associated with the motion of a single particle in a harmonic potential. For quadratic potentials, the solution to Eq.~\eqref{eq:cK_Sch_app} can be obtained exactly by using the so-called method of fluctuations <cit.>, resulting in
\begin{align}\label{eq:cK_BDMPS_app}
\cK^{\rm LO}(\x,t_2;\y,t_1)&= \frac{\omega}{2\pi i S(t_2,t_1)}\exp\left( \frac{i\omega}{2S(t_2,t_1)} \left[ C(t_1,t_2)\,\x^2+C(t_2,t_1)\,\y^2-2 \x\cdot\y\right] \right) \,,
\end{align}
where we recall that the $C$ and $S$ functions satisfy
\begin{equation}\label{eq:Abel}
\begin{split}
&\left[\frac{\rmd^2}{\rmd^2t}+\Omega^2(t)\right]S(t,t_0)=0 \, ,\quad S(t_0,t_0)=0 \,,\quad \partial_t S(t,t_0)_{t=t_0}=1 \, , \\
&\left[\frac{\rmd^2}{\rmd^2t}+\Omega^2(t)\right]C(t,t_0)=0 \, ,\quad C(t_0,t_0)=1 \,,\quad \partial_t C(t,t_0)_{t=t_0}=0\, .
\end{split}
\end{equation}
For any $\Omega$, i.e. for a generic time profile of the medium, one can derive certain identities relating the $C$ and $S$ functions that were employed in the main text to simplify the emission spectrum. Firstly, these solutions are related by $C(t_1,t_2)=\partial_{t_2}S(t_2,t_1)$ and by the associated Wronskian ($W$), which reads
\begin{equation}
W=C(t_1,t_2)\partial_{t_1}S(t_1,t_2)-\partial_{t_1}C(t_1,t_2)S(t_1,t_2)=1\, ,
\end{equation}
where we used the initial conditions above. The condition $W=1$ can be used to show that
\begin{equation}\label{eq:prop_app_1}
\partial_{t_1}\frac{C(t_1,t_2)}{S(t_1,t_2)}=-\frac{C(t_1,t_2)\partial_{t_1}S(t_1,t_2)-\partial_{t_1}C(t_1,t_2)S(t_1,t_2)}{S^2(t_1,t_2)} =-\frac{1}{S^2(t_1,t_2)} \, ,
\end{equation}
which is used to derive Eq.~\eqref{eq:id_1}. In addition, $W=1$ implies that $C$ and $S$ are linearly independent solutions, and thus any other solution to the above ordinary differential equation can be written as a linear combination of them. Using this fact, and for a time ordering $t_2>t_1>t_0$, any solution in $(t_2,t_1)$ can be written as <cit.>
\begin{equation}\label{eq:linear_rel_C_S}
\begin{split}
&S(t_2,t_1)=C(t_1,t_0)S(t_2,t_0)-S(t_1,t_0)C(t_2,t_0) \, ,\\
&C(t_2,t_1)=-\partial_{t_1}C(t_1,t_0)S(t_2,t_0)+\partial_{t_1}S(t_1,t_0)C(t_2,t_0) \, ,
\end{split}
\end{equation}
where it is easy to verify that these equations satisfy the above initial conditions and that $S(t_2,t_1)=-S(t_1,t_2)$. This decomposition of the $C$ and $S$ functions is extensively used in the main text; here we give a simple application to derive another useful identity. Let us consider the brick model introduced in the main text, with a medium of extension $L$ such that $\hat{q}(t\geq L)=0$, but still allowing the jet quenching parameter to vary in time inside the medium. In the vacuum, the solutions for the $C$ and $S$ functions are trivially found,
\begin{equation}
\begin{split}
S(t_2,t_1)=t_2-t_1 \, , \quad C(t_2,t_1)=1 \, ,
\end{split}
\end{equation}
and indeed, when introduced back in Eq.~\eqref{eq:cK_BDMPS_app}, they give Eq.~\eqref{eq:cK_vac_app}. Using Eq.~\eqref{eq:linear_rel_C_S}
with $t_2=+\infty$, $t_1>L$ and $t_0=L$, combined with the explicit forms of the vacuum $C$ and $S$ solutions, one observes that the terms proportional to $S(t_2,t_1)$ dominate, leading to the handy formula
\begin{equation}
\frac{C(\infty,t_1)}{S(\infty,t_1)} = -\frac{\partial_{t_1} C(t_1,L)}{C(t_1,L)}=\Omega^2(t_1) \frac{S(t_1,L)}{C(t_1,L)} \, ,
\end{equation}
where the last equality holds if $C$ is even in its arguments.
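For a brick with constant $\Omega$ inside the medium, the decomposition in Eq.~\eqref{eq:linear_rel_C_S} can also be spot-checked numerically. A minimal sketch (taking $\Omega$ real for simplicity, whereas in the main text it is complex) is:
\begin{verbatim}
# Numerical spot-check of Eq. (linear_rel_C_S) for constant Omega;
# Omega is taken real here for simplicity (it is complex in the main text).
import numpy as np

Om = 0.7
S = lambda t2, t1: np.sin(Om * (t2 - t1)) / Om     # solves S'' + Om^2 S = 0
C = lambda t2, t1: np.cos(Om * (t2 - t1))          # solves C'' + Om^2 C = 0
dC1 = lambda t1, t0: -Om * np.sin(Om * (t1 - t0))  # d/dt1 of C(t1, t0)
dS1 = lambda t1, t0: np.cos(Om * (t1 - t0))        # d/dt1 of S(t1, t0)

t0, t1, t2 = 0.3, 1.1, 2.5                         # time ordering t2 > t1 > t0
res_S = S(t2, t1) - (C(t1, t0) * S(t2, t0) - S(t1, t0) * C(t2, t0))
res_C = C(t2, t1) - (-dC1(t1, t0) * S(t2, t0) + dS1(t1, t0) * C(t2, t0))
print(res_S, res_C)   # both vanish up to machine precision
\end{verbatim}
For constant $\Omega$ the two identities reduce to the sine and cosine angle-subtraction formulas, which is what the residuals above confirm.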
contra | 0.506 | 0.534 | 0.460 | 0.583 | 0.492 | 0.372 | 0.382 | 0.346 | 0.419 | 0.469 | 0.382 | 0.440 | 0.462 | 0.273 | 0.335 | 0.377 | 0.282 | 0.347 | 0.379 | 0.446 | 0.339 | 0.332 | 0.365 | 0.356 | 0.466 | 0.466 | 0.294 | 0.340 | 0.415 | 0.299 | 0.250 | 0.370 | 0.428 | 0.327 | 0.325 | 0.355 | 0.346 | 0.442 | 0.480 | 0.237 | 0.198 | 0.464 | 0.457 | 0.451 | 0.455 | 0.468 | 0.280 | 0.284 craft | 0.384 | 0.369 | 0.255 | 0.534 | 0.412 | 0.450 | 0.361 | 0.376 | 0.375 | 0.304 | 0.327 | 0.324 | 0.290 | 0.319 | 0.324 | 0.254 | 0.332 | 0.348 | 0.253 | 0.170 | 0.138 | 0.124 | 0.145 | 0.183 | 0.148 | 0.130 | 0.231 | 0.286 | 0.154 | 0.196 | 0.181 | 0.188 | 0.158 | 0.131 | 0.121 | 0.138 | 0.176 | 0.115 | 0.117 | 0.159 | 0.062 | 0.397 | 0.309 | 0.309 | 0.360 | 0.360 | 0.242 | 0.218 drugs | 0.152 | 0.262 | 0.283 | 0.201 | 0.260 | 0.096 | 0.116 | 0.150 | 0.117 | 0.111 | 0.123 | 0.101 | 0.085 | 0.104 | 0.084 | 0.074 | 0.119 | 0.105 | 0.091 | 0.178 | 0.147 | 0.175 | 0.130 | 0.244 | 0.148 | 0.154 | 0.063 | 0.060 | 0.064 | 0.136 | 0.124 | 0.147 | 0.170 | 0.141 | 0.156 | 0.125 | 0.240 | 0.124 | 0.134 | 0.049 | 0.139 | 0.151 | 0.233 | 0.310 | 0.236 | 0.352 | 0.147 | 0.18 thrm | 0.842 | 0.802 | 0.570 | 0.633 | 0.759 | 0.550 | 0.553 | 0.625 | 0.660 | 0.574 | 0.593 | 0.641 | 0.552 | 0.430 | 0.499 | 0.472 | 0.442 | 0.511 | 0.470 | 0.602 | 0.521 | 0.477 | 0.507 | 0.573 | 0.643 | 0.612 | 0.534 | 0.528 | 0.478 | 0.472 | 0.435 | 0.504 | 0.582 | 0.503 | 0.465 | 0.498 | 0.558 | 0.716 | 0.717 | 0.439 | 0.301 | 0.692 | 0.650 | 0.496 | 0.538 | 0.711 | 0.340 | 0.334 turk | 0.495 | 0.558 | 0.633 | 0.600 | 0.636 | 0.270 | 0.287 | 0.314 | 0.352 | 0.498 | 0.334 | 0.378 | 0.498 | 0.225 | 0.273 | 0.360 | 0.226 | 0.271 | 0.359 | 0.408 | 0.356 | 0.375 | 0.381 | 0.402 | 0.343 | 0.330 | 0.230 | 0.284 | 0.339 | 0.281 | 0.301 | 0.370 | 0.384 | 0.336 | 0.357 | 0.359 | 0.385 | 0.368 | 0.422 | 0.105 | 0.163 | 0.585 | 0.608 | 0.690 | 0.639 | 0.635 | 0.296 | 0.299 vgame | 0.682 | 0.715 | 0.739 | 0.725 | 0.709 | 0.578 | 0.523 | 0.501 | 0.488 | 0.576 | 0.466 | 0.454 | 0.532 | 0.457 | 0.414 | 0.508 | 0.469 | 0.441 | 0.526 | 0.581 | 0.524 | 0.558 | 0.526 | 0.566 | 0.519 | 0.493 | 0.382 | 0.346 | 0.454 | 0.241 | 0.243 | 0.306 | 0.560 | 0.509 | 0.524 | 0.505 | 0.552 | 0.506 | 0.467 | 0.132 | 0.136 | 0.238 | 0.248 | 0.363 | 0.312 | 0.412 | 0.170 | 0.157 wine | 0.693 | 0.667 | 0.642 | 0.633 | 0.695 | 0.627 | 0.565 | 0.574 | 0.558 | 0.585 | 0.563 | 0.574 | 0.543 | 0.444 | 0.464 | 0.489 | 0.468 | 0.487 | 0.508 | 0.433 | 0.466 | 0.518 | 0.531 | 0.578 | 0.617 | 0.594 | 0.459 | 0.406 | 0.435 | 0.285 | 0.244 | 0.323 | 0.424 | 0.447 | 0.497 | 0.526 | 0.571 | 0.615 | 0.620 | 0.778 | 0.243 | 0.713 | 0.495 | 0.445 | 0.372 | 0.605 | 0.240 | 0.209 yeast | 0.674 | 0.583 | 0.557 | 0.638 | 0.662 | 0.587 | 0.559 | 0.549 | 0.570 | 0.540 | 0.496 | 0.549 | 0.512 | 0.491 | 0.496 | 0.465 | 0.510 | 0.546 | 0.498 | 0.357 | 0.386 | 0.297 | 0.326 | 0.406 | 0.430 | 0.356 | 0.341 | 0.358 | 0.346 | 0.226 | 0.235 | 0.303 | 0.356 | 0.385 | 0.296 | 0.318 | 0.400 | 0.398 | 0.330 | 0.699 | 0.2 | 0.584 | 0.234 | 0.295 | 0.325 | 0.500 | 0.224 | 0.140 Mean | 0.549 | 0.541 | 0.491 | 0.536 | 0.555 | 0.439 | 0.413 | 0.421 | 0.433 | 0.436 | 0.404 | 0.426 | 0.415 | 0.338 | 0.353 | 0.355 | 0.352 | 0.376 | 0.371 | 0.387 | 0.342 | 0.338 | 0.341 | 0.402 | 0.420 | 0.373 | 0.315 | 0.319 | 0.318 | 0.270 | 0.249 | 0.301 | 0.373 | 0.331 | 0.326 | 0.331 | 0.392 | 0.415 | 0.390 | 0.34 | 0.171 | 0.496 | 0.388 | 0.402 | 0.379 | 0.489 | 0.246 | 0.222 (b) Average NKLD scores for each algorithm per data 
set Table 7: Main results for multiclass quantification using tuned classifiers. We show the averaged error scores over all scenarios per algorithm and data set.
equation imposes the constraint that the spatial components of the field strength (and its barred counterpart) must vanish,

$\tilde{\mathcal{F}}=\tilde{\overline{\mathcal{F}}}=0\,.$ (118)

These constraints are solved by writing

$\tilde{A}=g^{-1}\tilde{d}g\,,\quad\tilde{\bar{A}}=\bar{g}^{-1}\tilde{d}\bar{g}\,.$ (119)

We write the group elements in a Gauss parametrization

$\displaystyle g=\begin{pmatrix}1&0\\ -F&1\end{pmatrix}\begin{pmatrix}\lambda&0\\ 0&\lambda^{-1}\end{pmatrix}\begin{pmatrix}1&\Psi\\ 0&1\end{pmatrix}\,,$ (120)
$\displaystyle\bar{g}=\begin{pmatrix}1&\bar{F}\\ 0&1\end{pmatrix}\begin{pmatrix}\bar{\lambda}^{-1}&0\\ 0&\bar{\lambda}\end{pmatrix}\begin{pmatrix}1&0\\ -\bar{\Psi}&1\end{pmatrix}\,.$

It is a straightforward exercise to rewrite the boundary conditions (103) in terms of the functions appearing in (120). The $(\Psi,\bar{\Psi})$ are determined as

$\displaystyle\Psi=-{\lambda^{\prime}\over\lambda^{3}F^{\prime}}+{\omega_{x}\over 2\lambda^{2}F^{\prime}}\,,$ (121)
$\displaystyle\bar{\Psi}=-{\bar{\lambda}^{\prime}\over\bar{\lambda}^{3}\bar{F}^{\prime}}-{\omega_{x}\over 2\bar{\lambda}^{2}\bar{F}^{\prime}}\,,$

where $\omega_{x}$ is the space component of the boundary spin connection, fixed in terms of the boundary vielbein. The remaining boundary conditions amount to the following differential equations

$\displaystyle 2e^{+}_{x}=\lambda^{2}F^{\prime}-\bar{\Psi}^{\prime}-\bar{\lambda}^{2}\bar{\Psi}^{2}\bar{F}^{\prime}-\frac{2\bar{\Psi}}{\bar{\lambda}}\bar{\lambda}^{\prime}\,,$ (122)
$\displaystyle-2e^{-}_{x}=\bar{\lambda}^{2}\bar{F}^{\prime}-\Psi^{\prime}-\lambda^{2}\Psi^{2}F^{\prime}-{2\Psi\over\lambda}\lambda^{\prime}\,.$

The equations (121) and (122) are to be imposed at the boundary surface $r=r_{c}$. Having chosen the Gauss parametrization, one finds that the bulk Lagrangian becomes a total derivative, so the complete action takes the form of a boundary term. After performing some algebra (detailed in appendix 8.C), we obtain:

$\displaystyle S_{\text{grav}}=-{k\over 2\pi}\int_{\partial M_{3}}\!d^{2}x\left({\lambda^{\prime}\partial_{t}\lambda\over\lambda^{2}}-\lambda^{2}F^{\prime}\partial_{t}\Psi\right)-{k\over\pi}\int_{\partial M_{3}}\!d^{2}x\Big(\lambda^{2}\Psi^{2}F^{\prime}+\Psi^{\prime}+{2\Psi\lambda^{\prime}\over\lambda}\Big)e^{+}_{t}$ (123)
$\displaystyle\quad+{k\over 2\pi}\int_{\partial M_{3}}\!d^{2}x\left({\bar{\lambda}^{\prime}\partial_{t}\bar{\lambda}\over\bar{\lambda}^{2}}-\bar{\lambda}^{2}\bar{F}^{\prime}\partial_{t}\bar{\Psi}\right)-{k\over\pi}\int_{\partial M_{3}}\!d^{2}x\Big(\bar{\lambda}^{2}\bar{\Psi}^{2}\bar{F}^{\prime}+\bar{\Psi}^{\prime}+{2\bar{\Psi}\bar{\lambda}^{\prime}\over\bar{\lambda}}\Big)e^{-}_{t}\,.$

The boundary conditions (121)-(122) imply four equations for the six Gauss functions, leaving two free functions, which we can take to be $(F,\bar{F})$. So, in principle, we should use (121) and (122) to obtain $(\Psi,\bar{\Psi},\lambda,\bar{\lambda})$ in terms of $(F,\bar{F})$ and substitute into (123) to obtain the reduced action. However, in practice, it is not possible to carry this out analytically (even though the boundary conditions cannot be solved analytically, they do have a beautiful physical interpretation: they correspond to the definition of the stress tensor in a $T\overline{T}$-deformed theory, understood as a theory coupled to topological gravity; see appendix 8.D for more details). To obtain explicit results we either need to consider the asymptotic AdS3 case of $r_{c}\rightarrow 0$, or use perturbation theory. We discuss these in turn below.

One feature to keep in mind is that we only need to solve for the Gauss functions on the cutoff surface. These functions determine the connections restricted to that surface, which we call $(a,\overline{a})$. The full connections $(A,\bar{A})$ may then be determined away from the boundary by the construction

$A=b^{-1}ab+b^{-1}db\,,\quad\bar{A}=b\bar{a}b^{-1}+bdb^{-1}\,,$ (124)

where

$b=e^{-{1\over 2}\ln\left({r\over r_{c}}\right)L_{0}}$ (125)

and with $a$ and $\bar{a}$ functions of only the boundary coordinates. This is the Chern-Simons equivalent of radial gauge, which we can always choose at least in a neighborhood of the boundary. Flat boundary connections $(a,\overline{a})$ are thereby promoted to flat bulk connections $(A,\overline{A})$.

#### 5 Asymptotic boundary

In this subsection, we consider imposing boundary conditions at the asymptotic boundary of AdS3. The results obtained here are found in [58]. Asymptotically AdS3 boundary conditions correspond to taking $r_{c}\rightarrow 0$ with boundary vielbein $e^{a}_{\mu}\sim r_{c}^{-\frac{1}{2}}$. The boundary conditions (121) and (122) imply $(\Psi,\bar{\Psi})\sim r_{c}^{\frac{1}{2}}$ and $(\lambda,\overline{\lambda})\sim r_{c}^{-\frac{1}{4}}$, while $(F,\bar{F})$ stay finite. The solution of (122) reads

$\lambda=\sqrt{2e_{x}^{+}\over F^{\prime}}\,,\quad\bar{\lambda}=\sqrt{-\frac{2e_{x}^{-}}{\bar{F}^{\prime}}}\,,$ (126)

while the boundary action evaluates to

$\displaystyle S_{\text{grav}}=-{k\over\pi}\int_{\partial M_{3}}\!d^{2}x\left({\lambda^{\prime}D\lambda\over\lambda^{2}}-\lambda^{2}F^{\prime}D\Psi\right)+{k\over\pi}\int_{\partial M_{3}}\!d^{2}x\left({\bar{\lambda}^{\prime}\bar{D}\bar{\lambda}\over\bar{\lambda}^{2}}-\bar{\lambda}^{2}\bar{F}^{\prime}\bar{D}\bar{\Psi}\right)$ (127)
$\displaystyle\quad-{k\over 8\pi}\int_{\partial M_{3}}\!d^{2}x\,{e_{t}^{+}e_{x}^{-}-e_{t}^{-}e_{x}^{+}\over e_{x}^{+}e_{x}^{-}}\,\omega_{x}^{2}$

with

$D={1\over 2}\left(\partial_{t}-{e^{+}_{t}\over e^{+}_{x}}\partial_{x}\right)\,,\quad\text{and}\quad\bar{D}={1\over 2}\left(\partial_{t}-{e^{-}_{t}\over e^{-}_{x}}\partial_{x}\right)\,.$ (128)

The term in the second line of (127) is a constant determined by the boundary conditions.
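Two group-theoretic facts used above are easy to verify by machine: the Gauss parametrization (120) automatically has unit determinant, and the radial factor (125) contributes the pure-gauge piece $b^{-1}\partial_{r}b=-\frac{1}{2r}L_{0}$ appearing in (124). A minimal sympy sketch; the explicit $2\times 2$ matrix conventions (fundamental representation, $L_{0}=\tfrac12\,\mathrm{diag}(1,-1)$) are our assumption:

```python
import sympy as sp

F, lam, Psi = sp.symbols('F lambda Psi')
r, rc = sp.symbols('r r_c', positive=True)

# Gauss parametrization (120): the product manifestly lies in SL(2, R)
g = (sp.Matrix([[1, 0], [-F, 1]])
     * sp.Matrix([[lam, 0], [0, 1/lam]])
     * sp.Matrix([[1, Psi], [0, 1]]))
assert sp.simplify(g.det()) == 1

# radial gauge factor (125) in the fundamental representation:
# b = exp(-1/2 ln(r/rc) L0) = diag((r/rc)^(-1/4), (r/rc)^(1/4))
L0 = sp.Matrix([[sp.Rational(1, 2), 0], [0, -sp.Rational(1, 2)]])
b = sp.Matrix([[(r/rc)**sp.Rational(-1, 4), 0],
               [0, (r/rc)**sp.Rational(1, 4)]])
# inhomogeneous term in (124): b^{-1} d b = -(dr / 2r) L0
assert sp.simplify(b.inv()*b.diff(r) + L0/(2*r)) == sp.zeros(2, 2)
```

Conjugation by $b$ rescales the $L_{\pm 1}$ components of $a$ by $(r/r_{c})^{\mp 1/2}$, which is how the boundary data acquires its radial profile.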
For a flat planar boundary, we arrive at the Alekseev-Shatashvili action

$S_{\text{grav}}=S_{\text{AS}}[F]+S_{\text{AS}}[\bar{F}]\,,$ (129)

with

$\displaystyle S_{\text{AS}}[F]={k\over 4\pi}\int_{\partial M_{3}}\!d^{2}x\left({1\over F^{\prime}}\right)^{\prime\prime}\partial_{\bar{z}}F$ (130)
$\displaystyle\quad={k\over 4\pi}\int_{\partial M_{3}}d^{2}x\left[{\dot{F}\over F^{\prime}}\left({F^{\prime\prime\prime}\over F^{\prime}}-{F^{\prime\prime 2}\over 2F^{\prime 2}}\right)+{F^{\prime\prime\prime}\over F^{\prime}}-{3\over 2}{F^{\prime\prime 2}\over F^{\prime 2}}\right]\,.$

As noted previously by [58], the field redefinition

$F^{\prime}=e^{f}\,,\quad\bar{F}^{\prime}=e^{\bar{f}}\,,$ (131)

yields the free boson action

$S_{\text{grav}}[f,\bar{f}]=-{k\over 4\pi}\int_{\partial M_{3}}d^{2}x\,\left[f^{\prime}\partial_{z}f+\bar{f}^{\prime}\partial_{\bar{z}}\bar{f}\right]\,.$ (132)

#### 6 Perturbation theory for planar cutoff boundary

We now consider the case of a boundary at a finite cutoff $r=r_{c}$, with the simplifying assumption of a flat boundary geometry. We will be able to solve the boundary conditions (122) by perturbing around a reference solution. Explicitly, we keep $r_{c}$ finite and fixed and take the boundary vielbein corresponding to a flat plane

$e^{+}={1\over 2\sqrt{r_{c}}}(dx+dt)\,,\quad e^{-}=-{1\over 2\sqrt{r_{c}}}(dx-dt)\,.$ (133)

The corresponding solution to (108) is $\omega_{x}=0$. We will perturb around the solution

$F^{(0)}=\bar{F}^{(0)}=x\,,$ (134)

which implies $\Psi^{(0)}=\bar{\Psi}^{(0)}=0$ and $\lambda^{(0)}=\bar{\lambda}^{(0)}=r_{c}^{-\frac{1}{4}}$. This solution corresponds to the Poincaré AdS3 background metric

$ds^{2}={dr^{2}\over 4r^{2}}+\frac{dx^{2}-dt^{2}}{r}\,.$ (135)

Having identified a background field configuration, we expand around it order by order. We adopt the following notation for the perturbations:

$\displaystyle\lambda^{2}={1\over\sqrt{r_{c}}}\left(1+f+f^{(2)}+f^{(3)}+\cdots\right)\,,$ (136)
$\displaystyle\bar{\lambda}^{2}={1\over\sqrt{r_{c}}}\left(1+\bar{f}+\bar{f}^{(2)}+\bar{f}^{(3)}+\cdots\right)\,,$
$\displaystyle F^{\prime}=1+g^{(1)}+g^{(2)}+g^{(3)}+\cdots\,,$
$\displaystyle\bar{F}^{\prime}=1+\bar{g}^{(1)}+\bar{g}^{(2)}+\bar{g}^{(3)}+\cdots\,.$

We will regard $f$ and $\bar{f}$ as the fundamental fields of our perturbative action, while $f^{(i)}$, $g^{(i)}$ and their barred counterparts will be chosen so that the boundary conditions (122) are satisfied perturbatively. The boundary conditions fully determine $g^{(i)}$ while the functions $f^{(i)}$ can be chosen freely. This freedom amounts to a field redefinition of $(f,\bar{f})$, which will be used to obtain the simplest action possible. Solving the boundary conditions perturbatively, which means working order-by-order in the amplitudes of $(f,\bar{f})$, we find the following expressions for the first few functions $g^{(i)}$:

$\displaystyle g^{(1)}=-f-{r_{c}\over 2}\bar{f}^{\prime\prime}\,,$ (137)
$\displaystyle g^{(2)}=f^{2}-f^{(2)}+{r_{c}\over 4}\left(\bar{f}^{\prime 2}+2f\bar{f}^{\prime\prime}+2\bar{f}\bar{f}^{\prime\prime}-2\bar{f}^{(2)\prime\prime}\right)-{r_{c}^{2}\over 4}\left(f^{\prime\prime}\bar{f}^{\prime\prime}+\bar{f}^{\prime}f^{\prime\prime\prime}\right)\,,$

and similarly for $\bar{g}^{(i)}$. The formulas for higher order terms are easily found since the boundary conditions amount to linear equations for $g^{(i)}$; the first of them can be reproduced with a few lines of computer algebra, as in the sketch below.
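For instance, the first relation in (137) follows from expanding the first equation of (122) to linear order around (134). A minimal sympy sketch (symbol names are ours; it assumes the flat vielbein (133), so $2e^{+}_{x}=1/\sqrt{r_{c}}$, and sets $\omega_{x}=0$ in (121)):

```python
import sympy as sp

x = sp.symbols('x')
rc = sp.symbols('r_c', positive=True)
eps = sp.symbols('epsilon')            # formal amplitude-counting parameter
f = sp.Function('f')(x)
fb = sp.Function('fbar')(x)
g1 = sp.Function('g1')(x)              # unknown first-order piece of F'
gb1 = sp.Function('gbar1')(x)

# first-order ansatz (136)
lam2 = (1 + eps*f)/sp.sqrt(rc)         # lambda^2
lab2 = (1 + eps*fb)/sp.sqrt(rc)        # lambdabar^2
Fp, Fbp = 1 + eps*g1, 1 + eps*gb1      # F', Fbar'
lab = sp.sqrt(lab2)

# (121) with omega_x = 0: Psibar = -lambdabar'/(lambdabar^3 Fbar')
Psib = -sp.diff(lab, x)/(lab**3*Fbp)

# first equation of (122) with the flat vielbein (133): 2 e^+_x = 1/sqrt(rc)
bc = (lam2*Fp - sp.diff(Psib, x) - lab2*Psib**2*Fbp
      - 2*Psib*sp.diff(lab, x)/lab - 1/sp.sqrt(rc))

order1 = bc.series(eps, 0, 2).removeO().coeff(eps)
sol = sp.solve(order1, g1)[0]
# reproduces g^(1) = -f - (rc/2) fbar'' from (137)
assert sp.simplify(sol - (-f - rc/2*sp.diff(fb, x, 2))) == 0
```

Each higher $g^{(i)}$ enters the expansion linearly, so the same loop solves them order by order.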
However, their expressions are not illuminating and get messy at higher orders, so we do not write them explicitly. As mentioned above, we are free to choose the functions $f^{(i)}$, which amounts to a choice of field redefinition. Just as we found in the metric formulation, a judicious choice simplifies the expression of the action greatly. First, we demand

$4GT_{xt}={1\over 4}\left(\bar{f}^{\prime 2}-f^{\prime 2}\right)+\text{total derivatives}\,,$ (138)

which implies a simple expression for the part of the action involving time derivatives, agreeing with (132), and essentially corresponds to choosing Darboux coordinates. The field redefinition that achieves this reads

$\displaystyle f^{(2)}={1\over 2}f^{2}-{r_{c}\over 2}f^{\prime}\bar{f}^{\prime}+\cdots\,,$ (139)
$\displaystyle f^{(3)}={1\over 6}f^{3}-{r_{c}\over 2}ff^{\prime}\bar{f}^{\prime}+{r_{c}^{2}\over 4}\left[{1\over 2}\bar{f}^{\prime 2}f^{\prime\prime}+f^{\prime 2}\bar{f}^{\prime\prime}+\cdots\right]\,,$

and similarly for the barred functions. The second condition that can be satisfied is that all higher derivatives of $f$ and $\overline{f}$ can be canceled in the action, i.e. only powers of the first derivatives appear. This condition first arises at fourth order, where the appropriate choice of field redefinition reads

$\displaystyle f^{(4)}=\frac{1}{4!}f^{4}-\frac{r_{c}}{4}f^{2}f^{\prime}\bar{f}^{\prime}-\frac{r_{c}^{2}}{8}(f^{\prime 3}\bar{f}^{\prime}-ff^{\prime\prime}\bar{f}^{\prime 2}-2ff^{\prime 2}\bar{f}^{\prime\prime}+f^{\prime}\bar{f}^{\prime 3}-2f^{\prime 2}\bar{f}^{\prime 2})$ (140)
$\displaystyle\qquad-\frac{r_{c}^{3}}{16}(4f^{\prime}f^{\prime\prime}\bar{f}^{\prime}\bar{f}^{\prime\prime}+\tfrac{1}{3}f^{\prime\prime\prime}\bar{f}^{\prime 3}+f^{\prime 3}\bar{f}^{\prime\prime\prime})\,.$

Perturbation theory subject to these conditions can be automated using computer algebra software (we used Mathematica) and performed to higher orders. One useful observation is that the terms needed in the choice of $f^{(n)}$ and $\bar{f}^{(n)}$ to satisfy the two aforementioned conditions already appear (with different coefficients) in the Hamiltonian density at order $n-1$. More specifically, the Hamiltonian density has a simple expression up to a total double derivative contribution, and the terms that appear in this double derivative are exactly the ones that make up our choice of $f^{(n)}$ and $\bar{f}^{(n)}$.

We carried out this perturbation theory to the eighth order (i.e. computing all terms of the schematic form $f^{n}\bar{f}^{m}$ with $n+m\leq 8$). The result coincides with the expansion to this order of the Nambu-Goto action (25). We naturally conjecture that this result extends to all orders, but we do not have a proof. This analysis also yields expressions for the boundary stress tensor $T_{ij}$ to eighth order. These agree with the expressions found in the metric formulation (up to quartic order, which is as far as we pushed the computation in the metric formulation). Since our computations below only use the stress tensor up to cubic order, written in (92), we refrain from writing the higher order expressions, which rapidly become complicated.

### 5 Correlation functions

In this section, we discuss the computation of correlation functions of the fundamental fields $(f,\bar{f})$ and the stress tensor $T_{ij}$. We will work up to two-loop order where, as seen from (91), $G$ acts as a loop counting parameter.
Some subtleties have to do with the realization of symmetries in this theory. For example, the action is not manifestly Lorentz invariant, even though the underlying theory is Lorentz invariant since it was obtained by expanding around a Lorentz invariant background (the flat plane). We expect that the stress tensor should behave in correlators like a Lorentz tensor. As was discussed above, Lorentz symmetry is realized nonlinearly on the $(f,\bar{f})$ fields. A general phenomenon that can occur when doing perturbation theory in a QFT with a nonlinearly realized symmetry is that one encounters divergent terms that are not invariant under the symmetry. One then needs to perform a field redefinition to restore the symmetry (or equivalently, to modify the symmetry transformation), e.g., [148]. Another approach is to modify the theory off-shell to preserve the symmetry, e.g., [149]. Our approach is to modify perturbation theory in a way that maintains Lorentz invariance while only changing contact terms in correlators. In particular, correlation functions of stress tensors at non-coincident points will respect Lorentz invariance.

#### 1 Action

We found that the action to quartic order is

$I={1\over 16\pi G}\int_{\partial M_{3}}\!d^{2}x\Big(f^{\prime}\partial_{\bar{z}}f+\bar{f}^{\prime}\partial_{z}\bar{f}+{1\over 4}r_{c}f^{\prime 2}\bar{f}^{\prime 2}+\cdots\Big)\,.$ (141)

Recall that $z=x+it$ and $\bar{z}=x-it$ so that

$\partial_{z}={1\over 2}(\partial_{x}-i\partial_{t})\,,\quad\partial_{\bar{z}}={1\over 2}(\partial_{x}+i\partial_{t})\,.$ (142)

Here $G$ is the loop counting parameter. In particular, since the stress tensor also has a $1/G$ prefactor, it follows that an $L$-loop contribution to a stress tensor correlator has dependence $G^{L-1}$.

#### 2 Propagator

Let’s first discuss the propagators in momentum space using the Fourier transform convention

$\psi(x,t)=\int\!{d^{2}p\over(2\pi)^{2}}\psi(p)e^{ip_{t}t+ip_{x}x}$ (143)

or in complex coordinates

$\psi(z,\bar{z})=\int\!{d^{2}p\over(2\pi)^{2}}\psi(p)e^{ip_{z}z+ip_{\bar{z}}\bar{z}}$ (144)

with

$p_{x}=p_{z}+p_{\bar{z}}\,,\quad p_{t}=i(p_{z}-p_{\bar{z}})\,.$ (145)

Note also that

$p^{2}=p_{t}^{2}+p_{x}^{2}=4p_{z}p_{\bar{z}}\,.$ (146)

The free two-point functions are

$\langle f^{\prime}(p)f^{\prime}(p^{\prime})\rangle_{0}=32\pi G{p_{x}p_{z}\over p^{2}}(2\pi)^{2}\delta^{2}(p+p^{\prime})\,,\quad\langle\bar{f}^{\prime}(p)\bar{f}^{\prime}(p^{\prime})\rangle_{0}=32\pi G{p_{x}p_{\bar{z}}\over p^{2}}(2\pi)^{2}\delta^{2}(p+p^{\prime})\,.$ (147)

We wrote the results for the fields with an $x$-derivative since $(f,\bar{f})$ always appears in the action and stress tensor with at least one $x$-derivative. We will be using dimensional regularization to compute loop diagrams. Our convention for going from two to $d$ dimensions is that we introduce $d-2$ new spatial dimensions. We continue to refer to momenta in the original two dimensions by $(p_{x},p_{t})$ or $(p_{z},p_{\bar{z}})$, but $p^{2}$ is taken to run over all dimensions: $p^{2}=p_{t}^{2}+p_{x}^{2}+\sum_{i=2}^{d}p_{i}^{2}$. In particular, the relation (146) only holds in $d=2$.
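The conventions (145)-(146) are easy to get wrong by factors of $i$ and $2$; a two-line sympy check at $d=2$ (variable names are ours):

```python
import sympy as sp

pz, pzb = sp.symbols('p_z p_zb')
px, pt = pz + pzb, sp.I*(pz - pzb)   # the convention (145)
# in d = 2: p^2 = p_t^2 + p_x^2 = 4 p_z p_zb, cf. (146)
assert sp.expand(pt**2 + px**2 - 4*pz*pzb) == 0
```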
Coming back to the propagators, after stripping off delta functions and using (145), we have

$\langle f^{\prime}(p)f^{\prime}(-p)\rangle_{0}=32\pi G\left({p_{z}^{2}\over p^{2}}+{p_{z}p_{\bar{z}}\over p^{2}}\right)\,,\quad\langle\bar{f}^{\prime}(p)\bar{f}^{\prime}(-p)\rangle_{0}=32\pi G\left({p_{\bar{z}}^{2}\over p^{2}}+{p_{z}p_{\bar{z}}\over p^{2}}\right)\,.$ (148)

We now argue that we can drop the ${p_{z}p_{\bar{z}}\over p^{2}}$ terms. First, note that in $d=2$ this term is constant in momentum space and corresponds to a delta function contribution to the propagator in position space. Including such delta functions in propagators is equivalent to a redefinition of couplings and operators since they contract lines down to points, thereby inducing new vertices. The situation in dimensional regularization with $d=2+\varepsilon$ is a bit more subtle. While the violation of $p^{2}=4p_{z}p_{\bar{z}}$ is morally proportional to $\varepsilon$, this can, of course, be compensated by factors of $\frac{1}{\varepsilon}$ arising from divergent loop integrals. Nonetheless, as shown by explicit computation (see appendix 4), the effect of including or excluding the ${p_{z}p_{\bar{z}}\over p^{2}}$ terms in the propagator is the same as changing the coupling in front of some local operator. In general, this local operator will be non-Lorentz invariant. We will allow ourselves to add local operators to maintain Lorentz invariance, and what we see from the present discussion is that the simplest way to do this is to drop the ${p_{z}p_{\bar{z}}\over p^{2}}$ terms from the propagators. This should be thought of as part of our renormalization scheme. We take the propagators to be (149). Arrows indicate momentum flow.

With this propagator rule, $f^{\prime}$ is effectively the same as $\partial_{z}f$, and $\bar{f}^{\prime}$ is effectively the same as $\partial_{\bar{z}}\bar{f}$. We then see from (92) that the stress tensor components have indices that match the $\partial_{z}$ and $\partial_{\bar{z}}$ derivatives that appear. This implies that stress tensor correlators will be Lorentz covariant. It will also be useful to Fourier transform back to position space. Performing the $d$-dimensional Fourier transform of the propagators (149) is a straightforward application of the integral (85), and produces the position-space propagators

$\displaystyle\langle f^{\prime}(x)f^{\prime}(0)\rangle_{0}=-8G\pi^{1-{d\over 2}}\Gamma\left({d\over 2}+1\right){\bar{z}^{2}\over(x\cdot x)^{{d\over 2}+1}}\,,$ (150)
$\displaystyle\langle\bar{f}^{\prime}(x)\bar{f}^{\prime}(0)\rangle_{0}=-8G\pi^{1-{d\over 2}}\Gamma\left({d\over 2}+1\right){z^{2}\over(x\cdot x)^{{d\over 2}+1}}\,,$

where $x\cdot x=z\bar{z}+\sum_{i=2}^{d}(x^{i})^{2}$. In $d=2$, (150) becomes

$\displaystyle\langle f^{\prime}(x)f^{\prime}(0)\rangle_{0}=-{8G\over z^{2}}\,,$ (151)
$\displaystyle\langle\bar{f}^{\prime}(x)\bar{f}^{\prime}(0)\rangle_{0}=-{8G\over\bar{z}^{2}}\,.$

We take $\langle f^{\prime}f^{\prime}\rangle_{0}$ and $\langle\bar{f}^{\prime}\bar{f}^{\prime}\rangle_{0}$ as the propagators. When we refer to an “amputated” diagram, we mean that we have divided by these propagators.
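As a cross-check, the $d\rightarrow 2$ limit of (150) indeed collapses to (151), since $\pi^{1-d/2}\Gamma(d/2+1)\rightarrow 1$ and $(x\cdot x)^{d/2+1}\rightarrow z^{2}\bar{z}^{2}$; in sympy:

```python
import sympy as sp

G, z, zb, d = sp.symbols('G z zb d', positive=True)
# position-space propagator (150), with x.x = z*zb at d = 2
prop = -8*G*sp.pi**(1 - d/2)*sp.gamma(d/2 + 1)*zb**2/(z*zb)**(d/2 + 1)
assert sp.simplify(prop.subs(d, 2) + 8*G/z**2) == 0   # reproduces (151)
```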
#### 3 Interaction vertex

To the order at which we work, there is a single quartic interaction vertex whose Feynman rule is (152)

#### 4 Stress tensor in terms of Feynman diagrams

From (92) and the Feynman rules we have derived, we can express the components of the deformed stress tensor $T_{\mu\nu}$ in terms of the following Feynman diagrams: (153)

#### 5 Structure of stress tensor two-point function

The general stress tensor two-point function can be reconstructed from $\langle T_{z\bar{z}}T_{z\bar{z}}\rangle$, as in [88]. To see this, note that Lorentz invariance and parity implies

$\displaystyle\langle T_{zz}(x)T_{zz}(0)\rangle={1\over z^{4}}f_{1}(y)\,,$ (154)
$\displaystyle\langle T_{zz}(x)T_{z\bar{z}}(0)\rangle={1\over z^{3}\bar{z}}f_{2}(y)\,,$
$\displaystyle\langle T_{zz}(x)T_{\bar{z}\bar{z}}(0)\rangle={1\over z^{2}\bar{z}^{2}}f_{3}(y)\,,$
$\displaystyle\langle T_{z\bar{z}}(x)T_{z\bar{z}}(0)\rangle={1\over z^{2}\bar{z}^{2}}f_{4}(y)\,,$

where the dimensionless variable $y$ is

$y={z\bar{z}\over r_{c}}\,.$ (155)

Stress tensor conservation implies

$\displaystyle f_{1}^{\prime}+y^{3}\left({f_{2}\over y^{3}}\right)^{\prime}=0\,,$ (156)
$\displaystyle\left({f_{2}\over y}\right)^{\prime}+y\left({f_{3}\over y^{2}}\right)^{\prime}=0\,,$
$\displaystyle\left({f_{2}\over y}\right)^{\prime}+y\left({f_{4}\over y^{2}}\right)^{\prime}=0\,.$

As $r_{c}\rightarrow 0$, we should recover the usual CFT correlators, which implies that we are looking for solutions with $f_{1}\rightarrow{c\over 2}$ as $y\rightarrow\infty$, and with the other functions vanishing in this limit. The central charge $c$ will be computed in terms of $G$ momentarily. Note that $f_{3}=f_{4}$, which implies that $\langle T_{zz}(x)T_{\bar{z}\bar{z}}(y)-T_{z\bar{z}}(x)T_{z\bar{z}}(y)\rangle=0$. This is compatible with the trace relation $T_{z\bar{z}}=\pi\lambda_{T\overline{T}}\det T$ given that $\langle T_{z\bar{z}}\rangle=0$. We find the central charge $c$ by computing correlators at $r_{c}=0$, where the stress tensor is

$T_{zz}|_{r_{c}=0}={1\over 8G}\left(f^{\prime\prime}-{1\over 2}f^{\prime 2}\right)\,,\quad T_{\bar{z}\bar{z}}|_{r_{c}=0}={1\over 8G}\left(\bar{f}^{\prime\prime}-{1\over 2}\bar{f}^{\prime 2}\right)\,,\quad T_{z\bar{z}}|_{r_{c}=0}=0\,.$ (157)

Using (151), we have

$\displaystyle\langle T_{zz}(x)T_{zz}(0)\rangle|_{r_{c}=0}={c\over 2z^{4}}\,,\quad\langle T_{\bar{z}\bar{z}}(x)T_{\bar{z}\bar{z}}(0)\rangle|_{r_{c}=0}={c\over 2\bar{z}^{4}}\,,$ (158)
$\displaystyle\langle T_{z\bar{z}}(x)T_{z\bar{z}}(0)\rangle|_{r_{c}=0}=\langle T_{zz}(x)T_{z\bar{z}}(0)\rangle|_{r_{c}=0}=0\,,$

with

$c={3\over 2G}+1=c_{0}+1\,.$ (159)

This one-loop correction to the Brown-Henneaux formula is the same as in [58]. We display the contributing diagrams as (160) where the unfilled circles denote stress tensor insertions.
In particular, the details of the calculation in (160) are straightforward: (161) where the tree-level propagators in terms of the central charge ($G=\frac{3}{2c_{0}}$) are

$\displaystyle\langle f^{\prime}(z_{1})f^{\prime}(z_{2})\rangle=-\frac{12}{c_{0}}\frac{1}{z_{12}^{2}}\,,\quad\langle\bar{f}^{\prime}(\bar{z}_{1})\bar{f}^{\prime}(\bar{z}_{2})\rangle=-\frac{12}{c_{0}}\frac{1}{\bar{z}_{12}^{2}}\,.$ (162)

Including the deformation at $O(r_{c}^{2})$, we then have:

$\displaystyle\left\langle T_{zz}\left(z_{1}\right)T_{zz}\left(z_{2}\right)\right\rangle=\frac{c_{0}^{2}}{36}\bigg(\frac{1}{4}\left\langle f^{\prime\prime}\left(z_{1}\right)f^{\prime\prime}\left(z_{2}\right)\right\rangle+\frac{1}{8}\left\langle f^{\prime}\left(z_{1}\right)f^{\prime}\left(z_{2}\right)\right\rangle^{2}$ (163)
$\displaystyle\qquad+\frac{1}{16}r_{c}^{2}\left\langle f^{\prime\prime\prime}\left(z_{1}\right)f^{\prime\prime\prime}\left(z_{2}\right)\right\rangle\left\langle\bar{f}^{\prime}\left(\bar{z}_{1}\right)\bar{f}^{\prime}\left(\bar{z}_{2}\right)\right\rangle\bigg)$
$\displaystyle\quad=\frac{c_{0}+1}{2z_{12}^{4}}+\frac{30r_{c}^{2}}{z_{12}^{6}\bar{z}_{12}^{2}}$

and using the fact that $\lambda=\frac{6r_{c}}{\pi}$, we arrive at

$\left\langle T_{zz}(z_{1})T_{zz}(z_{2})\right\rangle=\frac{c_{0}+1}{2z_{12}^{4}}+\frac{5\pi^{2}\lambda^{2}}{6z_{12}^{6}\bar{z}_{12}^{2}}$ (164)

which matches [88].

#### 6 Correlators of elementary fields

To determine any needed counterterms in the action, we now consider the one-loop four-point and two-loop two-point correlators of $(f,\bar{f})$.

##### $\langle f^{\prime}(p_{1})f^{\prime}(p_{2})f^{\prime}(p_{3})f^{\prime}(p_{4})\rangle$

The basic diagram is (165) The full correlator is then

$\langle f^{\prime}(p_{1})f^{\prime}(p_{2})f^{\prime}(p_{3})f^{\prime}(p_{4})\rangle=G_{4}(p_{1},p_{2},p_{3},p_{4})+G_{4}(p_{1},p_{3},p_{2},p_{4})+G_{4}(p_{1},p_{4},p_{3},p_{2})\,.$ (166)

The amputated diagram is

$\displaystyle G^{{\text{amp}}}_{4}(p_{1},p_{2},p_{3},p_{4})=2r_{c}^{2}\int\!{d^{d}q\over(2\pi)^{d}}{(p_{1,\bar{z}}+p_{2,\bar{z}}-q_{\bar{z}})^{2}q_{\bar{z}}^{2}\over(p_{1}+p_{2}-q)^{2}q^{2}}$ (167)
$\displaystyle\quad={r_{c}^{2}\over 12\pi}{(p_{1,\bar{z}}+p_{2,\bar{z}})^{4}\over(p_{1}+p_{2})^{2}}\,.$

This diagram is in particular finite, hence requires no $(f^{\prime})^{4}$ counterterm.
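Returning briefly to the two-point check (163)-(164): the Wick contractions there are mechanical and can be verified symbolically. A minimal sympy sketch using the tree-level propagators (162) (symbol names are ours):

```python
import sympy as sp

z1, z2, zb1, zb2 = sp.symbols('z1 z2 zb1 zb2')
c0, rc = sp.symbols('c0 r_c', positive=True)

ff = -12/(c0*(z1 - z2)**2)       # <f'(z1) f'(z2)>, eq. (162)
fbfb = -12/(c0*(zb1 - zb2)**2)   # <fbar'(zb1) fbar'(zb2)>, eq. (162)

# right-hand side of (163): derivatives of the propagators implement f'', f'''
corr = (c0**2/36)*(sp.diff(ff, z1, z2)/4
                   + ff**2/8
                   + (rc**2/16)*sp.diff(ff, z1, 2, z2, 2)*fbfb)
expected = (c0 + 1)/(2*(z1 - z2)**4) + 30*rc**2/((z1 - z2)**6*(zb1 - zb2)**2)
assert sp.simplify(corr - expected) == 0
```

With $\lambda=6r_{c}/\pi$, the coefficient $30r_{c}^{2}$ becomes the $\frac{5\pi^{2}\lambda^{2}}{6}$ of (164).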
##### $\langle f^{\prime}(p_{1})f^{\prime}(p_{2})\bar{f}^{\prime}(p_{3})\bar{f}^{\prime}(p_{4})\rangle$

The correlator has an (amputated) tree-level contribution

$\langle f^{\prime}(p_{1})f^{\prime}(p_{2})\bar{f}^{\prime}(p_{3})\bar{f}^{\prime}(p_{4})\rangle_{\text{tree}}=-{r_{c}\over 16\pi G}\,.$ (168)

The one-loop diagram is (169) which we need to evaluate to compute the one-loop contribution to the correlator

$\langle f^{\prime}(p_{1})f^{\prime}(p_{2})\bar{f}^{\prime}(p_{3})\bar{f}^{\prime}(p_{4})\rangle_{1-{\text{loop}}}=G_{2,2}(p_{1},p_{2},p_{3},p_{4})+G_{2,2}(p_{1},p_{2},p_{4},p_{3})\,.$ (170)

Employing the shorthand $p_{ij}=p_{i}+p_{j}$, the result computed using dimensional regularization and setting $d=2+\varepsilon$ reads

$\displaystyle G^{\text{amp}}_{2,2}(p_{1},p_{2},p_{3},p_{4})=4r_{c}^{2}\int\!{d^{d}q\over(2\pi)^{d}}{(p_{1,z}+p_{3,z}-q_{z})^{2}q_{\bar{z}}^{2}\over(p_{1}+p_{3}-q)^{2}q^{2}}$ (171)
$\displaystyle\quad=4r_{c}^{2}\left[\frac{p^{2}_{13}}{32\pi\varepsilon}+\frac{6\gamma-11-6\ln(4\pi)}{384\pi}p^{2}_{13}+\frac{p^{2}_{13}}{64\pi}\ln p^{2}_{13}\right]\,.$

The amputated correlator works out to be

$\displaystyle\langle f^{\prime}(p_{1})f^{\prime}(p_{2})\bar{f}^{\prime}(p_{3})\bar{f}^{\prime}(p_{4})\rangle_{1-{\text{loop}}}^{\text{amp}}=\frac{r_{c}^{2}(d+2)(d+4)}{4^{d+5/2}\pi^{\frac{d-3}{2}}}\frac{(p_{13}^{2})^{d/2}+(p_{14}^{2})^{d/2}}{\sin\left(\frac{\pi d}{2}\right)\Gamma(d/2+3/2)}\,.$ (172)

This amputated correlator (172) has a pole at $\varepsilon=0$,

$\langle f^{\prime}(p_{1})f^{\prime}(p_{2})\bar{f}^{\prime}(p_{3})\bar{f}^{\prime}(p_{4})\rangle_{1-{\text{loop}}}^{\text{amp}}\sim{1\over 8\pi\varepsilon}r_{c}^{2}\big[p_{13}^{2}+p_{14}^{2}\big]\,.$ (173)

This divergence (173) is canceled by the counterterm:

$I_{\text{ct}}={r_{c}^{2}\over 4\pi\varepsilon}\int_{\partial M_{3}}\!d^{2}x\,\partial_{z}(f^{\prime}\bar{f}^{\prime})\partial_{\bar{z}}(f^{\prime}\bar{f}^{\prime})\,.$ (174)

The original action has no term of the form (174). One interpretation is that this implies the existence of a new parameter in our theory corresponding to including an undetermined finite term along with (174). On the other hand, as discussed in this chapter’s introduction, the $3d$ gravity origin of this theory indicates that no such new parameters should be needed. We thus suspect that the appearance of the undetermined parameter may reflect that our renormalization scheme has not incorporated all symmetries of the $3d$ gravity theory.

#### 7 $\langle f^{\prime}(x)f^{\prime}(0)\rangle$ at two-loops

We will first compute the correlator in momentum space. The relevant Feynman diagram to compute $\langle f^{\prime}(p)f^{\prime}(-p)\rangle$ is a sunset-type diagram (175) The two-loop contribution to the amputated correlator is then

$\langle f^{\prime}(p)f^{\prime}(-p)\rangle_{{\text{two-loop}}}^{\text{amp}}=8\left({r_{c}\over 64\pi G}\right)^{2}\left(32\pi G\right)^{3}\int{d^{2}k\over(2\pi)^{2}}\int{d^{2}k^{\prime}\over(2\pi)^{2}}{k_{\bar{z}}^{2}k_{\bar{z}}^{\prime 2}(p-k-k^{\prime})_{z}^{2}\over k^{2}k^{\prime 2}(p-k-k^{\prime})^{2}}\,,$ (176)

where the overall normalization involves two vertex factors as in (152), the normalization of the three internal propagators as in formulas (149), and a symmetry factor of $8$. The integrals over the internal momenta $k$ and $k^{\prime}$ are computed in dimensional regularization in appendix 3.
The result reads

$\int{d^{2}k\over(2\pi)^{2}}\int{d^{2}k^{\prime}\over(2\pi)^{2}}{k_{\bar{z}}^{2}k_{\bar{z}}^{\prime 2}(p-k-k^{\prime})_{z}^{2}\over k^{2}k^{\prime 2}(p-k-k^{\prime})^{2}}={1\over 2^{7}3\pi^{2}}p_{z}p_{\bar{z}}^{3}\log p^{2}+{\text{polynomial}}\,.$ (177)

Attaching the external legs and Fourier transforming back to position space using formulas (89), we conclude

$\langle f^{\prime}(x)f^{\prime}(0)\rangle_{{\text{two-loop}}}=-{64r_{c}^{2}G^{3}\over z^{4}\bar{z}^{2}}\,.$ (178)

This diagram is, in particular, finite (up to contact terms at $x=0$), so no wavefunction renormalization is required. (As seen in appendix 3, the integral (177) does have a divergence in dimensional regularization; however, the divergence is a polynomial in the momentum, which only leads to delta function contact terms in position space.)

#### 8 $\langle T_{z\bar{z}}f^{\prime}\bar{f}^{\prime}\rangle$

To identify the need for a counterterm for $T_{z\bar{z}}$, we consider the correlator of the stress tensor with two elementary fields (179) The amputated diagram is

$\displaystyle\langle T_{z\bar{z}}(k)f^{\prime}(p_{1})\bar{f}^{\prime}(p_{2})\rangle_{\operatorname{amp}}=4\pi r_{c}^{2}\int\!{d^{d}q\over(2\pi)^{d}}{q_{z}^{3}(k_{\bar{z}}-q_{\bar{z}})^{3}\over q^{2}(k-q)^{2}}$ (180)
$\displaystyle\quad={1\over 32\varepsilon}r_{c}^{2}(k^{2})^{2}+{\operatorname{finite}}\,.$

To cancel this divergence we need to redefine this stress tensor component as

$T_{z\bar{z}}\rightarrow T_{z\bar{z}}-{1\over 2\varepsilon}r_{c}^{2}f^{\prime\prime\prime}\bar{f}^{\prime\prime\prime}\,.$ (181)

Here, we have adopted a minimal subtraction scheme. Of course, we are free to also add a finite contribution, which will appear below as an undetermined constant in the stress tensor correlator.

#### 9 $\langle T_{z\bar{z}}T_{z\bar{z}}\rangle$

To compute $\langle T_{z\bar{z}}T_{z\bar{z}}\rangle$ to two-loop order, we recall

$4GT_{z\bar{z}}=-{1\over 4}r_{c}f^{\prime\prime}\bar{f}^{\prime\prime}+{1\over 8}r_{c}(f^{\prime\prime}\bar{f}^{\prime 2}+f^{\prime 2}\bar{f}^{\prime\prime})-{1\over 8}r_{c}^{2}(f^{\prime\prime\prime}\bar{f}^{\prime}\bar{f}^{\prime\prime}+f^{\prime}f^{\prime\prime}\bar{f}^{\prime\prime\prime})+{\operatorname{quartic}}\,.$ (182)

The contributing diagrams to the two-loop order are (183) The first three diagrams are trivially computed by Wick contraction in position space. The one-loop diagram is (184) and the two simple two-loop diagrams sum to (185) We next turn to the two-loop diagram in (183). Working in momentum space, the contribution to $\langle T_{z\bar{z}}(-k)T_{z\bar{z}}(k)\rangle$ is (186) This diagram has double and single pole divergences in $\varepsilon$. The double pole is polynomial in $k$ and can be ignored as it won’t contribute to the two-point function at finite spatial separation. The simple pole is canceled, by design, via the stress tensor counterterm (181); i.e. by the two one-loop diagrams in which one of the stress tensor insertions is given by the counterterm in (181). The resulting finite part is

$\langle T_{z\bar{z}}(-k)T_{z\bar{z}}(k)\rangle_{\infty}=2^{-7}\pi r_{c}^{3}G\left(a\ln k^{2}+(\ln k^{2})^{2}\right)(k^{2})^{4}+{\operatorname{polynomial}}\,.$ (187)

The constant $a$ is left unspecified since it can be shifted arbitrarily due to the freedom in including a finite counterterm in (181).
Fourier transforming back to position space, we obtain

$\langle T_{z\bar{z}}(x)T_{z\bar{z}}(0)\rangle_{\infty}=2^{8}\cdot 3^{2}r_{c}^{3}G{\ln(\mu^{2}z\overline{z})\over(z\overline{z})^{5}}\,,$ (188)

where we now traded the arbitrary constant $a$ for a renormalization scale $\mu$ (logarithms also appear in the $T\bar{T}$-deformed correlation functions of [88, 89]).

#### 10 Summary of two-point deformed correlators at two-loop order

Combining results to the two-loop order, we have found

$\langle T_{z\bar{z}}(x)T_{z\bar{z}}(0)\rangle=\frac{3}{(z\overline{z})^{2}}\left[(3+4G)\left({r_{c}\over z\overline{z}}\right)^{2}-64G\left(1-12\ln(\mu^{2}z\overline{z})\right)\left({r_{c}\over z\overline{z}}\right)^{3}+400G\left({r_{c}\over z\overline{z}}\right)^{4}\right]\,.$ (189)

Using the Ward identities (154) and (156), we read off the other two-point functions

$\displaystyle\langle T_{zz}(x)T_{zz}(0)\rangle=\frac{1}{z^{4}}\left[\frac{c}{2}+10(3+4G)\left(\frac{r_{c}}{z\overline{z}}\right)^{2}+96G\left(8+60\ln(\mu^{2}z\overline{z})\right)\left(\frac{r_{c}}{z\overline{z}}\right)^{3}+2520G\left(\frac{r_{c}}{z\overline{z}}\right)^{4}\right]\,,$ (190)
$\displaystyle\langle T_{zz}(x)T_{z\overline{z}}(0)\rangle=\frac{4}{z^{3}\overline{z}}\left[-(3+4G)\left(\frac{r_{c}}{z\overline{z}}\right)^{2}+24G\left(1-30\ln(\mu^{2}z\overline{z})\right)\left(\frac{r_{c}}{z\overline{z}}\right)^{3}-360G\left(\frac{r_{c}}{z\overline{z}}\right)^{4}\right]\,,$
$\displaystyle\langle T_{zz}(x)T_{\overline{z}\overline{z}}(0)\rangle=\frac{3}{(z\overline{z})^{2}}\left[(3+4G)\left(\frac{r_{c}}{z\overline{z}}\right)^{2}-64G\left(1-12\ln(\mu^{2}z\overline{z})\right)\left(\frac{r_{c}}{z\overline{z}}\right)^{3}+400G\left(\frac{r_{c}}{z\overline{z}}\right)^{4}\right]\,,$

where $c=c_{0}+1={3\over 2G}+1$.

#### 11 Higher point correlators

We have acquired a systematic method to compute any $n$-point stress tensor correlator at any given order in $\lambda$ and $c$. Although we have so far contented ourselves with two-point correlators in detail, studying higher-point correlators is straightforward.
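Before turning to examples, note that the summary above can be spot-checked: the correlators (189)-(190) must satisfy the conservation equations (156). A minimal sympy check of the first of these (setting $\mu^{2}r_{c}=1$, so the logarithm becomes $\ln y$; the function names follow (154)):

```python
import sympy as sp

y, G, c = sp.symbols('y G c', positive=True)
L = sp.log(y)   # ln(mu^2 z zbar) with the normalization mu^2 r_c = 1

# f1 and f2 read off from (190) in the variables of (154)
f1 = c/2 + 10*(3 + 4*G)/y**2 + 96*G*(8 + 60*L)/y**3 + 2520*G/y**4
f2 = 4*(-(3 + 4*G)/y**2 + 24*G*(1 - 30*L)/y**3 - 360*G/y**4)

# first equation of (156): f1' + y^3 (f2/y^3)' = 0
assert sp.simplify(sp.diff(f1, y) + y**3*sp.diff(f2/y**3, y)) == 0
```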
For example, at the one-loop level at $r_{c}=0$, the three-point correlator is (191) Another example is the tree-level deformed three-point correlator at $\mathcal{O}(r_{c})$: (192) We can express (192) in terms of a product of undeformed tree-level two-point correlators with the trace flow equation

$\displaystyle\langle T_{z\bar{z}}(x_{1})T_{zz}(x_{2})T_{\bar{z}\bar{z}}(x_{3})\rangle_{r_{c}}=\langle\left(-\pi\lambda T_{zz}(x_{1})T_{\bar{z}\bar{z}}(x_{1})\right)T_{zz}(x_{2})T_{\bar{z}\bar{z}}(x_{3})\rangle_{0}+\mathcal{O}(\lambda^{2})$ (193)
$\displaystyle\quad=-\pi\lambda\langle T_{zz}(x_{1})T_{zz}(x_{2})\rangle_{0}\langle T_{\bar{z}\bar{z}}(x_{1})T_{\bar{z}\bar{z}}(x_{3})\rangle_{0}+\mathcal{O}(\lambda^{2})$
$\displaystyle\quad=-\frac{\pi\lambda c_{0}^{2}}{4}\frac{1}{z^{4}_{12}\bar{z}^{4}_{13}}+\mathcal{O}(\lambda^{2})\,,$

where (194) Moreover, from perturbation theory at the tree-level and $\mathcal{O}(r_{c})$, we may compute

$\displaystyle\langle T_{zz}(x_{1})T_{\bar{z}\bar{z}}(x_{2})T_{\bar{z}\bar{z}}(x_{3})\rangle_{r_{c}}.$ (195)

Using the fact that $\sqrt{g}d^{2}x=\frac{1}{2}d^{2}z$, $\partial_{\bar{z}_{1}}\frac{1}{z_{1}-z_{2}}=2\pi\delta^{2}(z_{1}-z_{2})$ and integration by parts, we arrive at the following at $\mathcal{O}(\lambda)$

$\displaystyle\langle T_{zz}(x_{1})T_{\bar{z}\bar{z}}(x_{2})T_{\bar{z}\bar{z}}(x_{3})\rangle_{r_{c}}$ (196)
$\displaystyle=\left\langle T_{zz}(x_{1})T_{\bar{z}\bar{z}}(x_{2})T_{\bar{z}\bar{z}}(x_{3})\left(-\lambda\int d^{2}x\sqrt{g}T_{zz}(x)T_{\bar{z}\bar{z}}(x)\right)\right\rangle_{0}$
$\displaystyle=-\lambda\int d^{2}x\sqrt{g}\langle T_{zz}(x_{1})T_{\bar{z}\bar{z}}(x_{2})T_{\bar{z}\bar{z}}(x_{3})T_{zz}(x)T_{\bar{z}\bar{z}}(x)\rangle_{0}$
$\displaystyle=-\lambda\int d^{2}x\sqrt{g}\langle T_{zz}(x_{1})T_{zz}(x)\rangle_{0}\langle T_{\bar{z}\bar{z}}(x_{2})T_{\bar{z}\bar{z}}(x_{3})T_{\bar{z}\bar{z}}(x)\rangle_{0}$
$\displaystyle=-\lambda\int\left(\frac{1}{2}d^{2}z\right)\left(\frac{c_{0}}{2}\frac{1}{(z-z_{1})^{4}}\right)\left(\frac{c_{0}}{(\bar{z}-\bar{z}_{2})^{2}(\bar{z}-\bar{z}_{3})^{2}(\bar{z}_{2}-\bar{z}_{3})^{2}}\right)$
$\displaystyle=-\frac{\lambda c_{0}^{2}}{4}\int\frac{d^{2}z}{(z-z_{1})^{4}(\bar{z}-\bar{z}_{2})^{2}(\bar{z}-\bar{z}_{3})^{2}(\bar{z}_{2}-\bar{z}_{3})^{2}}$
$\displaystyle=\frac{\lambda c_{0}^{2}}{12}\int d^{2}z\left[\partial_{z}\frac{1}{(z-z_{1})^{3}}\right]\frac{1}{(\bar{z}-\bar{z}_{2})^{2}(\bar{z}-\bar{z}_{3})^{2}(\bar{z}_{2}-\bar{z}_{3})^{2}}$
$\displaystyle=-\frac{\lambda c_{0}^{2}}{12}\int d^{2}z\frac{1}{(z-z_{1})^{3}}\partial_{z}\left[\frac{1}{(\bar{z}-\bar{z}_{2})^{2}(\bar{z}-\bar{z}_{3})^{2}(\bar{z}_{2}-\bar{z}_{3})^{2}}\right]$
$\displaystyle=-\frac{\lambda c_{0}^{2}}{12}\int d^{2}z\frac{1}{(z-z_{1})^{3}}\left(-2\pi\partial_{\bar{z}}\delta^{2}(z-z_{2})\right)\frac{1}{(\bar{z}-\bar{z}_{3})^{2}(\bar{z}_{2}-\bar{z}_{3})^{2}}+(x_{2}\leftrightarrow x_{3})$
$\displaystyle=-\frac{\pi\lambda c_{0}^{2}}{3}\frac{1}{(\bar{z}_{2}-\bar{z}_{3})^{5}}\left(\frac{1}{(z_{1}-z_{2})^{3}}-\frac{1}{(z_{1}-z_{3})^{3}}\right).$

These are just a few sample diagrams related to three-point correlators. Combinatorially, there are many more ways of constructing three-point diagrams than there are for two-point correlators, as we saw earlier. For illustrative purposes, a few three-point diagrams are: (197)

### 6 Gravitational AdS3 Wilson lines

To close this chapter, we perturbatively compute the deformed classical and quantum gravitational Wilson line and its correlators in AdS3.
As a consistency check, our classical gravitational Wilson line correlator analysis is consistent with previous results on $T\overline{T}$-deformed scalar correlators [88, 91, 89] for constant stress tensor backgrounds.

#### 1 The classical AdS3 Wilson line

The gravitational AdS3 Wilson line anchored at the endpoints $z_{1}$ and $z_{2}$ is conjectured to be dual to a bi-local primary operator:

$\langle W[z_{2},z_{1}]\rangle_{0}\longleftrightarrow\langle O(z_{2})O(z_{1})\rangle_{0}\,.$ (198)

Given two arbitrary AdS3 bulk points $Z_{1}=(r_{1},z_{1})$ and $Z_{2}=(r_{2},z_{2})$, the classical Wilson line is defined as the path-ordered integral

$W[(r_{2},z_{2};r_{1},z_{1})]_{0}=P\exp\left(\int^{(r_{2},z_{2})}_{(r_{1},z_{1})}A\right)\,.$ (199)

Under a gauge transformation, the Wilson line transforms as

$W[(r_{2},z_{2};r_{1},z_{1})]_{0}\rightarrow g(r_{2},z_{2})^{-1}W[(r_{2},z_{2};r_{1},z_{1})]_{0}g(r_{1},z_{1})\,,$ (200)

with $g\in\text{SL}(2,\mathbb{R})$ and $A\rightarrow g^{-1}\left(d+A\right)g$. In particular, the radial dependence of the connections

$A=b(r)^{-1}\left(d+a(z)\right)b(r)\,,\quad b(r)=r^{L_{0}}\,,\qquad\overline{A}=\overline{b}(r)\left(d+\bar{a}(\bar{z})\right)\overline{b}(r)^{-1}\,,\quad\overline{b}(r)=r^{\bar{L}_{0}}\,,$ (201)

arises through a gauge transformation:

$W[(r_{2},z_{2};r_{1},z_{1})]_{0}=b(r_{2})^{-1}P\exp\left(\int_{z_{1}}^{z_{2}}a\right)b(r_{1})\,.$ (202)

The matrix elements of $W[z_{2},z_{1}]$ between the lowest and highest weight states are

$\displaystyle\langle W[z_{2},z_{1}]\rangle_{0}=\langle j,-j\mid P\exp\left(\int^{z_{2}}_{z_{1}}a\right)\mid j,j\rangle_{0}$ (203)
$\displaystyle\quad=\langle j,-j\mid P\exp\left[\int^{z_{2}}_{z_{1}}dz\left(L_{1}+\frac{6}{c}T_{zz}(z)L_{-1}\right)\right]\mid j,j\rangle_{0}\,,$

where $|j,m\rangle$ is the state of weight $m$ in the spin-$j$ representation of $SL(2,\mathbb{R})$. To see the bi-localness of the classical Wilson line, first consider the vacuum state of $3d$ gravity. In the vacuum state, the path-ordered integral reduces to an ordinary integral

$\langle W[z_{2},z_{1}]\rangle_{0}\big{|}_{T_{zz}=0}=\langle j,-j\mid\exp\left(\int^{z_{2}}_{z_{1}}dz\,L_{1}\right)\mid j,j\rangle_{0}=z_{21}^{2j}\,,$ (204)

where $z_{ij}=z_{i}-z_{j}$ and the bi-local primary field has dimension $h=-j$. One can recover the case when $T_{zz}\neq 0$ through a local conformal transformation $z\rightarrow f(z)$ which is given by inverting the Schwarzian

$T_{zz}=\frac{c}{12}\{f(z),z\}\,.$ (205)

As a result, the classical Wilson line for a general background is

$\langle W[z_{2},z_{1}]\rangle_{0}\big{|}_{T_{zz}\neq 0}=\frac{\left[f(z_{2})-f(z_{1})\right]^{2j}}{\left[f^{\prime}(z_{2})f^{\prime}(z_{1})\right]^{j}}\,,$ (206)

and behaves as a bi-local primary operator at the endpoints. Intuitively, one way to argue for the bi-locality of the Wilson line is that the Chern-Simons equations of motion force the connections to be flat, which makes the Wilson line path-independent between the two endpoints.

#### 2 The quantum $T\overline{T}$-deformed AdS3 Wilson line

The quantum Wilson line is obtained by beginning with the definition of the classical Wilson line, where the stress tensor $T_{zz}$ is a commuting number, and promoting the stress tensor to an operator of the CFT. The resulting object is conjectured to behave as a bi-local primary operator at its endpoints, $\langle W[z_{2},z_{1}]\rangle_{0}=z_{21}^{-2h(j,c)}$.
Because the stress tensor is now an operator, short-distance singularities arise from the stress tensor OPE, so the scaling dimension $h(j,c)$ of the Wilson line experiences quantum corrections of the form

$h(j,c)=\sum^{\infty}_{n=0}\frac{h_{n}(j)}{c^{n}}\,.$ (208)

The specific values of $h_{n}(j)$ are easily calculable from [71, 63, 64]; for instance, a few values are

$h_{0}(j)=-j,\quad h_{1}(j)=-6j(j+1),\quad h_{2}(j)=-78j(j+1),\quad h_{3}(j)=-1230j(j+1),\quad h_{4}(j)=-21606j(j+1)\,.$ (207)

Due to these short-distance singularities from the stress tensor OPE, we must regularize the gravitational Wilson line to verify that the quantum Wilson line

$\displaystyle\langle W[z_{2},z_{1}]\rangle_{0}=\langle j,-j\mid P\exp\left(\int^{z_{2}}_{z_{1}}dz\left(L_{1}+\frac{6}{c}T_{zz}(z)L_{-1}\right)\right)\mid j,j\rangle_{0}$ (209)
$\displaystyle\quad=\sum_{n=0}^{\infty}\int_{z_{1}}^{z_{2}}dy_{n}\int_{z_{1}}^{y_{n}}dy_{n-1}\cdots\int_{z_{1}}^{y_{2}}dy_{1}\,\langle j,-j\mid\left(L_{1}+\frac{6}{c}T_{zz}(y_{n})L_{-1}\right)\cdots\left(L_{1}+\frac{6}{c}T_{zz}(y_{1})L_{-1}\right)\mid j,j\rangle_{0}$

captures the correct scaling dimension (208) as a bi-local primary operator. Further perturbative evidence of the Wilson line’s bi-localness was provided in [71, 63, 64]. The authors of [71] successfully calculated the undeformed quantum Wilson line $\langle W[z;0]\rangle_{0}$ up to $\mathcal{O}(\frac{1}{c})$ and encountered some ambiguities in the coefficients at the two-loop order, $\mathcal{O}(\frac{1}{c^{2}})$, due to the absence of a systematic renormalization scheme that preserves conformal invariance. The most promising scheme is the dimensional regularization approach used in [63], where an overall multiplicative renormalization $N(\varepsilon)$ and a renormalization of the vertex factor $\alpha(\varepsilon)$ were needed in $d=2-\varepsilon$ dimensions:

$\lim_{\varepsilon\rightarrow 0}\langle W_{\varepsilon}[z_{2},z_{1}]\rangle_{0}=z_{21}^{2j}\lim_{\varepsilon\rightarrow 0}N(\varepsilon)\langle j,-j\mid P\exp\left(\frac{6\alpha(\varepsilon)}{c}\int_{z_{1}}^{z_{2}}dy\left(L_{1}+\frac{6}{c}T_{zz}(y)L_{-1}\right)\right)\mid j,j\rangle_{0}\,.$ (210)

Here $N(\varepsilon)$ and $\alpha(\varepsilon)$ are chosen order-by-order in $\frac{1}{c}$ to cancel the poles in $\varepsilon$. The authors in [63] corrected the issue which arose at $\mathcal{O}(\frac{1}{c^{2}})$ in [71]. They also carefully calculated and confirmed the $\mathcal{O}(\frac{1}{c^{3}})$ corrections to the Wilson line. Using the systematic renormalization approach in [63], the authors of [64] calculated Wilson line correlators with multiple stress tensor insertions $\big\langle\prod^{n}_{i=1}T_{zz}(w_{i})W[z_{2},z_{1}]\big\rangle$ and found results consistent with the expectation that the Wilson line yields the vacuum Virasoro OPE block (3)-(4). However, whether the quantum Wilson line behaves as a bi-local primary operator non-perturbatively in $\frac{1}{c}$ is still unknown as dimensional regularization may violate conformal invariance. This completes our review of the quantum Wilson line; we now set up the necessary formalism to compute the deformed quantum Wilson line.
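The manipulations below lean on a small set of $SL(2,\mathbb{R})$ algebra facts; they are easy to check in the fundamental representation. A minimal sympy sketch of the conjugation identity used in (213) below (the matrix conventions $L_{-1}=E_{12}$, $L_{1}=-E_{21}$, $L_{0}=\tfrac12\,\mathrm{diag}(1,-1)$ are our assumption):

```python
import sympy as sp

z = sp.symbols('z_21')
L0 = sp.Matrix([[sp.Rational(1, 2), 0], [0, -sp.Rational(1, 2)]])
L1 = sp.Matrix([[0, 0], [-1, 0]])
Lm1 = sp.Matrix([[0, 1], [0, 0]])

# algebra facts quoted after (213): L_{+-1}^2 = 0, L1 Lm1 L1 = -L1, [Lm1, L1] = -2 L0
assert L1**2 == sp.zeros(2, 2) and Lm1**2 == sp.zeros(2, 2)
assert L1*Lm1*L1 == -L1
assert Lm1*L1 - L1*Lm1 == -2*L0

# since L1^2 = 0, exp(z L1) = 1 + z L1, so the conjugation in (213) truncates:
# exp(-z L1) Lm1 exp(z L1) = Lm1 - 2 z L0 + z^2 L1
lhs = (sp.eye(2) - z*L1)*Lm1*(sp.eye(2) + z*L1)
rhs = Lm1 - 2*z*L0 + z**2*L1
assert sp.expand(lhs - rhs) == sp.zeros(2, 2)
```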
We begin with the Wilson line in terms of the boundary stress tensor, which is valid in the undeformed theory because the connections can be brought into Bañados form:

$W[z_{2},z_{1}]=P\exp\left(\int^{z_{2}}_{z_{1}}dy\left(L_{1}+\frac{6}{c}T_{zz}(y)L_{-1}\right)\right).$ (211)

Following [71], we write (211) in a more convenient form by defining

$V[z_{1},z_{2}]=\exp\left(-L_{1}z_{21}\right)W[z_{1},z_{2}]\,,$ (212)

so that

$\begin{split}\frac{d}{dz_{2}}V[z_{1},z_{2}]&=\exp\left(-L_{1}z_{21}\right)\frac{6}{c}T_{zz}(z_{2})L_{-1}\exp\left(L_{1}z_{21}\right)V[z_{1},z_{2}]\\ &=\frac{6}{c}\left((1-L_{1}z_{21})L_{-1}(1+L_{1}z_{21})\right)T_{zz}(z_{2})V[z_{1},z_{2}]\\ &=\frac{6}{c}\left(L_{-1}+z_{21}[L_{-1},L_{1}]-z_{21}^{2}L_{1}L_{-1}L_{1}\right)T_{zz}(z_{2})V[z_{1},z_{2}]\\ &=\frac{6}{c}\left(L_{-1}-2z_{21}L_{0}+z_{21}^{2}L_{1}\right)T_{zz}(z_{2})V[z_{1},z_{2}]\,,\end{split}$ (213)

where we have used the facts $L_{\pm 1}^{2}=0$, $L_{1}L_{-1}L_{1}=-L_{1}$, and $[L_{-1},L_{1}]=-2L_{0}$. Here (213) is solved by the usual path-ordered exponential and this allows us to write the Wilson line in a more convenient form to systematically implement a $\frac{1}{c}$ expansion:

$\displaystyle\langle W[z_{2},z_{1}]\rangle=\langle j,-j\mid\exp\left(z_{21}L_{1}\right)P\exp\left(\frac{6}{c}\int^{z_{2}}_{z_{1}}\left(L_{-1}-2(y-z_{1})L_{0}+(y-z_{1})^{2}L_{1}\right)T_{zz}(y)dy\right)\mid j,j\rangle\,.$ (214)

The gravitational Wilson line in this form (214) can be understood as a perturbative expansion in $\frac{1}{c}$ of self-energy Feynman diagrams: (215) The first diagram in (215) contributes at $\mathcal{O}(\frac{1}{c^{0}})$, the second diagram contributes at $\mathcal{O}(\frac{1}{c})$, the final four diagrams contribute at $\mathcal{O}(\frac{1}{c^{2}})$, and the ellipsis denotes higher order quantum corrections past $\mathcal{O}(\frac{1}{c^{2}})$. For every vertex, in the undeformed case, we have holomorphic stress tensor insertions. For $n$ vertices, we have an $n$-point correlator of holomorphic stress tensors to integrate over.

Using this setup for the quantum gravitational Wilson line, writing down formal expressions for the deformed corrections from the $n$-point deformed stress tensor correlators is straightforward. One can then use the Feynman rules of the fundamental fields derived in this chapter, (151) and (152), to calculate the deformed stress tensor correlators. Intuitively, the quantum corrections to the deformed gravitational Wilson line $\langle W[z_{2},z_{1}]\rangle_{\lambda}$ involve non-vanishing self-energy interactions between both holomorphic and anti-holomorphic exchanges of the fundamental fields, denoted by solid and dashed propagators respectively. Determining the deformed quantum Wilson line at a given order in $\lambda$ and $\frac{1}{c}$ is computationally complicated for two reasons. The first reason is that the $n$-point stress tensor correlator is subject to both quantum corrections in $\frac{1}{c}$ and $\lambda$ corrections. To be more precise, we notice that the expectation value of the undeformed Wilson line $W_{\varepsilon}[z,0]$ has the expansion

$\langle W_{\varepsilon}[z;0]\rangle_{0}=z^{2j}N(\varepsilon)\sum_{n=0}^{\infty}\frac{(6\alpha(\varepsilon))^{n}}{c^{n}}\int_{0}^{z}dy_{n}\cdots\int_{0}^{y_{2}}dy_{1}F_{n}(z;y_{n},\dots,y_{1})\langle T_{zz}(y_{n})\cdots T_{zz}(y_{1})\rangle_{0}$ (216)

as follows from (210).
Here the SL$(2,\mathbb{R})$ group theory factor $F_{n}\left(z;y_{n},\dots,y_{1}\right)$ is defined by the following homogeneous polynomial in the variables $z,y_{n},\cdots,y_{1}$ of degree $n$: $z^{2j}F_{n}\left(z;y_{n},\dots,y_{1}\right)=\left\langle j,-j\mid e^{zL_{1}}\left(L_{-1}-2y_{n}L_{0}+y_{n}^{2}L_{1}\right)\cdots\left(L_{-1}-2y_{1}L_{0}+y_{1}^{2}L_{1}\right)\mid j,j\right\rangle\,.$ (217) Computing $\langle W_{\varepsilon}[z;0]\rangle_{\lambda}$ via conformal perturbation theory in $\lambda$ involves an infinite $\lambda$ expansion at each order of the $\mathcal{O}\left(\frac{1}{c}\right)$ expansion of the undeformed Wilson line $\langle W_{\varepsilon}[z;0]\rangle_{0}$. For instance, the $\mathcal{O}\left(\frac{1}{c^{2}}\right)$ term in the $\frac{1}{c}$ expansion of $\langle W_{\varepsilon}[z;0]\rangle_{0}$ (we start at $\mathcal{O}\left(\frac{1}{c^{2}}\right)$ because at $\mathcal{O}\left(\frac{1}{c}\right)$ the one-point planar correlator $\langle T_{zz}(y_{1})\rangle_{\lambda}$ vanishes identically by Lorentz and translational invariance; in the case of non-planar backgrounds, such as the cylinder or torus [58, 150], the one-point function is nonzero) is $\displaystyle z^{2j}N(\varepsilon)\frac{(6\alpha(\varepsilon))^{2}}{c^{2}}\int_{0}^{z}dy_{1}\int_{0}^{y_{1}}dy_{2}F_{2}(z;y_{1},y_{2})\langle T_{zz}(y_{1})T_{zz}(y_{2})\rangle_{0}$ (218) $\displaystyle\rightarrow z^{2j}N(\varepsilon)\frac{(6\alpha(\varepsilon))^{2}}{c^{2}}\sum_{p=0}^{\infty}\int_{0}^{z}dy_{1}\int_{0}^{y_{1}}dy_{2}F_{2}(z;y_{1},y_{2})$ $\displaystyle\cdot\left\langle T_{zz}(y_{1})T_{zz}(y_{2})\frac{\lambda^{p}}{p!}\left(\int d^{2}wT_{zz}(w)T_{\bar{z}\bar{z}}(\overline{w})\right)^{p}\right\rangle\,.$ The divergences from the integrals are handled by the dimensional regularization scheme, which we mentioned in the discussion of the renormalized vertex factor and multiplicative renormalization around equation (210). For exposition’s sake, we content ourselves with determining the leading order corrections to the quantum Wilson line (214). Expanding the exponential (214), we find $\displaystyle\langle W[z_{2},z_{1}]\rangle_{\lambda}$ (219) $\displaystyle\,\,\,=z^{2j}\bigg{[}1+\left(\frac{6}{c}\right)^{2}\int^{z_{2}}_{z_{1}}dy_{1}\int^{y_{1}}_{z_{1}}dy_{2}\langle j,-j\mid\exp\left(L_{1}z_{21}\right)\left(L_{-1}-2(y_{1}-z_{1})L_{0}+(y_{1}-z_{1})^{2}L_{1}\right)$ $\displaystyle\hskip 130.0pt\times\left(L_{-1}-2(y_{2}-z_{1})L_{0}+(y_{2}-z_{1})^{2}L_{1}\right)\mid j,j\rangle\langle T_{zz}(y_{1})T_{zz}(y_{2})\rangle_{\lambda}\bigg{]}\,.$ The tree-level deformed planar stress tensor two-point function at $\mathcal{O}(\lambda^{2}c^{2})$ was first determined in [88] via translational/rotational invariance and stress tensor conservation.
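As a sanity check on the matrix element defining (217) (our own check, carried out in the two-dimensional representation, i.e. at $j=1/2$; the identification of $\mid j,j\rangle$ and $\langle j,-j\mid$ with the two basis vectors is our own convention), one can evaluate the $n=2$ case explicitly and compare with the closed form quoted in (221) below:

```python
# Sketch (our own check): evaluate the n = 2 matrix element in (217) at
# j = 1/2 and compare with the closed form quoted in (221).
import sympy as sp

z, y1, y2 = sp.symbols('z y1 y2')
j = sp.Rational(1, 2)

L0  = sp.diag(sp.Rational(1, 2), -sp.Rational(1, 2))
L1  = sp.Matrix([[0, 0], [1, 0]])
Lm1 = sp.Matrix([[0, -1], [0, 0]])

def vertex(y):
    # the factor L_{-1} - 2 y L_0 + y^2 L_1 appearing in (217)
    return Lm1 - 2*y*L0 + y**2*L1

ket = sp.Matrix([1, 0])      # |j, j>,  L_0 eigenvalue +1/2
bra = sp.Matrix([[0, 1]])    # <j,-j|,  L_0 eigenvalue -1/2
exp_zL1 = sp.eye(2) + z*L1   # exact, since L_1**2 = 0

amp = sp.expand((bra * exp_zL1 * vertex(y1) * vertex(y2) * ket)[0])
closed = z**(2*j - 2) * (2*j*y2*(z - y1)*(2*j*y1*(z - y2) - y2*(z - y1)))
assert sp.simplify(amp - closed) == 0
print("matrix element (221) reproduced at j = 1/2:", sp.factor(amp))
```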
Alternatively, using the approach in this chapter, this is easily understood in terms of the propagators of the fundamental fields: $\displaystyle\langle T_{zz}(y_{1})T_{zz}(y_{2})\rangle_{\lambda}$ $\displaystyle=\frac{1}{(8G)^{2}}\partial_{y_{1}}\partial_{y_{2}}\langle f^{\prime}(y_{1})f^{\prime}(y_{2})\rangle_{0}$ (220) $\displaystyle+\frac{r_{c}^{2}}{\left(16G\right)^{2}}\left(\partial^{2}_{y_{1}}\partial^{2}_{y_{2}}\langle f^{\prime}(y_{1})f^{\prime}(y_{2})\rangle_{0}\right)\langle\bar{f}^{\prime}(\bar{y}_{1})\bar{f}^{\prime}(\bar{y}_{2})\rangle_{0}$ $\displaystyle=\frac{c}{2\left(y_{1}-y_{2}\right)^{4}}+\frac{5\pi^{2}\lambda^{2}c^{2}}{6\left(y_{1}-y_{2}\right)^{6}\left(\bar{y}_{1}-\bar{y}_{2}\right)^{2}}\,.$ Meanwhile, the SL$(2,\mathbb{R})$ matrix element evaluates to $\displaystyle\langle j,-j\mid\exp\left(L_{1}z\right)\left(L_{-1}-2y_{1}L_{0}+y_{1}^{2}L_{1}\right)\left(L_{-1}-2y_{2}L_{0}+y_{2}^{2}L_{1}\right)\mid j,j\rangle$ (221) $\displaystyle=z^{2j-2}\left[2jy_{2}\left(z-y_{1}\right)\left(2jy_{1}(z-y_{2})-y_{2}(z-y_{1})\right)\right]\,.$ The integral (219) reduces to $\displaystyle\langle W[z,0]\rangle_{\lambda}$ $\displaystyle=z^{2j}\bigg{[}1+\frac{36j}{cz^{2}}\int^{z}_{0}dy_{1}\int^{y_{1}}_{0}dy_{2}\frac{y_{2}(z-y_{1})\left(2jy_{1}(z-y_{2})-y_{2}(z-y_{1})\right)}{(y_{1}-y_{2})^{4}}$ (222) $\displaystyle+\frac{60\pi^{2}\lambda^{2}j}{z^{2}}\int^{z}_{0}dy_{1}\int^{y_{1}}_{0}dy_{2}\frac{y_{2}(z-y_{1})\left(2jy_{1}(z-y_{2})-y_{2}(z-y_{1})\right)}{(y_{1}-y_{2})^{6}(\bar{y}_{1}-\bar{y}_{2})^{2}}\bigg{]}\,,$ which clearly diverges when $y_{2}\rightarrow y_{1}$ or $\bar{y}_{2}\rightarrow\bar{y}_{1}$. We dimensionally regularize the stress tensor correlators to evaluate these divergent integrals (222). The $\mathcal{O}(\frac{\lambda^{0}}{c})$ integral has already been evaluated in [63] via dimensional regularization, which gives $\frac{36j}{cz^{2}}\int^{z}_{0}dy_{1}\int^{y_{1}}_{0}dy_{2}\,\frac{y_{2}(z-y_{1})\left(2jy_{1}(z-y_{2})-y_{2}(z-y_{1})\right)}{(y_{1}-y_{2})^{4}}=\frac{12j(j+1)}{c}\ln z\,.$ (223) To deal with the $\mathcal{O}(\lambda^{2}c^{0})$ integral in (222), we first specify an integration contour in the complex plane that is a straight line along the direction towards $z$: $\displaystyle y_{2}=y_{1}t,\quad 0\leq t\leq 1\,,$ (224) $\displaystyle y_{1}=zT,\quad 0\leq T\leq 1\,,$ and we find $\displaystyle\frac{60\pi^{2}\lambda^{2}j}{z^{2}}\int^{z}_{0}dy_{1}\int^{y_{1}}_{0}dy_{2}\,\frac{y_{2}(z-y_{1})\left(2jy_{1}(z-y_{2})-y_{2}(z-y_{1})\right)}{(y_{1}-y_{2})^{6}(\bar{y}_{1}-\bar{y}_{2})^{2}}$ (225) $\displaystyle=$ $\displaystyle\frac{60\pi^{2}\lambda^{2}j}{|z|^{4}}\int^{1}_{0}dT\,\frac{1-T}{T^{5}}\int^{1}_{0}dt\,\frac{2j(t-t^{2}T)-t^{2}(1-T)}{(1-t)^{8}}\,.$ The above integral is evaluated via dimensional regularization: $\displaystyle\frac{60\pi^{2}\lambda^{2}j}{|z|^{4}}\int^{1}_{0}dT\,\frac{1-T}{T^{5-2\varepsilon}}\int^{1}_{0}dt\,\frac{2j(t-t^{2}T)-t^{2}(1-T)}{(1-t)^{8-2\varepsilon}}$ (226) $\displaystyle=$ $\displaystyle\frac{\pi^{2}j(9j-1)\lambda^{2}}{21|z|^{4}}\,.$ In summary, the leading order correction to the Wilson line is $\langle W[z,0]\rangle_{\lambda}=z^{2j}\left(1+\frac{12j(j+1)}{c}\ln z+\frac{\pi^{2}j(9j-1)\lambda^{2}}{21|z|^{4}}\right)\,.$ (227) An alternative renormalization approach, which yields the same numerical coefficient as (226), is to introduce cutoffs $\varepsilon_{1}$ and $\varepsilon_{2}$ as $\displaystyle\frac{60\pi^{2}\lambda^{2}j}{|z|^{4}}\int^{1}_{\varepsilon_{2}}dT\,\frac{1-T}{T^{5}}\int^{1-\varepsilon_{1}}_{0}dt\,\frac{2j(t-t^{2}T)-t^{2}(1-T)}{(1-t)^{8}}\,,$ (228) and to perform “minimal subtraction” to remove the divergent terms; evaluating (228) gives $\frac{\pi^{2}j(9j-1)\lambda^{2}}{21|z|^{4}}\,.$ (229)
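The Beta-function bookkeeping behind (225)-(226) can also be reproduced symbolically. In the sketch below (our own cross-check, not taken from [63]), each factorized one-dimensional integral is an analytically continued Euler Beta function, and the poles of the individual Gamma functions cancel in the ratios, leaving the finite coefficient $j(9j-1)/21$ at $\varepsilon\rightarrow 0$:

```python
# Sketch (our own cross-check of (226)): use the analytic continuation
#   int_0^1 x^(a-1) (1-x)^(b-1) dx = B(a, b) = Gamma(a) Gamma(b) / Gamma(a+b)
# for each one-dimensional integral in (225).
import sympy as sp

j, T, eps = sp.symbols('j T epsilon', positive=True)
B = lambda a, b: sp.gamma(a)*sp.gamma(b)/sp.gamma(a + b)

# inner t-integral: int_0^1 dt [2j t - (2j T + 1 - T) t^2] (1-t)^(2 eps - 8)
inner = sp.expand(2*j*B(2, 2*eps - 7) - (2*j*T + 1 - T)*B(3, 2*eps - 7))
c0, c1 = inner.coeff(T, 0), inner.coeff(T, 1)

# outer T-integral: int_0^1 dT (1 - T) T^(2 eps - 5) (c0 + c1 T)
outer = c0*B(2*eps - 4, 2) + c1*B(2*eps - 3, 2)

value = sp.limit(sp.gammasimp(outer), eps, 0)
print(sp.factor(60*j*value))   # -> j*(9*j - 1)/21, the coefficient in (226)
```

Restoring the prefactor $\pi^{2}\lambda^{2}/|z|^{4}$ then reproduces the finite value in (226).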
#### 3 AdS3 Wilson line correlators

The correlator involving the product of a holomorphic and anti-holomorphic Wilson line is a scalar correlator. A meaningful check is to compute this Wilson line product correlator and see whether it is consistent with $T\overline{T}$-deformed scalar correlators [88, 91, 89]. We use conformal perturbation theory at $\mathcal{O}(\lambda)$ to find $\displaystyle\begin{split}\langle W[z,0]\overline{W}[\bar{z},0]\rangle_{\lambda}&=\left\langle W[z,0]\overline{W}[\bar{z},0]\exp\left(\lambda\int d^{2}w~{}T_{zz}(w)T_{\bar{z}\bar{z}}(\bar{w})\right)\right\rangle\\\ &=\langle W[z,0]\overline{W}[\bar{z},0]\rangle_{0}+\lambda\int d^{2}w\left\langle T_{zz}(w)T_{\bar{z}\bar{z}}(\bar{w})W[z,0]\overline{W}[\bar{z},0]\right\rangle_{0}\\\ &=|z|^{-4h(j)}+\lambda\int d^{2}w\left\langle T_{zz}(w)W[z,0]\right\rangle_{0}\left\langle T_{\bar{z}\bar{z}}(\bar{w})\overline{W}[\bar{z},0]\right\rangle_{0}\\\ &=|z|^{-4h(j)}+\lambda h(j)^{2}|z|^{4}\int\frac{d^{2}w}{|w|^{4}|w-z|^{4}}\left\langle W[z,0]\right\rangle_{0}\left\langle\overline{W}[\bar{z},0]\right\rangle_{0}\\\ &=|z|^{-4h(j)}+\lambda h(j)^{2}|z|^{4}\int\frac{d^{2}w}{|w|^{4}|w-z|^{4}}|z|^{-4h(j)}\\\ &=|z|^{-4h(j)}\left(1+\lambda h(j)^{2}|z|^{4}\mathcal{I}_{2222}(0,z,0,\bar{z})\right)\,.\end{split}$ (230) Here, we have used the Ward identity in [64] $\displaystyle\begin{split}&\left\langle T_{zz}(w)W\left[z,0\right]\right\rangle_{0}=\frac{h(j)z^{2}}{\left(z-w\right)^{2}w^{2}}\left\langle W\left[z,0\right]\right\rangle_{0}\,,\\\ &\left\langle T_{\bar{z}\bar{z}}(\bar{w})\overline{W}\left[\bar{z},0\right]\right\rangle_{0}=\frac{h(j)\bar{z}^{2}}{\left(\bar{z}-\bar{w}\right)^{2}\bar{w}^{2}}\left\langle\overline{W}\left[\bar{z},0\right]\right\rangle_{0}\,,\end{split}$ (231) which displays the bi-local structure of the gravitational Wilson line. From appendix A in [91], the integral appearing in (230) is of the general form $\displaystyle\begin{aligned} &\mathcal{I}_{a_{1},\cdots,a_{m},b_{1},\cdots,b_{n}}\left(z_{i_{1}},\cdots,z_{i_{m}},\bar{z}_{j_{1}},\cdots,\bar{z}_{j_{n}}\right)=\int\frac{d^{2}z}{\prod^{m}_{k=1}\left(z-z_{i_{k}}\right)^{a_{k}}\prod^{n}_{p=1}\left(\bar{z}-\bar{z}_{j_{p}}\right)^{b_{p}}}\,,\end{aligned}$ (232) and is evaluated via dimensional regularization. In particular, $\displaystyle\begin{split}\mathcal{I}_{2222}(0,z,0,\bar{z})&=\int\frac{d^{2}w}{|w|^{4}|w-z|^{4}}\\\ &=\frac{4\pi}{|z|^{6}}\left(\frac{4}{\varepsilon}+2\ln|z|^{2}+2\ln\pi+2\gamma-5\right)\\\ &=\frac{1}{|z|^{6}}\left(C_{1}+C_{2}\ln|z|^{2}\right)\,,\end{split}$ (233) where $C_{1}$ and $C_{2}$ are constant coefficients. We arrive at $\displaystyle\langle W[z,0]\overline{W}[\bar{z},0]\rangle_{\lambda}$ $\displaystyle=|z|^{-4h(j)}\left(1+\frac{\lambda h(j)^{2}\left(C_{1}+C_{2}\ln|z|^{2}\right)}{|z|^{2}}\right)\,,$ (234) which exactly matches what we expect at $\mathcal{O}(\lambda c^{0})$ from previous analyses of $T\overline{T}$-deformed scalar correlators [88, 91, 89]. This confirms the claim that the correlator of two Wilson lines behaves as a scalar correlator, at least at this order. Additionally, at leading order in $\lambda$ and in the large-$c$ limit, (230) agrees with the structure one would expect from the linear mixing of sources and expectation values discussed above.
Schematically, at leading order, $\displaystyle\left\langle P\exp\left[\int^{z}_{0}dy\left(e_{i}(\lambda)L_{1}+\frac{6}{c}T_{zz}(y)L_{-1}\right)\right]P\exp\left[\int^{\bar{z}}_{0}d\bar{y}\left(\bar{e}_{i}(\lambda)\bar{L}_{1}+\frac{6}{c}T_{\bar{z}\bar{z}}(\bar{y})\bar{L}_{-1}\right)\right]\right\rangle_{\lambda}$ $\displaystyle=\Bigg{\langle}P\exp\left[\int^{z}_{0}dy\left((1+\lambda T_{\bar{z}\bar{z}}(y))L_{1}+\frac{6}{c}T_{zz}(y)L_{-1}\right)\right]$ $\displaystyle\qquad\qquad\cdot P\exp\left[\int^{\bar{z}}_{0}d\bar{y}\left((1+\lambda T_{zz}(y))\bar{L}_{1}+\frac{6}{c}T_{\bar{z}\bar{z}}(\bar{y})\bar{L}_{-1}\right)\right]\Bigg{\rangle}_{\lambda}$ $\displaystyle=\langle W[z,0]\rangle_{0}\langle\overline{W}[\bar{z},0]\rangle_{0}+\lambda\langle\exp\left(zL_{1}\right)L_{1}\rangle\int^{z}_{0}dy\left\langle T_{\bar{z}\bar{z}}(\bar{y})\overline{W}[\bar{z},0]\right\rangle_{0}+\mathcal{O}(\lambda^{2})$ $\displaystyle=|z|^{-4h(j)}+\lambda\partial_{z}\langle\exp\left(zL_{1}\right)\rangle\int^{z}_{0}dy\left\langle T_{\bar{z}\bar{z}}(\bar{y})\overline{W}[\bar{z},0]\right\rangle_{0}+\mathcal{O}(\lambda^{2})$ $\displaystyle=|z|^{-4h(j)}-2\lambda h(j)z^{-2h(j)-1}\int^{z}_{0}dy\frac{h(j)\bar{z}^{2}}{(\bar{z}-\bar{y})^{2}\bar{y}^{2}}\langle\overline{W}[\bar{z},0]\rangle_{0}+\mathcal{O}(\lambda^{2})$ $\displaystyle=|z|^{-4h(j)}-2\lambda h(j)^{2}z^{-2h(j)-1}\bar{z}^{-2h(j)+2}\int^{z}_{0}\frac{dy}{(\bar{z}-\bar{y})^{2}\bar{y}^{2}}+\mathcal{O}(\lambda^{2})$ $\displaystyle=|z|^{-4h(j)}\left(1+\lambda h(j)^{2}z^{-1}\bar{z}^{2}\left(\frac{c_{1}+c_{2}\ln|z|^{2}}{\bar{z}^{3}}\right)+\mathcal{O}(\lambda^{2})\right)$ $\displaystyle=|z|^{-4h(j)}\left(1+\lambda h(j)^{2}\left(\frac{c_{1}+c_{2}\ln|z|^{2}}{|z|^{2}}\right)+\mathcal{O}(\lambda^{2})\right)\,,$ (235) where in the large-$c$ limit, the quantum corrections to the Wilson line’s scaling dimension $h(j)=-j$ are suppressed and $\langle\exp\left(zL_{1}\right)\rangle=z^{-2h}=z^{2j}$. The integral in (235) may be evaluated via the integration cutoff introduced in (228) or by dimensional regularization. Using either method, one finds that the result has the same structure as (234), where $c_{1}$ and $c_{2}$ are constant coefficients. We emphasize that if one had not used the linear mixing or conformal perturbation theory, but rather expanded each path-ordered exponential in $\langle W[z_{2},z_{1}]\overline{W}[\bar{z}_{2},\bar{z}_{1}]\rangle_{\lambda}$, then the leading contribution in $\lambda$ would be at $\mathcal{O}(\lambda^{2}c^{0})$, which arises from integrating the tree-level deformed stress tensor two-point function. To see this, let us compute the correction $\delta\langle W[z,0]\overline{W}[\bar{z},0]\rangle_{\lambda}=\langle W[z,0]\overline{W}[\bar{z},0]\rangle_{\lambda}-\langle W[z,0]\overline{W}[\bar{z},0]\rangle_{0}$ (236) to the correlator using this prescription. We expand the path-ordered exponential (for the single Wilson line (222), we expanded up to $\mathcal{O}(\frac{1}{c^{2}})$ in the path-ordered exponential since the planar one-point function vanishes; at $\mathcal{O}(\frac{1}{c})$, the path-ordered exponential reduces to a regular integral)
for $W[z,0]$ and $\overline{W}[\bar{z},0]$ up to $\mathcal{O}(\frac{1}{c})$ in (214), which gives $\displaystyle\left(\frac{6}{c}\right)^{2}\int^{z}_{0}dy_{1}\int^{\bar{z}}_{0}d\bar{y}_{2}$ $\displaystyle\langle j,-j\mid e^{L_{1}z}(L_{-1}-2y_{1}L_{0}+y_{1}^{2}L_{1})e^{\bar{L}_{1}\bar{z}}(\bar{L}_{-1}-2\bar{y}_{2}\bar{L}_{0}+\bar{y}_{2}^{2}\bar{L}_{1})\mid j,j\rangle$ (237) $\displaystyle\cdot\langle T_{zz}(y_{1})T_{\bar{z}\bar{z}}(\bar{y}_{2})\rangle_{\lambda}\,.$ Using (92) and the Feynman rules for the relevant tree diagrams, $\displaystyle\langle T_{zz}(y_{1})T_{\bar{z}\bar{z}}(\bar{y}_{2})\rangle_{\lambda}$ $\displaystyle=\frac{r_{c}^{2}}{(16G)^{2}}\left(\partial^{2}_{y_{1}}\langle f^{\prime}(y_{1})f^{\prime}(y_{2})\rangle_{0}\right)\left(\partial^{2}_{\bar{y}_{2}}\langle\bar{f}^{\prime}(\bar{y}_{1})\bar{f}^{\prime}(\bar{y}_{2})\rangle_{0}\right)+\mathcal{O}(r_{c}^{3})$ (238) $\displaystyle=\frac{\pi^{2}\lambda^{2}c^{2}}{4}\frac{1}{\left(y_{1}-y_{2}\right)^{4}\left(\bar{y}_{1}-\bar{y}_{2}\right)^{4}}+\mathcal{O}(\lambda^{3})\,.$ Thus the above prescription involving correlators of Wilson lines (237) is incorrect because the leading correction enters at $\mathcal{O}(\lambda^{2}c^{0})$, rather than the expected order of $\mathcal{O}(\lambda c^{0})$ for scalar two-point correlators. Furthermore, one may also consider a string of holomorphic and anti-holomorphic stress tensor insertions in correlators involving a Wilson line. For instance, we can calculate this kind of correlator via conformal perturbation theory at $\mathcal{O}(\lambda)$: $\left\langle T_{zz}\left(w_{1}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)W[z,0]\right\rangle_{\lambda}=\lambda\int d^{2}y\left\langle T_{zz}(y)T_{zz}\left(w_{1}\right)W[0,z]\right\rangle_{0}\left\langle T_{\bar{z}\bar{z}}(\bar{y})T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)\right\rangle_{0}\,.$ (239) In [64], the following tree-level correlator to $\mathcal{O}(1/c^{0})$ was derived: $\displaystyle\left\langle T_{zz}\left(w_{1}\right)T_{zz}\left(w_{2}\right)W[z,0]\right\rangle_{0}$ (240) $\displaystyle=\frac{j^{2}z^{2j+4}}{w_{1}^{2}\left(z-w_{1}\right)^{2}w_{2}^{2}\left(z-w_{2}\right)^{2}}+\frac{jz^{2j+2}}{w_{1}\left(z-w_{1}\right)w_{2}\left(z-w_{2}\right)\left(w_{1}-w_{2}\right)^{2}}\,,$ in agreement with the predictions from the conformal Ward identities. Therefore, using the fact that $\left\langle T_{\bar{z}\bar{z}}(\bar{y})T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)\right\rangle_{0}=\frac{c}{2(\bar{y}-\bar{w}_{2})^{4}}\,,$ (241) the integral (239) reduces to $\displaystyle\left\langle T_{zz}\left(w_{1}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)W[z,0]\right\rangle_{\lambda}$ $\displaystyle=\frac{cj\lambda z^{2j}}{2}\int d^{2}y\bigg{[}\frac{jz^{4}}{y^{2}(y-z)^{2}w_{1}^{2}\left(z-w_{1}\right)^{2}\left(\bar{y}-\bar{w}_{2}\right)^{4}}$ (242) $\displaystyle-\frac{z^{2}}{y(y-z)w_{1}\left(z-w_{1}\right)\left(y-w_{1}\right)^{2}\left(\bar{y}-\bar{w}_{2}\right)^{4}}\bigg{]}\,,$ and is evaluated in terms of the integrals defined in (232): $\displaystyle\left\langle T_{zz}\left(w_{1}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)W[z,0]\right\rangle_{\lambda}$ (243) $\displaystyle=\frac{j\lambda cz^{2j+2}}{2w_{1}\left(z-w_{1}\right)}\left[\frac{jz^{2}}{w_{1}\left(z-w_{1}\right)}\mathcal{I}_{224}\left(0,z,\bar{w}_{2}\right)-\mathcal{I}_{1124}\left(0,z,w_{1},\bar{w}_{2}\right)\right]\,.$ Another example is a correlator involving two insertions of anti-holomorphic stress tensors, a holomorphic stress tensor, and a holomorphic Wilson line.
The desired correlator $\left\langle T_{zz}\left(w_{1}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{3}\right)W[0,z]\exp\left(\lambda\int d^{2}yT_{zz}(y)T_{\bar{z}\bar{z}}(\bar{y})\right)\right\rangle$ (244) is easily computable at $\mathcal{O}(\lambda)$ via conformal perturbation theory. Noting that the undeformed tree-level planar three-point stress tensor correlator is $\langle T_{\bar{z}\bar{z}}(\bar{y})T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{3}\right)\rangle_{0}=\frac{c}{\left(\bar{y}-\bar{w}_{2}\right)^{2}\left(\bar{w}_{2}-\bar{w}_{3}\right)^{2}\left(\bar{w}_{3}-\bar{y}\right)^{2}}\,,$ (245) the leading order correction to the integral (244) at $\mathcal{O}(\lambda c)$ is $\displaystyle\left\langle T_{zz}\left(w_{1}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{3}\right)W[z,0]\right\rangle_{\lambda}$ $\displaystyle=\left\langle T_{zz}\left(w_{1}\right)W[z,0]\right\rangle_{0}\left\langle T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{3}\right)\right\rangle_{0}$ $\displaystyle+\lambda\int d^{2}y\,\left\langle T_{zz}(y)T_{zz}\left(w_{1}\right)W[0,z]\right\rangle_{0}\langle T_{\bar{z}\bar{z}}(\bar{y})T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{3}\right)\rangle_{0}$ $\displaystyle=\frac{h(j)z^{2}}{\left(z-w_{1}\right)^{2}w_{1}^{2}}\left\langle W\left[z,0\right]\right\rangle_{0}\frac{c}{2(\bar{w}_{2}-\bar{w}_{3})^{4}}$ $\displaystyle+cj\lambda z^{2j}\int d^{2}y\,\Bigg{[}\frac{jz^{4}}{y^{2}(y-z)^{2}w_{1}^{2}\left(z-w_{1}\right)^{2}\left(\bar{y}-\bar{w}_{2}\right)^{2}\left(\bar{w}_{2}-\bar{w}_{3}\right)^{2}\left(\bar{w}_{3}-\bar{y}\right)^{2}}$ $\displaystyle-\frac{z^{2}}{y(y-z)w_{1}\left(z-w_{1}\right)\left(y-w_{1}\right)^{2}\left(\bar{y}-\bar{w}_{2}\right)^{2}\left(\bar{w}_{2}-\bar{w}_{3}\right)^{2}\left(\bar{w}_{3}-\bar{y}\right)^{2}}\Bigg{]}\,.$ (246) Evaluating (246) in terms of the integrals defined in (232), we find $\displaystyle\left\langle T_{zz}\left(w_{1}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{2}\right)T_{\bar{z}\bar{z}}\left(\bar{w}_{3}\right)W[z,0]\right\rangle_{\lambda}=\frac{h(j)cz^{2-2h(j)}}{2(z-w_{1})^{2}w_{1}(\bar{w}_{2}-\bar{w}_{3})^{4}}$ $\displaystyle+\frac{j\lambda cz^{2j+2}}{w_{1}\left(z-w_{1}\right)(\bar{w}_{2}-\bar{w}_{3})^{2}}\left[\frac{jz^{2}}{w_{1}(z-w_{1})}\mathcal{I}_{2222}\left(0,z,\bar{w}_{2},\bar{w}_{3}\right)-\mathcal{I}_{11222}\left(0,z,w_{1},\bar{w}_{2},\bar{w}_{3}\right)\right]\,.$ (247) The integrals presented here, which are of the general form given in (232) but with higher-valued indices, can be expressed in terms of derivatives and linear combinations of known integrals with lower-valued indices. See appendix A in [91] for several detailed examples. One can automate the above perturbative analysis in $\lambda$ to produce more complicated expressions for correlators involving products of $m$-insertions of holomorphic stress tensors, $n$-insertions of anti-holomorphic stress tensors, and a network of Wilson lines (e.g. $p$-insertions of the holomorphic Wilson line and $q$-insertions of the anti-holomorphic Wilson line) following [64].
The leading correction for such a general correlator takes the form $\displaystyle\left\langle\prod^{m}_{i=1}T_{zz}(x_{i})\prod^{n}_{j=1}T_{\bar{z}\bar{z}}(\bar{w}_{j})\prod^{p}_{k=1}W[z_{k+1},z_{k}]\prod^{q}_{l=1}\bar{W}[\bar{r}_{l+1},\bar{r}_{l}]\exp\left(\lambda\int d^{2}yT_{zz}(y)T_{\bar{z}\bar{z}}(\bar{y})\right)\right\rangle$ (248) $\displaystyle=\left\langle\prod^{m}_{i=1}T_{zz}(x_{i})\prod^{p}_{k=1}W[z_{k+1},z_{k}]\right\rangle_{0}\left\langle\prod^{n}_{j=1}T_{\bar{z}\bar{z}}(\bar{w}_{j})\prod^{q}_{l=1}\bar{W}[\bar{r}_{l+1},\bar{r}_{l}]\right\rangle_{0}$ $\displaystyle+\lambda\int d^{2}y\left\langle T_{zz}(y)\prod^{m}_{i=1}T_{zz}(x_{i})\prod^{p}_{k=1}W[z_{k+1},z_{k}]\right\rangle_{0}\left\langle T_{\bar{z}\bar{z}}(\bar{y})\prod^{n}_{j=1}T_{\bar{z}\bar{z}}(\bar{w}_{j})\prod^{q}_{l=1}\bar{W}[\bar{r}_{l+1},\bar{r}_{l}]\right\rangle_{0}$ $\displaystyle\quad\,+\mathcal{O}(\lambda^{2})\,.$

### 7 Conclusion

First, we gave evidence for the Nambu-Goto action (in Hamiltonian form) as the all-order action for $3d$ gravity with a cutoff planar boundary. Second, we used the action to compute correlators of the stress tensor operator to two-loop order. The proposal for the action was based on finding a suitable field redefinition yielding Nambu-Goto up to eighth order in fields. Proving this conjecture and determining the explicit form of the field redefinition to all orders is desirable. Although the action takes the familiar Nambu-Goto form, the stress tensor is not the canonical one, which is due to the way that the original translation symmetries of the AdS3 background act on the redefined fields. Our computation of stress tensor correlators to two-loop order revealed the need for one stress tensor counterterm with an associated undetermined finite part. As mentioned in the introduction, considering the overarching arguments supporting the renormalizability of pure $3d$ gravity, even when accounting for a finite planar cutoff boundary, it is anticipated that symmetries will play a crucial role in determining all the parameters involved. The implementation of these symmetries is complicated by the non-Lorentz-invariant form of the action and by the nonlocal field redefinition that puts the action in Nambu-Goto form. A task for the future is to systematically implement the Ward identities corresponding to these symmetries and check if these yield unique results for stress tensor correlators. The ultimate goal here is to get sufficient control over the stress tensor correlators to say something about their short-distance structure, since this gets to the heart of the nature of this theory, including its anticipated nonlocal character; e.g. [151, 152]. Third, in the $3d$ Chern-Simons setting, we studied modifications to correlators involving boundary-anchored Wilson lines, which were induced by a $T\overline{T}$ deformation on the $2d$ boundary; results were presented at both the classical level (using modified boundary conditions) and the quantum-mechanical level (using conformal perturbation theory). Developing cases with curved cutoff boundaries would also be worthwhile. The Chern-Simons computation of the action for a finite $S^{2}$ boundary is considered in appendix 8.B, and it should be possible to extend this to one loop and compare with results in [153]; see also [154, 155] for related results. The technical complication here is the two patches needed to define the gauge connections on the sphere.
We close this chapter by commenting on the appearance of the Nambu-Goto action in our analysis. By construction, solutions of our Nambu-Goto equations of motion yield flat two-dimensional surfaces embedded in AdS3. On the other hand, the precise Nambu-Goto action that arises is that of a string worldsheet embedded in flat $\mathbb{R}^{3}$, with $\alpha^{\prime}$ controlled by the cutoff $r_{c}$. Since one usually thinks of the solutions as describing extremal area surfaces embedded in this flat spacetime, there is thus a correspondence between flat surfaces embedded in AdS3 and extremal area surfaces embedded in $\mathbb{R}^{3}$.

## Chapter 2 $T\overline{T}$ in JT Gravity and BF Gauge Theory

JT gravity can be represented using a first-order formulation akin to a two-dimensional BF theory. This formulation can be viewed as the dimensional reduction of the Chern-Simons description of $3d$ gravity. We consider $T\overline{T}$-type deformations of the $(0+1)$-dimensional dual to this $2d$ BF theory and interpret the deformation as a modification of the BF theory boundary conditions. The fundamental observables in this deformed BF theory and its $3d$ Chern-Simons lift are Wilson lines and loops. In the last chapter, we studied the $3d$ Chern-Simons setting and the modifications to correlators involving boundary-anchored Wilson lines induced by a $T\overline{T}$ deformation on the $2d$ boundary. In this chapter, we determine the $T\overline{T}$-deformed boundary conditions in the BF description of JT gravity. We discuss Wilson lines in the BF theory and calculate the analogous deformed Wilson line correlators in $2d$ BF theory below the Hagedorn temperature, where the principal series dominates over the discrete series.

### 1 Introduction

In this chapter, we consider the $T\overline{T}$-deformation of two-dimensional gauge or gravity theories, which are constructed in the following way. We begin with a three-dimensional bulk gravity theory dual to a $2d$ CFT. Then, we deform the boundary CFT by the $T\overline{T}$ operator and interpret this deformation as a modification of the bulk gravity theory. Finally, we dimensionally reduce this scenario on the circle to obtain a correspondence between a deformed $2d$ gravity theory and a dual one-dimensional theory. One can also rewrite the gravity theory in gauge theory variables and study the deformation of the $2d$ gauge theory. In the diagram (6), this corresponds to deforming the $2d$ WZW model in the top-right corner and then studying the image of this deformation under the sequence of maps relating this theory to $2d$ JT gravity and $2d$ BF theory. We emphasize that this deformation is not the same as directly applying the $T\overline{T}$ deformation in the JT gravity or BF theory itself. Indeed, in the JT case, it is unclear how to define a local stress tensor in a theory of gravity, and in the BF case, the theory is topological, so the stress tensor vanishes. We also note that, although we consider $T\overline{T}$-like deformations of two-dimensional $\mathrm{AdS}$ gravity theories, our procedure is quite different from defining the $T\overline{T}$ deformation for a $2d$ field theory on a fixed $\mathrm{AdS}_{2}$ geometry. The latter problem has been considered in [155, 156].
Likewise, although the deformation of BF gauge theory treated in this manuscript is not the same as performing a $T\overline{T}$ deformation of a $2d$ gauge theory directly, such direct deformations of gauge theories have been considered for $2d$ Yang-Mills both with and without matter [157, 158, 159, 160]. Instead, we study a deformation holographically dual to a $T\overline{T}$-like deformation of the boundary $(0+1)$-dimensional theory rather than a $T\overline{T}$-deformation of the $2d$ gauge theory itself. The layout of this chapter is as follows. Sections 2 and 3 review standard results about $3d$ gravitational Chern-Simons theory and $2d$ JT gravity, respectively, that are relevant for this chapter, including the interpretation of a boundary $T\overline{T}$ deformation in both theories. In section 4, we find the change in BF theory boundary conditions corresponding to a $T\overline{T}$-like deformation of the dual $1d$ theory for two different choices of boundary conditions in the seed theory. In section 5, we first study the deformed BF theory’s boundary spectrum and find that the contribution from the principal series dominates the discrete series only below the Hagedorn temperature, which is also where the $T\overline{T}$-deformed Schwarzian description of the boundary spectrum is valid. We conclude the section by computing deformed Wilson lines and their correlators in the BF theory below the Hagedorn temperature. In section 6, we conclude with a summary and discussions on possible extensions of the results presented in this chapter.

### 2 $T\overline{T}$ deformations in $3d$ Chern-Simons theory

In this section, we review the presentation of the Chern-Simons formulation of $\mathrm{AdS}_{3}$ gravity, which will be relevant for later sections. In particular, we recall that the bulk interpretation of a $T\overline{T}$ deformation in the boundary CFT is a change in the boundary conditions for the Chern-Simons gauge field [110].

#### 1 Revisiting $T\overline{T}$-deformed $3d$ SL$(2,\mathbb{R})$ gravitational Chern-Simons

The most general asymptotically $\mathrm{AdS}_{3}$ metric is described by a Fefferman-Graham expansion. It was shown in [110] that, in Chern-Simons variables, such an expansion corresponds to a solution where the connections $a$ and $\bar{a}$ take the more general form $\displaystyle a_{i}$ $\displaystyle=2e^{+}_{i}L_{+}-f^{-}_{i}L_{-}+\omega_{i}L_{0}\,,$ (1) $\displaystyle\bar{a}_{i}$ $\displaystyle=f^{+}_{i}L_{+}-2e^{-}_{i}L_{-}+\omega_{i}L_{0}\,.$ The connections (1) are solutions to the equations of motion when $\displaystyle da+a\wedge a=0\,,$ (2) $\displaystyle d\bar{a}+\bar{a}\wedge\bar{a}=0\,.$ Substituting (1) into (2), we find $\displaystyle d\omega-2\varepsilon_{ab}e^{a}\wedge f^{b}$ $\displaystyle=0\,,$ (3) $\displaystyle de^{a}-\varepsilon^{a}{}_{b}e^{b}\wedge\omega$ $\displaystyle=0\,,$ $\displaystyle df^{a}-\varepsilon^{a}{}_{b}f^{b}\wedge\omega$ $\displaystyle=0\,,$ $\displaystyle e^{a}\wedge f_{a}$ $\displaystyle=0\,,$ which are the zero torsion conditions for the frame $e^{a}$ with spin connection $\omega$. By our conventions, early Latin indices $a,b$ are flat while middle Latin indices $i,j$ are curved; $\varepsilon^{ab}$ is the Levi-Civita symbol with constant entries $\varepsilon_{+-}=-\varepsilon_{-+}=1$, while $\varepsilon^{ij}$ is the Levi-Civita tensor with curved indices $\varepsilon^{x^{+}x^{-}}=-\varepsilon^{x^{-}x^{+}}=\frac{1}{2e}$.
In the presence of a boundary, additional boundary terms are needed in the action to have a well-defined variational principle. Varying the Einstein-Hilbert action written in terms of the Chern-Simons connections gives $\displaystyle\delta S_{\operatorname{EH}}$ $\displaystyle=\delta S_{\operatorname{CS}}[A]-\delta S_{\operatorname{CS}}[\bar{A}]$ (4) $\displaystyle=\frac{1}{8\pi G}\int_{M_{3}}\operatorname{Tr}\left(F\wedge\delta A-\bar{F}\wedge\delta\bar{A}\right)-\frac{1}{16\pi G}\int_{\partial M_{3}}\operatorname{Tr}\left(A\wedge\delta A-\bar{A}\wedge\delta\bar{A}\right)\,.$ We desire a variational principle with Dirichlet boundary conditions for the metric, which corresponds to holding $e^{a}$ fixed at the boundary but letting $f^{a}$ vary. However, going on-shell by using the connections in (1), we find that (4) reduces to $\delta S_{\operatorname{CS}}[A]-\delta S_{\operatorname{CS}}[\bar{A}]=-\frac{1}{8\pi G}\int_{\partial M_{3}}\varepsilon_{ab}\left(e^{a}\wedge\delta f^{b}-f^{a}\wedge\delta e^{b}\right)\,,$ (5) which does not vanish and is inconsistent with the specified boundary conditions. We must add the following boundary term to (4): $S_{\text{bdry}}=-\frac{1}{8\pi G}\int_{\partial M_{3}}\varepsilon_{ab}\left(A^{a}\wedge A^{b}+\bar{A}^{a}\wedge\bar{A}^{b}\right)\,.$ (6) The result is now consistent with Dirichlet boundary conditions since $\delta S_{\text{EH}}+\delta S_{\text{bdry}}=\frac{1}{4\pi G}\int_{\partial M_{3}}\varepsilon_{ab}f^{a}\wedge\delta e^{b}\,.$ (7) From the GKPW dictionary [28, 30], it is understood that $e^{a}$ is the source and $f^{a}$ is the expectation value of the dual operator. In particular, the operator dual to the boundary vielbein is the stress tensor. By identifying $\displaystyle\delta S=4\int_{\partial M_{3}}\,d^{2}x\,\left(\det e\right)\,T^{i}{}_{a}\,\delta e_{i}^{a}\,,$ (8) we find that $\displaystyle T^{i}{}_{a}=\frac{1}{4\pi G}\varepsilon_{ab}\varepsilon^{ij}f^{b}_{j}\,,$ (9) with $\nabla_{[i}f^{a}_{j]}=0$. When we turn to the two-dimensional BF theory in section 4, it will be convenient to refer to the dimensional reduction of the $3d$ Chern-Simons action on a circle. The resulting dimensionally reduced theory is equivalent to BF theory with a particular choice of boundary term. To perform this reduction, we first write $\displaystyle 8\pi G\,S_{\text{CS}}$ $\displaystyle=\frac{1}{2}\int_{M_{3}}\operatorname{Tr}\left(A\wedge dA+\frac{2}{3}A\wedge A\wedge A\right)$ (10) $\displaystyle=\frac{1}{2}\int_{M_{3}}\,d^{3}x\,\varepsilon^{\mu\nu\rho}\operatorname{Tr}\left(A_{\mu}\partial_{\nu}A_{\rho}+\frac{2}{3}A_{\mu}A_{\nu}A_{\rho}\right)$ $\displaystyle=\int_{M_{3}}\operatorname{Tr}\left(A_{\varphi}F_{tr}+A_{r}\partial_{\varphi}A_{t}\right)+\frac{1}{2}\oint_{\partial M_{3}}\operatorname{Tr}A_{t}^{2}\,.$ Next, we impose the boundary condition $A_{t}=A_{\varphi}|_{\partial M_{3}}$ so that $\phi\equiv A_{\varphi}$ and $\partial_{\varphi}=0$ (see [161]). Doing this yields $S_{\text{BF}}=\int_{M_{2}}\operatorname{Tr}\left(\phi F\right)+\frac{1}{2}\oint_{\partial M_{2}}\operatorname{Tr}\left(\phi^{2}\right)\,.$ (11) The first term is the usual action for $2d$ BF theory, which in this case has gauge group $G=\operatorname{SL}(2,\mathbb{R})$. The degrees of freedom in this theory are a gauge field $A_{\mu}$ with field strength $F_{\mu\nu}$ along with an $\operatorname{SL}(2,\mathbb{R})$-valued scalar field $\phi$. We will again consider this action in section 1, where we will recall that the theory is equivalent to JT gravity.
The second term of (11) controls the dynamics of a boundary degree of freedom, which can be described via the Schwarzian theory or the particle-on-a-group theory. We refer to this as a “Schwarzian-type” boundary term, which will be revisited in section 2.

#### 2 Interpretation of the $T\overline{T}$ deformation

The $3d$ gravitational Chern-Simons theory is dual to a conformal field theory on the $2d$ boundary of the spacetime via the usual $\mathrm{AdS}/\mathrm{CFT}$ correspondence. On the other hand, in any two-dimensional field theory enjoying translation invariance, one can define a deformation by the double-trace $T\overline{T}$ operator. Our goal in the present section is to apply this deformation to the boundary CFT and interpret the resulting flow in terms of bulk Chern-Simons variables. We follow the discussion of [110] where this analysis first appeared. We must first express the $T\overline{T}$ deformation in terms of the asymptotic expansion coefficients for the Chern-Simons gauge fields. We have already seen, for instance, in (7) and (9), that the functions $e_{i}^{a}$ correspond to the boundary vielbein (or equivalently the metric) and that the $f_{i}^{a}$ are the dual expectation values which encode the stress tensor as $\displaystyle T^{i}{}_{a}=\frac{1}{4\pi G}\varepsilon_{ab}\varepsilon^{ij}f_{j}^{b}\,.$ (12) On the other hand, using the definition of the determinant in terms of the Levi-Civita symbol, the $T\overline{T}$ operator can be written as $T\overline{T}=-2\varepsilon^{ab}\varepsilon_{ij}T^{i}{}_{a}T^{j}{}_{b}\,.$ (13) In terms of the one-forms $f^{-}$ and $f^{+}$, one therefore has $\displaystyle T\overline{T}=\frac{1}{(4\pi G)^{2}}f^{-}\wedge f^{+}\,.$ (14) The flow equation for the boundary action can be written as $\displaystyle\frac{\partial S}{\partial\lambda}=\frac{1}{(4\pi G)^{2}}\int_{\partial M_{3}}f^{-}\wedge f^{+}\,.$ (15) We note that this is a flow equation for the _combined_ boundary action, which in the undeformed case is a sum of three terms: $\displaystyle S(\lambda=0)=S_{\text{CS}}[A]-S_{\text{CS}}[\bar{A}]+S_{\text{bdry}}\,.$ (16) In section 1, we saw that variation of the first two terms $S_{\text{CS}}[A]-S_{\text{CS}}[\bar{A}]$ generated a boundary variation of the form $\varepsilon_{ab}(e^{a}\wedge\delta f^{b}-f^{a}\wedge\delta e^{b})$. The first term involving $\delta f^{b}$ was unsuitable for our desired variational principle, so we added $S_{\text{bdry}}$ to cancel this variation. We will make the ansatz that the finite-$\lambda$ deformed boundary action has the same structure as a sum of three terms involving sources $e_{i}^{a}(\lambda)$ and dual expectation values $f_{i}^{a}$. In this ansatz we allow the sources to acquire $\lambda$ dependence under the flow, but not the expectation values. As a result, the total boundary variation (7) of our $\lambda$-dependent ansatz takes the form $\displaystyle\delta S=\frac{1}{4\pi G}\int_{\partial M_{3}}\varepsilon_{ab}f^{a}\wedge\delta e^{b}(\lambda)\,.$ (17) We now substitute this ansatz into the flow equation (15).
More precisely, if the boundary action $S$ satisfies (15), then its variation satisfies $\displaystyle\frac{\partial(\delta S)}{\partial\lambda}=\frac{1}{(4\pi G)^{2}}\int_{\partial M_{3}}\delta(f^{-}\wedge f^{+})\,.$ (18) This then implies $\displaystyle\int_{\partial M_{3}}\varepsilon_{ab}f^{a}\wedge\delta\left(\frac{\partial e^{b}(\lambda)}{\partial\lambda}\right)=\frac{1}{4\pi G}\int_{\partial M_{3}}\varepsilon_{ab}f^{a}\wedge\delta f^{b}\,.$ (19) We see that (19) will be satisfied if $\displaystyle\frac{\partial e^{b}(\lambda)}{\partial\lambda}=\frac{1}{4\pi G}f^{b}\,.$ (20) Since $f^{b}$ is independent of $\lambda$ by assumption, this equation can be trivially integrated to find $\displaystyle e_{i}^{a}(\lambda)=e_{i}^{a}(0)+\frac{\lambda}{4\pi G}f_{i}^{a}\,,$ (21) and $f_{i}^{a}(\lambda)=f_{i}^{a}(0)$. One can show that if the spin connection $\omega$ vanishes in the seed theory (as we will typically assume), then $\omega(\lambda)=0$ along the flow. We have characterized the full solution to the flow equation.

_Remarks on deformed boundary conditions_

We now pause to make several comments on this interpretation. We see that the effect of a boundary $T\overline{T}$ deformation is to rotate our undeformed source $e_{i}^{a}$ into a new source $e_{i}^{a}(\lambda)$, which depends linearly on the corresponding undeformed expectation value. Since $e_{i}^{a}$ determines the boundary metric, this means that the deformed theory sees an effective stress-tensor-dependent metric. This is reminiscent of the result of $T\overline{T}$-deforming a two-dimensional field theory defined on a cylinder of radius $R$. As we will review around (183), in the zero-momentum sector, this deformation has the interpretation of placing the theory on a cylinder with an effective energy-dependent radius $\widetilde{R}(R,E_{n})$. Next, we note that, although the sources $e_{i}^{a}$ have been modified, the variational principle defining our theory has not changed when expressed in terms of the new sources. The deformed boundary variation solving our flow is written as (17), which vanishes if the sources $e_{i}^{a}(\lambda)$ are held fixed. Therefore, the theory described by these $T\overline{T}$-deformed boundary conditions still corresponds to a variational principle where the metric is held fixed at the boundary, but the dual expectation value is free to fluctuate. All that has changed is the expression for this fixed metric in terms of the undeformed metric and stress tensor. A third remark concerns the trace flow equation for the $T\overline{T}$ deformation. Because there is no dimensionful scale in a CFT, if one solves a $T\overline{T}$ flow beginning from a CFT seed then the resulting theory has a single effective energy scale $\Lambda=\frac{1}{\sqrt{\lambda}}$ set by the parameter $\lambda$, which has length dimension $2$.
By noting that the derivative of the action with respect to this single scale $\Lambda$ is controlled by the trace of the stress tensor, $\displaystyle\Lambda\frac{d}{d\Lambda}S=\int d^{2}x\,T^{\mu}{}_{\mu}\,,$ (22) while on the other hand the derivative of the action is related to the $T\overline{T}$ operator by the definition of the $T\overline{T}$ flow, $\displaystyle\Lambda\frac{d}{d\Lambda}S$ $\displaystyle=\frac{1}{\sqrt{\lambda}}\frac{d}{d\left(\frac{1}{\sqrt{\lambda}}\right)}S$ $\displaystyle=-2\lambda\int d^{2}x\,\left(T^{\mu\nu}T_{\mu\nu}-\left(T^{\mu}{}_{\mu}\right)^{2}\right)\,,$ (23) one finds the relation $\displaystyle T^{\mu}{}_{\mu}(\lambda)=-2\lambda T\overline{T}(\lambda).$ (24) Since the modified boundary conditions (21) correspond to a $T\overline{T}$-deformation of a CFT, it is an instructive check to verify explicitly that the trace flow equation (24) holds. Indeed, the trace of the deformed stress tensor with respect to the deformed metric is $\displaystyle T^{i}{}_{i}$ $\displaystyle=\eta^{ij}e_{i}^{a}T_{ja}$ $\displaystyle=\frac{1}{4\pi G}\left(e_{i}^{a}(0)+\frac{\lambda}{4\pi G}f_{i}^{a}\right)\left(\varepsilon_{ab}\varepsilon^{ij}f_{j}^{b}\right)$ $\displaystyle=\frac{\lambda}{(4\pi G)^{2}}\varepsilon_{ab}\varepsilon^{ij}f_{i}^{a}f_{j}^{b}\,,$ (25) where in the last step, we have used that the undeformed stress tensor is traceless by assumption. On the other hand, at finite $\lambda$ the combination $T\overline{T}$ is given by (13): $\displaystyle T\overline{T}$ $\displaystyle=-2\varepsilon^{ab}\varepsilon_{ij}T^{i}{}_{a}T^{j}{}_{b}$ $\displaystyle=-\frac{2}{(4\pi G)^{2}}\varepsilon^{ab}\varepsilon_{ij}\left(\varepsilon_{ac}\varepsilon^{ik}f_{k}^{c}\right)\left(\varepsilon_{bd}\varepsilon^{jn}f_{n}^{d}\right)$ $\displaystyle=-\frac{2}{(4\pi G)^{2}}\varepsilon_{ab}\varepsilon^{ij}f_{i}^{a}f^{b}_{j}\,,$ (26) where we have repeatedly used the $2d$ contracted epsilon identity $\varepsilon^{in}\varepsilon_{ij}=\delta^{n}{}_{j}$. Comparing (25) to (26), we see that the trace flow equation $T^{\mu}{}_{\mu}(\lambda)=-2\lambda T\overline{T}(\lambda)$ holds as expected. We make a fourth and final comment, which is a trivial observation in this case but could conceivably be relevant for generalizations of the procedure described here. We emphasized around (16) that the undeformed action $S(\lambda=0)=S_{\text{EH}}+S_{\text{bdry}}$ includes a boundary term which was added by hand to give a particular variational principle. Since the process of deforming the action by $T\overline{T}$ and the process of adding the boundary term $S_{\text{bdry}}$ are two distinct steps, there are naïvely two ways to proceed:

1. First add the boundary term $S_{\text{bdry}}$ to get the total boundary action $S$. Then solve the flow equation (15) for this combined action.

2. First solve the flow equation $\frac{\partial S_{\text{EH}}}{\partial\lambda}\Big{|}_{\text{bdry}}=\frac{1}{(4\pi G)^{2}}\int_{\partial M_{3}}f^{-}\wedge f^{+}$, which only deforms the first contribution to the action. Solve this by identifying new sources $e_{i}^{a}(\lambda)$. After doing this, add a new boundary term $S_{\text{bdry}}(\lambda)$ by hand to restore the desired variational principle.

In the discussion above, we performed the deformation described in procedure 1. However, it is straightforward to see that procedure 2 gives the same result precisely because the dual expectation values $f_{i}^{a}$ do not flow according to our ansatz.
To show this, we recall from (5) that $\displaystyle\delta S_{\text{EH}}\Big{|}_{\text{on-shell}}=-\frac{1}{8\pi G}\int_{\partial M_{3}}\varepsilon_{ab}\left(e^{a}\wedge\delta f^{b}-f^{a}\wedge\delta e^{b}\right)\,.$ (27) Suppose that we had allowed both $e_{i}^{a}$ and $f_{i}^{a}$ to acquire $\lambda$ dependence along our flow. Then the derivative of this boundary variation would be $\displaystyle\frac{\partial(\delta S_{\text{EH}})}{\partial\lambda}=-\frac{1}{8\pi G}\int_{\partial M_{3}}\varepsilon_{ab}$ $\displaystyle\Bigg{(}\frac{\partial e^{a}(\lambda)}{\partial\lambda}\wedge\delta f^{b}(\lambda)+e^{a}(\lambda)\wedge\frac{\partial(\delta f^{b}(\lambda))}{\partial\lambda}$ (28) $\displaystyle-\frac{\partial f^{a}(\lambda)}{\partial\lambda}\wedge\delta e^{b}(\lambda)-f^{a}(\lambda)\wedge\frac{\partial(\delta e^{b}(\lambda))}{\partial\lambda}\Bigg{)}\,.$ In order to satisfy the flow equation $\frac{\partial(\delta S_{\text{EH}})}{\partial\lambda}\Big{|}_{\text{on-shell}}=\frac{1}{(4\pi G)^{2}}\int_{\partial M_{3}}\delta(f^{-}\wedge f^{+})$, whose right side is again $\displaystyle\frac{1}{(4\pi G)^{2}}\int_{\partial M_{3}}\delta(f^{-}\wedge f^{+})=\frac{1}{(4\pi G)^{2}}\int_{\partial M_{3}}\varepsilon_{ab}f^{a}\wedge\delta f^{b}\,,$ (29) we must have $\displaystyle\frac{\partial e^{a}(\lambda)}{\partial\lambda}\wedge\delta f^{b}(\lambda)+e^{a}(\lambda)\wedge\frac{\partial(\delta f^{b}(\lambda))}{\partial\lambda}-\frac{\partial f^{a}(\lambda)}{\partial\lambda}\wedge\delta e^{b}(\lambda)-f^{a}(\lambda)\wedge\frac{\partial(\delta e^{b}(\lambda))}{\partial\lambda}$ (30) $\displaystyle=-\frac{1}{2\pi G}f^{a}\wedge\delta f^{b}\,.$ The left side involves both $\delta e^{a}$ and $\delta f^{a}$, whereas the right side only involves $\delta f^{a}$. If these two variations are both independent, nonzero, and $\lambda$-dependent, it seems that we cannot have a solution. However, if we assume that $f^{a}$ and therefore $\delta f^{a}$ are independent of $\lambda$ as we did before, in addition to imposing that $\delta e^{a}(\lambda)=0$ according to our choice of deformed variational principle, the equation (30) reduces to $\displaystyle\frac{\partial e^{a}(\lambda)}{\partial\lambda}\wedge\delta f^{b}(\lambda)=-\frac{1}{2\pi G}f^{a}\wedge\delta f^{b}\,.$ (31) The solution to this simple flow is $\displaystyle e^{a}_{i}(\lambda)=e_{i}^{a}(0)-\frac{\lambda}{2\pi G}f_{i}^{a}\,.$ (32) Up to an overall rescaling of $\lambda$ by a factor of $-\frac{1}{2}$, this is the same solution as (21). This completes the first step of the alternate deformation procedure described in procedure 2, but we must still add a new boundary term so that the combined boundary action is consistent with the variational principle $\delta e^{a}=0$ that we have assumed. Our $\lambda$-dependent deformed boundary variation before adding this boundary term is $\displaystyle\delta S_{\text{EH}}(\lambda)\Big{|}_{\text{on-shell}}=-\frac{1}{8\pi G}\int_{\partial M_{3}}\varepsilon_{ab}\left(e^{a}(\lambda)\wedge\delta f^{b}-f^{a}\wedge\delta e^{b}(\lambda)\right)\,.$ (33) But because this has the same form as the variation (5) which we saw in the undeformed case, we may repeat the same procedure and add the term $S_{\text{bdry}}$ as defined in (6), except replacing $e_{i}^{a}$ with $e_{i}^{a}(\lambda)$ everywhere that it appears in the expansions of $A^{a}$ and $A^{b}$.
The result is, again, $\delta S_{\text{EH}}(\lambda)\Big{|}_{\text{on-shell}}+\delta S_{\text{bdry}}(\lambda)=\frac{1}{4\pi G}\int_{\partial M_{3}}\varepsilon_{ab}f^{a}\wedge\delta e^{b}(\lambda)\,,$ (34) exactly as we found before. The upshot of this simple calculation is that the two processes described above (first adding a boundary term and then deforming, or first deforming and then adding a boundary term) commute in the calculation we consider here. However, in another setting where both the sources and expectation values become $\lambda$-dependent, performing deformation procedure 2 would produce a flow equation analogous to (30), which is not equivalent to the flow of procedure 1. In such cases, one must choose a prescription to define the deformation.

### 3 $T\overline{T}$ deformations in $2d$ JT gravity

We now review features of JT gravity and its BF gauge theory description, which are relevant to later sections when we study the $T\overline{T}$ deformation in BF theory. As in section 2, none of the material in this discussion is new. For instance, the interpretation of a $T\overline{T}$-like deformation in the boundary dual to $2d$ JT gravity was considered in [115, 116], and we follow their discussion closely in section 2. We include a reminder of these results here to facilitate comparison with the new results of section 4, where we present an analogous interpretation of the $T\overline{T}$ deformation in BF variables.

#### 1 JT gravity as a BF gauge theory

In the introduction of this chapter, we mentioned that one salient feature of $3d$ gravity motivating the present work is that it can be dimensionally reduced on a circle to yield JT gravity as described by the action (5). This subsection’s goal is to recall the standard statement that this $2d$ dilaton gravity theory can equivalently be written in gauge theory variables as a BF theory. Our treatment will follow [72]. One way of motivating this reformulation is to note that $3d$ gravity is equivalent to a Chern-Simons theory, as we reviewed in section 2, and that the dimensional reduction of this $3d$ Chern-Simons theory is a BF gauge theory. Indeed, we saw this reduction explicitly around (11). These observations are summarized by the sub-diagram formed by the second and third columns of (6): (35) We have reviewed all of the arrows in (35) except for the change of variables linking the two theories in the bottom row. Although such a change of variables must exist by the consistency of the diagram, it is instructive to spell out the map explicitly. Recall that the BF theory in Euclidean signature is described by the action $I_{\text{BF}}=-i\int_{M_{2}}\operatorname{Tr}\left(\phi F\right)\,,$ (36) where $\phi$ is a scalar field and $F$ is the field strength of the gauge field $A_{\mu}$. At the moment, we will only be concerned with the bulk equations of motion and will not include any additional boundary term like the one that appeared in (11).
The equations of motion arising from (36) are $\displaystyle\phi$ $\displaystyle:\quad F=0\,,$ (37) $\displaystyle A_{\mu}$ $\displaystyle:\quad D_{\mu}\phi=\partial_{\mu}\phi-[A_{\mu},\phi]=0\,.$ On the other hand, beginning from the action (5) of JT gravity and setting $\Lambda=-1$, one finds the equations of motion $\displaystyle\Phi$ $\displaystyle:\quad R=-2\,,$ (38) $\displaystyle g_{\mu\nu}$ $\displaystyle:\quad\nabla_{\mu}\nabla_{\nu}\Phi=g_{\mu\nu}\Phi\,.$ Next, we will argue that the JT equations of motion in (38) are equivalent to the BF equations of motion in (37). To accomplish this, we first expand the BF fields in terms of generators: $\displaystyle A(x)=e^{2}(x)P_{2}+e^{1}(x)P_{1}+\omega(x)P_{0}\,,\quad\phi(x)=\phi^{1}(x)P_{1}+\phi^{2}(x)P_{2}+\phi^{0}(x)P_{0}\,,$ (39) where $\displaystyle P_{0}=\left(\begin{array}[]{cc}0&\frac{1}{2}\\\ -\frac{1}{2}&0\end{array}\right)\,,\quad P_{1}=\left(\begin{array}[]{cc}0&\frac{1}{2}\\\ \frac{1}{2}&0\end{array}\right)\,,\quad P_{2}=\left(\begin{array}[]{cc}\frac{1}{2}&0\\\ 0&-\frac{1}{2}\end{array}\right)\,.$ (40) Written in differential form notation, the equation of motion for $A_{\mu}$ in (37) becomes $d\phi-A\wedge\phi=0$. The exterior derivative of the scalar $\phi$ is $d\phi=d\phi^{0}(x)P_{0}+d\phi^{1}(x)P_{1}+d\phi^{2}(x)P_{2}\,.$ (41) Meanwhile, a short calculation gives $\displaystyle A\wedge\phi=\left(e^{2}\wedge\phi^{1}-e^{1}\wedge\phi^{2}\right)P_{0}+\left(e^{2}\wedge\phi^{0}-\omega\wedge\phi^{2}\right)P_{1}+\left(\omega\wedge\phi^{1}-e^{1}\wedge\phi^{0}\right)P_{2}\,.$ (42) Putting everything together, we find $\displaystyle d\phi^{0}(x)$ $\displaystyle=e^{2}(x)\wedge\phi^{1}(x)-e^{1}(x)\wedge\phi^{2}(x)\,,$ (43) $\displaystyle d\phi^{1}(x)$ $\displaystyle=e^{2}(x)\wedge\phi^{0}(x)-\omega(x)\wedge\phi^{2}(x)\,,$ $\displaystyle d\phi^{2}(x)$ $\displaystyle=\omega(x)\wedge\phi^{1}(x)-e^{1}(x)\wedge\phi^{0}(x)\,.$ We now act with the covariant derivative $\nabla_{\mu}$ on the equation for $d\phi^{0}$ in (43). At the risk of being pedantic, we pause to clarify one point of possible confusion. When acting on a generalized tensor with both curved (spacetime) indices and flat (tangent space) indices, the action of the covariant derivative $\nabla_{\mu}$ involves Christoffel symbol terms associated with the curved indices and spin connection terms associated with the flat indices. For instance, on the vielbein $e_{\nu}^{a}$ with one curved and one flat index, one has $\displaystyle\nabla_{\mu}e_{\nu}^{a}=\partial_{\mu}e_{\nu}^{a}+\omega_{\mu}{}^{a}{}_{b}e_{\nu}^{b}-\Gamma^{\sigma}{}_{\nu\mu}e_{\sigma}^{a}\,.$ (44) Since the covariant derivative annihilates the vielbein by the zero-torsion constraint $\tau^{a}=de^{a}+\omega^{a}_{b}\wedge e^{b}=0$, the combination (44) vanishes. However, the equations (43) are covariant with respect to their single curved index but not with respect to the implicit flat index on the vielbeins. It is easiest to see this by writing the equations in components. For instance, the $\phi^{0}$ equation is $\displaystyle\partial_{\mu}\phi^{0}=\phi^{1}e^{2}_{\mu}-\phi^{2}e^{1}_{\mu}\,.$ (45) Although this equation has a free $\mu$ index, there is no free $a$ index in the $e_{\mu}^{a}$ factors. Indeed, this equation could never have been covariant with respect to such a tangent space index since the quantity $\partial_{\mu}\phi^{0}$ on the left has no flat indices. Therefore, when we act with the covariant derivative, there will be no spin connection terms introduced in the derivatives of vielbein factors.
One has $\displaystyle\nabla_{\mu}e^{2}_{\nu}=\partial_{\mu}e_{\nu}^{2}-\Gamma^{\sigma}{}_{\nu\mu}e_{\sigma}^{2}\,,$ (46) and likewise for $\nabla_{\nu}e^{1}_{\mu}$. However, since the full covariant derivative (44) annihilates the vielbein, $\nabla_{\mu}e_{\nu}^{a}=0$ implies $\displaystyle\partial_{\mu}e_{\nu}^{2}-\Gamma^{\sigma}{}_{\nu\mu}e_{\sigma}^{2}=-\omega_{\mu}{}^{2}{}_{b}e_{\nu}^{b}\,,$ (47) and again a similar equation for $\nabla_{\nu}e^{1}_{\mu}$. Using $\omega^{1}{}_{2}=-\omega^{2}{}_{1}=\omega$, we find $\displaystyle\nabla_{\mu}e_{\nu}^{1}$ $\displaystyle=-\omega_{\mu}{}^{1}{}_{b}e_{\nu}^{b}=-\omega_{\mu}e_{\nu}^{2}\,,$ $\displaystyle\nabla_{\mu}e_{\nu}^{2}$ $\displaystyle=-\omega_{\mu}{}^{2}{}_{b}e_{\nu}^{b}=\omega_{\mu}e^{1}_{\nu}\,.$ (48) Now we are prepared to act with the covariant derivative on the $\phi^{0}$ equation of motion. On the left, the result is a two-tensor with components $\nabla_{\mu}\nabla_{\nu}\phi^{0}$. One finds $\displaystyle\nabla_{\mu}\nabla_{\nu}\phi^{0}$ $\displaystyle=\left(\partial_{\mu}\phi^{1}\right)e^{2}_{\nu}-\left(\partial_{\mu}\phi^{2}\right)e^{1}_{\nu}+\phi^{1}\left(\nabla_{\mu}e^{2}_{\nu}\right)-\phi^{2}\left(\nabla_{\mu}e^{1}_{\nu}\right)\,$ $\displaystyle=\left(\partial_{\mu}\phi^{1}\right)e^{2}_{\nu}-\left(\partial_{\mu}\phi^{2}\right)e^{1}_{\nu}+\phi^{1}\omega_{\mu}e^{1}_{\nu}+\phi^{2}\omega_{\mu}e_{\nu}^{2}\,.$ (49) On the other hand, writing the second and third equations of (43) in components gives $\partial_{\mu}\phi^{1}=\phi^{0}e^{2}_{\mu}-\phi^{2}\omega_{\mu}$ and $\partial_{\mu}\phi^{2}=\phi^{1}\omega_{\mu}-\phi^{0}e^{1}_{\mu}$. Substituting these into (49) gives $\displaystyle\nabla_{\mu}\nabla_{\nu}\phi^{0}$ $\displaystyle=\left(\phi^{0}e^{2}_{\mu}-\phi^{2}\omega_{\mu}\right)e^{2}_{\nu}-\left(\phi^{1}\omega_{\mu}-\phi^{0}e^{1}_{\mu}\right)e^{1}_{\nu}+\phi^{1}\omega_{\mu}e^{1}_{\nu}+\phi^{2}\omega_{\mu}e_{\nu}^{2}$ $\displaystyle=\left(e^{1}_{\mu}e^{1}_{\nu}+e^{2}_{\mu}e^{2}_{\nu}\right)\phi^{0}\,.$ (50) If we identify the metric as $g_{\mu\nu}=e^{1}_{\mu}e^{1}_{\nu}+e^{2}_{\mu}e^{2}_{\nu}$ and assume that the JT dilaton $\Phi$ is proportional to the BF field $\phi^{0}$, then this is the metric equation of motion in (38): $\displaystyle\nabla_{\mu}\nabla_{\nu}\Phi=g_{\mu\nu}\Phi\,.$ (51) We have therefore demonstrated that the JT gravity equations of motion (38) are recovered from the BF equations of motion in (37) after making the change of variables $\displaystyle\phi^{0}=\frac{i}{4\pi G}\Phi\,,\qquad g_{\mu\nu}=e^{1}_{\mu}e^{1}_{\nu}+e^{2}_{\mu}e^{2}_{\nu}\,.$ (52) Here, the choice of the proportionality factor $\frac{i}{4\pi G}$ between $\phi^{0}$ and $\Phi$ is required by our normalizations for the BF and JT actions in (36) and (5), respectively. Under this identification, we see that the expansion coefficients $e^{a}$ appearing in the BF gauge field $A_{\mu}$ are interpreted as the frame fields in the JT gravity theory, whereas the field $\omega$ defines the spin connection, which satisfies $d\omega=\frac{R}{2}e^{1}\wedge e^{2}$ for a $2d$ manifold. In this correspondence, the $\phi^{1},\phi^{2}$ equations of motion are mapped onto the torsionless conditions $\tau^{a}=de^{a}+\omega^{a}_{b}\wedge e^{b}=0$. This completes our review of the final arrow on the bottom row of (35) linking JT gravity with BF gauge theory.
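Both the generator algebra behind (42) and the cancellation taking (49) to (50) are short algebra exercises, which the following sympy sketch (our own check, with plain symbols standing in for the commuting form coefficients) makes explicit:

```python
# Sketch (our own check): (i) verify the sl(2,R) commutators of the
# generators (40) and the decomposition (42) used to obtain (43);
# (ii) verify that substituting the phi^1, phi^2 equations of (43) into (49)
# cancels the spin connection terms, leaving (50).
import sympy as sp

P0 = sp.Rational(1, 2)*sp.Matrix([[0, 1], [-1, 0]])
P1 = sp.Rational(1, 2)*sp.Matrix([[0, 1], [1, 0]])
P2 = sp.Rational(1, 2)*sp.Matrix([[1, 0], [0, -1]])
comm = lambda X, Y: X*Y - Y*X

# the algebra: [P0,P1] = P2, [P2,P0] = P1, [P1,P2] = -P0
assert comm(P0, P1) == P2 and comm(P2, P0) == P1 and comm(P1, P2) == -P0

e1, e2, om = sp.symbols('e1 e2 omega')
p0, p1, p2 = sp.symbols('phi0 phi1 phi2')
A   = e2*P2 + e1*P1 + om*P0
phi = p1*P1 + p2*P2 + p0*P0
rhs = (e2*p1 - e1*p2)*P0 + (e2*p0 - om*p2)*P1 + (om*p1 - e1*p0)*P2
assert (comm(A, phi) - rhs).expand() == sp.zeros(2, 2)   # decomposition (42)

# (49) -> (50): substitute d phi^1, d phi^2 from (43) in component form
e1m, e2m, e1n, e2n, om_m = sp.symbols('e1_mu e2_mu e1_nu e2_nu omega_mu')
dphi1_m = p0*e2m - p2*om_m
dphi2_m = p1*om_m - p0*e1m
nabla2_phi0 = dphi1_m*e2n - dphi2_m*e1n + p1*om_m*e1n + p2*om_m*e2n
assert sp.expand(nabla2_phi0 - (e1m*e1n + e2m*e2n)*p0) == 0   # this is (50)
print("checks of (42) and (49)-(50) passed")
```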
Next, we explain the boundary conditions and the choice of boundary term for the BF gauge theory, which recovers the Schwarzian action. Variation of the BF action on-shell yields the boundary action $\delta I_{\text{BF}}=-i\int_{\partial M_{2}}\,d\tau\,\operatorname{Tr}\left(\phi\,\delta A_{\tau}\right)\,,$ (53) with $\tau$ parametrizing the one-dimensional boundary $\partial M_{2}$. Thus, the variation (53) of the BF action vanishes if $A_{\tau}$ is held fixed on $\partial M_{2}$. In fact, from JT gravity’s first-order formulation, the spin connection and frame are already fixed, so no boundary term is required to have a well-defined variational principle. Unfortunately, this means the BF theory cannot be holographically dual to the Schwarzian because the theory is topologically trivial. In particular, the observables of the theory would depend on the holonomy around the boundary rather than depending on the local value of $A_{\tau}$. To recover the Schwarzian dynamics, one includes a string defect $I_{\text{string}}$ around a loop $L\subset M_{2}$, which yields the modified action $I=-i\int_{M_{2}}\operatorname{Tr}\left(\phi F\right)-\oint^{\beta}_{0}du\,V(\phi)\,,\quad V(\phi)=\frac{\nu}{4}\operatorname{Tr}\phi^{2}\,.$ (54) The second term in (54) is the string defect with coupling $\nu$, and $u$ is the proper length parametrization of the loop with circumference $\beta$. This form of $V(\phi)$ is consistent with the boundary term in (11), which we expect from the dimensional reduction of Chern-Simons and, as we will see, correctly recovers the Schwarzian action. The overall action (54) preserves the defect diffeomorphisms, and, as [72] showed by evaluating (54) on the solution to the equation of motion (37) for $\phi(u)$ along $L$, the degrees of freedom from the string defect are realized by the Schwarzian theory. To see the derivation more explicitly, we parametrize the boundary fields $\phi$ and $A_{\tau}$ as follows (note that we use a different representation of the generators when solving the equations of motion for the field $\phi$ at the loop):
$A_{\tau}=\omega\ell_{0}+e_{+}\ell_{+}+e_{-}\ell_{-}\,,\quad\phi=\phi_{+}\ell_{+}+\phi_{-}\ell_{-}+\phi_{0}\ell_{0}\,,$ (55) where $\displaystyle\ell_{0}=iP_{0},\quad\ell_{+}=-iP_{1}-P_{2}\,,\quad\ell_{-}=-iP_{1}+P_{2}\,,$ (56) $\displaystyle\omega=-i\omega_{\tau}\bigg{|}_{\partial M_{2}}\,,\quad e_{+}=\frac{ie^{1}_{\tau}-e^{2}_{\tau}}{2}\bigg{|}_{\partial M_{2}}\,,\quad e_{-}=\frac{ie^{1}_{\tau}+e^{2}_{\tau}}{2}\bigg{|}_{\partial M_{2}}\,.$ We compute the commutator $\left[A_{\tau},\phi\right]=\left(e_{+}\phi_{0}-\omega\phi_{+}\right)\ell_{+}+\left(\omega\phi_{-}-e_{-}\phi_{0}\right)\ell_{-}+2\left(e_{+}\phi_{-}-e_{-}\phi_{+}\right)\ell_{0}$ (57) to write the complete set of equations of motion $D_{\tau}\phi=0$ at the loop $\displaystyle\ell_{0}$ $\displaystyle:\quad\partial_{\tau}\phi_{0}=2\left(e_{+}\phi_{-}-e_{-}\phi_{+}\right)\,,$ (58) $\displaystyle\ell_{-}$ $\displaystyle:\quad\partial_{\tau}\phi_{-}=\omega\phi_{-}-e_{-}\phi_{0}\,,$ $\displaystyle\ell_{+}$ $\displaystyle:\quad\partial_{\tau}\phi_{+}=e_{+}\phi_{0}-\omega\phi_{+}\,.$ To solve the equations at the loop (58), we perform the same change of variables as [72] $\phi_{-}(\tau)=\frac{2e_{-}}{\partial_{\tau}u(\tau)}\implies\phi_{-}(u)=2e_{-}\tau^{\prime}\,,$ (59) where $\partial_{\tau}\phi_{i}=\left(\partial_{\tau}u\right)\left(\partial_{u}\phi_{i}\right)=\frac{\partial_{u}\phi_{i}}{\tau^{\prime}}\,.$ (60) Substituting the above into the equation of motion for the $\ell_{-}$ component, we find $\phi_{0}=-\frac{\partial_{\tau}\phi_{-}-\omega\phi_{-}}{e_{-}}=-\frac{2\tau^{\prime\prime}}{\tau^{\prime}}+2\omega\tau^{\prime}\,.$ (61) Then, solving for the $\ell_{0}$ component, one uses $\phi_{-}$ and $\phi_{0}$ to find $\displaystyle\phi_{+}$ $\displaystyle=\frac{1}{e_{-}}\left(-\frac{1}{2}\partial_{\tau}\phi_{0}+e_{+}\phi_{-}\right)$ (62) $\displaystyle=2\left(e_{+}\tau^{\prime}+\frac{\tau^{\prime\prime\prime}}{2e_{-}\tau^{\prime 2}}-\frac{\omega\tau^{\prime\prime}}{2e_{-}\tau^{\prime}}-\frac{\tau^{\prime\prime 2}}{2e_{-}\tau^{\prime 3}}\right)\,.$ We have found all the components for the field $\phi(u)$: $\nu\phi(u)=2e_{-}\tau^{\prime}\ell_{-}+2\left(\omega\tau^{\prime}-\frac{\tau^{\prime\prime}}{\tau^{\prime}}\right)\ell_{0}+2\left(e_{+}\tau^{\prime}+\frac{\tau^{\prime\prime\prime}}{2e_{-}\tau^{\prime 2}}-\frac{\omega\tau^{\prime\prime}}{2e_{-}\tau^{\prime}}-\frac{\tau^{\prime\prime 2}}{2e_{-}\tau^{\prime 3}}\right)\ell_{+}\,.$ (63) Here $\tau(u)$ is further constrained by the $\ell_{+}$ component of the equation $D_{u}\phi=0$, which gives $4\left(\operatorname{det}A_{\tau}\right)\tau^{\prime 4}\tau^{\prime\prime}+3\tau^{\prime\prime 3}-4\tau^{\prime}\tau^{\prime\prime}\tau^{\prime\prime\prime}+\tau^{\prime 2}\tau^{\prime\prime\prime\prime}=0\,,$ (64) where $\tau(u)$ is monotonic so $\tau^{\prime}(u)\neq 0$ and $\det A_{\tau}=e_{-}e_{+}-\frac{\omega^{2}}{4}$. Now we are ready to evaluate the string defect action by computing $\operatorname{Tr}\phi^{2}$. 
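Before doing so, we note that the derivative manipulations leading to (61), (62), and (64) are mechanical and can be confirmed with a computer algebra system. In the following SymPy sketch of our own, ep, em, and omega are ASCII stand-ins for the constant boundary data $e_{\pm}$ and $\omega$, and the overall factor of $1/\nu$ in (63) drops out of the linear equations (58):

```python
import sympy as sp

u, ep, em, w = sp.symbols('u ep em omega')   # constant boundary data
tau = sp.Function('tau')(u)
d = lambda f, n=1: sp.diff(f, u, n)          # derivative w.r.t. u
tp = d(tau)

# Components (59), (61), (62), written as functions of u:
phi_m = 2*em*tp
phi_0 = 2*w*tp - 2*d(tau, 2)/tp
phi_p = 2*(ep*tp + d(tau, 3)/(2*em*tp**2)
           - w*d(tau, 2)/(2*em*tp) - d(tau, 2)**2/(2*em*tp**3))

# The l_- and l_0 equations of (58), with d/dtau = (1/tau') d/du, hold:
assert sp.simplify(d(phi_m)/tp - (w*phi_m - em*phi_0)) == 0
assert sp.simplify(d(phi_0)/tp - 2*(ep*phi_m - em*phi_p)) == 0

# The remaining l_+ equation reduces to the constraint (64):
residual = d(phi_p)/tp - (ep*phi_0 - w*phi_p)
detA = em*ep - w**2/4
constraint = (4*detA*tp**4*d(tau, 2) + 3*d(tau, 2)**3
              - 4*tp*d(tau, 2)*d(tau, 3) + tp**2*d(tau, 4))
assert sp.simplify(em*tp**5*residual - constraint) == 0
```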
This computation is straightforward as $\phi^{2}=\frac{1}{\nu^{2}}\left(\begin{array}[]{cc}-4e_{-}e_{+}\tau^{\prime 2}+\omega^{2}\tau^{\prime 2}+\frac{3\tau^{\prime\prime 2}}{\tau^{\prime 2}}-\frac{2\tau^{\prime\prime\prime}}{\tau^{\prime}}&0\\\ 0&-4e_{-}e_{+}\tau^{\prime 2}+\omega^{2}\tau^{\prime 2}+\frac{3\tau^{\prime\prime 2}}{\tau^{\prime 2}}-\frac{2\tau^{\prime\prime\prime}}{\tau^{\prime}}\end{array}\right)$ (65) and $\displaystyle V(\phi)$ $\displaystyle=\frac{\nu}{4}\operatorname{Tr}\phi^{2}$ (66) $\displaystyle=\frac{1}{\nu}\left(\\{\tau(u),u\\}+2\tau^{\prime}(u)^{2}\det A_{\tau}\right)$ $\displaystyle=\frac{1}{\nu}\left\\{\tan\left(\sqrt{\det A_{\tau}}\,\tau(u)\right),u\right\\}\,.$ As expected, we have recovered the Schwarzian action222One can show that this derivation of the Schwarzian theory holds for any $\Lambda$ by using the more general parameterization $A_{\tau}=\omega\ell_{0}+\sqrt{\Lambda}e_{+}\ell_{+}+\sqrt{\Lambda}e_{-}\ell_{-}$. Equivalently, this corresponds to replacing the determinant in (67) by $\det A_{\tau}=\Lambda e_{-}e_{+}-\frac{\omega^{2}}{4}$. from including the string defect in the BF action (54), which gives $I=-\frac{1}{\nu}\int^{\beta}_{0}du\left\\{\tan\left(\sqrt{\det A_{\tau}}\,\tau(u)\right)\,,u\right\\}\,.$ (67) #### 2 Interpretation of $T\overline{T}$ deformation Before we begin with the $T\overline{T}$ deformation in JT gravity, we first recall how a general class of related deformations is defined and what their physical meaning is in the $\mathrm{AdS}/\mathrm{CFT}$ correspondence. Following [116], we deform a seed action $I(0)$ via a generic operator $M_{\lambda}$ as $I(\lambda)=I(0)+\int d\tau\sqrt{\gamma}\,M_{\lambda}(T_{\tau\tau},\gamma^{\tau\tau})\,,$ (68) where the variational principle in the undeformed theory (where $M_{0}=0$) is defined by $\delta I(0)=\frac{1}{2}\int d\tau\,\sqrt{\gamma}\,T_{\tau\tau}\delta\gamma^{\tau\tau}\,.$ (69) With the deformation (68), one finds the following variation: $\delta I(\lambda)=\delta I(0)+\int\,d\tau\,\Big{[}\delta\left(\sqrt{\gamma}\right)M_{\lambda}+\sqrt{\gamma}\delta M_{\lambda}\Big{]}\,.$ (70) Using the facts that $\delta M_{\lambda}=\frac{\partial M_{\lambda}}{\partial T_{\tau\tau}}\delta T_{\tau\tau}+\frac{\partial M_{\lambda}}{\partial\gamma^{\tau\tau}}\delta\gamma^{\tau\tau}$ (71) and $\delta\left(\sqrt{\gamma}\right)=-\frac{\sqrt{\gamma}}{2\gamma^{\tau\tau}}\delta\gamma^{\tau\tau}\,,$ (72) we find that (70) can be written as $\delta I(\lambda)=\frac{1}{2}\int d\tau\sqrt{\gamma}\left[\left(T_{\tau\tau}-\frac{M_{\lambda}}{\gamma^{\tau\tau}}+2\frac{\partial M_{\lambda}}{\partial\gamma^{\tau\tau}}\right)\delta\gamma^{\tau\tau}+2\frac{\partial M_{\lambda}}{\partial T_{\tau\tau}}\delta T_{\tau\tau}\right]\,.$ (73) We wish to identify sources and expectation values by demanding that we can rewrite (73) in terms of $\lambda$-dependent quantities $\widetilde{T}_{\tau\tau}$, $\widetilde{\gamma}^{\tau\tau}$ as $\displaystyle\delta I(\lambda)$ $\displaystyle=\frac{1}{2}\int d\tau\frac{\widetilde{T}_{\tau\tau}}{\sqrt{\widetilde{\gamma}^{\tau\tau}}}\delta\widetilde{\gamma}^{\tau\tau}$ (74) $\displaystyle=\frac{1}{2}\int d\tau\frac{\widetilde{T}_{\tau\tau}}{\sqrt{\widetilde{\gamma}^{\tau\tau}}}\left(\frac{\partial\widetilde{\gamma}^{\tau\tau}}{\partial T_{\tau\tau}}\delta T_{\tau\tau}+\frac{\partial\widetilde{\gamma}^{\tau\tau}}{\partial\gamma^{\tau\tau}}\delta\gamma^{\tau\tau}\right)\,.$ Here, the operator $\widetilde{T}_{\tau\tau}$ is sourced by $\widetilde{\gamma}^{\tau\tau}$.
In other words, the deformation changes the variational principle from one where $\gamma^{\tau\tau}$ is held fixed to one where $\widetilde{\gamma}^{\tau\tau}$ is fixed. Comparing (73) and (74), we find the following coupled PDEs for the deformed boundary stress tensor and metric: $\displaystyle\frac{\widetilde{T}_{\tau\tau}}{\sqrt{\widetilde{\gamma}^{\tau\tau}}}\frac{\partial\widetilde{\gamma}^{\tau\tau}}{\partial T_{\tau\tau}}$ $\displaystyle=2\sqrt{\gamma}\frac{\partial M_{\lambda}}{\partial T_{\tau\tau}}\,,$ (75) $\displaystyle\frac{\widetilde{T}_{\tau\tau}}{\sqrt{\widetilde{\gamma}^{\tau\tau}}}\frac{\partial\widetilde{\gamma}^{\tau\tau}}{\partial\gamma^{\tau\tau}}$ $\displaystyle=\sqrt{\gamma}\left(T_{\tau\tau}-\frac{M_{\lambda}}{\gamma^{\tau\tau}}+2\frac{\partial M_{\lambda}}{\partial\gamma^{\tau\tau}}\right)\,,$ with the initial conditions $\widetilde{T}_{\tau\tau}(\lambda=0)=T_{\tau\tau}$ and $\widetilde{\gamma}^{\tau\tau}(\lambda=0)=\gamma^{\tau\tau}$. To further illustrate, we focus on a specific class of deformations that only depend on the trace of the stress tensor $T_{\tau\tau}\gamma^{\tau\tau}$. It is convenient to express our ansatz in terms of the dimensionless combination $\displaystyle X=\lambda T_{\tau\tau}\gamma^{\tau\tau}\,.$ (76) We assume $\widetilde{T}_{\tau\tau}=T_{\tau\tau}\xi\left(X\right)\,,\quad\widetilde{\gamma}^{\tau\tau}=\gamma^{\tau\tau}\chi\left(X\right)\,,$ (77) where $\xi(0)=\chi(0)=1$ so that we recover the undeformed stress tensor and metric as $\lambda\to 0$. On the other hand, by dimensional analysis, we can write the function $M_{\lambda}(\lambda,T_{\tau\tau},\gamma^{\tau\tau})$ in the form $\displaystyle M_{\lambda}=\frac{1}{\lambda}m_{\lambda}(X)\,.$ (78) By substituting (77)-(78) into the system of coupled PDEs (75), we find the pair of equations $\displaystyle\begin{split}\chi^{\prime}(X)&=\frac{2\sqrt{\chi(X)}m_{\lambda}^{\prime}(X)}{X\xi(X)}\,,\\\ \sqrt{\chi(X)}\left(X-m_{\lambda}^{\prime}(X)+2Xm_{\lambda}^{\prime}(X)\right)&=X\xi(X)\left(\chi(X)+X\chi^{\prime}(X)\right)\,.\end{split}$ (79) The usual double-trace $T\overline{T}$ deformation is quadratic in stress tensors, so one might be interested in studying a deformation that is proportional to the combination $X^{2}=\left(\lambda T_{\tau\tau}\gamma^{\tau\tau}\right)^{2}$ since this is the only dimensionless and reparameterization-invariant stress tensor bilinear in $(0+1)$-dimensions. 
This corresponds to a deformation of the form $\displaystyle M_{\lambda}$ $\displaystyle=\frac{1}{\lambda}X^{2}\,$ $\displaystyle=\lambda\left(T_{\tau\tau}\gamma^{\tau\tau}\right)^{2}\,.$ (80) Using the form (80) of the deformation, the equations (79) become $\displaystyle\xi(X)=\frac{(3X+1)\sqrt{\chi(X)}}{\chi(X)+X\chi^{\prime}(X)}\,,\qquad\chi^{\prime}(X)=\frac{4\sqrt{\chi(X)}}{\xi(X)}\,,$ (81) which have the solutions $\displaystyle\chi(X)=\frac{1}{(1-X)^{4}}\,,\qquad\xi(X)=(1-X)^{3}\,.$ (82) We have therefore found that, for the form of the deformation $M_{\lambda}=\lambda(T_{\tau\tau}\gamma^{\tau\tau})^{2}$ motivated by the usual $T\overline{T}$ deformation,333For a multi-trace deformation $M^{(n)}_{\lambda}=\lambda_{n}\left(T_{\tau\tau}\gamma^{\tau\tau}\right)^{2n}$ with coupling $\lambda_{n}$, one finds via solving (79) $\widetilde{T}_{\tau\tau}(\lambda_{n})=T_{\tau\tau}\left(1-\lambda_{n}\left(T_{\tau\tau}\gamma^{\tau\tau}\right)^{2n-1}\right)^{\frac{4n-1}{2n-1}}\,,\quad\widetilde{\gamma}^{\tau\tau}(\lambda_{n})=\gamma^{\tau\tau}\left(1-\lambda_{n}\left(T_{\tau\tau}\gamma^{\tau\tau}\right)^{2n-1}\right)^{\frac{4n}{1-2n}}\,.$ (83) the solution is $\widetilde{T}_{\tau\tau}(\lambda)=T_{\tau\tau}\left(1-\lambda T_{\tau\tau}\gamma^{\tau\tau}\right)^{3}\,,\quad\widetilde{\gamma}^{\tau\tau}(\lambda)=\frac{\gamma^{\tau\tau}}{(1-\lambda T_{\tau\tau}\gamma^{\tau\tau})^{4}}\,.$ (84) However, as mentioned in section 1, and derived in appendix A of [115], despite (80) being proportional to a double-trace operator $T^{2}$, it is not suitable as a $T\overline{T}$ deformation for JT gravity with a Dirichlet cutoff. The suitable choice of operator was found in [115]; the operator that yields the correct deformed energy spectrum is $M_{\lambda}=-2\lambda OT_{\tau\tau}\gamma^{\tau\tau}\,,$ (85) where the operator $O$ (i.e.
the dilaton momentum) is sourced by the boundary dilaton $\Phi_{b}$ as $O=\frac{1}{\sqrt{\gamma}}\frac{\delta I}{\delta\Phi_{b}}\,.$ (86) The seed theory action is now deformed as $I(\lambda)=I(0)+\int d\tau\sqrt{\gamma}M_{\lambda}\left(T_{\tau\tau},\gamma^{\tau\tau},O,\Phi_{b}\right)\,,$ (87) where the variation of the undeformed theory is $\delta I(0)=\int d\tau\sqrt{\gamma}\left(\frac{1}{2}T_{\tau\tau}\delta\gamma^{\tau\tau}+O\delta\Phi_{b}\right)\,.$ (88) To identify the variational principle of the deformed theory, we demand that $\delta I(\lambda)$ can be written in terms of $\lambda$-dependent sources and expectation values as $\delta I(\lambda)=\int d\tau\sqrt{\gamma}\left(\frac{1}{2}T_{\tau\tau}(\lambda)\delta\gamma^{\tau\tau}(\lambda)+\mathcal{O}(\lambda)\delta\Phi_{b}(\lambda)\right)\,.$ (89) Following the same procedure as in the previous example with $M_{\lambda}(T_{\tau\tau},\gamma^{\tau\tau})$, we find the sources and expectation values transform as $\displaystyle\gamma_{\tau\tau}(\lambda)$ $\displaystyle=\gamma_{\tau\tau}(0)\left(1+2\lambda O(0)\right)^{2}\,,\quad T_{\tau\tau}(\lambda)=T_{\tau\tau}(0)\left(1+2\lambda O(0)\right)^{2}\,,$ (90) $\displaystyle\Phi_{b}(\lambda)$ $\displaystyle=\Phi_{b}(0)-2\lambda T(0)\,,\quad\mathcal{O}(\lambda)=\frac{O(0)}{1+2\lambda O(0)}\,,$ which satisfy $\delta I(\lambda)=\delta I(0)-2\lambda\delta\left(\int d\tau\sqrt{\gamma}O(0)T_{\tau\tau}(0)\gamma^{\tau\tau}(0)\right)\,.$ (91) The $\lambda$-dependent sources and expectation values (90) describe the full solution for the bulk JT gravity fields, which corresponds to performing a $T\overline{T}$-like deformation of the $1d$ boundary theory. As in the analogous deformation of $3d$ gravitational Chern-Simons reviewed in section 2, we note that the result can be interpreted as a linear mixing of sources and expectation values, although in this case each source becomes a function of the dual expectation value for a _different_ operator – for instance, the metric becomes dependent on the field $O$ which is dual to the dilaton $\Phi_{b}$. ### 4 $T\overline{T}$-deformed boundary conditions in BF theory In the previous section, we have seen that the interpretation of a boundary $T\overline{T}$ deformation in JT gravity is a particular $\lambda$-dependent mixing (90) of the metric $\gamma_{\tau\tau}$, dilaton $\Phi_{b}$, and their dual operators. Since JT gravity can also be written in BF variables, there must be an analogous interpretation of the boundary $T\overline{T}$ deformation. The goal of the current section is to make this BF interpretation explicit. Because BF gauge theory is topological, all of the dynamics of the theory occur at the boundary. As a result, the choice of boundary term – and the variational principle – is an important input for defining the theory. We consider $T\overline{T}$-type deformations for two choices of boundary terms: one which gives a variational principle analogous to that of the JT gravity theory and one whose boundary theory is the Schwarzian. #### 1 Deformation with JT-type boundary term First, we will determine the choice of boundary term in BF theory, which gives a variational principle most analogous to that of the JT gravity theory. 
We saw in (88) that the on-shell variation of the JT gravity action is $\displaystyle\delta I\Big{|}_{\text{on-shell}}=\int_{\partial M_{2}}\,d\tau\,\sqrt{\gamma}\,\left(\frac{1}{2}T_{\tau\tau}\,\delta\gamma^{\tau\tau}+O\,\delta\Phi_{b}\right)\,.$ (92) This boundary term vanishes if we fix the value of the (inverse) metric $\gamma^{\tau\tau}$ and the dilaton $\Phi_{b}$ at the boundary. The operators dual to the metric and dilaton are the boundary stress tensor $T_{\tau\tau}$ and the operator $O$, respectively. On the other hand, the variation of the BF action $I_{\text{BF}}=-i\int_{M_{2}}\operatorname{Tr}(\phi F)$ was given in (53) as $\delta I_{\text{BF}}\Big{|}_{\text{on-shell}}=-i\int_{\partial M_{2}}d\tau\operatorname{Tr}\left(\phi\delta A_{\tau}\right)\,.$ (93) We parameterize the BF theory fields in terms of $SL(2,\mathbb{R})$ generators as $\displaystyle A_{\mu}(x)=e_{\mu}^{+}(x)L_{+}+e_{\mu}^{-}(x)L_{-}+\omega_{\mu}(x)L_{0}\,,\qquad\phi(x)=\phi^{+}(x)L_{+}+\phi^{-}(x)L_{-}+\phi^{0}(x)L_{0}\,.$ (94) Note that we use the notation $L_{+}$ for $L_{1}$ and $L_{-}$ for $L_{-1}$. In terms of the functions appearing in this expansion, the boundary term (93) is $\delta I_{\text{BF}}\Big{|}_{\text{on-shell}}=-i\int_{\partial M_{2}}\,d\tau\,\left(\frac{1}{2}\phi^{0}\delta\omega_{\tau}-\phi^{+}\delta e^{-}_{\tau}-\phi^{-}\delta e^{+}_{\tau}\right)\,.$ (95) The asymptotic values of the expansion coefficients $e_{\tau}^{\pm}$ in the BF fields are interpreted as the einbein for the one-dimensional boundary theory. These fields are the BF analog of the boundary metric $\gamma_{\tau\tau}$. Likewise, the boundary value of the BF variable $\phi^{0}$ is proportional to the boundary dilaton $\Phi_{b}$ in JT variables. Thus we see that the naïve BF action, without any added boundary term, corresponds to a different variational principle than that of JT gravity. For the variation (95) to vanish, we must fix the boundary values of $e^{\pm}_{\tau}$ (which corresponds to fixing the boundary metric) but not the boundary value of $\phi^{0}$; rather, the asymptotic value of $\omega_{\tau}$ is held fixed. In JT gravity language, this corresponds to a variational principle where the value of the dual operator $O$ is held fixed, but the boundary dilaton $\Phi_{b}$ is free to vary. We can, of course, modify the BF variational principle by adding an appropriate boundary term. Suppose that we choose the BF action to be $\displaystyle I=I_{\text{BF}}+I_{\text{bdry}}\,,\qquad I_{\text{bdry}}=\frac{i}{2}\int_{\partial M_{2}}d^{2}x\,\sqrt{g}\,n_{\mu}\partial^{\mu}\left(\phi^{0}\omega_{\tau}\right)\,,$ (96) where $n_{\mu}$ is a unit normal vector in the radial direction. The corresponding contribution to the boundary variation is $\displaystyle\delta I_{\text{bdry}}=\frac{i}{2}\int_{\partial M_{2}}\,d\tau\,\left(\omega_{\tau}\delta\phi^{0}+\phi^{0}\delta\omega_{\tau}\right)\,.$ (97) This cancels the $\phi^{0}\delta\omega_{\tau}$ term appearing in (95). The total boundary variation is now $\displaystyle\delta I\Big{|}_{\text{on-shell}}=i\int_{\partial M_{2}}\,d\tau\,\left(\frac{1}{2}\omega_{\tau}\,\delta\phi^{0}+\phi^{+}\delta e_{\tau}^{-}+\phi^{-}\delta e_{\tau}^{+}\right)\,.$ (98) Demanding that this boundary term vanish leads us to a variational principle where $e_{\tau}^{\pm}$ and $\phi^{0}$ are held fixed at the boundary.
This is the direct BF theory analog of the variational principle in JT gravity, where the boundary metric and dilaton are held fixed, so we will refer to this choice as “JT-type boundary conditions.” We now wish to identify the modification of these JT-type boundary conditions which corresponds to a $T\overline{T}$-like deformation of the dual $(0+1)$-dimensional theory. There are two ways one might identify the appropriate form of the deforming operator. One way is to dimensionally reduce the $T\overline{T}$ operator written in $3d$ Chern-Simons variables, which takes the form $f^{-}\wedge f^{+}$ as reviewed in section 2. Recall that, in Chern-Simons language, the operators $f^{a}$ are dual to the boundary vielbeins $e^{a}$, and therefore the $f^{a}$ contain the boundary stress tensor. Upon such a reduction, one component of $f$ reduces to the one-dimensional stress tensor $T_{\tau\tau}$, which is dual to the boundary einbein $e_{\tau}$. Since the component of the metric in the direction along which we reduce is identified with the field $\phi^{0}$, the other component of $f$ reduces to the operator dual to $\phi^{0}$, which is $\omega_{\tau}$. Therefore, the dimensional reduction instructs us to deform the boundary action by an operator constructed from the combination $T_{\tau\tau}\omega_{\tau}$ (contracted with the appropriate einbein factors to yield a quantity which is a scalar under diffeomorphisms). The other way to identify the deforming operator is to take the combination $OT$, which defines the $T\overline{T}$ deformation in JT variables, and convert all expressions to BF variables. We now carry out this procedure and demonstrate that it produces an operator of the schematic form $T_{\tau\tau}\omega_{\tau}$ suggested by dimensional reduction. The (Hilbert) definition of the boundary stress tensor is $\displaystyle T_{\tau\tau}$ $\displaystyle=-\frac{2}{\sqrt{\gamma_{\tau\tau}}}\frac{\delta I}{\delta\gamma^{\tau\tau}}$ $\displaystyle=-\frac{2}{\sqrt{\gamma_{\tau\tau}}}\left(\frac{\delta I}{\delta e^{+}_{\tau}}\frac{\delta e^{+}_{\tau}}{\delta\gamma^{\tau\tau}}\Big{|}_{e_{\tau}^{-}}+\frac{\delta I}{\delta e^{-}_{\tau}}\frac{\delta e^{-}_{\tau}}{\delta\gamma^{\tau\tau}}\Big{|}_{e_{\tau}^{+}}\right)\,.$ (99) The map from the metric $\gamma_{\tau\tau}$ to the boundary BF fields $e^{\pm}_{\tau}$ is simply $\displaystyle\gamma_{\tau\tau}=-4e^{+}_{\tau}e^{-}_{\tau}\,,\qquad\gamma^{\tau\tau}=-\frac{1}{4e^{+}_{\tau}e^{-}_{\tau}}\,.$ (100) Note that, according to our conventions (56), the relative minus sign in the definition (100) of $\gamma_{\tau\tau}$ is required to have a positive-definite worldline metric since $\displaystyle e_{\tau}^{+}e_{\tau}^{-}=\frac{1}{4}\left(ie_{\tau}^{1}-e_{\tau}^{2}\right)\left(ie_{\tau}^{1}+e_{\tau}^{2}\right)=-\frac{1}{4}\left(\left(e_{\tau}^{1}\right)^{2}+\left(e_{\tau}^{2}\right)^{2}\right)\,.$ (101) Thus, the derivatives appearing in the stress tensor can be written as $\displaystyle\frac{\delta e^{+}_{\tau}}{\delta\gamma^{\tau\tau}}\Big{|}_{e_{\tau}^{-}}$ $\displaystyle=\frac{1}{\left(\gamma^{\tau\tau}\right)^{2}}\cdot\frac{1}{4e_{\tau}^{-}}=-\frac{e_{\tau}^{+}}{\gamma^{\tau\tau}}\,,$ $\displaystyle\frac{\delta e^{-}_{\tau}}{\delta\gamma^{\tau\tau}}\Big{|}_{e_{\tau}^{+}}$ $\displaystyle=\frac{1}{\left(\gamma^{\tau\tau}\right)^{2}}\cdot\frac{1}{4e_{\tau}^{+}}=-\frac{e_{\tau}^{-}}{\gamma^{\tau\tau}}\,.$ (102) Meanwhile, from (98) we see that $\frac{\delta I}{\delta e^{+}_{\tau}}=i\phi^{-}$ and $\frac{\delta I}{\delta e^{-}_{\tau}}=i\phi^{+}$.
So, the stress tensor is $\displaystyle T_{\tau\tau}=\frac{2i}{\sqrt{\gamma_{\tau\tau}}}\left(\phi^{-}\cdot\frac{e_{\tau}^{+}}{\gamma^{\tau\tau}}+\phi^{+}\cdot\frac{e_{\tau}^{-}}{\gamma^{\tau\tau}}\right)\,,$ (103) and its trace is $\displaystyle T=T_{\tau\tau}\gamma^{\tau\tau}=\frac{i}{\sqrt{-e^{+}_{\tau}e^{-}_{\tau}}}\left(e^{+}_{\tau}\phi^{-}+\phi^{+}e^{-}_{\tau}\right)\,.$ (104) Next, we express the operator $O$ dual to the dilaton in BF variables. Using the map $\displaystyle\Phi=-\frac{i}{4}\phi^{0}\,,$ (105) one has from (86) that $\displaystyle O$ $\displaystyle=\frac{1}{\sqrt{\gamma}}\frac{\delta I}{\delta\Phi_{b}}$ $\displaystyle=\frac{2i}{\sqrt{-e_{\tau}^{+}e_{\tau}^{-}}}\frac{\delta I}{\delta\phi^{0}}$ $\displaystyle=-\frac{1}{\sqrt{-e_{\tau}^{+}e_{\tau}^{-}}}\omega_{\tau}\,.$ (106) We conclude that, in BF variables, the combination that corresponds to a boundary $T\overline{T}$ deformation is $\displaystyle OT=\frac{i}{e_{\tau}^{+}e_{\tau}^{-}}\left(e^{+}_{\tau}\phi^{-}+\phi^{+}e^{-}_{\tau}\right)\omega_{\tau}\,.$ (107)
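Both the solution (82) of the flow equations (81) and the BF expression (107) for the deforming operator are straightforward to verify symbolically. A minimal SymPy sketch of our own (ASCII variable names; $\sqrt{-e_{\tau}^{+}e_{\tau}^{-}}$ is treated as a formal square root):

```python
import sympy as sp

# Check that (82) solves the pair of flow equations (81).
X = sp.symbols('X')
chi = (1 - X)**(-4)
xi = (1 - X)**3
sqrt_chi = (1 - X)**(-2)            # positive branch of sqrt(chi)
assert sp.simplify(sqrt_chi**2 - chi) == 0
assert sp.simplify(xi - (3*X + 1)*sqrt_chi/(chi + X*sp.diff(chi, X))) == 0
assert sp.simplify(sp.diff(chi, X) - 4*sqrt_chi/xi) == 0

# Check the product OT in (107) from (104) and (106), with s**2 = -ep*em.
ep, em, w, phip, phim = sp.symbols('ep em omega phip phim')
s = sp.sqrt(-ep*em)
T = sp.I/s*(ep*phim + phip*em)      # trace (104)
O = -w/s                            # dilaton momentum (106)
assert sp.simplify(O*T - sp.I/(ep*em)*(ep*phim + phip*em)*w) == 0
```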
We show that it is possible to craft transformations that, applied to compositional grammars, result in grammars that neural networks can learn easily but humans cannot. This could explain the disconnect between current metrics of compositionality, which are arguably human-centric, and the ability of neural networks to generalize to unseen examples. We propose to use the transformations as a benchmark, Icy, which could be used to measure aspects of the compositional inductive bias of networks, and to search for networks with similar compositional inductive biases to humans. As an example of this approach, we propose a hierarchical model, HU-RNN, which shows an inductive bias towards position-independent, word-like groups of tokens. § EXPERIMENTS Code for experiments is at [https://github.com/asappresearch/compositional-inductive-bias]. §.§ Examples of grammars Table <ref> shows examples of each grammar, for 4 objects. For concat, changing one attribute changes 3 adjacent utterance tokens. perm rearranges columns of concat utterance tokens. shufdet rearranges blocks of 3 utterance tokens, as a function of the last object attribute. (A toy sketch of these transformations is given at the end of this section.) We depict utterances for $n_{att}=3$ and $c_{len} = 3 \cdot n_{att}$. In our experiments we use $n_{att}=5$ and $c_{len}=4 \cdot n_{att}$. Examples for this geometry can be found in Appendix <ref>. §.§ Compositional metric evaluation Figure <ref> shows the values of compositional metrics for samples from our artificial grammars, using $n_{att}=5$, $n_{val}=10$. The compositionality metrics show low compositionality for all the artificial grammars, except for concat and perm. Thus our transformations successfully hide the compositional structure from current compositional metrics. §.§ Neural model evaluation We use the Icy benchmark to evaluate standard neural models for specific aspects of their compositional inductive bias. We focus on Sender models in our presentation. Results for Receiver models are in Appendix <ref>. We train each model in a supervised manner on a specific artificial grammar from Icy, using cross-entropy loss. We count the number of training steps, $N_{acquire}$, required to train each grammar to a training accuracy of $\mathrm{acc}_{\mathrm{tgt}}$, where accuracy is token-level accuracy. For each grammar, $\mathcal{G}$, we report the ratio $b^{(\mathcal{G})} = N_{acquire}^{(\mathcal{G})} / N_{acquire}^{(\mathcal{G}_{\textsc{concat}})}$. We used $n_{att} = 5$, $n_{val} = 10$, $c_{len} = 20$, $V = 4$, and $\mathrm{acc}_{\mathrm{tgt}} = 0.8$. We halt training if $b^{(\mathcal{G})}$ reaches $20$. Table <ref> shows the results. Detailed architectural descriptions of the `Model' column are provided in Appendix <ref>. The remaining columns, except for `Params', show the acquisition time, $b$, for each grammar, relative to concat. We have highlighted in red the scenarios that failed to reach convergence, and in green the scenarios where $b$ was less than 1/3 that of hol, which shows that language acquisition was relatively fast. We can see that for many models, our transformations do not greatly affect the acquisition speed by neural networks. Therefore, in an emergent communication scenario, neural models can generate languages which appear non-compositional both to our current metrics and to human evaluation. Such languages will therefore be deemed `non-compositional' by all current evaluation methods, except for generalization. This might explain the empirically observed lack of correlation between measured language compositionality and generalization in emergent communication experiments.
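To make the three transformations concrete, the following toy Python sketch (our own illustration; the vocabulary size, the word table WORDS, and the permutation PERM are made-up stand-ins, and the paper's actual implementation lives in the linked repository) generates utterances for the concat, perm, and shufdet grammars with $n_{att}=3$:

```python
import random

N_ATT, N_VAL, W = 3, 10, 3          # attributes, values, tokens per word
random.seed(0)

# concat: each (attribute, value) pair maps to a fixed W-token word, so
# changing one attribute changes W adjacent utterance tokens.
WORDS = {(a, v): [random.randrange(4) for _ in range(W)]
         for a in range(N_ATT) for v in range(N_VAL)}

def concat(obj):
    return [tok for a, v in enumerate(obj) for tok in WORDS[(a, v)]]

# perm: a fixed permutation of the token positions (columns) of concat,
# destroying adjacency while keeping a position-wise mapping.
PERM = random.sample(range(N_ATT * W), N_ATT * W)

def perm(obj):
    utt = concat(obj)
    return [utt[p] for p in PERM]

# shufdet: cyclically rotate the W-token blocks by an offset that is a
# deterministic function of the last object attribute.
def shufdet(obj):
    utt = concat(obj)
    blocks = [utt[i:i + W] for i in range(0, len(utt), W)]
    k = obj[-1] % N_ATT
    return [tok for blk in blocks[k:] + blocks[:k] for tok in blk]

print(concat((1, 2, 3)), perm((1, 2, 3)), shufdet((1, 2, 3)))
```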
§.§ Results are independent of number of parameters An obvious concern with Table <ref> is that the number of parameters varies between models, so we vary the number of parameters by changing the hidden size. Table <ref> shows the results. We can see that the acquisition speed relative to concat is not changed much by a 10-fold increase in parameters, compared with the differences between the architectures. This is encouraging: we are not simply viewing an artifact of model size. §.§ RNNZero increases bias against perm Table <ref> shows the effect of not feeding the output of an RNN decoder as the input at each step. Surprisingly, this increases the bias against perm; that is, it actually strengthens a prior towards adjacency. §.§ HUSendZ:dgsend has low bias against shufdet We searched for neural models with reduced bias against shufdet, including using RNN-Z, dgsend [Dagan et al., 2020], and HU-RNN. Table <ref> shows a subset of the results. More results are in Appendix <ref>. `dgsend' acquired shufdet faster than LSTM. HUSend using a vanilla RNN as $\mathrm{RNN}_{l}$ and $\mathrm{RNN}_{u}$ acquired shufdet faster than LSTM. Combining HUSendZ with dgsend acquired shufdet fastest. §.§ End-to-end training We experimented with measuring the compositional inductive bias of a Sender and Receiver model placed end to end; see Appendix <ref>. § CONCLUSION We have shown that it is possible to construct transformations that, when applied to concatenation grammars, result in grammars that machines can learn easily but which humans find challenging to learn. This could explain the disconnect highlighted in recent papers between neural network ability to generalize, in an emergent communication context, and the compositionality of the resulting languages, as measured by recent metrics of compositionality. We propose to use the families of transformations as a benchmark, Icy, for measuring aspects of the compositional inductive bias of neural networks, and for searching for models with similar biases to humans. We use our benchmark to propose one such neural model, HU-RNN, which shows a compositional inductive bias towards relocatable atomic word-like groups of tokens. § REPRODUCIBILITY Full code is provided in the addendum, along with instructions in the README.md. Full code will be published to GitHub following acceptance. Each experiment was run multiple times (usually 5 or 10), using different seeds, and the mean reported. CI95 ranges are available in Appendix <ref>. § ETHICS This work does involve human subjects, who needed to learn to use artificially generated codes to label abstract geometric objects. The annotation device was created as a game that many people found fun to play. We received much feedback, such as `good' and `very interesting task'. None of the languages or figures used for training contain any obvious characteristics which could be deemed racist, sexist, or having any other obvious human-centric harmful biases, as far as we can tell. This work contains no obviously harmful insights, methodologies or applications. There are no obvious conflicts of interest or sponsorship to note. There are no obvious discrimination/bias/fairness concerns to report. There are no obvious issues with privacy, security, or legal compliance. All data provided was artificially generated, and does not present privacy or other issues. We have done our due diligence to ensure the integrity and reproducibility of our research.
Although emergent communication investigates communication between neural models, which learn to generate new languages as part of collaborative tasks, we do not believe that such models are `alive' or `conscious', though we admit that we have no way to determine this objectively. The number of neurons of the models concerned was orders of magnitude fewer than that of the human brain. The models were not exposed to sufficiently varied or complex data for us to believe that they could have developed advanced sentience or perception, although again we admit that we are not aware of an objective `threshold' or similar against which we could compare. [Andreas, 2019] Jacob Andreas. Measuring compositionality in representation learning. In International Conference on Learning Representations, 2019. URL <https://openreview.net/forum?id=HJz05o0qK7>. [Berwick et al., 2012] Robert Berwick, Gabriel Beckers, Kazuo Okanoya, and Johan Bolhuis. A bird’s eye view of human language evolution. Frontiers in Evolutionary Neuroscience, 4:5, 2012. [Brighton & Kirby, 2006] Henry Brighton and Simon Kirby. Understanding linguistic evolution by visualizing the emergence of topographic mappings. Artificial Life, 12(2):229–242, 2006. [Brown et al., 2020] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. [Chaabouni et al., 2020] Rahma Chaabouni, Eugene Kharitonov, Diane Bouchacourt, Emmanuel Dupoux, and Marco Baroni. Compositionality and generalization in emergent languages. arXiv preprint arXiv:2004.09124, 2020. [Cho et al., 2014] Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Alessandro Moschitti, Bo Pang, and Walter Daelemans (eds.), Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pp. 1724–1734. ACL, 2014. URL <https://doi.org/10.3115/v1/d14-1179>. [Crowston, 2012] Kevin Crowston. Amazon mechanical turk: A research tool for organizations and information systems scholars. In Anol Bhattacherjee and Brian Fitzgerald (eds.), Shaping the Future of ICT Research. Methods and Approaches, pp. 210–221, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg. ISBN 978-3-642-35142-6. [Dagan et al., 2020] Gautier Dagan, Dieuwke Hupkes, and Elia Bruni. Co-evolution of language and agents in referential games. arXiv preprint arXiv:2001.03361, 2020. [Foerster et al., 2016] Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 29, pp. 2137–2145.
Curran Associates, Inc., 2016. [Griffiths & Kalish, 2007] Thomas L Griffiths and Michael L Kalish. Language evolution by iterated learning with Bayesian agents. Cognitive Science, 31(3):441–480, 2007. [Hochreiter & Schmidhuber, 1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. [Hopfield, 1982] John J Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982. [Hupkes et al., 2020] Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. Compositionality decomposed: How do neural networks generalise? Journal of Artificial Intelligence Research, 67:757–795, April 2020. [Kharitonov & Baroni, 2020] Eugene Kharitonov and Marco Baroni. Emergent language generalization and acquisition speed are not tied to compositionality. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pp. 11–15, Online, November 2020. Association for Computational Linguistics. URL <https://aclanthology.org/2020.blackboxnlp-1.2>. [Kirby et al., 2008] Simon Kirby, Hannah Cornish, and Kenny Smith. Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. Proceedings of the National Academy of Sciences, 105(31):10681–10686, 2008. [Kolmogorov, 1963] Andrei N Kolmogorov. On tables of random numbers. Sankhyā: The Indian Journal of Statistics, Series A, pp. 369–376, 1963. [Kottur et al., 2017] Satwik Kottur, José Moura, Stefan Lee, and Dhruv Batra. Natural language does not emerge `naturally' in multi-agent dialog. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2962–2967, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. URL <https://www.aclweb.org/anthology/D17-1321>. [Lazaridou et al., 2018] Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. Emergence of linguistic communication from referential games with symbolic and pixel input. In International Conference on Learning Representations, 2018. URL <https://openreview.net/forum?id=HJGv1Z-AW>. [Lei et al., 2018] Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. Simple recurrent units for highly parallelizable recurrence. In Empirical Methods in Natural Language Processing (EMNLP), 2018. [Lewis, 2008] David Lewis. Convention: A philosophical study. John Wiley & Sons, 2008. [Li & Bowling, 2019] Fushan Li and Michael Bowling. Ease-of-teaching and language structure from emergent communication. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 15825–15835, 2019. [Paszke et al., 2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 8024–8035. Curran Associates, Inc., 2019.
[Pinker & Bloom, 1990] Steven Pinker and Paul Bloom. Natural language and natural selection. Behavioral and Brain Sciences, 13(4):707–727, 1990. [Resnick et al., 2020] Cinjon Resnick, Abhinav Gupta, Jakob Foerster, Andrew M. Dai, and Kyunghyun Cho. Capacity, bandwidth, and compositionality in emergent language learning. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '20, pp. 1125–1133, Richland, SC, 2020. International Foundation for Autonomous Agents and Multiagent Systems. ISBN 9781450375184. [Rumelhart et al., 1986] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986. [Vaswani et al., 2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 6000–6010, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. [White & Cotterell, 2021] Jennifer C. White and Ryan Cotterell. Examining the inductive bias of neural language models with artificial languages. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 454–463, Online, August 2021. Association for Computational Linguistics. URL <https://aclanthology.org/2021.acl-long.38>. [Williams, 1992] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992. [Zhang et al., 2020] Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pp. 11328–11339. PMLR, 2020. § EXAMPLE UTTERANCES Table <ref> depicts example utterances for $n_{att}=5$ and $c_{len} = 4 \cdot n_{att}$.
# Quantum Algorithm for Higher-Order Unconstrained Binary Optimization and MIMO Maximum Likelihood Detection Masaya Norimoto, Ryuhei Mori, and Naoki Ishikawa M. Norimoto and N. Ishikawa are with the Faculty of Engineering, Yokohama National University, Kanagawa 240-8501, Japan (e-mail: [email protected]). R. Mori is with the Department of Mathematical and Computing Sciences, School of Computing, Tokyo Institute of Technology, Tokyo 152-8500, Japan (e-mail: [email protected]). This research was partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (Grant Number 22H01484). ###### Abstract In this paper, we propose a quantum algorithm that supports a real-valued higher-order unconstrained binary optimization (HUBO) problem. This algorithm is based on the Grover adaptive search that originally supported HUBO with integer coefficients. Next, as an application example, we formulate multiple-input multiple-output maximum likelihood detection as a HUBO problem with real-valued coefficients, where we use the Gray-coded bit-to-symbol mapping specified in the 5G standard. The proposed approach allows us to construct an efficient quantum circuit for the detection problem and to analyze specific numbers of required qubits and quantum gates, whereas other conventional studies have assumed that such a circuit is feasible as a quantum oracle. To further accelerate the quantum algorithm, we also derive a probability distribution of the objective function value and determine a unique threshold to sample better states. Assuming future fault-tolerant quantum computing, our proposed algorithm has the potential for significantly reducing query complexity in the classical domain and providing a quadratic speedup in the quantum domain. ###### Index Terms: Grover adaptive search (GAS), quadratic unconstrained binary optimization (QUBO), higher-order unconstrained binary optimization (HUBO), multiple-input multiple-output (MIMO), maximum-likelihood detection (MLD). Accepted for publication in IEEE Transactions on Communications. This is the author’s version which has not been fully edited and content may change prior to final publication. Citation information: DOI 10.1109/TCOMM.2023.3244924 ## I Introduction Marconi invented a practical long-range wireless system in 1895. Since then, driven by its intense demand, wireless communication has continued to become more sophisticated as if there were no limits. The limit of communication throughput is known as the Shannon capacity, which is constrained by the bandwidth, the signal-to-noise ratio (SNR), and the numbers of transmit and receive antennas for multiple-input multiple-output (MIMO) scenarios. Clearly, there are physical limits on bandwidth, SNR, and the number of antennas. Forward error correction techniques such as low-density parity-check (LDPC) codes and polar codes can achieve near-capacity performance efficiently, but under a certain energy constraint, their performance is constrained by semiconductor miniaturization limits. Marconi mentioned that it is dangerous to put limits on wireless. However, wireless communication will reach its physical limits in the near future. After the eventual end of Moore’s law, from a long-term perspective, we must rely on a different computing paradigm, and quantum computing in particular is considered to be promising.
Since it is impossible to simulate a quantum computer efficiently on a classical computer, quantum computers offer an essential speed advantage over classical computers [1]. Specifically, Shor’s algorithm [2] factors an $n$-bit integer with the complexity $O(n^{2}\log n\log\log n)$, while the best classical algorithm requires $\exp(\Theta(n^{1/3}\log^{2/3}n))$ operations [1],111$O(\cdot)$ denotes the big-$O$ notation, while $\Theta(\cdot)$ denotes the big-$\Theta$ notation [3]. which is an exponential speedup. Grover’s algorithm [4] finds a specific element in an unsorted database of $N$ elements with the query complexity $O(\sqrt{N})$, while the classic exhaustive search requires $O(N)$ evaluations, which is a quadratic speedup. Note that both long-term algorithms assume the realization of fault-tolerant quantum computing (FTQC), which is not yet available with current technology. Grover’s algorithm has been extended to support binary optimization problems. The pioneering algorithm, Grover adaptive search (GAS) [5], requires a complex quantum circuit to evaluate an objective function. For example, an $m$-qubit register requires $2m-1$ Toffoli gates to perform quantum addition [6], which is still expensive. To solve this issue, Gilliam et al. used the concept of a quantum dictionary, which allows for the efficient representation of an arbitrary polynomial function, including quadratic and higher-order terms [7, 8]. This efficient representation improved the feasibility of GAS, and in [8], a quadratic unconstrained binary optimization (QUBO) problem with integer coefficients was solved on a real-world quantum computer with 32 qubits. Unlike quantum annealing (QA) [9], the GAS proposed by Gilliam et al. is innovative in that it supports a higher-order unconstrained binary optimization (HUBO) problem with integer coefficients, which cannot be solved efficiently with state-of-the-art mathematical programming solvers on a classical computer such as CPLEX222https://www.ibm.com/analytics/cplex-optimizer. In designing wireless systems, the trade-off between performance and complexity is, in general, a source of concern for engineers and researchers. For example, low-complexity MIMO detectors and polar decoders inevitably involve the penalty of lower performance, and complexity is sacrificed to achieve optimal performance. In this situation, the potential for quantum speedup has inspired those who dream of breaking the fundamental trade-off and achieving optimal performance with reduced complexity. A pioneering attempt in wireless communications was provided in [10] by Botsinis et al., who demonstrated the potential of quantum algorithms to reduce the complexity involved in maximum likelihood detection (MLD). Specifically, they used Grover-type algorithms, such as the Boyer–Brassard–Høyer–Tapp (BBHT) [11] and Dürr–Høyer (DH) [12] search algorithms, for performing MLD of data symbols on a quantum computer [13]. Subsequently, a number of important studies have shown promising results [13, 14, 15, 16, 17, 18, 19, 20]. However, in those studies, it was assumed that an ideal quantum circuit to evaluate the objective function is feasible as a quantum oracle, which will be detailed in Section II. For more information on quantum-assisted wireless communications, a comprehensive survey can be found in [21, 22]. Against this background, we propose a quantum algorithm that supports a HUBO problem with real-valued coefficients.
Then, as a first step toward breaking the trade-off between performance and complexity, we formulate the MIMO MLD as a real-valued HUBO problem and verify the potential of quadratic speedup. The major contributions of this paper are summarized as follows. 1. While the conventional GAS [8] supports HUBO with integer coefficients, we modify the quantum algorithm to handle real-valued coefficients. This allows us to solve a HUBO problem even if the objective function contains real-valued coefficients, which is achieved at the cost of one more query in the classical domain (CD).333The definition of query complexity in CD is detailed in Section III-D, which is the same as the conventional study [13]. 2. As an application example, we formulate the objective function of MIMO MLD as a real-valued HUBO problem. This formulation is not a straightforward task because the objective function contains complex-valued random variables and a Frobenius norm calculation. This new formulation allows us to analyze specific numbers of qubits and quantum gates required in the constructed quantum circuits, which has been overlooked in conventional studies. 3. We clarify the probability distribution of the objective function value and determine the threshold used inside GAS more efficiently. Then, we demonstrate that the proposed threshold further accelerates the convergence of GAS to the optimal solution. It is important to note that quantum circuits are sensitive to noise [23], and industrial applications will require decades of sustained effort. Noise induces quantum errors, and quantum error-correcting codes must be used to perform reliable arithmetic on a quantum computer. For example, if we use the surface code with code distance 27, which is one of the quantum error-correcting codes, a logical qubit requires 1568 physical qubits to correct errors [24]. This indicates that even a simple quantum circuit with few qubits, e.g., as in Fig. 1, may require many more physical qubits. Since this limitation is beyond the scope of our contributions, we assume the realization of future FTQC as in the conventional studies [2, 4, 5, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 25]. In fact, IBM’s roadmap for quantum computers is to achieve 4000 qubits by 2025 [26]. In the subsequent years, they expect 10000 to 100000 qubits, enabling quantum error correction. Additionally, it was proved in [27] that quantum advantages are unlikely for optimization on a noisy intermediate-scale quantum device. Therefore, we focus on long-term algorithms assuming FTQC in this paper. The remainder of this paper is organized as follows. Section II is a review of important related works, while in Section III, we introduce the conventional GAS and its modification to support real-valued coefficients. In Section IV, a method to solve MIMO MLD on a quantum computer is proposed, and algebraic and numerical evaluations are given in Section V. Finally, in Section VI, we conclude this paper.
TABLE I: List of important mathematical symbols
$\mathbb{B}$ | | Binary numbers
---|---|---
$\mathbb{R}$ | | Real numbers
$\mathbb{C}$ | | Complex numbers
$\mathbb{Z}$ | | Integers
$N_{\mathrm{t}}$ | $\in\mathbb{Z}$ | Number of transmit antennas
$N_{\mathrm{r}}$ | $\in\mathbb{Z}$ | Number of receive antennas
$L_{\mathrm{c}}$ | $\in\mathbb{Z}$ | Modulation order (constellation size)
$\sigma^{2}$ | $\in\mathbb{R}$ | Noise variance
$\gamma$ | $\in\mathbb{R}$ | Signal-to-noise ratio
$E(\cdot)$ | $\in\mathbb{Z}$ | Objective function
$n$ | $\in\mathbb{Z}$ | Number of binary variables $=$ transmission rate
$m$ | $\in\mathbb{Z}$ | Number of qubits required to encode $E(\cdot)$
$i$ | $\in\mathbb{Z}$ | Index of GAS iterations
$y,y_{i}$ | $\in\mathbb{Z}$ | Threshold that is adaptively updated by GAS
$L,L_{i}$ | $\in\mathbb{Z}$ | Number of Grover operators
$P$ | $\in\mathbb{R}$ | Probability that controls the proposed threshold
$\mathbf{b},\mathbf{b}_{i}$ | $\in\mathbb{B}^{n}$ | Binary variables, or data bits
$\mathbf{s}$ | $\in\mathbb{C}^{N_{\mathrm{t}}\times 1}$ | Data symbols, each denoted by $s_{t}$
$\mathbf{r}$ | $\in\mathbb{C}^{N_{\mathrm{r}}\times 1}$ | Received symbols, each denoted by $r_{u}$
$\mathbf{H}_{\mathrm{c}}$ | $\in\mathbb{C}^{N_{\mathrm{r}}\times N_{\mathrm{t}}}$ | Channel coefficients, each denoted by $h_{ut}$
$\mathbf{v}$ | $\in\mathbb{C}^{N_{\mathrm{r}}\times 1}$ | Additive white Gaussian noise, each denoted by $v_{u}$
Italicized symbols represent scalar values, and bold symbols represent vectors and matrices. Table I summarizes the important mathematical symbols used in this paper. ## II Related Works Quantum computation has the potential to break through the fundamental trade-off between performance and complexity. Hence, it has been applied to multi-user detection [10, 13, 14, 15, 17, 28, 29], multiple symbol differential detection [16], channel coding [30, 31], wireless routing [19, 18], indoor localization [20], intelligent reflecting surfaces [32], and codeword optimization problems [25]. In this section, we introduce important related works targeting detection problems in wireless communications. ### II-A Multi-User Detection Using DH Algorithm [13] Botsinis et al. proposed a novel method of applying the DH algorithm to multi-user detection [13], which is a detection problem for multi-user scenarios. The original DH algorithm [12] is terminated if the sum of the number of Grover iterations becomes greater than or equal to $22.5\sqrt{N}$, where $N$ denotes the search space size. By contrast, Botsinis et al. modified the algorithm to terminate early for an arbitrary number of queries smaller than $22.5\sqrt{N}$. Additionally, the modified algorithm calculates the output of a low-complexity detector, such as the zero-forcing (ZF) or minimum mean square error (MMSE) detector, and exploits the output as an initial value to sample better states. Both contributions are innovative in that they further accelerate the quantum algorithm for a specific problem in wireless communications. The objective function presented in [13] involves a Frobenius norm of complex-valued variables. However, the quantum circuit that evaluates the norm is idealized as an oracle, and no specific construction method is considered. Unlike in [13], we consider specific quantum circuits and analyze their hardware and query complexities, which is the missing piece in the literature. ### II-B MIMO MLD Using QA [28] Kim et al.
formulated MIMO MLD as a QUBO problem and solved it using QA on the D-Wave 2000Q quantum annealer [28]. Specifically, binary phase-shift keying (BPSK) and quadrature phase-shift keying (QPSK) symbols are represented as first-order functions with respect to information bits, while Gray-coded 16-ary quadrature amplitude modulation (QAM) symbols are represented as second-order functions. Since the objective function of MLD contains the squared norm, it may result in a higher-order function, e.g., of fourth or eighth order, which is not supported by QA. To solve this problem, Kim et al. used first-order functions that represent higher-order modulation, such as 16-QAM or 64-QAM, without Gray coding. Then, the objective function contains first- and second-order terms only. To achieve performance equivalent to that of the Gray-coded case, a projection between the bit mappings before and after Gray coding is performed on a classical computer. That is, encoding at the transmitter and decoding at the receiver require additional steps. Unlike in the above study [28] targeting QA, we directly handle the Gray-coded data symbols specified in the 5G standard owing to the proposed real-valued GAS that supports higher-order terms. Our approach is capable of supporting any signal modulation, such as star-QAM and constellation shaping schemes, as long as data symbols can be represented as a function of information bits. ### II-C MIMO MLD Using DH Algorithm [29] Mondal et al. proposed a method to solve MIMO MLD using the DH algorithm [29]. Specifically, to improve the success probability of the algorithm, the uniform selection of the number of Grover operators, $L$, was replaced by random sampling from a Gamma distribution, leading to a better selection of $L$. Here, the Gamma distribution depends on a scale parameter, and the scale parameter depends on the exact number of solutions to be marked. Since the exact number of solutions varies dynamically depending on the threshold, the quantum counting algorithm [33] is crucial, as stated in [29]. Additionally, the concept of reducing the search space was verified. As in [13], a specific construction method for a quantum circuit is not considered in [29]. Herein, we determine a threshold in accordance with the distribution of objective function values, which is known in advance. ## III Grover Adaptive Search (GAS) GAS [8] supports binary optimization problems with integer coefficients, including QUBO and HUBO problems. It requires $n$ qubits for $n$ binary variables $\mathbf{b}\in\mathbb{B}^{n}$ and $m$ qubits for encoding the objective function value $E(\mathbf{b})\in\mathbb{Z}$, resulting in a circuit equipped with $n+m$ qubits. Here, $E(\mathbf{b})$ is an arbitrary polynomial function, such as $E(\mathbf{b})=1+b_{0}-2b_{1}b_{2}$. The classic exhaustive search requires $O(2^{n})$ queries, while GAS requires $O(\sqrt{2^{n}})$ queries, which potentially provides a quadratic speedup. GAS obtains a global minimum solution by amplifying the states in which the objective function value $E(\mathbf{b})$ is smaller than the current threshold $y_{i}\in\mathbb{Z}$. Here, $y_{i}$ is a tentative minimum and $i$ is an iteration count in the CD. We measure the quantum states and update the threshold, which is repeated until a termination condition is satisfied. Before executing GAS, an appropriate number of qubits $m$ must be determined, which is not a straightforward task. The objective function value is expressed in two’s complement representation.
This is because the sign can be identified simply by inspecting the most significant of the $m$ qubits, and this representation simplifies the part of the quantum circuit that identifies the states of interest to be amplified. Let the objective function value or its coefficient be an integer $k$. Then, $m$ must satisfy [8] $\displaystyle-2^{m-1}\leq k<2^{m-1}.$ (1) As the threshold $y_{i}$ is updated in each iteration of GAS, the value to be calculated becomes $E(\mathbf{b})-y_{i}$, which may have a smaller minimum or a larger maximum. Thus, it is necessary to set $m$ large enough to handle $E_{\mathrm{max}}$, $E_{\mathrm{min}}$, and $E_{\mathrm{max}}-E_{\mathrm{min}}$ without overflow, where $E_{\mathrm{max}}$ and $E_{\mathrm{min}}$ are the maximum and minimum of $E(\mathbf{b})$, respectively. For example, when we have $E_{\mathrm{max}}=8$ and $E_{\mathrm{min}}=-6$, the maximum of $E(\mathbf{b})-y_{i}$ may become $E_{\mathrm{max}}-E_{\mathrm{min}}=8-(-6)=14$, and $m=5$ is sufficient to represent the values of $E(\mathbf{b})-y_{i}$. ### III-A Conventional GAS for Integer QUBO [8] We review a specific construction method for the quantum circuit used in GAS. First, a state preparation operator $\mathbf{A}_{y_{i}}$ is constructed, in which an $n$-qubit input register is transformed into the equal superposition of all states and an $m$-qubit register is used to represent the corresponding value $E(\mathbf{b})-y_{i}$. Taking the binary variable $\mathbf{b}$ as a binary number and converting it to a decimal number $b$, the state should be [8] $\displaystyle\mathbf{A}_{y_{i}}\Ket{0}_{n}\Ket{0}_{m}=\frac{1}{\sqrt{2^{n}}}\sum_{b=0}^{2^{n}-1}\Ket{b}_{n}\Ket{E(b)-y_{i}}_{m}.$ (2) This operator $\mathbf{A}_{y_{i}}$ can be composed of the Hadamard gates $\mathbf{H}$, controlled unitary operators $\mathbf{U}_{G}(\theta)$, and the inverse quantum Fourier transform (IQFT). Let $k$ be a constant term in the objective function. The noncontrolled unitary operator $\mathbf{U}_{G}(\theta)$ is defined such that [8] $\displaystyle\mathbf{U}_{G}(\theta)\mathbf{H}^{\otimes m}\Ket{0}_{m}=\frac{1}{\sqrt{2^{m}}}\sum^{2^{m}-1}_{l=0}e^{jl\theta}\Ket{l}_{m},$ (3) where we have $\theta=2\pi k/2^{m}$. That is, it is constructed by $\displaystyle\mathbf{U}_{G}(\theta)=\mathbf{R}(2^{m-1}\theta)\otimes\mathbf{R}(2^{m-2}\theta)\otimes\cdots\otimes\mathbf{R}(2^{0}\theta)$ (4) and the phase gate $\displaystyle\mathbf{R}(\theta)=\begin{bmatrix}1&0\\\ 0&e^{j\theta}\end{bmatrix}.$ (5) Here, phase advance represents integer addition and phase delay represents subtraction. Applying the IQFT to (3) yields exactly one state that represents the original integer value $k$. The interaction between a binary variable and a coefficient can be represented by a controlled qubit. Similarly, the interaction between binary variables can be represented by controlled qubits on a register $\Ket{b}_{n}$. As exemplified in Fig. 1, the constant term $+1$ corresponds to $\mathbf{U}_{G}\left(\pi/4\right)$, the term $+1b_{0}$ corresponds to controlled $\mathbf{U}_{G}\left(\pi/4\right)$, and the term $-2b_{1}b_{2}$ corresponds to controlled $\mathbf{U}_{G}\left(-2\pi/4\right)$. Likewise, higher-order terms, such as third or fourth order, can be represented by increasing the number of controlled qubits. Figure 1: Quantum circuit corresponding to $E(\mathbf{b})=1+b_{0}-2b_{1}b_{2}$. Figure 2: Output probabilities of the circuit shown in Fig. 1: (a) $L=0$; (b) $L=1$; (c) $L=2$.
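The encoding (2)–(5) is easy to reproduce numerically. The following NumPy sketch (a toy of our own, with no variable qubits) prepares $\mathbf{U}_{G}(\theta)\mathbf{H}^{\otimes m}\Ket{0}_{m}$ for the constant $k=-2$ with $m=3$ and applies the IQFT, recovering the two's complement state $\Ket{110}$:

```python
import numpy as np

m, k = 3, -2
N = 2**m

# After the Hadamard layer and U_G(theta) with theta = 2*pi*k/2^m,
# basis state |l> carries the phase e^{j*l*theta}.
theta = 2*np.pi*k / N
psi = np.exp(1j*theta*np.arange(N)) / np.sqrt(N)

# Apply the inverse quantum Fourier transform.
F = np.exp(2j*np.pi*np.outer(np.arange(N), np.arange(N))/N) / np.sqrt(N)
out = F.conj().T @ psi

print(np.argmax(np.abs(out)**2))   # -> 6, i.e. |110>, the 3-bit two's
                                   #    complement representation of -2
```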
In the classic Grover search [4], an oracle operator $\mathbf{O}$ identifies the states of interest and inverts the phases of these states. Only the inverted states are amplified by the Grover operator. The operator $\mathbf{A}_{y_{i}}$ above calculates the values $E(\mathbf{b})-y_{i}$ for all $2^{n}$ states in parallel. Here, states that are better than the current threshold $y_{i}$, i.e., states that satisfy $E(\mathbf{b})-y_{i}<0$, should be marked to find the minimum solution. Since the calculated values are represented in two's complement, we can identify the negative states by focusing only on the first of the $m$ qubits, and $\mathbf{O}$ can be constructed by applying the Z gate only to that qubit. Let the Grover diffusion operator $\mathbf{D}$ be defined by [4] $\displaystyle D_{i,j}=\begin{cases}0&(i\neq j)\\1&(i=j=0)\\-1&(i=j\neq 0)\end{cases}.$ (6) The Grover operator is finally constructed as $\mathbf{G}=\mathbf{A}_{y_{i}}\mathbf{D}\mathbf{A}_{y_{i}}^{\mathrm{H}}\mathbf{O}$, and we evaluate $\mathbf{G}^{L}\mathbf{A}_{y_{i}}\Ket{0}_{n+m}$, which maximizes the amplitudes of the states of interest; let $\mathbf{b}$ denote the measured bit sequence and $y$ the corresponding objective function value. The ideal $L$ that successfully maximizes the amplitude is given by [34] $\displaystyle L_{\mathrm{opt}}=\left\lfloor\frac{\pi}{4}\sqrt{\frac{N}{N_{\mathrm{s}}}}\right\rfloor,$ (7) where $N$ denotes the search space size, $2^{n}$, and $N_{\mathrm{s}}$ denotes the number of solutions. From (7), the query complexity of GAS can be derived as $O(\sqrt{2^{n}})$ [8] in the quantum domain (QD), defined as the total number of applied Grover operators. Since $N_{\mathrm{s}}$, the number of states better than the current threshold, is unknown in advance, $L$ is typically drawn from a uniform distribution ranging from $0$ to a specific value that increases by a factor of $\lambda=8/7$ at each iteration. GAS is terminated if the cumulative number of Grover operators exceeds $22.5\sqrt{2^{n}}$, which is the same criterion as in the conventional DH algorithm. In the Qiskit implementation, the number of times no improvement is observed is also considered as one of the termination conditions. Overall, GAS is summarized in Algorithm 1.

Algorithm 1 Conventional GAS designed for integer coefficients [8].
0: $E:\mathbb{B}^{n}\rightarrow\mathbb{Z},\lambda=8/7$
0: $\mathbf{b}$
1: Uniformly sample $\mathbf{b}_{0}\in\mathbb{B}^{n}$ and set $y_{0}=E(\mathbf{b}_{0})$.
2: Set $k=1$ and $i=0$.
3: repeat
4: Randomly select the rotation count $L_{i}$ from the set $\{0,1,\ldots,\lceil k-1\rceil\}$.
5: Evaluate $\mathbf{G}^{L_{i}}\mathbf{A}_{y_{i}}\Ket{0}_{n+m}$, and obtain $\mathbf{b}$ and $y$. {Grover search}
6: if $y<y_{i}$ then
7: $\mathbf{b}_{i+1}=\mathbf{b},y_{i+1}=y,$ and $k=1$. {Improvement found}
8: else
9: $\mathbf{b}_{i+1}=\mathbf{b}_{i},y_{i+1}=y_{i},$ and $k=\min\{\lambda k,\sqrt{2^{n}}\}$. {No improvement}
10: end if
11: $i=i+1$.
12: until a termination condition is met.

As a specific example, Fig. 1 shows a quantum circuit of GAS that tries to minimize the objective function $E(\mathbf{b})=1+b_{0}-2b_{1}b_{2}$, where the threshold $y_{i}=0$ and the corresponding $\mathbf{A}_{y_{i}=0}$ are considered for simplicity. The upper $n=3$ qubits correspond to the variables $\Ket{b_{0}}$, $\Ket{b_{1}}$, and $\Ket{b_{2}}$, and the lower $m=3$ qubits $\Ket{z}_{3}$ encode the calculated value.
The Hadamard gates at the beginning of $\mathbf{A}_{y_{i}=0}$ initialize the qubits and create an equal superposition of all the possible states, $000000$ to $111111$. The black circles in Fig. 1 indicate control qubits. The unitary operator $\mathbf{U}_{G}(\theta)$ is applied if all the associated control qubits are $1$. Since each control qubit is itself in a superposition, these controlled operations create entanglement between the variable and value registers, which plays a key role in GAS. The IQFT is applied at the end of $\mathbf{A}_{y_{i}=0}$. After that, the Grover operator $\mathbf{G}$ is applied $L$ times, and we measure the quantum state. Fig. 2 shows the probability that each state is measured, where the number of Grover operators was varied from $L=0$ to $2$. The comma-separated labels in this figure show the $n$-qubit and $m$-qubit registers, where the latter is converted to a decimal number. As shown in Fig. 2, when $L=0$, $2^{3}=8$ different states were observed with equal probability, and the corresponding values of the objective function were correctly calculated, demonstrating the potential of quantum computation. When $L=1$ and $2$, only the state of interest $\mathbf{b}=[0~{}1~{}1]$, which yields $E(\mathbf{b})=-1<0$, was successfully amplified by the Grover operator. In this manner, GAS amplifies the states that are better than the current threshold and finds a binary solution that minimizes the objective function.

### III-B Handling of Real-Valued Coefficients [8]

A polynomial may contain real-valued coefficients. To deal with real-valued coefficients, Gilliam et al. proposed the following two methods [8].

#### III-B1 Integer Approximation

Multiplying the objective function by a positive constant does not affect the minimization process. A real-valued coefficient can thus be approximated by multiplying it by a large number and rounding down to an integer. Specifically, the real coefficients are approximated as fractions with a common denominator, the objective function is multiplied by this denominator, and the numerators become the approximated integer coefficients. As can be inferred from (1), the drawback is that the number of required qubits $m$ increases as the value range of the objective function expands. If $m$ is kept small, the approximation becomes less accurate.

#### III-B2 Direct Encoding

In this method, the integer $k$ in $\theta=2\pi k/2^{m}$ of (3) is replaced with a real-valued coefficient $a\in\mathbb{R}$. Then, the output probabilities spread over multiple integers, following what is known as the Fejér distribution. Specifically, the state $\mathbf{U}_{\mathrm{Fej\acute{e}r}}(\theta)\Ket{0}_{m}$ obtained after applying the IQFT to $\mathbf{U}_{G}(\theta)\mathbf{H}^{\otimes m}\Ket{0}_{m}$ is given by [8] (this definition differs in form from that of [8], but is essentially identical) $\displaystyle\mathbf{U}_{\mathrm{Fej\acute{e}r}}(\theta)\Ket{0}_{m}=\sum_{l=0}^{2^{m}-1}\left\langle\mathbf{g}(\theta),\mathbf{g}\left(2\pi l/2^{m}\right)\right\rangle\Ket{l},$ (8) where we have $\theta=2\pi a/2^{m}$ and $\mathbf{g}(\theta)=[1,e^{j\theta},\cdots,e^{j(2^{m}-1)\theta}]/\sqrt{2^{m}}$. The number of qubits $m$ must satisfy [8] $\displaystyle-2^{m-1}\leq a<2^{m-1}.$ (9) In this distribution, the two integers closest to the real number $a$ have larger probabilities than all the others. For example, if $m=3$ qubits and $a=-2.5$, from (8), $-2$ and $-3$ are observed with equal probability. If $a=-2.3$, $-2$ is observed more frequently than $-3$.
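The Fejér behavior of (8) is easy to reproduce numerically with the same FFT model used earlier; `fejer_probs` below is our own illustrative helper, not code from the paper.

```python
import numpy as np

def fejer_probs(a, m):
    """Output distribution of the value register when a real coefficient a
    is encoded directly via theta = 2*pi*a/2^m (Eq. (8))."""
    N = 2 ** m
    psi = np.exp(2j * np.pi * a * np.arange(N) / N) / np.sqrt(N)
    return np.abs(np.fft.fft(psi, norm="ortho")) ** 2

m = 3
for a in (-2.5, -2.3):
    p = fejer_probs(a, m)
    vals = [l if l < 2 ** (m - 1) else l - 2 ** m for l in range(2 ** m)]
    print(a, sorted(zip(p.round(3), vals), reverse=True)[:2])
# a = -2.5: -2 and -3 are observed with equal probability
# a = -2.3: -2 is observed more often than -3
```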
### III-C Proposed GAS for Real-Valued HUBO

As previously reviewed in Section III-B, in their innovative study [8], Gilliam et al. proposed two methods for handling real-valued coefficients, but did not specifically investigate how GAS behaves in the case of direct encoding. In that case, in our evaluation, GAS samples a value of the objective function that obeys the Fejér distribution and may therefore be wrong. For example, if the objective function value is $-2.5$, we may observe an integer value less than or equal to $-3$. Such a value, lower than the actual one, is then recorded as the minimum and set as the new threshold $y_{i}$. Then, no states satisfy $E(\mathbf{b})-y_{i}<0$, and one of all the states is sampled uniformly at random. As a result, GAS will not be able to obtain an optimal solution.

Algorithm 2 Proposed GAS designed for real-valued coefficients.
0: $E:\mathbb{B}^{n}\rightarrow\mathbb{R},\lambda=8/7$
0: $\mathbf{b}$
1: Uniformly sample $\mathbf{b}_{0}\in\mathbb{B}^{n}$ and set $y_{0}=E(\mathbf{b}_{0})$. {This step will be improved in Section IV-C}
2: Set $k=1$ and $i=0$.
3: repeat
4: Randomly select the rotation count $L_{i}$ from the set $\{0,1,\ldots,\lceil k-1\rceil\}$.
5: Evaluate $\mathbf{G}^{L_{i}}\mathbf{A}_{y_{i}}\Ket{0}_{n+m}$, and obtain $\mathbf{b}$.
6: Evaluate $y=E(\mathbf{b})$ in CD. {This is the additional step}
7: if $y<y_{i}$ then
8: $\mathbf{b}_{i+1}=\mathbf{b},y_{i+1}=y,$ and $k=1$.
9: else
10: $\mathbf{b}_{i+1}=\mathbf{b}_{i},y_{i+1}=y_{i},$ and $k=\min\{\lambda k,\sqrt{2^{n}}\}$.
11: end if
12: $i=i+1$.
13: until a termination condition is met.

A possible remedy is to ignore the value $y$ evaluated in QD. Instead, we use the $\mathbf{b}$ returned by GAS and calculate the correct objective function value $y=E(\mathbf{b})$ in CD. Since the quantum circuit using direct encoding amplifies the states of interest with high probability, this simple modification allows GAS to obtain an optimal solution correctly. Overall, the above procedure is summarized in Algorithm 2. This remedy has two major drawbacks. First, Algorithm 2 increases the query complexity in CD, although the asymptotic order remains the same. Second, the probability amplification may not be sufficient, which is illustrated in Fig. 4.

Figure 3: Quantum circuit corresponding to $E(\mathbf{b})=1+b_{0}-1.8b_{1}b_{2}b_{3}$. Figure 4: Output probabilities of the circuit shown in Fig. 3, where only the top 16 states are shown. (a) $L=0$. (b) $L=1$. (c) $L=2$. (d) $L=3$.
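Complementing the listing above, the following is a minimal sketch of Algorithm 2's classical control loop. The quantum evaluation in step 5 is abstracted as a caller-supplied `grover_sample(y, L)` stub (a hypothetical interface of our own), so only the threshold logic and the CD re-evaluation of step 6 are modeled.

```python
import math
import random

def gas_real_valued(E, n, grover_sample, budget=None):
    """Classical control loop of Algorithm 2 (sketch). E maps an n-bit tuple
    to a real value; grover_sample(y, L) stands in for measuring
    G^L A_y |0>_{n+m} and returns a candidate bit tuple."""
    budget = budget or 22.5 * math.sqrt(2 ** n)      # DH-style termination
    b = tuple(random.randint(0, 1) for _ in range(n))
    y, k, used = E(b), 1.0, 0
    while used <= budget:
        L = random.randrange(max(1, math.ceil(k)))   # L in {0, ..., ceil(k-1)}
        used += L
        cand = grover_sample(y, L)
        y_cand = E(cand)                             # correct value in CD (step 6)
        if y_cand < y:
            b, y, k = cand, y_cand, 1.0              # improvement found
        else:
            k = min(k * 8 / 7, math.sqrt(2 ** n))    # no improvement
    return b, y
```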
As a specific example, Fig. 3 shows a quantum circuit corresponding to the objective function $E(\mathbf{b})=1+b_{0}-1.8b_{1}b_{2}b_{3}$, where we set $n=4$ and $m=3$. Since we used the direct encoding method, $-1.8b_{1}b_{2}b_{3}$ was represented as $\mathbf{U}_{G}(-1.8\pi/4)$, and it was associated with the three qubits $\Ket{b_{1}}$, $\Ket{b_{2}}$, and $\Ket{b_{3}}$. Additionally, Fig. 4 shows the output probabilities of the circuit in Fig. 3, where only the top 16 states are shown for the sake of readability. As given in (8), the direct encoding method may not result in a unique integer. For example, the states $(0111,-1)$ and $(0111,-2)$ had positive probabilities in Fig. 4. The state of interest here is $\mathbf{b}=[0~{}1~{}1~{}1]$ with $E(\mathbf{b})=1-1.8=-0.8<0$. That is, before amplitude amplification, at $L=0$, integers close to the real value are observed, and after amplification, at $L>0$, the negative states are observed with high probabilities. As shown in Fig. 4(d), the states $(0111,-1)$ and $(0111,-2)$ were amplified as the number of Grover operators $L$ increased, while the wrong state $(0111,-2)$ was observed with a lower probability than $(0111,-1)$. Another wrong state, $(1111,-1)$, was also observed with a low probability. This is the reason why the correction of the objective function value is required for real-valued GAS, as summarized in Algorithm 2.

### III-D Evaluation Metrics

In the literature, a quantum circuit has been evaluated by its numbers of qubits and gates and by its depth, while a quantum algorithm has been evaluated by its query complexity.

#### III-D1 Numbers of Qubits and Gates and Depth

The size of a quantum circuit determines its feasibility. As the numbers of qubits and gates in a quantum circuit increase, more advanced quantum computation becomes possible. At the same time, however, the circuit becomes more susceptible to noise and more difficult to implement in hardware. In our evaluations, the number of required qubits is represented as $n+m$, and the number of quantum gates is derived as a function of $n$ and $m$.

#### III-D2 Query Complexity [13]

To investigate query complexity, we count how many times the objective function is queried. Specifically, the query complexity in the classical domain (CD) is the number of times the objective function is evaluated, i.e., $i$ in Algorithm 1. By contrast, the query complexity in the quantum domain (QD) is the number of times the Grover operator $\mathbf{G}$ is applied, i.e., $L_{0}+L_{1}+\cdots+L_{i}$ in Algorithm 1. These definitions of query complexity in CD and QD are the same as those used in [13].

## IV Quantum Speedup for MIMO MLD

Conventional studies on quantum-assisted wireless communications have not considered a specific construction method for the quantum circuit. In many cases, the circuit that calculates an objective function has been idealized as a black-box quantum oracle. In this section, we formulate MIMO MLD as a new real-valued HUBO problem, which can be represented by a quantum circuit, as described in Section III. We also analyze the probability distribution of the objective function value to enable further speedup.

### IV-A System Model

Figure 5: System model for MIMO with $N_{\mathrm{t}}$ transmit and $N_{\mathrm{r}}$ receive antennas.

We consider a MIMO communication scenario with $N_{\mathrm{t}}$ transmit antennas and $N_{\mathrm{r}}$ receive antennas, as illustrated in Fig. 5. The input $n$-bit sequence $\mathbf{b}=[b_{0}~{}b_{1}~{}\cdots~{}b_{n-1}]\in\mathbb{B}^{n}$ is mapped to a symbol vector $\mathbf{s}=[s_{0}~{}s_{1}~{}\cdots~{}s_{N_{\mathrm{t}}-1}]\in{\mathbb{C}}^{N_{\mathrm{t}}\times 1}$, where $s_{t}$ for $0\leq t\leq N_{\mathrm{t}}-1$ denotes a Gray-coded data symbol specified in 5G NR [35]. We represent this bit-to-symbol mapper as $\mathbf{s}=M(\mathbf{b})=M(b_{0},\cdots,b_{n-1})$, which will be defined in detail in Section IV-B. The baseband received symbol vector $\mathbf{r}\in\mathbb{C}^{N_{\mathrm{r}}\times 1}$ is given by $\displaystyle\mathbf{r}=\frac{1}{\sqrt{N_{\mathrm{t}}}}\mathbf{H}_{\mathrm{c}}\mathbf{s}+\sigma\mathbf{v},$ (10) where $\mathbf{H}_{\mathrm{c}}\in{\mathbb{C}}^{N_{\mathrm{r}}\times N_{\mathrm{t}}}$ denotes the channel matrix and $\mathbf{v}\in{\mathbb{C}}^{N_{\mathrm{r}}\times 1}$ denotes the additive white Gaussian noise. Here, we assume narrowband Rayleigh flat fading; that is, each element $h_{ut}$ of $\mathbf{H}_{\mathrm{c}}$ and each element $v_{u}$ of $\mathbf{v}$ follow the standard complex Gaussian distribution $\mathcal{CN}(0,1)$ for $0\leq u\leq N_{\mathrm{r}}-1$ and $0\leq t\leq N_{\mathrm{t}}-1$.
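The channel model (10) translates directly into a few lines of NumPy; the helper below (our own naming) draws i.i.d. $\mathcal{CN}(0,1)$ entries and applies the power normalization used in the text, returning both the received vector and the channel realization for later detection.

```python
import numpy as np

rng = np.random.default_rng(0)

def received(s, snr_db, Nr):
    """One channel use of Eq. (10): r = H_c s / sqrt(Nt) + sigma * v,
    with i.i.d. CN(0,1) entries in H_c and v (Rayleigh flat fading)."""
    Nt = len(s)
    sigma = 10 ** (-snr_db / 20)  # SNR gamma = 1 / sigma^2
    Hc = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
    v = (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) / np.sqrt(2)
    return Hc @ s / np.sqrt(Nt) + sigma * v, Hc
```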
The SNR is defined as $\gamma=1/\sigma^{2}$ because the symbol vector satisfies the power constraint $\mathrm{E}\left[\|\mathbf{s}/\sqrt{N_{\mathrm{t}}}\|_{\mathrm{F}}^{2}\right]=\mathrm{E}\left[\sum_{t=0}^{N_{\mathrm{t}}-1}|s_{t}|^{2}/N_{\mathrm{t}}\right]=1$. The constellation size, or modulation order, is denoted by $L_{\mathrm{c}}$, and the transmission rate is calculated as $\displaystyle n=N_{\mathrm{t}}\log_{2}(L_{\mathrm{c}})~{}~{}\text{[bit/symbol]}.$ (11) Corresponding to (10), the ideal MLD is performed as $\displaystyle\hat{b}_{0},\cdots,\hat{b}_{n-1}=\arg\underset{b_{0},\cdots,b_{n-1}}{\min}E(b_{0},\cdots,b_{n-1}),$ (12) where we have the objective function $\displaystyle E(b_{0},\cdots,b_{n-1})=\left\|\mathbf{r}-\frac{1}{\sqrt{N_{\mathrm{t}}}}\mathbf{H}_{\mathrm{c}}M(b_{0},\cdots,b_{n-1})\right\|^{2}_{\mathrm{F}}.$ (13) From (12), the exhaustive search on a classical computer requires a computational time complexity of $O(2^{n})$, which is equivalent to the query complexity in CD. Both complexities increase exponentially with the transmission rate $n$. To mitigate this exponential complexity, a number of low-complexity detectors have been proposed in the literature. The classic ZF detector uses the pseudo-inverse matrix $\displaystyle\mathbf{W}_{\mathrm{ZF}}=\begin{cases}(\mathbf{H}_{\mathrm{c}}^{\mathrm{H}}\mathbf{H}_{\mathrm{c}})^{-1}\mathbf{H}_{\mathrm{c}}^{\mathrm{H}}&(N_{\mathrm{t}}\leq N_{\mathrm{r}})\\\mathbf{H}_{\mathrm{c}}^{\mathrm{H}}(\mathbf{H}_{\mathrm{c}}\mathbf{H}_{\mathrm{c}}^{\mathrm{H}})^{-1}&(N_{\mathrm{t}}>N_{\mathrm{r}})\end{cases}$ (14) and enables independent detection of the data symbols as $\displaystyle\hat{b}_{0},\cdots,\hat{b}_{n-1}=M^{-1}(\mathbf{W}_{\mathrm{ZF}}\mathbf{r}),$ (15) where $M^{-1}\left(\cdot\right)$ denotes the hard-decision symbol-to-bit demapper. Similarly, the MMSE detector uses $\displaystyle\mathbf{W}_{\mathrm{MMSE}}=\begin{cases}(\mathbf{H}_{\mathrm{c}}^{\mathrm{H}}\mathbf{H}_{\mathrm{c}}+\sigma^{2}\mathbf{I})^{-1}\mathbf{H}_{\mathrm{c}}^{\mathrm{H}}&(N_{\mathrm{t}}\leq N_{\mathrm{r}})\\\mathbf{H}_{\mathrm{c}}^{\mathrm{H}}(\mathbf{H}_{\mathrm{c}}\mathbf{H}_{\mathrm{c}}^{\mathrm{H}}+\sigma^{2}\mathbf{I})^{-1}&(N_{\mathrm{t}}>N_{\mathrm{r}})\end{cases}$ (16) and obtains $\displaystyle\hat{b}_{0},\cdots,\hat{b}_{n-1}=M^{-1}(\mathbf{W}_{\mathrm{MMSE}}\mathbf{r}).$ (17) An MMSE-based interference cancelation method has been adopted in typical wireless standards such as 5G NR. The performance of the ZF and MMSE detectors is worse than that of MLD; in general, low-complexity detectors reduce complexity at the sacrifice of performance. The above system model and detectors are typical and common in the field of wireless communications. Since we consider a general MIMO system, the simulation results given in this paper also apply to a multicarrier scenario without inter-subcarrier interference, and to an uplink multi-user scenario in which $N_{\mathrm{t}}$ single-antenna user terminals transmit their symbols and these symbols are received simultaneously at a base station equipped with $N_{\mathrm{r}}$ antennas.
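The linear filters (14) and (16) are a few lines each; the sketch below mirrors the case distinctions in the text. Hard-decision demapping $M^{-1}(\cdot)$ is omitted, and note that with the normalization of (10), $\mathbf{W}\mathbf{r}$ estimates $\mathbf{s}/\sqrt{N_{\mathrm{t}}}$, so the demapper must absorb that scaling.

```python
import numpy as np

def zf_matrix(Hc):
    """Zero-forcing pseudo-inverse W_ZF of Eq. (14)."""
    Nr, Nt = Hc.shape
    HcH = Hc.conj().T
    if Nt <= Nr:
        return np.linalg.inv(HcH @ Hc) @ HcH
    return HcH @ np.linalg.inv(Hc @ HcH)

def mmse_matrix(Hc, sigma2):
    """MMSE filter W_MMSE of Eq. (16)."""
    Nr, Nt = Hc.shape
    HcH = Hc.conj().T
    if Nt <= Nr:
        return np.linalg.inv(HcH @ Hc + sigma2 * np.eye(Nt)) @ HcH
    return HcH @ np.linalg.inv(Hc @ HcH + sigma2 * np.eye(Nr))
```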
### IV-B Proposed Method to Transform MLD into HUBO

As described in Section III, the proposed GAS is capable of solving a real-valued HUBO problem. We transform the objective function of MIMO MLD (13) into a HUBO problem. Specifically, we use the relationship between the transmitted bits and the data symbols, which is specified in the 5G NR standard [35]. The input $n$-bit sequence is denoted by $\mathbf{b}=[b_{0}~{}b_{1}~{}\cdots~{}b_{n-1}]\in\mathbb{B}^{n}$ and the symbol vector is denoted by $\mathbf{s}=[s_{0}~{}s_{1}~{}\cdots~{}s_{N_{\mathrm{t}}-1}]\in\mathbb{C}^{N_{\mathrm{t}}}$. Then, BPSK symbols $\mathbf{s}=M_{2}(\mathbf{b})$ are generated by [35] $\displaystyle s_{t}=\frac{1}{\sqrt{2}}[(1-2b_{t})+j(1-2b_{t})]$ (18) and QPSK symbols $\mathbf{s}=M_{4}(\mathbf{b})$ are generated by [35] $\displaystyle s_{t}=\frac{1}{\sqrt{2}}[(1-2b_{2t})+j(1-2b_{2t+1})].$ (19) Furthermore, 16-QAM symbols $\mathbf{s}=M_{16}(\mathbf{b})$ are generated by [35] $\displaystyle s_{t}=\frac{1}{\sqrt{10}}(1-2b_{4t+0})[2-(1-2b_{4t+2})]+\frac{j}{\sqrt{10}}(1-2b_{4t+1})[2-(1-2b_{4t+3})]$ (20) and 64-QAM symbols $\mathbf{s}=M_{64}(\mathbf{b})$ are generated by [35] $\displaystyle s_{t}=\frac{1}{\sqrt{42}}(1-2b_{6t+0})[4-(1-2b_{6t+2})[2-(1-2b_{6t+4})]]+\frac{j}{\sqrt{42}}(1-2b_{6t+1})[4-(1-2b_{6t+3})[2-(1-2b_{6t+5})]].$ (21) A similar relationship for 256-QAM is defined in [35], and its extension to higher modulation orders can be defined easily.

Figure 6: Constellations of Gray-coded data symbols specified in the 5G NR standard [35].

Overall, Fig. 6 shows the Gray-coded data symbols defined by (18)–(21). Our proposed objective function is obtained by substituting $M(\cdot)$ in (13) with (18), (19), (20), or (21), and it contains the $n$ binary variables $b_{0},\cdots,b_{n-1}$. In the cases of BPSK and QPSK, our objective function results in a quadratic form since both symbols are represented by a linear relationship and the MLD metric (12) contains the square of the Frobenius norm. In the case of 16-QAM, the objective function results in a quartic form since the symbols are represented by a quadratic relationship. Similarly, the objective function results in a sextic form in the 64-QAM case. The use of the data symbols specified in 5G NR is not straightforward, since the objective function inevitably contains higher-order terms if the modulation order is 16 or higher. Thus, in this form, the conventional QA requires a transformation from HUBO to QUBO, and this transformation involves an increase in the number of binary variables, making the problem more difficult. Our proposed approach is only possible with the aid of the real-valued support of GAS. Because of GAS, the query complexity is expected to be reduced from $O(2^{n})$ to $O(\sqrt{2^{n}})$. The structure of the proposed objective function depends only on the number of transmit antennas, $N_{\mathrm{t}}$, the number of receive antennas, $N_{\mathrm{r}}$, and the modulation order, $L_{\mathrm{c}}$. The coefficients of the objective function change depending on the channel matrix $\mathbf{H}_{\mathrm{c}}$. The calculation cost of these coefficients determines the complexity of the classical processing required before executing GAS, which relates to the latency of the algorithm.
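Since (18)–(21) are given explicitly, the bit-to-symbol mapper is easy to sketch in code; `map_5gnr` below is an illustrative name of our own, and the formulas are taken directly from the equations above.

```python
import numpy as np

def map_5gnr(bits, Lc):
    """Bit-to-symbol mapper s = M(b) of Eqs. (18)-(21) (5G NR Gray mapping)."""
    b = np.asarray(bits)
    q = int(np.log2(Lc))
    d = 1 - 2 * b.reshape(-1, q)  # one row of (1 - 2b) factors per symbol
    if Lc == 2:    # BPSK, Eq. (18)
        return (d[:, 0] + 1j * d[:, 0]) / np.sqrt(2)
    if Lc == 4:    # QPSK, Eq. (19)
        return (d[:, 0] + 1j * d[:, 1]) / np.sqrt(2)
    if Lc == 16:   # 16-QAM, Eq. (20)
        return (d[:, 0] * (2 - d[:, 2]) + 1j * d[:, 1] * (2 - d[:, 3])) / np.sqrt(10)
    if Lc == 64:   # 64-QAM, Eq. (21)
        return (d[:, 0] * (4 - d[:, 2] * (2 - d[:, 4]))
                + 1j * d[:, 1] * (4 - d[:, 3] * (2 - d[:, 5]))) / np.sqrt(42)
    raise ValueError("unsupported modulation order")
```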
If we approximate the computational complexity by the number of real-valued multiplications, the largest burden is the products of channel coefficients, such as $h_{00}h_{01}^{*}$. The total number of such multiplications is calculated as $4N_{\mathrm{r}}N_{\mathrm{t}}(N_{\mathrm{t}}-1)/2=O(N_{\mathrm{r}}N_{\mathrm{t}}^{2})$, which is sufficiently small with respect to the detection complexity.

##### Example (QPSK)

As a specific example, we consider the QPSK case (19) with $N_{\mathrm{t}}=N_{\mathrm{r}}=2$. The objective function of (13) can be transformed into $\displaystyle E(b_{0},b_{1},b_{2},b_{3})=2\sum_{u=0}^{1}\sum_{t=0}^{1}\left(\mathrm{Re}(h_{ut}r_{u}^{*})b_{2t}-\mathrm{Im}(h_{ut}r_{u}^{*})b_{2t+1}\right)+2a_{1}(b_{0}b_{2}+b_{1}b_{3})+2a_{2}(b_{0}b_{3}-b_{1}b_{2})-(a_{1}+a_{2})(b_{0}+b_{3})-(a_{1}-a_{2})(b_{1}+b_{2}),$ (22) where we have $a_{1}=\mathrm{Re}(h_{00}h_{01}^{*})+\mathrm{Re}(h_{10}h_{11}^{*})$ and $a_{2}=\mathrm{Im}(h_{00}h_{01}^{*})+\mathrm{Im}(h_{10}h_{11}^{*})$. This function (22) is in a quadratic form.

##### Example (16-QAM)

Figure 7: Quantum circuit corresponding to the objective function of 16-QAM detection.

Additionally, Fig. 7 exemplifies a specific quantum circuit for the 16-QAM case with $N_{\mathrm{t}}=N_{\mathrm{r}}=2$, where we have $n=8$ qubits for the binary variables, $m=5$ qubits for the real-valued encoding, the random channel coefficients $\displaystyle\mathbf{H}_{\mathrm{c}}=\begin{bmatrix}0.748510757437062-0.014877263039446401j&1.3215983896521515+0.06298233870206783j\\0.6371630706424066-0.14262155021296025j&-0.3888005272494009-0.15170387681055802j\end{bmatrix},$ (23) and the original information bits $00110101$. As shown in Fig. 7, the objective function results in a quartic form, $E(\mathbf{b})=1.22b_{0}b_{2}b_{4}b_{6}+0.61b_{0}b_{2}b_{4}+\cdots$, using (13) and (20). As an example, the coefficient of $b_{0}b_{2}b_{4}b_{6}$ is calculated as $\frac{1}{2}\cdot\frac{4}{\sqrt{10}}\cdot\frac{4}{\sqrt{10}}(h_{00}h_{01}^{*}+h_{00}^{*}h_{01}+h_{10}h_{11}^{*}+h_{10}^{*}h_{11})=1.22$, which is rounded to two decimal places for simple illustration.

### IV-C Proposed Threshold for Further Speedup

GAS obtains a global minimum solution by updating the threshold value $y_{i}$ and amplifying the probability amplitudes corresponding to values smaller than the threshold. The query complexity can be reduced by setting the initial threshold in a manner different from classic random sampling, although the asymptotic performance may not change. In this section, we derive the probability distribution of the objective function value and use it to determine a strict threshold, which enables further speedup. If the information bits in (13) are estimated correctly, the minimum value of (13) is the squared Frobenius norm of the additive noise $\mathbf{v}\in\mathbb{C}^{N_{\mathrm{r}}\times 1}$: $\displaystyle E_{\mathrm{min}}=\underbrace{\sigma^{2}}_{\text{known}}\underbrace{\sum_{u=0}^{N_{\mathrm{r}}-1}|v_{u}|^{2}}_{\text{unknown}}.$ (24) That is, $E_{\mathrm{min}}$ depends on the noise variance $\sigma^{2}$, which is typically known at the receiver, and on the instantaneous noise $v_{u}$, which is unknown in any case. Since the noise is assumed to follow the complex Gaussian distribution, each magnitude $|v_{u}|$ follows the Rayleigh distribution, and its square $|v_{u}|^{2}$ follows the exponential distribution.
As a result, $E_{\mathrm{min}}$ in (24) follows the Erlang distribution, whose probability density function is $\displaystyle f(y)=\frac{\gamma^{N_{\mathrm{r}}}y^{N_{\mathrm{r}}-1}e^{-\gamma y}}{(N_{\mathrm{r}}-1)!},$ (25) where we have the SNR $\gamma=1/\sigma^{2}$. The corresponding cumulative distribution function (CDF) is given by $\displaystyle F(y)=\mathrm{Pr}[Y\leq y]=1-e^{-\gamma y}\sum_{u=0}^{N_{\mathrm{r}}-1}\frac{(\gamma y)^{u}}{u!}.$ (26)

Figure 8: Cumulative distribution of the minimum of the objective function values (27).

As an example, if we consider the case with $N_{\mathrm{r}}=2$, the CDF is calculated as $\displaystyle F(y)=\mathrm{Pr}[Y\leq y]=1-e^{-\gamma y}(1+\gamma y)$ (27) from (26). Fig. 8 exemplifies the CDF (27) when the SNR is varied from $\gamma=5$ to $20$ dB. Additionally, the CDF of the simulated objective function values with $N_{\mathrm{t}}=2$ and QPSK is also plotted. As shown in Fig. 8, if the SNR is sufficiently high, e.g., above 10 dB, the theoretical and simulated curves coincide. Thus, it is possible to know in advance, with a very high degree of certainty, that the minimum value to be calculated lies below a certain threshold. Given an SNR $\gamma$, the theoretical values of (26) can be used to determine a strict threshold. From (26), the probability that the threshold $y$ is below the minimum value is $\displaystyle\mathrm{Pr}[Y>y]=e^{-\gamma y}(1+\gamma y).$ (28) Let $\tilde{y}$ be the threshold to be determined and $P$ be a small constant probability, such as $P=10^{-3}$ or $10^{-4}$. Replacing $y$ and $\mathrm{Pr}[Y>y]$ in (28) with $\tilde{y}$ and $P$ yields $\displaystyle P=e^{-\gamma\tilde{y}}(1+\gamma\tilde{y}).$ (29) Dividing both sides by $-e$ gives $\displaystyle-\frac{P}{e}=-(1+\gamma\tilde{y})e^{-(1+\gamma\tilde{y})}.$ (30) Then, using the Lambert $W$ function, we obtain $\displaystyle W_{-1}\left(-\frac{P}{e}\right)=-(1+\gamma\tilde{y})=-\left(1+\frac{\tilde{y}}{\sigma^{2}}\right),$ (31) where $W_{-1}(\cdot)$ denotes the lower branch of the Lambert $W$ function, i.e., $W_{-1}(\cdot)\leq-1$ and $W_{-1}(-1/e)=-1$. Finally, the threshold to be determined is $\displaystyle\tilde{y}=\underbrace{\sigma^{2}}_{\text{known}}\underbrace{\nu}_{\text{known}},$ (32) which is similar in form to (24), with $\displaystyle\nu=-1-W_{-1}\left(-\frac{P}{e}\right).$ (33) Here, $\nu$ is a positive constant and is calculated once before running our proposed algorithm. For example, we have $\nu=9.23$ if $P=10^{-3}$ and $\nu=11.8$ if $P=10^{-4}$. For further speedup, we opt to use the output of the MMSE detector (16). In [13], Botsinis et al. proposed the MMSE-based threshold $\displaystyle\bar{y}=E(\bar{\mathbf{b}}_{0}),$ (34) where $\bar{\mathbf{b}}_{0}=M^{-1}(\mathbf{W}_{\mathrm{MMSE}}\mathbf{r})$ is a rough estimate. Our proposed threshold $\tilde{y}$, which is simpler to compute than $\bar{y}$, can also be used together with $\bar{y}$. Specifically, we calculate both $\tilde{y}$ and $\bar{y}$ at the beginning of Algorithm 2, set the initial threshold to the smaller of the two, and initialize the first solution with $\bar{\mathbf{b}}_{0}$.
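The constant $\nu$ of (33) can be evaluated with SciPy's lower-branch Lambert $W$; the helper name below is ours, and the printed values reproduce the ones quoted in the text.

```python
import numpy as np
from scipy.special import lambertw

def strict_threshold(sigma2, P):
    """Proposed threshold of Eqs. (32)-(33): y~ = sigma^2 * nu with
    nu = -1 - W_{-1}(-P/e), using the lower branch of the Lambert W."""
    nu = -1.0 - np.real(lambertw(-P / np.e, k=-1))
    return sigma2 * nu

print(round(-1 - np.real(lambertw(-1e-3 / np.e, k=-1)), 2))  # ~9.23 for P = 1e-3
print(round(-1 - np.real(lambertw(-1e-4 / np.e, k=-1)), 1))  # ~11.8 for P = 1e-4
```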
Let $\mathbf{b}_{0}$ be a random $n$-bit sequence. The initial threshold used for the proposed Algorithm 2 can then be summarized as follows: $\displaystyle y_{0}=\begin{cases}E(\mathbf{b}_{0})&\text{(original GAS [8])}\\\bar{y}&\text{(MMSE-based threshold [13])}\\\tilde{y}&\text{(proposed threshold)}\\\min(\bar{y},\tilde{y})&\text{(proposed combination)}\end{cases}.$ (35) One problem with the proposed threshold $\tilde{y}$ and the combination $\min(\bar{y},\tilde{y})$ is that they may become smaller than the actual minimum. In this case, since there are no states of interest, GAS remains in a state where the solution $\mathbf{b}_{i}$ in Algorithm 2 is never updated. The probability of this undesirable event occurring is $P$, i.e., $\displaystyle\mathrm{Pr}[\tilde{y}<E_{\mathrm{min}}]=\mathrm{Pr}[\min(\bar{y},\tilde{y})<E_{\mathrm{min}}]=P,$ (36) because we have the relationship $\mathrm{Pr}[\bar{y}<E_{\mathrm{min}}]=0$. It can therefore be expected that the proposed threshold $\tilde{y}$ may degrade the bit error ratio (BER) significantly if $P$ is chosen inappropriately. Specifically, the BER of the proposed threshold $\tilde{y}$ is approximated by $\displaystyle P\cdot 0.5+(1-P)\cdot\mathrm{BER}_{\mathrm{MLD}},$ (37) where $\mathrm{BER}_{\mathrm{MLD}}$ is the BER of MLD. In the proposed combination method, we initialize the first solution with the MMSE output $\bar{\mathbf{b}}_{0}$. Since the initial threshold $\min(\bar{y},\tilde{y})$ becomes smaller than the actual minimum with probability $P$, the BER of the proposed combination method is approximated by $\displaystyle P\cdot\mathrm{BER}_{\mathrm{MMSE}}+(1-P)\cdot\mathrm{BER}_{\mathrm{MLD}},$ (38) where $\mathrm{BER}_{\mathrm{MMSE}}$ is the BER of the MMSE detector. Both (37) and (38) indicate that the design of $P$ has no significant effect as long as it is smaller than $\mathrm{BER}_{\mathrm{MLD}}$, which can be calculated exactly in closed form in advance. For our performance analysis, the effect of $P$ is shown in Fig. 13.

## V Performance Analysis

In this section, we analyze the number of quantum gates required by GAS, which is represented as a function of the numbers of qubits $n$ and $m$. Then, we investigate the performance of the proposed formulation in terms of BER and evaluate the proposed algorithm in terms of the rate of convergence. Here, both integer approximation and direct encoding are considered. Finally, we evaluate the effects of the proposed threshold.

### V-A Algebraic Analysis of the Number of Quantum Gates

A quantum circuit for GAS is composed of $\mathbf{H}$, $\mathbf{X}$, $\mathbf{Z}$, phase, and controlled-phase gates, and the IQFT. In particular, the state preparation operator $\mathbf{A}_{y_{i}}$ is the most complex part, corresponding to the objective function, and is dynamically reconfigured in accordance with the threshold $y_{i}$. In the quantum circuit of $\mathbf{A}_{y_{i}}$, the number of controlled-phase gates depends on the number of terms in the objective function. We therefore derive, in an algebraic manner, the number of terms in the objective function corresponding to each order; as a sanity check, the expansion can also be performed symbolically, as sketched below.
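The following SymPy sketch (our own, illustrative) expands the objective (13) for a random QPSK channel realization, applies the idempotency $b^{2}=b$, and counts the surviving monomials by degree, matching the $n(n-2)/2$ quadratic terms derived below.

```python
import numpy as np
import sympy as sp

C = 2 ** -0.5  # 1/sqrt(2)

def qpsk_parts(bits):
    """Real and imaginary parts of a QPSK symbol, Eq. (19)."""
    return (1 - 2 * bits[0]) * C, (1 - 2 * bits[1]) * C

def count_terms(Nt, Nr, q, parts):
    """Expand the MLD objective (13) symbolically for a random channel and
    count the surviving monomials by degree (cross-check of Table II)."""
    rng = np.random.default_rng(1)
    n = Nt * q
    b = sp.symbols(f"b0:{n}", real=True)
    sR, sI = zip(*(parts(b[q * t:q * (t + 1)]) for t in range(Nt)))
    E = sp.Integer(0)
    for u in range(Nr):
        hR, hI = rng.standard_normal(Nt), rng.standard_normal(Nt)
        rR, rI = float(rng.standard_normal()), float(rng.standard_normal())
        eR = rR - sum(float(hR[t]) * sR[t] - float(hI[t]) * sI[t] for t in range(Nt))
        eI = rI - sum(float(hR[t]) * sI[t] + float(hI[t]) * sR[t] for t in range(Nt))
        E += sp.expand(eR ** 2 + eI ** 2)
    E = E.replace(lambda x: x.is_Pow and x.base in b, lambda x: x.base)  # b^2 = b
    counts = {}
    for monom, coeff in sp.Poly(sp.expand(E), *b).terms():
        d = sum(monom)
        if d > 0 and abs(float(coeff)) > 1e-9:
            counts[d] = counts.get(d, 0) + 1
    return counts

# n = 4 variables: 4 linear terms and n(n-2)/2 = 4 quadratic terms expected.
print(count_terms(Nt=2, Nr=2, q=2, parts=qpsk_parts))  # -> {1: 4, 2: 4}
```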
Ignoring the power scaling factor, the objective function of MIMO MLD (13) is transformed into $\displaystyle\sum_{u=0}^{N_{\mathrm{r}}-1}\left|r_{u}-h_{u0}s_{0}-h_{u1}s_{1}-\cdots-h_{u(N_{\mathrm{t}}-1)}s_{N_{\mathrm{t}}-1}\right|^{2}=\sum_{u=0}^{N_{\mathrm{r}}-1}\left(r_{u}-h_{u0}s_{0}-\cdots-h_{u(N_{\mathrm{t}}-1)}s_{N_{\mathrm{t}}-1}\right)\left(r_{u}-h_{u0}s_{0}-\cdots-h_{u(N_{\mathrm{t}}-1)}s_{N_{\mathrm{t}}-1}\right)^{*}.$ (39) Here, we focus on three types of terms: first-order terms such as $-r^{*}_{0}h_{00}s_{0}$ and $-r_{0}h^{*}_{00}s^{*}_{0}$, squares of the same symbol such as $|h_{00}|^{2}|s_{0}|^{2}$ and $|h_{01}|^{2}|s_{1}|^{2}$, and products of two different symbols such as $h_{00}h^{*}_{01}s_{0}s^{*}_{1}$ and $h^{*}_{00}h_{01}s^{*}_{0}s_{1}$. For example, in the relatively simple QPSK case, the squares of the same symbol result in constant terms because of (19). The first-order terms directly result in first-order terms with respect to the binary variables. The products of two symbols result in products of binary variables. If $N_{\mathrm{t}}=2$ and $N_{\mathrm{r}}=2$, four second-order terms appear: $b_{0}b_{2},b_{1}b_{3},b_{0}b_{3},$ and $b_{1}b_{2}$. The number of corresponding symbol pairs equals the number of combinations of two antennas chosen from $N_{\mathrm{t}}$, i.e., $\displaystyle{N_{\mathrm{t}}\choose 2}=\frac{N_{\mathrm{t}}(N_{\mathrm{t}}-1)}{2}=\frac{n(n-2)}{8},$ (40) where we have the relationship $n=N_{\mathrm{t}}\cdot\log_{2}(L_{\mathrm{c}})=2N_{\mathrm{t}}$. In total, the number of second-order terms is calculated as $4\cdot n(n-2)/8=n(n-2)/2$.

TABLE II: Number of quantum gates required for $\mathbf{A}_{y_{i}}$ ($n$-bit transmission with $m$-bit accuracy)

| Gate | BPSK | QPSK | 16-QAM | 64-QAM |
|---|---|---|---|---|
| H | $n+m=O(n+m)$ | $n+m=O(n+m)$ | $n+m=O(n+m)$ | $n+m=O(n+m)$ |
| R | $m=O(m)$ | $m=O(m)$ | $m=O(m)$ | $m=O(m)$ |
| 1-CR | $nm=O(nm)$ | $nm=O(nm)$ | $nm=O(nm)$ | $nm=O(nm)$ |
| 2-CR | $n(n-1)m/2=O(n^{2}m)$ | $n(n-2)m/2=O(n^{2}m)$ | $n(n-3)m/2=O(n^{2}m)$ | $n(n-4)m/2=O(n^{2}m)$ |
| 3-CR | $0$ | $0$ | $n(n-4)m/2=O(n^{2}m)$ | $n(n-6)m+nm/3=O(n^{2}m)$ |
| 4-CR | $0$ | $0$ | $n(n-4)m/8=O(n^{2}m)$ | $5n(n-6)m/6=O(n^{2}m)$ |
| 5-CR | $0$ | $0$ | $0$ | $n(n-6)m/3=O(n^{2}m)$ |
| 6-CR | $0$ | $0$ | $0$ | $n(n-6)m/18=O(n^{2}m)$ |
| IQFT | $1$ | $1$ | $1$ | $1$ |

Extending the QPSK case, we counted the number of terms in the objective function for each modulation order and derived the number of quantum gates required by GAS. Table II summarizes the derived results, where the quantum gates are categorized by type. As given in Table II, the number of controlled-phase gates mainly depends on the number of binary variables $n$. Here, 1-CR denotes the controlled-phase gate, and 2-CR, 3-CR, $\cdots$ denote multi-controlled phase gates. Since we have the relationship $n=N_{\mathrm{t}}\cdot\log_{2}(L_{\mathrm{c}})$, the quantum circuit becomes increasingly complex with the square of the number of antennas $N_{\mathrm{t}}$ and with the modulation order $L_{\mathrm{c}}$. We analyze the number of quantum gates in the entire circuit $\mathbf{G}^{L_{i}}\mathbf{A}_{y_{i}}\Ket{0}_{n+m}$, where we have $\mathbf{G}=\mathbf{A}_{y_{i}}\mathbf{D}\mathbf{A}_{y_{i}}^{\mathrm{H}}\mathbf{O}$. In each iteration, the Grover operator is applied $L_{i}$ times, where $L_{i}$ is drawn uniformly at random.
$\mathbf{O}$ is composed of a single $\mathbf{Z}$ gate and $\mathbf{D}$ is the Grover diffusion operator, each of which is repeated $L_{i}$ times. The other part contains $(2L_{i}+1)(n+m)$ $\mathbf{H}$ gates, $(2L_{i}+1)m$ phase gates, $(2L_{i}+1)c$ controlled-phase gates, and $(2L_{i}+1)$ IQFT, where $c$ is the number of controlled-phase gates given in Table II. In summary, $N_{\mathrm{t}}$ and $L_{\mathrm{c}}$ affect the number of gates by the power of two, while $m$ and $L_{i}$ affect it in direct proportion. Real quantum computers rely on decomposed elementary gates: single unitary gates and controlled NOT gates [36]. Specifically, a phase gate with $c$ control qubits is decomposed into $O(c)$ elementary gates. Thus, according to Table II, the number of elementary gates required for controlled-phase gates is $O(cn^{2}m)$ in total. Additionally, IQFT requires $O(m^{2})$ elementary gates [1]. Here, the only parameters that can be designed are $n$ and $m$. With the aid of our efficient HUBO formulation, the number of qubits $n$ cannot be further reduced, since the search space size of MIMO ML detection is $2^{n}$. Later, we investigate whether our real-valued GAS can reduce $m$. ### V-B Effects of Integer Approximation Figure 9: BER for the QPSK case with $N_{\mathrm{t}}=N_{\mathrm{r}}=2$. First, Fig. 9 shows BER of the classic MLD and the proposed formulations that consider the integer approximation with different accuracies. Specifically, the real values were multiplied by 1, 3, 10 or 20, and approximated by rounding them to the nearest integers. As references, the BER curves of ZF, MMSE, and the real-valued formulation were also plotted. To analyze the effects of approximation accuracy, BER values were calculated using the state- of-the-art optimization solver, IBM CPLEX, instead of quantum simulations. As shown in Fig. 9, BER performance varied significantly depending on the accuracy of the conventional integer approximation. High approximation accuracy leads to large integers, resulting in an increase in the number of qubits $m$. In contrast, the proposed real-valued formulation achieved the same performance as the classic MLD. This observation indicates that the proposed real-valued GAS algorithm must be invoked to solve the MIMO MLD problem on a quantum computer. (a) QPSK ($3$x approximation, $n=4$, $m=6$ qubits). (b) 16-QAM ($14$x approximation, $n=8$, $m=8$ qubits). Figure 10: Average objective function values with the integer approximation and $N_{\mathrm{t}}=N_{\mathrm{r}}=2$. Next, Fig. 10 shows the average objective function values when increasing the number of iterations, where iterations in both CD and QD were considered. We assumed a sufficiently high SNR and the fixed $2\times 2$ channel matrix given in (23).555Note that we observed the same trend for different channel coefficients and SNRs. We used the original GAS with a random initial threshold and terminated the simulation if the objective function value remained the same more than 20 times in CD. In Fig. 10(a), real values were multiplied by 3 and rounded down to integers, and in Fig. 10(b), real values were multiplied by 14 and were approximated. The number of qubits $m$ required for encoding the value $E(\mathbf{b})-y_{i}$ was set to an integer sufficient not to overflow, i.e., $m=6$ in Fig. 10(a) and $m=8$ in Fig. 10(b). Note again that the integer approximation requires more qubits to encode the value. 
Because quantum simulations with $n+m=16$ qubits are time-consuming, we fixed the input bits to $00110101$ in Fig. 10(b), while the bits were generated randomly in Fig. 10(a). For a clear illustration, we added a constant value to the objective function so that $E_{\mathrm{min}}=0$. It was observed in Fig. 10(a) that the query complexities of GAS in CD and QD were almost the same as in the exhaustive search of MLD. By contrast, in Fig. 10(b), GAS exhibited better query complexities in both CD and QD than did MLD. That is, the quantum advantage improved as the problem size increased. ### V-C Effects of Direct Encoding (a) QPSK ($n=4$, $m=5$ qubits). (b) 16-QAM ($n=8$, $m=5$ qubits). Figure 11: Average objective function values with direct encoding and $N_{\mathrm{t}}=N_{\mathrm{r}}=2$. Similar to Fig. 10, Fig. 11 shows the average objective function values when increasing the number of iterations in CD and QD, where we used direct encoding. The simulation parameters were the same as those used in Fig. 10 except for the real-valued expression and the number of required qubits $m$. Specifically, the number of qubits $m=6$ in Fig. 10(a) was reduced to $m=5$ in Fig. 11(a). Similarly, the number of qubits was reduced from $m=8$ to $m=5$ in Fig. 11(b). As shown in Fig. 11, the same trend as in Fig. 10 was observed. The important aspect here is that almost the same query complexities were achieved despite the reduction in the number of required qubits $m$. Hence, our proposed real-valued GAS is capable of reducing the size of quantum circuits while maintaining a good performance. Depending on the channel coefficients and noise, the integer approximation requires a different number of qubits. Since both follow the standard Gaussian distribution, the probability of 0 is the highest, and to deal with smaller values, a larger factor must be multiplied to the objective function, resulting in a larger $m$. By contrast, direct encoding is capable of keeping $m$ constant. The only disadvantage is that the probability amplification of $\mathbf{G}^{L}$ may become insufficient, which was also demonstrated in Fig. 4. Figure 12: Number of queries required to reach the optimal solution. To investigate the disadvantage of the proposed real-valued GAS and insufficient amplification, in Fig. 12, we generated random channel coefficients and investigated the probability density distribution of the number of queries required to reach the optimal solution, where the parameters were the same as those used in Fig. 10(a) except that $m$ was minimized depending on the random channel coefficients. It was observed in Fig. 12 that query complexities in CD and QD increased compared with the ideal case. Here, the same trend was observed for different SNRs. Albeit at this expense, the proposed algorithm could reach the optimal solution in any case. Note that the integer approximation with the same $m$ as in direct encoding could not be plotted in Fig. 12 because it was unable to reach the solution in most cases. ### V-D Effects of Initial Threshold for Further Speedup (a) CD. (b) QD. Figure 13: BER transition with respect to the number of iterations, where we used random channel coefficients, QPSK, $N_{\mathrm{t}}=2$, $N_{\mathrm{r}}=2$, and $\text{SNR}=20$ dB. Finally, in Fig. 13, we show the results of evaluating the proposed initial threshold for GAS described in Section IV-C. 
Here, we averaged BER with random channel coefficients and noise, considered SNR of $20$ dB, and assumed idealized quantum circuits to examine the impact of the initial threshold only. Other parameters were the same as those used in Fig. 11(a). Fig. 13(a) shows the number of queries in CD, while Fig. 13(b) shows these in QD. Note that the vertical axis is BER rather than the objective function value. Specifically, at the left end of Fig. 13, BER of $0.5$ corresponds to the bit errors between the input bits $\mathbf{b}$ and the random bits $\mathbf{b}_{0}$, and BER of $6.8\times 10^{-3}$ corresponds to the errors between $\mathbf{b}$ and the MMSE output $\bar{\mathbf{b}}_{0}$. As shown in Fig. 13, in both CD and QD, the proposed threshold, $\tilde{y}$, converged to the optimal solution much faster than the classic random threshold. The slopes in the random and proposed thresholds differed significantly. This is because the random threshold ranged from the best to the worst cases, resulting in slow convergence in some cases. To be more specific, in Fig. 13, the variance of the random threshold $E(\mathbf{b}_{0})$ was $14.30$ at the first iteration. By contrast, the proposed threshold $\tilde{y}$ is determined by constant factors, $P$ and SNR, and its variance is always zero. Thus, it significantly improved convergence on average. It was also found in Fig. 13 that the proposed threshold achieved the best performance for $P=10^{-4}$ and exhibited lower performance for $P=10^{-3}$. As described in Section IV-C, $P$ equals the probability that GAS is in a state where the solution is not updated. That is, an event of $\mathrm{BER}=0.5$ occurred with probability $P=10^{-3}$, and it resulted in the error floor of BER around $10^{-3}$. This result indicates that the parameter $P$ has no significant impact if it is smaller than BER. Since the exact BER at a given SNR can be calculated in a closed form in advance, an appropriate $P$ can also be determined in advance accordingly. Additionally, in Fig. 13, the proposed threshold combined with the MMSE output achieved a faster convergence compared with the conventional MMSE only case. This improvement was greater for CD than for QD. That is, the proposed threshold is particularly useful for improving the query complexity in CD. This is because it aims to set a strict threshold even in the case of erroneous MMSE output. Errors in MMSE estimation lead to higher objective function values and may increase the number of solutions, which can be avoided by adopting the proposed combination. To be more specific, at the first iteration, the variance of the MMSE-based threshold $\bar{y}$ was $1.58\cdot 10^{-2}$, while that of the proposed combination $\min(\bar{y},\tilde{y})$ was much smaller, $3.76\cdot 10^{-5}$, resulting in the faster convergence. The performance advantage increased upon increasing SNR, which can be verified from the results shown in Fig. 8. As confirmed in Fig. 8, the gap between simulated and theoretical values decreased upon increasing SNR. (a) CD. (b) QD. Figure 14: BER transition with respect to the number of iterations, where we used random channel coefficients, 16-QAM, $N_{\mathrm{t}}=2,N_{\mathrm{r}}=2$, $\text{SNR}=20$ dB, and $P=10^{-3}$. In Fig. 13, the conventional GAS using the random threshold exhibited slower convergence than the classic MLD. It can be inferred that this slower convergence was caused by the smaller search space. 
Since the quadratic speedup is an improvement from $O(2^{n})$ to $O(\sqrt{2^{n}})$, the larger search space leads to the larger reduction. In Fig. 14, we considered the case of 16-QAM, where other parameters were same as those used in Fig. 13. The classic MLD required $16^{2}=256$ iterations to reach the optimal solution in the worst case. As shown in Fig. 14, the conventional MMSE-based threshold exhibited an increase in BER after the first few iterations. This issue is caused because improvements in objective function values may not correspond to improvements in BER in some cases. By contrast, the proposed combination successfully avoided the issue and reached the optimal solution with reduced query complexity. This advantage is expected to increase as the search space size grows, and the proposed approach is especially beneficial for a large- scale MIMO system. ## VI Conclusions and Future Works In this paper, we proposed a GAS-based quantum algorithm that supports real- valued HUBO. Then, as an application example, we formulated the MIMO MLD as a HUBO problem. The complexity of MLD exponentially increases with the transmission rate, and low-complexity detectors sacrifice the achievable performance. Unlike in conventional studies, we constructed specific quantum circuits instead of assuming an idealized quantum oracle. This enabled us to analyze the number of qubits and quantum gates in an algebraic manner. To further accelerate the algorithm, we derived the probability distribution of the objective function value and conceived a unique threshold to sample better states. Assuming FTQC, simulations demonstrated the potential for reducing query complexity in CD and providing a quadratic speedup in QD. Since this paper focused on a specific construction method for quantum circuits and their algebraic analysis, we considered only the hard-decision MLD, instead of error-correcting codes and soft-decision decoding for classical bits, which are common in wireless standards. The error correction capability improves with increasing code distance and length. For example, the maximum code length of 5G NR is 1024 for polar code and 8448 for LDPC. However, with the current computing resources, it is a challenging task to represent such a large-scale system as a specific quantum circuit. The proposed real-valued GAS can be applied to soft-decision decoding, which will be addressed in our future work. ## Acknowledgement IBM, CPLEX and Qiskit are trademarks of International Business Machines Corporation. The authors are indebted to the Editor and the anonymous reviewers for their invaluable suggestions. ## References * [1] M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information_ , 10th ed. Cambridge ; New York: Cambridge University Press, 2010. * [2] P. Shor, “Algorithms for quantum computation: Discrete logarithms and factoring,” in _Proceedings 35th Annual Symposium on Foundations of Computer Science_ , Nov. 1994, pp. 124–134. * [3] D. E. Knuth, “Big Omicron and big Omega and big Theta,” _ACM SIGACT News_ , vol. 8, no. 2, pp. 18–24, 1976. * [4] L. K. Grover, “A fast quantum mechanical algorithm for database search,” in _Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing_. Philadelphia, Pennsylvania, United States: ACM Press, 1996, pp. 212–219. * [5] D. Bulger, W. Baritompa, and G. Wood, “Implementing pure adaptive search with Grover’s quantum algorithm,” _Journal of Optimization Theory and Applications_ , vol. 116, pp. 517–529, Mar. 2003. * [6] C. 
Gidney, “Halving the cost of quantum addition,” _Quantum_ , vol. 2, p. 74, 2018. * [7] A. Gilliam, M. Pistoia, and C. Gonciulea, “Optimizing quantum search using a generalized version of Grover’s algorithm,” _arXiv:2005.06468 [quant-ph]_ , May 2020. * [8] A. Gilliam, S. Woerner, and C. Gonciulea, “Grover adaptive search for constrained polynomial binary optimization,” _Quantum_ , vol. 5, p. 428, Apr. 2021. * [9] T. Kadowaki and H. Nishimori, “Quantum annealing in the transverse Ising model,” _Phys. Rev. E_ , vol. 58, pp. 5355–5363, Nov 1998. * [10] P. Botsinis, S. X. Ng, and L. Hanzo, “Quantum search algorithms, quantum wireless, and a low-complexity maximum likelihood iterative quantum multi-user detector design,” _IEEE Access_ , vol. 1, pp. 94–122, 2013. * [11] M. Boyer, G. Brassard, P. Høyer, and A. Tapp, “Tight bounds on quantum searching,” _Fortschritte der Physik_ , vol. 46, no. 4-5, pp. 493–505, 1998\. * [12] C. Durr and P. Hoyer, “A quantum algorithm for finding the minimum,” _arXiv:quant-ph/9607014_ , Jan. 1999. * [13] P. Botsinis, S. X. Ng, and L. Hanzo, “Fixed-complexity quantum-assisted multi-user detection for CDMA and SDMA,” _IEEE Transactions on Communications_ , vol. 62, no. 3, pp. 990–1000, Mar. 2014. * [14] P. Botsinis, D. Alanis, S. X. Ng, and L. Hanzo, “Low-complexity soft-output quantum-assisted multiuser detection for direct-sequence spreading and slow subcarrier-hopping aided SDMA-OFDM systems,” _IEEE Access_ , vol. 2, pp. 451–472, 2014. * [15] P. Botsinis, D. Alanis, Z. Babar, S. X. Ng, and L. Hanzo, “Iterative quantum-assisted multi-user detection for multi-carrier interleave division multiple access systems,” _IEEE Transactions on Communications_ , vol. 63, no. 10, pp. 3713–3727, Oct. 2015. * [16] ——, “Noncoherent quantum multiple symbol differential detection for wireless systems,” _IEEE Access_ , vol. 3, pp. 569–598, 2015. * [17] W. Ye, W. Chen, X. Guo, C. Sun, and L. Hanzo, “Quantum search-aided multi-user detection for sparse code multiple access,” _IEEE Access_ , vol. 7, pp. 52 804–52 817, 2019. * [18] D. Alanis, P. Botsinis, Z. Babar, H. V. Nguyen, D. Chandra, S. X. Ng, and L. Hanzo, “A quantum-search-aided dynamic programming framework for pareto optimal routing in wireless multihop networks,” _IEEE Transactions on Communications_ , vol. 66, no. 8, pp. 3485–3500, Aug. 2018. * [19] ——, “Quantum-aided multi-objective routing optimization using back-tracing-aided dynamic programming,” _IEEE Transactions on Vehicular Technology_ , vol. 67, no. 8, pp. 7856–7860, Aug. 2018. * [20] P. Botsinis, D. Alanis, S. Feng, Z. Babar, H. V. Nguyen, D. Chandra, S. X. Ng, R. Zhang, and L. Hanzo, “Quantum-assisted indoor localization for uplink mm-wave and downlink visible light communication systems,” _IEEE Access_ , vol. 5, pp. 23 327–23 351, 2017. * [21] P. Botsinis, D. Alanis, Z. Babar, S. X. Ng, and L. Hanzo, “Coherent versus non-coherent quantum-assisted solutions in wireless systems,” _IEEE Wireless Communications_ , vol. 24, no. 6, pp. 144–153, Dec. 2017. * [22] P. Botsinis, D. Alanis, Z. Babar, H. V. Nguyen, D. Chandra, S. X. Ng, and L. Hanzo, “Quantum search algorithms for wireless communications,” _IEEE Communications Surveys Tutorials_ , vol. 21, no. 2, pp. 1209–1242, 2019\. * [23] K. Fujii, “Noise threshold of quantum supremacy,” _arXiv:1610.03632 [quant-ph]_ , Oct. 2016. * [24] C. Gidney and M. Ekerå, “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits,” _Quantum_ , vol. 5, p. 433, Apr. 2021\. * [25] N. 
Ishikawa, “Quantum speedup for index modulation,” _IEEE Access_, vol. 9, pp. 111 114–111 124, 2021. * [26] G. Jay, “IBM Quantum roadmap to build quantum-centric supercomputers,” https://research.ibm.com/blog/ibm-quantum-roadmap-2025, May 2022. * [27] D. Stilck Franca and R. Garcia-Patron, “Limitations of optimization algorithms on noisy quantum devices,” _Nature Physics_, vol. 17, no. 11, pp. 1221–1227, 2021. * [28] M. Kim, D. Venturelli, and K. Jamieson, “Leveraging quantum annealing for large MIMO processing in centralized radio access networks,” _Proceedings of the ACM Special Interest Group on Data Communication_, pp. 241–255, Aug. 2019. * [29] S. Mondal, M. R. Laskar, and A. K. Dutta, “ML criterion based signal detection of a MIMO-OFDM system using quantum and semi-quantum assisted modified DHA/BBHT search algorithm,” _IEEE Transactions on Vehicular Technology_, vol. 70, no. 2, pp. 1688–1698, Feb. 2021. * [30] Z. Babar, Z. B. Kaykac Egilmez, L. Xiang, D. Chandra, R. G. Maunder, S. X. Ng, and L. Hanzo, “Polar codes and their quantum-domain counterparts,” _IEEE Communications Surveys Tutorials_, vol. 22, no. 1, pp. 123–155, 2020. * [31] T. Matsumine, T. Koike-Akino, and Y. Wang, “Channel decoding with quantum approximate optimization algorithm,” in _IEEE International Symposium on Information Theory_, Jul. 2019, pp. 2574–2578. * [32] T. Ohyama, Y. Kawamoto, and N. Kato, “Intelligent reflecting surface (IRS) allocation scheduling method using combinatorial optimization by quantum computing,” _IEEE Transactions on Emerging Topics in Computing_, vol. 10, no. 3, 2022. * [33] G. Brassard, P. Hoyer, and A. Tapp, “Quantum counting,” _arXiv:quant-ph/9805082_, vol. 1443, pp. 820–831, 1998. * [34] G. Brassard, P. Hoyer, M. Mosca, and A. Tapp, “Quantum amplitude amplification and estimation,” _arXiv:quant-ph/0005055_, vol. 305, pp. 53–74, 2002. * [35] 3GPP, “TS 138 211 - V15.2.0 - 5G; NR; Physical channels and modulation (3GPP TS 38.211 version 15.2.0 Release 15),” 2018. * [36] A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. A. Smolin, and H. Weinfurter, “Elementary gates for quantum computation,” _Physical Review A_, vol. 52, no. 5, pp. 3457–3467, Nov. 1995.

Masaya Norimoto (S’22) received the B.E. degree from Yokohama National University, Kanagawa, Japan, in 2022. He is currently pursuing the M.E. degree with the Graduate School of Engineering Science, Yokohama National University, Kanagawa, Japan. His research interests include quantum algorithms and wireless communications.

Ryuhei Mori received the B.E. degree from Tokyo Institute of Technology, Tokyo, Japan in 2008, and the M.Inf. and D.Inf. degrees from Kyoto University, Kyoto, Japan in 2010 and 2013, respectively. From 2013 to 2014, he was a Postdoctoral Fellow at Tokyo Institute of Technology, Tokyo, Japan. He is currently an Assistant Professor of the Department of Mathematical and Computing Sciences, School of Computing, Tokyo Institute of Technology, Tokyo, Japan. His research interests include quantum information, information theory, computer science and statistical physics.

Naoki Ishikawa (S’13–M’17–SM’22) is an Associate Professor with the Faculty of Engineering, Yokohama National University, Kanagawa, Japan. He received the B.E., M.E., and Ph.D. degrees from the Tokyo University of Agriculture and Technology, Tokyo, Japan, in 2014, 2015, and 2017, respectively.
In 2015, he was an academic visitor with the School of Electronics and Computer Science, University of Southampton, UK. From 2016 to 2017, he was a research fellow of the Japan Society for the Promotion of Science. From 2017 to 2020, he was an assistant professor in the Graduate School of Information Sciences, Hiroshima City University, Japan. He was certified as an Exemplary Reviewer of IEEE Transactions on Communications in 2017 and 2021. His research interests include massive MIMO, physical layer security, and quantum speedup for wireless communications.
# Adaptive Phase Estimation with Squeezed Vacuum Approaching the Quantum Limit M. A. Rodríguez-García F. E. Becerra Center for Quantum Information and Control, Department of Physics and Astronomy, University of New Mexico, Albuquerque, New Mexico 87131, USA ###### Abstract Phase estimation plays a central role in communications, sensing, and information processing. Quantum correlated states, such as squeezed states, enable phase estimation beyond the shot-noise limit, and in principle approach the ultimate quantum limit in precision, when paired with optimal quantum measurements. However, physical realizations of optimal quantum measurements for optical phase estimation with quantum-correlated states are still unknown. Here we address this problem by introducing an adaptive Gaussian measurement strategy for optical phase estimation with squeezed vacuum states that, by construction, approaches the quantum limit in precision. This strategy builds from a comprehensive set of locally optimal POVMs through rotations and homodyne measurements and uses the Adaptive Quantum State Estimation framework for optimizing the adaptive measurement process, which, under certain regularity conditions, guarantees asymptotic optimality for this quantum parameter estimation problem. As a result, the adaptive phase estimation strategy based on locally-optimal homodyne measurements achieves the quantum limit within the phase interval of $[0,\pi/2)$. Furthermore, we generalize this strategy by including heterodyne measurements, enabling phase estimation across the full range of phases from $[0,\pi)$, where squeezed vacuum allows for unambiguous phase encoding. Remarkably, for this phase interval, which is the maximum range of phases that can be encoded in squeezed vacuum, this estimation strategy maintains an asymptotic quantum-optimal performance, representing a significant advancement in quantum metrology. ## 1 Introduction Quantum metrology uses the quantum properties of physical systems to enhance the measurement precision of physical quantities beyond the classical limits [1, 2]. Quantum mechanics states that all physical observables are represented by self-adjoint operators on a Hilbert space. As such, the measurement of a physical quantity of a system involves projecting the quantum state of such system onto one of the eigenspaces of the corresponding self-adjoint operator. However, certain physical quantities, such as time, phase, or temperature, lack an associated self-adjoint operator [3, 4]. Consequently, to determine the values of these physical quantities, it is necessary to measure some observables of the system and estimate their values from the observed results. This process is referred to as quantum parameter estimation [3, 4]. Among different parameter estimation problems, the problem of phase estimation is ubiquitous in many areas of physics and engineering including, but not limited to, gravitational wave detection [5], quantum imaging [6], atomic clocks [7], magnetometry [8], and quantum information processing [9]. However, the performance of traditional phase estimation methods is limited by the fundamental properties of the physical states carrying the phase information. The maximum achievable precision for phase estimation for probe states that lack quantum correlations, typically used for phase estimation, is defined as shot-noise limit (SNL) [10]. 
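For context, the quantum limit invoked throughout this work is the quantum Cramér–Rao bound. The following standard relations are not stated explicitly in this excerpt and are included here only as background: for $M$ independent probes, an unbiased estimator $\hat{\theta}$ obeys $\displaystyle\mathrm{Var}(\hat{\theta})\geq\frac{1}{M\,F_{Q}},\qquad F_{Q}=4\,\mathrm{Var}(\hat{n})=2\sinh^{2}(2r)=8\bar{n}(\bar{n}+1),$ where the second expression holds for a phase imprinted as $e^{-i\theta\hat{n}}$ on squeezed vacuum with squeezing parameter $r$ and mean photon number $\bar{n}=\sinh^{2}r$; this quadratic scaling in $\bar{n}$ surpasses the shot-noise scaling $F_{Q}=4\bar{n}$ of probes lacking quantum correlations.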
Numerous methods have been developed to enhance the precision of phase estimation beyond the SNL by exploiting probe states with inherent quantum correlations. Among different types of quantum correlations, entanglement holds a significant potential for improving precision in phase estimation. Nevertheless, highly entangled states used for phase estimation, such as NOON states, are delicate and can be readily disrupted by loss, environmental noise, and decoherence, thereby limiting their practicality for real-world applications [11, 12, 13, 14]. In this regard, squeezed-state probes offer a more viable alternative for robust phase estimation [15, 16]. Squeezed states allow for reducing the quantum noise in one observable below the SNL at the expense of an increased noise in another non-commuting observable. This reduction in quantum noise can significantly enhance the precision of phase measurements [17, 5, 7, 16], and is a valuable resource for enabling robust optical quantum metrology and phase estimation. Advances in photonic quantum technologies for phase estimation and quantum metrology have yielded squeezed light sources with a high degree of squeezing [18, 19, 20, 21]. Moreover, experimental demonstrations of quantum metrology and sensing utilizing squeezed optical probes have achieved sensitivities surpassing the SNL [22, 23, 24, 5]. However, there remain significant challenges in devising optimal estimation strategies, including optimal measurements and estimators, that can efficiently attain the ultimate quantum limit of precision for any optical phase estimation problem. A noteworthy measurement approach for optical phase estimation with squeezed states is the homodyne measurement. This measurement has the potential for reaching the quantum limit for a specific, optimized phase when a predetermined level of squeezing is present in the probe state [25]. However, this optimal phase must be known beforehand in order to reach the quantum limit, making this approach impractical. To overcome this limitation, a few adaptive methods have been developed for increasing the range of phases for which estimation below the SNL is possible [25, 26]. However, far from the predetermined optimal phase of the homodyne measurement, all estimation strategies to date deviate significantly from the quantum limit. In this work, we theoretically demonstrate an adaptive Gaussian measurement strategy for optical phase estimation with squeezed vacuum states that, by construction, approaches the quantum limit in precision with a fast convergence rate. This estimation strategy uses homodyne measurements to implement a comprehensive set of locally optimal POVMs (Positive Operator-Valued Measures). Then the strategy performs adaptive optimization based on the Adaptive Quantum State Estimation (AQSE) framework to ensure the asymptotic consistency and efficiency of the estimator of the optical phase [27]. Based on a systematic study of the statistical properties of dyne-detection, we show that this strategy with adaptive locally optimal homodyne measurements approaches the quantum limit within the range $[0,\pi/2)$ in the asymptotic limit of many adaptive steps. Furthermore, we generalize this strategy to incorporate heterodyne sampling, making it possible to extend the parametric range to $[0,\pi)$, which is the maximum range of phases that can be encoded in squeezed vacuum, while maintaining an asymptotic quantum-optimal performance. The paper is organized as follows: In Sec.
2 we provide a concise overview of the theory of single parameter estimation. Then, we discuss the problem of optical phase estimation in the context of quantum systems, followed by an overview of phase estimation with squeezed states. In Sec. 3, we describe the proposed optimal phase estimation strategy based on adaptive Gaussian measurements with squeezed vacuum states. By leveraging homodyne measurements and rotations, we construct a collection of locally optimal POVMs, which allows us to apply the mathematical framework of AQSE to Gaussian measurements and feedback [28, 27]. Through theoretical and statistical analysis, we show that this adaptive measurement process allows for extracting the maximum possible information pertaining to the phase encoded in squeezed vacuum states in the asymptotic limit. This results in an adaptive Gaussian measurement strategy for phase estimation with squeezed vacuum states that attains the quantum limit within the interval $[0,\pi/2)$. In Sec. 4, we use numerical simulations to evaluate the performance of this strategy. We observe that this strategy approaches the quantum limit for phases within $[0,\pi/2)$, outperforming previous phase estimation strategies. In Sec. 5, we analyze the effects of losses and of imperfections in state preparation. In Sec. 6, we generalize this adaptive estimation strategy to incorporate heterodyne measurements. This generalization allows us to extend phase estimation to phases within $[0,\pi)$, which is the maximum range for unambiguous phase encoding with squeezed vacuum. Remarkably, for this phase interval this generalized strategy maintains an asymptotic quantum-optimal performance. Sec. 7 contains the discussion and concluding remarks.

## 2 Background

### 2.1 Single parameter estimation in quantum systems

A fundamental problem in quantum parameter estimation is the design of precise estimators of an unknown parameter $\theta\in\Theta$ characterizing a quantum state based on measurements of the system. In this context, a quantum system is modeled as a Hilbert space $\mathcal{H}$, and its state is described by a density operator $\rho$, which is a self-adjoint positive operator with unit trace on $\mathcal{H}$. The process of encoding the unknown parameter into a probe state $\rho$ is accomplished by a dynamical process, which, when it can be represented as a unitary transformation $U(\theta)$, yields the state $\rho(\theta)=U(\theta)\rho U^{\dagger}(\theta),\quad\theta\in\Theta.$ (1) Estimation of $\theta$ can be achieved through an estimator: a function that takes a sample of size $N$ from a measurement of the quantum system as input and produces an estimate of the unknown parameter. The most general description of a measurement process is a POVM [4]. Given a quantum system $\mathcal{H}$ and an outcome space $\mathcal{X}\subseteq\mathbb{R}^{k}$ for a measurement, a POVM is a map $M:\mathcal{B}(\mathcal{X})\to B(\mathcal{H})$ from the set of events of our random experiment $\mathcal{B}(\mathcal{X})$ to the space of bounded operators on $\mathcal{H}$, denoted by $B(\mathcal{H})$, that satisfies the following conditions [4, 29, 30]:

1. $M(\emptyset)=0,\quad M(\mathcal{X})=I$;
2. $M(B)\geq 0,\quad\forall B\in\mathcal{B}(\mathcal{X})$;
3. for every family of mutually disjoint events $\left\{B_{n}\right\}_{n=1}^{\infty}\subset\mathcal{B}(\mathcal{X})$, i.e. $B_{i}\cap B_{j}=\emptyset$ for all $i\neq j$, that satisfies $\cup_{j=1}^{\infty}B_{j}=B\in\mathcal{B}(\mathcal{X})$, it holds that $M(B)=\sum_{j=1}^{\infty}M(B_{j})$.
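To make conditions 1–3 concrete, the following minimal sketch (ours, not from the paper) checks them numerically for a simple two-outcome qubit POVM, for which condition 3 reduces to finite additivity; the noise parameter $p$ and the probe state are illustrative assumptions, and the printed probabilities anticipate Born's rule, Eq. (2) below.

```python
import numpy as np

# A minimal sketch (not from the paper): verify the POVM conditions
# for a two-outcome qubit POVM. For a finite outcome space, condition
# 3 (sigma-additivity) reduces to summing the elements.
p = 0.9  # hypothetical noise parameter, for illustration only

# Noisy-projector POVM elements M0, M1 on C^2.
M0 = p * np.array([[1.0, 0.0], [0.0, 0.0]]) + (1 - p) * np.eye(2) / 2
M1 = np.eye(2) - M0

# Condition 1: M(empty set) = 0 is trivial; the elements sum to the identity.
assert np.allclose(M0 + M1, np.eye(2))
# Condition 2: each element is positive semidefinite.
assert np.all(np.linalg.eigvalsh(M0) >= -1e-12)
assert np.all(np.linalg.eigvalsh(M1) >= -1e-12)

# Outcome probabilities for a parameter-encoded pure state
# |psi(theta)> = (|0> + e^{i*theta}|1>)/sqrt(2), computed as Tr[M rho].
theta = 0.3
psi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
probs = [np.real(np.trace(M @ rho)) for M in (M0, M1)]
print(probs, sum(probs))  # nonnegative probabilities summing to 1
```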
In particular, when the state of the system is $\rho(\theta)$, the observed data $x\in\mathcal{X}$ of a measurement $M$ is an outcome of a random variable $X\in\mathcal{X}$ distributed according to the density function $f(x\mid\theta;M)$ (or probability mass function in the case of discrete random variables) given by Born’s rule $f(x\mid\theta;M)=\textrm{Tr}\left[M(x)\rho(\theta)\right].$ (2) Thus, any sample from the application of a sequence of $N$ POVMs $M_{1},\ldots,M_{N}$ in a quantum system is represented as a sequence of $N$ random variables $\vec{X}_{N}=X_{1},\ldots,X_{N}$. It follows that any estimator $\widehat{\theta}\left(X_{1},\ldots,X_{N}\right)$ based on this sample is also a random variable with expected value $\textrm{E}_{\theta}\left[\widehat{\theta}(\vec{X}_{N})\right]=\int_{\mathcal{X}^{N}}\widehat{\theta}(x_{1},\ldots,x_{N})f(x_{1},\ldots,x_{N}\mid\theta;M_{1},\ldots,M_{N})dx_{1}\cdots dx_{N}.$ (3) To find optimal estimators for all $\theta\in\Theta\subset\mathbb{R}$, the concept of unbiased estimator plays a crucial role. An estimator $\widehat{\theta}(\vec{X}_{N})$ is unbiased if $\mathrm{E}_{\theta}\left[\widehat{\theta}(\vec{X}_{N})\right]=\theta$ for all $\theta\in\Theta$ [31]. The performance of unbiased estimators is characterized by their variance, which is bounded by the Cramér-Rao bound [31]. This bound corresponds to the classical limit of precision for all unbiased estimators, and is given by the inverse of the Fisher information, denoted by $F_{X}(\theta)$. Given a sample $X$ produced by a POVM $M$, the Fisher information $F_{X}(\theta)=\int_{\mathcal{X}}f(x\mid\theta;M)\left[\frac{\partial}{\partial\theta}\log\left(f(x\mid\theta;M)\right)\right]^{2}dx$ (4) quantifies the amount of information about the parameter $\theta$ that can be extracted from the sample $X$ [32, 28]. The ultimate limit of precision, dictated by quantum mechanics, is achieved by optimizing $F_{X}(\theta)$ over all possible POVMs, resulting in the quantum Fisher information (QFI). Consequently, the variance of any unbiased estimator is lower bounded by the inverse of the QFI, referred to as the quantum Cramér-Rao bound (QCRB) [33, 32, 28]. The objective in quantum parameter estimation is to devise estimators and quantum measurement schemes that attain the QCRB for any value of the parameter $\theta\in\Theta$.

### 2.2 Optical phase estimation based on dyne-detection

A central task in optical quantum metrology is the estimation of an unknown phase $\theta\in[0,2\pi)$ encoded in a photonic quantum state by the unitary process $U(\theta)=e^{-i\hat{n}\theta}$, where $\hat{n}$ is the photon number operator. The standard quantum state probe for optical phase estimation is the coherent state $\left|\alpha\right\rangle\left\langle\alpha\right|,\,\alpha\in\mathbb{C}$, in which photons exhibit classical correlations [34]. For this quantum state, the QCRB for any unbiased estimator $\widehat{\theta}$ and $N$ independent copies of the system is $\mathrm{Var}[\widehat{\theta}]\geq\frac{1}{4N\mathrm{E}\left[\hat{n}\right]}.$ (5) This limit in precision defines the SNL (or the coherent state limit). To surpass the SNL for optical phase estimation, it is necessary to employ states with quantum correlations, such as squeezed vacuum states.
These states are defined by the density operator: $\rho=\left|0,r\right\rangle\left\langle 0,r\right|=\hat{S}(r)\left|0\right\rangle\left\langle 0\right|\hat{S}^{\dagger}(r),$ (6) where $\hat{S}(r)=e^{\frac{1}{2}\left(r\hat{a}^{2}-r\hat{a}^{\dagger 2}\right)}$ is the squeezing operator, $r\in\mathbb{R}$, and $\hat{a}$ and $\hat{a}^{\dagger}$ are the annihilation and creation bosonic operators, respectively. Through the unitary transformation $U(\theta)$, the squeezed vacuum state in Eq. (6) results in the parameter-dependent state $\rho(\theta)=U(\theta)\rho U^{\dagger}(\theta)$. The corresponding QCRB for this state for any unbiased estimator $\widehat{\theta}$ and $N$ independent copies of the system is given by: $\mathrm{Var}[\widehat{\theta}]\geq\frac{1}{2N\sinh^{2}(2r)}=\frac{1}{8N\left(\mathrm{E}\left[\hat{n}\right]^{2}+\mathrm{E}\left[\hat{n}\right]\right)},$ (7) where $r$ represents the squeezing strength of $\hat{S}(r)$ [35, 36]. This bound exhibits a superior scaling with $\mathrm{E}\left[\hat{n}\right]$ compared to the coherent state in Eq. (5). However, it is worth noting that due to the $\pi$-inversion symmetry inherent in squeezed vacuum states [37] (see Fig. 1-i), any estimation strategy based on these states is constrained to phases within the range $[0,\pi)$.

Figure 1: Adaptive estimation strategy for optical phase estimation with squeezed vacuum probe states $\lvert 0,r\rangle\langle 0,r\rvert$. The strategy employs locally optimal POVMs, $M_{\widehat{\theta}}$ in Eq. (16) with $\widehat{\theta}\in[0,\pi/2)$, to produce a maximum likelihood estimate of $\theta$, and updates the value $\widehat{\theta}$ for subsequent adaptive steps. The measurement process is iteratively repeated during the adaptive strategy. Inset (i) shows the Husimi Q representation for the initial squeezed vacuum state. Note that due to the inherent symmetry properties of squeezed vacuum, these quantum probes can only encode the phase modulo $\pi$.

The standard measurement approach for optical phase estimation is the heterodyne measurement. This measurement involves simultaneous sampling of two orthogonal components of the electromagnetic field within the complex plane, namely $\hat{X}_{\phi}$ and $\hat{X}_{\phi+\pi/2}$, by utilizing the quadrature decomposition of the input field [10]. Here, the quadrature operator $\hat{X}_{\phi}$ is defined as: $\hat{X}_{\phi}=\frac{\hat{a}^{\dagger}e^{i\phi}+\hat{a}e^{-i\phi}}{2}.$ (8) The POVM associated with the heterodyne measurement is described by coherent state projectors $M_{\rm{Het}}=\left\{\pi^{-1}\left|z\right\rangle\left\langle z\right|:z\in\mathbb{C}\right\}$, with outcomes in the form of complex numbers, and with the corresponding Fisher information: $F_{Z}(\theta)=4\sinh^{2}\left(r\right).$ (9) The inverse of Eq. (9) is known as the heterodyne limit for the precision of any unbiased estimator $\widehat{\theta}$ for all $\theta\in\left[0,\pi\right)$ with squeezed vacuum states. Going beyond estimation strategies based on heterodyne detection, homodyne detection can surpass the heterodyne limit for a suitable set of values of $\theta$. Homodyne provides information about the quadrature $\hat{X}_{\phi}$ of the input signal using a local oscillator (LO) phase reference field and interference [10]. Specifically, in the limit of strong LO, Eq.
(8) represents a self-adjoint operator with a spectral measure given by: $\hat{X}_{\phi}=\int_{-\infty}^{\infty}x\Pi(dx),$ (10) where $\Pi(B)$, $B\in\mathcal{B}\left(\mathbb{R}\right)$, is the associated spectral projection (or symbolically in Dirac notation, $\Pi(dx)=\left|x\right\rangle\left\langle x\right|dx$). Consequently, the homodyne measurement can be described by the POVM $M_{\rm{Hom}}=\left\{\Pi(dx)\right\}$, with the outcome space being the real numbers [38]. For squeezed vacuum state probes in Eq. (6), the outcomes $x\in\mathbb{R}$ of the homodyne measurement are distributed according to a normal random variable with probability density function $f(x\mid\theta)=\frac{1}{\sqrt{2\pi\sigma^{2}(\theta)}}\exp\left[-\frac{x^{2}}{2\sigma^{2}(\theta)}\right],$ (11) where $\theta$ denotes the unknown phase and $\sigma^{2}(\theta)=e^{-2r}\cos^{2}(\theta)+e^{2r}\sin^{2}(\theta)$ (12) denotes the variance. The Fisher information $F_{X}(\theta)$ for the homodyne measurement is then $F_{X}(\theta)=\frac{2\sinh^{2}(2r)\sin^{2}(2\theta)}{\left(\sigma^{2}(\theta)\right)^{2}}.$ (13) Notably, the classical Fisher information of the homodyne measurement coincides with the QFI when the squeezing strength $r$ satisfies: $r=-\frac{1}{2}\log(\tan(\theta)),$ (14) or equivalently, when the parameter $\theta$ corresponds to the optimal value $\theta_{\mathrm{opt}}$ given by $\theta_{\mathrm{opt}}=\frac{\arccos\left(\tanh(2r)\right)}{2},$ (15) which tends to zero as $r$ increases, as shown in Fig. 2. Consequently, the homodyne measurement can surpass the heterodyne limit in the neighborhood of $\theta_{\mathrm{opt}}$. However, beyond this neighborhood, estimators based on a sample $\vec{X}_{N}$ obtained from $N$ independent and identical homodyne measurements cannot achieve this optimal level of precision (see Fig. 9).

Figure 2: $\theta_{\mathrm{opt}}$ as a function of the squeezing parameter $r$ in the probe state. As $r$ increases, the optimal value $\theta_{\mathrm{opt}}$ decreases, approaching zero for $r>2$.

To overcome this limitation, adaptive estimation protocols have been proposed. One such protocol [26, 25, 22], which we refer to as the two-step protocol, considers a reduced parameter space $[0,\pi/2)$, and utilizes homodyne detection and one subsequent adaptation of the probe state to surpass the heterodyne limit within this phase range. This strategy approaches the QCRB in the asymptotic limit for phases around the optimal phase $\theta_{\mathrm{opt}}$ [25, 22]. However, far from this optimal phase, its performance significantly deviates from the QCRB. Moreover, when considering the full range of phases that can be encoded in squeezed vacuum probes, $[0,\pi)$, this two-step estimation strategy is not expected to produce satisfactory results, due to the periodicity of the likelihood function from homodyne outcomes. In this work, we propose an adaptive estimation strategy based on homodyne detection that leverages the framework of AQSE (see Appendix: 8.2), which, under certain regularity conditions, yields a consistent and efficient estimator for all values of $\theta\in[0,\pi/2)$. A key element of this approach is to use samples that lead to convex likelihood functions, ensuring the asymptotic normality of the Maximum Likelihood Estimator (MLE) [28, 27]. This property guarantees the asymptotic saturation of the QCRB (7) for any $\theta\in\left[0,\pi/2\right)$.
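The coincidence of the homodyne Fisher information with the QFI at $\theta_{\mathrm{opt}}$ is easy to check numerically. The short sketch below (ours) evaluates Eqs. (9), (12), (13), and (15), using the per-copy QFI $2\sinh^{2}(2r)$ implied by Eq. (7):

```python
import numpy as np

r = 1.01                               # squeezing strength
qfi = 2 * np.sinh(2 * r) ** 2          # per-copy QFI implied by Eq. (7)
f_het = 4 * np.sinh(r) ** 2            # heterodyne Fisher information, Eq. (9)

def sigma2(theta, r):
    # Homodyne outcome variance, Eq. (12).
    return np.exp(-2 * r) * np.cos(theta) ** 2 + np.exp(2 * r) * np.sin(theta) ** 2

def fisher_hom(theta, r):
    # Homodyne Fisher information, Eq. (13).
    return 2 * np.sinh(2 * r) ** 2 * np.sin(2 * theta) ** 2 / sigma2(theta, r) ** 2

theta_opt = 0.5 * np.arccos(np.tanh(2 * r))   # Eq. (15)
print(fisher_hom(theta_opt, r), qfi)          # these coincide
print(f_het)                                  # constant, and below the QFI
```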
We further generalize this adaptive strategy to incorporate heterodyne measurements, enabling phase estimation within $[0,\pi)$, while maintaining a quantum-optimal performance in the asymptotic limit.

## 3 Optimal adaptive dyne phase estimation

In practice, it is generally impossible to find a POVM and an unbiased estimator capable of saturating the QCRB for all $\theta\in\Theta$. However, it is often possible to find a POVM and an unbiased estimator that can achieve this bound for a specific value of the parameter within a neighborhood around a point $\theta_{0}\in\Theta$. These types of POVMs are referred to as locally optimal at $\theta_{0}$. Moreover, if it is possible to construct a collection of such locally optimal POVMs for any $\theta\in\Theta$ while satisfying a set of regularity conditions pertaining to the probability distributions of their outcomes (see Appendix: 8.1), then it is possible to use an adaptive estimation method, known as AQSE (see Appendix: 8.2), capable of saturating the QCRB in the asymptotic limit for the MLE for all $\theta\in\Theta$ [27]. Building upon this understanding, the proposed adaptive phase estimation strategy with squeezed vacuum states is constructed based on two elements. The first element involves the construction of a set of POVMs through homodyne measurements that are locally optimal for all parameter values within the range $\theta\in[0,\pi/2)$. The second element is the construction of an estimator that achieves the QCRB for any given value of $\theta$, while also being locally unbiased at $\theta_{0}\in\Theta$. To construct the set of locally optimal POVMs, we refer to Eq. (15), which shows that the POVM denoted as $M_{\mathrm{Hom}}$ is locally optimal at $\theta_{\mathrm{opt}}$. This is because the samples obtained from $M_{\mathrm{Hom}}$ have a Fisher information equal to the QFI at this specific value. Furthermore, given the asymptotic unbiasedness of the MLE and its subsequent local unbiasedness, the estimator effectively saturates the QCRB (Eq. (7)) in the asymptotic limit at $\theta_{\mathrm{opt}}$. Consequently, by appropriately incorporating a phase shift into the elements of $M_{\mathrm{Hom}}$, it is possible to construct a set of locally optimal POVMs for each parameter value $\theta\in[0,\pi/2)$. To this end, we introduce a phase shift $U\left(\widehat{\theta}-\theta_{\mathrm{opt}}\right)$, which allows us to define a new set of POVMs for each $\widehat{\theta}\in[0,\pi/2)$ as follows: $M_{\widehat{\theta}}(dx)=\left\{U\left(\widehat{\theta}-\theta_{\mathrm{opt}}\right)\Pi(dx)U^{\dagger}\left(\widehat{\theta}-\theta_{\mathrm{opt}}\right)\right\}.$ (16) Here $M_{\widehat{\theta}}(dx)$ denotes the POVM elements obtained by applying the phase shift to the original POVM elements $\Pi(dx)$ of $M_{\mathrm{Hom}}$. Note that the probability distribution of the outcomes of $M_{\widehat{\theta}}(dx)$ over the state $\rho(\theta)$, $f(x\mid\theta;M_{\widehat{\theta}})=f(x\mid\theta+\theta_{\mathrm{opt}}-\widehat{\theta};M_{\mathrm{Hom}}),$ (17) evaluated at $\widehat{\theta}=\theta$, is the same as the distribution for the outcomes of the POVM $M_{\mathrm{Hom}}$ at $\theta_{\mathrm{opt}}$, which shows that the POVM $M_{\widehat{\theta}}(dx)$ is locally optimal at $\theta$.
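In a simulation, sampling from $M_{\widehat{\theta}}$ amounts, by Eq. (17), to drawing homodyne outcomes at the shifted phase $\theta+\theta_{\mathrm{opt}}-\widehat{\theta}$; a minimal sketch (ours), assuming the Gaussian outcome model of Eqs. (11)–(12):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rotated_povm(theta, theta_hat, r, nu, rng):
    # By Eq. (17), outcomes of the POVM of Eq. (16) on rho(theta) are
    # homodyne outcomes at the shifted phase theta + theta_opt - theta_hat:
    # zero-mean normal with variance sigma2 from Eq. (12).
    theta_opt = 0.5 * np.arccos(np.tanh(2 * r))
    theta_star = theta + theta_opt - theta_hat
    var = np.exp(-2 * r) * np.cos(theta_star) ** 2 \
        + np.exp(2 * r) * np.sin(theta_star) ** 2
    return rng.normal(0.0, np.sqrt(var), size=nu)

# When theta_hat equals theta, the sample is statistically identical to a
# homodyne sample taken exactly at theta_opt, where the Fisher information
# equals the QFI.
x = sample_rotated_povm(theta=0.4, theta_hat=0.4, r=1.01, nu=247, rng=rng)
print(x.var())
```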
From this observation and based on the AQSE framework, it is possible to construct a multi-step estimation strategy based on adaptive homodyne measurements, which provides a set of locally optimal measurements, for which the distribution of the sequence of estimators converges to a normal distribution with variance equal to the inverse of the QFI. Given the input squeezed probe state $\left|0,r\right\rangle\left\langle 0,r\right|$ encoding the parameter $\theta$, this adaptive strategy, shown in Figure 1, implements the POVM $M_{\widehat{\theta}}$ with an initial guess $\widehat{\theta}\in[0,\pi/2)$, yielding a measurement sample $X_{1}$. The MLE $\widehat{\theta}_{\mathrm{MLE}}\left(X_{1}\right)$ applied to $X_{1}$ results in an estimate $\widehat{\theta}$ of $\theta$ for a given adaptive step. This estimate $\widehat{\theta}$ then becomes the best guess for the subsequent adaptive step, and the process is repeated iteratively during every step in the strategy.

### 3.1 Estimator

Figure 3: Probability that the likelihood function lacks a stationary point within $\left[0,\pi/2\right)$. Panel a shows the probability of not having stationary points for different degrees of freedom (DoF) $\nu$ (number of probes in each adaptive step) with $r=1$. Panel b shows this probability for different values of squeezing strength $r$ with $\nu=3705$. By ensuring that the likelihood function has stationary points within the interval $[0,\pi/2)$, we mitigate the bias introduced by boundary estimates and enhance the performance of the adaptive estimation process. See main text for details.

A key aspect of the parameter estimation strategy is the selection of an estimator that allows for the saturation of the QCRB. We identify the necessary conditions to ensure the asymptotic consistency and normality of the MLE and the saturation of the QCRB through the adaptive strategy for any $\theta\in\left[0,\pi/2\right)$. Assuming that the Fisher information is finite and nonzero for every value of $\theta$, these conditions require that: a) the MLE must be unique; b) the MLE should be obtained as a stationary point of the likelihood function; and c) the derivatives of the likelihood function at $\theta$ should exist up to a level where they can be effectively approximated using a Taylor series [39, 27]. We observe that the first condition on the MLE is automatically satisfied, since the samples generated from the proposed strategy conform to a normal distribution, as indicated by Eq. (11). This ensures the sufficient smoothness of the likelihood function for any $\theta\in[0,\pi/2)$ in each adaptive step. Moreover, by restricting the parameter space to the interval $\left[0,\pi/2\right)$, we guarantee the uniqueness of the MLE for any $\theta\in[0,\pi/2)$. Thus, the remaining task is to determine the conditions under which the MLE corresponds to a stationary point of the likelihood function, leading to its asymptotic normality. To this end, we analyze the MLE for a sample at the first adaptive step. Given the sample $\vec{X}_{\nu}=\left(X_{1},X_{2},\ldots,X_{\nu}\right)$ of size $\nu$ from the POVM $M_{\widehat{\theta}_{0}}$ at the first adaptive step, and evaluating the likelihood from Eq.
(11) at $\theta_{*}=\left(\theta+\theta_{\mathrm{opt}}-\widehat{\theta}_{0}\right)\,\mathrm{modulo}\,\pi/2$, we obtain the MLE for $\theta$ as: $\widehat{\theta}_{\mathrm{MLE}}(\vec{X}_{\nu})=\arccos\left[\frac{e^{r}\sqrt{e^{2r}-\frac{1}{\nu}\sum_{i=1}^{\nu}X_{i}^{2}}}{\sqrt{e^{4r}-1}}\right]-\theta_{\mathrm{opt}}+\widehat{\theta}_{0}.$ (18) The event for which the MLE corresponds to a stationary point of the likelihood within $\left[0,\pi/2\right)$ is equivalent to obtaining a real-valued solution of Eq. (18). This real solution is obtained when $e^{2r}-\frac{1}{\nu}\sum_{i=1}^{\nu}X_{i}^{2}>0$. Therefore, the probability of obtaining an imaginary solution of Eq. (18) at $\widehat{\theta}_{0}=\theta_{\mathrm{opt}}$, which corresponds to a likelihood function lacking a stationary point, is: $P\left(e^{2r}<\frac{1}{\nu}\sum_{i=1}^{\nu}X_{i}^{2}\mid\widehat{\theta}_{0}=\theta_{\mathrm{opt}}\right)=P\left(\sum_{i=1}^{\nu}\frac{X_{i}^{2}}{\sigma^{2}(\theta)}>e^{2r}\left(\frac{\nu}{\sigma^{2}(\theta)}\right)\right)=1-P\left(\sum_{i=1}^{\nu}\frac{X_{i}^{2}}{\sigma^{2}(\theta)}\leq e^{2r}\left(\frac{\nu}{\sigma^{2}(\theta)}\right)\right),$ (19) where $Q=\sum_{i=1}^{\nu}\frac{X_{i}^{2}}{\sigma^{2}(\theta)}$ represents the sum of squares of $\nu$ independent standard normal random variables, and thus follows a chi-squared distribution with $\nu$ degrees of freedom ($Q\sim\chi^{2}(\nu)$) [31]. Figure 3 shows the probability of obtaining an imaginary solution in Eq. (19) as a function of $\theta$ for different values of the squeezing strength $r$ and sample size $\nu$. We observe that as $\theta$ deviates from the optimal value $\theta_{\mathrm{opt}}$ in Eq. (15), the probability that the MLE does not arise from a stationary point within the interval $\left[0,\pi/2\right)$ becomes nonzero. We also note that the region for which the likelihood does not have stationary points diminishes as we increase the degrees of freedom (sample size in the adaptive step) or the degree of squeezing. Moreover, when $e^{2r}-1/\nu\sum_{i=1}^{\nu}X_{i}^{2}<0$ (in regions where the global maximum does not correspond to a stationary point), the MLE in Eq. (18) corresponds to the boundary point $\pi/2$. This event introduces a bias in the estimate for subsequent adaptive steps, leading to a decrease in the precision of the final estimate. This effect becomes more detrimental when the phase to be estimated $\theta$ is close to the boundary point $\pi/2$ (see Figure 3). The strategy addresses this problem by first choosing a random guess $\widehat{\theta}_{0}\in[0,\pi/2)$ in the first adaptive step. This initial random hypothesis reduces on average the probability in Eq. (19) that the MLE does not arise from a stationary point within $[0,\pi/2)$, according to the law of total probability $P\left(e^{2r}<\frac{1}{\nu}\sum_{i=1}^{\nu}X_{i}^{2}\right)=\mathrm{E}_{\hat{\theta}_{0}}\left[P\left(e^{2r}<\frac{1}{\nu}\sum_{i=1}^{\nu}X_{i}^{2}\mid\widehat{\theta}_{0}\right)\right]=\int_{0}^{\pi/2}d\widehat{\theta}_{0}P\left(\sum_{i=1}^{\nu}\frac{X_{i}^{2}}{\sigma^{2}\left(\theta_{*}\right)}>e^{2r}\left(\frac{\nu}{\sigma^{2}\left(\theta_{*}\right)}\right)\right).$ (20) Moreover, this probability decreases as the sample size $\nu$ increases, as shown in Fig. 4.
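Eq. (19) can be evaluated directly with the chi-squared survival function; a short sketch (ours, using SciPy), assuming $\widehat{\theta}_{0}=\theta_{\mathrm{opt}}$ so that the outcome variance is $\sigma^{2}(\theta)$ from Eq. (12):

```python
import numpy as np
from scipy.stats import chi2

def prob_no_stationary_point(theta, r, nu):
    # Eq. (19): P(Q > e^{2r} * nu / sigma2(theta)) with Q ~ chi2(nu).
    var = np.exp(-2 * r) * np.cos(theta) ** 2 + np.exp(2 * r) * np.sin(theta) ** 2
    return chi2.sf(np.exp(2 * r) * nu / var, df=nu)

r, nu = 1.0, 3705
for theta in (0.1, 0.5, 1.0, 1.5):
    # The probability grows as theta approaches the boundary pi/2.
    print(theta, prob_no_stationary_point(theta, r, nu))
```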
As a final step, to ensure that the MLE always arises from stationary points, we introduce a modified estimator $\widehat{\theta}^{\textrm{U}}_{\textrm{MLE}}(\vec{X}_{\nu})=\begin{cases}\widehat{\theta}_{\textrm{MLE}}(\vec{X}_{\nu})&\text{if }e^{2r}-1/\nu\sum_{i=1}^{\nu}X_{i}^{2}\geq 0,\\ \widehat{\theta}_{\textrm{MLE}}(\vec{X}_{s})&\text{otherwise},\end{cases}$ (21) where $\vec{X}_{s}$ is a subsequence of $\vec{X}_{\nu}$ constructed by iteratively removing the highest values of $\vec{X}_{\nu}$ in descending order until the condition $e^{2r}-1/s\sum_{i=1}^{s}X_{i}^{2}\geq 0$ is satisfied. We note that while the estimator ignores a few measurement outcomes from the sample, the probability of this event is low, and this modified estimator greatly improves the final variance of the estimates. Moreover, it is worth noting that this modified estimator in Eq. (21) is only necessary in the first adaptive step, since once an estimate $\widehat{\theta}$ sufficiently close to the true value $\theta$ is obtained, the estimates during subsequent adaptive steps will be close to $\theta_{\mathrm{opt}}$. Therefore, for sufficiently large values of $r$ and moderate $\nu$, this procedure makes the probability in Eq. (19) go to zero, guaranteeing the asymptotic efficiency of the strategy.

Figure 4: Conditional probability (Eq. (20)) that the likelihood function lacks a stationary point within $\left[0,\pi/2\right)$ given $\widehat{\theta}_{0}$ uniformly random between $0$ and $\pi/2$. The probability is upper bounded by the $\nu=1$ case and decreases as $\nu$ increases. The values of $\nu$ correspond to the degrees of freedom (DoF) for the random variable $Q\sim\chi^{2}(\nu)$.

## 4 Performance

We evaluate the performance of the adaptive estimation strategy based on locally optimal POVMs, Eq. (16), with the modified estimator in Eq. (21). We conduct Monte Carlo simulations varying the sample size $\nu$, number of adaptive steps $m$, and squeezing strength $r$. We investigate the precision and efficiency of the estimation strategy and its convergence towards the QCRB in Eq. (7) for $\theta\in[0,\pi/2)$. Considering that the estimator $\widehat{\theta}\left(X\right)$ obtained from the Monte Carlo simulations exhibits a $\pi/2$-periodic distribution, we evaluate the precision of $\widehat{\theta}\left(X\right)$ by using the corresponding Holevo variance [4, 40]: $\mathrm{Var}_{\theta}\left[\widehat{\theta}(X)\right]=\frac{\left[\mathrm{E}\left[\cos\left(\frac{2\pi}{P}\left(\widehat{\theta}(X)-\theta\right)\right)\right]\right]^{-2}-1}{\left(\frac{2\pi}{P}\right)^{2}},$ (22) where the factor $P$ represents the period of the estimator’s distribution, in our case $P=\pi/2$. Figure 5 shows the results for the Holevo variance of $\widehat{\theta}\left(X\right)$ within $\theta\in[0,\pi/2)$ for adaptive estimation strategies based on homodyne detection and squeezed vacuum for different numbers of adaptive steps $m$ from $3$ to $15$. The homodyne measurement without feedback and the two-step adaptive homodyne strategy from Ref. [22] are shown for comparison. All the results have been normalized to the QCRB $=\mathrm{QFI}^{-1}$. The results shown in Figure 5 are obtained from the average of five Monte Carlo simulations, each involving $10,000$ total estimates for a squeezing strength $r=1.01$ and a total number of copies of the state $N=3705$. (These parameters were chosen to enable comparison of performance with previous estimation strategies.)
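The Monte Carlo experiments in this section rely on applying the modified estimator of Eq. (21) at each adaptive step. A sketch (ours) of one such step on top of the MLE of Eq. (18); the trimming rule drops the largest-magnitude outcomes one at a time, and clipping the arccos argument to $[0,1]$ is our own numerical safeguard, not part of Eq. (21):

```python
import numpy as np

def mle(x, r, theta_hat0):
    # MLE of Eq. (18); assumes e^{2r} - mean(x^2) >= 0.
    theta_opt = 0.5 * np.arccos(np.tanh(2 * r))
    m2 = np.mean(x ** 2)
    arg = np.exp(r) * np.sqrt(np.exp(2 * r) - m2) / np.sqrt(np.exp(4 * r) - 1)
    return np.arccos(min(arg, 1.0)) - theta_opt + theta_hat0  # clip: our safeguard

def modified_mle(x, r, theta_hat0):
    # Eq. (21): drop the highest-magnitude outcomes until the MLE is real.
    x = np.sort(np.abs(x))                     # ascending; largest values last
    while len(x) > 1 and np.exp(2 * r) - np.mean(x ** 2) < 0:
        x = x[:-1]
    return mle(x, r, theta_hat0)

# Example: one adaptive step with nu = 247 outcomes drawn from the
# Gaussian model of Eqs. (11)-(12) at the shifted phase of Eq. (17).
rng = np.random.default_rng(1)
r, theta, theta_hat0 = 1.01, 0.4, 0.3
theta_star = theta + 0.5 * np.arccos(np.tanh(2 * r)) - theta_hat0
var = np.exp(-2 * r) * np.cos(theta_star) ** 2 + np.exp(2 * r) * np.sin(theta_star) ** 2
x = rng.normal(0.0, np.sqrt(var), size=247)
print(modified_mle(x, r, theta_hat0))          # estimate close to theta = 0.4
```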
The sampling process employed rejection sampling [41], while the optimization used generalized simulated annealing over the interval $[0,\pi/2]$ [42]. We observe that the proposed multi-step adaptive estimation strategy based on homodyne detection consistently outperforms the non-adaptive and the two-step homodyne [22] strategies. Moreover, as can be observed in Figure 5, this adaptive homodyne strategy progressively approaches the QCRB (dashed horizontal line), and is expected to saturate this bound for all values of $\theta\in[0,\pi/2)$ in the limit of many adaptive steps. For instance, the proposed adaptive estimation strategy with $m=15$ adaptive steps achieves a precision of just $7\%$ above the QCRB for phases $\theta\in[0,\pi/2)$ on average, compared to $42\%$ with two steps [22]. Moreover, while the performance of previous strategies for phases far from $\theta_{\mathrm{opt}}$ deviates significantly from the QCRB, by construction the proposed strategy ensures an asymptotic quantum-limited performance (see Appendix 8.3). These results highlight the fundamental advantage of this multi-step adaptive strategy for parameter estimation, and underscore its potential for optical phase estimation and quantum metrology.

Figure 5: Holevo variance of the adaptive estimation strategy based on the AQSE formalism as a function of $\theta$, for different numbers of adaptive steps $m$ for $N=3705$ independent copies of the state with squeezing strength of $r=1.01$. The y-axis shows the Holevo variance in logarithmic scale normalized to the inverse of the QFI, Eq. (7), corresponding to the QCRB. The magenta line shows the two-step adaptive estimation scheme from Ref. [22], and the blue curve shows the estimation with homodyne detection without feedback. The shaded regions represent a 1-$\sigma$ standard deviation.

## 5 Loss and Noise Effects

### 5.1 Losses

In quantum optics, linear losses can be represented as the transmission of the state $\rho(\theta)$ through a (linear) lossy channel modeled as a beam splitter with transmittance $0<T<1$. This channel maps the phase-encoded squeezed vacuum state $\rho(\theta)$ into a squeezed thermal state, which reduces the QFI below that of squeezed vacuum states [43] as: $F_{Q}^{\text{Lossy}}=\left[\frac{T^{2}}{1+2T(1-T)\sinh^{2}(r)}\right]F_{Q},$ (23) where $F_{Q}=2\sinh^{2}(2r)$ is the QFI about $\theta$ for squeezed vacuum states. Therefore, the impact of linear losses on the QCRB can be effectively accounted for by an appropriate rescaling [43]. Furthermore, the squeezed thermal states resulting from losses further reduce the classical Fisher information for the homodyne measurement [43, 44]: $F^{\text{Lossy}}_{X}(\theta)=\left[\frac{T^{2}}{1+4T(1-T)\sinh^{2}(r)}\right]F_{Q}.$ (24) Thus, losses make homodyne measurements suboptimal, $F^{\text{Lossy}}_{X}(\theta)<F_{Q}^{\text{Lossy}}$, for all $\theta\in[0,\pi/2)$. However, it is worth noting that homodyne still allows for surpassing the SNL under losses [43, 22]. Figure 6 shows the ratios $F^{\text{Lossy}}_{Q}/F_{Q}$ (dots) and $F^{\text{Lossy}}_{X}(\theta)/F_{Q}$ (crosses) as a function of the losses ($1-T$) for different values of $r$. We observe that probe states with larger $r$ are more sensitive to losses, as expected. Then, for highly squeezed probes, the Fisher information approaches $F_{Q}$ only at $T\approx 1$.
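The loss rescalings of Eqs. (23) and (24) are straightforward to tabulate; a short sketch (ours):

```python
import numpy as np

def ratio_qfi(T, r):
    # Eq. (23): lossy QFI normalized to the lossless QFI F_Q.
    return T ** 2 / (1 + 2 * T * (1 - T) * np.sinh(r) ** 2)

def ratio_hom(T, r):
    # Eq. (24): lossy homodyne Fisher information normalized to F_Q.
    return T ** 2 / (1 + 4 * T * (1 - T) * np.sinh(r) ** 2)

T = 0.9
for r in (0.5, 1.0, 1.5):
    # Larger r is more sensitive to loss: the ratios drop faster with 1 - T.
    print(r, ratio_qfi(T, r), ratio_hom(T, r))
```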
Figure 6: Fisher information ratios $F^{\text{Lossy}}_{Q}/F_{Q}$ (dots) and $F^{\text{Lossy}}_{X}(\theta)/F_{Q}$ (crosses) for squeezed vacuum states as a function of $T$ for squeezing strengths $r$ of $0.5$, $1.0$, and $1.5$.

### 5.2 Imperfections at state preparation

The process of state preparation is often affected by small random errors. Due to the Central Limit Theorem, the cumulative effect of these numerous, small, and independent errors will tend towards a normal distribution. Therefore, it is reasonable to assume that the squeezing strength $r$ of the probe state $\rho=\lvert 0,r\rangle\langle 0,r\rvert$ is the outcome of a random variable $R$ with a normal distribution $\mathcal{N}\left(r_{0},\sigma_{r}^{2}\right)$, where $\sigma_{r}^{2}\geq 0$ is a small number relative to $r_{0}$. To take state preparation errors of this kind into account in the adaptive strategy, we consider that the samples at every adaptive step are drawn from the conditional random variable $X\mid R$. We then proceed as described in Section 3. In this case, the Fisher information about $\theta$ contained in the random variable $X\mid R$ is calculated with respect to the conditional density of $X$ given $R$, that is, $F_{X\mid R}(\theta)=\mathrm{E}_{R}\left[F_{X\mid R=r}(\theta)\right],$ (25) where $F_{X\mid R=r}(\theta)$ corresponds to Eq. (13) at a specific value $R=r$. Therefore, in the asymptotic limit, the adaptive homodyne strategy samples around the phase that maximizes Eq. (25). In experimental settings, the range for $\sigma_{r}$ typically lies between $0.01$ and $0.02$ [22, 45]. For state preparation errors with small variances $\sigma_{r}^{2}$, the optimal phase $\theta_{\mathrm{opt}}^{\mathrm{noise}}$ deviates slightly from the noiseless case $\theta_{\mathrm{opt}}$, and modifies the Fisher information. Figure 7 shows $F_{X\mid R}(\theta)$ for standard deviations of $r$ in state preparation $\sigma_{r}=0.01$ and $\sigma_{r}=0.02$ as a function of $\theta\in[0,\pi/2)$. Since the QFI for squeezed vacuum states is a nonlinear function, the contribution of positive deviations of $r$ from the mean to the expected value of $F_{X\mid R}(\theta)$ increases with $r$. This nonlinear effect causes the maximum of $F_{X\mid R}(\theta)$ to slightly surpass $F_{X\mid R=r}(\theta_{\mathrm{opt}})$ (dashed red line), as can be observed in the inset of Fig. 7.

Figure 7: Expected Fisher information as a function of $\theta$ for input states with state preparation errors with normally varying squeezing strength for $r_{0}=1$ with $\sigma_{r}$ of $0.01$ and $0.02$. The dashed red line corresponds to the QFI for squeezed vacuum states at $r=1$. The inset shows a zoom into the maximum of the curve, showing the overshoot effect due to the nonlinear dependence of $F_{X\mid R}(\theta)$ on $r$.

## 6 Generalized adaptive-dyne phase estimation

In general, the problem of phase estimation involves estimation over the complete range of possible phases from $0$ to $2\pi$. However, when using squeezed vacuum states for performing phase estimation beyond the SNL, there is a physical limitation on the range of phases that can be estimated. Squeezed vacuum states are invariant under phase shifts of $\pi$, restricting the estimable phases to the interval $[0,\pi)$, which is half of the complete range in the general problem of phase estimation. This symmetry can be seen in the Husimi Q representation of the state $\rho$, $Q(\alpha)=\frac{1}{\pi}\left\langle\alpha\right|\rho\left|\alpha\right\rangle$ (see Fig.
1-i), which shows that the squeezed vacuum probe can only encode the phase modulo $\pi$. Moreover, the measurement employed for decoding the phase can impose severe constraints on the range of phases that can be estimated. For instance, homodyne measurements further reduce the range of phases within which phase estimation is possible to $[0,\pi/2)$. This is because the probability distributions of outcomes from homodyne measurements in Eq. (11), associated with POVMs in Eq. (16), are $\pi/2$-periodic. Consequently, any strategy based on adaptive homodyne is restricted to estimating phases within the range $[0,\pi/2)$. To go beyond this limited range and enable phase estimation with squeezed vacuum within the entire range of $[0,\pi)$, it is necessary to generalize the adaptive estimation strategy to include measurements beyond homodyne.

Figure 8: Generalized adaptive dyne strategy for phase estimation based on squeezed vacuum probes for phases $\theta\in[0,\pi)$. This strategy takes advantage of the capability of heterodyne measurements to unambiguously estimate phases within the whole parametric space $[0,\pi)$ for squeezed vacuum, overcoming the non-identifiability problem in the likelihoods from homodyne measurements. By employing a small sample of heterodyne measurements, the unknown phase $\theta$ is localized within a neighborhood within $[0,\pi)$. Then, the strategy employs adaptive homodyne for phase estimation within this neighborhood.

We generalize this adaptive estimation strategy by incorporating heterodyne measurements, enabling optimal phase estimation within $[0,\pi)$ in the asymptotic limit. We exploit the capability of heterodyne detection to unambiguously estimate phases within $[0,\pi)$, which is only limited by the symmetry properties of the squeezed vacuum probe. Although heterodyne detection falls short of the optimal limit and has a constant, suboptimal Fisher information, it provides a means of solving the non-identifiability problem of the homodyne likelihood [46]. Figure 8 shows the generalized adaptive dyne strategy. By employing a small sample of size $N_{1}$ of a heterodyne measurement, denoted as $\vec{X}_{N_{1}}$, the strategy determines a neighborhood in the parametric space $[0,\pi)$ to which the unknown phase belongs. This makes the parameter identifiable within $[0,\pi)$ and produces a likelihood peaked around the true value. After the heterodyne sampling $\vec{X}_{N_{1}}$, the generalized dyne estimation strategy implements the strategy based on adaptive homodyne sampling, described above, with $N_{2}$ copies of the input state and $m$ adaptive measurements. By construction, when $N_{2},m\to\infty$ (in the asymptotic limit), and $N_{1}$ is large enough such that the maximum likelihood estimator $\widehat{\theta}(\vec{X}_{N_{1}})\in[0,\pi/2)$ with high probability, this strategy saturates the QCRB, Eq. (7), with a fast convergence rate [3, 46]. Thus, this generalized dyne estimation strategy is able to extract all the information about the phase encoded in the quantum probe, and approach the quantum limit for any phase within $[0,\pi)$. Figure 9 shows the performance of the generalized dyne phase estimation strategy enabling near quantum-optimal phase estimation for phases within $[0,\pi)$ with a finite number of samples. These results are obtained from the average of five Monte Carlo simulations considering $N=3705$ copies of the squeezed vacuum probes with $m=15$ adaptive steps, each with a sample of size $\nu=247$.
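A self-contained sketch (ours) of the two-stage structure of Figure 8. The heterodyne outcome model (a 2D Gaussian with principal variances $e^{\pm 2r}+1$, i.e. the quadrature variances of Eq. (12) plus one vacuum unit) and the principal-axis localization are simplified stand-ins for the MLE-based first stage described in the text, and the homodyne refinement omits the full branch handling of Eq. (21):

```python
import numpy as np

rng = np.random.default_rng(2)
r, theta = 1.01, 2.0                  # true phase in [0, pi)
theta_opt = 0.5 * np.arccos(np.tanh(2 * r))

def sigma2(t):
    return np.exp(-2 * r) * np.cos(t) ** 2 + np.exp(2 * r) * np.sin(t) ** 2  # Eq. (12)

# Stage 1 -- heterodyne localization (simplified stand-in): draw 2D
# Gaussian outcomes whose squeezed axis lies at angle theta, then
# estimate theta from the minor principal axis of the sample covariance,
# which is identifiable modulo pi.
n1 = 247
u = rng.normal(0, np.sqrt(np.exp(-2 * r) + 1), n1)    # squeezed axis
v = rng.normal(0, np.sqrt(np.exp(2 * r) + 1), n1)     # anti-squeezed axis
c, s = np.cos(theta), np.sin(theta)
xy = np.column_stack([c * u - s * v, s * u + c * v])  # rotate by theta
w, V = np.linalg.eigh(np.cov(xy.T))                   # ascending eigenvalues
theta_hat = np.arctan2(V[1, 0], V[0, 0]) % np.pi      # minor-axis angle

# Stage 2 -- adaptive homodyne refinement via the MLE of Eq. (18)
# (branch handling simplified relative to the full strategy).
for _ in range(14):
    t_star = theta + theta_opt - theta_hat            # phase seen by POVM (16)
    x = rng.normal(0, np.sqrt(sigma2(t_star)), 247)
    arg = np.exp(r) * np.sqrt(max(np.exp(2 * r) - np.mean(x ** 2), 0.0)) \
        / np.sqrt(np.exp(4 * r) - 1)
    theta_hat = (np.arccos(min(arg, 1.0)) - theta_opt + theta_hat) % np.pi

print(theta_hat)  # close to the true phase theta = 2.0
```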
In this generalized strategy, the first step entails a heterodyne measurement with a sample of size $N_{1}=\nu_{1}=247$. This sample size is large enough to produce estimates with high probability in $[0,\pi/2)$, and it is significantly smaller than the cumulative sum of subsequent (homodyne) samples $N_{2}=\sum_{i=2}^{15}\nu_{i}=3458$. We note that, within the statistical noise of our simulations, the generalized dyne phase estimation strategy enables phase estimation approaching the QCRB within the full interval $\theta\in[0,\pi)$. We note that while the heterodyne measurement exhibits suboptimal performance for all $\theta\in[0,\pi)$, a small heterodyne sampling suffices for solving the non-identifiability problem arising from the symmetry of homodyne measurements, with a negligible penalty in the estimator error. Moreover, the analysis regarding the convergence properties for the component of the strategy involving homodyne measurements described above remains valid. Consequently, it is expected that the generalized dyne strategy using a small heterodyne sample saturates the QCRB in the asymptotic limit for all $\theta\in[0,\pi)$. However, reaching the asymptotic regime should require a larger number of adaptive steps and resources compared to the strategy based solely on adaptive homodyne measurements.

Figure 9: Performance of the generalized adaptive dyne phase estimation strategy with $m=15$ adaptive steps (orange), using $N=3705$ probes with $r=1.01$. The performance of homodyne detection without feedback (blue) is shown for reference. In the generalized adaptive-dyne protocol, the initial adaptive step involves heterodyne sampling, while the subsequent adaptive steps utilize the locally optimal POVMs in Eq. (16). Throughout the simulation, the sample size remains constant at $\nu=247$ (number of probe states per adaptive step). The dashed blue line shows the heterodyne limit, while the dashed black line corresponds to the QCRB. The shaded regions indicate a 1-$\sigma$ standard deviation. Note that the y-axis is on a logarithmic scale.

## 7 Discussion & Conclusions

Our analysis and simulations have shown that the proposed adaptive estimation strategy efficiently extracts the maximum attainable information about the unknown phase encoded in squeezed vacuum states in the asymptotic limit. However, we note that while the proposed strategy allows for phase estimation at the quantum limit for the full range of phases $[0,\pi)$ for squeezed vacuum, this range is limited due to the $\pi$ phase-shift symmetry inherent in squeezed vacuum states. Therefore, it becomes imperative to explore quantum probes capable of overcoming this non-identifiability problem and enabling unambiguous phase encoding within the full parameter space from $0$ to $2\pi$. We also note that other quantum states used for phase estimation at the quantum limit, such as NOON states, face the same limitation due to their inherent phase-shift symmetries [12]. On the other hand, there may be other optical probes capable of solving the non-identifiability problem, albeit with lower QFI. Coherent states, for example, have a significantly lower QFI compared to squeezed vacuum states, but allow for unambiguously identifying the quadrants of the phase [47, 48, 49, 50]. As an alternative quantum probe for phase estimation, displaced squeezed states $D(\alpha)\lvert 0,r\rangle$ can offer the ability to unambiguously encode phases within $[0,2\pi)$ with a higher QFI compared to coherent states.
We note, however, that there will be a trade-off between the achievable QFI, relative to that of squeezed vacuum states, and the ability to identify the quadrants of the phase. Finding the best trade-off requires optimization of both the squeezing strength $r$ and the displacement parameter $\alpha$, given a fixed resource budget in terms of the number of photons. Moreover, this trade-off will critically depend on the available resources and experimental constraints. Further research will focus on exploring and identifying the optimal probe states capable of overcoming the non-identifiability problem while maintaining a high QFI for phase estimation. In summary, we propose a generalized adaptive dyne estimation strategy for optical phase estimation with squeezed vacuum states that approaches the quantum limit in precision. This strategy leverages homodyne measurements and rotations to implement a complete set of locally optimal POVMs. This set of POVMs is used to construct an adaptive estimation method based on the Adaptive Quantum State Estimation (AQSE) formalism, which ensures consistency and efficiency of the estimator in the asymptotic limit, with variance equal to the inverse of the QFI for phases $\theta\in[0,\pi/2)$. To extend the parameter range for quantum phase estimation to $[0,\pi)$, which is the maximum range of phases that can be encoded in squeezed vacuum, we generalize the estimation strategy to incorporate a small number of heterodyne measurements. This heterodyne sampling allows for identifying the neighborhood of the phase within $[0,\pi)$, solving the non-identifiability problem in the likelihoods from homodyne measurements, while maintaining a quantum-optimal performance in the limit of many adaptive steps. This result represents a significant advancement in high-precision quantum metrology and optical phase estimation based on quantum-correlated states.

## 8 Appendix

### 8.1 Maximum likelihood estimation

Let us consider the problem of estimating an unknown parameter $\theta\in\Theta$ associated with a set of quantum states $\left\{\rho(\theta):\theta\in\Theta\right\}$ from measurements of the system. When independent measurements are performed over the system, a set of independent random variables carries the information about $\theta$. In this case, the total Fisher information about $\theta$ is the sum of the individual Fisher information values for each measurement. This property can be exploited to reach the QCRB in the asymptotic limit ($N\to\infty$). When the outcomes of a POVM $M$ have a Fisher information that coincides with the QFI, and their probability distribution satisfies a set of mild regularity conditions described below, it can be shown that in the asymptotic limit the MLE applied to the outcomes of $M$ can achieve the QCRB [27]. Given a sample $\vec{X}_{N}=X_{1},\ldots,X_{N}\in\mathcal{X}^{N},\,N\geq 1$, generated from the application of a sequence of POVMs $M_{1},\ldots,M_{N}$ to a quantum system, the likelihood function can be expressed as follows: $L(\theta;\vec{X}_{N})=\prod_{i=1}^{N}f(X_{i}\mid\theta;M_{i}).$ (26) Here, $\theta$ represents the unknown parameter to be estimated, $X_{i}$ is the random variable that models the outcomes of the POVM $M_{i}$, and $f(X_{i}\mid\theta;M_{i})$ denotes their density function (or probability mass function for discrete random variables), given by Born’s rule in Eq. (2). Given the likelihood function in Eq.
(26), the MLE is defined as: $\widehat{\theta}_{\mathrm{MLE}}\left(\vec{X}_{N}\right)=\arg\max_{\theta\in\Theta}L(\theta;\vec{X}_{N}).$ (27) To saturate the QCRB, the MLE is required to be asymptotically consistent, which means that as the sample size increases, the MLE converges in probability to the true value of the parameter $\theta$ [31]. For an MLE to be asymptotically consistent, the following conditions on the parametric set $\Theta$ and the set of density functions $\left\{f(X_{i}\mid\theta;M_{i})\right\}_{\theta\in\Theta}$ for each POVM $M_{i}$ must be satisfied [31, 46, 27]:

* Compactness: The parameter space $\Theta$ and the space of POVMs must be compact, i.e. closed and bounded. This property ensures that the MLE exists for any sample size.
* Identifiability: The true value of the parameter must be uniquely determined by the probability distribution. In other words, different values of the parameter must produce different probability distributions.
* Measurability: The probability density function $f(X_{i}\mid\theta;M_{i})$ must be measurable for all $X_{i}=x_{i}$ and for each POVM $M_{i}$. Thus the MLE is well-defined as a random variable.
* Continuity: The probability density function $f(X_{i}\mid\theta;M_{i})$ must be continuous in the parameter space $\Theta$ for all $X_{i}=x_{i}$ and for each POVM $M_{i}$. This guarantees that small changes in the value of the parameter result in small changes in the probability.
* Dominance: The log-likelihood $\log\left[f(X_{i}\mid\theta;M_{i})\right]$ is uniformly Lipschitz in $\theta$ with respect to some dominating measure on $\mathcal{X}$. This ensures the convergence of the MLE.

Some of these conditions can be relaxed. However, a critical condition for parameter estimation of cyclic (or periodic) parameters is identifiability, which ensures the uniqueness of the maximum likelihood estimator over the parameter space. This property is particularly important for phase estimation, where the density functions of the random variables generated from different measurements in general show a periodicity smaller than the parametric space. Under this set of regularity conditions, the MLE exhibits asymptotic consistency. This property ensures that, under certain assumptions such as the existence of the Fisher information and sufficient smoothness in the likelihood functions, the distribution of the MLE in the limit of $N\to\infty$ follows a normal distribution: $\widehat{\theta}(\vec{X}_{N})\sim\mathcal{N}\left(\theta,\frac{1}{F_{\vec{X}_{N}}(\theta)}\right).$ (28) Here $\widehat{\theta}(\vec{X}_{N})$ denotes the MLE based on the sample $\vec{X}_{N}$, and $F_{\vec{X}_{N}}(\theta)$ represents the Fisher information associated with the sample. Consequently, the variance of $\widehat{\theta}(\vec{X}_{N})$ is given by $\frac{1}{F_{\vec{X}_{N}}(\theta)}$, and $\widehat{\theta}(\vec{X}_{N})$ achieves the classical Cramér-Rao bound for all $\theta\in\Theta$. An estimator that exhibits this property is known as an efficient estimator. The efficiency stems from the fact that when samples are derived from a sequence of $N$ independent and identically distributed random variables, the Fisher information of the sample is $N$ times the Fisher information of an individual random variable. Consequently, the variance of the estimator diminishes as the inverse of the sample size.
Notably, when the Fisher information of the random variables equals the QFI, the MLE attains the QCRB, representing the ultimate limit in precision for parameter estimation.

### 8.2 Adaptive Quantum State Estimation (AQSE)

The identification and implementation of an optimal POVM that achieves a Fisher information equal to the QFI across the entire parameter space $\Theta$ is a challenging task. This difficulty arises from the fact that, in general, such a POVM depends on the unknown parameter $\theta$ to be estimated. To address this issue, Nagaoka [28] proposed an adaptive quantum estimation scheme known as Adaptive Quantum State Estimation (AQSE), based on locally optimal POVMs. A POVM $M$ is considered to be locally optimal at $\theta_{0}$ if it produces a locally unbiased estimator that achieves the QCRB when the true parameter coincides with $\theta_{0}$. However, the implementation of locally optimal POVMs for estimating the unknown parameter $\theta$ becomes impractical in experimental settings due to their inherent dependence on the unknown parameter itself [27]. AQSE provides a possible path to overcome this challenge. This method can be summarized as follows: In the initial step, the AQSE method assumes a collection of locally optimal POVMs $\left\{M_{\theta_{0}}\right\}_{\theta_{0}\in\Theta}$. Starting with an arbitrary initial estimate $\widehat{\theta}_{0}$ for the parameter $\theta$, the locally optimal measurement at $\widehat{\theta}_{0}$ with associated POVM $M_{\widehat{\theta}_{0}}$ is applied, resulting in the outcome $x_{1}$. The MLE is then applied to this data $x_{1}$ to obtain an updated estimate denoted as $\widehat{\theta}_{1}:=\widehat{\theta}_{\mathrm{MLE}}(x_{1})$, which serves as the subsequent guess of the parameter $\theta$ for the next adaptive step. For the adaptive step $n$, $n\geq 2$, the method applies the POVM $M_{\widehat{\theta}_{n-1}}$, yielding the outcome $x_{n}$. From this, the MLE produces an estimate $\widehat{\theta}_{\mathrm{MLE}}(x_{1},\ldots,x_{n})=\widehat{\theta}_{n}$, and this process is repeated iteratively. By satisfying the regularity conditions described above (see Appendix: 8.1) for each locally optimal POVM, the sequence of MLEs $\widehat{\theta}_{\mathrm{MLE}}(X_{1},\ldots,X_{m})$ converges to the true parameter $\theta$ as the number of adaptive steps $m\to\infty$. Additionally, the distribution of the asymptotic estimator follows a normal distribution centered at $\theta$ with a variance equivalent to the QCRB [28, 27].

### 8.3 Phase estimation around $\theta_{\mathrm{opt}}$

The estimator in Eq. (21) of the proposed strategy minimizes the variance over all the phases within the parametric space $[0,\pi/2)$ with adaptive homodyne. This estimator results in a variance above the QCRB for phases close to $\theta_{\mathrm{opt}}$ for a small number of adaptive steps $m$. On the other hand, we note that the two-step protocol from Ref. [22] is closer to the QCRB for $\theta\approx\theta_{\mathrm{opt}}$ than our strategy for small $m$ ($m=3,5$). However, our proposed strategy is more general, and encompasses the one from Ref. [22]. By taking an uneven splitting ratio between the first and second step and $\widehat{\theta}_{0}=\theta_{\mathrm{opt}}$, our strategy with $m=2$ reduces by construction to the strategy from Ref. [22]. Nevertheless, our strategy is guaranteed to achieve the QCRB for all phases in $[0,\pi)$ in the asymptotic limit.
## Funding

This work was funded by the National Science Foundation (NSF) Grants # PHY-2210447, FRHTP # PHY-2116246, and the Q-SEnSE QLCI # 2016244.

## Acknowledgments

We thank Laboratorio Universitario de Cómputo de Alto Rendimiento (LUCAR) of IIMAS-UNAM for their information-processing services.

## Disclosures

The authors declare no conflicts of interest.

## Data availability

The data that support the findings of this study are available from the authors upon request.

## References

* [1] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. “Advances in quantum metrology”. Nat. Photon. 5, 222–229 (2011).
* [2] C. L. Degen, F. Reinhard, and P. Cappellaro. “Quantum sensing”. Rev. Mod. Phys. 89, 035002 (2017).
* [3] Masahito Hayashi. “Asymptotic theory of quantum statistical inference”. WORLD SCIENTIFIC. (2005).
* [4] Alexander Holevo. “Probabilistic and statistical aspects of quantum theory”. Edizioni della Normale. (2011).
* [5] J. Aasi, J. Abadie, B. P. Abbott, R. Abbott, et al. “Enhanced sensitivity of the LIGO gravitational wave detector by using squeezed states of light”. Nature Photon 7, 613–619 (2013).
* [6] Alessandra Gatti, Enrico Brambilla, and Luigi Lugiato. “Chapter 5 quantum imaging”. In E. Wolf, editor, Progress in Optics. Volume 51 of Progress in Optics, pages 251–348. Elsevier (2008).
* [7] I. Kruse, K. Lange, J. Peise, B. Lücke, L. Pezzè, J. Arlt, W. Ertmer, C. Lisdat, L. Santos, A. Smerzi, and C. Klempt. “Improvement of an atomic clock using squeezed vacuum”. Phys. Rev. Lett. 117, 143004 (2016).
* [8] S. Danilin, A. V. Lebedev, A. Vepsäläinen, G. B. Lesovik, G. Blatter, and G. S. Paraoanu. “Quantum-enhanced magnetometry by phase estimation algorithms with a single artificial atom”. npj Quantum Inf 4, 1–8 (2018).
* [9] A. Gilchrist, Kae Nemoto, W. J. Munro, T. C. Ralph, S. Glancy, Samuel L. Braunstein, and G. J. Milburn. “Schrödinger cats and their power for quantum information processing”. Journal of Optics B: Quantum and Semiclassical Optics 6, S828–S833 (2004).
* [10] M. Fox. “Quantum optics: An introduction”. Oxford Master Series in Physics. OUP Oxford. (2006). url: https://books.google.com/books?id=2_ZP-LDF9jkC.
* [11] Serge Haroche. “Entanglement, Decoherence and the Quantum/Classical Boundary”. Physics Today 51, 36–42 (1998).
* [12] B. M. Escher, R. L. de Matos Filho, and L. Davidovich. “Quantum metrology for noisy systems”. Braz J Phys 41, 229–247 (2011).
* [13] Emanuele Polino, Mauro Valeri, Nicolò Spagnolo, and Fabio Sciarrino. “Photonic quantum metrology”. AVS Quantum Science 2, 024703 (2020).
* [14] Marco Barbieri. “Optical quantum metrology”. PRX Quantum 3, 010202 (2022).
* [15] Carlton M. Caves. “Quantum-mechanical noise in an interferometer”. Phys. Rev. D 23, 1693–1708 (1981).
* [16] Lorenzo Maccone and Alberto Riccardi. “Squeezing metrology: a unified framework”. Quantum 4, 292 (2020).
* [17] P.D. Drummond and Z. Ficek. “Quantum squeezing”. Springer Series on Atomic, Optical, and Plasma Physics. Springer Berlin Heidelberg. (2013). url: https://books.google.com/books?id=VSzrCAAAQBAJ.
* [18] H Vahlbruch, S Chelkowski, K Danzmann, and R Schnabel. “Quantum engineering of squeezed states for quantum communication and metrology”. New Journal of Physics 9, 371 (2007).
* [19] Henning Vahlbruch, Moritz Mehmet, Karsten Danzmann, and Roman Schnabel. “Detection of 15 dB squeezed states of light and their application for the absolute calibration of photoelectric quantum efficiency”. Phys. Rev. Lett. 117, 110801 (2016).
* [20] Axel Schönbeck, Fabian Thies, and Roman Schnabel. “13 dB squeezed vacuum states at 1550 nm from 12 mW external pump power at 775 nm”. Opt. Lett. 43, 110–113 (2018).
* [21] Joscha Heinze, Benno Willke, and Henning Vahlbruch. “Observation of squeezed states of light in higher-order Hermite-Gaussian modes with a quantum noise reduction of up to 10 dB”. Phys. Rev. Lett. 128, 083606 (2022).
* [22] Adriano A. Berni, Tobias Gehring, Bo M. Nielsen, Vitus Händchen, Matteo G. A. Paris, and Ulrik L. Andersen. “Ab initio quantum-enhanced optical phase estimation using real-time feedback control”. Nat. Photon. 9, 577–581 (2015).
* [23] Jens A. H. Nielsen, Jonas S. Neergaard-Nielsen, Tobias Gehring, and Ulrik L. Andersen. “Deterministic quantum phase estimation beyond N00N states”. Phys. Rev. Lett. 130, 123603 (2023).
* [24] B. J. Lawrie, P. D. Lett, A. M. Marino, and R. C. Pooser. “Quantum sensing with squeezed light”. ACS Photonics 6, 1307–1318 (2019).
* [25] Stefano Olivares and Matteo G. A. Paris. “Bayesian estimation in homodyne interferometry”. Journal of Physics B: Atomic, Molecular and Optical Physics 42, 055506 (2009).
* [26] Alex Monras. “Optimal phase measurements with pure Gaussian states”. Phys. Rev. A 73, 033821 (2006).
* [27] Akio Fujiwara. “Strong consistency and asymptotic efficiency for adaptive quantum estimation problems”. J. Phys. A Math. 39, 12489–12504 (2006).
* [28] Hiroshi Nagaoka. “On Fisher information of quantum statistical models”. In Masahito Hayashi, editor, Asymptotic Theory of Quantum Statistical Inference. Pages 113–124. WORLD SCIENTIFIC (2005).
* [29] Giulio Chiribella and Giacomo Mauro D’Ariano. “Extremal covariant measurements”. J. Math. Phys. 47, 092107 (2006).
* [30] Roberto Beneduci. “On the relationships between the moments of a POVM and the generator of the von Neumann algebra it generates”. Int. J. Theor. Phys. 50, 3724–3736 (2011).
* [31] Robert W. Keener. “Theoretical statistics”. Springer New York. (2010).
* [32] Samuel L. Braunstein and Carlton M. Caves. “Statistical distance and the geometry of quantum states”. Phys. Rev. Lett. 72, 3439–3443 (1994).
* [33] Carl W. Helstrom. “Quantum detection and estimation theory”. J Stat Phys 1, 231–252 (1969).
* [34] Roy J. Glauber. “Coherent and incoherent states of the radiation field”. Phys. Rev. 131, 2766–2788 (1963).
* [35] M. Aspachs, J. Calsamiglia, R. Muñoz-Tapia, and E. Bagan. “Phase estimation for thermal Gaussian states”. Phys. Rev. A 79, 033834 (2009).
* [36] Mattias T. Johnsson, Pablo M. Poggi, Marco A. Rodriguez, Rafael N. Alexander, and Jason Twamley. “Generating nonlinearities from conditional linear operations, squeezing, and measurement for quantum computation and super-Heisenberg sensing”. Phys. Rev. Res. 3, 023222 (2021).
* [37] Sina Zeytinoğlu, Ataç İmamoğlu, and Sebastian Huber. “Engineering matter interactions using squeezed vacuum”. Phys. Rev. X 7, 021041 (2017).
* [38] G. Mauro D’Ariano, Matteo G. A. Paris, and Massimiliano F. Sacchi. “Quantum tomography” (2003). arXiv:quant-ph/0302028.
* [39] Timo Mäkeläinen, Klaus Schmidt, and George P. H. Styan. “On the existence and uniqueness of the maximum likelihood estimate of a vector-valued parameter in fixed-size samples”. Ann. Stat. 9 (1981).
* [40] D. W. Berry and H. M. Wiseman. “Optimal states and almost optimal adaptive measurements for quantum interferometry”. Phys. Rev. Lett. 85, 5098–5101 (2000).
* [41] C. Robert and G. Casella. “Monte Carlo statistical methods”. Springer Texts in Statistics. Springer New York. (2005).
url: https://books.google.com/books?id=HfhGAxn5GugC.
* [42] Yang Xiang, Sylvain Gubian, Brian Suomela, and Julia Hoeng. “Generalized Simulated Annealing for Global Optimization: The GenSA Package”. The R Journal 5, 13–28 (2013).
* [43] M. Aspachs, J. Calsamiglia, R. Muñoz-Tapia, and E. Bagan. “Phase estimation for thermal Gaussian states”. Phys. Rev. A 79, 033834 (2009).
* [44] J. Twamley. “Bures and statistical distance for squeezed thermal states”. Journal of Physics A: Mathematical and General 29, 3723 (1996).
* [45] Chuan Xu, Lidan Zhang, Songtao Huang, Taxue Ma, Fang Liu, Hidehiro Yonezawa, Yong Zhang, and Min Xiao. “Sensing and tracking enhanced by quantum squeezing”. Photon. Res. 7, A14–A26 (2019).
* [46] Liam Paninski. “Asymptotic theory of information-theoretic experimental design”. Neural Comput. 17, 1480–1507 (2005).
* [47] Alexandre Belsley, Euan J. Allen, Animesh Datta, and Jonathan C. F. Matthews. “Advantage of coherent states in ring resonators over any quantum probe single-pass absorption estimation strategy”. Phys. Rev. Lett. 128, 230501 (2022).
* [48] M. T. DiMario and F. E. Becerra. “Single-shot non-Gaussian measurements for optical phase estimation”. Phys. Rev. Lett. 125, 120505 (2020).
* [49] M. A. Rodríguez-García, M. T. DiMario, P. Barberis-Blostein, and F. E. Becerra. “Determination of the asymptotic limits of adaptive photon counting measurements for coherent-state optical phase estimation”. npj Quantum Inf 8, 1–11 (2022).
* [50] D. T. Pope, H. M. Wiseman, and N. K. Langford. “Adaptive phase estimation is more accurate than nonadaptive phase estimation for continuous beams of light”. Phys. Rev. A 70, 043812 (2004).
# No Culture Left Behind: Massively Multi-Cultural Knowledge Acquisition & LM Benchmarking on 1000+ Sub-Country Regions and 2000+ Ethnolinguistic Groups

Yi R. Fung Ruining Zhao Jae Doo Chenkai Sun Heng Ji University of Illinois at Urbana-Champaign <EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

Pretrained large language models have revolutionized many applications but still face challenges related to cultural bias and a lack of the cultural commonsense knowledge crucial for guiding cross-cultural communication and interactions. Recognizing the shortcomings of existing methods in capturing the diverse and rich cultures across the world, this paper introduces a novel approach for massively multicultural knowledge acquisition. Specifically, our method strategically navigates from densely informative Wikipedia documents on cultural topics to an extensive network of linked pages. Leveraging this valuable source of data collection, we construct the CultureAtlas dataset, which covers a wide range of sub-country-level geographical regions and ethnolinguistic groups, with data cleaning and preprocessing to ensure that each textual assertion is a self-contained sentence, as well as fine-grained cultural profile information extraction. Our dataset not only facilitates the evaluation of language model performance in culturally diverse contexts but also serves as a foundational tool for the development of culturally sensitive and aware language models. Our work marks an important step towards understanding and bridging cultural disparities in AI, promoting a more inclusive and balanced representation of global cultures in the digital domain. (Our code will be released at https://github.com/yrf1/LLM-MassiveMulticultureNormsKnowledge-NCLB.)

## 1 Introduction

Pretrained large language models (LMs) are increasingly used in applications across diverse domains, ranging from question answering Brown et al. (2020); Gangi Reddy et al. (2022) and chatbots Lin et al. (2020); Ouyang et al. (2022) to content recommendation Wu et al. (2020) and norm violation detection Fung et al. (2023). However, as their usage proliferates, an important concern emerges – the potential for cultural bias and misappropriation Hershcovich et al. (2022); Palta and Rudinger (2023); Li et al. (2023b). LMs, when not equipped with geo-diverse knowledge and cultural sensitivity, can exhibit significant disparities in performance when faced with textual data from different regions. This imbalance disadvantages certain user groups and exacerbates existing biases in NLP applications, perpetuating the presently predominant Western-centric perspectives and knowledge bases. Hence, tackling this issue is paramount not only for promoting fairness and inclusivity, but also for fostering a more culturally aware and harmonious digital landscape.

Figure 1: An example illustration of different cultural practices, with regard to Chinese New Year red envelopes across different geographical regions such as Beijing/Shanghai versus HK/Macau/Guangdong.

Figure 2: An overarching view of our CultureAtlas benchmark construction process.

Previous approaches to benchmarking and improving the cross-cultural knowledge of language models tend to either 1) focus on a predefined, narrow set of coarse-grained cultures and cultural topics Yin et al. (2022), or 2) discover cultural knowledge from large noisy corpora Nguyen et al.
(2023), in which important cultural elements often get filtered out in the data processing stage or lost as cultural differences get intermingled, leading to a scenario where LMs fail to learn information specific to individual subregions.

Our goal is to investigate and set the grounds for empowering language models with reasoning capabilities over finer-grained cultural nuances that pertain to different cultural subgroups and deeper topic coverage. For example, as depicted in Figure 1, Chinese New Year red envelope gifting practices vary by geographical subregion within the same country, which may potentially lead to norm violations for people newly settling into an area who are unaware of common cultural practices that differ by region or recipient group.

In this work, we propose a culture knowledge acquisition process for constructing a novel benchmark for assessing language models’ massively multicultural reasoning capabilities. In particular, our culture knowledge acquisition process seeks to combine the best of both worlds between bottom-up discovery of culture knowledge from open web documents (relatively noisy but large-scale data) and top-down discovery of culture knowledge from targeted topic guidance (relatively clean but limited data). We start from Wikipedia documents as our source of data, chosen for their clean nature, as their contents are subject to public audits and back-and-forth information edits that strip away controversies until common ground is reached. Specifically, we include the documents for each country that revolve around an initial set of cultural topics, including education, dating/marriage, and holiday customs, among others. We then continue to expand the relevant document sets based on the linked topic pages within (e.g., "Chinese culture"->"Chinese Holidays"->"Chinese New Year Holiday"->"Red Envelope"), and consider the sentences in the documents that a pretrained LM categorizes as generalizable social or cultural norms, rather than instance-specific history or facts, to be the positive samples of cultural knowledge in our dataset. We also propose an effective method to construct negative (i.e., non-factual) cultural knowledge samples, cross-validated through web search, for the purpose of probing language model cultural reasoning robustness. Furthermore, we perform information extraction on these positive and negative cultural knowledge samples to derive fine-grained cultural profile fields, including sub-country geographical regions, ethno-linguistic identity, demographics, etc., enabling deeper analysis of situationalized socio-cultural context frames.

Our contributions can be summarized as follows:

* • We present a novel data collection process to construct a massively multicultural knowledge benchmarking dataset, CultureAtlas, spanning significantly greater diversity in coverage of sub-country-level geographical regions, ethno-linguistic groups, etc., compared to prior benchmarks in the multi-cultural NLP domain Yin et al. (2022); Nguyen et al. (2023); Ziems et al. (2023).
* • This dataset comprises high-quality positive and negative data samples, with a $90^{+}\%$ pass rate in the data quality check via human assessment.
* • Finally, we evaluate the performance of state-of-the-art foundation language models on CultureAtlas, and demonstrate this new dataset as a useful resource for identifying room for improvement in LM cultural awareness and debiasing.
## 2 Benchmark Construction

### 2.1 Data Collection and Data Preprocessing

Constructing a benchmark for assessing the fine-grained cultural knowledge of language models is the first and most important step towards enabling language models to be trained to incorporate better cultural knowledge downstream via fine-tuning. To achieve this goal, we construct a novel dataset, CultureAtlas, by collecting positive and negative samples of cultural knowledge assertions that span diverse geographical subregions and ethnolinguistic groups, with subsequent data processing as illustrated in Fig 2.

##### Positive Data Samples

We start from the observation that cultural webpages from publicly monitored, human-curated sources, such as Wikipedia, generally contain clean and commonly accepted cultural assertions. These cultural sources are dense in information and, while not yet entirely comprehensive, serve as a valuable initial source of information and a seed for further expansion. First, we consider the set of documents from all countries worldwide, systematically exploring culturally relevant topics (e.g., culture, holidays, dining etiquette, dating and marriage, education, honorifics, etc.) for each country, using the Wikipedia API tool (https://pypi.org/project/Wikipedia-API/) to match and download corresponding Wikipedia pages. We expand this target set of documents by further including the hyperlinked document pages up to two hops down (a sketch of this collection step is given after the field list below). In addition to incorporating the default English documents of these pages, we also include their document versions in the main language corresponding to the culture in discussion, and translate the text content into English. This ensures a more well-rounded understanding of cultural nuances and perspectives, as complementary information exists across language versions for the same document topic. Next, we process the corpus sentence by sentence, first filtering out sentences that focus on very specific, socio-culturally non-generalizable events or instances (see Appendix A). We refine these sentences into self-contained cultural knowledge assertions by eliminating ambiguous pronoun or phrase references and enriching each sentence with any necessary information from the preceding context in the same paragraph. Together, these steps constitute the initial discovery of cultural knowledge assertions in CultureAtlas.

Because each unique combination of population dimensions – ethnicity, language, location, demographic background, etc. – plays a key role in shaping distinct cultural practices, we proceed to better discern subtle situational differences in norms across various cultures by extracting cultural knowledge frames for each remaining sentence, with a fine-grained profiling approach covering the following information fields:

* • country location
* • sub-country regional location – cities, states, and provinces under the GeoNames knowledge base (https://www.geonames.org/).
* • ethnicity – ethno-linguistic groups from the ISO 639-3 code table (https://iso639-3.sil.org/code/oci).
* • religion – all religious groups and denominations with a population of 1 million followers or more
* • age – {infant, young children, teenager, young adult, adult, elderly}
* • gender – {male, female, other}
* • marital status – {single, engaged, married, divorced, widowed}
* • occupation – open-domain fill-in-the-blank for the age/gender/marital status/occupation of any person entities involved in the norms.
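A minimal sketch of the document-collection step referenced above, using the Wikipedia-API package named in the text. The seed topics shown are illustrative, not the paper's actual per-country seed list, and the `Wikipedia` constructor signature varies slightly across package versions.

```python
import wikipediaapi

# Recent versions of the package require a user-agent string (assumption:
# older versions accept Wikipedia("en") instead).
wiki = wikipediaapi.Wikipedia(user_agent="CultureAtlas-sketch", language="en")

def collect_pages(seed_titles, hops=2):
    """Collect seed pages plus hyperlinked topic pages up to `hops` hops down."""
    seen = {}
    frontier = list(seed_titles)
    for _ in range(hops + 1):
        next_frontier = []
        for title in frontier:
            if title in seen:
                continue
            page = wiki.page(title)
            if not page.exists():
                continue
            seen[title] = page.text                  # raw document text
            next_frontier.extend(page.links.keys())  # linked topic pages
        frontier = next_frontier
    return seen

# Illustrative seeds; the real collection iterates per country and topic.
docs = collect_pages(["Chinese culture", "Chinese New Year", "Red envelope"])
```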
As depicted in Fig 2(A), language model prompting is utilized to extract the values for these fields automatically, via a directed question such as "[norm] \n Which gender group is mentioned or implied in the sentence (male, female, transgender, other, or N/A):", with further details shown in Appendix B. Note that information that is unknown or not mentioned is regarded as "N/A".

|  | CANDLE (Nguyen et al., 2023) | GeoMLAMA (Yin et al., 2022) | NormsKB (Fung et al., 2023) | NormBank (Ziems et al., 2023) | CultureAtlas (Ours) |
|---|---|---|---|---|---|
| # countries | 176 | 5 | 5 | 160 | 193 |
| # local regions (state/province-level) | 298 | 0 | 12 | 102 | 1,089 |
| # local regions (city-level) | 1,376 | 0 | 15 | 493 | 10,436 |
| # religion | 14 | 0 | 3 | 30 | 42 |
| # ethnolinguistic groups | 145 | 5 | 10 | 551 | 2,557 |
| fine-grained norm framing | x | x | ✓ | ✓ | ✓ |
| multi-ling. data source | x | x | x | x | ✓ |

Table 1: Our data collection of cultural norms contains greater coverage of local regions and ethno-linguistic groups compared to previous work. It also involves data from multi-lingual sources as well as fine-grained cultural knowledge frame extraction.

| Original Cultural Knowledge | Negative Cultural Knowledge Generated |
|---|---|
| During the Chinese New Year, in Southern China, red envelopes are typically given by the married to the unmarried, most of whom are children. | In China, it is customary for students to present their teachers with red envelopes containing handwritten notes of gratitude at the end of each school term, symbolizing respect and appreciation for their guidance. |
| In Bhutan culture for special occasions and festivals, colourfully patterned silk kira and, more rarely, gho may be worn. | In Bhutan, there is a unique tradition of wearing "Khyenkhor Robes", woven with threads infused with blessings from Buddhist monks, during special ceremonies and festivals. |

Table 2: Original positive data samples and their synthesized negative counterparts in our CultureAtlas benchmark construction.

##### Negative Data Samples

In order to evaluate LM cultural knowledge, we prepare the data setting with negative norm synthesis, as illustrated in Fig 2(C). We take a pristine original norm assertion and manipulate it through LLM prompting for adversarial knowledge via the template: "[orig. norm] \n Based on this topic, come up with norms that are not true:". To ensure the negative norm generation is indeed a non-factual fabrication, we perform automatic verification, which is easy to scale. Specifically, we make use of a language model self-check mechanism, asking the question "[norm] \n Is this absurd and/or very hard to believe? ‘Yes’ or ‘No’:" to GPT4, to filter out negative sample candidates that are obviously absurd to believe. In particular, our motivation for leveraging GPT4 is that it stands as the most advanced LM backbone, with a notable performance gap for open-source or other proprietary LMs to bridge, which makes our benchmark's negative samples especially valuable. Subsequently, we follow with a web-check mechanism – retrieving relevant web context and ensuring no entailment is found (a sketch of this generate-then-verify pipeline is given below). Examples of negative data samples, along with their original positive forms, are visualized in Tab 2.
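The generate-then-verify pipeline above can be sketched as follows. `llm` is a hypothetical stand-in for a GPT-4-style chat call and `web_entails` is only a stub for the web-retrieval entailment check; neither is a real API binding, and the prompt templates are the ones quoted in the text.

```python
def llm(prompt):
    """Hypothetical stand-in for a GPT-4-style chat completion call."""
    raise NotImplementedError

def web_entails(candidate):
    """Stub: retrieve relevant web context and test for entailment (Sec 2.1)."""
    raise NotImplementedError

def synthesize_negative(norm):
    # Step 1: adversarial generation from the original norm (template from Sec 2.1).
    candidate = llm(norm + "\nBased on this topic, come up with norms that are not true:")
    # Step 2: LM self-check -- drop candidates that are obviously absurd.
    verdict = llm(candidate + "\nIs this absurd and/or very hard to believe? 'Yes' or 'No':")
    if verdict.strip().lower().startswith("yes"):
        return None
    # Step 3: web check -- keep the candidate only if no entailment is found.
    if web_entails(candidate):
        return None
    return candidate
```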
### 2.2 Quality Check on Data

To ensure data quality, we perform manual assessment of the CultureAtlas dataset construction. Specifically, we take 10 random samples for each intermediary data processing step of the positive and negative data, respectively, and ask five human judges familiar with the subject matter (e.g., self-identifying with geographical regions and cultural subgroups across the US, China, Korea, and India) to determine whether each data sample is indeed a correct cultural knowledge assertion when its ground-truth label (based on our expectation from the Sec 2.1 procedure) is "TRUE", or an incorrect cultural knowledge assertion when its label is "FALSE". In our quality check assessment guidelines, we clarify examples of poor positive samples, such as those with ambiguous pronoun references or lacking culture-specificity in non-universal norms, as well as examples of poor negative samples, such as those contradicting known norms. As the qualitative results of our dataset construction reported in Tab 3 show, the final post-processed positive and negative samples are high-quality, achieving a $90^{+}\%$ pass rate. The inter-annotator agreement is 0.79.

| Approach | Pass Rate (%) |
|---|---|
| Pos. Data – Orig. Sent. | 49.5 |
| Pos. Data – Post-Proc. Sent. | 93.2 |
| Neg. Data – Direct Gen. | 81.1 |
| Neg. Data – Direct Gen. w/ self-check | 90.1 |
| Neg. Data – Direct Gen. w/ self-check & web check | 92.0 |

Table 3: The average pass rate for CultureAtlas dataset samples at each processing step, based on human validation of post-processed cultural knowledge assertions.

### 2.3 Descriptive Stats

As shown in Tab 1, our dataset covers $1,089$ state- or province-level regions, $10,436$ city-level regions, and $2,557$ ethnolinguistic groups, significantly exceeding prior work Nguyen et al. (2023); Yin et al. (2022); Fung et al. (2023) in the cultural-knowledge-for-NLP domain. Due to page restrictions, we provide detailed information on the # of subregion-specific cultural knowledge frames per country in Tab 9 of Appendix B, and on the # of cultural knowledge frames per ethnolinguistic group in Tab 10 of Appendix B. In particular, as an example, all 56 official ethnic groups of China (e.g., Han, Zhuang, Hui), as well as linguistically distinct ethnic subgroups (e.g., Yue and Hakka of the Chinese Han population), are included in CultureAtlas.

##### Training-Testing Data Split

Tab 4 summarizes the number of culturally related document pages scraped and sentences parsed, as well as the cultural knowledge assertion sentences and frame extractions remaining after norm-relevance filtering. We partition 10,000 random samples of cultural knowledge assertions that are particularly relevant for avoiding norm violations to constitute the test set (a sketch of this split is given after Table 4). All other data are provided for future language model training and development purposes.

| Statistic | Count |
|---|---|
| # of doc pages | 41k |
| # of sent parsed | 907k |
| # of sent, generalizable sociocultural knowledge | 127k |
| # of sent, norm violation relevant w/ frame extract. | 21k |

Table 4: Size and scale of our collected dataset, where ‘k’ denotes thousands of data samples.
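A minimal sketch of the train/test partition just described; the fixed random seed is our addition for reproducibility and is not specified by the paper.

```python
import random

def split_dataset(assertions, test_size=10_000, seed=0):
    """Partition assertions into (train, test): 10k random norm-violation-
    relevant samples form the test set, the rest is kept for training."""
    pool = list(assertions)
    random.Random(seed).shuffle(pool)   # seed is an assumption, for reproducibility
    return pool[test_size:], pool[:test_size]
```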
| Model | Size | Variant | All Culture (P / R / F) | High Resource (P / R / F) | Mid Resource (P / R / F) | Low Resource (P / R / F) |
|---|---|---|---|---|---|---|
| Llama2 | 7B | chat | 84.2 / 42.1 / 56.1 | 86.8 / 45.6 / 59.8 | 83.3 / 42.9 / 56.6 | 87.0 / 20.7 / 33.5 |
| Llama2 | 7B | chat-HF | 75.1 / 28.2 / 41.0 | 76.9 / 26.9 / 39.9 | 83.3 / 28.0 / 41.9 | 78.9 / 26.2 / 39.2 |
| Llama2 | 13B | chat | 63.6 / 77.1 / 69.7 | 56.1 / 80.9 / 66.3 | 64.1 / 75.5 / 69.3 | 53.3 / 20.5 / 29.6 |
| Llama2 | 13B | chat-HF | 89.9 / 20.0 / 32.7 | 91.8 / 91.8 / 28.3 | 91.8 / 20.6 / 33.6 | 92.2 / 19.3 / 31.9 |
| Vicuna | 7B | chat-HF | 79.6 / 56.8 / 66.3 | 77.3 / 47.2 / 58.6 | 79.4 / 57.9 / 67.0 | 81.3 / 55.7 / 66.1 |
| Vicuna | 13B | chat-HF | 67.4 / 81.2 / 73.7 | 68.9 / 81.0 / 74.5 | 69.4 / 82.4 / 75.3 | 67.8 / 82.3 / 74.3 |
| ChatGPT | 20B | chat-HF | 95.8 / 90.6 / 93.1 | 95.9 / 91.4 / 93.6 | 94.9 / 92.1 / 93.5 | 94.1 / 90.1 / 92.1 |

Table 5: Experimental results benchmarking state-of-the-art foundation large language model performance on the new CultureAtlas cultural knowledge assessment benchmark. Precision (P), recall (R), and F-score (F) are reported.

## 3 Experiments

### 3.1 Task Setting

We evaluated the cultural knowledge and reasoning capability of state-of-the-art pretrained large language models (LLMs) on the canonical norm descriptions in our constructed benchmark, through a true-or-false binary classification setup. As a reminder, the derivation of ground-truth labels for "correct" and "incorrect" cultural knowledge assertion samples has been detailed under Sec 2.1 (the "Positive Data Samples" and "Negative Data Samples" paragraphs).

### 3.2 Model Setup

For the choice of language model in our experiments, we consider the commonly used present-day open-source language model backbones:

* • Llama-2 Touvron et al. (2023b) – an open-source foundation model from Meta that is trained on 40% more data compared to its predecessor, LLaMA Touvron et al. (2023a). We consider its -7b and -13b parameter-size variants. For each backbone size, we further include its -chat variant, denoting fine-tuning with dialogue data, and its -hf variant, denoting training with human preference alignment data.
* • Vicuna Chiang et al. (2023) – an open-source foundation model trained by fine-tuning LLaMA-2 on 700k samples of user conversations with ChatGPT collected from the ShareGPT website (https://sharegpt.com/). This model series is optimized for efficiency through gradient checkpointing and flash attention. We include its -7b and -13b parameter-size variants.

We also consider proprietary closed-source language model backbones, which may generally tend to have higher performance compared to publicly available open-source model checkpoints but make research development and transparency challenging:

* • ChatGPT Ouyang et al. (2022) – a closed-source foundation model from OpenAI, trained with alignment data.
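Before turning to the results, a minimal sketch of the true-or-false probe and its scoring. The exact prompt wording is our assumption (the paper does not specify its evaluation template), and `llm` is a hypothetical callable wrapping any of the backbones above; P/R/F are computed on the positive ("TRUE") class, as reported in Table 5.

```python
def predict_true_false(llm, assertion):
    """Zero-shot true/false probe; the prompt wording here is assumed."""
    answer = llm(assertion + "\nIs this cultural knowledge assertion true "
                 "or false? Answer 'True' or 'False':")
    return answer.strip().lower().startswith("true")

def precision_recall_f(predictions, labels):
    """Compute precision, recall, and F-score on the positive class."""
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f
```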
### 3.3 Results

Table 5 shows the results of our LM benchmarking. In particular, we notice several interesting observations. First, we find that among the open-source models, Vicuna consistently performs better than Llama2 in cultural knowledge when comparing across the same model backbone sizes. This indicates that the training approach (e.g., choice of training data, optimization objective, etc.) plays a key role in the cultural knowledge reasoning capability of these LLMs.

Secondly, we find that pre-existing explicit human feedback (HF) alignment approaches do not necessarily improve model performance in massively multicultural fine-grained reasoning domains, potentially due to undesirable domain shift and catastrophic forgetting. This reaffirms the value of our new benchmark proposal for continuously measuring cultural awareness progress in future language model development. We also observe a general positive correlation between model performance in culturally aware inference and model parameter size, as expected.

In addition, we investigate LLM culture reasoning patterns across resource availability and topic domains. As shown in Table 5, we investigated LM awareness of the cultural knowledge pertaining to country-level “high-resource” (e.g., US/China/France/Spain/Japan), “mid-resource” (e.g., Türkiye/Egypt/Iran/Malaysia/Argentina), and “low-resource” (e.g., Lao/Bhutan/Congo/Serbia) culture groups, as categorized by society-wide economic development, which in turn affects the availability of the linguistic resources constituting LM training data. Moreover, LM performance tends to differ across cultural topics, demonstrating higher performance, for example, in “education” and “holiday” practices over “clothing” and “cuisine” practices, as shown in Fig 3. This may potentially be due to the diverse finer-grained domain-specific and region-specific information elements typically involved in “clothing” and “cuisine” topic discussions, whereas “education” and “holiday” practices may tend to be more universal.

Finally, while our research community lacks specific training details of the closed-source models (e.g., ChatGPT), we believe that by including them in our benchmark comparison, we can help shed light on the performance gap between open-source pretrained LLMs and these closed-source models, to better bridge this performance difference in future work.

Figure 3: LLM performance by topic (results from a single LM backbone, Llama2-7B).

|  | Norm Violation Relevant Culture Knowledge Assertion Samples |
|---|---|
| True Positive | • In Indian culture, when eating rice, it is mixed with curry, picking up small quantities with the fingers and pushing it into the mouth with the thumb. • In Bhutan culture, for special occasions and festivals, colourfully patterned silk kira and, more rarely, gho may be worn. |
| False Positive | • The American flag protocol dictates that the flag should be flown on all buildings, both public and private, as a sign of respect and loyalty to the nation. • In Rioplatense Spanish, the pronoun "vos" is commonly used to address strangers and authority figures, while it is considered impolite to use it with friends, family members, and close acquaintances. • South Korea and North Korea have different vocabulary standards because they consider their languages to be completely distinct and unrelated to each other. • In Chinese culture, particularly in Macao, it is believed that giving money in amounts that include the number four brings good luck and prosperity. • The society in Kuwait City is highly strict about traditions in the Gulf Arab region. |
| False Negative | • In Catalonia, there is a strong social and political consensus on the language policies that support the use of Catalan. • In Botswana, it is customary to use the salutation ’kgosi’ before the name of a chief, providing an important cultural and social distinction. • In Barbados, people drive on the left side of the road, similar to the driving habits in the United Kingdom due to their history as a former British colony. • The practice of inserting Mandarin into conversations in Shanghainese is very common, especially among young people. • Personal pronouns in Shanghainese do not distinguish gender or case. • In Argentina culture, hot but not boiling water is poured into the gourd, drunk, then the mate is refilled. |
| True Negative | • In Indian culture, when eating rice, people commonly mix it with chutney, a flavorful condiment made from a variety of ingredients such as fruits, vegetables, herbs, and spices. • In Malay culture, people commonly greet each other with the phrase "Khabar", which roughly translates to "what’s up" or "how are you" in English. |

Table 6: Qualitative error analysis on LM culture knowledge reasoning capability (of Vicuna-13B) in zero-shot true/false inference.

#### 3.3.1 Ablation on the Challenging Setting Brought by Fine-Grained Cultural Knowledge Frame Profiling

In this subsection, we further investigate the potential limitations of existing pretrained large language models (LLMs) in understanding subtle cultural nuances within different situational contexts in our massively multicultural task domain. Our intuition is that pretrained LLMs may generally lack finer-grained knowledge of the cultural nuances pertaining to subtle situational differences with respect to cultural frame profiling. To verify this hypothesis through empirical study, for each culture frame dimension, such as sub-country geo-region, ethnicity, age, gender, etc., we first isolate a subset of the CultureAtlas evaluation data with cultural knowledge assertions that apply generally across this dimension, which we refer to as “Gen”, and another subset with cultural knowledge assertions that apply specifically to a certain bucket/criterion along this dimension, which we refer to as “Spec” (a sketch of this split is given after Table 7). Then, we cross-compare LLM performance patterns under the general (“Gen”) versus specific (“Spec”) data scenarios along each of these cultural frame profiling dimensions.

Indeed, we find that fine-grained cultural commonsense knowledge is an area where there remains considerable room for improvement for LLMs. As shown in Table 7 (results from a single LM backbone, Llama2-7B), the zero-shot true/false inference capability of the LLM drops significantly as we probe finer-grained cultural information.

|  | Spec. | Gen |
|---|---|---|
| Sub-Country Location | 22.1 | 35.0 |
| Ethnicity | 27.5 | 35.5 |
| Religion | 17.9 | 35.0 |
| Marital Status | 13.7 | 33.6 |
| Occupation | 27.3 | 35.0 |

Table 7: F-score performance comparison, in %, between general (‘Gen’) and fine-grained cultural-profile-specific (‘Spec’) knowledge, such as between country-level and province-level cultural groups.
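A minimal sketch of the “Gen” vs. “Spec” partition described above; it assumes each evaluation sample carries its extracted profile frame as a dict with field names as in Sec 2.1 (unspecified fields marked "N/A").

```python
def gen_spec_split(samples, field):
    """Split evaluation samples by a cultural-profile field (Sec 3.3.1):
    'Gen' = field unspecified ('N/A'); 'Spec' = field takes a specific value."""
    gen = [s for s in samples if s.get(field, "N/A") == "N/A"]
    spec = [s for s in samples if s.get(field, "N/A") != "N/A"]
    return gen, spec

# Example usage: gen, spec = gen_spec_split(eval_samples, "ethnicity")
```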
#### 3.3.2 Error Analysis

Tab 6 visualizes results qualitatively, shedding light on the challenges for an off-the-shelf pretrained language model (LM) in accurately reasoning about cultural practices across different societies. While a LM may correctly grasp certain cultural practices, such as properly recognizing traditional ways of eating rice in Indian culture and the ceremonial dress in Bhutan for special occasions, as well as accurately recognizing misconceptions in negative samples on mixing rice with chutney in Indian culture or common greetings in Malay, it demonstrates a lack of cultural knowledge and reasoning robustness on other, less well-represented topics and geographical regions. For example, the model also produced false positives, such as the exaggerated protocol around the American flag, suggesting misunderstandings of cultural norms. False negatives, such as the underappreciated practice of drinking mate in Argentina, point to the model’s oversight of genuine cultural customs. Overall, the error analysis reveals the inconsistent performance of LMs in capturing the breadth and depth of the different cultural knowledge in the world around us, revealing a significant area for improvement in LM cultural commonsense reasoning.

## 4 Related Work

##### Importance of Cultural Knowledge in NLP Tasks

While large language models have generally embedded large parametric knowledge from large text corpora during the pretraining stage Petroni et al. (2019), these models are also typically imposed with normative bias due to imbalanced representation at the data source Emelin and Sennrich (2021); Arora et al. (2022). Cultural knowledge is integral to the success of large language model (LLM) reasoning in a wide array of downstream applications. For example, there have been recent explorations of the vital role cultural knowledge plays in helping answer commonsense questions Palta and Rudinger (2023); Yin et al. (2022), understand societal moral conventions Ramezani and Xu (2023); Emelin et al. (2021), analyze and mitigate social biases Sap et al. (2020); Yang et al. (2023), detect norm violations Fung et al. (2023); Li et al. (2023a), correct conversational dialogues Ziems et al. (2022), and ultimately tune LLMs to align with the helpful and harmless principles of constitutional AI Bai et al. (2022). Of ongoing interest to the NLP community is scrutinizing LMs on social minority understanding Sun et al. (2023); it turns out that PLMs can learn norms diverging from the social majority only when they are fine-tuned accordingly, due to the presence of normative bias Kiehne et al. (2022).

##### Cultural Knowledge Acquisition for Improved LLM Training

Hershcovich et al. (2022) explains culture as a concept of identity that can be examined from the dimensions of objectives and values, linguistic form and style (e.g., honorific reference terms when addressing a person), and common ground (e.g., socio-cultural norms, shared event occurrences, etc.). Our cultural knowledge acquisition process follows this theory of cultural definition and covers the dimensions of culture outlined above. In terms of the genre of data source for cultural knowledge discovery in practice, cultural knowledge has been predominantly gleaned from conversational dialogues Fung et al. (2023) or web sources such as Reddit/Zhihu discussion forums Forbes et al. (2020); CH-Wang et al. (2023) and the Common Crawl Nguyen et al. (2023) – both of which tend to be relatively sparse in culturally relevant information and also noisy – or through elicitation of LLM parametric knowledge via prompting Ziems et al.
(2023), but this may be limited in the scope of information that can be extracted, due to the stochastic parrot property observed in LLMs when a cultural topic falls outside an LLM’s pretrained knowledge boundary.

##### Multilingual LM Reasoning and Implicit Multicultural Knowledge

Previous research Jiang et al. (2020); Clark et al. (2020) on language models has demonstrated a strong capability to perform reasoning in multilingual settings, which is an initial step towards overcoming cultural barriers. The progression of research includes extending LM reasoning to low-resource language settings, such as for name tagging & knowledge base linking Pan et al. (2017); Wen et al. (2021) through annotation transfer, which lays a promising foundation for reasoning across linguistic groups. However, existing language models struggle with cultural bias, primarily due to a lack of awareness of implicit multicultural knowledge. Recent studies have highlighted these issues; for instance, Havaldar et al. (2023) identified the models’ underperformance in recognizing cultural variations in specific phenomena, such as emotion detection across different countries. Another category of recent work has focused on evaluating performance on underrepresented languages, where Deas et al. (2023) revealed biases against African American languages, leading to overlooked race-related issues in speech recognition and toxicity detection tasks. These findings underline the necessity of developing a new framework capable of acquiring cultural knowledge, aimed at addressing the cultural imbalances present in existing datasets used for training language models. Instead of stylistic linguistic conveyance, our work focuses on cultural knowledge acquisition based on fine-grained semantic variations, sourced from 500+ geo-subregions and 2000+ ethnolinguistic groups.

## 5 Conclusions and Future Work

In this work, we explored an exciting area of massively multicultural knowledge probing of language model backbones, which has important impacts on norm violation detection and mitigation in assisting human-human interaction across the many different subregions and ethnolinguistic groups around the world. We proposed a novel method for large-scale data collection across curated data sources with web-retrieval enhancement, which we verified with quality checks. Leveraging this constructed dataset, we benchmarked the fine-grained cultural knowledge of popular language models. We foresee our work contributing a useful resource and LM evaluation setting for the NLP community to develop more culturally inclusive foundation models in the future. Finally, for future research directions, we will also investigate the effect that multimedia settings (e.g., vision and language) and multilingual (e.g., low-resource) settings have on foundational cultural reasoning across different fine-grained sub-cultures.

## Ethical Considerations & Broader Impact

Our work is dedicated to advancing language model (LM) development practices to more accurately reflect the diverse social practices and cultural customs present in human societies across the world. This effort serves to enable improvements in LM norm violation reasoning capabilities and, at a higher level, re-emphasizes the importance of overarching principles for human-model alignment.
In formulating the ethical framework for our research, we carefully consider the key dimensions inherent to our work, to ensure that the modeling outcomes are in line with ethical standards and societal expectations. We introduce a novel normative framework aimed at enhancing the detection of incorrect cultural knowledge in future models. Our experimental setup incorporates commonsense reasoning, grounding the model’s cultural knowledge in alignment with socially situated human preferences.

We acknowledge limitations in our current work. The ground-truth labels (i.e., "positive" and "negative" data samples) that we utilize for LM cultural knowledge benchmark evaluation are automatically derived, due to scalability advantages. Our constructed dataset has been manually assessed and determined to be high-quality, with a $90^{+}\%$ pass rate in the data sample quality check. In other words, cultural knowledge assertions labeled "positive" are indeed rated as true samples by human assessors, while cultural knowledge assertions labeled "negative" are, for the large majority, rated as non-factual samples. If time and manual labor cost were ample, it would be meaningful to prepare human-curated cultural knowledge assertions and investigate any potential differences as a future research direction. However, we also point out that it is extremely challenging to find competent human annotators covering the different geographical subregions and ethno-linguistic groups at a massive scale. Many crowdsourcing platforms for data annotation are skewed towards certain demographic subgroups, such as the primarily American-based or Western-centric user base of the Amazon MTurk service. Hence, our work serves as an extremely valuable first step towards preparing a meaningful benchmark for massively multicultural fine-grained LM knowledge reasoning, based on publicly audited Wikipedia document sources and additional data preprocessing procedures.

An essential element of our ethical framework is ensuring balanced representation across cultural groups, with a special focus on including perspectives from low-resource settings. Due to current design and imbalanced training data, LLMs may lack the capability to fully understand social norms and may include biases across different countries, suffering from normative bias Emelin and Sennrich (2021); Arora et al. (2022). We commit to the ongoing journey of cultural knowledge enhancement and normative correction. In particular, we aim not only to address issues of fairness and equity, but also to enrich our NLP models by considering a broader range of experiences and cultural knowledge. By adhering to cultural definitions, we can build cultural knowledge systems that are inclusive and reflective of the global community. Our approach paves the path for mitigating the risks of subtle cultural bias in language model training, and fosters understanding across fine-grained cultural divides.

## Acknowledgement

This research is based upon work supported by U.S. DARPA CCU Program No. HR001122C0034. The opinions, views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.

## References

* Arora et al. (2022) Kushal Arora, Layla El Asri, Hareesh Bahuleyan, and Jackie Cheung. 2022.
Why exposure bias matters: An imitation learning perspective of error accumulation in language generation. In _Findings of the Association for Computational Linguistics: ACL 2022_ , pages 700–710, Dublin, Ireland. Association for Computational Linguistics. * Bai et al. (2022) Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. _arXiv preprint arXiv:2204.05862_. * Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901. * CH-Wang et al. (2023) Sky CH-Wang, Arkadiy Saakyan, Oliver Li, Zhou Yu, and Smaranda Muresan. 2023. Sociocultural norm similarities and differences via situational alignment and explainable textual entailment. _arXiv preprint arXiv:2305.14492_. * Chiang et al. (2023) Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. * Clark et al. (2020) Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. Tydi QA: A benchmark for information-seeking question answering in typologically diverse languages. _CoRR_ , abs/2003.05002. * Deas et al. (2023) Nicholas Deas, Jessica Grieser, Shana Kleiner, Desmond Patton, Elsbeth Turcan, and Kathleen McKeown. 2023. Evaluation of African American language bias in natural language generation. In _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing_ , pages 6805–6824, Singapore. Association for Computational Linguistics. * Emelin et al. (2021) Denis Emelin, Ronan Le Bras, Jena D. Hwang, Maxwell Forbes, and Yejin Choi. 2021. Moral stories: Situated reasoning about norms, intents, actions, and their consequences. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 698–718, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. * Emelin and Sennrich (2021) Denis Emelin and Rico Sennrich. 2021. Wino-X: Multilingual Winograd schemas for commonsense reasoning and coreference resolution. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 8517–8532, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. * Forbes et al. (2020) Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 653–670, Online. Association for Computational Linguistics. * Fung et al. (2023) Yi Fung, Tuhin Chakrabarty, Hao Guo, Owen Rambow, Smaranda Muresan, and Heng Ji. 2023. NORMSAGE: Multi-lingual multi-cultural norm discovery from conversations on-the-fly. In _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing_ , pages 15217–15230, Singapore. Association for Computational Linguistics. * Gangi Reddy et al. (2022) Revanth Gangi Reddy, Sai Chetan Chinthakindi, Yi R. 
Fung, Kevin Small, and Heng Ji. 2022. A zero-shot claim detection framework using question answering. In _Proceedings of the 29th International Conference on Computational Linguistics_ , pages 6927–6933, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. * Havaldar et al. (2023) Shreya Havaldar, Sunny Rai, Bhumika Singhal, Langchen Liu, Sharath Chandra Guntuku, and Lyle Ungar. 2023. Multilingual language models are not multicultural: A case study in emotion. * Hershcovich et al. (2022) Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, and Anders Søgaard. 2022. Challenges and strategies in cross-cultural NLP. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 6997–7013, Dublin, Ireland. Association for Computational Linguistics. * Jiang et al. (2020) Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, and Graham Neubig. 2020. X-FACTR: Multilingual factual knowledge retrieval from pretrained language models. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 5943–5959, Online. Association for Computational Linguistics. * Kiehne et al. (2022) Niklas Kiehne, Hermann Kroll, and Wolf-Tilo Balke. 2022. Contextualizing language models for norms diverging from social majority. In _Findings of the Association for Computational Linguistics: EMNLP 2022_ , pages 4620–4633, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. * Li et al. (2023a) Oliver Li, Mallika Subramanian, Arkadiy Saakyan, Sky CH-Wang, and Smaranda Muresan. 2023a. NormDial: A comparable bilingual synthetic dialog dataset for modeling social norm adherence and violation. In _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing_ , pages 15732–15744, Singapore. Association for Computational Linguistics. * Li et al. (2023b) Sha Li, Chi Han, Pengfei Yu, Carl Edwards, Manling Li, Xingyao Wang, Yi Fung, Charles Yu, Joel Tetreault, Eduard Hovy, and Heng Ji. 2023b. Defining a new NLP playground. In _Findings of the Association for Computational Linguistics: EMNLP 2023_ , pages 11932–11951, Singapore. Association for Computational Linguistics. * Lin et al. (2020) Zhaojiang Lin, Peng Xu, Genta Indra Winata, Farhad Bin Siddique, Zihan Liu, Jamin Shin, and Pascale Fung. 2020. Caire: An end-to-end empathetic chatbot. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pages 13622–13623. * Nguyen et al. (2023) Tuan-Phong Nguyen, Simon Razniewski, Aparna Varde, and Gerhard Weikum. 2023. Extracting cultural commonsense knowledge at scale. In _Proceedings of the ACM Web Conference 2023_ , WWW ’23, page 1907–1917, New York, NY, USA. Association for Computing Machinery. * Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_ , 35:27730–27744. * Palta and Rudinger (2023) Shramay Palta and Rachel Rudinger. 2023. FORK: A bite-sized test set for probing culinary cultural biases in commonsense reasoning models. 
In _Findings of the Association for Computational Linguistics: ACL 2023_ , pages 9952–9962, Toronto, Canada. Association for Computational Linguistics. * Pan et al. (2017) Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics. * Petroni et al. (2019) Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. * Ramezani and Xu (2023) Aida Ramezani and Yang Xu. 2023. Knowledge of cultural moral norms in large language models. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 428–446, Toronto, Canada. Association for Computational Linguistics. * Sap et al. (2020) Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5477–5490, Online. Association for Computational Linguistics. * Sun et al. (2023) Chenkai Sun, Jinning Li, Yi Fung, Hou Chan, Tarek Abdelzaher, ChengXiang Zhai, and Heng Ji. 2023. Decoding the silent majority: Inducing belief augmented social graph with large language model for response forecasting. In _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing_ , pages 43–57, Singapore. Association for Computational Linguistics. * Touvron et al. (2023a) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_. * Touvron et al. (2023b) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_. * Wen et al. (2021) Haoyang Wen, Ying Lin, Tuan Lai, Xiaoman Pan, Sha Li, Xudong Lin, Ben Zhou, Manling Li, Haoyu Wang, Hongming Zhang, Xiaodong Yu, Alexander Dong, Zhenhailong Wang, Yi Fung, Piyush Mishra, Qing Lyu, Dídac Surís, Brian Chen, Susan Windisch Brown, Martha Palmer, Chris Callison-Burch, Carl Vondrick, Jiawei Han, Dan Roth, Shih-Fu Chang, and Heng Ji. 2021. RESIN: A dockerized schema-guided cross-document cross-lingual cross-media information extraction and event tracking system. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations_ , pages 133–143, Online. Association for Computational Linguistics. * Wu et al. (2020) Fangzhao Wu, Ying Qiao, Jiun-Hung Chen, Chuhan Wu, Tao Qi, Jianxun Lian, Danyang Liu, Xing Xie, Jianfeng Gao, Winnie Wu, and Ming Zhou. 2020. MIND: A large-scale dataset for news recommendation. 
In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 3597–3606, Online. Association for Computational Linguistics.
* Yang et al. (2023) Ke Yang, Charles Yu, Yi R Fung, Manling Li, and Heng Ji. 2023. ADEPT: A debiasing prompt framework. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 37, pages 10780–10788.
* Yin et al. (2022) Da Yin, Hritik Bansal, Masoud Monajatipoor, Liunian Harold Li, and Kai-Wei Chang. 2022. GeoMLAMA: Geo-diverse commonsense probing on multilingual pre-trained language models. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pages 2039–2055, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
* Ziems et al. (2023) Caleb Ziems, Jane Dwivedi-Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2023. NormBank: A knowledge bank of situational social norms. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 7756–7776, Toronto, Canada. Association for Computational Linguistics.
* Ziems et al. (2022) Caleb Ziems, Jane Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2022. The moral integrity corpus: A benchmark for ethical dialogue systems. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 3755–3773, Dublin, Ireland. Association for Computational Linguistics.

## Appendix A Characterizing Cultural Knowledge Specificity for Data Filtering

In our CultureAtlas dataset construction process, we want to focus on socioculturally relevant knowledge assertions that are not too event- or instance-specific. For example, "In 2020, China tops the QS Asia University Rankings list with over 120 universities included in the ranking, and five Chinese universities appear in the Asia Top 10, which is more than any other country." would be culturally relevant but too event-specific. To filter out such instances, we utilize the facebook/bart-large-mnli model to perform classification on each candidate sentence between the classes of "general assertion" and "specific fact or instance" (a sketch of this filter is given after Table 8). Through this approach, we found that approximately 52% of the original pristine sentences from culturally-relevant Wikipedia pages fall under the "general assertion" category, which we retain in the dataset.

## Appendix B Culture Profile Extraction Performance

In this section, we expand on the low-level details of the culture profile extraction methodology and quality-check results, leveraging prompting with various state-of-the-art pretrained large language model (LLM) backbones. Specifically, Table 8 details the prompt templates.

| Culture Profile Field | Directed Question Answering |
|---|---|
| interaction nature categorization | Is this an individual human behavioral norm or human-human behavioral norm? |
| topic distribution modeling | Is this a social norm, cultural norm, belief or ritual, history, politics, or fact? |
| country-level extraction | Which country is mentioned or implied in the sentence? Answer N/A if unknown. |
| sub-country level extraction | Which state/province/city/subcountry region is mentioned or implied in the sentence (or answer N/A): |
| ethnicity extraction | Which ethnic group is mentioned or implied in the sentence? Answer N/A if unspecified. |
| ethnic subgroup extraction | Which ethnolinguistic subgroup is mentioned or implied in the sentence? Answer N/A if unspecified. |
| age extraction | Which age group is mentioned or implied in the sentence? |
| gender extraction | Which gender group is mentioned or implied in the sentence (male, female, transgender, or N/A) |
| marital status | Which marital status is mentioned or implied in the sentence. |
| religion belief extraction | Which religious group is mentioned or implied in the sentence (or answer N/A): |
| occupation extraction | Which occupation is mentioned or implied in the sentence? Answer N/A if unspecified |

Table 8: Details on the prompt templates for cultural profile extraction.
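The Appendix A relevance filter can be sketched with the Hugging Face zero-shot classification pipeline and the facebook/bart-large-mnli checkpoint named above; taking the top-ranked label as the decision rule is our assumption.

```python
from transformers import pipeline

# Zero-shot NLI classifier used for the Appendix A relevance filter.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def is_general_assertion(sentence):
    """Keep a sentence only if it reads as a generalizable cultural assertion
    rather than an event- or instance-specific statement."""
    result = classifier(
        sentence,
        candidate_labels=["general assertion", "specific fact or instance"],
    )
    # result["labels"] is sorted by score; take the top label as the decision.
    return result["labels"][0] == "general assertion"
```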
| Country | # doc | # sent | Country | # doc | # sent | Country | # doc | # sent |
|---|---|---|---|---|---|---|---|---|
| Afghanistan | 29 | 0.2k | Georgia | 35 | 0.2k | Afghanistan | 29 | 0.2k |
| Albania | 0 | 0.0k | Germany | 301 | 6.2k | Zimbabwe | 26 | 0.3k |
| Algeria | 38 | 0.4k | Ghana | 30 | 0.3k | Zambia | 9 | 0.2k |
| Andorra | 4 | 0.0k | Greece | 98 | 1.1k | Yemen | 8 | 0.0k |
| Angola | 22 | 0.2k | Grenada | 0 | 0.0k | Viet Nam | 43 | 0.8k |
| Antigua and Barbuda | 1 | 0.0k | Guatemala | 0 | 0.0k | Venezuela | 9 | 0.1k |
| Argentina | 222 | 1.4k | Guinea | 0 | 0.0k | Vanuatu | 1 | 0.0k |
| Armenia | 118 | 0.6k | Guinea-Bissau | 17 | 0.4k | Uzbekistan | 34 | 0.2k |
| Australia | 190 | 2.8k | Guyana | 0 | 0.0k | Uruguay | 40 | 0.4k |
| Austria | 78 | 0.9k | Haiti | 0 | 0.0k | United States | 1452 | 42.4k |
| Azerbaijan | 49 | 0.4k | Honduras | 13 | 0.1k | Tanzania | 118 | 0.6k |
| Bahamas | 0 | 0.0k | Hungary | 37 | 0.3k | United Kingdom | 229 | 7.2k |
| Bahrain | 6 | 0.1k | Iceland | 0 | 0.0k | United Arab Emirates | 38 | 0.6k |
| Bangladesh | 105 | 0.8k | India | 847 | 18.4k | Ukraine | 87 | 1.3k |
| Barbados | 4 | 0.1k | Indonesia | 364 | 6.4k | Uganda | 61 | 0.2k |
| Belarus | 16 | 0.2k | Iran | 22 | 0.1k | Tuvalu | 2 | 0.0k |
| Belgium | 75 | 1.4k | Iraq | 34 | 0.3k | Turkmenistan | 12 | 0.1k |
| Belize | 2 | 0.0k | Ireland | 99 | 1.2k | Türkiye | 42 | 0.7k |
| Benin | 14 | 0.1k | Israel | 71 | 2.2k | Tunisia | 94 | 1.8k |
| Bhutan | 53 | 0.3k | Italy | 560 | 8.5k | Trinidad and Tobago | 9 | 0.2k |
| Bolivia | 18 | 0.1k | Jamaica | 0 | 0.0k | Tonga | 0 | 0.0k |
| Bosnia and Herzegovina | 40 | 0.2k | Japan | 388 | 5.7k | Togo | 16 | 0.0k |
| Botswana | 61 | 0.2k | Jordan | 15 | 0.1k | Timor-Leste | 2 | 0.0k |
| Brazil | 226 | 2.2k | Kazakhstan | 23 | 0.1k | Thailand | 223 | 2.9k |
| Brunei Darussalam | 0 | 0.0k | Kenya | 61 | 0.6k | Tajikistan | 4 | 0.0k |
| Bulgaria | 129 | 1.7k | Kiribati | 9 | 0.1k | Syrian Arab Republic | 0 | 0.0k |
| Burkina Faso | 6 | 0.1k | Kuwait | 0 | 0.0k | Switzerland | 140 | 1.9k |
| Burundi | 13 | 0.0k | Kyrgyzstan | 6 | 0.1k | Sweden | 124 | 1.9k |
| Cabo Verde | 0 | 0.0k | Laos | 80 | 0.2k | Suriname | 3 | 0.0k |
| Cambodia | 73 | 0.6k | Latvia | 25 | 0.1k | Sudan | 20 | 0.2k |
| Cameroon | 29 | 0.1k | Lebanon | 33 | 0.5k | Sri Lanka | 12 | 0.2k |
| Canada | 198 | 2.8k | Lesotho | 22 | 0.3k | Spain | 162 | 3.5k |
| Central African Republic | 4 | 0.0k | Liberia | 9 | 0.0k | South Sudan | 13 | 0.3k |
| Chad | 5 | 0.0k | Libya | 10 | 0.1k | South Africa | 79 | 1.0k |
| Chile | 47 | 0.3k | Liechtenstein | 6 | 0.0k | Somalia | 23 | 0.1k |
| China | 409 | 8.1k | Lithuania | 25 | 0.4k | Solomon Islands | 5 | 0.0k |

Table 9: The # of documents and cultural knowledge assertion sentences, per culture by country, that are specific to sub-country level geographical regions.
Medumba | 145 | | Awabakal | 35 | | Kalapalo, Kuikúro-Kalapálo | 17 | ---|---|---|---|---|---|---|---|--- Germanic | 117 | | Eleme | 35 | | Bengkala | 16 | Central Dusun, Kadazan Dusun | 115 | | Ahtena | 33 | | Bangala | 16 | Notre | 112 | | Ayoreo | 32 | | Chilcotin | 16 | Doga | 107 | | Bakumpai | 32 | | Northern Dagara | 16 | Hopi | 107 | | San Blas Kuna | 32 | | Gbagyi | 16 | Mescalero-Chiricahua Apache | 99 | | Kabardian | 32 | | Kayardild | 16 | Kashubian | 96 | | Hiberno-Scottish Gaelic | 31 | | Jah Hut | 16 | Ainu (Japan) | 94 | | Jahanka | 31 | | Kanuri | 16 | Assiniboine | 91 | | Kambaata | 31 | | Kankanaey | 16 | Arapaho | 89 | | Batek | 30 | | Barai | 15 | Jicarilla Apache | 88 | | Hunsrik | 30 | | Djabugay, Dyaabugay | 15 | Batak languages | 86 | | Bwa | 29 | | Emilian | 15 | Izere | 81 | | Gadang | 29 | | Macushi | 15 | Kodava | 75 | | Cowlitz | 28 | | Azha | 14 | Cappadocian Greek | 74 | | Dolpo | 28 | | Igede | 14 | Efik | 73 | | Kuku-Yalanji | 28 | | Khorasani Turkish | 14 | Algonquin | 72 | | Korak | 27 | | Bundeli | 13 | Hittite | 70 | | Harari | 26 | | Columbia-Wenatchi | 13 | Etruscan | 69 | | Humla | 26 | | Gooniyandi | 13 | Huichol | 67 | | Shambala | 26 | | Kaska | 13 | Hajong | 66 | | Dhanggatti, Dyangadi | 25 | | Amahuaca | 12 | Coast Miwok | 65 | | Gumatj | 25 | | Atsugewi | 12 | Mycenaean Greek | 62 | | Reel | 24 | | Awetí | 12 | Jju | 62 | | Banggarla | 24 | | Southern Luri | 12 | Pacific Gulf Yupik | 60 | | Hidatsa | 24 | | Adhola | 11 | Heiltsuk | 60 | | Khanty | 24 | | Qimant | 11 | Jukun Takum | 60 | | Gros Ventre | 23 | | Bidyogo | 11 | Khasi | 59 | | Baga Sitemu | 23 | | Gambera | 11 | Javanese | 58 | | Koasati | 23 | | Guro | 11 | Garre | 57 | | Igala | 23 | | Kurdish | 11 | Gujarati | 57 | | Yaka (Congo) | 23 | | Hermit | 11 | Chickasaw | 56 | | Adnyamathanha | 22 | | Molale | 11 | Gunditjmara | 55 | | Baras | 22 | | Pemon | 10 | Esselen | 54 | | Burarra | 22 | | Sari | 10 | Yanomamö | 54 | | Ejagham | 22 | | Bhunjia | 10 | Bhojpuri | 53 | | Ngadjuri | 21 | | Laba | 10 | Beothuk | 52 | | Kalenjin | 21 | | Lasi | 10 | Goan Konkani | 52 | | Gheg Albanian | 20 | | Arbore | 9 | Idoma | 52 | | Badaga | 20 | | Kaingang | 9 | Atakapa | 51 | | Chuvash | 20 | | Col | 9 | Chipewyan, Dene Suline | 50 | | Cocopa | 20 | | Lozi | 9 | Daai Chin | 50 | | Dimasa | 20 | | Leti (Indonesia) | 9 | Colonia Tovar German | 50 | | Eyak | 20 | | Akuntsu | 8 | Jakun | 50 | | Hyam | 20 | | Djauan, Jawoyn | 8 | Bodo (India) | 49 | | Greenlandic, Kalaallisut | 20 | | Adiwasi Garasia | 8 | Shor | 49 | | Mixed Great Andamanese | 19 | | Holikachuk | 8 | Lushai | 49 | | Karadjeri, Karajarri | 19 | | Jiru | 8 | Con | 48 | | Gilaki | 19 | | Kurichiya | 8 | Kutenai | 47 | | Ikulu | 19 | | Korwa | 8 | Djawi | 46 | | Worimi | 19 | | Hijazi Arabic | 7 | Angor | 44 | | Lobi | 19 | | Bugun | 7 | Chitimacha | 42 | | Ingrian | 18 | | Cuban | 7 | Gitxsan | 42 | | Keliko | 18 | | Cheq Wong, Chewong | 7 | Kickapoo | 41 | | Keiga | 18 | | Gata’ | 7 | Sudanese Arabic | 40 | | Ladin | 18 | | Gureng Gureng | 7 | Darlong | 38 | | Guerrero Amuzgo | 17 | | Gunwinggu | 7 | Haisla | 38 | | Bora | 17 | | Koba | 7 | Igbo | 38 | | Chamacoco | 17 | | Lanoh | 7 | Kalapuya | 38 | | Mro-Khimi Chin | 17 | | Apatani | 6 | Upper Kuskokwim | 37 | | Isoko | 17 | | Tuki | 6 | Atikamekw | 36 | | Krymchak | 17 | | Bote-Majhi | 6 | Esperanto | 36 | | Kota (India) | 17 | | Bwile | 6 | Table 10: The # of documents and cultural knowledge assertion sentences per culture by ethnolinguistic group
# Better Query Graph Selection for Knowledge Base Question Answering

Yonghui Jia, Wenliang Chen
Institute of Artificial Intelligence, School of Computer Science and Technology, Soochow University, China
<EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

This paper presents a novel approach based on semantic parsing to improve the performance of Knowledge Base Question Answering (KBQA). Specifically, we focus on how to select an optimal query graph from a candidate set so as to retrieve the answer from the knowledge base (KB). In our approach, we first propose to linearize the query graph into a sequence, which is used to form a sequence pair with the question. This allows us to use mature sequence models, such as BERT, to encode the sequence pair. Then we use a ranking method to sort the candidate query graphs. In contrast to previous studies, our approach can efficiently model semantic interactions between the graph and the question, and it ranks the candidate graphs from a global view. The experimental results show that our system achieves the top performance on ComplexQuestions and the second best performance on WebQuestions.

## Introduction

Knowledge Base Question Answering (KBQA) is a popular task defined to take natural language questions as input and return corresponding entities or attributes from knowledge bases, such as DBpedia (Auer et al. 2007) and Freebase (Bollacker et al. 2008). One representative line of approaches to KBQA builds on semantic parsing (SP), which converts input questions into formal meaning representations and then transforms them into query languages like SPARQL (Berant et al. 2013; Yih et al. 2015; Sun et al. 2020). There are two types of SP-based solutions. One uses generic meaning representations, such as $\lambda$-DCS (Liang 2013). However, this type of solution tends to suffer from the mismatch between the ontology and relations used in the meaning representations and those in the knowledge base (Kwiatkowski et al. 2013).

Figure 1: An example question and its corresponding query graph structure.

The other type of solution uses query graphs to represent the semantics of questions, which is expected to overcome the issue mentioned above (Yih et al. 2015; Bao et al. 2016; Hu et al. 2017). Figure 1 presents an illustrative query graph whose nodes and edges correspond to the entities and relationships in a knowledge base. By using the query graph as the representation, the process of KBQA can be divided into two steps: query graph generation and query graph selection. The former step constructs a set of candidate query graphs from the input question, while the latter decides the optimal query graph that is used to retrieve the final answer. We can see that the component of query graph selection is critical to the overall performance of KBQA systems.

Figure 2: Query graph generation process of “Who is the highest prime minister of spain after 1980?”. Figure (a) shows the corresponding focus nodes linking result, Figure (b) shows the corresponding main path, Figure (c) shows the result after adding entity constraints to the main path, and Figure (d) shows the query graph after adding all constraints.

Query graph selection is essentially a matching task between the question and candidate query graphs. Existing systems focus on encoding the query graphs with hand-crafted features (Yih et al. 2015; Luo et al. 2018). These works first calculate the semantic similarity between the query graph and the question with a cosine similarity function.
The similarity score is then used as a feature, together with other local features, to represent the query graph and the question. Finally, they feed the feature representation into a one-layer neural network model to obtain the score. Previous approaches have achieved a certain success on this task. However, we argue that 1) simply using a cosine distance function to measure the semantic similarity leads to the loss of interaction information between the graph and the question, and 2) hand-crafted features are usually not robust and are often not necessary for deep neural networks. To address the above problems, in this paper we propose to translate the matching between the question and the query graph into the matching between two sequences, which naturally models the interaction between the question and the query graph. To this end, we linearize the query graphs into sequences. This puts the question and the linearized query graph in the same sequence format, which makes it convenient to apply mature sequence modeling methods such as BERT (Devlin et al. 2019) and GPT-3 (Brown et al. 2020). In addition, we select the optimal query graph with a ranking strategy, hoping to take the relationship between candidate query graphs into consideration. Inspired by learning-to-rank methods (Li 2011; Pîrtoacă, Rebedea, and Ruseti 2019; Han et al. 2020), we utilize the listwise strategy to sort candidate query graphs from a global view, instead of the pairwise strategy used in the previous study (Luo et al. 2018). Experimental results on two widely-used KBQA datasets demonstrate the effectiveness of our proposed approach: the best performance on ComplexQuestions and the second best on WebQuestions. Overall, we make the following contributions. * • We propose a novel approach for better query graph selection in KBQA. In our approach, we convert the query graph into a corresponding sequence format, and thus the problem of matching between the question and the query graph is translated into the matching between two sequences. This allows us to use BERT to efficiently model interactions between the graph and the question. Moreover, our approach does not require any hand-crafted features. * • In addition, we use the listwise strategy to sort the candidate query graphs, which takes the relationship among the candidate graphs into consideration from a global view. Compared with the Pairwise Ranking used in (Luo et al. 2018), Listwise Ranking achieves better performance on both datasets. Figure 3: The process of converting query graph to sequence. ## Our Approach In this section, we describe our approach in detail. We divide the KBQA process into two subtasks: query graph generation and query graph selection. Formally, given a question $q$ and a knowledge base (KB), the semantics of $q$ is analyzed through query graph generation, and a set of candidate query graphs $G=\\{g_{1},g_{2},...,g_{n}\\}$ is obtained. Then, an optimal query graph $g^{*}$ is selected from the candidate set $G$ through query graph selection. Finally, we convert $g^{*}$ into the SPARQL format to retrieve the final answer to question $q$. Compared with the previous studies using query graphs (Yih et al. 2015; Hu et al. 2017; Luo et al. 2018), we improve our system with a different solution for query graph selection.
The basic idea is that we linearize each candidate query graph into a sequence, and thus the problem of matching between the question and the candidate query graph becomes the matching between two sequences. To select the optimal query graph $g^{*}$, we propose a ranking method with the listwise strategy, hoping to take the relationship between the candidate query graphs into consideration from a global view. ### Query Graph Generation The goal of query graph generation is to map the question into a semantic representation in the form of graphs. In this step, we follow the procedure of the previous studies (Yih et al. 2015; Luo et al. 2018) to generate the candidate query graphs. Given question $q$, we first conduct focus nodes linking to identify four types of constraints in the question, which are entity, implicit type, time interval, and ordinal. For entity linking, we utilize the tool SMART (Yang and Chang 2015) to obtain (mention, entity) pairs. For type linking, we use word embeddings to calculate the similarity between consecutive sub-sequences in the question (up to three words) and all the type words in the knowledge base, and select the top-10 (mention, type) pairs according to the similarity scores. Regarding time word linking, we use regular expression matching to extract time information. As for ordinal number linking, we use a predefined ordinal vocabulary and the “ordinal number + superlative” pattern to extract the integer expressions (about 20 superlative words, such as largest, highest, latest). We also use the entity enrichment method of Luo et al. (2018) to improve focus nodes linking. Figure 2(a) shows an example after focus nodes linking. After focus nodes linking, we get the main path by performing a one-hop and two-hop search based on the linked entity words, as shown in Figure 2(b). Next, entity constraints are added to the nodes in the main path. Figure 2(c) shows the state after this step. Then we add type constraints, time constraints and ordinal constraints in turn, and finally get the query graph shown in Figure 2(d). Through the above procedure, we obtain the candidate query graph set $G=\\{g_{1},g_{2},...,g_{n}\\}$ for query graph selection. ### Query Graph Selection Due to ambiguity, query graph generation may produce more than one, often hundreds of, candidate query graphs. Thus it is necessary to apply a matching operation to select the optimal query graph $g^{*}$ from the candidates. In this section, we first describe how to convert each $g$ in $G$ into sequence $g^{s}$. Then, we show how to encode the pair of $q$ and $g^{s}$. Finally, we describe the selection process. Inspired by the research in learning to rank (Li 2011; Pîrtoacă, Rebedea, and Ruseti 2019; Han et al. 2020), we select the query graph with different ranking strategies, i.e., Pointwise Ranking, Pairwise Ranking, and Listwise Ranking. #### Transforming Query Graph into Sequence The process of converting query graph $g$ to sequence $g^{s}$ can be regarded as the reverse of the query graph construction process. When constructing the query graph, we first search the main path and then add the four constraints of type, entity, time, and ordinal to the main path. Therefore, the whole query graph structure contains at most five components, which is much simpler than a general graph structure. More importantly, each component has a fixed semantic meaning.
Considering the fixed structure of the query graph, we transform the query graph into the corresponding sequence according to a predefined sub-path order. Specifically, we divide the query graph into different sub-paths according to its components. Through this graph decomposition, we get five sub-path sequences: TypePath, EntityPath, TimePath, OrdinalPath and MainPath. For example, the EntityPath corresponding to the entity constraint “Prime minister” in Figure 3 is “basic title prime minister.”. Finally, the five sub-path sequences are combined to form the corresponding query graph sequence. It is worth noting that we use additional tokens, [unused0-3], to separate the different sub-path sequences. As shown in Figure 3, the query graph sequence is “people person. [unused0] basic title prime minister. [unused1] from after 1980. [unused2] height max 1. [unused3] spain governing officials – office holder [A]”, in which ‘[A]’ is the real answer string, not just unified padding. Figure 4: (a) Query Graph and Question Encoding Framework. (b) Different Ranking Strategies Framework, where “$p_{qg^{s}_{1}}$” represents the sequence of the question and the positive query graph, and “$p_{qg^{s}_{2}}$”, “$p_{qg^{s}_{3}}$” and “$p_{qg^{s}_{4}}$” represent three sequences of the question and negative query graphs. #### Encoding Query Graph Sequence and Question After query graph $g$ is converted into sequence $g^{s}$, the task of matching question $q$ and query graph $g$ becomes a task of matching question $q$ and query graph sequence $g^{s}$. This allows us to naturally use mature sequence encoding models, such as BERT (Devlin et al. 2019) and GPT-3 (Brown et al. 2020), which can capture the interaction between two sequences. We choose the BERT architecture as our encoder, which has been widely used in natural language processing in recent years. BERT is a pre-trained language model based on the bidirectional Transformer architecture (Vaswani et al. 2017), which can be used to encode a single sentence or a sentence pair. To capture the interaction between the question and the query graph sequence, we use BERT's sentence-pair encoding. The encoding framework is shown in Figure 4(a). Given the question $q=\\{w_{1},w_{2},...,w_{m}\\}$ and the query graph sequence $g^{s}=\\{u_{1},u_{2},...,u_{n}\\}$, we connect $q$ and $g^{s}$ through special tags to form the sentence pair, denoted as $p_{qg^{s}}=\\{[CLS],w_{1},...,w_{m},[SEP],u_{1},...,u_{n},[SEP]\\}$. Each candidate query graph $g$ in set $G$ forms a sentence pair $p_{qg^{s}}$ with the corresponding question $q$. The pairs are fed to BERT for encoding one by one, and we use the output of the $[CLS]$ node of BERT as the semantic representation of the question and query graph sequence, denoted as f.
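To make the two steps above concrete, here is a minimal sketch of the sub-path linearization and the BERT pair encoding using Hugging Face `transformers`. The [unusedN] separators and the sub-path order follow the description above; the dictionary input format, function names, and the plain-hyphen MainPath in the toy example are illustrative assumptions, not the exact implementation.

```python
import torch
from transformers import BertModel, BertTokenizer

SUBPATH_ORDER = ["TypePath", "EntityPath", "TimePath", "OrdinalPath", "MainPath"]
SEPARATORS = ["[unused0]", "[unused1]", "[unused2]", "[unused3]"]

def linearize(query_graph: dict) -> str:
    # Concatenate the sub-path sequences in the predefined order,
    # separating them with the reserved [unusedN] tokens.
    parts = [query_graph[name] for name in SUBPATH_ORDER if name in query_graph]
    pieces = [parts[0]]
    for sep, part in zip(SEPARATORS, parts[1:]):
        pieces += [sep, part]
    return " ".join(pieces)

# never_split keeps the [unusedN] separators as single vocabulary tokens.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", never_split=SEPARATORS)
encoder = BertModel.from_pretrained("bert-base-uncased")

def encode_pair(question: str, graph_seq: str) -> torch.Tensor:
    # Builds [CLS] question [SEP] graph_seq [SEP] and returns the [CLS] vector f.
    inputs = tokenizer(question, graph_seq, return_tensors="pt", truncation=True)
    return encoder(**inputs).last_hidden_state[:, 0]

g = {
    "TypePath": "people person.",
    "EntityPath": "basic title prime minister.",
    "TimePath": "from after 1980.",
    "OrdinalPath": "height max 1.",
    "MainPath": "spain governing officials - office holder [A]",
}
f = encode_pair("who is the highest prime minister of spain after 1980 ?", linearize(g))
print(f.shape)  # torch.Size([1, 768])
```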
#### Ranking Query Graphs In this section, we rank the candidates with three different strategies, namely Pointwise Ranking, Pairwise Ranking and Listwise Ranking. These correspond to the three typical ranking strategies in information retrieval (Li 2011). Luo et al. (2018) use the pairwise strategy in their system and achieve a certain success. Before ranking, we preprocess the training data. According to whether the correct answer can be retrieved, the candidates are grouped into two sets, $G^{+}$ and $G^{-}$, where $G^{+}$ includes the positive graphs and $G^{-}$ includes the negative ones. We use $g_{i}^{+}$ and $g_{j}^{-}$ to denote a positive graph and a negative graph, respectively. Each graph $g_{i}$ in the two sets is encoded as representation $\textbf{f}_{i}$. After that, $\textbf{f}_{i}$ is fed into a linear layer to get a score $s_{i}$ that indicates the possibility of $g_{i}$ being the optimal query graph. ##### Pointwise Ranking. Pointwise Ranking processes the graphs one by one. When ranking candidate query graphs this way, the graphs do not need a total order; we only need to distinguish positive graphs from negative ones. That is, we treat the query graph ordering problem as a simple binary classification task. As shown in Figure 4(b), each query graph $g_{i}$ in Pointwise Ranking is optimized independently. Each candidate query graph $g_{i}$ has a label $y_{i}\in\\{1,0\\}$, where ‘1’ is the label of a positive graph and ‘0’ is the label of a negative graph. We train with the cross-entropy loss and select the query graph with the highest score as the optimal query graph $g^{*}$. The loss function is as follows, $s^{\prime}_{i}=\frac{1}{1+e^{-s_{i}}},$ (1) $L_{point}=-\sum_{i}\left[y_{i}\log(s^{\prime}_{i})+(1-y_{i})\log(1-s^{\prime}_{i})\right].$ (2) ##### Pairwise Ranking. Pairwise Ranking models the relation between two candidates and realizes ranking by repeatedly optimizing the relative order of pairs of elements. When using the pairwise strategy to rank candidate query graphs, we regard the ranking problem as one of distinguishing positive query graphs from negative query graphs. That is, we first construct pairs of positive and negative graphs, and then optimize the order within each pair, as shown in Figure 4(b). For each pair of positive and negative query graphs $(g_{i}^{+},g_{j}^{-})$, we get the scores $s_{i}$ and $s_{j}$ through BERT and the linear layer, respectively. Then $s_{i}$ and $s_{j}$ are normalized to $s^{\prime}_{i}$ and $s^{\prime}_{j}$ by Equation (1). We use the hinge loss to optimize the scores so that the score of the positive query graph exceeds that of the negative one by at least a fixed margin $\lambda$. The hinge loss is defined as follows, $L_{pair}=\max\\{0,\lambda-s^{\prime}_{i}+s^{\prime}_{j}\\},$ (3) where $\lambda$ is set to 0.5.
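The three objectives are compact enough to state directly in code. Below is a hedged PyTorch sketch of the pointwise loss of Equations (1)–(2), the pairwise hinge loss of Equation (3), and the listwise objective given in Equations (4)–(5) below; tensor shapes, batching, and reduction choices are assumptions rather than the exact training setup.

```python
import torch
import torch.nn.functional as F

# s: raw scores from the linear layer on top of the [CLS] vectors f.

def pointwise_loss(s: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Eq. (1)-(2): sigmoid normalization + binary cross-entropy per graph.
    return F.binary_cross_entropy(torch.sigmoid(s), y.float())

def pairwise_loss(s_pos: torch.Tensor, s_neg: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    # Eq. (3): hinge loss on sigmoid-normalized scores of (positive, negative) pairs.
    return torch.clamp(lam - torch.sigmoid(s_pos) + torch.sigmoid(s_neg), min=0).mean()

def listwise_loss(s_list: torch.Tensor, y_list: torch.Tensor) -> torch.Tensor:
    # Eq. (4)-(5): softmax over the whole list (one positive, m negatives),
    # then a cross-entropy-style objective over the normalized scores.
    p = torch.softmax(s_list, dim=-1)
    return -(y_list * torch.log(p) + (1 - y_list) * torch.log(1 - p)).sum()

scores = torch.tensor([2.1, -0.3, 0.5, -1.2])  # one positive, three negatives
labels = torch.tensor([1.0, 0.0, 0.0, 0.0])
print(listwise_loss(scores, labels))
```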
##### Listwise Ranking. Listwise Ranking can also model the interconnections among all candidates, and can directly optimize the order of the entire candidate set. In this strategy, we construct a list of positive and negative graphs for global optimization. In query graph selection, we do not care much about the ranking among positive graphs or among negative graphs; the goal of the global optimization is simply to rank a positive graph in first place. The application of the listwise strategy to query graph ranking is therefore slightly different from its use in traditional information retrieval. When constructing the training data, we pair each positive graph with a fixed number of negative graphs to form a list $C=\\{{g}_{0}^{+},{g}_{1}^{-},{g}_{2}^{-},…,{g}_{m}^{-}\\}$ with labels $\\{y_{0},y_{1},y_{2},...,y_{m}\\}$. The scores of group $C$ after BERT encoding and linear-layer mapping are recorded as $\\{s_{0},s_{1},s_{2},…,s_{m}\\}$. During training, we optimize the following objective function, $s^{\prime}_{i}=\frac{\exp(s_{i})}{\sum_{j=0}^{m}\exp(s_{j})},$ (4) $L_{list}=-\sum_{i=0}^{m}\left[y_{i}\log(s^{\prime}_{i})+(1-y_{i})\log(1-s^{\prime}_{i})\right].$ (5) ## Experiments ### Experimental Setup ##### Datasets. We conduct experiments on two widely-used datasets: WebQuestions (WebQ) (Berant et al. 2013; https://nlp.stanford.edu/software/sempre/) and ComplexQuestions (CompQ) (Bao et al. 2016; https://github.com/JunweiBao/MulCQA/tree/ComplexQuestions). The WebQ dataset contains both simple questions (84%) and complex reasoning questions (16%), which is close to the natural language questions people use in daily life. The dataset contains 5,810 question-answer pairs. The CompQ dataset is designed for complex question answering and contains a total of 2,100 question-answer pairs. Both WebQ and CompQ are divided into train, validation and test sets, as shown in Table 1. Both datasets use Freebase as the knowledge base (https://developers.google.com/freebase/), which has been widely used in KBQA systems. Dataset | train | validation | test ---|---|---|--- WebQ | 3,023 | 755 | 2,032 CompQ | 1,000 | 300 | 800 Table 1: The partitions of WebQuestions and ComplexQuestions. ##### Implementation Details. For encoding questions and query graphs, we utilize the BERT-base model. We choose the hyper-parameter settings according to the performance on the validation sets. Regarding the hyperparameters of the BERT-base model, we set the dropout ratio to 0.1 and the hidden size to 768. We use Adam as the optimizer and set the learning rate to $5\times 10^{-5}$. The maximum number of training epochs is set to 5. At the end of each epoch, we use the validation set to evaluate the model, and the model with the best performance on the validation set is selected as the final testing model. For performance evaluation, we report the average F1 score, as done in Berant et al. (2013). One remaining question is how to construct training data for query graph selection. Given a question and its corresponding query graph candidates, we use the query graphs whose answers have an F1 value greater than 0.1 as positive graphs and randomly sample negative graphs from the remaining candidates. ### Main Results Method | WebQ (F1%) | CompQ (F1%) ---|---|--- Pointwise | 52.4 | 38.4 Pairwise | 53.7 | 42.7 Listwise | 55.3 | 44.4 Table 2: The comparison results of the three ranking strategies on the test sets. Category | Method | WebQ (F1%) | CompQ (F1%) ---|---|---|--- | Yih et al. (2015) | 52.5 | - | Bao et al. (2016) | 52.4 | 40.9 Using Query Graph | Hu, Zou, and Zhang (2018) | 53.6 | - | Luo et al. (2018) | 52.7 | 42.8 | Lan and Jiang (2020) | - | 43.3 | Berant et al. (2013) | 36.4 | - Others | Jain (2016) | 55.6 | - | Chen, Wu, and Zaki (2019) | 51.8 | - | Xu et al. (2019) | 54.6 | - Our | Listwise | 55.3 | 44.4 Table 3: The comparison results with previous works on the test sets of WebQuestions and ComplexQuestions. Table 2 shows the comparison results of the three ranking strategies. From the table, we can see that Listwise Ranking and Pairwise Ranking outperform Pointwise Ranking. This fact indicates the necessity of modeling the inter-relations between query graph candidates.
In addition, we find that the superiority of Listwise Ranking and Pairwise Ranking is more pronounced on CompQ than on WebQ, which is in line with our intuition that complex questions may require more information to disambiguate query graph candidates. Listwise Ranking yields the best result on both datasets. The reason may be that, compared with Pairwise, Listwise considers more than two graphs at once and thus takes a global view during optimization. Table 3 shows the comparison between our system with Listwise Ranking and the previous works on the test sets of WebQ and CompQ, where the category “Using Query Graph” includes the previous systems that use query graphs and “Others” includes those that do not. From the table, we can see that our system yields the best result on CompQ and the second best on WebQ among all the systems. In particular, when compared with the approaches using query graphs, our system achieves the best performance. ### Discussion and Analysis #### Effect of Different Components in Query Graph Selection Sequence Info | WebQ (F1%) | CompQ (F1%) ---|---|--- All Path | 55.3 | 44.4 w/o constraints | 53.7 | 42.3 w/o answer | 54.3 | 43.6 Table 4: The effect of different components of the query graph sequence on query graph selection. When representing query graphs, we propose to transform a query graph into a sequence composed of sub-paths of different types. In order to explore the effect of different components on the final performance, we conduct experiments that remove some components from our Listwise system. The experimental results are presented in Table 4, where “All Path” refers to our final system that includes the main path plus four constraint paths, “w/o constraints” refers to the system with the four constraint paths removed, and “w/o answer” refers to the system with the answer string removed. The results show that accumulating the different components steadily improves system performance. This indicates that all the components of the query graph sequence are useful for our system. #### Error Analysis In this section, we investigate why our system gives wrong answers in many cases. If the candidate set generated by query graph generation does not include the correct answer, our system cannot find it. Here we check the cases where the candidate set includes the correct answer but our system (Listwise) fails to find it. We randomly select 100 such cases and check them manually. The errors are summarized as follows: Incorrect Query Graph Generation. Some query graphs can retrieve the correct answer but are actually not correct parses of the question. For example, for the question “where was david berkowitz arrested?”, query graph generation provides “david berkowitz places lived - location brooklyn, new york city” as a candidate. The candidate graph retrieves the correct answer but does not in fact match the question. We find that this type accounts for 45 of the 100 cases. To address it, we would have to improve the performance of query graph generation and the coverage of the KB. Incorrect Query Graph Selection. In the remaining cases, the candidate set includes a correct query graph which can perfectly retrieve the answer, yet our system still fails to find it. These errors fall into two categories. The first (40%) is selecting a graph with an incorrect relationship (main path) between the topic word and the answer.
The second (15%) is selecting incorrect constraints. To reduce these errors, we may perform a deeper analysis of the question to provide additional information for query graph selection. #### Effect of Negative Examples on Ranking Figure 5: The performance of three ranking strategies under different numbers of negative examples. To further explore the characteristics of the three ranking strategies of Pointwise Ranking, Pairwise Ranking, and Listwise Ranking, we build systems with different numbers of negative graphs. The performance is shown in Figure 5. From the figures, we find that as the number of negative graphs increases, the performance of all three systems first improves and then becomes relatively stable. We also find that Listwise Ranking can yield good performance with few negative samples. These facts indicate that we do not need too many negative graphs when training our systems. #### Case Study Question1: what type of breast cancer did sheryl crow have ? --- True: sheryl crow condition meningioma. False: sheryl crow films – film breast cancer: the path of wellness & healing. Question2: what role did paul mccartney play in the beatles ? True: member paul mccartney. the beatles member – role backing vocalist, lead vocalist, bass. False: the beatles (tv series) regular cast – actor george harrison, john lennon, lance percival, paul frees, paul mccartney, ringo starr. Table 5: The case study. We analyze some specific examples on which Listwise performs better than Pointwise. Two typical examples are listed in Table 5. For the question “what type of breast cancer did sheryl crow have?”, the true answer should be a type of cancer. Listwise Ranking can determine that ‘condition’ is the correct relation, but Pointwise Ranking chooses the wrong path that contains “breast cancer”. We argue that Listwise Ranking can better model the overall semantics of the sequence because it considers the relationship between the candidate query graphs during optimization, while Pointwise Ranking tends to focus on the semantics of local words. Besides, for the example “what role did paul mccartney play in the beatles?”, the correct query graph contains the true entity constraint, but Pointwise Ranking chooses a path without the entity constraint. This indicates to some extent that Pointwise Ranking is not effective enough at identifying constraint paths. ## Related Work Information retrieval (IR) and semantic parsing (SP) based approaches are the two mainstream directions for knowledge base question answering. IR-based methods (Yu et al. 2017; Gupta, Chinnakotla, and Shrivastava 2018; Chen, Wu, and Zaki 2019; Petrochuk and Zettlemoyer 2018; Zhao et al. 2019; Saxena, Tripathi, and Talukdar 2020) obtain relevant candidate answers according to the topic entity and then rank the answers to obtain the final result. The core of IR-based approaches is to identify the KB relation paths that the question refers to (Wu et al. 2019). For example, Dong et al. (2015) use multi-column Convolutional Neural Networks (CNN) to encode questions and paths into the same vector space and calculate their similarity. Hao et al. (2017) use Long Short-Term Memory (LSTM) networks instead of CNNs for the same purpose. Different from IR-based methods, SP-based approaches pay more attention to the semantic analysis of the question (Bao et al. 2016). The basic process of SP-based approaches is to parse the semantics of the question into some meaning representation and then map the meaning representation to the KB (Hu et al. 2017).
For example, Berant et al. (2013) parse the question into $\lambda$-DCS, and then map it to the knowledge base through alignment and bridging operations to obtain answers. Sun et al. (2020) design a novel skeleton grammar to express complex questions and improve the ability to parse them. The query graph is also a widely-used meaning representation in SP-based systems. Yih et al. (2015) pioneered query graph research for KBQA, proposing a staged query graph generation method for this task. Following this line, Luo et al. (2018) propose a complex query graph matching approach that simultaneously encodes multiple sub-paths to achieve a better query graph representation. More recently, Lan and Jiang (2020) propose a method to expand multiple relations so that it can handle more complex questions. In contrast to previous works that mostly focus on the representation of query graphs, we instead focus on the phase of selecting the optimal query graph. ## Conclusions We present a novel semantic matching approach based on semantic parsing to improve the performance of Knowledge Base Question Answering (KBQA). In this paper, the process of KBQA is divided into two steps, query graph generation and query graph selection, and we focus on the second step. In our approach, we linearize the query graphs into sequences. Then, we use BERT to encode the pair of the query graph sequence and the question to obtain the semantic representation. In addition, we select the optimal query graph with different ranking strategies, which take the relationship between candidate query graphs into consideration. Experimental results on two benchmark datasets demonstrate the effectiveness of our proposed approach. Specifically, our best-performing system achieves the top performance on ComplexQuestions and the second best performance on WebQuestions. ## References * Auer et al. (2007) Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; and Ives, Z. 2007. DBpedia: A nucleus for a web of open data. In _Proceedings of ISWC_, 722–735. * Bao et al. (2016) Bao, J.; Duan, N.; Yan, Z.; Zhou, M.; and Zhao, T. 2016. Constraint-based question answering with knowledge graph. In _Proceedings of COLING_, 2503–2514. * Berant et al. (2013) Berant, J.; Chou, A.; Frostig, R.; and Liang, P. 2013. Semantic parsing on Freebase from question-answer pairs. In _Proceedings of EMNLP_, 1533–1544. * Bollacker et al. (2008) Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In _Proceedings of SIGMOD_, 1247–1250. * Brown et al. (2020) Brown, T. B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D. M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. _arXiv:2005.14165_. * Chen, Wu, and Zaki (2019) Chen, Y.; Wu, L.; and Zaki, M. J. 2019. Bidirectional attentive memory networks for question answering over knowledge bases. In _Proceedings of NAACL-HLT_, 2913–2923. * Devlin et al. (2019) Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of NAACL-HLT_, 4171–4186. * Dong et al.
(2015) Dong, L.; Wei, F.; Zhou, M.; and Xu, K. 2015. Question answering over Freebase with multi-column convolutional neural networks. In _Proceedings of ACL_, 260–269. * Gupta, Chinnakotla, and Shrivastava (2018) Gupta, V.; Chinnakotla, M.; and Shrivastava, M. 2018. Retrieve and re-rank: A simple and effective IR approach to simple question answering over knowledge graphs. In _Proceedings of FEVER_, 22–27. * Han et al. (2020) Han, S.; Wang, X.; Bendersky, M.; and Najork, M. 2020. Learning-to-Rank with BERT in TF-Ranking. _arXiv:2004.08476_. * Hao et al. (2017) Hao, Y.; Zhang, Y.; Liu, K.; He, S.; Liu, Z.; Wu, H.; and Zhao, J. 2017. An end-to-end model for question answering over knowledge base with cross-attention combining global knowledge. In _Proceedings of ACL_, 221–231. * Hu et al. (2017) Hu, S.; Zou, L.; Yu, J. X.; Wang, H.; and Zhao, D. 2017. Answering natural language questions by subgraph matching over knowledge graphs. _IEEE Transactions on Knowledge and Data Engineering_, 30(5): 824–837. * Hu, Zou, and Zhang (2018) Hu, S.; Zou, L.; and Zhang, X. 2018. A state-transition framework to answer complex questions over knowledge base. In _Proceedings of EMNLP_, 2098–2108. * Jain (2016) Jain, S. 2016. Question answering over knowledge base using factual memory networks. In _Proceedings of the NAACL Student Research Workshop_, 109–115. * Kwiatkowski et al. (2013) Kwiatkowski, T.; Choi, E.; Artzi, Y.; and Zettlemoyer, L. 2013. Scaling semantic parsers with on-the-fly ontology matching. In _Proceedings of EMNLP_, 1545–1556. * Lan and Jiang (2020) Lan, Y.; and Jiang, J. 2020. Query graph generation for answering multi-hop complex questions from knowledge bases. In _Proceedings of ACL_, 969–974. * Li (2011) Li, H. 2011. Learning to rank for information retrieval and natural language processing. _Synthesis Lectures on Human Language Technologies_, 4(1): 1–113. * Liang (2013) Liang, P. 2013. Lambda dependency-based compositional semantics. _arXiv:1309.4408_. * Luo et al. (2018) Luo, K.; Lin, F.; Luo, X.; and Zhu, K. 2018. Knowledge base question answering via encoding of complex query graphs. In _Proceedings of EMNLP_, 2185–2194. * Petrochuk and Zettlemoyer (2018) Petrochuk, M.; and Zettlemoyer, L. 2018. SimpleQuestions nearly solved: A new upperbound and baseline approach. In _Proceedings of EMNLP_, 554–558. * Pîrtoacă, Rebedea, and Ruseti (2019) Pîrtoacă, G.-S.; Rebedea, T.; and Ruseti, S. 2019. Answering questions by learning to rank–Learning to rank by answering questions. In _Proceedings of EMNLP-IJCNLP_, 2531–2540. * Saxena, Tripathi, and Talukdar (2020) Saxena, A.; Tripathi, A.; and Talukdar, P. 2020. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In _Proceedings of ACL_, 4498–4507. * Sun et al. (2020) Sun, Y.; Zhang, L.; Cheng, G.; and Qu, Y. 2020. SPARQA: Skeleton-Based Semantic Parsing for Complex Questions over Knowledge Bases. In _Proceedings of AAAI_, 8952–8959. * Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In _Proceedings of NeurIPS_, 5998–6008. * Wu et al. (2019) Wu, P.; Huang, S.; Weng, R.; Zheng, Z.; Zhang, J.; Yan, X.; and Chen, J. 2019. Learning representation mapping for relation detection in knowledge base question answering. In _Proceedings of ACL_, 6130–6139. * Xu et al. (2019) Xu, K.; Lai, Y.; Feng, Y.; and Wang, Z. 2019.
Enhancing key-value memory neural networks for knowledge based question answering. In _Proceedings of NAACL_, 2937–2947. * Yang and Chang (2015) Yang, Y.; and Chang, M. 2015. S-MART: Novel tree-based structured learning algorithms applied to tweet entity linking. In _Proceedings of ACL_, 504–513. * Yih et al. (2015) Yih, S. W.-t.; Chang, M.-W.; He, X.; and Gao, J. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In _Proceedings of ACL_, 1321–1331. * Yu et al. (2017) Yu, M.; Yin, W.; Hasan, K. S.; Santos, C. d.; Xiang, B.; and Zhou, B. 2017. Improved neural relation detection for knowledge base question answering. In _Proceedings of ACL_, 571–581. * Zhao et al. (2019) Zhao, W.; Chung, T.; Goyal, A.; and Metallinou, A. 2019. Simple Question Answering with Subgraph Ranking and Joint-Scoring. In _Proceedings of NAACL-HLT_, 324–334.
# Towards the sampling Lovász Local Lemma Vishesh Jain Simons Institute for the Theory of Computing, Berkeley, CA 94720, USA<EMAIL_ADDRESS>, Huy Tuan Pham and Thuy Duong Vuong Stanford University, Stanford, CA 94305, USA {huypham<EMAIL_ADDRESS> ###### Abstract. Let $\Phi=(V,\mathcal{C})$ be a constraint satisfaction problem on variables $v_{1},\dots,v_{n}$ such that each constraint depends on at most $k$ variables and such that each variable assumes values in an alphabet of size at most $q$. Suppose that each constraint shares variables with at most $\Delta$ constraints and that each constraint is violated with probability at most $p$ (under the product measure on its variables). We show that for $k,q=O(1)$, there is a deterministic, polynomial time algorithm to approximately count the number of satisfying assignments and a randomized, polynomial time algorithm to sample from approximately the uniform distribution on satisfying assignments, provided that $C\cdot q^{3}\cdot k\cdot p\cdot\Delta^{7}<1,\quad\text{where }C\text{ is an absolute constant.}$ Previously, a result of this form was known essentially only in the special case when each constraint is violated by exactly one assignment to its variables. For the special case of $k$-CNF formulas, the term $\Delta^{7}$ improves the previously best known $\Delta^{60}$ for deterministic algorithms [Moitra, J.ACM, 2019] and $\Delta^{13}$ for randomized algorithms [Feng et al., arXiv, 2020]. For the special case of properly $q$-coloring $k$-uniform hypergraphs, the term $\Delta^{7}$ improves the previously best known $\Delta^{14}$ for deterministic algorithms [Guo et al., SICOMP, 2019] and $\Delta^{9}$ for randomized algorithms [Feng et al., arXiv, 2020]. ## 1\. Introduction The celebrated Lovász Local Lemma (LLL) is a fundamental tool in probabilistic combinatorics which provides a sufficient condition for avoiding a collection of “bad events” in a probability space. In a quite general form, it may be stated as follows. ###### Theorem 1.1 ([EL73]). Let $\mathcal{C}$ be a finite set of events in a probability space. For $C\in\mathcal{C}$, let $\Gamma(C)$ denote a subset of $\mathcal{C}$ such that $C$ is independent of the collection of events $\mathcal{C}\setminus(\\{C\\}\cup\Gamma(C))$. Suppose there exists an assignment of real numbers $x:\mathcal{C}\to(0,1)$ such that $\displaystyle\mathbb{P}[C]\leq x(C)\prod_{D\in\Gamma(C)}(1-x(D))\quad\text{for all }C\in\mathcal{C}.$ (1.1) Then, $\mathbb{P}[\wedge_{C\in\mathcal{C}}\overline{C}]\geq\prod_{C\in\mathcal{C}}(1-x(C))>0.$ In most applications of the LLL (cf. [AS04, MT10, MR98]), the underlying probability measure $\mathbb{P}[\cdot]$ is generated by a collection of independent random variables $X_{1},\dots,X_{n}$ and for each “bad event” $C\in\mathcal{C}$, there is a subset $\operatorname{vbl}(C)\subseteq\\{X_{1},\dots,X_{n}\\}$ such that $C$ depends only on $X_{i}\in\operatorname{vbl}(C)$. This is often referred to as the “variable-version” setting of the LLL. Moreover, for many applications (cf. [AS04]), the following “symmetric” case of the variable-version setting suffices. ###### Corollary 1.2. Let $X_{1},\dots,X_{n}$ denote a collection of independent random variables. Let $\mathcal{C}=\\{C_{1},\dots,C_{m}\\}$ denote a collection of events and for $C\in\mathcal{C}$, let $\operatorname{vbl}(C)$ denote a subset of $\\{X_{1},\dots,X_{n}\\}$ such that $C$ depends only on $X_{i}\in\operatorname{vbl}(C)$.
Suppose there exist $p\in(0,1)$ and $D\geq 0$ satisfying * • For each $C\in\mathcal{C}$, $\mathbb{P}[C]\leq p$. * • For each $i\in[m]$, $\\#\\{j\in[m]:\operatorname{vbl}(C_{j})\cap\operatorname{vbl}(C_{i})\neq\emptyset\\}\leq(D+1)$, and * • $e\cdot p\cdot(D+1)\leq 1$, where $e$ is the base of the natural logarithm. Then, $\mathbb{P}[\wedge_{i\in[m]}\overline{C_{i}}]\geq\prod_{i=1}^{m}\left(1-e\cdot\mathbb{P}[C_{i}]\right)>0.$ As a classical application of Corollary 1.2, consider the problem of satisfiability of a $k$-CNF formula over Boolean variables $x_{1},\dots,x_{n}$. Recall that a $k$-CNF formula over Boolean variables $x_{1},\dots,x_{n}$ is a collection of constraints $C_{1},\dots,C_{m}$ such that each $C_{i}$ depends on exactly $k$ variables and such that each $C_{i}$ is satisfied by all but exactly one assignment to its variables. Corollary 1.2 shows that if each constraint shares variables with at most (roughly) $2^{k}/e$ other constraints, then the formula has a satisfying assignment. Unfortunately, the original proof of Theorem 1.1 is non-constructive and does not provide an efficient algorithm to _find_ a satisfying assignment of the formula when this condition is met. In a breakthrough work [Bec91], Beck showed that if $2^{k}/e$ is replaced by $2^{k/48}$, then it is in fact possible to efficiently find a satisfying assignment. Beck’s bound was improved by many works over a period of nearly 20 years (e.g. [Alo91, MR98, Sri08, Mos09]) culminating in the landmark work of Moser and Tardos [MT10], which gives an efficient algorithmic proof of Theorem 1.1 provided further that one is in the variable setting and some other technical assumptions are satisfied. There has been much work on extending the result of Moser and Tardos to more general settings and the algorithmic aspects of the LLL remain an active area of research (see, e.g., [AIS19] and the references therein). In this work, we are concerned with the following. ###### Problem 1.3. Suppose that conditions similar to the LLL are satisfied. Can we approximately count the total number of satisfying assignments? Can we sample from approximately the uniform distribution on satisfying assignments? This problem has attracted much attention in the past five years. Below, we only discuss results for approximate counting, noting that similar results also hold for approximate sampling. In [BGG+19], Bezáková et al. showed that if $\mathbf{P}\neq\mathbf{NP}$, then it is _not_ possible to efficiently approximately count solutions of a Boolean $k$-CNF formula in which every variable is allowed to be present in $d$ constraints for $d\geq 5\cdot 2^{k/2}$, even when the $k$-CNF formula is monotone. For monotone $k$-CNFs, Hermon, Sly, and Zhang [HSZ19] showed that the Glauber dynamics mix rapidly for $d\leq c2^{k/2}$, thereby providing an approximate counting algorithm within a constant factor of the hard regime. For not necessarily monotone $k$-CNFs, Moitra [Moi19] provided a novel method to _deterministically_ approximately count satisfying assignments for $d\lesssim 2^{k/60}$ (where $\lesssim$ hides polynomial factors in $k$), which runs in polynomial time for $k=O(1)$. Using a Markov chain on a certain “projected space” inspired by Moitra’s method, Feng, Guo, Yin, and Zhang [FGYZ20] relaxed the restriction to $d\lesssim 2^{k/20}$ and removed the requirement $k=O(1)$, although their algorithm is not deterministic. We also mention here the work of Guo, Jerrum, and Liu [GJL19] on “partial rejection sampling”. 
For $k$-CNFs, their method allows one to _perfectly_ sample from the uniform distribution on satisfying assignments, either for “extremal formulas” (and $d$ in the LLL regime), or for formulas for which the intersections between the constraints satisfy some rather stringent size restrictions (and for $d$ matching the hardness regime). Very recently, work of Feng, He, and Yin [FHY20] addressed Problem 1.3 in the special case where each constraint is violated by a _very small number_ of configurations of its variables. Their results are obtained in the following setting. ###### Definition 1.4. (1) A constraint satisfaction problem (CSP) is said to be atomic if each constraint is violated by at most one assignment to its variables. (2) A $(k,d,q)$-CSP on variables $x_{1},\dots,x_{n}$ is a constraint satisfaction problem in which each $x_{i}$ takes values in $[q]$, each constraint depends on exactly $k$ variables, and each variable features in at most $d$ constraints. In [FHY20], a fast randomized algorithm is provided in the setting of Corollary 1.2, _assuming that the CSP is atomic_ and that $pD^{350}\lesssim 1$. For $(k,d,q)$-CSPs which are atomic, they obtain better bounds, leading to an algorithm for $k$-CNF Boolean formulas with $d\lesssim 2^{k/13}$, and an algorithm for proper $q$-colorings of $k$-uniform hypergraphs with $q\gtrsim d^{9/(k-12)}$ (this improves on a previous bound of $q\gtrsim d^{14/(k-14)}$ for $q,k=O(1)$ due to Guo, Liao, Lu, and Zhang [GLLZ19], although [GLLZ19] provides a deterministic algorithm). For not-necessarily-atomic CSPs, [FHY20] simply decompose each constraint into atomic constraints, which leads to the restriction $\displaystyle p(DN)^{350}\lesssim 1,$ (1.2) where $N$ is an upper bound on the number of violating assignments to the variables of any constraint (hence, $N=1$ for an atomic CSP). To see that this condition is vastly more restrictive than Corollary 1.2, consider the case of Boolean CSPs for which each constraint depends on at most $k$ variables. Then, $p\geq 2^{-k}$, so that (1.2) fails to be applicable as soon as $N\gtrsim 2^{k/350}$. In contrast, Corollary 1.2 shows that a solution exists, provided that $D\lesssim 2^{k}/N$ for all $N\lesssim 2^{k}$. The restriction $N\gtrsim 2^{k/350}$ arising from (1.2) is rather undesirable, since in many applications of the LLL (cf. [AS04]), $N=\Theta(2^{ck})$ with the constant $c\in(0,1)$ coming from various concentration inequalities. One of the main open problems mentioned in the works [GLLZ19, FHY20] is whether one can go beyond the “atomic CSP” framework to provide an affirmative answer to Problem 1.3 for general CSPs. ### 1.1. Our results We provide, for the first time, approximate counting and sampling algorithms for general CSPs under LLL-like conditions. ###### Theorem 1.5. Let $\Phi=(V,\mathcal{C})$ denote a constraint satisfaction problem on variables $v_{1},\dots,v_{n}$ and constraints $C_{1},\dots,C_{m}$. For each constraint $C\in\mathcal{C}$, let $\operatorname{vbl}(C)\subseteq V$ denote the variables it depends on. Suppose there exist $p\in(0,1)$, $\Delta\geq 1$, $k\geq 1$, $q\geq 1$ satisfying the following conditions. * • The domain of each variable $v_{1},\dots,v_{n}$ is of size at most $q$. * • For each $C\in\mathcal{C}$, $|\operatorname{vbl}(C)|\leq k$. * • For each $C\in\mathcal{C}$, $\mathbb{P}[C]\leq p$.
* • For each $i\in[m]$, $\\#\\{j\in[m]\setminus\\{i\\}:\operatorname{vbl}(C_{j})\cap\operatorname{vbl}(C_{i})\neq\emptyset\\}\leq\Delta$, and * • $q^{3}\cdot k\cdot p\cdot\Delta^{7}\leq c$, where $c$ is an absolute constant. Then, for any $\varepsilon\in(0,1)$, the number of satisfying assignments of $\Phi$ can be deterministically approximated to within relative error $(1\pm\varepsilon)$ in time $\left(\frac{n}{\varepsilon}\right)^{\operatorname{poly}(k,\Delta,\log q)}.$ ###### Remark. (1) For Boolean $k$-CNFs with $k=O(1)$, this provides a deterministic approximate counting algorithm for $\Delta\lesssim 2^{k/7}$, where $\lesssim$ hides polynomial factors in $k$. As mentioned above, the previously best known algorithm, either randomized or deterministic, requires $\Delta\lesssim 2^{k/13}$ [FHY20]. (2) For properly $q$-coloring $k$-uniform hypergraphs, this provides a deterministic approximate counting algorithm for $q\gtrsim\Delta^{7/(k-4)}$. The previous best known algorithms required $q\gtrsim\Delta^{9/(k-12)}$ [FHY20] or $q\gtrsim\Delta^{14/(k-14)}$ for deterministic algorithms [GLLZ19]. The framework for proving Theorem 1.5 also lends itself naturally to an approximate sampling algorithm. ###### Theorem 1.6. Under the same conditions as Theorem 1.5 and for any $\varepsilon\in(0,1)$, there is a randomized algorithm to sample from a distribution which is $\varepsilon$-close in total variation distance to the uniform distribution on satisfying assignments of $\Phi$. The running time of the algorithm is $\left(\frac{n}{\varepsilon}\right)^{\operatorname{poly}(k,\Delta,\log{q})}$. ### 1.2. Techniques The works [Bec91, Alo91, MR98, Sri08] on the algorithmic local lemma predating Moser’s work [Mos09] employ the following two step strategy: one first finds a “good” partial assignment with the property that the residual formula “factorizes” into logarithmic sized components, at which point, one can extend the partial assignment to a complete satisfying assignment efficiently using exhaustive enumeration. For sampling from (approximately) the uniform distribution on satisfying assignments, one therefore only needs to generate these initial partial assignments according to the correct distribution. To accomplish this, we use a generalization of a linear program introduced by Moitra [Moi19] to approximate the marginal distribution (induced by the uniform distribution on satisfying assignments) of an unassigned variable, conditioned on partial assignments satisfying certain conditions. The key conceptual contribution of our work is the following insight: the combinatorial conditions guaranteeing the factorization of the residual formula into logarithmic sized components are essentially the same as those ensuring that the LP can be solved efficiently (see the proof of Lemmas 4.9 and 5.1). This provides a unifying view of the works [Bec91, Alo91, MR98, Sri08] on the algorithmic LLL, and the recent works [Moi19, GLLZ19], and in our opinion, sheds considerable light on the latter two works. From a technical viewpoint, our main contribution is a considerable generalization, refinement, and simplification of the framework introduced by Moitra [Moi19]. 
In particular, we eliminate any need to use the algorithmic local lemma of Moser and Tardos [MT10] (which was an essential ingredient in [Moi19, GLLZ19] and leads to additional degradation in the quantitative bounds); instead, we show how to efficiently exploit a certain “pseudo-random property” of the initial partial assignment in a direct manner to remove this loss (Lemma 5.2). Our framework also treats approximate counting and approximate sampling on the same footing in a very simple manner, whereas the previous works [Moi19, GLLZ19] suffered from additional losses in going from approximate counting to approximate sampling. We believe that our analysis lays bare the limits of this approach towards approximate counting and sampling. The term $\Delta^{7}$ in Theorems 1.5 and 1.6 comes from the aggregation of two sources of slack. The first is the use of $\\{2,3\\}$-trees as in [Alo91, MR98, Sri08] – even for the algorithmic LLL, $\\{2,3\\}$-trees only lead to $\Delta^{4}$ instead of $\Delta$, and being able to use “denser witness trees” was the major innovation in the works of Moser [Mos09] and Moser and Tardos [MT10]. Another reason for the slack is a certain “factorization property” required to even write down the linear program efficiently. We believe that with some additional ideas, this second source of slack may be overcome, and leave this as an interesting direction for future research. ### 1.3. Organization In Section 2, we collect some preliminaries. In Section 3, we present a slightly simpler algorithm which proves Theorem 1.5 with $\Delta^{7}$ replaced by $\Delta^{10}$; the analysis of a key step in this algorithm is completed in Section 4. The ideas introduced in Sections 3 and 4 are further refined in Section 5 to prove Theorem 1.5 in Section 5.2 and Theorem 1.6 in Section 5.3. ## 2\. Preliminaries ### 2.1. Lovász Local Lemma As mentioned in the introduction, the LLL provides a sufficient condition guaranteeing that the probability of avoiding a collection $\mathcal{C}$ of “bad events” in a probability space is positive. In particular, when the LLL condition 1.1 is satisfied, the so-called LLL distribution, $\mu_{S}[\cdot]:=\mathbb{P}[\cdot\mid\wedge_{C\in\mathcal{C}}\overline{C}]$ is well-defined (here, the subscript $S$ is chosen to represent “satisfying”). The LLL distribution is the central object of study in this paper. We begin by recording a standard comparison between the LLL distribution $\mu_{S}[\cdot]$ and the product distribution on the variables $\mathbb{P}[\cdot]$. ###### Theorem 2.1 (cf. [HSS11, Theorem 2.1]). Under the conditions of Theorem 1.1, for any event $B$ in the probability space, $\mu_{S}[B]\leq\mathbb{P}[B]\prod_{C\in\Gamma(B)}(1-x(C))^{-1}.$ ###### Remark. The above comparison is one-sided, as it ought to be, since for any $C\in\mathcal{C}$, $\mu_{S}[C]=0$ while $\mathbb{P}[C]$ may be positive. For the remainder of this paper, we will restrict ourselves to the variable- version symmetric setting described in Corollary 1.2, in which case, we choose $x(C)=e\cdot\mathbb{P}[C]$ for all $C\in\mathcal{C}$, with $e$ the base of the natural logarithm. ### 2.2. {2,3}-trees One of the key tools in our analysis will be the notion of $\\{2,3\\}$-trees, which goes back to Alon’s work on the algorithmic local lemma [Alo91]. ###### Definition 2.2. Let $G=(V,E)$ be a graph and let $\operatorname{dist}_{G}(\cdot,\cdot)$ denote the graph geodesic distance. 
A $\\{2,3\\}$-tree is a subset of vertices $T\subseteq V$ such that * • for any $u,v\in T$, $\operatorname{dist}_{G}(u,v)\geq 2$; * • if one adds an edge between every $u,v\in T$ such that $\operatorname{dist}_{G}(u,v)=2\text{ or }3$, then $T$ is connected. The next lemma bounds the number of $\\{2,3\\}$-trees of a given size in terms of the maximum degree of the graph. ###### Lemma 2.3 (cf. [Alo91, Lemma 2.1]). Let $G=(V,E)$ be a graph with maximum degree $d$. Then, for any $v\in V$, the number of $\\{2,3\\}$-trees in $G$ of size $t$ containing $v$ is at most $(ed^{3})^{t-1}/2$. Before stating the next lemma, we need some notation. Let $H=(V,E)$ be a hypergraph. Let $\operatorname{Lin}(H)$ denote its line graph i.e. $V(\operatorname{Lin}(H))=E$ and there is an edge between $u\neq v\in V(\operatorname{Lin}(H))$ if and only if the hyperedges $u,v\in E$ share a vertex in $V$. Finally, let $L^{2}(H)$ denote the graph with the same vertex set as $\operatorname{Lin}(H)$ and with an edge between two vertices $u\neq v\in V(L^{2}(H))$ if and only if $\operatorname{dist}_{\operatorname{Lin}(H)}(u,v)\leq 2$. Then, a simple greedy argument shows the following. ###### Lemma 2.4 (cf. [GLLZ19, Lemma 14]). Let $H=(V,E)$ be a hypergraph such that each hyperedge in $E$ intersects at most $d$ other hyperedges (equivalently, the degree of $\operatorname{Lin}(H)$ is at most $d$). Let $B\subseteq E(H)$ be a collection of hyperedges which induce a connected subgraph in $L^{2}(H)$. Then, for any $e^{*}\in B$, there exists a $\\{2,3\\}$-tree $T\subseteq B$ in $\operatorname{Lin}(H)$ such that $e^{*}\in T$ and $|T|\geq|B|/d$. ## 3\. A simpler algorithm for a more restrictive regime In this section and the next one, we present a simpler algorithm which proves a version of Theorem 1.5 provided that $p\leq(10^{5}q^{3}k\Delta^{10})^{-1}$. The design and analysis of this algorithm already contains the basic ideas. Later, in Section 5, we will introduce some key additional ingredients to refine this algorithm and its analysis in order to prove Theorems 1.5 and 1.6. Throughout this section and the next one, we fix an arbitrary ordering of the variables $v_{1},\dots,v_{n}\in V$ and an arbitrary ordering of the constraints $C_{1},\dots,C_{m}\in\mathcal{C}$. Moreover, for notational convenience, we will assume that the domain of each variable is $[q]$; a straightforward modification of the proof shows that we only need the size of the domains to be bounded above by $q$. ### 3.1. Step 1: Finding a guiding assignment The goal of this step is to find a partial assignment of the variables, which will serve as a “guide” for the rest of the algorithm. This step is very much inspired by analogous routines for the algorithmic local lemma [Bec91, Alo91, MR98], and generalizes a similar step in the works [Moi19, GLLZ19]. We note that in the works [Moi19, GLLZ19], it is critical that one is able to efficiently find a partial assignment satisfying each constraint without assigning values to too many variables in any constraint – while this is indeed possible in the special settings considered in these works, in our setting, such a partial assignment need not exist. The key result of this subsection is Proposition 3.5. We will first present a randomized construction achieving the guarantees of Proposition 3.5, and then show how to derandomize it using standard techniques. 
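Before the formal description, the following is a minimal Python sketch of the freezing dynamic implemented by the procedure (R1)–(R5) stated next. For concreteness it is specialized to atomic constraints (a single violating assignment each), where the conditional violation probability is easy to compute; the general procedure only needs the probabilities $\mathbb{P}[C_{j}\mid P_{i}]$, and the data encoding and parameter names here are illustrative assumptions.

```python
import random

def guiding_assignment(n, q, constraints, p_prime):
    """Sketch of the randomized greedy procedure (R1)-(R5) below.

    Each constraint is a dict {variable: forbidden value} (an atomic
    constraint); p_prime plays the role of the threshold p'.
    """
    assignment = {}          # the growing partial assignment P_i
    frozen = set()

    def violation_prob(c):
        # P[C | P_i] under the product measure: an assigned variable either
        # rules the violation out (factor 0) or is consistent with it
        # (factor 1); each unassigned variable contributes a factor 1/q.
        prob = 1.0
        for v, bad in c.items():
            if v in assignment:
                if assignment[v] != bad:
                    return 0.0
            else:
                prob /= q
        return prob

    for v in range(n):       # variables processed in a fixed order (R2)
        if v in frozen:
            continue
        assignment[v] = random.randrange(q)          # (R3)
        for c in constraints:                        # (R4)
            if violation_prob(c) > p_prime:          # dangerous constraint
                frozen.update(u for u in c if u not in assignment)
    return assignment, frozen

# Two overlapping 3-clauses over Boolean variables (q = 2):
cnf = [{0: 1, 1: 1, 2: 1}, {2: 0, 3: 0, 4: 0}]
print(guiding_assignment(n=5, q=2, constraints=cnf, p_prime=0.3))
```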
We consider the following randomized greedy procedure to construct a partial assignment of $v_{1},\dots,v_{n}$, where $p^{\prime}>0$ is a parameter which will be specified later. 1. (R1) Let $A_{0}=V$ denote the set of initially “available variables” and let $F_{0}=\emptyset$ denote the set of initially “frozen variables”. Initialize the stage to $i=1$. 2. (R2) Select the first variable (according to our order) in $A_{i-1}$. Denote this variable by $v_{i}^{*}$. If no such variable exists, terminate the process. 3. (R3) Assign $v_{i}^{*}$ a uniformly random value from its alphabet $[q]$. Let $P_{i}$ denote the partial assignment resulting after the assignment to $v^{*}_{i}$. 4. (R4) Let $\mathcal{F}_{i}=\\{j\in[m]:\mathbb{P}[C_{j}\mid P_{i}]>p^{\prime}\\}$ denote the set of “dangerous constraints” under $P_{i}$. We “freeze” all the variables involved in any of the “dangerous constraints”, i.e. we set $F_{i}=F_{i-1}\cup\bigcup_{j\in\mathcal{F}_{i}}((\operatorname{vbl}(C_{j})\cap A_{i-1})\setminus\\{v_{i}^{*}\\})\text{ and }A_{i}=A_{i-1}\setminus(F_{i}\cup\\{v^{*}_{i}\\}).$ 5. (R5) Increment $i$ by $1$ and return to (R2). Note that the process terminates at the first stage $s$ satisfying $A_{s}=\emptyset$. Let $P_{1},\dots,P_{s}$ denote the partial assignments generated during the course of the process. Let $\mathcal{F}=\cup_{i=1}^{s}\mathcal{F}_{i}$ denote the set of constraints declared “dangerous” at any point during the process. Finally, let $\nu$ denote the distribution on partial assignments given by the random partial assignment $P_{s}$. We emphasize that $s$ itself is a random variable. The following simple observation will be useful later. ###### Lemma 3.1. For all $i\in[s]$ and for all $j\in[m]$, $\mathbb{P}[C_{j}\mid P_{i}]\leq p^{\prime}q.$ ###### Proof. Fix $j\in[m]$. If $j\notin\mathcal{F}$, then we are done, so assume that $j\in\mathcal{F}$. Let $i$ be the first stage for which $C_{j}\in\mathcal{F}_{i}$. Then, $\mathbb{P}[C_{j}\mid P_{i-1}]\leq p^{\prime}$, and since $P_{i}$ extends $P_{i-1}$ by assigning $v_{i}^{*}$, we have $\displaystyle\mathbb{P}[C_{j}\mid P_{i}]$ $\displaystyle\leq\frac{\mathbb{P}[C_{j}\mid P_{i-1}]}{\min_{u\in[q]}\mathbb{P}[v_{i}^{*}=u]}$ $\displaystyle\leq p^{\prime}q.$ Note that if $C_{j}\in\mathcal{F}_{i}$, all available variables in $\operatorname{vbl}(C_{j})$ (other than $v_{i}^{*}$) are added to $F_{i}$. Hence, no variable in $\operatorname{vbl}(C_{j})$ is assigned a value during the remainder of the process, so that $\mathbb{P}[C_{j}\mid P_{i+k}]=\mathbb{P}[C_{j}\mid P_{i}]$ for all $0\leq k\leq s-i$. ∎ For $h\in[n]$, let $\mathfrak{S}(h)$ be the $\sigma$-algebra generated by the output of the procedure on partial assignments of $v_{1},\dots,v_{h}$. Also, for each $h\in[n]$, let $\iota(h)=\max\\{i\in[h]:v_{i}^{*}=v_{h^{\prime}}\text{ for some }h^{\prime}\leq h\\}$. In other words, $\iota(h)$ denotes the number of variables in $v_{1},\dots,v_{h}$ which are assigned values by the partial assignment. ###### Lemma 3.2. Let $T\subseteq\mathcal{C}$ be a collection of constraints such that for any $C,C^{\prime}\in T$ with $C\neq C^{\prime}$, $\operatorname{vbl}(C)\cap\operatorname{vbl}(C^{\prime})=\emptyset$. Then, letting $M_{h}=\prod_{C\in T}\mathbb{P}[C\mid P_{\iota(h)}],$ we have $\mathbb{E}_{\nu}[M_{h+1}\mid\mathfrak{S}(h)]=M_{h}.$ ###### Proof. Since the variables $v_{1},\dots,v_{n}$ are processed in order, given $\mathfrak{S}(h)$, it is determined whether $v_{h+1}$ is frozen or not. If $v_{h+1}$ is frozen, then $M_{h+1}=M_{h}$. Otherwise, $v_{h+1}$ is not frozen.
Note that, in this case, $\iota(h+1)=\iota(h)+1$ and $v^{*}_{\iota(h+1)}=v_{h+1}$. Since $\operatorname{vbl}(C)$ are disjoint for $C\in T$, there is at most one constraint $C_{T}\in T$ such that $v_{h+1}\in\operatorname{vbl}(C_{T})$. If there is no such constraint $C_{T}$, then $M_{h+1}=M_{h}$. Consider now the case that $v_{h+1}$ is not frozen, and there exists a unique constraint $C_{T}\in T$ such that $v_{h+1}\in\operatorname{vbl}(C_{T})$. We have $\mathbb{P}[C\mid P_{\iota(h+1)}]=\mathbb{P}[C\mid P_{\iota(h)}]$ for all $C\neq C_{T}$. Moreover, since $v_{h+1}$ is assigned a uniformly distributed value in $[q]$, we have $\displaystyle\mathbb{E}_{\nu}\bigg{[}\mathbb{P}[C_{T}\mid P_{\iota(h+1)}]\mid\mathfrak{S}(h)\bigg{]}$ $\displaystyle=\frac{1}{q}\sum_{a\in[q]}\frac{\mathbb{P}[C_{T}\wedge P_{\iota(h)}\wedge v_{h+1}=a]}{\mathbb{P}[P_{\iota(h)}\wedge v_{h+1}=a]}$ $\displaystyle=\frac{\mathbb{P}[C_{T},P_{\iota(h)}]}{\mathbb{P}[P_{\iota(h)}]}$ $\displaystyle=\mathbb{P}[C_{T}\mid P_{\iota(h)}].$ Combining these cases gives the desired conclusion. ∎ Since $\mathbb{P}[C]\leq p$ for all $C\in\mathcal{C}$, the following is an immediate corollary of Lemma 3.2. ###### Corollary 3.3. Let $T\subseteq\mathcal{C}$ be a collection of constraints such that for any $C,C^{\prime}\in T$ with $C\neq C^{\prime}$, $\operatorname{vbl}(C)\cap\operatorname{vbl}(C^{\prime})=\emptyset$. Then, $\mathbb{E}_{\nu}\left[\prod_{C\in T}\mathbb{P}[C\mid P_{s}]\right]\leq p^{|T|}.$ We are now ready to state and prove the key property of guiding partial assignments, which is that they “factorize” the constraint satisfaction problem into small connected components. Let $H=(V,\mathcal{C})$ denote the hypergraph induced by the constraint satisfaction problem and let $G(\mathcal{C})$ denote its line graph. Let $G^{2}(\mathcal{F})=(\mathcal{F},E)$ denote the graph whose vertices are $\mathcal{F}$, and $C_{i}\neq C_{j}\in\mathcal{F}$ are adjacent if and only if $\operatorname{dist}_{G(\mathcal{C})}(C_{i},C_{j})\leq 2$. ###### Lemma 3.4. Let $p\leq p^{\prime}/(10\Delta^{3})$. The $\nu$-probability that $G^{2}(\mathcal{F})$ has a connected component of size at least $L$ is at most $n\Delta\cdot 2^{-L/\Delta}$. ###### Proof. Suppose that $\mathfrak{C}$ is a connected component in $G^{2}(\mathcal{F})$ with $|\mathfrak{C}|\geq L$. By Lemma 2.4, there exists a $\\{2,3\\}$-tree $T$ in $G(\mathcal{C})$ with $|T|\geq|\mathfrak{C}|/\Delta\geq L/\Delta$. Let $\mathfrak{T}$ denote the set of $\\{2,3\\}$-trees in $G(\mathcal{C})$ of size $L/\Delta$. Then, $\displaystyle\nu[G^{2}(\mathcal{F})\text{ has a connected component of size}\geq L]$ $\displaystyle\leq\nu[G^{2}(\mathcal{F})\text{ contains a }\\{2,3\\}\text{-tree of size}\geq L/\Delta]$ $\displaystyle\leq\sum_{T\in\mathfrak{T}}\nu[T\subseteq\mathcal{F}]$ $\displaystyle\leq\sum_{T\in\mathfrak{T}}(p/p^{\prime})^{L/\Delta}$ $\displaystyle\leq|\mathcal{F}|(e\Delta^{3})^{L/\Delta}(p/p^{\prime})^{L/\Delta}$ $\displaystyle\leq n\Delta(e\Delta^{3}p/p^{\prime})^{L/\Delta}$ $\displaystyle\leq n\Delta\cdot 2^{-L/\Delta}.$ The fourth line uses Lemma 2.3 and the last line uses the assumed bound on $p/p^{\prime}$. Let us explain the third line. 
By Corollary 3.3 and Markov's inequality (noting that $T\subseteq\mathcal{F}$ forces $\mathbb{P}[C\mid P_{s}]>p^{\prime}$ for every $C\in T$, by the argument in the proof of Lemma 3.1), for any $T\in\mathfrak{T}$, $\nu[T\subseteq\mathcal{F}]\leq\nu\left[\left(\prod_{C\in T}\mathbb{P}[C\mid P_{s}]\right)>(p^{\prime})^{|T|}\right]\leq\left(\frac{p}{p^{\prime}}\right)^{|T|}\leq\left(\frac{p}{p^{\prime}}\right)^{L/\Delta}.\qed$ The preceding lemma shows that with high probability, our random greedy process returns a partial assignment satisfying the condition in Lemma 3.4 (for $L$ sufficiently large). Using the standard method of conditional expectations, we can find such a partial assignment deterministically. ###### Proposition 3.5. There exists a deterministic algorithm running in time $O(n^{\operatorname{poly}(\log q,\log\Delta,k)})$ which generates a sequence of partial assignments $P_{1},\dots,P_{s}$ with the following properties. 1. (1) For all $i\in[s]$, $P_{i}$ assigns values to $i$ variables, and $P_{i}$ extends $P_{i-1}$. 2. (2) $A_{s}=\emptyset$. 3. (3) For all $i\in[s]$ and $j\in[m]$, $\mathbb{P}[C_{j}\mid P_{i}]\leq p^{\prime}q$. 4. (4) Every connected component in $G^{2}(\mathcal{F})$ has size at most $L=10\Delta\log(\Delta n)$. ###### Proof. Let $L^{\prime}=10\log(\Delta n)$, and let $\mathfrak{T}$ denote the collection of all $\\{2,3\\}$-trees of size $L^{\prime}$ in $G(\mathcal{C})$. Note that $|\mathfrak{T}|\leq\operatorname{poly}(n^{\log^{2}\Delta})$, and indeed, it is easily seen (cf. [Alo91]) that the collection $\mathfrak{T}$ can be constructed in time $\operatorname{poly}(n^{\log^{2}\Delta})$. Now, for a partial assignment $X$, define $H(X)=\sum_{T\in\mathfrak{T}}\prod_{C\in T}\left(\mathbb{P}[C\mid X]/p^{\prime}\right).$ By the proof of Lemma 3.4, if we can find a sequence of partial assignments $P_{1},\dots,P_{s}$ satisfying properties (1), (2), (3) such that $H(P_{s})<1$, then (4) is also satisfied, since any $\\{2,3\\}$-tree in $G(\mathcal{C})$ of size $L^{\prime}$ contained in $\mathcal{F}$ contributes at least $1$ to the sum. To find such a sequence of partial assignments, we follow the same greedy procedure as before, except now, after having chosen $P_{i-1}$ and $v_{i}^{*}$, we choose the value of $v_{i}^{*}$ to be $\operatorname*{arg\,min}_{a\in[q]}H(P_{i-1}\wedge{v_{i}^{*}=a}).$ We claim that $H(P_{i})\leq H(P_{i-1})$ for all $i\in[s]$. Indeed, for every $T\in\mathfrak{T}$, there exists at most one $C_{T}\in T$ such that $v_{i}^{*}\in\operatorname{vbl}(C_{T})$. Therefore, $\displaystyle\sum_{a\in[q]}\mathbb{P}[v_{i}^{*}=a]H(P_{i-1}\wedge v_{i}^{*}=a)$ $\displaystyle=\sum_{T\in\mathfrak{T}}\sum_{a\in[q]}(p^{\prime})^{-1}\mathbb{P}[C_{T}\mid P_{i-1}\wedge v_{i}^{*}=a]\mathbb{P}[v_{i}^{*}=a]\prod_{C\in T\setminus C_{T}}\frac{\mathbb{P}[C\mid P_{i-1}]}{p^{\prime}}$ $\displaystyle=H(P_{i-1}),$ which shows that it is possible to choose $P_{i}$ to ensure $H(P_{i})\leq H(P_{i-1})$. Finally, since $H(\emptyset)\leq(n\Delta)\cdot(e\Delta^{3})^{L^{\prime}}\cdot(p/p^{\prime})^{L^{\prime}}<1,$ we are done. ∎ ### 3.2. Step 2: Approximate counting Let $P_{0}=\emptyset$ and $P_{1},\dots,P_{s}$ denote the sequence of partial assignments returned by Proposition 3.5. As before, we will denote the vertices which are successively assigned values by $v_{1}^{*},\dots,v_{s}^{*}$ and we will denote their values under $P_{s}$ by $a_{1}^{*},\dots,a_{s}^{*}$. For a partial assignment $X$, let $\mathcal{S}_{X}$ denote the set of (complete) satisfying assignments extending $X$. We will use $\mathcal{S}_{P_{0}}$ (or $\mathcal{S}_{\emptyset}$) to denote the set of all complete satisfying assignments.
Then, $\displaystyle\frac{|\mathcal{S}_{P_{s}}|}{|\mathcal{S}_{P_{0}}|}$ $\displaystyle=\frac{|\mathcal{S}_{P_{1}}|}{|\mathcal{S}_{P_{0}}|}\cdot\frac{|\mathcal{S}_{P_{2}}|}{|\mathcal{S}_{P_{1}}|}\cdots\frac{|\mathcal{S}_{P_{s}}|}{|\mathcal{S}_{P_{s-1}}|}$ $\displaystyle=\prod_{i=1}^{s}\mu_{S}[v_{i}^{*}=a_{i}^{*}\mid P_{i-1}],$ where recall that $\mu_{S}$ denotes the uniform measure on all satisfying assignments, i.e., on $\mathcal{S}_{P_{0}}$. Thus, to approximate $|\mathcal{S}_{P_{0}}|$, it suffices to approximate $|\mathcal{S}_{P_{s}}|$ and $\mu_{S}[v_{i}^{*}=a_{i}^{*}\mid P_{i-1}]$ for all $i\in[s]$. ###### Lemma 3.6. For $P_{s}$ returned by Proposition 3.5, $|\mathcal{S}_{P_{s}}|$ can be computed exactly in time $n^{\operatorname{poly}(\Delta,k,\log q)}$. ###### Proof. Since $A_{s}=\emptyset$, the set of variables left unassigned by $P_{s}$ is precisely $F_{s}$. Let $G_{1},\dots,G_{r}$ denote the maximal connected components of $G^{2}(\mathcal{F})$ and let $V^{\prime}_{1},\dots,V^{\prime}_{r}$ denote the variables appearing in any constraint in $G_{1},\dots,G_{r}$. By the maximality of $G_{1},\dots,G_{r}$, the sets $V_{1}^{\prime},\dots,V_{r}^{\prime}$ are mutually disjoint. Also, by the maximality of $G_{1},\dots,G_{r}$, there does not exist any $C\in\mathcal{C}$ such that $\operatorname{vbl}(C)\cap V^{\prime}_{i}\neq\emptyset$ and $\operatorname{vbl}(C)\cap V^{\prime}_{j}\neq\emptyset$ for some $i\neq j\in[r]$, since otherwise, some vertex in $G_{i}$ would be connected in $G^{2}(\mathcal{F})$ to some vertex in $G_{j}$. Finally, note that $|G_{i}|\leq L$, and hence $|V^{\prime}_{i}|\leq kL$ for all $i\in[r]$. Since any $v\in F_{s}$ must belong to some $C\in\mathcal{F}$ and since any $C\in\mathcal{F}$ must belong to some $G_{i}$, it follows that $F_{s}\subseteq V_{1}^{\prime}\cup\dots\cup V^{\prime}_{r}.$ Moreover, as seen in the previous paragraph, there are no constraints involving variables from both $V^{\prime}_{i}$ and $V^{\prime}_{j}$ for $i\neq j$. Therefore, for each $i\in[r]$, we can exhaustively enumerate all assignments to $V^{\prime}_{i}\cap F_{s}$, check how many of them satisfy all relevant constraints, and finally take the product over $i\in[r]$ in the claimed time. ∎ Approximating $\mu_{S}[v_{i}^{*}=a_{i}^{*}\mid P_{i-1}]$ for $i\in[s]$ is much more involved and will be the content of the next section. Let $\delta\in(0,1)$ and $q^{-n}\leq r_{-}\leq r_{+}\leq q^{n}$ be parameters. In Proposition 4.8, we will construct a subroutine $\operatorname{Alg}_{r_{-},r_{+},\delta}$ with the following properties. Suppose $p^{\prime}\leq(10000q^{3}k\Delta^{7})^{-1}$ and let $b\in[q]$. * • $\operatorname{Alg}_{r_{-},r_{+},\delta}$ runs in time $\operatorname{poly}(n,k,q)\cdot 2^{\log(1/\delta)\cdot\operatorname{poly}(\Delta,k,\log{q})}$. * • $\operatorname{Alg}_{r_{-},r_{+},\delta}$ returns $\operatorname{YES}$ if and only if $r_{-}(1-\delta)\leq\frac{\mu_{S}[v_{i}^{*}=b\mid P_{i-1}]}{\mu_{S}[v_{i}^{*}=a_{i}^{*}\mid P_{i-1}]}\leq r_{+}(1+\delta).$ Let $\varepsilon\in(0,1)$ be a parameter. Then, using such a subroutine along with binary search on the parameters $r_{-},r_{+}$, we can clearly approximate $\mu_{S}[v_{i}^{*}=a_{i}^{*}\mid P_{i-1}]$ up to a multiplicative factor of $\exp(\varepsilon/n)$ for each $i\in[s]$ in time $(n/\varepsilon)^{\operatorname{poly}(\Delta,k,\log{q})}$. Together with Lemma 3.6, this therefore provides an approximation of $|\mathcal{S}_{P_{0}}|$ up to relative error $\exp(\varepsilon)$. ## 4\.
Efficient estimation of the marginals We will continue to use the notation and conventions of the previous section. Throughout, we fix a partial assignment $P_{s}$ as returned by Proposition 3.5. By considering the fixed order $v_{1},\dots,v_{n}$ of the variables, this fixes the identity of the variables $v_{1}^{*},\dots,v_{s}^{*}$ as well as the intermediate sequence of partial assignments $P_{1},\dots,P_{s-1}$. Throughout, we also fix $\ell\in[s]$. Our goal is to efficiently approximate the conditional probabilities $p_{\ell}(a):=\mu_{S}[v_{\ell}^{*}=a\mid P_{\ell-1}]$ for all $a\in[q]$. We will use $\mu_{S}$ to denote the uniform measure over all (complete) satisfying assignments, and for a partial assignment $x$, $\mu_{S}[\cdot\mid x]$ to denote the uniform measure on all (complete) satisfying assignments extending $x$. For partial assignments $x,x^{\prime}$, the notation $x^{\prime}\to x$ means that $x$ is an extension of $x^{\prime}$ (i.e. each variable that is assigned in $x^{\prime}$ is also assigned in $x$ to the same value). Finally, we emphasize that $\mathbb{P}[\cdot]$ will always mean the product measure on the variables $v_{1},\dots,v_{n}$. ### 4.1. Idealized coupling procedure and the idealized decision tree Let $p^{\prime\prime}>0$ be a parameter which will be chosen later. Fix $a\neq b\in[q]$. Let $P_{\ell}(a)$ denote the partial assignment extending $P_{\ell-1}$ obtained by setting $v_{\ell}^{*}=a$ and let $P_{\ell}(b)$ be defined analogously. We begin by describing a coupling between assignments extending $P_{\ell}(a)$ and $P_{\ell}(b)$, which will motivate the subsequent discussion. We note that this coupling is not meant to actually be implemented by the algorithm. 1. (C1) Initialize the partial assignments $X=P_{\ell}(a)$ and $Y=P_{\ell}(b)$. Initialize $(V_{S})_{X,Y}=\\{v_{1}^{*},\dots,v_{\ell}^{*}\\}$ (the collection of “set” variables) and $(V_{D})_{X,Y}=\\{v_{\ell}^{*}\\}$ (the collection of “dangerous” variables). 2. (C2) Choose the lowest numbered constraint $A\in\mathcal{C}$ such that $(V_{D})_{X,Y}\cap\operatorname{vbl}(A)\neq\emptyset$ and $\operatorname{vbl}(A)\cap((V_{D})_{X,Y}\cup(V_{S})_{X,Y})^{c}\neq\emptyset$. If no such $A\in\mathcal{C}$ exists, then terminate. 3. (C3) Choose the lowest numbered variable $v\in\operatorname{vbl}(A)\cap((V_{D})_{X,Y}\cup(V_{S})_{X,Y})^{c}$. 4. (C4) Sample a pair of values $(v_{X},v_{Y})$ according to the maximal coupling of the marginal distributions of $\mu_{S}$ at $v$, conditioned on $X$ and on $Y$ respectively. 5. (C5) Update $X$ by assigning $v=v_{X}$, and update $Y$ by assigning $v=v_{Y}$. Update $(V_{S})_{X,Y}$ by adding $v$. 6. (C6) Let $D_{X,Y}=\\{u\in(V_{S})_{X,Y}:X(u)\neq Y(u)\\}$. Let $\mathcal{F}_{X,Y}=\\{C\in\mathcal{C}:\mathbb{P}[C\mid X]>p^{\prime\prime}\text{ or }\mathbb{P}[C\mid Y]>p^{\prime\prime}\\}$. Update $(V_{D})_{X,Y}=D_{X,Y}\cup\bigcup_{C\in\mathcal{F}_{X,Y}}(\operatorname{vbl}(C)\cap(V_{S})_{X,Y}^{c}),$ and return to (C2). We record a few simple observations. 1. (O1) The set $(V_{S})_{X,Y}$ increases throughout the process. 2. (O2) $\mathcal{F}_{X,Y}$ is non-decreasing throughout the process. Indeed, once $C\in\mathcal{F}_{X,Y}$, no other $v\in\operatorname{vbl}(C)$ can be chosen in (C3), so that the conditional probability of $C$ with respect to all subsequent partial assignments remains the same. 3. (O3) The set $(V_{D})_{X,Y}$ is non-decreasing throughout the process.
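The only probabilistically non-trivial ingredient above is step (C4). For concreteness, the following Python sketch gives the textbook construction of the maximal coupling of two distributions on $[q]$ (the function name and the list-based representation of the distributions are our own assumptions); under this coupling the two samples agree with probability exactly $1-\operatorname{TV}$.

```python
import random

def sample_maximal_coupling(mu, nu):
    """Sample (X, Y) with X ~ mu, Y ~ nu, maximizing P[X == Y].

    mu, nu: lists of probabilities over {0, ..., q-1}, each summing to 1.
    Under the maximal coupling, P[X != Y] equals TV(mu, nu).
    """
    overlap = [min(a, b) for a, b in zip(mu, nu)]
    agree_mass = sum(overlap)  # = 1 - TV(mu, nu)
    if random.random() < agree_mass:
        # Agreement branch: draw one value from the normalized overlap.
        v = random.choices(range(len(mu)), weights=overlap)[0]
        return v, v
    # Disagreement branch: draw X and Y independently from the normalized
    # positive and negative parts of mu - nu.
    excess_mu = [max(a - b, 0.0) for a, b in zip(mu, nu)]
    excess_nu = [max(b - a, 0.0) for a, b in zip(mu, nu)]
    x = random.choices(range(len(mu)), weights=excess_mu)[0]
    y = random.choices(range(len(nu)), weights=excess_nu)[0]
    return x, y
```

Of course, the conditional marginals $\mu_{S}[v=\cdot\mid X]$ and $\mu_{S}[v=\cdot\mid Y]$ fed into such a coupling are not efficiently computable, which is why the coupling above is only an idealized device for the analysis.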
The above coupling process may be viewed as randomly traversing root-to-leaf trajectories in an idealized deterministic rooted decision tree $\mathcal{T}$, defined using the following inductive procedure. 1. (T1) The root of the tree consists of the partial assignments $(x_{0},y_{0}):=(P_{\ell}(a),P_{\ell}(b))$. 2. (T2) Given a node $(x,y)$ (consisting of partial assignments on the same variables $(V_{S})_{x,y}$), construct $D_{x,y},\mathcal{F}_{x,y}$ as in (C6). Let $(V_{D})_{x,y}=D_{x,y}\cup\bigcup_{C\in\mathcal{F}_{x,y}}(\operatorname{vbl}(C)\cap(V_{S})_{x,y}^{c}).$ 3. (T3) If there is no $A\in\mathcal{C}$ with $(V_{D})_{x,y}\cap\operatorname{vbl}(A)\neq\emptyset$ and $\operatorname{vbl}(A)\cap((V_{D})_{x,y}\cup(V_{S})_{x,y})^{c}\neq\emptyset$, then $(x,y)$ is a leaf of $\mathcal{T}$. 4. (T4) Otherwise, let $A\in\mathcal{C}$ be the lowest numbered such constraint, and let $v_{x,y}$ be the lowest numbered variable in $\operatorname{vbl}(A)\cap((V_{D})_{x,y}\cup(V_{S})_{x,y})^{c}$. The children of $(x,y)$ in $\mathcal{T}$ consist of all possible extensions of $(x,y)$ obtained by assigning a value to the variable $v_{x,y}$. The next lemma collects some useful properties of $\mathcal{T}$. ###### Lemma 4.1. For $\mathcal{T}$ as defined above, 1. (1) For any node $(x,y)\in\mathcal{T}$, $\mathbb{P}[C_{j}\mid x]\leq p^{\prime\prime}q\text{ and }\mathbb{P}[C_{j}\mid y]\leq p^{\prime\prime}q\quad\text{ for all }j\in[m].$ 2. (2) Assuming that $e\cdot p^{\prime\prime}q\cdot\Delta\leq 1$, for any node $(x,y)\in\mathcal{T}$ and for any $v\notin(V_{D})_{x,y},$ $\displaystyle\operatorname{TV}(\mu_{S}[v=\cdot\mid x],\mathbb{P}[v=\cdot])$ $\displaystyle\leq(1-3p^{\prime\prime}q)^{-\Delta}-1,$ $\displaystyle\operatorname{TV}(\mu_{S}[v=\cdot\mid y],\mathbb{P}[v=\cdot])$ $\displaystyle\leq(1-3p^{\prime\prime}q)^{-\Delta}-1.$ 3. (3) For any leaf $(x,y)\in\mathcal{T}$, there is a partition $V=(V_{D})_{x,y}\cup(V_{G})_{x,y}\cup(V_{R})_{x,y}$ such that every variable in $(V_{G})_{x,y}$ is assigned to the same value by both $x$ and $y$ and such that there is no constraint $C\in\mathcal{C}$ with variables in both $(V_{D})_{x,y}$ and $(V_{R})_{x,y}$. ###### Proof. (1) is immediate from (O2) and the same argument as Lemma 3.1. (3) follows immediately using the termination criterion (T3) by taking $(V_{G})_{x,y}=(V_{S})_{x,y}\setminus(V_{D})_{x,y}$, and $(V_{R})_{x,y}=((V_{D})_{x,y}\cup(V_{S})_{x,y})^{c}$. Finally, for (2), setting $\mu(\cdot)=\mu_{S}[v=\cdot\mid x]$ and $\nu(\cdot)=\mathbb{P}[v=\cdot]$, we get $\displaystyle\operatorname{TV}(\mu_{S}[v=\cdot\mid x],\mathbb{P}[v=\cdot])$ $\displaystyle=\operatorname{TV}(\mu,\nu)$ $\displaystyle=\sum_{a\in[q]:\mu(a)>\nu(a)}(\mu(a)-\nu(a))$ $\displaystyle\leq\sum_{a\in[q]:\mu(a)>\nu(a)}\frac{1}{q}\left((1-3p^{\prime\prime}q)^{-\Delta}-1\right)$ $\displaystyle\leq(1-3p^{\prime\prime}q)^{-\Delta}-1,$ where the second line uses the definition of total variation distance, and the third line uses (1) and Theorem 2.1. The same argument works for $y$ as well. ∎ The idealized coupling process and decision tree are naturally associated to the following quantities. ###### Definition 4.2. For $\mathcal{T}$ as defined above, * • For any node $(x,y)\in\mathcal{T}$, let $\mathcal{S}_{x}$ denote the set of (complete) satisfying assignments extending $x$, and let $\mathcal{S}_{y}$ denote the set of (complete) satisfying assignments extending $y$. 
* • For any node $(x,y)\in\mathcal{T}$, let $\mu_{\operatorname{cp}}(x,y)$ denote the probability that the idealized coupling process reaches $(x,y)$. For any $(x,y)\notin\mathcal{T}$, $\mu_{\operatorname{cp}}(x,y)=0$. * • For any node $(x,y)\in\mathcal{T}$, let $\displaystyle p_{x,y}^{x}$ $\displaystyle=\frac{\mu_{\operatorname{cp}}(x,y)}{\mu_{S}[x\mid x_{0}]},$ $\displaystyle p_{x,y}^{y}$ $\displaystyle=\frac{\mu_{\operatorname{cp}}(x,y)}{\mu_{S}[y\mid y_{0}]}.$ We conclude this subsection with some simple, but crucial, relations between these quantities. ###### Lemma 4.3. For $\mathcal{T}$, $\mathcal{S}_{x},\mathcal{S}_{y},p^{x}_{x,y},p^{y}_{x,y}$ as above, 1. (1) $p_{x,y}^{x},p_{x,y}^{y}\in[0,1]$. 2. (2) $p_{x_{0},y_{0}}^{x_{0}},p_{x_{0},y_{0}}^{y_{0}}=1$. 3. (3) For every non-leaf node $(x,y)\in\mathcal{T}$ whose children are defined on the set $(V_{S})_{x,y}\cup\\{v_{x,y}\\}$, letting $v=v_{x,y}$ and letting $x_{v(a)}$ denote the extension of $x$ obtained by setting $v$ to $a$ (and similarly for $y_{v(a)}$), $\displaystyle p^{x}_{x,y}$ $\displaystyle=\sum_{b\in[q]}p^{x_{v(a)}}_{x_{v(a)},y_{v(b)}}\quad\text{ for all }a\in[q],$ $\displaystyle p^{y}_{x,y}$ $\displaystyle=\sum_{b\in[q]}p^{y_{v(a)}}_{x_{v(b)},y_{v(a)}}\quad\text{ for all }a\in[q].$ 4. (4) For every node $(x,y)\in\mathcal{T}$, $\frac{|\mathcal{S}_{x}|\cdot p^{x}_{x,y}}{|\mathcal{S}_{y}|\cdot p^{y}_{x,y}}=\frac{|\mathcal{S}_{x_{0}}|}{|\mathcal{S}_{y_{0}}|}.$ 5. (5) For every non-leaf node $(x,y)\in\mathcal{T}$ whose children are defined on the set $(V_{S})_{x,y}\cup\\{v_{x,y}\\}$ and for $\eta=(1-3p^{\prime\prime}q)^{-\Delta}-1$, if $\eta\leq 1/(2q)$, then (letting $v=v_{x,y}$) $\displaystyle\sum_{b\neq a}p_{x_{v(a)},y_{v(b)}}^{x_{v(a)}}$ $\displaystyle\leq 4q\eta\cdot p^{x}_{x,y}\quad\text{ for all }a\in[q].$ $\displaystyle\sum_{b\neq a}p_{x_{v(b)},y_{v(a)}}^{y_{v(a)}}$ $\displaystyle\leq 4q\eta\cdot p^{y}_{x,y}\quad\text{ for all }a\in[q].$ ###### Proof. (1) and (2) are trivial. For (3), we note that for any $a\in[q]$, $\displaystyle\sum_{b\in[q]}p^{x_{v(a)}}_{x_{v(a)},y_{v(b)}}$ $\displaystyle=\frac{\sum_{b\in[q]}\mu_{\operatorname{cp}}(x_{v(a)},y_{v(b)})}{\mu_{S}[x_{v(a)}\mid x_{0}]}$ $\displaystyle=\frac{\mu_{\operatorname{cp}}(x,y)\cdot\mu_{S}[v=a\mid x]}{\mu_{S}[x\mid x_{0}]\cdot\mu_{S}[v=a\mid x]}$ $\displaystyle=p^{x}_{x,y}.$ The same argument also applies to $p^{y}_{x,y}$. 
For (4), we indeed have $\displaystyle\frac{p^{x}_{x,y}}{p^{y}_{x,y}}$ $\displaystyle=\frac{\mu_{S}[y\mid y_{0}]}{\mu_{S}[x\mid x_{0}]}$ $\displaystyle=\frac{|\mathcal{S}_{y}|/|\mathcal{S}_{y_{0}}|}{|\mathcal{S}_{x}|/|\mathcal{S}_{x_{0}}|}.$ Finally, for (5), letting $\Pi_{x,y}$ denote the optimal coupling of $\mu_{S}[v=\cdot\mid x]$ and $\mu_{S}[v=\cdot\mid y]$, we have for any $a\in[q]$, $\displaystyle\frac{\sum_{b\neq a}p^{x_{v(a)}}_{x_{v(a)},y_{v(b)}}}{p_{x,y}^{x}}$ $\displaystyle=\frac{\sum_{b\neq a}\mu_{\operatorname{cp}}(x_{v(a)},y_{v(b)})}{\mu_{\operatorname{cp}}(x,y)\mu_{S}[v=a\mid x]}$ $\displaystyle=\frac{\sum_{b\neq a}\Pi_{x,y}(a,b)}{\mu_{S}[v=a\mid x]}$ $\displaystyle\leq\frac{\sum_{b\neq a}\Pi_{x,y}(a,b)}{(1/q)-\eta}$ $\displaystyle\leq 2q\cdot\operatorname{TV}(\mu_{S}[v=\cdot\mid x],\mu_{S}[v=\cdot\mid y])$ $\displaystyle\leq 2q\left(\operatorname{TV}(\mu_{S}[v=\cdot\mid x],\mathbb{P}[v=\cdot])+\operatorname{TV}(\mathbb{P}[v=\cdot],\mu_{S}[v=\cdot\mid y])\right)$ $\displaystyle\leq 4q\eta,$ where the third line follows from (2) of Lemma 4.1, the fourth line follows from the characterization of the total variation distance in terms of optimal coupling, and the last line follows again from (2) of Lemma 4.1. ∎ ### 4.2. Setting up the linear program The most important property of the quantities defined above is (4) in Lemma 4.3, which shows that given $|\mathcal{S}_{x}|/|\mathcal{S}_{y}|,p^{x}_{x,y},p^{y}_{x,y}$ at any node $(x,y)\in\mathcal{T}$, one obtains the key quantity $|\mathcal{S}_{x_{0}}|/|\mathcal{S}_{y_{0}}|$. What makes this property useful is the following observation, which shows that for $(x,y)\in\mathcal{T}$ _which are leaves_, the ratio $|\mathcal{S}_{x}|/|\mathcal{S}_{y}|$ can be computed efficiently. ###### Lemma 4.4. For any leaf $(x,y)\in\mathcal{T}$, $|\mathcal{S}_{x}|/|\mathcal{S}_{y}|$ can be computed in time $\operatorname{poly}(n,k,q)\cdot q^{|(V_{D})_{x,y}|}$. ###### Proof. Let $(x,y)\in\mathcal{T}$ be a leaf and let $(V_{D})_{x,y},(V_{G})_{x,y},(V_{R})_{x,y}$ be the partition of $V$ as in (3) of Lemma 4.1. All the unassigned variables (under $x,y$) are in $(V_{D})_{x,y}\cup(V_{R})_{x,y}$ and note that there are no constraints with variables in both $(V_{D})_{x,y}$ and $(V_{R})_{x,y}$. Further, the number of ways of assigning values to variables in $(V_{R})_{x,y}$ so that they satisfy all constraints with variables in $(V_{R})_{x,y}\cup(V_{G})_{x,y}$ is the same under $x$ and under $y$ (and hence does not contribute to the ratio), since each variable in $(V_{G})_{x,y}$ is assigned to the same value by both $x$ and $y$. Therefore, the ratio $|\mathcal{S}_{x}|/|\mathcal{S}_{y}|$ is equal to the ratio of the number of ways of assigning values to the unassigned variables in $(V_{D})_{x,y}$ such that all constraints with variables in $(V_{D})_{x,y}\cup(V_{G})_{x,y}$ are satisfied. This can be done by exhaustive enumeration in the claimed time. ∎ Motivated by the preceding discussion, let $L\geq 2$ be a parameter to be chosen later and consider the $L$-truncated decision tree defined as follows. ###### Definition 4.5. For $L\geq 2$ and with $\mathcal{T}$ as before, we define the $L$-truncated decision tree $\mathcal{T}_{L}$ to consist of those nodes $(x,y)\in\mathcal{T}$ for which $|(V_{S})_{x,y}|\leq L+\ell$.
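Before setting up the linear program over the truncated tree, we illustrate the enumeration underlying Lemma 4.4 with a short sketch (Python; the predicate representation of constraints and all names are hypothetical, not taken from the paper). By Lemma 4.4 and its proof, the ratio of the two counts computed below for $x$ and for $y$ equals $|\mathcal{S}_{x}|/|\mathcal{S}_{y}|$, in time $\operatorname{poly}(n,k,q)\cdot q^{|(V_{D})_{x,y}|}$ as claimed.

```python
from itertools import product

def count_dangerous_extensions(assignment, dangerous_vars, q, constraints, vbl):
    """Count assignments of the unassigned dangerous variables that avoid
    every bad event touching a dangerous variable.

    assignment: dict (variable -> value) for the variables set by the leaf.
    dangerous_vars: the set (V_D)_{x,y} from part (3) of Lemma 4.1.
    constraints: iterable of predicates C(assignment_dict) -> True iff the
        bad event C occurs; a satisfying assignment must avoid all of them.
    vbl: dict mapping each constraint to its set of variables.
    """
    free = [v for v in dangerous_vars if v not in assignment]
    relevant = [C for C in constraints if vbl[C] & set(dangerous_vars)]
    count = 0
    for values in product(range(q), repeat=len(free)):
        trial = dict(assignment)
        trial.update(zip(free, values))
        if not any(C(trial) for C in relevant):  # no bad event occurs
            count += 1
    return count
```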
We let $\mathcal{L}_{L}$ denote the leaves of $\mathcal{T}_{L}$, $\mathcal{L}_{L}^{g}$ denote those leaves in $\mathcal{L}_{L}$ which have $|(V_{S})_{x,y}|\leq L+\ell-1$ (in particular, these are also leaves of $\mathcal{T}$), and let $\mathcal{L}_{L}^{b}$ denote the remaining leaves. We now set up a linear program to mimic the quantities $p^{x}_{x,y},p^{y}_{x,y}$ for each node of $\mathcal{T}_{L}$. Formally, given parameters $r_{-}\leq r_{+}$, $\eta=(1-3p^{\prime\prime}q)^{-\Delta}-1$ and $\mathcal{T}_{L}$, we check whether the following linear program in variables $\widehat{p}_{x,y}^{x}$ and $\widehat{p}_{x,y}^{y}$ is feasible: 1. (LP1) For all $(x,y)\in\mathcal{T}_{L}$, $0\leq\widehat{p}^{x}_{x,y},\widehat{p}^{y}_{x,y}\leq 1$. 2. (LP2) For every $(x,y)\in\mathcal{L}_{L}^{g}$, $r_{-}\leq\frac{\widehat{p}_{x,y}^{x}|\mathcal{S}_{x}|}{\widehat{p}_{x,y}^{y}|\mathcal{S}_{y}|}\leq r_{+}.$ 3. (LP3) $\widehat{p}_{x_{0},y_{0}}^{x_{0}}=\widehat{p}_{x_{0},y_{0}}^{y_{0}}=1$. Moreover, for every node $(x,y)\in\mathcal{T}_{L}\setminus\mathcal{L}_{L}$ whose children are defined on the set $(V_{S})_{x,y}\cup\\{v_{x,y}\\}$ and letting $v=v_{x,y}$, $\displaystyle\widehat{p}_{x,y}^{x}=\sum_{b\in[q]}\widehat{p}_{x_{v(a)},y_{v(b)}}^{x_{v(a)}}\quad\text{ for all }a\in[q],$ $\displaystyle\widehat{p}_{x,y}^{y}=\sum_{b\in[q]}\widehat{p}_{x_{v(b)},y_{v(a)}}^{y_{v(a)}}\quad\text{ for all }a\in[q].$ 4. (LP4) For every node $(x,y)\in\mathcal{T}_{L}\setminus\mathcal{L}_{L}$ whose children are defined on the set $(V_{S})_{x,y}\cup\\{v_{x,y}\\}$ and letting $v=v_{x,y}$, for every $a\in[q]$, $\displaystyle\sum_{b\neq a}\widehat{p}_{x_{v(a)},y_{v(b)}}^{x_{v(a)}}$ $\displaystyle\leq 4q\eta\cdot\widehat{p}^{x}_{x,y},$ $\displaystyle\sum_{b\neq a}\widehat{p}_{x_{v(b)},y_{v(a)}}^{y_{v(a)}}$ $\displaystyle\leq 4q\eta\cdot\widehat{p}^{y}_{x,y}.$ ###### Claim 4.6. The above LP is feasible for $r_{-}=r_{+}=|\mathcal{S}_{x_{0}}|/|\mathcal{S}_{y_{0}}|$. ###### Proof. This follows immediately by taking $\widehat{p}^{x}_{x,y}=p^{x}_{x,y},\widehat{p}^{y}_{x,y}=p^{y}_{x,y}$ and using Lemma 4.3. ∎ ###### Claim 4.7. For every $r_{-},r_{+},\eta$ which can be represented in $\operatorname{poly}(n,q)$ bits, the feasibility of the above LP can be checked in time $\operatorname{poly}(n,q^{L})$. ###### Proof. This follows from standard guarantees on the running time of linear programming (cf. [Kha79]) since the number of variables and constraints in the LP are $O(q^{L+O(1)})$ and since $|\mathcal{S}_{x}|/|\mathcal{S}_{y}|$ can be represented using $\operatorname{poly}(n,q)$ bits. ∎ ### 4.3. Analysis of the linear program We now show that the feasibility of the above LP (for sufficiently large $L$ and appropriately chosen $p^{\prime\prime}$) implies that $r_{-}$ (respectively $r_{+}$) is an approximate lower (respectively upper) bound for $|\mathcal{S}_{x_{0}}|/|\mathcal{S}_{y_{0}}|$. Given this, we will be able to use binary search in order to approximate $|\mathcal{S}_{x_{0}}|/|\mathcal{S}_{y_{0}}|$. The key point is that the approximation error decays exponentially in $L$, which will allow us to take $L$ small enough to ensure that this procedure is efficient. ###### Proposition 4.8. Let $p^{\prime\prime}\leq(100q^{2}k\Delta^{4})^{-1}$, $p^{\prime}\leq p^{\prime\prime}/(100\Delta^{3}q)$, and $L\geq 8k\Delta^{2}$. 
Then, the feasibility of the above LP with parameters $r_{-},r_{+}$ implies that $\left(1-4\cdot 2^{-L/(k\Delta^{2})}\right)r_{-}\leq\frac{|\mathcal{S}_{x_{0}}|}{|\mathcal{S}_{y_{0}}|}\leq\left(1+4\cdot 2^{-L/(k\Delta^{2})}\right)r_{+}.$ ###### Proof. By iterating the condition (LP3), we have $\displaystyle\sum_{(x,y)\in\mathcal{L}_{L}:x\to\sigma}\widehat{p}_{x,y}^{x}$ $\displaystyle=1\quad\text{ for all }\sigma\in\mathcal{S}_{x_{0}},$ $\displaystyle\sum_{(x,y)\in\mathcal{L}_{L}:y\to\sigma}\widehat{p}_{x,y}^{y}$ $\displaystyle=1\quad\text{ for all }\sigma\in\mathcal{S}_{y_{0}}.$ Therefore, $\displaystyle|\mathcal{S}_{x_{0}}|$ $\displaystyle=\sum_{\sigma\in\mathcal{S}_{x_{0}}}\sum_{(x,y)\in\mathcal{L}_{L}:x\to\sigma}\widehat{p}_{x,y}^{x},$ $\displaystyle|\mathcal{S}_{y_{0}}|$ $\displaystyle=\sum_{\sigma\in\mathcal{S}_{y_{0}}}\sum_{(x,y)\in\mathcal{L}_{L}:y\to\sigma}\widehat{p}_{x,y}^{y}.$ At the end of this subsection, we will prove the following. ###### Lemma 4.9. For all $p^{\prime\prime}\leq(100q^{2}k\Delta^{4})^{-1}$, $p^{\prime}\leq p^{\prime\prime}/(100\Delta^{3}q)$, and $L\geq 8k\Delta^{2}$, $\displaystyle\frac{1}{|\mathcal{S}_{x_{0}}|}\sum_{\sigma\in\mathcal{S}_{x_{0}}}\sum_{(x,y)\in\mathcal{L}^{b}_{L}:x\to\sigma}\widehat{p}_{x,y}^{x}\leq 2^{-L/(k\Delta^{2})},$ $\displaystyle\frac{1}{|\mathcal{S}_{y_{0}}|}\sum_{\sigma\in\mathcal{S}_{y_{0}}}\sum_{(x,y)\in\mathcal{L}^{b}_{L}:y\to\sigma}\widehat{p}_{x,y}^{y}\leq 2^{-L/(k\Delta^{2})}.$ Given this lemma, we have $\displaystyle|\mathcal{S}_{x_{0}}|$ $\displaystyle=\sum_{\sigma\in\mathcal{S}_{x_{0}}}\sum_{(x,y)\in\mathcal{L}^{g}_{L}:x\to\sigma}\widehat{p}_{x,y}^{x}+\sum_{\sigma\in\mathcal{S}(x_{0})}\sum_{(x,y)\in\mathcal{L}_{L}^{b}:x\to\sigma}\widehat{p}_{x,y}^{x}$ $\displaystyle=\left(\sum_{(x,y)\in\mathcal{L}_{L}^{g}}\widehat{p}^{x}_{x,y}\cdot|\mathcal{S}_{x}|\right)\pm|\mathcal{S}_{x_{0}}|\cdot 2^{-L/(k\Delta^{2})},$ where the first term follows by interchanging sums and the second term follows by Lemma 4.9. A similar estimate also holds for $|\mathcal{S}_{y_{0}}|$. Thus, we have $\displaystyle\frac{|\mathcal{S}_{x_{0}}|\cdot(1\pm 2^{-L/(k\Delta^{2})})}{|\mathcal{S}_{y_{0}}|\cdot(1\pm 2^{-L/(k\Delta^{2})})}$ $\displaystyle=\frac{\sum_{(x,y)\in\mathcal{L}_{L}^{g}}\widehat{p}^{x}_{x,y}\cdot|\mathcal{S}_{x}|}{\sum_{(x,y)\in\mathcal{L}_{L}^{g}}\widehat{p}^{y}_{x,y}\cdot|\mathcal{S}_{y}|}$ $\displaystyle\in\left[\frac{r_{-}\cdot\sum_{(x,y)\in\mathcal{L}_{L}^{g}}\widehat{p}^{y}_{x,y}\cdot|\mathcal{S}_{y}|}{\sum_{(x,y)\in\mathcal{L}_{L}^{g}}\widehat{p}^{y}_{x,y}\cdot|\mathcal{S}_{y}|},\frac{r_{+}\cdot\sum_{(x,y)\in\mathcal{L}_{L}^{g}}\widehat{p}^{y}_{x,y}\cdot|\mathcal{S}_{y}|}{\sum_{(x,y)\in\mathcal{L}_{L}^{g}}\widehat{p}^{y}_{x,y}\cdot|\mathcal{S}_{y}|}\right]$ $\displaystyle\in[r_{-},r_{+}],$ where the second line follows from (LP2). Thus $\displaystyle\frac{|\mathcal{S}_{x_{0}}|}{|\mathcal{S}_{y_{0}}|}\in[(1-4\cdot 2^{-L/(k\Delta^{2})})r_{-},(1+4\cdot 2^{-L/(k\Delta^{2})})r_{+}],$ as desired. ∎ ###### Proof of Lemma 4.9. We will only prove the statement for $|\mathcal{S}_{x_{0}}|$; the proof of the other statement is identical. For a node $(x,y)\in\mathcal{T}$, let $(V_{S})^{\prime}_{x,y}=(V_{S})_{x,y}\setminus\\{v_{1}^{*},\dots,v_{\ell-1}^{*}\\}$. For a node $(x,y)\in\mathcal{T}$ which is not a leaf, we will use $v_{x,y}$ to denote the variable such that the children of $(x,y)$ are defined on the set $(V_{S})_{x,y}\cup\\{v_{x,y}\\}$. When $(x,y)\in\mathcal{T}$ is clear from context, we will denote $v_{x,y}$ simply by $v$. 
Consider the following way of generating random root-to-leaf paths of $\mathcal{T}_{L}$. At a non-leaf node $(x,y)\in\mathcal{T}$, sample a value for $v=v_{x,y}$ according to $\mu_{S}[v=\cdot\mid x]$ to generate an assignment $x^{\prime}$ on $(V_{S})_{x,y}\cup\\{v_{x,y}\\}$. Then, choose a random element $b^{\prime}$ of $[q]$ and go to the node $(x^{\prime},y_{v(b^{\prime})})\in\mathcal{T}$, where the probability of choosing each $b\in[q]$ is $p(x,y,x^{\prime},y_{v(b)})=\frac{\widehat{p}^{x^{\prime}}_{x^{\prime},y_{v(b)}}}{\widehat{p}^{x}_{x,y}}.$ Note that by (LP3), $p(x,y,x^{\prime},y_{v(\cdot)})$ is indeed a probability distribution. Let $(X,Y)$ denote the random leaf of $\mathcal{T}_{L}$ returned by this process and let $\widehat{\mu}$ denote the probability distribution on $\mathcal{L}_{L}$ induced by this process. Let $(x_{f},y_{f})\in\mathcal{L}_{L}$ and denote the corresponding root-to- leaf path by $(x_{0},y_{0}),\dots,(x_{f},y_{f})$. Then, $\widehat{\mu}[(X,Y)=(x_{f},y_{f})]=\prod_{t=1}^{f}\mu_{S}[x_{t}\mid x_{t-1}]\times\prod_{t=1}^{f}p(x_{t-1},y_{t-1},x_{t},y_{t})=\frac{|\mathcal{S}_{x_{f}}|}{|\mathcal{S}_{x_{0}}|}\cdot\frac{\widehat{p}^{x_{f}}_{x_{f},y_{f}}}{\widehat{p}^{x_{0}}_{x_{0},y_{0}}}=\frac{|\mathcal{S}_{x_{f}}|}{|\mathcal{S}_{x_{0}}|}\cdot\widehat{p}^{x_{f}}_{x_{f},y_{f}},$ where the final equality follows by (LP3). Therefore, $\displaystyle\frac{1}{|\mathcal{S}_{x_{0}}|}\sum_{\sigma\in\mathcal{S}_{x_{0}}}\sum_{(x,y)\in\mathcal{L}_{L}^{b}:x\to\sigma}\widehat{p}^{x}_{x,y}$ $\displaystyle=\sum_{\sigma\in\mathcal{S}_{x_{0}}}\sum_{(x,y)\in\mathcal{L}_{L}^{b}:x\to\sigma}\frac{\widehat{\mu}[(X,Y)=(x,y)]}{|\mathcal{S}_{x}|}$ $\displaystyle=\sum_{(x,y)\in\mathcal{L}_{L}^{b}}\widehat{\mu}[(X,Y)=(x,y)]$ $\displaystyle\leq\widehat{\mu}[(X,Y)\in\\{(x,y)\in\mathcal{T}_{L}:|(V_{S})^{\prime}_{x,y}|\geq L\\}].$ In order to bound the quantity on the right, we will first find a more convenient combinatorial characterization of the event. For this, fix $(x,y)\in\mathcal{T}_{L}$ such that $|(V_{S})^{\prime}_{x,y}|\geq L$. Recall the notation $D_{x,y},\mathcal{F}_{x,y}$ from (T2). We also need some further notation. * • For $i\in[m]$, we say that $C_{i}$ is $x$-frozen if $\mathbb{P}[C_{i}\mid x]>p^{\prime\prime}$. We denote the set of $x$-frozen constraints by $\mathcal{F}^{\prime}_{x}$. * • For $i\in[m]$, we say that $C_{i}$ has a disagreement if $\operatorname{vbl}(C_{i})\cap D_{x,y}\neq\emptyset$. Denote all such constraints by $\mathcal{D}_{x,y}$. * • For $i\in[m]$, we say that $C_{i}\in\mathcal{C}$ is bad if $\mathcal{C}_{i}\in\mathcal{F}_{x,y}\cup\mathcal{D}_{x,y}$. We denote the set of bad constraints by $\mathcal{B}_{x,y}$. * • $G(\mathcal{C})=(\mathcal{C},E)$ is the graph whose vertices are constraints in $\mathcal{C}$, and for $i\neq j$, there is an edge between $C_{i}$ and $C_{j}$ if and only if $\operatorname{vbl}(C_{i})\cap\operatorname{vbl}(C_{j})\neq\emptyset$. * • $G^{2}(\mathcal{C})=(\mathcal{C},E^{\prime})$ is the graph whose vertices are constraints in $\mathcal{C}$, and for $i\neq j$, there is an edge between $C_{i}$ and $C_{j}$ if and only if there exists $k$ with $\operatorname{vbl}(C_{i})\cap\operatorname{vbl}(C_{k})\neq\emptyset$, and $\operatorname{vbl}(C_{j})\cap\operatorname{vbl}(C_{k})\neq\emptyset$. ###### Claim 4.10. $|\mathcal{B}_{x,y}|\geq L/(k\Delta)$. ###### Proof. 
First, note that by (O3), (T3), and (T4), for every $v\in(V_{S})^{\prime}_{x,y}$, there exists some $C\in\mathcal{C}$ such that $\operatorname{vbl}(C)\cap(V_{D})_{x,y}\neq\emptyset$ and $v\in\operatorname{vbl}(C)$. Next, note that by (T2), for every $v\in(V_{D})_{x,y}$, there exists some $B\in\mathcal{B}_{x,y}$ such that $v\in\operatorname{vbl}(B)$. Therefore, $\displaystyle L$ $\displaystyle\leq|(V_{S})^{\prime}_{x,y}|$ $\displaystyle\leq k\cdot|\\{C\in\mathcal{C}:\operatorname{vbl}(C)\cap(V_{D})_{x,y}\neq\emptyset\\}|$ $\displaystyle\leq k\cdot|\\{C\in\mathcal{C}:\operatorname{vbl}(C)\cap\operatorname{vbl}(B)\neq\emptyset\text{ for some }B\in\mathcal{B}_{x,y}\\}|$ $\displaystyle\leq k\Delta\cdot|\mathcal{B}_{x,y}|.\qed$ Fix $C^{*}\in\mathcal{C}$ such that $v_{\ell}^{*}\in\operatorname{vbl}(C^{*})$. ###### Claim 4.11. There exists a $\\{2,3\\}$-tree $T\subseteq\mathcal{B}_{x,y}$ in $G(\mathcal{C})$ with $|T|\geq L/(k\Delta^{2})$ and such that $T$ contains $C^{*}$. ###### Proof. By (T4), (O3), and induction, the induced subgraph of $G^{2}(\mathcal{C})$ on $\mathcal{B}_{x,y}$ is connected. Moreover, $C^{*}\in\mathcal{D}_{x,y}\subseteq\mathcal{B}_{x,y}$. Given this, the claim follows from Lemma 2.4 and the previous claim. ∎ Let $\widehat{L}=L/(k\Delta^{2})$ and let $\mathfrak{T}_{\widehat{L}}$ denote the collection of $\\{2,3\\}$-trees in $G(\mathcal{C})$ of size $\widehat{L}$ which contain $C^{*}$. Then, by the previous discussion, $\displaystyle\widehat{\mu}[(X,Y)\in\\{(x,y)\in\mathcal{T}_{L}:|(V_{S})^{\prime}_{x,y}|\geq L\\}]$ $\displaystyle\leq\sum_{T\in\mathfrak{T}_{\widehat{L}}}\widehat{\mu}[T\subseteq\mathcal{B}_{X,Y}]$ $\displaystyle\leq(e\Delta^{3})^{\widehat{L}}\cdot\max_{T\in\mathfrak{T}_{\widehat{L}}}\widehat{\mu}[T\subseteq\mathcal{B}_{X,Y}],$ where the final inequality uses the counting bound of Lemma 2.3. Finally, let us fix $T\in\mathfrak{T}_{\widehat{L}}$ and estimate $\widehat{\mu}[T\subseteq\mathcal{B}_{X,Y}].$ We denote the vertices of $T$ in $G(\mathcal{C})$ by $C^{\prime}_{1}=C^{*},C_{2}^{\prime},\dots,C^{\prime}_{\widehat{L}}$. Note, in particular, that $v_{\ell}^{*}\notin\operatorname{vbl}(C^{\prime}_{j})$ for $2\leq j\leq\widehat{L}$. By multiplying the result by an overall factor of $2^{\widehat{L}}$ (to account for the choice of which elements of $T$ lie in $\mathcal{D}_{X,Y}$), it suffices to bound the probability $\widehat{\mu}[C^{\prime}_{2},\dots,C^{\prime}_{t}\in\mathcal{D}_{X,Y}\wedge C^{\prime}_{t+1},\dots,C^{\prime}_{\widehat{L}}\in\mathcal{F}_{X,Y}\setminus\mathcal{D}_{X,Y}],$ which is at most $\widehat{\mu}[C^{\prime}_{2},\dots,C^{\prime}_{t}\in\mathcal{D}_{X,Y}\wedge C^{\prime}_{t+1},\dots,C^{\prime}_{\widehat{L}}\in\mathcal{F}^{\prime}_{X}].$ Using the law of total probability, this is equal to $\displaystyle\mathbb{E}_{\widehat{\mu}}\left[\widehat{\mu}[C^{\prime}_{2},\dots,C^{\prime}_{t}\in\mathcal{D}_{X,Y}\mid X]\cdot\widehat{\mu}[C^{\prime}_{t+1},\dots,C^{\prime}_{\widehat{L}}\in\mathcal{F}^{\prime}_{X}\mid X]\right].$ (4.1) ###### Claim 4.12. For any possible realization $x$ of $X$, $\widehat{\mu}[C^{\prime}_{2},\dots,C^{\prime}_{t}\in\mathcal{D}_{x,Y}\mid x]\leq(4kq\eta)^{t-1}.$ ###### Proof. Let $j\in\\{2,\dots,t\\}$. Given $x$, we have $C^{\prime}_{j}\in\mathcal{D}_{x,Y}$ only if there is some $v\in\operatorname{vbl}(C^{\prime}_{j})$ for which $Y(v)\neq x(v)$. Since $|\operatorname{vbl}(C^{\prime}_{j})|\leq k$, we see by (LP4) that this happens with probability at most $4kq\eta$. Moreover, since the sets $\operatorname{vbl}(C^{\prime}_{j})$ are disjoint for different values of $j$, these events are independent, which gives the desired conclusion.
∎ Using this claim, we see that the quantity in (4.1) is bounded above by $\displaystyle(4kq\eta)^{t-1}\cdot\widehat{\mu}[C^{\prime}_{t+1},\dots,C^{\prime}_{\widehat{L}}\in\mathcal{F}^{\prime}_{X}].$ (4.2) ###### Claim 4.13. $\widehat{\mu}[C^{\prime}_{t+1},\dots,C^{\prime}_{\widehat{L}}\in\mathcal{F}^{\prime}_{X}]\leq(2p^{\prime}q/p^{\prime\prime})^{\widehat{L}-t}.$ ###### Proof. Let $\mathfrak{X}=\\{x:(x,y)\in\mathcal{L}_{L}\text{ for some }y,\text{ and }C_{t+1}^{\prime},\dots,C^{\prime}_{\widehat{L}}\in\mathcal{F}^{\prime}_{x}\\},$ so that our goal is to bound $\widehat{\mu}[\mathfrak{X}]$. We will use an argument similar to Lemma 3.2. For $h\in\\{0,\dots,L\\}$, let $\mathcal{T}_{h}\subseteq\mathcal{T}_{L}$ denote those nodes $(x,y)\in\mathcal{T}$ for which $|(V_{S})_{x,y}|\leq h+\ell$ and let $\mathcal{L}_{h}$ denote the leaves of $\mathcal{T}_{h}$. To any $(x,y)\in\mathcal{L}_{L}$, we can naturally associate a root-to-leaf path $(x_{0},y_{0}),\dots,(x_{L},y_{L})=(x,y)$, where $(x_{i},y_{i})\in\mathcal{T}_{i}$ and nodes are possibly repeated at the end of the path. Interpreting $\widehat{\mu}$ as a distribution on root-to-leaf paths of this form, let $\mathfrak{S}(h)$ denote the sigma-algebra induced on $\mathcal{T}_{h}$ and let $M_{h}=\prod_{j=t+1}^{\widehat{L}}\mathbb{P}[C_{j}^{\prime}\mid X_{h}]\cdot\exp(-\eta qW(X_{h},Y_{h})),$ where $W(X_{h},Y_{h})=\\#\left\\{j\in[h-1]:v_{X_{j},Y_{j}}\in\operatorname{vbl}(C^{\prime}_{t+1})\cup\dots\cup\operatorname{vbl}(C^{\prime}_{\widehat{L}})\right\\}.$ Then, $M_{h}$ is measurable with respect to $\mathfrak{S}(h)$ and using the argument in Lemma 3.2, it is readily seen that $\mathbb{E}_{\widehat{\mu}}[M_{h+1}\mid\mathfrak{S}(h)]\leq M_{h}.$ Indeed, we only need to check that given $(X_{h},Y_{h})$ and letting $C_{T}$ denote the unique constraint (if any) containing $v=v_{X_{h},Y_{h}}$, we have $\displaystyle\mathbb{E}_{\widehat{\mu}}\bigg{[}\mathbb{P}[C_{T}\mid X_{h+1}]\mid\mathfrak{S}(h)\bigg{]}$ $\displaystyle=\sum_{a\in[q]}\frac{\mathbb{P}[C_{T}\wedge X_{h}\wedge v=a]\cdot{\widehat{\mu}}[v=a\mid X_{h}]}{\mathbb{P}[X_{h}\wedge v=a]}$ $\displaystyle=\sum_{a\in[q]}\frac{\mathbb{P}[C_{T}\wedge X_{h}\wedge v=a]}{\mathbb{P}[X_{h}]}\cdot\frac{\widehat{\mu}[v=a\mid X_{h}]}{\mathbb{P}[v=a]}$ $\displaystyle\leq\sum_{a\in[q]}\frac{\mathbb{P}[C_{T}\wedge X_{h}\wedge v=a]}{\mathbb{P}[X_{h}]}\frac{q^{-1}+\eta}{q^{-1}}$ $\displaystyle\leq\exp(\eta q)\mathbb{P}[C_{T}\mid X_{h}],$ where the third line follows from (2) of Lemma 4.1. Since $\mathbb{P}[C_{j}^{\prime}\mid X_{0}]\leq p^{\prime}q,$ it therefore follows that $\mathbb{E}_{\widehat{\mu}}[M_{L}]\leq(p^{\prime}q)^{\widehat{L}-t}.$ Also, for any $(x,y)\in\mathcal{T}_{L}$, we have that $W(x,y)\leq k(\widehat{L}-t)$. Hence, by Markov’s inequality and the assumption on $p^{\prime\prime}$, $\displaystyle\widehat{\mu}[\mathfrak{X}]$ $\displaystyle\leq\widehat{\mu}[M_{L}\geq(p^{\prime\prime})^{\widehat{L}-t}\exp(-\eta qk(\widehat{L}-t))]$ $\displaystyle\leq\left(\frac{p^{\prime}q\exp(\eta qk)}{p^{\prime\prime}}\right)^{\widehat{L}-t}\leq\left(\frac{2p^{\prime}q}{p^{\prime\prime}}\right)^{\widehat{L}-t}.\qed$ Using the preceding claim along with (4.2) and simplifying using the bounds on $p^{\prime}$ and $p^{\prime\prime}$ completes the proof. ∎ ## 5\. Proof of Theorems 1.5 and 1.6 We are now ready to prove Theorems 1.5 and 1.6. The algorithms in this section exploit a refinement of the analysis in the previous section, which we present in Section 5.1.
Following this, we present the proof of Theorem 1.5 in Section 5.2 and the proof of Theorem 1.6 in Section 5.3. We will freely use the notation introduced in the previous two sections. ### 5.1. Refined analysis of the linear program As in Section 3, fix an ordering $v_{1},\dots,v_{n}$ of the variables. Let $p^{\prime}\leq p^{\prime\prime}$ be parameters to be chosen later. Recall that in Section 3.1, we generate a partial assignment on a subset $v_{1}^{*},\dots,v_{s}^{*}$ of the variables (where $s$ is itself random) and “freeze” the remaining variables. This process depends on the parameter $p^{\prime}$ in (R4). As before, for $i\in[s]$, we let $P_{i}$ denote the partial assignment on $v_{1}^{*},\dots,v_{i}^{*}$ and we let $\nu$ denote the distribution on partial assignments given by the final partial assignment $P_{s}$. Once again, we emphasize that $s$ is random. For our approximate sampling algorithm, we will also need the following variation of this procedure. We consider the same randomized greedy procedure as in Section 3.1, except now, in (R3), we assign $v_{i}^{*}$ a random value chosen according to the distribution $\mu_{S}[v_{i}^{*}=\cdot\mid P_{i-1}]$. For now, the reader should ignore the question of how to efficiently implement such a procedure. We let $\nu_{S}$ denote the distribution on partial assignments given by the final partial assignment $P_{s}$, noting again that $s$ is random. Given $v_{1}^{*},\dots,v_{i}^{*}$ and $a\in[q]$, we denote by $P_{i}(a)$ the partial assignment extending $P_{i-1}$ by setting $v_{i}^{*}=a$. As before, for $h\in[n]$, we let $\iota(h)$ be the largest index $i$ such that $v_{i}^{*}=v_{w}$ for some $w\leq h$ i.e. $\iota(h)$ is the number of variables among $\\{v_{1},\dots,v_{h}\\}$ assigned values by the partial assignment. For each variable $v$, there are at most $\Delta$ constraints $C\in\mathcal{C}$ such that $v\in\operatorname{vbl}(C)$. Let $\widehat{L}=L/(k\Delta^{2})$, and let $\mathfrak{T}_{\widehat{L},v}$ be the set of $\\{2,3\\}$-trees in $G(\mathcal{C})$ of size $\widehat{L}$ containing one of these constraints. For any $T\in\mathfrak{T}_{\widehat{L},v}$, we let $C^{*}_{v}$ denote the unique $C\in T$ satisfying $v\in\operatorname{vbl}(C)$. Recall that $\mathcal{F}_{x,y}$ denotes the constraints $C\in\mathcal{C}$ for which $\mathbb{P}[C\mid x]>p^{\prime\prime}$ or $\mathbb{P}[C\mid y]>p^{\prime\prime}$. For $\ell\in[s]$, we define the idealized decision tree $\mathcal{T}$ starting from the root node $(x_{0},y_{0}):=(P_{\ell}(a),P_{\ell}(b))$ and the $L$-truncated decision tree $\mathcal{T}_{L}$ as in Section 4. Let $\eta=(1-3p^{\prime\prime}q)^{-\Delta}-1$. The following lemma was proved during the course of the proof of Lemma 4.9. ###### Lemma 5.1. For all $p^{\prime}=p^{\prime\prime}\leq(1000q^{2}k\Delta^{4})^{-1}$ and $p\leq p^{\prime\prime}/(1000\Delta^{3}q)$, $\displaystyle\frac{1}{|\mathcal{S}_{x_{0}}|}\sum_{\sigma\in\mathcal{S}_{x_{0}}}\sum_{(x,y)\in\mathcal{L}^{b}_{L}:x\to\sigma}\widehat{p}_{x,y}^{x}\leq\sum_{T\in\mathfrak{T}_{\widehat{L},v^{*}_{\ell}}}\,\,\prod_{C\in T,C\neq C^{*}_{v^{*}_{\ell}}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\ell-1}]}{p^{\prime\prime}}\right),$ $\displaystyle\frac{1}{|\mathcal{S}_{y_{0}}|}\sum_{\sigma\in\mathcal{S}_{y_{0}}}\sum_{(x,y)\in\mathcal{L}^{b}_{L}:y\to\sigma}\widehat{p}_{x,y}^{y}\leq\sum_{T\in\mathfrak{T}_{\widehat{L},v^{*}_{\ell}}}\,\,\prod_{C\in T,C\neq C^{*}_{v^{*}_{\ell}}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\ell-1}]}{p^{\prime\prime}}\right).$ ###### Remark. 
Note that there is no factor of $q$ multiplying $2\mathbb{P}[C\mid P_{\ell-1}]/p^{\prime\prime}$ since $v_{\ell}^{*}\notin\operatorname{vbl}(C)$ for any $C$ that features in the product. For $h\in[n+1]$, let $\mathcal{E}(h)$ denote the event that for all variables $v\in V$, $\displaystyle\sum_{T\in\mathfrak{T}_{\widehat{L},v}}\,\,\prod_{C\in T,C\neq C^{*}_{v}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\iota(h-1)}]}{p^{\prime\prime}}\right)\leq n^{4}2^{-L/(k\Delta^{2})}.$ The next lemma, together with Markov’s inequality, shows that $\mathcal{E}(h)$ occurs with high probability with respect to both $\nu$ and $\nu_{S}$. ###### Lemma 5.2. Let $p^{\prime}=p^{\prime\prime}\leq(100q^{2}k\Delta^{4})^{-1}$, $p\leq p^{\prime\prime}/(100\Delta^{3})$, and $L\geq 8k\Delta^{2}$. Then, for all $h\in[n+1]$ and $v\in V$, $\displaystyle\mathbb{E}_{\nu}\left[\sum_{T\in\mathfrak{T}_{\widehat{L},v}}\,\,\prod_{C\in T,C\neq C^{*}_{v}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\iota(h-1)}]}{p^{\prime\prime}}\right)\right]\leq 2^{-L/(k\Delta^{2})},$ and $\displaystyle\mathbb{E}_{\nu_{S}}\left[\sum_{T\in\mathfrak{T}_{\widehat{L},v}}\,\,\prod_{C\in T,C\neq C^{*}_{v}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\iota(h-1)}]}{p^{\prime\prime}}\right)\right]\leq 2^{-L/(k\Delta^{2})}.$ ###### Proof. We will first prove the statement for $\nu$. Fix $v\in V$ and observe that for any $T\in\mathfrak{T}_{\widehat{L},v}$, the sets $\operatorname{vbl}(C)$ are disjoint for $C\in T$. Let $\mathfrak{S}(t)$ denote the $\sigma$-algebra generated by the output of the randomized greedy procedure on partial assignments of $v_{1},\dots,v_{t}$. Then, by an identical argument to Lemma 3.2, we see that $M_{t}=\prod_{C\in T,C\neq C^{*}_{v}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\iota(t-1)}]}{p^{\prime\prime}}\right)$ satisfies $\mathbb{E}_{\nu}[M_{t+1}\mid\mathfrak{S}(t)]=M_{t}$ Thus, for any $h\in[n+1]$, $\mathbb{E}_{\nu}\left[\prod_{C\in T,C\neq C^{*}_{v}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\iota(h-1)}]}{p^{\prime\prime}}\right)\right]\leq\left(4kq\eta+\frac{2p}{p^{\prime\prime}}\right)^{|T|-1},$ so that by linearity of expectation, $\displaystyle\mathbb{E}_{\nu}\left[\sum_{T\in\mathfrak{T}_{\widehat{L},v}}\,\,\prod_{C\in T,C\neq C^{*}_{v}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\iota(h-1)}]}{p^{\prime\prime}}\right)\right]$ $\displaystyle\leq|\mathfrak{T}_{\widehat{L},v}|\left(4kq\eta+\frac{2p}{p^{\prime\prime}}\right)^{|T|-1}$ $\displaystyle\leq\Delta\cdot(e\Delta^{3})^{\widehat{L}}\cdot\left(4kq\eta+\frac{2p}{p^{\prime\prime}}\right)^{\widehat{L}-1}.$ The desired bound is obtained by using the assumptions on $p$ and $p^{\prime\prime}$. Next, we prove the statement for $\nu_{S}$. Fix $v\in V$. For $T\in\mathfrak{T}_{\widehat{L},v}$, let $W_{t}(T)$ be the number of variables among $v_{1}^{*},\dots,v_{\iota(t)}^{*}$ that are contained in some $C\in T$. Also, as before, let $\mathfrak{S}(t)$ denote the $\sigma$-algebra generated by the output of the randomized greedy procedure on partial assignments of $v_{1},\dots,v_{t}$. 
Then, as in Claim 4.13, we have that $M_{t}=\prod_{C\in T,C\neq C^{*}_{v}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\iota(t-1)}]}{p^{\prime\prime}}\right)\cdot\exp(-\eta qW_{t-1}(T))$ satisfies $\mathbb{E}_{\nu_{S}}[M_{t+1}\mid\mathfrak{S}(t)]\leq M_{t}.$ Thus, for any $h\in[n+1]$, $\mathbb{E}_{\nu_{S}}\left[\prod_{C\in T,C\neq C^{*}_{v}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\iota(h-1)}]}{p^{\prime\prime}}\right)\right]\leq\left(4kq\eta+\frac{2p}{p^{\prime\prime}}\right)^{|T|-1}\exp(\eta qk)^{|T|-1}.$ so that by linearity of expectation, $\displaystyle\mathbb{E}_{\nu_{S}}\left[\sum_{T\in\mathfrak{T}_{\widehat{L},v}}\,\,\prod_{C\in T,C\neq C^{*}_{v}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid P_{\iota(h-1)}]}{p^{\prime\prime}}\right)\right]$ $\displaystyle\leq|\mathfrak{T}_{\widehat{L},v}|\left(8kq\eta+\frac{4p}{p^{\prime\prime}}\right)^{|T|-1}$ $\displaystyle\leq\Delta\cdot(e\Delta^{3})^{\widehat{L}}\cdot\left(8kq\eta+\frac{4p}{p^{\prime\prime}}\right)^{\widehat{L}-1}.$ The desired bound is obtained by using the assumptions on $p$ and $p^{\prime\prime}$. ∎ Combining Lemmas 5.1 and 5.2 and the analysis in Section 4, we obtain the following proposition. ###### Proposition 5.3. Let $p^{\prime}=p^{\prime\prime}\leq(1000q^{2}k\Delta^{4})^{-1}$, $p\leq p^{\prime\prime}/(1000\Delta^{3})$, and $\delta\in(0,1)$. Then, for $L\geq 8k\Delta^{2}\log(n/\delta)$, the event $\wedge_{h\in[n+1]}\mathcal{E}(h)$ has probability at least $1-(\delta/n^{2})$ with respect to both $\nu$ and $\nu_{S}$. Moreover, on the event $\wedge_{h\in[n+1]}\mathcal{E}(h)$, the feasibility of the LP with parameters $r_{-}\leq r_{+}$ implies that $\left(1-4\cdot n^{4}2^{-L/(k\Delta^{2})}\right)r_{-}\leq\frac{|\mathcal{S}_{x_{0}}|}{|\mathcal{S}_{y_{0}}|}\leq\left(1+4\cdot n^{4}2^{-L/(k\Delta^{2})}\right)r_{+}.$ ### 5.2. Approximate counting: proof of Theorem 1.5 We have the following analogue of Proposition 3.5. ###### Lemma 5.4. Let $p^{\prime}=p^{\prime\prime}\leq(1000q^{2}k\Delta^{4})^{-1}$, $p\leq p^{\prime\prime}/(1000\Delta^{3})$, and $L=80k\Delta^{2}\log(\Delta n)$. There exists a deterministic algorithm running in time $O(n^{\operatorname{poly}(\log q,\Delta,k)})$ which generates a sequence of partial assignments $P_{1},\dots,P_{s}$ with the following properties. 1. (1) For all $i\in[s]$, $P_{i}$ assigns values to $i$ variables, and $P_{i}$ extends $P_{i-1}$. 2. (2) $A_{s}=\emptyset$. 3. (3) For all $i\in[s]$ and $j\in[m]$, $\mathbb{P}[C_{j}\mid P_{i}]\leq p^{\prime}q$. 4. (4) Every connected component in $G(\mathcal{F})$ has size at most $L/(k\Delta)$. 5. (5) The event $\wedge_{h\in[n+1]}\mathcal{E}(h)$ is satisfied. ###### Proof. The proof is a modification of the proof of Proposition 3.5. Let $L^{\prime}=L/(k\Delta^{2})$. Recall that for each variable $v$, $\mathfrak{T}_{L^{\prime},v}$ is the set of $\\{2,3\\}$-trees in $G(\mathcal{C})$ of size $L^{\prime}$ containing $C^{*}_{v}$. Let $\mathfrak{T}$ denote the collection of all {2,3}-trees of size $L^{\prime}$ in $G(\mathcal{C})$. Note that $|\mathfrak{T}_{L^{\prime},v}|\leq|\mathfrak{T}|\leq\operatorname{poly}(n^{\log^{2}\Delta})$ and the collections $\mathfrak{T}_{L^{\prime},v}$, $\mathfrak{T}$ can be constructed in time $\operatorname{poly}(n^{\log^{2}\Delta})$. 
For a partial assignment $X$, define $H(X)=\sum_{v\in V}\sum_{T\in\mathfrak{T}_{L^{\prime},v}}\,\,\prod_{C\in T,C\neq C^{*}_{v}}\left(4kq\eta+\frac{2\mathbb{P}[C\mid X]}{p^{\prime\prime}}\right).$ Note that $H(X)\geq\sum_{T\in\mathfrak{T}}\prod_{C\in T}\left(\frac{\mathbb{P}[C\mid X]}{p^{\prime\prime}}\right).$ As in the proof of Proposition 3.5, if we can find a sequence of partial assignments $P_{1},\dots,P_{s}$ satisfying properties (1), (2), (3) such that $H(P_{1}),\dots,H(P_{s})<n^{4}2^{-L/(k\Delta^{2})}<1$, then (4) and (5) are also satisfied. For this, we follow the same greedy procedure as in Section 3.1, except now, after having chosen $P_{i-1}$ and $v_{i}^{*}$, we choose the value of $v_{i}^{*}$ in (R3) to be $\operatorname*{arg\,min}_{a\in[q]}H(P_{i-1}\wedge{v_{i}^{*}=a}).$ Similar to Proposition 3.5, this choice ensures that $H(P_{i})\leq H(P_{i-1})$ for all $i\in[s]$. Finally, since $H(\emptyset)\leq n\cdot\Delta\cdot(e\Delta^{3})^{L^{\prime}}\cdot\left(4kq\eta+\frac{2p}{p^{\prime\prime}}\right)^{L^{\prime}}<n^{4}2^{-L/(k\Delta^{2})},$ we are done. ∎ Finally, given partial assignments $P_{1},\dots,P_{s}$ satisfying the properties of Lemma 5.4, we can use Lemma 3.6, Proposition 5.3 and the analysis of Section 4 to complete the proof of Theorem 1.5. ### 5.3. Approximate sampling: proof of Theorem 1.6 We consider the following sampling procedure. Fix a parameter $\varepsilon\in(0,1)$. Let $L=80k\Delta^{2}\log(\Delta n/\varepsilon)$. 1. (S1) Fix an arbitrary ordering $v_{1},\dots,v_{n}$ of the variables. Initialize the set of frozen variables $F_{0}=\emptyset$, the set of available variables $A_{0}=V$, $\iota(0)=0$, and $P_{0}=\emptyset$. 2. (S2) Let $1\leq i\leq n$. Given $P_{\iota(i-1)},F_{i-1},A_{i-1}$, if $\mathcal{E}(i)$ does not hold, then output an arbitrary satisfying assignment (which can be found using the algorithmic LLL in [MT10]) and terminate. Otherwise, $\mathcal{E}(i)$ holds. 3. (S3) If $v_{i}\notin A_{i-1}$, then $\iota(i)=\iota(i-1),F_{i}=F_{i-1},A_{i}=A_{i-1}$. Increment $i$. If $i\leq n$, return to (S2). Otherwise, proceed to (S6). 4. (S4) If $v_{i}\in A_{i-1}$, then approximate the marginal $\mu_{S}[v_{i}=\cdot\mid P_{\iota(i-1)}]$ within total variation distance $\varepsilon/(8n)$ using the LP. Then, assign $v_{i}$ a random value in $[q]$ distributed according to the output of the LP. Let $\iota(i)=\iota(i-1)+1$, and $P_{\iota(i)}$ be the extension of $P_{\iota(i-1)}$ resulting from assigning a value to $v_{i}$. 5. (S5) Let $\mathcal{F}_{i}=\\{j\in[m]:\mathbb{P}[C_{j}\mid P_{\iota(i)}]>p^{\prime\prime}\\}.$ Set $F_{i}=F_{i-1}\cup\bigcup_{j\in\mathcal{F}_{i}}((\operatorname{vbl}(C_{j})\cap A_{i-1})\setminus\\{v_{i}\\})\text{ and }A_{i}=A_{i-1}\setminus(F_{i}\cup\\{v_{i}\\}).$ Increment $i$. If $i\leq n$, return to (S2). Otherwise, proceed to (S6). 6. (S6) Let $\mathcal{F}=\cup_{i\in[n]}\mathcal{F}_{i}$. Consider $G^{2}(\mathcal{F})$, which is the induced subgraph of $G^{2}(\mathcal{C})$ by the vertices $\mathcal{F}$. If any connected component of $G^{2}(\mathcal{F})$ has size larger than $80\Delta\log(\Delta n)$, then output an arbitrary satisfying assignment and terminate. 7. (S7) Else, use exhaustive enumeration to uniformly sample a satisfying assignment of the unassigned variables appearing in each separate connected component of $G^{2}(\mathcal{F})$, and return the complete satisfying assignment thus obtained. It is immediate that the running time of the algorithm is as claimed in Theorem 1.6.
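For orientation, the control flow of (S1)–(S7) can be summarized by the following Python-style skeleton; the `helpers` object and every method on it are hypothetical stand-ins for the subroutines described and analyzed above, so this is a sketch of the structure rather than an implementation.

```python
def approximate_sampler(variables, q, eps, helpers):
    """Skeleton of steps (S1)-(S7); `helpers` bundles the subroutines
    analyzed above (every method on it is a hypothetical stand-in)."""
    frozen, available = set(), set(variables)            # (S1)
    P = {}                                               # current partial assignment
    n = len(variables)
    for i, v in enumerate(variables, start=1):
        if not helpers.event_E_holds(i, P):              # (S2): fall back to the
            return helpers.arbitrary_satisfying_assignment()  # algorithmic LLL
        if v not in available:                           # (S3): v was frozen
            continue
        # (S4): approximate the marginal via the LP and sample from it.
        marginal = helpers.lp_marginal(v, P, tv_error=eps / (8 * n))
        P[v] = helpers.sample(marginal)
        # (S5): freeze the available neighbours of newly dangerous constraints.
        for C in helpers.dangerous_constraints(P):
            frozen |= (helpers.vbl(C) & available) - {v}
        available -= frozen | {v}
    components = helpers.components_of_G2_F()            # (S6)
    if any(len(comp) > helpers.component_size_bound() for comp in components):
        return helpers.arbitrary_satisfying_assignment()
    for comp in components:                              # (S7): exhaustive sampling
        P.update(helpers.uniform_satisfying_extension(comp, P))
    return P
```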
Let $\mu_{\operatorname{Alg}}$ denote the distribution on satisfying assignments generated by the algorithm. We show that $\operatorname{TV}(\mu_{\operatorname{Alg}},\mu_{S})\leq\varepsilon$. For this, we begin by observing that if we could sample from the true marginal distribution $\mu_{S}[v_{i}=\cdot\mid P_{\iota(i-1)}]$ in (S4) and if we could output a uniform satisfying assignment extending the current partial assignment for the early termination in (S2) and (S6), then the resulting distribution on satisfying assignments output by the algorithm clearly coincides with $\mu_{S}$. Next, since the approximate marginals in (S4) are within $\varepsilon/(8n)$ of the true marginals, it follows from Proposition 5.3 and Lemma 3.4 that the early termination condition in (S2) and (S6) occurs with probability at most $\varepsilon/4$. Therefore, $\operatorname{TV}(\mu_{\operatorname{Alg}},\mu_{\operatorname{Alg}^{\prime}})\leq\varepsilon/4$, where $\operatorname{Alg}^{\prime}$ denotes the sampling algorithm which is the same as above, except upon early termination in (S2) and (S6), we output a uniformly random satisfying assignment extending the current partial assignment. Finally, since the approximate marginals in (S4) are within $\varepsilon/(8n)$ of the true marginals, it follows that $\operatorname{TV}(\mu_{\operatorname{Alg}^{\prime}},\mu_{S})\leq\varepsilon/8$, so that by the triangle inequality, $\operatorname{TV}(\mu_{\operatorname{Alg}},\mu_{S})\leq\varepsilon$, as desired. ## References * [AIS19] Dimitris Achlioptas, Fotis Iliopoulos, and Alistair Sinclair. Beyond the Lovász local lemma: Point to set correlations and their algorithmic applications. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 725–744. IEEE, 2019. * [Alo91] Noga Alon. A parallel algorithmic version of the local lemma. Random Structures & Algorithms, 2(4):367–378, 1991. * [AS04] Noga Alon and Joel H Spencer. The probabilistic method. John Wiley & Sons, 2004. * [Bec91] József Beck. An algorithmic approach to the Lovász local lemma. I. Random Structures & Algorithms, 2(4):343–365, 1991. * [BGG+19] Ivona Bezáková, Andreas Galanis, Leslie Ann Goldberg, Heng Guo, and Daniel Stefankovic. Approximation via correlation decay when strong spatial mixing fails. SIAM Journal on Computing, 48(2):279–349, 2019. * [EL73] Paul Erdős and László Lovász. Problems and results on 3-chromatic hypergraphs and some related questions. In Colloquia Mathematica Societatis Janos Bolyai 10. Infinite and Finite Sets, Keszthely (Hungary). Citeseer, 1973. * [FGYZ20] Weiming Feng, Heng Guo, Yitong Yin, and Chihao Zhang. Fast sampling and counting k-SAT solutions in the local lemma regime. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 854–867, 2020. * [FHY20] Weiming Feng, Kun He, and Yitong Yin. Sampling constraint satisfaction solutions in the local lemma regime. arXiv preprint arXiv:2011.03915, 2020. * [GJL19] Heng Guo, Mark Jerrum, and Jingcheng Liu. Uniform sampling through the Lovász local lemma. Journal of the ACM (JACM), 66(3):1–31, 2019. * [GLLZ19] Heng Guo, Chao Liao, Pinyan Lu, and Chihao Zhang. Counting hypergraph colorings in the local lemma regime. SIAM Journal on Computing, 48(4):1397–1424, 2019. * [HSS11] Bernhard Haeupler, Barna Saha, and Aravind Srinivasan. New constructive aspects of the Lovász local lemma. Journal of the ACM (JACM), 58(6):1–28, 2011. * [HSZ19] Jonathan Hermon, Allan Sly, and Yumeng Zhang. Rapid mixing of hypergraph independent sets. 
Random Structures & Algorithms, 54(4):730–767, 2019. * [Kha79] Leonid Genrikhovich Khachiyan. A polynomial algorithm in linear programming. In Doklady Akademii Nauk, volume 244, pages 1093–1096. Russian Academy of Sciences, 1979. * [Moi19] Ankur Moitra. Approximate counting, the Lovász local lemma, and inference in graphical models. Journal of the ACM (JACM), 66(2):1–25, 2019. * [Mos09] Robin A Moser. A constructive proof of the Lovász local lemma. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 343–350, 2009. * [MR98] Michael Molloy and Bruce Reed. Further algorithmic aspects of the local lemma. In Proceedings of the thirtieth annual ACM symposium on Theory of computing, pages 524–529, 1998. * [MT10] Robin A Moser and Gábor Tardos. A constructive proof of the general Lovász local lemma. Journal of the ACM (JACM), 57(2):1–15, 2010. * [Sri08] Aravind Srinivasan. Improved algorithmic versions of the Lovász local lemma. In Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms, pages 611–620. Citeseer, 2008.
# Finite temperature Instabilities of 2D Dipolar Bose Gas at Arbitrary Tilt Angle Pengtao Shen Khandker F. Quader Department of Physics, Kent State University, Kent, OH 44242, USA ###### Abstract Advances in creating stable dipolar Bose systems, and ingenious box traps, have generated tremendous interest. Theoretical study of dipolar bosons at finite temperature (T) has been limited. Motivated by these developments, we study 2D dipolar bosons at arbitrary tilt angle, $\theta$, using the finite-T random phase approximation. We show that a comprehensive understanding of phases and instabilities at non-zero T can be obtained by concurrently considering dipole strength, density, temperature and $\theta$. We find the system to be in a homogeneous non-condensed phase that undergoes a collapse transition at large $\theta$, and a finite-momentum instability, signaling a striped phase, at large dipolar strength; there are important differences with the T=0 case. At T=0, BEC appears at a critical dipolar strength and at a critical density. Our predictions for the polar molecule system ${}^{41}K^{87}Rb$ and for ${}^{166}Er$ may provide tests of our results. Our approach may apply broadly to systems with long-range, anisotropic interactions. ###### pacs: 67.85.-d, 67.85.Jk, 67.85.De, 05.30.Jp, 05.30.Rt The nature of excitations, phases and instabilities of interacting Bose systems has been a subject of longstanding interest. The extraordinary development of the field of ultracold atoms, tremendously advanced by novel experimental techniques, has, over the past several years, led to intense research. In recent years, there has been considerable interest in systems with long-range and anisotropic interactions; examples are bosonic and fermionic atoms, and polar molecules, experiencing dipolar interactions. Recent experimental advances in creating stable dipolar bosonic systems, including polar molecules with large electric dipole moments, have led to vigorous research activity. Dipolar Bose-Einstein condensates (BEC) have been realized in chromium Koch et al. (2008) (${}^{52}Cr$), and in lanthanide atoms (such as dysprosium and erbium Aikawa et al. (2012)), which have larger magnetic moments. The recent report Chomaz et al. (2018) of the first observation of a roton mode in dipolar ${}^{166}Er$ (magnetic moment 7 $\mu_{B}$) in a cigar-shaped trap geometry constitutes a significant development. Realization of high phase-space-density systems of polar molecules, such as ${}^{87}Rb^{133}Cs$ Molony et al. (2014); Takekoshi et al. (2014) and ${}^{41}K^{87}Rb$ Aikawa et al. (2009, 2010), holds promise for the realization of quantum degeneracy and dipolar BEC. In general, the electric dipole moments of polar molecules are substantially larger than the magnetic dipole moments of atoms; e.g., the RbCs system has a sizable electric dipole moment of $\sim$ 1.28 Debye. Ingenious box traps constitute another profoundly significant development. There have been box-trap experiments on bosons subjected to contact interactions Gaunt et al. (2013); Gotlibovych et al. (2014); Lopes et al. (2017); those on dipolar systems are ongoing. The long-range and anisotropic nature, and a region of attraction, of the dipolar interaction can give rise to novel quantum phases, even in dilute systems. A sizeable body of theoretical work, based on Monte Carlo and Bogoliubov-de Gennes (BdG) methods, exists at zero temperature (T=0).
The existence of a roton mode and a density-wave phase has been found in BdG calculations studying the properties of the BEC ground state Fischer (2006); Santos et al. (2003); Fedorov et al. (2014); Lu et al. (2015); Shen and Quader (2018). Solid and stripe-like crystal phases have been predicted in Monte Carlo simulations Macia et al. (2014); Bombin et al. (2017). Such a stripe phase of the dipolar Bose system provides a promising candidate Boninsegni and Prokof’ev (2012); Wenzel et al. (2017); Tanzi et al. (2019); Böttcher et al. (2019); Chomaz et al. (2019) for an intrinsic supersolid without the presence of defects, as described by the Andreev-Lifshitz mechanism Cinti et al. (2014). However, theoretical study of the 2D dipolar Bose gas at finite temperature has been limited. A purely dipolar 3D system is usually unstable towards collapse due to the attractive component of the interaction. A trap helps to stabilize the system; this depends strongly on the trapping geometry Koch et al. (2008); Eberlein et al. (2005). In 2D, the stability issue of dipolar bosons may be richer Sun et al. (2010); Yamaguchi et al. (2010). There have been studies Mueller and Baym (2000); van Zyl et al. (2002) of the density response in 2D and 3D Bose gases with an attractive constant interaction, using the random phase approximation (RPA). A similar study of dipolar bosons in a cylindrical trap at finite temperature shows that a pancake-geometry trap stabilizes the system Bisset et al. (2011). That brings up the question of whether purely 2D dipolar bosons are stable at finite temperature. In this paper, we present results of our study of a 2D dipolar Bose system at non-zero temperatures, using finite-temperature RPA Mueller and Baym (2000); Bisset et al. (2011). A key point of the paper is that a broad perspective on the phases and instabilities of the system at finite temperature can be attained by considering several tunable knobs: density, temperature, interaction strength and the orientation of the dipole moments relative to an external field, i.e., the tilt angle. We construct several informative phase diagrams based on our study: dipolar length (strength) versus dipole tilt angle at a given temperature; RPA critical temperature and critical density versus dipolar length for a given tilt angle; and critical temperature versus critical density for a given tilt angle. In particular, at finite temperature, we find the system to be in a homogeneous non-condensed phase that undergoes a collapse transition at large tilt angles, and a finite-momentum instability, signaling a striped phase, at sufficiently large values of the dipolar coupling strength; there are substantive differences with the T = 0 case. The linear $q$ dependence of the 2D dipolar interaction is manifested in a new density-wave instability in a broad regime, similar to 2D dipolar fermions Sun et al. (2010); Yamaguchi et al. (2010). As $T\rightarrow 0$, for sufficiently small tilt angles, BEC appears at a critical dipolar strength and at a critical density, whose values depend on the other system parameters. While our results should apply generally to 2D dipolar bosons at arbitrary temperature, the specific predictions based on parameters of the polar molecule system ${}^{87}Rb^{133}Cs$ can serve as a test. We discuss the effects of a harmonic trap and of an additional contact interaction; however, the key physics is captured in our consideration of a homogeneous 2D Bose system. Also, the aforementioned box traps make the study of homogeneous systems directly relevant and testable.
We consider a gas of dipolar bosons of mass $m$ and electric or magnetic dipole moment $d$. The dipoles are confined in the x-y plane and the dipole moments are aligned by an external electric field E or magnetic field B, subtending an angle $\theta$ with respect to the z axis, as shown in Fig. 1. Figure 1: 2D dipoles in the x-y plane with tilt angle $\theta$, which defines the direction of the electric/magnetic field E relative to the z direction. $\phi$ is the angle in the x-y plane, relative to the x direction. To explore the stability of the system, we consider the static density-density correlation function $\chi(q)\equiv\chi(q,\omega=0)$, which determines the stability of the system against density fluctuations. When $\chi(q)$ becomes positive at finite wave vector q, the system has a density-wave instability and may undergo a transition to the striped phase Sun et al. (2010); Yamaguchi et al. (2010); if it becomes positive as $q\to 0$, the system develops a negative compressibility and collapses. In standard RPA, the density-density correlation function can be diagrammatically expanded in terms of the bare response function $\chi_{0}$ (see Supplemental Materials, Fig. S1 sup). For the stability condition against density fluctuations of finite momentum, the direct scattering of particle-hole excitations dominates over exchange scattering because of the linear momentum dependence of $V_{2D}(q)$ Bisset et al. (2011); Yamaguchi et al. (2010), so we neglect the exchange scattering of particle-hole excitations. Also, since in 2D there is no BEC at finite temperature, there is no contribution from the condensate in the RPA response. Then, $\displaystyle\chi(q,\omega)=\frac{\chi_{0}(q,\omega)}{1-V(q)\chi_{0}(q,\omega)}$ (1) with $\displaystyle\chi_{0}(q,\omega)=\int\frac{d{\bf k}}{(2\pi)^{d}}\,\frac{f(k-q/2)-f(k+q/2)}{\hbar\omega-(\varepsilon_{k+q/2}-\varepsilon_{k-q/2})}$ (2) where $\varepsilon_{k}=\hbar^{2}k^{2}/{2m}$ is the free-particle kinetic energy and $f(q)$ is the Bose distribution function, with chemical potential $\mu$. For non-interacting bosons, $\chi=\chi_{0}$, which is always negative, so the system is stable. For an interaction with an attractive channel, the system can become unstable, depending on the interaction. To proceed, we first need to evaluate the finite-temperature bare response function, Eq. (2). The asymptotic behavior is given by (for details, see Supplemental Materials, Bare Bubble Calculation and Fig. S2 sup): In the $q\lambda_{T}\ll 1$ region, $\displaystyle\chi_{0}(q,T)=-\frac{m}{2\pi\hbar^{2}}\frac{1}{e^{-\beta\mu}-1}\left(1+O\left((q\lambda_{T})^{2}\right)\right)$ (3) where $\lambda_{T}=\sqrt{\frac{2\pi\hbar^{2}\beta}{m}}$ is the thermal de Broglie wavelength and $\beta=1/k_{B}T$. In the $q\lambda_{T}\gg 1$ region, the behavior is temperature-independent, $\displaystyle\chi_{0}(q)=-\frac{4nm}{\hbar^{2}q^{2}}$ (4) We calculate the RPA response, taking the bare response function to be that of a noninteracting system, and using the non-interacting gas to calculate the chemical potential $\mu(T,n)$. For an ideal two-dimensional Bose gas, the 2D density is $\displaystyle n=\int\frac{d{\bf k}}{(2\pi)^{2}}\frac{1}{e^{\beta(\varepsilon_{k}-\mu)}-1}=\lambda^{-2}_{T}g_{1}(e^{\beta\mu})$ (5) where $g_{v}(z)=\sum_{j}z^{j}/j^{v}$ is the polylogarithm function. The chemical potential $\mu$ is $\displaystyle\mu=\frac{1}{\beta}\ln[1-\exp(-n\lambda_{T}^{2})]$ (6) As the temperature decreases, the magnitude of $\chi_{0}(q)$ increases. In the classical limit, $\beta\mu\ll-1$, Eq. (3) becomes $\chi_{0}(0)=-n/T$, independent of Bose statistics.
In the quantum limit, $\beta\mu\gg-1$, Eq. (3) becomes $\chi_{0}(0)=-\frac{m}{2\pi\hbar^{2}}e^{n\lambda^{2}_{T}}$. Since there is no upper limit for $g_{1}(z)$ at zero chemical potential, there is no BEC in 2D at finite temperature. In the limit of zero temperature, $\lim\limits_{T\to 0}\mu=0$ and $\lim\limits_{T\to 0}n(k)=n\,\delta(k)$. The limiting behavior at zero temperature is the BEC state; Eq. (3) becomes $\chi_{0}(0,0)=-\infty$ and Eq. (4) becomes the response function of ideal bosons for all momenta at T=0. It is convenient to look at the inverse of the static density-density correlation function $\chi(\textbf{q})$, given by: $\displaystyle\frac{1}{\chi(\textbf{q})}=\frac{1}{\chi_{0}(\textbf{q})}-V(\textbf{q})$ (7) where $V(\textbf{q})$ is the dipole-dipole interaction (DDI), given by $V(\textbf{q})=V_{s}+V_{l}(\textbf{q})$, with $\displaystyle\begin{aligned} &V_{s}=2\pi d^{2}\frac{P_{2}(\cos\theta)}{r_{c}}\\ &V_{l}(\textbf{q})=-2\pi d^{2}q\,({\cos^{2}}\theta-{\sin^{2}}\theta\,{\cos^{2}}\phi)\end{aligned}$ (8) where $r_{c}$ is a short-range cutoff; in quasi-2D geometry it depends on the trapping size in the z direction Fischer (2006). The first term, $V_{s}$, is momentum independent and acts like a short-range interaction. The second term depends linearly on the magnitude of the momentum. In the y direction ($\phi=\pi/2$), the interaction is the most attractive; therefore, the instability occurs at momenta along the y direction. Three distinct cases may be noted: 1. $\frac{1}{\chi_{0}(q)}-V(q)<0$ everywhere; the system is in a uniform, stable normal phase. 2. $\frac{1}{\chi_{0}(q)}-V(q)>0$ at finite q; the system undergoes a density-wave instability and has a striped phase. 3. $\frac{1}{\chi_{0}(q)}-V(q)>0$ at q=0; the system has negative compressibility and will collapse. Fig. 2 shows schematically the behavior of $\frac{1}{\chi_{0}(q)}$ and $V(q)$ for the three cases. $\frac{1}{\chi_{0}(q)}$ is approximately quadratic in q with intercept $\frac{1}{\chi_{0}(0)}\leq 0$; $V(q)$ is linear in q with negative slope and a positive/negative intercept depending on the tilt angle. Note that an instability for the dipolar system at finite q is possible in 2D, but not in 3D Bisset et al. (2011), because $V(q)$ is independent of the magnitude of the momentum in 3D Sun et al. (2010). Figure 2: (color online) Schematic illustration of the three cases of the response function discussed in the text; the curved line is $1/\chi_{0}(q)$, the straight line is $V(q)$. A minimal numerical sketch of this three-way classification is given below.
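Assuming the quadratic interpolation of $1/\chi_{0}(q)$ between the limits of Eqs. (3) and (4), as described above, the classification can be made explicit numerically. The following is a minimal sketch in reduced units $\hbar=m=k_{B}=1$; the interpolation itself and all parameter values below are illustrative assumptions, not part of the original calculation, which uses the full $\chi_{0}(q,T)$.

```python
import numpy as np

def classify_2d_dipolar(n, T, d, theta, r_c):
    """Classify stable / striped / collapse for density n, temperature T,
    dipole moment d, tilt angle theta (rad) and cutoff r_c.
    Reduced units hbar = m = k_B = 1; the quadratic interpolation of
    1/chi_0(q) between Eqs. (3) and (4) is an assumption of this sketch."""
    lamT2 = 2.0 * np.pi / T                  # lambda_T^2 = 2*pi*hbar^2*beta/m
    # Combining Eqs. (3) and (6): chi_0(0,T) = -(m/2pi hbar^2)(e^{n lam^2} - 1)
    chi0_0 = -np.expm1(n * lamT2) / (2.0 * np.pi)
    c0 = 1.0 / chi0_0                        # intercept of 1/chi_0(q), <= 0
    alpha = 1.0 / (4.0 * n)                  # from Eq. (4): 1/chi_0 -> -q^2/(4n)
    P2 = 0.5 * (3.0 * np.cos(theta) ** 2 - 1.0)
    Vs = 2.0 * np.pi * d**2 * P2 / r_c       # short-range part of Eq. (8)
    gamma = 2.0 * np.pi * d**2 * np.cos(theta) ** 2  # -dV/dq along q_y (phi=pi/2)
    if c0 - Vs > 0.0:
        return "collapse"                    # case 3: instability already at q = 0
    q_star = gamma / (2.0 * alpha)           # maximiser of f(q) = c0 - alpha q^2 - Vs + gamma q
    if q_star > 0.0 and c0 - Vs + gamma**2 / (4.0 * alpha) > 0.0:
        return "striped"                     # case 2: instability at finite q
    return "stable"                          # case 1

# Illustrative scan over tilt angles (all numbers are placeholders):
for theta in (0.2, 0.8, 1.4):
    print(theta, classify_2d_dipolar(n=1.0, T=0.5, d=1.0, theta=theta, r_c=0.5))
```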
The behavior of the response function for the 2D dipolar Bose gas at non-zero T can be broadly categorized into two regimes with respect to the tilt angle $\theta$: A) $\theta<\cos^{-1}{\frac{1}{\sqrt{3}}}$: the short-range interaction $V(q=0)$ is positive and $\frac{1}{\chi_{0}(0)}\leq 0$, so $\frac{1}{\chi_{0}(0)}-V(0)\leq 0$ is always satisfied, and it is impossible for the system to collapse. However, the instability condition can be satisfied at finite $q_{y}$ for a sufficiently strong dipole interaction, describable by the dipolar length $a_{dd}=md^{2}/\hbar^{2}$. The system is then unstable against a density fluctuation with wave vector $q_{y}$, which indicates a transition from the normal phase to a striped phase. B) $\theta>\cos^{-1}{\frac{1}{\sqrt{3}}}$: in this region, the short-range interaction becomes negative. At zero temperature, the bare response function diverges at q=0 and $\frac{1}{\chi_{0}(0)}=0$, so the system cannot support any attractive short-range interaction. Thus it always collapses at T=0, as in the BdG approach. However, at finite temperature, the bare response function has a non-zero negative value at q=0. As a result, it can support an attractive short-range interaction that is sufficiently small. The system first undergoes a transition from the normal phase to a striped phase, and then collapses as $a_{dd}$ increases further. At large tilt angles close to $\pi/2$, the long-range interaction becomes zero, and the total interaction is dominated by the negative short-range part of the dipolar interaction. The system then goes from the normal to the collapsed phase without passing through an intermediate striped phase. For fixed density and temperature, the dipole interaction strength $a_{dd}$ can be changed via the strength of the external field, and the tilt angle $\theta$ by varying the direction of the external field. In Fig. 3, we show the calculated phase diagram of the critical dipole interaction strength $a_{dd}$ versus the tilt angle $\theta$ for a system of polar molecules such as ${}^{87}Rb^{133}Cs$ at T=0 and T=20 nK. We choose the density to be $n=10^{12}m^{-2}$ and take the cutoff to be $r_{c}=10^{4}a_{0}$; the mass of ${}^{87}Rb^{133}Cs$ is $m=220u$ (unified atomic mass units). At T=0, the system goes from a stable BEC to a density-wave instability as the dipole strength $a_{dd}$ increases when $\theta<0.955$; the system collapses for any dipole strength when $\theta>0.955$. On the other hand, at T=20 nK, the density-wave instability appears as $a_{dd}$ increases, even for tilt angles $\theta>0.955$. The system eventually collapses as $\theta$ is increased further. That the collapse occurs at a larger value of $\theta_{c}$ compared to the T=0 case may be understood by noting the interplay between the DDI and the bare T-dependent response, $\frac{1}{\chi_{0}(T)}$, in the RPA response function. Figure 3: (color online) Dipole interaction strength $a_{dd}$ versus tilt angle $\theta$ for a system of the polar molecule ${}^{87}Rb^{133}Cs$ at T=0 and T=20 nK. $a_{dd}$ is in units of the Bohr radius $a_{0}$. Red and blue lines are lines of density-wave and collapse instabilities, respectively. Study of the critical temperature $T_{c}$ and critical density $n_{c}$, albeit within RPA, provides another perspective on the phases and the density-wave instability of the system. We first calculate $T_{c}$ and $n_{c}$, each as a function of $a_{dd}$, for fixed tilt angle $\theta$. Fig. 4 shows our results for the physical system ${}^{87}Rb^{133}Cs$ for $\theta=$ 0 and 0.4. $T_{c}$ and $n_{c}$ are calculated using the condition $\frac{1}{\chi(q,T)}=0$. For the calculation of $T_{c}$, we choose the system density to be $10^{12}m^{-2}$, and for $n_{c}$, the system temperature to be 10 nK. In the context of the density-wave instability, a key point here is that the critical temperature and critical density behave oppositely with increasing dipole strength, i.e., $T_{c}$ increases and $n_{c}$ decreases when the dipole strength is increased (see Fig. 4). The terminating point ($T=0$) of the $T_{c}$ curves signifies the onset $a_{dd}$ for BEC; this depends on the tilt angle. Figure 4: (color online) Critical temperature and critical density for the density-wave instability of ${}^{87}Rb^{133}Cs$ at tilt angles $\theta=$ 0 and 0.4. Left: $T_{c}$ vs $a_{dd}$, for $n=10^{12}m^{-2}$. Right: $n_{c}$ vs $a_{dd}$ at T=10 nK. Next, we construct a temperature-density (T-n) phase diagram for fixed dipole strengths $a_{dd}$ and tilt angles $\theta$.
For $\theta<\cos^{-1}{\frac{1}{\sqrt{3}}}$, as the temperature decreases and the density increases, the system goes through a transition from the stable Bose fluid phase to a density-wave (DW) phase. At $T=0$, there is a critical density below which there is no density-wave instability, and the system is a BEC. For $\theta>\cos^{-1}{\frac{1}{\sqrt{3}}}$, as the temperature decreases and the density increases, the system goes through a transition from the stable Bose fluid to a density-wave phase to a collapsed phase. In Fig. 5, we show the calculated T-n phase diagrams, for $\theta=0$ and $\theta=1$, using parameters relevant to several physically realized dipolar boson systems, namely the polar molecule ${}^{87}Rb^{133}Cs$ with an electric dipole moment of 0.355 Debye and $a_{dd}=4586a_{0}$ (accessible in experiment) Molony et al. (2014); ${}^{87}Rb^{133}Cs$ with an electric dipole moment of 1.22 Debye and $a_{dd}=93084a_{0}$ (the maximum value possible in experiments); and ${}^{166}Er$ with a magnetic dipole moment of $7\mu_{B}$ and $a_{dd}=196a_{0}$ Chomaz et al. (2018). The sets of plots show that for larger dipolar interaction strength $a_{dd}$, the instability occurs at a larger density and lower temperature; compare, for example, the plots for ${}^{87}Rb^{133}Cs$ with an electric dipole moment of 1.22 Debye ($a_{dd}=93084a_{0}$) with those for ${}^{166}Er$ with a magnetic dipole moment of $7\mu_{B}$ ($a_{dd}=196a_{0}$). The density-wave instability occurs at low temperature and high density for small tilt angle. For large tilt angle, the system goes from the stable Bose fluid phase to the density-wave instability and then to collapse as the temperature decreases and the density increases; no discernible region of BEC appears at T=0 (as seen for $\theta=1$). Figure 5: (color online) Calculated T-n phase diagram for the dipolar Bose gas at $\theta$ = 0 (left column) and $\theta=1$ (right column): ${}^{87}Rb^{133}Cs$ with dipole moment d=0.355 D (top) and d=1.22 D (middle); ${}^{166}Er$ (bottom) with $d=7\mu_{B}$. The density-wave instability (DWI), red curve, occurs at low temperature and high density for $\theta$ = 0. For $\theta$ = 1, the system goes from the stable Bose fluid phase to the density-wave instability and then to collapse (blue line) as the temperature decreases and the density increases. We have considered the effect of an additional short-range interaction $g$, originating from the van der Waals interaction between atoms or molecules; this results in the total interaction $V(q)=g+V_{dipole}(q)$. Within RPA, the main modifications are: a repulsive $g$ increases the critical tilt angle $\theta_{c}$ to a value larger than 0.955, while an attractive $g$ decreases it. This is because $\theta_{c}$ is now determined by the net short-range interaction, which has a contribution from the contact interaction in addition to that contained in the dipole interaction. Adding a repulsive $g$ increases the critical $a_{dd}$ for instability, while an attractive $g$ decreases it. (See Supplemental Materials, Fig. S3 sup.) We briefly discuss possible effects of a trap given by a cylindrically symmetric harmonic potential $U(z,r)=1/2(w_{z}z^{2}+w_{r}r^{2})$, where $z$ and $r$ are the axial and radial directions, respectively. We explore the instability in a quasi-2D trapped system with strong harmonic confinement in the $z$ direction, i.e., $w_{z}\gg w_{r}$, by calculating the RPA response function $\chi$ within the local-density approximation (LDA).
We expect the instability to occur when the effective chemical potential, $\mu_{\text{eff}}(r)=\mu-\omega r^{2}/2$, is the same as that of the uniform system at a given temperature Mueller and Baym (2000); thus the local bare response function is the same as that of the uniform system. We construct a T-N phase diagram, Fig. 6, using the parameters for ${}^{166}Er$, with N being the total number of particles. The density-wave instability temperature $T_{c}$ is slightly above the ideal-gas BEC temperature $T^{0}_{\text{BEC}}$. At $T_{c}$, the 2D trapped gas has a density-wave instability at $r=0$, where $\mu_{\text{eff}}$ is largest, and begins to form a stripe pattern starting from the center of the trap. (See details in Supplemental Materials, Effect of a harmonic trap sup.) This behavior can be understood by noting that for trapped particles, condensation results in a huge increase in the central density of the cloud (a standard diagnostic of BEC) Mueller and Baym (2000). Figure 6: (color online) Calculated $T$-$N$ phase diagram for a system of trapped ${}^{166}Er$ at zero tilt angle. Here, $d=7\mu_{B}$ and the trap parameter is $\hbar w_{r}/k_{B}=0.6K$. The dashed line is the ideal-gas BEC phase transition; the solid line is the density-wave instability line. We have shown that a broad understanding of the nature of the phases and instabilities of a 2D dipolar Bose gas at finite temperature may be obtained by concurrently exploring the tunable system parameters, namely, density, temperature, interaction strength and tilt angle. The presented phase diagrams provide different perspectives on the nature of the instabilities. We have used a finite-temperature version of RPA, and note that RPA is a well-established many-body method that has proved useful in describing collective modes and instabilities in quantum fluids. At T=0, our finite-temperature RPA reproduces the BdG results, as expected Nozieres (2018). Our approach and results may be of relevance generally to systems with long-range and anisotropic interactions, and thus of broader appeal. The results may be compared with previous work in 3D with an attractive contact interaction Mueller and Baym (2000), or a dipolar interaction Bisset et al. (2011), wherein possible long-wavelength (q$\to$0) instabilities were studied. We note that a density-wave instability usually triggers long-range order of the stripe phase. At finite temperature, enhanced fluctuations in 2D will destroy the long-range order, but a quasi-long-range order may survive. A phase transition belonging to the usual Berezinskii-Kosterlitz-Thouless universality class is expected; this has been studied in Monte Carlo simulations Filinov et al. (2010); Bombín et al. (2019). Thus, the $T_{c}$ curves discussed here are, in a strict sense, RPA instability lines. Accordingly, our $T$-$n$ phase diagram may need to be modified at low temperature; this is beyond the scope of RPA. Nevertheless, the RPA method does provide a picture of the instabilities of the 2D dipolar boson system at finite temperature that is expected to be qualitatively correct. ## I Acknowledgements We thank J. Boronat and G. Baym for useful discussions. We acknowledge funds from the Institute for Complex Adaptive Matter. K. Q. acknowledges the hospitality of the Aspen Center for Physics, where part of the work was done. ## References * Koch et al. (2008) T. Koch, T. Lahaye, J. Metz, B. Frohlich, A. Griesmaier, and T. Pfau, Nat Phys 4, 218 (2008), URL http://dx.doi.org/10.1038/nphys887. * Aikawa et al. (2012) K. Aikawa, A. Frisch, M.
Mark, S. Baier, A. Rietzler, R. Grimm, and F. Ferlaino, Phys. Rev. Lett. 108, 210401 (2012), URL https://link.aps.org/doi/10.1103/PhysRevLett.108.210401. * Chomaz et al. (2018) L. Chomaz, R. M. W. van Bijnen, D. Petter, G. Faraoni, S. Baier, J. H. Becher, M. J. Mark, F. Wächtler, L. Santos, and F. Ferlaino, Nature Physics 14, 442 (2018), URL https://doi.org/10.1038/s41567-018-0054-7. * Molony et al. (2014) P. K. Molony, P. D. Gregory, Z. Ji, B. Lu, M. P. Köppinger, C. R. Le Sueur, C. L. Blackley, J. M. Hutson, and S. L. Cornish, Physical review letters 113, 255301 (2014). * Takekoshi et al. (2014) T. Takekoshi, L. Reichsöllner, A. Schindewolf, J. M. Hutson, C. R. Le Sueur, O. Dulieu, F. Ferlaino, R. Grimm, and H.-C. Nägerl, Physical review letters 113, 205301 (2014). * Aikawa et al. (2009) K. Aikawa, D. Akamatsu, J. Kobayashi, M. Ueda, T. Kishimoto, and S. Inouye, New Journal of Physics 11, 055035 (2009), URL http://stacks.iop.org/1367-2630/11/i=5/a=055035. * Aikawa et al. (2010) K. Aikawa, D. Akamatsu, M. Hayashi, K. Oasa, J. Kobayashi, P. Naidon, T. Kishimoto, M. Ueda, and S. Inouye, Phys. Rev. Lett. 105, 203001 (2010), URL https://link.aps.org/doi/10.1103/PhysRevLett.105.203001. * Gaunt et al. (2013) A. L. Gaunt, T. F. Schmidutz, I. Gotlibovych, R. P. Smith, and Z. Hadzibabic, Physical review letters 110, 200406 (2013). * Gotlibovych et al. (2014) I. Gotlibovych, T. F. Schmidutz, A. L. Gaunt, N. Navon, R. P. Smith, and Z. Hadzibabic, Phys. Rev. A 89, 061604 (2014), URL https://link.aps.org/doi/10.1103/PhysRevA.89.061604. * Lopes et al. (2017) R. Lopes, C. Eigen, A. Barker, K. G. Viebahn, M. Robert-de Saint-Vincent, N. Navon, Z. Hadzibabic, and R. P. Smith, Physical review letters 118, 210401 (2017). * Fischer (2006) U. R. Fischer, Phys. Rev. A 73, 031602 (2006), URL https://link.aps.org/doi/10.1103/PhysRevA.73.031602. * Santos et al. (2003) L. Santos, G. V. Shlyapnikov, and M. Lewenstein, Phys. Rev. Lett. 90, 250403 (2003), URL https://link.aps.org/doi/10.1103/PhysRevLett.90.250403. * Fedorov et al. (2014) A. Fedorov, I. Kurbakov, Y. Shchadilova, and Y. E. Lozovik, Physical Review A 90, 043616 (2014). * Lu et al. (2015) Z.-K. Lu, Y. Li, D. S. Petrov, and G. V. Shlyapnikov, Phys. Rev. Lett. 115, 075303 (2015), URL https://link.aps.org/doi/10.1103/PhysRevLett.115.075303. * Shen and Quader (2018) P. Shen and K. Quader, Journal of Physics, Conference Series 1041, 012011 (2018), URL http://stacks.iop.org/1742-6596/1041/i=1/a=012011. * Macia et al. (2014) A. Macia, J. Boronat, and F. Mazzanti, Physical Review A 90, 061601 (2014). * Bombin et al. (2017) R. Bombin, J. Boronat, and F. Mazzanti, Phys. Rev. Lett. 119, 250402 (2017), URL https://link.aps.org/doi/10.1103/PhysRevLett.119.250402. * Boninsegni and Prokof’ev (2012) M. Boninsegni and N. V. Prokof’ev, Rev. Mod. Phys. 84, 759 (2012), URL https://link.aps.org/doi/10.1103/RevModPhys.84.759. * Wenzel et al. (2017) M. Wenzel, F. Böttcher, T. Langen, I. Ferrier-Barbut, and T. Pfau, Phys. Rev. A 96, 053630 (2017), URL https://link.aps.org/doi/10.1103/PhysRevA.96.053630. * Tanzi et al. (2019) L. Tanzi, E. Lucioni, F. Famà, J. Catani, A. Fioretti, C. Gabbanini, R. N. Bisset, L. Santos, and G. Modugno, Phys. Rev. Lett. 122, 130405 (2019), URL https://link.aps.org/doi/10.1103/PhysRevLett.122.130405. * Böttcher et al. (2019) F. Böttcher, J.-N. Schmidt, M. Wenzel, J. Hertkorn, M. Guo, T. Langen, and T. Pfau, Phys. Rev. X 9, 011051 (2019), URL https://link.aps.org/doi/10.1103/PhysRevX.9.011051. * Chomaz et al. (2019) L. Chomaz, D. Petter, P. 
Ilzhöfer, G. Natale, A. Trautmann, C. Politi, G. Durastante, R. M. W. van Bijnen, A. Patscheider, M. Sohmen, et al., Phys. Rev. X 9, 021012 (2019), URL https://link.aps.org/doi/10.1103/PhysRevX.9.021012. * Cinti et al. (2014) F. Cinti, T. Macrì, W. Lechner, G. Pupillo, and T. Pohl, Nature communications 5, 1 (2014). * Eberlein et al. (2005) C. Eberlein, S. Giovanazzi, and D. H. O’Dell, Physical Review A 71, 033618 (2005). * Sun et al. (2010) K. Sun, C. Wu, and S. D. Sarma, Physical Review B 82, 075105 (2010). * Yamaguchi et al. (2010) Y. Yamaguchi, T. Sogo, T. Ito, and T. Miyakawa, Physical Review A 82, 013643 (2010). * Mueller and Baym (2000) E. J. Mueller and G. Baym, Phys. Rev. A 62, 053605 (2000), URL http://link.aps.org/doi/10.1103/PhysRevA.62.053605. * van Zyl et al. (2002) B. P. van Zyl, R. K. Bhaduri, and J. Sigetich, Journal of Physics B: Atomic, Molecular and Optical Physics 35, 1251 (2002). * Bisset et al. (2011) R. Bisset, D. Baillie, and P. Blakie, Physical Review A 83, 061602 (2011). * (30) See Supplemental Materials at http: for details on Calculations and Figs S1-S3. * Nozieres (2018) P. Nozieres, _Theory of quantum liquids: Superfluid bose liquids_ (CRC Press, 2018). * Filinov et al. (2010) A. Filinov, N. Prokof’Ev, and M. Bonitz, Physical review letters 105, 070401 (2010). * Bombín et al. (2019) R. Bombín, F. Mazzanti, and J. Boronat, Phys. Rev. A 100, 063614 (2019), URL https://link.aps.org/doi/10.1103/PhysRevA.100.063614.
# Inverse Potentials for all $\ell$ channels of Neutron-Proton Scattering using Reference Potential Approach Anil Khachi1, Lalit Kumar1, Ayushi Awasthi1 and O. S. K. S. Sastri1,∗ Department of Physics and Astronomical Sciences, Central University of Himachal Pradesh, Dharamshala, 176215, Himachal Pradesh, Bharat (India).<EMAIL_ADDRESS>###### Abstract The reference potential approach (RPA) has been successful in obtaining inverse potentials for weakly bound diatomic molecules using the Morse function. In this work, our goal is to construct inverse potentials for all available $\ell$-channels of np scattering using RPA. The Riccati-type phase equations for the various $\ell$-channels are solved using the 5th-order Runge-Kutta method to obtain scattering phase shifts (SPS), in tandem with an optimization procedure to minimize the mean squared error (MSE). Interaction potentials for a total of 18 states have been constructed using only a three-parameter Morse interaction model. The obtained MSE is $<1\%$ for the ${}^{1}S_{0}$, ${}^{3}P_{1}$ and ${}^{3}D_{1}$ channels, $<2\%$ for the ${}^{1}P_{1}$ channel and $<0.1\%$ for the remaining 14 channels. The obtained total scattering cross-sections at various lab energies are found to match well with the experimental ones. Our complete study of np scattering for all $\ell$-channels using RPA, with the Morse function as the zeroth reference, is being undertaken for the first time. * March 2023 ## 1 Introduction The neutron-proton (n-p) interaction was first modeled by Yukawa [1]. This was followed by various single- and multi-particle exchange models and QCD-based models, as detailed in the reviews [2, 12]. Currently, the Nijmegen [3], Argonne v18 [11], CD-Bonn [12] and Granada [25] potentials are the ones which give rise to the best quantitative results for explaining the experimental scattering phase shifts. Unfortunately, all these potentials have different mathematical representations originating from completely varied physical considerations, yet all of them lead to correct validation of the experimental data. The search for a simple theoretical potential that could model the nucleon-nucleon interaction still eludes physicists. Interestingly, many simple phenomenological forms, such as the square well, Malfliet-Tjon [4] and Hulthen [13] potentials, have also been utilised for studying the deuteron. Recently, molecular potentials such as Manning-Rosen [14], Morse [5] and Deng-Fan [6] have been proposed. Typically, the phase wave analysis (PWA) technique based on modeling the interaction potential relies on solving the time-independent Schrödinger equation (TISE) to obtain scattering phase shifts (SPS), from which differential and total scattering cross-sections (SCS) are obtained and matched with experimental ones. While the R-matrix [7], S-matrix [8], complex scaling method (CSM) [9], etc., rely on the wavefunction to obtain SPS, the phase function method (PFM) [10] directly utilises the model potential in the phase equation to obtain SPS. Zhaba [15] utilised the PFM to study nucleon-nucleon scattering, and Laha et al. [13] studied nucleon-nucleus and nucleus-nucleus interactions and obtained reasonably good results. Alternatively, inverse potentials resulting directly from experimental observations via the J-matrix method [18] and the Marchenko equation [19] have also found some success in understanding the interaction between the nucleons.
Recently, an analytical procedure to solve this Marchenko equation has been achieved by modeling the interaction using a Morse curve [27], which belongs to a class of shape-invariant potentials [21]; this is called the reference potential approach (RPA). Selg [19] points out that “The implementation of the methods of the inverse scattering theory is not at all a trivial task. On the contrary, this is a complex and computationally very demanding multi-step procedure that has to be performed with utmost accuracy”. We have constructed inverse potentials for S-waves of np [16], nD [17], and the S, P and D states of $n-\alpha$ [23] and $\alpha-\alpha$ [24] scattering by numerically solving the phase equation, choosing the Morse function as the zeroth reference. The goal of this paper is to extend this reference potential approach (RPA) to constructing inverse potentials for all 18 $\ell$-channels of the np interaction by considering recent SPS data from Arriola et al. of the Granada group [25] for lab energies up to 350 MeV. ## 2 Methodology: The Morse function is given by $V_{Morse}(r)=V_{0}\left(e^{-2(r-r_{m})/a_{m}}-2e^{-(r-r_{m})/a_{m}}\right)$ (1) The PWA approach to two-body scattering focuses on obtaining, for each orbital angular momentum, the phase shift $\delta_{\ell}$ between the incoming and outgoing waves due to the interaction between the projectile and target nucleons. It is important to note that an attractive potential tends to pull the wave inward and hence results in a positive SPS, $\delta_{\ell}>0$, while a repulsive potential pushes it outward, making $\delta_{\ell}<0$. Typically, one solves the radial TISE to obtain the wavefunction, from which the SPS are deduced. One of the many advantages of the Morse potential is that its TISE can be solved analytically for $\ell=0$ (S states), for both bound and scattering states [29]. ### 2.1 Analytical solution for Scattering State of Morse potential: The Schrödinger wave equation for a spinless particle with energy $E$ and orbital angular momentum $\ell$ undergoing scattering from another particle with interaction potential V(r) is given by $\frac{\hbar^{2}}{2\mu}\bigg[\frac{d^{2}}{dr^{2}}+\big(k^{2}-\ell(\ell+1)/r^{2}\big)\bigg]u_{\ell}(k,r)=V(r)u_{\ell}(k,r)$ (2) where $k=\sqrt{E/(\hbar^{2}/2\mu)}$. The analytical solution of the TISE for the Morse potential [27] with $\ell=0$ is given by $E_{v}=-\frac{\hbar^{2}}{2\mu a_{m}^{2}}(\lambda-v-1/2)^{2},\qquad v=0,1,2,\ldots$ (3) where $\lambda=\sqrt{\frac{2\mu V_{0}{a_{m}}^{2}}{\hbar^{2}}}\quad\mathrm{and}\quad\frac{\hbar^{2}}{2\mu}=41.47~MeV\,fm^{2}$ (4) Here $\lambda$ is called the well-depth parameter and depends only on $V_{0}$ and $a_{m}$. #### 2.1.1 SPS for ${}^{1}S_{0}$ singlet state: For the singlet ${}^{1}S_{0}$ unbound state ($E>0$), with $\ell=0$, the analytical formula for the SPS due to the Morse potential is obtained [28, 29] as $\delta_{0}^{ana}=-kr_{m}-\epsilon(\gamma+\log 2\lambda)+\sum_{n=1}^{\infty}\left(\frac{\epsilon}{n}-\tan^{-1}\frac{2\epsilon}{n}+\tan^{-1}\frac{\epsilon}{n-\lambda-1/2}\right)$ (5) where $\gamma=0.57721$ is the Euler constant and $\epsilon$ is given by $\epsilon=\sqrt{\frac{2\mu Ea_{m}^{2}}{\hbar^{2}}}=ka_{m}$ (6) ### 2.2 Phase Function Method: PFM is one of the important tools in scattering studies for both local and non-local interactions [32].
The second-order differential equation, Eq. 2, can be transformed to a first-order non-homogeneous differential equation of Riccati type [32, 33], containing the phase shift information, given by: $\delta_{\ell}^{\prime}(k,r)=-\frac{U(r)}{k}\bigg[\cos(\delta_{\ell}(k,r))\hat{j}_{\ell}(kr)-\sin(\delta_{\ell}(k,r))\hat{\eta}_{\ell}(kr)\bigg]^{2}$ (7) where $U(r)=\frac{2\mu V(r)}{\hbar^{2}}$. The prime denotes differentiation of the phase shift with respect to distance, and the Riccati-Hankel function of the first kind is related to $\hat{j_{\ell}}(kr)$ and $\hat{\eta_{\ell}}(kr)$ by $\hat{h}_{\ell}(r)=-\hat{\eta}_{\ell}(r)+\textit{i}\,\hat{j}_{\ell}(r)$. For $\ell=0$, the Riccati-Bessel and Riccati-Neumann functions $\hat{j_{0}}$ and $\hat{\eta_{0}}$ simplify to $\sin(kr)$ and $-\cos(kr)$. So the phase equation, for $\ell=0$, is $\delta_{0}^{\prime}(k,r)=-\frac{U(r)}{k}\sin^{2}[kr+\delta_{0}(r)]$ (8) This is numerically solved using the Runge-Kutta 5th-order (RK-5) method with the initial condition $\delta_{\ell}(k,0)=0$ (a numerical sketch of this integration is given below). The Riccati-Bessel and Riccati-Neumann functions for higher $\ell$ can be obtained using the recurrence formulas: $\hat{j}_{\ell+1}(kr)=\frac{2\ell+1}{kr}\hat{j_{\ell}}(kr)-{\hat{j}_{\ell-1}}(kr)$ (9) $\hat{\eta}_{\ell+1}(kr)=\frac{2\ell+1}{kr}\hat{\eta_{\ell}}(kr)-{\hat{\eta}_{\ell-1}}(kr)$ (10) ### 2.3 Scattering Cross Section: Once the SPS $\delta_{\ell}(E)$ are obtained for each orbital angular momentum $\ell$, one can calculate the partial SCS $\sigma_{\ell}(E)$ [34] as: $\sigma_{\ell}(E;S,J)=\frac{4\pi}{k^{2}}\sum_{S=0}^{1}\left(\sum_{J=|\ell-S|}^{|\ell+S|}(2\ell+1)\sin^{2}(\delta_{\ell}(E;S,J))\right)$ (11) and the total SCS, $\sigma_{T}$, is given by $\sigma_{T}(E;S,J)=\frac{1}{\sum_{J=|\ell-S|}^{|\ell+S|}(2J+1)}\sum_{\ell=0}^{5}\sum_{S=0}^{1}(2J+1)\sigma_{\ell}(E;S,J)$ (12) ## 3 Results and Discussion: The experimental SPS data have been taken from Perez et al. of the Granada group [25], 2016. To this, we have added an extra data point at lab energy 0.1 MeV for the ${}^{3}S_{1}$ and ${}^{1}S_{0}$ states, from Arndt (private communication), to improve the determination of the low-energy scattering parameters. The optimised model parameters for all states of the different $\ell$-channels are given in Table 1.
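As a concrete illustration, the following is a minimal sketch of the PFM integration of Eq. (8) for the ${}^{1}S_{0}$ channel, using the Morse parameters of Table 1 and $\hbar^{2}/2\mu=41.47$ MeV fm² from Eq. (4). SciPy's adaptive RK45 integrator is used here in place of the fixed-step RK-5 of the text, the definition $k=\sqrt{E/(\hbar^{2}/2\mu)}$ of Eq. (2) is applied literally to the supplied energy, and the integration range $r_{max}=20$ fm is an illustrative choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

H2_2MU = 41.47  # hbar^2/(2*mu) in MeV fm^2, as in Eq. (4)

def v_morse(r, V0, rm, am):
    """Morse potential of Eq. (1)."""
    x = np.exp(-(r - rm) / am)
    return V0 * (x * x - 2.0 * x)

def delta0_deg(E, V0, rm, am, r_max=20.0):
    """Solve the l = 0 phase equation, Eq. (8), with delta_0(0) = 0."""
    k = np.sqrt(E / H2_2MU)
    def rhs(r, y):
        U = v_morse(r, V0, rm, am) / H2_2MU   # U(r) = 2*mu*V(r)/hbar^2
        return [-(U / k) * np.sin(k * r + y[0]) ** 2]
    sol = solve_ivp(rhs, (0.0, r_max), [0.0], method="RK45",
                    rtol=1e-9, atol=1e-12)
    return np.degrees(sol.y[0, -1])

# 1S0 Morse parameters from Table 1: V0 = 70.438 MeV, rm = 0.901 fm, am = 0.372 fm
print(delta0_deg(E=10.0, V0=70.438, rm=0.901, am=0.372))
```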
Table 1: Model parameters $V_{0}$ (in MeV), $r_{m}$ and $a_{m}$ (in fm) for the various states of the different $\ell$-channels, with the obtained mean squared error (MSE) values.

States | $V_{0}$ | $r_{m}$ | $a_{m}$ | MSE | States | $V_{0}$ | $r_{m}$ | $a_{m}$ | MSE | States | $V_{0}$ | $r_{m}$ | $a_{m}$ | MSE
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
${}^{3}S_{1}$ | 114.153 | 0.841 | 0.350 | 0.155 | ${}^{1}D_{2}$ | 131.302 | 0.010 | 0.526 | 0.026 | ${}^{3}F_{3}$ | 0.010 | 8.807 | 2.441 | 0.025
${}^{1}S_{0}$ | 70.438 | 0.901 | 0.372 | 0.649 | ${}^{3}D_{1}$ | 0.010 | 7.401 | 1.403 | 0.274 | ${}^{3}F_{4}$ | 40.343 | 0.528 | 0.489 | 0.001
${}^{1}P_{1}$ | 0.010 | 5.442 | 1.016 | 1.568 | ${}^{3}D_{2}$ | 106.379 | 0.209 | 0.747 | 0.066 | ${}^{1}G_{4}$ | 20.390 | 0.010 | 0.673 | 0.002
${}^{3}P_{0}$ | 11.579 | 1.750 | 0.601 | 0.049 | ${}^{3}D_{3}$ | 23.620 | 1.185 | 0.305 | 0.002 | ${}^{3}G_{3}$ | 0.010 | 6.846 | 1.486 | 0.009
${}^{3}P_{1}$ | 0.010 | 4.514 | 0.778 | 0.832 | ${}^{1}F_{3}$ | 0.010 | 7.645 | 1.877 | 0.056 | ${}^{3}G_{4}$ | 59.219 | 0.010 | 0.786 | 0.015
${}^{3}P_{2}$ | 77.558 | 0.444 | 0.404 | 0.001 | ${}^{3}F_{2}$ | 2.746 | 2.140 | 0.532 | 0.003 | ${}^{3}H_{4}$ | 23.245 | 0.010 | 0.693 | 0.000

Using the obtained model parameters $V_{0}=70.438$, $r_{m}=0.901$ and $a_{m}=0.372$ for ${}^{1}S_{0}$ in the analytical expression of Eq. 5, we obtained the corresponding SPS at various energies. These are observed to match closely the SPS obtained using PFM, as shown in Fig. 1, thus validating the latter procedure. The ${}^{3}S_{1}$ model parameters are obtained by utilising the energy condition in Eq. 3, with $v=0$, as a constraint. The obtained SPS for all channels are in good agreement with the experimental data, with MSE values very close to zero. The exceptions are the states ${}^{1}P_{1}$, ${}^{3}P_{1}$ and ${}^{3}D_{1}$, with purely negative SPS, where the MSE values are 1.568, 0.832 and 0.274, respectively. It is interesting to note that all of them have $V_{0}=0.010$ MeV and large $r_{m}$, which makes the shape of their potentials essentially exponential. Similarly, the states ${}^{1}F_{3}$, ${}^{3}F_{3}$ and ${}^{3}G_{3}$, with purely negative SPS, also have $V_{0}=0.010$ MeV, and their MSE is slightly higher than that of the other F and G states with positive SPS. Figure 1: Plot of scattering phase shifts obtained using the analytical formula and the PFM for the ${}^{1}S_{0}$ state. The inset shows the match at low energies, up to 50 MeV. By using the effective-range approximation formula [31], the scattering length $a$ and effective range $r_{e}$ are determined to be $-23.390~fm$ and $2.42~fm$ for the ${}^{1}S_{0}$ state and $5.356~fm$ and $1.75~fm$ for the ${}^{3}S_{1}$ state, respectively (the corresponding experimental values are $-23.749(8)~fm$ and $2.81(5)~fm$, and $5.424(3)~fm$ and $1.760(5)~fm$). Figure 2: Inverse potentials (left) and scattering phase shifts (right) for the S, P and D states of np scattering using RPA. The inverse potentials obtained for the S, P and D states and their corresponding SPS plots are given in Fig. 2. Similarly, the computed inverse potentials and corresponding SPS plots for the F, G and H states of the np interaction are shown in Fig. 3. Figure 3: Inverse potentials (left) and scattering phase shifts (right) for the F, G and H states of np scattering using RPA. One can observe a close match between the obtained and expected SPS values for all the states.
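Regarding the low-energy parameters quoted above: given $\ell=0$ phase shifts at a few low energies, the scattering length and effective range follow from a straight-line fit to the effective-range expansion $k\cot\delta_{0}(k)=-\frac{1}{a}+\frac{1}{2}r_{e}k^{2}$ of Ref. [31]. A minimal sketch follows; the numerical inputs in the usage comment are hypothetical placeholders, not the actual computed phase shifts.

```python
import numpy as np

H2_2MU = 41.47  # hbar^2/(2*mu) in MeV fm^2

def effective_range_fit(E, delta_deg):
    """Least-squares fit of k*cot(delta_0) = -1/a + (r_e/2)*k^2.
    E: energies in MeV (interpreted via Eq. (2)); delta_deg: l = 0 phase
    shifts in degrees. Returns (a, r_e) in fm."""
    k = np.sqrt(np.asarray(E, float) / H2_2MU)
    y = k / np.tan(np.radians(delta_deg))      # k*cot(delta_0)
    slope, intercept = np.polyfit(k**2, y, 1)  # y = (r_e/2)*k^2 - 1/a
    return -1.0 / intercept, 2.0 * slope

# Hypothetical low-energy input (energies in MeV, phases in degrees):
# a, r_e = effective_range_fit([0.1, 0.5, 1.0], [38.4, 54.6, 62.1])
```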
The following observations can be made from these potentials and SPS plots: * Both S-state potentials have a similar form, with only their depths being different. * The potentials are attractive wherever the SPS are positive and repulsive where the SPS are negative. * Whenever the SPS start out positive and then cross over to negative values, as for ${}^{3}P_{0}$, or dip down at higher energies, as for ${}^{3}F_{1}$, the repulsive part of the potential curve sets in at higher values of the inter-nucleon distance. * In the case of ${}^{3}P_{1}$ and ${}^{1}P_{1}$, just as their SPS cross over after 200 MeV, so also their respective inverse potentials cross over at about 1.5 fm. * It is interesting to note that for certain D states and for all G and H states, for which the SPS are positive, the potential assumes the shape of a Gaussian function. * All states with negative SPS have an exponentially decaying positive potential. * Note that, in the case of ${}^{1}F_{3}$ and ${}^{3}F_{3}$, the former has more negative SPS than the latter, and its corresponding inverse potential also shows increasing repulsion with decreasing inter-nucleon distance. The beauty of the method is that the Morse function acts as an exponential function for states with negative SPS and as a Gaussian function for those with positive SPS. When the SPS take both negative and positive values, or start out positive and decrease at higher energies, a typical Morse curve is obtained. Utilising the obtained SPS for the various states of all $\ell$-channels, the partial and total scattering cross-sections (SCS) have been calculated using Eqs. 11 and 12, respectively. In the case of $\ell=0$, the individual partial cross-sections due to ${}^{1}S_{0}$ and ${}^{3}S_{1}$ have been calculated without summing over their contributions. All the calculated SCS at various energies are presented in Table 2. The $\%$-contributions due to the individual S states and the rest of the $\ell$-channels, from P to H, to the calculated total SCS are given in brackets. One can observe that ${}^{1}S_{0}$ has a large contribution at low energies, below 1 MeV, which then gradually falls with increasing energy and becomes very small past 100 MeV. On the other hand, the contribution due to ${}^{3}S_{1}$ increases beyond 1 MeV, peaks at 10 MeV and then falls beyond 250 MeV. The contributions from the P and D channels become significant at higher energies, from 100 MeV to 350 MeV. Those due to F and G are far smaller in comparison over the same range, but are certainly important for accurately describing the observed experimental total SCS. SPS are available for only one H state, and its contribution to the total SCS is almost negligible. Overall, the obtained total SCS are found to closely match the experimental ones. The partial cross-sections due to the P and D channels are shown in Fig. 4(a), and those due to the F and G channels in Fig. 4(b), as functions of lab energy. The total SCS is plotted with respect to lab energy on a log scale in Fig. 4(c), with the individual contributions due to the two S states as an inset. Table 2: Individual contributions to the calculated total elastic scattering cross-section (SCS) from the various channels. In the case of $\ell=0$, the contributions due to ${}^{1}S_{0}$ and ${}^{3}S_{1}$ are given separately. The $\%$-contributions to the obtained total SCS are given in brackets.
E (MeV) | $\sigma_{exp}$ (b) [26] | $\sigma_{{}^{1}S_{0}}$ | $\sigma_{{}^{3}S_{1}}$ | $\sigma_{P}$ | $\sigma_{D}$ | $\sigma_{F}$ | $\sigma_{G}$ | $\sigma_{H}$ | $\sigma_{sim}$ (b)
---|---|---|---|---|---|---|---|---|---
0.1 | - | 9.708 (78%) | 2.651 (22%) | 2.31$\times 10^{-7}$ | 2.98$\times 10^{-12}$ | 0 | 0 | 0 | 12.359
0.5 | 6.135 | 3.653 (60%) | 2.425 (40%) | 5.690$\times 10^{-6}$ | 1.78$\times 10^{-9}$ | 0 | 0 | 0 | 6.078
1 | 4.253 | 2.041 (48%) | 2.189 (52%) | 2.240$\times 10^{-5}$ | 2.69$\times 10^{-8}$ | 0 | 0 | 0 | 4.230
10 | 0.9455 | 0.2007 (21%) | 0.7413 (79%) | 0.0016 | 0.0001 | 6.40$\times 10^{-6}$ | 4.81$\times 10^{-8}$ | 0 | 0.9437
50 | 0.1684 | 0.0222 (13%) | 0.1235 (75%) | 0.0106 (6%) | 0.0079 (5%) | 0.0001 | 3.44$\times 10^{-5}$ | 1.32$\times 10^{-8}$ | 0.1643
100 | 0.07553 | 0.00498 (7%) | 0.03601 (50%) | 0.015 (21%) | 0.01593 (22%) | 0.00044 (1%) | 0.00034 | 3.84$\times 10^{-7}$ | 0.07270
150 | 0.05224 | 0.00129 (3%) | 0.01314 (26%) | 0.01622 (33%) | 0.01752 (35%) | 0.00064 (1%) | 0.00083 (2%) | 1.61$\times 10^{-6}$ | 0.04964
200 | 0.04304 | 0.00025 (1%) | 0.00493 (12%) | 0.0161 (40%) | 0.01679 (42%) | 0.00073 (2%) | 0.00128 (3%) | 3.56$\times 10^{-6}$ | 0.04008
250 | 0.03835 | 0.00001 | 0.00166 (5%) | 0.01546 (44%) | 0.01542 (44%) | 0.00075 (2%) | 0.00162 (5%) | 5.82$\times 10^{-6}$ | 0.03493
300 | 0.03561 | 0.00003 | 0.00040 (1%) | 0.01464 (46%) | 0.01394 (44%) | 0.00074 (2%) | 0.00184 (6%) | 8.09$\times 10^{-6}$ | 0.03160
350 | 0.03411 | 0.00015 | 0.00002 | 0.01377 (47%) | 0.01255 (43%) | 0.00072 (2%) | 0.00197 (7%) | 0.00001 | 0.02919

Figure 4: Partial scattering cross-sections (SCS) due to (a) the P and D channels and (b) the F and G channels, as functions of energy. (c) The total SCS plotted on a log scale of energies; the inset shows the contributions due to the two S states. ## 4 Conclusions: The inverse potentials for the np interaction for all partial waves with $\ell$ = 0 to 5 have been constructed using the reference potential approach, choosing the Morse function as the zeroth reference, for the first time. The model parameters have been obtained by minimizing the mean squared error between the SPS obtained from the PFM technique and the Granada data [25] in an iterative manner. This was achieved by making suitable adjustments to the different model parameters using optimisation routines. The obtained SPS for all the channels match the expected ones very closely. Overall, the reference potential approach using the Morse function appears able to give reasonably accurate inverse potentials of appropriate shapes that logically explain the observed trends in the SPS for the various $\ell$-channels, as expected from the PFM. The total scattering cross-sections have been determined for data up to 350 MeV and are found to be in good agreement with the experimental values. It remains to be seen how to construct inverse potentials for pp scattering using RPA. ## Acknowledgments A. Awasthi acknowledges financial support provided by the Department of Science and Technology (DST), Government of India, vide Grant No. DST/INSPIRE Fellowship/2020/IF200538. The corresponding author dedicates this work to his inspirational guide, Padma Shri Prof. P. C. Sood, with gratitude. The authors declare that they have no conflict of interest. ## References * [1] Yukawa H, Sakata S, Kobayasi M and Taketani M 1955 IV Progress of Theoretical Physics Supplement 1 46 * [2] Naghdi M 2014 Nucleon-nucleon interaction: A typical/concise Rev. Phys. of Particles and Nucl.
45 924 * [3] Nagels M M, Rijken T A and De Swart J J 1978 Phys. Rev. D 17 768 * [4] R A Malfliet and J A Tjon 1969 Nucl. Phys. A 127, 161-168 * [5] Khachi A, Kumar L, Sastri O S K S, 2021 J. Nucl. Phy. Mat. Sci. Rad. A9, 87-93 * [6] D Saha B, Khirali B, Swain and J Bhoi 2022 Phys. Scr. 98, 015303 . * [7] E P Wigner and L Eisenbud 1947 Phys. Rev. 72, 29 * [8] Mackintosh R. S. 2012 arXiv preprint arXiv:1205.0468. * [9] M Odsuren, K Kato G, Khuukhenkhuu and S Davaa 2017 Nucl. Eng. Technol. 49, 1006 * [10] V I Zhaba Mod 2016 Phys. Lett. A 31, 1650049 * [11] Wiringa R B, Stoks V G J and Schiavilla R 1995 Phys. Rev. C 51 38 * [12] Machleidt R, Holinde K and Elster C 1987 Phys. Reports 149 1 * [13] Laha U and Bhoi J 2015 Phys. Rev. C 91 034614 * [14] Khirali B, Behera A K, Bhoi J and Laha U 2020. Ann. Phys. (N. Y.) 412 168044 * [15] Zhaba V I 2017 arXiv preprint arXiv:1706.08306 * [16] Sastri O S K S, Khachi A and Kumar L 2022 Brazilian J. Phys. 52 58 * [17] Kumar L, Awasthi S, Khachi A , and Sastri O S K S . arXiv preprint arXiv:2209.00951 * [18] Zaitsev S A and Kramar E I 2001 J. of Phys. G: Nucl. and Part. Phys. 27 2037 * [19] Selg M 2016 Proc. Estonian Acad. Sci. 65 267 * [20] Van Der Mee C 2000 In Differential Operators and Related Topics: Proceedings of the Mark Krein International Conference on Operator Theory and Applications, Odessa, Ukraine, August 18–22, 1997 Volume I 239. Birkhäuser Basel * [21] Morales D A 2004 Chem. Phys. Lett. 394 68 * [22] Khachi A, Awasthi S, Sastri O S K S and Kumar L 2021 J. Nucl. Phy. Mat. Sci. Rad. A. 9 81 * [23] Kumar L, Awasthi S, Khachi A and Sastri O S K S 2022 arXiv preprint arXiv:2209.00951 * [24] Khachi A, Kumar L and Sastri O S K S 2022 Phys. of Atom. Nucl., 85 382 * [25] Pérez R N, Amaro J E and Arriola E R 2016 J. Phys. G: Nucl. and Part. Phys. 43 114001 * [26] R A Arndt, W J Briscoe, A B Laptev, I I Strakovsky, and R L Workman 2009 Nucl. Sci. Eng. 162, 312-318 * [27] Morse P M, 1929 Phys. Rev. 34 57 * [28] G Darewych and A E Green S, 1967 Phys. Rev., 164(4), 1324 * [29] Matsumoto A, 1988 J. Phys. B: Atom. Mol. Phys. 21 2863 * [30] Green A E S, Darewych G and Berezdivin R 1967 Phys. Rev. 157 929 * [31] Babenko V A and Petrov N M 2016 arXiv preprint arXiv:1605.04849 * [32] Calogero F 1967 Variable Phase Approach to Potential Scattering by F Calogero. Elsevier * [33] Babikov V V E 1967 SOV PHYS USPEKHI, 10 271 * [34] C Amsler, 2015 Nuclear and Particle Physics IOP Publishing, Bristol * [35] Borzakov S B, Gundorin NA and Pokotilovski YN, 2015. Phy. of Part. and Nucl. Lett., 12(4), 536-541 * [36] G Breit and M H Hull Jr,1960 Nuclear Physics 15, 216-230 * [37] Khachi A, Kumar L and Sastri O S K S 2021 J. Nucl. Phy. Mat. Sci. Rad. A. 9 87 * [38] Sharma A and Sastri O S K S 2021 Int. J. Quantum Chem. 121 e26682 * [39] Gora S, Sastri O S K S and Soni S K 2022 European J. of Phys. 43 035802 * [40] Takigawa N 2017 Fundamentals Nucl. Phys.
# Molecular Dynamics Simulations of Microscopic Structural Transition and Macroscopic Mechanical Properties of Magnetic Gels Xuefeng Wei Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang 325000, China Gaspard Junot Departament de Física de la Matèria Condensada, Universitat de Barcelona, 08028 Barcelona, Spain Ramin Golestanian Max Planck Institute for Dynamics and Self-Organization (MPIDS), D-37077 Göttingen, Germany Xin Zhou School of Physical Sciences, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China Yanting Wang CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China Pietro Tierno Departament de Física de la Matèria Condensada, Universitat de Barcelona, 08028 Barcelona, Spain Fanlong Meng<EMAIL_ADDRESS>CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China ###### Abstract Magnetic gels, with embedded micro/nano-sized magnetic particles in crosslinked polymer networks, can be actuated by external magnetic fields, with accompanying changes in their internal microscopic structures and macroscopic mechanical properties. We investigate the responses of such magnetic gels to an external magnetic field by means of coarse-grained molecular dynamics simulations. We find that the dynamics of the magnetic particles are determined by the interplay between magnetic dipole-dipole interactions, polymer elasticity and thermal fluctuations. The corresponding microscopic structures formed by the magnetic particles, such as elongated chains, can be controlled by the external magnetic field. Furthermore, the magnetic gels can exhibit reinforced macroscopic mechanical properties, where the elastic modulus increases algebraically with the magnetic moments of the particles in the form $\propto(m-m_{\mathrm{c}})^{2}$ when magnetic chains are formed. This simulation work can not only serve as a tool for studying the microscopic and macroscopic responses of magnetic gels, but can also facilitate future fabrication and practical control of magnetic composites with desired physical properties. ## 1 Introduction Magnetic gels, with micro/nano-sized magnetic particles embedded in crosslinked polymeric gels, belong to the class of smart magnetic materials whose physical properties can be controlled by external magnetic fields 1, 2, 3. By tuning such fields, e.g., their direction, strength or frequency, one can endow the materials with different mechanical responses, such as changes in material stiffness 4, 5, 6, 7 and deformations 8, 9, 10.
Based on such field-controlling features, magnetic gels have been fabricated in laboratories and used as magnetic robots 11, 12, 13, sensors 14, 15, 16, actuators 17, 18, etc., and also for bio-medical applications 19, 20, 21 such as drug delivery/release, owing to the bio-friendly character of magnetic fields. Under an applied magnetic field, the materials can respond in different ways depending on how the magnetic particles are embedded in the polymer matrix. Magnetic particles that can move freely in the polymer gel can form microscopic chains when the magnetic dipole-dipole interactions between particles dominate thermal fluctuations, as in ordinary solutions 22, 23; in this case the polymer gel serves mainly as a viscoelastic medium, without much change in its mechanical properties. Superparamagnetic particles fixed in the polymer network 24 can still move, but they now induce microscopic distortions in the polymer network, and they can even form magnetic chains if the magnetic dipole-dipole interactions are strong enough. In the latter case, the response of the embedded particles to the external magnetic field, with simultaneous deformations of the polymer network, underpins the physical mechanism for controlling the mechanical properties of magnetic gels. Figure 1: Magnetic gels with (a) uniformly distributed magnetic particles without a magnetic field and (b) magnetic chains under an external magnetic field. Spheres denote the magnetic particles and grey lines denote partial connections in the polymer network. The connection between the microscopic motion of the magnetic particles and the macroscopic responses of the material is the key to understanding how external magnetic fields can be utilised to control the material properties. Experimental methods such as small-angle X-ray scattering (SAXS) 25, small-angle neutron scattering (SANS) 26, electron magnetic resonance (EMR) 27, etc., are utilised for detecting the internal structure of magnetic gels, whose macroscopic mechanical properties are measurable by tensile testing 28, 29, dynamic mechanical analysis 30, 31, etc. Theoretical attempts include analytical modelling, such as lattice models 32, 33 and continuum models 34, 35, 24, and numerical methods, such as finite element calculations 36, Monte Carlo simulations 37, 38 and molecular dynamics simulations 39, 40, 41. In this work, we use coarse-grained molecular dynamics simulations to investigate the microscopic motion and structures of the magnetic particles, together with the macroscopic mechanical properties, especially the anisotropic elasticity, of the materials. ## 2 Simulation methodology _Polymer network._ We follow Ref. 42 to construct the 3D packing-derived (PD) polymer network. We first randomly place $N=W^{3}$ radially bidisperse spheres with harmonic repulsive interactions within a cubic unit cell of side length $W$ ($W=20$ is taken in later discussions), where half of the spheres are assigned radius $r=r_{0}$ and the other half $r=\nu r_{0}$ ($\nu=1.4$ is chosen to avoid long-range crystalline order 42, 43, 44). By continuously increasing the radius $r_{0}$ from $r_{0}=0$, the spheres reach a jammed state at $r_{0}^{\mathrm{J}}$. Then, from this disordered packing state, we generate a contact network by connecting the centers of the overlapping spheres (excluding rattlers) with springs at their rest lengths and angles. For sufficiently large systems, this procedure generates contact networks with connectivity $z=2d$, where $d$ is the system dimensionality. After generating the underlying network structure, we repeatedly remove randomly chosen bonds and any consequent dangling ends until the network reaches the desired average connectivity $z$ (a minimal sketch of this dilution step follows).
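The dilution step just described amounts to graph pruning toward a target mean coordination number. Below is a minimal sketch using the networkx library; the function name and the stopping rule are our own illustrative choices, not the reference implementation of Ref. 42.

```python
import random
import networkx as nx

def dilute_network(G, z_target, seed=0):
    """Randomly remove bonds, then prune dangling ends, until the mean
    connectivity z = 2E/N drops to z_target (sketch of the dilution step)."""
    rng = random.Random(seed)
    G = G.copy()
    def mean_z(g):
        return 2.0 * g.number_of_edges() / max(g.number_of_nodes(), 1)
    while G.number_of_edges() > 0 and mean_z(G) > z_target:
        u, v = rng.choice(list(G.edges()))
        G.remove_edge(u, v)             # remove one randomly chosen bond
        # prune any consequent dangling ends (nodes left with <= 1 bond)
        dangling = [n for n in G.nodes() if G.degree(n) <= 1]
        while dangling:
            G.remove_nodes_from(dangling)
            dangling = [n for n in G.nodes() if G.degree(n) <= 1]
    return G
```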
The bonds connecting neighbouring nodes are modelled as Hookean springs, and bending effects are accounted for by introducing a bending energy. The stretching and the bending energies of the network can be written as

$U_{\mathrm{stretching}}=\frac{\mu}{2}\sum_{i,j}\frac{(r_{ij}-r_{ij,0})^{2}}{r_{ij,0}},$ (1)

$U_{\mathrm{bending}}=\frac{\kappa}{2}\sum_{ijk}\frac{(\theta_{ijk}-\theta_{ijk,0})^{2}}{l_{ijk,0}},$ (2)

respectively, where $\mu$ and $\kappa$ denote the Hookean constant and the bending rigidity, respectively, $r_{ij,0}$ and $r_{ij}$ denote the length of the spring $(ij)$ connecting the $i$th and the $j$th node before and after stretching, respectively, $\theta_{ijk,0}$ and $\theta_{ijk}$ denote the angle between the springs $(ij)$ and $(jk)$ before and after bending, respectively, and $l_{ijk,0}=(r_{ij,0}+r_{jk,0})/2$ denotes the average rest length of the two connecting springs.

_WCA potential._ We randomly choose polymer nodes with number density $n_{0}$ and replace them with magnetic particles, i.e., the magnetic particles are uniformly distributed in the prepared state. The diameters of the magnetic particles are set as $\sigma_{\mathrm{m}}=1.0\sigma$, and those of the other, non-magnetic nodes as $\sigma_{\mathrm{n}}=0.2\sigma$. In our simulations, the truncated Weeks-Chandler-Andersen (WCA) potential 45 is used to model the non-bonded and non-magnetic interactions between particles:

$U(r)=\begin{cases}4\epsilon\left[\left(\frac{\sigma_{ij}}{r}\right)^{12}-\left(\frac{\sigma_{ij}}{r}\right)^{6}\right]+C&\text{if }r<r_{\mathrm{cutoff}},\\ 0&\text{if }r\geq r_{\mathrm{cutoff}},\end{cases}$

where $r_{\mathrm{cutoff}}=2^{1/6}\sigma_{ij}$ is the cutoff distance for a given pair, $\sigma_{ij}=(\sigma_{i}+\sigma_{j})/2$ is the average diameter of particles $i$ and $j$, and $C$ is a constant ensuring the continuity of the potential energy at the cutoff distance. In this work we take $\epsilon$ and $\sigma$ as the units of energy and length, respectively.

_Magnetic dipole-dipole interactions._ The superparamagnetic particles are magnetized under an external magnetic field $\bm{B}_{\mathrm{ext}}$, acquiring magnetic moments $\bm{m}=\chi v_{\mathrm{p}}\bm{B}_{\mathrm{ext}}/\mu_{0}$ pointing along the direction of the field, where $\chi$ is the magnetic susceptibility, $v_{\mathrm{p}}$ is the particle volume and $\mu_{0}$ is the vacuum permeability. With the moments aligned along the field direction, the pair interaction between magnetic particles is

$U_{\mathrm{dd}}(\bm{r}_{ij})=\frac{\mu_{0}}{4\pi}\left[\frac{1}{r_{ij}^{3}}(\bm{m}_{i}\cdot\bm{m}_{j})-\frac{3}{r_{ij}^{5}}(\bm{m}_{i}\cdot\bm{r}_{ij})(\bm{m}_{j}\cdot\bm{r}_{ij})\right].$ (3)

The magnetic dipole-dipole interaction between two magnetic particles can be either attractive or repulsive, depending on the orientation of their separation $\bm{r}_{ij}$ relative to the moments.
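As a sanity check on the interaction model, the WCA pair energy and the dipole-dipole energy of Eq. (3) can be evaluated with a few lines of Python (a minimal sketch in reduced units; function names are ours, and $\mu_0$ is left as a parameter):

```python
import numpy as np

def wca_energy(r, sigma_ij, epsilon=1.0):
    """Purely repulsive WCA pair energy: the LJ potential truncated at
    r_cut = 2^(1/6) sigma_ij and shifted up by C = epsilon so that it
    vanishes continuously at the cutoff."""
    r_cut = 2.0 ** (1.0 / 6.0) * sigma_ij
    if r >= r_cut:
        return 0.0
    sr6 = (sigma_ij / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6) + epsilon

def dipole_energy(m_i, m_j, r_vec, mu0=1.0):
    """Point dipole-dipole interaction energy, Eq. (3)."""
    r = np.linalg.norm(r_vec)
    return (mu0 / (4.0 * np.pi)) * (
        np.dot(m_i, m_j) / r**3
        - 3.0 * np.dot(m_i, r_vec) * np.dot(m_j, r_vec) / r**5
    )

m = np.array([0.0, 0.0, 1.0])
print(dipole_energy(m, m, np.array([0.0, 0.0, 1.5])))  # head-to-tail: negative (attractive)
print(dipole_energy(m, m, np.array([1.5, 0.0, 0.0])))  # side-by-side: positive (repulsive)
```

The two evaluations illustrate the sign structure noted above: dipoles aligned head-to-tail along the field attract, while side-by-side dipoles repel, which is what drives chain formation along the field direction.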
_Molecular dynamics simulations._ We perform the molecular dynamics simulations at a coarse-grained level based on the above model setup, using the LAMMPS software 46. The simulations are done in the canonical _NVT_ ensemble using a Langevin thermostat 47, 48. The equation of motion for each particle is

$m\ddot{\bm{r}}=-\gamma\dot{\bm{r}}+\bm{F}+\bm{\xi},$ (4)

where $m$ denotes the mass of the particle, $\gamma$ denotes the friction constant, $\bm{F}$ denotes the forces exerted by other particles and the polymer network, and $\bm{\xi}$ denotes the thermal noise due to random collisions with implicit solvent molecules, which obeys the fluctuation-dissipation theorem. Periodic boundary conditions are adopted in the simulations, and the detailed simulation procedure, including simulation parameters, can be found in the supplement 49.

## 3 Results and discussions

### 3.1 Internal microscopic structures

The elastic properties, e.g., the shear modulus and the bulk modulus, of the magnetic gels before applying any external magnetic field can be controlled by changing the connectivity of the polymer networks, as shown in Fig. 2(a). By increasing the connectivity of the polymer network, both the shear and the bulk modulus increase, in the approximate form $\propto(z-z_{\mathrm{c}})^{3}$, with $z_{\mathrm{c}}\simeq 3.2$ the critical connectivity for the polymer network to exhibit finite elasticity 50, 42.

Figure 2: (a) Dependence of the shear modulus $G$ (squares), bulk modulus $K$ (triangles) and $K+4G/3$ (circles) of the polymer gels on the network connectivity $z$. (b) Chain ratio as a function of the magnetic moments of the particles, in polymer networks with different connectivity. Insets denote the microscopic structures of the magnetic particles and the displacement field in the cross-sectional plane. (c) Chain ratio as a function of the magnetic moments and the network elasticity. The red line denotes the physical conditions for chain formation [Equation (5)], and the white diamonds correspond to simulation results at the critical chain ratio, $\eta=0.15$.

As shown in Fig. 2(b), the magnetic particles can form chain-like structures when the strength of the magnetic field (and hence the magnetic moments of the superparamagnetic particles) is increased, once the magnetic dipole-dipole interactions between the magnetic particles dominate the network elasticity and the thermal fluctuations; this phenomenon resembles the clustering instability found in magnetic microswimmers, where the microswimmers can form clusters when magnetic dipole-dipole interactions dominate thermal fluctuations 51, 52. Due to the displacements of the magnetic particles during chain formation, the polymer network is deformed, with the local deformation field shown in the insets of Fig. 2(b). The chain ratio, $\eta$, defined as the ratio of the number of magnetic particles forming chains to the total number of magnetic particles in the material, is introduced to characterize the amount of micro-structure in the magnetic gel and also the relative strength of the magnetic dipole-dipole interactions over the network elasticity. We use the following criterion to identify whether a particle belongs to a chain: the distance to its nearest neighboring magnetic particle is less than $2^{1/6}\sigma$, and the angle between the direction of the magnetic moment and the position vector connecting it to its nearest neighboring particle is less than $15^{\circ}$. As the network connectivity, and thus the elastic moduli of the material, decreases, the chain ratio increases for particles with the same magnetic moments.
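The chain-ratio measurement can be sketched as below (an $O(n^{2})$ nearest-neighbour search for clarity; treating the two antiparallel senses of the bond direction as equivalent is our reading of the angular criterion):

```python
import numpy as np

def chain_ratio(pos, field_dir, sigma=1.0, max_angle_deg=15.0):
    """Fraction of magnetic particles in chains.  A particle counts if its
    nearest magnetic neighbour is closer than 2^(1/6)*sigma and the
    centre-to-centre vector deviates from the moment (field) direction
    by less than max_angle_deg (either parallel sense, our assumption)."""
    r_cut = 2.0 ** (1.0 / 6.0) * sigma
    cos_min = np.cos(np.deg2rad(max_angle_deg))
    n = len(pos)
    count = 0
    for i in range(n):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        dist[i] = np.inf                 # exclude the particle itself
        j = int(np.argmin(dist))         # nearest magnetic neighbour
        bond_dir = d[j] / dist[j]
        if dist[j] < r_cut and abs(np.dot(bond_dir, field_dir)) > cos_min:
            count += 1
    return count / n
```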
By systematically changing the elastic moduli of the network and the magnetic moments of the particles, one can obtain the phase diagram showing the physical conditions for chain formation, as shown in Fig. 2(c). In the simulations, $\eta=0.15$ is taken as the threshold chain ratio denoting chain formation, and one can observe from Fig. 2(c) that inclusions carrying large magnetic moments in soft polymer gels can easily form chain-like structures, coinciding with the theoretical condition for chain formation derived in a previous work 24,

$\frac{\mu_{0}n_{0}m^{2}}{k_{B}T}>\frac{v_{\mathrm{p}}}{k_{B}T}(K+4G/3)+1.$ (5)

One can also check how the number density of magnetic particles in the prepared state, $n_{0}$, influences the formation of the micro-structures in the gel. As shown in Fig. 3(a), the chain ratio is larger for systems characterized by a higher number density $n_{0}$, since the magnetic dipole-dipole interactions become stronger due to the decreased distances between the magnetic particles. The physical conditions for chain formation in the plane of $(n_{0},m)$ are provided in Fig. 3(b), which also agree with our theoretical prediction.

Figure 3: (a) Chain ratio $\eta$ as a function of the magnetic moments $m$ of the particles at different number densities. (b) Dependence of the chain ratio on the magnetic moments and the number density of the magnetic particles. The red line denotes the physical conditions for chain formation [Equation (5)], and the white diamonds correspond to simulation results at the critical chain ratio, $\eta=0.15$.

### 3.2 Macroscopic mechanical properties

The macroscopic mechanical properties of magnetic gels can become reinforced due to the formation of microscopic chains 4, 5, 6, 7. Here we investigate such effects by studying how magnetic gels consisting of particles carrying different magnetic moments respond under shear deformations. As shown in Fig. 4(a), the stress-strain relations of magnetic gels change if the magnetic moments of the particles are different. Magnetic gels with chain formation exhibit obvious mechanical reinforcement (larger stresses, compared with those without chain formation, at the same strains) and weak anisotropy, i.e., mechanical responses that depend on the plane of the applied shear deformation. By defining the shear modulus of the material as $G_{\alpha\beta}=\frac{d\sigma_{\alpha\beta}}{d\varepsilon_{\alpha\beta}}|_{\varepsilon_{\alpha\beta}=0}$, one can obtain the dependence of this modulus on the magnetic moments of the particles, as shown in Fig. 4(b). A weak anisotropy in the shear moduli appears at $m_{c}\simeq 0.27$, corresponding to a chain ratio of $\eta=0.15$ (the same criterion as used for chain formation). Compared with a material containing inherent chains in its relaxed state (springs not deformed) 32, the anisotropy in the mechanical response here is weaker, since all the elastic springs are already distorted when the chains are formed in the current setup. More importantly, for $m\leq m_{c}$, the shear modulus remains almost unchanged upon increasing the magnetic moments, but for $m>m_{c}$, the shear modulus increases quickly with the magnetic moments in the approximate form

$G_{\alpha\beta}(m)=G_{\alpha\beta,\infty}\frac{(m-m_{c})^{2}}{a_{m}+(m-m_{c})^{2}},$ (6)

where $G_{\alpha\beta,\infty}$, $a_{m}$ and $m_{c}$ are fitting parameters for specific materials 6, 53, 32.
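To illustrate how Equation (6) can be used in practice, the following sketch fits it to a hypothetical sweep of the magnetic contribution to the shear modulus (synthetic data generated for illustration only; any baseline modulus below $m_{c}$ is assumed to have been subtracted off):

```python
import numpy as np
from scipy.optimize import curve_fit

def modulus_fit(m, G_inf, a_m, m_c):
    """Equation (6) above the onset moment m_c; below m_c the magnetic
    contribution to the modulus is taken as zero."""
    dm = np.clip(m - m_c, 0.0, None)
    return G_inf * dm**2 / (a_m + dm**2)

# hypothetical (moment, modulus) data from a simulation sweep
m_data = np.linspace(0.1, 1.0, 19)
G_data = modulus_fit(m_data, 5.0, 0.05, 0.27)
G_data += 0.02 * np.random.default_rng(1).normal(size=m_data.size)

popt, _ = curve_fit(modulus_fit, m_data, G_data, p0=(4.0, 0.1, 0.3))
print("G_inf=%.2f  a_m=%.3f  m_c=%.2f" % tuple(popt))
```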
Figure 4: (a) Stress-strain relations of magnetic gels without (dashed lines) and with (solid lines) chain formation, undergoing shear deformation in three orthogonal planes. (b) Dependence of the shear modulus $G_{\alpha\beta}$ on the magnetic moment of the particles, with lines fitted by Equation (6). (c) Comparison between the magnetic interactions and the polymer elasticity in terms of the shear modulus of the magnetic gels. Lines are fitted with the function $1+A/(z-z_{c})^{\alpha}$, with $A$ and $\alpha$ as fitting parameters.

By changing the network connectivity $z$, we can change the elasticity of the polymer gel in the prepared state, based on which one can compare the effects of the magnetic interactions between the embedded particles and of the polymer elasticity on the mechanical properties of magnetic gels. As shown in Fig. 4(c), the contribution of the magnetic interactions to the shear modulus of a magnetic gel in which magnetic chains are formed is significant for $z\simeq z_{\mathrm{c}}$, and becomes less important as the network connectivity increases, due to the increased elasticity of the polymer network itself.

## 4 Summary

We have studied the microscopic structure formation and the macroscopic mechanical responses of magnetic gels by utilising coarse-grained molecular dynamics simulations. By increasing the strength of the magnetic dipole-dipole interactions between the particles, microscopic chain structures can form, which induce reinforced mechanical responses of the magnetic gels. This work can not only help to understand the connection between the microscopic structure and the macroscopic mechanical properties of magnetic gels, but can also guide industrial fabrications of magnetic gels with desired mechanical responses and their control by applying external magnetic fields for practical applications.

## 5 Acknowledgments

F. M. acknowledges support from the National Natural Science Foundation of China (Grant No. 12275332, 12047503, and 12247130), the Chinese Academy of Sciences, the Max Planck Society (Max Planck Partner Group), Wenzhou Institute (Grant No. WIUCASQD2023009) and the Beijing National Laboratory for Condensed Matter Physics (2023BNLCMPKF005). X. W. acknowledges support from the China Postdoctoral Science Foundation (Grant No. 2023M743448) and the National Natural Science Foundation of China (Grant No. 12347170). P. T. acknowledges financial support from the European Research Council (Grant No. 811234) and from the Generalitat de Catalunya via the program “ICREA Academia”. The computations of this work were conducted on the HPC cluster of ITP-CAS.

## References

* Brand et al. 2011 Brand, H.; Martinoty, P.; Pleiner, H. _Cross-Linked Liquid Crystalline Systems: From Rigid Polymer Networks to Elastomers_ ; 2011; pp 529–563
* Thévenot et al. 2013 Thévenot, J.; Oliveira, H.; Sandre, O.; Lecommandoux, S. Magnetic responsive polymer composite materials. _Chemical Society Reviews_ 2013, _42_ , 7099
* Weeber et al. 2018 Weeber, R.; Hermes, M.; Schmidt, A. M.; Holm, C. Polymer architecture of magnetic gels: a review. _Journal of Physics: Condensed Matter_ 2018, _30_ , 063002
* Jolly et al. 1996 Jolly, M. R.; Carlson, J. D.; Muñoz, B. C. A model of the behaviour of magnetorheological materials. _Smart Materials and Structures_ 1996, _5_ , 607–614
* Collin et al. 2003 Collin, D.; Auernhammer, G. K.; Gavat, O.; Martinoty, P.; Brand, H. R. Frozen-In Magnetic Order in Uniaxial Magnetic Gels: Preparation and Physical Properties.
_Macromolecular Rapid Communications_ 2003, _24_ , 737–741 * Varga et al. 2005 Varga, Z.; Filipcsei, G.; Zrínyi, M. Smart composites with controlled anisotropy. _Polymer_ 2005, _46_ , 7779–7787 * Varga et al. 2006 Varga, Z.; Filipcsei, G.; Zrínyi, M. Magnetic field sensitive functional elastomers with tuneable elastic modulus. _Polymer_ 2006, _47_ , 227–233 * Raikher and Stolbov 2000 Raikher, Y. L.; Stolbov, O. Magnetodeformation effect in a ferroelastic material. _Technical Physics Letters_ 2000, _26_ , 156–158 * Wang et al. 2011 Wang, Z.; Deng, B.; Yao, K. Physical model of plastic deformation on magnetization in ferromagnetic materials. _Journal of Applied Physics_ 2011, _109_ * Huang et al. 2015 Huang, S.; Pessot, G.; Cremer, P.; Weeber, R.; Holm, C.; Nowak, J.; Odenbach, S.; Menzel, A. M.; Auernhammer, G. K. Buckling of paramagnetic chains in soft gels. _Soft Matter_ 2015, _12_ , 228–237 * Kim et al. 2018 Kim, Y.; Yuk, H.; Zhao, R.; Chester, S. A.; Zhao, X. Printing ferromagnetic domains for untethered fast-transforming soft materials. _Nature_ 2018, _558_ , 274–279 * Alapan et al. 2019 Alapan, Y.; Yasa, O.; Yigit, B.; Yasa, I. C.; Erkoc, P.; Sitti, M. Microrobotics and Microorganisms: Biohybrid Autonomous Cellular Robots. _Annual Review of Control, Robotics, and Autonomous Systems_ 2019, _2_ , 205–230 * Kim and Zhao 2022 Kim, Y.; Zhao, X. Magnetic Soft Materials and Robots. _Chemical Reviews_ 2022, _122_ , 5317–5364 * Gao et al. 2014 Gao, R.; Jiang, Y.; Zhao, Y. Magnetic field sensor based on anti-resonant reflecting guidance in the magnetic gel-coated hollow core fiber. _Optics letters_ 2014, _39_ , 6293–6296 * Hankiewicz et al. 2016 Hankiewicz, J.; Celinski, Z.; Stupic, K.; Anderson, N.; Camley, R. Ferromagnetic particles as magnetic resonance imaging temperature sensors. _Nature communications_ 2016, _7_ , 12415 * Gloag et al. 2019 Gloag, L.; Mehdipour, M.; Chen, D.; Tilley, R. D.; Gooding, J. J. Advances in the application of magnetic nanoparticles for sensing. _Advanced Materials_ 2019, _31_ , 1904385 * Shigetomi et al. 2020 Shigetomi, S.; Takahashi, H.; Tsumori, F. Magnetic actuator using double network gel. _Journal of Photopolymer Science and Technology_ 2020, _33_ , 193–197 * He et al. 2023 He, Y.; Tang, J.; Hu, Y.; Yang, S.; Xu, F.; Zrínyi, M.; Chen, Y. M. Magnetic hydrogel-based flexible actuators: A comprehensive review on design, properties, and applications. _Chemical Engineering Journal_ 2023, _462_ , 142193 * Li et al. 2013 Li, Y.; Huang, G.; Zhang, X.; Li, B.; Chen, Y.; Lu, T.; Lu, T. J.; Xu, F. Magnetic hydrogels and their potential biomedical applications. _Advanced Functional Materials_ 2013, _23_ , 660–672 * Sung et al. 2021 Sung, B.; Kim, M.-H.; Abelmann, L. Magnetic microgels and nanogels: Physical mechanisms and biomedical applications. _Bioengineering & translational medicine_ 2021, _6_ , e10190 * Veloso et al. 2021 Veloso, S. R.; Andrade, R. G.; Castanheira, E. M. Review on the advancements of magnetic gels: towards multifunctional magnetic liposome-hydrogel composites for biomedical applications. _Advances in Colloid and Interface Science_ 2021, _288_ , 102351 * Auernhammer et al. 2006 Auernhammer, G. K.; Collin, D.; Martinoty, P. Viscoelasticity of suspensions of magnetic particles in a polymer: Effect of confinement and external field. _The Journal of Chemical Physics_ 2006, _124_ , 204907 * Weeber et al. 2013 Weeber, R.; Klinkigt, M.; Kantorovich, S.; Holm, C. 
Microstructure and magnetic properties of magnetic fluids consisting of shifted dipole particles under the influence of an external magnetic field. _The Journal of Chemical Physics_ 2013, _139_ , 214901 * Junot et al. 2022 Junot, G.; Wei, X.; Ortín, J.; Golestanian, R.; Wang, Y.; Tierno, P.; Meng, F. Elastically-mediated collective organisation of magnetic microparticles. _Soft Matter_ 2022, _18_ , 5171–5176 * Sunaryono et al. 2016 Sunaryono; Taufiq, A.; Putra, E. G. R.; Okazawa, A.; Watanabe, I.; Kojima, N.; Rugmai, S.; Soontaranon, S.; Zainuri, M.; Triwikantoro; others Small-angle X-ray scattering study on PVA/Fe3O4 magnetic hydrogels. _Nano_ 2016, _11_ , 1650027 * Lawrence et al. 2018 Lawrence, M. B.; Abbas, S.; Aswal, V. Structure of polyvinyl alcohol-borax ferrogels: A small angle neutron scattering study. _Journal of Polymer Research_ 2018, _25_ , 1–7 * Sorokina et al. 2012 Sorokina, O. N.; Kovarski, A. L.; Lagutina, M. A.; Dubrovskii, S. A.; Dzheparov, F. S. Magnetic nanoparticles aggregation in magnetic gel studied by electron magnetic resonance (EMR). _Applied Sciences_ 2012, _2_ , 342–350 * Ramanujan and Lao 2006 Ramanujan, R.; Lao, L. The mechanical behavior of smart magnet–hydrogel composites. _Smart materials and structures_ 2006, _15_ , 952 * Wu et al. 2011 Wu, J.; Gong, X.; Fan, Y.; Xia, H. Physically crosslinked poly (vinyl alcohol) hydrogels with magnetic field controlled modulus. _Soft Matter_ 2011, _7_ , 6205–6212 * Ponton et al. 2005 Ponton, A.; Bee, A.; Talbot, D.; Perzynski, R. Regeneration of thixotropic magnetic gels studied by mechanical spectroscopy: the effect of the pH. _Journal of Physics: Condensed Matter_ 2005, _17_ , 821 * Zhang et al. 2022 Zhang, G.; Zhang, Z.; Sun, M.; Yu, Y.; Wang, J.; Cai, S. The influence of the temperature on the dynamic behaviors of magnetorheological gel. _Advanced Engineering Materials_ 2022, _24_ , 2101680 * Ivaneyko et al. 2014 Ivaneyko, D.; Toshchevikov, V.; Borin, D.; Saphiannikova, M.; Heinrich, G. Mechanical Properties of Magneto-Sensitive Elastomers in a Homogeneous Magnetic Field: Theory and Experiment. _Macromolecular Symposia_ 2014, _338_ , 96–107 * Pessot et al. 2016 Pessot, G.; Löwen, H.; Menzel, A. M. Dynamic elastic moduli in magnetic gels: Normal modes and linear response. _Journal of Chemical Physics_ 2016, _145_ , 104904 * Jarkova et al. 2003 Jarkova, E.; Pleiner, H.; Müller, H. W.; Brand, H. R. Hydrodynamics of isotropic ferrogels. _Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics_ 2003, _68_ , 041706 * Romeis et al. 2016 Romeis, D.; Toshchevikov, V.; Saphiannikova, M. Elongated micro-structures in magneto-sensitive elastomers: a dipolar mean field model. _Soft Matter_ 2016, _12_ , 9364–9376 * Spieler et al. 2013 Spieler, C.; Kästner, M.; Goldmann, J.; Brummund, J.; Ulbricht, V. XFEM modeling and homogenization of magnetoactive composites. _Acta Mechanica_ 2013, _224_ , 2453–2469 * Wood and Camp 2011 Wood, D. S.; Camp, P. J. Modeling the properties of ferrogels in uniform magnetic fields. _Physical Review E - Statistical, Nonlinear, and Soft Matter Physics_ 2011, _83_ , 011402 * Cremer et al. 2017 Cremer, P.; Heinen, M.; Menzel, A. M.; Löwen, H. A density functional approach to ferrogels. _Journal of Physics: Condensed Matter_ 2017, _29_ , 275102 * Weeber et al. 2012 Weeber, R.; Kantorovich, S.; Holm, C. Deformation mechanisms in 2D magnetic gels studied by computer simulations. _Soft Matter_ 2012, _8_ , 9923–9932 * Pessot et al. 
2015 Pessot, G.; Weeber, R.; Holm, C.; Löwen, H.; Menzel, A. M. Towards a scale-bridging description of ferrogels and magnetic elastomers. _Journal of Physics: Condensed Matter_ 2015, _27_ , 325105 * Weeber et al. 2015 Weeber, R.; Kantorovich, S.; Holm, C. Ferrogels cross-linked by magnetic particles: Field-driven deformation and elasticity studied using computer simulations. _Journal of Chemical Physics_ 2015, _143_ , 154901 * Shivers et al. 2019 Shivers, J. L.; Feng, J.; Sharma, A.; MacKintosh, F. C. Normal stress anisotropy and marginal stability in athermal elastic networks. _Soft Matter_ 2019, _15_ , 1666–1675 * Dagois-Bohy et al. 2012 Dagois-Bohy, S.; Tighe, B. P.; Simon, J.; Henkes, S.; Van Hecke, M. Soft-sphere packings at finite pressure but unstable to shear. _Physical review letters_ 2012, _109_ , 095703 * Koeze et al. 2016 Koeze, D.; Vågberg, D.; Tjoa, B.; Tighe, B. Mapping the jamming transition of bidisperse mixtures. _Europhysics Letters_ 2016, _113_ , 54001 * Weeks et al. 1971 Weeks, J. D.; Chandler, D.; Andersen, H. C. Role of Repulsive Forces in Determining the Equilibrium Structure of Simple Liquids. _The Journal of Chemical Physics_ 1971, _54_ , 5237–5247 * Thompson et al. 2022 Thompson, A. P.; Aktulga, H. M.; Berger, R.; Bolintineanu, D. S.; Brown, W. M.; Crozier, P. S.; in ’t Veld, P. J.; Kohlmeyer, A.; Moore, S. G.; Nguyen, T. D.; Shan, R.; Stevens, M. J.; Tranchida, J.; Trott, C.; Plimpton, S. J. LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. _Comp. Phys. Comm._ 2022, _271_ , 108171 * Grønbech-Jensen and Farago 2013 Grønbech-Jensen, N.; Farago, O. A simple and effective Verlet-type algorithm for simulating Langevin dynamics. _Molecular Physics_ 2013, _111_ , 983–991 * Grønbech-Jensen 2020 Grønbech-Jensen, N. Complete set of stochastic Verlet-type thermostats for correct Langevin simulations. _Molecular Physics_ 2020, _118_ , 8 * 49 _See supplementary materials._ * Sharma et al. 2016 Sharma, A.; Licup, A. J.; Jansen, K. A.; Rens, R.; Sheinman, M.; Koenderink, G. H.; MacKintosh, F. C. Strain-controlled criticality governs the nonlinear mechanics of fibre networks. _Nature Physics_ 2016, _12_ , 584–587 * Meng et al. 2018 Meng, F.; Matsunaga, D.; Golestanian, R. Clustering of Magnetic Swimmers in a Poiseuille Flow. _Physical Review Letters_ 2018, _120_ , 188101 * Meng et al. 2021 Meng, F.; Matsunaga, D.; Mahault, B.; Golestanian, R. Magnetic Microswimmers Exhibit Bose-Einstein-like Condensation. _Physical Review Letters_ 2021, _126_ , 078001 * Varga et al. 2006 Varga, Z.; Filipcsei, G.; Zrínyi, M. Magnetic field sensitive functional elastomers with tuneable elastic modulus. _Polymer_ 2006, _47_ , 227–233
# PointMatch: A Consistency Training Framework for Weakly Supervised Semantic Segmentation of 3D Point Clouds

Yushuang Wu123∗ Shengcai Cai13∗ Zizheng Yan123 Guanbin Li43 Yizhou Yu56 Xiaoguang Han123† Shuguang Cui123 1SSE, CUHK-Shenzhen 2FNii, CUHK-Shenzhen 3Shenzhen Research Institute of Big Data 4Sun Yat-sen University 5The University of Hong Kong 6DeepWise {yushuangwu, zizhengyan<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>{hanxiaoguang, <EMAIL_ADDRESS>

###### Abstract

Semantic segmentation of point clouds usually relies on dense annotation, which is exhausting and costly, so solutions for the weakly supervised scheme, with only sparse points annotated, have attracted wide attention. Existing works start from the given labels and propagate them to highly-related but unlabeled points, with the guidance of data, e.g., inter-point relations. However, this suffers from (i) inefficient exploitation of data information, and (ii) a strong reliance on labels, and is thus easily suppressed when given much fewer annotations. Therefore, we propose a novel framework, PointMatch, that stands on both the data and the labels, by applying consistency regularization to sufficiently probe information from the data itself while leveraging the weak labels as assistance at the same time. By doing so, meaningful information can be learned from both data and labels for better representation learning, which also makes the model more robust to the extent of label sparsity. Simple yet effective, the proposed PointMatch achieves state-of-the-art performance under various weakly supervised schemes on both the ScanNet-v2 and S3DIS datasets, especially on the settings with extremely sparse labels, e.g., surpassing SQN by 21.2% and 17.2% on the 0.01% and 0.1% settings of ScanNet-v2, respectively.

∗Equal contribution. †Corresponding author.

## 1 Introduction

Semantic segmentation of 3D point clouds is crucial for intelligent robots' understanding of scenes in the real world. Great efforts have been devoted to the fully supervised scheme, but it requires exhausting and costly per-point annotations (_e.g_. around 22.3 minutes to annotate an indoor scene on average [5]). Thus, weakly supervised 3D semantic segmentation now receives increasing attention, where only limited point-level annotations are provided in each point cloud.

Figure 1: (a), (b) the performance of PointMatch on the ScanNet-v2 and S3DIS datasets over various weakly supervised semantic segmentation settings: annotating 0.01%, 0.1% of points [11], 20 points per scene [10], and “1thing1click” [26]. (c), (d) a comparison between previous works and the proposed approach.

Recently, several approaches have been proposed for weakly supervised point cloud semantic segmentation with different kinds of weak labels, including projected 2D image [44], subcloud-level [48], segment-level [38], and point-level [52, 10, 26, 11] supervision. In this paper, we focus on addressing the setting of sparse point-level labels, which is one of the most convenient annotation schemes in applications. The key challenge of this task is the difficulty of learning a robust model given very sparse supervision in the point cloud (_e.g_. 0.1%, 0.01% of points annotated in [11] and around 0.02% in [26]). Existing solutions are mainly committed to alleviating the label sparsity by reusing the limited supervision, _i.e_., first probing highly-related points [11] or super-voxels [26] and allowing them to share the same training labels.
However, this line of works is explicitly constructed on label propagation and employs point cloud data as the propagation guidance, which suffers from two issues: (i) the insufficient exploitation of data information limits the learning efficiency, and (ii) the propagated labels strongly rely on the original annotation scale, so the performance is easily suppressed when much fewer labels are given. Therefore, we propose to probe information from both the labels and the data itself for more efficient and robust representation learning. Recently, consistency training has been acknowledged as a powerful algorithmic paradigm for robust learning from label-scarce data, _e.g_. in unsupervised/semi-supervised learning [12, 8, 50, 36] and unsupervised/semi-supervised domain adaptation [6, 35, 21, 22]. It works by forcing the model to make consistent predictions under different perturbations/augmentations of the input sample (named different views), where the prediction in one view usually serves as the pseudo-label of the other view. Inspired by this, we propose a novel consistency training framework, PointMatch, for weakly supervised 3D semantic segmentation. Given a whole scene of point cloud with sparse labels, PointMatch employs the per-point prediction in one view as the other's pseudo-label to encourage the predictive consistency between two views of a scene. Such consistency facilitates (i) robustness to easily-perturbed low-level input features and (ii) a stronger capability in learning useful high-level representations to keep the prediction consistent. Besides, the provided labels act as extra supervision to assist high-level semantic feature discrimination, which also benefits the representation learning from data. By doing so, the reliance on the given labels is relieved and more information is probed from the point cloud data itself. Originating from the per-point prediction in one view, the pseudo-label should be of high quality to provide positive guidance for the other view, whereas there exist considerable mispredictions, especially at the early learning stage. Thus, we exploit the inherent structure of the point cloud to improve the pseudo-label quality, by integrating the super-point grouping information, where similar points are clustered by low-level features (_e.g_. position and color) into the same group and are assumed to have the same semantic meaning. Specifically, the grouping information is used to correct the minor predictions that diverge from the “mainstream” in the super-point. Despite this good property, the super-point-aware pseudo-label actually introduces noise from the pretext super-point generation. Therefore, to fully utilize these two types of pseudo-labels, we design an adaptive pseudo-labeling mechanism, where the model is encouraged to believe the super-point-aware pseudo-label more at the beginning, and gradually resorts to its raw prediction when the model itself is reliable enough. Extensive experiments and analysis on the ScanNet-v2 [5] and S3DIS [1] datasets validate the effectiveness of the proposed approach. As shown in Fig. 1, the proposed PointMatch significantly surpasses the state of the art on various weakly supervised schemes and, impressively, shows great robustness given extremely sparse labels. The contributions of this paper are listed as follows: * • We propose a novel consistency training framework, PointMatch, for weakly supervised 3D semantic segmentation, which facilitates the network to learn robust representations from sparse labels and the point cloud data itself.
* • We introduce super-point information to promote the pseudo-label quality in our framework, and it is employed in an adaptive manner to fully utilize the advantages of both types of pseudo-labels. * • Extensive experiments validate the effectiveness and superiority of PointMatch, and the proposed approach achieves significant improvements beyond the state of the art over various weakly supervised settings.

Figure 2: The overview of PointMatch. (a) the input point cloud; (b) the view $A$ augmented from the input point cloud; (c) view $B$ generated via another augmentation; (d) the point-wise pseudo-label; (e) the super-point-wise pseudo-label; (f) the weak supervision (“1thing1click” setting), where points in gray are unlabeled ones and other colors indicate different semantic meanings.

## 2 Related Work

#### Fully Supervised 3D Semantic Segmentation

Semantic segmentation approaches for 3D point clouds can be mainly classified into two groups: point-based and voxel-based methods. Point-based methods [30, 31, 23, 49, 45, 42, 24] apply convolutional kernels to a local region of points for feature extraction, where the neighbors of a point are computed via k-NN or spherical search. In the case of voxel-based methods [7, 4, 46, 14], the points in 3D space are first transformed into voxel representations so that standard CNNs can be adopted to process the structured voxels. In either point-based or voxel-based methods, feature aggregation is performed in the Euclidean space, while some recent works [15, 20, 32, 13] consider geodesic information for better feature representation. More recently, the Transformer structure [54] has also been proposed for point clouds, as an alternative to the classic convolutional structure. However, most of the above methods are designed for the fully supervised scheme, while annotation on point clouds is exhausting and costly, especially in the application of semantic segmentation, where the scene (indoor or outdoor) point cloud is usually of a large scale. In this work, we focus on weakly supervised point cloud segmentation, where only very sparse points are annotated in each scene.

#### Weakly Supervised 3D Semantic Segmentation

Existing works explore 3D semantic segmentation with various types of weak supervision, including 2D image [44], subcloud-level [48], segment-level [38], and point-level supervision [52, 11, 26, 34]. The first three types can be grouped into indirect annotations [11]. The work of [44] utilizes the annotations on the projected 2D image of a point cloud, with only a single view per sample. In [48], a classifier is trained first with sub-cloud labels, from which point-level pseudo-labels can be generated via class activation mapping techniques [55]. Alternatively, the work of [38] pre-generates segments/super-points to extend sparse click annotations into segment-level supervision, and groups unlabeled segments into the relevant nearby labeled ones for label sharing. For point-level weak supervision, the work of [52] proposes to use only 10% of labels by learning gradient approximation and utilizing low-level smoothness constraints. A harder setting with a much lower label ratio, 1‰, is further investigated in [11], where a Semantic Query Network (SQN) is proposed based on leveraging the semantic similarity between neighboring points. Another work, OTOC [26], proposes a novel weakly supervised setting, One Thing One Click (“1thing1click”), _i.e_., with only one point annotated for each instance in the scene.
They employ an extra network branch to probe the relation between super-points and propagate labels among highly-related ones. Besides, the authors of [34] propose an active learning approach that annotates selected super-points with a limited budget to maximize model performance. Another line of work is devoted to self-supervised pre-training of 3D point clouds [33, 25, 51, 10, 53]. The pre-training usually needs weak or even no labels and provides a better network initialization for the downstream tasks. Existing point-level weakly supervised 3D semantic segmentation methods act on label propagation by leveraging the relation between points/super-points. In contrast, the proposed PointMatch takes a novel route based on consistency regularization to better probe the information in the point cloud data itself, and it alleviates the reliance on the given labels.

#### Consistency Training

Consistency training is a powerful algorithmic paradigm proposed for robust learning from label-scarce data. It is constructed on enforcing prediction stability under different input transformations [47], _e.g_. adversarial perturbations [28] or data augmentations [36, 50], in the manner of pseudo-labeling, _i.e_., using the prediction of one transformation as the fitting target of the other. Thus it combines the advantages of both consistency regularization and pseudo-labeling (or self-training). This approach has been applied in many domains, such as semi-supervised learning (SSL) [3, 2, 36, 50], unsupervised learning (USL) [12, 8], unsupervised domain adaptation (UDA) [6, 35], and semi-supervised domain adaptation (SSDA) [21, 22], all of which prove the effectiveness of consistency training in learning high-quality representations from label-scarce data. More recently, some works extend consistency training to other tasks, such as unsupervised domain adaptation for image segmentation [27] and semi-supervised 3D object detection [43]. To our knowledge, this is the first time that consistency training has been applied to the weakly supervised semantic segmentation of 3D point clouds. Different from the previous works, consistency training is here used in a weakly supervised scenario where limited point-wise supervision is provided in each training sample. In addition, our work properly leverages the super-point grouping information in point clouds to further improve the whole framework.

## 3 Methodology

### 3.1 Problem Definition

We first formulate the weakly supervised 3D semantic segmentation problem, taking the indoor scene scenario as an example. Given the point cloud $\mathbf{P}\in\mathbb{R}^{N\times D}$ of a scene of $N$ points with $D$-dimensional features, only partial points are annotated for training. The points with labels are denoted as $\{(\mathbf{x}_{i}^{l},y_{i}),i\in L\}$, and the other, unlabeled points are denoted as $\{\mathbf{x}_{i}^{u},i\in U\}$, where $L$ and $U$ are two index sets satisfying $L\cap U=\varnothing$ and $L\cup U=\langle N\rangle$ ($\langle N\rangle$ is a short form of $\{1,2,\cdots,N\}$, the same hereinafter). The target of the segmentation network $f$ is to predict the semantic category $y_{i}\in\langle C\rangle$ of each point $\mathbf{x}_{i}$, where $C$ is the number of possible categories. Taking the point cloud $\mathbf{P}$ as input, $f$ outputs the prediction probability $\mathbf{Q}\in[0,1]^{N\times C}$ over all $C$ classes, for all $N$ points of $\mathbf{P}$. Note that the summation of the values in each row of $\mathbf{Q}$ is equal to 1.
Denote the weak semantic label of the whole scene as $\mathbf{y}\in{\langle C\rangle}^{N}$ and its one-hot extension as $\mathbf{Y}\in\{0,1\}^{N\times C}$. To optimize $f$, a straightforward way is to compute the cross-entropy loss $\mathcal{L}_{ce}$ between $\mathbf{Q}$ and $\mathbf{Y}$, formulated as:

$\mathcal{L}_{ce}=\frac{1}{|L|}\sum_{i\in L}\text{cross-entropy}(\mathbf{Q}_{i},\mathbf{Y}_{i}),$ (1)

where $|L|$ represents the set size of $L$ and the subscript $i$ indicates the row index, so $\mathbf{Q}_{i}$ and $\mathbf{Y}_{i}$ are two $C$-class distributions corresponding to the $i$-th point. At the inference stage, the semantic segmentation result of a scene can be generated from $f$'s prediction by simply choosing the class with the highest score in each row of $\mathbf{Q}$.

To probe more information from the limited labels and the point cloud data itself, we design a novel framework, PointMatch, with the pipeline illustrated in Fig. 2. It consists of a consistency training scheme designed for weakly labeled point clouds and an adaptive pseudo-labeling mechanism incorporating super-point information, described in the following Sec. 3.2 and Sec. 3.3, respectively.

### 3.2 Consistency Training

The proposed consistency training framework focuses on better exploitation of the data itself, by encouraging the model's point-wise predictive consistency between two views of an input scene, through employing the prediction in one view as the pseudo-label of the other. Such a consistency training approach has three advantages: (i) various augmentations make the network robust to different kinds of perturbation on low-level input features; (ii) the consistency target facilitates the model's ability in extracting high-level semantic features from the point cloud data itself; (iii) the self-training process implicitly propagates sparse training signals to unlabeled points and provides dense pseudo-labels, which increases the learning stability. Formally, given a point cloud $\mathbf{P}\in\mathbb{R}^{N\times D}$, our PointMatch applies two different groups of data augmentations to create its two views $\mathbf{P}^{A}\in\mathbb{R}^{N\times D}$ and $\mathbf{P}^{B}\in\mathbb{R}^{N\times D}$, respectively. To avoid breaking the local structure of the point cloud too much, we perform scene-level augmentations like offsetting, scaling, rotation, flipping, jittering, _etc_. The obtained two views $\mathbf{P}^{A}$ and $\mathbf{P}^{B}$ are then fed into the 3D U-Net $f_{\theta}$ for point-wise semantic prediction, where $\theta$ denotes the network parameters. The network $f_{\theta}$ outputs the per-point probability distribution of $\mathbf{P}^{A}$, denoted as $\mathbf{Q}^{A}\in[0,1]^{N\times C}$, and similarly, $\mathbf{Q}^{B}\in[0,1]^{N\times C}$ is generated from $\mathbf{P}^{B}$:

$\mathbf{Q}^{A}=f_{\theta}(\mathbf{P}^{A}),\qquad\mathbf{Q}^{B}=f_{\theta}(\mathbf{P}^{B}).$ (2)

In the next step, we generate the pseudo-label of $\mathbf{Q}^{B}$ from $\mathbf{Q}^{A}$ to create the self-consistency loop. Specifically, the most-likely predicted category of each point (as well as its confidence score) is chosen to form the pseudo-label, _i.e_., the index of the highest value in each row of $\mathbf{Q}^{A}$. However, $\mathbf{Q}^{A}$ is usually noisy and even contains many uncertain predictions, so a direct use may provide negative guidance to $\mathbf{Q}^{B}$ and harm the whole learning scheme. Hence, we conduct a filtering operation to improve the pseudo-label quality, by ignoring those predictions with confidence lower than a threshold $\tau$. Denote the filtering mask as $\mathbf{m}\in[0,1]^{N}$, which is generated as follows:

$\mathbf{m}_{i}=\begin{cases}1,&\max(\mathbf{Q}^{A}_{i})\geq\tau,\\ 0,&\text{otherwise},\end{cases}\quad\forall i\in\langle N\rangle,$ (3)

where $i$ is the row index of $\mathbf{Q}^{A}$ and $\tau$ is set to 0.95 in our implementation. Given $\mathbf{m}$ and the one-hot extension of $\mathbf{Q}^{B}$'s pseudo-label, represented as $\widehat{\mathbf{Y}}^{B}\in\{0,1\}^{N\times C}$, the pseudo-labeling of $\mathbf{Q}^{B}$ can be conducted via a cross-entropy loss:

$\mathcal{L}_{pl}=\frac{1}{N}\sum_{i\in\langle N\rangle}\mathbf{m}_{i}\cdot\text{cross-entropy}(\mathbf{Q}^{B}_{i},\widehat{\mathbf{Y}}^{B}_{i}).$ (4)

Up to this point, we have been probing information only from the point cloud data itself for better data exploitation. The weak labels are then integrated to provide discriminative semantic information, by using $\mathbf{Y}$ as the supervision of $\mathbf{Q}^{A}$ via a cross-entropy loss as in Eq. 1. The parameters $\theta$ can then be optimized by minimizing the objective loss function $\mathcal{L}_{total}$:

$\min_{\theta}\mathcal{L}_{total}=\min_{\theta}\mathcal{L}_{ce}+\lambda\mathcal{L}_{pl},$ (5)

where $\lambda$ is a scalar weight balancing the two loss terms. As the learning process goes on, the model exploits the knowledge learned from the limited annotations to train itself via the enforced predictive consistency, and meanwhile implicitly propagates the sparse training signals to the whole scene via pseudo-labeling.
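A framework-agnostic sketch of one training step (Eqs. (1)–(5)) is given below; the backbone $f_{\theta}$ is abstracted as a callable returning per-point class probabilities, numpy is used only to make the loss computation concrete (no gradients are taken here), and all names are illustrative:

```python
import numpy as np

def cross_entropy(q, y_onehot, eps=1e-12):
    """Per-point cross-entropy between probabilities q (N x C) and
    one-hot targets y_onehot (N x C)."""
    return -np.sum(y_onehot * np.log(q + eps), axis=1)

def pointmatch_step(f, P_A, P_B, Y, labeled_mask, tau=0.95, lam=1.0):
    """One PointMatch step without super-points: view A supervises view B
    through confidence-filtered pseudo-labels (Eqs. 3-4), and the sparse
    weak labels supervise view A (Eq. 1)."""
    Q_A, Q_B = f(P_A), f(P_B)                          # Eq. (2)
    C = Q_A.shape[1]

    # Eq. (3): keep only confident pseudo-labels derived from view A
    m = (Q_A.max(axis=1) >= tau).astype(float)
    Y_hat_B = np.eye(C)[Q_A.argmax(axis=1)]            # one-hot pseudo-label

    L_pl = np.mean(m * cross_entropy(Q_B, Y_hat_B))    # Eq. (4)
    L_ce = np.mean(cross_entropy(Q_A[labeled_mask],    # Eq. (1)
                                 Y[labeled_mask]))
    return L_ce + lam * L_pl                           # Eq. (5)
```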
### 3.3 Adaptive Pseudo-Labeling

Although the framework above subtly facilitates the model's robust learning, we observe that there are still considerable mispredictions in the obtained pseudo-labels, especially at the early learning stage. One reason is that the previous training scheme is mainly constructed on the predictive consistency between each pair of single points, so the inter-point relation information is learned insufficiently. Therefore, we further exploit the super-point prior to introduce local structure information of the point cloud for generating pseudo-labels of higher quality. The super-points of a scene can be generated via unsupervised low-level clustering on the position and color information of each point. We refer to [19] for the manner of super-point generation, which is recommended for more details. Formally, given a point cloud $\mathbf{P}\in\mathbb{R}^{N\times D}$, we obtain a set of super-points $\{\mathbf{S}^{(i)}\},i\in\langle M\rangle$, where $M$ is the number of super-points and each $\mathbf{S}^{(i)}\in\mathbb{R}^{S^{(i)}\times D}$ includes $S^{(i)}$ $D$-dimensional points. Each point in $\mathbf{P}$ belongs to one super-point only, so $\mathbf{S}^{(i)}\cap\mathbf{S}^{(j)}=\varnothing,\forall i\neq j$, and the summation of all $S^{(i)}$ is equal to $N$. The obtained super-point information is then used to improve the quality of the point-wise pseudo-label $\widehat{\mathbf{Y}}^{B}$. Given the point-wise predictions in each super-point group, a voting operation is carried out to elect a “mainstream” category. The elected category is then propagated to all points in the group to obtain a super-point-wise pseudo-label $\widehat{\mathbf{Y}}^{B}_{\text{sp}}$.
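The voting step admits a very compact sketch (illustrative names; `sp_id` holds the super-point index of every point, and `point_labels` are the per-point argmax predictions):

```python
import numpy as np

def superpoint_vote(point_labels, sp_id):
    """Propagate the majority ('mainstream') category inside each
    super-point to all of its member points."""
    voted = point_labels.copy()
    for s in np.unique(sp_id):
        members = np.where(sp_id == s)[0]
        counts = np.bincount(point_labels[members])
        voted[members] = counts.argmax()   # elected category for this group
    return voted
```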
An illustrative example of $\widehat{\mathbf{Y}}^{B}$ and $\widehat{\mathbf{Y}}^{B}_{\text{sp}}$ is shown in Fig. 2 (d) and (e), respectively. It can be observed that $\widehat{\mathbf{Y}}^{B}_{\text{sp}}$ tends to have higher purity. Similar to Sec. 3.2, we preserve confident predictions to form high-quality super-point-wise pseudo-labels. Specifically, given $\mathbf{Q}^{B}$, the average probability distribution in each super-point is computed first, of which the category with the highest score is selected and propagated across the whole super-point. Then the filtering mask $\mathbf{m}^{\text{sp}}$ is generated by checking whether the confidence of each point is beyond a pre-defined threshold $\tau^{\text{sp}}$, similar to the computation in Eq. 3. Although the voting operation makes $\widehat{\mathbf{Y}}^{B}_{\text{sp}}$ more stable and accurate, it suffers from the inherent noise arising from the super-point generation process. Thus, the point-wise pseudo-labels may have higher accuracy when the model is strong enough. Accordingly, we further design an adaptive combination mechanism to exploit the advantages of both. At the early stage, the learning of $f_{\theta}$ relies on $\widehat{\mathbf{Y}}^{B}_{\text{sp}}$ via a cross-entropy loss $\mathcal{L}_{pl}^{\text{sp}}$:

$\mathcal{L}_{pl}^{\text{sp}}=\frac{1}{N}\sum_{i\in\langle N\rangle}\mathbf{m}^{\text{sp}}_{i}\cdot\text{cross-entropy}(\mathbf{Q}^{B}_{i},\widehat{\mathbf{Y}}^{B}_{\text{sp},i}).$ (6)

As the learning goes on, an adaptive weight $w$ is adopted to gradually incorporate $\mathcal{L}_{pl}$ (Eq. 4) and abandon $\mathcal{L}_{pl}^{\text{sp}}$:

$\mathcal{L}_{pl}^{\prime}=w\cdot\mathcal{L}_{pl}^{\text{sp}}+(1-w)\cdot\mathcal{L}_{pl},$ (7)

where $w$ is a scalar in the range $[0,1]$ that drops gradually from 1 to 0 with an inverse decay. Formally, the adaptive weight $w$ at the $k$-th training epoch is computed as:

$w=\alpha\cdot k^{-1},\quad k\in\mathbb{N},$ (8)

where $\alpha>0$ indicates the decay ratio. In this way, at the late stage of training, $f_{\theta}$ can be completely supervised by the point-wise pseudo-label, so that the model is kept from the noise in the super-point grouping. The new pseudo-labeling loss $\mathcal{L}_{pl}^{\prime}$ substitutes the original $\mathcal{L}_{pl}$ in Eq. 5 to give the final loss function.
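The adaptive combination of Eqs. (7) and (8) reduces to a few lines; the sketch below also applies the epoch coarsening $k\leftarrow\lfloor k/\text{stride}\rfloor$ used in the implementation (the guard against $k=0$ in the first coarsened epochs is our addition):

```python
def adaptive_weight(epoch, alpha=1.0, stride=32):
    """Eq. (8) with the epoch index coarsened as k <- floor(epoch/stride);
    w decays from 1 toward 0 (stride=64 for the '1thing1click' setting)."""
    k = max(1, epoch // stride)      # guard: w stays at alpha before decay starts
    return min(1.0, alpha / k)

def combined_pl_loss(L_pl, L_pl_sp, epoch, alpha=1.0, stride=32):
    """Eq. (7): trust the super-point-wise pseudo-label early in training
    and the raw point-wise one late in training."""
    w = adaptive_weight(epoch, alpha, stride)
    return w * L_pl_sp + (1.0 - w) * L_pl
```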
## 4 Experiments

Table 1: MIoU (%) on the ScanNet-v2 dataset (online test set). * means the performance of our baseline on the fully supervised setting. The underline indicates the previous SOTA performance on each setting. The supervision types “subcloud” and “segment” mean using subcloud-level and segment-level annotation, respectively. “20 points” and “1thing1click” mean annotating 20 points per scene and annotating one point in each instance, respectively.

Method | Supervision | MIoU
---|---|---
[31] PointNet++ | 100% | 33.9
[37] SPLATNet | 100% | 39.3
[40] TangentConv | 100% | 43.8
[23] PointCNN | 100% | 45.8
[24] FPConv | 100% | 63.9
[49] PointConv | 100% | 66.6
[42] KPConv | 100% | 68.4
[4] MinkowskiNet | 100% | 73.6
[13] VMNet | 100% | 74.6
[9] Occuseg | 100% | 76.4
[29] Mix3D | 100% | 78.1
[7] SparseConv | 100% | 72.5*
[48] MPRM | subcloud | 41.1
[38] SegGroup | segment | 61.1
[11] SQN | 0.01% | 35.9
[11] SQN | 0.1% | 51.6
[26] OTOC | 20 points | 59.4
[26] OTOC | 1thing1click | 69.1
PointMatch | 0.01% | 57.1
PointMatch | 0.1% | 68.8
PointMatch | 20 points | 62.4
PointMatch | 1thing1click | 69.5

### 4.1 Experiment Setup

#### Datasets and Metric

We choose two popular point cloud datasets for the evaluation of our method, ScanNet-v2 [5] and S3DIS [1]. The ScanNet-v2 dataset [5] contains the 3D scans of 1,613 indoor scenes with 20 semantic categories (1,201 for training, 312 for validation, and 100 for online testing). The whole dataset includes around 243 million points in total. The S3DIS dataset [1] contains 271 room point clouds with 13 categories, scanned from 6 areas. Following the official train/validation split, Areas 1, 2, 3, 4, 6 are used for training and Area 5 is used for evaluation. Besides, the S3DIS dataset has 273 million points, _i.e_., around 1 million points per scene on average, which is denser than the scenes in the ScanNet dataset. The evaluation metric for 3D semantic segmentation is the intersection-over-union, and we report the mean result over all categories (MIoU) for comparison with other methods.
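For reference, the MIoU metric can be computed as sketched below (a common convention; how classes absent from both prediction and ground truth are handled may differ in the official benchmark scripts):

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=-1):
    """Mean intersection-over-union across classes, from per-point
    predicted and ground-truth labels; unlabeled points are ignored."""
    valid = gt != ignore_index
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                  # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))
```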
#### Implementation Details

We adopt SparseConv [7] as the 3D U-Net backbone in PointMatch. The output dimension of the SparseConv is set to 32, the same as in [26]. Following [16] and [26], we randomly sample 250k points for too-large scenes in the ScanNet-v2 dataset. We use two different combinations of various augmentations to create the two views, randomly chosen from scaling, flipping, offsetting, rotation, affine transformation, position jittering, and color jittering, with random augmentation extents. The hyper-parameters in our experiments, $\tau$, $\tau^{\text{sp}}$, $\epsilon$, $\lambda$, and $\alpha$, are set to 0.95, 0.95, 0.5, 1.0, and 1.0, respectively. The network is trained for 512 epochs using the Adam optimizer [17] with a learning rate of 0.01 and a mini-batch size of 8 on the ScanNet-v2 dataset and 4 on the S3DIS dataset. Considering the total number of training epochs, we replace the epoch number $k$ in Eq. 8 with $\lfloor k/64\rfloor$ on the “1thing1click” setting and $\lfloor k/32\rfloor$ on the others, _i.e_., the floor of $k$ divided by 64 or 32, in order to slow the decay rate. For the super-point generation, we follow [26] to use the mesh segment results [5] on the ScanNet-v2 dataset and the super-point graph partition manner proposed by [19] on the S3DIS dataset. Note that the super-points are used only in training, and the inference stage does not rely on super-points. All experiments are conducted on an Intel Xeon Gold 6226R CPU and an NVIDIA RTX3090 GPU with 24GB memory.

### 4.2 Experiment Results

Table 2: MIoU (%) on the ScanNet-v2 validation set. * means the performance of our baseline on the fully supervised setting. Note that SQN [11] reports only its performance on the 0.1% label setting on the ScanNet-v2 validation set.

Method | Supervision | MIoU
---|---|---
[7] SparseConv | 100% | 72.2*
[11] SQN | 0.1% | 53.5
[26] OTOC | 20 points | 61.4
[26] OTOC | 1thing1click | 70.5
PointMatch | 0.01% | 58.7
PointMatch | 0.1% | 69.3
PointMatch | 20 points | 64.8
PointMatch | 1thing1click | 70.7

#### Evaluation on ScanNet-v2

On the ScanNet-v2 [5] dataset, the evaluation of PointMatch is conducted on four weakly supervised settings, _i.e_., 0.01% of points annotated in each scene [11], 0.1% of points annotated in each scene [11], 20 points annotated per scene [10] (“20 points”), and 1 point annotated for each instance in the scene [26] (“1thing1click”). The annotated points in the first two settings (0.01% and 0.1%) are randomly chosen following [11]. The “20 points” setting is implemented following the official ScanNet-v2 “3D Semantic Label with Limited Annotations” benchmark [10]. Annotated points in the “1thing1click” setting are randomly chosen from each instance following [26]; the average label ratio in this setting is around 0.02% [26]. The evaluation results on the ScanNet-v2 online test set are presented in Tab. 1. Existing weakly supervised 3D semantic segmentation methods are included for comparison, and some fully supervised methods are also listed in the table. As shown in the table, the proposed PointMatch consistently surpasses all existing methods over all weakly supervised settings. It outperforms the state-of-the-art (SOTA) result by 21.2% on the 0.01% setting, by 17.2% on the 0.1% setting, and by 3.0% on the “20 points” setting. The performance on the “1thing1click” setting is furthermore close to the fully supervised baseline. Note that the work OTOC [26] takes 5 turns of iterative training to reach the above results, which is around 1536 epochs (3 times ours). In addition, we also provide the performance of PointMatch on the ScanNet-v2 validation set in Tab. 2, on the four weakly supervised settings mentioned above, which also proves the superiority of PointMatch. Detailed results over the 20 categories are shown in the supplementary materials.

#### Evaluation on S3DIS

Table 3: MIoU (%) on the S3DIS dataset (Area-5 for validation). * means the performance of our fully supervised baseline. The underline indicates the previous SOTA performance on each setting.

Method | Supervision | MIoU
---|---|---
[30] PointNet | 100% | 41.1
[41] SegCloud | 100% | 48.9
[40] TangentConv | 100% | 52.8
[23] PointCNN | 100% | 57.3
[19] SPGraph | 100% | 58.0
[4] MinkowskiNet | 100% | 65.4
[42] KPConv | 100% | 67.1
[54] PointTransformer | 100% | 70.4
[7] SparseConv | 100% | 63.7*
[18] $\Pi$ Model | 0.2% | 44.3
[39] MT | 0.2% | 44.4
[52] DGCNN+CRF | 0.2% | 44.5
[18] $\Pi$ Model | 10% | 46.3
[39] MT | 10% | 47.9
[52] DGCNN+CRF | 10% | 48.0
[26] OTOC | 1thing1click | 50.1
[11] SQN | 0.01% | 45.3
[11] SQN | 0.1% | 61.4
PointMatch | 1thing1click | 55.3
PointMatch | 0.01% | 59.9
PointMatch | 0.1% | 63.4

Figure 3: Visualization of the qualitative results.
We sample three scenes from the training set; the related results include: (a) upper: input point clouds, lower: the super-point grouping, in which colors do not indicate category information; (b): two views of the input point cloud; (c) upper: the point-wise pseudo-label at the early stage, lower: the super-point-wise pseudo-label at the early stage; (d) upper: the point-wise pseudo-label at the late stage, lower: the super-point-wise pseudo-label at the late stage; (e) upper: the weakly supervised prediction, lower: the fully supervised prediction; (f) upper: the weak supervision, lower: the full supervision (ground truth).

We also evaluate the proposed method on the S3DIS [1] dataset to further validate its effectiveness. Three weakly supervised settings are included for evaluation, _i.e_., 0.01%, 0.1%, and “1thing1click” (no official “20 points” setting is provided for S3DIS). Note that a point cloud in the S3DIS dataset usually contains many more points than one in the ScanNet-v2 dataset; we estimate that around 0.0036% of points are annotated in the “1thing1click” setting. The results on these three settings are listed in Tab. 3. The SOTA methods on both the fully supervised and weakly supervised settings are presented in the table for comparison. It is observed that the proposed PointMatch achieves the best performance over all three settings. It surpasses the SOTA result on the 0.01% setting by a large margin of 14.6%, by 5.2% on the “1thing1click” setting, and by 2.0% on the 0.1% setting. Impressively, our result on the 0.1% setting is very close to the fully supervised baseline (63.4% vs. 63.7%). The above results strongly prove the effectiveness and superiority of PointMatch, especially in the scenario of very sparse annotations (0.01%). Detailed results on all 13 categories are listed in the supplementary materials.

#### Qualitative Results

Table 4: Ablative results of consistency training in PointMatch. MIoU (%) on the ScanNet-v2 validation set.

Method | Supervision | MIoU
---|---|---
Fully-Sup. Version | 100% | 72.2
PointMatch | 0.01% | 58.7
w/o Consist. Training | 0.01% | 51.3
PointMatch | 0.1% | 69.3
w/o Consist. Training | 0.1% | 67.3
PointMatch | 20 points | 64.8
w/o Consist. Training | 20 points | 55.0
PointMatch | 1thing1click | 70.7
w/o Consist. Training | 1thing1click | 62.2

Apart from the quantitative results, we also exhibit some qualitative segmentation results of PointMatch. As shown in Fig. 3, we visualize each sample in two rows and six columns, namely the input point cloud (upper) and its super-point grouping (lower) in column (a), its globally-augmented (upper) and locally-augmented (lower) views in column (b), its point-wise (upper) and super-point-wise (lower) pseudo-labels at the early and the late stage of training in columns (c) and (d), respectively, the prediction of PointMatch under the weak (upper) and full (lower) supervision in column (e), and the corresponding weak label (upper) and ground truth (lower) in column (f). Note that all visualized results are generated under the “1thing1click” weak supervision. It is observed that the predictions of PointMatch under weak supervision are close to the ground truths and the fully supervised predictions. More impressively, the super-point-wise pseudo-labels are superior to the point-wise ones at the early stage, while becoming inferior at the late stage of training (see the red boxes in Fig. 3), which confirms our claim.
#### Adaptive Pseudo-labeling

The adaptive pseudo-labeling mechanism plays the role of pseudo-label correction at the early stage of training, and it is implemented with an inverse decay. To confirm the effectiveness of this design, we implement four versions on two weakly-supervised settings ("1thing1click" and "0.01%") for comparison: (i) using the point-wise pseudo-label only ($w=\text{0}$); (ii) using the super-point-wise pseudo-label only ($w=\text{1}$); (iii) using both pseudo-labels but in a constant manner, by setting $w$ to 0.5 ($w=\text{0.5}$); (iv) using the adaptive mechanism with a larger decay ratio, replacing $k$ in Eq. 8 by $\lfloor k/16\rfloor$ (0.01% setting) and $\lfloor k/32\rfloor$ ("1thing1click" setting).

Results are listed in Tab. 5. Using either type of pseudo-label alone is inferior to the adaptive combination, because the point-wise and super-point-wise pseudo-labels have their own strengths. Using a constant weight also leads to a performance drop, which shows that placing temporally different reliance on the two pseudo-labels better exploits their respective advantages. Besides, a faster decay of the weight $w$ also results in a slightly worse result, usually close to that of using the point-wise pseudo-label only ($w=\text{0}$); one reason is that the network is unable to learn adequate information from the super-points when $w$ drops too fast. A schematic sketch of the adaptive weighting is given after Tab. 5.

Table 5: Ablative results of adaptive pseudo-labeling in PointMatch. MIoU (%) on the S3DIS dataset, Area-5.

Method | Supervision | MIoU
---|---|---
Fully-Sup. Version | 100% | 63.7
PointMatch | 0.01% | 59.9
$w=\text{0}$ | 0.01% | 58.4
$w=\text{1}$ | 0.01% | 56.1
$w=\text{0.5}$ | 0.01% | 54.6
$k\leftarrow\lfloor k/16\rfloor$ | 0.01% | 58.7
PointMatch | 1thing1click | 55.3
$w=\text{0}$ | 1thing1click | 52.6
$w=\text{1}$ | 1thing1click | 50.2
$w=\text{0.5}$ | 1thing1click | 48.4
$k\leftarrow\lfloor k/32\rfloor$ | 1thing1click | 53.3
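The sketch below illustrates the adaptive weighting. Since Eq. 8 is not reproduced in this section, the inverse-decay form $w=1/(\lfloor k/s\rfloor+1)$ and the step size $s$ are our guesses, kept only to show the mechanism of shifting reliance from super-point-wise to point-wise pseudo-labels; in this parameterization, the faster-decay ablation variants correspond to a smaller step (e.g., 16 or 32).

```python
import torch.nn.functional as F

def adaptive_pseudo_loss(logits, point_pseudo, sp_pseudo, epoch, step=64):
    """Blend point-wise and super-point-wise pseudo-label losses.

    The inverse decay w = 1 / (floor(epoch / step) + 1) and the default
    step are guesses at Eq. 8 (not reproduced here): w starts at 1, so
    super-point pseudo-labels dominate early, and it decays toward 0,
    handing control to the point-wise pseudo-labels late in training.
    """
    w = 1.0 / (epoch // step + 1)
    loss_point = F.cross_entropy(logits, point_pseudo, ignore_index=-100)
    loss_sp = F.cross_entropy(logits, sp_pseudo, ignore_index=-100)
    return (1.0 - w) * loss_point + w * loss_sp
```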
## 5 Conclusion and Discussion

We propose a novel approach, PointMatch, which introduces a consistency training framework into weakly-supervised semantic segmentation of point clouds. It works by enforcing predictive consistency between two views of a point cloud via pseudo-labeling, and enables the network to perform robust representation learning from the weak labels and the data itself. The pseudo-label quality is further promoted by integrating super-point information in an adaptive manner. Impressively, PointMatch achieves SOTA performance over various weakly-supervised semantic segmentation settings on both the ScanNet-v2 and S3DIS datasets, and shows strong robustness even given extremely few labels, _e.g_., 20 points per scene or 0.01% of points annotated.

## References

* [1] Iro Armeni, Ozan Sener, Amir R Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1534–1543, 2016.
* [2] David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. In International Conference on Learning Representations, 2019.
* [3] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. Advances in Neural Information Processing Systems, 32, 2019.
* [4] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3075–3084, 2019.
* [5] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5828–5839, 2017.
* [6] Geoffrey French, Michal Mackiewicz, and Mark Fisher. Self-ensembling for visual domain adaptation. In International Conference on Learning Representations, 2018.
* [7] Benjamin Graham, Martin Engelcke, and Laurens Van Der Maaten. 3d semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9224–9232, 2018.
* [8] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Pires, Zhaohan Guo, Mohammad Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. In Neural Information Processing Systems, 2020.
* [9] Lei Han, Tian Zheng, Lan Xu, and Lu Fang. Occuseg: Occupancy-aware 3d instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2940–2949, 2020.
* [10] Ji Hou, Benjamin Graham, Matthias Nießner, and Saining Xie. Exploring data-efficient 3d scene understanding with contrastive scene contexts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15587–15597, 2021.
* [11] Qingyong Hu, Bo Yang, Guangchi Fang, Yulan Guo, Ales Leonardis, Niki Trigoni, and Andrew Markham. Sqn: Weakly-supervised semantic segmentation of large-scale 3d point clouds with 1000x fewer labels. arXiv preprint arXiv:2104.04891, 2021.
* [12] Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama. Learning discrete representations via information maximizing self-augmented training. In International Conference on Machine Learning, pages 1558–1567. PMLR, 2017.
* [13] Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, and Chiew-Lan Tai. Vmnet: Voxel-mesh network for geodesic-aware 3d semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15488–15498, 2021.
* [14] Shi-Sheng Huang, Ze-Yu Ma, Tai-Jiang Mu, Hongbo Fu, and Shi-Min Hu. Supervoxel convolution for online 3d semantic segmentation. ACM Transactions on Graphics (TOG), 40(3):1–15, 2021.
* [15] Li Jiang, Hengshuang Zhao, Shu Liu, Xiaoyong Shen, Chi-Wing Fu, and Jiaya Jia. Hierarchical point-edge interaction network for point cloud semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10433–10441, 2019.
* [16] Li Jiang, Hengshuang Zhao, Shaoshuai Shi, Shu Liu, Chi-Wing Fu, and Jiaya Jia. Pointgroup: Dual-set point grouping for 3d instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4867–4876, 2020.
* [17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [18] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
* [19] Loic Landrieu and Martin Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4558–4567, 2018.
* [20] Huan Lei, Naveed Akhtar, and Ajmal Mian. Spherical kernel for efficient graph convolution on 3d point clouds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
* [21] Jichang Li, Guanbin Li, Yemin Shi, and Yizhou Yu. Cross-domain adaptive clustering for semi-supervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2505–2514, 2021.
* [22] Kai Li, Chang Liu, Handong Zhao, Yulun Zhang, and Yun Fu. Semi-supervised domain adaptation with prototypical alignment and consistency learning. arXiv preprint arXiv:2104.09136, 2021.
* [23] Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. Pointcnn: Convolution on x-transformed points. Advances in Neural Information Processing Systems, 31:820–830, 2018.
* [24] Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, and Xiaoguang Han. Fpconv: Learning local flattening for point convolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4293–4302, 2020.
* [25] Yunze Liu, Li Yi, Shanghang Zhang, Qingnan Fan, Thomas Funkhouser, and Hao Dong. P4contrast: Contrastive learning with pairs of point-pixel pairs for rgb-d scene understanding. arXiv preprint arXiv:2012.13089, 2020.
* [26] Zhengzhe Liu, Xiaojuan Qi, and Chi-Wing Fu. One thing one click: A self-training approach for weakly supervised 3d semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1726–1736, 2021.
* [27] Luke Melas-Kyriazi and Arjun K Manrai. Pixmatch: Unsupervised domain adaptation via pixelwise consistency training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12435–12445, 2021.
* [28] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979–1993, 2018.
* [29] Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, and Francis Engelmann. Mix3d: Out-of-context data augmentation for 3d scenes. arXiv preprint arXiv:2110.02210, 2021.
* [30] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652–660, 2017.
* [31] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30, 2017.
* [32] Jonas Schult, Francis Engelmann, Theodora Kontogianni, and Bastian Leibe. Dualconvmesh-net: Joint geodesic and euclidean convolutions on 3d meshes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8612–8622, 2020.
* [33] Charu Sharma and Manohar Kaul. Self-supervised few-shot learning on point clouds. Advances in Neural Information Processing Systems, 33, 2020.
* [34] Xian Shi, Xun Xu, Ke Chen, Lile Cai, Chuan Sheng Foo, and Kui Jia. Label-efficient point cloud semantic segmentation: An active learning approach, 2021.
* [35] Rui Shu, Hung Bui, Hirokazu Narui, and Stefano Ermon. A dirt-t approach to unsupervised domain adaptation. In International Conference on Learning Representations, 2018.
* [36] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33, 2020.
* [37] Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, and Jan Kautz. Splatnet: Sparse lattice networks for point cloud processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2530–2539, 2018.
* [38] An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, and Jie Zhou. Seggroup: Seg-level supervision for 3d instance and semantic segmentation. arXiv preprint arXiv:2012.10217, 2020.
* [39] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 1195–1204, 2017.
* [40] Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, and Qian-Yi Zhou. Tangent convolutions for dense prediction in 3d. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3887–3896, 2018.
* [41] Lyne Tchapmi, Christopher Choy, Iro Armeni, JunYoung Gwak, and Silvio Savarese. Segcloud: Semantic segmentation of 3d point clouds. In 2017 International Conference on 3D Vision (3DV), pages 537–547. IEEE, 2017.
* [42] Hugues Thomas, Charles R Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas J Guibas. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6411–6420, 2019.
* [43] He Wang, Yezhen Cong, Or Litany, Yue Gao, and Leonidas J Guibas. 3dioumatch: Leveraging iou prediction for semi-supervised 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14615–14624, 2021.
* [44] Haiyan Wang, Xuejian Rong, Liang Yang, Shuihua Wang, and Yingli Tian. Towards weakly supervised semantic segmentation in 3d graph-structured point clouds of wild scenes. In BMVC, page 284, 2019.
* [45] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG), 38(5):1–12, 2019.
* [46] Zongji Wang and Feng Lu. Voxsegnet: Volumetric cnns for semantic part segmentation of 3d shapes. IEEE Transactions on Visualization and Computer Graphics, 26(9):2919–2930, 2019.
* [47] Colin Wei, Kendrick Shen, Yining Chen, and Tengyu Ma. Theoretical analysis of self-training with deep networks on unlabeled data. In International Conference on Learning Representations, 2020.
* [48] Jiacheng Wei, Guosheng Lin, Kim-Hui Yap, Tzu-Yi Hung, and Lihua Xie. Multi-path region mining for weakly supervised 3d semantic segmentation on point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4384–4393, 2020.
* [49] Wenxuan Wu, Zhongang Qi, and Li Fuxin. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9621–9630, 2019.
* [50] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10687–10698, 2020.
* [51] Saining Xie, Jiatao Gu, Demi Guo, Charles R Qi, Leonidas Guibas, and Or Litany. Pointcontrast: Unsupervised pre-training for 3d point cloud understanding. In European Conference on Computer Vision, pages 574–591. Springer, 2020.
* [52] Xun Xu and Gim Hee Lee. Weakly supervised semantic point cloud segmentation: Towards 10x fewer labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13706–13715, 2020.
* [53] Zaiwei Zhang, Rohit Girdhar, Armand Joulin, and Ishan Misra. Self-supervised pretraining of 3d features on any point-cloud. arXiv preprint arXiv:2101.02691, 2021.
* [54] Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16259–16268, 2021.
* [55] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2921–2929, 2016.